Non-Lethal Heat Shock of the Asian Green Mussel, Perna viridis, Promotes Hsp70 Synthesis, Induces Thermotolerance and Protects Against Vibrio Infection

Mild heat stress promotes thermotolerance and protection against several different stresses in aquatic animals, consequences correlated with the accumulation of heat shock protein 70 (Hsp70). The purpose of this study was to determine if non-lethal heat shock (NLHS) of the Asian green mussel, Perna viridis, an aquatic species of commercial value, promoted the production of Hsp70 and enhanced its resistance to stresses. Initially, the median lethal temperature (LT50) and lethal heat temperature (LHT) for P. viridis were determined to be 42°C and 44°C, respectively, with no heat-shock-induced death of mussels at 40°C or less. Immunoprobing of western blots revealed augmentation of constitutive (PvHsp70-1) and inducible (PvHsp70-2) Hsp70 in tissue from adductor muscle, foot, gill and mantle of P. viridis exposed to 38°C for 30 min followed by 6 h recovery, the NLHS conditions for this organism. Characterization by liquid chromatography-tandem mass spectrometry (LC-MS/MS) revealed that PvHsp70-1 and PvHsp70-2 corresponded most closely to Hsp70 from P. viridis and Mytilus galloprovincialis, respectively. Priming of adult mussels with NLHS promoted thermotolerance and increased resistance to Vibrio alginolyticus. The induction of Hsp70 in parallel with enhanced thermotolerance and improved protection against V. alginolyticus suggests that Hsp70 functions in P. viridis as a molecular chaperone and as a stimulator of the immune system.

Introduction

Aquatic organisms experience environmental stresses including temperature fluctuation, salinity shift, oxygen deprivation and pollution [1][2][3], as well as disease-causing biotic stressors such as bacteria, viruses, fungi and parasites [4]. Stress disrupts the normal physiology and cellular homeostasis of all organisms, potentially resulting in their death [1,5]. The heat shock response, an integral part of the physiological system that protects against environmental perturbations [6], involves the synthesis of heat shock proteins (Hsps) which, through molecular chaperone activity, facilitate the proper folding of nascent proteins, prevent stress-induced irreversible protein denaturation and mediate storage and refolding of partially denatured protein [4]. Hsps also appear to stimulate the innate immune response of aquatic organisms, thereby shielding cells against injury due to pathogens and making them more tolerant of disease and infection [7]. Non-lethal heat shock (NLHS) is an effective method to protect aquatic organisms against stress, an outcome often associated with increased Hsp accumulation [5,7]. NLHS increases Hsp70 in the common carp, Cyprinus carpio, allowing it to survive a normally lethal temperature [8]. NLHS of Artemia franciscana promotes Hsp70 build-up, induces thermotolerance and guards Artemia larvae against V. campbellii and V. proteolyticus, two pathogens of this branchiopod crustacean [9,10]. Exposing Penaeus monodon to a short hyperthermic stress enhances Hsp70 accumulation and resistance against gill associated virus (GAV) [11]. The concurrent induction of heat tolerance, resistance to bacterial infection and Hsp70 synthesis suggests a role for Hsp70 in mediating the effects of stress, perhaps via chaperoning and/or immune activation [12,1,4].
Several issues impede sustainable production of the Asian green mussel Perna viridis [13], a major aquaculture species in Malaysia, with fluctuation in water temperature due to climate change the most serious [14][15][16][17]. Additionally, bacteria, parasites and heavy metals hinder the successful cultivation of P. viridis and other bivalves in cage culture systems [18][19][20]. These types of problems occur in Marudu Bay, Malaysia, where temperature changes due to an influx of water lead to secondary infections by V. alginolyticus, a common pathogen of bivalves and crustaceans, causing mortalities of 95-98% in cultured P. viridis. Oysters cultivated in the same rafting area exhibit clinical signs similar to those of P. viridis. Sessile organisms like P. viridis depend on physiological responses such as the increased synthesis of Hsps to accommodate stresses because they cannot escape by swimming [1,21]. Thus, the synthesis of Hsp70 in P. viridis upon NLHS was investigated in this study, revealing a potential role for this protein in tolerance to heat and resistance to bacterial infection.

Culture of P. viridis

Adult P. viridis measuring 70-80 mm in length were purchased from various long-line culture farms in Masai, Johor (1°29′36.5″N 103°52′40.94″E). Animals were acclimatized in the Universiti Malaysia Terengganu Marine Hatchery under constant aeration (dissolved oxygen >6 ppm) at 28°C and a salinity of 30 ppt for two weeks prior to use. During acclimation, mussels were fed daily with the microalga Chaetoceros sp. at 6 × 10^7 cells/ml, the final number of algae in the tank. The rearing water was replaced every 2 days.

Determination of Median Lethal Temperature (LT50) and Lethal Heat Temperature (LHT) for P. viridis

To determine the minimum temperatures that caused 50% mortality (LT50) and 100% mortality (LHT), groups of 20 P. viridis acclimatized at 28°C were exposed to abrupt 30 min heat shocks ranging in temperature from 34°C to 44°C in a water bath accurate to ±0.5°C. Mussels were then transferred to 28°C and mortality was determined 24 h later by counting live animals; gaping mussels that failed to respond to gentle tapping on the shell were considered dead. The percent mortality was calculated as (N0 − Nt)/N0 × 100, where N0 and Nt are the initial and final numbers of living mussels [8]. Ten mussels were tested at each temperature and experiments were done in triplicate with non-heated animals as controls.

Protein Extraction, SDS Polyacrylamide Gel Electrophoresis and Immunoprobing of Western Blots

For protein extraction, approximately 100 mg of tissue prepared individually from the adductor muscle, foot, gill and mantle was rinsed several times with sterile, cold, distilled water and homogenized in 500 μl cold buffer K (150 mM sorbitol, 70 mM potassium gluconate, 5 mM MgCl2, 5 mM NaH2PO4, 40 mM HEPES, pH 7.4) [22] containing a protease inhibitor cocktail (Sigma-Aldrich Inc, USA) [5]. Two-times concentrated SDS polyacrylamide gel electrophoresis sample buffer [23] was added to equal volumes of tissue homogenate, mixed by vortexing, heated at 95°C for 5 min, cooled and centrifuged at 2200 × g for 60 sec. Ten μl samples of the supernatant containing 0.2 mg protein were loaded in individual lanes of 7% SDS polyacrylamide gels and resolved by electrophoresis at 120 V for 15 min, followed by 150 V for 45 min.
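Before moving to the gel and blotting steps below, the mortality arithmetic defined in the LT50/LHT paragraph above can be sketched in a few lines. This is an illustration written for this text, not code from the study; the function names and the example survivor counts are hypothetical, and linear interpolation is only one simple way to locate the 50% crossing:

```python
# Percent mortality per the formula above, plus linear interpolation between
# the temperatures that bracket 50% mortality to estimate LT50.
# All names and example counts are hypothetical illustrations.

def percent_mortality(n_initial: int, n_final: int) -> float:
    """(N0 - Nt) / N0 x 100, with N0/Nt the initial/final numbers of live mussels."""
    return (n_initial - n_final) / n_initial * 100.0

def interpolate_lt50(temps, mortalities):
    """Linearly interpolate between the first pair of points bracketing 50% mortality."""
    for (t1, m1), (t2, m2) in zip(zip(temps, mortalities),
                                  zip(temps[1:], mortalities[1:])):
        if m1 <= 50.0 <= m2:
            return t1 + (50.0 - m1) * (t2 - t1) / (m2 - m1)
    raise ValueError("50% mortality not bracketed by the tested temperatures")

temps = [34, 36, 38, 40, 42, 44]        # heat shock temperatures (deg C)
survivors = [20, 20, 20, 20, 10, 0]     # hypothetical live counts from groups of 20
mortalities = [percent_mortality(20, n) for n in survivors]
print(interpolate_lt50(temps, mortalities))  # 42.0 with these example data
```

With these illustrative counts the estimate lands on 42°C, matching the LT50 reported below; in practice, probit or logit regression would be the more usual way to estimate an LT50 from dose-response data.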
Two gels were run simultaneously, of which one was stained with Biosafe Coomassie (BioRad Laboratories, USA) and the other blotted to polyvinylidene fluoride transfer membrane (BioRad Immun-Blot PVDF, USA). Membranes were incubated in 50 ml of blocking buffer, which consisted of phosphate buffered saline containing 0.2% (v/v) Tween-20 and 5% (w/v) bovine serum albumin. Blots were then incubated for 60 min at room temperature with a mouse monoclonal antibody (Thermo Scientific, USA, MA3-006) diluted 1:5000 in PBS (BioRad Laboratories, USA), which recognized both constitutive and inducible Hsp70. Goat anti-mouse IgG coupled with horseradish peroxidase (HRP) (Affinity BioReagents Inc., Golden, CO) was employed at a dilution of 1:5000 in PBS as secondary antibody. Diaminobenzidine tetrahydrochloride hydrate (DAB) at 0.7 mM was used in association with 0.1% (v/v) H2O2 in 0.1 M Tris-HCl, pH 7.6, for detection of antibody-reactive proteins [22,10]. Human recombinant Hsp70 (Sigma Aldrich Inc.-H7283, USA) served as the control for antibody reactivity and specificity. The blots were scanned with a GS-800 calibrated densitometer (BioRad Laboratories, USA) and quantification was performed by measuring the bands for PvHsp70-1 and PvHsp70-2 with Quantity One software (BioRad Laboratories, USA). The amounts of Hsp70 in tissues of P. viridis exposed to NLHS were interpreted as reflective density/mm^2, the density value generated by Quantity One software.

Determination of NLHS

Adult mussels (n = 5) acclimated at 28°C were abruptly heat shocked at temperatures ranging from 30 to 40°C for 30 min and then transferred to 28°C for 6 h recovery prior to protein extraction. Mussels held at 28°C served as controls. As determined by SDS polyacrylamide gel electrophoresis and immunoprobing of western blots, the temperature that induced maximum Hsp70 accumulation in mussel tissues and did not result in mussel mortality was 38°C. In subsequent experiments, adult mussels (n = 30) acclimated to 28°C were heated at 38°C and allowed to recover at 28°C for 0, 6, 12, 24 and 48 h prior to protein extraction. After 48 h, protein was extracted every 2 days until 10 days post-heat shock. Determination of Hsp70 band density on Western blots, heat shock parameters and recovery conditions established that the optimal conditions for NLHS of P. viridis were a 30 min heat shock at 38°C followed by 6 h recovery at 28°C, the conditions used in subsequent stress experiments.

Identification of Hsp70 by Mass Spectrometry

In preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) (Wischgoll et al., 2009), protein samples from mussels receiving NLHS were resolved in 5% SDS polyacrylamide gels by electrophoresis, stained with Biosafe Coomassie (BioRad Laboratories) and destained. Gel slices containing PvHsp70-1 and PvHsp70-2 were excised and freeze-dried overnight. Proteins were then digested with trypsin and the extracted peptides [24] were loaded onto a 300SB C18 column, 3.5 μm (Agilent Technologies, USA) and separated with a linear gradient of water/acetonitrile/0.1% formic acid (v/v). The peptides were analyzed by electrospray ionization mass spectrometry with a Shimadzu Prominence Nano HPLC system (Shimadzu, Japan) coupled to a 5600 TripleTOF mass spectrometer (AB Sciex, USA). Protein identification was performed with Mascot sequence matching software (Matrix Science, USA) and the Ludwig NR database.
NLHS and the Induction of Heat and Bacterial Tolerance in P. viridis

Thermotolerance induction was determined by challenging mussels exposed to NLHS at their LT50 and LHT, with survival ascertained 24 h later as described above. Ten mussels were used for each treatment and experiments were done in triplicate. Control animals were held at 28°C. To determine the LC50, the concentration of V. alginolyticus causing 50% mortality of mussels, bacteria were grown overnight at 28°C with constant shaking in marine nutrient broth to stationary phase, harvested and suspended in sterile seawater prior to determining density at 600 nm. The number of bacteria was calculated from a standard curve obtained according to the equation y = (2 × 10^8)x − (3 × 10^7), where y is the number of bacteria/ml and x is the OD600 value. Mussels were incubated by immersion with 1 × 10^6, 1 × 10^7, 1 × 10^8 and 1 × 10^9 V. alginolyticus/ml for various times. Survival was determined daily by counting live animals. Gaping mussels that failed to respond to gentle tapping on the shell were considered dead. The experiment was done in triplicate. Mussels (n = 10) subjected to NLHS were challenged with 1 × 10^8 V. alginolyticus/ml, the LC50, for 72 h, after which survival was determined as described in the previous section. Mussels challenged with V. alginolyticus without NLHS served as controls. The experiment was done in triplicate.

Data Analysis

Survival percentages were ArcSin-transformed to satisfy normality. The significance of differences between the survival of groups of challenged mussels either exposed or not exposed to NLHS, and between the amounts of Hsp70, was evaluated using one-way ANOVA with SPSS version 20.0 for Windows.

LT50 and LHT for Adult P. viridis

Heating for 30 min at temperatures ranging from 34°C to 40°C did not kill adult P. viridis, but mortality occurred above 40°C, with the LT50 and LHT at 42°C and 44°C respectively (Fig 1).

Heat Shock Induced Hsp70 in P. viridis

Following 6 h recovery, 70 kDa proteins were observed when extracts of adductor muscle, foot, gill and mantle from P. viridis heated at temperatures from 28-40°C were resolved in 7% SDS polyacrylamide gels and stained with Coomassie blue (Figs 2 and 3). Immunoprobing of polyvinylidene fluoride membranes containing protein extracts resolved in SDS polyacrylamide gels with a monoclonal antibody to Hsp70 revealed single 70 kDa bands at lower temperatures but two bands at higher temperatures (Fig 2). The upper band was termed PvHsp70-1 and the lower PvHsp70-2. The amount of PvHsp70-1 visible in immunostained blots increased at 34 and 36°C in adductor tissue and at 36 and 38°C in other tissues (S1A Table). PvHsp70-2 was induced at 38°C and slightly at 40°C in adductor muscle, whereas in foot, gill and mantle PvHsp70-2 was apparent at 36, 38 and 40°C (S1B Table). Heat shock at 40°C all but eliminated PvHsp70-2 in adductor muscle and reduced PvHsp70-1 in all organs, with adductor muscle also showing a decline at 38°C. PvHsp70-1 and PvHsp70-2 protein bands on Western blots were scanned, revealing band densities that were significantly different from control values (Fig 3). In the following experiments, 38°C was used for heat shock because the synthesis of PvHsp70-1 and PvHsp70-2 was enhanced at this temperature in all tissues of P. viridis examined and no death of mussels occurred.
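The standard-curve conversion and the survival statistics described in the methods above can be sketched compactly. The following is an illustration only, assuming SciPy is available, with hypothetical replicate values in place of the study's data:

```python
# Illustrative sketch: OD600-to-cell-density conversion from the standard
# curve y = (2 x 10^8)x - (3 x 10^7), and arcsine-transformed survival
# percentages compared by one-way ANOVA. Replicate values are hypothetical.
import numpy as np
from scipy import stats

def bacteria_per_ml(od600: float) -> float:
    """Standard curve from the text: y = (2 x 10^8) * x - (3 x 10^7)."""
    return 2e8 * od600 - 3e7

def arcsine_transform(percent_survival):
    """ArcSin(sqrt(p)) with p the survival proportion, to improve normality."""
    p = np.asarray(percent_survival, dtype=float) / 100.0
    return np.arcsin(np.sqrt(p))

control = arcsine_transform([50, 47, 53])       # hypothetical triplicate survival (%)
nlhs_primed = arcsine_transform([97, 100, 93])  # hypothetical triplicate survival (%)
f_stat, p_value = stats.f_oneway(control, nlhs_primed)
print(f"{bacteria_per_ml(0.65):.2e} bacteria/ml, F = {f_stat:.1f}, p = {p_value:.4f}")
```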
P. viridis Hsp70 Varied with Recovery Time after Heat Shock

PvHsp70-2 was induced in all tissues examined when P. viridis were heated at 38°C for 30 min followed by 6 h recovery (Figs 2, 4 and 5) (S1D Table). In adductor muscle, PvHsp70-2 declined at 12 h and was not detectable at day 2, whereas PvHsp70-1 was reduced at day 8 but still visible at day 12 (S1C Table). In foot, gill and mantle, PvHsp70-2 was present at day 4 but was difficult to detect at day 6. PvHsp70-1 decreased at either day 4 or 6 in these three tissues (Figs 4 and 5). Heat shock at 38°C for 30 min with 6 h recovery induced PvHsp70-2 in all tissues of P. viridis examined, hence these conditions were used for NLHS in subsequent experiments.

Identification of Hsp70 Isotypes in P. viridis by Mass Spectrometry

Although only a single prominent band was distinguished around 70 kDa in 7% SDS polyacrylamide gels (Fig 2), two bands were seen upon electrophoresis in 5% gels (Fig 6). The two bands were excised and the resident proteins analyzed by mass spectrometry. The upper band contained an Hsp70 equivalent to PvHsp70-1 and the lower band an Hsp70 corresponding to PvHsp70-2. Utilization of Mascot sequence matching software with the Ludwig NR database revealed that PvHsp70-1 and PvHsp70-2 were respectively most similar to Hsp70 from P. viridis and M. galloprovincialis (Table 1). Both isotypes were similar to Hsp70s in other bivalves (Table 2).

NLHS Promoted Thermotolerance in P. viridis

Exposure to NLHS increased the survival of mussels approximately 2-fold upon LT50 challenge, whereas 50% of animals primed with NLHS were viable after LHT (Fig 7). (Fig 5 legend, in part: amounts of PvHsp70-2 in tissues of P. viridis upon NLHS at 38°C with different recovery lengths were determined as just described. Data are presented as mean ± standard errors. Bars that are not visible denote the absence of PvHsp70-2. Asterisk (*) represents statistical difference against the control treatment (P<0.05). The experiment was performed in duplicate. c and 28, mussels not receiving NLHS (control).)

NLHS Increased the Tolerance of P. viridis to Bacterial Infection

Exposure of non-heated P. viridis to V. alginolyticus for 72 h revealed an LC50 of 1.0 × 10^8 bacteria/ml (Table 3). Priming P. viridis with NLHS enhanced survival upon 72 h exposure to 1 × 10^8 V. alginolyticus/ml from 50% to almost 100% (Fig 8).

Discussion

The synthesis of Hsp70, the best studied stress protein, is induced in aquatic animals by heat stress [25,4,26], and in this work immunoprobing of western blots revealed that two Hsp70 isotypes increased in P. viridis after NLHS. The P. viridis Hsp70s, PvHsp70-1 and PvHsp70-2, respectively matched Hsp70 in P. viridis and M. galloprovincialis most closely, while being similar to Hsp70s from other bivalves. PvHsp70-1 and PvHsp70-2 increased in the adductor muscle, foot, gill and mantle of P. viridis upon NLHS. Accumulation of Hsp70 in the tissues examined in this study was also seen in O. edulis, M. galloprovincialis, and C. gigas [22,27,28]. PvHsp70-1 was observed in mussel tissues before heat shock, indicating that it was produced constitutively, whereas PvHsp70-2 was not apparent until recovery after NLHS at temperatures favourable for growth, observations similar to those for other aquatic organisms exposed to heat perturbation [9]. PvHsp70-2, the inducible isotype of Hsp70, was present in all tissues of P. viridis examined after 6 h recovery from heat shock, but shorter post-shock times were not examined, so synthesis may have occurred earlier. PvHsp70-2 was induced maximally by a 30 min heat shock at 38 to 40°C, in line with findings that Hsp70 is induced when aquatic organisms experience temperatures 5-10°C above their normal growth requirement [29,30].
PvHsp70-2 persisted for several hours after induction by NLHS, perhaps providing protection against subsequent stress, but its synthesis eventually decreased, as seen in other species [29]. Constitutive Hsp70, transiently induced by heating P. viridis, may function cooperatively with inducible Hsp70 to protect cells against heat, pathogens, heavy metals and other insults [31,7,4]. P. viridis Hsp70 may be crucial for cell survival during stress, expediting protein repair and reducing protein denaturation that could lead to death [32,4]. In this study, a 30 min heat shock at 38°C with 6 h recovery was selected as the NLHS because this treatment increased the amount of PvHsp70-1 and PvHsp70-2 in P. viridis without causing mortality. The possibility of Hsp70 breakdown generating a second band is low because the mass spectrometry indicated two different Hsp70s and protease inhibitors were used when making the protein extract. (Fig 6 legend: two 70 kDa protein bands were resolved in 5% SDS polyacrylamide gels. Tissues corresponding to the adductor muscle, foot, gill and mantle were isolated from P. viridis exposed to NLHS, homogenized, centrifuged and applied to a 5% SDS polyacrylamide gel. Labeled arrows indicate excised regions of the gel subsequently shown by mass spectrometry to contain PvHsp70-1 and PvHsp70-2. M, molecular mass in kDa; C, mussels not receiving NLHS; HS, mussels receiving NLHS; F, foot; G, gill; MA, mantle; MU, adductor muscle.)

Induced thermotolerance refers to the ability of an organism to withstand an otherwise lethal temperature, a condition achieved by priming animals with NLHS [8]. In this study, groups of mussels primed with NLHS and exhibiting increased Hsp70 survived LT50 challenge, whereas approximately 50% survived LHT challenge. Clearly, NLHS promoted thermotolerance in P. viridis, an observation similar to that for Mytilus edulis, where exposure to a short heat shock increased resistance to lethal heat for 3 d [33]. In other bivalves, constitutively expressed and induced stress proteins are thought to mediate thermotolerance, and this may require their cooperation. For example, the up-regulation of constitutive 77 and 72 kDa proteins and the synthesis of an inducible 69 kDa protein by NLHS promote thermotolerance in C. gigas [22]. Additionally, increases in constitutive and inducible Hsp70s correlate with thermotolerance induction in the adult oyster, Ostreola conchaphila [34], and Hsp70 accumulation parallels increasing protection of A. irradians irradians juveniles against LHT, with thermotolerance lasting at least 7 days [35]. (Fig 7 legend: NLHS enhanced the thermotolerance of P. viridis. P. viridis acclimatized at 28°C were exposed to NLHS and then heated for 30 min at their LT50 (42°C) and LHT (44°C). Survivors were counted 24 h after challenge. Data are presented as mean ± standard errors. Asterisk (*) represents statistical difference against the control (P<0.05). The experiments were performed in triplicate. Non-induced, mussels not receiving NLHS; Induced, mussels exposed to NLHS.) The correlation between Hsp70 accumulation and increasing thermotolerance in bivalves is also observed for other aquatic organisms such as fish and shrimp [4]. These studies indicate the importance of constitutive and inducible Hsp70 isotypes in bivalve thermotolerance; however, Hsps in addition to Hsp70 may contribute to thermotolerance by protecting proteins against heat denaturation and assisting protein refolding [36][37][38], both vital to cell homeostasis.
Cross-protection or cross-tolerance, an enhanced tolerance to a particular stress acquired by an initial transient, but different, stress [39,40,7,4], was demonstrated by challenging P. viridis subjected to NLHS with V. alginolyticus. The enhanced protection of P. viridis against V. alginolyticus correlated with increasing amounts of PvHsp70-1 and PvHsp70-2 in all tissues of P. viridis examined. (Fig 8 legend: NLHS increased the bacterial tolerance of P. viridis. P. viridis acclimatized at 28°C were exposed to NLHS, and then incubated with 1 × 10^8 V. alginolyticus/ml, the LC50 for this species. Survivors were counted 72 h after challenge. Data are presented as mean ± standard errors. Asterisk (*) represents statistical difference against the control (P<0.05). The experiment was performed in triplicate. Non-induced, mussels not receiving NLHS; Induced, mussels exposed to NLHS.) A role for Hsp70 in averting infection is suggested for C. virginica, where the augmentation of a constitutive 69 kDa protein and the induction of a 72 kDa protein after sub-lethal heat shock promotes survival upon challenge with Perkinsus marinus [41]. The accumulation of Hsp70 after a short heat stress corresponds to increased resistance of P. monodon against gill associated virus (GAV), an effect accompanied by reduction of viral replication [11]. Additionally, accumulation of Hsp70 after NLHS enhances the tolerance of A. franciscana larvae against pathogenic V. campbellii and V. proteolyticus (Sung et al., 2007). Clearly, the accumulation of Hsp70s induced by NLHS correlates with survival against subsequent infection, suggesting Hsp70 influences the immune response. Hsps may stabilize cells against injury in response to pathogen proliferation, mediate folding of cell proteins synthesized in response to bacterial pathogens, store and re-fold partially denatured protein and stimulate the innate immune response, possibly by sending danger signals to the innate immune system [42-44, 4, 7]. Although bivalves rely solely on an innate, non-lymphoid system of immune responses [1,45], some of the immune mechanisms in bivalves are structurally and functionally similar to those in vertebrates [46]. Hsp70 attenuates infections in vertebrates by activating toll-like receptors (TLRs) and transducing signals from inflammatory reactions to cells of the innate immune system such as macrophages, dendritic cells and neutrophils [47,48]. The extracellular Hsp70 family promotes inflammatory cytokine production [49] and may elicit production of inducible nitric oxide synthase [50], interleukin (IL)-1β, IL-6 and tumor necrosis factor α (TNFα) [51] to guard against infection. Invertebrate Hsps restrict bacterial infection by activation of TLRs [7], but there is no evidence indicating a relationship between Hsps and TLRs in bivalves. Considering that TLR genes occur in bivalves such as C. farreri [52], C. virginica [53], A. irradians [54] and M. mercenaria [55], immune activation via TLRs is possible in P. viridis. The data presented herein demonstrate that Hsp70 is induced in P. viridis by NLHS and that Hsp70 plays a role in increasing the thermotolerance of P. viridis and enhancing survival against V. alginolyticus challenge. Further work is required to elucidate the role of Hsps in induced thermotolerance and the immune response of mussels, perhaps with the application of molecular tools such as RNA interference (RNAi).
Such studies are of fundamental interest and have applied significance through the formulation of strategies to protect aquatic organisms against stress and disease, of particular importance in aquaculture.

Supporting Information

S1A Table. Amounts of PvHsp70-1 interpreted as reflective density/mm^2 in tissues of P. viridis exposed to NLHS. The amounts of PvHsp70-1 in adductor muscle, foot, gill and mantle of P. viridis exposed to heat shock at 30, 32, 34, 36, 38 and 40°C were determined by densitometry analysis of antibody-stained Western blots as described in Materials and Methods. Data are presented as mean ± standard deviation. Asterisk (*) represents statistical difference against the control treatment (P<0.05). 28, mussels not receiving NLHS (control).

S1B Table. Amounts of PvHsp70-2 interpreted as reflective density/mm^2 in tissues of P. viridis exposed to NLHS. The amounts of PvHsp70-2 in adductor muscle, foot, gill and mantle of P. viridis exposed to heat shock at 30, 32, 34, 36, 38 and 40°C were determined by densitometry analysis of antibody-stained Western blots as described in Materials and Methods. Data are presented as mean ± standard deviation. Asterisk (*) represents statistical difference against the control treatment (P<0.05). 28, mussels not receiving NLHS (control).

S1C Table. Amounts of PvHsp70-1 interpreted as reflective density/mm^2 in tissues of P. viridis upon NLHS at 38°C with different recovery lengths. The amounts of PvHsp70-1 in the adductor muscle, foot, gill and mantle of P. viridis upon NLHS at 38°C with different recovery lengths. Data are presented as mean ± standard deviation. Asterisk (*) represents statistical difference against the control treatment (P<0.05). c and 28, mussels not receiving NLHS (control).

S1D Table. Amounts of PvHsp70-2 interpreted as reflective density/mm^2 in tissues of P. viridis upon NLHS at 38°C with different recovery lengths. The amounts of PvHsp70-2 in the adductor muscle, foot, gill and mantle of P. viridis upon NLHS at 38°C with different recovery lengths. Data are presented as mean ± standard deviation. Asterisk (*) represents statistical difference against the control treatment (P<0.05). c and 28, mussels not receiving NLHS (control).

(DOCX)
On the use of the Jander equation in cement hydration modelling

The equation of Jander [W. Jander, Z. Anorg. Allg. Chem. (1927) 163: 1-30] is often used to describe the kinetics of dissolution of solid cement grains, as a component of mathematical descriptions of the broader cement hydration process. The Jander equation can be presented as kt/R^2 = [1 − (1 − α)^(1/3)]^2, where k is a constant, t is time, R is the initial radius of a solid reactant particle, and α is the fractional degree of reaction. This equation is attractive for its simplicity and apparently straightforward derivation. However, the derivation of the Jander equation involves an approximation related to neglect of particle surface curvature, which means that it is strictly not correct for anything beyond a very small extent of reaction. This is well documented in the broader literature, but this information has not been effectively propagated to the field of cement science, which means that researchers are continuing to base models on this erroneous equation. It is recommended that if the assumptions of diffusion control and unchanging overall particle size which lead to the selection of the Jander equation are to be retained, it is preferable to instead use the Ginstling-Brounshtein equation [A.M. Ginstling, B.I. Brounshtein, J. Appl. Chem. USSR (1950) 23: 1327-1338], which does correctly account for particle surface curvature without significant extra mathematical complication. Otherwise, it is possible (and likely desirable) to move to more advanced descriptions of particle-fluid reactions to account for factors such as dimensional changes during reaction, and the possibility of rate-controlling influences other than diffusion.

1 Introduction

The ability to predict the rate, and thus the extent, of hydration of cementitious solid precursors (Portland cement, alternative cements and/or supplementary cementitious materials) lies at the heart of any model which describes the evolution of the chemistry or microstructure of pastes, grouts, mortars or concretes based on these materials. Such models are essential to the description of heat evolution, internal chemical and geometric evolution of hydration products, and performance in service. Many models have been developed and published to describe different aspects of the hydration of various types of cements, with different degrees of chemical and microstructural specificity, and making a wide range of different assumptions regarding the rate-controlling processes and mechanisms [1,2]. It is not the purpose of this Letter to enter into the debate regarding the relative merits of each of the specific detailed models that have become available, but rather to provide an assessment of one of the underlying equations which is often incorporated (implicitly or explicitly) into such models: the equation of Jander [3], Eq. 1, used to describe the rate of consumption of a solid precursor grain during a chemical reaction process such as hydration:

[1 − (1 − α)^(1/3)]^2 = kt/R^2    (Eq. 1)

where k is a constant, t is time, R is the initial radius of a solid reactant particle (e.g. a cement grain), and α is the fractional degree of reaction. The Jander equation is used to describe the rate of retreat of the surface of a partially reacted spherical solid particle, where the rate-controlling step is the diffusion of reactants through a product layer to an interface at which an (assumed instantaneous) reaction takes place, the product layer directly replacing the space filled by the initial reactant particle with no change in volume.
This is a classic example of a 'shrinking core reaction' [4][5][6]. In the specific case of cement chemistry, this would correspond to the limitation of the reaction rate solely by formation of, and diffusion through, the inner product. The importance of this particular type of mechanism is the reason for the inclusion of the Jander model (although in a reorganised and rearranged form) in the now widely-followed formulation of Parrot & Killoh [7] to describe what is often identified as a diffusion-controlled regime during the process of cement hydration. It has also been noted that the Jander equation implicitly assumes that all reacting particles are mono-sized, and although corrections explicitly describing certain particle size distributions [4,8] have been introduced, these have not seen widespread use in cement science. Other empirical adaptations of the Jander model have been proposed and used in the literature, including either modification of the power-law exponent from 2 to another value (introduced in [9] and used by various cements researchers since), or the use of a logarithmic time-dependence (introduced in [10] for glass-melting kinetics and also adopted by various cements researchers); these forms do not have a rigorous analytical derivation and so are of doubtful validity. The relevance of these physical assumptions to the case of cement hydration has previously been called into question [11], and will undoubtedly vary depending on the specific timeframe, and type of cementitious material, being modelled [7]. There is increasing evidence that a pure diffusion-controlled model is not likely to give a realistic description of cement hydration processes, particularly at earlier age, as interfacial and aqueous-phase processes are also influential in determining reaction rates. However, at a more fundamental level, and even if the underlying assumptions were taken to be valid, the Jander approach is itself flawed, and this is the topic of the current Letter. It is also noted that other authors have provided discussion along these lines, including in the specific context of gas-solid reactions in metallurgy [4,12] and for purely solid-state reactions relevant to pharmaceuticals [13], but the continued usage of the Jander equation by construction materials scientists appears to raise the need for its discussion in a topic-specific journal.

2 Derivation of the Jander equation, and a (correct) alternative: Ginstling-Brounshtein

The derivation of the Jander equation commences with the assumptions embodied in Fig. 1. Based on these assumptions, and taking x as the thickness of the product layer, the diffusion-controlled mechanism requires:

dx/dt = k′/x    (Eq. 2)

where k′ is a constant which effectively incorporates physical and chemical parameters. This can be integrated to yield:

x^2 = 2k′t    (Eq. 3)

The extent of reaction of a spherical particle, α, is then defined according to Fig. 2. The fundamental error in the Jander [3] formulation then lies in the next step, where the substitution of α defined in spherical coordinates (Fig. 2) is made into Eq. 3, which was derived in Cartesian coordinates. This substitution, which neglects the surface curvature, uses Eq. 4:

x = R[1 − (1 − α)^(1/3)]    (Eq. 4)

which when substituted directly into Eq. 3 (and setting k = 2k′ for simplicity) yields Eq. 1, the Jander formula [3]. To avoid this erroneous substitution, the integration must instead be carried out in spherical coordinates, i.e. with full consideration of surface curvature. Eq. 2 should then be replaced by Eq. 5 [4]:

dx/dt = k′R/[x(R − x)]    (Eq. 5)

Integration of Eq. 5 yields Eq. 6:

x^2/2 − x^3/(3R) = k′t    (Eq. 6)
Using the definition of α from Fig. 2 in Eq. 6 then yields the correct description of the particle-fluid reaction assuming rate control by diffusion through a product layer with product volume equal to initial particle volume, known as the Ginstling-Brounshtein equation, Eq. 7 [14]:

1 − (2/3)α − (1 − α)^(2/3) = kt/R^2    (Eq. 7)

3 Why is this important? - comparison between the Jander and Ginstling-Brounshtein models

Fig. 3 shows a comparison between the predictions of the Jander and Ginstling-Brounshtein models. These models are generally fitted to extent of reaction vs time data, and so to replicate this process while giving a comparison of the models on a realistic basis, they are presented in Fig. 3a normalised to match the times required to reach specified extents of reaction (50, 75 and 100%) between the two models. The parameterisation process here is therefore essentially the equivalent of taking a single extent of reaction 'measurement' and fitting both models to that data point, then observing the differences between the models at all reaction extents other than the one used in fitting. Sharp et al. [15] have previously presented this type of analysis of the Jander, Ginstling-Brounshtein and other kinetic equations based solely on matching the time to 50% reaction; the comparison presented here at different reaction extents provides additional insight into the differences between the models, and the pitfalls which may be encountered in parameterising them for practical use. This method of presenting the models shows that the main differences between the kinetic predictions of the two models occur at higher reaction extents. This is most evident when the models are parameterised to give equal times to 100% reaction (grey dashed vs grey solid line in Fig. 3a); in this case, the predictions for the time required to reach intermediate reaction extents (e.g. 50% reaction) differ by more than 100% from the Jander to the Ginstling-Brounshtein model. The divergence becomes less notable at intermediate extents of reaction when the models are parameterised using equal time to either 50% or 75% reaction (solid and dashed black lines in Fig. 3a, respectively); in such cases, the predictions of both models are rather similar up to a reaction extent of ~80%, but diverge from each other significantly after this. The reason for the divergence between the models at high extents of reaction is related to the geometric errors in the formulation of the Jander model, where both the area and the radius of curvature of the reaction interface become much smaller as the solid reactant is consumed and replaced by a thickening product layer. The neglect of the effects of this curvature is therefore more critical at these higher extents of reaction, and the rate of consumption of the final 20% of the solid reactant becomes very markedly different between the two model formulations. Fig. 3b magnifies the low-conversion region of Fig. 3a, showing that there is in fact rather little difference between the predictions of the two models below 20% reaction extent (α = 0.2), as long as the parameterisation is conducted so that the two models are parameterised using data for 75% reaction or less.
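To make the Fig. 3a-style comparison concrete, the sketch below (an illustration written for this discussion, not code from the Letter) calibrates each model's effective rate constant so that both reach a chosen extent of reaction at the same time, then compares the predicted α(t). Both models have the form f(α) = kt/R^2, so t is proportional to f(α) and α(t) can be recovered by inverting f numerically:

```python
# Illustrative comparison of the Jander and Ginstling-Brounshtein kinetics.
# Each model has the form f(alpha) = k * t / R^2; calibrating k so that both
# models reach alpha_cal at t_cal mimics fitting each model to one data point.

def f_jander(alpha: float) -> float:
    return (1.0 - (1.0 - alpha) ** (1.0 / 3.0)) ** 2

def f_gb(alpha: float) -> float:
    return 1.0 - (2.0 / 3.0) * alpha - (1.0 - alpha) ** (2.0 / 3.0)

def alpha_of_t(f, k, t, tol=1e-12):
    """Invert the monotonically increasing f(alpha) = k*t by bisection on [0, 1]."""
    target = min(k * t, f(1.0))   # cap at complete reaction
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

alpha_cal, t_cal = 0.75, 1.0        # calibrate: 75% reaction at t = 1 (arbitrary units)
k_j = f_jander(alpha_cal) / t_cal   # effective k/R^2 for the Jander model
k_gb = f_gb(alpha_cal) / t_cal      # effective k/R^2 for Ginstling-Brounshtein
for t in (0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"t = {t:4.2f}: Jander alpha = {alpha_of_t(f_jander, k_j, t):.3f}, "
          f"G-B alpha = {alpha_of_t(f_gb, k_gb, t):.3f}")
```

With this parameterisation the two curves agree closely at low and intermediate conversions and separate beyond roughly 80% reaction, which is the behaviour described above.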
However, the fundamental principle that it is better to use an equation that is analytically correct than one that is not, and the negligible extra complexity of the Ginstling-Brounshtein expression compared to the Jander model in terms of inclusion in a model code, would both support the use of the Ginstling-Brounshtein model [14] as the preferred description of cement grain consumption during hydration, if the assumptions of diffusion control, constant volume, and solely inner product formation are to be retained. These assumptions themselves have been (rightly) criticised as oversimplifications of the actual process of cement hydration [2], but in instances where it is desirable to build a model from this simplified starting point, it is at least necessary to build it using mathematically correct equations.

4 Consequences, recommendations and conclusions

It has previously been proposed by Cable [16], writing in defence of the Jander model, that the model was originally formulated only to describe low extents of reaction, and Fig. 3b does show that fitting the model solely in such a range would give results that are likely to match those of the Ginstling-Brounshtein model to within experimental uncertainty. However, the original Jander paper [3] does present experimental data and model fits for extents of reaction exceeding 80%, which calls this proposal into question. The extents of reaction of blast furnace slag and fly ashes in practical cementitious blends (w/c 0.40; replacement levels 30-40%) have been determined by multiple techniques in a recent RILEM round robin test [17]; at 90 d, slag had reacted around 40-50% and fly ashes 20-30%. According to the literature survey of Zeng et al. [18], the extent of cement hydration at this age and w/c ratio would be expected to be around 70-80%. Thus, based on the findings presented in the previous section, it may be expected that the fitting of either the Jander or Ginstling-Brounshtein models to extent of reaction data obtained at 90 d or earlier for such materials would give similar predictions of reaction rates during this timeframe. However, there will be instances where much higher extents of reaction are important within meaningful timeframes: Portland cement hydration beyond 12 months, or blending of cementitious systems with silica fume, can lead to extents of reaction (of one or more components) that exceed 90% (α = 0.90). Any application of the Jander model to such cases will introduce severe errors. Giess [19] also conducted a comparative analysis of the Jander and Ginstling-Brounshtein models, up to α = 0.94, applying the Arrhenius temperature-dependence relationship to calculate fundamental rate constants from the model parameters. The use of the Jander equation was seen to lead to an error of as much as 20% in the extracted rate constants, particularly at higher extents of reaction. A further level of development beyond the Ginstling-Brounshtein approach was provided by Valensi [20] and by Carter [12], who each derived equations which account for both curvature of particle surfaces and the potential for formation of a reaction product which does not fill exactly the same space as the original unreacted solid grain. The formulation of Carter [12] presents this in a more user-friendly manner, Eq. 8:
[1 + (z − 1)α]^(2/3) + (z − 1)(1 − α)^(2/3) = z + 2(1 − z)kt/R^2    (Eq. 8)

where z is the ratio of the specific volumes of the product and reactant; this is a key parameter which has also been used in microstructurally-based models of cement hydration [21] and can thus be relatively readily obtained from the literature. However, this model does not enable any discrimination between processes taking place in inner and outer product regions. For the specific case of cement hydration, Taplin [22] also derived (and then further developed in subsequent publications) a set of equations involving rate control by diffusion through both inner and outer product regions, which can be reduced to the Ginstling-Brounshtein model in the limit of low influence of the outer product [11]. Xie and Biernacki [2] have described in detail the development of many other models based on different sets of assumptions about controlling mechanisms, geometry and reaction product formation; the available models have gained in sophistication (but not always in clarity regarding the underlying mechanisms) in the past decades as computing power and the ability to store and manipulate three-dimensional reaction simulation snapshots have improved. However, these models are usually based at a fundamental level on simple analytical expressions describing the reaction rate and mechanism associated with each individual cement grain as it hydrates, and the mode and location of growth of the hydrates. It is therefore essential that the underlying physicochemical processes are captured as accurately as is realistically possible. For this reason, the key conclusion of this Letter is that the Jander equation is not suitable for use in describing cement hydration, even if the assumption of diffusion control is to be retained, either as a stand-alone model or as an underpinning component of a broader model structure, as its mathematical derivation is fundamentally flawed.
Hsp105α Suppresses Hsc70 Chaperone Activity by Inhibiting Hsc70 ATPase Activity

Hsp105α is a mammalian member of the HSP105/110 family, a diverged subgroup of the HSP70 family. Hsp105α associates with Hsp70/Hsc70 as complexes in vivo and regulates the chaperone activity of Hsp70/Hsc70 negatively in vitro and in vivo. In this study, we examined the mechanisms by which Hsp105α regulates Hsc70 chaperone activity. Using a series of deletion mutants of Hsp105α and Hsc70, we found that the interaction between Hsp105α and Hsc70 was necessary for the suppression of Hsc70 chaperone activity by Hsp105α. Furthermore, Hsp105α and deletion mutants of Hsp105α that interacted with Hsc70 suppressed the ATPase activity of Hsc70, with the concomitant appearance of ATPase activity of Hsp105α. As the ATPase activity of Hsp70/Hsc70 is essential for the efficient folding of nonnative protein substrates, Hsp105α is suggested to regulate the substrate binding cycle of Hsp70/Hsc70 by inhibiting the ATPase activity of Hsp70/Hsc70, thereby functioning as a negative regulator of the Hsp70/Hsc70 chaperone system.

Hsp105α and Hsp105β are mammalian stress proteins that belong to the HSP105/110 family. Hsp105α is constitutively expressed and induced by various forms of stress, whereas Hsp105β is an alternatively spliced form of Hsp105α that is specifically produced following heat shock at 42°C (1-3). Hsp105α and Hsp105β suppress the aggregation of denatured proteins caused by heat shock in vitro, as does Hsp70, but the refolding activity of these proteins is yet to be revealed (4). These proteins exist as complexes associated with Hsp70 and Hsc70 (a constitutive form of Hsp70) in mammalian cells (5,6) and suppress the chaperone activity of Hsc70 in vitro and in vivo (4,7). Furthermore, Hsp105α and Hsp105β are phosphorylated at Ser509 by protein kinase CK2 (CK2) in vitro and in vivo, and the CK2-mediated phosphorylation modulates the inhibitory effect of Hsp105α on Hsp70/Hsc70 chaperone activity (7). Recently, Hsp105α and Hsp105β were suggested to function as a substitute for Hsp70 family proteins to suppress the aggregation of denatured proteins in cells under severe stress, in which the cellular ATP level decreases markedly (8). The HSP70 family is a major and well characterized group of heat shock proteins.
Several different species of HSP70 family proteins are present in different compartments of eukaryotic cells and play important roles as molecular chaperones that prevent the irreversible aggregation of denatured proteins and assist folding, assembly, and translocation across the membrane of cellular proteins (9,10). The chaperone activity of Hsp70/Hsc70 relies on its ability to bind to short exposed hydrophobic stretches of polypeptide substrates in an ATP-regulated fashion. The ATP-bound Hsp70/Hsc70 exhibits low affinity and fast exchange rates for substrate, whereas the ADP-bound form has high affinity and slow exchange rates for substrate (11-14). Conversion of ATP-bound Hsp70/Hsc70 to the ADP-bound form is induced by its intrinsic ATPase activity, which is facilitated by co-chaperones of the HSP40 family (15). Many proteins have been identified as regulators of Hsp70/Hsc70-mediated refolding of denatured proteins (16-20). Hip stabilizes the ADP-bound form of Hsp70/Hsc70 and prevents the ATP-ADP cycle of Hsp70/Hsc70 (16). BAG-1 inhibits the chaperone activity of Hsp70/Hsc70 through the promotion of the dissociation of ADP from Hsp70/Hsc70 (17,18). CHIP suppresses the reaction cycle of Hsp70/Hsc70 by preventing the binding of ATP or inhibiting the hydrolysis of ATP (19,20). The predicted secondary structure of Hsp105α and Hsp105β is composed of an N-terminal ATP-binding domain, a β-sheet domain, a loop, and a C-terminal α-helical domain, similar to those of HSP70 family proteins (2,3). The β-sheet domain of Hsp105α and Hsp105β binds denatured proteins, as does that of Hsp70/Hsc70 (8). However, although the ATP-binding domain of Hsp105α and Hsp105β is conserved among HSP70 family proteins, ATP binding by this domain in HSP105 family proteins has not been elucidated. Furthermore, although Hsp105α suppresses the chaperone activity of Hsp70/Hsc70 (4,7), the precise mechanism of the suppression has not yet been clarified. In the present study, we examined the mechanisms by which Hsp105α regulates Hsc70 chaperone activity and revealed that Hsp105α suppresses the chaperone activity of Hsc70 by inhibiting the ATPase activity of Hsc70, with the concomitant appearance of Hsp105α ATPase activity.

EXPERIMENTAL PROCEDURES

Plasmids-Expression plasmids for His-tagged mouse Hsp105α and deletion mutants of Hsp105α in Escherichia coli have been described previously (4,7). To construct an expression plasmid (pTrcHis70) for His-tagged human Hsc70 in E. coli, human Hsc70 cDNA derived from the plasmid pHSC7 (21) was subcloned into the XhoI-KpnI sites of the expression vector pTrcHisA (RIKEN Gene Bank, Ibaraki, Japan). For the construction of deletion mutants of Hsc70, PCR was performed with pTrcHis70 as the template and specific 5′ end-phosphorylated primers (underlining indicates the additional KpnI site): His-Hsc70N2 (1-501 amino acids), 5′-GGGGTACCAGATGTCCAAGGGACCTGCA-3′ and 5′-GGGGTACCTCAAATCTTGTTCTCTTTTCCCGT-3′; His-Hsc70C1 (509-646 amino acids), 5′-GGGGTACCGTTTGAGCAAGGAAGAC-3′ and 5′-AATCTTCTCTCATCCGCC-3′; His-Hsc70C2 (393-646 amino acids).
Protein Purification-His-tagged proteins were purified by Ni2+-chelating agarose column chromatography (Invitrogen) followed by Mono Q anion-exchange column chromatography (Amersham Biosciences) (4,7). Among the series of deletion mutants of Hsp105α, only His-Hsp105C3 (see Fig. 1A) was purified by Ni2+-chelating agarose column chromatography alone, as the protein bound tightly to and was hardly eluted from the Mono Q column. To purify GST-tagged mouse Hsp105α, the transformant containing pGEX-105 was grown at 37°C in LB medium with 100 μg/ml ampicillin until the A600 reached 0.5. After a 3 h treatment with 0.3 mM isopropyl-β-D-thiogalactopyranoside at 37°C, the cells were ruptured by sonication, and the lysate was loaded onto a glutathione-Sepharose 4B column (Amersham Biosciences) equilibrated with phosphate-buffered saline (PBS). The column was washed with PBS, and bound protein was eluted with an elution buffer containing 50 mM Tris-HCl, pH 8.0, and 10 mM glutathione (reduced form). To remove the GST tag from GST-Hsc70K71A, 50 μg of GST-Hsc70K71A was incubated with 10 units of PreScission protease (Amersham Biosciences) in 50 μl of cleavage buffer containing 50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, and 1 mM dithiothreitol (DTT) at 5°C for 24 h. Then, the reaction mixture was incubated with 20 μl of glutathione-Sepharose 4B beads (50% in cleavage buffer) at 4°C for 1 h, and the unbound fraction containing Hsc70K71A was collected.

(FIG. 2 legend: Hsp105α mutants interacting with Hsc70 suppress the Hsc70 chaperone activity. Luciferase (164 nM) was incubated with Hsc70, Hsp40, and Hsp105α or a deletion mutant (2 μM each) at 42°C for 30 min. Then, to 10 μl aliquots, rabbit reticulocyte lysate was added at 40%, and the mixture was incubated further at 25°C for 30 min. Luciferase activity was assayed, and the relative activity of luciferase is expressed as a percentage of that of the control with Hsc70/Hsp40. Each value represents the mean ± S.D. from four independent experiments. Statistical significance was determined with the unpaired Student's t test. *, p < 0.05. BSA, bovine serum albumin. N, Hsp105N.)

Purified proteins were separated by SDS-PAGE, and Coomassie Brilliant Blue-stained bands were quantified by densitometry. The concentration of proteins was estimated with bovine serum albumin as a standard.

In Vitro Pull-down Assay-The interaction between Hsc70 and deletion mutants of Hsp105α was analyzed by pull-down assay. His-tagged Hsp105α or the deletion mutant (20 μM) was incubated with Hsc70 (20 μM) in 50 μl of a binding buffer containing 20 mM Tris-HCl, pH 7.5, 150 mM NaCl, and 0.5 mg/ml bovine serum albumin at 4°C for 1 h. Then, the reaction mixtures were incubated with 10 μl of Ni2+-chelating agarose at 4°C for 1 h, centrifuged, and washed several times with the binding buffer. Bound proteins were eluted with an elution buffer containing 20 mM Tris-HCl, pH 7.5, 150 mM NaCl, and 500 mM imidazole, separated by SDS-PAGE, and detected by Western blotting using anti-Hsp70 (Sigma) and anti-PentaHis (Qiagen) antibodies. The interaction between Hsp105α and deletion mutants of Hsc70 was determined by GST pull-down assay. His-tagged Hsc70 or the deletion mutant (20 μM) was incubated with GST-Hsp105α (20 μM) in 50 μl of PBS at 4°C for 1 h, and then the mixtures were incubated with 10 μl of glutathione-Sepharose 4B beads (50% in PBS) at 4°C for 1 h.
The beads were washed several times with PBS, and bound proteins were eluted with a buffer containing 50 mM Tris-HCl, pH 8.0, and 10 mM glutathione, separated by SDS-PAGE, and detected by Western blotting using anti-GST (Amersham Biosciences) and anti-PentaHis antibodies.

Protein Refolding Assay-The protein refolding assay was conducted as described previously (4,7). Briefly, luciferase (164 nM) was incubated with Hsc70, Hsp105α, and/or their mutants (2 μM each) in a buffer containing 25 mM Hepes-KOH, pH 7.5, 50 mM KCl, 5 mM MgCl2, 5 mM DTT, and 2 mM ATP at 42°C for 30 min. To aliquots (10 μl) of the reaction mixtures was added 20 μl of reactivation buffer containing 25 mM Hepes-KOH, pH 7.5, 50 mM KCl, 5 mM MgCl2, 5 mM DTT, 2 mM ATP, 10 mM phosphocreatine, 3.5 units of creatine kinase, and 60% rabbit reticulocyte lysate. After incubation at 25°C for a predetermined period, luciferase activity was assayed using a luminometer after mixing an aliquot of the reaction mixture with 50 μl of a luciferase assay solution (Promega).

Analysis of ATP Hydrolysis and ATPase Activity-ATP hydrolysis was determined as described previously (4).

RESULTS

Domains Necessary for the Interaction between Hsp105α and Hsc70-To elucidate how Hsp105α regulates the Hsc70 chaperone system, we first examined the interaction between Hsc70 and Hsp105α deletion mutants (Fig. 1A). A series of His-tagged Hsp105α deletion mutants was incubated with Hsc70, and pull-down assays were performed using Ni2+-chelating beads. Hsp105ΔL and Hsp105ΔC5 interacted with Hsc70 the same as wild-type Hsp105α, whereas Hsp105Δβ and Hsp105ΔβL, lacking the β-sheet domain, failed to interact with Hsc70. Furthermore, Hsc70 did not interact with Hsp105N2 or Hsp105C3, which lacked the α-helix and ATP-binding domains, respectively. These results indicated that all domains of Hsp105α except the loop are essential for the interaction between Hsp105α and Hsc70. Next, we examined the interaction between Hsp105α and Hsc70 deletion mutants (Fig. 1B). A series of His-tagged Hsc70 deletion mutants was incubated with Hsp105α, and pull-down assays were performed using Ni2+-chelating beads. Hsc70 deletion mutants lacking the N-terminal ATP-binding, β-sheet, or C-terminal α-helix domain did not interact with Hsp105α, suggesting that all domains of Hsc70 are necessary for the binding to Hsp105α.

Interaction of Hsp105α with Hsc70 Is Required for the Suppression of Hsc70 Chaperone Activity-We next examined the suppressive effect of Hsp105α and its mutants on Hsc70 chaperone activity. Luciferase was incubated with Hsc70, Hsp40, and Hsp105α deletion mutants at 42°C for 30 min, and luciferase activity was assayed after the addition of rabbit reticulocyte lysate (Fig. 2).

(FIG. 3 legend, in part: N, Hsp105N. B, Hsp105α or its deletion mutant (2 μM each) was incubated with or without Hsc70 (1 μM) and Hsp40 (0.5 μM) in buffer containing 500 μM [γ-32P]ATP (0.1 mCi/mmol). Then, ATP hydrolysis was determined. Among deletion mutants, as the Hsp105α mutant lacking an ATP-binding domain (Hsp105C3) could not be purified by Mono Q column chromatography due to its tight adsorption to the column, the Hsp105C3 preparation that still contained some bacterial proteins was not used for the measurement of ATP hydrolysis. Each value represents the mean ± S.D. from three independent experiments. Statistical significance was determined with the unpaired Student's t test. *, p < 0.01; **, p < 0.05.)
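As a small illustration of the readout used in Fig. 2 and in the refolding assays above (hypothetical numbers, assuming SciPy is available; this is not the authors' analysis script), relative luciferase activity can be expressed against the Hsc70/Hsp40 control and compared with an unpaired Student's t test:

```python
# Illustrative sketch: relative luciferase activity as % of the Hsc70/Hsp40
# control, compared with an unpaired Student's t test. The luminometer
# readings below are hypothetical placeholders.
import numpy as np
from scipy import stats

control_rlu = np.array([9.8e5, 1.1e6, 1.0e6, 9.5e5])   # Hsc70/Hsp40, 4 experiments
plus_hsp105 = np.array([4.9e5, 5.6e5, 5.1e5, 4.4e5])   # + Hsp105alpha, 4 experiments

relative = 100.0 * plus_hsp105 / control_rlu.mean()    # % of control
t_stat, p_value = stats.ttest_ind(plus_hsp105, control_rlu)  # unpaired t test
print(f"{relative.mean():.0f} +/- {relative.std(ddof=1):.0f} % of control, "
      f"p = {p_value:.3g}")
```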
Hsp105ΔL and Hsp105ΔC5, which lacked the loop and the C-terminal 5 amino acids, respectively, and were able to interact with Hsc70, suppressed Hsc70 chaperone activity similarly to Hsp105α. In contrast, the mutants that were unable to interact with Hsc70 did not suppress the chaperone activity of Hsc70. Thus, the direct interaction of Hsp105α with Hsc70 seemed to be necessary for the suppression of Hsc70 chaperone activity.

Hsp105α That Interacts with Hsc70 Enhances ATP Hydrolysis in the Reaction with Hsc70, Hsp40, and Hsp105α-We have shown that hydrolysis of ATP increases when Hsp105α is added to the reaction with Hsc70 and Hsp40 (4). We therefore next examined the effect of Hsp105α deletion mutants on ATP hydrolysis in the reaction with Hsc70 and Hsp40 (Fig. 3). Hsc70 displayed a low basal rate of ATP hydrolysis, which was enhanced ~4-fold by Hsp40 (Fig. 3B). On the other hand, Hsp105α and its deletion mutants did not show any intrinsic ATPase activity (Fig. 3A). However, ATP hydrolysis in the reaction with Hsc70, Hsp40, and Hsp105α or the deletion mutant Hsp105ΔL or Hsp105ΔC5 was enhanced ~2-fold when compared with that in the reaction with Hsc70 and Hsp40 (Fig. 3B). The Hsp105α mutants that did not interact with Hsc70 and did not suppress Hsc70 chaperone activity did not affect the hydrolysis of ATP in the reaction with Hsc70 and Hsp40. Thus, Hsp105α that interacted with Hsc70 seemed to enhance ATP hydrolysis in the reaction with Hsc70 and Hsp40. These findings suggest either that Hsp105α enhances the ATPase activity of Hsc70 or that Hsc70 and Hsp40 induce ATPase activity of Hsp105α.

Hsp105α Suppresses ATPase Activity of Hsc70-Although Hsp105α contains an ATP-binding consensus sequence similar to that of HSP70 family proteins, no ATPase activity of Hsp105α had been detected. We therefore first examined the possibility that Hsp105α enhances the ATPase activity of Hsc70 (Fig. 4). When Hsc70 was incubated with ATP in the absence of K+, Hsc70 existed predominantly in the ATP-bound form, whereas in the presence of K+, Hsc70-bound ATP was hydrolyzed to ADP due to the intrinsic ATPase activity of Hsc70. The hydrolysis of ATP was significantly enhanced by the addition of Hsp40, consistent with the stimulation of Hsc70 ATPase activity by Hsp40 (15). The addition of Hsp105α, either with or without Hsp40, suppressed the hydrolysis of Hsc70-bound ATP in a dose-dependent manner (Fig. 4A). Furthermore, when the effect of Hsp105α deletion mutants on the hydrolysis of Hsc70-bound ATP was examined, Hsp105ΔL and Hsp105ΔC5, which interacted with Hsc70, were found to suppress the hydrolysis of Hsc70-bound ATP, similarly to Hsp105α. However, the mutants that did not interact with Hsc70 did not suppress the hydrolysis (Fig. 4B). These findings suggested that Hsp105α did not enhance the ATPase activity of Hsc70 but rather suppressed Hsc70 ATPase activity by interacting with Hsc70.

ATPase Activity of Hsp105α Is Induced by Hsc70-We next examined the second possibility, that Hsc70 and Hsp40 induce ATPase activity of Hsp105α. When Hsp105α was incubated with ATP in the absence of K+, Hsp105α existed predominantly as the ATP-bound form, and Hsp105α-bound ATP was not hydrolyzed to ADP even in the presence of K+ (Fig. 5A). However, Hsp105α-bound ATP was converted to ADP by Hsc70, either with or without Hsp40, in a dose-dependent manner, but not by Hsp40 alone (Fig. 5, A and B).
To determine whether the conversion of Hsp105α-bound ATP to ADP by Hsc70 is due to the ATPase activity of Hsc70, we prepared an Hsc70 mutant that was defective in ATP binding. The Lys residue at position 71 of human Hsc70, which is essential for hydrolysis of ATP, was substituted with Ala to yield Hsc70K71A (23). Hsc70K71A did not bind ATP (Fig. 6A) but interacted with Hsp105α (Fig. 6B). When ATP-bound Hsp105α was incubated with Hsc70WT or Hsc70K71A, hydrolysis of Hsp105α-bound ATP was observed in the presence or absence of K+, and the hydrolysis was not significantly affected by Hsp40 (Fig. 6C). Furthermore, although Hsc70 did not show any ATPase activity in the absence of K+, Hsc70WT also enhanced the hydrolysis of Hsp105α-bound ATP in the absence of K+. These findings suggest that the ATPase activity of Hsc70 is not necessary for the hydrolysis of ATP bound to Hsp105α, whereas the ATPase activity of Hsp105α is induced by interaction with Hsc70.

DISCUSSION

Hsp105α associates with Hsp70/Hsc70 (5, 6) and suppresses its chaperone activity in mammalian cells (4, 7). The mechanism by which Hsp105α regulates the chaperone activity of Hsp70/Hsc70, however, had not been elucidated. Here, we demonstrated that Hsp105α suppresses the ATPase activity of the Hsc70 chaperone, with the concomitant appearance of Hsp105α ATPase activity.

[Fig. 6 legend, partial: ... (20 μM) in 50 μl of binding buffer for 1 h at 4°C, and then a pull-down assay using Ni2+-chelating agarose was performed. Proteins bound to the beads were eluted, separated by SDS-PAGE, and detected by Western blotting using anti-Hsp70 and anti-PentaHis antibodies. C, [α-32P]ATP-bound Hsp105α (1 μM) was incubated with Hsc70WT or Hsc70K71A (1 μM each) in the presence or absence of Hsp40 (0.5 μM) in buffer containing or not containing K+ at 25°C for 10 min, and the conversion of Hsp105α-bound ATP to ADP was analyzed by thin-layer chromatography. K, Hsc70K71A.]
Search for Correlations between HiRes Stereo Events and Active Galactic Nuclei

We have searched for correlations between the pointing directions of ultrahigh energy cosmic rays observed by the High Resolution Fly's Eye experiment and Active Galactic Nuclei (AGN) visible from its northern hemisphere location. No correlations, other than random correlations, have been found. We report our results using search parameters prescribed by the Pierre Auger collaboration. Using these parameters, the Auger collaboration concludes that a positive correlation exists for sources visible to their southern hemisphere location. We also describe results using two methods for determining the chance probability of correlations: one in which a hypothesis is formed from scanning one half of the data and tested on the second half, and another which involves a scan over the entire data set. The most significant correlation found occurred with a chance probability of 24%.

Introduction

The search for the sources of the highest energy cosmic rays is an important topic in physics today. The energies of these cosmic rays exceed 100 EeV, and the acceleration mechanisms of the astrophysical objects responsible for these events remain unknown. Anisotropy search methods such as those used in X-ray or γ-ray astronomy are difficult to use due to deflections in the trajectories of these charged cosmic rays by Galactic and extragalactic magnetic fields. For a galactic magnetic field strength of ~3 µG and coherence length of ~1 kpc, a 40 EeV cosmic ray should be deflected by two to three degrees over a distance of only a few kpc [1].

There are several reports on anisotropy by previous experiments. An excess of events near the direction of the Galactic center has been reported by the SUGAR and AGASA experiments [2,3]. The Pierre Auger collaboration, however, has recently reported that they have not seen any excess at that location [4]. In addition, the Auger collaboration reported no significant excesses in any part of the southern hemisphere sky [5]. Two reports of anisotropy have been found in the northern hemisphere sky. A dip in the intensity of cosmic-ray events near the direction of the Galactic anticenter has been reported by both the AGASA and High Resolution Fly's Eye (HiRes) experiments, but the significance is too low to claim an observation [6]. Additionally, the AGASA "triplet" is correlated with a HiRes high-energy event [7]. These reports of anisotropy in the northern sky await confirmation or rejection by the Telescope Array experiment [8].

Another method for searching for anisotropy is to search for correlations of cosmic-ray pointing directions with known astrophysical objects that might be sources. In these cases, a small event sample that shows no excess over the expected background can, nevertheless, exhibit correlations with a priori candidate sources, adding up to a statistically significant signal. Past searches have found correlations with BL Lacertae objects; BL Lacs are a class of AGN with a jet pointing toward the Earth, and are plausible candidates for cosmic-ray sources. Correlations have been found with data from the AGASA, HiRes and Yakutsk experiments, all in the northern hemisphere [9]. The Auger collaboration has searched for correlations with BL Lac objects in the southern hemisphere but has found nothing significant [10]. Again, the northern hemisphere correlations await confirmation by the Telescope Array experiment.
There have been speculations that Active Galactic Nuclei (AGN) may contain acceleration regions of the appropriate size and magnetic field strength to accelerate nuclei to the highest energies [11,12]. One should therefore expect the brightest and closest AGN to produce the highest-energy cosmic-ray events at Earth. These events would also have suffered the smallest deflections by the intervening magnetic fields and would point back, most directly, to these AGN. The large number of identified AGN makes them interesting candidates for studying possible correlations with ultrahigh energy cosmic rays. Three natural parameters for determining correlations between cosmic rays and AGN are the maximum difference in angle between the cosmic-ray pointing direction and the AGN, θ_max; the minimum cosmic-ray energy, E_min; and the maximum AGN redshift, z_max.

The Pierre Auger Collaboration have reported a search of two independent sets of their data for correlations of cosmic rays with AGN. They scanned their first data set and found that the most significant correlation occurs for parameters (θ_max, E_min, z_max) = (3.1°, 56 EeV, 0.018). With these selection criteria, they find 12 pairings with AGN from 15 events in the first data set. In the second data set, they find 8 pairings from 13 events and a corresponding chance probability of 0.0017 [13,14].

The HiRes experiment collected data from 1997 to 2006, operating two fluorescence detectors located atop desert mountains separated by 12.6 km in west-central Utah. The HiRes data have been analyzed monocularly, using the data from one detector at a time [15], and stereoscopically, using the data from both detectors simultaneously [16]. The angular resolution is about 0.8°.

[Table 1 caption: Parameters for the functions in Equation 1 that give the coordinates (in celestial right ascension and declination) of the lower boundaries of the 10 bins of equal exposure for the HiRes detector, shown as the 10 lightest shaded regions in Figures 3 and 4.]

The pointing directions of the stereo data extend from zenith to about −32° in declination (celestial coordinates). The corresponding exposure is dependent on right ascension due to seasonal variations in the duty cycle of the detector. The boundaries of regions of equal exposure are best described by Equation 1, where δ and α are celestial declination and right ascension measured in degrees and A, B and C are fit parameters. Table 1 gives the values of A, B and C for plotting the boundaries of the 10 bins of equal exposure shown in Figures 3 and 4.

Figure 1 shows the monocular spectra for the two HiRes sites [15] and that of the Pierre Auger Observatory [17]. At the highest energies, where Auger observes an anisotropy signal, the energy scales of HiRes and Auger differ by about 10%. To account for this difference, the energy scale of the HiRes stereo data set used in this analysis has been decreased by 10% to agree with the Auger energy scale. All energies quoted for the HiRes data from this point on include this 10% shift. There are 13 events with energies greater than 56 EeV in the full HiRes stereo data set, the same number as in the Auger test data set.

The Véron-Cetty and Véron catalog

In this paper, we report on searches for correlations between the pointing directions of ultrahigh energy cosmic rays observed stereoscopically by the HiRes experiment and AGN from the Véron-Cetty and Véron (VCV) catalog, 12th edition [18].
The VCV catalog includes ~22000 AGN, ~550 BL Lacs and ~85000 quasars compiled from observations made by other scientists, and it does not cover the sky evenly. Not only do the Galaxy and its associated dust cover large parts of the sky, particularly in the southern hemisphere, making the identification of AGN extremely difficult in those areas, but some of the sky surveys included in the catalog have covered only small bands of the sky. This makes the total density of AGN in the VCV catalog very uneven across the sky in a way that is neither totally random nor systematic. The locations of a closer subset of sources, with redshift z < 0.1, are more evenly distributed. Because the catalog is large and the correlation circles have a finite size, the search in (θ_max, E_min, z_max) can scan over only a narrow range of θ_max and z_max. To illustrate this using simulated events with isotropically distributed pointing directions, Figure 2 shows that the number of random pairings with AGN is determined by the choice of θ_max and z_max. As θ_max and z_max are increased, the number of random pairings increases, rapidly overwhelming any real correlations between cosmic rays and AGN.

Method

We perform three searches for correlations between cosmic rays and AGN. In the first search we look for correlations in the HiRes stereo data using the (θ_max, E_min, z_max) parameters prescribed by the Auger collaboration [13]. In the second, we divide our stereo data into two equal parts in a random manner, determine the optimum search parameters in the first half of the data by scanning a three-dimensional grid in (θ_max, E_min, z_max), and then examine the second half of the data using these "optimum" parameters. By choosing the best parameters from the first half of the data and using them to form a hypothesis to be tested on a statistically independent sample, no statistical penalties are incurred in the application to the second half of the data. In the third and last search, we analyze the complete data set using the statistical prescription described by Finley and Westerhoff [19] (see also Tinyakov and Tkachev [20]) to arrive at a chance probability that includes the statistical penalty from scanning over the entire data set. Finally, in addition to searching for correlations with AGN, we analyze the degree of auto-correlation in the stereo data over all possible angles and values of E_min.

To arrive at the appropriate chance probabilities for the numbers of correlations seen in each method, we generated 5001 random samples of events using the hour angle-declination method [21,6]. In this method the hour angle and declination of one event and the sidereal time of another are randomly paired to generate a sky plot with the same number of events as the data. Such a sample reproduces the overall observed distribution of events very well (a minimal sketch of this scrambling, together with the correlation count, is given at the end of this section).

Search for Correlations using the Auger criteria

The Auger collaboration has reported the results of searches in (θ_max, E_min, z_max) over two independent data sets. In a scan over the first data set, 12 of the 15 events with E_min = 56.0 EeV were found to lie within θ_max = 3.1° of AGN with z_max = 0.018, with 3.2 chance pairings expected. Using the parameters (3.1°, 56.0 EeV, 0.018), 8 of 13 events in an independent test data set were found to be paired with AGN, with 2.7 chance pairings expected. The chance probability for this occurrence was found to be 0.0017 [13,14].
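The hour angle-declination scrambling and the correlation count can both be written compactly. The sketch below is illustrative rather than the collaboration's actual code: the event and AGN arrays are assumed inputs in degrees, and the great-circle separation is computed directly.

```python
import numpy as np

def scrambled_sky(hour_angle, dec, sidereal_time, rng):
    """One simulated event set by the hour angle-declination method: the
    (hour angle, declination) of one event is paired with the sidereal time
    of another, preserving the detector exposure. Angles in degrees."""
    n = len(hour_angle)
    i = rng.integers(0, n, size=n)          # draw (hour angle, dec) pairs
    j = rng.integers(0, n, size=n)          # draw sidereal times independently
    ra = np.mod(sidereal_time[j] - hour_angle[i], 360.0)  # RA = LST - HA
    return ra, dec[i]

def n_correlated(ev_ra, ev_dec, ev_E, agn_ra, agn_dec, agn_z,
                 theta_max, E_min, z_max):
    """Number of events above E_min lying within theta_max degrees of at
    least one AGN with redshift at most z_max."""
    ev = ev_E >= E_min
    ag = agn_z <= z_max
    if not ev.any() or not ag.any():
        return 0
    d2r = np.pi / 180.0
    ra1 = (ev_ra[ev] * d2r)[:, None]
    de1 = (ev_dec[ev] * d2r)[:, None]
    ra2 = (agn_ra[ag] * d2r)[None, :]
    de2 = (agn_dec[ag] * d2r)[None, :]
    # great-circle separation between each selected event and each AGN
    cos_sep = (np.sin(de1) * np.sin(de2)
               + np.cos(de1) * np.cos(de2) * np.cos(ra1 - ra2))
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return int((sep.min(axis=1) <= theta_max).sum())
```

Repeating n_correlated over many scrambled skies gives the distribution of chance correlations from which the quoted probabilities follow.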
A scan of the entire HiRes data set at (3.1°, 56.0 EeV, 0.018) found 2 AGN pairings for a total of 13 events. Figure 3 shows the locations of the 2 correlated events and the 11 uncorrelated events. We looked for correlations in the 5000 simulated data sets at (3.1°, 56.0 EeV, 0.018) and found the average number of correlated pairs to be 3.2. In addition, 4121 sets had 2 or more correlated events, for a chance probability of 82%. We thus find no evidence for correlations of cosmic-ray events with AGN in our field of view at (3.1°, 56.0 EeV, 0.018). The HiRes data are therefore consistent with random correlations.

Search in two independent data sets

Next, we randomly divide the HiRes stereo data into two equal sets, first examining only one half and setting the other aside. We scan the first half simultaneously in θ_max from 0.1° to 4.0° in bins of 0.1°, in E_min from 10^19.05 to 10^19.80 eV in bins of 0.05 decade, and in AGN z_max from 0.010 to 0.030 in bins of 0.001. For each grid point in the scan, the total number of cosmic rays correlated with at least one AGN is accumulated. We then conduct the same scan in each of 5000 simulated sets with identical statistics to the first half, adding up the total number of correlations in each set for each grid point. At each point, the number of correlated events in each of the 5000 simulated sets is compared with the result in the first half of the data. The criteria for the most significant correlation were found to be (1.7°, 15.8 EeV, 0.020), with 20 correlated events from a total of 97. Only 25 of 5000 simulated sets had 20 or more correlations. Using these criteria as our hypothesis, we then examine the second half of the data at (1.7°, 15.8 EeV, 0.020) and find 14 correlated pairs from 101 events. In 5000 simulated sets with identical statistics to the second half, 741 sets contained 14 or more correlated events, for a chance probability of 15%. For comparison, the point with the most significant correlation in the second half occurs at (2.0°, 20.0 EeV, 0.016), with 14 correlated events of a total 69 and a chance probability of 1.5%. These results are again consistent with random correlations.

Scanning the entire data set

We follow the prescription of Finley and Westerhoff [19] for determining the most significant correlation in the entire data set while also calculating an appropriate statistical penalty for scanning over the entire data set. We scan the data simultaneously in θ_max, E_min and z_max, counting the number of correlated events, n_corr, at each point. This process is repeated for each of the 5001 simulated sets, with P_data, the probability of observing n_corr or more correlations at (θ_max, E_min, z_max), calculated from

P_data(θ_max, E_min, z_max) = Σ_{n ≥ n_corr} P_mc(θ_max, z_max, E_min, n),

where P_mc(θ_max, z_max, E_min, n) is the fraction of the first 5000 simulated sets with exactly n correlated events at (θ_max, E_min, z_max). P_min is then the value of P_data at the critical values (θ_c, E_c, z_c) that minimize P_data. This is found to occur at the critical values (2.0°, 15.8 EeV, 0.016), where there are 36 correlated events out of 198 in the data and 9 of 5000 simulated sets with 36 or more correlated events, for a chance probability of 0.18%. To find the true significance of this signal, we apply the same process to each of the first 5000 simulated sets, finding the value P^i_min = P^i(θ^i_c, E^i_c, z^i_c) by comparing n^i_corr with n_corr for the other 5000 sets. We then count the number of simulated sets, n*_mc, for which P^i_min ≤ P_min.
The chance probability is then found as P_chance = n*_mc / 5000. In this, our most robust method, there were 1210 simulated sets with P^i_min values of 0.0018 or less, for a chance probability P_chance = 24%. Figure 4 shows a sky map of the most significant correlation in the HiRes data. From this final analysis, we draw the same conclusion: the HiRes data are consistent with random correlations with AGN.

Auto-correlation analysis

In addition to searching for correlations with AGN, studies of auto-correlation can be useful for searching for anisotropy in the data. We have analyzed the degree of auto-correlation in the data over all possible angles and made comparisons with the average number of pairs of events for 2000 isotropic simulated data sets. [Figure caption fragment: The darkest shade indicates the region with no exposure.] We find no evidence of auto-correlation for any values of E_min. Figure 5 shows a comparison of the normalized number of pairs of events with energies above 56 EeV in the stereo data to the average normalized number of pairs for 2000 isotropic simulated data sets. The 1σ uncertainty is found by ordering the simulated sets by their maximum deviation from the average and plotting only the first 68% of those simulated sets. As a further check, we scan the data in θ_max and E_min and determine a statistical penalty using the same method presented in Section 3.3. We scan the data in θ_max from 0.5° to 30.0° in bins of 0.5° and in E_min from 10^19.05 to 10^19.80 eV in bins of 0.05 decade. The critical values which minimize P_data are found to occur at (2.0°, 44.7 EeV), where there is one pair of events out of a possible 406 in the data and 227 of 1000 simulated sets with one or more pairs, for a chance probability of 23%. Applying the same process to the 1000 simulated sets, we find 971 sets for which the critical point occurs with a chance probability less than 23%. The probability of measuring the observed degree of correlation in an isotropic data set is 97%.

Conclusions

We have searched for correlations between the pointing directions of HiRes stereo events and AGN from the Véron-Cetty and Véron catalog using three different methods. As search parameters for our analysis, we used the maximum difference in angle between the cosmic-ray pointing direction and an AGN, θ_max; the minimum cosmic-ray energy, E_min; and the maximum AGN redshift, z_max. Our first analysis, using the criteria prescribed by the Pierre Auger Observatory for their most significant correlation, (3.1°, 56.0 EeV, 0.018), finds 2 correlated of 13 total events with an expectation of 3.2 chance correlations. The corresponding chance probability was found to be 82%. In our second search, the total HiRes stereo data were divided into two equal but random parts, and we performed a scan in θ_max, E_min and z_max over one half of the data to determine which parameters optimized the correlation signal. We then examined the other half of the data using these search parameters and found a smaller signal with a chance probability of 15%. Finally, we examined the entire HiRes stereo data set using a more robust method to calculate the chance probability with appropriate statistical penalties. The most significant correlation was found to occur at (2.0°, 15.8 EeV, 0.016), with 36 correlated of 198 total events. This corresponds to a chance probability of 24%. We conclude that there are no significant correlations between the HiRes stereo data and the AGN in the Véron-Cetty and Véron catalog.
We also examined the degree of auto-correlation at all angles and energies. The probability that the data are consistent with isotropy is 97%.
Development of a Cell-Based SARS-CoV-2 Pseudovirus Neutralization Assay Using Imaging and Flow Cytometry Analysis

COVID-19 is an ongoing, global pandemic caused by the novel, highly infectious SARS-CoV-2 virus. Efforts to mitigate the effects of SARS-CoV-2, such as mass vaccination and development of monoclonal therapeutics, require precise measurements of correlative, functional neutralizing antibodies that block virus infection. The development of rapid, safe, and easy-to-use neutralization assays is essential for faster diagnosis and treatment. Here, we developed a vesicular stomatitis virus (VSV)-based neutralization assay with two readout methods, imaging and flow cytometry, that were capable of quantifying varying degrees of neutralization in patient serum samples. We tested two different spike-pseudoviruses and conducted a time-course assay at multiple multiplicities of infection (MOIs) to optimize the assay workflow. The results of this assay correlate with the results of previously developed serology and surrogate neutralization assays. The two pseudovirus readout methods produced similar 50% neutralization titer values. Harvest-free in situ readouts for live-cell imaging and high-throughput analysis results for flow cytometry can provide unique capabilities for fast evaluation of neutralization, which is critical for the mitigation of future pandemics.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of coronavirus disease 2019 (COVID-19), emerged in December 2019 and resulted in a global pandemic [1]. SARS-CoV-2 is highly pathogenic and has caused significant mortality worldwide. A variety of approaches to combat SARS-CoV-2 have resulted in several prophylactics and therapeutics, including RNA- and viral vector-based vaccines, new antivirals, and monoclonal antibodies (mAbs) [2]. SARS-CoV-2 uses a glycoprotein, called spike protein, to enter the host cell through the host cell receptor ACE-2 [3,4]. TMPRSS2, a serine protease, cleaves the spike protein and facilitates viral entry [5]. Broadly protective vaccines prevent the SARS-CoV-2 spike and new variants from binding to the ACE-2 and TMPRSS2 host cell receptors and are vital for combating the pandemic [6]. To predict the effectiveness of these vaccines, it is paramount to determine the titer of neutralizing antibodies (nAbs). Several methods to quantify nAb titers in patient serum have been established, including live virus, pseudovirus, and ELISA-based neutralization assays [7,8]. Use of live pathogenic SARS-CoV-2 requires biosafety level 3 (BSL-3) containment that is not available to most laboratories performing diagnoses of infection, development of antivirals, and other basic or applied research. Alternatively, neutralization assays based on pseudoviruses offer better safety and improved ease of use, requiring only BSL-2 containment. Pseudovirus assays are comparable with live pathogenic SARS-CoV-2 microneutralization assays when implemented as alternative assays [9,10]. Pseudoviruses are recombinant viruses that are engineered to express a surface protein from another virus (i.e., SARS-CoV-2 spike protein) on the coat of the pseudovirus [11]. Genes within a pseudovirus are altered to limit or abolish native surface protein expression, and a plasmid is used to express alternative surface proteins and, sometimes, a fluorescent reporter.
Numerous cell lines expressing ACE-2 and/or TMPRSS2 and pseudoviruses modified with SARS-CoV-2 spike protein, including lentiviral and vesicular stomatitis virus (VSV)-based pseudoviruses, have been generated to facilitate the study of SARS-CoV-2 [12,13]. In this study, we developed a pseudovirus neutralization assay using a VSV pseudovirus with a SARS-CoV-2 spike protein and a Green Fluorescent Protein (GFP) reporter, which served as a convenient reporter of infection [8]. Neutralization was measured by both live-cell imaging and flow cytometry. We show that both readout methods quantified the presence and absence of neutralizing antibodies in patient serum samples. Live-cell imaging and flow cytometry analysis showed comparable quantifications of the neutralization. Comparison of the pseudovirus assay to the bead-based serology and neutralization assays showed high correlation, and receiver operator curve (ROC) analysis resulted in a high area under the curve (AUC).

Pseudovirus Evaluation

Two different pseudoviruses were tested, a lentivirus and a VSV-based pseudovirus with the original Wuhan-Hu-1-strain spike protein and GFP reporter. After 24 h at a multiplicity of infection (MOI) of 1.0, the VSV-based pseudovirus had the most GFP expression, with an average of 80.9 ± 3.06% compared to the lentivirus at 26.6 ± 1.99% (Figure 1). Additionally, the VSV-based pseudovirus was significantly brighter, which enabled better visualization for imaging and gating for flow cytometry. To improve both the speed and ease of use of the assay, the VSV-based pseudovirus was used for the neutralization assay. Additionally, VSV-pseudovirus infections were tested with no coating and with fibronectin coating, and no changes in GFP expression were observed.

Figure 1. Testing Two Different Spike-based Pseudoviruses for GFP Expression. We tested two commercial spike-VSV and spike lentivirus-based pseudoviruses with GFP and eGFP reporters, respectively. An MOI of 1 for each pseudovirus was applied to HEK293T-hACE-2-TMPRSS2-mCherry cells and incubated for 24 h. After 24 h, the cells were washed and processed on an NxT Attune Flow Cytometer. Representative data are shown from one sample of two technical replicates with three biological replicates each. An average GFP expression of 80.9 ± 3.06% was calculated for VSV and an average of 26.6 ± 1.99% for lentivirus. The dashed line indicates what is considered background, which was based on media-only controls.

Assay Time-Course Optimization

To develop a rapid and easy-to-use neutralization assay, cells were infected with the pseudovirus to determine the optimal time and MOI for the assay. HEK293T-hACE-2-TMPRSS2-mCherry cells were infected with the VSV-spike pseudovirus at MOIs of 0.5, 1.0, or 2.0 for 2, 4, 8, 16, and 24 h (Figure 2). Maximum GFP expression was reached at 8 h for all MOIs. A two-way ANOVA with multiple comparisons showed that the 16 h timepoints were not significantly different in GFP expression compared to the 8 h timepoint for each MOI (p = 0.067), and GFP expression at 16 h did not differ between the three MOIs tested (p = 0.299). The 16 h timepoint at an MOI of 0.5 was chosen for the neutralization assay because the differences between the timepoints and MOIs were insignificant. These assay parameters allowed for more manageable workflow timing and reduced virus consumption. A small increase in dead cells was seen at the 24 h timepoint at an MOI of 2.0.
[Figure 2 caption, partial: The results of a two-way ANOVA with multiple comparisons showed that there was no statistically significant difference in infection or cell death between the 8 and 16 h timepoints and no significant difference between the three MOIs at the 8 and 16 h timepoints. Based on these results, the reduced pseudovirus consumption, and ease of use, an MOI of 0.5 and the 16 h timepoint were chosen for the neutralization assay.]

Serum Evaluation

Following the addition of the virus and serum mixtures to the cells, each well was imaged after 16 h (Supplementary Figure S1). The resulting mCherry and GFP images were segmented and analyzed to determine the percent neutralization of each dilution (Supplementary Figure S2A). The cells were dissociated from the plate with trypsin EDTA and analyzed using flow cytometry. Each dilution was also analyzed using FlowJo, and the data were analyzed using the same gating strategy (Supplementary Figure S2B). Using both imaging and flow cytometry readouts for the pseudovirus assay, no neutralization was identified for any of the 28 known negative serum samples. Neutralization was identified in 44 out of 50 known positive serum samples by calculating the NT50 from the calculated neutralizations across the serum dilutions (Figure 3A).
Six serum samples that were known positive samples but had low IgG titers were not identified as neutralizing by either readout. An additional six serum samples did not produce an NT50 value for the surrogate bead-based neutralization assay. For comparison of the NT50 values of the positive serum samples measured by imaging and flow analysis, a Wilcoxon matched-pairs signed-rank test was performed. No significant differences were found between the imaging and flow analysis (p = 0.446), and a Spearman rank correlation test showed high correlation between the two readouts (r_s = 0.988, p < 0.0001) (Figure 3B).

Bead-Based Surrogate and Serology Assay Comparison

The NT50 values were averaged between the live-cell imaging and the flow cytometry readouts to determine average pseudovirus NT50 values. These were then compared to two bead-based assays [14], a previously described spike IgG serology assay and a bead-based surrogate neutralization assay. Evaluation using a Spearman rank correlation coefficient (r_s) showed significant correlation between the NT50 values of the pseudovirus assay and these assays (serology: r_s = 0.797, p < 0.0001; surrogate: r_s = 0.880, p < 0.0001; Figure 4). While all samples provided an IgG titer for the serology assay, only samples with an NT50 value were compared to the pseudovirus assay. Next, all assays were compared to one another using ROC analysis, which determined that the serology assay had the highest AUC (0.965, p < 0.0001) compared to the pseudovirus (0.946, p < 0.0001) and surrogate (0.902, p < 0.0001) assays (Figure 5). Based on the AUCs, all three assays were excellent predictors of the presence of neutralizing antibodies; however, the serology assay performed the best. The serology and pseudovirus assays had almost identical sensitivities and specificities, while the surrogate neutralization assay had lower values at the optimal thresholds. With an optimal threshold of 16.28 BAU/mL, the sensitivity of the serology assay was 89.58% and the specificity was 100%. For the pseudovirus assay, the optimal threshold was an NT50 of 2.006, with a sensitivity of 89.8% and a specificity of 100%. The surrogate assay had an optimal threshold NT50 value of 8.87, with a sensitivity of 77.6% and a specificity of 82.8%.
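A minimal sketch of this kind of ROC comparison is given below, assuming arrays of assay readouts and known positive/negative labels; scikit-learn's roc_curve and the Youden index are stand-ins for whatever GraphPad computes internally, and the example values are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_summary(labels, scores):
    """ROC curve, AUC, and an optimal threshold chosen by Youden's
    J = sensitivity + specificity - 1 (one common definition of 'optimal')."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)
    return auc(fpr, tpr), thresholds[best], tpr[best], 1.0 - fpr[best]

# hypothetical example: 1 = known positive serum, 0 = known negative
labels = np.array([1, 1, 1, 0, 0, 1, 0, 1])
nt50 = np.array([35.0, 2.5, 80.0, 1.0, 1.8, 12.0, 0.9, 5.0])
area, threshold, sensitivity, specificity = roc_summary(labels, nt50)
```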
[Figure 4 caption, partial: Both comparisons demonstrated high correlation, as shown by the high correlation coefficients and statistical significance calculated by the Spearman rank correlation test. The pseudovirus assay showed higher correlation with the surrogate assay than with the serology assay. Because some samples were positive for neutralization under some assays but not others, different sample numbers (n) are shown in the two correlation plots.]

Figure 5. Receiver Operator Characteristic (ROC) Curve Analysis of Serology, Surrogate, and Pseudovirus Neutralization Assays. Based on the sample results, ROC curves were generated for the pseudovirus neutralization assay, bead-based surrogate assay, and spike IgG serology assay. Sensitivity and specificity were determined based on the specified threshold.
Overall, the pseudovirus assay and serology assay had comparable areas under the curve (AUCs), sensitivity, and specificity.

Discussion

One goal for pandemic preparedness is to develop rapid, easy-to-use, high-throughput assays to help determine COVID-19 serological diagnosis and vaccination efficacy. Pseudovirus assays offer BSL-2 convenience and ease of use to laboratories measuring serology neutralization. When testing two pseudoviruses, we found that the VSV-based pseudovirus resulted in a higher percent GFP expression after 24 h than the lentivirus-based pseudovirus (Figure 1). After selecting the VSV pseudovirus, a time-course assay tested three different MOIs to determine the optimal time and MOI to use for this assay (Figure 2).

When considering the pseudovirus assay readout methods, both live-cell imaging and flow cytometry readouts provided comparable measurements of the 50% neutralization titer. Using a dual approach for determining the neutralization provided insight on optimizing samples for both flow cytometry and imaging. The mCherry reporter in the cell line was useful for the imaging readout by providing a fluorescence-based method to segment and normalize the total cells to the virus-infected cells that expressed GFP. Analysis of the total cells by imaging required the optimization of the mCherry segmentation parameters specific to this cell type, while the flow cytometry analysis based on scatter was more straightforward (Supplementary Figure S2). The cell count and distribution within the well were important for the optimization of the imaging assay readout. The cell plating density had to balance having sufficient cells for adequate sampling by flow cytometry against not reaching confluency, at which point image segmentation became challenging. The plating density was ultimately optimized at 7500 cells/well for this assay. Imaging also revealed that cells were frequently lost during media exchanges when the cells were plated directly on tissue culture plastic. Fibronectin improved the HEK293 cell adhesion and caused fewer cells to wash away during media exchanges, which increased the cell retention for the assay and the cell visualization for imaging [15]. Imaging, however, was sensitive to assay artifacts such as bubbles, requiring their removal, which sometimes resulted in fields of view being rejected for analysis. When considering the sample analysis, the imaging assay imaged the well plates directly, with automated analysis output and without any sample harvest or preparation. Only a small amount of sample preparation, consisting of a sample harvest and two washes, was required for running the samples through the NxT Attune autosampler, making both the flow cytometry and imaging methods high-throughput. Overall, the imaging method for this neutralization assay relied on a fluorescent reporter within the cell, a fibronectin coating, and care to ensure that no bubbles were introduced during the addition of the serum-virus mixture. The flow cytometry method required additional sample harvests and washes but had a straightforward gating strategy to quantify neutralization. The imaging method provided automated in situ cell measurements within the plate and automated image analysis, but the flow cytometry method was able to process more cells than could be imaged. Nine 10× magnification fields of view were used to image cells to avoid microplate well-edge effects.
This area represented approximately 13% of the total well surface area, a sampling of the cell population, whereas flow cytometry was able to analyze the entire cell population. Despite the smaller sampling by imaging, there were no significant differences between the NT50 values of the imaging and flow cytometry analyses (p = 0.446), and the results were highly correlated, demonstrating the potential to use these analysis methods interchangeably (Figure 3B, r_s = 0.988). Additionally, these two orthogonal measurements of neutralization on the same pseudovirus-infected samples provided assurance that the measured percent infectivity and the resulting pseudovirus neutralization titers were correct.

Serum samples processed via the pseudovirus neutralization assay were concurrently processed with the previously validated bead-based serology assay and bead-based surrogate neutralization assay [14]. Comparisons of the pseudovirus assay to the bead-based serology and neutralization assays demonstrated significant correlations to both assays (Figure 4, r_s = 0.797 and r_s = 0.880, p < 0.0001). Higher anti-spike IgG titers correlated with higher NT50 values, indicating that the pseudovirus neutralization assay has the potential to be benchmarked to a quantitative value (BAU/mL), enabling improved comparability of NT50 titers. Additionally, differences between the characteristics of these assays could result in low correlation for some of the samples. First, the pseudovirus assay was a cell-based assay, while the serology and surrogate assays were both bead-based assays. For the bead-based neutralization assay, the correlation showed that the NT50 values trended higher for the pseudovirus assay than for the surrogate assay. One potential explanation is that the target antigen for the surrogate assay was the RBD protein, whereas the pseudovirus carried the full spike protein; it is likely that some neutralizing antibodies are directed towards regions of the spike outside the RBD domain [7,8]. There were also differences in the output metrics between the pseudovirus assay, which outputs NT50, and the serology assay, which outputs BAU/mL.

When considering the false negative rate, the pseudovirus neutralization assay was not able to identify six known positive serum samples. Of these samples, the serology assay only considered two as positive. The surrogate neutralization assay was not able to identify twelve confirmed-infection serum samples as positive, including the same six samples as the pseudovirus assay. It is possible that some of these false negative samples were collected from patients early during infection, when they had little to no IgG antibodies or neutralizing antibodies directed to the RBD or the overall spike. Despite the differences in performance between the three assays, a high level of correlation was still observed when comparing the pseudovirus assay to the two established bead-based assays (Figure 4), thereby providing validation of the pseudovirus assay results. Another potential explanation for the differences in neutralization could be infection by different SARS-CoV-2 variants, such as Omicron, Delta, Gamma, or Alpha. A potential limitation of the study is that its samples were collected prior to knowledge of variants other than Wuhan-Hu-1 and prior to the development of methods that can identify different variants. Differences in neutralization based on infections by different variants will be addressed in future studies.
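To make the imaging readout concrete, a minimal sketch of the percent-GFP computation from the two channels is given below. The paper's pipeline used the empirical gradient threshold method [16] in MATLAB; the Otsu threshold from scikit-image here is only a stand-in for that segmentation step, and the function names are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def percent_gfp(gfp_img, mcherry_img):
    """Percent of total (mCherry-positive) cell area that is GFP-positive.
    Otsu thresholding stands in for the empirical gradient threshold method."""
    gfp_mask = gfp_img > threshold_otsu(gfp_img)          # infected-cell area
    cell_mask = mcherry_img > threshold_otsu(mcherry_img)  # total cell area
    total_area = cell_mask.sum()
    if total_area == 0:
        return 0.0
    return 100.0 * np.logical_and(gfp_mask, cell_mask).sum() / total_area
```

Applying this per field of view and per timepoint yields the percent GFP-positive trace that was then fit to a logistic curve.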
In summary, this work highlights a cell-based pseudovirus neutralization assay that is straightforward and easy to perform. The use of a pseudovirus offers a safer alternative to live-virus neutralization assays, which require BSL-3 laboratories. The assay can be easily adapted using pseudoviruses of new SARS-CoV-2 variants and future pandemic viruses. The hands-off nature of the imaging readout and the high-throughput capability of the flow cytometry readout offer fast and convenient ways to quantify neutralization, which is critical to future pandemic preparedness. The methods demonstrated here can serve an important role in quantifying the neutralizing antibody titers needed for the future development of antivirals and monoclonal therapeutics.

Pseudovirus Evaluation

To improve cell adhesion, 96-well tissue-culture-treated plates (Corning, Corning, NY, USA, 3595) were coated with 10 µg/mL of fibronectin (Sigma, St. Louis, MO, USA, F1141) for 4-6 h before washing and seeding the cells. HEK293T-hACE2-TMPRSS2-mCherry cells were seeded at 7500 cells per well and allowed to adhere at 37°C with a 5% volume fraction of CO2 for 24 h. HEK293T cells were used as a negative control. After 24 h, the average cell count for each well was determined after trypsinizing 10 wells on each plate.

Assay Time-Course Optimization

HEK293T-hACE-2-TMPRSS2-mCherry cells were plated on a 96-well tissue-culture-treated plate at 7500 cells per well, placed in a 37°C incubator with a 5% volume fraction of CO2 for 24 h, and tested in the time-course study with three different MOIs of VSV-based pseudovirus: 0.5, 1.0, and 2.0. The MOI was calculated by trypsinizing the cells as described in Section 2.2, and GFP expression was monitored by flow cytometry at 0, 2, 4, 8, 16, and 24 h. After each timepoint, supernatant and cells were harvested, as in Section 2.2. Cells were stained with Live Dead Violet (ThermoFisher, L34963) per the manufacturer's recommendations.

Pseudovirus Neutralization Assay

Serum samples of convalescent, vaccinated, and pre-COVID patients were collected before 2021 and provided by the Frederick National Laboratory for Cancer Research of the National Cancer Institute, the Centers for Disease Control and Prevention, and Abbott Laboratories through respective material transfer agreements (MTAs). Seventy-seven serum samples were tested, including twenty-eight known negative samples (Table S1). These serum samples were used for all assays described in this study to make a direct comparison of the three assays. The sample identity was blinded until after analysis. Known neutralizing and non-neutralizing monoclonal antibodies were provided by Regeneron through an MTA and were used in the study as positive and negative controls on each plate. The World Health Organization international standard (WHO IS, NIBSC code: 20/136) was also used in the study as a control [7]. As described in Section 2.2, 96-well tissue-culture-treated plates were coated with fibronectin for 4-6 h. The wells were seeded with 7500 HEK293T-hACE2-TMPRSS2-mCherry cells per well and allowed to adhere at 37°C with a 5% volume fraction of CO2 for 24 h. After 24 h of adhesion, an average cell count from 10 wells was used to calculate the MOI. A 9-point curve to obtain the 50% neutralization titer (NT50) was generated with an initial 5-fold dilution of serum in OPTI-MEM (Gibco, 31985070) + 2% HI-FBS, followed by 8 subsequent 3-fold dilutions (Supplemental Figure S1).
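The MOI bookkeeping in this protocol reduces to a one-line computation. A minimal sketch is shown below, assuming a hypothetical stock titer in transducing units (TU)/mL; the titer value is illustrative, not taken from the paper.

```python
def virus_volume_ul(cells_per_well, moi, titer_tu_per_ml):
    """Volume of pseudovirus stock (µl) needed per well for a target MOI.
    MOI = transducing units added / cells, so TU needed = MOI * cells."""
    tu_needed = moi * cells_per_well
    return tu_needed / titer_tu_per_ml * 1000.0  # convert mL to µl

# counts averaged over 10 trypsinized wells; hypothetical titer of 1e6 TU/mL
volume = virus_volume_ul(cells_per_well=7500, moi=0.5, titer_tu_per_ml=1e6)  # 3.75 µl
```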
VSV-based pseudovirus expressing the SARS-CoV-2 spike protein from SARS-CoV-2 (Wuhan-Hu-1) with a GFP reporter (Creative Biogene, CoV-012) was added to the media-diluted serum for a final MOI of 0.5 and incubated for 1 h at 37°C. Media were removed from the HEK293T-hACE2-TMPRSS2-mCherry cells, and the virus-serum mixture was added to the wells and immediately placed on a Cytation 5 Cell Imaging Multimode Reader (Agilent, Santa Clara, CA, USA) for live-cell imaging for 16 h. After imaging, samples were harvested and run on the NxT Attune flow cytometer. Controls included media-only wells, virus and media wells, a known neutralizing mAb, and a non-neutralizing mAb. The known non-neutralizing mAb was diluted at 6 µg/mL and 0.06 µg/mL in singlet, while the neutralizing mAb was tested in singlet with 5 ten-fold serial dilutions starting at 6 µg/mL.

Live-Cell Imaging Data Processing and Analysis

Using the Cytation 5, brightfield, GFP, and mCherry fluorescence images were acquired using a 10× objective across nine fields of view per well. The nine fields of view were acquired in the center of the well to avoid well-edge effects. Imaging was performed every 75 min across all sample wells. Image processing and analysis were performed using a custom script implemented in MATLAB R2022a (MathWorks, Natick, MA, USA). GFP and mCherry images were segmented using the empirical gradient threshold method [16]. The total cell area and infected cell area were determined from the mCherry and GFP segmented images, respectively, enabling calculation of the percent GFP-positive cells at every timepoint (Supplementary Figure S2A). The percent GFP-positive cells over the time course was fit to a logistic curve, and the maximum of the logistic curve was used to determine the endpoint percent GFP for a given dilution.

Serum Sample Analysis via Flow Cytometry

After imaging, supernatants from each well were collected and wells were washed with phosphate-buffered saline (PBS). For each wash, supernatants were collected. Wells were trypsinized for 3 to 5 min, the trypsin was inactivated with media, and each sample was diluted with PBS + 2% HI-FBS. Cells were spun at 200× g for 5 min and supernatants were removed. Cells were resuspended in PBS + 2% HI-FBS and put on a non-tissue-culture-treated 96-well plate for loading on the NxT Attune Autosampler. To set up fluorescence compensation, HEK293T-hACE2 cells were used as an unstained control. HEK293T-hACE2 cells that had been infected with VSV-ΔG spike pseudovirus for 24 h were used for GFP compensation, while HEK293T-hACE2-TMPRSS2-mCherry cells were used for mCherry compensation. Samples were analyzed using FlowJo v10.8.1 (Becton Dickinson, Sparks, MD, USA) and gated for cells based on the side and forward scatter areas, single cells based on the forward scatter height and area, and GFP expression (Supplemental Figure S2B). The GFP gate was determined based on the autofluorescence of the non-infected cell control. The same gates were applied to all samples to determine the percent of GFP-infected cells. Only wells with a minimum of 1500 single cells were included in the analysis.

The spike IgG serology assay and surrogate neutralization assay were performed as previously described [14]. Briefly, spike (Wuhan-Hu-1 strain)-conjugated MagPlex-C microbeads were aliquoted onto a 96-well plate at 10,000 beads/well. Patient serum samples, diluted controls, and reference standards were serially diluted and incubated for 30 min in the dark at room temperature while shaking (800 rpm).
A magnetic separator was used to pull the beads down for at least 1 min on the 96-well plate, and then the wells were washed in triplicate with 1X PBS, 1% bovine serum albumin (BSA), and 0.05% Tween 20 (PB-T). PE-labeled anti-IgG was added and incubated again. After two washes, 100 µL of wash buffer was applied and samples were analyzed with 3000 to 5000 gated bead events on the CytoFLEX LX (Beckman Coulter, CA, USA) flow cytometer. The spike IgG serology assay was standardized to the WHO IS for anti-SARS-CoV-2 immunoglobulin (NIBSC code: 20/136), and the results are presented in binding antibody units per milliliter (BAU/mL). For the bead-based surrogate neutralization assay, patient serum samples were serially diluted and incubated with RBD (Wuhan-Hu-1 strain)-coated beads in a 96-well plate at 10,000 beads per well for 30 min, shaking at 750 rpm at RT in the dark. The plates were washed three times with PB-T after using a magnetic separator for 1 min. Biotinylated ACE-2 (0.625 µg/mL) was added to each well and incubated while shaking for 1 h. The beads were washed again and then incubated with PE-Streptavidin (0.6 µg/mL) for 30 min with shaking. The beads were washed, resuspended in PB-T, and run on the CytoFLEX S (Beckman Coulter, Brea, CA, USA) flow cytometer. If neutralization occurred, lower serum dilutions resulted in lower fluorescence signals.

Statistical Analysis

All statistical analyses were conducted in GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA). For the time-course assay, a two-way analysis of variance (ANOVA) with multiple comparisons was used to determine if there was a statistically significant difference between the three MOIs tested across the different timepoints. For the pseudovirus neutralization assay, the percent GFP was used to calculate the percent reduction. For the flow cytometry analysis method, the reduction percentage was defined as Reduction% = 100 × ((%GFP)_max − %GFP) / (%GFP)_max, where (%GFP)_max is the control sample treated with pseudovirus without a serum sample. For the imaging analysis method, the reduction percentage was defined as Reduction% = 100 × ((Area%GFP)_max − Area%GFP) / (Area%GFP)_max, where (Area%GFP)_max is the control sample treated only with pseudovirus without a serum sample. The percent reduction was determined for each dilution. The NT50 was calculated using the variable slope model based on the Hill slope of the dilutions, and the NT50 was determined for both readout methods [14]. A Wilcoxon matched-pairs signed-rank test and a Spearman rank correlation test were used as nonparametric tests to compare the NT50 values from the imaging and flow cytometry analyses to one another [17,18]. Six samples with known COVID infection but low IgG titers did not give an NT50 value for the pseudovirus assay, and twelve samples did not give an NT50 value for the bead-based surrogate assay. Only samples that gave an NT50 value were used to determine correlation; negative serum samples were also excluded. A receiver operating characteristic (ROC) curve, including all samples, was generated for the three assays to determine sensitivity and specificity [19].

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
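As a concrete illustration of the reduction and NT50 calculation described under Statistical Analysis above, the sketch below fits a variable-slope (Hill) logistic with SciPy; the dilution series and readout values are hypothetical, and GraphPad's exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def reduction_percent(pct_gfp, pct_gfp_max):
    """Reduction% = 100 * ((%GFP)_max - %GFP) / (%GFP)_max."""
    return 100.0 * (pct_gfp_max - pct_gfp) / pct_gfp_max

def hill(log_dil, bottom, top, log_nt50, slope):
    """Variable-slope logistic of reduction vs log reciprocal serum dilution."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_dil - log_nt50) * slope))

# hypothetical 9-point series: initial 1:5 dilution, then 3-fold steps
recip_dil = 5.0 * 3.0 ** np.arange(9)        # reciprocal dilutions 5, 15, 45, ...
x = np.log10(recip_dil)
pct_gfp = np.array([2, 5, 10, 22, 45, 68, 85, 93, 96], float)  # illustrative readouts
y = reduction_percent(pct_gfp, pct_gfp_max=100.0)

popt, _ = curve_fit(hill, x, y, p0=[0.0, 100.0, np.median(x), 1.0], maxfev=10000)
nt50 = 10.0 ** popt[2]   # NT50 as a reciprocal serum dilution titer
```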
Investigation Effects of Selection Mechanisms for Gravitational Search Algorithm

The gravitational search algorithm (GSA) is a population-based heuristic optimization technique that has been proposed for solving continuous optimization problems. The GSA tries to obtain an optimum or near-optimum solution for optimization problems by using the interaction of all agents, or masses, in the population. This paper proposes and analyzes fitness-based proportional (roulette-wheel), tournament, rank-based and random selection mechanisms for choosing the agents that act as masses in the GSA. The proposed methods are applied to solve 23 numerical benchmark functions, and the obtained results are compared with the basic GSA algorithm. Experimental results show that the proposed methods are better than the basic GSA in terms of solution quality.

Introduction

Optimization in computer science is the process of finding the best solution from all feasible solutions, where the best solution maximizes a profit function or minimizes a cost function. Especially when optimization problems are high-dimensional or non-linear, finding the optimal solution is hard, because the search space of the problem grows exponentially with increasing dimension. To overcome this, many optimization algorithms, especially nature-inspired ones, have been suggested in recent years, such as particle swarm optimization, inspired by bird flocking and fish schooling [1]; the ant colony algorithm, which simulates the behavior of real ants between nest and food source [2]; and bee colony algorithms, inspired by the intelligent behavior of honey bees [3,4]. In addition, some algorithms inspired by other natural processes have been developed, such as the harmony search algorithm, inspired by the musical performance process in which a musician seeks a better state of harmony [5]; the genetic algorithm, based on natural evolution [6]; and the GSA, which simulates Newton's law of gravity [7].

The GSA is a heuristic optimization technique inspired by Newton's law of gravity [7]. In this algorithm, the main rule is that each agent attracts every other agent through gravity-like forces. When all agents attract each other, the influence of the agents with large masses on the solution is diluted, so finding the optimum solution becomes difficult and the convergence of the method to the optimum or near-optimum slows down. Deciding which masses should exert an attraction effect and which should be disregarded is therefore a problem in the GSA. To address this problem, selection mechanisms in the GSA are analyzed in this study.

The rest of the paper is organized as follows: Section 2 presents a literature review for the GSA. The basic GSA and the selection mechanisms are explained in Section 3. The experiments are presented in Section 4. Section 5 discusses the experimental results, and finally, conclusions and future work are given in Section 6.

Literature Review on GSA

The GSA is a heuristic optimization technique inspired by Newton's law of gravity. In the GSA, each agent is treated as an object, and its success in the optimization is represented by its mass [7]. The new position of each agent is calculated using the masses of the agents and the distances between them. Force scales directly with mass and inversely with distance, so large, nearby agents have the greatest effect on the solution.
When the new position of each agent is calculated, random numbers are used in the basic GSA to avoid overly fast convergence, since fast convergence tends to trap the method in a local minimum. To address fast convergence and local minima, Han and Chang [8] suggested using chaotic variables instead of random numbers in a modified GSA.

Unconstrained acceleration causes excessive diversification in the population. Khajehzadeh et al. [9] proposed velocity clamping for the agents' velocities to keep diversification in check; the velocity of each agent is constrained between minimum and maximum values.

The GSA starts its search from random solutions in the search space. An opposition-based learning method was proposed by Shaw et al. [10] for the initialization, and also for the subsequent operation, of the GSA. In this way, the convergence rate of the GSA was improved and its robustness increased.

In order to improve the global search ability of the GSA, a "disruption" operator inspired by astrophysics has been added to the basic GSA [11]. The new method was used to optimize 23 numeric functions, and the results were compared with the basic GSA, PSO and GA; the experiments showed the proposed method to be superior to all three.

Li and Zhou [12] developed a new version of the GSA by combining it with the search strategy of PSO, and the improved GSA (IGSA) was applied to parameter identification of hydraulic turbine governing systems. The experimental results show that the IGSA is competitive with PSO, GA and the basic GSA in terms of solution quality.

Niknam et al. [13] suggested a self-adaptive GSA to improve the convergence characteristics of the basic GSA. Two techniques were developed to improve solutions. Because the GSA has no memory, a new solution moves independently of the previous one; the first technique therefore uses the best solution found so far when constructing a new solution. The second technique was developed to escape local minima: a new solution is produced from three different randomly selected agents and is accepted as the new solution with a given probability.

Gravitational Search Algorithm One of the newest heuristic optimizers is the gravitational search algorithm (GSA), which is based on the law of gravity, the law of motion and the interaction of masses [7]. In the GSA, a potential solution corresponds to the position of a mass, and the mass value corresponds to the fitness of the solution produced for the optimization problem. Using the law of gravity and the law of motion, Rashedi et al. [7] defined the interaction between the masses. The GSA is an iterative algorithm, explained step by step as follows:

Step 1. Initialization The masses are randomly placed in the solution space using Equation (1):

$X_{i,j} = X_j^{min} + rand(0,1) \cdot (X_j^{max} - X_j^{min})$  (1)

where $X_{i,j}$ is the position of the ith mass on the jth dimension, $X_j^{max}$ and $X_j^{min}$ are the upper and lower bounds for the jth dimension, respectively, N is the number of masses and D is the dimensionality of the optimization problem.
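As a concrete illustration of Equation (1), a minimal initialization sketch in Python follows; the function name, seed and bounds are illustrative only.

```python
# Minimal sketch of Eq. (1): uniform random initialization of N mass
# positions within per-dimension bounds.
import numpy as np

def initialize_masses(N, D, x_min, x_max, seed=0):
    """Eq. (1): X[i, j] = x_min[j] + r * (x_max[j] - x_min[j]), r ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.random((N, D)) * (x_max - x_min)

X = initialize_masses(N=50, D=30, x_min=[-100] * 30, x_max=[100] * 30)
```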
After the random solutions are produced, the sizes of the masses are calculated as follows:

$m_i(t) = \dfrac{fit_i(t) - worst(t)}{best(t) - worst(t)}$  (2)

$M_i(t) = \dfrac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$  (3)

where $m_i(t)$ is the size of the mass at the ith position at iteration time t, $fit_i(t)$ is the fitness value of that mass, $best(t)$ and $worst(t)$ are the best and worst fitness values in the mass population at iteration time t, and $M_i(t)$ is the normalized inertial mass.

Step 2. Law of Gravity In the GSA, the interaction between the masses is based on action and reaction. For each mass i, the force acting on it is calculated as follows:

$F_{i,j}^d(t) = G(t)\,\dfrac{M_i(t)\,M_j(t)}{R_{i,j}(t) + \varepsilon}\,\big(x_j^d(t) - x_i^d(t)\big)$  (4)

where $F_{i,j}^d(t)$ is the force acting on the dth dimension of the mass at the ith position, exerted by the mass at the jth position, at iteration time t; $G(t)$ is the gravitational constant at iteration time t; $M_i(t)$ and $M_j(t)$ are the active and passive gravitational masses at iteration time t, respectively; $x_j^d(t)$ is the dth dimension of the mass at the jth position at iteration time t; $R_{i,j}(t)$ is the Euclidean distance between the masses at the ith and jth positions; and $\varepsilon$ is a small constant.

After the force along each dimension is calculated using Equation (4), the total force on the mass is obtained as follows:

$F_i^d(t) = \sum_{j=1,\, j \neq i}^{N} rand_j\, F_{i,j}^d(t)$  (5)

where $F_i^d(t)$ is the total force acting on the dth dimension of the mass at the ith position at iteration time t, and $rand_j$ is a random number in the range [0, 1] that gives the GSA its stochastic character.

Step 3. Law of Motion The motion depends on the acceleration of the mass, which is calculated as follows:

$a_i^d(t) = \dfrac{F_i^d(t)}{M_i(t)}$  (6)

where $a_i^d(t)$ is the acceleration along the dth dimension of the mass at the ith position. The velocity and new position of the mass are then calculated as:

$v_i^d(t+1) = rand_i^d\, v_i^d(t) + a_i^d(t)$  (7)

$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$  (8)

where $v_i^d(t+1)$ is the velocity along the dth dimension of the mass at the ith position at iteration time t+1, $x_i^d(t+1)$ is the dth dimension of the mass at the ith position at iteration time t+1, and $rand_i^d$ is a random number in the range [0, 1] produced for the dth dimension of the mass at the ith position.

Step 4. Termination After the new positions of the masses are obtained using Equations (6)-(8), the fitness of the solutions is calculated using Equations (2) and (3). The solution with the best fitness value is stored, and if a termination condition is met, the algorithm stops and the best solution is reported. Otherwise, the algorithm continues from Step 2. The basic GSA is presented as a flowchart in Figure 1.

In the basic GSA, all agents in the population are used when calculating the force acting on a mass. In order to improve the convergence and local search capabilities of the method, four selection mechanisms (roulette-wheel, tournament, random and rank-based selection) are used to decide which agents contribute to the force acting on a mass. These mechanisms are described below.

Random Selection In order to provide sufficient diversification in the population, a fixed number of agents is selected from the population at random, and only these selected agents are used when calculating the force acting on a mass. In this mechanism every agent has the same selection probability, so the whole solution space is searched, but the local search capability on the solution space is reduced.

Roulette Wheel Selection This selection mechanism is based on the fitness of the solutions. To calculate the force acting on a mass, a fixed number of agents is selected from the population by roulette-wheel selection. The selection probability of an agent is given by

$p_i = \dfrac{fit_i}{\sum_{j=1}^{N} fit_j}$

where $p_i$ is the selection probability of the ith agent, $fit_i$ is the fitness value of the solution of the ith agent and N is the number of agents. By using roulette-wheel selection, the aim is to increase the convergence rate of the method, because each agent is affected mostly by the good solutions obtained in previous iterations.
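The update equations above can be condensed into a few lines of code. The sketch below implements one GSA iteration (Equations (2)-(8)) for a minimization problem, with roulette-wheel selection of the K attracting masses as in the GSAF variant; all names are illustrative and this is not the authors' implementation.

```python
# One GSA iteration (Eqs. 2-8) with roulette-wheel choice of K attractors.
import numpy as np

rng = np.random.default_rng(1)

def gsa_step(X, V, fitness, G, K, eps=1e-12):
    """X, V: (N, D) positions/velocities; fitness: (N,); lower is better."""
    N, D = X.shape
    best, worst = fitness.min(), fitness.max()
    m = (fitness - worst) / (best - worst + eps)        # Eq. (2), minimization
    M = m / (m.sum() + eps)                             # Eq. (3)

    p = (m + eps) / (m + eps).sum()                     # roulette-wheel weights
    F = np.zeros((N, D))
    for i in range(N):
        for j in rng.choice(N, size=K, replace=False, p=p):
            if j == i:
                continue
            R = np.linalg.norm(X[i] - X[j])             # Euclidean distance
            # Eqs. (4)-(5): randomly weighted sum of pairwise forces
            F[i] += rng.random() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)

    a = F / (M[:, None] + eps)                          # Eq. (6)
    V = rng.random((N, D)) * V + a                      # Eq. (7)
    return X + V, V                                     # Eq. (8)
```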
Tournament Selection Tournament selection consists of running several tournaments between agents randomly drawn from the population. The winner of each tournament (the agent with the better fitness) is used when calculating the force acting on a mass. The tournament size matters: in larger tournaments, weak agents have a smaller chance of being selected. When the tournament size is increased, each agent is affected mostly by the best solutions and the population improves quickly, especially on unimodal functions. When the tournament size is decreased, solutions with low fitness can also be selected, so diversity in the population increases and the global search ability of the method improves.

Rank-Based Selection Rank-based selection is an alternative mechanism, originally used for choosing the chromosomes that undergo crossover in genetic algorithms [20]. In rank-based selection, the agents are sorted in ascending order of fitness. A selection probability is then assigned to each rank:

$p_{pos} = \dfrac{1}{N}\left(2 - SP + 2(SP - 1)\,\dfrac{pos - 1}{N - 1}\right)$

where $p_{pos}$ is the probability of selecting the agent at position pos, N is the number of agents and SP is the selective pressure. The least fit agent occupies position 1 and the fittest agent occupies position N.

Experimental Results In order to investigate the effect of the selection mechanisms on the performance of the GSA, 23 benchmark functions taken from [7] are used, and the results are compared with those of the basic GSA.

Benchmark Functions The 23 test functions, given in Tables 1-3 and taken from [7], fall into different groups. Functions F1-F7 have only one local minimum, which is also the global minimum; these unimodal functions are used to probe the local search ability of a method. A function with more than one local minimum is called multimodal, and the global search capability of a method is tested on such functions (F8-F13). Another source of difficulty is the dimensionality of the optimization problem [21,22]. The F14-F23 test functions are small, fixed-dimension functions, while the dimensionality of F1-F13 is taken as 30.

Control Parameters The population size is set to 50 in all experiments. The stopping criterion for the algorithms is a maximum iteration number (MIN); MIN is 1000 for functions F1-F13 and 500 for functions F14-F23. The gravitational constant G used in the GSA depends on the iteration time and is calculated as follows [7]:

$G(t) = G_0\, e^{-\alpha\, t / T}$

where $G(t)$ is the gravitational constant at time step t, $G_0$ is the initial gravitational constant, set to 100 at the initialization of the algorithm, T is the maximum iteration number and $\alpha$ is a scaling factor set to 20. The number of masses that exert force is taken as 10, 15, 20, 25, 30, 35 and 40 in GSAF (GSA with roulette-wheel selection), GSAT (GSA with tournament selection), GSAR (GSA with random selection) and GSAL (GSA with rank-based selection). Preliminary experiments showed that the most successful results were obtained when this number was set to 40, so 40 attracting masses are used in the comparisons. In the GSA with rank-based selection, linear ranking is used and the selective pressure is set to 2.
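A sketch of the remaining selection mechanisms and of the decaying gravitational constant is given below. The tournament routine and the G(t) decay follow the descriptions above; the linear-ranking probability follows the standard GA formula assumed in the reconstruction above and should be treated as an assumption rather than the paper's exact expression.

```python
# Sketch of tournament selection, linear rank-based selection probabilities,
# and the exponentially decaying gravitational constant (G0 = 100, alpha = 20).
import numpy as np

rng = np.random.default_rng(2)

def tournament_pick(fitness, size):
    """Index of the best (lowest-fitness) agent among `size` random entrants."""
    entrants = rng.choice(len(fitness), size=size, replace=False)
    return entrants[np.argmin(fitness[entrants])]

def linear_rank_probs(N, SP=2.0):
    """p[pos] for pos = 1 (least fit) .. N (fittest); SP is the selective pressure."""
    pos = np.arange(1, N + 1)
    return (2 - SP + 2 * (SP - 1) * (pos - 1) / (N - 1)) / N

def gravitational_constant(t, T, G0=100.0, alpha=20.0):
    """G(t) = G0 * exp(-alpha * t / T), with T the maximum iteration number."""
    return G0 * np.exp(-alpha * t / T)

probs = linear_rank_probs(50)   # sums to 1; the fittest agent is most likely
```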
Comparisons The mean results obtained by GSAF, GSAT, GSAR and GSAL are compared with the mean results of the basic GSA; they are presented in Table 4 for the unimodal test functions, Table 5 for the multimodal functions and Table 6 for the multimodal test functions with fixed dimension.

According to Table 4, the results obtained by the GSAR and GSAF methods are somewhat better than those of the basic GSA. These functions are unimodal, and the local search ability of GSAF and GSAR is better than that of the basic GSA.

Based on Table 5, because the masses in the basic GSA are affected by all the masses in the population, diversity is maintained throughout the iterations, and the results obtained by the basic GSA are slightly better than those of the proposed methods. According to Table 6, dimensionality is the important factor, and the proposed methods outperform the basic GSA, except for GSAT. In GSAT, the masses tend to be affected by the same mass, so diversity in the population is lost, causing the population to stagnate. In the other selection mechanisms this effect is balanced by the use of fitness values or randomness.

Results and Discussion In this study we used four selection mechanisms (roulette-wheel, tournament, random and rank-based selection) and obtained better results than the basic GSA. The experiments show that the selection mechanism directly affects the performance of the GSA, because how an agent obtains its new position is central to the algorithm's performance. In population-based heuristic approaches, information sharing and interaction between the agents define the behavior of the method, and selecting agents according to their fitness values yields high-quality solutions on the numerical benchmark functions. The experiments show that fitness-based selection mechanisms such as roulette-wheel and tournament selection are appropriate for the unimodal functions and the fixed-dimension multimodal functions, but on multimodal functions with very many local minima these mechanisms cause early saturation of the population, loss of diversification and entrapment in local minima. For such functions the random and rank-based selection mechanisms are more appropriate than fitness-based selection, although it should be mentioned that random selection can slow convergence to the optimum and reduce the local search ability of the method. The effect of the selection mechanism used for generating new chromosomes is well known for the genetic algorithm; in this study, the effect of selection mechanisms on the GSA has been investigated, and the results are used for comparison and discussion. In the GSA, the selection techniques do not have the same effect as in the GA, because the velocity update equation of the GSA does not work like the crossover of the GA. The results show that the proposed techniques have a positive effect on the GSA for numerical function optimization. Consequently, solution quality is improved by adding a selection mechanism to the original GSA, and the selection mechanism suited to the structure of the optimization problem should be used in order to obtain higher-quality solutions.
Conclusion and Future Works We analyzed four selection mechanisms for the GSA on 23 benchmark functions in terms of solution quality. The experimental results show that using an appropriate selection mechanism yields high-quality solutions. Because the GSA with a selection mechanism performs well on continuous optimization problems, our future work includes applying the proposed method to various other optimization problems.

Figure 1. The flow chart of the GSA.

Table 1. The unimodal benchmark functions.
Table 2. The multimodal benchmark functions.
Table 3. The multimodal benchmark functions with fixed dimensions.
Table 4. The comparison of the methods on the unimodal test functions.
Table 5. The comparison of the methods on the multimodal test functions.
Table 6. The comparison of the methods on the multimodal test functions with fixed dimension.
4,061.4
2014-03-17T00:00:00.000
[ "Computer Science" ]
Modeling the infrared cascade spectra of small PAHs: the 11.2 μm band The profile of the 11.2 μm feature in the infrared (IR) cascade emission spectra of polycyclic aromatic hydrocarbon (PAH) molecules is investigated using a vibrational anharmonic method. Several factors are found to affect the profile, including: the energy of the initially absorbed ultraviolet (UV) photon, the density of vibrational states, the anharmonic nature of the vibrational modes, the relative intensities of the vibrational modes, the rotational temperature of the molecule, and blending with nearby features. Each of these factors is explored independently; each influences either the red or the blue wing of the 11.2 μm feature. The majority impact solely the red wing, the only factor altering the blue wing being the rotational temperature.

Introduction Polycyclic aromatic hydrocarbons (PAHs) are believed to be ubiquitous in the interstellar medium (ISM) [1]. The formerly named unidentified infrared bands (UIBs), now designated the aromatic infrared bands (AIBs), are generally believed to be due to IR emission from transitions between the vibrational modes of PAHs [2,3]. The excitation process and the subsequent emission of these IR photons are not in thermal equilibrium [4]; a typical PAH molecule in the ISM will absorb only one ultraviolet (UV) photon per year. After absorption, the PAH becomes electronically excited and, within a few hundred femtoseconds, returns to the electronic ground state, the excess energy being converted into vibrational excitation through a non-adiabatic process [5]. These vibrationally excited PAHs then relax largely by cooling slowly (over hundreds of milliseconds) through the emission of individual IR photons. Since vibrational modes are coupled through anharmonic interactions, the temperature-dependent population of each vibrational mode perturbs the energy (i.e., the emission frequency) of all other modes. During the emission process the vibrational temperature changes, and so does the frequency shift of the emitted photon caused by these couplings; as the PAH continues to emit, the amount of frequency shifting decreases. The result is what is referred to as a cascade IR spectrum, whose profile is asymmetric in shape, usually with a long red wing and little-to-no blue wing [6]. This asymmetric profile is linked to the size of the PAH, i.e., the density of states, and to the initial energy of the UV photon absorbed. For a detailed review of the properties and importance of astronomical PAHs, see Reference [1].

The observed IR features due to interstellar PAHs are now routinely characterized by fitting emission models based on the theoretical IR spectra of PAHs collected in the NASA Ames PAHdb [7]. Typically, the calculations of the IR spectra are performed with density functional theory methods (B3LYP being the most common functional), using low-to-modest sized basis sets (4-31G to 6-31G*), within the double harmonic approximation [8]. The profiles and bandwidths are then generated by convolving the "stick spectra" with Gaussian and/or Lorentzian profiles (typically with a standard HWHM of 15 cm⁻¹). These harmonic models account for temperature-dependent populations only through changes in intensity during the emission process [9]. The IR emissions of PAHs containing up to 384 carbon atoms have been calculated in this manner [7].
The major drawback of these models is that they do not consider the dependence of the emitted IR photon frequency on the actual vibrational temperature during the cascade. This dependence arises from the coupling among vibrational modes and requires an anharmonic treatment [10]. Early studies of the effect of anharmonicity on the peak shifts and profile variations of emission features [11][12][13] relied on temperature-dependent absorption measurements for a few small PAHs in thermodynamic equilibrium [14,15]. However, it is unclear how these measurements relate to the intrinsic vibrational properties of the PAHs and how they can be extended to larger PAHs [16]. More recently, the effects of anharmonicity on the emission profile of highly excited PAHs have also been studied using quantum chemistry [10,17,18] and molecular dynamics [19,20], or a combination of the two [21]. Nowadays, anharmonic effects can be accounted for computationally using, for example, the vibrational self-consistent field method [22], or through second-order vibrational perturbation theory (VPT2) [23,24]. VPT2 in particular has shown great success in reproducing the experimental IR spectra of isolated, cold, gas-phase PAHs [25][26][27][28][29][30][31]. The anharmonic treatment of a vibrational spectrum is computationally expensive, which limits the size of computable PAHs to only ∼30 carbon atoms, smaller than the commonly accepted size of an interstellar PAH (50-100 carbon atoms) [32]. Nevertheless, the analysis of the anharmonic IR cascades of small PAHs still provides invaluable insight into how a typical PAH emits IR photons.

Of particular interest to astronomers is the PAH feature at 11.2 μm (892 cm⁻¹), which is attributed to out-of-plane C-H bending modes of "solo" hydrogen atoms in neutral PAHs [9,33]. This feature appears together with a satellite at 11.0 μm (909 cm⁻¹), which has been attributed both to out-of-plane C-H bending modes in cationic PAHs [9] and to SiPAH⁺ complexes [34]. The 11.2 μm feature is characterized by a steep blue edge with a long red tail, and the profile shows small variations that depend on the local environment of the observations [9,35]. The specific shape and its variations have been explained as due to anharmonicity and to the superposition of emission by populations of PAHs [12,13,36,37]. The 11.2 μm feature dominates the AIB spectra of many astronomical objects, and it has been used to classify PAHs in numerous ways, including determining their size and charge state through the 3.3/11.2 μm and 6.2/11.2 μm intensity ratios [32,38]. A more solid understanding of the spectral behavior of the 11.2 μm feature and the influence of anharmonicity would thus be highly beneficial. Building upon the work of Refs. [17,18], and based upon the detailed formalism provided in Ref. [10], a detailed theoretical cascade model for the 11.2 μm feature is examined here. The nature of this profile is explored in the context of PAH size, initial IR cascade temperatures, rotational temperature effects, and blending with neighboring features.

Theory and background The details of the theoretical anharmonic IR methods are explained elsewhere [25,39,40]. In brief, a second-order vibrational perturbation approach is used, whereby the energy of a given vibrational level is given by

$E(n) = \sum_k \omega_k\left(n_k + \tfrac{1}{2}\right) + \sum_{k \le l} \chi_{kl}\left(n_k + \tfrac{1}{2}\right)\left(n_l + \tfrac{1}{2}\right)$  (1)

where $\omega_k$ is the harmonic frequency, $n_k$ is the number of quanta in the kth vibrational mode, and $\chi_{kl}$ are the anharmonic constants (see Reference [41] for their derivation).
Transition energies are then obtained by subtracting the energy of the starting level from the energy of the ending level. The transition energy for $n_k \to n_k - 1$ in the kth vibrational mode is then given by

$\nu_k = \omega_k + 2 n_k \chi_{kk} + \sum_{l \neq k} \chi_{kl}\left(n_l + \tfrac{1}{2}\right)$  (2)

For a detailed background on the cascade spectra of PAHs, see References [10,17]. In brief, a Wang-Landau walk [42] is performed over a given energy range for each PAH, accumulating a count of states visited in order to construct an estimate of the anharmonic density of states (DOS). After construction of the DOS, a second Wang-Landau walk is performed using the DOS as the weighting in the walk; in this walk, however, an energy-dependent spectrum is accumulated at every energy visited. These energy-dependent spectra are then used in a cascade process. Starting at a given energy (ranging from 6 eV for a typical reflection nebula to 10 eV for a typical HII region), the corresponding energy-dependent spectrum is accessed and an IR photon is chosen for emission according to its Einstein A coefficient, which is proportional to the intensity (in km/mol) at a given frequency multiplied by the frequency squared. A histogram is updated with the frequency of the emitted photon. The total energy of the system is reduced by the energy of the emitted photon, the new corresponding energy-dependent spectrum is accessed, and a new IR photon is chosen for emission in the same manner. This process is repeated until the PAH has emitted all of its energy; the PAH is then excited again to the original starting energy, and the process is repeated until the desired resolution has been met. Ideally, the probability of a given starting energy would be proportional to the photon distribution of the stellar source times the UV absorption cross-section of the given PAH. In order to simplify the analysis and expose trends, a single starting energy is selected for each simulation. This slightly shortens the extent of the red wings (since higher starting energies are not sampled) and biases the simulation toward absorption at one particular energy.

A rotational temperature profile model was adapted from Reference [43]. The rotational population of interstellar PAHs is largely set by the ro-vibrational cascade. In this cascade, ΔJ = +1 transitions are favored by their slightly larger Einstein A coefficients, but this is counteracted by the slightly larger statistical weight of ΔJ = −1 transitions [44,45]. This leads to a Gaussian distribution characterized by an effective rotational excitation temperature. During the cascade process, each selected IR photon was subjected to an uncertainty in emission energy equivalent to a Gaussian profile generated by the rotational energy of the PAH, with a half-width half-maximum (HWHM) estimated from the P-Q-R rotational branch separation given in Reference [43],

$\Delta\tilde{\nu}_{HWHM} \simeq \sqrt{\dfrac{8\, B\, k_B\, T_{rot}}{hc}}$  (3)

with B the average rotational constant of the PAH in wavenumbers, $k_B$ the Boltzmann constant, h Planck's constant, c the speed of light, and $T_{rot}$ the effective temperature characterizing the average rotational population distribution set by the cumulative effect of the vibrational cascades. $T_{rot}$ is estimated from $J_{IR}$, the most probable rotational quantum number, and $\nu_{mean}$, the average vibrational frequency of the emitting vibrational modes of the PAH; a factor of 6 in that estimate reflects the summation over the rotational K ladders.
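To make the role of Eq. (2) concrete, the toy computation below evaluates the emission frequency of a "solo" CH out-of-plane mode for a hot and a nearly cold PAH. The frequencies and anharmonic constants are invented round numbers chosen only to show the sign of the effect; they are not computed values for any molecule.

```python
# Toy illustration of the Eq. (2)-type anharmonic emission: the frequency of
# an n_k -> n_k - 1 transition shifts with the quanta in all coupled modes.
import numpy as np

omega = np.array([3050.0, 892.0, 750.0])      # harmonic frequencies, cm^-1
chi = np.array([[-30.0,  -8.0,  -5.0],        # anharmonic constants chi_kl
                [ -8.0,  -4.0,  -3.0],        # (all negative, as in the text)
                [ -5.0,  -3.0,  -2.0]])
n_hot = np.array([2, 3, 5])                   # quanta per mode in a hot PAH

def emission_freq(k, n):
    """nu_k = omega_k + 2 n_k chi_kk + sum_{l != k} chi_kl (n_l + 1/2)."""
    others = [l for l in range(len(n)) if l != k]
    return (omega[k] + 2 * n[k] * chi[k, k]
            + sum(chi[k, l] * (n[l] + 0.5) for l in others))

print(emission_freq(1, n_hot))                     # ~831.5: red-shifted while hot
print(emission_freq(1, np.array([0, 1, 0])))       # ~878.5: near the cold position
```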
Radiative and collisional de-excitation of the rotational states will drive the actual rotational excitation temperature of the molecule below this value, but the effect is small for the interstellar emission regions [43]. While the exact rotational constants can be used in this model, for the analysis in sect. 4.5 the rotational constants are approximated by an inverse-square dependence on size,

$B \propto \dfrac{1}{N_c^{2}}$  (4)

where $N_c$ is the number of carbon atoms in the PAH. This allows a systematic, if approximate, approach to the rotational broadening by varying the effective "size" of the PAH ($N_c$).

Methods This work largely follows the methods and parameters outlined in previous work [10,17] for generating the IR cascade spectra. The geometry optimizations, the harmonic IR calculations, and the calculation of the quartic force field (QFF) terms were performed using the Gaussian16 software package [46]. The VPT2 treatment was performed using a locally modified version of SPECTRO [47] (modified for direct control over resonances and polyad sizes). All calculations were performed using density functional theory (DFT) with the B3LYP [48,49]/N07D [50] functional/basis set combination (N07D is a basis set based on 6-31G with a limited number of diffuse and polarization functions, shown to perform well for anharmonic calculations), with the convergence parameters recommended for anharmonic calculations [27]. Comparison of the theoretical anharmonic zero-Kelvin spectra with measured ion-dip spectra for a variety of PAHs shows excellent agreement (to within 0.3%) in peak frequency without the need for correction factors [28][29][30][31]. Anthracene (C14H10), a molecule with a strong solo out-of-plane bending mode, is used as a stand-in for the analysis of these parameters. The trends found are indicative of the behavior of the out-of-plane bending modes of all neutral PAHs with strong 11.2 μm features. However, the data set is currently too limited (in both the number and the size of PAHs) to draw firm size-dependent extrapolations at this point.

Results and discussion The profile of the 11.2 μm feature is found to be controlled mainly by three variables: the energy of the UV photon absorbed before the cascade process, the size of the emitting PAH, and the rotational temperature of the PAH (which itself has a size dependence). The variation in profiles due to size manifests in multiple ways, including changes in the DOS, changes in the anharmonic character, and changes in the relative intensities of the other vibrational modes. In addition to these intrinsic variations, the 11.2 μm profile can also be affected by external factors, such as blending with neighboring features. Because of the intertwined nature of these effects, variations in each aspect (cascade energy, rotational temperature, DOS, anharmonic character, relative intensity, and blending) are considered individually.

Cascade energy The first variable, the maximum energy absorbed, is the easiest to model and explain. Figure 1 shows the IR cascade spectrum of anthracene at various starting cascade energies, with the other variables kept constant (the 3.3 to 11.2 μm ratio set to zero (see sect. 4.4) and the rotational broadening set to an HWHM of 1 cm⁻¹ (see sect. 4.5)). As can be seen, the extent of the red wing is controlled directly by the cascade starting energy, while the blue wing is left unchanged. Additionally, no change in peak position is observed.
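Combining the inverse-square size scaling of Eq. (4) with the branch-separation estimate of Eq. (3) gives a quick feel for how rotational broadening shrinks with PAH size. In the sketch below, the prefactor B0 = 5.7 cm⁻¹ and the exact form of the HWHM expression are assumptions of this rewrite, not values taken from the paper.

```python
# Rough size dependence of the rotational HWHM: B = B0 / Nc^2 (assumed
# prefactor), then HWHM ~ sqrt(8 B kB T / (h c)) per the reconstructed Eq. (3).
import numpy as np

h = 6.626e-34      # Planck constant, J s
c = 2.998e10       # speed of light, cm/s
kB = 1.381e-23     # Boltzmann constant, J/K

def hwhm_rot(Nc, T_rot, B0=5.7):
    """HWHM in cm^-1 for a PAH of Nc carbons at rotational temperature T_rot."""
    B = B0 / Nc**2                              # average rotational constant, cm^-1
    return np.sqrt(8 * B * kB * T_rot / (h * c))

for Nc in (14, 24, 54):                         # anthracene, coronene, larger PAH
    print(Nc, round(hwhm_rot(Nc, T_rot=100.0), 2))
# Larger Nc -> smaller B -> narrower blue wing, consistent with the text.
```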
The growth of the red wing, and the absence of change in the blue wing and peak position, can be explained by the anharmonic nature of the 11.2 μm feature. All significant anharmonic couplings between the solo CH modes and the other modes are found mainly to lower the energy of the vibrational mode; that is, the anharmonic constants used in Eq. 2 are largely negative [10]. This leads to a broad emission band at lower frequencies when the PAH has a high internal energy, and a sharply peaked emission band at low internal energies, which in turn does not overshoot the zero-Kelvin peak position (879.7 cm⁻¹).

Density of states Figure 2 shows the cascade spectrum of anthracene (C14H10) at 6 eV together with the equivalent cascade spectrum of tetracene (C18H12) at 8 eV. The two profiles are almost identical. The DOS of a given PAH is related directly to its number of vibrational modes; as the size of a PAH increases, so does its DOS. The similarity in molecular structure and vibrational frequencies among PAHs results in densities of states that strongly resemble one another for all PAHs. As a result, to a good approximation, the microcanonical vibrational excitation temperature scales inversely with the number of vibrational modes raised to some power (typically 0.4) [43]. This translates into narrower cascade features for a larger PAH than for a smaller PAH at the same excitation energy. In other words, all else being equal, a larger PAH produces a cascade profile similar to that of a smaller PAH excited at a lower energy, as shown in Fig. 2.

Anharmonic character In the VPT2 treatment, each vibrational mode is coupled to all other vibrational modes through the anharmonic constants. The degree by which the frequency of a particular vibrational mode is perturbed is given by Eq. 2. Figure 3 plots the anharmonic constants for the 11.2 μm feature of anthracene (C14H10), tetracene (C18H12), and pentacene (C22H14) as a function of normalized vibrational mode number (n/max(n)), such that equivalent vibrational mode types align roughly; lower numbers represent vibrational modes that are higher in energy. The values seen around a normalized mode number of 0.15 represent the coupling of the 11.2 μm vibrational mode to the CH-stretching modes (∼3.3 μm, 3030 cm⁻¹), and the values around 0.65 are the couplings to the out-of-plane CH-bending modes (∼11.2 μm, 892 cm⁻¹), including the self-interaction. As can be seen, the absolute value of the large negative individual anharmonic constants decreases with the size of the PAH; however, since the number of vibrational modes increases, the total sum of the anharmonic constants remains nearly constant. These two effects largely cancel when considering the zero-Kelvin anharmonic correction, leading to very similar shifts with respect to the harmonic position.

Fig. 1 The IR cascade spectra of the isolated 11.2 μm feature of anthracene beginning from varying maxima of energy absorbed. Purple (left-most) is a maximum of 8 eV absorbed, black (right-most) is a maximum of 1 eV absorbed, with the remaining spectra in steps of 1 eV between the two extremes.

Fig. 2 The IR cascade spectrum of the isolated 11.2 μm feature of anthracene starting from a maximum of 6 eV absorbed (green) compared to the same feature of tetracene starting from a maximum of 8 eV absorbed (black); the band position of tetracene has been shifted by 21.5 cm⁻¹ to align with anthracene.
However, the same amount of vibrational energy results in less and less broadening as the size of the PAH increases; this is because the spectator modes (the excited vibrational modes coupled to the solo out-of-plane mode but not emitting) couple through smaller anharmonic constants and thus cause a smaller frequency shift. This effect combines with the DOS effect mentioned in sect. 4.2, resulting in narrower features for larger PAHs.

Relative intensities The resulting profile of a cascade emission band is the convolution of the emission profiles at different internal energies, weighted by the fraction of the energy that is instantaneously emitted in that particular band at that internal energy. Hence, the profile depends on the relative strength of all modes. We illustrate this here by varying the integrated IR strength of the C-H stretching modes relative to the CH out-of-plane bending modes. Figure 4 shows the 11.2 μm feature of anthracene with the ratio of the 3.3 to 11.2 μm band intensities artificially adjusted to different values (0.5, 1.0, 1.5, 2.0). As can be seen, the red wing fades away at higher 3.3 to 11.2 μm ratios. This occurs because at higher internal energies (i.e., at the start of the cascade) more IR energy is released through the 3.3 μm vibrational modes, but as the PAH cools the 11.2 μm modes eventually dominate the emission. This has the effect of turning a convex profile into a concave one. The 3.3 to 11.2 μm ratio has been shown previously to correlate directly with PAH size [32]. This means smaller PAHs will have broader profiles, similar to the black profile of Fig. 4, while larger PAHs will tend toward the slimmer brown profile.

Rotational temperature Figure 5 shows an 8 eV IR cascade spectrum of anthracene adopting different rotational broadenings through varying $N_c$ in Eq. 4, with all other parameters fixed. As can be seen, the profile of the steep blue wall of the cascade feature is controlled by the rotational temperature. In fact, rotational temperature broadening is the only factor found to control the shape of the blue wing. Additionally, the sharp peak at the zero-Kelvin position blends out at higher rotational temperatures. This portion of the profile is emitted at low internal excitation temperatures and is sharply peaked owing to the lack of coupling to spectator vibrational modes at low energy. The blue rise thus mainly reflects the adopted rotational broadening.

The peak position of the 11.2 μm feature is also found to be controlled mainly by the rotational temperature. The starting cascade energy has no effect on the position of the 11.2 μm feature (see Fig. 1), although neighboring features may also play a role (vide infra). Figure 6 illustrates the change in peak position of anthracene with rotational temperature.

Fig. 3 The anharmonic constants of the 11.2 μm solo out-of-plane vibrational modes of anthracene (black), tetracene (blue), and pentacene (brown). The vibrational mode numbers have been normalized such that similar couplings to vibrational modes align between the three PAHs when plotted.

Fig. 4 The IR cascade spectrum of the 11.2 μm feature of anthracene at 8 eV with the intensity of the 3.3 μm emission region absent (black), scaled by 0.5 (orange), unscaled (blue), scaled by 1.5 (green), and scaled by 2.0 (brown).

Fig. 5 The IR cascade spectrum of the isolated 11.2 μm feature of anthracene at 8 eV with rotational temperatures ranging from 0 K (black) to 400 K (brown).
The 11.2 μm peak position shifts toward the red as the rotational temperature increases, eventually reaching a plateau. At low internal energies, the dominant source of broadening of the feature is the rotational temperature, whereas at higher internal energies the dominant source of broadening is the vibrational temperature. Increasing the rotational temperature erodes the steep blue wing of the feature until the profile appears symmetric, at which point the shifting of the peak position stops. Rotational broadening is related to the size of the PAH, since the rotational constant B is inversely proportional to the square of the PAH size (Eq. 4). Larger PAHs will therefore have a steeper blue wing than smaller PAHs at similar rotational temperatures.

Blending Neighboring features are also found to affect the profile of the 11.2 μm feature through blending, which subtly changes the peak position, the blue wing, and the red wing. Figure 7 shows how the 11.2 μm cascade feature of the full IR cascade of anthracene changes as the intensity of a "mock" cascade feature at 11.0 μm is increased. More interestingly, the red wing of the neighboring feature increases the apparent intensity of the 11.2 μm feature, giving the illusion that the 11.0 to 11.2 μm intensity ratio is not increasing as much as it actually is. The brown spectrum of Fig. 7, for example, has an 11.0 to 11.2 μm ratio of 1.5, but a naïve fit to the spectrum might conclude incorrectly that the 11.2 μm feature has an intensity equal to that of the 11.0 μm feature.

Photo fragmentation In this analysis, the effect of photo-fragmentation on the energy cascade has been neglected. In principle, at high internal excitation, fragmentation becomes a competitive energy-loss channel; molecules with such high internal energies then do not contribute to the emission process, and the growth of the width is truncated at these energies [17]. In practice this is of limited interest, because in interstellar regions where H-loss becomes important, PAHs are rapidly stripped of all their hydrogens [51] and then destroyed.

Conclusions While qualitative in nature, this work provides insight into the profile of the 11.2 μm feature. In order to draw firmer conclusions and extrapolate to larger PAHs, a larger data set would be required. Unfortunately, anharmonic calculations of PAHs are not straightforward and are hampered by unresolved issues [27,28,31]. Computational power remains a roadblock to calculating the anharmonic spectra of larger PAH species, but it is not the only one. Anharmonic computations on larger PAH molecules have been found to fail, due largely to numerical instabilities, including anharmonic corrections of several hundreds of wavenumbers in out-of-plane bending modes (where corrections are typically in the tens of wavenumbers). The cause is not known; it could be due to implementation issues in the DFT methods, or to an unresolved issue with out-of-plane bending vibrations [52]. Additionally, experimental spectra of large PAHs under low-temperature gas-phase conditions, needed to verify the anharmonic calculations themselves, are lacking. Nevertheless, much can be extracted from this subset of small PAHs. The 11.2 μm feature profile is found to be controlled primarily by the size of the PAH, although individual aspects are affected by PAH size in different, subtle ways. The initial cascade energy controls the extent of the red wing when considering a single species.
However, the extent of the red wing at a given cascade energy depends on size in two ways: first, the DOS of a larger PAH spreads the energy over many more vibrational modes, resulting in a cooler cascade than for a smaller PAH; second, the individual anharmonic coupling constants become smaller with increasing PAH size, resulting in smaller band shifts at all energies. Both effects shorten the red wing as the size of the PAH increases. The shape of the red wing is controlled by size through the relative intensities of the 3.3 and 11.2 μm features, which in turn are set by the size of the PAH. A large 3.3 to 11.2 μm ratio is typical of small PAHs, while a small ratio is typical of large PAHs. A small 3.3 to 11.2 μm ratio results in a broad convex profile, while a large 3.3 to 11.2 μm ratio results in a concave profile. The profile of the blue wing is controlled by the rotational temperature of the PAH and, to a lesser extent, by blending with neighboring features, though only at frequencies higher than the 11.2 μm feature itself. Larger PAHs have smaller rotational broadening, owing to their smaller rotational constants, resulting in steeper blue wings for larger PAHs [12,13].
6,014.4
2021-08-13T00:00:00.000
[ "Chemistry", "Physics" ]
Regulation of Human Immunodeficiency Virus Type 1 Gene Transcription by Nuclear Receptors in Human Brain Cells* Infection of cells of the central nervous system by the human immunodeficiency virus type 1 (HIV-1) leads to HIV-1-associated neuropathology. Recent studies have demonstrated the importance of long terminal repeat (LTR) binding sites in determining the pathogenicity of HIV. Here we have investigated the presence and the functional role of transcription factors that have the potential to interact, directly or indirectly, with the nuclear receptor-responsive element in the LTR of HIV-1 in different human cell lines of the brain. Cotransfection experiments showed that in oligodendroglioma TC-620 cells, the retinoic acid receptor and the retinoid X receptor activate LTR-driven transcription in the absence of ligand. Addition of all-trans- or 9-cis-retinoic acid reverses this effect. In contrast, in astrocytoma, neuronal, and microglial cells, no significant effect of the retinoic acid pathway was detected. This retinoid response is mediated by distinct molecular interactions in the lymphotropic LAI and the neurotropic JR-CSF HIV-1 strains. Moreover, retinoid receptors were found to antagonize the chicken ovalbumin upstream promoter transcription factor (COUP-TF)-mediated as well as the c-JUN-mediated LTR transactivation. Our findings demonstrate the importance of the retinoic acid signaling pathway and of cross-coupling interactions in the repression of HIV-1 LTR gene expression.

The molecular mechanisms controlling HIV-1 pathogenesis in the central nervous system are not understood. Tissue macrophages and microglial cells are major target cells and reservoirs for the virus during HIV disease in the brain. In addition, infection of neuronal and glial cells also appears to contribute to HIV-1-related pathology (1)(2)(3)(4). HIV-1 gene expression is regulated by an interplay of viral and cellular host proteins that interact with a number of binding sites present in the long terminal repeat (LTR) (5). Recent reports have highlighted the importance of the U3 region of the LTR in determining the pathogenicity of HIV-1 (6). Studies of the various elements of the modulatory region of the LTR and their interaction with host proteins present in brain cells therefore appear crucial. Transcription factors belonging to the steroid-thyroid-retinoid nuclear receptor superfamily have been shown to interact with the nuclear receptor-responsive element (NRRE) located within the −356/−320 region of the LTR (7-10). The NRRE sequence represents a site for complex regulatory interactions among a variety of hormone receptors and orphan receptors as well as the AP-1 transcription factor. We have recently reported that AP-1 is unable to bind directly to the NRRE sequence and that it interacts with the −247/−222 sequence present in neurotropic HIV-1 strains (10); however, indirect binding of a FOS protein to the NRRE site has been reported (11). Our recent findings indicate that COUP-TF (chicken ovalbumin upstream promoter transcription factor), an orphan member of the hormone receptor superfamily (12,13), is one of the major species that binds to the NRRE and functions as a potent activator of LTR-driven transcription in brain cells. A number of studies have described that retinoic acid receptors (RARs) and retinoid X receptors (RXRs), in the presence of their ligand, modulate LTR-driven transcription in non-CNS-derived cells.
9-cis-Retinoic acid (RA) activates both RARs and RXRs, whereas all-trans-RA activates only RARs (14). Ligand-dependent stimulation of LTR-directed transcription by RAR-α and RXR-α was observed in choriocarcinoma JEG-3 cells (9) and in CV1 cells, via the NRRE site (8). In the presence of phorbol myristate acetate, transcriptional activity was enhanced in U937 monocytes treated with RA, through a distinct RA- and phorbol myristate acetate-responsive element located in the −83/+80 LTR region (15). A negative effect of RA on HIV LTR activity was documented in HeLa and U937 cells, via the NF-κB element (16). Moreover, various effects of RA on the replication of HIV in different cell types have been reported (17)(18)(19). Taken together, these data reveal the importance of the type of target cell in the RAR- and RXR-mediated transcriptional response and point out that distinct molecular mechanisms control the retinoid signaling pathway.

In this report we have investigated the presence of members of the RAR and RXR families in various cell types of the brain. We have further examined the functional effect of RAR-α, RXR-α, and RA treatment on HIV-1 LTR-driven transcription in human neuronal, glial, and microglial cells. Our results show that the nuclear receptors regulate HIV-1 gene expression in a unique manner; unliganded RAR and RXR lead to activation, whereas the addition of RA antagonizes the receptor-mediated activation in oligodendroglioma cells. Moreover, we elucidate the interactive physiological networks of the retinoid receptors RAR and RXR, of the orphan receptor COUP-TF, and of the transcription factor AP-1 in the regulation of HIV-1 gene expression in various cells of the brain. We show that the retinoid receptors function as repressors of COUP-TF-mediated transactivation and that RXR acts as a repressor of c-JUN-mediated transactivation. Our findings reveal the importance of retinoic acid as well as of cross-coupling interactions in the inhibition of LTR-driven HIV-1 gene transcription.

* This work was supported by the Institut National de la Recherche Médicale (U338), the Agence Nationale des Recherches sur le SIDA (ANRS), the Association "Ensemble contre le SIDA", the Fondation pour la Recherche Médicale (FRM), and by a grant from the ANRS (to B. E. S.).

MATERIALS AND METHODS Plasmids-To generate LTR(JR-CSF)-CAT, −283/+20LTR-CAT, and −159/+20LTR-CAT, the plasmid pSAFYre containing the JR-CSF LTR (gift of Dr. J. Clements; Ref. 20) was digested with EcoRV + BglII, EarI + BglII, and AvaI + BglII, respectively. The LTR inserts were isolated, blunt-ended, and subcloned into the SmaI site of pUC19-CAT0. To generate LTR(LAI)-CAT, −283/+80LTR-CAT, and −159/+80LTR-CAT, the plasmid pSV1b-CAT containing the LAI LTR (gift of Dr. N. Israel) was digested with BglII + HindIII, EarI + HindIII, and AvaI + HindIII, respectively. The LTR inserts were blunt-ended and subcloned into the SmaI site of pUC19-CAT0. The −68/+29LTR-CAT vector was constructed by subcloning the BstNI-BstNI blunt-ended LTR insert into the SmaI site of pUC19-CAT0. To construct the LTRmut3-CAT vectors, site-directed mutagenesis was performed with the mutant oligonucleotide 3Lmut: 5′-CCAGGGGTAAGATATCCACAAAGCTTTG-3′.
To construct the 3L/tk-CAT and 3Lmut/tk-CAT vectors, one copy of the 3L or 3Lmut oligonucleotide, respectively, was subcloned into the blunt-ended SalI site of pBLCAT2, which contains the herpes simplex virus thymidine kinase promoter in front of the CAT gene (gift of C. Kedinger, Unité INSERM 184, Strasbourg, France). The human RAR-α and RXR-α cDNAs in the pSG1 vector and the reporter vector containing the RARE DR5G sequence inserted in pBLCAT8+ were a gift of P. Chambon (21). The hc-JUN vector was a gift from C. Quirin-Stricker.

Transfections and CAT Assays-Cells (10⁶) cultured in medium containing 10% charcoal-stripped fetal calf serum were transfected by the calcium phosphate precipitation method with 1 pmol of reporter plasmid DNA and, when indicated, with 0.5 pmol of expression vector, as described previously (24). Twenty-four hours after transfection, cells were treated for 24 h with 1 μM all-trans-RA or 9-cis-RA in ethanol, or with 0.1% ethanol (final concentration) as a control. Each transfection was done in duplicate and repeated a minimum of three separate times with at least two different plasmid preparations. Cell extracts were prepared 48 h after transfection. CAT assays were performed as described previously (24). Since the efficiency of transfection differs between the cell lines, different amounts of cell extract were used: 5, 5, 10, and 20 μg of protein for microglial, astrocytoma, oligodendroglioma, and neuroblastoma cells, respectively.

RESULTS Pattern of RAR and RXR Expression in Different Human Brain Cells-To analyze the functional effect of retinoid control of HIV in different human brain cells, we first examined the presence of the endogenous retinoic acid receptors RAR-α, -β, and -γ and RXR-α, -β, and -γ. Western blot analysis was performed with nuclear protein extracts from human oligodendroglioma TC-620, astrocytoma U373-MG, neuroblastoma SK-N-MC, and microglial cells using a set of monoclonal antibodies directed specifically against human RAR-α, RAR-β, RAR-γ, and RXR-α, -β, -γ (26,27). As shown in Fig. 1, the expected RAR-α species with an apparent molecular mass of 51 kilodaltons (kDa) was detected with antibodies Ab9αF; RAR-α was expressed at a high level in SK-N-MC and TC-620 cells. With microglial proteins, a 40-kDa band specifically detected with antibodies Ab9αF could correspond to a truncated RAR-α resulting either from alternative splicing or from artifactual proteolysis. RAR-β, of expected molecular mass 51 kDa, was recognized with antibodies Ab8β(F)2 only in SK-N-MC and TC-620 proteins. Antibodies Ab4γF allowed the detection, only in glial cells, of RAR-γ of expected molecular mass 51 kDa. Antibodies 1RX6G12, specific for RXR-α, -β, and -γ, detected RXR species in glial and neuronal cells but not in microglial cells. Similar results were obtained when the blots were probed with polyclonal instead of monoclonal antibodies. These results point out the cell type-specific expression of RARs and RXRs in brain cells. While in TC-620 cells all RAR and RXR species were expressed at a high level, in U373-MG cells the levels of RAR-α, -γ and RXRs were considerably lower. In SK-N-MC cells, all species were expressed except RAR-γ. Surprisingly, in microglial cells, no typical RAR or RXR protein could be detected.
Functional Effect of Retinoic Acid Receptors on Reporter Gene Expression in Brain Cells-We first tested the contributions of RAR and RXR to ligand-dependent transcription from a promoter containing a retinoic acid response element (RARE) linked to the herpes simplex thymidine kinase promoter and the CAT gene. The response element consists of two direct-repeat half-sites of the GGGTCA sequence separated by a 5-base pair spacer; it efficiently binds RAR or RXR homodimers and RAR/RXR heterodimers (21). Each brain cell line was transfected with the RARE-tk-CAT vector, and CAT activities were determined. In the absence of transfected receptors, all-trans-RA elicited a 4-5-fold transcriptional response in glial cells, confirming the presence of functional endogenous receptors (Fig. 2). 9-cis-RA elicited a 2.4-fold stimulation in TC-620 cells, indicating the presence of functional RXR receptors. Overexpression of RAR-α or RXR-α increased the transcriptional response to all-trans- or 9-cis-RA, respectively, in all cell lines, with intensities varying with the cell line (Fig. 2). These results are consistent with studies in many other cell types indicating that RAR and RXR behave as ligand-inducible transcriptional activators (8,28,29).

FIG. 1. Western blot analysis of nuclear proteins from human glial, neuronal, and microglial cell lines. Nuclear protein extracts were prepared from neuronal SK-N-MC (lanes S), oligodendroglioma TC-620 (lanes T), astrocytoma U373-MG (lanes U), and microglial (lanes M) cells. Proteins (10 μg) were subjected to SDS-10% polyacrylamide gel electrophoresis, transferred to a nitrocellulose membrane, and probed with anti-RAR-α, RAR-β, RAR-γ, and RXR-α, -β, -γ monoclonal antibodies (gift of P. Chambon). The signal was visualized using the enhanced chemiluminescence system (Amersham Corp.). The upper band (NS) in each panel corresponds to a nonspecific immunoreaction. The arrowhead indicates the endogenous retinoic acid receptors. Lower molecular weight reactive components were found in microglial cells. The positions of the prestained molecular mass markers (Bio-Rad) are indicated in kilodaltons (kDa).

Ligand-independent Activation of LTR-directed Gene Transcription by RAR-α or RXR-α in Brain Cells-To analyze the regulation of HIV-1 gene expression by RAR, RXR, and retinoic acid in different brain cells, we performed cotransfection experiments with an expression vector for RAR-α or RXR-α and an HIV-1 LTR-CAT reporter construct. We used two distinct LTR-CAT constructs containing the LTR of either the lymphotropic LAI strain or the neurotropic JR-CSF strain (30). After 24 h, cells were either treated with 10⁻⁶ M all-trans- or 9-cis-RA or left untreated. After a further 24 h, cells were harvested and CAT activities were determined. In TC-620 cells incubated in the absence of RA, overexpression of RXR-α and RAR-α resulted in 3- and 4-fold transcriptional activation, respectively (Fig. 3). Surprisingly, this stimulation was inhibited by the presence of RA. Similar effects were observed with both the LAI and JR-CSF LTRs. This RA-mediated inhibition contrasts with the RA-induced activation observed with the RARE-tk-CAT vector in the same cells, suggesting that the configuration of the retinoid response element in the HIV-1 LTR could be responsible for this effect. In contrast, in U373-MG, SK-N-MC, and microglial cells, retinoic acid receptors and RA were unable to significantly modify HIV-1 CAT activity (Fig. 3).
The effect of RA concentration on the RA-mediated inhibition of LTR activity was tested in TC-620 cells. Maximal inhibition by RA was achieved at 10⁻⁸ M, and half-maximal inhibition occurred at 10⁻¹⁰ M (Fig. 4).

Retinoic Acid Receptors Act on the NRRE and on Downstream-located Sites of the HIV-1 LTR in TC-620 Cells-Previous studies performed in non-CNS-derived cells have shown that RAR and RXR stimulate transcription of the HIV-1 LTR via the NRRE site (8,9). Other investigators (16) have shown that RA inhibits transactivation of the HIV-1 LTR via the NF-κB element in HeLa cells. To identify the region of the LTR responsible for the retinoid response in TC-620 cells, we performed transfection experiments using LTR-CAT vectors containing progressive 5′ deletions of the LAI and JR-CSF LTR regions (Fig. 5). Deletion of the NRRE site up to position −283 (constructs 2) reduced the RAR- and RXR-mediated transcriptional activation of the LAI LTR by 40% and did not modify the activation of the JR-CSF LTR. Removal of the AP-1 site present in the JR-CSF LTR (construct 3) resulted in a 60% decrease in CAT activity. Further deletion of the NF-κB sites up to position −68 (construct 4) abolished the transcriptional stimulation (Fig. 5). These results point out the difference in the molecular interactions that control LAI LTR and JR-CSF LTR gene transcription. They indicate that the NRRE site of the JR-CSF LTR is not involved in the RAR-mediated effect; cross-coupling interactions, especially with the AP-1 element, are likely to contribute most of the retinoid effect. In contrast to the JR-CSF LTR, the NRRE site of the LAI LTR is involved to some extent in the RAR-mediated activation, together with downstream-located elements such as NF-κB. However, the precise nature of these complex molecular interactions remains to be determined.

Regulation of HIV-1 (JR-CSF) LTR-driven Expression by Transcription Factors Acting on the NRRE Site in Brain Cells-To examine the effect of heterodimerization of nuclear receptors on LTR-driven gene transcription, we cotransfected TC-620 cells with RAR-α and RXR-α vectors in equal amounts. Surprisingly, the results showed that, in contrast with RAR or RXR homodimers, RAR/RXR heterodimers have no ability to enhance transcription (Fig. 6). Previous reports have described that RAR and RXR are able to form heterodimers with the orphan nuclear receptor COUP-TF (31,32). We have recently shown that COUP-TF, present in different brain cells, interacts with the NRRE sequence (10) and functions as a potent activator of LTR-directed HIV-1 gene expression in TC-620 cells (Fig. 6). COUP-TF has been described as a repressor of the retinoid response of the HIV-1 LTR in CV-1 cells (8). We therefore investigated whether COUP-TF could antagonize RAR- or RXR-mediated induction from the neurotropic JR-CSF LTR. Cotransfection experiments performed in TC-620 cells showed that COUP-TF did not repress the retinoid action; in contrast, the retinoid receptors functioned as repressors of COUP-TF-induced activation and reduced transcription to a level similar to that obtained with the receptors alone (Fig. 6). Nuclear receptors are known to modulate gene expression by acting as transrepressors of transcription factor AP-1 (JUN-JUN or JUN-FOS) activity (for review, see Ref. 33). We have reported previously that c-JUN and COUP-TF were able to stimulate the CAT activity of the LTR(JR-CSF)-CAT vector 7- and 10-fold, respectively (Ref. 10; Fig. 6).
We therefore investigated by cotransfection experiments whether the combined action of different nuclear receptors and c-JUN was able to modulate HIV-1 gene transcription. When the COUP-TF and c-JUN expression vectors were cotransfected together, c-JUN did not significantly alter COUP-TF-mediated stimulation. Interestingly, overexpression of both RXR and c-JUN proteins resulted in a drastic decrease of the c-JUN-induced stimulation. In contrast, RAR was unable to antagonize the positive c-JUN response (Fig. 6). These results reveal the importance of RXR in repressing the c-JUN-mediated transactivation of the HIV-1 LTR. To examine whether cross-coupling interactions between nuclear receptors and AP-1 occurred on the NRRE site, we first performed transfection experiments using a reporter vector containing the NRRE sequence linked to the thymidine kinase promoter and the CAT gene. Interestingly, this 3L-tk-CAT reporter vector led to a 4-fold increase in CAT activity in the presence of the COUP-TF expression vector and to a 5-fold increase in the presence of a c-JUN expression vector (Fig. 7, lanes 12 and 13). Overexpression of both c-JUN and COUP-TF led to a 7-fold increase in CAT activity (lane 16). As a control, the activity of the control 3Lmut-tk-CAT vector, containing a mutant NRRE site unable to bind COUP-TF, remained unchanged (lanes 17-24). Since we recently demonstrated that AP-1 is unable to directly bind the NRRE sequence (10), this result suggests that AP-1 is able to interact, directly or indirectly, with COUP-TF bound to the NRRE site. Surprisingly, in TC-620 cells, the NRRE site was unable to confer RAR or RXR responsiveness on the tk promoter (lanes 10 and 11), in contrast to results reported in F9 cells (7). However, cotransfection of RAR or RXR and the c-JUN expression vector strongly decreased the c-JUN-induced stimulation (lanes 14 and 15). Since c-JUN interacts indirectly with the NRRE site, this result reveals that c-JUN is able to modulate transcription via the NRRE site by interacting, directly or indirectly, with COUP-TF, RAR, or RXR. We recently described that overexpression of c-JUN increased the CAT activity of the 5N-tk-CAT vector, containing the AP-1 binding site spanning the −247/−222 region of the JR-CSF LTR (10). It was therefore interesting to test whether, vice versa, RXR was able to modulate the positive effect of c-JUN acting on its DNA binding site. Overexpression of RXR and c-JUN resulted in a 50% decrease of the c-JUN-induced stimulation (results not shown), indicating that cross-coupling interactions between RXR and c-JUN modulate transcription from the 5N-tk promoter.

FIG. 2. Effect of all-trans- and 9-cis-RA on transcription from a promoter containing a DR5 response element. Transient expression experiments were performed in human oligodendroglioma TC-620, astrocytoma U373-MG, and neuroblastoma SK-N-MC cells. A reporter vector containing the DR5G retinoic acid response element linked to the herpes simplex thymidine kinase (tk) promoter and the CAT gene (21) was cotransfected with expression vectors for RAR-α or RXR-α as indicated. 24 h after transfection, cells were exposed to 1 μM of either all-trans-RA or 9-cis-RA as indicated. Cell extracts were prepared 48 h after transfection, and CAT assays were performed. Results are the average of at least three independent experiments done in duplicate.
Taken together, these results suggest that, within the JR-CSF LTR, RXR and c-JUN modulate transcription by acting either on the NRRE or on the AP-1 site.

DISCUSSION

In this report we have investigated the regulation of HIV-1 gene transcription in human brain cells by transcription factors that interact, directly or indirectly, with the nuclear receptor response element (NRRE), spanning the −356/−320 region of the lymphotropic LAI and the neurotropic JR-CSF LTR.

Regulation of HIV-1 LTR-directed Gene Transcription by RAR, RXR, and RA-We first focused our studies on nuclear receptors belonging to the RAR and RXR families. Our results show that different members of the RAR and RXR families are present in human glial and neuronal cells. Surprisingly, no typical member of the retinoid family could be detected in human microglial cells, which represent the primary target of HIV-1 infection in brain. Since these results were obtained in a microglial cell line, it would be interesting to examine the existence of these receptors using primary cultured microglial cells. Nuclear receptors are known to mediate both positive and negative effects on promoter activity in response to ligand binding (34). A number of reports have described the use of retinoids to alter HIV-1 replication in certain cell types. Retinoic acid exerts various effects on the replication of HIV, which depend on the type of target cell and the time of treatment (17-19). Our transient expression data indicate that in different brain cells RAR-α and RXR-α are able, in response to ligand binding, to activate transcription of a thymidine kinase promoter driven by a retinoic acid response element. However, these receptors are unable, in the presence or absence of their ligand, to affect HIV-1 LTR-driven gene expression in astrocytoma, neuronal, and microglial cells. In contrast, in oligodendroglioma TC-620 cells, RAR-α and RXR-α function as activators of HIV-1 gene transcription in the absence of ligand binding. Interestingly, in the presence of all-trans- or 9-cis-retinoic acid, the stimulation mediated by unliganded RAR or RXR is almost completely reversed. A similar negative effect of RA on HIV-1 LTR activity was recently described in HeLa cells and the U937 monocyte cell line (16). In contrast, a RA-dependent stimulation of HIV-1 gene transcription by RAR and RXR was reported in choriocarcinoma JEG-3 cells and in CV1 cells (8, 9). These distinct transcriptional effects may be accounted for by the existence of recently described cell-type-specific coactivators or corepressors (35, 36). Several mechanisms of negative regulation by nuclear receptors have been reported (33, 34). One mechanism has been described for thyroid hormone receptors (T3R); thyroid hormone (T3) inhibits stimulation mediated by unliganded T3R in the LTR of Rous sarcoma virus and in the LTR of HIV-1. T3R has been shown to interact with several sites in the HIV-1 proximal promoter spanning the NF-κB and the Sp1 elements (37), although other investigators found that T3R binds to a single site overlapping the Sp1 sequence (38). A second mechanism involves blocking a positive transcription factor such as AP-1; it appears that DNA binding by the receptor may not be sufficient or even required for ligand-dependent repression and that an interplay with additional factors may be involved. Our findings reveal that, depending on the LTR sequence, both mechanisms are likely to be involved in the RAR and RXR regulation of HIV-1 gene transcription in TC-620 cells.
In the lymphotropic HIV-1 LAI strain, the NRRE site plays a major role in RAR- and RXR-mediated transcriptional stimulation. Similarly, the NRRE site was found to be responsible for the ligand-dependent activation mediated by RAR and RXR in JEG-3 (9) and in CV1 cells (8). When the NRRE sequence was deleted, downstream-located elements, such as NF-κB, are able, to some extent, to mediate the RAR and RXR action in TC-620 cells. Similarly, the NF-κB element was shown to be responsible for the negative effect of RA on HIV LTR activity in HeLa and U937 cells (16). In contrast, the NRRE site in the LTR of the neurotropic JR-CSF strain is not indispensable for the retinoid effect, since removal of the NRRE region did not affect transcription. The retinoid action can be mediated by downstream-located elements, such as the −247/−222 AP-1 region recently described in the LTR JR-CSF (10) and, to a lesser extent, the NF-κB region. Similarly, distinct mechanisms, depending on the LTR sequence, were found to govern COUP-TF-induced stimulation in TC-620 cells.² These data show how sequence variations in regulatory sites within the LTR modify the binding properties of transcription factors and further demonstrate the flexibility of the interactions between various elements of the LTR.

Regulation of HIV-1 JR-CSF LTR-directed Gene Transcription by RAR and RXR in the Presence of COUP-TF or AP-1-Multiple nuclear receptors bind to the NRRE and modulate HIV-1 LTR-driven transcription in non-CNS-derived cells (9). We have previously shown that the orphan nuclear receptor COUP-TF is expressed at a high level in brain cells and leads to a dramatic transcriptional stimulation in oligodendroglioma TC-620 cells.² In contrast to results described in CV1 cells, where COUP-TF had no action by itself and repressed the retinoid response (8), our data reveal that either RAR or RXR is able to dramatically inhibit the COUP-TF function. Moreover, high levels of RXR expression repress RAR activity, which suggests that the transcriptional activity of the LTR may be modulated by the relative amount of RXR present in TC-620 cells. Thus regulation of LTR-driven transcription may depend on the actual cell concentration of RARs, RXRs, and COUP-TF, as well as of cell-specific transcriptional intermediary factors, which are responsible for coupling the transcription factors to the transcription machinery. In addition, our findings reveal that the nuclear receptor signaling pathway, by interfering with the AP-1 pathway, is able to inhibit HIV-1 gene expression. We have described previously that the transcription factor AP-1, composed of the JUN and FOS proteins, stimulates transcription from the LTR(JR-CSF) promoter in TC-620 cells by acting on the −247/−222 binding site (10). Here our data further reveal the existence of direct or indirect interactions between the c-JUN component of AP-1 and nuclear receptors bound to the upstream-located NRRE site. Vice versa, cross-coupling interactions are likely to occur between RXR and c-JUN acting on the AP-1 site.

FIG. 7. CAT assays were performed with extracts of TC-620 cells that were cotransfected with the indicated pBL-CAT2 reporter vectors (shown on the bottom) and expression vectors (on the left). Oligonucleotides 3L and 3Lmut correspond to the wild-type and mutant NRRE sequence, respectively. CAT activities are expressed relative to the activity of each reporter vector set at 1. They correspond to the average of at least three independent experiments done in duplicate.
These findings may explain the antagonism between RXR and c-JUN, which leads to a strong inhibition of the c-JUN response. In contrast, RAR is not able to antagonize the positive action of c-JUN. A number of studies have demonstrated the interplay between the regulatory circuits stimulated by AP-1 and those activated by nuclear receptors. Inhibition of AP-1 activity by RARs was reported in non-CNS-derived cells, and RXRs were described to inhibit AP-1 binding to its TRE site in vitro. Vice versa, AP-1 was shown to antagonize the activity of several hormone receptors through protein-protein interactions (for review, see Ref. 33). Interestingly, while RAR-mediated AP-1 transrepression has been described to be ligand-dependent, our findings reveal a ligand-independent effect. The molecular mechanisms underlying AP-1 transrepression by RAR or RXR remain elusive; however, the concept has emerged that AP-1 and nuclear receptors do not interact directly but act indirectly through transcriptional mediators (for review, see Ref. 33). Our data reveal the complexity of the protein-protein interactions between nuclear receptors and AP-1 as well as the diversity of their interactions with different LTRs. Since the AP-1 transcription factor is composed of different members of the JUN and FOS families, it would be interesting to examine whether the composition of the AP-1 complex is critical for RXR-mediated repression and for overall modulation of HIV-1 gene transcription. In conclusion, our data demonstrate the importance of the retinoid receptor pathway in the inhibition of HIV-1 gene transcription in TC-620 cells. It is noteworthy that the positive effect of RAR or RXR is inhibited by the presence of retinoic acid. Moreover, while RAR contributes to repressing the positive COUP-TF response, RXR, by antagonizing the positive action of distinct transcription factors, such as RAR, COUP-TF, and AP-1, is a potent negative regulator of HIV-1 gene transcription.
6,286
1996-09-13T00:00:00.000
[ "Biology" ]
Robust Small Target Co-Detection from Airborne Infrared Image Sequences In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which suppresses image backgrounds and enhances small targets better than weights based on raw singular values. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutters efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame and the tracklet association of the location correlation maps for successive frames have a strong ability to discriminate between small targets and background clutters. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutters effectively, but also detect targets accurately even in the presence of target-like interference.

Introduction

Infrared small target detection has been widely used in airborne early warning, infrared guidance, surveillance and tracking, and other fields [1][2][3][4]. In these applications, infrared small targets have the following characteristics: (1) they are often immersed in strong noise or complex backgrounds (cloud clutter, plants, buildings, etc.); (2) they carry little texture and shape information; and (3) they are non-cooperative and follow no fixed law of movement. These characteristics make infrared small targets very difficult to detect, and the problem has long been a central open issue in the infrared detection field. Because of the movement (jitter) of the infrared observation platform or changes in the imaging background, sequential detection methods [5][6][7] have difficulty obtaining an accurate infrared background: small targets are easily mistaken for background, and vice versa. In this case, single frame detection methods have received great attention recently, and are effective for infrared small target detection with static or changing backgrounds [8][9][10]. However, it is difficult to suppress clutters (cloud boundaries, target-like artifacts), which are very similar to real targets in terms of high intensity, because of the limited target information available in a single frame. Fortunately, the commonality of targets in the spatio-temporal domain can be used to build better target detection models and suppress suspected clutters and noise. To the best of our knowledge, tracklet information is rarely used in existing infrared target detection methods. Note that tracklet information is widely used in the tracking problem, in which the target position in the first frame is given in advance [11,12].
However, there is no such prior target information in the detection problem, in which whether a small target exists in a frame at all is still ambiguous. As discussed above, upon encountering suspected targets or clutters, using the commonality of targets in the spatio-temporal domain is necessary for better detection performance. The commonality features of targets in the spatial domain can be utilized by combining two one-dimensional dense and sparse reconstruction models [13,14]. Different from the one-dimensional dense and sparse reconstruction models [13,14], in this paper we consider the two-dimensional form of dense and sparse reconstruction, no longer transforming a matrix into a vector. A two-dimensional dense reconstruction model has been proposed based on the global singular value decomposition (SVD) [15], which sets the first few singular values equal to zero and preserves the remaining singular values unchanged. However, that method does not give a general rule for selecting the scope of singular values, and its center-bias mechanism will suppress small targets located at the edges of the image while suppressing clutters or noise. To address this limitation of the global SVD-based reconstruction method [15], we use differences of adjacent singular values to select the proper singular value scope for target extraction, and meanwhile use a sigmoid function to regularize the singular values in order to suppress the background components. The intuition is that each singular value indicates the ability of the corresponding sub-image to approximate the original image. In [8,16], the authors give one-dimensional sparse reconstruction models based on the patch-image model. However, these methods have the following limitations: (1) The detection performance depends largely on the patch size (it was set to 50 × 50 in [8] and 51 × 51 in [16]), and the patch vectorization and the pixel reconstruction from overlapped patches also increase the running time of the algorithm. Moreover, in the patch-image model, one target may appear in different locations of several aligned patches, and after vectorization the intrinsic structure and correlations of the image can be broken, which later hinders the separation of target and background. (2) The algorithm uses the L1-norm to measure the sparsity of small targets, but the L1-norm treats each pixel independently in terms of intensity; thus pixels with higher intensities (cloud borders, artifacts) are easily mistaken for target pixels and are difficult to remove with a global threshold [8]. We observe that, in an infrared background image, columns (rows) also have a non-local self-correlation property: columns (rows) in distant locations are approximately linearly correlated with each other. Hence, to address the first limitation of the patch-image model, we directly consider each column (row) of an image as a column (row) of the observation matrix instead of dividing the image into patches and forming a patch vectorization matrix. Thus, we refer to the proposed sparse reconstruction model as a global sparse reconstruction model. Moreover, we exploit an entry-wise prior in the sparse reconstruction model to better separate targets from complex backgrounds. The intuition behind the entry-wise prior is that each pixel in a target should be weighted differently according to its local weighted entropy, which measures the local difference between the target and the neighboring background.
Thus, both the local target features and the global background features are incorporated into the proposed sparse reconstruction model. For each frame, to increase the confidence level that candidates are real targets, correspondence between suspected targets obtained by the dense and sparse reconstructions is used to further suppress clutters and false alarms. As we know, the target region in an infrared image has a striking discontinuity with the surrounding background. However, we observe that pixels with higher intensities (cloud borders, artifacts) as a whole also have this property. Because of the limited target information available in a single frame, these target-like false alarms could also be detected as real targets. In order to suppress false alarms further, especially the highly suspected ones, in this paper we adopt multiple frame target refinement by tracklet association, based on the facts that real targets and false alarms have different movement characteristics, and that false alarms, unlike real targets, should not be temporally continuous between successive frames. Because the spatio-temporal target commonality is used to refine the rough detection result of each frame, we refer to the proposed method as a target co-detection model. In this paper, we propose a novel infrared target co-detection model that combines the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain to detect infrared small targets in a sequence of images with complex backgrounds. In the first step, the dense reconstruction model is proposed to extract a coarse target map with the benefit of singular value regularization. In the second step, we design a sparse reconstruction model to extract a sparse target map. In the third step, the correspondence between suspected targets of the two types of target maps is used to suppress clutters and noise. In the fourth step, tracklets are associated to suppress false alarms and form trajectories, which are used to confirm targets in each frame. The contributions of this paper are summarized as follows: (1) A dense target extraction method based on regularization of singular values is proposed. Due to the introduction of a sigmoid function, the background components in the target map can be further inhibited. It should be noted that we do not minimize the nuclear norm but only use the singular value information. (2) A sparse target extraction method based on entry-wise weighted robust principal component analysis is presented. The entry-wise weight uses a structural prior based on the local difference between a target and its neighboring background in a natural scene, from the viewpoint of human recognition, which promotes complex background suppression while keeping the small target. (3) A false alarm suppression and target refinement method is proposed based on the location correlation of the dense and sparse reconstruction maps for a single frame and the tracklet association of the location correlation maps for successive frames. Based on the spatio-temporal commonality features of targets, this method can effectively detect small targets and suppress false alarms as much as possible. The remainder of this paper is organized as follows. Section 2 reviews the related work from the view of processing units in target detection. Section 3 presents our detection approach, comprising single frame target extraction and multiple frame target refinement.
The evaluation on a real infrared data set and comparisons are presented in Section 4. Conclusions are given in Section 5.

Related Work

In recent years, many infrared small target detection methods have been proposed for different applications. According to the processing units used in the detection process, we categorize these approaches into pixel-wise, patch-image, and whole-image groups. As discussed later, each group has its own characteristics. The pixel-wise detection methods usually estimate one pixel at a time based on its local neighborhood or its temporal profile, so they can make better use of local differences between the current pixel and its neighborhood, but they are not suitable for cases where the background scene in a sequence changes fast and weak dim targets are contained in a single image. The patch-image detection methods process each patch in light of a patch set that is decomposed into low-rank and sparse parts; they can suffer from longer running times caused by vectorization and are also not suitable for detecting weak dim targets against complex backgrounds, although the non-local background patches do help separate targets from the patch set. Also based on low-rank matrix approximation, the whole-image detection methods estimate a whole image from a sequence or a single image, so the global background feature is used in separating targets from the background; however, the whole-image model can also suffer from the problems of vectorization, fast-changing backgrounds, and low signal-to-clutter ratios.

Pixel-Wise Detection Methods

Besides some classical pixel-wise detection methods, such as the top-hat method [17] and the max-mean and max-median methods [18], more pixel-wise approaches have been proposed recently. In [9], the authors proposed an effective small target detection approach according to the contrast mechanism of the human vision system and a derived kernel model, wherein the local contrast measure was defined to compute the dissimilarity between the current location and its neighborhood. In [19], the authors presented local mutation weighted information entropy to suppress background and enhance the gray value of targets. In [10], inspired by the concept of local difference, the authors proposed a weighted local difference measure for the detection of infrared small targets. In addition, in [20], the authors developed a multiscale facet model to enhance targets and then used the multiresolution representation to reduce the false alarm rate. A fast-saliency method based on the facet model was presented for real-time infrared small target detection, wherein the facet kernel operator was designed and used to separate small targets from the background [21]. A biologically inspired method named the multiscale patch-based contrast measure was proposed for small infrared target detection, which can increase the contrast between target and background [22]. Furthermore, other pixel-wise detection methods use temporal information. Motivated by the singular value decomposition, a temporal filter was developed for dim target detection in evolving cloud clutters [23]. A nonlinear adaptive filter was proposed to detect infrared moving dim targets, and performs well in removing large fluctuations on temporal profiles that are caused by evolving clutters [5].
By combining spatial and temporal information, a target detection method was introduced using a spatial bilateral filter and a temporal cross product, which are respectively used to extract spatial target information and features of temporal profiles [6]. Subsequently, a spatial-temporal bilateral filter was presented to detect target trajectories by extracting spatial and temporal target information simultaneously [7]. As discussed above, pixel-wise detection methods use a local region or a temporal profile to extract target information under the assumption that the target location has a conspicuous discontinuity with the nearby background. However, when the imaging background changes fast or contains many types of clutters, jamming objects and noise remain the key factors limiting detection performance.

Patch-Image Detection Methods

In [8], the authors proposed an infrared patch-image (IPI) model for target detection in a single frame. In the IPI model, a frame is divided into small patches and the patches are stacked as columns of a new matrix for robust principal component analysis (RPCA). The intuition behind the IPI model is that local patches in distant regions of an infrared background image are approximately linearly correlated with each other. Subsequently, the IPI model has inspired much related work [16,24]. In [24], the authors generated an image patch set according to multi-scale transform and patch transform, and every patch was given an individual regularization weight computed by combining the patch size, patch entropy, and target saliency level. In [16], the authors generated an image patch set according to the same scheme as in [8], also stacked all the patches as columns of a matrix, and gave each patch an individual regularization weight based on the steering kernel. However, one target may appear in different locations of several aligned patches, so adding the steering kernel at the central position is not always applicable. As a whole, the patch-image models describe the sparsity of small targets with the L1-norm, and cloud borders or target-like artifacts, which have intensities similar to targets, are easily mistaken for target pixels. In addition, the performance of IPI models depends on the patch size (it was set to 50 × 50 in [8] and 51 × 51 in [16]), and when the patches have a higher dimension, the vectorization in the patch-image model needs more computation time.

Whole-Image Detection Methods

In [15], the authors proposed a visible image saliency detection approach based on SVD. The intuition behind this approach is that the large singular values mainly carry the non-salient background information and slight saliency information, the intermediate singular values carry most or even all of the saliency information, and the small singular values contain little or even none of the saliency information. However, this approach does not give a general selection scheme for the scope of singular values that carries the salient components, and the center-bias mechanism it uses can suppress small targets at the edges of an infrared image. In [25,26], the authors considered low-level vision problems where a priori target rank information is available in advance, and minimized the partial sum of singular values instead of minimizing the nuclear norm [27]. In [25][26][27], the individual frames are stacked as columns of a matrix before performing RPCA, and each frame is seen as an independent entity.
RPCA-based background modeling for a static video sequence assumes that the background variations are low-rank and that the foreground activity is sparse because it is spatially localized. When the background changes fast, it is difficult to obtain accurate backgrounds and hence the target regions.

The Proposed Method

In this section, we design an infrared small target detection system consisting of two parts, as shown in Figure 1. The first part aims to extract highly suspected targets in each frame, and the second part confirms true targets among the highly suspected ones. The first step of the first part is to suppress complex backgrounds (such as clouds, plants, and strong noise) and detect suspected targets from a single frame using dense and sparse computation models separately. The second step of the first part is to associate the dense and sparse reconstruction maps obtained by the two computation models in the first step, and suppress false alarms in each frame. Repeating the first and second steps for consecutive frames yields many single-frame detection results. The second part then refines the single-frame detection results of different frames based on tracklet association using target location and appearance features.

Infrared Dim Target Model

An infrared image can usually be described as [8]

F(x, y) = B(x, y) + D(x, y) + N(x, y), (1)

where (x, y) denotes the coordinates of a pixel, and F(x, y), B(x, y), D(x, y) and N(x, y) are the pixel intensities at (x, y) of the original infrared image, the background image, the target image, and the random noise image, respectively. We can obtain dense and sparse reconstruction maps from a single image, depending on whether there is a sparsity constraint on the target image D in the decomposition process.

Frequency Analysis of Infrared Images

It is well known that the singular value decomposition is a powerful tool, widely used in latent semantic analysis, recommendation systems, defect detection, background suppression, and so on [28,29]. The SVD of an infrared image F of size m × n can be defined as

F = Σ_{i=1}^{r} σ_i u_i v_i^T, (2)

where {u_i}, {v_i} and {σ_i} are the left singular vectors, right singular vectors, and singular values, respectively, and {σ_i} is arranged in descending order. We can see from (2) that an infrared image F can be represented as a sum of different frequency components {u_i v_i^T} regularized by {σ_i}: the low frequency components correspond to the background part B, which always changes quite slowly; the medium frequency components correspond to the target part D, which usually appears as a bright area; and the high frequency components correspond to the noise part N. So each part of an infrared image can be obtained by regularizing the proper singular values for different purposes, such as background approximation with low-pass filtering and target detection with band-pass filtering.

Low-Rank Analysis of Infrared Images

As discussed above, the infrared background image B usually changes slowly and occupies most of the original image F, and the image columns (rows) have the property of non-local self-correlation, i.e., columns (rows) are approximately linearly correlated with each other. Thus the background image B can be well approximated by a low-rank matrix. For a target image, the total number of target pixels is far less than the total number of pixels in the whole image, because of the small size of each target (usually no more than 10 × 10). So it is reasonable to assume that the target image is sparse.
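To make the frequency interpretation of Equation (2) concrete, the sketch below groups the singular components of an image into low-, mid-, and high-frequency parts; the fixed band edges are illustrative stand-ins for the adaptive cut-offs l and h derived in the next subsection.

```python
import numpy as np

def svd_bands(img, low_end=3, high_start=30):
    """Split an image into background/target/noise-like SVD bands."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    high_start = min(high_start, len(s))

    def band(a, b):
        # Sum of sigma_i * u_i * v_i^T for i in [a, b).
        return (U[:, a:b] * s[a:b]) @ Vt[a:b, :]

    B = band(0, low_end)            # slowly varying background part
    D = band(low_end, high_start)   # target-bearing mid band
    N = band(high_start, len(s))    # residual / noise part
    return B, D, N
```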
Based on the low-rank property of the background image and the sparsity of the target image, a low-rank decomposition model can be used to build a separation model for small targets and complex backgrounds.

Target Extraction via Dense Reconstruction

A dense reconstruction model is developed here to extract target regions in an infrared image. For an original infrared image F, we can construct filters v_i (orthogonal bases of the row space) by performing SVD on F. Based on these filters, we can compute a one-dimensional response R_i^r by filtering the infrared image F:

R_i^r = F v_i. (3)

Similarly, we can obtain a one-dimensional response R_i^u by filtering the infrared image F with the filters u_i (orthogonal bases of the column space):

R_i^u = F^T u_i. (4)

Combining Equations (3) and (4), we can compute a two-dimensional response R_i:

R_i = (1/σ_i²) R_i^r (R_i^u)^T = u_i v_i^T. (5)

Equations (3)-(5) show that a pair of filters (u_i, v_i) can be used to generate a two-dimensional response R_i that corresponds to exactly one frequency component of the infrared image F. To better utilize multiple frequencies of F for extracting target information, we combine the two-dimensional responses within a reasonable band to obtain the final target map

D_d = Σ_{i=l}^{h} ρ_i R_i, (6)

where l denotes the low cut subscript, h denotes the high cut subscript, and ρ_i denotes a linear or non-linear weight. When ρ_i equals σ_i, Equation (6) degenerates to Equation (2), up to the low and high cut subscripts. In Equation (6), R_i denotes the two-dimensional response of the dense filters v_i and u_i, and all its elements are weighted by only one weight ρ_i; thus the map D_d is a global dense representation of infrared targets. For each response R_i, background clutter is still a key factor influencing the final target map, so the corresponding weight ρ_i should be regularized to suppress the background further. In Equation (6), the weight ρ_i is defined as the logistic sigmoid function of σ_i (Equation (7)). As mentioned above, each singular value indicates the ability of the corresponding response to approximate the original image. Thus, the singular values can be used to estimate the parameters l and h. Note that the first component always corresponds to the main part of the background, so we do not consider the first singular value in computing l and h. Let Δσ_i = σ_i − σ_{i+1}, i = 2, . . . , r − 1, and Δσ̄_1 = (1/(r − 1))(Δσ_2 + · · · + Δσ_{r−1}); then l is computed by comparing each difference Δσ_i against the scaled average ζ_1 Δσ̄_1 (Equation (8)). A similar consideration can be used to obtain the parameter h: with Δσ̄_2 = (1/(r − l))(Δσ_l + · · · + Δσ_{r−1}), h is computed by comparison against ζ_2 Δσ̄_2 (Equation (9)). Here ζ_1 and ζ_2 are scaling factors. For the final target map D_d, a small part of the background clutter and noise still needs to be removed, because D_d comprises a series of responses {R_i}, each regularized by a single global weight ρ_i. In fact, the remaining clutter and noise are not necessarily Gaussian. Therefore, we use Chebyshev's theorem to remove the clutter and noise in D_d, and set Th = µ + cτ as the global threshold, where µ and τ denote the mean and standard deviation of D_d, and c is a positive number denoting the multiple of standard deviations [30].
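A compact sketch of this dense reconstruction step follows. The sigmoid argument (singular values normalized by σ_1) is an assumption, since the text states only that ρ_i is a logistic sigmoid of σ_i, and the band edges l and h are taken as inputs rather than computed from the Δσ_i comparisons.

```python
import numpy as np

def dense_target_map(img, l, h, c=3.0):
    """Band-pass SVD reconstruction D_d with sigmoid weights,
    followed by the Chebyshev-style global threshold mu + c*tau."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    rho = 1.0 / (1.0 + np.exp(-s / s[0]))      # assumed normalization
    # D_d = sum_{i=l}^{h} rho_i * u_i * v_i^T   (Equation (6))
    Dd = (U[:, l:h + 1] * rho[l:h + 1]) @ Vt[l:h + 1, :]
    mu, tau = Dd.mean(), Dd.std()
    return np.where(Dd > mu + c * tau, Dd, 0.0)
```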
Target Extraction via Sparse Reconstruction

As discussed above, from the viewpoint of low-rank representation, the task of target map computation can be formulated as a convex optimization problem:

min_{B,D} ||B||_* + λ ||W ∘ D||_1 subject to ||F − B − D||_F ≤ ε, (10)

where ||·||_* denotes the nuclear norm of a matrix, ||·||_F is the Frobenius norm of a matrix, ||·||_1 represents the sum of the absolute values of the matrix elements, ∘ denotes the entry-wise product of the weighting matrix W and the target image D, λ > 0 is a regularization parameter which controls the tradeoff between the background image B and the target image D, and ε > 0 is the upper bound of the noise energy. By introducing a multiplier µ > 0, the optimization problem (10) can be relaxed as:

min_{B,D} ||B||_* + λ ||W ∘ D||_1 + (1/(2µ)) ||F − B − D||_F². (11)

It can be shown that for some proper value µ(ε), any solution of (11) is equivalent to a solution of (10) [27]. To achieve superior convergence, the accelerated proximal gradient (APG) algorithm with a continuation technique on µ is used to solve (11) [31][32][33]. The convex optimization problem (11) can be decomposed into two subproblems that minimize B and D respectively (for details please see the Appendix); subproblem (12), in B, is solved in closed form by the singular value thresholding operator, and subproblem (13), in D, by the entry-wise soft-thresholding (shrinkage) operator [34,35]. The details of the solution via APG are described in Algorithm 1. The computation of the entry-wise weight matrix W is based on the local difference features between the target and the neighboring background, and these local difference features can be well measured by the local weighted entropy [19]. For a pixel F(x, y) with a small neighborhood containing n kinds of gray values f_1, f_2, . . . , f_n, its local weighted information entropy H(x, y) is computed from these gray values and their frequencies, where n_i is the number of occurrences of gray value f_i in the neighborhood. Then H is convolved with the two-dimensional Gaussian operator Ga(x, y) = exp{−(x² + y²)/(2σ_0²)} to generate a smoothed version H̃. Consequently, the weighting matrix W is computed as α(1 − H̃), where α is a regularization parameter which controls the impact of the prior in the weight matrix.

Target Confirmation via Location Correlation

In order to extract as many highly suspected targets as possible, we use the location cue to remove false alarms and validate candidates in each frame. The intuition is that a real target, though obtained by different methods, should be located at the same position, whereas random noise is not necessarily so. Therefore, through location correlation, some random noise should be suppressed, and the candidates occurring in both the dense and sparse reconstruction maps are real targets with high probability. Suppose that the regions obtained from D_d and D_s are denoted by G_j^d with coordinate set X_j^d and G_i^s with coordinate set X_i^s respectively, each region corresponding to a suspected target, and that D_c is initialized to zero with the same size as D_d and D_s. The main steps of target confirmation are as follows: 1. For each G_i^s in the target map D_s, we find the G_j^d in D_d whose coordinates overlap with those of G_i^s, namely X_j^d ∩ X_i^s ≠ ∅. 2. For each successfully correlated pair (G_i^s, G_j^d), we select the pixels in D_d with coordinates X_i^s as the correlation result, namely D_c(X_i^s) = D_d(X_i^s). Note that in step 2 we choose the pixels with coordinates X_i^s in the dense reconstruction map D_d. This not only avoids the drawback of the L1-norm target measure, which treats each pixel independently in terms of intensity and weakens the intensities of the boundary target pixels, but also avoids the drawback of the dense reconstruction, which enlarges the target area by merging nearby false alarms. In essence, the dense reconstruction extracts suspected targets from the viewpoint of the L2 norm, which measures the minimal residual, and thus the boundary target pixels obtained from the dense reconstruction are brighter than those obtained by the sparse reconstruction.
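Returning to problem (11), the following is a plain proximal-gradient sketch of the entry-wise weighted decomposition; the accelerated (APG) variant with continuation on µ used in the paper is omitted for brevity, and the weight matrix W is taken as given.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def weighted_rpca(F, W, lam, mu, iters=200):
    """Split F into low-rank B and entry-wise weighted sparse D."""
    F = F.astype(np.float64)
    B = np.zeros_like(F)
    D = np.zeros_like(F)
    step = mu / 2.0                  # 1/L_f with L_f = 2/mu (Appendix A)
    for _ in range(iters):
        G = (B + D - F) / mu         # shared gradient of the data term
        B = svt(B - step * G, step)  # prox step for ||B||_*
        Z = D - step * G             # prox step for lam * ||W o D||_1
        D = np.sign(Z) * np.maximum(np.abs(Z) - step * lam * W, 0.0)
    return B, D
```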
Multiple Frame Target Refinement

After obtaining candidates from each target map D_k^c, we generate suspected target tracklets based on target location and appearance features of consecutive frames, then perform non-maximum suppression to remove false tracks formed by noise or clutter, and finally refine the suspected targets in each target map D_k^c according to the obtained tracks. Suppose that the candidates in the target maps D_p^c and D_k^c are denoted by {G_j^p} and {G_i^k} respectively. Note that the italic symbol k denotes the image index, and the upright symbol p indicates the association result before the k-th frame. For each candidate, E represents the energy of its pixels, and (x, y) denotes the average coordinate of its pixels. Hence, the tracklets can be generated by repeatedly linking {G_j^p} and {G_i^k} together, with {G_j^p} updated at each association. The details of generating tracklets are as follows: 1. Compute the features (E, (x, y)) of each candidate in {G_j^p} and {G_i^k}. 2. Link each pair (G_j^p, G_i^k) whose centroids fall within the circular gate and whose energies are consistent with E_θ, where E_θ denotes the average energy of all candidate regions. 3. Collect the linked pairs {((x_j^p, y_j^p), (x_i^k, y_i^k))} as small tracklets. 4. Update each track X_t with a proper tracklet selected from the set {((x_j^p, y_j^p), (x_i^k, y_i^k))}. 5. Set D_p^c = D_k^c and k = k + 1. 6. Repeat the above steps until k is greater than L, finally obtaining the tracks {X_t}. After obtaining trajectories through the above process, a non-maximum suppression (NMS) scheme is used to prune false trajectories. For each trajectory X_t, a displacement gain measure Δ_t^1 and a length ratio measure Δ_t^2 are computed from the successive track points (x_i^t, y_i^t) and the track length len(X_t) (Equation (18)). Next, the trajectories that not only have a displacement gain greater than a threshold θ_1 but also a length gain longer than a threshold θ_2 are selected. The intuition is that real targets and false alarms have different motion features, and the trajectories formed by false alarms differ from the true trajectories produced by real targets. Finally, each suspected target in the target map D_k^c is refined by measuring the distance of its centroid to the valid tracks, and the final detection result D_k^r is obtained accordingly.
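A minimal sketch of this association-and-pruning procedure is given below. The greedy nearest-neighbour linking and the omission of the energy-similarity test are simplifications; the displacement-gain gate of one half of the maximum follows the experimental setting reported in Section 4.

```python
import numpy as np

def link_and_prune(frames, gate=15.0, min_ratio=0.6):
    """Associate candidate centroids across frames, then prune tracks.

    `frames` is a list of (k_i, 2) arrays of candidate centroids,
    one array per image.
    """
    tracks = [[tuple(p)] for p in frames[0]]
    for pts in frames[1:]:
        pts = np.asarray(pts, dtype=float)
        for tr in tracks:
            if pts.size == 0:
                continue
            d = np.linalg.norm(pts - np.asarray(tr[-1]), axis=1)
            j = int(np.argmin(d))
            if d[j] <= gate:                 # circular gate test
                tr.append(tuple(pts[j]))
    if not tracks:
        return []
    # Displacement gain: total path length of each track.
    disp = [sum(np.linalg.norm(np.subtract(a, b))
                for a, b in zip(tr[1:], tr[:-1])) for tr in tracks]
    thr = 0.5 * max(disp)
    return [tr for tr, dv in zip(tracks, disp)
            if dv >= thr and len(tr) >= min_ratio * len(frames)]
```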
Data Sets

In order to fairly evaluate the performance of infrared detection methods, a representative data set consisting of three public infrared sequences with different complex backgrounds is used; its detailed features are listed in Table 1. In Sequence 1, the detection is influenced by strong noise, plants, and trees [36]. In Sequence 2, the background changes rapidly due to the movement of the imaging platform, and a plane moves from the thick cloud region to the sky [37]. In Sequence 3, the detection barriers are noise and changing wispy clouds [11]. On the whole, the data set covers various situations in airborne infrared target detection. Therefore, using the given data set can fairly show the performance of infrared detection methods.

Evaluation Metrics

The main objective of the proposed method is to effectively suppress background noise and clutters, and then significantly reduce false alarms to improve detection performance. In this paper, the metrics of signal-to-clutter ratio gain (SCRG), background suppression factor (BSF), and precision, recall and F-measure (PRF) are used to evaluate the performance of infrared detection methods. More specifically, the SCRG measures the enhancement of targets relative to the backgrounds before and after detection, and is defined as follows [38,39]:

SCRG = SCR_out / SCR_in, (19)

where SCR_out and SCR_in are the local SCR values computed from the filtered and original images respectively. Moreover, the BSF evaluates the degree of background inhibition, and is defined by [40]:

BSF = δ_in / δ_out, (20)

where δ_in and δ_out denote the standard deviations of the original and processed images respectively. In addition, the precision and recall reflect the false alarm rate and miss rate respectively, and the F-measure is the weighted harmonic mean of precision and recall [41].

Overall Performance of the Proposed Method

In the dense reconstruction step, the scaling factors ζ_1 and ζ_2 are both set to 1, and the multiple c of standard deviations from the mean is set to 3. In the sparse reconstruction step, the window of the local weighted entropy is set to 5 × 5, λ is set to 1/√(max(m, n)), µ_0 and µ̄ are set to σ_3 and (1/(r − 2)) Σ_{i=3}^{r} σ_i respectively, α is set to 2, and σ_0 and the size of the Gaussian filter are set to (1/36) min(m, n) and (1/6) min(m, n) respectively. In addition, in the multiple frame target refinement step, the circular gate is set to 15, the displacement gain threshold is set to (1/2) max_t(Δ_t^1), and the length threshold is set to 0.6. We first verify the validity of all the procedures of the proposed method, and test the proposed method on all the sequences in the data set with the same configuration parameters. More specifically, the visual illustrations of the dense reconstruction map (DRM) D_k^d, the sparse reconstruction map (SRM) D_k^s, the location correlation map (LCM) D_k^c of D_k^d and D_k^s, and the refined map (RM) D_k^r are shown in Figure 2, and the quantitative evaluations are given in Table 2 and Figure 3. Note that in Figure 2 the red circles denote the detected real targets, and the blue circles denote some target-like false alarms. As depicted in Figure 2, D_k^d and D_k^s display the ability to reveal the small target in the preliminary results. In addition, the location correlation map D_k^c of D_k^d and D_k^s is shown to further improve the results by suppressing false alarms and enhancing the target. Consistently, as illustrated in Table 2, the SCRG and BSF of D_k^c were a good tradeoff between those of D_k^d and D_k^s. Although D_k^s obtained a higher SCRG than D_k^d on Sequence 3, it obtained a lower SCRG than D_k^d on Sequence 1 and Sequence 2, because the sequences in this challenging real-application data set are diverse in background and target characteristics. Hence, a proper tradeoff based on location correlation is necessary for suppressing the false alarms in D_k^d while keeping the target pixels in D_k^s. However, after the correlation of D_k^d and D_k^s, highly suspected targets still exist in the correlation map D_k^c, and some representatives are labeled by blue circles. So in this paper, we use multiple frame target refinement to suppress these false alarms. From the visual and quantitative results in Figures 2 and 3 and Table 2, the refined map D_k^r produced the best target detection performance.
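As a concrete reading of the metrics in Equations (19) and (20), the sketch below computes SCRG and BSF from an original/processed image pair; the local target and background neighbourhoods are caller-supplied masks, since the exact neighbourhood definition is not restated here.

```python
import numpy as np

def local_scr(img, target_mask, bg_mask):
    """Local SCR: |target mean - background mean| / background std."""
    mu_t = img[target_mask].mean()
    mu_b = img[bg_mask].mean()
    return abs(mu_t - mu_b) / (img[bg_mask].std() + 1e-12)

def scrg_bsf(original, processed, target_mask, bg_mask):
    """SCRG = SCR_out/SCR_in (Eq. 19); BSF = delta_in/delta_out (Eq. 20)."""
    scr_in = local_scr(original, target_mask, bg_mask)
    scr_out = local_scr(processed, target_mask, bg_mask)
    scrg = scr_out / (scr_in + 1e-12)
    bsf = original.std() / (processed.std() + 1e-12)
    return scrg, bsf
```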
Parameters Analysis

For the dense reconstruction model, the main relevant parameters are the regularization weight ρ_i and the multiple c. As previously discussed, the regularization weight ρ_i helps to eliminate the background influence and further enhance the target, and the multiple c controls the global threshold for target extraction in the dense reconstruction map D_d. To show the benefits of using ρ_i rather than σ_i in the dense reconstruction model, we consider several combinations (ρ_i, c) and (σ_i, c) in the experiment, setting c = 1, 3, 5, 7. As seen from Figure 4a-d and the first four rows of Table 3, for Sequence 1 and Sequence 2, the combinations (ρ_i, c) helped the target map D_d obtain better precision, recall, and F-measure, and achieve higher average SCRG and BSF values. Note that the symbol ∞ in Table 3 indicates that the target map obtained by the combination (σ_i, c = 7) is a zero matrix, meaning that there is no target or background information in the target map; hence the choice of ρ_i is superior to σ_i in the reconstruction process. In addition, we found from Figure 4e-f and the last two rows of Table 3 that there was no marked difference between the results obtained using the combinations (ρ_i, c) and (σ_i, c) on Sequence 3, and the combination (ρ_i, c) obtained a slightly worse result than (σ_i, c). The reason is that ρ_i can also suppress the target while suppressing the background, and the background in Sequence 3 is not as complex as that in Sequence 1 and Sequence 2. As a whole, the regularization weight ρ_i is more suitable than σ_i for suppressing more complex backgrounds. As mentioned, c is also an important parameter. As shown in Table 3, the BSF increased with increasing c, while the SCRG decreased with increasing c. Hence, an intermediate c value is a good choice. A consistent conclusion can be drawn from Figure 4: for example, c = 7 can cause a high miss rate, while c = 1 can result in many false alarms.

Table 3. Average scores of the dense reconstruction maps with different combinations.

For the sparse reconstruction model, the regularization parameter α controls the impact of the prior in the weight matrix. We varied α from 1 to 4 in the experiment, and illustrate the precision, recall, and F-measure in Figure 5. From the illustration, it can be observed that an intermediate α gives a better tradeoff between the miss rate and false alarms. For example, the precision, recall, and F-measure at α = 4 showed a high miss rate on Sequence 1 and Sequence 2, because the real targets were suppressed by the overlarge prior weight. In contrast, while a low miss rate was guaranteed, the false alarm rate at α = 1 was higher than at the other settings, suggesting that too small an α is also not a proper choice. As described in the multiple frame target refinement subsection, a trajectory comprises small tracklets which mainly depend on the gate size δ of G_i^k. We varied the gate size δ from 5 to 25 in the experiment, and show the relationship between the trajectory detection rate and the gate size in Figure 6. It can be observed that a small gate size δ < 10 results in a low trajectory detection rate on Sequence 2, a small gate size δ < 15 causes a low trajectory detection rate on Sequence 3, and a large gate size δ > 20 leads to a low trajectory detection rate on Sequence 1, because the real targets in Sequence 1, Sequence 2, and Sequence 3 have different velocities, which can be inferred from Figure 2.
For example, because the target in Sequence 1 has the lowest velocity, a very large gate size can lead to false associations between the current frame and the former frame; conversely, the target in Sequence 3 has the highest velocity, so a very small gate size can result in association failure between consecutive frames. From the illustration, we find that the interval [15, 20] is a proper scope of the gate size for Sequence 1, Sequence 2, and Sequence 3.

Comparison with State-Of-The-Art Approaches

In order to show the performance of the proposed method, we selected five state-of-the-art infrared small target detection methods, including two classical methods (MM [18], TH [42]) and three recent methods (IPI [8], MPC [22], WLD [10]) based on the patch-image model, the multiscale contrast measure, and the local difference measure. As a whole, the adopted comparison approaches are representative of the current state of infrared small target detection. The detailed parameter settings used in the experiments are described in Table 4 for reproduction. The detection results of the proposed method and the existing methods are shown visually in Figure 7, in which the red circles denote the detected real targets, the blue circles denote the false alarms, and the green circles indicate that the real targets were lost. Note that the results of the existing methods in Figure 7 are obtained using a global threshold µ + 3τ, and the original images corresponding to Figure 7 are the same as those of Figure 2. In addition, the quantitative evaluation results are provided in Table 5 and Figure 8.

Table 4 (excerpt). WLD: weighted local difference measure, L = 4, entropy neighborhood 5 × 5. COD: the proposed co-detection method, λ = 1/√(max(m, n)), entropy neighborhood 5 × 5.

As depicted in Table 5, the MM achieved a higher BSF than TH on Sequence 1, Sequence 2 and Sequence 3, but gave the worst performance in terms of the SCRG and the visual illustration of Figure 7. In addition, the TH and MPC achieved the top two SCRG values on Sequence 3, but gave poor detection results: as seen from the visual result of Figure 7, many target pixels were lost in the detection result of MPC, and much background cloud remained in the detection result of TH. Moreover, the IPI model exhibited excellent background suppression performance, but its SCRG was very low when the background was complex, as in Sequence 1. Furthermore, the WLD achieved the highest BSF on Sequence 3, but obtained a lower SCRG, only slightly better than that of MM. As shown in Figures 7 and 8b-c, for the IPI and WLD methods, the false alarms can be well suppressed with an appropriate threshold on Sequence 2 and Sequence 3. However, as shown in Figures 7 and 8a, the five existing state-of-the-art methods still performed poorly on Sequence 1, because a very challenging test sequence based on an actual application (with target-like false alarms, as shown in Figure 7) was used for the comparative testing in this paper. Thus, the success of each of the five existing methods, based on only a single image, was restricted to its own specific application. Therefore, exploiting the motion and appearance cues of the image sequences is necessary to further improve the detection performance of a single frame.
Although the proposed method does not have the highest SCRG and BSF, based on the visual comparison in Figure 7 and the quantitative comparison in Figure 8, it is clear that the proposed method consistently performed well on all three sequences and outperformed the other test methods, in the sense that all false alarms, including the target-like ones in each image, were well suppressed.

Computational Complexity

The computational complexity and running time of the proposed method and the other existing methods are given in Table 6. All the experiments were carried out on a computer with a 3.2 GHz Intel CPU and 4 GB of memory. The image size is m × n, the patch-image size in the IPI model is m × n, β denotes the iteration number of the algorithm, and p denotes the number of pixels in the support region. As depicted in Table 6, the six test methods differed greatly in running time, though there was little difference in computational complexity. For the MM and TH methods, the time difference is mainly caused by the max operation in MM. For the MPC and WLD methods, the time difference lies in the sort operation in the computation of the local entropy. In addition, the IPI method took the longest time among the test methods, the cost being mainly caused by the vectorization and median operations. However, the running time of the proposed method is only about one-fiftieth that of the IPI method. The essential reason is that the non-patch scheme and the local prior weight improve the convergence speed. Although the proposed method took more time than the MM, TH and MPC methods, this is acceptable in view of its detection performance.

Conclusions

In this paper, a novel infrared target co-detection model, which combines the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain, is proposed to detect small targets in a sequence of images with complex backgrounds. On one hand, nonlinear weights have been constructed based on the logistic sigmoid function, and have more advantages than singular value weights in suppressing background and keeping small targets. On the other hand, the entry-wise weight has been designed based on the local weighted entropy, and can extract real targets accurately and suppress background clutters efficiently. Finally, the location correlation of the dense and sparse reconstruction maps for a single frame and the tracklet association of the location correlation maps for successive frames are performed to suppress false alarms and confirm suspected targets. The experiments have verified the effectiveness of the proposed co-detection model.

Author Contributions: The work presented here was carried out in collaboration with all of the authors. Jingli Gao and Chenglin Wen conceived the idea and research theme. Jingli Gao designed and performed the experiments, and wrote the paper. Meiqin Liu supervised the work and analyzed the experimental results.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

With X = (B, D)^T, g(X) = ||B||_* + λ ||W ∘ D||_1, and f(X) = (1/(2µ)) ||F − B − D||_F² in (11), we have ∇_B f(X) = ∇_D f(X) = (1/µ)(B + D − F) and ∇f(X) = (∇_B f(X), ∇_D f(X))^T, from which we obtain

||∇f(X_1) − ∇f(X_2)||_F ≤ (2/µ) ||X_1 − X_2||_F. (A1)

So f is Lipschitz continuous, with Lipschitz constant L_f = 2/µ. According to the proximal gradient approach, we can approximate f(X) locally by a quadratic function and then solve the resulting problem (A2) to update the solution X. It follows that problem (A2) is separable and can be decomposed into the two subproblems (12) and (13).
10,542
2017-09-29T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Approximate and Parametric Solutions to the SIR Epidemic Model: This article provides a detailed exploration of the SIR epidemic model, starting with its careful formulation. The study employs a novel approach, the upper and lower bounds technique, to approximate the solution to the SIR model, providing insights into the dynamic interplay between the susceptible S, infected I, and recovered R populations. A new parametric solution to this model is presented. Applying the Adomian decomposition method (ADM) yields highly accurate approximate solutions to the SIR epidemic model. To validate the accuracy and robustness of the proposed approach, a numerical exploration is conducted over a diverse range of experimental parameters. This numerical analysis provides valuable insights into the sensitivity and responsiveness of the SIR epidemic model under varying conditions, contributing to the broader understanding of infectious disease dynamics. The interplay between theoretical formulation and numerical exploration establishes a comprehensive framework for studying the SIR model, with implications for refining our ability to predict and manage the spread of infectious diseases.

Introduction

The SIR epidemic model provides a rigorous framework for understanding the temporal evolution of a population's susceptible S, infected I, and recovered R compartments as functions of time t. This model is described by a set of non-linear ordinary differential equations, as presented in Equation (1) [1,2]:

dS/dt = −rSI, dI/dt = rSI − αI. (1)

These equations encapsulate the dynamics of the S and I populations, with the initial conditions outlined in Equation (2):

S(0) = S_0, I(0) = I_0, R(0) = 0, (2)

where r, α, S(0) and I(0) are non-negative constant parameters. The equation describing the behavior of R evolves from the infected population I and is expressed as

dR/dt = αI. (3)

The SIR (Susceptible-Infectious-Removed) model is a fundamental and significant epidemiological model owing to its simplicity, clarity, and historical importance. Although more complex and modern models [3][4][5][6][7][8] have been developed to address specific refinements of infectious disease dynamics, the SIR model remains valuable; the choice of model depends on the analysis objectives and the level of detail needed. A substantial body of literature has been dedicated to numerical and analytical methods for solving these equations, such as the homotopy analysis method (HAM) [9]. Recently, in [10], the authors explored this coupled non-linear system and introduced an approximate solution through asymptotic approximants.

This paper is devoted to finding new upper and lower bounds, in explicit form, for solutions to this SIR epidemic model. Finding lower and upper bounds for solutions of differential equations is an important mathematical and analytical tool: it provides insight, validation, and practical benefits in understanding and modeling various physical and engineering systems, since such bounds characterize the behavior and limitations of the system under study.

It is often challenging, if at all possible, to find explicit solutions to such systems. However, converting the system to an equivalent second-order nonlinear differential equation can help in finding solutions in parametric form. As a result, a new solution in parametric form is provided.
Overall, the Adomian decomposition method (ADM) [11][12][13][14][15][16][17][18][19][20][21][22][23] is a powerful analytical method for approximating solutions to the SIR epidemic model, contributing significantly to understanding disease dynamics and informing public health strategies. The ADM provides an analytical framework that offers closed-form or series solutions, which helps in obtaining analytical expressions for the SIR model's behavior over time. Moreover, the ADM can handle nonlinear differential equations, enabling the study of complex epidemic models that often exhibit nonlinear dynamics and providing a more realistic representation of disease spread. The ADM can yield accurate solutions, particularly for problems with known or expected smooth solutions, contributing to a precise understanding of epidemic behavior and dynamics. We therefore use the ADM to find excellent approximate solutions for the SIR epidemic model in realistic scenarios. Results obtained from the Adomian method are compared with numerical solutions, providing validation and a better understanding of the model's behavior.

Upper and Lower Bounds of Solutions

The following fundamental lemma is an important tool in finding the upper and lower bounds of the solutions of system (1)-(3).

Lemma 1. The system

du/dt = −re^v, dv/dt = re^u − α, (6)

can be converted into an equivalent first-order nonlinear equation in u,

du/dt = re^u − αu + c_1, (7)

where c_1 is a constant of integration.

Proof. Differentiating the first equation of (6) with respect to t, we obtain

d²u/dt² = −re^v (dv/dt). (8)

We eliminate dv/dt and e^v from (8) by using the first and second equations of (6), and we obtain

d²u/dt² = (du/dt)(re^u − α). (9)

Integrating Equation (9) with respect to t, we obtain Equation (7), which is a first-order nonlinear equation in the unknown function u. Conversely, we can reduce Equation (7) to Equation (6) by setting the transformation du/dt = −re^v; thus, Equation (9) can be decoupled into the two first-order equations, which is indeed (6).

We now rewrite Equation (7) in the form du/dt = f(t, u), where f(t, u) = re^u − αu + c_1 with u(0) = ln S_0. Now, using the well-known exponential inequality e^u ≥ 1 + u, u ∈ ℝ, we get f(t, u) ≥ (r − α)u + r + c_1, that is, f(t, u) ≥ g(t, u), where g(t, u) = (r − α)u + r + c_1. For comparison purposes, we consider the initial-value problem for the linear equation dw/dt = g(t, w), w(0) = ln S_0, whose solution w is elementary. Since g(t, w) is continuous in t and w and Lipschitz in w, we can apply the comparison theorem to Equation (11) with u(0) = ln S_0 to get u ≥ w, t ≥ 0. Consequently,

S ≥ e^w, t ≥ 0, (17)

which is the lower bound of S. To now find the lower bound of I, we take the second equation of system (5), dv/dt = re^u − α. In view of this and since e^u ≥ e^w, we obtain dv/dt ≥ re^w − α. A simple integration from 0 to t leads to v(t) ≥ ln I_0 + r∫_0^t e^{w(τ)} dτ − αt. Hence, and consequently,

I ≥ I_0 exp(r∫_0^t e^{w(τ)} dτ − αt) =: I_1(t), t ≥ 0. (22)

It remains, therefore, only to find the upper bounds of S and I. In finding them, we shall make use of Equations (1). Indeed, from the first equation of Equations (1), dS/dt = −rSI, and in view of (22), we obtain dS/dt ≤ −r I_1(t) S. A simple integration again from 0 to t gives us S ≤ S_0 exp(−r∫_0^t I_1(τ) dτ), or S ≤ S_2, where S_2 = S_0 exp(−r∫_0^t I_1(τ) dτ). Thus, we have proven the following:

Theorem 1. If (S, I, R) is a solution to the SIR epidemic model (1) and (3), then there are upper and lower bounds of the solution, S_1 ≤ S ≤ S_2 together with the corresponding bounds on I, where S_1 = e^w, with w(t) the solution of the linear comparison problem above.

Solutions in Parametric Forms

The following lemma is also an important tool in finding solutions in parametric form to system (1)-(3).

Lemma 2. System (6) can also be converted into an equivalent second-order nonlinear equation in v,

d²v/dt² = −re^v (dv/dt + α). (41)

Proof. Differentiating the second equation of (6) with respect to t, we obtain

d²v/dt² = re^u (du/dt). (42)

We eliminate du/dt and e^u from the right-hand side of (42) by using the first and second equations of (6), and we obtain (41).
Conversely, we can reduce Equation (41) to Equation (6) by setting the transformation dv/dt = re^u − α; thus, Equation (41) can be decoupled into the two first-order equations, which is indeed (6), and this completes the proof.

We rewrite Equation (41) in a form amenable to a handbook solution. After a suitable setting, the substitution x = −αt leads to a second-order nonlinear equation in an unknown function w(x), whose solution in parametric form is given in (Section 2.7.3-2 of [24]). We thus have the following:

Theorem 2. The solution (S, I) in parametric form is given by the parametric representation obtained above.

First, we define the linear operator L = d/dt, the nonlinear operator N, and the inverse linear operator L^{-1}(·) = ∫_0^t (·) dτ. We rewrite (5) in Adomian's operator-theoretic notation, then apply L^{-1} to both sides of (50) and calculate the resulting integrals. Using the classic Adomian decomposition method, we decompose the solution into solution components to be determined by recursion, and we decompose the nonlinearity into the corresponding Adomian polynomials tailored to the particular nonlinearity; the first few Adomian polynomials [11,12] are given in Appendix A. Upon substitution, we establish the Adomian recursion scheme, from which the first few components of the solution (u, v) follow.

Hence, attaining a satisfactory approximate solution for S(t) enables the determination of I(t) and R(t) using Equations (5) and (3), respectively. Notably, from the second equation within system (5), dv/dt = re^u − α; integrating this equation from 0 to t, we obtain v(t) = ln I_0 + r∫_0^t e^{u(τ)} dτ − αt. Then I(t) = I_0 exp(r∫_0^t S(τ) dτ − αt), and R(t) = α∫_0^t I(τ) dτ.

It is important to note that certain considerations must be taken into account when using the Adomian method. Indeed, the Adomian decomposition method is a precious tool for analyzing the SIR model, offering a systematic, analytical, and interpretable solution that significantly improves our understanding of infectious disease dynamics. It is imperative to thoroughly examine the convergence of the ADM series.

Uniqueness Theorem

We can obtain a uniqueness theorem for this system beginning with the initial-value problem of Equation (7) with u(0) = ln S_0. Applying the Mean Value Theorem to the function r(e^{u_1} − e^{u_2}), we obtain a local Lipschitz estimate |f(t, u_1) − f(t, u_2)| ≤ K|u_1 − u_2|. Thus, assuming that u_1 and u_2 are two different solutions to this initial-value problem and using Gronwall's inequality, we obtain u_1 = u_2. Hence, the solution u to this initial-value problem exists and is unique. In view of this, we conclude that the solution (u, v) to (6) exists and is unique. Consequently:

Theorem 3. The SIR epidemic model has only one solution (S, I, R).

Convergence Theorem

The series solution obtained using the ADM converges rapidly, and the successive terms (u_i, v_i) are easily computed. For a proof of the convergence of Adomian's method, we refer to [25], in which Cherruault proposed an interesting technique to prove convergence under suitable and reasonable hypotheses for the numerical resolution of nonlinear equations. To examine this convergence, consider the initial-value problem of Equation (7) with u(0) = ln S_0, which is equivalent to the above system (50). The Adomian technique is equivalent to determining the sequence of partial sums (U_n) [25]. Let X = C[0, T] be the Banach space of all continuous functions on [0, T] with the norm ‖u‖ = max_{t∈[0,T]} |u|. Considering the sequence of partial sums (U_n) with n > m, a simple computation, together with arguments similar to those in [26], shows that (U_n) is a Cauchy sequence in X.
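Before turning to the numerics, the reduction in Lemma 1 is easy to sanity-check symbolically: differentiating the right-hand side of (7) must reproduce (9). The sketch below does exactly that in sympy; it is a verification aid of mine, not part of the paper.

```python
import sympy as sp

t, r, alpha, c1 = sp.symbols('t r alpha c_1')
u = sp.Function('u')(t)

# Right-hand side of the first-order equation (7): u' = r*e^u - alpha*u + c1
rhs7 = r*sp.exp(u) - alpha*u + c1

# Differentiating (7) must reproduce (9): u'' = u' * (r*e^u - alpha)
lhs9 = rhs7.diff(t)                        # d/dt of the right side of (7)
rhs9 = u.diff(t) * (r*sp.exp(u) - alpha)   # right side of (9)
print(sp.simplify(lhs9 - rhs9))            # prints 0, so (7) integrates (9)
```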
Numerical Explorations and Analysis

We begin by investigating the numerical solutions to the SIR epidemic model (1) and (3) with the initial conditions (2), computed with a Fehlberg fourth-fifth order Runge-Kutta method with a degree-four interpolant [27][28][29][30]. This method, also known as the Runge-Kutta-Fehlberg (RKF45) method, offers several advantages over traditional fixed-step methods such as the standard fourth-order Runge-Kutta method: adaptivity, higher accuracy, efficient error estimation, versatility, and wide applicability.

In Figure 1, we present the numerical solutions to the SIR epidemic model (1) and (3) for the case α = 2, r = 1/5, I(0) = 25, S(0) = 75, R(0) = 0 from [10]. The representation is as follows: the solid black line corresponds to S(t), the dashed blue line to I(t), and the dash-dotted red line to R(t). The solutions are in units of people, and t is in months.

Here, we emphasize an essential property of the SIR model. The "positivity property" is a key concept in epidemiological modeling: variables such as the number of individuals in the susceptible, infected, or recovered compartments cannot be negative, because in the context of infectious disease dynamics it is impossible to have a negative number of individuals in any of these compartments. Therefore, the positivity property is an essential characteristic that must be respected when modeling the spread of infectious diseases.

In Figure 2, we present the parametric solutions to the SIR epidemic model for the case α = 2, r = 1/5 with I(0) = 25, S(0) = 75, R(0) = 0. The parametric solutions to the SIR epidemic model offer several advantages in understanding and analyzing the dynamics of infectious diseases: in essence, they facilitate a comprehensive understanding of infectious disease dynamics, providing insights into how different factors influence the spread and control of diseases and aiding in the development of effective public health strategies. Figure 2. Parametric solutions to the SIR epidemic model (1) and (3) with the initial conditions (2), for the case α = 2, r = 1/5, I(0) = 25, S(0) = 75, R(0) = 0 from [9].
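To reproduce this kind of numerical baseline, the sketch below integrates system (1) with SciPy's adaptive RK45 solver (an embedded Runge-Kutta pair with a fourth-order interpolant, close in spirit to the RKF45 used in the paper, though not the identical Fehlberg pair) using the first test case's parameters; the time horizon and tolerances are my choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, alpha = 1/5, 2.0          # parameters of the first test case
S0, I0, R0 = 75.0, 25.0, 0.0

def sir(t, y):
    S, I, R = y
    return [-r*S*I, r*S*I - alpha*I, alpha*I]

sol = solve_ivp(sir, (0.0, 10.0), [S0, I0, R0], method='RK45',
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0, 10, 200)   # t in months
S, I, R = sol.sol(t)
print(S[-1], I[-1], R[-1])    # compartment sizes (in people) at the horizon
```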
In addition, to obtain a good approximation for the SIR epidemic model, we employ the Adomian decomposition method (ADM), a useful and powerful technique for solving nonlinear ordinary and partial differential equations. Its advantages lie in its simplicity, efficiency, and applicability to a wide range of nonlinear problems. Applied to the SIR epidemic model, the ADM provides robust approximate solutions: it decomposes the nonlinear differential equations governing the dynamics of the susceptible S, infected I, and recovered R populations into a series of functions, enabling the iterative, term-by-term determination of the solution series. Meanwhile, it is essential to keep the computational volume low and the convergence of the approximate solutions fast, and employing the ADM with these factors in mind is crucial for effective progress. The iterative process of the ADM involves computing the Adomian polynomials and their associated coefficients; this generates an approximate solution that can be refined by including additional terms in the series expansion. Utilizing the ADM thus yields an analytical approximation of the solution to the SIR epidemic model, which proves instrumental in understanding disease dynamics, predicting its behavior under diverse conditions, and evaluating the effects of interventions or parameter variations on the transmission of infectious diseases.

In Figure 3, we present a comparison between the numerical solutions to the SIR epidemic model and the third- and fourth-order solutions obtained using the ADM for the initial conditions I(0) = 25, S(0) = 75, R(0) = 0 with the parameters α = 2, r = 1/5 [9,10]. The visual representation is as follows: the dashed blue line depicts the numerical solution, the dotted black line the third-order ADM solution, and the solid red line the fourth-order ADM approximation. A remarkable agreement between the numerical solution and the fourth-order ADM solution is evident for the function S(t), and the agreement between the numerical and fourth-order solutions for the I(t) and R(t) functions is also adequate.

In Figure 4, we present a comparison between the numerical solutions to the SIR epidemic model and the fifth-order ADM solutions for the initial conditions I(0) = 7, S(0) = 254, R(0) = 0 with the parameters α = 2.73, r = 0.0178, which correspond to a scenario studied by Khan et al.
[9,10] to simulate the 1966 bubonic plague outbreak in Eyam, England. The figure displays the numerical solution as solid lines and the fifth-order ADM solution as dashed lines; S, I, and R correspond to the black, blue, and red lines, respectively. It is evident that the numerical solution and the fifth-order ADM solution agree closely for small values of t, and for large values of t (in months) the agreement remains adequate. The solutions are given in units of people. The right panel demonstrates an excellent alignment between the numerical solutions and the sixth-order ADM solutions, highlighting a significant advantage of the ADM: even with low orders of decomposition it can achieve remarkable agreement with the numerical solutions.

Figure 5 showcases a comparative analysis between the numerical solutions to the SIR epidemic model and higher-order ADM solutions for the initial conditions I(0) = 2, S(0) = 4206, R(0) = 0 with the parameters α = 0.0164 and r = 2.9236 × 10^{-5}. In the left panel, a comparison between the seventh-order ADM solutions and the numerical solutions reveals a notable discrepancy at larger time values, whereas in the right panel the eighth-order ADM solutions notably improve the agreement. This scenario refers to a study of the COVID-19 outbreak in Japan [10,31]. A slight increase in the order of decomposition in the ADM thus yields improved concordance between the numerical solutions and those derived via the ADM. The ADM offers a valuable approach that delivers a strong approximation to the SIR epidemic model using lower-order decompositions, in contrast to other methods that require higher-order series expansions [10]. By increasing the order of decomposition, we can also handle situations where both α and r are small while maintaining the required level of accuracy. The proposed technique therefore offers a more efficient approach: it provides a solution of comparable quality while requiring fewer terms in the series expansion. This efficiency has practical implications, such as reducing computational resources, simplifying calculations, and speeding up the modeling process; in essence, the proposed technique obtains accurate solutions to the SIR model while streamlining the computational demands associated with the referenced methods [10]. Moreover, the positivity property persists in all the obtained results (Figures 3-5).

Further, by comparing the results obtained through the ADM with the numerical solutions, we gain valuable insight into the convergence behavior: if the ADM solutions closely match the numerical solutions across a variety of scenarios, the convergence is favorable, and this information can be instrumental in improving the accuracy and reliability of the ADM approximation. The inclusion of additional terms in the series expansion can improve convergence, and as we progressively refine the solution we assess the convergence behavior accordingly (Figures 4 and 5).
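As a concrete illustration of the ADM recursion used throughout this section, the following sympy sketch generates the first few series components for system (1) directly (rather than for the transformed system (5)), assuming the standard choices L^{-1} = ∫_0^t and Adomian polynomials for the product nonlinearity SI; the truncation order and all names are mine, not the paper's.

```python
import sympy as sp

t, r, alpha = sp.symbols('t r alpha')
lam = sp.symbols('lambda_')            # grouping parameter for Adomian polynomials
S0v, I0v = sp.Rational(75), sp.Rational(25)

N = 4                                   # number of series terms to generate
S = [S0v] + [None]*N
I = [I0v] + [None]*N

def adomian_poly(comps_S, comps_I, n):
    """n-th Adomian polynomial A_n for the nonlinearity N(S, I) = S*I."""
    Ssum = sum(c*lam**i for i, c in enumerate(comps_S[:n+1]))
    Isum = sum(c*lam**i for i, c in enumerate(comps_I[:n+1]))
    return sp.diff(Ssum*Isum, lam, n).subs(lam, 0) / sp.factorial(n)

for n in range(N):
    A_n = adomian_poly(S, I, n)
    # L^{-1} = integral from 0 to t, applied to each equation of (1)
    S[n+1] = sp.integrate(-r*A_n, (t, 0, t))
    I[n+1] = sp.integrate(r*A_n - alpha*I[n], (t, 0, t))

S_approx = sp.expand(sum(S))            # truncated ADM series for S(t)
print(S_approx)
```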
Conclusions

In conclusion, this research offers a comprehensive examination of the SIR epidemic model, covering its careful formulation and employing innovative methodologies. The upper and lower bounds technique provides valuable insights into the interactions among the susceptible, infected, and recovered populations and enhances our theoretical understanding through the derivation of an existence and uniqueness theorem. Moreover, the application of the Adomian decomposition method demonstrates its effectiveness in yielding highly accurate approximate solutions to the SIR model.

Furthermore, the validation process, which involves numerical exploration across diverse experimental parameters, underscores the accuracy and robustness of the proposed approach. This numerical analysis sheds light on the SIR model's sensitivity and adaptability under varying conditions, enriching our broader comprehension of infectious disease dynamics.

The suggested technique is notably efficient: it provides accurate, satisfactory solutions while using fewer terms of the ADM series expansion. This efficiency has significant practical applications, such as reducing the demand for computational resources, streamlining calculations, and expediting the modeling process.

This study has created a strong framework for studying the SIR model by combining theoretical formulation with numerical exploration. The findings have significant implications for enhancing our predictive capabilities and improving strategies to manage and reduce the transmission of infectious diseases effectively.

Figure 4. ADM and numerical solutions to the SIR epidemic model (1) and (3) with the initial conditions (2), for the case α = 2.73, r = 0.0178, I(0) = 7, S(0) = 254, R(0) = 0 from [9]. (Left panel) The solid lines correspond to the numerical solution, and the dashed lines to the fifth-order ADM solution. (Right panel) The dotted lines represent the sixth-order ADM solution. Black lines: S; blue lines: I; red lines: R.
4,596.2
2024-03-16T00:00:00.000
[ "Mathematics", "Medicine" ]
Inclusion Relations between α-Modulation Spaces and Triebel-Lizorkin Spaces

Introduction

The modulation space M^s_{p,q} was first introduced by Feichtinger [1] in 1983 via the short-time Fourier transform. The modulation space has a close relationship with the topics of time-frequency analysis (see [2]), and it has been regarded as an appropriate space for the study of partial differential equations (see [3][4][5]). The α-modulation space was introduced by Gröbner [6] to link Besov and modulation spaces through the parameter 0 ≤ α ≤ 1. One can find some basic properties of α-modulation spaces in [7,8]. Among the many features of the α-modulation spaces, an interesting subject is the inclusion relations between α-modulation spaces and other function spaces, a topic that has drawn the attention of many authors; see [8][9][10][11]. As for applications, α-modulation spaces have quite recently been applied in the field of partial differential equations. In [12], Misiolek and Yoneda proved local ill-posedness of the Euler equations in the frame of α-modulation spaces. Furthermore, Han and Wang [13] proved global well-posedness for the nonlinear Schrödinger equations on α-modulation spaces, and [14] studied the Cauchy problem for the derivative nonlinear Schrödinger equation on α-modulation spaces.

Remark 1. Modulation spaces are the special α-modulation spaces with α = 0, so our theorems also hold in the special case α = 0.

In this research, we are interested in studying the inclusion relations between the α-modulation spaces M^{s,α}_{p,q} and the Triebel-Lizorkin spaces F_{p,r} for p ≤ 1, which greatly improves and extends the results on the inclusion relations between local Hardy spaces and α-modulation spaces obtained by Kato in [10].

Suppose that c > 0 and C > 0 are two appropriate constants, related to the space dimension n, and that a sequence of Schwartz functions {η_k^α}_{k∈ℤ^n} satisfies the usual support, boundedness and covering conditions. Then {η_k^α}_{k∈ℤ^n} constitutes a smooth decomposition of ℝ^n. The frequency decomposition operators associated with the above function sequence are defined by □_k^α = F^{-1} η_k^α F for k ∈ ℤ^n. Let 0 < p, q ≤ ∞, s ∈ ℝ, and α ∈ [0, 1). Then the α-modulation space associated with the above decomposition is defined by the weighted ℓ^q sum of the block norms ‖□_k^α f‖_{L^p}; see the standard form recalled after Remark 6.

Remark 6. We recall that the above definition is independent of the exact choice of the decomposition (see [8], Proposition 2.3). Also, for sufficiently small δ > 0, one can construct such a function sequence ([15]; [9, Appendix A]).
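The displayed norm did not survive above; the standard form found in the literature (e.g., [7,8]) is the following, with the caveat that normalizations vary slightly between papers, so this is a reconstruction rather than a quotation:

```latex
\|f\|_{M^{s,\alpha}_{p,q}}
  = \Big( \sum_{k \in \mathbb{Z}^n}
      \langle k \rangle^{\frac{sq}{1-\alpha}}
      \big\| \Box_k^{\alpha} f \big\|_{L^p}^{q} \Big)^{1/q},
\qquad \langle k \rangle = (1 + |k|^2)^{1/2},
\qquad \Box_k^{\alpha} = \mathcal{F}^{-1} \eta_k^{\alpha} \mathcal{F},
```

with the usual modification when q = ∞.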
Let Q be a cube in ℝ^n and m > 0; then mQ denotes the cube in ℝ^n concentric with Q and with side length m times the side length of Q. Let a ∈ ℝ; then a_+ = max(a, 0), and [a] stands for the largest integer less than or equal to a.

Main Results

Now, we state our main results as follows. We prove the following two propositions, which are used in the proof of Theorem 12.

Proof. Take a nonzero smooth function whose Fourier transform has small support, such that □_k f = f and □_ℓ f = 0 if ℓ ≠ k; estimating both sides for this test function yields the desired conclusion. By the definition of the α-modulation space M^{s,α}_{p,q}, the corresponding norm can be computed directly. On the other hand, using the orthogonality of the decomposition operators {□_k}_{k∈ℤ^n} and letting N → ∞, we obtain (32).

Proof. Let a nonzero Schwartz function whose Fourier transform has compact support in a set of the form {ξ : |ξ| ≤ c} be given. We first prove the first inclusion. By the definition of the α-modulation space, we can estimate the M^{s,α}_{p,q} norm of the test function. On the other hand, we turn to the estimate of the F_{p,r} norm; using the orthogonality of the decomposition and letting N → ∞, we obtain (42).
1,078
2020-01-30T00:00:00.000
[ "Mathematics" ]
Communication Emitter Motion Behavior's Cognition Based on Deep Reinforcement Learning

Considering the successful application of deep reinforcement learning (DRL) to tasks involving moving objects, this paper applies the deep deterministic policy gradient (DDPG) algorithm to the cognition task for multi-dimensional, continuous communication emitter motion behavior. First, we propose a DDPG-based behavior cognition algorithm (DDPG-BC). It takes direction, velocity, and communication frequency as the state space, gains experience from the interaction between the network and the environment, and outputs deterministic cognition results. Second, under the condition of sufficient prior information, such as geographic information, we further propose a novel algorithm named DDPG-based behavior cognition with attention (DDPG+A-BC). It introduces an attention mechanism into DDPG-BC, which limits the exploration scope and the randomness of the initial state and improves exploration efficiency and accuracy. Simulation experiments verify that DDPG-BC and DDPG+A-BC show good cognition ability on two different data sets, and that both algorithms are superior to another DRL algorithm and an existing cognition method, achieving higher cognition accuracy in less time. In addition, we discuss the influence of the training episode, the reward function, and the added attention mechanism on algorithm performance.

I. INTRODUCTION

Nowadays, with the help of various positioning technologies, we can quickly obtain a large amount of moving-object data. However, when mining the information carried by moving objects, simple observation and tracking no longer meet our goals and needs. Instead, we hope to explore what happens behind the movement to enrich the information content of objects for better control and decision support. Therefore, the cognition of people, animals, vehicles and other moving objects has become a current research hotspot. The purpose of this paper is to analyze and cognize the motion behavior of a communication emitter and its carrier in motion and to find the corresponding possible causes. For example, as shown in FIGURE 1, when a communication emitter and its carrier or platform go through an area, the original plan is to go straight (planned route), but near a source of interference, communication performance may degrade. For example, obstacles like buildings cause refraction and diffraction of the propagating signal, and places like airports add considerable noise to the communication channel. The emitter may choose diversion (angle change), acceleration (velocity change), or a change of communication frequency in order to avoid the interference and maintain communication ability (actual route). The cognition process for emitter motion behavior is to discover the corresponding possible causes, or to determine whether interference or a strike has occurred, on the premise of mastering the changes of direction, velocity and communication frequency. The data describing communication emitter motion behavior are usually multi-dimensional, continuous, and limited. However, recent research on motion behavior cognition prefers to classify discrete behaviors and analyze the category to which each behavior belongs, which is inconsistent with the emitter's characteristics and increases the preprocessing workload. Zitouni et al.
[1] proposed a probabilistic formulation of different categories of socio-cognitive crowd behavior and a framework that can be considered a mid-level layer between detection and detailed semantics, but it aimed to evaluate the probabilities of various socio-cognitive behaviors and compared the model outputs to manually annotated ground-truth data. Goldberg et al. [2] used a meta-cognitive model (MCM) for future hurricane evacuation, combining past behavior and subjective confidence. [3] proposed a conceptual framework named experience-oriented intelligent things (EOIT) to extract driving behavior fingerprints, which requires sufficient experience obtained by collecting driving data in advance. At the same time, Hornischer et al. [4] regarded spatial information as a minimal definition of a cognitive map and developed a minimal model of agents that explore the environment by sampling trajectories. This means the formation of internal cognition is related to the spatial overlap of cognitive maps, so the introduction of geographic information can effectively support the cognitive process. In the process of cognizing communication emitter motion behavior, we hope that: 1) specific cognition results will be obtained directly, rather than the probabilities of different results; 2) experience can be gained through the interaction between the network and the environment of the cognition model, without requiring a lot of subjective experience; 3) the exploration efficiency of the network can be improved; 4) the cognitive results will be more objective, depending only on the physical parameters of the emitter. By achieving the above effects, the algorithm proposed in this paper not only conforms to the characteristics of emitter motion data but also realizes the control of cognition through the pursuit of rewards, and DDPG, a DRL algorithm, is a natural choice for solving the problem. Therefore, this paper proposes a cognition method for communication emitter motion behavior based on DDPG. The main contributions are as follows: 1. Two cognition algorithms for communication emitter behavior, DDPG-BC and DDPG+A-BC. DDPG-BC cognizes communication emitter motion behavior based on the DDPG algorithm. With an attention mechanism introduced, it further evolves into DDPG+A-BC, which explores at attention positions and achieves a better cognition effect. 2. Verification of, and related discussion on, the performance and effect of DDPG-BC and DDPG+A-BC.

II. RELATED WORK

In order to realize the cognition of communication emitter motion behavior, in combination with the information characteristics provided by the moving emitter, this paper explores the autonomous learning ability of DRL algorithms in the cognition of emitter behavior and observes whether the introduction of an attention mechanism helps improve the learning efficiency of the network.

A. BEHAVIOR COGNITION

Behavior modelling and activity interpretation are of increasing interest in the information society [5]. Research on behavior cognition mainly spans computer science, networks, and social psychology, and the research targets mainly include humans [6], animals [7], traffic [8], [9] and robots [10]. The Google team proposed in 2006 that a motion behavior cognition system should be composed of the four modules ''sensor-identification-transformation-controlled system (SITR)'' [11].
When the sensor receives the raw data of moving objects, it classifies and processes the raw data corresponding to various behaviors, then translates the various kinds of data into behaviors, and finally realizes the cognition and control of behaviors. Pei et al. [5] proposed the Context Pyramid for cognizing human behavior using smartphone sensors and divided it into six levels: raw sensor data, physical parameters, features/patterns, simple contextual descriptors, activity-level descriptors, and rich context. The basic idea of motion behavior cognition is that, given a tracked feature or object, its time series should provide a descriptor that can be used in a general cognition framework [12]. Whether one works with known features or with raw physical data to be processed, correct cognition requires that the behavioral parameters be sufficiently descriptive and generally present whenever a certain behavior occurs. [13] and [14] described human behavior using Wi-Fi channel state information (CSI) and modelled CSI data based on body movement. With the development of science and technology, this means that, as long as the relevant data can be obtained, motion parameters such as velocity and direction and emitter signal parameters such as communication frequency can participate in the cognition of the target's motion behavior.

B. DEEP REINFORCEMENT LEARNING

It is not difficult to find that the cognition of motion behavior puts forward higher requirements for the selection of features. The features acquired by deep learning (DL) often carry semantic content and strong discriminative ability, which can represent behavior characteristics more effectively [15]. The generation and development of reinforcement learning (RL) were inspired by behavioral psychology: states and actions in an RL network interact with each other in the environment. However, classical RL can only deal with low-dimensional state and action spaces, so the success of deep neural networks on large training data sets motivated the emergence of DRL, which can be applied directly to data and processes training samples using stochastic gradient updates [16]. Mnih et al. [17] developed a novel agent, the deep Q-network (DQN), to create a single algorithm able to develop a wide range of competencies on a varied range of challenging tasks. DQN interacts with the environment through a series of observations, actions, and rewards and can be used for RL tasks with discrete actions. Actions are selected in a way that maximizes the accumulation of future rewards, and deep neural networks are used to approximate the optimal action-value function. Although the DQN algorithm performs excellently in various applications [18], it still has limitations, such as overestimation and the inability to handle continuous-action problems. Because the DQN algorithm has difficulty calculating the probability of each action, or the corresponding Q values, in a large continuous action space, Lillicrap et al. [19] proposed the DDPG algorithm in 2015 to apply DRL to tasks with continuous action spaces. DDPG is a widely used DRL algorithm that can learn ''end-to-end'' policies in high-dimensional, continuous action spaces [20]. DDPG provides a model-free algorithm based on the deterministic policy gradient (DPG); it has both Actor and Critic systems and combines two kinds of RL algorithm, value-based (such as Q-learning) and action-probability-based (such as policy gradient, PG).
In addition to the Actor-Critic framework, the DDPG algorithm uses the same learning algorithm, network structure, and hyperparameters as DQN. Hausknecht and Stone [21] focused on using deep neural networks in structured (parameterized) continuous action spaces, presented a successful extension of DRL to the class of parameterized-action-space MDPs, and prepared the ground for learning in continuous, bounded action spaces. Silver et al. [22] proposed an off-policy Actor-Critic algorithm that learns a deterministic target policy from an exploratory behavior policy and used DPG as the RL algorithm for continuous actions. DPG obtains the expected gradient of the action value by learning an approximation of the action-value function (Q function) and updates the deterministic policy via the chain rule, making the estimation more effective [23]. Since the DPG algorithm can solve the problem of high-dimensional continuous action spaces and combines the advantage of DQN, which takes a high-dimensional state space as input, with an Actor-Critic framework, the DDPG algorithm is able to handle continuous action control tasks.

C. ATTENTION MECHANISM

The attention mechanism, a methodology derived from human attention, selects specific inputs. It enables practitioners to adjust the direction of attention and the weighting of the model according to the specific task and objects, with the goal of reducing sequential computation costs [24]. The attention mechanism is realized by adding attention weights in the hidden layer so that content that does not conform to the attention model is weakened or forgotten. Attention mechanisms are mainly applied to learning weight distributions and to task focus. Task focus means designing different network structures (or branches) through task decomposition to reduce the training difficulty of the original task. Learning a weight distribution means paying different amounts of attention to different parts of the input data, which can act on the original image, on spatial scales, or on historical features of different moments. [25] explained that the attention mechanism uses standard back-propagation techniques to stochastically maximize a variational lower bound and divided attention into two variants: the ''hard'' attention mechanism and the ''soft'' attention mechanism. ''Hard'' attention takes hard decisions when choosing parts of the input data, while ''soft'' attention takes the entire input into account, weighting each part of the observations dynamically [26]. One of the long-standing challenges for RL agents is dealing with noisy environments [27]. Inspired by human perception, two basic concepts of machine learning, attention and memory, can be used to better cope with noisy environments and handle more complex tasks. This coincides with the design principles and processing power of DRL algorithms and has led to research on, and applications of, the combination of DRL and attention mechanisms in the fields of robotics and autonomous driving. Sorokin et al. [28] presented an extension of DQN with ''soft'' and ''hard'' attention mechanisms and proposed the deep attention recurrent Q-network (DARQN), which directly monitors the training process online through a built-in attention mechanism. To sum up, in this paper we choose to combine the DDPG algorithm with an attention mechanism in order to complete the motion behavior cognition task for communication emitters. We propose the motion behavior cognition algorithms DDPG-BC and DDPG+A-BC and verify their feasibility and performance.

III. DDPG-BASED BEHAVIOUR COGNITION FOR COMMUNICATION EMITTER

A.
PROBLEM ANALYSIS

The analysis of the motion behavior of a communication emitter is based on the motion trajectory and signal characteristic parameters of the emitter. These parameters of the target emitter are extracted as raw data, from which valid physical parameters are selected, and the motion state is obtained after preprocessing. The motion state is the input of the DDPG-based behavior cognition module. When prior knowledge meets the conditions, specific attention can be added to the cognition module to help the cognition process (the strategy learning process) explore and learn. Finally, cognitive results are obtained to judge the working status of the emitter and its platform or carrier. The cognition process for communication emitter behavior is shown in FIGURE 2.

B. DDPG-BC

One of the main challenges of learning in continuous action spaces is exploration, and one of the great advantages of algorithms like DDPG is that exploration can be handled independently from the learning algorithm. The DDPG algorithm is an improvement of the DPG algorithm. On the basis of PG, DPG takes the state space as the input of the algorithm's model, but the output is no longer the probability of a certain action: a deterministic action value, corresponding to a specific action, is obtained through the optimal action policy function µ_θ(s), where s is the state and θ is the policy parameter. Compared with the DPG algorithm, DDPG adds deep neural networks and takes Actor-Critic as its basic framework. DDPG imitates the idea of DQN: it uses a memory tank and pairs of neural networks with the same structure but different parameter-update frequencies to approximate the policy function µ(s; θ^µ) and the value function Q(s, a; θ^Q), respectively, which makes the learning process more effective and stable. Here a is the action, θ^µ is the policy network parameter, and θ^Q is the value network parameter. Meanwhile, the Actor can easily select appropriate actions in the continuous action space, while the Critic updates step by step and evaluates the selected actions by learning the relationship between the environment and the rewards. In addition, DDPG introduces experience replay to remove correlation and dependency between samples when the Actor interacts with the environment. The experience pool stores the state at time t, the action, the reward, and the state at time t + 1, i.e., (s_t, a_t, r_t, s_{t+1}), as experience and, each time, samples small batches of data from the experience pool as training samples for the policy and value networks. On the one hand, letting Q, µ and Q′, µ′ be the Critic network, the Actor network, and the corresponding target networks, the target Q value can be expressed as y_t = r_t + γ Q′(s_{t+1}, µ′(s_{t+1}; θ^{µ′}); θ^{Q′}). By minimizing the loss function L = (1/bs) Σ_t (y_t − Q(s_t, a_t; θ^Q))², the Critic network is updated. On the other hand, defining the target function J as the expectation of the discounted accumulated rewards, finding the optimal deterministic behavior policy µ* is equivalent to maximizing J, and the Actor network is updated via the deterministic policy gradient ∇_{θ^µ} J ≈ (1/bs) Σ_t ∇_a Q(s, a; θ^Q)|_{s=s_t, a=µ(s_t)} ∇_{θ^µ} µ(s; θ^µ)|_{s=s_t}. Finally, the target networks are updated softly: θ′ ← τθ + (1 − τ)θ′. FIGURE 3 shows the Actor-Critic network structure used in this paper. The Actor network has three layers and chooses the optimal action. The Critic network has four layers, comprising an input layer, two hidden layers and an output layer, which are used to train and generate Q values and to update the Actor network. The Actor network selects an action and sends it into the environment, and the experience obtained after the interaction is stored in the experience pool. Each time, bs training samples are drawn from the experience pool and sent to the dual networks.
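The update equations above translate directly into code. The paper's implementation uses TensorFlow 2.0 with the layer sizes of TABLE 1; the following PyTorch sketch of one DDPG update step is my own illustration, with placeholder hidden widths and hyperparameters rather than the paper's values.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(),
                                 nn.Linear(64, a_dim), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, actor_t, critic_t, opt_a, opt_c,
                batch, gamma=0.99, tau=0.005):
    s, a, r, s2 = batch                 # tensors of shape (bs, ...), r: (bs, 1)
    # Critic: minimize (y - Q(s, a))^2 with y from the target networks.
    with torch.no_grad():
        y = r + gamma * critic_t(s2, actor_t(s2))
    q_loss = nn.functional.mse_loss(critic(s, a), y)
    opt_c.zero_grad(); q_loss.backward(); opt_c.step()
    # Actor: deterministic policy gradient, i.e. maximize Q(s, mu(s)).
    a_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
    # Soft update of both target networks: theta' <- tau*theta + (1-tau)*theta'.
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```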
The whole learning process is thereby more stable and converges faster. The activation functions and the way the modules work are shown in the diagram, and the network parameters are shown in TABLE 1. DDPG-BC sends the state space of the emitter into the DDPG network, explores a fixed number of steps in each training episode, learns the optimal cognition strategy, and realizes the correct cognition of communication emitter behavior. The pseudocode of DDPG-BC is summarized in TABLE 2.

C. DDPG+A-BC

When observing the real world, a human usually focuses on a few fixation points at first glance of a scene [6]. When the prior information meets the conditions, introducing an attention mechanism is a natural aid for the problem considered in this paper. If we know the trajectories, the geographic activity area, or other relevant information about the moving communication emitter, we can build an attention model that participates in the learning process of DDPG, reduces computing cost, improves learning efficiency and accuracy, and better cognizes the motion behavior of the emitter. According to normal activity experience, hot spots or related areas of concern to moving objects are a major factor affecting behavior. The ''hard'' attention mechanism is generally considered non-differentiable, so it is not as widely used as ''soft'' attention; however, [29] argued that feature magnitudes correlate with semantic relevance and provide a useful signal for an attentional selection criterion. Therefore, instead of a ''soft'' attention mechanism, we introduce an additional, interpretable hyperparameter based on the ''hard'' attention mechanism into the training process. We then use this built-in attention mechanism to focus on attention regions when making selections, so as to improve training speed and accuracy. Suppose the attention region the model decides to focus on at time t is M_t, which contains L positions in total. If we want the model to extract features at the i-th of the L positions, the attention position M_{t,i} is taken as the start of exploration. A schematic diagram of the attention mechanism based on geographic information is shown in FIGURE 4. When a communication emitter travels along a given route (blue dotted line), it passes over an attention region (shaded area) that affects the normal operation of the emitter or the movement of its platform/carrier. Therefore, geographic information can be added to the cognition process as the attention module in FIGURE 2: it delimits the scope of network exploration and focuses cognition on the attention route (solid red line) for better cognition efficiency and accuracy. It should be noted that geographic information is not the only background information that can function as an attention mechanism; other information can also be used to set the scope of attention and act on network learning and exploration. After analyzing DDPG-BC, we believe the algorithm may have problems with cognition tasks, one being that the selection range of actions is too wide and the randomness too strong. Following the idea elaborated above, this paper proposes DDPG+A-BC: based on DDPG-BC, an attention mechanism is added before the exploration process to determine attention positions according to geographic information. We intend to obtain the initial state in or near the attention positions and to keep exploration within the attention region after any action, which limits the exploration scope (action selection) and improves exploration efficiency.
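In code, the attention mechanism described here amounts to two small operations around the standard DDPG loop: drawing the initial state from the attention positions and keeping explored states inside the attention region. The sketch below is a minimal illustration under that reading; the noise scale, the box-shaped region, and all names are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def reset_with_attention(attention_positions, noise=0.01):
    """Start an episode at (or near) a randomly chosen attention position.

    attention_positions: array of shape (L, state_dim), one row per position.
    """
    s0 = attention_positions[rng.integers(len(attention_positions))]
    return s0 + rng.normal(0.0, noise, size=s0.shape)

def clip_to_region(state, region_low, region_high):
    """Keep the explored state inside a box-shaped attention region."""
    return np.clip(state, region_low, region_high)
```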
The pseudocode of DDPG+A-BC is summarized in TABLE 3.

IV. EXPERIMENTS RESULTS

Because there is no publicly available data set of communication emitters, we use two simulated experimental data sets to verify the performance of DDPG-BC and DDPG+A-BC; each is mainly divided into two parts, spatio-temporal data and signal parameter data. The spatio-temporal data are derived from actual data sets, and we add communication frequency data for each sampling point to construct the data sets for simulation. Python 3.6 and TensorFlow 2.0 are used for the implementation.

A. DATA AND ENVIRONMENT

We define <v, ϕ, f> as the state space of the communication emitter behavior cognition network, where v, ϕ, and f represent the sequences of change values of velocity, direction (angle), and communication frequency between sampling points, respectively. The Agent continuously selects the action to be performed, analyzes the corresponding state parameters, and outputs cognition results. The definitions of the parameters at time t and of the cognition results are shown in TABLE 4 and TABLE 5. We assume that the working mode of the radio is abnormal when communication frequency conversion occurs. The detailed cognition criterion for frequency conversion is given in the description of the data sets below, and the definition of the cognition results can be adjusted according to the model settings.

(1) The spatio-temporal data of data set 1 are obtained from publicly available flight trajectory data provided by Flightradar24. The simulation data set consists of 5 categories, each representing the motion trajectory of the same communication emitter. There are 140 groups of data with 150-300 sampling points in each group. In data set 1, according to the characteristics of the simulation data, A3 occurs when v, ϕ, and f of the emitter are greater than 35, 150, and 500 kHz, respectively, at the same position/region. In the motion data of the same emitter, if the motion state of A3 occurs only occasionally in a certain place, it is judged as A2; at this time, sharp and large changes in the state parameters usually occur. The rest are all determined to be A1.

(2) The spatio-temporal data of data set 2 come from the Geolife project [30]-[32] of Microsoft Research Asia. In order to increase the diversity and complexity of the motion states in the simulation data, nine groups of pedestrian trajectories from the Geolife Trajectory 1.3 data set are adopted, with a total of 6,862 sampling points. In data set 2, according to the characteristics of the simulation data, A3 occurs when v, ϕ, and f of the emitter are consecutively greater than 10, 100, and 500 kHz, respectively, in a certain region. In the motion data of the same emitter, if the motion state of A3 happens only once in a certain place, it is judged as A2. The rest are all determined to be A1.

B. EXPERIMENTS RESULTS OF DDPG-BC

FIGURE 5 shows the cognition results of DDPG-BC, displayed by highlighting the experimental results (red represents A3 and blue represents A2). For all the simulation data graphs in this paper, the coordinates of all positions are expressed as longitude and latitude. To measure the performance of the algorithm, we use the accuracy, which can be calculated from P(i|j), the number of samples whose actual class is i while the cognition result is j (i, j = A1, A2, A3), as accuracy = Σ_i P(i|i) / Σ_{i,j} P(i|j) × 100%. By observing FIGURE 5 and TABLE 6, it can be found that DDPG-BC is able to perform the cognition task for emitter behavior, and the accuracy is 90.434%.
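As a concrete reading of this metric, the sketch below computes the accuracy from a confusion-matrix layout of P(i|j); the numbers in the matrix are made up for illustration and are not the paper's results.

```python
import numpy as np

# P[i, j]: number of samples whose true class is i and cognition result is j,
# with classes ordered (A1, A2, A3). Values below are illustrative only.
P = np.array([[480, 12,  8],
              [ 15, 55, 10],
              [  5,  9, 66]])

accuracy = np.trace(P) / P.sum() * 100.0   # percentage of correct cognitions
print(f"{accuracy:.3f}%")
```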
However, A2 and A3 cannot be well differentiated. The cognition results of data set 2 with DDPG-BC are shown in FIGURE 6. Combining FIGURE 6 and TABLE 7, we find that the cognition results of DDPG-BC roughly conform to the experimental setting, with an accuracy of 85.427%. The accuracy is reduced by the complexity of the data, and again A2 and A3 are not well distinguished.

C. EXPERIMENT RESULTS OF DDPG+A-BC

In data set 1, considering the data characteristics and the situation, we define the same region as the exploration region (marked by a box) for three groups to limit the scope of exploration, as shown in FIGURE 7. A3 positions are then regarded as attention positions, from which the initial state of the network is randomly selected. It should be noted that the benefit of adding the attention mechanism depends on how well the attention region is chosen; for example, cognition of A2 can be missed because of the limitation of the attention region in the experiment. From TABLE 8, it is obvious that the cognition accuracy is greatly improved compared with DDPG-BC, reaching 99.029%, and the discrimination between A2 and A3, computed according to Eq. (8), increased from 71.317% to 93.153%, indicating that using geographic information as attention in the cognition process for communication emitter behavior can effectively improve cognition performance. FIGURE 8 shows the final cognition results of data set 2 with DDPG+A-BC. Combined with TABLE 9, it can be found that the cognition accuracy increased by about 7% compared with DDPG-BC, reaching 92.495%, and the discrimination between A2 and A3 increased from 73.383% to 81.876%.

V. DISCUSSION

A. DISCUSSION FOR TRAINING EPISODE

The emitter behavior cognition network is trained for 200 and 500 episodes, respectively, with 200 exploration steps per episode. The Losses, Qvalues, and TotalRewards curves in FIGURE 9 are used to observe the training results of the network, and the dotted line in TotalRewards represents the threshold for determining whether the network learns correctly. The training goal of the DDPG network is to maximize the target function (rewards) and minimize the loss of the value network. After 200 episodes of training (FIGURE 9(a)), the loss is approximately zero but has not converged, and the action value (Q value) is unstable. After 500 episodes (FIGURE 9(b)), the losses and cumulative rewards converge, correct cognition is achieved after 235 episodes, and the Q value tends to be stable.

B. DISCUSSION ON REWARD FUNCTION

Another problem of DDPG-BC is that the reward function may make the difference between the cumulative rewards of A2 and A3 too small after training, leaving the Agent unable to accurately distinguish the two situations. In this article, two reward functions are applied to the cognition network to discuss the influence of the reward function on the cognition process. We referred to the reward function of 'MountainCarContinuous-v0', a continuous control environment from gym, and to that of the automated vehicle behavior decision making proposed in [8]. Eq. (9) and Eq. (10) are used to observe the influence of the reward functions; r2 changes the reward for A3 in r1 from a fixed value to one associated with the state value, and during training the reward is provided continuously. FIGURE 10 shows the average cumulative reward of r1 and r2 for DDPG-BC and DDPG+A-BC, respectively. With either algorithm, r1 achieves the correct cognition effect faster than r2, and with r2 the attention fails to have a beneficial effect on the cognition process.
In addition, this indicates that a reward tied to state values is not conducive to network learning and may even reduce the efficiency of the cognition process. FIGURE 11 (Comparison in TotalRewards) shows the average cumulative reward after 500 episodes. DDPG+A-BC is able to cognize correctly within the first 18 episodes and performs stably after the 19th episode, while DDPG-BC does not gain this ability until after 235 episodes. In addition, as shown in TABLE 10, the average training time of DDPG+A-BC is about 60 s shorter than that of DDPG-BC, which shows that DDPG+A-BC can improve the efficiency of cognition.

D. COMPARISON WITH DQN

Due to the limitations of the DQN algorithm in practical applications, researchers proposed the Double-DQN algorithm [33], [34] to solve the overestimation problem of DQN. Double-DQN also has two Q-network structures: by decoupling the selection of the target Q-value action from the calculation of the target Q value, the network can avoid overestimation while approaching the optimal target as quickly as possible. In Section III-B, we mentioned that the value network of DDPG is based on the Q network and that the experience replay of the DQN algorithm is adopted to eliminate the correlation between samples. Therefore, we can observe whether the algorithms in this paper perform better by comparing the behavior cognition results based on DDPG and Double-DQN. Since DQN can only deal with discrete actions, the selection of actions is limited to a certain range, divided equally into 11 actions (cf. 'Pendulum-v0' in gym) for the network to choose from. FIGURE 12 and TABLE 11 respectively show the average loss and the average training time of the DDPG and Double-DQN networks. It can be seen intuitively that the loss of DDPG converges faster and that DDPG has a shorter average training time; the training time required for DDPG is reduced because of the reduced action space in the comparison experiment. DDQN-BC and DDQN+A-BC denote the methods based on the Double-DQN algorithm, constructed by imitating the DDPG-BC and DDPG+A-BC algorithms. The accuracies of the cognition results under the four algorithms are compared on the different experimental data sets, as shown in FIGURE 13: (a) and (c) show the comparison of the cognition accuracy of DDPG-BC and DDQN-BC on data set 1 and data set 2, respectively; (b) and (d) compare that of DDPG+A-BC and DDQN+A-BC on the two data sets with attention. After comparison and synthesis, it is found that: 1) the accuracy of the cognition results based on DDQN is about 75% of that based on DDPG, so the cognition effect of DDPG-BC and DDPG+A-BC is better; 2) regardless of the algorithm, the introduction of the attention mechanism improves the cognition accuracy; 3) in general, the more complex the cognitive sample, the lower the cognitive accuracy, and vice versa.

E. COMPARISON WITH EOIT

EOIT is a conceptual framework and an experience-based approach [3]. Following EOIT's idea, the first 70% of each group of data in data set 1 is taken as experience. For data set 2, 27 pedestrian trajectories from Geolife Trajectory 1.3 are reselected to constitute the simulation experimental data as experience. Cognition results are obtained based on experience, and accuracy is used to measure performance. FIGURE 14 shows the cognition accuracy of EOIT and of the algorithms in this paper; the cognition accuracy of DDPG-BC and DDPG+A-BC is on average 31.27% higher than that of EOIT.
Since the acquisition of experience requires consideration of all data in the specified range, EOIT takes longer than DDPG+A-BC, as shown in TABLE 12. VI. CONCLUSION In order to cognize the motion behavior of communication emitters, DDPG-BC and DDPG+A-BC are proposed in this paper. Firstly, considering that emitter behavior data are multi-dimensional, large-scale, and continuous, and that DRL has good learning ability and wide application in motion problems, we propose DDPG-BC, based on DDPG, for cognition tasks, taking the change values of velocity, direction, and communication frequency as the state space. DDPG-BC obtains specific cognition results directly and gains experience from the interaction between the network and the environment. We then propose a further cognition algorithm, DDPG+A-BC, which introduces an attention mechanism. In addition to the emitter's physical parameters, it uses geographic information (though it is not limited to it) to focus on attention positions during DDPG network exploration, which limits the exploration scope and the initial randomness of the network and so improves cognition efficiency. The simulation results show that DDPG-BC can complete the cognition task on two different data sets with accuracies of 90.434% and 85.427%, respectively. The addition of the attention mechanism increases the cognition accuracy by 8.311% and 7.068%, leading to more precise cognition results and less cognition time. Compared with the Double-DQN algorithm and the existing cognition method EOIT, the proposed algorithms are superior, with less time and higher accuracy. In addition, the influence of the training episodes, the reward function, and data complexity on the cognition results is discussed. YUFAN JI was born in China, in 1996. She received the B.S. degree in information engineering from the National University of Defense Technology, Hefei, China, where she is currently pursuing the master's degree in information and communication engineering. Her research interests include communication systems, deep learning, reinforcement learning, and data mining.
7,260.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Modulation of Obesity and Insulin Resistance by the Redox Enzyme and Adaptor Protein p66Shc Initially reported as a longevity-related protein, the 66 kDa isoform of the mammalian Shc1 locus has been implicated in several metabolic pathways, being able to act both as an adaptor protein and as a redox enzyme capable of generating reactive oxygen species (ROS) when it localizes to the mitochondrion. Ablation of p66Shc has been shown to be protective against obesity and the insurgence of insulin resistance, but not all the studies available in the literature agree on these points. This review will focus in particular on the role of p66Shc in the modulation of glucose homeostasis, obesity, body temperature, and respiration/energy expenditure. In view of the obesity and diabetes epidemic, p66Shc may represent a promising therapeutic target with enormous implications for human health. Introduction The p66 Shc protein is encoded by the Shc1 locus, together with two shorter isoforms known as p52 Shc and p46 Shc [1,2]. While the last two proteins are generated from the same mRNA using different translation initiation sites [2], p66 Shc is produced from a different exon arrangement at the 5′ end. The structure of p66 Shc and of the Shc1 locus has been extensively reviewed by different authors and will not be discussed in detail in this review [3][4][5]. The three Shc isoform proteins share a common structure, which comprises a phosphotyrosine-binding domain (PTB), a collagen homology 1 (CH1) domain rich in prolines, and a sarcoma homologous type 2 domain (SH2). Shc protein family members are present in mammals, amphibians, fishes, insects (D. melanogaster), nematodes (C. elegans), and yeasts, and a typical characteristic of them is to have the PTB and SH2 domains in the same order from the N- to C-terminus [3,5]. From an evolutionary point of view, p66 Shc appears to be the most recent isoform, as it is found in vertebrates, but not in yeasts, nematodes, and insects. Unlike p52 Shc and p46 Shc , p66 Shc has an additional collagen homology region (CH2); moreover, p46 Shc does not have a cytochrome c binding domain (CB), which is shared between p52 Shc and p66 Shc . Role of p66 Shc in Signal Transduction The three aforementioned Shc proteins also differ in terms of the signaling pathways wherein they are involved. It is well known that p52 Shc and p46 Shc are able to transduce the signal from tyrosine-kinase receptors (RTKs) to the Ras and mitogen-activated protein kinase (MAPK) pathways [1,6] ( Figure 1A). Shc binding to RTKs causes a phosphorylation of three tyrosine residues in their CH1 domain, which is required for the recruitment of the Grb2/Sos1 complex (growth factor receptor-bound protein 2 and son of sevenless 1) at the SH2 domain, which in turn leads to the activation of Ras [4], as Sos1 is a guanine nucleotide exchange factor (GEF). Given its structure, p66 Shc should be able to form the same complexes and activate Ras. However, many studies have indicated that p66 Shc has an inhibitory role on the Ras-MAPK pathway, regardless of its ability to bind Grb2 [2,[6][7][8][9]. It has been proposed that p66 Shc competes with p52 Shc and p46 Shc for the binding of Grb2, causing the disruption of the Grb2/Sos1 complex, and in this context it seems that the phosphorylation of p66 Shc at Ser36 is required [7,10,11]. Therefore, an increased activation of p66 Shc might be enough to inhibit the Ras-MAPK pathway. However, the role of p66 Shc in transducing RTK signals is far from being completely understood.
Figure 1. Role of p66 Shc in signal transduction. (A) p52 Shc and p46 Shc are activated by phosphorylation of tyrosine residues within their CH1 domain when bound to RTKs and possibly other receptors. Subsequently, the recruitment of the Grb2/Sos1 complex allows for the activation of Ras and the MAPK pathway. p66 Shc can compete with the other two isoforms for the binding of Grb2, interfering with Ras activation. (B) After being activated by RTKs, and the concomitant phosphorylation at Ser 36 by kinases such as PKCβ or JNK, p66 Shc is subjected to cis-trans isomerization by Pin1. It then translocates to the inter-membrane space of the mitochondrion, after being dephosphorylated by PP2A. Without stimulation, p66 Shc is bound to other proteins, like HSP70, and is therefore inactive. After stimulation with UV light or H 2 O 2 , p66 Shc can bind to cytochrome c and contribute to the formation of ROS. See the main text for further details. There is vast literature showing that p66 Shc plays a major role in the response to oxidative and environmental stress stimuli [4,[12][13][14][15][16][17][18] (Figure 1B). Kinases like c-Jun N-terminal kinase (JNK) or protein kinase C β (PKCβ), which are activated in response to stress stimuli, can phosphorylate a particular serine residue (Ser 36 ) of p66 Shc within its CH2 domain [19,20]. This step is followed by a cis-trans isomerization by peptidyl-prolyl cis-trans isomerase 1 (Pin1), which allows the translocation of p66 Shc into the inter-membrane mitochondrial space, after it has been dephosphorylated by protein phosphatase 2A (PP2A). A more recent paper found that Ser 36 might not be the crucial phosphorylation site mediating the PKCβ response, while Ser 139 , Ser 213 , and Thr 206 might be involved [21]. At the mitochondrial level, and without pro-apoptotic stimuli (such as H 2 O 2 or UV radiation), p66 Shc is bound to high-molecular weight complexes and heat shock protein 70 (HSP70) or other proteins involved in the inter-membrane transport [22][23][24]. After stimulation, however, p66 Shc can interact with cytochrome c through its CB domain, generating reactive oxygen species (ROS) by diverting electrons from the mitochondrial electron transport chain (ETC) [4,15,18,25]. In this regard, it is worth mentioning that some authors, based on the structure of p66 Shc , questioned its ability to be an acceptor of electrons from the ETC (reviewed in [5,26]).
However, it should be noted that, in the absence of further experimental data to corroborate this notion, this remains a mere speculation. In any case, even if the exact mechanism is still debated, it is well known that p66 Shc is involved in the production of ROS, and an excess in ROS production can interfere with many cellular processes and induce apoptosis. Apart from increasing mitochondrial ROS production, there are two other mechanisms whereby p66 Shc can increase ROS levels: (i) by decreasing the production of ROS scavengers through inhibition of forkhead box O (FOXO) transcription factors and (ii) by increasing the activity of membrane NADPH oxidase via Rac1 activation (reviewed in [5,15]). The involvement of p66 Shc in the induction of apoptosis is confirmed by the fact that its elimination or over-expression have opposite effects, making cells more resistant or more susceptible to apoptosis, respectively ([12,16]; reviewed in [4,5,14,27]). However, the notion that p66 Shc favors ROS formation and thereby stimulates apoptosis may be too simplistic a view, since both an anti-oxidant [28] and an anti-apoptotic behavior of p66 Shc [29] have been reported, albeit only in specific cell types and conditions. It was also reported that p66 Shc can participate in the induction of apoptosis, acting downstream of p53 [16]. The activation of p53 in response to H 2 O 2 confers stability to the p66 Shc protein and probably an increase at the transcript level, since there is a p53-binding region within the p66 Shc promoter [30]. Indeed, p53 can be activated even in the absence of p66 Shc , but the cells become apoptosis-resistant in such conditions. As discussed above, PKCβ can phosphorylate p66 Shc , and a study pointed out a link between p66 Shc and the autophagic pathway [31]. Autophagy is a highly regulated process through which cells can recycle components that are either unnecessary or malfunctioning. It is well known that starvation activates autophagy, and the authors demonstrated that p66 Shc can inhibit autophagy following starvation in mouse embryonic fibroblasts (MEFs) in a PKCβ-dependent manner. A recent paper investigated the induction of autophagy in vivo in the muscles of mice after downhill running, a type of exercise known to induce muscle damage, ROS production, and activation of the autophagic process [32]. Their data indicate that p66 Shc−/− mice have higher LC3 lipidation than wild type (WT) mice, but it is not further increased after exercise, and other autophagic markers are not significantly different. p66 Shc and Longevity It was initially reported that deletion of the p66 Shc gene was sufficient to cause an increase in the average and maximum longevity in mice [12]. Indeed, mice in which p66 Shc was deleted had a 30% increase in their life-span compared with WT. These results were surprisingly similar to those obtained by putting mice under calorie restriction [33,34], but p66 Shc−/− mice were not leaner, nor did they eat less than WT. This observation supported the idea that a decreased ROS production was protective against the accumulation of DNA damage caused by free radicals, thereby delaying ageing and promoting an increase in life-span. Despite the inhibition of apoptosis, these mice did not show an increased susceptibility toward tumorigenesis. As already mentioned, p66 Shc is a downstream mediator of p53 in the apoptotic pathway, but its deletion does not interfere with other p53-dependent pathways [16].
Indeed, p53 −/− mice displayed increased mortality due to spontaneous tumorigenesis, which was not observed in p66 Shc−/− mice. A more recent study on this matter dismantled the notion that p66 Shc regulates life-span: by using a higher number of mice compared with the original study, three different mouse strains (C57BL/6J, 129Sv, and a hybrid C57BL/6J-129Sv), and animals housed in two different facilities, p66 Shc−/− mice did not show an increased life-span [35]. The authors noted how the average and maximum longevity of the WT mice were unusually low in the original study [12], which could have been due to environmental stress. Moreover, the suspicion that p66 Shc was not involved in longevity determination had already been raised by a study conducted on centenarian humans, in which it was found that the expression of p66 Shc in isolated fibroblasts was elevated, instead of being reduced [36]. Importantly, the role of p66 Shc in the determination of lifespan was further investigated in another study, wherein a telomerase RNA component (TERC −/− ) and p66 Shc−/− double knockout mouse was generated. TERC −/− mice have a decreased average lifespan, and it was observed that the concomitant deletion of p66 Shc was unable to restore this defect, while it was able to ameliorate other aspects, like sterility, weight loss, and multi-organ atrophy. To date, the exact phenotype of these mice has not been fully investigated [37]. In summary, it is possible that p66 Shc does not truly regulate life expectancy, but it is involved in the determination of health-span, which has a very strong translational impact on human pathology. p66 Shc , Body Weight Regulation, and Obesity According to the authors of [12], the body weight of p66 Shc−/− mice was identical to that of WT mice, and so was food intake. In contrast, a more recent paper found that p66 Shc−/− mice were leaner, with body weight differences being mainly due to a decreased amount of abdominal and inguinal fat, particularly evident in males, while the weight of other organs was not different [38]. Our group also showed that, under a standard diet, p66 Shc−/− mice were leaner than WT mice [39]. The observation that organ weight was not different between WT and knockout mice was confirmed by another study, in which no differences in total body weight or in fat-free mass were found, except at older ages (27-month-old animals) [40]. It was also shown that p66 Shc−/− mice subjected to a 5% calorie-restriction (CR) regimen between 4 and 18 months of age are leaner than WT mice, but not with a 40% CR [35]. The effect of calorie restriction on body weight was also studied in 18-month-old animals [41]: at baseline there were no differences between WT and knockout animals, and the same results were found when a 26% CR regimen for 2 months or a 40% CR regimen for 3 days was applied. The mechanisms whereby p66 Shc would regulate body weight are incompletely understood. On one side, there may be heat dissipation from the ETC due to increased uncoupling in adipose tissue [38]. Furthermore, insulin is able to activate the production of H 2 O 2 in pre-adipocytes from brown adipose tissue, but not if p66 Shc is ablated [38], and this event is necessary to modulate the activity of the Akt-Foxo1 pathway. In particular, if p66 Shc is missing, the phosphorylation of Akt is blunted.
A proper response to insulin stimulation allows for the accumulation of triglycerides both in brown and white pre-adipocytes, by favoring their import and simultaneously inhibiting β-oxidation processes. Even if there is a general consensus that p66 Shc deletion confers protection against obesity, some conflicting data have been found. It was shown by many authors that p66 Shc−/− mice are obesity-resistant, whether obesity is genetically- [39,42] or diet-induced [38,39,43]. However, in another study, a new p66 Shc−/− mouse model, named ShcL, was generated [44], and some contrasting data were shown. ShcL mice were susceptible to, not protected from, diet-induced obesity, becoming more obese than WT animals in response to a high-fat diet (HFD). The authors showed that ShcL mice did not have perturbations in the expression pattern of p52 Shc and p46 Shc , compared with the original p66 Shc−/− mice (called ShcP), while p46 Shc was increased in the adipose tissue of ShcP mice; they reasoned that this might explain the discrepancy in obesity. However, the idea that an increase in p46 Shc together with the absence of p66 Shc in the adipocytes, possibly coupled with a decreased expression of p52 Shc , is responsible for a decreased fat accumulation was not supported by further experimental data. In addition, we did not find any increase in adipose tissue p46 Shc protein expression in ShcP mice [39]. Very few studies have focused on adipokines. In particular, it was demonstrated that lean p66 Shc−/− mice had decreased plasma levels of leptin and adiponectin [39,45]. The same was also demonstrated in obese knockout animals for adiponectin [39] and leptin [45]. One study, in contrast, found no differences in plasma leptin concentration between WT and p66 Shc−/− mice, but only increased plasma leptin levels in females compared with males, regardless of genotype, when animals were prenatally exposed to HFD [46]. Regarding adiponectin, it was higher in knockout females compared with WT, and also in females versus males, but only in p66 Shc−/− animals. Adiponectin was also measured in primary brown adipocytes [45] and in adipose tissue [46], where it was found to be decreased in p66 Shc−/− adipocytes or increased only in p66 Shc−/− females, respectively. The concentration of circulating plasminogen activator inhibitor 1 (PAI-1) was similar between WT and knockout animals and increased in obese Lep ob/ob animals regardless of genotype [39], while its expression in white adipose tissue, measured by quantitative PCR, was decreased [45]. Finally, TNFα production was also decreased in p66 Shc−/− mice, both in the plasma of obese animals and in primary brown adipocytes [45]. p66 Shc , Diabetes, and the IGF-1 Axis The possibility that p66 Shc regulates body weight and that its deletion protects against obesity made p66 Shc a possible candidate gene against obesity-related diseases [47]. Furthermore, based on the role of p66 Shc in ROS production, there is a consensus about the ability of p66 Shc−/− mice to counteract many side-effects of pathologies commonly attributed to oxidative stress, including chronic diabetic complications. In fact, it has been demonstrated that the absence of p66 Shc confers protection against diabetes-induced endothelial damage [48,49], diabetic nephropathy [46], and diabetic cardiomyopathy [50] and improves the healing of diabetic ulcers [51].
The exact mechanisms at work have never been clearly dissected, and there may be several pathways affected by p66 Shc deletion in addition to the regulation of cellular oxidant status. It was reported that the insulin-like growth factor 1 receptor (IGF1-receptor) can phosphorylate p66 Shc , and in MEFs derived from mice in which the expression of IGF1-receptor was reduced (IGF-1R +/− mice) there was also a reduced tyrosine phosphorylation of p66 Shc and p52 Shc [52]. IGF1 stimulation is also able to induce the phosphorylation of p66 Shc on tyrosine residues in L6 myoblasts [53], and the silencing of p66 Shc leads to abnormal phosphorylation of extracellular signal-regulated kinase 1/2 (ERK1/2), which is elevated in basal conditions and blunted after IGF1 stimulation. Moreover, the reduced expression of p66 Shc caused an increased glucose uptake in basal conditions, predominantly due to an ERK-mediated remodeling of the actin cytoskeleton, but also to an increase of GLUT1 and GLUT3, both at the protein and mRNA levels [54]. Role of p66 Shc in the Regulation of Glucose Homeostasis Many studies have tried to shed light on the role of p66 Shc in insulin and glucose metabolism. This is reasonable, as Shc adapter proteins can interact with and transduce the signaling evoked by insulin receptors (reviewed in [55]). Moreover, it was demonstrated that IRS-1, p66 Shc , and S6K can associate to form a complex (reviewed in [56]). However, as already noted for body weight and obesity, contrasting data can be found in the literature in regard to the role of p66 Shc in glucose and insulin tolerance at the whole-body level. As reported above, GLUT1 and GLUT3 were upregulated in L6 myoblasts with reduced p66 Shc expression. Moreover, basal glucose transport was increased, while the adenoviral-mediated overexpression of p66 Shc produced opposite effects [54]. Using different cell types (HeLa and MEFs), it was confirmed that p66 Shc deficiency enhances glucose uptake [57]. Also, these cells have an increased proportion of metabolites for fatty acid biosynthesis. Another partial confirmation came from a study conducted in the skeletal muscles of p66 Shc−/− mice, wherein glucose uptake was not studied in detail, but it was found that glucose content was similar to that of fed WT animals, and glycogen was more abundant in knockout muscles, either in fed or starved conditions [58], indicating that glucose uptake in knockout animals was probably not impaired. In primary adipocytes, basal glucose uptake was found to be identical between WT and p66 Shc−/− but increased after insulin stimulation only in knockout cells [42]. Using the aforementioned ShcL p66 Shc−/− knockout mice, it was confirmed that insulin-stimulated glucose uptake was increased in cultured adipocytes, compared with that of WT-derived adipocytes [44]. However, this matter may be more complicated than it seems. In the same paper [44], using the initial p66 Shc−/− mice (ShcP) as in [42], opposing results were found regarding insulin-stimulated glucose uptake in adipocytes, which was decreased in the first and increased in the latter, respectively. An attempt to reconcile these contrasting data was made in a more recent study [56], where the authors discussed the possibility that p66 Shc plays both a positive and a negative role in the insulin pathway, by acting upstream and downstream of mTOR/S6K.
Basically, it was proposed that in WT obese mice there is insulin desensitization in the adipose tissue due to constitutive activation of S6K (and its substrate S6), which leads to IRS-1 degradation. In parallel, decreased PI3K recruitment to IRS-1 impairs downstream Akt signaling. Finally, we also reported that glucose uptake in isolated skeletal muscles was lower in p66 Shc−/− mice, compared with that of WT mice, after insulin stimulation [39]. Some studies reported an increased lactate production in response to p66 Shc deletion. This was demonstrated in immortalized p66 Shc−/− MEFs as a result of increased anaerobic glycolysis and decreased mitochondrial respiration [59], and confirmed in the same cell type and also in HeLa cells [57]. In the latter paper, the authors showed that p66 Shc deficiency in HeLa cells enhances the glycolytic metabolism, favored by the concomitant activation of the pentose phosphate and hexosamine pathways, which contributes to the maintenance of a proper redox balance within the cell (by provision of NADPH) and provides a positive feedback on the signaling [57]. On the other hand, when p66 Shc expression was restored in p66 Shc−/− MEF cells, glycolytic metabolism was impaired. Interestingly, this paper identified a link between p66 Shc and mTOR signaling, and it was shown that activation of mTOR is associated with an increased anabolic metabolism and protein synthesis. In particular, both S6K and Akt, targets of mTORC1 and mTORC2, respectively, were phosphorylated in p66 Shc−/− compared with control HeLa cells after serum stimulation, but not if cells were pre-incubated with the mTOR inhibitor Torin. Consistently, an increased production of lactate and citrate was also observed in the skeletal muscles of p66 Shc−/− mice [58], but in this case the authors showed a decreased glycolytic capacity, both in the fed and fasted state, as suggested by a decreased activity of key glycolytic enzymes, such as hexokinase, phosphofructokinase, and pyruvate kinase. The relationship between p66 Shc and glucose homeostasis was also studied in obese mice, and conflicting results have been obtained even in this case. p66 Shc−/− Lep Ob/Ob double knockout mice on a mixed genetic background (C57Bl/6J and 129Sv) had improved insulin sensitivity (similar to that of WT lean mice) and glucose tolerance compared with p66 Shc WT Lep Ob/Ob animals, but were still glucose intolerant [42]. No differences in fasting glucose between lean WT and p66 Shc−/− mice were reported, nor in glucose tolerance tests (GTT) and insulin tolerance tests (ITT). Another study, however, found that lean p66 Shc−/− mice were more insulin sensitive and glucose tolerant than WT animals [44] at 3 months of age. At 24 months, only insulin sensitivity was improved, in comparison with WT animals. Moreover, insulin sensitivity was reported to be higher in muscle samples from both ShcL and ShcP mice than from control WT. The same paper reported an improved insulin sensitivity in ShcP mice even after a high fat diet (HFD), which was lost in ShcL mice, at least in the liver. However, our data depicted a different scenario in mice on a pure C57Bl/6J background: worsened glucose tolerance in 18-week-old and greater insulin resistance in 30-week-old lean p66 Shc−/− mice compared with WT lean animals, while obese p66 Shc−/− mice were more insulin resistant and equally glucose intolerant [39]. We further confirmed that obese p66 Shc−/− mice are not protected from insulin resistance and glucose intolerance [43].
These data on mice were supported by data on human samples, in which a decreased expression of p66 Shc in the visceral adipose tissue was associated with a lower BMI, but without any improvement in diabetes, dyslipidemia, or hypertension [39]. We and others found that the muscles of obese p66 Shc−/− mice have increased ectopic fat accumulation, which can in part explain these results [32,39,44]. Moreover, we also demonstrated that the microbiota hosted by p66 Shc−/− mice is different from that of WT controls, and that this might explain the differences between studies [43]. Finally, a recent paper studied the metabolic effect on the offspring of WT and p66 Shc−/− mice whose dams were given a HFD before and during pregnancy. In this experiment, the p66 Shc−/− progeny was protected from the deleterious effect of the diet, and this was especially evident for females [46]. However, p66 Shc−/− mice were more insulin resistant than WT animals (9 weeks after birth), and this was worsened in animals exposed to pre-natal HFD. WT and knockout males showed a similar response to glucose in GTT, while p66 Shc−/− females had an improved glucose tolerance, especially evident in the HFD group. Body Temperature Regulation, Respiration, and Energy Expenditure Interestingly, some studies indicated a possible link between p66 Shc and body temperature regulation. In the paper by Berniakovich and colleagues [38], it was shown that p66 Shc−/− mice had a higher basal body temperature (at 22°C external temperature) compared with WT animals. This might be due to an increased metabolic activity in the brown adipose tissue (BAT), since p66 Shc−/− mice express higher amounts of uncoupling protein 1 (UCP1) in this tissue [38]. In the same work, it was demonstrated that the deletion of p66 Shc had a dramatic impact on cold adaptation. When housed at 5°C, the body temperature of these mice dropped by about 6°C, compared with roughly half that change in WT animals. The trough was reached more rapidly in knockout than in WT animals: after 3 or 4 h, respectively [38]. However, body temperature returned to normal values within 6 h, both in WT and p66 Shc−/− mice, so even if cold adaptation was affected, thermogenesis was not impaired. A possible explanation for this phenomenon comes from a lower thermal insulation in the knockout mice, due to a decreased white fat mass (as previously discussed). A more recent study confirmed this notion by demonstrating a negative selection against p66 Shc−/− mice left in large outdoor enclosures for one year [45]. This study is particularly relevant, as it sheds light on the reason why p66 Shc was phylogenetically conserved, despite its role in the induction of oxidative stress. p66 Shc might be important to promote survival in conditions of environmental stress, whilst its metabolic role might be competitively disadvantageous or detrimental in modern life-style conditions, favoring the development of obesity and metabolic syndrome. In this regard, p66 Shc may be considered a typical thrifty gene [60]. Concerning respiration and energy expenditure, p66 Shc−/− male mice have a higher oxygen consumption at the basal level and a slightly higher energy expenditure than wild-type animals [38]. A more recent study found rather different results, as the respiratory quotient (calculated as the ratio between the volume of CO 2 produced and the volume of O 2 consumed) was increased in p66 Shc−/− mice compared with WT animals, which is consistent with an increased glucose utilization in knockout mice [40].
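In formula form, the respiratory quotient referred to above is simply the ratio of gas volumes; values near 1.0 indicate predominantly carbohydrate oxidation, consistent with the increased glucose utilization reported for the knockout mice, while values near 0.7 indicate fat oxidation:

$$\mathrm{RQ} = \frac{V_{\mathrm{CO}_2}}{V_{\mathrm{O}_2}}$$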
In this study, energy expenditure was also measured, and p66 Shc−/− mice displayed lower values than WT mice. However, when fat-free mass was taken into account instead of total body weight, this difference was no longer statistically significant. The response of p66 Shc−/− mice to calorie restriction was also investigated in 18-month-old animals [41]. When a 26% CR was applied for 2 months, there were no differences between WT and knockout mice. However, after the first 24 h of a 3-day 40% CR, knockout mice showed a significantly lower energy expenditure during the light phase compared with WT animals; in this case, fat-free mass was not measured. Conclusions and Perspectives The great majority of the in vivo studies on p66 Shc were conducted on one single p66 Shc−/− mouse strain [12]. This mouse model was originally generated in the 129Sv strain, was afterwards backcrossed to the C57BL/6J strain, and was then crossed with other mouse models, such as p53 −/− , TERC −/− , and Lep Ob/Ob [16,35,37,39,42]. The availability of this knockout was surely fundamental in the exploration of the metabolic role of p66 Shc and in confirming results produced in cell lines wherein p66 Shc was silenced or deleted. However, relying on a single mouse model can be a limitation, and independent confirmations are required. Another p66 Shc−/− model, named ShcL, was made available recently, and some of its results were surprisingly in contrast with previous data [44]. To date, this mouse has been used only in that paper. More importantly, it will be extremely helpful to develop tissue-specific and inducible p66 Shc -knockout mice to better dissect the role of p66 Shc in different tissues and at different developmental stages or before/after environmental stresses (e.g., HFD). As shown in the present review, studies looking at different ages, with different durations of HFD regimens and different ages at diet onset, can lead to contrasting results that sometimes appear difficult to reconcile (Table 1). As a final complication, we showed that p66 Shc depletion can also influence the gut microbiota, which in turn affects the metabolism of the mouse. Altogether, these notions encourage further exploration of the role of p66 Shc in the regulation of body weight and metabolism. In view of the obesity and diabetes epidemic, p66 Shc may represent a promising therapeutic target with enormous implications for human health. Table 1. Summary of the contrasting results obtained on the role of p66 Shc in physiological conditions and metabolic diseases. WT = wild type.
6,847.4
2019-02-01T00:00:00.000
[ "Biology", "Medicine" ]
XYZeq: Spatially resolved single-cell RNA sequencing reveals expression heterogeneity in the tumor microenvironment XYZeq is a novel scalable platform that directly encodes spatial location from tissue into single-cell RNA sequencing libraries. INTRODUCTION Over the past decade, massively parallel single-cell RNA sequencing (scRNA-seq) has emerged as a powerful approach to catalog the remarkable cellular heterogeneity in complex tissues (1,2). While scRNA-seq can profile the transcriptomes of thousands of cells in a single experiment, it requires the dissociation of tissue into single-cell suspensions before library preparation and sequencing, eliminating any spatial information (3)(4)(5)(6). Several strategies have emerged to obtain molecular and spatial information simultaneously from complex tissue. Imaging-based strategies combine high-resolution microscopy with fluorescence in situ hybridization to achieve subcellular resolution and can profile the entire transcriptome (7)(8)(9)(10), but they require lengthy iterative microscopy workflows and large probe panels. Another approach is to hybridize RNA directly from tissue slices onto a microarray containing spatially barcoded oligo(dT) spots or beads to encode location information into RNA-seq libraries. These approaches can sample the entire transcriptome without the need for iterative rounds of hybridization (11), and recent improvements using DNA-barcoded beads (high-definition spatial transcriptomics and Slide-seqv1/v2) report spatial resolutions at or below the diameter of a single cell (12)(13)(14). However, because of the low numbers of mRNA molecules captured per bead, these spatial transcriptomic approaches often aggregate neighboring beads before downstream analysis, resulting in lower effective resolution and averaging of transcript abundances from multiple cells. As a result, annotation of specific cell types present within each spatial unit of analysis is accomplished by aggregating gene sets computationally defined from orthogonal scRNA-seq datasets (15,16). While integration methods have demonstrated the ability to localize cell types within the spatial organization of complex tissue, they rely on having available data from two independent assays and have limited ability to infer how spatial context influences the cell state of individual cell types. RESULTS To overcome these limitations, we have developed XYZeq, a method that uses two rounds of split-pool indexing to encode the spatial location of each cell from a tissue sample into combinatorially indexed scRNA-seq libraries (17,18). Critical for the performance of XYZeq, we fixed tissue slices with dithio-bis(succinimidyl propionate) (DSP), a reversible cross-linking fixative that has been shown to preserve histological tissue morphology while maintaining RNA integrity for single-cell transcriptomics (19). In the first round of indexing, a fixed and cryo-preserved tissue section is placed on and sealed into an array of microwells spaced 500 µm center to center. The microwells contain distinctly barcoded reverse transcription (RT) primers (spatial barcode). This step physically partitions intact cells from tissue into distinct in situ barcoding reactions. After RT, intact cells are removed from the array, pooled, and distributed into wells for a second round of polymerase chain reaction (PCR) indexing, imparting each single cell with a combinatorial barcode (Fig. 1, A and B).
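The two-round barcode composition is easy to simulate. The toy sketch below (ours, not part of the XYZeq pipeline; cell counts and well assignments are illustrative) shows how a fixed spatial barcode and a random PCR barcode combine into a single combinatorial index, and how many of a given number of cells would be expected to share one:

```python
import numpy as np

# Toy simulation of two-round split-pool indexing: every cell first receives
# the spatial barcode of the microwell it sits in (round 1, RT), then a random
# PCR well barcode after pooling (round 2).
rng = np.random.default_rng(0)

N_SPATIAL, N_PCR = 768, 384          # barcode counts quoted in the text
n_cells = 10_000

spatial = rng.integers(0, N_SPATIAL, n_cells)  # fixed by tissue location
pcr = rng.integers(0, N_PCR, n_cells)          # random redistribution

combined = spatial * N_PCR + pcr               # unique combinatorial index
print("barcode space:", N_SPATIAL * N_PCR)     # 294,912
print("cells sharing a barcode:",
      n_cells - len(np.unique(combined)))      # expected barcode collisions
```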
After sequencing and demultiplexing, the spatial barcode maps each cell back to its physical location in the array (Fig. 1B). This combinatorial barcoding strategy could theoretically enable spatial transcriptomic analysis of large sets of single cells: with two rounds of split-pool indexing, 768 spatial RT barcodes, and 384 PCR barcodes, up to 294,912 unique single-cell barcodes can be generated. To determine whether XYZeq can assign transcriptomes to single cells, we performed a mixed-species experiment where a total of 11 distinct ratios of DSP-fixed human [human embryonic kidney (HEK) 293T] and mouse (NIH 3T3) cell mixtures were deposited into each of the 768 barcoded microwells, creating a cell proportion gradient along the columns of the array ( Fig. 1C and Materials and Methods). XYZeq was used to generate scRNA-seq data for 6447 cells. A total of 94.8% of cell barcodes were assigned to a single species with an estimated barcode collision rate of 5.1% based on the percentage of cell barcodes with reads mapping to both human and mouse transcriptomes ( fig. S1A). We hypothesized that a portion of collisions were due to contamination from ambient RNA released by damaged cells. Using DecontX (20), a hierarchical Bayesian method that assumes the observed transcript counts of a cell are a mixture of counts from two binomial distributions, we removed contaminating transcripts, reducing the collision rate to 0.7% ( Fig. 1D and Materials and Methods). After computational decontamination and removal of collision events, we obtained a median of 939 unique molecular identifiers (UMIs) and 439 genes per human cell and 816 UMIs and 336 genes per mouse cell. Mapping each single cell to its originating microwell, we observed a high concordance between the observed and expected cell type proportions along the columns of the wells (Lin's concordance correlation coefficient = 0.91; Fig. 1E and fig. S1B). Together, these results demonstrate that a minimal amount of barcode contamination takes place from single cells in each well and between neighboring wells on the array after pooling, indicating that the XYZeq workflow successfully produces spatially resolved scRNA-seq libraries. We next applied XYZeq to a fixed and cryopreserved heterotopic murine tumor model established by intrahepatic injections of a syngeneic colon adenocarcinoma cell line, MC38, into immunocompetent mice. This model mimics tissue-infiltrating features of metastatic cancer and is associated with a relatively well-defined tumor boundary (21,22). MC38 tumor cells also have immunomodulating properties, with previous data showing immune cells infiltrating the tumor/tissue interface approximately 10 days after tumor inoculation (23,24). Thus, we predicted that XYZeq could simultaneously capture the gene expression states and spatial organization of parenchymal liver cells, cancer cells, and tumor-associated immune cell populations. A 25-µm slice of fixed-frozen liver/tumor tissue from a C57BL/6 mouse was placed on top of the prefrozen microwell array while a sequential 10-µm slice was fixed for immunohistochemical staining ( fig. S2A and Materials and Methods). We also deposited fixed human HEK293T cells into the same array at an average of 58 cells per well to serve as a mixed-species internal control to experimentally quantify collision rates. We performed XYZeq and observed an initial collision rate of 7.3% based on comparing the ratio of human versus mouse transcripts ( fig. S2B).
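The mixed-species collision call itself is a simple per-barcode purity test: a cell barcode whose UMIs do not map predominantly (at least 66%, the threshold given in Materials and Methods) to one genome is counted as a collision. The sketch below simulates this on made-up counts; the Poisson parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated UMI counts: human singlets, mouse singlets, and human+mouse doublets.
n_half, n_doublets = 2400, 200
human = np.concatenate([rng.poisson(900, n_half),      # human cells
                        rng.poisson(30, n_half),       # ambient RNA only
                        rng.poisson(450, n_doublets)]) # doublets
mouse = np.concatenate([rng.poisson(30, n_half),
                        rng.poisson(800, n_half),
                        rng.poisson(400, n_doublets)])

# Purity: fraction of UMIs from the dominant species for each barcode.
purity = np.maximum(human, mouse) / (human + mouse)
print(f"collision rate: {np.mean(purity < 0.66):.1%}")  # doublets fail the cutoff
```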
After computational decontamination and further quality control, which includes filtering cells based on cell counts and mitochondrial expression, the collision rate was reduced to 4.4% ( Fig. 2A and Materials and Methods). After removing collisions, we obtained a total of 8746 cells and detected a median of 1596 UMIs and 629 unique genes per HEK293T cell and 1009 UMIs and 456 unique genes per cell from the heterotopic murine tumor model at 46% sequencing saturation (Fig. 2B). A hematoxylin and eosin (H&E)-stained serial section of the tissue revealed a histological boundary between the tumor and adjacent liver/tumor tissue (Fig. 2C). As expected, we observed HEK293T human cells distributed across the entire array, while mouse cells were sequestered within the boundary of the murine tissue (Fig. 2D). Note that empty spatial wells with no cells detected were likely due to a limited number of cells targeted for sequencing (~10,000). We obtained a median of 3 human cells per well and 9 mouse cells per well with a total of 13 cells per well expected ( fig. S2C). XYZeq revealed distinct cell types within the murine liver and tumor. Semisupervised Leiden clustering revealed 13 cell populations in the murine tumor model ( fig. S3A), from which seven cell types were annotated on the basis of markers that define each population: hepatocytes, cancer cells (MC38), Kupffer cells, liver sinusoidal endothelial cells (LSECs), mesenchymal stem cells (MSCs), lymphocytes, and myeloid cells (Fig. 3A). The annotation of MC38 tumor cells was supported by a high correlation of chromosomal copy numbers estimated from XYZeq scRNA-seq data and publicly available MC38 cytogenetic data (Pearson r = 0.78) (25). Notably, a partial amplification of chromosome 15 and a partial deletion of chromosome 14 observed in the XYZeq data were consistent with common chromosomal abnormalities seen in MC38 cells ( fig. S3B). As a negative control, we saw low chromosomal copy number correlation when comparing MC38 cells to hepatocytes (26) and immune cells (21) (Pearson r = 0.05 and r = 0.17, respectively) (fig. S3B). A heatmap showing differentially expressed genes across seven cell types uncovered distinct clusters of cells defined by expression of canonical genes that are relatively exclusive to each cell type (Fig. 3B). Note that we estimated uniformly low rates of contamination of each cell cluster (median under 1%) with the exception of hepatocytes, which had a slightly higher rate at 2.2% ( fig. S3C and Materials and Methods). We found comparable median UMIs and genes detected across all cell clusters including immune cell populations that have been difficult to profile using other combinatorial indexing methods ( fig. S3, D and E) (27). Cell types expected in non-tumor-bearing liver were identified using markers previously described, which included hepatocytes, Kupffer cells, and LSECs (26). Consistent with the known heterogeneity of hepatocytes, we identified hepatocyte subsets annotated by the expression of pericentral markers (Glul, Oat, and Gulo) ( fig. S3F) (26). MC38 adenocarcinoma cells comprised a large uniform cluster and were distinguished by the expression of the known marker Plec (22). Myeloid cells were defined by canonical markers Cd11b and Cd74 (28), but other noncanonical markers were also observed, including Myo1f (29) and Tgfb (30). 
Lymphocytes showed a similar mix of broad and specific expression patterns of cell type markers, with expression of the pan-lymphocyte marker Il18r1, the T lymphocyte marker Prkcq, and the cytotoxic T cell marker Cd8b (31)(32)(33). Last, we detected a cluster of MSCs/stromal cells that expressed both broad mesenchymal cell markers Rbms3 and Tshz2 and stem/stromal cell markers Prkg1 and Gpc6 ( fig. S3F) (34)(35)(36)(37)(38). We next assessed the reproducibility of XYZeq while comparing changes in the transcriptional landscape across the z-layer of the organ. Four nonsequential 25-µm tissue slices from the same frozen liver/tumor sample block were processed and analyzed. The average expression over all cells for genes detected across all slices was highly correlated between each pair of slices (average pairwise Spearman r = 0.93) ( fig. S4A). We noted that among the four tissue sections, slices 1 and 2, which were the two most proximal slices in their z coordinates (separated by 80 µm), had the highest expression correlation (Spearman r = 0.96). In contrast, slices 1 and 4, which were the most distal in z coordinates (separated by 830 µm), had the lowest correlation (Spearman r = 0.91). Further, clusters jointly annotated across all four slices consisted of cells from each slice, suggesting that the observed heterogeneity is not due to batch effects ( fig. S4B). We further compared the quality of the scRNA-seq data generated by XYZeq to another single-cell technology that is commercially available. To accomplish this, we compared the cell type clusters identified from XYZeq to those identified from an independent scRNA-seq dataset of the same liver/tumor model generated using the 10x Genomics droplet-based Chromium system. Most cell populations detected by 10x were also observed by XYZeq, except neutrophils, erythroid progenitors, and plasma cells ( Fig. 3C and fig. S5A), which are immune cell populations known to be sensitive to the cryopreservation (39) required for XYZeq. 10x did not capture MSCs even though cells were isolated from fresh liver/tumor samples. In addition, B cells identified using the 10x platform correlated with the myeloid population detected by XYZeq, likely due to the transcript capture of Ly86, Cd74, and several class II histocompatibility antigen genes (e.g., H2ab1 or H2dmb1). For the six cell types identified in both the 10x and XYZeq data, we observed high correlations in both the cell type proportions (Lin's concordance correlation coefficient = 0.99; fig. S5B) and the pseudobulk expression profiles of each cell type (Pearson r = 0.64 to 0.86, P < 0.01; Fig. 3C). Next, we turned to the critical question of whether XYZeq can determine the spatial location of each cell. To do this, we compared the spatial localization of each cell cluster to images of H&E-stained sequential slices. First, to determine that we could accurately distinguish liver from tumor tissue, we confirmed that the density of hepatocytes and cancer cells across the spatial wells overlaps with the histological annotation of the adjacent section (Fig. 3D). Projection of other cell types revealed distinct spatial organization patterns for myeloid cells, lymphocytes, Kupffer cells, MSCs, and LSECs ( Fig. 3D and fig. S6A). Quantification of the cellular composition occupying each spatial well likewise revealed distinct spatial distributions of MSCs, lymphocytes, and myeloid cells. To assess the generalizability of XYZeq to other tissues, we processed samples from the same heterotopic murine tumor model in the spleen.
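The slice-to-slice reproducibility check described above amounts to computing a pseudobulk profile per slice and correlating every pair. A minimal sketch (random placeholder matrices stand in for the real per-slice count data):

```python
import numpy as np
from scipy.stats import spearmanr

# `slices` maps slice name to a cells-by-genes count matrix; contents are
# random placeholders for the four tissue sections described in the text.
rng = np.random.default_rng(0)
slices = {f"slice_{i}": rng.poisson(1.0, size=(500, 2000)) for i in range(1, 5)}

# Pseudobulk: average expression per gene over all cells of a slice.
pseudobulk = {name: mat.mean(axis=0) for name, mat in slices.items()}

names = list(pseudobulk)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(pseudobulk[names[i]], pseudobulk[names[j]])
        print(f"{names[i]} vs {names[j]}: Spearman rho = {rho:.2f}")
```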
We recovered a total of 7505 cells from the spleen/tumor samples, with a median of 1312 UMIs per cell, and annotated six cell types ( fig. S8A). We observed that all four spleen/tumor slices contributed to each cell type cluster, suggesting that the annotated clusters are not due to batch effects ( fig. S8B). A heatmap showing differentially expressed genes across the six cell types revealed distinct clusters of cells expressing canonical genes that are relatively exclusive to each type ( fig. S8C). Cells from each type could be spatially mapped across the tissue ( fig. S8D). Collectively, these results demonstrate that XYZeq can generate spatially resolved scRNA-seq data from different fixed-frozen tissues. The ability to obtain spatial and single-cell transcriptomic data simultaneously allowed us to assess the effects of cellular composition on gene expression patterns across space. We applied non-negative matrix factorization (NMF) to both the liver/tumor and spleen/tumor scRNA-seq data to define modules of coexpressed genes and associated the expression of each module in each cell type with its expression across spatial wells. Using our approach, we identified 20 modules of coexpressed genes in each tissue (Materials and Methods). As a proof of principle of the approach, we first identified liver module (LM) 14 from the liver/tumor data, which was predominantly expressed by the hepatocyte cluster in the t-distributed stochastic neighbor embedding (tSNE) space (Fig. 4A). As expected, the highest LM14-expressing wells were enriched for hepatocytes, suggesting that the spatial variability of this module is largely driven by the frequency of hepatocytes (Fig. 4B). Next, we reasoned that because both the liver and spleen were injected with the same tumor cell line, the invading tumors may induce a shared gene expression profile that varies over space, driven in part by the cellular composition of the tumor microenvironment. To test this hypothesis, we first identified pairs of matching gene modules between the two tissues from the NMF analysis (Materials and Methods). We found four distinct LMs that had at least 25% of genes overlapping with spleen/tumor modules (SMs) (Fig. 4C and fig. S9A). Gene ontology analysis of the modules revealed the enrichment of genes implicated in tumor response, immune regulation, and cell migration (figs. S9, B and C, and S10B). Consistent with the enrichment analysis, many of the genes from these modules have been implicated in tumorigenesis (complete gene lists are in table S1). Unlike LM14, further analysis of these matching modules revealed a heterogeneous composition of cell populations that contributed to the expression of specific module genes ( fig. S9D and Materials and Methods). For example, the tumor response module LM5 and its matching modules SM2 and SM12 (Fig. 4C and fig. S9A) consisted of genes predominantly expressed in MC38 tumor cells with some expression in myeloid cells and lymphocytes (Fig. 4D, fig. S9D, and Materials and Methods). The immune regulation modules, LM13 and LM19 (matched with SM7 and SM20), consisted of genes expressed primarily in both conventional (e.g., myeloid cells and lymphocytes) and nonconventional (e.g., Kupffer cells from liver samples) immune cells (Fig. 4, C and D, and fig. S9D). The expression of these overlapping modules was highest in regions densely infiltrated with cancer cells (Fig. 4, E and F).
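The module-finding and module-matching steps described above can be sketched with an off-the-shelf NMF. The factorization rank of 20 and the 25% overlap criterion come from the text; the matrix contents and the "top 50 genes per module" cutoff are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
liver = rng.poisson(1.0, size=(1000, 3000)).astype(float)   # cells x genes
spleen = rng.poisson(1.0, size=(1000, 3000)).astype(float)

def gene_modules(X, k=20, top_n=50):
    """Factorize cells-by-genes counts and return, per module, the set of
    indices of its most heavily loaded genes."""
    model = NMF(n_components=k, init="nndsvda", max_iter=300, random_state=0)
    model.fit(X)                         # model.components_: modules x genes
    return [set(np.argsort(row)[-top_n:]) for row in model.components_]

lm, sm = gene_modules(liver), gene_modules(spleen)
for i, a in enumerate(lm):
    for j, b in enumerate(sm):
        overlap = len(a & b) / len(a)
        if overlap >= 0.25:              # the matching criterion in the text
            print(f"LM{i} matches SM{j} (overlap {overlap:.0%})")
```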
Collectively, these results show that the joint analysis of scRNA-seq and spatial metadata from XYZeq can identify spatially variable gene modules due to differences in cellular composition across tissue samples. We next focused our analysis on matching modules LM10 and SM15/SM17, which are primarily expressed by MSCs and enriched for genes involved in cell migration (Figs. 4C and 5A and figs. S9D and S10, A and B). Because MSCs are known to home to injured or inflamed sites (40), we hypothesized that LM10 could be differentially expressed in MSCs based on their proximity to the tumor. To test this hypothesis, we first computed a tumor proximity score for each well based on the composition of and distance from nearby wells ( Fig. 5B; see Materials and Methods and fig. S11 for the score definition). Projecting the proximity score onto MSCs in tSNE space revealed that the transcriptional heterogeneity of the population is associated with spatial proximity to the tumor (Fig. 5C). We then analyzed the MSC expression profiles using tradeSeq (41) to identify differentially expressed genes that tracked with the proximity score. We identified and clustered 177 genes from the liver/tumor tissue (P < 0.05) and 66 genes from the spleen/tumor tissue (P < 0.05) that are associated with the continuous, one-dimensional proximity score (Fig. 5D). The genes were broadly divided into three groups based on the proximity of cells to the tumor: intratumor, tumor-tissue boundary, and intratissue, with statistically significant genes highlighted for the spleen/tumor tissue (Benjamini-Hochberg false discovery rate < 0.05) (Fig. 5D). For MSCs found in the intratumor regions of the spleen/tumor, many of the differentially expressed genes are reported to regulate the extracellular matrix (ECM) (Fig. 5D, right) (42)(43)(44)(45), suggesting that MC38 cells may induce a local gene expression program in neighboring MSCs that could contribute to malignant remodeling of the ECM. Last, we leveraged the scRNA-seq data from XYZeq to visualize how individual MSCs expressed Tshz2 and Csmd1, two genes of divergent function that are spatially variable with respect to the tumor in the spleen. Both genes are characterized as tumor suppressor genes and are often silenced in cancer cells to promote malignant growth and metastasis (36,46,47). However, we found that spleen/tumor MSCs expressed lower levels of Csmd1 but higher levels of Tshz2 in closer proximity to the tumor (Fig. 5E). This differential expression was specific to splenic MSCs; the genes were not expressed by MC38 tumor cells. The expression pattern of each of these genes in space revealed a pattern consistent with the aforementioned spatial trajectory analysis, suggesting that their heterogeneous expression in MSCs may be determined by the location of the cells with respect to the tumor (Fig. 5F). Together, these results reveal that joint analysis of spatial and single-cell transcriptomic data from XYZeq can detect transcriptionally variable genes within specific cell types (e.g., MSCs) driven by their location within the complex tissue architecture. DISCUSSION We introduce XYZeq, a new scRNA-seq workflow that encodes spatial meta-information at 500-µm resolution. XYZeq enables unbiased single-cell transcriptomic analysis to capture the full spectrum of cell types and states while simultaneously placing each cell within the spatial context of complex tissue.
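A well-level tumor proximity score in the spirit of the one used above can be sketched as a distance-weighted average of the tumor-cell fraction of nearby wells. The paper's exact definition lives in its Materials and Methods and fig. S11, which are not reproduced in this excerpt, so the exponential decay, the length scale, and the array coordinates below are assumptions made for illustration:

```python
import numpy as np

def tumor_proximity(coords: np.ndarray, tumor_frac: np.ndarray,
                    length_scale: float = 500.0) -> np.ndarray:
    """coords: wells x 2 (x, y in microns); tumor_frac: tumor-cell fraction
    per well. Returns a distance-weighted tumor proximity score per well."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    weights = np.exp(-dist / length_scale)      # nearby wells count more
    return weights @ tumor_frac / weights.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 5000, size=(100, 2))    # hypothetical 100-well layout
    frac = rng.uniform(0, 1, size=100)
    print(tumor_proximity(xy, frac)[:5])
```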
In murine tumor models, we demonstrate that XYZeq identifies both spatially variable patterns of gene expression determined by cellular composition and heterogeneity within a cell type determined by spatial proximity. Looking forward, XYZeq provides a scalable workflow that can be adapted to multiple z-layers of tissue and can potentially facilitate analysis of entire organs. Large-scale integrated profiling of multiple modalities of single cells mapped to the structural features of their tissue will enable greater understanding of how the tissue microenvironment affects cellular infiltration and interaction in health and disease. Mice, tumor cell line, and tumor inoculation Six- to 12-week-old C57BL/6 female mice were purchased from Jackson Laboratories and housed under specific pathogen-free conditions. The MC38 colon adenocarcinoma cell line expressing luciferase was a gift from R. D. Beauchamp (Vanderbilt University). The MC38 cell line was cultured in complete cell culture medium (RPMI 1640 with GlutaMAX, penicillin, streptomycin, sodium pyruvate, Hepes, non-essential amino acids, and 10% fetal bovine serum). Cell lines were routinely tested for mycoplasma contamination. For experiments, mice were given an anesthetic cocktail of buprenorphine (300 µl) and meloxicam (300 µl) 30 min before the procedure. At the time of surgery, one drop of bupivacaine was administered, and mice were anesthetized with isoflurane before intrahepatic (or intrasplenic) injection of MC38 colon adenocarcinoma cells (50 µl at 10 × 10 6 cells/ml) using a 30-gauge 1/2-inch needle. The incision was stapled closed, and postoperative care was given to the mice. All experiments were conducted in accordance with the animal protocol approved by the University of California, San Francisco Institutional Animal Care and Use Committee. Cancer model system The intrahepatic and intrasplenic cancer model that we used for the paper is described in great detail in a recently published report by Lee et al. (21). Briefly, intrahepatic and intrasplenic tumors were generated by subcapsular injection of the tumor cells directly into the organs. To establish the ideal time point for sacrificing the mice, in vivo imaging was done on tumor-inoculated mice. Intraorgan-injected MC38 cells were modified to express firefly luciferase. Mice were intraperitoneally injected with d-luciferin (150 mg/kg; Gold Biotechnology) 7 min before imaging with the Xenogen IVIS Imaging System. Mice with detectable tumor nodules of at least 5 mm by fluorescence were euthanized for tissue harvesting. Organs to be used for XYZeq were fixed with DSP (Thermo Fisher Scientific) and cryopreserved, while organs used for 10x Genomics Chromium single-cell sequencing were digested in RPMI 1640 complete medium supplemented with collagenase D (125 U/ml; Roche) and deoxyribonuclease I (20 mg/ml; Roche) and then processed into a single-cell suspension using the gentleMACS tissue dissociator per the manufacturer's protocol (Miltenyi). 10x Genomics Chromium platform Cells isolated from tissue were washed and resuspended in phosphate-buffered saline with 0.04% bovine serum albumin at 1000 cells/µl, loaded on the 10x Genomics Chromium platform per the manufacturer's instructions, and sequenced on a NovaSeq or HiSeq 4000 (Illumina). Tissue harvesting and cryopreservation At day 10 after tumor inoculation, mice were euthanized, and the tumor-injected liver (or spleen) was harvested and incubated for 30 min in ice-cold dimethyl sulfoxide-free freezing media (Bulldog Bio).
This was followed by 30 min of incubation in ice-cold DSP (Thermo Fisher Scientific) supplemented with 10% fetal calf serum (FCS) and then neutralization in ice-cold 20 mM tris-HCl (pH 7.5). The organs were placed in a cryomold, sealed airtight, and slowly frozen overnight at −80°C. Cells and reagent dispensing into array The sciFLEXARRAYER S3 (Scienion AG) was used to dispense cells and reagents into the microwell arrays. Drop stability and array quality were assessed for each experiment. Before dispensing into the microwell array slides, Autodrop detection was used to assess drop stability and quantify the velocity, deviations, and drop volume for each reagent. Volume entry was used to determine the number of drops required to reach the total designated well volume. A distinct oligo(dT) primer (5′-CTACACGACGCTCTTCCGATCTNNNNNNNNNN[16-base pair unique spatial barcode]TTTTTTTTTTTTTTTTTT-3′, where "N" is any base; IDT) was spotted into each well of the array. During barcoding, the dew point control software monitored the ambient temperature and humidity, allowing dynamic control of the temperature of the source plate to maintain nominal oligo concentrations through the duration of the run. Barcoded slides were dried in the wells before storage. Reaction mix (Thermo Fisher Scientific) was added to the wells by the automated dispenser, with a 10% bleach wash between each probe to eliminate carryover contamination. Dissociation/permeabilization buffer was printed into each well on the day of the experiment, and the tissue section was loaded onto the microwell array slides. For all tissue experiments, DSP-fixed HEK293T cells were added at 5 µl (at 10 × 10 6 cells/ml) to the RT digestion mix before being dispensed across all the wells in the microarray. The average number of HEK293T cells was 58 per well; however, the absolute number of cells per well likely varied across the array because the cells were in suspension inside the dispensing nozzle. Cells harvested from the array after incubation were analyzed on the Aria (BD Biosciences), and datasets were analyzed using FlowJo software (Tree Star Inc.). Array fabrication Photoresist masters were created by spinning a layer of SU-8 2150 photoresist (Thermo Fisher Scientific) onto a 3-inch silicon wafer (University Wafer) at 1500 rpm and then soft baking at 95°C for 2 hours. The photoresist-layered silicon wafer was then exposed to ultraviolet (UV) light for 30 min through a photolithography mask (CAD/Art Sciences, USA) printed at 12,000 DPI (dots per inch). After UV exposure, the wafers were hard-baked at 95°C for 20 min, developed for 2 hours in a fresh solution of propylene glycol monomethyl ether acetate (Sigma-Aldrich), manually rinsed with fresh propylene glycol monomethyl ether acetate, and then baked at 95°C for 2 min to remove residual solvent. A polydimethylsiloxane (PDMS) mixture (Sylgard 184, Dow Corning, Midland) with a prepolymer:curing agent ratio of 10:1 was poured over the SU-8 silicon wafer master. This was placed in a 100-mm petri dish and cured overnight in a 70°C oven. This PDMS negative mold was peeled off the SU-8 silicon master the following day. The PDMS block was placed on a flat surface, and Norland Optical Adhesive 81 (NOA81) (Thorlabs) was poured into the mold to cover the entire surface. A slide was placed on top of the NOA-poured PDMS mold, and a transparent weight was placed on top. The NOA was cured for 2 min under UV light, flipping once halfway through the UV curing time.
Last, the PDMS mold was detached from the cured NOA microwell array slide (referred to as a microwell array chip). The dimensions of each hexagonal well are approximately 400 µm in height and 500 µm in diameter, with a volume of 0.04 mm³, which can hold 40 nl of liquid.

XYZeq methodology

The liver/tumor organ was mounted on a cryostat (Leica) and sliced at 25 µm for use as an XYZeq experimental sample, or mounted on a histology slide at 10 µm for immunohistochemical staining. On the day of the experiment, XYZeq microwell array chips were spotted with an RT cocktail mix spiked with DSP-fixed HEK293T cells. The microwell array chips were brought down to −80°C, and a tissue slice was placed on top of the array. A digital image was taken to document the orientation of the tissue before sandwiching a silicone gasket sheet between the XYZeq microwell array chip and a blank histology slide. The chip was placed in a microarray hybridization chamber (Agilent) to ensure an airtight seal during tissue digestion and RT. To recover high-quality RNA from fixed-frozen tissue, the microarray hybridization chamber housing the chip underwent a gradual, step-wise temperature increase to 42°C before the 20-min RT incubation. The chip was then removed from the chamber and placed in a 50-ml conical tube with 50 ml of 1× SSC buffer and 25% FCS. The tube was vortexed and spun down at 1000 rcf for 10 min. Excess volume was removed, and cells were filtered and stained with DAPI (4′,6-diamidino-2-phenylindole; Life Technologies) before sorting (BD Aria) into 96-well plates preloaded with 5 µl of the second RT mix. Plates were reverse-transcribed for 1.5 hours at 42°C, followed by PCR using 2× Kapa HotStart ReadyMix (Kapa Biosystems). PCR amplification was performed with an indexing primer (5′-AATGATACGGCGACCACCGAGATCTACAC[i5]ACACTCTTTCCCTACACGACGCTCTTCCGATCT-3′; IDT). Contents of the PCR plate were pooled into 2-ml Eppendorf tubes, and complementary DNA (cDNA) was purified with AMPure XP SPRI beads (Beckman). cDNA was tagmented and amplified with an Illumina Nextera library p7 index (IDT). The final library was analyzed by BioAnalyzer (Agilent), quantified by Qubit (Invitrogen), and sequenced on a NovaSeq or HiSeq 4000 (Illumina) (read 1: 26 cycles; read 2: 98 cycles; index 1: 8 cycles; index 2: 8 cycles).

XYZeq decontamination analysis

In our analysis, we recognized that some reads aligning to mouse genes were present in cells that otherwise had high alignment to the human genome. We suspected that these reads were ambient RNA contamination and sought to remove them. We first removed mouse-aligned transcripts with extremely high expression in the human cell population [n = 59, log(counts + 1) > 6]. The human cell population was considered a control in the contamination detection, because any ambient RNA from lysed cells was expected to contaminate both mouse and human cells. DecontX (20) was then used to estimate the contamination rate for different cell populations using the human-mouse mixture dataset and thereby derive a decontaminated count matrix from the raw data. Briefly, the algorithm applies variational inference to model the observed counts of each cell as a mixture of the true gene expression of its corresponding cell population and a contamination signature (from other cell populations) and then subtracts the contamination signature (fig. S3C).
By considering the human-mouse mixed-species experiment, we could remove those counts potentially contributing to collision and effectively account for all potential transcripts from lysed cells that contribute to ambient RNA. In fig. S3C, the initial estimated contamination rate for each mouse cell type is plotted, with median estimates ranging from 0.06% to 0.31%; the highest, an initial contamination fraction of 2.18%, was seen in the hepatocyte cluster. All downstream analyses were performed on the decontaminated data.

How distinctions were made between collision rate and contamination rate

The collision rate is calculated directly from the gene expression of the human-mouse mixture dataset, based on the ratio between mouse-aligned and human-aligned transcripts, while the contamination rate for each cell is estimated as a cell-specific parameter in the Bayesian hierarchical model via variational inference from DecontX. To specify the contamination rate, each cell has a beta-distributed parameter modeling the proportion of its transcript counts that come from its native expression distribution. The estimated contamination rate for each cell is the proportion of transcript counts attributed to contamination in the Bayesian model. Each transcript in a cell follows a multinomial distribution parameterized by either the native expression distribution of its cell population or the contamination from all other cell populations, according to a Bernoulli-distributed hidden state indicating whether the transcript comes from the native expression distribution or from the contamination distribution.

Cell species mixing experiment

A mixture of HEK293T and NIH 3T3 cells was deposited into wells in a gradient pattern across the columns of the array, with a total of 11 distinct cell-proportion ratios; columns were spotted at human cell-to-mouse cell ratios ranging from 100/0 to 0/100. The ratio of UMI-deduplicated reads aligning to either the human or the mouse reference genome was calculated for each cell, and cells with less than 66% of reads aligning to a single species were deemed barcode collision cells.

XYZeq single-cell analysis

Single-cell RNA sequencing reads were processed as previously described (17). Briefly, raw base calls were converted to FASTQ files and demultiplexed on the second combinatorial index using bcl2fastq v2.20. Reads were trimmed using Trim Galore v0.6.5, aligned to a mixed human (GRCh38) and mouse (mm10) reference genome, and UMI-deduplicated. Reads were then assigned to single cells by demultiplexing on the first combinatorial index, before construction of a gene-by-cell count matrix. The count matrix was processed using the Scanpy toolkit. Cells with fewer than 500 or more than 10,000 UMIs, as well as cells expressing fewer than 100 or more than 15,000 unique genes, were discarded. Cells with a mitochondrial read percentage above 1% were also discarded. Gene counts were normalized to 10,000 per cell, log-transformed, and further filtered for high mean expression and high dispersion using the filter_genes_dispersion function, with a minimum mean of 0.35, a maximum mean of 7, and a minimum dispersion of 1. Gene counts were then corrected using the regress_out function with total counts per cell and the percentage of mitochondrial UMIs per cell as covariates.
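A minimal sketch of this QC and normalization flow, written against the older Scanpy API that exposes filter_genes_dispersion; the input file name and the mitochondrial gene-name prefixes are assumptions, not taken from the paper:

```python
import scanpy as sc

adata = sc.read_h5ad("xyzeq_counts.h5ad")  # hypothetical gene-by-cell matrix

# Per-cell QC metrics, including the mitochondrial read percentage
adata.var["mt"] = adata.var_names.str.startswith(("mt-", "MT-"))
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

# Discard cells outside the UMI / gene-count / mito bounds given in the text
keep = (
    (adata.obs.total_counts >= 500) & (adata.obs.total_counts <= 10_000)
    & (adata.obs.n_genes_by_counts >= 100) & (adata.obs.n_genes_by_counts <= 15_000)
    & (adata.obs.pct_counts_mt <= 1.0)
)
adata = adata[keep].copy()

# Normalize to 10,000 counts per cell, log-transform, keep high-mean,
# high-dispersion genes, and regress out the stated covariates
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.filter_genes_dispersion(adata, min_mean=0.35, max_mean=7, min_disp=1)
sc.pp.regress_out(adata, ["total_counts", "pct_counts_mt"])
```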
Subsequent dimensionality reduction was done by scaling the gene counts to a mean of 0 and unit variance, followed by principal components analysis, computation of a neighborhood graph, and tSNE. Leiden clustering was performed with a resolution of 0.8, and cells were grouped to reveal distinct murine cell types and human HEK293T cells.

10x data processing

Count matrices were generated using the "count" tool from Cell Ranger version 3.1.0, using the combined human and mouse reference dataset (version 3.1.0) and the "chemistry" flag set to "fiveprime." The count matrix was processed using the Scanpy toolkit. Cells with fewer than 500 or more than 75,000 UMIs, as well as cells expressing fewer than 100 or more than 10,000 unique genes, were discarded. Cells with a mitochondrial read percentage above 7.5% were also discarded. Gene counts were normalized to 10,000 per cell, log-transformed, and further filtered for high mean expression and high dispersion using the filter_genes_dispersion function, with a minimum mean of 0.2, a maximum mean of 7, and a minimum dispersion of 1. Gene counts were then corrected using the regress_out function with total counts per cell and the percentage of mitochondrial UMIs per cell as covariates. Subsequent dimensionality reduction was done by scaling the gene counts to a mean of 0 and unit variance, followed by principal components analysis, computation of a neighborhood graph, and tSNE. Leiden clustering was performed with a resolution of 1, and cells were grouped to reveal major murine cell types and human HEK293T cells.

Heatmap for XYZeq

Mouse cells were subsetted from the XYZeq processed data matrix. The processed gene expression values were plotted in a heatmap with a minimum fold change of 1.5 and hierarchically clustered using the heatmap function from Scanpy, with the default settings of the Pearson correlation method and complete linkage.

XYZeq gene pairplot

Four slices of liver/tumor tissue were processed using the XYZeq assay (with HEK293T cells spiked in) and aligned to a joint human and mouse reference. All genes with at least one count in each slice were kept, and the counts across the common set of genes between pairwise slices were plotted in the lower triangle, with the Spearman correlation for the data shown in the upper triangle; every dot on these scatterplots is a gene, representing the per-gene mean across all cells in the slices. Along the diagonal, histograms were plotted showing the distribution of counts per gene for all the nonzero genes in each slice.

XYZeq cells per well pairplot

The pairplot shows the number of microwells containing pairwise combinations of cell types. For the scatterplots, each point represents a well, and its coordinate positions indicate the number of cells of each cell type present in that well. Along the diagonal are histograms showing the univariate distribution of cell number per well for the given cell type.

Heatmap comparing 10x to XYZeq

Mouse cells were subsetted from each of the processed data matrices. For pairwise mouse Leiden clusters found between XYZeq and 10x, the scaled and log-transformed gene expression values of common genes were plotted. For each comparison, a Pearson correlation was calculated and plotted in the heatmap. Row/column labels were ordered according to their corresponding cell types.

Correlation plot

Mouse cells were subsetted from each of the processed data matrices.
Proportions of each cell type (as determined by Leiden clustering and visualized using tSNE) were plotted, and the coefficient of determination was calculated by fitting to a model that assumes proportions are equal between the two assays.

Gene module analysis of top contributing genes

To identify gene modules using NMF, genes expressed in fewer than five cells and cells expressing fewer than 100 genes were filtered out. Variance-stabilizing transformation was performed on the count data, and confounding covariates, including the number of counts per cell, batch, and mitochondrial read percentage, were regressed out with a regularized negative binomial regression model using the SCTransform (48) function in the Seurat R package. Pearson residual values from the regression model were centered, and all negative values were converted to zero. Nonsmooth NMF (nsNMF) was performed on the resulting expression data with a rank value of 20 using the nmf (49) function in the NMF R package. In each module, genes were sorted in descending order by their magnitude in the corresponding coefficient matrix. Gene ontology enrichment analysis was performed on the sorted genes in each module using GOrilla (50). For each module, the top consecutive genes with higher coefficients in that module than in all other modules were further selected as the genes contributing the most to the module (51) in the tissue-specific analysis. Binary spatial plots were generated by first calculating the median expression across all cells for each well within each batch, based on the log-normalized gene expression data. We then extracted the mean expression across all genes within one module for each well and calculated the average of the mean expression across the selected module genes for each well, weighted by the number of cells in each well. Wells with a mean expression across genes above the weighted average were labeled as highly expressing that gene module, and all other wells with nonzero expression of the selected module genes were labeled as lowly expressing that gene module. tSNE plots representing the gene modules were colored by their mean expression of genes within the annotated module.

Overlap analysis between the gene modules identified in liver/tumor and spleen/tumor

Gene modules were first identified using nsNMF with a rank value of 20 for the two tissues, liver/tumor and spleen/tumor, respectively. The top 200 genes in each module's sorted gene list were selected as having high association with the module. For each module in the liver/tumor tissue, the spleen/tumor module with the largest gene overlap was initially matched as functionally similar. We then removed matched pairs with fewer than 25% overlapping genes out of the top 200 genes in the liver/tumor module. To calculate the cell type fractions that make up each module, the average expression of each gene across all cells was calculated. The median expression across all overlapping genes was then computed for each cell type and transformed into fractions by dividing by the sum of the median expression across all cell types.

Defining the proximity score by wells

We sought to define a score for each well of the hexagonal well array that would capture how centrally located a well was within either the tumor or nontumor tissue domains.
Central to the method was the determination of successive concentric "layers" of wells adjacent to a well in question: those corresponding to its immediate neighbors (layer 1), those wells exactly two wells away (layer 2), and so on, for n layers. In the spleen/tumor, we selected several wells on the far side of the tumor region and set the score of these wells to 1. We then took 10 successive layers of wells and decreased the score linearly with each layer, with the wells in layers 10 and beyond set to 0. In the liver, MC38 cells were found in different locations, and therefore, unlike the spleen, there was no single unidirectional spatial dimension along which to place all MC38 cells at one end and all nontumor tissue cells at the other. We therefore used an alternative approach to calculate these scores in the liver/tumor tissue. For each well \(w_{x,y}\), annotated by its (x, y) position on the hexagonal well array, we calculated the proportion of hepatocytes, \(p_{x,y}\), since hepatocytes were the most abundant parenchymal cell type in, and strictly associated with, the nontumor liver tissue:

\(t_{x,y}\) = number of hepatocytes and MC38 cells in \(w_{x,y}\)

\(h_{x,y}\) = number of hepatocytes in \(w_{x,y}\)

\(p_{x,y} = h_{x,y} / t_{x,y}\)

Then, for each well in question \(w_{x,y}\), we tabulated the surrounding wells in each of the 10 successive concentric layers; we denote these wells \(w_{x',y'}\) to differentiate them from the well in question. For each layer \(l\), we took its constituent wells' \(p_{x',y'}\) and calculated a cell-number-weighted average \(p_{x,y,l}\):

\(W_{x,y,l} = \{\, w_{x',y'} \in \text{layer } l \text{ of } w_{x,y} \,\}\)

\(t_{x,y,l}\) = number of hepatocytes and MC38 cells in \(W_{x,y,l}\)

\(p_{x,y,l} = \sum_{(x',y') \in W_{x,y,l}} \frac{t_{x',y'}}{t_{x,y,l}} \, p_{x',y'}\)

Then, for the well in question \(w_{x,y}\), we calculated a distance-weighted average of all the \(p_{x,y,l}\), and this became the proximity score \(s_{x,y}\) for that well. The distance weights for each layer, \(u_l = d^{-l}\), were based on an exponential decay, terminated at 10 terms and then normalized to 1 by dividing by the sum of all weights \(u_s = \sum_{l=1}^{10} u_l\). We gave equal weight to \(p_{x,y}\) itself and to the value for the layer-1 neighbors \(p_{x,y,1}\). A decay factor \(d\) of 1.05 was chosen empirically, as it appeared to create the most uniform distribution of the scores across all wells. These calculations were repeated for all wells containing at least one murine cell.

Trajectory inference analysis

Genes expressed in fewer than five cells and cells expressing fewer than 100 genes were excluded. Variance-stabilizing transformation was performed using the SCTransform (48) function in the R Seurat package. The resulting corrected count data for MSCs in one tissue were used as the count matrix input for trajectory inference analysis, using the tradeSeq (41) package in R. Genes whose expression is associated with the proximity score were identified with the associationTest function in tradeSeq, based on a Wald test under a negative binomial generalized additive model. P values were corrected using the Benjamini-Hochberg multiple testing procedure, and genes with corrected P values smaller than 0.05 were considered significantly associated with the proximity score.

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/7/17/eabg4755/DC1
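Returning to the liver proximity score defined above, here is a conceptual sketch under the stated assumptions about the weights (u_l = d^-l, with the well itself given the same weight as its layer-1 neighbors); the enumeration of concentric layers on the hexagonal grid is assumed precomputed, and all names are ours, not the authors':

```python
import numpy as np

def proximity_score(well, layers, hep, tot, d=1.05, n_layers=10):
    """Distance-weighted hepatocyte proportion for one well.

    well   -- (x, y) index of the well in question
    layers -- dict mapping well -> list of lists; layers[well][l-1] holds
              the wells exactly l steps away on the hexagonal grid
    hep    -- dict: hepatocyte count per well
    tot    -- dict: hepatocyte + MC38 count per well
    """
    p_layers = []
    for l in range(1, n_layers + 1):
        ring = layers[well][l - 1]
        t_l = sum(tot[w] for w in ring)
        # The cell-number-weighted average of p over a ring reduces to
        # (total hepatocytes in ring) / (total cells in ring)
        p_layers.append(sum(hep[w] for w in ring) / t_l if t_l else 0.0)

    u = np.array([d ** -l for l in range(1, n_layers + 1)])  # decaying weights
    p_self = hep[well] / tot[well]
    # The well itself gets the same weight as its layer-1 ring
    weights = np.concatenate(([u[0]], u))
    values = np.array([p_self] + p_layers)
    return float(weights @ values / weights.sum())
```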
Biomarkers in Rheumatoid Arthritis

The utilization and identification of biomarkers in rheumatoid arthritis (RA) to facilitate timely diagnosis and optimal management of the disease is an area of active investigation. This review focuses on biomarkers available for routine clinical use, details potential investigational biomarkers, and raises outstanding clinical questions. Both RF and anti-CCP2 have similar sensitivities for the diagnosis of RA, but anti-CCP2 is more specific [14]. Anti-CCP2 is positive in 20%-30% of RA patients who are negative for RF [15]. A systematic review and meta-analysis that included 37 studies of anti-CCP2-positive patients and 50 studies of RF-positive patients showed pooled sensitivities for RF and anti-CCP2 of 69% and 67%, and pooled specificities of 85% and 95%, respectively [16]. This being said, anti-CCP2 positivity may be found in other rheumatologic diseases (e.g., myositis, Sjogren's syndrome), especially in the setting of erosive inflammatory arthritis [17]. Anti-CCP2 positivity may also occur with active pulmonary tuberculosis, albeit with minimal rheumatologic symptoms [18]. High-titer RF and anti-CCP2 antibodies are both associated with an increased risk of erosive joint damage; anti-CCP2 antibodies may confer a higher risk than RF [19][20][21]. High-titer anti-CCP2 is associated with better clinical response to certain biologics (rituximab, abatacept) and thus may aid clinicians in personalizing therapy for the greatest chance of response [22][23].

Erythrocyte Sedimentation Rate (ESR) and C-reactive Protein (CRP)

The ESR, the rate at which erythrocytes fall through plasma when suspended in a vertical tube, is an indirect measure of the levels of acute-phase reactants (mainly fibrinogen). ESR levels are influenced by several factors, such as the size, shape, and number of red blood cells, as well as other plasma constituents like immunoglobulins. Elevated ESR levels may be caused by systemic or local inflammatory processes, infection, malignancy, tissue injury, end-stage renal disease, nephrotic syndrome, and obesity. ESR values increase with age and are slightly higher in women than men. Furthermore, many factors may contribute to spuriously low ESR values, like abnormal erythrocyte shape, extreme leukocytosis, heart failure, and cachexia [24]. Not surprisingly, the ESR is not a specific marker of inflammation. CRP is an acute-phase reactant in the pentraxin protein family, which comprises pattern recognition molecules involved in the innate immune response [25]. CRP elevation occurs in both acute and chronic inflammatory states, infectious and noninfectious. Low-grade CRP elevation is associated with various metabolic stressors, including but not limited to atherosclerosis, obesity, type 2 diabetes, sedentary lifestyles, unhealthy diet, and even being unmarried [26][27]. CRP levels vary with age, sex, and race, though less so than ESR levels [28]. Furthermore, there is no standardized reference range or unit of measure for CRP values; these vary between laboratories [29]. In the RA synovium, there is an overabundance of proinflammatory cytokines that stimulate the production of CRP by the liver, making CRP an attractive candidate as a disease activity biomarker [30]. However, CRP measurement in RA is not foolproof. For example, elevated CRP levels have been independently associated with truncal adiposity in women with RA, regardless of articular involvement or the use of biologic agents [31].
Although ESR and CRP measurements are imperfect, both continue to play a role in the diagnosis and management of RA. Elevated ESR and CRP levels are included in the 2010 ACR/EULAR Classification Criteria for RA [2]. CRP values of less than or equal to 1 mg/dL are included in the 2011 ACR/EULAR definition of RA remission used in clinical trials [32]. The ACR has endorsed six RA disease activity measures for use in clinical practice, two of which include ESR or CRP measurement: the Disease Activity Score 28 with ESR or CRP (DAS28-ESR or DAS28-CRP) and the Simplified Disease Activity Index (SDAI) [33]. The 2015 ACR Guideline for the Treatment of RA, widely used in clinical practice, encourages the use of these disease activity measures, though it does not specify a preference for measures that include laboratory values over those that do not. The guidelines also do not specifically recommend routine monitoring of ESR and CRP in all RA patients [34]. An update to these treatment guidelines is currently in progress, anticipated in fall 2021. Multiple studies have shown a correlation between ESR and CRP elevation and radiographic and functional outcomes in patients with RA [30,35]. Elevated ESR is thought to be a better predictor of these outcomes in early RA, whereas CRP may be superior in later stages of the disease given its lower susceptibility to other factors like immunoglobulin levels and anemia [30]. This being said, ESR and CRP are normal in about 40% of patients with RA [36][37]. Furthermore, even in patients with baseline elevation, values may remain stable despite clinical improvement with treatment [38]. Interestingly, ESR and CRP values may also be discrepant [24]. A large observational study that included over 9,000 patients from a practice-based registry noted discordant ESR and CRP values in 26% of patients, despite active RA as measured by joint counts and global assessments [39]. When results are discordant, they may no longer predict the progression of radiographic joint damage [40]. Lastly, biologic therapies like tocilizumab, a humanized monoclonal antibody against the interleukin-6 receptor, will normalize CRP values, eliminating the utility of CRP as a trackable disease activity biomarker.

Multi-Biomarker Disease Activity (MBDA) Test

The MBDA test is a commercially available assay that measures 12 serum protein biomarkers and applies an algorithm to summarize the information into a single score that indicates the level of "RA inflammation." The following biomarkers are tested: vascular cell adhesion molecule-1 (VCAM-1), epidermal growth factor (EGF), vascular endothelial growth factor A (VEGF-A), interleukin 6 (IL-6), tumor necrosis factor receptor type 1 (TNF-R1), matrix metalloproteinase-1 (MMP-1), matrix metalloproteinase-3 (MMP-3), human cartilage glycoprotein 39 (YKL-40), leptin, resistin, serum amyloid A (SAA), and CRP [41]. A 2019 systematic review and meta-analysis identified eight studies that reported correlations between the MBDA and RA disease activity measures currently used in clinical trials and clinical practice. There was a modest correlation between the MBDA, DAS28-CRP, and DAS28-ESR, with weaker correlations observed for the SDAI, Clinical Disease Activity Index (CDAI), and Routine Assessment of Patient Index Data (RAPID3) [42]. However, a subsequent post hoc analysis of data from the AMPLE trial (abatacept versus adalimumab for RA) showed disagreement between the MBDA test score and these measures [43][44].
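For orientation, the composite measures mentioned above are simple closed-form scores. The sketch below computes DAS28-ESR and SDAI from their standard published formulas; the formulas come from the general rheumatology literature rather than this review, and the patient values are hypothetical:

```python
import math

def das28_esr(tjc28, sjc28, esr, gh):
    """DAS28-ESR: tender/swollen joint counts out of 28 joints, ESR in mm/h,
    gh = patient global health on a 0-100 mm visual analog scale."""
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * gh)

def sdai(tjc28, sjc28, pga, ega, crp):
    """SDAI: patient/evaluator global assessments on 0-10 cm scales, CRP in mg/dL."""
    return tjc28 + sjc28 + pga + ega + crp

# Hypothetical patient: 6 tender and 4 swollen joints, ESR 30 mm/h, global 50/100
print(round(das28_esr(6, 4, 30, 50), 2))  # ~5.01 -> moderate-to-high disease activity
```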
One trial showed that the MBDA test may be useful in deciding whether or not to continue biologic therapy in the setting of clinical remission [45], and post hoc analyses have shown that a high baseline MBDA score is a strong independent predictor of radiographic progression at one year [46][47][48][49]. Further study is needed in both regards. At this time, the use of this test is not included in the 2015 ACR Guideline for the Treatment of RA [34]. The cost-effectiveness and role of this test in routine clinical practice remain controversial.

Investigational Biomarkers for Diagnosis

Several biomarkers are currently under study in hopes of improving the accuracy and timeliness of the diagnosis of RA. Approximately 20% to 25% of patients are classified as having seronegative RA (negative RF and anti-CCP2 testing). About half of these patients are seronegative early in the disease course but eventually become seropositive [14]. Seronegative RA patients experience a delay in diagnosis and a delay in initiation of therapy. Hence, they are less likely to attain remission and more likely to suffer joint damage and disability. This suggests a missed "window of opportunity" for intervention (within the first three to six months of illness) [50]. It is unclear whether these patients are truly seronegative, or whether they simply possess RA antibodies that are yet to be identified. Anti-mutated citrullinated vimentin (anti-MCV), an antibody in the ACPA family, has a similar specificity for RA as anti-CCP2. However, systematic review and meta-analysis of the literature did not reveal superior diagnostic accuracy to anti-CCP2, ultimately limiting the adoption of anti-MCV testing in routine clinical practice [51][52][53]. No study has specifically addressed whether adding anti-MCV testing to RF and anti-CCP2 testing would improve overall diagnostic accuracy for RA. Serum 14-3-3eta, an intracellular chaperonin protein, has been studied as a diagnostic biomarker in RA, but data to date have not been robust enough to defend its routine clinical use. When tested in addition to RF and anti-CCP2, it may minimally improve rates of diagnosis (from 72% to 78%) or reclassify individuals previously deemed seronegative [54][55][56][57]. Further study is needed in this regard. Antibodies to carbamylated proteins (anti-CarP) have been found in the serum of RA patients. As with the other novel biomarkers discussed, studies have not shown increased sensitivity or specificity compared to the RF and anti-CCP2 testing currently used in clinical practice [58].

Investigational Biomarkers for Disease Activity Monitoring

Given the limitations of ESR and CRP testing described above, the search for a clinically useful biomarker for disease activity monitoring persists. A biomarker that accurately identifies subclinical disease activity could help guide management decisions and lead to better patient outcomes. Multiple types of biomarkers are being investigated for the purpose of RA disease activity monitoring: serum acute-phase reactants, genetic factors, and tissue-specific markers from cartilage, bone, and synovium. IL-6, a prominent acute-phase reactant in RA, remains under investigation but unfortunately has not been found to correlate with the radiographic progression of the disease [59]. Genetic testing may ultimately play a role in prognosis and the selection of therapy, given the well-known association of RA with certain human leukocyte antigen (HLA)-DR alleles.
Synovium-specific markers of interest include serum hyaluronan, MMP-1, and MMP-3; these have been shown to correlate with radiographic progression [60][61]. Cartilage- and bone-specific markers under investigation include serum cartilage oligomeric matrix protein (COMP) and urine C-terminal crosslinked peptides from type I and type II collagen (CTX-I and CTX-II), among others [62][63]. Serum VEGF, a vascular marker, is elevated in RA patients and correlates with radiographic progression [64]. Synovial fluid biomarkers have also been identified, but their clinical utility would be limited given the requirement for arthrocentesis to perform testing. Ultimately, the combined use of multiple biomarkers may prove to be a more effective measure of disease activity. Future studies may follow in this regard.

Conclusions

To conclude, the biomarkers currently available for the diagnosis, prognosis, and management of RA have several limitations. RF lacks specificity, as any condition that triggers chronic antigenic stimulation may result in positive RF testing. Anti-CCP2 is more specific, but both tests fail to identify the 20%-25% of patients with seronegative RA. Disease activity monitoring remains clinical due to the lack of adequate biomarkers for this purpose. ESR and CRP are nonspecific acute-phase reactants that may be elevated for a myriad of reasons, and the role of the commercially available MBDA test remains unclear. Novel RA biomarkers of many types (e.g., serum, tissue-specific, genetic) are under active investigation for both diagnosis and disease activity monitoring. The rheumatology community eagerly awaits data in this regard. Many outstanding clinical questions remain that better biomarker identification could help answer. Does seronegative RA truly exist, or have we simply not yet identified the antibodies this subset of patients makes? Can we identify a biomarker that permits earlier RA diagnosis, widening the golden three- to six-month "window of opportunity" to intervene therapeutically? Can a universal biomarker be found that accurately identifies ongoing subclinical disease activity, permitting better titration of RA therapies? Can we identify biomarkers that allow for the personalized selection of RA therapies, permitting more rapid and effective disease control? Future work should pursue answers to these questions. Better biomarkers could lead to earlier diagnosis, earlier treatment, and better outcomes. Once a better biomarker is identified, the cost and feasibility of testing will need to be considered in order to ensure clinical utility on a worldwide scale.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Automatic body condition scoring system for dairy cows based on depth-image analysis

Body condition score (BCS) is an important management tool in the modern dairy industry and one of the basic techniques for animal welfare and precision dairy farming. The objective of this study was to use a vision system to evaluate the fat cover on the back of cows and to automatically determine BCS. A 3D camera was used to capture depth images of the back of cows twice a day as each cow passed beneath the camera. Through background subtraction, the back area of the cow was extracted from the depth image. The thurl, sacral ligament, hook bone, and pin bone were located via depth-image analysis and evaluated by calculating their visibility and curvature, and these four anatomical features were used to measure fatness. A dataset containing 4820 depth images of cows with 7 BCS levels was built, among which 952 images were used as training data. Taking the four anatomical features as input and BCS as output, decision tree learning, linear regression, and a BP network were calibrated on the training dataset and tested on the entire dataset. On average, the BP network model scored each cow within 0.25 BCS points of its manual scores during the study period. The measured values of visibility and curvature used in this study have strong correlations with BCS and can be used to automatically assess BCS with high accuracy. This study demonstrates that an automatic body condition scoring system has the possibility of being more accurate than human scoring.

Introduction

The metabolizable energy stored in fat and muscle is vital to maintaining dairy cows. Body weight alone is not a good representation of bodily energy reserves, as the relationship between these variables is affected by parity, stage of lactation, frame size, gestation, and breed [1][2][3][4]. Body condition has been defined as the ratio of body fat to nonfat components in the body [5]. Because direct measurements of body adiposity are difficult and expensive, multiple body condition score (BCS) evaluation systems have been developed to indicate and evaluate the relative amount of subcutaneous body fat or energy reserves of a live cow [6]. BCS evaluations can be used to determine whether a cow is in the proper condition for each stage of the lactation cycle. Using BCS information, appropriate dietary changes can be made to maximize the performance of cows [7]. Cows with unfavorable BCS are at high risk of metabolic and other diseases in the peripartum period. At calving, BCS must be sufficient to allow maximal milk production and health, but excessive BCS at this stage may result in calving difficulties and animal losses [6]. In early lactation, BCS is a vital indicator of excessive weight loss, which can lead to metabolic disorders and should be avoided [8]. At dry-off, parturition, and throughout the lactation cycle, BCS evaluations can be used to identify cows that are at risk of milk fever, mastitis, lameness, and infertility [9]; therefore, BCS is an important management tool for maximizing milk production and reproductive efficiency [9,10] and even preventing potential disease and lameness [6,11]. It is estimated that less than 5% of US dairy herd managers regularly assign BCS values to their cows [12]. Most dairy farms that conduct BCS in the US use a 5-point system [13].
This system measures the relative amount of subcutaneous fat in 0.25-point increments, where 1 denotes a very thin cow and 5 indicates an excessively fat cow [14]. The manual scoring of body condition requires experienced personnel with adequate training. Although a well-trained scorer can score one cow in a short amount of time, it is time consuming to score all the cows in a large herd on a daily basis. Additionally, the perception of fatness and the understanding of the BCS guidelines vary from person to person, which causes inconsistencies in data from different scorers. The dairy industry and research community have recognized the need for a quick and inexpensive but accurate technology with which to automatically measure body fat on cows in different stages of lactation. Bewley et al. [15] explored the possibility of developing an automatic BCS system based on 2D digital images. A total of 23 points were selected manually from each image to analyze the contour and shape of cows. The study showed that the hook angle, posterior hook angle, and tailhead depression were significant predictors of BCS. Following the above method, Azzaro et al. [16] developed an application with which to extract the 23 anatomical points. In that study, a shape descriptor based on principal component analysis was built and tested. Validation testing showed that the average error of the polynomial model was 0.31 BCS points relative to the manual scores. Several other studies have also been conducted to automate BCS evaluation based on 2D and thermal image processing technology [17][18][19]. As three-dimensional (3D) images contain information on the depth dimension of the body surface of a cow, they have great potential to improve the accuracy of automatic BCS systems. Weber et al. [20] developed an automatic 3D optical system to estimate the backfat thickness (BFT) of cows; this system has great potential for scoring body condition. The correlation coefficient between the observed and estimated BFT was 0.96. Fischer et al. [21] used a 3D camera to capture the surface of the rear of the cow, and four anatomical landmarks were identified manually from the surface. Principal component analysis was applied to the dataset, and the coordinates of these surfaces in the principal component space were used to build a multiple linear regression model with which to assess BCS. That system still requires some level of human involvement. DeLaval [22] released an automatic BCS system that provides continuous and daily BCS readings [23]; however, the high cost of the system hinders its popularization and application. Spoliansky et al. [24] developed an automatic BCS system using a low-cost 3-dimensional Kinect camera. In that study, 14 features were used to build prediction models. When the model was applied, 94% of the errors were under 0.75 BCS points compared to manual scores, and all errors were under 1 BCS point. However, the models required weight information in addition to depth images. Alvarez et al. [25] used a depth camera to capture images of the backs of dairy cows. After removing the background, the images were fed directly into a convolutional neural network for training. Finally, 1158 images were used for model training, and 503 images were tested. The results showed that 78% of the samples had a BCS error of less than 0.25. In light of the above studies, it is apparent that, because of either cost or accuracy, the proposed automated BCS systems fall short of the requirements for daily farm use.
As such, there is a considerable need for further research to improve the practicality of these systems. Therefore, the objectives of this study were to (1) develop a fully automatic 3D computer vision-based system with which to assess the BCS of cows with high accuracy (MAE < 0.25) and (2) explore the possibility of the automated system being more accurate than human scoring. In order to obtain information directly related to BCS from the depth image, four specific areas (bones or body structures) on the back of the cow were located automatically, and their visibility or curvature was analyzed for quantitative evaluation of fatness in each area. Machine learning algorithms and a regression analysis model were constructed to score the body condition accurately. The stability and consistency of the system were tested and analyzed by comparing automated BCS with human scores on a daily basis.

System setup

The data for this study were collected on the University of Kentucky Coldstream Dairy Research Farm. A group of 94 Holstein cows was milked twice a day, in the morning (5:00 AM) and afternoon (4:00 PM). All cows left the parlor through a roofed walkway and returned to the free-stall barn after milking. The walkway was paved with a concrete slab floor with walls on both sides. The width of the walkway was 1.03 m, which restricted the movement of the cows. A PrimeSense™ Carmine 1.08 RGB+depth sensor (PrimeSense™, Tel Aviv, Israel) was used to capture depth images of each cow's back contours as it walked through the return alley. The camera system was placed 3.05 m above the floor of the walkway, with its field of view covering the entire width of the walkway. The camera was connected to a computer in the dairy office via a 30-m active repeater USB 2.0 cable. The images taken by the camera were sent to the computer and stored on the hard drive. The resolution of the depth images was 320×240 pixels, with a frame rate of 30 fps. As shown in Figure 1, the walls of the walkway ensured that the cows were always located in the middle area of the frame and oriented roughly parallel to the midline of the image. To obtain a single file for each cow, we developed software to record the depth images as each cow passed beneath the system. Four fixed lines in the image scene were used to trigger and stop the recording. As the cow walked from the left to the right side of the scene, data recording started when the cow's nose reached the fourth line. Depth frames were then captured continuously until the tail end of the cow passed the first fixed line. At the beginning of the recording, the software saved the initial image (without a cow in it) as the background image to perform background subtraction later.

Data acquisition

Depth images of 94 cows were obtained twice a day from April 1 to June 7, 2014. Over this time, three independent human scorers manually scored every cow in the group once a week, on the same day when possible or, at most, within a few days of one another. The median of these three scores for one cow in one week was assigned as the score for that cow that week. Within-cow outliers were removed by comparing the BCS values obtained during successive weeks. When a given BCS differed from the preceding and subsequent scores by more than ±0.25, the score was removed from the dataset (a minimal sketch of this rule follows). The objective of this editing technique was to remove individual BCS values that were clearly inconsistent with the scores for an individual cow in a short time frame.
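A minimal sketch of the within-cow outlier rule, with hypothetical column names (the paper does not specify its data layout):

```python
import pandas as pd

def drop_within_cow_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Drop a weekly median BCS that differs from both the preceding and
    the subsequent week's score by more than 0.25 for the same cow."""
    df = df.sort_values(["cow_id", "week"]).copy()
    prev = df.groupby("cow_id")["bcs"].shift(1)
    nxt = df.groupby("cow_id")["bcs"].shift(-1)
    outlier = ((df["bcs"] - prev).abs() > 0.25) & ((df["bcs"] - nxt).abs() > 0.25)
    return df[~outlier.fillna(False)]
```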
Because very fat cows were rare in the herd (less than 1.5%), cows with scores above 3.75 were assigned a score of 3.75 to eliminate outliers. Over the two-month data collection period, a total of 94 unique cows were examined at various stages of lactation and levels of body condition. The camera recordings of each cow in a given week were paired with the human-evaluated BCS for that week. The dataset contained 4820 images from 94 cows and their related BCS values from 2.25 to 3.75 (7 classes). N images were randomly selected from each class to build a training dataset: with M_i denoting the number of images in class i, N is the minimum of the M_i. Through this approach, the model was trained at the same level for each class, and overtraining on the larger classes was avoided. Table 1 illustrates the number of images (M_i) in the different BCS classes and the proportion of training data in each class. Eventually, 952 images were selected as training data, i.e., 136 images from each BCS class (N = 136). The BCS values of the training dataset were normally distributed. The classification and regression models were calibrated on the training dataset and then tested on the training dataset and the entire dataset, respectively.

Feature definition

The image features used in this study were selected because they are potential indicators of BCS. Body fat reserves on the sacral ligament were measured using the convex hull. Surface curvature was used to define the sharpness of the hook and pin bones. Overall back fatness was also evaluated.

Convex hull. As shown in Figure 2, the sacral ligament has a concave curve on thin cows, and the curve is less concave on fat cows. As a thin cow gains fat on the sacral ligament, the concave parts of the curve are filled in, and the concave curve finally becomes a convex curve. In this paper, the convex hull is defined as a convex curve that is tangent to the concave line and lies at a minimum distance from it; therefore, the convex hull can be a tool with which to simulate how the sacral ligament would look if a cow gained body fat. In this study, the visibility of the sacral ligament was measured by the space between the convex hull and the outline of the sacral ligament. Figure 3 illustrates the flowchart for drawing the convex hull of a discrete concave curve. For a discrete concave curve containing P points, two points are iteratively selected from the curve to draw line_{i,j}. If line_{i,j} is tangential to the curve, then it belongs to the convex hull of this curve; otherwise, the points are discarded, and a new pair is selected and tested. (Figure 3: Flowchart of drawing the convex hull for a concave curve.) The convex hull of the sacral ligament of a cow was calculated based on the above algorithm and is shown in Figure 4. The space between the convex hull and the sacral ligament indicates the potential space in which the cow can carry fat reserves. The average distance between the convex hull and the sacral ligament was calculated using Equation (1) to evaluate the visibility of the sacral ligament quantitatively:

VSL = (SES / LSL) × (ASL / WSL) (1)

where VSL is the visibility of the sacral ligament; SES is the area of the total space between the convex hull and the sacral ligament; LSL is the length of the sacral ligament; WSL is the width from the left hook bone to the right hook bone; and ASL is the average of WSL over the dataset. ASL/WSL is a coefficient used to eliminate the effect of individual size and shape on the VSL; VSL is independent of the height of the cow.
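A minimal sketch of the tangent-line test from Figure 3 and of Equation (1), assuming the ligament outline is an ordered array of (x, height) points; the written-out form of Equation (1) above and the use of the x-extent as LSL here are our reconstructions, and in practice a library routine such as scipy.spatial.ConvexHull could replace the naive O(P²) loop:

```python
import numpy as np

def upper_hull_segments(points: np.ndarray, tol: float = 1e-9) -> list:
    """Naive tangent test: a segment between two curve points belongs to the
    (upper) convex hull if no curve point lies strictly above it.
    points: (P, 2) array of (x, height), ordered by increasing x."""
    segments = []
    P = len(points)
    for i in range(P):
        for j in range(i + 1, P):
            (x1, y1), (x2, y2) = points[i], points[j]
            # Signed area test: positive where a point lies above line (i, j)
            cross = (x2 - x1) * (points[:, 1] - y1) - (y2 - y1) * (points[:, 0] - x1)
            if np.all(cross <= tol):  # tangential: whole curve on or below the line
                segments.append((i, j))
    return segments

def vsl(curve: np.ndarray, hull_heights: np.ndarray, wsl: float, asl: float) -> float:
    """Equation (1): VSL = (SES / LSL) * (ASL / WSL)."""
    ses = np.trapz(hull_heights - curve[:, 1], curve[:, 0])  # gap area
    lsl = curve[-1, 0] - curve[0, 0]                         # ligament length (x-extent)
    return (ses / lsl) * (asl / wsl)
```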
When the VSL is close to 0, the sacral ligament is barely visible. (Figure 4: Space between convex hull and outline of sacral ligament.)

Surface curvature. The fat reserves on the hook bone and pin bone were measured by the surface curvature (SC). The bones of thin cows appear sharper than those of fat cows; therefore, the SC of the bones is larger in thin cows. In this study, the SC of a piece of surface was defined as the ratio of the superficial area to the shadow area of that piece of surface; the SC is therefore independent of the height and size of the surface. Figure 5 illustrates the hook bones of a thin cow and a fat cow. In Figure 5a, the hook bone is sharp, and the SC is 1.4; meanwhile, the hook bone in Figure 5b has a flat surface and an SC of 1.17.

Back fatness. Because the 5-point BCS system mainly focuses on the area of a cow's back above the thurl, a depth threshold, denoted by DT, was used to segment the region of interest in the depth image. For the j-th column from the back to the front of the cow, the height of the point on the spine in that column is denoted by Hsp_j. The points in the depth image were filtered according to the rule described by Equation (2): points that deviated from Hsp_j by less than DT were reserved; otherwise, the points were discarded. Information unrelated to BCS in the depth image was filtered out through this process:

Mask_{i,j} = 1 if Hsp_j − H_{i,j} < DT, and Mask_{i,j} = 0 otherwise (2)

where Mask_{i,j} indicates whether the point will be reserved (Mask = 1) or discarded (Mask = 0); Hsp_j is the height of the point on the spine in the j-th column; and H_{i,j} is the height of the point in the i-th row and j-th column. DT was set to 100 in this study. Thin cows had visible thurls, with additional points classified as belonging to the thurl area. Therefore, the visibility of the thurl (VTH) can be evaluated by calculating the ratio of the area of the image after and before image cropping using Equation (3):

VTH = APH_after / APH_before (3)

where APH_after and APH_before are the areas of the region from pin bone to hook bone after and before the image cropping, respectively. The methods for locating the pin bone and hook bone are presented in the following sections. VTH is independent of the height and size of the cow.

Image processing

Background subtraction. The first 1200 camera depth frames without cows were used to build the background image. The background image was then continuously updated to avoid potential error from any single background image. The number of frames for building the background was set to 1200 to ensure that the depth-frame sequence contained at least one depth frame that included the floor and walls of the scene. Background subtraction and threshold processing were performed to obtain pure depth data of the cow. After background subtraction, the depth value of each pixel on the cow's body was converted into the distance to the floor by adding the height of the camera, which was 3050 mm.

Image rotation. During the movement of a cow, its body axis may deviate from the horizontal axis of the image, which has a great influence on the subsequent image processing; therefore, the body axis of the cow needs to be detected and corrected. The spine of the cow was detected by finding the highest point in each column of the depth image, and a line was fitted to this group of points. The image was then rotated according to the angle between the x-axis and the fitted line (a minimal sketch of this step follows).
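A minimal sketch of the spine detection and rotation step, using NumPy and OpenCV; the array layout and names are ours, not the authors':

```python
import cv2
import numpy as np

def rotate_to_spine(depth: np.ndarray) -> np.ndarray:
    """depth: 2D array of heights above the floor (0 = background)."""
    cols = np.where(depth.max(axis=0) > 0)[0]        # columns containing the cow
    spine_rows = depth[:, cols].argmax(axis=0)       # highest point per column
    # Fit a line row = a*col + b through the spine points
    a, b = np.polyfit(cols, spine_rows, 1)
    angle = np.degrees(np.arctan(a))                 # tilt of body axis vs x-axis
    h, w = depth.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    # Nearest-neighbor interpolation avoids blending depth values
    return cv2.warpAffine(depth.astype(np.float32), M, (w, h),
                          flags=cv2.INTER_NEAREST)
```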
The symmetry of the rotated cow was tested by calculating the overall difference between the left and right sides of the cow as defined by the line of symmetry [24].

Image crop. A width threshold was used to eliminate the tail from the depth image. Measuring from the back to the front of the cow, if the width of the pixels in a column was less than a certain threshold, the pixels in that column were set to 0 to remove the tail.

Hook bone and sacral ligament detection. The contour of the back of the cow was divided into left and right parts by a symmetry line. As shown in Figure 6, the y-coordinate and size of the bump were determined by analyzing the slope of the sacral ligament along with that of the convex hull. By combining the x-coordinate of the sacral ligament with the y-coordinate and size of the bump, the hook bones in the depth image were detected, as shown in Figure 6. The surface curvatures of the left and right hook bones were calculated, and their average was denoted by CHB. The sacral ligament was isolated by connecting points A and B and extracting the slice along this line from the depth image. After the sacral ligament was extracted from the depth image, its visibility was calculated as VSL using Equation (1).

Removal of tailhead and detection of pin bones. The tailhead caused a discontinuous change in the depth image and had a major influence on the analysis of the pin bones; therefore, it was necessary to remove the tailhead from the depth image. Figure 7 illustrates the slice containing the tailhead and pin bones. As shown in Figure 7, the tailhead caused two drop points, D1 and D2, in the slice. At points D1 and D2, the distances between the convex hull and the slice were at their maximum values, denoted M1 and M2, respectively. The points between D1 and D2 were set to 0 to remove the tailhead. From the back to the front of the cow, M1 and M2 decreased; if the mean of M1 and M2 was smaller than a predetermined threshold, the slice was defined as the end of the tailhead, as shown in Figure 7. The areas separated by the removed tailhead on the left and right sides were the left and right pin bones, respectively. The surface curvatures of the left and right pin bones were calculated, and their average was denoted by CPB. For fat cows, the pin bones are entirely hidden, and the tailhead is dramatically shorter than in thin cows. In this study, if the length of the tailhead of a cow was less than 50 mm, the pin bones of that cow were considered entirely hidden, and CPB was set to 1. For a specific cow, before any further calculations, the length of the tailhead was multiplied by ASL/WSL, defined in Equation (1), to eliminate the effect of individual cow size.

Prediction models

For each selected depth image, the image processing generated four features (i.e., VTH, VSL, CHB, and CPB) according to the methods described above. The distributions of these four features among BCS classes were analyzed with boxplots, and one-way analysis of variance (ANOVA) was used to test for differences in the measured features with regard to BCS class (sketched below). Decision tree learning was used to build a classification model to predict BCS values based on the features. Linear regression and a backpropagation (BP) neural network were used to build regression models for continuous BCS evaluation.
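A minimal sketch of the per-feature one-way ANOVA across BCS classes, using SciPy; the DataFrame layout is hypothetical:

```python
import pandas as pd
from scipy import stats

def anova_by_class(df: pd.DataFrame, feature: str):
    """df has one row per image, with a 'bcs' column (7 classes) and columns
    for the four features VTH, VSL, CHB, and CPB."""
    groups = [g[feature].values for _, g in df.groupby("bcs")]
    return stats.f_oneway(*groups)  # returns (F statistic, p value)

# e.g.: for feat in ["VTH", "VSL", "CHB", "CPB"]: print(feat, anova_by_class(df, feat))
```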
The features and the BCS values were normalized to a range of 0 to 1 before they were applied to the regression models.

Decision tree learning. Decision tree learning [26] was utilized to classify the four feature variables into seven BCS levels (2.25 to 3.75 in 0.25 intervals) according to the given scores. This predictive model maps a group of observations of an item to the target value of the item. In these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels via if-then rules. The goal of training is to find the optimal criterion for the if-then rule in each branch of the tree. In the prediction phase, the trained model takes several variables from one observation as input, and a path from the root node to a leaf node is determined by comparing the variables against the criterion of the if-then rule in each branch. The class label that the leaf node represents is the output of the model.

Linear regression. The linear regression model operated on the assumption that the larger the four features were, the thinner the cow was. When the four features were all 0, the cow was expected to have a high body condition score. It was assumed that the human evaluator scored each cow starting from the highest body condition score in the herd and then reduced the score according to the perceived sharpness of the thurl, sacral ligament, hook bone, and pin bone. Based on that assumption, a model for BCS regression was designed as follows:

BCS = µ − w1×VTH − w2×VSL − w3×CHB − w4×CPB (4)

where µ is the highest score in the herd, and w1, w2, w3, and w4 are the scores subtracted from µ due to the sharpness of the thurl, sacral ligament, hook bone, and pin bone, respectively. VTH and VSL are the visibility of the thurl and sacral ligament, respectively; CHB and CPB are the curvature of the hook bone and pin bone, respectively.

BP neural networks. The BP neural network is a multilayer feedforward network trained according to an error backpropagation algorithm, and it is one of the most widely used neural network models [27]. A BP network can learn and store the mapping relations of an input-output model without the need to specify in advance the mathematical equation that describes these mapping relations [28]. Due to this characteristic, the BP network is a feasible way to regress the relationship between the inputs and the output, regardless of whether it is linear or nonlinear. In the network in this study, the input layer contains four neurons, one for each feature, followed by two hidden layers with five neurons each. The output of the model is the body condition score predicted from the input features. The neurons in adjacent layers were fully connected. The transfer functions of the hidden layers and the output layer were 'tansig' and 'purelin', respectively. The maximum number of training epochs was set to 100. The learning rate and training goal were 0.1 and 0.0004, respectively.

Model Evaluation

The decision tree model was evaluated on the entire dataset (D_E) and the training dataset (D_T) with 3-, 5-, and 10-fold validation. K-fold validation was not applicable to D_E because the numbers of samples in the different classes of D_E were not even. The linear regression and BP network models were built on D_T and tested on D_E (a sketch of the three models follows).
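A minimal sketch of the three predictors, using scikit-learn as a stand-in (the paper's exact toolchain is not stated; the 'tansig'/'purelin' transfer functions suggest a MATLAB implementation, mirrored here by tanh hidden units and a linear output). The placeholder data stand in for the normalized features and scores:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((952, 4))  # placeholder for [VTH, VSL, CHB, CPB], scaled to [0, 1]
y = rng.choice(np.arange(2.25, 4.0, 0.25), size=952)  # placeholder BCS labels

# Decision tree treats BCS as 7 categorical levels
tree = DecisionTreeClassifier().fit(X, y.astype(str))

# Linear regression corresponds to Equation (4): BCS = mu - w . features
linreg = LinearRegression().fit(X, y)
print(linreg.intercept_, linreg.coef_)  # estimates of mu and -w1..-w4

# 4-5-5-1 feedforward network: tanh hidden layers, linear output,
# learning rate 0.1, at most 100 epochs
bp = MLPRegressor(hidden_layer_sizes=(5, 5), activation="tanh", solver="sgd",
                  learning_rate_init=0.1, max_iter=100).fit(X, y)
print(mean_absolute_error(y, bp.predict(X)))
```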
The three models were evaluated by the mean absolute error (MAE), the rate of correct classifications (only for the decision tree model), and the rates of predicted scores within 0.25 and 0.5 BCS points of the manual scores. The correlation (R²) between the results of each model and the target BCS values was also calculated to evaluate the two regression models. The visibility of the sacral ligament had the strongest correlation with BCS, followed by the curvature of the pin bone (r = −0.85) and the curvature of the hook bone (r = −0.75). The visibility of the thurl (r = −0.73) had the weakest correlation with BCS among the four predictive features. The four features were positively correlated with each other (p < 0.01). Among the features, the correlation between the visibility of the sacral ligament and the curvature of the pin bone was the strongest (r = 0.74), while the correlation between the visibility of the thurl and the curvature of the hook bone was the weakest (r = 0.50). Other correlations among the features ranged from 0.62 to 0.70. Most features differed significantly between BCS levels, except for VTH values between BCS 3.5 and 3.75, CHB values between BCS 3 and 3.25, and CHB values between BCS 3.5 and 3.75. Two groups of data cannot be distinguished from each other if they are not significantly different; however, the other features in these BCS groups (i.e., VSL and CPB) are significantly different and provide enough variability to classify and predict these BCS values. The distributions and medians of VTH values in BCS 2.5 and 2.75 are close; however, the ANOVA test shows that they are significantly different and come from different normal populations, meaning their variances differ and can provide variability for classification and regression. Compared to the other features, the VSL showed an improved linear relationship with BCS and reduced interclass overlap. The VSL of a fat cow ranges from 2 mm to 6 mm, which indicates that the sacral ligament is barely visible. The CHB and CPB ranged from 1 to 1.5. The CHB dropped sharply as BCS increased from 2.25 to 2.75, while the tendency became flat for BCS values greater than 3, showing a nonlinear relationship between CHB and BCS. The CPB had a similar tendency for BCS values from 2.25 to 2.75 and from 3 to 3.5; however, the drop from 2.75 points to 3 points was considerable. Table 3 illustrates the classification results of decision tree learning using 3-, 5-, and 10-fold cross-validation based on the training dataset (D_T) and the entire dataset (D_E). As shown in Table 3, the accuracies of the 3-, 5-, and 10-fold cross-validations were similar, but the model achieved the highest accuracy with 10-fold cross-validation, in which 95.38% and 99.68% of samples were classified within 0.25 and 0.5 BCS points, respectively, of the manual scores. The accuracy of classification improved by 5.25% when the K value was increased from 3 to 10. When the decision tree was trained on D_T and tested on D_E, the result was not significantly different from that obtained with 10-fold cross-validation on D_T.

Results of regression

The BCS regression model was fitted to the training dataset (D_T), with the following result:

BCS = 3.94 − 0.35×VTH − 0.71×VSL − 0.67×CHB − 0.72×CPB (5)

The parameters of the model indicate that the highest theoretical score in the herd was 3.94 and that a sharp thurl, sacral ligament, hook bone, and pin bone reduce the BCS by up to 0.35, 0.71, 0.67, and 0.72, respectively.
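As a worked example of Equation (5), with hypothetical feature values on the normalized 0-to-1 scale used for the regression:

```python
# Equation (5) from the fitted linear model; the feature values below are
# illustrative only, not measurements from the paper.
def bcs_eq5(vth, vsl, chb, cpb):
    return 3.94 - 0.35 * vth - 0.71 * vsl - 0.67 * chb - 0.72 * cpb

print(round(bcs_eq5(0.2, 0.5, 0.4, 0.3), 3))  # 3.031 -> a cow in moderate condition
```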
The R² of the regression was 0.88 (p < 0.01), as shown in Table 4. The BP network took 12 epochs to finish training on D_T. The linear regression and BP network were tested on D_T and D_E; the results are shown in Table 4. For the same model, the rates of predicted scores within 0.25 and 0.5 BCS points were not significantly different when using D_T or D_E as testing data. However, the R² dropped by 0.08 and 0.07 for the linear regression model and the BP network, respectively, when they were tested on D_E. In general, the BP network achieved higher accuracy than linear regression on all performance indicators, especially the proportion of results within 0.25 BCS points of the manual scores; in this respect, the BP network was over 5% more accurate than linear regression. The correlations between the camera and manual BCS values were analyzed to evaluate the regression models; the results are shown in Figure 9. The two regression models were similar in overall performance, but the predicted scores from the BP network were more concentrated than those from linear regression.

Predicted scores for individual cows. Figure 10 illustrates the MAE and standard deviation of the BCS error of each cow during the study period (sorted by MAE) when the BP network was used as the prediction model. The average MAE was 0.11, and all cows had MAEs lower than 0.25. The average standard deviation was 0.069, and 95% of the per-cow SD values were less than 0.1. Figure 11 illustrates four patterns of camera scores versus manual scores in four selected cows. In Figure 11a, the predicted scores were close to the manual score throughout the study period, with an MAE of 0.02. The result in Figure 11b was the most common case among cows, where all intervals were less than 0.25 and the average MAE of the cow was 0.13. Figure 11c shows the cow with the maximum MAE (0.22) in Figure 10. In Figure 11d, the cow had an abortion, and its BCS dropped from 3 to 2.5 in the first 21 days of lactation 2; the predicted scores tracked that change successfully, and even detected it earlier than the manual scores did, with an MAE of 0.09.

Discussion. Our work describes a method for scoring the body condition of cows with high accuracy. The system was developed and tested on longitudinal data (2 months) from 94 cows. Four features were extracted from depth images, and three models were used to predict the scores from these features. Compared to previous studies, this study improved the percentage of predicted scores within 0.25 BCS points of manual scores, raising this proportion above 90% using a decision tree and a BP network model in a fully automatic system. The four features identified in this study all have strong (negative) correlations with BCS. A linear model was built to estimate the weights of the features when the human assigned the scores. The model partly explains the human scoring procedure and yields the theoretical highest score in the herd. Due to the nonlinear characteristics of the features, the accuracy of the linear model was lower than that of the BP network model, which can capture both linear and nonlinear relationships. The results showed that 95.48% of the samples were scored within 0.25 BCS points of the manual score using the decision tree learning model, a higher accuracy rate than that of the linear regression model (86.14%) or the BP network (91.68%).
This demonstrates that the decision tree is better suited to classification than the other models when the scores are treated as categorical data. In this study, the hook bone and pin bone were detected, and their curvatures were used as indicators of fatness. Bewley et al. (2008) used 2D digital images to analyze the outline of the backs of cows and generate hook and tailhead descriptors related to BCS; the results showed that the hook angle, posterior hook angle, and tailhead depression were significant predictors of BCS. In our study, angle descriptors were not included in the model because the edge of the animal in the depth image strongly affects the 3D points close to the outline, which reduces the accuracy of angles calculated from the outline in the 3D image. Descriptors of the sacral ligament and thurl were explored in this study because these two areas are frequently evaluated in the existing BCS system [14]. Other researchers have developed automatic BCS systems based on 2D [16,19] and thermal [17,18] image processing technology. However, it is difficult to accurately detect specific anatomical points of a cow and extract fatness-associated features closely related to BCS from two-dimensional image data alone. The 3D images used in this study not only provide depth information but also make it possible to measure the physical traits of the cow without additional image calibration. Weber et al. [20] developed an automatic 3D optical system to estimate the backfat thickness (BFT) of cows. The correlation between the observed and estimated BFT values was 0.96, which demonstrates the feasibility of using 3D images to measure the body fat reserves of a cow; it would be worthwhile to study the relationship between BFT and BCS to build a model predicting the latter from the former. Fischer et al. [21] used a 3D camera to capture the surface of the hindquarters of cows, with four anatomical landmarks identified manually from the surface to predict BCS; however, that study still involved manual processing. Spoliansky et al. also used 3D images of backs to calculate the relative heights of different parts of the cows, and the height information was combined with weight and age to build a model to predict BCS. In our study, the model was based only on depth images and required no additional information, which makes the system easy to implement in commercial applications. Sandgren and Emanuelson [23] reported validation results for a commercial camera system: 95% of the cows were scored within 0.25 BCS points of manual scores, and 99% of the scores had a standard deviation of less than 0.1. Our study showed that all cows were scored within 0.25 BCS points of the manual scores, and 95% of the cows had a standard deviation of less than 0.1. Our system therefore exhibits performance similar to the reported commercial camera. However, the commercial camera uses a daily rolling average over seven-day periods as the output, which improves the consistency of the output scores. A rolling average greatly reduces the dynamic tracking performance of a scoring system, making it insensitive to abnormal changes over short periods, as the sketch after this paragraph illustrates. The system proposed in this paper ensures high precision and good tracking performance without a rolling average. The manual scores were categorical data; however, the fatness of a cow may fall between two categorical values.
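The following toy calculation illustrates the rolling-average trade-off: a hypothetical daily score series drops abruptly, and the seven-day average needs about a week to catch up.

```python
# Why a seven-day rolling average delays the response to a sudden BCS drop.
import numpy as np

daily = np.array([3.0] * 10 + [2.5] * 11)          # abrupt drop on day 11 (hypothetical)
kernel = np.ones(7) / 7
rolled = np.convolve(daily, kernel, mode="valid")   # 7-day rolling average
print(rolled.round(2))  # the smoothed series takes ~7 days to reach 2.5
```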
Therefore, the difference between the predicted scores and the manual scores may, for some cows, be caused by the difference between the actual BCS and the manual BCS. A prior study showed that the MAE of a well-trained expert was 0.25. Thus, the standard deviation of the predicted scores can be a good indicator for evaluating the performance of an automatic BCS system whenever the MAE of the system is lower than 0.25. The region of the short ribs is another anatomical feature associated with the fatness of a cow. This area was ignored in the current study because the end of the short rib area is invisible on a fat cow, which makes that area difficult to delineate and analyze. Future studies should focus on detecting the short rib area and analyzing the fat reserves there to further improve the accuracy of the system.

Conclusions. Specific areas related to BCS, including the thurls, sacral ligaments, hook bones, and pin bones, can be accurately located through depth-image processing and the use of convex hulls. Measured values of the visibility and curvature of the four areas were strongly correlated with manually assigned BCS values. With the BP network, the system scored each cow within 0.25 points of the target BCS over the two-month study period (i.e., the MAE of each individual cow was less than 0.25); the average MAE and SD across all cows were 0.11 and 0.069, respectively. These results show that the system has high precision and good tracking performance, and suggest that the automatic system can potentially be more accurate than human scoring. Future studies should focus on analyzing the short rib area to provide an additional fatness-related feature and further improve accuracy.
Shape shifting by amphibious plants in dynamic hydrological niches

Summary. Amphibious plants thrive in areas with fluctuating water levels, partly as a result of their capacity to make specialized leaves when submerged or emerged. These tailor-made leaves improve gas exchange underwater or prevent aerial desiccation. Aquatic leaves are thin, with narrow or dissected forms, thin cuticles and fewer stomata. These traits can combine with carbon-concentrating mechanisms and various inorganic carbon utilization strategies. Signalling networks underlying this plasticity include conserved players like abscisic acid and ethylene, but closer inspection reveals considerable variation in regulatory behaviours. Moreover, it seems that amphibious leaf development overrides and reverses the conserved signalling pathways of terrestrial counterparts. This diversity of physiology and signalling makes amphibious plants particularly attractive for gaining insights into the evolution of signalling and for crop improvement.

I. Introduction. Hydrological gradients are a strong determinant of plant species distribution, and species occupying the riparian side of these gradients experience fluctuating water levels and high flooding risks (Silvertown et al., 2015; Sarneel et al., 2019). For plants that are used to terrestrial life, inundation has dramatic consequences (Loreti et al., 2016). In the aquatic environment, gas diffusion is c. 10 000 times slower, which has grave consequences for oxygen (O2) and carbon dioxide (CO2) availability (Nobel, 2009). Combined with potential reductions in light availability underwater, photosynthesis is severely hampered. The resulting energy and carbon crisis is perhaps the greatest challenge for flooded plants. In illuminated conditions, the reduction in photosynthesis also generates oxidative stress, as a result of an imbalance between light harvesting and diffusion-limited carbon fixation (Horiguchi et al., 2019). For plants that typically inhabit the aquatic niche, sudden aerial exposure is also not without risk: the lack of a thick cuticle makes their leaves prone to desiccation, and the sudden exposure to light and high amounts of O2 leads to excessive reactive oxygen species formation (Yeung et al., 2018). Amphibious plants can successfully occupy the terrestrial-aquatic environmental interface. They often propagate via tubers and rhizomes, and/or time their life cycle to coincide with periods of favourable water levels (Sosnova et al., 2010). During shallow flooding, elongation of shoot organs can facilitate re-establishment of aerial contact and is typically combined with aerenchyma formation to improve internal aeration (Pierik et al., 2008; Herzog & Pedersen, 2014). Despite possessing leaves with a slightly higher specific leaf area (Box 1), species from the water's edge do not have better underwater photosynthesis than those from higher elevations (Winkel et al., 2016). However, many species living in this transition zone do have the capacity to form new leaves adapted to either aquatic or aerial conditions. This drastic alteration of leaf form in response to environmental changes is termed heterophylly. Aquatic leaves, compared with those formed aerially, usually show a greater degree of dissection, or they retain the simple leaf shape in a narrower, more elongated form. Additional changes in aquatic leaves include a minimal or even absent cuticle, and fewer or absent stomata.
Besides this remarkable display of leaf plasticity, some amphibious plants utilize carbon-concentrating mechanisms (CCMs) and/or bicarbonate (HCO3-) uptake systems to improve underwater photosynthesis and facilitate the amphibious dual life. A multitude of internal and external signals are used to sense air-water transitions and trigger these dramatic changes. Here we highlight the current understanding of shoot plasticity and photosynthesis physiology, and their adaptive significance for an amphibious lifestyle. We also call for increased leveraging of wild species to broaden our knowledge of the mechanisms of plasticity in variable environments.

II. Leaf morphological transitions. When confronted with a sudden change in environment, existing leaves have limited capacity to undergo drastic morphological changes. Therefore, the initiation of either a terrestrial or an aquatic leaf is established early in development at the shoot apex. Here we consider three main developmental changes from the perspective of the terrestrial-to-aquatic transition (Fig. 1): the formation of narrow leaves, the formation of dissected leaves, and the suppression of stomatal development; both aquatic leaf forms typically lack stomata.

Elongated narrow aquatic leaves. When submerged, many amphibious species form new leaves that are longer and narrower. Sometimes the leaves are also pointed at the proximal end, forming an oblanceolate shape. Additionally, these leaves have a higher SLA, are thinner, lack stomata and have minimal cuticle development (Nakayama et al., 2017). The advantage of producing a thin leaf without a cuticle is the reduced distance for inward diffusion of the O2 and CO2 required for respiration and underwater photosynthesis. The exact importance of the aquatic leaf shape remains unclear, but a narrower leaf would have a thinner diffusive boundary layer (Box 1), further enhancing gas exchange with the environment. Detailed investigations in Rumex palustris estimated a 38-fold reduction in CO2 diffusion resistance in aquatic leaves (compared with unacclimated terrestrial leaves), associated with higher photosynthesis rates and reduced photorespiration (Mommer et al., 2006). Amongst amphibious plants, abscisic acid (ABA) has emerged as a major regulator of leaf morphological alterations (Nakayama et al., 2017). In Marsilea quadrifolia, the elongated aquatic leaf form requires low ABA conditions, and terrestrial-type leaves formed when plants were submerged in water containing ABA; consistently, specific transcriptional ABA responses could already be observed at the shoot apex (Hsu et al., 2001). The correlation between elevated ABA concentrations and the terrestrial leaf form was also found in Potamogeton wrightii, where even salinity stress-induced ABA triggered terrestrial leaf formation underwater (Iida et al., 2016). Such ABA-dependent heterophyllous changes were also observed in Ludwigia arcuata, where ABA concentrations were downregulated by ethylene accumulating in submerged tissues, as frequently observed in wetland species (Kuwabara et al., 2003; Benschop et al., 2005). The amphibious Ranunculus trichophyllus produces extremely thin, rounded aquatic leaves with enhanced abaxial and retarded adaxial development, in contrast to the thick, wide terrestrial leaf form (Kim et al., 2018). Interestingly, its terrestrial relative, Ranunculus sceleratus, does not display such heterophylly.
A transcriptome analysis of R. trichophyllus aquatic and terrestrial leaves revealed strong repression of genes associated with the wax biosynthesis required for cuticle formation, and with secondary cell wall and vascular development. Heterophyllic leaf development was determined by hormonal regulation of gene families involved in leaf polarity control, namely HD-ZIPIIIs and KANADIs. Submergence-induced ethylene accumulation stimulated the KANADIs required for abaxial development (Kerstetter et al., 2001), whilst HD-ZIPIII-mediated adaxial development (McConnell et al., 2001) was retarded via a submergence-induced loss of ABA stimulation. Ranunculus sceleratus lacked these hormonal and transcriptional responses, suggesting that the changes in ABA/ethylene signalling and leaf polarity control are key evolutionary steps for aquatic adaptation (Kim et al., 2018).

Box 1. Glossary of specialist terms used in this article.
Specific leaf area (SLA): the amount of leaf area per unit leaf mass (m2 kg-1). It is considered a major factor associated with plant growth variation. As high SLA is associated with thin leaves, it is considered an important trait for underwater photosynthesis: the diffusion distance is lower, so high-SLA leaves typically have better gas exchange.
Boundary layer: a stationary fluid layer immediately covering the surface of submersed objects (e.g. flooded plant organs). No bulk flow of the liquid occurs in this layer, so the movement of all compounds is driven solely by diffusion. This layer therefore severely impairs underwater gas exchange. The thickness of the diffusive boundary layer depends on the flow rate of the water and the surface topography of the submerged object. In the context of submerged plants, thicker boundary layers are expected in still or slow-moving water and especially on large leaves.
Kranz anatomy: distinctive leaf anatomy associated with C4 photosynthesis. The typical 'Kranz' (German for 'wreath') structure includes an outer ring of mesophyll cells and an inner ring of bundle sheath cells surrounding the vascular tissues.
Heterophylly: the formation of extremely different leaf forms on a single plant. These leaf forms are established early in the development of a leaf. The extreme variation can be induced by a variety of factors such as age, temperature and humidity. In this paper, we refer to alterations caused by transitions between flooded and aerial conditions.

Leaf morphology is also strongly regulated by light quality cues (Momokawa et al., 2011). The red : far-red ratio (R : FR) rises with increasing water depth. Accordingly, low R : FR triggered terrestrial leaf formation in submerged Rotala hippuris, with the converse being true for aquatic leaves upon emersion. Interestingly, R : FR values indicative of proximity to the water surface or aerial conditions required high blue light to facilitate underwater terrestrial leaf formation, whereas at high R : FR values, typical of deep flooding, blue light had no effect. Thus the integration of light quantity and quality can be critical in detecting water-level fluctuations (Momokawa et al., 2011).

Dissected aquatic leaves. An extreme form of heterophylly is the formation of highly dissected leaves underwater, with reduced stomatal density and cuticle thickness. A narrow and dissected leaf might also facilitate better water flow around and through it, and so prevent mechanical stress. An increase in dissection index is found in many species across a wide range of phylogenetic lineages.
The underlying mechanisms have recently been extensively investigated in species such as Hygrophila difformis and Rorippa aquatica (Nakayama et al., 2014; Li et al., 2017; Horiguchi et al., 2019). While enhanced dissection is triggered by many cues (e.g. temperature and humidity), it can also be the default leaf shape. Common to all types of compound leaves and dissections identified thus far is that leaves originate as simple primordia from the shoot meristem. In simple leaves, the primordia enter a differentiated state via a reduction in class I KNOX gene expression, whereas in dissected leaves KNOX expression is re-established, maintaining a transiently undifferentiated state (Bharathan et al., 2002). This undifferentiated state then allows leaflet initiation, instigated by PIN1-mediated auxin maxima (Barkoulas et al., 2008). The separation of these leaflets requires CUP-SHAPED COTYLEDON 3 (CUC3)-mediated suppression of growth between the auxin maxima, a process that is conserved across eudicots (Blein et al., 2008). Among the Brassicaceae, leaf dissection is further determined by the presence of REDUCED COMPLEXITY (RCO), which locally suppresses growth at the sides of leaves to enhance dissection (Sicard et al., 2014; Vlad et al., 2014). In R. aquatica, the submergence-induced formation of the dissected leaf coincides with a decrease in class I KNOX and CUC3 expression, analogous to the existing knowledge of compound leaf formation. Moreover, a drop in gibberellin (GA) concentrations was observed, and KNOXI genes are known to suppress GA biosynthesis. Consistently, exogenous GA application or GA biosynthesis inhibition led to the reversal or exaggeration of leaf dissection, respectively (Nakayama et al., 2014). However, hormonal investigation of submergence-induced leaf dissection in H. difformis found contrasting effects of GA compared with R. aquatica; here leaf dissection was predominantly driven by ethylene and low ABA concentrations. Thus, although the molecular machinery of leaf dissection is considered conserved across species, two contrasting signalling behaviours were identified.

Development of stomatal density. The aquatic leaf has a strongly reduced stomatal density and cuticle thickness. Indeed, detailed molecular investigation in R. trichophyllus identified a strong downregulation of stomatal developmental genes underwater, some of which have been lost altogether in aquatic plants (Olsen et al., 2016; Kim et al., 2018). The underwater regulation of stomatal density and cuticle typically goes hand in hand with that of leaf shape, namely via ethylene, low ABA and/or high R : FR (Kuwabara et al., 2003; Momokawa et al., 2011; Iida et al., 2016; Kim et al., 2018). The thick cuticle of terrestrial leaves, which have higher ABA concentrations than their aquatic counterparts, agrees with the role of ABA in mediating drought responses, which includes strengthening the cuticle (Cui et al., 2016). However, the signals linked to stomatal development in amphibious heterophylly do not always align with commonly observed patterns. Stomatal density increases with high light and low CO2 availability, and is signalled through systemic leaves (Casson & Hetherington, 2010). Although light intensities follow the same trend in heterophylly, the low CO2 availability underwater is not translated into high stomatal density.
Likewise, low ABA concentrations and high R : FR are known to increase stomatal density (Boccalandro et al., 2009; Tanaka et al., 2013; Jalakas et al., 2018), whereas in amphibious heterophyllous plants, low ABA and high R : FR decrease stomatal density (Kuwabara et al., 2003; Momokawa et al., 2011; Iida et al., 2016; Kim et al., 2018). Thus, the aquatic developmental programme appears to override the routine terrestrial regulatory networks determining leaf formation and stomatal density.

III. Underwater photosynthesis. In air, gaseous CO2 diffuses relatively easily through the leaf, but when submerged, plants need to access dissolved inorganic carbon (DIC). Between pH 7 and 10, CO2 availability is limited and HCO3- is the dominant DIC form. This poses an additional problem for submerged plants, as HCO3-, unlike CO2, does not easily cross lipid membranes. It is not surprising, therefore, that many aquatic plants have HCO3- uptake mechanisms (Maberly & Madsen, 2002; Yin et al., 2017) (Fig. 2). Three routes are known: first, the conversion of HCO3- to CO2 by apoplastic carbonic anhydrases (CAs); second, an H+-ATPase-mediated acidification of the apoplast and diffusive boundary layer, which pushes the CO2/HCO3- equilibrium towards CO2; and third, a symporter-mediated cotransport of HCO3-/H+, with subsequent HCO3- dehydration to CO2 via cytosolic CAs. A variety of metabolic routes have been identified in aquatic plants to subsequently fix the acquired HCO3-/CO2. For example, the true aquatics (Box 1) Hydrilla verticillata and Egeria densa can switch between C3 and C4 photosynthesis underwater, and can do so even within a single cell (Casati et al., 2000; Rao et al., 2002). Few studies report photosynthetic adaptation to submergence in amphibious plants. The aquatic leaves of R. palustris clearly had better photosynthetic capacity underwater than did terrestrial leaves. In the heterophyllous amphibious plant H. difformis, a combination of biochemical and anatomical leaf adaptations facilitates underwater photosynthesis (Horiguchi et al., 2019). Submergence triggered the formation of highly dissected aquatic leaves with substantial O2 production underwater. By contrast, submerged terrestrial leaves struggled to capture inorganic carbon, regardless of illumination; the decreased photosynthesis underwater and the resulting excess energy were linked to high oxidative stress in these leaves. Aquatic leaves had a high capacity to utilize HCO3-, which was absent in their terrestrial counterparts. Specific inhibitors were used to discern the mechanism of HCO3- uptake in aquatic leaves. Interestingly, neither inhibition of the apoplastic CA nor of the HCO3-/H+ symport affected underwater photosynthesis; significant photosynthesis impairment was observed only when intracellular CA activity was blocked. These observations imply that submerged leaves of H. difformis can import HCO3- into the cell without H+ cotransport (Horiguchi et al., 2019). Although common amongst true aquatic species, the extent of HCO3- utilization amongst amphibious plants is currently unknown. In true aquatics, CCMs such as the C4 system can be induced by several factors, such as photoperiod, low CO2 availability and ABA (Casati et al., 2000; Rao et al., 2002). The amphibious sedge Eleocharis vivipara exhibits an aquatic, Kranz-less C3 form and terrestrial C4-like traits with Kranz anatomy (Box 1). The terrestrial form can be imposed on the aquatic leaf by application of ABA, which is considered a stress signal (Ueno, 1998).
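The pH dependence of DIC speciation stated above (HCO3- dominant between pH 7 and 10) can be sketched with the carbonic acid dissociation constants; the pK values below are typical freshwater values at 25°C and are assumptions, not numbers from the text:

```python
# Dissolved inorganic carbon (DIC) speciation as a function of pH.
import numpy as np

def dic_fractions(pH, pK1=6.35, pK2=10.33):
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + k1 * h + k1 * k2
    co2 = h * h / denom      # fraction present as dissolved CO2 (+ H2CO3)
    hco3 = k1 * h / denom    # fraction present as bicarbonate
    co3 = k1 * k2 / denom    # fraction present as carbonate
    return co2, hco3, co3

for pH in (6, 7, 8, 9, 10):
    co2, hco3, co3 = dic_fractions(pH)
    print(f"pH {pH}: CO2 {co2:.2f}  HCO3- {hco3:.2f}  CO3^2- {co3:.2f}")
# Between pH 7 and 10, HCO3- dominates and free CO2 is scarce, as stated above.
```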
Interestingly, in H. difformis, biochemical HCO3- usage could be mimicked by application of ethylene, or prevented by blocking ethylene perception. Even existing terrestrial leaves were sensitive to ethylene and submergence, and achieved an intermediate capacity for HCO3- usage (Horiguchi et al., 2019). The importance of flooding-specific cues, such as ethylene, is also apparent from work on R. palustris, where the morphological adaptations to submergence can also be induced by low light conditions; however, these do not yield the photosynthetic benefit that a true aquatic leaf provides.

IV. Conclusions and future challenges. Amphibious plants are truly shape shifters, adjusting their morphology and physiology to fluctuating environments. They have provided crucial insights into the developmental regulatory networks underlying leaf plasticity. However, while some consistent regulatory factors (e.g. ABA and ethylene) are recognized, there have also been contradictions (e.g. GA regulation of leaf dissection), and much remains to be discovered regarding other cues such as light, temperature and abaxial dominance in narrow leaves. This will require greater use of amphibious species for exploring the molecular regulation of adaptive plasticity to water extremes. Given the increased fluctuations in water stress associated with climate change, understanding such adaptations will be important if we are to engineer resilient crops (Voesenek et al., 2014). The capacity of H. difformis to enhance underwater photosynthesis in existing terrestrial leaves is a promising sign that such traits might, at some point, be transferable to crop species.
On the Stiffness of Gold at the Nanoscale

The density and compressibility of nanoscale gold (both nanospheres and nanorods) and microscale gold (bulk) were studied simultaneously by X-ray diffraction with synchrotron radiation up to 30 GPa. Colloidal stability (aggregation state and nanoparticle shape and size) in both the hydrostatic and nonhydrostatic regions was monitored by small-angle X-ray scattering. We demonstrate that nonhydrostatic effects due to solvent solidification had a negligible influence on the stability of the nanoparticles. Conversely, nonhydrostatic effects produced axial stresses on the nanoparticles up to a factor of 10 higher than those on the bulk metal. Working under hydrostatic conditions (liquid solution), we determined the equation of state of individual nanoparticles. From the values of the lattice parameter and bulk modulus, we found that gold nanoparticles are slightly denser (0.3%) and stiffer (2%) than bulk gold: V0 = 67.65(3) Å3, K0 = 170(3) GPa, at zero pressure.

INTRODUCTION. The mechanical properties of metallic nanoparticles (NPs) have been the subject of intense research because of their potential technological importance in optical sensing 1,2 and probe microscopy, 3 but it remains difficult to obtain accurate theoretical and experimental information at the nanoscale. The determination of the elastic compliance and related parameters, such as Young's modulus, the isothermal bulk modulus (or compressibility), and the critical yield stress, is crucial for understanding key mechanical properties, but to date literature reports have been inconsistent. 4−6 Various approaches have been utilized to measure mechanical properties: electron microscopy under tensile stress in single metal nanowires, 1,7,8 pressure-induced strain by coherent diffraction imaging, 9 stress-induced structural transformations, 10,11 acoustic methods, 12 cavitation, vibrational frequencies of nanorods, 6,13 or the compressibility determined by high-pressure X-ray diffraction (XRD). 14−18 However, a methodology to unambiguously determine such parameters is still under discussion. Some theoretical models predict that most materials possess superior mechanical strength at the nanoscale because they lack large-scale crystal defects such as grain boundaries and dislocations. 19,20 Experimentally, the bulk modulus of different materials has been reported to be either independent of NP size, 21,22 enhanced, 4,23,24 or reduced, 25,26 with decreasing particle size. A recent review of the behavior of NPs under high-pressure conditions can be found elsewhere. 27 It is worth mentioning that the bulk modulus of gold has been determined from V(P) data obtained by high-pressure XRD, through its equation of state (EOS), both for bulk, micrometer-sized powder and for gold nanoparticles (AuNP) of different sizes and shapes. 2,4,14,18 Although small differences were found in the reported values of the pressure derivative of the bulk modulus of pure bulk gold (K0′ ≈ 5.3−6), all of them agreed on an isothermal bulk modulus around K0 = 167 GPa. 18 Values of K0 = 167 GPa and K0′ = 5.5(1) were obtained by fitting the V(P) data measured over the widest hydrostatic pressure range, using helium as the pressure transmitting medium (PTM), to a Vinet EOS. 18 However, previous XRD studies on bulk Au at high pressure, using methanol-ethanol (MeOH-EtOH) 4:1 as the PTM, reported EOS parameters of K0 = 167 GPa and K0′ = 5.72. 14
Even though there is a difference in the reported values of K0′, this uncertainty yields discrepancies below 1% at 30 GPa in the EOS used to determine P from V. Furthermore, this K0 value agrees within 4% with the bulk modulus determined from the elastic compliances of gold obtained by acoustic measurements, such as the ultrasonic pulse-echo technique. 28−30 However, studies of the EOS of gold at the nanoscale have yielded values of K0 varying by up to 70%. For example, Gu et al. 4 reported K0 = 290(8) GPa (using a MeOH-EtOH 4:1 mixture as the PTM; measurements up to 30 GPa), whereas Hong et al. 5 obtained K0 = 196(3) GPa (using argon as the PTM; measurements up to 71 GPa) for gold nanospheres (AuNS) of similar size. Conversely, the study of extensional and breathing modes in gold nanorods (AuNR) by Hu et al. 6 yielded a Young's modulus of E = 64(8) GPa, hence a bulk modulus of K0 = E/[3(1−2ν)] = 133 GPa using a Poisson ratio of ν = 0.42, 31 about 19% smaller than that of bulk gold at room temperature. This variation may be due, to a great extent, to technical limitations at the nanoscale and to sample heterogeneity (polydispersity and aggregation). These effects are enhanced in high-pressure experiments dealing with concentrated nanoparticle dispersions or compacted samples. In such cases, nanoparticles experience nonhydrostatic conditions, giving rise to additional axial stress components and causing their XRD patterns to deviate from those measured under hydrostatic conditions. As pointed out in previous studies, 17,18,32 the lattice parameters obtained under nonhydrostatic conditions can deviate significantly from those obtained under hydrostatic conditions, thus giving rise to different bulk moduli depending on both the pressure conditions and the diffraction geometry. 23

Figure 1. X-ray diffraction of AuNP at high hydrostatic pressure. (a) Diffraction patterns of 12.3 nm AuNS colloids in MeOH-EtOH 4:1 (green) and Au micrometric powder (gray), both at 1.1 GPa. Note the broadening and shift toward higher Bragg angles of the NP diffraction peaks with respect to those of bulk gold. The inset shows a magnification of the (220), (331), and (222) peaks. Intensities were normalized to the (111) reflection. Plots (b)−(d) show the pressure dependence of the fcc cell volume for 12.3 nm diameter AuNS, 28.2 nm diameter AuNS, and 10.7 nm diameter AuNR with AR = 3.4, respectively. Filled symbols correspond to experimental data; solid lines correspond to fits of the Vinet EOS to the measured V(P) data. Empty circles in (c) correspond to experimental points on downstroke, obtained during the reversibility test on the 28.2 nm AuNS solution in the hydrostatic range. Error bars in volume are either indicated or smaller than the symbols.

Furthermore, as we show in this work, the lack of hydrostaticity at the nanoscale enhances the stress fields acting on the nanoparticle (these may be up to a factor of 10 higher than those acting on the bulk phase), thus emphasizing the variability of EOS values derived from XRD measurements under nonhydrostatic conditions. We present herein a high-pressure XRD study, using a dilute AuNP solution itself as the PTM, acting on individual, non-aggregated AuNS and AuNR. Compacted gold powder of 2 μm average grain size was used as a pressure calibrant in each load, enabling us to compare bulk and nanoscale gold XRD patterns simultaneously under the same environmental (pressure and temperature) conditions.
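For reference, the isotropic-elasticity conversion used for the Hu et al. value can be checked directly:

```latex
K_0 = \frac{E}{3\,(1-2\nu)}
    = \frac{64~\mathrm{GPa}}{3\,(1 - 2 \times 0.42)}
    = \frac{64~\mathrm{GPa}}{0.48}
    \approx 133~\mathrm{GPa}
```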
We aimed to elucidate whether gold nanoparticles are stiffer than bulk gold, their mechanical stability under severe nonhydrostatic conditions, and whether the shape and size of the AuNP remain constant after pressure release. Our methodology was based on dilute NP dispersions (<10^12−10^14 AuNP/cm3; [Au] = 5−13 mM), coated with polyethylene glycol (PEG) to guarantee dispersion of individual NPs in a MeOH-EtOH 4:1 mixture, the liquid PTM with the widest hydrostatic range. 33 This solvent allows a hydrostatic limit of up to 11 GPa (about 2 GPa less when containing dispersed AuNPs 34), and therefore a relatively wide pressure range for the precise determination of V(P) with ΔV/V ≈ 10^−4. The AuNP shape, size distribution, and monodispersity were determined by transmission electron microscopy and UV−vis spectroscopy (see Experimental Methods). The in situ state of the AuNP dispersion under high-pressure conditions within a diamond anvil cell (DAC), the device used to produce high-pressure conditions on the NPs, was probed by small-angle X-ray scattering (SAXS), which is highly sensitive to the NP aggregation state (through the structure factor) as well as to the NP shape and size (through the form factor). 35 Although the use of SAXS in high-pressure experiments with a DAC was restricted to 5 GPa, owing to technical limitations imposed by the X-ray beam spot size (about 200 μm), which required an 800 μm culet diamond and 0.5 mm thick anvils, this technique can still provide crucial in situ information on the NP dispersion under high-pressure conditions, a capability which had not previously been exploited for this purpose. Nonhydrostatic effects on both XRD and SAXS measurements were also investigated through the axial-stress-averaged model, which allowed us to separate the hydrostatic stress from the biaxial stress components present in a DAC. 36 We demonstrate that the model accounts well for Bragg angle deviations with respect to a simple cubic-lattice analysis. However, due to the biaxial stress distribution around randomly oriented AuNPs, we find that the resulting lattice volume is not sufficiently precise for determining a reliable EOS for gold in the nonhydrostatic pressure range. This can explain the widespread values of Young's modulus and isothermal bulk modulus reported in previous XRD studies. 4−6 Our methodology additionally provides an adequate description of the stress distribution within the NP. We used TEM on samples recovered after pressure release to explore NP deformation or alloying.

RESULTS AND DISCUSSION

X-ray Diffraction by Gold Nanoparticles at High Pressure. Figure 1 shows typical XRD patterns of 12 nm AuNS and Au micrometric powder, acquired under the same pressure conditions and environmental setup. The XRD patterns at 1.1 GPa show two important differences: (1) the Bragg peaks of the AuNS are broader than those of bulk gold, and (2) the lattice parameter of the AuNS (a = 4.0655 ± 0.0005 Å) is slightly smaller than that of bulk Au (a = 4.0699 ± 0.0005 Å); a0 = 4.0741 Å and 4.0787 (±0.0002) Å for 12 nm AuNS and bulk Au at ambient pressure, respectively, meaning that the NPs are ca. 0.3% denser than the bulk. These slight differences in lattice parameter and density were maintained within the whole hydrostatic pressure range. Although there is a clear difference between the zero-pressure lattice parameters of nanosized and bulk gold, their variations with pressure in the hydrostatic regime are very similar.
Figure 1b−d shows the fit of the experimental volume data to a Vinet EOS: 37

P(V) = 3K0 (1 − x) x^−2 exp[(3/2)(K0′ − 1)(1 − x)], with x = (V/V0)^(1/3) (1)

The values of the bulk modulus and zero-pressure volume obtained for the three investigated samples are given in Table 1. We used the same value of K0′ = 5.72 reported by Heinz et al. 14 in the fits for all three samples; this left two fitting parameters (V0 and K0), avoiding parameter uncertainty and allowing comparison of the bulk modulus and zero-pressure volume among the three AuNP samples and bulk Au. Within the fitting accuracy, the same V0 and K0 values were obtained for each sample using either a Vinet or a third-order Murnaghan EOS. 38 Contrary to previously reported results, 4,5 we found a compressibility for the NPs that was slightly lower than (but very similar to) that of bulk gold for all the samples studied. Comparing the results for the three samples (12.3 and 28.2 nm AuNS, and AuNR of 10.7 nm diameter with AR = 3.4), with an average bulk modulus of K0 = 170(3) GPa, we consistently observed a small difference suggesting that AuNP are less compressible than bulk Au, even though the bulk moduli were, within experimental uncertainty, very similar to each other. Nevertheless, it must be noted that in each experiment a AuNP sample and a sample of bulk gold were loaded together into the hydrostatic cavity, and in each case the bulk modulus derived through the EOS (eq 1) was systematically higher for the nanoscale sample: K_AuNP > K_bulkAu. By comparing the XRD patterns of AuNP and bulk Au, we found that the zero-pressure fcc cell volume of the gold nanoparticles was 0.3% smaller than that of the bulk metal, the average values being V0 = 67.65(3) Å3 for AuNP and V0 = 67.852(9) Å3 for bulk Au. Interestingly, we found that the zero-pressure fcc cell volume decreases with decreasing NP volume (see Figure S1 in the Supporting Information (SI)), in agreement with previously published data for gold and silver NPs. 4 (Table 1 footnote: the first derivative of the bulk modulus was fixed at K0′ = 5.72; 14 fit errors are given in parentheses.) The volume changes were completely reversible, during both upstroke and downstroke, under hydrostatic conditions (see Figure 1c and Figure S2 in the SI).

Nonhydrostatic Effects. Figure 2 illustrates the effects of nonhydrostaticity on the XRD patterns. Once the pressure transmitting medium solidifies, there is a clear deviation from hydrostatic behavior: the Bragg peaks broaden markedly, and modeling the diffraction pattern with a single lattice parameter ⟨a⟩ (cubic system without strain) becomes inadequate, as the sample undergoes a tetragonal-like distortion due to the appearance of uniaxial stresses along the DAC loading direction (see Figure 2b). In order to properly describe the XRD patterns under nonhydrostatic conditions, it was necessary to model the stress field of the solidified solution following the stress model reported in ref 36 (see details in the SI). However, although the model accounts fairly well for the measured XRD patterns in the nonhydrostatic range, it provides inconsistent results for the real V(P) data when we consider the "hydrostatic" lattice parameter extracted from the model (Figure 2). The V(P) values derived from the stress model in our experiments are systematically lower than those obtained from a stress-free cubic system. This behavior, which is associated with the diffraction geometry of the experiment, with the X-ray k vector along the DAC axis (i.e., the axial stress direction), has also been observed in other systems. 36
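A sketch of the fitting procedure described above, with K0′ fixed at 5.72; the (V, P) arrays below are synthetic placeholders, not the measured data:

```python
# Fitting V(P) data to the Vinet EOS (Eq. 1) with two free parameters (V0, K0).
import numpy as np
from scipy.optimize import curve_fit

KP0 = 5.72  # fixed pressure derivative of the bulk modulus (Heinz et al.)

def vinet_pressure(V, V0, K0, KP=KP0):
    x = (V / V0) ** (1.0 / 3.0)
    return 3.0 * K0 * (1.0 - x) / x**2 * np.exp(1.5 * (KP - 1.0) * (1.0 - x))

# Hypothetical volumes (A^3) with noisy pressures (GPa) in the hydrostatic range.
V = np.array([67.65, 66.9, 66.2, 65.5, 64.9])
P = vinet_pressure(V, 67.65, 170.0) + np.random.default_rng(1).normal(0, 0.02, 5)

(V0_fit, K0_fit), _ = curve_fit(vinet_pressure, V, P, p0=(67.6, 170.0))
print(f"V0 = {V0_fit:.2f} A^3, K0 = {K0_fit:.0f} GPa")
```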
Consequently, the bulk moduli derived by fitting the V(P) data to a Vinet EOS over the whole pressure range are inconsistent: 171(1) GPa for 12.3 nm AuNS, 219(4) GPa for 28.2 nm AuNS, and 146(4) GPa for AuNR. These results clearly illustrate that, even when working with the same type of colloidal AuNP samples under the same experimental conditions, different values of the bulk modulus can be obtained depending on the stress field, reaffirming the importance of working under hydrostatic conditions to obtain correct values from the EOS. It thus turns out that the application of hydrostatic pressure to a compacted, polycrystalline nanoparticle sample necessarily leads to internal stresses within the nanoparticles, which can modify the XRD pattern with respect to that of the unstressed NP. This situation also arises in highly concentrated NP solutions, where pressure can induce aggregation or alloying phenomena. In this respect, Gu et al. 4 worked with AuNP powder covering 70% of the total volume of the sample chamber (with MeOH-EtOH 4:1 as the PTM filling the remaining 30%). The presence of nonhydrostatic stresses in such samples, due to bridging between the anvils, is likely even when working with a truly hydrostatic PTM, and this probably explains the large bulk modulus of 290(8) GPa that was obtained. Therefore, working with dilute colloidal dispersions is crucial to guarantee the application of homogeneous stresses to individual nanoparticles. Although the stress model does not provide precise values of the lattice parameter, it does give information on the axial stress field acting on the NP, which in turn allows us to precisely identify the solidification pressure of the PTM and the increase in axial stress with pressure (Figure 3a). The axial stress, derived from the Bragg peak shifts of the measured XRD patterns, was consistent with the NP strain derived from Williamson−Hall (WH) plots 42 (Figure 3b,c), which relate XRD peak broadening to the combined effects of crystallite size and lattice strain (see Figure S4 in the SI). Interestingly, while the uniaxial stress reached values of 0.2−0.3 GPa for bulk Au over the studied pressure range, it reached 2 GPa at 30 GPa in AuNP. This means that the axial stress acting on the NPs was enhanced by an order of magnitude with respect to that on bulk Au, even though both samples were measured under the same environmental conditions in the DAC.

Figure 2. X-ray diffraction under nonhydrostatic conditions. XRD patterns at selected pressures in the hydrostatic and nonhydrostatic pressure ranges, for 12.3 nm AuNS. The XRD patterns in (a) and (b) were modeled within a stress-free cubic system, whereas the pattern in (c) was modeled considering the stress state of the system. 36 Note the abrupt deviation of the stress-free model in the nonhydrostatic region in the XRD pattern in (b). Plots (d)−(f) show the pressure dependence of the fcc cell volume for 12.3 nm AuNS, 28.2 nm AuNS, and 36.4 × 10.7 nm2 (AR = 3.4) AuNR, respectively. Gray squares correspond to experimental data for bulk gold; green circles correspond to nanosized gold, assuming a stress-free cubic model system; and dark red triangles represent nanosized gold, considering stress by following a model reported elsewhere. 36 Solid lines correspond to fits using the Vinet EOS; dashed lines correspond to the extrapolation of the hydrostatic EOS. Error bars in volume are indicated or are smaller than the symbols. The vertical dashed lines show the hydrostatic limit for the PTM.
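A minimal sketch of the Williamson−Hall analysis mentioned above, which separates size and strain broadening via β·cos(θ) = K·λ/D + 4·ε·sin(θ); the peak widths and angles are placeholders, and the Scherrer constant K = 0.9 is an assumption:

```python
# Williamson-Hall plot: linear fit of beta*cos(theta) versus 4*sin(theta).
import numpy as np

lam = 0.3738e-10   # X-ray wavelength (m), matching the value quoted in the methods
K = 0.9            # Scherrer shape constant (assumed)

theta = np.radians([3.0, 3.5, 5.0, 5.9, 6.2])                 # hypothetical Bragg angles
beta = np.array([1.5e-3, 1.6e-3, 2.0e-3, 2.3e-3, 2.4e-3])     # hypothetical FWHM (rad)

# Slope gives the microstrain; intercept gives K*lam/D, hence the crystallite size D.
slope, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
print(f"crystallite size D = {K * lam / intercept * 1e9:.1f} nm, strain = {slope:.2e}")
```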
Small-Angle X-ray Scattering by Gold Colloids at High Pressure. The analysis of the XRD data was carried out on the assumption that the AuNP solutions remained fully dispersed over the whole pressure range, i.e., that the PTM acted on individual nanoparticles. This assumption is crucial, as pressure-induced NP aggregation or alloying may itself introduce axial stress components, and in turn modifications to the cubic XRD pattern, leading to an erroneous determination of V(P). Given the current technical difficulties of performing electron microscopy or individual-NP imaging (for NP sizes of about 30 nm) 9 in a DAC for in situ monitoring of the NP solution, we exploited the potential of SAXS to analyze the state of aggregation, as well as the shape and size of the NPs, as a function of pressure. Figure 4 shows representative SAXS patterns for 28 nm AuNS in EtOH solutions, together with the corresponding simulations, assuming a Gaussian size distribution and the absence of a structure factor (no interactions between individual NPs). The I(q) analysis confirms the monodispersity of the AuNS solution (non-aggregated individual particles; Figure 4a) and the spherical shape, with an average sphere radius r = 14.4 nm and a standard deviation of 0.3 nm, fully consistent with the mean NP diameter determined by TEM, d = 28.2 ± 0.4 nm (Figure 4b). Notably, the AuNS solution remained colloidally stable over the whole analyzed pressure range, during both upstroke and downstroke. No evidence of aggregation was observed in any of the studied solutions, even under nonhydrostatic pressure conditions, as is evident from inspection of Figure 4a. In fact, the residual plot indicates the existence of a repulsive force between nanospheres, likely derived from the stabilizing agent (PEG) adsorbed onto the NP surface, which contributed to the observed colloidal stability. In Figure 4d we compare the relative variation of the SAXS-determined radii (radius r0 = 14.4 nm) and the corresponding lattice parameter measured by XRD (a0 = 4.0752(2) Å, determined from the AuNS EOS data collected in Table 1). Interestingly, the relative variations in both lattice parameter and sphere radius decrease at the same rate with increasing pressure; i.e., r and a are proportional to each other. Although the data derived from SAXS have lower precision and, due to the experimental conditions, the accessible hydrostatic pressure range is also narrower, both techniques yield fully consistent results. Similar results were obtained for the AuNR dispersions (see Figures S7 and S8 in the SI). These results reveal that the AuNPs do not deform during the pressure treatment: the critical shear stress for plastic deformation is not reached under the high-pressure conditions of our experiments. TEM analysis of AuNSs recovered from the colloid after a high-pressure treatment at 31 GPa in MeOH-EtOH 4:1 shows no evidence of significant plastic deformation (1 out of 300 NPs deformed), the size distribution being identical before and after the pressure treatment (see more details in the SI). The recovered AuNR colloid shows the same size and shape distribution after compression to 10 GPa in the hydrostatic regime, but plastic deformation after compression beyond the hydrostatic limit.
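The dilute-sphere SAXS model used in this analysis (Gaussian size distribution, structure factor equal to unity) can be sketched as follows; r0 = 14.4 nm and σ = 0.3 nm follow the text, while the q grid and the volume-squared weighting are standard assumptions:

```python
# SAXS intensity of dilute, non-interacting spheres with a Gaussian size spread.
import numpy as np

def sphere_form_factor(q, r):
    qr = q * r
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

def saxs_intensity(q, r0=14.4, sigma=0.3, n=41):
    radii = np.linspace(r0 - 3 * sigma, r0 + 3 * sigma, n)
    weights = np.exp(-0.5 * ((radii - r0) / sigma) ** 2)
    weights /= weights.sum()
    contrib = weights * radii**6   # scattering scales with particle volume squared
    return np.array([(contrib * sphere_form_factor(q_, radii)).sum() for q_ in q])

q = np.linspace(0.01, 0.5, 200)   # scattering vector (nm^-1), r in nm
I = saxs_intensity(q)             # arbitrary units; the minima locate the sphere radius
```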
CONCLUSIONS. We have demonstrated that the compressibility of gold at the nanoscale (nanospheres of 12.3 and 28.2 nm and nanorods of 36.4 × 10.7 nm2) is slightly lower than that of the bulk metal (2 μm grain-size powder). The corresponding EOS was determined for each material from precise V(P) data obtained by XRD from individual AuNP and gold powder, under identical pressure conditions and environmental setup, using synchrotron radiation. These experimental conditions enabled a precise comparison of bulk and nanoscale gold, and the extraction of reliable values of their respective volumes and bulk moduli under hydrostatic conditions. We showed that gold at the nanoscale is slightly denser (0.3%) and stiffer (2%) than bulk gold, with V0 = 67.65(3) Å3 (a0 = 4.0745(6) Å) and bulk modulus K0 = 170(3) GPa, obtained as averages of the values for all NP solutions. This result is consistent with the EOS of gold, because the reduction in volume at the nanoscale (−0.3%) corresponds to an effective Laplace pressure in the AuNP of 0.5 GPa. This in turn corresponds to a bulk modulus increase of 3.0 GPa, according to the pressure derivative of the bulk modulus K′ = 5.72, which implies a bulk modulus increase of 1.8% in the nanoscale metal, consistent with the measured bulk moduli at the nanoscale. We confirmed that the emergence of uniaxial stresses after solidification of the pressure transmitting medium (the AuNP solution itself) caused the XRD pattern to deviate from that of an ideal cubic system; the higher the nonhydrostatic pressure, the higher the axial stress and the larger the shift of the Bragg reflections. We interpreted the XRD shifts according to the stress state model, which provides a good description of the axial stress acting on the AuNP, compatible with the additional broadening of the XRD Bragg peaks in the nonhydrostatic pressure range. We showed that the effect of axial stress on the diffraction peaks was enhanced in NPs as compared to bulk gold, revealing an axial stress at the NPs an order of magnitude higher than in the bulk phase under the same nonhydrostatic pressure conditions. However, the hydrostatic volume V(P) of AuNPs derived in the nonhydrostatic range underestimates the real anisotropic volume V(P), thus yielding erroneous estimates of the EOS at the nanoscale. This work highlights the importance of working under hydrostatic conditions to extract precise values of V0 and K0 for comparison with those of bulk gold. We have also demonstrated the suitability of SAXS to probe the aggregation state of AuNP under high-pressure conditions of up to 5 GPa, as well as their shape and size. Finally, we conclude that the AuNP colloidal solutions maintain their stability within the whole analyzed pressure range, and that the relative variations of the SAXS-determined AuNS radius and the corresponding XRD lattice parameter agree well in the accessible pressure range. TEM images of the recovered AuNP solution showed that the shape and size distribution of the nanoparticles are, within experimental accuracy, unchanged before and after applying pressure.
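The Laplace-pressure consistency argument above can be checked numerically:

```latex
\Delta P \approx K_0 \left|\frac{\Delta V}{V_0}\right|
         = 167~\mathrm{GPa} \times 0.003 \approx 0.5~\mathrm{GPa},
\qquad
\Delta K_0 \approx K_0'\,\Delta P = 5.72 \times 0.5~\mathrm{GPa}
         \approx 3~\mathrm{GPa} \;(\approx 1.8\%~\text{of}~167~\mathrm{GPa})
```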
Experimental Methods. Synthesis of single-crystalline AuNS and AuNR: Single-crystalline AuNS and AuNR were synthesized via well-established seeded-growth methods. 43,44 First, gold seeds (∼1.5 nm) were prepared by fast reduction of HAuCl4 (5 mL, 0.25 mM) with freshly prepared NaBH4 (0.3 mL, 10 mM) in aqueous CTAB solution (100 mM) under vigorous stirring for 2 min at room temperature; the mixture was then kept undisturbed at 27 °C for 30 min to ensure complete decomposition of the sodium borohydride. The mixture turns from light yellow to brownish, indicating the formation of gold seeds. To grow 12 nm gold nanospheres from the gold seeds, an aliquot of seed solution (0.6 mL) was added under vigorous stirring to a growth solution containing CTAC (100 mL, 100 mM), HAuCl4 (0.36 mL, 50 mM), and ascorbic acid (0.36 mL, 100 mM). The mixture was left undisturbed for 12 h at 25 °C. The solution containing gold nanoparticles was centrifuged (9000 rpm for 1 h) to remove excess CTAC and ascorbic acid, and redispersed in 1 mM CTAB to a final gold concentration of 1 mM. To grow the 12 nm gold nanospheres up to 28 nm in diameter, an aliquot of the 12 nm AuNS solution (2.14 mL, 1 mM) was added under magnetic stirring to a growth solution (100 mL) containing benzyldimethylhexadecylammonium chloride (BDAC, 100 mM), HAuCl4 (0.25 mM), and ascorbic acid (0.5 mM). The mixture was left undisturbed for 30 min at 30 °C and then washed twice by centrifugation (8000 rpm for 1 h); the particles were finally dispersed in 1 mM CTAB to a final gold concentration of 1 mM. For the nanorod growth, the stirring was stopped after 5 min, and the mixture was left undisturbed for 2 h at 30 °C; the nanoparticles were washed by two centrifugation rounds (8000 rpm, 30 min) to remove excess reagents, and after the second centrifugation step the solution was redispersed in CTAB (100 mM) to a final gold concentration of 1 mM. The gold nanorods (15 mL, 1 mM) were partially oxidized with Au3+ (3 mL, 1 mM, 1 mL/h) until the longitudinal absorption band was located at 694 nm; the solution was then centrifuged (9000 rpm for 1 h) and redispersed in 1 mM CTAB. The concentration of gold for ligand exchange was 1 mM.

Ligand exchange: 45 To replace the surfactant and transfer the gold nanoparticles to the alcoholic mixture, thiolated polyethylene glycol (PEG-SH) of molecular weight 5K was used. An aqueous solution of PEG-SH (25.4 mg and 10.9 mg for the 12 and 28 nm gold nanospheres, respectively, and 21.3 mg for the gold nanorods, dissolved in 2 mL of water) was added dropwise under stirring to a dispersion of gold nanoparticles (12 mL, 1 mM) in 1 mM CTAB. The solution was left for 2 h under stirring and then centrifuged twice in a mixture of methanol-ethanol (4:1). The pegylated gold nanoparticles were finally dispersed in methanol-ethanol (4:1). Representative TEM images and extinction spectra of the AuNP colloids employed in the experiments are shown in Figure 5. The investigated AuNS have average diameters of 12.3 ± 0.3 and 28.2 ± 0.4 nm, and their extinction spectra show characteristic surface plasmon resonance (SPR) bands centered at 521 and 523 nm, respectively. The AuNR have a mean length of 36.1 ± 0.6 nm, a mean diameter of 10.7 ± 0.4 nm, and an AR of 3.4 ± 0.2; their optical spectrum shows the characteristic band structure associated with a transversal SPR at 510 nm and a longitudinal SPR at 694 nm.

High-Pressure X-ray Diffraction. XRD measurements of AuNP under high-pressure conditions were performed at the SOLEIL Synchrotron (France), using the PSICHÉ beamline. 2D XRD data were collected on a Dectris CdTe 2M detector, using a monochromatic X-ray beam with a wavelength of 0.3738 Å, focused to a beam size of 12 × 14 μm2 (FWHM).
A membrane DAC with 300 μm diameter culets and automatic control of the membrane pressure was employed as the pressure generator, in a parallel-geometry diffraction configuration (incident X-ray beam parallel to the DAC load axis). The gold colloids were loaded into a 150 μm diameter hole within a stainless-steel gasket that had been preindented to a thickness of 35 μm. Compacted polycrystalline gold powder of 2 μm average grain size was used as a pressure calibrant, following the EOS reported by Heinz et al., 14 with the MeOH-EtOH 4:1 AuNP dispersion itself as the pressure transmitting medium. V(P) data of the gold calibrant were fitted to a Vinet EOS, which yielded a0 = 4.0787(2) Å; V0 = 67.852(9) Å3; K0 = 167.0 GPa; and K0′ = 5.7(2). We followed the EOS reported by Heinz et al., 14 since those measurements were carried out using MeOH-EtOH 4:1 as the PTM, similar to our system; using this equation instead of the one reported more recently by Dewaele et al., 18 obtained with helium as the PTM, induces a maximum deviation of only 0.3 GPa at 30 GPa, ca. 1%. The reversibility of the process was studied within both the hydrostatic regime (0−7 GPa in upstroke and downstroke) and the nonhydrostatic regime (0−30 GPa in upstroke and downstroke). The described setup provides suitable XRD patterns covering the (111), (200), (220), (311), and (222) Bragg reflections over the whole pressure range and for all samples; reflections corresponding to the (400), (331), and (420) planes could also be recorded in the low-to-mid pressure range (0−15 GPa) for the 12.3 nm AuNS and for the AuNR. The lattice parameter was determined by means of a Le Bail-type analysis, 46 fitting pseudo-Voigt profiles to the diffraction patterns. Precise lattice parameters (Δa/a ≈ 5 × 10−5) and FWHM values (ΔFWHM/FWHM ≈ 10−2) were obtained in the hydrostatic regime (0−7 GPa), with a residual Rw of ca. 1%. The fitting quality decreased progressively in the nonhydrostatic region (P > 9 GPa), with Rw values around 10%, caused by axial stress components. The stress-induced shifts of the Bragg peaks with respect to purely hydrostatic conditions were analyzed by means of the stress state model reported by Singh, 36 which allowed us to account for the effects of axial stress on the NPs (see more details in the SI). We used the stress-continuity-across-grain-boundaries approximation, 47 as it provides a better description of the stress field in the DAC. Our analysis showed that the Reuss approximation provided the best match to the XRD patterns, as compared to a mixed Reuss−Voigt 48 shear-moduli approximation. This model allowed us to obtain the uniaxial stresses superimposed on the hydrostatic pressure, from the lattice strains derived from the XRD patterns and the elastic compliances of bulk gold. 28−30
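For illustration, the fcc lattice parameter follows from each Bragg reflection via Bragg's law, a = λ·sqrt(h² + k² + l²) / (2 sin θ); the 2θ values below are placeholders chosen to be consistent with a ≈ 4.07 Å at λ = 0.3738 Å, not measured positions:

```python
# Lattice parameter of an fcc crystal from indexed Bragg peak positions.
import numpy as np

lam = 0.3738  # X-ray wavelength in Angstrom, matching the beamline value above
reflections = {(1, 1, 1): 9.12, (2, 0, 0): 10.53, (2, 2, 0): 14.93}  # hkl: 2theta (deg)

for (h, k, l), two_theta in reflections.items():
    theta = np.radians(two_theta / 2.0)
    a = lam * np.sqrt(h**2 + k**2 + l**2) / (2.0 * np.sin(theta))
    print(f"({h}{k}{l}): a = {a:.4f} A")   # each reflection should give ~4.07 A
```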
The experiments were carried out employing a monochromatic X-ray beam of 0.8265 Å wavelength passing through the DAC and focused at the position of the two-dimensional EIGER 4M detector, located 2030 mm downstream of the sample. The selected sample-detector distance and beam energy (15 keV) allowed us to locate the optimum scattering angular range, in order to obtain the most precise values of the form factor (size and shape of the NP) and the structure factor (aggregate formation or NP precipitation). The gold colloids were loaded into a membrane DAC with automatic control over the membrane pressure, diamonds of 800 μm culet diameter, and a 300 μm hole drilled in a CuBe gasket pre-indented to 100 μm. Ruby microspheres of 10-20 μm in diameter were placed into the sample chamber as pressure markers, following the relationship between the ruby R₁,₂ line shift and pressure. 49 The hydrostaticity of the pressure-transmitting medium was monitored through the ruby R-line broadening, whose line width is known to decrease slightly with pressure in the hydrostatic range and to broaden progressively with pressure in the nonhydrostatic range. 33,50 The relatively large size of the diamonds enabled us to load a significant amount of sample (0.1 mm³). However, it also limits the achievable pressure range to 5 GPa. In these experiments, we worked with AuNP colloids in EtOH as PTM, since it solidifies at about 3.5 GPa, thus enabling us to explore the effects of both hydrostatic and nonhydrostatic pressure on colloidal stability. SAXS images with 1 s exposure time were normalized and azimuthally integrated into curves using the local application Foxtrot, and then further analyzed with the SASfit software 51 to test the geometries corresponding to each colloid and to explore different structure factors related to NP aggregation.

Transmission Electron Microscopy. TEM images were obtained with a JEOL JEM-1400PLUS transmission electron microscope operating at an acceleration voltage of 120 kV. AuNP colloids were measured before and after the pressure treatments. In the latter case, the sample was recovered from the pressure cavity of the gasket by transferring the colloidal mixture onto a copper grid by touching the culet surface of the diamond anvil after pressure release. Although this method can accidentally drag some external AuNP from outside the hydrostatic cavity onto the grid, compressed AuNPs can be readily identified by their deformed shapes (on the order of 1 out of 300 observed NPs). This method allows us to explore the aggregation state, as well as the size and shape, of the compressed NPs.
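The accessible q-range follows from the stated geometry via q = (4π/λ)·sin(θ), with 2θ = arctan(r/D) for a radial distance r on a detector at distance D. A minimal sketch; the ~80 mm usable detector radius is our assumption (the text does not state the beam position on the EIGER 4M):

```python
import math

WAVELENGTH = 0.8265     # Angstrom (15 keV)
DETECTOR_DIST = 2030.0  # mm, sample-to-detector distance

def q_at_radius(r_mm):
    """Scattering vector q (1/Angstrom) at radial distance r_mm on the detector."""
    two_theta = math.atan(r_mm / DETECTOR_DIST)
    return 4.0 * math.pi / WAVELENGTH * math.sin(two_theta / 2.0)

print(q_at_radius(80.0))  # ~0.30 1/Angstrom upper bound, assuming a centred beam

# The first minimum of the sphere form factor lies at q*R ~ 4.493, so for the
# 12.3 nm AuNS (R = 61.5 Angstrom) it falls near q ~ 0.073 1/Angstrom,
# comfortably inside this range.
print(4.493 / 61.5)
```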
Probabilistic models for the withdrawal behavior of single self-tapping screws in the narrow face of cross laminated timber (CLT)

Cross laminated timber (CLT) and self-tapping screws have strongly dominated the latest developments in timber engineering. Although knowledge of connection techniques in traditional light-frame structures can be applied to solid timber constructions with CLT, there are some product specifics requiring additional attention, for example the positioning of fasteners, the differentiation between the side face and the narrow face of the panels, and the influence of potential gaps. The load-displacement behaviour of single, axially loaded self-tapping screws positioned in the narrow face of CLT and failing in withdrawal was investigated. For the first time, a multivariate probabilistic model was formulated, together with models relating the parameters to the thread-fibre angle and the density. Different types and widths of gaps, initial slip and/or delayed stiffening, as well as softening after exceeding the maximum load can be considered. Beyond the scope of this contribution, the probabilistic model is seen as a worthwhile basis for investigations into the withdrawal behaviour of primarily axially loaded, compact groups of screws positioned in timber products and subjected to withdrawal failure.

Introduction

Cross laminated timber (CLT), as a laminar engineered timber product, and self-tapping screws, as metal fasteners, have strongly dominated the latest developments in timber engineering. Both innovations have considerably contributed to the increasing market share of timber structures in the overall building market, in particular in Europe, in countries like Austria, Germany and the United Kingdom (e.g. Teischinger et al. 2011; Statistisches Bundesamt 2016; Lane 2014). Whereas CLT opens up new horizons in timber engineering, in areas which had until now been the realm of mineral building materials such as concrete and masonry, fully or partially threaded self-tapping screws, optimized for load-bearing in the axial direction, facilitate economic and versatile applications. Although existing knowledge from linear timber members can also be widely used for connecting CLT elements, some product specifics require additional attention, for example the positioning of fasteners and the differentiation between the side face (the plane surface of CLT) and the narrow face (the cross section of CLT). Up to now, only a few investigations are available focusing on the behaviour of single fasteners, and in particular on self-tapping screws in CLT (e.g. Blaß and Uibel 2007, 2009; Reichelt 2012; Grabner 2013; Silva et al. 2014; Ringhofer et al. 2014, 2015). The behaviour and performance of fasteners interacting in a group depend decisively on the load-displacement behaviour of the single fasteners, their variability and the workmanship. Focusing on the aleatoric uncertainty, accurate modelling of the load-displacement behaviour of single fasteners is of great importance for judging the capacity of the group in a reliable and economic way and for suggestions on the optimal design and use of the fasteners. Although research on the load-displacement behaviour of single fasteners and probabilistic models are available, none of these investigations consider the withdrawal behaviour of axially loaded self-tapping screws.
For adequate modelling of the load-displacement behaviour of groups of self-tapping screws, additional consideration of softening after exceeding the maximum load is mandatory, a circumstance that has so far not been covered in existing research in timber engineering, which focuses on dowel-type fasteners stressed in shear rather than on axially loaded fasteners. In addition to these general circumstances, which are relevant for axially loaded screws positioned in any kind of timber product, there are some important specifics to consider when screws are placed in CLT. Apart from the necessity to differentiate between side and narrow face, the layup of CLT and its orthogonal structure cause a variety of product parameters potentially influencing the product's properties and hence also the withdrawal properties of screws in CLT. Overall, the main CLT product parameters of interest for screws failing in withdrawal are the orientation and dimension of layers and lamellas, the execution of the narrow face (with or without bonding), the types and widths of gaps, and stress reliefs within layers. With focus on screws in the narrow face of CLT, the possibility of gaps between boards within one layer or in between different layers, and the possibility of screws featuring different thread-fibre angles α along their circumference, require consideration; see Fig. 1. In all these cases, the withdrawal properties may be influenced remarkably.

State of knowledge of axially loaded self-tapping screws positioned in the narrow face of CLT

There are only a few investigations into the withdrawal behaviour of axially loaded self-tapping screws positioned in the narrow face of CLT. The first known comprehensive study is reported in Blaß and Uibel (2007). Withdrawal tests were made using industrially produced CLT of three and five layers, with and without bonding on the narrow face. The investigations also included the influence of positioning screws in different types of gaps. However, as the width of the gaps varied randomly from test to test, and the gaps were on average only 0.5-2.0 mm wide, a direct relationship between withdrawal strength and the type and width of the gaps cannot be determined. Furthermore, the tests were restricted to thread-fibre angles of 0°, 90° and combinations of both. The ratio between the average withdrawal strengths at 90° and 0° was found to be f_ax,90,mean / f_ax,0,mean = 1.25, and the approach of Hankinson (1921) (see Sect. 3.1) was applied. The influence of density on f_ax was described by a power model with exponent 0.75. In comparison to Blaß et al. (2006), who investigated the withdrawal capacity of axially loaded screws in solid timber, slightly conservative but overall congruent regression models were determined. Based on their research from 2007, Blaß and Uibel (2009) suggested using the characteristic withdrawal parameter at a thread-fibre angle of 0° (f_ax,k | α = 0° = f_ax,0,k = 0.67 f_ax,90,k) for a simplified design of axially loaded self-tapping screws in the narrow face of CLT, irrespective of the orientation of the corresponding layer and thus irrespective of the thread-fibre angle α.
State of knowledge of modelling of the load-displacement behaviour of single fasteners and groups of fasteners

Numerous studies have focused on modelling the load-displacement behaviour of fasteners; for a summary and comparison see Attiogbe and Morris (1991). Foschi (1974), for example, used an exponential model for modelling the foundation properties of nails stressed in shear. Richard and Abbott (1975) defined a power model which can be applied to describe the load-displacement behaviour of metal fasteners in timber. The models of Foschi (1974) and Richard and Abbott (1975) were later extended, for example, by Blaß (1991) and Jaspart (1991), respectively, introducing ideal plastic yielding of the fastener after exceeding the maximum load. Although these models have already been successfully used in probabilistic investigations into the behaviour of single fasteners and of the group action of fasteners (e.g. Blaß 1991), there are some decisive characteristics which limit their merit in modelling the withdrawal behaviour of axially loaded self-tapping screws, for example:
• The point (F_max; w_f), with w_f as the deformation associated with the maximum load F_max, is not part of the previously mentioned models; thus, the relationship between F_max and w_f may not be accurately represented;
• The extensions of both models assume ideal plastic yielding after exceeding F_max; this approximation appears suitable for dowel-type fasteners such as nails or dowels stressed in shear; however, the non-linear softening after exceeding F_max, which is typical for self-tapping screws failing in withdrawal, is not provided.
Glos (1978) elaborated a polynomial model which also allows softening after exceeding the maximum load. His model has five parameters, given in terms of a load-displacement curve as: the initial stiffness k_ser (at w = 0); F_max and w_f as the ultimate load and the corresponding displacement, respectively; F_asym as the asymptotic resistance at w → ∞; and the exponent c as shape parameter. This exponent offers high flexibility in calibrating the model to the non-linear part of the test data and constitutes the only non-mechanical material property. Although the model was originally developed to describe the stress-strain relationship of timber in compression parallel to the grain, it has meanwhile also been used for other investigations, for example for reliability analyses of parallel systems; see Gollwitzer (1986) and Gollwitzer and Rackwitz (1990). Recently, Flatscher (2014) extended Glos' model by an additional calibration parameter to improve the characterisation of load-displacement curves of various joints. Despite some limitations, Glos' approach is seen as a worthwhile basis for modelling the load-displacement curve of axially loaded self-tapping screws. Its core is used and adapted later in Sect. 2.2.

Motivation and objective

The small number of probabilistic investigations on fasteners and joints in timber engineering in general, and in particular the insufficient and fragmented knowledge of the behaviour of axially loaded self-tapping screws in the narrow face of CLT, have motivated the establishment of a probabilistic model, usable for investigations aiming at a more reliable and perhaps even more economical joint design and application. Therefore, a probabilistic model approach was established which allows an accurate and reliable characterization of the withdrawal behaviour of axially loaded self-tapping screws positioned in the narrow face of CLT.
By modelling the load-displacement curve, initial slip or delayed stiffening at the beginning of loading, as well as softening after exceeding the maximum load, are addressed. The model parameters were inferred from one of two independently conducted test series by Grabner (2013); the model was validated with data from the other test series, conducted on screws influenced by gaps, and its predictive quality was demonstrated.

Series of withdrawal tests

The data of two independently conducted test series on self-tapping screws positioned in the narrow face of CLT and tested in withdrawal originate from Grabner (2013). The five-layered CLT of both series was made of Central European Norway spruce (Picea abies). For the withdrawal testing, partially threaded screws ASSY 3.0 (d = 8 mm, d_1 = 5.5 mm, l = 400 mm, l_thread = 100 mm; ETA-11/0190 2011) and ECOFAST ASSY II (d = 12 mm, d_1 = 7.2 mm, l = 440 mm, l_thread = 140 mm; Z-9.1-514 2011) were used, with d and d_1 as the outer and inner thread diameter, respectively. The penetration length of the screw in CLT was 10 d = 80 and 120 mm for d = 8 and 12 mm, respectively. For the calculation of the withdrawal parameters, the effective length was defined as the penetration length minus the tip length, with l_ef = 80 − 9.1 = 70.9 mm and 120 − 13.5 = 106.5 mm. In the case of pre-drilling, the drill diameters used were ≤ d_1, with 5 and 7 mm for screws of d = 8 and 12 mm, respectively. All tests were conducted as push-pull tests according to EN 1382 (1999), displacement-controlled (constant rate of displacement) with time to failure within 90 ± 30 s. A constant displacement rate was applied to better characterise the softening behaviour. A pre-load of, on average, 150 N was used to fix the screw's position in the centre of the circular hole in the counter plate and to reduce measurement artefacts at the beginning of loading. All screws were inserted parallel to the side face of CLT. Measurement of the local deformations was performed only for screws of diameter d = 8 mm, by using inductive displacement transducers until the load dropped to ≤ 0.75 F_max, with F_max as the maximum load per test. The local deformations later used in the analysis comprise both the deformation of the screw part embedded in timber and the local deformation of the timber; thus, the deformation of the screw-timber composite is represented. Local density and moisture content were determined according to ISO 3131 (1975) and EN 13183-1 (2002), respectively. The density of tests with screws placed in gaps was determined as the sum of the densities of all involved layers multiplied by their theoretical proportions. Tests of screws that penetrated or touched local growth characteristics, such as knots, checks, reaction wood and resin pockets, were recorded. Based on a statistical data analysis (Grabner 2013), all tests with knots had to be excluded, whereas tests with reaction wood and resin pockets remained in further data processing. All tests were performed at a moisture content of u = 12 ± 2%. Thus, a correction of the test parameters to the reference moisture content of u_ref = 12% was not required.

Test series I

Series I comprised tests in 160 mm thick, industrially produced CLT elements with a base material of strength class C24 according to EN 338 (2009) and layer thicknesses (from top to bottom) of 40, 20, 40, 20 and 40 mm. The aim of the tests was to investigate (1) the influence of the thread-fibre angle α, (2) the influence of positioning screws in different layers and between layers, and (3) the influence of pre-drilling.
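As an aside on the quantities just defined: they combine into the withdrawal strength via the definition quoted later in Sect. 3.1, f_ax = F_max / (d·π·l_ef). A minimal sketch of this bookkeeping (the illustrative load of 10,842 N is the mean capacity at α = 90° reported later in Sect. 3.2, used here only as an example):

```python
import math

def effective_length(penetration_mm, tip_mm):
    """l_ef: penetration length of the screw minus the tip length."""
    return penetration_mm - tip_mm

def withdrawal_strength(f_max_n, d_mm, l_ef_mm):
    """f_ax = F_max / (d * pi * l_ef), in N/mm^2."""
    return f_max_n / (d_mm * math.pi * l_ef_mm)

print(effective_length(80.0, 9.1))    # 70.9 mm for d = 8 mm
print(effective_length(120.0, 13.5))  # 106.5 mm for d = 12 mm
print(withdrawal_strength(10842.0, 8.0, 70.9))  # ~6.1 N/mm^2
```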
Concerning (1) and (2), withdrawal parameters were determined in all layers of CLT (top layer, TL; cross layer, CL; and middle layer, ML) with α = 0°, 30°, 45°, 60° and 90°. Tests on screws positioned between two layers were accomplished for α = 0°|90° and 45°|45°. Twenty tests were made for each parameter combination. To assure perfect positioning of all screws, in particular of screws placed between two layers, pre-drilling was applied. Additional tests without pre-drilling were conducted in TL for α = 0°, 45° and 90° to judge the influence of pre-drilling (3). The data of series I is later used to infer the model parameters. As the local deformations were only measured for screws of diameter d = 8 mm, only these data sets can be used. Furthermore, only the data of screws positioned centrically in one layer are considered. As the comparison of withdrawal parameters based on tests with and without pre-drilling does not show any significant differences (Grabner 2013), further investigations are restricted to tests with pre-drilling; for the test execution in series II, see Table 1.

Test series II

To investigate the influence of gaps on the withdrawal parameters, the gap type and width w_gap were varied; see Table 1. Three types of gaps, namely gaps between boards within the same layer (butt joint; BuJ), gaps between neighbouring layers (bed joint; BeJ) and a combination of both (T-joint; TJ), were examined using fixed gap widths of w_gap = (0, 2, 6) mm (see Fig. 1). The range of w_gap corresponds to common regulations currently anchored in technical approvals for CLT; see e.g. Brandner (2013a). The CLT elements were produced in the laboratory at the Institute of Timber Engineering and Wood Technology at Graz University of Technology. They were composed of boards of strength class C16 according to EN 338 (2009) and with layer thicknesses (from top to bottom) of 37, 20, 37, 20 and 37 mm. To minimize the influence of growth characteristics in timber, for example knots, knot clusters and checks, all board segments featuring these characteristics were trimmed out. The residual board segments were mixed and afterwards, during the production of the CLT, taken from the batch at random. In doing so, sub-series of CLT with comparable timber properties, i.e. densities (matched samples), were obtained. Compliance with the pre-defined gap widths was assured by step-by-step production: starting with surface bonding of two transverse layers in hydraulic press equipment at a bonding pressure of 0.4 N/mm², then cutting the gaps, sealing them with tape to prevent filling with adhesive, and continuing the process until five-layer CLT elements were achieved. Pre-drilling was applied to all tests to maximize the precision in the positioning of the screws relative to the gaps. The data of series II is later used for validating the probabilistic model regarding the influence of gaps and mixed thread-fibre angles.

[Table 1 (fragment): sample sizes. Series I: 20 specimens per parameter set, in total 300 tests. Series II: 20 specimens per parameter set, except joints with w_gap = 6 mm (5 per series), in total 400 tests.]

Modelling the load-displacement behaviour of single self-tapping screws failing in withdrawal

To represent the load-displacement curve of axially loaded self-tapping screws, the core of Glos' model (Glos 1978; see Sect. 1.2) is used and the following important adaptations and extensions are made:
• At first, the approach is simplified by using F_asym = 0.
This is argued by the fact that a residual resistance F_asym > 0 | w → ∞ cannot be observed in timber primarily stressed in longitudinal and/or lateral shear, as is the case in testing screws in timber against withdrawal.
• Secondly, the test data indicate delayed stiffening at the beginning of loading, which is a well-known characteristic of natural, hierarchically structured materials (e.g. see Gordon 1988). To account for this phenomenon, and as the principal shape of this first branch of the load-displacement curve is in general not decisive for engineering applications, a horizontal shift of the curve is introduced by Δw_ini (see Fig. 2). This shift is not of relevance for single screws but may be of importance for investigations into the interaction of screws in groups. However, as a pre-load was applied (see Sect. 2.1), the available test data provide only underestimations of Δw_ini. For the modelling of single screws, Δw_ini is set to zero.
• As Glos' model does not provide a linear-elastic part at the beginning of the load-displacement curve (see Fig. 2, right), which can be approximately observed in withdrawal tests with hysteresis loops, its stiffness parameter k_ser | w = 0 does not correspond to k_ser from tests, usually determined according to EN 26891 (1991). This standard defines k_ser as the gradient of the load-displacement curve up to 0.4 F_max. For the applicability of the standardized k_ser as model parameter, a linear part Δw_lin is introduced between w_ini and w_lin, after which the model of Glos (1978) starts with k_ser | w = w_lin; see Fig. 2, left. The corresponding formalisms for the parameters k_1, k_2 and k_3 are given in Eqs. (1)-(4).
As most of the data sets provide information at least down to ≤ 0.75 F_max in the softening domain (w > w_f), it is aimed to provide stochastic information for the model parameters up to this domain, but to control the representation by the model until the end of the recorded data. All model parameters are determined directly from the tests, apart from the shape parameter c, which is gained from calibrating the model to real data. If applicable, the parameters Δw_ini, Δw_lin, k_ser, F_max and w_f of each data set were determined according to EN 26891 (1991). Additionally, the parameters Δw_ini, Δw_lin and k_ser are based on the apparent linear part (constant gradient) of the load-displacement curve, which, in the case of displacement-controlled testing, equals a constant increase in load. This part of the load-displacement curve can easily be found by analysing the load increment per displacement increment; see Fig. 2, right. Thus, the apparent beginning and end of the constant part correspond to w_ini and w_lin, respectively, with k_ser as the gradient of F_ax(w) vs. w.

Statistical analysis and inference

The statistical analysis and inference, as well as the modelling and simulation of virtual load-displacement curves, are undertaken in R (2013). For both test series of Grabner (2013), a comprehensive statistical analysis is made to infer reliable statistics for the model parameters in Sect. 2.2. Starting with harmonizing the data basis from test series I, including a density correction, representative distribution models for each parameter, as univariate variables, are determined.
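Returning briefly to the adapted model of Sect. 2.2: since Eqs. (1)-(4) did not survive into this text, the following sketch shows one way to write the F_asym = 0 core that is consistent with the constraints described above (initial gradient k_ser, maximum F_max at w_f, softening towards zero for w > w_f, shape parameter c > 1). The parameter formalisms below are derived from these constraints and are not necessarily identical to the authors' Eqs. (1)-(4); Δw_ini and Δw_lin are omitted, as for single screws Δw_ini is set to zero anyway:

```python
def glos_core(w, k_ser, f_max, w_f, c):
    """F(w) = w / (k1 + k2*w + k3*w**c): gradient k_ser at w = 0, peak F_max at
    w = w_f, softening towards F_asym = 0 beyond (requires c > 1)."""
    k1 = 1.0 / k_ser
    k2 = 1.0 / f_max - c / ((c - 1.0) * k_ser * w_f)
    k3 = 1.0 / ((c - 1.0) * k_ser * w_f**c)
    return w / (k1 + k2 * w + k3 * w**c)

# Mean parameters for alpha = 90 deg, d = 8 mm (quoted in Sect. 3.2):
# F_max in N, k_ser in N/mm, w_f in mm.
print(round(glos_core(2.56, 11994.0, 10842.0, 2.56, 5.25)))  # 10842 N at w_f
print(round(glos_core(5.00, 11994.0, 10842.0, 2.56, 5.25)))  # ~4990 N: softening
```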
Apart from comparisons between statistics and expert judgements, the outcomes of pairwise t-tests, conducted on untransformed and logarithmized data sub-sets, and of the Mann-Whitney U test, as a parameter-free test, are additionally used to support the decisions. This hypothesis testing, with H_0: Z = 0 vs. H_1: Z ≠ 0, represents a pairwise comparison of mean and median values, respectively, at a significance level of 5%. Testing of logarithmized data is motivated by the circumstance that properties of timber are frequently well represented by a lognormal distribution and by the background that logarithmized lognormal variables are normally distributed. Regression analyses for the relationships between the model parameters and α, and correlation analyses, to determine adequate and physically traceable correlations between the model parameters, are made. For decisions on correlations, estimates based on Pearson's correlation coefficient and on the parameter-free rank correlation coefficient of Spearman are tested for significance (significance level 5%) and compared. To verify the multivariate model approach, virtual load-displacement curves are generated with 1000 runs per parameter setting (gap type, gap width and screw diameter). By means of box-plots, the statistics of these simulations are compared with those of the second test series of Grabner (2013). As input parameters for the stochastic-mechanical model are only available for screws with d = 8 mm, ratios are used. Deviating from the general box-plots, the whiskers of the simulated data correspond to the 5%- and 95%-quantiles calculated according to rank statistics, with the 5%-quantiles marked as (×). In addition to the box-plots, 95%-confidence intervals (CIs) of the mean ratios, based on t-tests with CIs according to Fieller and calculated in R (2013), are provided. These CIs are only approximate, as the test was originally developed for mutually independent normal variables. It is proven that the test statistic from logarithmized data does not show any noticeable differences.

Results and discussion

The results and discussion are presented in context with the associated literature. Section 3.1 aims at statistics for the parameters of the load-displacement model in Sect. 2.2; data sets are combined to increase the power of the statistical inference. The multivariate approach is defined in Sect. 3.2 and validated in Sect. 3.3.

General data analysis

Table 2 presents the main statistics of density, withdrawal capacity and stiffness for each layer and α separately. An increase in the withdrawal properties with increasing α can be observed. This is a well-known circumstance, which has already been anchored in EN 1995-1-1 (2008) and in technical approvals for self-tapping screws. Furthermore, withdrawal properties determined from tests in different layers but of comparable density show no significant differences. Thus, a certain amount of locking effect, caused by transversely oriented neighbouring layers, cannot be confirmed for the tested layups with layer thicknesses of 20 and 40 mm and the screw diameter used. Focusing on the density, currently used as the only indicating material parameter for the withdrawal properties, average values between 430 and 460 kg/m³ can be observed in most sub-series, except for the sub-series with α = 90° in layers TL and ML, which show on average a significantly lower density, and the sub-series in layer CL with α = 45°, 60° and 90°, which show on average significantly different densities.
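As an aside, the statistical toolkit just described (undertaken by the authors in R) can be sketched in Python on synthetic stand-in data; the lognormal parameters here are arbitrary illustrations, not the paper's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for a withdrawal parameter in two sub-sets (n = 20 each):
x = rng.lognormal(mean=9.3, sigma=0.13, size=20)
y = rng.lognormal(mean=9.3, sigma=0.13, size=20)

# Lognormality check: normality test applied to the logarithmized data
print(stats.shapiro(np.log(x)).pvalue)

# Pairwise comparison of means (t-test on logs) and of medians (parameter-free):
print(stats.ttest_ind(np.log(x), np.log(y)).pvalue)
print(stats.mannwhitneyu(x, y).pvalue)

# Correlation: Pearson on the logarithmized data vs. Spearman rank correlation
print(stats.pearsonr(np.log(x), np.log(y)))
print(stats.spearmanr(x, y))
```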
The same data sets of CL, but also that with α = 30°, reflect unexpectedly high coefficients of variation in density, with CV[ρ_12] = 14-17% (E[CV[ρ_12]] = 6-10%; see Brandner 2013b). The different layer thicknesses of TL and ML vs. CL, together with the deviating densities, indicate raw material from different proveniences and/or from different cross-sectional regions within the log (juvenile vs. adult timber). Pairwise comparisons between average values and between medians of the model parameters (F_max, k_ser, Δw_lin, Δw_f and c) at given α but different layers reflected significant differences in sub-sets with different densities. Thus, to reduce possible influences of the different material quality of CL on the inferred withdrawal parameters, the data of CL is excluded from further processing; the data of TL and ML are combined, see Table 3.

Density correction and representative distribution models

To find adequate corrections for the influence of density on the withdrawal parameters, simple regression analyses were performed. In doing so, the representative marginal distributions of the independent and dependent variables were considered. Therefore, tests on normal (ND) and lognormal (2pLND) distribution were applied for each parameter X = (ρ_12, F_max, k_ser, Δw_lin, Δw_f and c) and for each thread-fibre angle α = 0°, 30°, 45°, 60° and 90° separately. Overall, the hypothesis of lognormality was rejected in far fewer cases than that of normality. In two-thirds of all tests the realized significance for 2pLND was higher than for ND, indicating rather lognormally than normally distributed data. Consequently, and for physical reasons (properties of a highly hierarchical material, which is additionally restricted to the positive domain ℝ⁺; see Brandner 2013b), 2pLND is preferred for the parameters ρ_12, F_max and k_ser. 2pLND is also used for the parameters Δw_lin, Δw_f and c, due to its simplicity and because the outcome of the tests is unclear. With 2pLND as marginal distribution, the relationships between (F_max, k_ser, Δw_lin, Δw_f and c) and ρ_12 were investigated by means of simple power regression analyses. Simple instead of multiple regression analyses were undertaken to allow the regression parameters to be judged set by set. Firstly, this is because, although the data sets of TL and ML are combined, the samples are still small. Secondly, the range of ρ_12 in the present data, 380 to 520 kg/m³, restricts the inference of global relationships. Thirdly, Ringhofer et al. (2014) observed that relationships between withdrawal parameters and ρ_12 may change discontinuously with the changing thread-fibre angle. As expected, a highly significant and positive relationship between F_max and k_ser vs. ρ_12 is observed, the exceptions being F_max,0 and k_ser,90. The reduced significance of F_max,0 vs. ρ_12 has already been mentioned in Ringhofer et al. (2014) and implicitly demonstrated in Pirnbacher et al. (2009). This circumstance is attributed to changes in the associated failure planes in shear (with LR, LT and RT denoting failure planes in longitudinal-radial, longitudinal-tangential and radial-tangential direction, respectively); see Fig. 3. Besides Δw_f | α = 0° ∪ 60°, no significant results were found in the regression analyses for the parameters c, Δw_lin and Δw_f. The gradients k [see Eq. (7)] are overall negative, indicating a stiffer and more brittle withdrawal behaviour with increasing density, a circumstance which is in line with experimental observations.
The exponents k_X found in the analysis are considerably different from those given in Blaß et al. (2006), who proposed 0.8, 0.2 and 0.5 for the correction of F_max, k_ser and Δw_f. A comparable exponent for F_max can be found, for example, in Blaß and Uibel (2007). However, Newlin and Gahagan (1938, 1.5), McLain (1997, 1.77-1.35), Soltis (1999, 2.0-1.5), Schneider (1999, 1.78) and Hübner (2013, 1.6) found exponents in the range of 1.35-2.0 for the withdrawal strength f_ax = F_max / (d π l_ef). A comprehensive summary for soft- and hardwoods can be found in Hübner (2013). The correction for density, which reduces the variability in the withdrawal parameters, is made to improve the estimates of the average (mean) relationships between the withdrawal parameters and α. However, the estimates of the coefficients of variation (CV) are still based on the uncorrected (observed) data sets, as far as the variability in density is within the expected range of 6-10% (Brandner 2013b); this is fulfilled for all thread-fibre angles of the combined data set of TL and ML. The outcome, based on the non-linear least-squares method, is visualized in Fig. 4 and quantified in Table 4. A significantly reduced variability caused by the density correction is observed in all data sets of parameters with a significant relationship to density, i.e. F_max and k_ser. The medians are not influenced, as ρ_12,mean of the uncorrected (observed) data set was approximately equal to the reference density; the significantly lower density at α = 90° causes a shift in the medians. The average withdrawal capacity F_max,mean | α = 0° and 90° is significantly lower and higher, respectively, than F_max,mean | 30° ≤ α ≤ 60°, whereas no significant differences are found between the means and medians of log(F_max) | 30° ≤ α ≤ 60°. The approach of Hankinson and the bi-linear model allow only a rough approximation of the relationship F_max vs. α. The ratio k_90,mean = f_ax,90,mean / f_ax,0,mean = 1.45 is higher than in Blaß et al. (2006) and Blaß and Uibel (2007). The bi-linear approach in Fig. 4 overestimates F_max | 0° < α < 90°; a calibration to F_max | α = 60° would motivate a constant branch within 30° ≤ α ≤ 90°; however, the resistance at α = 90° would then be significantly underestimated. Hankinson's approach, although commonly used and anchored in EN 1995-1-1 (2008), does not provide an inflexion point with a constant plateau within 30° ≤ α ≤ 60°. Thus, the polynomial approach is used further. The relationship k_ser vs. α shows a significant reduction in the medians from α = 0° to 30° and from 45° to 60°. This course is again attributed to changing failure planes in withdrawal; see Fig. 4. For example, the transition from LR to LT and reverse at α ≈ 45° is already well known for solid timber (see e.g. Denzler and Glos 2007; Brandner et al. 2012). In the case of withdrawal and thread-fibre angles changing from 0° to 90°, there is a transition from LR and LT to primarily TL and RT (side boards) or, more infrequently, to RL and RT (rift boards). Rolling shear causes the resistances in the TL and RL planes to be lower than those in the TR and RT planes, which are exposed to transverse shear. The bi-linear model, as the simplest model for the description of k_ser vs. α, is used further. The comparison of the two relationships, F_max and k_ser vs. α, shows that they are not only inverse but also different.
This indicates that the mechanisms relevant at maximum load may differ from those relevant in the apparent linear-elastic part of the load-displacement curve. The bi-linear approach is preferred for the relationship Δw_lin vs. α. This decision is supported by highly significantly different medians at α = 0°, 30° and 45°, determined by pairwise Mann-Whitney U tests, whereas the hypothesis of equal medians at 45°, 60° and 90° cannot be rejected. The modified Hankinson approach failed because of a lack of flexibility. The polynomial approach was excluded because of its higher complexity at equal goodness of fit. Over the course of Δw_f vs. α, a sharp and regressive decrease from α = 90° to 0° is noticed, with highly significant differences in the pairwise comparison of medians between 0° and 30°, 30° and 45°, and 60° and 90°. The polynomial approach is identified as a representative model. Comparable to F_ax vs. α, in the course of c vs. α a constant plateau within 30° ≤ α ≤ 60° is observed, as are significantly different medians (and averages of logarithmized data) for α = 0° and 30° and for α ≤ 60° and 90°. This provides additional motivation for the polynomial approach being first choice.

Correlations between model parameters

Now the correlations between the parameters X = (ρ_12, F_max, k_ser, c, Δw_lin, Δw_f) are of interest. The determination of adequate measures of correlation is in general a challenging task. However, in respect of 2pLND as representative distribution model and the definition of Pearson's correlation coefficient for the normal domain, the correlations are analysed for the logarithmized data set and compared with Spearman's parameter-free rank correlation coefficient. Due to the different and partly reverse relationships of the variables X at the extremes of α, the inference is made separately for each α. The outcomes are tested for significant differences, their physical plausibility is proven, and the remaining statistics of the correlations are averaged; see Table 5. As expected, a highly significant correlation is found between ln(F_max) and ln(ρ_12), apart from α = 0°. Furthermore, a positive correlation between ln(F_max), ln(k_ser) and ln(ρ_12) at fixed α is expected, as the screws are anchored in a denser material. For a given α, a positive relationship between ln(F_max) and ln(k_ser) is also confirmed. A highly significant correlation is also found between ln(c) and ln(Δw_f). This is supported by comparable gradients in the softening zone. The positive relationship between ln(Δw_lin) and ln(c) follows the same argument. A significantly negative correlation is observed between ln(k_ser) and both ln(Δw_f) and ln(Δw_lin). Withdrawal tests show a positive correlation between stiffness and the tendency to more brittle failures, in combination with a reduced plastic (non-linear) zone. In principle, the same circumstances apply for ln(F_max) and ln(ρ_12) vs. ln(Δw_lin) and ln(Δw_f). Due to the highly significant positive relationships between ln(F_max) and ln(ρ_12) and ln(k_ser), the negative relationships to ln(Δw_lin) and ln(Δw_f) are less distinctive. These relationships also necessitate negative correlations between ln(c) and ln(ρ_12), ln(F_max) and ln(k_ser). This is because the gradient in the softening zone increases in absolute terms with increasing brittleness and decreasing plasticity (non-linearity).
Concerning the correlation between ln(Δw_lin) and ln(Δw_f), a slightly negative relationship is found, argued by the same circumstance: higher values of Δw_f coincide with lower resistance and stiffness but higher non-linearity of the load-displacement curve, leading overall to a decrease in Δw_lin. Overall, the correlations of Pearson and Spearman are congruent, apart from ln(Δw_lin) vs. ln(Δw_f). Although a positive correlation is expected and confirmed by Spearman, Pearson's correlation is negative. As the Pearson correlation matrix is in addition not positive definite, a prerequisite for its later use in multivariate modelling, r(ln(Δw_lin); ln(Δw_f)) is set to 0.10.

Multivariate model approach

In Sect. 3.1, the main statistics, distribution models and correlations of the load-displacement model parameters of Sect. 2.2, and their relationships to α, were determined. This information is now used for defining the multivariate model approach, which allows a complete stochastic characterisation of a single screw's withdrawal properties. Based on the lognormally distributed variables X = (F_max, k_ser, c, Δw_lin, Δw_f)^t, the transformation Y = ln(X) makes it possible to operate with a multivariate normal distribution (MVND) in the logarithmic domain, with μ_Y and Σ_Y = CoVar[ln(X)] as expectation vector of dimension 1 × 5 (equal to the dimension of X) and covariance matrix of dimension 5 × 5, respectively. The expected values for μ_Y are estimated from the density-corrected data, with X_mean,90 = (10,842; 11,994; 5.25; 0.33; 2.56), X_mean,0 = (7,487; 16,958; 2.32; 0.23; 0.70) and k_90,X = X_mean,90 / X_mean,0 = (1.45; 0.71; 2.27; 1.41; 3.65). The variances are estimated by averaging the statistics of the observed data, with CV[X] = (13, 16, 25, 25, 12)%. The covariances are based on the estimates of Pearson's correlation coefficient in Table 5. Figure 5 visualises the average load-displacement curves for α = 0° and 90°. Whereas the load-displacement curves at α = 0° and 90° in Fig. 5, left, are significantly different, with the curve for α = 0° tending to be stiffer and more brittle, in the relative view in Fig. 5, right, both curves appear more coherent. The distinct softening before F_max,0 is not self-evident, considering that timber normally fails rather brittlely in longitudinal shear. Reasons for the non-linear load-displacement behaviour are (1) the non-linear stress distribution along the screw axis, (2) the inhomogeneous material, and (3) the differences in the shear properties of the LR and LT shear planes. The first two reasons promote the redistribution of stresses along the screw axis after local resistances have been exceeded. The even more pronounced softening before F_max given α = 90° is additionally attributed to the typical non-linear material behaviour of timber before failing in transverse and rolling shear, together with the interaction of both stress fields around the perimeter of the screw. These reasons, together with the significantly different strength and stiffness properties in longitudinal, transverse and rolling shear, provoke the reverse relationships of F_max and k_ser vs. α. The k_90 factor of F_max corresponds approximately to the reciprocal of the k_90 factor of k_ser. Applying this multivariate model approach with the given parameter setting, variates were generated in R (2013) using the eigenvalue decomposition.
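A minimal sketch of this sampling step in Python (the authors used R): correlated lognormal variates are obtained by sampling Y = ln(X) from an MVND whose moments follow from the means and CVs quoted above via Var[ln X] = ln(1 + CV²) and E[ln X] = ln(mean) − Var[ln X]/2. The identity correlation matrix below is a placeholder for the paper's Table 5, which is not reproduced in this text:

```python
import numpy as np

mean = np.array([10842.0, 11994.0, 5.25, 0.33, 2.56])  # X_mean,90
cv = np.array([0.13, 0.16, 0.25, 0.25, 0.12])          # CV[X]
R = np.eye(5)  # placeholder: substitute the estimated correlation matrix (Table 5)

s2 = np.log(1.0 + cv**2)      # Var[ln X]
mu = np.log(mean) - 0.5 * s2  # E[ln X]
cov = R * np.sqrt(np.outer(s2, s2))

rng = np.random.default_rng(0)
Y = rng.multivariate_normal(mu, cov, size=1000, method='eigh')  # eigen-decomposition
X = np.exp(Y)

print(X.mean(axis=0))                  # reproduces the target means up to sampling error
print(X.std(axis=0) / X.mean(axis=0))  # and the target CVs
```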
These variates are used to generate virtual load-displacement curves by discretizing the displacement comparably to the frequency of data recording during testing, with a displacement increment of 0.002 mm, w_ini = 0 and a data frame of w = (0, 10) mm. The curves were modelled until a 40% load-drop from F_max in the softening domain (w > w_f). Load-displacement curves of single screws positioned between different layers, for example in the case of gaps, are generated by considering the parallel system action with uniform load-redistribution after partial failures, i.e. by summing up the simulated load-displacement data of single screws, generated for the specific α and balanced by their shares of contribution. For modelling the influence of gap type and width (see Fig. 1) on the withdrawal properties, the residual lateral areas were calculated from the (residual) circumference U_res; values of U_res, together with details and definitions for both gap types, are summarized in Fig. 6. Table 6 shows the main statistics of density, k_ser and F_max, differentiated with respect to the type and width of gaps as well as the screw diameter. All withdrawal tests in TL, CL and ML were made with α = 0°. The data comprise tests accomplished in the centre of a lamella (no gap, solid timber; ST) and between two lamellas (butt joint; BuJ) with gap widths of w_gap = 0, 2 and 6 mm (BuJ_0, BuJ_2 and BuJ_6, respectively). To improve the statistical power, the data from TL, CL and ML, given a certain type of gap, were combined. Any possible influences on the withdrawal properties caused by differences in density are compensated by applying the density correction according to Eq. (8) and by using the exponents found in the current analysis (see Table 6).

General data analysis

Withdrawal tests accomplished in the intermediate layer (IL) make it possible to analyse the influence on the withdrawal parameters caused by the interaction of α = 0° and 90°. The data comprise tests in bed joints (BeJ) and T-joints (TJ), the second type again with w_gap = 0, 2 and 6 mm (TJ_0, TJ_2 and TJ_6, respectively). Although the mean densities of different CLT layers at equal gap type are well comparable, the mean densities of different gap types (see Table 6) are in some cases quite different. A possible reason is insufficient precision in the pre-drilling, in particular in cases of w_gap = 6 mm, considering drill diameters of 5 and 7 mm for d = 8 and 12 mm, respectively. This may cause some bias in the calculated density in cases where the theoretically calculated loss of material due to pre-drilling is not equivalent to the practical execution. It may also cause some bias in the density corrections of k_ser and F_max applied later. However, this influence is judged to be small.

Model validation

For the validation of the multivariate model approach, developed for the description of the load-displacement behaviour of screws tested in withdrawal using multivariate input parameters, the influence of gap types and widths on the withdrawal parameters k_ser and F_max is analysed, and their distributions are compared between test and simulated data. Beforehand, the expectations are briefly outlined, which are based on general investigations of stochastic-mechanical systems in Brandner (2013b):
• BuJ_0 vs. ST | α = 0°: for k_ser, equivalence of the mean values, k_ser,mean | BuJ_0 ≈ k_ser,0,mean, together with a significant reduction in dispersion, with CV[k_ser | BuJ_0] ≈ CV[k_ser,0] / √2, is expected.
Due to the non-linear load-displacement behaviour before and after F_max, there is a high potential for load-redistribution after partial failures, i.e. after exceeding the capacity of the screw in one layer or lamella. In comparison to F_max,0, thus only a slightly lower F_max,mean | BuJ_0 is expected, together with a significantly reduced CV[F_max | BuJ_0] ≥ CV[F_max,0] / √2. In fact, the averaging of all stiffness values from independently and identically distributed (iid), commonly parallel-acting system components, together with a reduced CV[k_ser], is a general outcome of the stochastic-mechanical model of parallel-acting linear-elastic springs. Hereby, both statistics follow approximately the distribution of mean values, i.e. E[X̄] = E[X] and CV[X̄] = CV[X]/√N, with E[.] as expectation value and N as the number of variables X. The averaging of the stiffness properties in the linear-elastic zone is also valid for all other investigated types of closed gaps and is explicitly implemented in the stochastic-mechanical multivariate model approach.
• BeJ_0 vs. ST | α = 0°: due to the anchorage of the screw at 50% perpendicular to the grain, F_max,mean | BeJ_0 higher than F_max,0,mean is expected. However, because of the reverse relationships of F_max and k_ser vs. α, F_max,mean | BeJ_0 will be closer to F_max,0,mean than to F_max,90,mean. Also, a reduced CV[F_max | BeJ_0] is expected.
• TJ_0 vs. ST | α = 0°: because of the interaction 0°|90°|0°, it is assumed that F_max,mean | TJ_0 ≤ F_max,mean | BeJ_0, with CV[F_max | TJ_0] slightly reduced.
• BuJ_0 vs. BuJ_2 and BuJ_6: an approximately continuous decrease in F_max,mean is expected, proportional to the loss of lateral area of the screw, without influence on CV[F_max].
• TJ_0 vs. TJ_2 and TJ_6: increasing gap width comes along with decreasing shares of α = 0°. This, together with the loss of volume for anchoring, leads to a decrease in k_ser,mean higher than in F_max,mean. This is again motivated by the reverse relationship of both properties with respect to α. However, for each property the decrease will be proportional to the loss of lateral area within α = 0°. Thus, a minor rise of CV[F_max] is also expected.
With these expectations, the validation of the model is now the focus. A summary of the main statistics for screw diameter d = 8 mm is provided in Table 7. A direct comparison of test and simulation data is provided via box-plots in Fig. 7. Regarding the expectations outlined before, analysing and comparing Tables 6 and 7 and Fig. 7 leads to the following observations: overall, a good to very good agreement between expectations and simulations, as well as between the distributions of test and simulation data, is observed. Variations in the test data, in particular from tests conducted in gaps, are higher than in the simulations. This is due to the uncertainties induced by placing screws in gaps, even if these are closed and the screw holes pre-drilled. However, remarkable deviations occur between tests and simulations of TJs. Consultation with the staff performing the tests confirmed that, in particular for screws with diameter d = 8 mm, exact positioning was not possible despite pre-drilling. As the resistance against screwing-in is larger at α = 90°, the test data conform more to BuJs than to TJs, especially for w_gap = 6 mm. Therefore, the box-plots of the simulation data are split in half; the left side corresponds to X / X_mean | TJ_0 and the right side to X | BuJ_>0 / X_mean | TJ_0.
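The parallel system action described in Sect. 3.2 can be illustrated by summing the mean single-screw curves of the two orientations with a 50/50 share of contribution, e.g. for a closed bed joint. A sketch under the same F_asym = 0 core as in the earlier sketch; note that the combined peak stays below the average of the individual peaks because the two curves do not peak at the same displacement:

```python
import numpy as np

def glos_core(w, k_ser, f_max, w_f, c):
    # Same F_asym = 0 core as in the earlier sketch
    k1 = 1.0 / k_ser
    k2 = 1.0 / f_max - c / ((c - 1.0) * k_ser * w_f)
    k3 = 1.0 / ((c - 1.0) * k_ser * w_f**c)
    return w / (k1 + k2 * w + k3 * w**c)

w = np.arange(0.0, 10.0, 0.002)  # displacement grid as in the simulations

f0 = glos_core(w, 16958.0, 7487.0, 0.70, 2.32)    # mean curve, alpha = 0 deg
f90 = glos_core(w, 11994.0, 10842.0, 2.56, 5.25)  # mean curve, alpha = 90 deg

# Closed bed joint BeJ_0: half the anchorage in each orientation, modelled as
# parallel action with uniform load-redistribution (50/50 shares):
f_bej = 0.5 * f0 + 0.5 * f90
print(f0.max(), f90.max(), f_bej.max())
```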
Even then, the large deviations between the distributions in the series d = 12 mm may also be caused by uncertainties in the determination and/or correction of densities. Regarding k_ser, the partial deviation of the results is of particular interest. After all, the stochastic-mechanical multivariate model approach used for the simulations has the averaging of the stiffness values of all interacting load-displacement curves explicitly implemented. This deviation seems to be caused by the uncertainties in measuring the local deformations of screws tested in withdrawal, a circumstance also indicated by CV[k_ser] being larger than CV[F_max]. However, as a second important part of the model, a loss in resistance and stiffness proportional to the decrease in the lateral area of anchorage given w_gap > 0 is assumed. Overall, the good agreement between model and test data in Fig. 7 already reveals the validity of these assumptions as simplifications. A comparison of the mean ratios of the test data from BuJs with the percentages in Fig. 6 again validates the assumptions made.

Conclusion

A stochastic-mechanical multivariate model approach was established, describing the load-displacement behaviour of self-tapping screws by adapting the model developed by Glos (1978). This model facilitates simulating the withdrawal behaviour of axially loaded self-tapping screws in the narrow face of CLT, in dependence of α and of the positioning of the screws with respect to gaps. Additionally, power regression models for the density correction of all withdrawal model parameters are provided. To the authors' knowledge, this is the first report on a probabilistic model for the withdrawal behaviour of axially loaded self-tapping screws in general. A previously unrecognized factor in modelling the load-displacement behaviour of single fasteners, softening, was realized as part of the model. The approach was successfully validated by comparing the model outcome with tests, analysing the influence of gaps and of interacting layers with different α. Unlike previous studies (e.g. Blaß and Uibel 2007), this study identified a much more significant influence of α on the withdrawal capacity, comparable with Ringhofer et al. (2014). Beyond the findings of Blaß and Uibel (2007), the authors report on the first study which explicitly quantifies the influence of gaps and interacting layers of different α on the withdrawal behaviour of self-tapping screws. This model makes it possible to investigate (1) the withdrawal capacity, (2) the initial stiffness, (3) the stiffness dedicated to the maximum load, (4) the deformation at maximum load, and (5) the deformation at 60% of F_max | w ≥ w_f in the softening zone. The model also includes the ability to consider slip and/or delayed stiffening, a characteristic typical of hierarchically structured natural materials, also known as the J-curve (Gordon 1988). The inferred model parameters are limited to CLT made of common-quality Norway spruce with a density of 380-520 kg/m³, and to self-tapping screws from one manufacturer, of one diameter and of one effective length. However, the limitations of the data basis are not automatically limitations of the probabilistic model. For example, Pirnbacher and Schickhofer (2007) report on a comparative study of eight different types of screws, focusing on withdrawal strength. Apart from differences caused by the handling of the screws, no significant differences in withdrawal strength were found.
Quantification of the influences caused by the effective length and the nominal screw diameter, and their relation to the withdrawal parameters, can be done according to models from previous studies (e.g. Blaß et al. 2006; Pirnbacher et al. 2010; Ringhofer et al. 2015), at least for withdrawal strength and stiffness. The present study focuses only on withdrawal failures of single screws. However, compared with block shear and head pull-through failure, the uncertainties in withdrawal are largest. The comparison of test and simulation data reveals the high potential of stochastic-mechanical models, which allow the whole distribution of parameters to be estimated, and not only the average, with minimum effort, time and costs. The inferred parameters are based on homogeneous CLT, i.e. composed of layers of equal strength class. However, as long as the relationships between the model parameters and density are verified for a larger bandwidth, this approach can easily be applied to modelling the withdrawal in heterogeneous CLT, i.e. with layers of different strength classes and/or timber species, insofar as density, currently the only indicating timber property, remains a valid parameter. The comparison of test and simulation data also explicitly outlines inadequacies in the test execution, in particular considering the data of T-joints. Apart from this, the results clearly show the remarkable impact of gaps on the attainable withdrawal properties. Current CLT production aims to minimize gaps and gap widths even within cross layers; nevertheless, the insertion of screws in open gaps cannot be excluded. For practical applications, the following is suggested:
• Application of screws with d ≥ 8 mm; the screw diameter should be significantly larger than the maximum gap width currently allowed in technical approvals for CLT, i.e. w_gap ≤ 6 mm;
• T-joints should be conservatively treated as butt joints; secure positioning of screws in open T-joints, even in pre-drilled holes, is not possible, in particular for smaller screw diameters;
• Screws should be positioned inclined, parallel to the CLT side face, at α = 30° to 60° and, if possible, also perpendicular to the side face at α ≥ 10°; inclined positioning minimizes the influence of gaps on the withdrawal parameters as long as the load-bearing penetration length of the screws in the timber is sufficient (see e.g. Blaß and Uibel 2009).
The presented multivariate model approach is seen as a worthwhile basis for investigations into the withdrawal behaviour of axially loaded groups of screws positioned in the narrow face of CLT and subjected to withdrawal failure, but this is beyond the scope of this contribution. There is great potential for such applications regarding the development of CLT system connectors as an innovative step towards an adequate connection system for the solid timber construction technique with CLT in general. Here, the ability to introduce locally concentrated loads of high magnitude is a prerequisite. Compliance with the regulations on minimum spacing and edge distances is thereby presupposed. There is definitely a need to define regulations for screws positioned in layers of different α and for the influence of gaps. Concerning gaps, it is intended to establish a probabilistic approach for judging their influence in the case of industrially produced CLT featuring randomly distributed gap widths and, with respect to the layup of CLT, randomly positioned fasteners. The aim is to report on this subject elsewhere.
Thiosulfoxide (Sulfane) Sulfur: New Chemistry and New Regulatory Roles in Biology

The understanding of sulfur bonding is undergoing change. Old theories on the hypervalency of sulfur and the nature of the chalcogen-chalcogen bond are now questioned. At the same time, there is a rapidly expanding literature on the effects of sulfur in regulating biological systems. The two fields are inter-related, because the new understanding of the thiosulfoxide bond helps to explain the newfound roles of sulfur in biology. This review examines the nature of thiosulfoxide (sulfane, S⁰) sulfur, the history of its regulatory role, its generation in biological systems, and its functions in cells. The functions include the synthesis of cofactors (molybdenum cofactor, iron-sulfur clusters), sulfuration of tRNA, modulation of enzyme activities, and regulation of the redox environment by several mechanisms (including the enhancement of the reductive capacity of glutathione). A brief review of the analogous form of selenium suggests that the toxicity of selenium may be due to over-reduction caused by the powerful reductive activity of glutathione perselenide.

Introduction: Sulfur Bonding

Some long-held theories on sulfur bonding have been called into question by the use of modern physico-chemical technology and enhanced computing ability. The challenged theories include the theory of hypervalency of sulfur and the nature of the S-S bond in thiosulfoxides. In 1982, Kutney and Turnbull published a paper titled "Compounds Containing the S=S Bond" [1]; however, new physical and computational data suggest that the S=S (double) bond may not exist. The nature of sulfur bonding relevant to life processes is outlined briefly below. Chalcogen atoms (group 16 of the periodic table) have six electrons in the valence shell, providing these atoms with special bonding possibilities. Atoms with an even number of valence electrons have the ability to catenate, or bond together in series. Carbon (group 14), with four valence electrons, catenates in three-dimensional lattices, but sulfur, with six valence electrons, forms chains of atoms bonded by 2-electron dative bonds (Figure 1). Branching can occur, since a sulfur atom in a chain can donate electron pairs to more than one sulfur atom. At high temperatures, very long chains form but, at lower temperatures, the chains cyclize into rings of eight atoms that can pack in several allotropic forms. Sulfur atoms have a high affinity for bonding to other chalcogen atoms, particularly oxygen, as reflected in sulfur's ancient name "brimstone" (burning stone). The true nature of sulfur bonding is only now being revealed. Like oxygen, sulfur can form 4-electron dative bonds (compare Figure 2a, a ketone, and Figure 2b, a thione, below). For a long time, it was thought that sulfur, unlike oxygen, was able to accommodate more than eight Lewis electrons in its valence shell. This property, called hypervalency, is exemplified in Figure 2c, the traditional "textbook" representation of sulfuric acid, in which the S atom has 12 valence electrons. However, the concept of hypervalency is now in question [2,3]. Recent studies of electron density mapping using synchrotron X-ray diffraction have shown that many chalcogen-chalcogen bonds previously considered to be 4-electron dative bonds are, in fact, 2-electron polar dative bonds [4]. The two alternative structures for thiosulfoxide are shown in Figures 2d and 2e.
Ab initio calculations indicate that the thiosulfoxide bond is a polar 2-electron bond as shown in Figure 2e [5] and much weaker than the previously-assumed double bond shown in Figure 2d [6]. Therefore, thiosulfoxide sulfur is relatively reactive and this undoubtedly contributes to the regulatory functions of sulfane sulfur in biological systems as summarized in this review. There are three systems of nomenclature for sulfur compounds based on the roots "sulf" ("sulph" in the UK), "mercapto", and "thio". Table 1 is a compilation of the structures and nomenclature of sulfur and sulfur-oxygen compounds. Some sulfur atoms in the structures are shown in the classical (4-electron) format but other bonds are shown as 2-electron bonds when the chemical and biological evidence supports this representation. Sulfur in Biology Because of the versatility of the sulfur atom and its prevalence in the primordial environment, it is not surprising that sulfur evolved to fill many structural, catalytic, and regulatory roles in biology. Sulfur is life-supporting in the following processes: • Elemental sulfur reduction to H 2 S provides a source of energy in Desulfuromonas and archaea. • H 2 S oxidation to elemental sulfur provides a source of energy in Beggiatoa. • H 2 S or S 0 oxidation to sulfate provides a source of energy in Thiobacillus and archaea. • Sulfate or sulfite reduction to H 2 S provides a source of energy for Desulfovibrio and archaea. • H 2 S splitting during photosynthesis provides a source of hydrogen atoms in purple and green sulfur bacteria. Covalently-bonded sulfur, in a wide range of oxidation states, is a determinant of structure and function in many biological systems. Cysteine and methionine are primary structural elements of proteins and the sulfur of cysteine is an important determinant of the tertiary structure of proteins. The sulfhydryl group of glutathione is a major determinant of redox status in tissues. The sulfhydryl group on proteins is involved in regulating the activity of the proteins both by disulfide bond formation and by persulfuration. It is thought that reversible oxidation of SH groups to the sulfenyl form in regulatory proteins is a signaling mechanism [7] and that sulfuration of transfer RNA is a mechanism for controlling translation [8]. The sulfonyl group, -SO 3 − , provides detergent properties to taurine (2-aminoethane sulfonic acid), the major conjugant for the excretion of cholesterol-derived products in bile. Sulfate occurs as esters of numerous hydroxy compounds: carbohydrates, glycosaminoglycans (e.g., heparin, chondroitin), lipids (such as cholesterol and sulfatides), proteins (hydroxyl groups of serine, tyrosine and threonine), and hormones (thyroxin). Sulfur is a key component in six major cofactors in mammals (iron-sulfur clusters, coenzyme A, lipoic acid, thiamine pyrophosphate, molybdenum cofactor, and biotin) and two additional cofactors in bacteria and archaea (coenzyme M and coenzyme B). The molybdenum cofactor (MoCo) functions in sulfite oxidase, xanthine oxidoreductase, and aldehyde oxidase in humans and in other enzymes in microorganisms and plants [9]. In MoCo, the pterin platform has two sulfur atoms that bind the Mo atom and, in aldehyde oxidase, there is a third sulfane sulfur atom terminally bonded to the Mo atom; all three sulfur atoms originate as S 0 extracted from cysteine by pyridoxal 5'-phosphate (PLP)-containing cysteine desulfurases [9].
In the tungsten-containing enzymes of hyperthermophilic archaea, the Mo is replaced by its congener, W, on the two sulfur atoms of the same pterin platform [10]. Sulfane sulfur, which is sulfur in the thiosulfoxide form (represented as S 0 ), has been found to have remarkable regulatory functions in biological systems. The following review briefly outlines the unveiling of these functions of sulfane sulfur, its unique nature, and its biogeneration. Sulfur as a Regulatory Agent Interest in sulfur as a regulatory agent began more than 40 years ago in studies with immune cell systems cultured in vitro. In 1970, Fanger and colleagues showed that cysteine, glutathione, or sulfite ion at mM concentrations in the presence of 20% fetal calf serum markedly enhanced the response of lymphocytes to transforming agents [11]. In 1972, Click and colleagues reported that 2-mercapto-ethanol (MER) at micromolar concentrations caused a 2-to 3-fold stimulation of antibody production [12] and T cell proliferation [13]. This finding was soon expanded to other immune systems and other sulfur compounds such as α-thioglycerol (TGL) [14]. The next breakthrough was in 1973 when Broome and Jeng reported that thiols or their disulfides permitted in vitro proliferation of murine cancer cell lines previously not culturable in vitro but carried in live mice [15]. In 1975, one of the present authors (JT) confirmed this growth factor effect with several members of a bank of murine cell lines and showed that the sulfur compounds fall into two categories [16]. As shown in Figure 3, three xenobiotic sulfur compounds, MER, TGL, and TEA (cysteamine, 2-mercapto-1-aminoethane, thioethanolamine) stimulate growth under the following conditions: (a) at μM concentrations; (b) only in the oxidized (disulfide) form [17]; and (c) with any serum (or bovine serum albumin) replacing fetal calf serum. Compounds in the second group (cysteine, glutathione, homocysteine, coenzyme A, thioglycolic acid, and dithiothreitol) are active: (a) only in the reduced (thiol) form, (b) at high (mM) concentrations, and (c) only in the presence of fetal calf serum. Sera other than fetal calf serum are ineffective with the second group [18]. Cystine is active at 1 mM in the presence of a pyridoxal catalyst [17]. The conclusion from these findings is that disulfides in the first group generate a growth factor de novo while the compounds in the second group mobilize the growth factor from fetal calf serum. The mechanism common to the first group is the metabolic generation of a carbonyl group adjacent to the disulfide bond resulting in the labilization of one of the sulfur atoms and its release as sulfane sulfur [17] ( Table 2). The catalysts effective in the cell cultures were found to be alcohol dehydrogenase for the disulfides of mercaptoethanol and thioglycerol, diamine oxidase for the disulfide of cysteamine (i.e., cystamine), and pyridoxal plus a metal ion or the enzyme cystathionine γ-lyase (γ-cystathionase; CTH), for cystine. Surprisingly, viscose dialysis tubing is also effective [17]; it is manufactured by treating cellulose with carbon disulfide and it contains residual sulfane sulfur chains unless it is exhaustively boiled in water before use. 
Thiols in the second group at high concentrations (10 mM) were shown to release H 2 S from fetal calf serum according to Equation (1) [18]: Protein-S-SH + 2 R-SH → protein-SH + R-S-S-R + H 2 S (1) At lower concentrations (0.1 mM to 1 mM) these thiols do not liberate the sulfur as H 2 S but incorporate it by exchange and transport it as the persulfide (RSSH; Equation (2)). The chemistry of this reaction is discussed in detail below: Protein-S-SH + R-SH ⇌ R-S-SH + protein-SH (2) The stimulatory effect of sulfite on cell growth as reported by Fanger et al. [11] is explained by the well-known, reversible, and pH-dependent addition of S 0 to sulfite to generate thiosulfate [24]. The sulfite ion (SO 3 2− ) accepts S 0 from protein carriers and acts as a low molecular weight carrier of sulfane sulfur in the form of thiosulfate (S 2 O 3 2− ): S 0 + SO 3 2− ⇌ S 2 O 3 2− (3) The cumulative data indicate that the growth factor is the sulfur atom (sulfane sulfur, S 0 ). The sulfur-dependent murine cancer cells were found to have two genetic defects. These cells are completely lacking in the enzyme methylthioadenosine nucleoside phosphorylase (MTAP) [25] and deficient in CTH [17,26]. In direct comparison, cells containing MTAP are not dependent on the sulfur factor [25]. Subsequent to the report of the absence of MTAP in mouse cells, the enzyme was found to be absent in a large number of human cancers [27,28]. In vivo, these defective cells can survive by obtaining the growth factor from normal cells in the body. Isolated macrophages were found to be effective "nurse" cells for the sulfur-dependent cell lines [18]. The useful application of xenobiotic sulfur compounds (MER, TGL, and TEA) in biological systems has continued to expand. Aside from their absolute requirement in the MTAP- and CTH-defective cells, these precursors of sulfane sulfur have dramatic effects in increasing the viability, health, vigor, and proliferative capacity of many cell types in vitro. Today, MER, TGL, or TEA are routinely added to many in vitro cell systems involving immune cells, hematopoietic cells, reproductive cells, embryonic cells, and stem cells [29][30][31]. These sulfane precursor compounds are added at μM concentrations and, by integration of kinetic parameters, it can be shown that the concentration of sulfane sulfur in the media at any time is in the nanomolar (nM) range [17]. These sulfane sulfur precursors have been shown to have other effects; cystamine has been shown to have potent anti-HIV effects when the virus is grown in lymphocytes [32][33][34]. Cyst(e)amine has beneficial effects in animal models of neurodegenerative diseases (Huntington disease and Parkinson disease) and is currently in clinical trial for treating Huntington disease [35]. Members of a family of phosphorothioates of the type R-NH-CH 2 -CH 2 -S-PO 3 H 2 have been much-studied as protectants against radiation-induced damage and chemotherapy toxicity [36]. These compounds are designated as the WR series (named after the Walter Reed Army Institute of Research where they were first developed), e.g., WR-2721 and WR-151327. These compounds are thought to act by increasing antioxidant activity such as that due to manganese superoxide dismutase [37]. There are several reports showing that MER, given orally long-term to mice, prevents the spontaneous cancers common in mice and dramatically increases longevity [38][39][40][41]. It should be noted that, although the compounds are frequently used in the thiol form, it is the disulfide which predominates in an aerobic environment.
At the time of the discovery of the beneficial effect of sulfane sulfur precursors on cells in culture, S 0 was already known to be involved in several regulatory processes in vitro: activation or inactivation of a large number of enzymes, post-transcriptional modification of transfer RNA, and the biosynthesis of iron-sulfur clusters and molybdenum cofactor. The literature was reviewed in 1989 [42]. In the ensuing years, the role of sulfane sulfur in biosynthetic processes has undergone rapid development and has been reviewed [43,44]. These biosynthetic processes involve not only the synthesis of Fe-S clusters and molybdopterin in mammals but also that of biotin, thiamin, and lipoic acid in microorganisms. Sulfane Sulfur from Garlic Plants of the genus Allium are of interest because of their sulfur compounds [45]. Disrupted tissues of these plants contain several compounds which either contain sulfane sulfur (e.g., allyl disulfides and diallyl trisulfides) or generate S 0 during simple metabolic changes that result in β elimination (e.g., alkyl-cysteine disulfides) [21]. These vegetables or the pure sulfur-containing compounds known to be present in them have been reported to have an extensive array of health-related effects. Reported beneficial effects include prevention of carcinogen-induced cancer [46,47], dementia [48] and diabetes [49]; lowering of blood cholesterol [50]; decreased plasma homocysteine levels; and prevention of atherosclerosis and heart disease [51]. "Aged garlic extract" (AGE) [52] is a product of current interest. This is a commercial product prepared by aging minced garlic in 20% ethanol at room temperature for 18 months and then removing the solids. During the aging process, the native sulfur compounds such as alliin and the odoriferous compound, allicin, are slowly converted to compounds which are not only non-odoriferous but better sources of sulfane sulfur than the compounds in the non-aged extracts. These constituents include cysteine alkyl disulfides, cysteine mercaptoallyl disulfide, diallyl disulfide, and diallyl trisulfide. Clinical trials with AGE or these pure chemicals have yielded promising but frequently conflicting results in relation to health effects (not reviewed here, but see [53]). "Hydrogen Sulfide" Since 1996 there have been many reports describing the effects of "hydrogen sulfide" in various biological systems [54]. The agent has been added to the systems usually as a pH-neutral water solution of NaHS. The solutions were generally used in an aerobic environment and, therefore, contained numerous sulfur species, including H 2 S, HS − , S 2− , S 0 , and HS n S − (with n varying from 1 to 8), as well as side-products from the autoxidation of H 2 S, namely H 2 S(O), HS• (thiyl radical), H 2 O 2 , O 2 •, and OH• [55]. There has been no rigorous identification of the active agent in this mixture. The effects of this mixture in physiological systems have been reported to be inhibitory or stimulatory. The inhibitory effects may be due to poisoning of the cytochromes of the respiratory chain, and the oxygen radicals could have other inhibitory effects. It has been pointed out by several authors that the stimulatory effects may be due to S 0 generated by autoxidation of the sulfide [56][57][58]. This is feasible since S 0 is active at nM to μM concentrations [17] and the H 2 S reagent is added at μM to mM concentrations [54]. Therefore, even a small degree of autoxidation would provide the active factor.
This mechanism is supported by evidence that oxygen is required for the vasoactive effects of sulfide solutions in at least two systems [59,60], and is further supported by the proposed mechanism of action involving insertion of a sulfur atom into sulfhydryl groups [61], a process that can occur only with the S 0 species and not with a sulfide species [62]. The uncertainty over the nature of the active agent in NaHS solutions might be resolved by comparing the biological effects of NaHS with those of pure sulfane sulfur-generating systems such as those listed in Table 2. Other sources of S 0 have been found to be biologically active in the H 2 S test systems. For example, the proposed "therapeutic" compounds such as derivatives of thiophosphate and 1,2-dithiacyclopentene-3-thiones (called dithiolethiones) [63] are, in fact, sources of sulfane sulfur (not H 2 S). The sulfur in thiophosphate is clearly a sulfane sulfur and thiophosphates (e.g., Lawesson's reagent) are used for introducing elemental sulfur atoms during organic syntheses [64]. The dithiolethiones, first developed for the vulcanization of rubber, have been extensively studied as anticarcinogenic agents where they are frequently compared to sulfur compounds obtained from garlic [65]. More recently, sulfane chains present in preparations of Na 2 S 3 and Na 2 S 4 have been reported to be highly active in a test system involving signaling in the brain (320-fold more active than NaHS) [66]. The fact that these sulfane chains are present in the "H 2 S" test systems supports the conclusion that the active agent is sulfane sulfur rather than hydrogen sulfide. It should be noted that the proposed agents (thiophosphates and dithiolethiones) are less appropriate as "therapeutic" agents than are mercaptoethanol disulfide and cystamine, which have already been tested in animals [29][30][31]. Properties of Sulfane Sulfur The combined evidence from three fields of sulfur research (sulfur growth factors, sulfur compounds in garlic, and "hydrogen sulfide") indicates that there is a form of sulfur which has remarkably wide-ranging effects in biological systems. This active form of sulfur is difficult to name and define. It has been called "zero valent sulfur", "sulfane sulfur", and "sulfur-bonded sulfur" but technically it can be defined as "thiosulfoxide sulfur or any sulfur atom which can tautomerize to the thiosulfoxide form". It has six valence electrons and readily accepts two dative electrons from another sulfur atom to complete the Lewis eight electron rule [42]. The thiosulfoxide bond is weak [6] and the sulfur is easily ejected as elemental sulfur, transferred to another sulfur atom, or reduced to H 2 S by thiols. These properties apply to the sulfur in several classes of sulfur compounds: (a) Thiosulfoxides in the oxidation series ranging from thiosulfenic acid to thiosulfate as shown in the fourth row of Table 1. These compounds should be considered as having two kinds of sulfur, the inner sulfur with a variable oxidation number (e.g., +4 for thiosulfate) and the outer (sulfane) sulfur with an oxidation number of zero (an assignment consistent with the overall ion charge: +4 + 0 + 3 × (−2) = −2 for S 2 O 3 2− ).
The activating effect of these unsaturated groups has been documented for the β-ene group in allyl disulfides [1,67] (Equation (4)), the β-keto group created de novo during oxidation of cystamine and mercaptoethanol disulfide as referenced in Table 2 (Equation (5)), and the β-ketimine intermediates in pyridoxal 5'-phosphate (PLP)-catalyzed reactions, such as the desulfuration of cysteine to alanine by the desulfurases discussed below (Equation (6)). (d) α- or β-keto thiols. The sulfur of these compounds behaves as sulfane sulfur although it cannot form a thiosulfoxide. The classical example is mercaptopyruvate [68]. An analogous weakening of the C-S bond is seen when alanine-3-sulfinate is transaminated to sulfinyl pyruvate during the biodegradation of cysteine [69]. For the keto group in the β position, the mechanism may involve keto-enol tautomerization resulting in a C=C group adjacent to the C-S bond [70]. The lability of the thiosulfoxide bond provides a facile mechanism for the reversible transfer of sulfur atoms into or out of protein sulfhydryl groups (Equation (7)) and disulfides (Equation (8)): Protein-SH + S 0 ⇌ protein-S-SH (7) Protein-S-S-protein + S 0 ⇌ protein-S-S-S-protein (8) The Rhodanese Homology Domain Sulfane sulfur does not occur in the free form (as shown schematically in Equations (7) and (8)) but is always carried on another sulfur atom. In biological systems, there is a family of carrier proteins that includes rhodanese (thiosulfate-cyanide sulfur transferase) [71], mercaptopyruvate sulfur transferase [68], CTH [72], and serum albumin [73]. The first two are "dedicated" sulfane sulfur carriers in which the sulfur atom is carried as a persulfide on a cysteine residue in a specific domain called the rhodanese homology domain (RHOD) [74]. This highly conserved domain is present in at least 500 proteins in organisms ranging from archaea to humans, including at least 47 in humans. It is present in several classes of proteins, notably the phosphatases of the Cdc25 family which help to regulate the cell cycle [75]. The enzymes which incorporate S 0 into functional molecules (MoCo, tRNA) contain the RHOD [43,44]. In molybdenum cofactor synthesis, S 0 is converted to an intermediate thiocarboxylate, R-C(S)-O − , before insertion into the cofactor [9]. As stated above, MoCo occurs in the active site of sulfite oxidase [9]. A rare congenital defect in MoCo synthesis in humans results in defective sulfite oxidase and is lethal shortly after birth. Non-RHOD S 0 Binding to Proteins Equations (7) and (8) show how sulfane sulfur can bind non-specifically to proteins. This can lead to persulfide or polysulfide groups on the cysteine residues of proteins (Equation (7)) or to polysulfide links between cysteine residues in proteins (Equation (8)). When CTH catalyzes the degradation of cystine, some S 0 becomes bound to the CTH and a trisulfide structure was proposed [72]. A polysulfide link occurs in Cu-Zn superoxide dismutase [76] and has been identified in other proteins [77], particularly in antibodies [78]. In a recent paper, Ida et al. showed that polysulfide groups on thiols and proteins can be stabilized by bromobimane derivatization and that the derivatives can be separated and identified by GC-MS, a technique which they called "polysulfidomics" [58]. Polysulfide groups were found on a surprisingly diverse set of proteins. Polysulfides are frequently found in proteins produced by recombinant technology and this may explain (in part) the unexpected inactivity or altered reactivity of many enzymes produced by this technology.
Caution must be used in interpreting these findings in proteins prepared by dialysis because the sulfane sulfur can be inadvertently introduced from the viscose dialysis tubing. It is likely that this type of sulfur binding occurs in vitro in enzymes reported to be affected by S 0 but which are not known to contain the active RHOD [42]. This type of bonding may occur also with the cyclin-dependent protein kinase p34cdc2, which is inhibited by the S 0 source diallyl disulfide [79], and with the protein tyrosine phosphatase PTP1B, which is inactivated by solutions of NaHS [80]. Sulfane Sulfur Generation Following is a list of biochemical systems that can generate sulfane sulfur. In general, the systems involve the generation of a group which labilizes a C-S bond. The activating groups (C=O, C=N, or C=C) are known to delocalize electrons in other systems such as in aldol condensation, PLP-catalyzed reactions, and allylic rearrangement, respectively. Cysteine Deamination (Generation of a C=O Group α to a C-S Bond) Deamination of cysteine to β-mercaptopyruvate (MP) can occur by transamination [68] or by oxidative deamination catalyzed by L-amino acid oxidase [81]. MP was the first described example of a biological compound containing a carbonyl-activated sulfur atom [68]. In vivo, there is a specific RHOD-containing carrier protein which accepts the S 0 from mercaptopyruvate, resulting in the formation of pyruvate. When the deamination is carried out in vitro with supraphysiological concentrations of cysteine, the residual cysteine has considerable reducing capacity and much of the sulfur is reduced and released from MP as H 2 S by the excess cysteine [81]. The oxidation of the compounds shown in Table 2 provides additional examples in which a carbonyl group is created adjacent to a C-S bond, thereby labilizing it. Homocysteine Deamination (Generation of a C=O or C=N Group β to a C-S Bond) L-Homocysteine is a substrate for transamination by glutamine transaminase K [82] and oxidative deamination [81,82]. With bacterial or snake venom L-amino acid oxidase, the keto acid is formed along with some H 2 S. The release of the sulfur was attributed to the labilizing effect of the keto or imino group (possibly involving keto-enol tautomerization as studied by Nicolet [70]). The reducing effect of the excess thiol substrate would cause release of some sulfur as H 2 S. In addition, D-homocysteine is a substrate of mammalian D-amino acid oxidase [82 and references therein]. Given the biomedical importance of homocyst(e)ine in health and disease, the possibility that homocyst(e)ine may be a source of sulfane sulfur by mechanisms similar to those outlined above needs to be further investigated. Cysteine Desulfurases (C=N in the α Position) Cysteine desulfurases remove the sulfur from cysteine, thereby generating alanine. These are PLP-containing enzymes in which the formation of the ketimine group adjacent to the C-S bond is sufficient to release the sulfur atom without net removal of the amino group [83]. These desulfurases provide the sulfur for the synthesis of iron sulfur clusters and MoCo and for the modification of transfer RNA in all species thus far investigated, including humans [84], and for the synthesis of lipoic acid, biotin, and thiamin in bacteria [85]. Cysteine S-conjugate Lyases (C-S Lyases) (β Elimination) Many PLP-containing enzymes catalyze β-elimination reactions with cysteine S-conjugates, generating ammonia, pyruvate, and a sulfur-containing fragment [21,22,86].
In many cases, the β-elimination reaction is biologically important (e.g., the reaction catalyzed by cystathionine β-lyase). However, in other cases (particularly when the amino acid substrate contains a good leaving group in the β position), the PLP-enzyme is "coerced" into catalyzing a non-physiological β-elimination reaction. For reviews see [21,86]. When the substrate is a sulfide, the sulfur product is a thiol (Equation (9)) and when the substrate is a disulfide, the product is a persulfide (Equation (10)): cy-S-R + H 2 O → NH 3 + pyruvate + R-SH (9) cy-S-S-R + H 2 O → NH 3 + pyruvate + R-S-SH (10) Cysteine S-conjugate β-lyase activity is associated with a surprisingly diverse set of PLP-containing enzymes (at least nineteen) including kynureninase, several aminotransferases and CTH in mammals [86], and several amino acid decarboxylases in a variety of microorganisms [87]. The R group depicted in Equations (9) and (10) may be a member of a large spectrum of groups, e.g., alkyl, halogenated alkyl, halogenated alkene, halogenated alkyne, aryl, halogenated aryl, benzothiazole, cysteine (reviewed in [21,22,86]). Reactions shown in Equation (10) lead to sulfane sulfur directly in the form of a persulfide whereas reactions shown in Equation (9) do so indirectly. Thus, in Equation (9), when R is a small alkyl group, an alkyl thiol is released and, in vivo, this thiol enters disulfide exchange with cystine giving rise to the mixed disulfide cy-S-S-R, which then enters a new cycle of the C-S lyase system according to Equation (10). Methane thiol arising from the catabolism of methionine [88] and S-alkyl cysteines present in plants of the Brassica family could theoretically be metabolized to a compound (methane thiol persulfide; methyldisulfane) that contains sulfane sulfur by this mechanism (see below). Cystathionine γ-Lyase (CTH) CTH is a special example of a C-S lyase and the prototype of these enzymes. Since it is widely cited as a major source of S 0 and H 2 S [54,66], it merits special comment. With its nominal substrate, L-cystathionine, the enzyme catalyzes a γ-elimination reaction yielding α-ketobutyrate (Equation (11)). Under in vitro test conditions, CTH has the ability to catalyze β-elimination reactions with several disulfides: cystine (Equation (12)) [19], various alkyl cysteine disulfides (Equation (10)) [89], and cysteine-3-mercaptolactate disulfide (cy-S-S-lactate) (Equation (13)) [90], in each case yielding a persulfide. Cy-S-S-lactate occurs in the blood in the congenital defect in mercaptopyruvate sulfur transferase and is associated with mental retardation and other defects [91]: cy-S-hcy + H 2 O → NH 3 + α-ketobutyrate + cysteine (11) cy-S-S-cy + H 2 O → NH 3 + pyruvate + cy-S-SH (12) cy-S-S-lactate + H 2 O → NH 3 + pyruvate + lactate-S-SH (13) Because of the low degree of discrimination of CTH for substrates, it is likely that the mammalian enzyme can also degrade cysteine-homocysteine disulfide (cy-S-S-hcy). Two bacterial homologues are known to use this mixed disulfide as a substrate [92,93]. The mixed disulfide, cy-S-S-hcy, is present in normal human plasma at a concentration of ~3 μM but may reach a concentration of 30 μM in the plasma of patients with hyperhomocysteinemia [94][95][96]. With the mammalian CTH, the reaction could be either a β-elimination reaction (Equation (14)) or a γ-elimination reaction (Equation (15)): cy-S-S-hcy + H 2 O → NH 3 + pyruvate + hcy-S-SH (14) cy-S-S-hcy + H 2 O → NH 3 + α-ketobutyrate + cy-S-SH (15) In both cases the eliminated fragment is a persulfide.
The generation of excessive (toxic) amounts of S 0 from cy-S-S-hcy may account for some of the pathology seen in hyperhomocysteinemia and should be further investigated. There is some evidence that CTH activity is regulated so as to maintain a constant level of S 0 availability. In strains of the fungus Aspergillus nidulans defective in enzymes of cysteine synthesis and having low cysteine availability, the activities of rhodanese and CTH are increased, interpreted as a homeostatic mechanism for maintaining the S 0 level [97]. Allyl Disulfides (C=C Group in the α Position) Allyl disulfides undergo spontaneous rearrangement [1,67]. If the reaction is carried out in the presence of triphenylphosphine, one sulfur atom is removed, proving that the mechanism involves a thiosulfoxide intermediate. (For details of this complex rearrangement, see [1,67]). Because of this property, the allyl sulfur compounds which occur in garlic (e.g., diallyl disulfide) contain sulfur which is reactive as sulfane sulfur. The Polyamine Pathway The evidence from MTAP-deficient mouse cells indicates that this pathway is important for S 0 generation, at least in mice. In this pathway, the sulfur of methionine is first converted to methyl mercaptan via the following sequence: methionine → S-adenosylmethionine → decarboxy-S-adenosylmethionine → 5'-methylthioadenosine → 5-methylthioribose-1-phosphate → 2-oxo-4-methylthiobutyrate →→→ 3-methylthio-propionyl coenzyme A → methyl mercaptan. (For the detailed pathway, the enzymes involved, and numerous references see [98]). The methyl mercaptan then forms a mixed disulfide with cysteine by disulfide exchange with cystine, and the mixed disulfide is a substrate for the C-S lyases according to the reaction shown in Equation (10). In humans, this pathway may be important in the embryo, where polyamine synthesis is rapid [99]. The C-S lyase in the embryo is not CTH, since that enzyme is absent in the embryo [100,101], but there seems to be a consensus that CTH is an important source of S 0 after the first year of life, when this enzyme is highly expressed in human liver. The "Antioxidant" Properties Attributed to Sulfur Compounds Numerous publications attribute "antioxidant" properties to sulfur compounds even when the compounds have no reducing or other radical-quenching groups [102]. In most cases, there is no obvious antioxidant mechanism. The intended meaning seems to be that the cells or tissues are more healthy and vigorous in the presence of the sulfur compounds and, in some cases, there is an increase in reductants (such as glutathione) chemically unrelated to the added sulfur compound. One such compound is S-allylcysteine, for which various mechanisms have been proposed [102]. Other examples are the xenobiotic disulfides of MER, TGL, and TEA which, as disulfides, have no reducing ability but have potent effects in promoting the replication and health of many cell types. Indeed, they are ineffective in cell cultures when in the reduced (thiol) form [17]. However, the "antioxidant" properties of these sulfur compounds are explainable by their ability to give rise to sulfane sulfur, which then exerts antioxidant effects through the following mechanisms: Superoxide Dismutase Requires S 0 Copper-zinc superoxide dismutase (CuZn SOD) has a sulfane sulfur bridge between the Cys111 residues of the two units of the dimeric form; the number of sulfur atoms in the sulfane sulfur bridge varies depending on the method of purification, with numbers as high as 7 [76].
The stability of the enzyme is enhanced by the presence of these sulfur atoms [76] and CuZn SOD activity in vitro is increased in the presence of aerobic (and, hence, S 0 -containing) solutions of NaHS [103]. Therefore, sulfuration of CuZn SOD may contribute to the anti-oxidant effects of sulfane sulfur precursors. Since H 2 O 2 is the product of SOD, the increased production of H 2 O 2 in cancer cells treated with garlic-derived sulfur compounds [104] may be attributable to the increased activity of this enzyme as a result of addition of sulfane sulfur to the enzyme. Thus, Iciek et al. studied the effects of various sulfur-containing compounds derived from garlic on the production of H 2 O 2 in HepG2 cells. Diallyl trisulfide, an S 0 -containing compound that occurs in garlic, was found to be especially effective in stimulating H 2 O 2 production [104]. The Sulfur in the Iron-sulfur Clusters Originates as S 0 Iron sulfur clusters have the ability to process electrons near the lower limit of the physiological redox range. The clusters contain variable numbers of iron and sulfur atoms with each variation adapted to specific functions [105]. Although the sulfur in these clusters is in the sulfide form, it originates as sulfane sulfur (persulfide) which undergoes reduction during the synthesis of the clusters [42,105]. The functions of iron sulfur clusters in bacteria are extremely diverse but in higher animals the functions are mainly in the electron transport chain of oxidative phosphorylation (complexes I, II, and III), in steroid synthesis (adrenodoxin), and in the generation of the deoxyadenosyl radical (ado•) by an enzyme called MoaA. The ado• radical is formed when one electron is transferred from the iron of an iron sulfur cluster to S-adenosylmethionine, resulting in the release of methionine: S-adenosylmethionine + e − → methionine + ado• (16) This radical has numerous functions in bacteria and archaea [106] but in mammals its main function is in the biosynthesis of molybdenum cofactor (MoCo) from guanosine triphosphate [107]. As stated above, MoCo functions in disposing of the end-product of sulfur metabolism, sulfite ion, as well as purine catabolites (xanthine) in animals [9]. Therefore, through iron sulfur clusters, sulfane sulfur contributes indirectly to redox regulation as well as the disposal of the end product of sulfur metabolism. The Reducing Capacity Is Increased in S 0 -stimulated Cells As stated above, sulfane sulfur greatly enhances the vigor and health of cells. This increased vigor could indirectly increase the cellular content of oxygen-defensive factors such as glutathione, catalase, and superoxide dismutase. Numerous reports have described an increase in cellular glutathione in the presence of sulfur compounds related to sulfane sulfur [29][30][31][45][54]. The presence of air-exposed solutions of NaHS has been shown to increase the expression of Mn-SOD in ischemia-stressed cardiomyocytes as well as the in vitro activity of CuZn SOD [103]. Glutathione Persulfide (G-S-SH) Is a Powerful Reductant In 1971, Massey et al. reported the remarkable finding that S 0 impurities in commercial GSSG samples catalyze the rapid reduction of cytochrome c by GSH [108]. GSH alone or GSH in the presence of pure GSSG (freed of S 0 ) did not reduce cytochrome c. The impurity in GSSG was identified as the trisulfide, GSSSG, and it could be replaced by cystine trisulfide, cy-S-S-S-cy, or elemental sulfur.
The sulfane sulfur was catalytic in this process; only the GSH was consumed in the reduction of cytochrome c. The authors concluded that the S 0 was introduced into this system by GSSSG impurities in the GSSG and that the active agent in cytochrome c reduction was the persulfide GSSH, i.e., that the persulfide is a much more effective reducing agent than is GSH. Prütz described the application of this reduction system to resazurin, a phenoxazine dye which turns red and highly fluorescent when reduced and which is used to indicate viability of cells [109]. He showed that sources of S 0 increased the rate of reduction of resazurin by GSH; irradiated cystamine increased the rate by 30-fold and tetrathionate increased the rate by 60-fold (tetrathionate gives rise to S 0 by partial reduction to thiosulfate by GSH). The remarkable increase in reducing capacity of GSSH relative to GSH is not fully explored, but it appears likely that the persulfide tautomerizes to the thiosulfoxide form and that the thiosulfoxide donates electrons with great facility: 2 GSSH → 2 GS(S)H → 2 e − + 2 H + + GSS-SSG (17) These findings were unappreciated for many years. However, a recent publication [58] seems to have re-discovered the effect first reported by Massey et al. (without citing the earlier work). In this study GSH (supported by NADPH and glutathione reductase) destroyed ~5% of added H 2 O 2 in 30 min whereas the same system containing persulfide destroyed 100% of the H 2 O 2 in 30 min. The polysulfidomic analysis revealed that about 10% of glutathione in tissue occurs in the GSSH form [58]. In summary, the effects of many sulfur compounds which have been interpreted as due to anti-oxidant properties are probably indirect and mediated through the sulfane sulfur generated from these compounds. Sulfur Compounds and Elemental Sulfur in Plant Defense There is growing evidence that the ability of plants to defend against virus and fungus infections is related to the availability of sulfate, from which plants make all of their sulfur compounds. This has led to the consideration of including sulfate in agricultural fertilizer. Virus- or fungus-infected plants have been shown to have increased levels of thiol compounds (cysteine, GSH), thiocyanates (called glucosinolates), and thiazole compounds (called phytoalexins), all of which can have anti-microbial properties, and to excrete H 2 S gas. There is a coincident synthesis of cysteine-rich proteins called defensins, the function of which is not known. These are a family of small peptides each having 8 cysteine residues in a total of 18 to 45 amino acids [110]. A remarkable finding is the accumulation of elemental sulfur (S 8 ) in xylem tissue of certain plants infected with appropriate pathogenic fungi. This has been demonstrated in tomato, cocoa, cotton, tobacco, and French bean challenged with appropriate fungi but did not occur with strawberry or maize, at least with the fungi tested. Sulfur accumulation was faster and greater in genotypes recognized as "resistant" to the fungus than in genotypes known to be "susceptible". Sulfur was not detected in un-infected control plants. In in vitro testing, the fungi in question showed growth inhibition by elemental sulfur. These findings suggest that there has been a remarkable natural selection for and synthesis of an effective anti-fungal agent by certain plants. This reflects the centuries-old tradition of man in using powdered sulfur as an anti-fungal agent in agronomy.
The excretion of H 2 S in infected plants (mentioned above) may be a result of the reduction of some of the elemental sulfur. This emerging subject has been reviewed [111,112]. Is There a Selenium Analog of S 0 ? There is a rapidly-growing literature on the beneficial effects of selenium compounds (selenite, selenate, selenocysteine, selenomethionine) on heart disease, cancer prevention, immunity, diabetes, and dementia [113]. Therefore, it is logical to ask whether there is a form of selenium analogous to S 0 . Selenium was, for a long time, the neglected congener of sulfur. However, its unusual biochemistry has been brilliantly advanced in recent years in the laboratory of Thressa Stadtman [114]. In mammals, selenocysteine (cysteine in which the S is replaced by Se) is a constituent amino acid in several enzymes: glutathione peroxidase, thioredoxin reductase, iodothyronine deiodinase, and methionine sulfoxide reductase. Selenium is activated as selenophosphate for incorporation into selenocysteine, which has its own unique tRNA (tRNA Sec ) and codon for incorporation into proteins. The unique codon is UGA, which is normally a stop codon but, as a result of the special environment within the message, instead binds selenocysteine-tRNA Sec . The role of covalently-linked selenium in selenocysteine-containing enzymes is clear but there is much less information on Se 0 (the analog of S 0 ). Se 0 is referred to as "perseleno selenium" but that name does not represent all of the possible structures, which include R-Se-SeH, R-S-SeH, and R-Se-SH. The "triselenide" of glutathione is readily formed in vitro from selenite (selenium dioxide in water) by the reaction shown in Equation (18) [115]: SeO 3 2− + 4 GSH + 2 H + → GS-Se-SG + GSSG + 3 H 2 O (18) Stadtman et al. showed that the selenium in that derivative can be bound and carried by rhodanese in vitro [116]. There is a protein, selenoprotein P, which has 10 selenocysteine residues and is thought to act as a transporter of the amino acid, selenocysteine, for example from the liver to the brain [117]. The selenium field is seriously overshadowed by the potential toxicity of selenium. Its history in nutrition began 80 years ago when it was noted that livestock were poisoned after ingesting selenium-rich plants in the Western United States (reviewed in [118]). There have been reports of clusters of selenium poisoning in humans resulting from dietary supplements containing excessive amounts of selenium compounds, as described in reports from the CDC in 2010 [118] and from others [119]. Selenium poisoning was epidemic in a district of China that has selenium-rich soil [120], and there was an incident of the death of 20 polo horses after injection with a stimulant containing high amounts of selenium (see [121] for precise details of lethal doses of selenium). The exact mechanism of selenium toxicity has not been determined, giving rise to further uncertainty about its clinical use. For example, the possible role of selenium in cancer prevention is tempered by the possibility that it could damage DNA and cause cancer [122]. Selenium supplements have been shown to decrease dementia symptoms in a mouse model of Alzheimer's disease [123], suggesting clinical potential. However, there is the contravening finding that selenoprotein P is found in abundance in the plaque of Alzheimer's disease brains [124]. This raises the disturbing possibility that this protein may contribute to the disease process and, indeed, memory loss is a symptom of selenium poisoning in humans.
Does perseleno selenium have biological roles analogous to S 0 ? It appears that it can be generated in vivo, that it may be carried on sulfurtransferases, and that it has some similar functions such as the selenation of certain tRNAs. However, there is another (and probably more important) similarity between S 0 and Se 0 which may explain the observed effects of selenium and may even explain its toxicity. As outlined above, the persulfide of glutathione, GSSH, has much greater reducing capacity than does GSH in the systems tested [108,109]. In 1993, Levander et al. showed that selenite could replace S 0 in producing this effect in the cytochrome c system [125] and in 1994 Prütz tested selenite in the resazurin system [109]. The striking result was that GSH reduced resazurin 50 times faster with selenite as a source of Se than it did with tetrathionate as a source of S 0 . The rate with selenite was ~3000 times faster than the rate with GSH alone. The ratio of selenite to GSH was 1:100. It was concluded that the active reductant is the perselenide of GSH, GSSeH, formed by two reactions: the selenite is first reduced to GS-Se-SG by GSH according to Equation (18), and the GS-Se-SG then gives rise to GS-SeH through an exchange reaction with another GSH according to Equation (19): GS-Se-SG + GSH ⇌ GS-SeH + GSSG (19) The effect of selenium in facilitating reduction reactions is not restricted to the systems described above; GSH can be replaced by other thiols (e.g., cysteine, mercaptoethanol) and the effect occurs in other redox systems. Thus, Rhead and Schrauzer showed that the reduction of methylene blue by mercaptoethanol is increased 20-fold by the presence of trace amounts of selenite [126]. However, the effect in vivo is likely to apply mainly to GSH because of its high concentration in cells (1 to 10 mM) and the role of GSH in determining the redox status in cells. This remarkable reducing property of GSSeH needs to be explored in more detail to determine whether it may explain some or all of the observed beneficial effects of selenium compounds in biological systems. It may even account for the toxicity of selenium by creating an over-reducing redox environment. Conclusions Rapidly accumulating data indicate that sulfane sulfur has important functions in cells. The broad diversity of effects suggests that its functions are general and not specific to any tissue or any process. Moreover, it should not be called a "signaling agent" since there is no evidence that it acts in a controlled rise and fall pattern (as with neurotransmitters or hormones). Rather it appears to be an essential factor that must be available at low and constant concentration. Its overall effect is to keep all cells in an optimum state of health with regard to viability, vigor, longevity, and proliferative capacity. There are several mechanisms by which S 0 could have this effect in mammals. These include maintenance of sulfur-containing cofactors (MoCo, Fe-S clusters), the control of protein synthesis via modification of tRNA, the regulation of the activities of enzymes, and the maintenance of the reducing capacity of cells.
10,550.4
2014-08-01T00:00:00.000
[ "Chemistry", "Biology" ]
Theoretical Study on Adsorption Behavior of SF6 Decomposition Components on Mg-MOF-74 SF6 gas is an arc extinguishing medium that is widely used in gas insulated switchgear (GIS). When insulation failure occurs in GIS, it leads to the decomposition of SF6 in partial discharge (PD) and other environments. The detection of the main decomposition components of SF6 is an effective method to diagnose the type and degree of discharge fault. In this paper, Mg-MOF-74 is proposed as a gas sensing nanomaterial for detecting the main decomposition components of SF6. The adsorption of SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 on Mg-MOF-74 was calculated with the Gaussian 16 simulation software based on density functional theory. The analysis includes parameters of the adsorption process such as binding energy, charge transfer, and adsorption distance, as well as the changes in bond length, bond angle, density of states, and frontier orbitals of the gas molecules. The results show that Mg-MOF-74 adsorbs the seven gases to different degrees, and chemical adsorption leads to changes in the conductivity of the system; therefore, it can be used as a gas sensing material for the preparation of SF6 decomposition component gas sensors. Introduction Sulfur hexafluoride gas is an arc extinguishing medium widely used in gas insulated switchgear (GIS). However, insulation defects inside GIS can lead to partial discharge (PD) and other faults. SF 6 decomposes into extremely unstable low-fluorine sulfides (SF n , n = 1~5) in overheated environments during long-term faults [1][2][3][4][5]. Although SF n can collide with F atoms in the environment to recover to SF 6 molecules, in the presence of micro-amounts of O 2 and H 2 O, SF n reacts further to form SO 2 , SOF 2 , SO 2 F 2 , H 2 S and other major products [6][7][8]; if the fault occurs on the basin insulator, SF n can react with electrode materials, insulation materials, etc., to generate the characteristic gases CF 4 , CS 2 , etc. [9,10]; therefore, the types of the above decomposition products are closely related to the fault types. Detecting the main decomposition components of SF 6 in GIS equipment by the gas sensor method is an effective way to diagnose discharge faults and their types [11][12][13][14], and helps guarantee the safe operation of electrical insulation equipment. Calculation Parameter Setting and Model Construction The modeling involved in this paper is completed in GaussView, and the structural optimization and single point calculations are completed in the Gaussian 16 software. In the application of Gaussian series quantum chemistry simulation software, Sciortino, G. et al. found that the PBE0 functional and the def2-tzvp basis set have extremely high accuracy in the calculation of Ni(II) complexes [58]. The PBE0 functional together with def2-tzvp is the best-performing method and is excellent in DFT-based studies of platinum-catalyzed chemical reaction mechanisms [59]. Therefore, when dealing with the exchange-correlation term of the electrons, we also use the PBE0 hybrid functional, which has higher calculation accuracy than LDA, GGA and meta-GGA.
Because the sensitivity of geometric structure optimization to the basis set is much lower than that of single point energy calculation, and the time consumption of these tasks is much higher than that of single point calculation, the def2-svp (2-zeta) basis set with appropriate accuracy is selected for structural optimization, and the def2-tzvp (3-zeta) basis set with higher accuracy is used in single point energy calculation; the GD3BJ algorithm is used to correct the Van der Waals effect, and the charge is set to 0 and the spin multiplicity to 1. By reading the crystallographic information (.cif) file, each periodic structural unit of Mg-MOF-74 contains 638 atoms, 704 chemical bonds and 18 polyhedra. Due to the application of the high-precision functional, the computational power cannot meet the optimization of the overall periodic structure and the single point calculation of adsorption. It can be seen from Figure 2 that the smallest repeating unit in Mg-MOF-74 is a cluster containing four Mg atoms, three benzene rings, three hydroxyl groups, three carboxyl groups and one coordinated water (inside the white dotted box). Therefore, this paper uses this cluster (without bound water) as the adsorption substrate to simulate the adsorption characteristics of seven gas molecules on this segment, and then infers the macroscopic adsorption characteristics of the material on gases.
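As a concrete illustration of these settings, the following Python snippet assembles a Gaussian 16 input file with the route options described above (PBE0 is invoked with the PBE1PBE keyword, def2-SVP for optimization, GD3BJ dispersion, charge 0, spin multiplicity 1). It is a minimal sketch: the helper function and the placeholder H2S geometry are illustrative, not taken from the paper.

```python
# Minimal sketch (not from the paper): assemble a Gaussian 16 input file with
# the settings described above. PBE0 is requested with the PBE1PBE keyword;
# def2-SVP is used for optimization and GD3BJ dispersion is switched on via
# EmpiricalDispersion; charge 0 and spin multiplicity 1 as stated in the text.
route_opt = "#p PBE1PBE/def2SVP opt EmpiricalDispersion=GD3BJ"
route_sp = "#p PBE1PBE/def2TZVP EmpiricalDispersion=GD3BJ"  # single point energy


def make_gjf(route, title, atoms, charge=0, multiplicity=1):
    """Build the text of a .gjf input; `atoms` is a list of (element, x, y, z)."""
    lines = [route, "", title, "", f"{charge} {multiplicity}"]
    lines += [f"{el:2s} {x:12.6f} {y:12.6f} {z:12.6f}" for el, x, y, z in atoms]
    lines.append("")  # Gaussian expects a trailing blank line
    return "\n".join(lines)


# Placeholder H2S geometry (illustrative only, not the Mg-MOF-74 cluster)
h2s = [("S", 0.0, 0.0, 0.103), ("H", 0.0, 0.962, -0.824), ("H", 0.0, -0.962, -0.824)]
print(make_gjf(route_opt, "H2S optimization (illustrative)", h2s))
```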
The optimized Mg-MOF-74 cluster and gas molecular models are shown in Figure 3. In order to represent the total energy change of gas molecules adsorbed on Mg-MOF-74, the binding energy during adsorption is defined as: E b = E MOF-gas − (E MOF + E gas ) + E BSSE (1) In Equation (1): E MOF-gas is the total energy of the system after Mg-MOF-74 adsorbs gas molecules, E MOF is the total energy of the Mg-MOF-74 cluster before adsorption, and E gas is the total energy of the gas molecule before adsorption. The basis set superposition error (BSSE) is corrected using the counterpoise method proposed by Boys and Bernardi [60], and E BSSE is the correction value. The charge transfer amount is the number of charges transferred, obtained by analyzing the charge population through the Hirshfeld charge model: ΔQ = Q 1 − Q 2 (2)
In Equation (2): ΔQ is the charge transfer amount of the system, Q 1 is the charge of the gas molecule after adsorption, and Q 2 is the charge of the gas molecule before adsorption. The adsorption distance is defined as the distance between the gas molecule and the adsorption site of Mg-MOF-74. The Van der Waals radius is 1/2 of the distance between two adjacent nuclei when atoms interact with each other through the Van der Waals force. The covalent radius is 1/2 of the nucleus spacing when the atoms of the same element form diatomic molecules. The change of charge density is analyzed by the distribution of the yellow regions (atoms with electron-losing character) and the blue regions (atoms with electron-gaining character) in the differential charge density diagram. In this paper, the discrete orbital occupation diagram is broadened by a Gaussian function to obtain the density of states (DOS) curve, and the chemical adsorption of gases on Mg-MOF-74 is further analyzed by the total density of states, the gas density of states and the local density of states. Results The adsorption calculation in this paper makes the SF 6 , CF 4 , CS 2 , H 2 S, SO 2 , SO 2 F 2 and SOF 2 gas molecules approach the unsaturated sites on the Mg-MOF-74 material vertically; after reaching the most stable state, the adsorption parameters (binding energy, charge transfer amount, adsorption distance) are extracted, the changes of bond length and bond angle of the gas molecules after adsorption are measured, and the adsorption capacity of Mg-MOF-74 for the seven gases is comprehensively judged by analyzing the change of orbital occupancy.
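The bookkeeping behind Equations (1) and (2) is simple enough to sketch in a few lines of Python. The helper names and the numerical inputs below are made-up placeholders, not results of this study; they only show how a BSSE-corrected binding energy and a Hirshfeld charge transfer would be assembled from parsed Gaussian outputs.

```python
HARTREE_TO_EV = 27.211386  # conversion factor

def binding_energy_ev(e_mof_gas, e_mof, e_gas, e_bsse):
    """Equation (1): BSSE-corrected binding energy; inputs in hartree."""
    return (e_mof_gas - (e_mof + e_gas) + e_bsse) * HARTREE_TO_EV

def charge_transfer(q_gas_after, q_gas_before=0.0):
    """Equation (2): Hirshfeld charge transfer of the gas molecule, in e."""
    return q_gas_after - q_gas_before

# Made-up numbers for illustration only (not results from Table 1)
eb = binding_energy_ev(e_mof_gas=-2451.3021, e_mof=-2052.1010,
                       e_gas=-399.1855, e_bsse=0.0012)
dq = charge_transfer(q_gas_after=+0.032)
print(f"E_b = {eb:.3f} eV, ΔQ = {dq:+.3f} e")  # E_b < 0 indicates exothermic adsorption
```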
Results

In the adsorption calculations, the SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 gas molecules approach the unsaturated sites of the Mg-MOF-74 material vertically. After each system reaches its most stable state, the adsorption parameters (binding energy, charge transfer and adsorption distance) are extracted, the changes in bond length and bond angle of the gas molecules after adsorption are measured, and the adsorption capacity of Mg-MOF-74 for the seven gases is judged comprehensively from the changes in orbital occupancy.

1. Adsorption structures: In the optimized configurations it can be seen that the F atoms of CF4 and SF6 interact with the Mg atoms of the adsorption substrate (Figure 4a).

2. Parameters of adsorption behavior: The adsorption energies, charge transfers and adsorption distances for the seven gas molecules SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 on Mg-MOF-74 are listed in Table 1. According to Formula (1), the adsorption energy of Mg-MOF-74 for every gas molecule is less than 0, indicating that each adsorption is exothermic and the system reaches a lower-energy stable state; the adsorption capacities rank as H2S > SO2 > SOF2 > SO2F2 > CS2 > SF6 > CF4. According to Formula (2), the gas molecules lose electrons during the adsorption process while the substrate material Mg-MOF-74 gains electrons. For each gas, the adsorption distance to the substrate is smaller than the sum of the Van der Waals radii but larger than the sum of the covalent radii (Supplementary Material Table S1). From these adsorption distances it can be inferred that the bonds formed between the Mg atoms of the Mg-MOF-74 material and the seven gases correspond to a weaker chemical interaction.

3. Differential charge density: Figure 5a-g shows the charge density differences for the adsorption processes; the yellow regions indicate the electron-losing property and the blue regions the electron-gaining property. The charge distribution near the F atoms of SF6 and CF4 is relatively uniform, and the charge distribution inside these molecules does not change significantly. The charge distribution near the C atom of CS2 is uniform, and the Mg atom bonded to the S atom of CS2 is wrapped by the yellow regions. The charge distribution near the H atoms of H2S is uniform, and the Mg atom bonded to the S atom of H2S is surrounded by the yellow regions. The Mg atom bonded to the O atom of SO2 is wrapped by the yellow regions. The charge distribution near the S and F atoms of SO2F2 is uniform, and the Mg atom bonded to the O atom of SO2F2 is wrapped by the yellow regions. The charge distribution near the S and F atoms of SOF2 is uniform, and the Mg atom bonded to the O atom of SOF2 is wrapped by the yellow regions. These results show that during adsorption the Mg atoms of the Mg-MOF-74 material exhibit electron-gaining properties while the gas molecules exhibit electron-losing properties.
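The distance-based criterion of point 2 above can be expressed as a compact check; the radii and distance below are illustrative stand-ins for the Table S1 values, not the paper's data.

```python
def classify_interaction(distance, r_cov_sum, r_vdw_sum):
    """Compare an adsorption distance with the covalent and vdW radius sums."""
    if distance <= r_cov_sum:
        return "strong chemical bond (within covalent contact)"
    if distance < r_vdw_sum:
        return "weaker chemical interaction (between the two radius sums)"
    return "physical adsorption (beyond Van der Waals contact)"

# Hypothetical Mg...S contact in angstroms; radii stand in for Table S1 values.
print(classify_interaction(2.75, r_cov_sum=2.46, r_vdw_sum=3.53))
```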
The Change of Bond Length and Bond Angle of Gas Molecules after Adsorption

The changes in bond length and bond angle of the SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 molecules before and after adsorption are given in Supplementary Materials Tables S2-S8. The bond lengths of the seven gas molecules change only slightly due to adsorption; the more obvious changes occur in the bond angles (atomic numbering shown in Figure 2b). These changes in bond length and bond angle, together with the adsorption energies, adsorption distances, charge transfers and differential charge density diagrams, strongly confirm that definite interactions exist between the gas molecules and the adsorption substrate.
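For readers who wish to reproduce such geometry comparisons, the following sketch computes a bond length and a bond angle directly from optimized Cartesian coordinates (for example, parsed from a Gaussian output); the coordinates shown are illustrative, not the optimized structures of this work.

```python
# Helpers for quantifying geometry changes: bond lengths and bond angles
# computed from Cartesian coordinates. Coordinates below are illustrative.
import numpy as np

def bond_length(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def bond_angle(a, b, c):
    """Angle at vertex b (degrees) for atoms a-b-c."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical SO2-like geometry before adsorption (angstroms):
s, o1, o2 = [0.0, 0.0, 0.0], [1.43, 0.0, 0.0], [-0.72, 1.24, 0.0]
print(f"S-O1 length: {bond_length(s, o1):.3f} A")
print(f"O1-S-O2 angle: {bond_angle(o1, s, o2):.1f} deg")
```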
The Orbital Occupation Changes of Each System before and after Gas Adsorption

The orbital occupation results calculated by the Gaussian 16 software are imported into the Multiwfn software to obtain the discrete orbital occupations at the different energies, as indicated by the blue arrow in Figure 6a. A Gaussian function with a half-peak width of 0.01 eV is then used for broadening, yielding a continuous density of states curve (green arrow in Figure 6b) from which the orbital occupation changes can be analyzed clearly and intuitively. According to the extra-nuclear electron configuration of Mg, the 3p orbital of the Mg atom holds no electrons, and Mg forms Mg2+ by losing the two electrons of its 3s orbital. The bonding interactions during adsorption are therefore analyzed from the overlap of the 3s and 3p orbital curves of the Mg atoms of the Mg-MOF-74 material with the outer orbital curves of the SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 gas molecules.
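The overlap test applied in the following items can be expressed as a simple numerical check on two broadened partial-DOS curves; the curves below are synthetic Gaussians placed near −1.95 eV purely for demonstration.

```python
# Sketch of the overlap criterion: where the broadened Mg 3s/3p partial DOS
# and a gas-atom orbital curve are both significant at the same energy, the
# text reads this as orbital hybridization (bonding adsorption).
import numpy as np

def overlap_energies(grid, pdos_a, pdos_b, threshold=0.05):
    """Return grid energies where both PDOS curves exceed the threshold."""
    mask = (pdos_a > threshold) & (pdos_b > threshold)
    return grid[mask]

grid = np.linspace(-3.0, 0.0, 601)
mg_3s = np.exp(-((grid + 1.95) ** 2) / (2 * 0.02 ** 2))  # synthetic peak
f_2p = np.exp(-((grid + 1.95) ** 2) / (2 * 0.03 ** 2))   # synthetic peak
hits = overlap_energies(grid, mg_3s, f_2p)
if hits.size:
    print(f"orbital overlap near {hits.mean():.2f} eV -> bonding interaction")
else:
    print("no overlap -> little or no bonding (physisorption)")
```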
1. The orbital occupation of Mg-MOF-74 after adsorbing SF6 gas is shown in Figure 7a-c. As Figure 7a shows, after SF6 adsorption a new orbital occupation appears near −1.95 eV, while the other energy positions do not change significantly. Comparison with the total orbital occupation shows that the occupation changes near −1.95 eV, 0.44 eV and 1.65 eV after SF6 adsorption are contributed by SF6 (Figure 7b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 2p orbital occupation of the F atoms of SF6 shows clearly that the F 2p curve and the broadened Mg 3s occupation curve overlap near −1.95 eV (Figure 7c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on SF6 gas.

2. The orbital occupation of Mg-MOF-74 after adsorbing CF4 gas is shown in Figure 8a-c. As Figure 8a shows, no new orbital occupation appears after CF4 adsorption. Comparison with the total orbital occupation shows that the occupation changes near 0.42 eV, 1.16 eV and 2.21 eV after CF4 adsorption are contributed by CF4 (Figure 8b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 2p orbital occupation of the F atoms of CF4 shows clearly that the F 2p curve does not overlap with the broadened Mg 3s or 3p occupation curves (Figure 8c), indicating that the Mg-MOF-74 material has barely any bonding adsorption effect on CF4 gas.
3. The orbital occupation of Mg-MOF-74 after adsorbing CS2 gas is shown in Figure 9a-c. As Figure 9a shows, after CS2 adsorption a new orbital occupation appears near −2.03 eV, while the other energy positions do not change significantly. Comparison with the total orbital occupation shows that the occupation changes near −2.03 eV, 0.84 eV and 3.69 eV after CS2 adsorption are contributed by CS2 (Figure 9b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 3p orbital occupation of the S atom of CS2 shows clearly that the S 3p curve and the broadened Mg 3p occupation curve overlap near −2.03 eV (Figure 9c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on CS2 gas.

4. The orbital occupation of Mg-MOF-74 after adsorbing H2S gas is shown in Figure 10a-c. As Figure 10a shows, no new orbital occupation appears after H2S adsorption. Comparison with the total orbital occupation shows that the occupation changes near 0.35 eV, −6.95 eV and −7.47 eV after H2S adsorption are contributed by H2S (Figure 10b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 3p orbital occupation of the S atom of H2S shows clearly that the S 3p curve and the broadened Mg 3s occupation curve overlap near 0.35 eV (Figure 10c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on H2S gas.
5. The orbital occupation of Mg-MOF-74 after adsorbing SO2 gas is shown in Figure 11a-c. As Figure 11a shows, after SO2 adsorption a new orbital occupation appears near −3.56 eV, while the remaining energy positions do not change significantly. Comparison with the total orbital occupation shows that the occupation changes near −3.56 eV, −10.35 eV and 1.38 eV after SO2 adsorption are contributed by SO2 (Figure 11b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 2p orbital occupation of the O atoms of SO2 shows clearly that the O 2p curve and the broadened Mg 3p occupation curve overlap near −3.56 eV (Figure 11c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on SO2 gas.

6. The orbital occupation of Mg-MOF-74 after adsorbing SO2F2 gas is shown in Figure 12a-c. As Figure 12a shows, no new orbital occupation appears after SO2F2 adsorption. Comparison with the total orbital occupation shows that the occupation changes near −1.02 eV, −0.1 eV and 0.89 eV after SO2F2 adsorption are contributed by SO2F2 (Figure 12b).
Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 2p orbital occupation of the O atom of SO2F2 shows clearly that the O 2p curve and the broadened Mg 3p occupation curve overlap near −1.02 eV (Figure 12c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on SO2F2 gas.

7. The orbital occupation of Mg-MOF-74 after adsorbing SOF2 gas is shown in Figure 13a-c. As Figure 13a shows, after SOF2 adsorption a new orbital occupation appears near −2.07 eV, while the other energy positions do not change significantly. Comparison with the total orbital occupation shows that the occupation changes near −2.07 eV, −0.93 eV and 0.77 eV after SOF2 adsorption are contributed by SOF2 (Figure 13b). Analysis of the 3s and 3p orbitals of the substrate Mg atom and the 2p orbital occupation of the O atom of SOF2 shows clearly that the O 2p curve and the broadened Mg 3p occupation curve overlap near −0.93 eV (Figure 13c), indicating that the Mg-MOF-74 material has a bonding adsorption effect on SOF2 gas.

Conductivity Analysis after Adsorption

Pham, H.Q. et al. systematically studied the electronic band structure of a series of reticular metal-organic framework materials based on density functional theory [61]. By calculating the HOMO-LUMO gap for models with different types and numbers of substituents and different C_Ar-C_Ar-C=O dihedral angles, they showed that the band gap energy can be predicted from the HOMO-LUMO gap of the organic ligands of the MOF; that is, the electronic band structure of MOFs can be obtained from first-principles calculations on the organic linkers instead of complex and time-consuming calculations on periodic systems. In this section, frontier molecular orbital theory is introduced to calculate the energy gap and thereby qualitatively analyze the change in conductivity of the cluster after gas adsorption, indicating the effect of gas adsorption on the conductivity of the Mg-MOF-74 material.
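A minimal numerical sketch of this gap-based reasoning follows; note that the exponential gap-conductivity relation sigma ~ exp(−Eg/2kT) is a standard semiconductor approximation assumed here, not a formula quoted from the paper, and all orbital energies are placeholders.

```python
# Sketch: HOMO-LUMO gap for the bare cluster and a gas-loaded system, and a
# qualitative relative conductivity under the standard approximation
# sigma ~ exp(-Eg / 2kT). All orbital energies are hypothetical.
import math

K_B_T_300K = 0.02585  # eV at 300 K

def gap(e_homo, e_lumo):
    return e_lumo - e_homo

def relative_conductivity(eg, eg_ref):
    """sigma(system)/sigma(reference) under sigma ~ exp(-Eg / 2kT)."""
    return math.exp(-(eg - eg_ref) / (2.0 * K_B_T_300K))

eg_bare = gap(-6.90, -2.60)  # hypothetical bare Mg-MOF-74 cluster (eV)
eg_so2 = gap(-6.95, -3.40)   # hypothetical SO2-loaded system (eV)
print(f"gap change {eg_so2 - eg_bare:+.2f} eV, "
      f"relative conductivity x{relative_conductivity(eg_so2, eg_bare):.2e}")
```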
The frontier molecular orbital distributions and energies of Mg-MOF-74 before and after adsorption of the seven gases are shown in Figure 14. The diagrams show that the highest occupied molecular orbital (HOMO) of the bare Mg-MOF-74 cluster and of the systems after adsorbing SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2 is mainly distributed on the surface of the benzene ring, consistent with the pattern reported in the literature [61]. The lowest unoccupied molecular orbital (LUMO) of the SF6, CS2, SO2 and SO2F2 systems is mainly distributed on the surface of the gas molecules, whereas the LUMO of the CF4, H2S and SO2F2 systems is mainly distributed on the surface of the other benzene ring of the cluster. With these changes in frontier orbital distribution, the HOMO and LUMO energies of the seven systems increase and decrease to different degrees: the HOMO change is most obvious for the SO2F2 system and the LUMO change is most obvious for the SO2 system. The changes in energy gap demonstrate that, after the Mg-MOF-74 cluster adsorbs the CF4, SF6, CS2, H2S, SO2, SO2F2 and SOF2 gases, the conductivity of each system improves to a different degree, in the order SO2 > CS2 > SOF2 > SF6 > SO2F2 > CF4 > H2S. In application, therefore, the gas composition can be analyzed by comparing the differences in the resistance responses of the material to the seven gases under the same sensor electrode preparation, the same gas flow rate and the same measurement temperature, and hence by determining whether SF6 has decomposed.

Conclusions

In this paper, the GaussView software is used to construct a cluster model of the Mg-MOF-74 material and molecular models of the seven gases SF6, CF4, CS2, H2S, SO2, SO2F2 and SOF2, and the adsorption properties of the Mg-MOF-74 cluster toward these gases are calculated with Gaussian 16 based on the DFT principle. The main conclusions are as follows. When the adsorption of each gas molecule on the Mg-MOF-74 material reaches a stable state, the adsorption capacities rank as H2S > SO2 > SOF2 > SO2F2 > CS2 > SF6 > CF4. During the adsorption process, the metal atoms of the MOF gain electrons, the gas molecules lose electrons, and the adsorption changes the bond lengths and bond angles of the gas molecules. The density of states curves obtained by broadening the orbital occupations show that the adsorption of CF4 on Mg-MOF-74 is physical adsorption, while the adsorption of the other six gases is chemical adsorption. Frontier molecular orbital analysis shows that chemical adsorption of gases on Mg-MOF-74 changes the conductivity of the system. Therefore, the difference between the material's response to the gases in GIS equipment and its response in pure SF6 gas can be used to determine whether a fault exists inside the equipment.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano13111705/s1. Table S1: The sums of Van der Waals radii and covalent radii; Table S2: Changes of bond length and bond angle after SF6 adsorption; Table S3: Changes of bond length and bond angle after CF4 adsorption; Table S4: Changes of bond length and bond angle after CS2 adsorption; Table S5: Changes of bond length and bond angle after H2S adsorption; Table S6: Changes of bond length and bond angle after SO2 adsorption; Table S7: Changes of bond length and bond angle after SO2F2 adsorption; Table S8: Changes of bond length and bond angle after SOF2 adsorption.
9,937.2
2023-05-23T00:00:00.000
[ "Physics" ]
Roles of the miR-137-3p/CAPN-2 gene pair in ischemia-reperfusion-induced neuronal apoptosis through modulation of p35 cleavage and subsequent caspase-8 overactivation

Background: Neuron survival after ischemia-reperfusion (IR) injury is the primary determinant of motor function prognosis. MicroRNA (miR)-based gene therapy has gained attention, and building on our previous work, this study explored the mechanisms by which miR-137-3p modulates neuronal apoptosis in both in vivo and in vitro IR models. Methods: IR-induced motor dysfunction and spinal calpain (CAPN) subtype expression and subcellular distribution were detected within 12 h post IR. Dysregulated miRs, including miR-137-3p, were identified by miR microarray analysis and confirmed by PCR. A luciferase assay confirmed that CAPN-2 is a corresponding target of miR-137-3p, and their modulation of motor function was evaluated by intrathecal injection of synthetic miRs. CAPN-2 activity was measured by the intracellular Ca2+ concentration and mean fluorescence intensity in vitro. Neuronal apoptosis was detected by flow cytometry and lactate dehydrogenase (LDH) release. The activities of p35, p25, Cdk5 and caspase-8 were evaluated by ELISA and Western blotting after transfection with specific inhibitors and miRs. Results: The IR-induced motor dysfunction time course was closely associated with CAPN-2 protein upregulation, which was mainly distributed in neurons. The miR-137-3p/CAPN-2 gene pair was confirmed by luciferase assay. miR-137-3p mimic significantly improved IR-induced motor dysfunction and decreased CAPN-2 expression, even in combination with recombinant rat calpain-2 (rr-CALP2) injection, whereas miR-137-3p inhibitor reversed these effects. Similar changes were observed in the intracellular Ca2+ concentration and in CAPN-2 expression and activity when cells were exposed to OGD/R and transfected with synthetic miRs in vitro.

Inhibiting neuronal apoptosis was effective in preserving hindlimb motor function in rodent models [5,9]. Recently, some studies have shown that disturbed ionic homeostasis, such as the ischemic or mechanical injury-induced excessive intracellular calcium ion concentration ([Ca2+]) in neurons, can eventually trigger neuronal apoptosis by influencing vital biological functions and metabolism [10][11][12]. Thus, preserving intracellular calcium homeostasis may represent a promising treatment for attenuating neuronal apoptosis after IR insult. Increased intracellular Ca2+ can activate a variety of proteases [13]. Belonging to a family of calcium-dependent neutral proteases, the calcium-activated neutral proteinases (CANPs, also called calpains) are the best-known effectors that react to intracellular Ca2+ dysregulation through calcium-binding subunits [14,15]. Eleven calpain isoforms have been identified in humans so far, of which calpain-1 (µ-calpain, CAPN-1) and calpain-2 (m-calpain, CAPN-2) are the most ubiquitous isoforms in the central nervous system (CNS) [13]. Distributed in the same subcellular localization (the cytoplasm) and sharing a common small subunit (known as CAPN-4) upon activation, CAPN-1 and CAPN-2 appear to have similar biochemical properties [13,16]. CAPN-1 and CAPN-2 have previously been demonstrated to be overactivated in various models of neurodegenerative disease and injury, although they require micromolar and millimolar calcium levels for activation, respectively [13,[14][15][16]].
However, in contrast to traditional views, some studies have recently suggested that CAPN-1 activation plays prosurvival roles whereas CAPN-2 plays neurodegenerative roles, based on their opposite functions in promoting neuronal plasticity following CNS injury [17][18][19]. The most notable characteristic of calpains is their ability to perform partial truncation, a proteolytic cleavage of protein substrates such as cytoskeletal proteins, membrane-bound proteins and protein kinases at specific sites [13]. Commonly, the downstream products of CAPN-mediated truncation are bioactive and can further amplify neurotoxic insults or oxidative stress by activating subsequent signaling pathways [13,20]. For example, the membrane-bound protein p35, known as a specific neuronal activator of cyclin-dependent kinase-5 (Cdk5), has been demonstrated to be a major substrate exclusively regulated by CAPNs in the pathogenesis of neurodegenerative disease [13,19,20]. Further exploration of downstream targets in rodent in vivo and in vitro experiments revealed that Cdk5 overactivation induced by cleaved p35 occurred exclusively in the presence of CAPN-2 [19][20][21]. In those studies, overexpressed CAPN-2 precisely cleaved the normally membrane-bound p35 into the more stable p25 form, which finally led to inappropriate increases in p25/Cdk5 activation and in the protein level of caspase-3, a final executioner of neuronal apoptosis [19,22,23]. Acting as an upstream activator of caspase-3, caspase-8 is implicated in various models of CNS disease and is critical for neuronal apoptosis [24,25]. Thus, preventing caspase-8 proteolysis is especially important for controlling a series of broad caspase activations [24,25]. Consistently, the p35 protein from baculovirus effectively blocked the apoptosis cascade by forming a p35-caspase-8 complex via a thioester bond [24,26]. Structural experiments further identified the N terminus of p35 as the major element necessary to preserve the intact covalent bond within the p35-caspase-8 crystal structure [26]. Thus, an increase in CAPN-2-mediated p35 cleavage may reasonably be inferred to also lead to conformational changes in p35 that further initiate caspase-8 and downstream caspase activation. Collectively, strong evidence suggests that the destructive functions of CAPN-2 during CNS injury are greatly attributable to its catalyzed substrates and downstream signaling pathways [17,19,20,27]. However, no study has explored the abovementioned hypothesis in spinal cord IR injury using methods specifically targeting CAPN-2. MicroRNAs (miRs) are a group of small, endogenous, noncoding RNAs [28]. miRs are widely expressed in the CNS and are able to negatively regulate target genes by either degradation or posttranscriptional repression [5,6,28]. In our previous studies, we identified hundreds of aberrant miRs in injured spinal cords by microarray analysis [5,6,29]. Intrathecal pretreatment with synthetic miR mimics resulted in significant improvement of neurological deficits by restoring the altered miR expression [5,6,29]. These findings suggest a promising miR-based gene therapy targeting CAPN-2. In this context, we first searched bioinformatic databases to identify potential miRs with binding sites in CAPN-2 among all the dysregulated miRs detected in the microarray analysis. Our present study suggested that miR-137-3p and miR-124-3p had target interactions with CAPN-2, which is supported by another study exploring the roles of miR-137-3p in rescuing motoneuron degeneration after brachial plexus root avulsion injury [30]. We then studied the functions and mechanisms by which the miR-137-3p/CAPN-2 gene pair regulates neuronal apoptosis by pretreating in vivo and in vitro models with synthetic miRs, a selective CAPN-2 inhibitor, recombinant rat calpain-2 (rr-CALP2) or a specific caspase-8 inhibitor.
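For illustration, the 3′-UTR seed search described above can be mimicked with a few lines of Python; both sequences below are placeholders for demonstration, and the real miR-137-3p and CAPN-2 sequences should be taken from miRBase and the reference transcript.

```python
# Sketch of a seed-match search: the reverse complement of the miRNA seed
# (nucleotides 2-8) is located within a target 3'-UTR. Sequences are
# placeholders shown for demonstration only.

SEED_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna, utr):
    """Return UTR positions matching the reverse complement of the seed."""
    seed = mirna[1:8]  # nucleotides 2-8
    site = "".join(SEED_COMPLEMENT[n] for n in reversed(seed))
    positions = [i for i in range(len(utr) - len(site) + 1)
                 if utr[i:i + len(site)] == site]
    return positions, site

mirna_137_3p = "UUAUUGCUUAAGAAUACGCGUAG"     # placeholder sequence
utr_fragment = "ACGUAGCAAUAACCGUUAGCAAUAGC"  # placeholder 3'-UTR (RNA)
positions, site = seed_site(mirna_137_3p, utr_fragment)
print(f"seed-match site {site} found at UTR positions {positions}")
```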
Experimental animals

Sprague-Dawley rats weighing 200 to 250 g were obtained from the Animal Center of China Medical University (Shenyang, China). All rats were acclimatized for 7 days before surgery. They were housed in standard cages under a 12-h light/dark cycle at 23-24°C and 40-50% humidity. The experiments were performed in accordance with the Guide for the Care and Use of Laboratory Animals (United States National Institutes of Health publication number 85-23, National Academy Press, Washington DC, revised 1996).

Rat IR model establishment and experimental groups

The rat IR model was produced by occluding the aortic arch for 14 minutes [4,29]. Briefly, after being anesthetized, the rats were catheterized at the left carotid artery and the tail artery to measure proximal and distal blood pressure (BP), respectively. After the aortic arch was exposed, a clamp was placed between the left common carotid artery and the left subclavian artery for 14 min to induce ischemia, which was confirmed by a 90% decrease in distal BP. The clamp was then removed to allow reperfusion for 12 h. Sham-operated rats underwent the same procedures without induction of ischemia.

MiR microarray analysis

As we previously reported, rat miRNA microarray analysis was performed with the miRNeasy mini kit (Qiagen, West Sussex, United Kingdom) [29,31]. The L4-6 segments of the spinal cord were collected at 4 h after reperfusion. According to the manufacturers' instructions, 2.5 μg total RNA samples were first labeled with the miRCURY™ Hy3™/Hy5™ Power labeling kit (Exiqon, Vedbaek, Denmark) and hybridized on a miRCURY™ LNA Array (version 18.0, Exiqon, Vedbaek, Denmark). After nonspecific binding was removed, the fluorescent images of the microarray slides were scanned with an Axon GenePix 4000B microarray scanner (Axon Instruments, CA, USA). The fluorescence intensities of the scanned images were loaded into the GenePix Pro 6.0 program (Axon Instruments) for feature extraction. The averages of replicated miRs with intensities of 50 or more were used to calculate a normalization factor. After median normalization, significantly different miRs were identified by volcano plot filtering. Finally, hierarchical clustering was performed with MEV software (version 4.6, TIGR) to determine differences in miR expression.
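A rough sketch of the described post-processing chain (intensity filtering, median normalization and volcano-style selection) might look as follows, with random data and illustrative cutoffs in place of the real arrays.

```python
# Sketch: intensity filtering, median normalization and volcano-style
# selection of differential miRs. Data and cutoffs are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intensities = rng.lognormal(5.0, 1.0, size=(200, 6))  # probes x arrays

# Probes with mean intensity >= 50 define the per-array normalization factors
reliable = intensities[intensities.mean(axis=1) >= 50]
factors = np.median(reliable, axis=0)
normalized = intensities / factors * factors.mean()  # median normalization

# Volcano-style filter: |log2 fold change| >= 1 and p < 0.05 (IR vs sham)
ir, sham = normalized[:, :3], normalized[:, 3:]
log2fc = np.log2(ir.mean(axis=1) / sham.mean(axis=1))
pvals = stats.ttest_ind(ir, sham, axis=1).pvalue
hits = np.where((np.abs(log2fc) >= 1.0) & (pvals < 0.05))[0]
print(f"{hits.size} candidate dysregulated miRs")
```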
Intrathecal injection and drug delivery

All in vivo treatments, including the synthetic miRs (Dharmacon, Chicago, IL, USA) and recombinant rat calpain-2 (rr-CALP2, B71107, 150 U/L, Calbiochem, China), were diluted to a total volume of 20 µl and injected intrathecally, as we previously described [5,6]. Briefly, the needle of a 25-μl microsyringe was inserted at the L5-6 level of the spinal cord, with a tail flick taken as the sign of correct placement. Then, miR-137-3p mimic at 100 μmol/L, miR-137-3p inhibitor at 125 μmol/L or negative control (NC) at 100 μmol/L was co-administered with Lipofectamine 3000 (Invitrogen, USA) at 24-h intervals for five consecutive days before surgery. Likewise, rr-CALP2 was dissolved to a final concentration of 75 U/L immediately before injection. The number of days and the dosages used in this study were determined from the overall in vivo effects assessed by PCR and Western blotting in preliminary experiments. Only rats displaying normal motor function were included for further study.

Motor function assessment

After being fully acclimatized to the testing environment, the rats were scored for hindlimb motor function on the Tarlov scale by two observers using a double-blind method [5].

Oxygen-glucose deprivation and reperfusion (OGD/R) model

As we previously reported, the OGD/R model was established in 70-80% confluent VSC4.1 neurons to mimic the in vivo IR insult [5]. After two washes and replacement of the medium with glucose-free Hank's Balanced Salt Solution (HBSS), the neurons were kept in an anaerobic chamber (95% N2 and 5% CO2) at 37°C for 6 h. The medium was then changed back to the initial medium under normal atmosphere for another 18 h to induce reoxygenation. Control neurons were cultured under normal conditions for 24 h without deprivation of oxygen and glucose. For the in vitro experiments, the neurons were pretreated with the synthetic miRs and specific inhibitors 24 h before undergoing the OGD/R insult [5]. As previously described, after seeding at 4 × 10^5 cells per well, miR-137-3p mimic (50 nmol/L) or NC (50 nmol/L) was cotransfected with 5 μL Lipofectamine 3000, whereas for the inhibitor experiments, roscovitine (10 µM, Cdk5 inhibitor, Sigma-Aldrich Co., China) or Z-IETD-FMK (50 µM, caspase-8 inhibitor, R&D Systems, United States) was added to the culture medium alone. The concentration of each treatment and its in vitro effects were determined by PCR in preliminary experiments.

Detection of CAPN-2 activity

The phosphatase and tensin homolog (PTEN) is a selective substrate of CAPN-2; PTEN is degraded as a result of CAPN-2 activation and is widely used for quantitative analysis of neuronal CAPN-2 activity in vivo and in vitro [19,32]. CAPN-2 activity was therefore assessed through PTEN levels, as previously described.

Lactate dehydrogenase (LDH) assay

The LDH released from VSC4.1 neurons was detected with a commercial LDH assay kit (Abcam, CA, USA). According to the manufacturer's instructions, 50 µL of medium was collected 24 h after each treatment and measured at an absorbance of 450 nm.

Detection of caspase-8 activity

Caspase-8 activity was detected with a caspase-8 assay kit (Abcam, CA, USA), which is based on spectrophotometric detection of the p-nitroaniline (pNA) moiety after it is cleaved from the labeled substrate Ac-IETD by caspase-8. Samples were measured in triplicate at an absorbance of 405 nm. For the ELISA measurements, according to the manufacturers' instructions, the activities in supernatants were measured at 450 nm after each treatment; each sample was assayed in triplicate and the average was expressed as ng/L.

Detection of neuronal apoptosis by flow cytometry

Apoptotic neurons were detected with a BD FACSCalibur flow cytometer (BD Bioscience, MA, USA) with excitation at 488 nm and emission at 530 nm [5]. Briefly, 1 × 10^5 neurons were first stained with 10 µl Annexin V-fluorescein isothiocyanate (FITC) at 37°C for 15 min and then counterstained with 5 µl propidium iodide (PI) for 30 min in the dark. Each sample was prepared in triplicate.
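The quadrant logic later used to read the Annexin V-FITC/PI dot plots can be sketched as below; conventional gating treats Annexin V-positive/PI-negative events as early apoptotic and double-positive events as late apoptotic, and the thresholds and event matrix here are illustrative.

```python
# Sketch of Annexin V-FITC/PI quadrant classification. Gates and events are
# illustrative; real gates come from unstained and single-stain controls.
import numpy as np

def quadrant_fractions(fitc, pi, fitc_gate=1.0, pi_gate=1.0):
    """Fractions of events in the Annexin V+/PI- and Annexin V+/PI+ quadrants."""
    early = np.mean((fitc > fitc_gate) & (pi <= pi_gate))  # early apoptotic
    late = np.mean((fitc > fitc_gate) & (pi > pi_gate))    # late apoptotic
    return early, late

rng = np.random.default_rng(1)
fitc = rng.lognormal(0.0, 0.8, 10_000)  # hypothetical FITC intensities
pi = rng.lognormal(-0.3, 0.8, 10_000)   # hypothetical PI intensities
early, late = quadrant_fractions(fitc, pi)
print(f"early apoptotic: {early:.1%}, late apoptotic: {late:.1%}")
```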
Statistical analysis

The data are expressed as the mean ± standard deviation (SD) and were analyzed using SPSS 19.0 software (SPSS, Chicago, USA). Statistical comparisons between two groups were assessed by t tests or Mann-Whitney tests, whereas comparisons among three or more groups were performed by one- or two-way ANOVA followed by the Tukey-Kramer test. A P value < 0.05 was considered statistically significant.
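A minimal sketch of this workflow with SciPy and statsmodels, using illustrative Tarlov-like scores rather than study data:

```python
# Sketch of the stated comparisons: t test for two groups; one-way ANOVA plus
# a Tukey-style post hoc test for three or more. Scores are illustrative.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([5, 5, 5, 5, 5, 5], dtype=float)
ir = np.array([1, 2, 1, 2, 2, 1], dtype=float)
mimic = np.array([3, 4, 3, 4, 3, 4], dtype=float)

print("two-group t test p =", stats.ttest_ind(sham, ir).pvalue)
print("one-way ANOVA p =", stats.f_oneway(sham, ir, mimic).pvalue)

scores = np.concatenate([sham, ir, mimic])
groups = ["sham"] * 6 + ["IR"] * 6 + ["IR+mimic"] * 6
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```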
Temporal changes in motor dysfunction and spinal CAPN subtype expression post IR

All rats exhibited normal motor function before undergoing IR surgery. As shown in Figure 1A, compared with sham-operated rats, the rats in the IR groups displayed obvious motor dysfunction during the reperfusion period.

IR-induced aberrant spinal miR-137-3p expression and negative regulation of CAPN-2 expression in vivo

Microarray analysis showed that several miRs were greatly dysregulated in injured spinal cords at 4 h post IR (Figure 2A). Among these miRs, miR-137-3p has been indicated to be closely associated with neurodevelopment and CNS disease and to be highly expressed in the brain [30,33]. Thus, we hypothesized that miR-137-3p was also widely expressed in spinal cord tissues, and confirmed its expression there. The negative target interaction was further confirmed by a luciferase reporter assay, in which the miR-137-3p mimic significantly decreased the luciferase activity in cells containing the wild-type (WT) 3′-UTR but not the mutated (MT) 3′-UTR (Figure 2C, P < 0.05). As we previously reported, we assessed the potential in vivo interactions by intrathecal pretreatment with synthetic miRs [5,6]. Consistently, compared with the IR group, the group given intrathecal injection of miR-137-3p mimic had significantly lower CAPN-2 protein and mRNA expression, whereas the group pretreated with miR-137-3p inhibitor had significantly higher CAPN-2 expression (Figure 2D-E, P < 0.05). As expected, the synergistic upregulation of CAPN-2 expression post IR produced by injection of rr-CALP2, a recombinant CAPN-2 that specifically upregulates CAPN-2 expression, was partly reversed by miR-137-3p mimic injection (P < 0.05). miR-137-3p NC injection had no significant effects on CAPN-2 expression (P > 0.05).

Effects of the miR-137-3p/CAPN-2 pair on IR-induced hindlimb motor dysfunction

To further clarify the regulatory roles of the miR-137-3p/CAPN-2 pair in vivo, hindlimb motor function was assessed (Figure 2F). As expected, compared with baseline and with sham-operated rats, all IR-injured rats showed obvious hindlimb motor dysfunction during the reperfusion period (P < 0.05). Compared with time-matched injured rats in the IR group, rats injected with the miR-137-3p mimic exhibited higher average Tarlov scores, whereas rats injected with the miR-137-3p inhibitor showed lower Tarlov scores (P < 0.05). Likewise, in agreement with the mRNA and protein levels of CAPN-2, rr-CALP2 injection reversed the improvement in motor function, as indicated by Tarlov scores comparable to those of the IR group (P > 0.05). There were no detectable differences between IR-injured rats with or without miR-137-3p NC treatment at any observed time point (P > 0.05). PTEN is a selective substrate of CAPN-2 [32]. Therefore, the OGD/R-induced changes in CAPN-2 expression and activity were further confirmed by assessment of PTEN at the same observed time points. As shown by representative images of double fluorescent staining, both PTEN and p35 fluorescent labels were predominantly distributed in the cytoplasm and nucleus of VSC4.1 neurons (Figure 3B), consistent with previous studies [19,32]. Additionally, as the final executor of the apoptotic network, caspase-3 protein expression was measured in each treatment group at the same time point. The Western blotting results showed that caspase-3 protein expression changed in accordance with caspase-8 protein expression (Figure 5C, D). In addition, the changes in LDH release from injured neurons exhibited the same pattern as the flow cytometry changes among the treatment groups, suggesting an important role of the miR-137-3p/CAPN-2 pair in regulating subsequent Cdk5 and caspase-8 overactivation and neuronal apoptosis (Figure 6C, P < 0.05). No significant differences were observed between injured neurons with or without miR-137-3p NC treatment (P > 0.05).

Discussion

Our previous studies revealed that IR-induced dysregulated miRs in spinal cords play important roles in driving pathogenesis during the reperfusion period and finally cause severe motor and sensory dysfunction [5,6,29]. Recently, an increasing number of studies have suggested that miR-based gene therapy is a promising treatment for neurological recovery because it can effectively prevent neuronal apoptosis. In the present study, we investigated the functions and mechanisms of miR-137-3p and its target CAPN-2 in both in vivo and in vitro IR models to better understand the pathophysiological mechanisms and to find better treatments for the clinic. Previous studies have explored the crucial roles of CAPNs in determining neuronal survival during CNS injury [16,17-19,27,30]. Some studies have observed prosurvival roles for CAPN-1 activation but destructive roles for CAPN-2 activation in retinal ganglion cell degeneration [17,19]. However, in a recent rat contusive spinal cord injury model, CAPN-1 was activated and contributed to impaired locomotor function [16]. These differences might be explained by the preferential participation of CAPN isoforms in different cellular functions, even in different substructures of the same cells [14,35]. Thus, the contradictory roles of the activation of these two isoforms pose great challenges for a detailed understanding of the pathophysiological mechanisms of IR injury. In this context, we examined CAPN-1 and CAPN-2 protein expression and assessed motor function using Tarlov scores at several time points post IR. Our results showed that only the temporal expression pattern of CAPN-2 was negatively correlated with IR-induced motor dysfunction, with the first significant differences detected at 0.5 h post IR and a maximum reached at 4 h post IR (Fig. 1). This finding is consistent with a previous study on spinal cord injury, in which a progressive increase in calpain content in the lesion was first detected as early as 30 min after trauma and reached a 91% increase at 4 h after trauma [36]. We further explored the cellular distribution of CAPN-2 in the major spinal cord cell types by double immunofluorescence at 4 h post IR, when CAPN-2 expression reached its peak. Representative images and quantification showed that CAPN-2 was primarily expressed in spinal neurons, indicating that neuronal CAPN-2 might be the major effector during the reperfusion period.
MiRs are small RNA molecules that negatively regulate gene expression by binding to the 3′-UTRs of their targets via complementary base pairing [28]. MiRs are widely expressed in the CNS and have been implicated in multiple pathological processes, including IR [29]. We have suggested that some miRs that are highly expressed in spinal cords, including miR-187-3p, miR-27 and miR-125b, may provide new insights for research and clinical treatment [5,6,29]. Likewise, in this study, using miR microarray analysis and a luciferase assay, we found that miR-137-3p expression was greatly changed at 4 h post IR and that miR-137-3p has a target interaction with CAPN-2 (Fig. 2). Continuous intrathecal injection of synthetic miRs before IR was previously reported to effectively regulate miR expression and corresponding target gene expression in vivo [5,6,29]. Given the complicated cellular crosstalk in vivo, we defined the overall effects of the miR-137-3p/CAPN-2 gene pair by assessing motor function in a rat model. As expected, in contrast to the decrease in CAPN-2 protein expression, intrathecal injection of miR-137-3p mimic greatly increased Tarlov scores, indicating reduced motor dysfunction, whereas treatment with miR-137-3p inhibitor or NC did not have these effects. Moreover, to further confirm the above interaction, exogenous CAPN-2 (rr-CALP2) was delivered directly by intrathecal injection. Consistent with the ability of exogenous CAPN-2 to activate intrinsic CAPN-2 in uninjured nerves [35,37], the synergistic increase in CAPN-2 expression caused by exogenous rr-CALP2 injection was significantly prevented by miR-137-3p mimic, as comparable CAPN-2 protein levels and similar behavioral assessments were observed throughout the reperfusion period in injured rats with or without combined rr-CALP2 injection (Fig. 2). These findings suggest that miR-137-3p acts as a functional regulator of CAPN-2 in the spinal cord. As a trigger, CAPN-2 requires millimolar (0.250-0.750 mM) calcium concentrations for its activation [13]. Thus, parallel in vitro experiments were performed to better define the link between intracellular Ca2+ and CAPN-2 activation [14,38]. Consistently, our current data showed that the synthetic miR-137-3p mimic also prevented the increase in the intracellular [Ca2+] and decreased CAPN-2 expression and activity in stressed neurons. In addition, PTEN is a selective substrate of CAPN-2 [32] and is thus widely used to quantitatively measure neuronal CAPN-2 activity in vivo and in vitro [19,32]. As expected, PTEN and CAPN-2 were identically distributed in the cytoplasm of neurons, but their protein levels and immunoreactivities changed in opposite directions in response to the different treatments. Decreased PTEN expression indicates an increase in CAPN-2 activation; therefore, these results suggest that OGD/R-induced CAPN-2 overactivation was regulated by the synthetic miRs. Previous in vitro and in vivo studies have shown that CAPN-2 upregulation triggers neuronal apoptosis [20,39]. These studies found that activated CAPN-2 directly and precisely cleaves its substrate, the membrane-bound protein p35, into p25, which consequently results in Cdk5 activation in cultured primary neurons and retinal ganglion cells [13,20,39]. Similarly, our in vitro immunofluorescent staining (Fig. 4) revealed that the cytoplasmic and nuclear labels for p35 and p25 fluorescence completely overlapped with the CAPN-2 labels in VSC4.1 neurons.
In a previous study, during polybrominated diphenyl ether-153-induced neuronal apoptosis, p35 was found to accumulate in the perinuclear region and plasma membrane, whereas p25 localized to both the cytoplasm and the nucleus [20]. Given that the p35/Cdk5 complex mainly exerts its function in the nucleus, these mislocalizations of p35 and p25 might be a sign of the formation of the p25/Cdk5 complex [20,40,41]. Additionally, our Western blotting results showed that the expression pattern of the Cdk5 protein in each group changed in agreement with the levels of the p25 and CAPN-2 proteins and opposite to the level of the p35 protein (Fig. 4). On the other hand, the changes in p35 and p25 activities in accordance with the protein level of CAPN-2 in neurons transfected with miR-137-3p mimic or NC also demonstrated that the conversion of p35 into p25 requires the presence of CAPN-2. Additionally, selective inhibition of CAPN-2 expression has been shown to preserve both the structure and the function of vulnerable neurons [14,17,19]. In this study, pretreatment with miR-137-3p mimic or the Cdk5 inhibitor comparably reduced the number of neurons in the A4 and A2 quadrants of the flow cytometry dot-plot graphs and the LDH leakage into the culture medium after OGD/R insult (Fig. 6). These findings all support the hypothesis that the dynamic localization of p35 and p25 can be considered a signal for p25/Cdk5 activation-induced apoptosis. Caspase-cascade activation has been suggested to be central to neuronal apoptosis during IR injury [5,9]. As the apical member of the caspase family, caspase-8 overactivation has been shown to be of special importance in controlling a series of broad caspase-cascade networks by initiating downstream caspase activation, such as the activation of caspase-3 [24,42]. Meanwhile, as a cysteine protease, caspase-8 requires proteolytic cleavage before activation. Under normal conditions, caspase-8 forms the p35-caspase-8 complex through a covalent bond with the N terminus of p35 [26]. The N terminus of p35 is known to be a major element necessary for preserving the p35-caspase-8 complex, and the baculovirus p35 protein has been demonstrated to effectively block the apoptosis cascade by preventing caspase-8 proteolysis and activation [24,26]. In contrast, CAPN-2 overexpression-mediated p35 cleavage may cause conformational changes in p35 and consequently initiate aberrant caspase-8 activation. In agreement with this hypothesis, a previous study showed that CAPN-2 was required to initiate endoplasmic reticulum stress-induced apoptosis mediated by a caspase-8-dependent pathway [25]. Similarly, in our in vitro immunofluorescence and Western blotting results, the fluorescent labels for p35 in the cytoplasm were identically distributed with the caspase-8 labels in neurons, and the protein expression of p35 and caspase-8 changed in opposite directions when CAPN-2 expression was downregulated by pretreatment with miR-137-3p mimic (Fig. 5). In addition, our hypothesis that CAPN-2 initiates caspase-8-mediated apoptosis was further supported by the similar results observed in neurons transfected with miR-137-3p mimic and in those treated with the caspase-8-specific inhibitor Z-IETD-FMK. Both treatments exhibited comparable inhibitory effects on the protein level and activity of caspase-8, on the number of injured neurons in the A4 and A2 quadrants of the dot-plot graphs and on the amount of LDH leakage (Figs. 5 and 6).
Acting as the final executor of the caspase family, caspase-3 expression was equally decreased in neurons transfected with miR-137-3p mimic and in those treated with Z-IETD-FMK, supporting the assumption that CAPN-2-induced caspase-8 activation may simultaneously lead to caspase-3 activation. Indeed, similar observations were made in a study of hydrogen peroxide-induced apoptotic pathways [25]. Of note, treatment with the Cdk5 inhibitor roscovitine had no significant effects on the expression of CAPN-2 or p35, possibly because it modulates only the terminal step of the CAPN-2/p35-p25/Cdk5 pathway [20]. Roscovitine is known to be highly specific for Cdk5 and is therefore unlikely to influence the activity of the apical members of the signaling pathway [20]. Undoubtedly, whether caspase-8 activates caspase-3 directly by proteolytic cleavage or by activating caspase-3 activators needs to be elucidated in further studies.

Conclusion

This study highlights the roles of CAPN-2 in triggering motor dysfunction after spinal cord IR injury and investigates its target interaction with miR-137-3p in both in vivo and in vitro models. The effects of the miR-137-3p/CAPN-2 gene pair on neuronal apoptosis may be attributed to CAPN-2 inhibition, which prevents cleavage of the substrate p35 and consequently prevents overactivation of p25/Cdk5 and initiation of the caspase-8-mediated caspase cascade.

Ethics approval and consent to participate

The animal experiments in this study were approved by the Ethics Committee of

Figure legend: Modulation of VSC4.1 neuronal apoptosis by the miR-137-3p/CAPN-2 gene pair after OGD/R.
5,892.6
2019-12-18T00:00:00.000
[ "Biology" ]
STRUCTURAL MONITORING WITH GEODETIC SURVEY OF QUADRIFOGLIO CONDOMINIUM (LECCE) Monitoring buildings for movement has always been a problem of great importance for their conservation and preservation, as well as for risk mitigation. In particular, topographic surveying makes it possible, through the principles and instruments of geodetic surveying, to monitor identified and measured moving points. In this case study, twelve survey campaigns were carried out to monitor a building located in the city of Lecce. The condominium was built five years ago on an old quarry that had been filled with debris to allow construction. In time, cracks started to appear on walls within the property, and legal actions were taken as a result. The survey scheme adopted was triangulation/trilateration from two vertices with known coordinates. With this methodology, four cornerstones were established with forced centering on pillars with anchor plates, connected to the same number of framework points considered stable. From these, 23 control points located on the structure, equipped with rotating prisms anchored in the same manner, were surveyed. The elaboration was carried out by generating redundant measurements and adjusting the values by least squares. The results obtained from the survey and elaboration confirmed the existence of ongoing deformation phenomena. The causes of the phenomenon were subsequently investigated and attributed to a sewer pipeline and a water pipeline that were not properly put in place and consequently failed, owing to the geological characteristics of the site. INTRODUCTION Engineering structures are subject to deformation due to sometimes unknown factors acting with varying frequency and intensity (such as changes of ground water level, geotechnical phenomena, structural phenomena, etc.). Because the causes are unknown, it is necessary to define a conceptual model. Monitoring and analyzing deformations of structures constitutes a special branch of geodesy. Geodetic techniques allow, through a network of points interconnected by angle and distance measurements, a sufficient redundancy of observations for the statistical evaluation of their quality and for error estimation. They give global information on the behaviour of the deformable structure (Moore, 1992; Glennie, 1997; Armer, 2001). Geodetic techniques have traditionally been used mainly for determining the absolute displacements of selected points on the surface of the object with respect to some reference points that are assumed to be stable. In order to establish an adequate monitoring system, which should be non-destructive and cover long periods of time, it is necessary to take into account the environment in which such measurements are required, establish an adequate survey procedure and, finally, analyze the results obtained. In general, the monitoring of structures has a different purpose from the testing of structural components; the dictionary definition of monitoring is to watch or listen to something carefully over a certain period of time for a special purpose (Woodhouse et al., 1999; Carpinteri, 2006; Ball, 1991). Geodetic modelling of the object means discretizing the continuum by points chosen in such a way that they characterize the object, so that the movements of the points represent the movements and distortions of the object. This means that only the
geometry of the object is modelled. Furthermore, modelling the deformation process conventionally means observing (by geodetic means) the characteristic points at certain time intervals in order to monitor properly the temporal course of the movements; that is, the temporal aspect of the process is modelled. This kind of modelling and monitoring of an object under deformation in space and time has been the traditional geodetic procedure. Consequently, the deformations of an object are described solely in a phenomenological manner (Welsch, 2001). Conventionally, in order to detect possible movements, estimated coordinates obtained from least squares adjustment of observations at different epochs are compared with each other by using statistical tests. This procedure therefore requires a common coordinate system and the referencing of measurements to a common fixed temporal reference. Study area In the immediate outskirts of the city of Lecce (Apulia, Italy; figure 1), the Quadrifoglio condominium is part of a complex comprising four residential buildings, besides a nearby villa (figure 2). The buildings of the complex are situated on a public municipal road provided for in the general development plan, built together with the annexed urbanization, such as sidewalks, public lighting, and water supply and drainage networks. Within about a year of the Municipal Technical Office issuing the required safety permit, the tenants began to witness daily signs of distress in their homes. Due to continuous downpours, the ground on which the buildings had been built began to lose consistency. The area had, in fact, been destined to quarry extraction of Lecce stone (figure 3) and was later filled with debris material characterized by a high degree of permeability. There are, however, multiple concurrent causes that have been assessed as aggravating the situation, among them the failure to complete the internal road (urbanization network) and shoddy water and sanitation channelling works. SURVEY DESIGN The project is divided into the following steps: (i) acquisition of general information on the structure's behaviour; (ii) identification of significant control points in order to obtain repeated readings that give a comprehensive picture of the structural behaviour; (iii) knowledge of the deformation characteristics and of the significant directions of movement in order to define an operating range of measurement; (iv) choice of the reference system, the operating system and the most suitable instrumentation; (v) evaluation of the minimum risk condition. The monitoring activity has as its main reference the deformation of the structure, regardless of the quality of the materials and the size of the structure, which are obviously verified and certified at the beginning. The evaluation of the risk threshold is evidently connected to the values inferred from the calculation report and to the tension state configured by displacements greater than those of the project. Once the maximum values not to be exceeded are fixed (risk threshold), the problem splits into the following two cases: continuous monitoring, with movements connected to recording units that automatically trigger the alarm system, or monitoring at predetermined time intervals, in which the operator evaluates the degree of risk each time and behaves accordingly.
Design and installation phase The main question to answer was therefore whether the movements of the building structure indicate stabilization, with a future decrease in risk, or an active phenomenon that will degenerate. A discrete monitoring was carried out using a high-precision total station with forced centering for the station vertices and fixed control points. The geological situation previously described required, for the installation of the measurement cornerstones, a search for stable areas located near the structure that would also be mutually visible (Brebu, 2012). Four stations (100, 200, 300 and 400) have been placed (figure 5): two located along San Cesario street, respectively to the right and left of the building being monitored; one located on the roof of a building to the North-West of the building; and the fourth positioned in a construction area to the South-West of the building. The reference system adopted for the control activities corresponding to the cornerstones is shown in figure 6; it is a local reference system with origin at vertex 100, the x-axis along the line joining vertex 100 with vertex 400, and the y-axis oriented so as to complete the clockwise triad. For each cornerstone a structure was built consisting of a square base of 100x100x20 cm on which a pillar of square cross-section 40x40x160 cm was erected. The points to be checked were chosen, in the design phase, as a function of their visibility from at least two stations and, in any case, so as to be structurally significant. On each of them the installation was planned of a forced centering consisting of a pivot-mounted prism holder for tunnels and structures and a miniprism with metal frame, complete with spirit level and target plate, with a centering accuracy of 1 mm and a reflective range of 2000 m. TOPOGRAPHIC NETWORK Each of the four stations has been connected to two other external vertices, as a double check of the robustness of the network. From these stations the final network connected each station to the 23 control vertices placed on the building (figures 7-9). The redundancy of the network scheme has allowed good control of error propagation and of the presence of any gross errors, and a further rigorous adjustment of the vertices with the estimation of the coordinates and of the corresponding precision (Deakin, 1999). The adjustment has been performed with the least squares method using observation equations that bind the measurements performed to the parameters to be estimated (the coordinates of the vertices). The planimetric problem has been separated from the altimetric one by performing, respectively, an adjustment of the angles and distances and one of the heights (Henriques, 2001).
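Before turning to the adjustment details, the following minimal sketch (not the software actually used in the campaigns) makes the intersection computation concrete: it locates a control point from two known stations given the measured azimuths. In the real network, redundant angle and distance observations are combined in a least squares adjustment instead; station coordinates and azimuths in the example are hypothetical.

```python
import numpy as np

def intersect(p1, p2, az1, az2):
    """Forward intersection: locate a control point from two known
    stations p1, p2 given the measured azimuths az1, az2 (radians)."""
    d1 = np.array([np.sin(az1), np.cos(az1)])   # ray direction from station 1
    d2 = np.array([np.sin(az2), np.cos(az2)])   # ray direction from station 2
    # Solve p1 + t1*d1 = p2 + t2*d2 for the distances t1, t2 along each ray.
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1

# Example: station 100 at the local origin, station 400 on the x-axis.
p = intersect((0.0, 0.0), (50.0, 0.0), np.radians(40.0), np.radians(-30.0))
```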
The adjustment was implemented with the least squares method using a variable number of iterations (maximum 10), until stabilization of the estimated RMS (Root Mean Square). For vertices without redundancy, in the absence of constraints the coordinates have been calculated without an estimation of the errors (Sepe, 2007). Topographically, the network has been realized by executing the triangulation scheme (figure 10) in which, starting from two cornerstones, the control vertices Pi (x i, y i) have been measured. The system enables angular measurements of great accuracy and reliability and is, moreover, equipped with a dual-axis compensator that constantly monitors both components of the inclination of the vertical axis (Dunisch, 2001). The accuracy of the instrument is given in table 1. For each day of survey the stability of the stations has been evaluated, and it has been verified that any errors were contained within the precision of the method and were smaller than the measured displacements of the control vertices (table 3). For each control vertex the values of the variances and the covariances of the three coordinates have been determined (table 5) and, consequently, the parameters of the error ellipses have been calculated. Table 4. Comparison of the vertices coordinates with time t 0. STATISTICAL TESTING In monitoring, the object to be investigated is typically represented by a cluster of points whose positions are fixed by topographic measurements at different epochs. If movements occur, they cause displacements of the cluster, resulting in position differences between epochs. These differences can be of the same order of magnitude as the observational errors. Therefore, statistical analyses and particularly hypothesis testing are needed to reliably detect significant displacements of individual control points or significant network deformations. For example, the global congruency test (Cooper, 1987; Setan, 1995; Erol, 2004; Barbarella, 1990) may be a useful tool for examining the total deformation of a network between two epochs. If the observed deformation is small compared to the accuracy of the measurements, the network is regarded as congruent at those two epochs; otherwise the observed deformation is deemed significant, which usually requires further analyses. Two statistical approaches have been applied in this case: the classical statistical approach and the Bayesian approach. Classical statistics considers the data as realizations of random variables and the unknown parameters as deterministic, while in Bayesian statistics the data are considered constant and the unknown parameters are random variables characterized by a prior pdf P(θ). The Bayesian approach therefore allows one to update the a priori information contained in the pdf P(θ) of the parameters, given the observed data, and the update is reflected in the definition of a posterior pdf P(θ|d). Test of classical statistics As a first hypothesis it is assumed that the adjusted observations collected in the first survey are uncorrelated, both in the spatial domain (distance between points) and in the time domain (time between measurement sessions), with respect to those acquired in the subsequent phases.
It is also assumed that the estimated point coordinates in the repeated surveys and their differences $\Delta x = x_i - x_0$ are normally distributed, with variances $\sigma^2_{x_i}$ and $\sigma^2_{x_0}$, respectively. Under these assumptions the variance of the difference is $\sigma^2_{\Delta x} = \sigma^2_{x_i} + \sigma^2_{x_0}$, where $\sigma^2_{x_i}$ and $\sigma^2_{x_0}$ are known from the least squares adjustment of the observations. The null and alternative hypotheses for congruency testing are $H_0: \Delta x = 0$ (1) (i.e., no significant deformation occurred for a point between two epochs) and $H_1: \Delta x \neq 0$ (2) (i.e., significant deformation exists), with the test statistic $Z = |\Delta x| / \sigma_{\Delta x}$ (3). The null hypothesis is accepted at the chosen level of significance if the test statistic (3) does not exceed the critical value of the standardized normal distribution. In this study a significance level of p = 5% has been applied, which gives Z crit = 1.96 (Baarda, 1968). In order to better discriminate whether the differences in point positions were due to actual displacements or to random errors and/or movements of the control points, the test has been applied as in tables 6 and 7 (Costantino, 2011). The points showing statistically relevant displacements are highlighted in bold. Table 7. Differences of adjusted coordinates and statistical analysis results between the first and the last survey. Bayesian analysis of the displacements of the network For the Bayesian analysis a simplified approach has been adopted, analyzing the three coordinates separately and therefore treating the problem as one-dimensional. Consider, then, a single coordinate, called h, obtained from the network adjustment at different epochs. The quantities to be considered are the displacements $\Delta h$ between the different sessions (i) of all control points of the network $P_j$. $\Delta h$ follows a normal distribution with mean $\mu_h$ (unknown) and variance $\sigma_h^2$ (known from the previous adjustment); therefore, for each point of the network, $\Delta h \sim N(\mu_h, \sigma_h^2)$ (5). The mean $\mu_h$ is, in turn, a random variable assumed to be normally distributed with mean $\mu$ and variance $\sigma_0^2$. The parameters of this distribution, the prior of the Bayesian formulation, constitute the a priori information $(\mu, \sigma_0^2)$ fixed during the numerical treatment of the problem. Starting from the Bayes formula $P(\mu_h \mid \Delta h) \propto P(\Delta h \mid \mu_h)\,P(\mu_h)$ (6), the terms on the right-hand side can be made explicit by first considering the normal distribution of the data and, subsequently, the hypothesis of no displacement ($\mu_h = 0$), which corresponds to a prior centred at $\mu_h = 0$. From these, the final formulation can be reached: the posterior is again normal, with variance $\bar\sigma^2 = \sigma_h^2\,\sigma_0^2/(\sigma_h^2 + \sigma_0^2)$ and mean $\bar\mu = \bar\sigma^2\,(\mu/\sigma_0^2 + m/\sigma_h^2)$ (8), where $\bar\sigma^2$ combines the variances between the epochs $t_i$ and $t_0$, m is the mean of the displacements in the two epochs, $\mu_h$ is the mean value of the coordinate difference in the two epochs, and erf is the error function used to evaluate the posterior probabilities (Beyer, 1978). The significance analysis of movements by the Bayesian approach therefore reduces to a comparison between the two alternatives of (8). The interpretation of the results of the Bayesian analysis has been carried out recalling that in planimetry the expected accuracies are on the order of tenths of a millimeter, and in altimetry on the order of a millimeter, with a significance level α = 5%. On the basis of these considerations, six different elaborations have been made, depending on the initial assumptions. The values of $P(\mu_h \neq 0)$ resulting from the comparison of all the survey measurements with the first are reported below with reference to the first, the third and the sixth elaboration.
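A compact sketch of both tests described above is given below, assuming the standard normal congruency test of Eq. (3) and a textbook normal-normal conjugate update standing in for Eq. (8), whose exact printed form is not fully recoverable here; all numeric inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def congruency_z(x_i, x_0, var_i, var_0, alpha=0.05):
    """Classical test: H0 says the point did not move between two epochs."""
    dx = x_i - x_0
    sigma = np.sqrt(var_i + var_0)      # std. dev. of the coordinate difference
    z = abs(dx) / sigma
    z_crit = norm.ppf(1 - alpha / 2)    # 1.96 for alpha = 5%
    return z, z > z_crit                # True -> statistically significant

def posterior_displacement(m, var_h, mu0, var0):
    """Conjugate normal-normal update for the mean displacement mu_h."""
    var_post = var_h * var0 / (var_h + var0)
    mu_post = var_post * (mu0 / var0 + m / var_h)
    return mu_post, var_post            # posterior mean and variance

z, moved = congruency_z(x_i=0.0123, x_0=0.0041, var_i=1e-6, var_0=1e-6)
mu_p, var_p = posterior_displacement(m=0.0082, var_h=2e-6,
                                     mu0=0.005, var0=0.006**2)
```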
RESULTS AND CONCLUSIONS The analysis of the results has been carried out by comparing all surveys with the "zero" survey, i.e., the first survey of 5/11/2010, and applying the verification tests. The results tables show significant positive altimetric variations, that is, subsidence of all the vertices except 10, 11, 12, 13 and 14; negative planimetric variations in the direction of the x-axis and positive variations in the direction of the y-axis for vertices 10, 11, 12, 13 and 14; and positive planimetric variations in the direction of the x-axis and negative variations in the direction of the y-axis for vertices 9, 17, 18 and 19. The maximum magnitude of these variations is on the order of a centimeter. For the remaining vertices, no conclusions can be drawn given the lower reliability of those measurements. Analyzing the displacements of each point, it would seem that the building undergoes differential movements. In fact, there is a slipping and lowering of the portion of the building resting on the quarry area, and a different behaviour of the remaining part of the building resting on the rock; the latter tends to break away from the remaining portion. Even though this hypothesis is not entirely confirmed by the classical test, it is fully supported by the results of the Bayesian analysis, especially when the prior data considered are those closest to the phenomena taking place (μ = 0.01 m and σ0 = 0.006 m). In light of the monitoring activities carried out in this study, geological and structural technical investigations have been undertaken that have shown the existence of concurrent causes of the movements. In particular, the construction of the Quadrifoglio building on an area previously used as a quarry and then filled with debris material characterized by a high degree of permeability has emerged as the main cause of the dynamics of the movements.
Figure 5. Scheme of the cornerstones of the network.
Figure 7. Control vertices located on the North and West sides.
Figure 13. Vertical displacement vectors on the West side.
Table 1. Accuracy of the TS30. The coordinates of the stations 100, 200, 300 and 400 have been calculated and re-determined as shown in table 2.
Table 2. Coordinates of the station vertices.
Table 3. Verification of the stability of station vertices. Starting from these vertices, the control points of the network have been adjusted both planimetrically and altimetrically. In particular, in table 5 the control points not visible from two station vertices are shown in yellow. Subsequently, the results of the different campaigns have been compared with those of survey zero (t0 corresponds to 5/11/2010) (table 4).
Table 5. Example of calculation.
Table 6. Differences of adjusted coordinates and statistical analysis results between the survey of 05/11/2010 and that of 25/02/2011.
In tables 10, 11, 12 and 13 the results of the test are reported. In particular, table 10 reports the first elaboration between the survey of 05/11/2010 and that of 25/02/2011 with prior data μ = 0.005 m and σ0 = 0.006 m. Table 11 reports the third elaboration for the same dates with prior data μ = 0.0075 m and σ0 = 0.006 m. Table 12 reports the sixth elaboration for the same dates with prior data μ = 0.01 m and σ0 = 0.006 m. Finally, table 13 shows the result of the sixth elaboration between the survey of 05/11/2010 and that of 25/05/2011 with prior data μ = 0.01 m and σ0 = 0.006 m.
Table 10. Results of Bayesian analysis with prior data μ = 0.005 m and σ0 = 0.006 m.
Table 11. Results of Bayesian analysis with prior data μ = 0.0075 m and σ0 = 0.006 m.
Table 12. Results of Bayesian analysis with prior data μ = 0.01 m and σ0 = 0.006 m.
Table 13. Results of Bayesian analysis with prior data μ = 0.01 m and σ0 = 0.006 m (survey of 05/11/2010 vs. 25/05/2011).
4,417
2014-01-07T00:00:00.000
[ "Mathematics" ]
LeLePhid: An Image Dataset for Aphid Detection and Infestation Severity on Lemon Leaves Aphids are small insects that feed on plant sap; they belong to the superfamily Aphidoidea. They are among the major pests causing damage to citrus crops in most parts of the world. Precise and automatic identification of aphids is needed to understand citrus pest dynamics and management. This article presents a dataset that contains 665 healthy and unhealthy lemon leaf images. The latter are leaves with the presence of aphids, characterized by visible white spots. Moreover, each image includes a set of annotations that identify the leaf, its health state, and the infestation severity according to the percentage of the affected area. Images were collected manually in real-world conditions in a lemon plant field in Junín, Manabí, Ecuador, during the winter, using a smartphone camera. The dataset is called LeLePhid: lemon (Le) leaf (Le) image dataset for aphid (Phid) detection and infestation severity. The data can facilitate evaluating models for image segmentation, detection, and classification problems related to plant disease recognition. Summary The dataset, called LeLePhid for short, provides images of lemon leaves. It contains 665 photos of the top and back of lemon tree leaves, both healthy and unhealthy; these were collected manually in citrus crops around Junín, Ecuador, in winter, from December to May, when the weather in this country is warm and rainy. The annotation process was carried out with the Labelbox © annotation tool, and to assign the severity of the infestation, three annotators manually inspected each image and set the grade of infestation severity according to [1] and the OIRSA method [2]. These data can be used for training, testing, and validation of computational models related to image segmentation and object detection in plant disease studies. At the same time, they can be helpful for researchers and professionals working on computer vision-based models for image classification and object detection using images of healthy leaves and leaves with the presence of aphids. The data annotations can be used to develop and improve the accuracy of lemon leaf aphid detection and infestation severity algorithms. Data Description The LeLePhid dataset provides lemon leaf images that can be used to develop and evaluate the performance of models for image segmentation, object detection, and classification problems related to plant diseases. The dataset contains imagery of the upper and back sides of leaves of lemon trees manually collected in citrus crops around Junín, Ecuador. On each image, the foreground leaf is identified and its status is labeled, i.e., healthy or aphid presence. The dataset also includes annotations identifying the infestation severity of the leaves affected by aphids. It can be used to design automatic aphid counting models because, as stated in [3], compared with manual counting, these models can calculate the percentage of the affected area by analyzing the image information. The released files for the LeLePhid dataset are two folders: the raw data are available in the "Images" folder (665 images of lemon leaves) and the pre-processed data are available in the "Annotation" folder (.json and .xlsx files). Samples are depicted in Figures 1 and 2. Figure 1 shows an example of the annotated images for segmentation purposes: a green outlined area identifies the lemon leaf
and purple areas mark the aphid presence. In Figure 2A, the class of the image is healthy, while in Figure 2B the class is aphids, i.e., the leaf shows the presence of this insect. In addition, Tables 1 and 2 describe the levels of infestation severity for each lemon leaf available in the dataset. Finally, Figure 3 shows the distribution of images by health status and level of aphid infestation. Methods LeLePhid is designed to support computer vision research related to image processing, with a particular focus on the detection and infestation severity of aphids on lemon leaves. The pipeline for the creation of this dataset is shown in Figure 4, comprising three steps: data acquisition, incorporation of annotations, and validation. In the following subsections, we detail each part of the process of creating this dataset. Data Acquisition The lemon leaf images were manually acquired on a crop field in a rural area of Junín, Manabí, using a 2-megapixel smartphone camera. Lemon images were captured following the procedure in [4] during cloudy, sunny, rainy, and windy days. The images were taken at a distance of 30-50 cm from the plant. The data capture process was performed in a time window of two weeks under different climatic conditions and background scenarios. We took 665 leaf images of the upper and back sides of healthy and unhealthy lemon plants. All images were rotated to a vertical position and resized to 800 × 600 pixels, keeping the aspect ratio. The process can be observed in Figure 4A. Annotations The annotation process was performed using the Labelbox © annotation tool and can be observed in Figure 4B. In the object segmentation annotation, for each image the foreground leaf is identified and, if the leaf is diseased, the area affected by aphids is also marked (Figure 1). In the classification annotation, each image is labeled healthy or aphids according to the leaf health status (Figure 2). These annotations were assigned based on a comprehensive evaluation of the leaf images according to the experience of the annotators. Figure 1 shows an example of an annotation where the green-outlined area identifies a lemon leaf and the magenta areas show the presence of aphids. In Figure 2, the labeled lemon leaf images are shown: in Figure 2A the image class is healthy, while in Figure 2B the class is aphids, i.e., the leaf shows the presence of this insect. Note that only certain areas with white spots and texture correspond to aphids; other spots related to other leaf conditions are not considered in this study. To assign the infestation severity of each leaf, three annotators manually inspected the image and set the grade of infestation severity according to [1] and the OIRSA method [2]. The description of the infestation severity grades of the affected area in lemon leaves can be observed in Table 1. Validation The validation process can be observed in Figure 4C. The consistency of the annotations was validated using the agreement between annotators. This was achieved by seeking matches in the category assigned to each image by the annotators. To quantify this, we used the kappa coefficient and the interpretation suggested by [5], summarized in Table 2. In Table 2, any kappa value below 0.60 indicates inadequate agreement among the annotators and that little confidence should be placed in the labeling process. Here, the percentage of data reliability corresponds to the squared kappa value.
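The paper does not state which kappa variant was computed; with three annotators, Fleiss' kappa is a natural choice, and the sketch below shows one standard implementation. The counts-matrix layout and the toy data are assumptions for illustration.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (N_items, K_categories) matrix; counts[i, j] is the number of
    annotators who assigned item i to category j. Each row sums to n raters."""
    counts = np.asarray(counts, float)
    N, _ = counts.shape
    n = counts.sum(axis=1)[0]                        # raters per item (constant)
    p_j = counts.sum(axis=0) / (N * n)               # overall category shares
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()    # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three annotators, five severity categories (levels 0-4), toy data:
toy = [[3, 0, 0, 0, 0], [0, 2, 1, 0, 0], [0, 0, 3, 0, 0]]
print(fleiss_kappa(toy))
```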
The final value of each label (level) was selected using a plurality strategy, i.e., a label was accepted when at least two of the three annotators agreed. In cases of ties, the value was chosen at random. The level of agreement obtained by our annotators was 91.0%, which corresponds to "almost perfect" agreement on the percentage of the area affected by aphids in LeLePhid. Finally, the LeLePhid dataset contains 665 lemon leaf images distributed into 330 healthy leaves and 335 leaves with aphid presence. The latter are categorized according to leaf infestation severity and distributed as summarized in Figure 3, which shows the image distribution according to the infestation severity levels. Note that there are 330 images with 0% affected area, i.e., they correspond to healthy photos. The 335 images with aphid presence are divided into four levels: 129 leaf images with less than 5% affected area (level 1); 161 images, the largest group, with between 5% and 20% affected area (level 2); 38 images with between 20% and 50% infestation (level 3); and seven leaf images with more than 50% affected area (level 4). User Notes The data described in this paper are from a citrus crop near Junín, Ecuador (latitude −0.9277, longitude −80.2058). They were acquired using a smartphone camera. The identification of the leaf, its state, and the area affected by aphids was individually incorporated as annotations over each image. The annotation is provided as a JSON file supported by common computer vision software. The possibilities of practical application are the following (a small code sketch of the severity mapping follows this list): • The data can be used to train, test, and validate computational models related to image classification in plant disease studies. In this sense, we already have evidence from a previous work [6], where convolutional neural networks (CNNs) were used to address a binary classification problem related to lemon leaves with aphid presence. The quality of LeLePhid was evidenced by allowing the model to achieve average rates between 81% and 97% of correct aphid classification. • The data can be helpful to researchers and professionals working on computer vision-based models for image segmentation and object detection using images of healthy leaves and leaves with aphid presence. Cases such as those discussed in [3,7] are examples of the potential that our dataset can offer from the point of view of continuous improvement of machine learning algorithms addressing segmentation and identification problems related to plant diseases. • The data can serve as motivation to encourage further research into the agriculture sector and computer vision methods for citrus pest identification. Image annotation is the data labeling technique used to make varied objects recognizable to computers. Our dataset includes image annotations of leaves and aphid-infected areas to make them recognizable, or even understandable, to computers. These annotations can be used to support large-scale monitoring of crop health through, for instance, devices such as UAVs (unmanned aerial vehicles) or drones; works in [8][9][10][11] have already demonstrated the benefits for the agricultural sector when drones are used in conjunction with computer vision.
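As the sketch promised above, the following derives the affected-area percentage from a pair of annotation masks and maps it to the five levels summarized in Figure 3; the exact placement of the 5%, 20% and 50% boundary values is an assumption, since the paper gives the ranges without specifying which level the endpoints belong to.

```python
import numpy as np

def infestation_level(leaf_mask, aphid_mask):
    """Percentage of the leaf area affected by aphids and its severity level.
    Both masks are boolean arrays of identical shape (leaf and aphid regions)."""
    affected = np.logical_and(aphid_mask, leaf_mask).sum()
    pct = 100.0 * affected / leaf_mask.sum()
    if pct == 0:
        level = 0          # healthy
    elif pct < 5:
        level = 1
    elif pct <= 20:
        level = 2          # boundary placement at 5/20/50 is an assumption
    elif pct <= 50:
        level = 3
    else:
        level = 4
    return pct, level
```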
Note that most of the images used by the algorithms discussed in the first two bullet points were captured in controlled environments, i.e., computer vision laboratories where the photos are treated artificially: constant backgrounds, homogeneous luminosity, and other conditions that do not usually occur in lemon crops. Our dataset stands out from the others because the images were captured during cloudy, rainy, sunny, and windy days and consider scenarios with a variety of backgrounds in a typical lemon crop. This ensures that the algorithms learn from images representative of the type and complexity of real-world scenes. Data Availability Statement: Data regarding images and annotations can be accessed at: repository name: LeLePhid; data identification number: DOI: 10.17632/tndhs2zng4; direct URL to data: https://data.mendeley.com/datasets/tndhs2zng4; accessed date: 13 January 2021. Conflicts of Interest: The authors declare no conflict of interest.
2,538.2
2021-05-17T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Millimeter wave transmittance/absorption measurements on micro and nano hexaferrites Millimeter wave transmittance measurements have been successfully performed on commercial samples of micro- and nano-sized particles of BaFe12O19 and SrFe12O19 hexaferrite powders and nano-sized particles of BaFeO2 and SrFeO2 powders. Broadband millimeter wave transmittance measurements have been performed using a free-space quasi-optical spectrometer equipped with a set of high power backward wave oscillators covering the frequency range of 30-120 GHz. Real and imaginary parts of the dielectric permittivity for both types of micro- and nanoferrites have been calculated from the analysis of recorded high precision transmittance spectra. Frequency dependences of the magnetic permeability of the ferrite powders, as well as the saturation magnetization and anisotropy field, have been determined based on Schlömann's theory for partially magnetized ferrites. Micro- and nano-sized ferrite powders have been further investigated by DC magnetization to assess magnetic behavior and compare with the millimeter wave data. Consistency of the saturation magnetization determined independently by both millimeter wave absorption and DC magnetization has been found for all ferrite powders. These materials seem to be quite promising as tunable millimeter wave absorbers and filters, based on their size-dependent absorption.
Millimeter wave transmittance measurements have been successfully performed on commercial samples of micro- and nano-sized particles of BaFe 12 O 19 and SrFe 12 O 19 hexaferrite powders and nano-sized particles of BaFeO 2 and SrFeO 2 powders. Broadband millimeter wave transmittance measurements have been performed using a free-space quasi-optical spectrometer equipped with a set of high power backward wave oscillators covering the frequency range of 30-120 GHz. Real and imaginary parts of the dielectric permittivity for both types of micro- and nanoferrites have been calculated from the analysis of recorded high precision transmittance spectra. Frequency dependences of the magnetic permeability of the ferrite powders, as well as the saturation magnetization and anisotropy field, have been determined based on Schlömann's theory for partially magnetized ferrites. Micro- and nano-sized ferrite powders have been further investigated by DC magnetization to assess magnetic behavior and compare with the millimeter wave data. Consistency of the saturation magnetization determined independently by both millimeter wave absorption and DC magnetization has been found for all ferrite powders. These materials seem to be quite promising as tunable millimeter wave absorbers and filters, based on their size-dependent absorption. © 2016 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). [http://dx.doi.org/10.1063/1.4973597] I. INTRODUCTION Ferrites and garnets are mostly ferromagnetic oxides with dielectric and magnetic properties that are useful and important for microwave and millimeter wave (MMW) applications.1 M-type hexagonal barium ferrite with the stoichiometric chemical formula BaFe 12 O 19 is well established as a low cost permanent magnet2 and as a high density magnetic recording medium.3 A free-space magneto-optical approach has been successfully employed to study ferrites at millimeter wave frequencies.4 This technique enables the acquisition of precise transmission spectra for determining the dielectric and magnetic properties of both isotropic and anisotropic ferrites from a single set of direct measurements. This paper examines the complex permittivity and permeability of micro- and nano-sized powdered barium and strontium hexaferrites and nano-sized particles of BaFeO 2 and SrFeO 2 powders in a broadband MMW frequency range from 30 to 120 GHz, encompassing the ferromagnetic resonance frequency of these materials. Micro- and nano-sized ferrite powders have also been investigated by DC magnetization at room temperature to assess magnetic behavior and compared with the millimeter wave absorption, based on the magneto-optical approach.
A. Samples preparation Ninety-nine percent pure barium (BaFe 12 O 19 ) and strontium (SrFe 12 O 19 ) microferrite powders have been developed by Advanced Ferrite Technology GmbH with an average fine particle size of 3 µm. Barium (99.5% pure) and strontium (99.8% pure) ferrite nanopowders with the same stoichiometric chemical formulas as the microferrites have been developed by Sigma-Aldrich, Inc. The average fine particle sizes of both barium and strontium nanoferrites are reported by the manufacturer to be less than 50 nm. Ninety-nine percent pure barium (BaFeO 2 ) and strontium (SrFeO 2 ) iron oxide nanopowders were developed by American Elements with an average fine particle size of 60 nm. Samples of different effective specific gravities (densities) have been prepared by uniformly packing the ferrite powders in specially fabricated transparent plane-parallel Mylar-walled containers with a thickness of 12 mm to ensure the accuracy of the millimeter wave measurements. B. Measurement technique The present study presents the characterization of the frequency dependent electromagnetic material properties studied by a free-space transmittance millimeter wave spectrometer [5-7]. High vacuum, high power backward wave oscillators (BWOs) were used as the sources of coherent radiation, continuously tunable from 30 to 120 GHz. A pair of pyramidal horn antennas and a set of polyethylene lenses along the propagation path from the source antenna to the receiver antenna form a Gaussian beam focused onto the sample. A simplified schematic diagram of the MMW spectroscopic system is shown in Fig. 1. The mathematical relationships between the transmittance and reflectance spectra and the refractive and absorption indexes are presented below (see also Refs. 5 and 6); in these relations c is the speed of light, n is the refractive index of the sample material, k is the absorption index, µ is the complex permeability of the sample material, ε is the complex dielectric permittivity, T is the transmittance, R is the reflectance, ϕ is the phase of the transmitted wave, and ψ is the phase of the reflected wave. The millimeter wave measurements have been performed in a frequency sweep mode. After obtaining the transmittance spectra of the ferrite materials, optimization procedures were applied to extract the best-fit dielectric and magnetic parameters of the measured samples (see Refs. 4, 7, and 8 for more details) [5-8]. Here, λ is the wavelength, and D and d are the cross section and thickness of a plane-parallel specimen, respectively. II. RESULTS AND DISCUSSION Transmittance spectra of all barium and strontium ferrite micro- and nano-powdered materials have been recorded in the millimeter wave range and are shown in Fig. 2. Values of the effective specific gravities are given in the inset of the graph.
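As an illustration of how such spectra can be modeled, the sketch below evaluates the textbook Airy (Fabry-Perot) transmittance of a plane-parallel slab in air at normal incidence, which reproduces the periodic fringe structure discussed just below; it assumes a non-magnetic sample (µ = 1), whereas the authors' analysis also involves µ(ω), and the parameter values are illustrative.

```python
import numpy as np

def slab_transmittance(freq_ghz, n, k, d_mm):
    """Coherent transmittance of a plane-parallel slab in air at normal
    incidence (standard Fabry-Perot/Airy result, non-magnetic case)."""
    c = 299.792458                      # speed of light in mm*GHz
    lam = c / np.asarray(freq_ghz)      # free-space wavelength in mm
    N = n + 1j * k                      # complex refractive index
    r = (1 - N) / (1 + N)               # air->slab amplitude reflection
    beta = 2 * np.pi * N * d_mm / lam   # complex phase across the slab
    t = (1 - r**2) * np.exp(1j * beta) / (1 - r**2 * np.exp(2j * beta))
    return np.abs(t) ** 2

f = np.linspace(30, 120, 451)
T = slab_transmittance(f, n=2.2, k=0.02, d_mm=12.0)   # illustrative n, k
```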
A quite deep (down to opaque) and relatively wide absorption zone has been observed in the transmittance spectra of both barium and strontium microferrites. This deep absorption is the natural ferromagnetic resonance, which shifts to the millimeter wave range due to the strong magnetic anisotropy of barium and strontium ferrites. The experimentally observed width of the absorption line does not reflect the actual width of the ferromagnetic resonance line [4,8]; the experimental width is presumably broadened because of saturation of the absorption line, a phenomenon well known in optics. For the nano-sized barium and strontium hexaferrite materials, as well as the barium and strontium iron oxide nanopowders, absorption in the MMW range due to ferromagnetic resonance is also observed. The periodic structure observed in all transmittance spectra at frequencies above the zone of deep absorption allows calculation of the dielectric constants of all materials (see Table I below). For the calculation of the complex magnetic permeability, Schlömann's equation [9] for partially magnetized ferrites has been used, where ω is the frequency, H A is the anisotropy field, 4πM S is the saturation magnetization, and γ is the gyromagnetic ratio. Demagnetizing factors are determined from Schlömann's model for nonellipsoidal bodies. The best match has been obtained by varying three parameters: the saturation magnetization, the anisotropy field and the gyromagnetic ratio. The anisotropy field can easily be determined from the frequency of the deep absorption zone f Res in the transmittance spectra, while the saturation magnetization strongly affects the absorption level at ferromagnetic resonance. The millimeter wave transmittance data are used to compute µ eff as above, and from this µ eff we have modeled the values of the anisotropy field and saturation magnetization. Frequency dependences of the complex magnetic permeability for the barium and strontium micro- and nano-sized powdered samples are shown in Fig. 3. Both barium and strontium microferrite materials show quite strong anisotropy fields of H A = 17.6 kOe and H A = 19.2 kOe, respectively [11-13]. Similar behavior has been observed for the nanoferrite powders: relatively strong anisotropy fields of H A = 15.3 kOe and H A = 17.1 kOe and very weak saturation magnetizations of 4πM S = 0.55 kG and 4πM S = 0.85 kG for barium and strontium nanoferrites, respectively. For the barium and strontium iron oxide nanopowders, relatively strong anisotropy fields of H A = 16.5 kOe and H A = 19.5 kOe and very weak saturation magnetizations of 4πM S = 0.46 kG and 4πM S = 0.63 kG have also been observed, respectively. The magnetic properties of all powdered materials are given below in Table I. The low values of saturation magnetization observed in diluted ferrite materials compared with pure ferrite ceramics can be explained by the presence of a considerable amount of dilution component [8]. Specific gravities (densities) of commercially available pressure-formed (sintered) solid ferrite ceramics and magnets are around 4.7-5.2 g/cm 3 for both barium and strontium ferrites. In the case of pure powdered materials, the air between the micro- and nanoferrite particles can be considered as a dilution component. The specific gravities of the microferrite powders are found to be about 2.5 times lower in comparison with the solid materials; for nanoferrites the difference is even bigger, 5 to 10 times. This dramatic difference in densities between pure solid and powdered ferrite materials explains the relatively low MMW absorption at ferromagnetic resonance for nanoferrites (see Fig. 2). It has also been shown for nanoparticles that surface spin disorder can result in a lower saturation magnetization than is expected for bulk materials [14,15].
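For orientation, a commonly quoted form of Schlömann's demagnetized-ferrite permeability is µ_eff(ω) = 1/3 + (2/3)·√[(ω² − ω_H(ω_H + ω_M))/(ω² − ω_H²)], with ω_H = γH_A and ω_M = γ4πM_S; the paper's exact partially magnetized expression is not reproduced in the extracted text, so the fitting sketch below should be read as illustrative only, with hypothetical data.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 2.8  # gyromagnetic ratio, GHz/kOe (free-electron value, assumed)

def mu_eff(f_ghz, h_a, four_pi_ms, gamma=GAMMA):
    """Commonly quoted Schloemann expression for a fully demagnetized
    ferrite (sketch; not the paper's exact partially magnetized form)."""
    w, wh, wm = np.asarray(f_ghz), gamma * h_a, gamma * four_pi_ms
    val = (w**2 - wh * (wh + wm)) / (w**2 - wh**2)
    return 1.0 / 3.0 + (2.0 / 3.0) * np.sqrt(val.astype(complex)).real

# Fit H_A and 4*pi*Ms to a measured mu(f) curve above resonance
# (mu_data here is synthetic/hypothetical, generated from the model).
f = np.linspace(60, 120, 200)
mu_data = mu_eff(f, 17.6, 3.0) + np.random.normal(0, 0.002, f.size)
popt, _ = curve_fit(mu_eff, f, mu_data, p0=(15.0, 1.0))
```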
The resonance frequency of the nanoferrites is found to be slightly shifted to lower frequency compared with the micro-sized barium and strontium ferrite materials [16-18]. Domain wall motion resonance is sensitive to both the microstructure of the polycrystalline ferrite (ferrite grain size) and the volume loading of the ferrite (the post-sintering density). The spin rotational relaxation, which becomes pronounced in the high frequency range, depends only on the volume loading of the ferrite and the dispersion parameters. Commercial milling techniques in magnetic materials technology usually reduce the particle sizes from multi-domain to single domain; however, the particle size of milled powder has a broad distribution of 2 to 5 µm for microferrites. For barium and strontium ferrites, the critical domain diameter is about 1 µm, and the strong resonances exhibited in the obtained spectra can be accounted for by domain wall motion, as the particle size under study is not sufficiently small to approach single domain characteristics. For the nanoferrite powdered materials, the grain size is found to be around 50 nm, i.e., more than ten times smaller than the domain dimensions (sub-domain size). Manipulating the grain sizes and specific gravities of the ferrite powders changes the ferromagnetic resonance frequency and the level of millimeter wave absorption. Both factors, the shift of the resonance absorption and the level (power) of absorption, seem to be very helpful for millimeter wave applications. The micro- and nano-sized ferrite powders have been further investigated by DC magnetization to assess magnetic behavior and compare with the millimeter wave data; the hysteresis curves are shown in Fig. 5. Saturation magnetization values determined from the DC magnetization results are found to be slightly higher (~10-15%) compared with the MMW measurements. This can be attributed to the vibrating sample magnetometer effectively operating at slightly higher powder densities (~10-15%) due to vibration of the powders. The dielectric constants, resonance frequency, anisotropy field and saturation magnetization for all micro- and nano-sized barium and strontium ferrite powders are systematized and presented in Table I. III. SUMMARY AND CONCLUSION Micro- and nano-powdered barium (BaFe 12 O 19 ) and strontium (SrFe 12 O 19 ) hexaferrite materials, as well as nano-sized barium (BaFeO 3 ) and strontium (SrFeO 3 ) iron oxide powders, have been investigated in the millimeter wave range. Broadband transmittance spectra have been measured using a free-space, quasi-optical spectrometer. The complex dielectric permittivity and magnetic permeability of the micro- and nanoferrites have been calculated from the transmittance spectra. Absorption zones centered around 49 and 53 GHz have been observed in the transmittance spectra of the barium and strontium powdered microferrites, respectively, due to natural (spin) ferromagnetic resonance. Pronounced absorption peaks have also been observed at 42.5 and 48.2 GHz for the powdered barium and strontium nanoferrites, respectively. Significant absorption peaks have also been observed for the barium (46.1 GHz) and strontium (54.5 GHz) iron oxide powders. Magnetic properties, including the saturation magnetization and anisotropy field, have been determined based on Schlömann's theory for partially magnetized ferrites. Consistency of the saturation magnetization determined independently by both millimeter wave absorption and DC magnetization has been found for all ferrite powders.
The presence of air in both micro- and nanoferrite powdered materials has been considered as a dilution component for the pure ferrite. The influence of the particle size of the ferrites on the ferromagnetic resonance frequency and on the level of MMW absorption has been demonstrated. Tunable millimeter wave absorbers and filters, based on manipulation of the physical properties of micro- and nano-sized powdered ferrite materials, are suggested.
FIG. 1. Schematic diagram of the free-space quasi-optical millimeter-wave spectrometer operated in transmittance mode.
FIG. 2. Millimeter wave transmittance spectra of all barium and strontium ferrite micro- and nano-powdered materials under study.
FIG. 3. (a) Frequency dependences of the real part of the magnetic permeability of micro- and nano-sized barium and strontium hexaferrite powders; (b) frequency dependences of the imaginary part of the magnetic permeability of micro- and nano-sized barium and strontium hexaferrite powders.
FIG. 5. (a) Hysteresis curves of barium and strontium micro- and nano-sized hexaferrite powders; (b) hysteresis curves of barium and strontium iron oxide nanopowders.
TABLE I. Dielectric constant and magnetic parameters of micro- and nanoferrite powders.
3,228.4
2017-05-01T00:00:00.000
[ "Materials Science" ]
Analytical and Numerical Analysis of Functionally Graded Heat Conduction Based on Dirichlet Boundary Conditions An analytical and numerical solution for one-dimensional heat conduction in a slab exposed to different temperatures at both ends is presented. The distribution of heat through the slab follows a functionally graded (FG) temperature profile subject to Dirichlet boundary conditions. The variation of the functionally graded temperature can be described by any continuous function. In this case, where the external heat fluxes are not directly defined by the Dirichlet or mixed boundary conditions, the fluxes across the slab faces are free to vary until the equilibrium condition is reached. By numerically solving the resulting heat-conduction equation, the time-varying temperature distribution through the slab is obtained. The analytical results are presented graphically, and the influence of the gradient variation of the temperature on the profile formed over time is investigated. Introduction Functionally graded materials (FGMs) offer excellent mechanical, thermal and corrosion-resistance properties. Unlike fiber-matrix laminated composites, FGMs do not suffer from de-bonding caused by large inter-laminar and thermal stresses. The concept of FGMs was first introduced by Japanese researchers in the mid-1980s as ultra-high temperature resistant materials for engineering fields such as aircraft, space vehicles, and nuclear reactors. FGMs consist of two or more microscopically inhomogeneous materials, such as a ceramic-metal pair, whose composition varies spatially. The mechanical properties of the material change gradually through the thickness, varying continuously and smoothly from the top to the bottom surface. Noda [1] reviewed topics ranging from thermoelastic to thermoinelastic problems and suggested that temperature-dependent material properties should be taken into account in order to achieve more accurate analysis. For a long period, FGM studies focused on the analysis of thermal stresses in ceramic coatings, static deformation, and forced vibration. Noda and Jin [2,3] analyzed steady thermal stresses for a cracked elastic solid in a nonhomogeneous infinite medium and characterized the thermal stress intensity for cracks in FGMs. Cho and Oden [4] used a Crank-Nicolson and Galerkin scheme for a parametric study of thermal stress characteristics. Reddy [5] proposed a theoretical formulation and finite element models for the analysis of functionally graded plates (FGPs).
Praveen and Reddy [6] studied the static and dynamic thermoelastic responses of FGPs and concluded that their responses differ from those of pure ceramic or metal plates. Yang and Shen [7] presented free and forced vibration analyses of functionally graded plates subjected to impulsive lateral loads under thermal environments. Dirichlet boundary conditions (DBCs) generally take two forms: homogeneous Dirichlet boundary conditions (HDBCs) and inhomogeneous Dirichlet boundary conditions (IDBCs); the former can be considered a special case of the latter with zero imposed value. Zhang and Zhao [8] developed a weighted finite cell method (FCM) with high computing accuracy, extended to define a boundary value function so that the IDBCs are imposed exactly. Theoretical Formulation There are many models for expressing the variation of material properties in FGMs, the most commonly used being the power law distribution. In this study, a new expression for the temperature variation through the slab is assumed based on two parameters, m and k, which define the variation (Eq. (1)); here φ 0 = ɸ(x = a) is the initial value of the variation. Eq. (1) is a nonlinear function whose shape is controlled by the two parameters, and one may observe that adjusting m and k to describe a desired variation is not straightforward. Figure 1 shows the variation of the temperature of the metal in the transverse direction of the slab. Exact Solution of the Heat Equation Consider the one-dimensional diffusion equation, a partial differential equation for the temperature T(x,t):

$$c(x)\,\frac{\partial T}{\partial t} = \kappa\,\frac{\partial^2 T}{\partial x^2} + S(x,t),$$

where c(x) is the specific heat of the material, κ is the constant of proportionality (thermal conductivity) of the material, and S(x,t) represents a given source of heat energy per unit volume. The equation simplifies when κ and c are independent of position:

$$\frac{\partial T}{\partial t} = \chi\,\frac{\partial^2 T}{\partial x^2} + \frac{S(x,t)}{c}, \qquad \chi = \frac{\kappa}{c},$$

where χ is the material thermal diffusivity. The heat equation is first order in time, so the initial temperature T(x,0) must be prescribed, while the second-order spatial term requires two boundary conditions. Putting the faces in suitable thermal contact at specified temperatures T1(t) and T2(t) yields the Dirichlet boundary conditions

$$T(0,t) = T_1, \qquad T(L,t) = T_2.$$

The equilibrium solution ɸ(x) satisfies these boundary conditions as well as the time-independent heat equation $\kappa\,\phi''(x) + S(x) = 0$. Direct (analytical) integration gives the general solution

$$\phi(x) = C_1 + C_2\,x - \frac{1}{\kappa}\int_0^{x}\!\!\int_0^{x'} S(x'')\,dx''\,dx',$$

where the constants C1 and C2 are chosen to match the boundary conditions. For a temperature balance to exist, there can be no net heat energy infused into the slab by the source, $\int_0^L S(x)\,dx = 0$; otherwise the temperature would increase or decrease as the overall energy content in the slab varied with time. The relation between the equilibrium temperature and the deviation from equilibrium ΔT(x,t) is

$$T(x,t) = \phi(x) + \Delta T(x,t). \tag{10}$$

Substituting Eq. (10) into the heat equation and using the equilibrium condition shows that ΔT(x,t) satisfies the homogeneous heat equation

$$\frac{\partial\,\Delta T}{\partial t} = \chi\,\frac{\partial^2 \Delta T}{\partial x^2}, \tag{11}$$

with homogeneous boundary conditions $\Delta T(0,t) = \Delta T(L,t) = 0$ and initial condition $\Delta T(x,0) = T(x,0) - \phi(x)$ (12). We consider only the case where κ and c are constants. We look for a solution of the form $\Delta T(x,t) = X(x)\,G(t)$ (14), which can be separated into two ODEs in the independent variables x and t by setting each separated side of Eq. (14) equal to a constant λ:

$$G'(t) = -\chi\,\lambda\,G(t), \qquad X''(x) = -\lambda\,X(x). \tag{15, 16}$$

After applying the Dirichlet boundary conditions, the solution of Eq. (16) for the eigenmodes is

$$X_n(x) = \sin\!\left(\frac{n\pi x}{L}\right), \qquad \lambda_n = \left(\frac{n\pi}{L}\right)^2, \quad n = 1, 2, \dots,$$

and the solution of Eq. (15) provides the time-dependent amplitude for each eigenmode,

$$G_n(t) = D\,e^{-\chi \lambda_n t}.$$

Therefore, the general solution of the heat equation away from equilibrium is

$$\Delta T(x,t) = \sum_{n=1}^{\infty} A_n \sin\!\left(\frac{n\pi x}{L}\right) e^{-\chi (n\pi/L)^2 t}, \tag{20}$$

where the constant D has been absorbed into the Fourier coefficients A_n, which are found by matching the initial condition, Eq. (12). Equation (21), $\Delta T(x,0) = \sum_n A_n \sin(n\pi x/L)$, is a Fourier sine series whose coefficients are determined as

$$A_n = \frac{2}{L}\int_0^L \Delta T(x,0)\,\sin\!\left(\frac{n\pi x}{L}\right) dx. \tag{22}$$

Equations (20) and (22), obtained by applying the Dirichlet boundary conditions to a uniform slab, constitute the solution for the deviation from equilibrium. In addition, the integral in Eq. (22) is evaluated numerically using the trapezoidal method, as shown in the numerical example. The Crank-Nicolson Method The consistency between the solutions of the continuous and discrete problems does not by itself guarantee convergence; the numerical solution T_Δx converges to the exact solution in a given norm if the difference between the differential and discrete operators applied to a sufficiently smooth function tends to zero as the grid is refined. In the Crank-Nicolson scheme, the space derivative approximation is the average of the approximations at the points (x_i, t_j) and (x_i, t_{j+1}):

$$\frac{T_i^{\,j+1} - T_i^{\,j}}{\Delta t} = \frac{\chi}{2\,\Delta x^2}\left[\left(T_{i+1}^{\,j+1} - 2T_i^{\,j+1} + T_{i-1}^{\,j+1}\right) + \left(T_{i+1}^{\,j} - 2T_i^{\,j} + T_{i-1}^{\,j}\right)\right]. \tag{24}$$

Introducing $\alpha = \chi\,\Delta t/\Delta x^2$, one can rewrite Eq. (24) as

$$-\frac{\alpha}{2}T_{i-1}^{\,j+1} + (1+\alpha)\,T_i^{\,j+1} - \frac{\alpha}{2}T_{i+1}^{\,j+1} = \frac{\alpha}{2}T_{i-1}^{\,j} + (1-\alpha)\,T_i^{\,j} + \frac{\alpha}{2}T_{i+1}^{\,j}. \tag{25}$$

The terms appearing on the right-hand side of Eq. (25) are known. Hence, Eq. (25) forms a tridiagonal linear system (AT = b) which can be solved to find the temperature at every node at any point in time. The stability analysis proceeds in a few simple steps by writing the scheme as two stages, an explicit and an implicit one; the resulting amplification factor has modulus not exceeding one for any α, so the scheme is consistent and unconditionally stable. Numerical Example In this section, the distribution of temperature through the transverse direction of the slab is tested based on the assumed temperature gradient; the properties of the slab are selected through the parameters in Eq. (1), and the results are summarized in Table 1. Conclusions In this paper, a simple method to solve the transient response problem of a functionally graded temperature variation in a 1-D slab has been proposed. The assumed temperature is embedded into the diffusion equation with no heat source, and the exact solution is obtained. In addition, the integral part of the A_n coefficient in Eq. (22) is solved numerically using the trapezoidal method, and the difference is exhibited in Table 1; the differences in temperature between the two methods are very small. The Crank-Nicolson (CN) method is applied to solve the diffusion equation based on the given variation ɸ(x). The assumed variation is specified by the two parameters k and m, and the temperature at any point tends toward the assumed variation with time. By checking the numerical method against the shape of the variation function, the times at which the numerical solution reaches the exact solution are computed. Using these methods, the time required to reach the assumed profile can be calculated at any position under the given Dirichlet boundary conditions.
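A minimal sketch of the two numerical ingredients described above, the trapezoidal evaluation of the Fourier coefficients of Eq. (22) and the Crank-Nicolson tridiagonal system of Eq. (25), is given below; it assumes constant diffusivity and time-independent boundary temperatures, and all parameter values are illustrative.

```python
import numpy as np

def fourier_coeff(m, x, dT0):
    """A_m of the sine series for the deviation from equilibrium, Eq. (22),
    evaluated with the trapezoidal rule as in the paper's numerical example."""
    L = x[-1] - x[0]
    return (2.0 / L) * np.trapz(dT0 * np.sin(m * np.pi * (x - x[0]) / L), x)

def crank_nicolson(T0, T1, T2, chi, dx, dt, steps):
    """Crank-Nicolson scheme for dT/dt = chi * d2T/dx2 on a slab with
    Dirichlet ends T(0,t) = T1 and T(L,t) = T2 (constants assumed).
    T0 : initial temperature profile including the boundary nodes."""
    alpha = chi * dt / dx**2
    n = T0.size - 2                       # number of interior nodes
    # Tridiagonal matrix A for the implicit (j+1) side of Eq. (25).
    A = np.diag((1 + alpha) * np.ones(n)) \
      + np.diag(-(alpha / 2) * np.ones(n - 1), 1) \
      + np.diag(-(alpha / 2) * np.ones(n - 1), -1)
    T = T0.astype(float)
    for _ in range(steps):
        # Known right-hand side of Eq. (25), built from time level j.
        b = (alpha / 2) * T[:-2] + (1 - alpha) * T[1:-1] + (alpha / 2) * T[2:]
        b[0] += (alpha / 2) * T1          # boundary contributions at level j+1
        b[-1] += (alpha / 2) * T2
        T[1:-1] = np.linalg.solve(A, b)
        T[0], T[-1] = T1, T2
    return T

# Example: unit-length slab, 50 cells, uniform start at the left-face value.
x = np.linspace(0.0, 1.0, 51)
T = crank_nicolson(np.full(51, 20.0), T1=20.0, T2=80.0,
                   chi=1e-3, dx=x[1] - x[0], dt=0.1, steps=500)
```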
Graphene Oxide/Fe-Based Composite Pre-Polymerized Coagulants: Synthesis, Characterization, and Potential Application in Water Treatment This study presents for the first time the synthesis and characterization of GO (graphene oxide), PFSiC (polyferric silicate chloride), and hybrid GO-PFSiC derivatives, aiming to synergistically enhance the coagulation performance in water treatment. The structure and the morphology of the composite GO-PFSiC coagulants were studied in detail by the application of FTIR, XRD, and SEM characterization techniques. Furthermore, the proposed coagulants were applied for the treatment of simulated turbid surface water. The effects of the reagent's dosage, pH value, and experimental/operational conditions on the coagulation efficiency, applied mainly for the removal of turbidity, were examined. The results obtained from the FTIR and XRD measurements showed the presence of a bond between the PFSiC and the GO surface, indicating that the PFSiC particles are distributed uniformly on the surface of graphene, which was also confirmed by the SEM images. Especially, the composite compound GO-PFSiC1.5-15-0.5 presents the most uniform distribution of iron on the surface of graphene oxide and exhibits the optimum coagulation efficiency, significantly reducing the turbidity for doses above 3-5 mg/L, i.e., achieving the respective legislation limit as proposed by the WHO. Specifically, at alkaline pH values (>7.9), the removal of turbidity reaches 96%. Consequently, the results of this study render these materials potential coagulant agents for further research and applications, aiming to also achieve the co-removal of other water components. Introduction The rapid development of nanomaterials, which present the advantages of larger surface areas and more activated functionalized sites, brings an alternative way to prepare novel specific reagents dedicated to enhanced water purification [1,2]. Carbon-based nanomaterials are the best-known representatives in this category, due to their numerous unique advantages and the considerable number of existing studies [3][4][5][6]. The most promising materials for treating and delivering pure water are considered those that incorporate nanoscale features and tailorable chemical properties [7]. Graphene [8,9], a two-dimensional (2-D) material, has attracted increasing interest in the field of composite materials, because of its excellent thermal [10], electrical [11], and mechanical [12] properties. This unique 2-D plane structure, coupled with an extremely high surface area, makes graphene an ideal support material. Preparation of Graphene Oxide (GO) The graphite flakes used for the preparation of graphene oxide were purchased from Sigma Aldrich (USA). Graphite oxide (GrO) was prepared by using the modified Hummers method [33], as improved by Debnath et al. (2014) [34]. In brief, graphite flakes (5.0 g) and NaNO3 (2.5 g) were mixed with 120 mL of H2SO4 (95%) in a 500-mL flask. The mixture was stirred for 30 min in an ice bath. Afterwards, 15.0 g KMnO4 was added into the mixture under vigorous stirring, keeping the reaction temperature lower than 20 °C. The mixture was further stirred at room temperature overnight. Thereafter, another 15.0 g of KMnO4 was added into the previous mixture and stirred at room temperature for another 2 h, aiming to completely oxidize any remaining graphite.
On addition of deionized water into this mixture, a yellowish paste was obtained and the temperature increased to about 98 °C with effervescence. This diluted suspension was then stirred for another 4 h, and 50 mL of H2O2 (30%) were added afterwards to the mixture. As the reaction progressed, the color of the pasty mixture gradually turned light brownish. The mixture was subsequently washed with 100 mL of 5% HCl and 30% H2O2 (five times) to purify it from any residuals of MnO2 and sulfates. Finally, a solid was obtained after drying at 60 °C under vacuum. Graphene oxide (GO) was prepared from the solid graphite oxide by sonicating the solid in double-distilled water, aiming to convert it into a stable colloidal suspension. Then, the colloidal graphene oxide (GO) was precipitated by using 0.1 M NaOH solution, the solid precipitates were subsequently filtered through a 0.45-µm nylon membrane filter to separate the liquid phase, and finally they were dried at 60 °C under vacuum; the GO was obtained as a grey powder. Preparation of PFSiC Composite poly-ferric-silicate chloride (PFSiC) coagulants were produced at room temperature (22 °C), according to a modified procedure of Zouboulis and Moussas (2008) [25] and Tolkou et al. (2015) [27], under various experimental conditions. The initial solutions used for the preparation of the pre-polymerized coagulants were 0.5 M FeCl3·6H2O (Merck) and 0.5 M NaOH (Merck, as the added alkaline agent). The 0.5 M polysilicic acid solution (pSi) was prepared according to the respective literature [26]: 1.74 mL of pSi solution were added to 20 mL of 0.5 M FeCl3·6H2O solution under vigorous stirring (0.3 mL/min, 20 rpm), and subsequently, 30 mL of alkaline solution were added slowly under magnetic stirring (0.1 mL/min, 70 rpm) into the mixture at a temperature of 30 °C. The produced composite pre-polymerized material, termed PFSiC1.5-15 to indicate the proportions of the aforementioned components, was left under stirring to mature for about 1 h and then diluted with water to a final concentration of 0.1 M (relevant to Fe). The main physico-chemical properties of this coagulant agent (used in the subsequent experiments) were pH value 1.9, turbidity 65 NTU, and conductivity 29 mS/cm. Preparation of Composite GO-PFSiC Material The Fe-containing graphene oxide composite materials were prepared after the impregnation of the prepared graphene oxide with the pre-polymerized Fe-based coagulant (PFSiC), followed by thermal treatment at 85 °C. For a typical impregnation procedure, taking, e.g., GO-PFSiC1.5-15-1 as a typical case with a molar ratio of [GO]/[Fe] = 1, 100 mg of GO were dispersed in 100 mL of water (1 g/L). The appropriate amount of aqueous PFSiC1.5-15 solution, i.e., 100 mg Fe (as 17.3 mL of 0.1 M coagulant solution), was added. Because of the highly acidic pH value of PFSiC1.5-15, the precipitation of ferric (hydr)oxide species [35] was avoided. After 30 min of stirring, the dispersion was sonicated for 30 min. It was then filtered, washed with deionized water, and finally dried in air. A weighed amount of the dried impregnated graphene oxide composite was placed in a furnace and heated overnight to 85 °C. Two more relevant materials, but with different molar ratios, were also synthesized by following the same procedure, i.e., Table 1, while the weight ratio of Fe to GO was increased stepwise in order to define the most efficient coagulant.
Small portions of coagulant powders were obtained after drying the aqueous solution of PFSiC in the oven (~40 °C). GO and the GO-PFSiC composites were already in powdered form and were used to observe the morphology of the respective products by employing a ZEISS EVO 50 scanning microscope at EHT = 10 kV. The SEM sample was prepared by placing a drop of a dilute ethanol dispersion of the prepared composites onto a copper plate attached to an aluminum sample holder, and the solvent was allowed to evaporate at room temperature. FT-Infrared Spectroscopy (FTIR) FTIR spectra of the prepared composite products were recorded in the range of 4000-400 cm−1, by using a Nicolet 380 FTIR Spectrometer (Thermo Scientific), to allow the observation of surface species and the possible polymeric interactions between the components. X-ray Diffraction Spectroscopy (XRD) X-ray diffraction spectroscopy (XRD) is an important tool for the identification of crystalline compounds and it was applied to identify the possibly formed crystalline phases. The Malvern Panalytical Xpert instrument (Cu-Kα radiation) was used and the solid samples were analyzed in the range of 20°-80° 2θ with a scan rate of 1°/min. Coagulation Experiments Performed by Jar-Tests Conventional jar-test experiments were applied for the examination of the coagulants' efficiency, by using a jar-test apparatus (Aqualytic) equipped with six paddles, employing 1-L glass beakers. The simulated contaminated surface water (1 L) was prepared by using tap water of Thessaloniki (N. Greece) and a clay (kaolin) suspension with an initial concentration of suspended clay particles of 10 mg/L. The pH was properly adjusted by using HCl or NaOH solutions of appropriate concentrations (1-0.01 M) and measured by a Metrohm Herisau pH meter. The properties of the water samples are presented in Table 2, while the conditions used in the jar-test experimental runs are shown in Table 3. After the coagulation treatment, samples were collected from the supernatant of each beaker and were analyzed for the determination of turbidity (NTU), by using a HACH RATIO/XR Turbidimeter. In order to evaluate the synergistic effect of GO and PFSiC in water treatment, the residual turbidity values obtained by the application of the combined materials were compared with those obtained with the standalone GO and PFSiC reagents. For the clay suspension, the residual turbidity percentage (RT, %) was expressed as

$$\mathrm{RT}\,(\%) = \frac{T}{T_0} \times 100,$$

where T and T0 are the turbidities of the treated and raw water, respectively. Characterization The characterization of the new composite materials by FTIR measurements was carried out to confirm the bonding or types of functional groups of the molecules that are present in the GO and can be impregnated into the new GO-PFSiC materials. The FTIR analysis was performed at wavenumbers 400-4000 cm−1, the range in which the characteristic absorptions of the functional group vibrations presented by the GO and the composites can be identified. In Figure 1, the FTIR spectra are shown, comparing the different examined materials, i.e., GO, PFSiC1.5-15, GO-PFSiC1.5-15-1, GO-PFSiC1.5-15-0.5, and GO-PFSiC1.5-15-0.3.
In the FTIR spectra, primary hydroxyl C-OH (1380 cm−1) groups, epoxy C-O groups (1186 cm−1), and carbonyl C=O (1820 cm−1) groups can be detected in the GO spectrum, which reveals that large numbers of oxygen-containing groups have been introduced into, and interact with, the GO structure [36,37]; note also the absence of absorption at the wavenumber 1725 cm−1, which matches the characteristics of the C=C aromatic stretching vibration, indicating that GO is completely oxidized [38]. The band at 1038 cm−1, found only in the spectra of the pre-polymerized coagulants, can be assigned to the asymmetric Si-O stretching vibrations of Si-O-Si bonds, which is indicative of the high polymerization degree of silica. In addition, two characteristic peaks exist at 1100 and 1080 cm−1 that could be attributed to the asymmetric stretching vibrations of Fe-O-Fe [30] and Si-O-Fe bonds [39], respectively. Moreover, in the spectra of PFSiC1.5-15 and of the composite GO-PFSiC materials, a peak at the wavenumber 3279.75 cm−1 indicates the characteristic absorption of stretching vibrations of the -OH group; the wide peaks in this vibration range are characteristic of -OH groups that undergo hydrogen bond formation. The absorption at the wavenumber 1623.36 cm−1 expresses the characteristic stretching vibration of the C=C group. The FTIR analysis can also provide a strong clue to the formation of the coordination bond C-O-Fe between the GO sheets and the iron particles. This can be noticed from the presence of the characteristic absorption bands of the stretching vibrations of Fe-O in the PFSiC material at the wavenumber 617.19 cm−1, which confirms the success of the PFSiC loading/interactions with the GO sheets. The difference observed between the FTIR spectrum of GO and those of the GO-PFSiC materials is a weakening of the absorption intensity at around 1580 cm−1 for the composites as compared with simple GO; absorption at the wavenumber 1580 cm−1 is considered characteristic of the carboxyl group. This shows that there has been a reduction in the quantity of carboxyl groups, because they act as ligands that interact with the Fe atoms of PFSiC [38,40,41]. The morphology and structure of GO and of the relevant composites were further examined by obtaining representative SEM images of these materials. As typical examples, the SEM images of GO, PFSiC, and GO-PFSiC materials are shown in Figure 2. Both the pristine GO and the prepared impregnated GO-PFSiC materials were found to exhibit a typically wrinkled sheet-like structure, indicating that iron was homogeneously distributed onto the GO sheets. Especially, the composite GO-PFSiC1.5-15-0.5, as illustrated in Figure 2d, presents the most uniform distribution of iron on the surface of graphene oxide. The results obtained from the XRD analysis are presented in Figure 3 for the cases of GO (Figure 3a), PFSiC1.5-15 (Figure 3b), and the GO-PFSiC hybrids (Figure 3c-e). From the XRD patterns, similar diffraction peaks at 35.6°, 43.3°, 53.6°, 57.3°, and 62.8° can be detected in Figure 3b-e, which are the characteristic peaks of cubic Fe [42], indicating that the respective peaks of PFSiC1.5-15 are formed in the composites and that the PFSiC particles are distributed uniformly on the surface of graphene, as also confirmed by the SEM images (Figure 2) [43]. It should be noted, however, that the intense and characteristic peak at 45° belongs to the sample carrier used during the analysis by the XRD instrument and not to another compound contained in the sample [24]. In addition, the sharp peak at 32° is observed only in Figure 3b, which corresponds to PFSiC1.5-15, and belongs to NaCl that is produced during the preparation of the PFSiC1.5-15 solution; it disappears in Figure 3c-e, which present the XRD patterns of the solid composite GO-PFSiC hybrids. However, it is observed that with the increase of the Fe/GO ratio (Figure 3b-e, respectively), the intensity of the peak at 32° gradually decreases, indicating that at a higher [Fe]/[GO] ratio, amorphous ferric hydroxide materials are intercalated between the GO sheets and a homogeneous amorphous composite is formed [13]. In addition, as shown in Figure 3a, the GO material presents two characteristic peaks at 25.58° and 38.8°, corresponding to the partially oxidized part that shows few layers of graphene, and to natural graphite, which undergoes imperfect oxidation, respectively. The presence of a bond between the PFSiC and the GO surface can be seen from the shift of the GO peak to 30.58°, which is caused by the reduction of some graphene oxide to graphene, due to the precipitation reaction of Fe ions [38,44].
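The 2θ positions quoted above translate into lattice spacings through Bragg's law, d = λ/(2 sin θ). The short sketch below is an illustrative conversion only, assuming Cu-Kα radiation (λ ≈ 1.5406 Å); it is not part of the original analysis.

```python
import math

WAVELENGTH = 1.5406  # Cu-K-alpha wavelength in angstroms (assumed)

def d_spacing(two_theta_deg, wavelength=WAVELENGTH):
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2-theta positions discussed in the text (degrees)
for peak in (25.58, 30.58, 38.8):
    print(f"2theta = {peak:5.2f} deg  ->  d = {d_spacing(peak):.3f} A")
```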
Determination of Optimum Coagulation Experimental Conditions In Figure 4, the results of the experiments applied for the determination of the optimum coagulation conditions during the jar-test experimental runs are presented. For this purpose, GO and PFSiC1.5-15 were compared regarding the removal of turbidity, expressed as the residual turbidity (NTU), (a) for different sedimentation times, and (b) for different rapid mixing period times (Table 3), based on previous research tests [27], by using 10 mg/L as an indicative dosage. Particularly, when studying the different applied sedimentation times (Figure 4a), the solutions were stirred rapidly at 160 rpm for 2 min, followed by slow stirring at 40 rpm for 10 min, and then the suspension was rested for 30-60 min without any stirring to allow the separation by sedimentation of the produced flocs. As noticed in Figure 4a, the optimum sedimentation time was found to be 50 min, where the system reaches a dynamic equilibrium, indicated by the relatively constant value of turbidity after 50 min of sedimentation. Then, maintaining the optimal settling time at 50 min, different durations of the initial rapid mixing period (Figure 4b) were tested at 160 rpm for 1, 1.5, 2, and 2.5 min, followed by slow stirring at 40 rpm for 10 min and 50 min of sedimentation. According to Figure 4b, the optimum time in this case was 2 min of rapid mixing; after that, the residual turbidity (NTU) remains rather constant. Therefore, the jar-test protocol selected for the optimization of the coagulation performance was rapid mixing at 160 rpm for 2 min, slow stirring at 40 rpm for 10 min, and finally, sedimentation for 50 min. Determination of the Optimum Dosage and Type of Material The coagulant dosage is considered to be among the most critical parameters that have a significant effect on the efficiency of the coagulation process. All coagulants prepared in this study were compared in terms of turbidity removal (Figure 5) (expressed as NTU units), in order to identify the optimum type of coagulant and, subsequently, to determine the optimal dosage of the examined materials by treating simulated surface water samples; the respective initial values of this water were 17.2 NTU and pH 7.6 ± 0.2 (Table 2). According to the obtained results, it can be observed that the application of the composite/hybrid materials shows a better coagulation efficiency than pure graphene oxide (GO) or the simple PFSiC1.5-15 coagulant. Particularly, the composite GO-PFSiC1.5-15-0.5 with the ratio [GO]/[Fe] = 0.5 was found to significantly reduce the turbidity for dosages above 3-5 mg/L, i.e., much lower than the relevant data found in the respective literature, e.g., >24 mg/L [23]. In addition, the turbidity removal increased as the dosage of GO-PFSiC1.5-15-0.5 was increased, reaching 93% (~1 NTU) for a dosage of 30 mg/L.
According to the recommendations of the World Health Organization [32], turbidity must be lower than 5 NTU before the water can be adequately sanitized for human consumption and should ideally be below 1 NTU. It is worth noting that the optimum GO-PFSiC1.5-15-0.5 material, as illustrated in Figure 2d, presents the most uniform distribution of iron on the surface of graphene oxide. Determination of Optimum pH The effect of pH on the coagulation ability of the optimal GO-PFSiC1.5-15-0.5 material, as obtained from the previous experiments, was examined in the pH range 4-9, by using 10 mg/L of GO-PFSiC1.5-15-0.5. The removal of turbidity, expressed as a percentage (Figure 6a) and as residual values (NTU) (Figure 6b), is presented in Figure 6. According to the results, the water treated with the composite material GO-PFSiC1.5-15-0.5 can meet the respective legislation (WHO) limits when the pH value is increased above 8. In particular, at more alkaline pH values (>7.9), the removal of turbidity reaches 96% by applying only a 10 mg/L dosage, while at pH 6, where very small flocs formed rather slowly and barely settled, the relative percentage decreases significantly (68%). However, at pH 4-5, the turbidity removal efficiency was considerably improved (up to 75-85%), in accordance with the relevant literature [23]. The original pH value of the examined simulated surface water (pH 7.6 ± 0.2) was in the optimum pH range for the treatment of turbid water samples, in agreement with Figure 6. In strongly alkaline media, the formation of flocs was very quick and began even in the first minute of high-speed agitation.
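The bookkeeping behind these results is the residual-turbidity expression RT (%) = (T/T0) × 100 defined in the experimental section. The sketch below illustrates it in Python; the dose-response readings are hypothetical placeholders, not measured data from this study.

```python
# Minimal sketch of the jar-test bookkeeping; dosages and turbidity
# readings below are hypothetical placeholders, not measured data.
T0_NTU = 17.2  # initial turbidity of the simulated surface water (Table 2)

def residual_turbidity_pct(t_ntu, t0_ntu=T0_NTU):
    """Residual turbidity RT (%) = (T / T0) x 100."""
    return 100.0 * t_ntu / t0_ntu

def removal_pct(t_ntu, t0_ntu=T0_NTU):
    """Turbidity removal (%) = 100 - RT."""
    return 100.0 - residual_turbidity_pct(t_ntu, t0_ntu)

# hypothetical dose-response readings for one coagulant (mg/L -> NTU)
readings = {1: 9.5, 3: 4.8, 5: 3.1, 10: 1.9, 30: 1.2}
for dose, t in sorted(readings.items()):
    flag = "meets WHO < 5 NTU" if t < 5 else ""
    print(f"{dose:>3} mg/L: residual {t:4.1f} NTU "
          f"({removal_pct(t):4.1f} % removal) {flag}")
```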
Conclusions This study presented the application of new composite materials obtained through the interaction between GO (graphene oxide) and PFSiC (polyferric silicate chloride), i.e., an Fe-based pre-polymerized coagulant agent, as a promising hybrid material to synergistically enhance water treatment. The hybrid GO-PFSiC derivatives are reported here for the first time in the literature. The structure and morphology of the composite GO-PFSiC coagulants were studied in detail by the application of FTIR, XRD, and SEM techniques. According to the presented results, the success of the PFSiC loading on the GO sheets was confirmed by the FTIR spectra at a wavenumber of 617.19 cm−1, corresponding to the Fe-O bond in the GO-PFSiC material. The XRD analysis confirmed the characteristic peaks of PFSiC1.5-15 in the composites with GO, indicating that the PFSiC particles were distributed uniformly on the surface of graphene, especially in the composite GO-PFSiC1.5-15-0.5, which was also confirmed by the SEM images. Furthermore, the proposed coagulants were applied for the treatment of simulated surface water contaminated with clay. The effects of the dosage, pH, and experimental/operating conditions on the coagulation efficiency, regarding the removal of turbidity, were examined, aiming to determine the optimal turbidity removal conditions. The jar-test protocol selected for the best coagulation performance was rapid mixing at 160 rpm for 2 min, slow stirring at 40 rpm for 10 min, and finally, sedimentation for 50 min. According to the obtained results, the composite material GO-PFSiC1.5-15-0.5 with the ratio [GO]/[Fe] = 0.5 can significantly reduce the turbidity for dosages above 3-5 mg/L, i.e., much lower than the respective values presented in the relevant literature. The effect of pH on the coagulation ability of GO-PFSiC1.5-15-0.5 was tested in the pH range 4-9 by using a 10 mg/L dosage. It was found that the GO-PFSiC1.5-15-0.5-treated water can meet the stringent legislation (WHO) limits with an increase of the application pH values. In particular, at more alkaline pH values (>7.9), the removal of turbidity can reach 96%, by applying only a 10 mg/L dosage.
Consequently, the results of this study render these materials as potential coagulants for further research and potential applications regarding the co-removal of other water components.
Mo-La2O3 Multilayer Metallization Systems for High Temperature Surface Acoustic Wave Sensor Devices Developing advanced thin film materials is the key challenge in high-temperature applications of surface acoustic wave sensor devices. One hundred nanometer thick (Mo-La2O3) multilayer systems were fabricated at room temperature on thermally oxidized (100) Si substrates (SiO2/Si) to study the effect of lanthanum oxide on the electrical resistivity of molybdenum thin films and their high-temperature stability. The multilayer systems were deposited by the magnetron sputter deposition of extremely thin (≤1 nm) La interlayers in between adjacent Mo layers. After the deposition of each La layer, the process was interrupted for 25 to 60 min to oxidize the La using the residual oxygen in the high vacuum of the deposition chamber. The samples were annealed at 800 °C in high vacuum for up to 120 h. In the case of a 1 nm thick La interlayer in between the Mo layers, a continuous layer of La2O3 is formed. For thinner La layers, an interlayer between adjacent Mo layers is observed consisting of a (La2O3-Mo) mixed structure of molybdenum and nm-sized lanthanum oxide particles. Measurements show that the (Mo-La2O3) multilayer systems on SiO2/Si substrates are stable at least up to 800 °C for 120 h under high vacuum conditions. Introduction The development of advanced metallization-substrate material systems with improved properties in the high-temperature range, together with their comprehensive characterization, has become a predominant topic in the development of wireless surface acoustic wave (SAW) temperature sensors in recent years. Nevertheless, aside from material developments used at the laboratory level, with a thermal stability up to 900 °C for relatively short time periods, there is still no material system on the market with sufficient lifetime and reliability for temperatures above 600 °C in harsh environmental conditions which meets all the requirements for the SAW sensor chips and the sensor antennas. The piezoelectric substrate material, the metallization for the interdigital transducers (IDT) and additional functional layers like diffusion barriers (to the substrate) or cover layers (to protect the sensor surface against ambient or packaging elements), as well as the sensor antenna, have to maintain their functionality during application at those temperatures for a long time and with high reliability. In the high-temperature range, the most critical criteria for the applicability of the IDT material are its low electrical resistance and high thermal stability, especially with respect to mechanical properties and creep. In order to be able to simulate and design dedicated SAW structures, all SAW-relevant material parameters and their temperature dependence need to be known. This means that degradation processes due to stress-induced material transport (acoustomigration), relaxation of mechanical stress, an irreversible change in the electric resistance or a thermally induced change in the constitution of the materials, like phase transitions, cracking or blistering and so forth, are not acceptable in the structured film-substrate material system. In order to use a SAW device as a sensor, a strong correlation between the measured frequency characteristics and the observed parameter is essential for a highly reliable operation of the SAW device.
Depending on the acoustic power density, the operation frequency and the temperature, including thermal cycling conditions, several aspects have to be considered in the material development to obtain a strong correlation between the measured frequency characteristics (the measured variable) and the temperature of the device (the target variable). Thermally activated effects like drift-diffusion, agglomeration, creep and dislocation movement in the finger material under the conditions of high temperature and extremely high cyclic mechanical load (MHz to GHz frequency range) are the most relevant damaging effects at operation temperatures above 30 to 50% of Tmelt (Tmelt: melting point of the metallization). In addition, chemical effects such as corrosion and oxidation also have to be considered, since they have an influence on the frequency behavior and applicability of the SAW sensor devices. With regard to their thermal stability, materials and material systems with a high melting point, like refractory metals, alloys and temperature-stable intermetallic phases without any phase transition within the operation temperature range, are generally favored for such applications. Typical metallization materials for the fabrication of IDTs, like Al, Cu, Ag or Au, or alloys or multilayers of them, are mostly damaged in long-term and/or cyclic operation of SAW devices under harsh environmental conditions [1][2][3][4][5]. Therefore, other IDT material concepts have to be evaluated that not only deliver high thermo-mechanical stability but also good electrical properties, besides chemical resistance to corrosion and oxidation. Metallizations based on noble materials like Pt, Ir, Pd, Re and Rh are alternatives and are often reported in the literature [6][7][8][9][10][11][12]. Unfortunately, pure thin films of these materials with a thickness lower than 150 nm are typically damaged by agglomeration, due to their weak chemical affinity to the substrate and dewetting effects. Nevertheless, alloying, dispersion hardening and adapted adhesion and covering layers can improve their thermal and mechanical stability and can largely suppress or even completely avoid thermally induced damaging effects [13]. Consequently, oxide particle strengthened (ODS) thin film materials, as demonstrated for Pt-Al2O3, Pt-Rh/NiOx, Pt-Rh/CoOx and Pt-Rh/HfOx [14,15], are already used in SAW sensor technology above 350 °C because there have been no alternatives yet. A disadvantage of using noble metals is that they increase the device costs drastically. Hence, in the past few years, intensive research focusing on the search for alternative IDT thin films, which can offer long lifetimes and high reliability while lowering the device cost, has gained more impetus [16][17][18][19][20]. Rane et al. [16,17] and Seifert et al. [18,21,22] studied different W-Mo multilayer systems and RuAl thin films for SAW sensor applications, along with the development of their morphology and electrical resistance, on oxidized (100) Si and on high-temperature piezoelectric CTGS (Ca3TaGa3Si2O14) substrates. It was shown that oxidation, especially of tungsten, which is accompanied by a significant change in relevant acoustic properties like density and stiffness, is a considerable disadvantage of the W-Mo multilayers if they are not covered by a suitable protection layer and are not separated from the substrate by a diffusion barrier layer. Pure molybdenum (Tmelt of bulk Mo: 2623 °C;
this and the following data are taken from Reference [23]) could also be a candidate for high-temperature IDTs due to its low electrical resistivity for a metal with a high melting temperature. In comparison to platinum (Tmelt of bulk Pt: 1768 °C), the agglomeration tendency is much lower, since the activation energy for self-diffusion (bulk Mo: ≈4.8 eV [24]) is much higher than for bulk Pt (≈3 eV [25]). In addition, the higher thermal conductivity of Mo (bulk Mo: 138 W m−1 K−1, bulk Pt: 71.6 W m−1 K−1, both at 300 K) lowers the risk of thermally induced local damage, such as thermo-migration in the electrode material, cracking, or the formation of electric shorts due to local temperature gradients. Besides this, the electrical resistivity (bulk Mo: ρ ≈ 5.5 µΩcm at 300 K) and the thermal expansion coefficient (bulk Mo: α_th ≈ 4.8 × 10−6 K−1 at 300 K) are much lower for Mo than for Pt (ρ ≈ 10.8 µΩcm, α_th ≈ 8.8 × 10−6 K−1, both at 300 K). Since the α_th of Mo is closer to the value of CTGS (α_th = 4-5 × 10−6 K−1 in the basal plane at 300 K [26,27]), lower thermally induced mechanical stresses are expected in Mo-CTGS systems as compared to Pt on CTGS. All these properties make Mo-based material systems very promising for high-temperature SAW technology. However, pure Mo components in microdevices show a relatively low thermal stability and can fail even below 800 °C, as reported by Samanta et al. [28]. To overcome these disadvantages, various approaches can be applied to improve the high-temperature properties of a material, for example, forming solid solutions, strain or precipitation hardening, dispersion strengthening or grain size refinement. For instance, for dispersion strengthening of bulk Mo materials, oxide or carbide particles can be used as local barriers for diffusion and the movement of dislocations, improving their thermo-mechanical properties [29]. An additional strengthening effect can result from blocking the grain growth or the formation of sub-microstructures during the thermal treatment, based on the Hall-Petch relation ([30] and references therein). Thermally stable oxide particles, preferably rare earth oxides, are used to form oxide dispersion strengthened Mo material systems (ODS-Mo). ODS-Mo materials using La2O3 have received considerable attention, especially due to their superior ductility, toughness and creep resistance at high temperatures, and are currently used for high-temperature components [31][32][33][34]. It was observed that the hardening effect obtained due to the addition of oxide particles was far greater (by 3 to 5 orders of magnitude) than what could be related to the reduced grain size alone (Hall-Petch relation) [34]. It was also found that the amount of La2O3 in the Mo matrix significantly influences the material properties; the highest thermal stability was achieved if 0.6-1.5 wt.% La2O3 was added to Mo. However, intensive studies on ODS-Mo thin films with a La2O3 particle dispersion phase (ODS La2O3-Mo thin films), together with a suitable fabrication technology, are still missing. Therefore, this paper studies the fabrication of La2O3-ODS molybdenum thin films by sputter deposition of thin Mo layers and extremely thin lanthanum interlayers, forming molybdenum-lanthanum oxide multilayer systems on SiO2/Si substrates.
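The argument that the closer thermal expansion match of Mo to CTGS lowers thermally induced stress can be made semi-quantitative with the usual biaxial thermal-mismatch estimate, σ ≈ [E/(1 − ν)]·Δα·ΔT. The sketch below uses the expansion coefficients quoted above; the elastic constants are nominal room-temperature bulk values assumed for illustration and are not given in the paper.

```python
# Rough estimate of biaxial thermal-mismatch stress in a film on a thick
# substrate: sigma = E/(1 - nu) * (alpha_film - alpha_sub) * dT.
# E and nu below are nominal room-temperature bulk values (assumptions).
MATERIALS = {
    #       E (GPa)  nu    alpha (1/K)
    "Mo": (329.0,   0.31, 4.8e-6),
    "Pt": (168.0,   0.38, 8.8e-6),
}
ALPHA_CTGS = 4.5e-6   # mid-range of the 4-5 x 10^-6 1/K quoted for CTGS
DELTA_T = 775.0       # heating from ~25 C to 800 C

for name, (E, nu, alpha) in MATERIALS.items():
    sigma = E / (1.0 - nu) * (alpha - ALPHA_CTGS) * DELTA_T  # in GPa
    print(f"{name} on CTGS: thermal mismatch stress ~ {sigma * 1e3:7.1f} MPa")
```

With these assumptions, the mismatch stress for Pt on CTGS comes out roughly an order of magnitude larger than for Mo on CTGS, consistent with the qualitative statement above.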
Materials and Methods To produce the desired multilayer structures, co-sputtering from both a lanthanum oxide and a molybdenum target to fabricate La2O3-strengthened Mo films is not really applicable, because La2O3 targets degrade very fast. Hence, a new approach for fabricating ODS La2O3-Mo thin films was evaluated, which includes sequential multilayer sputtering of molybdenum and lanthanum. The multilayers were sputtered in a two-target chamber from a pure Mo target (99.95% Mo) and a pure La target (99.95% La). The deposition process comprises an in-situ oxidation of the ultrathin La layers, of a nominal thickness between 1 and 0.125 nm, before the deposition of the subsequent Mo layer. It is assumed that the La layer growth can largely be described by the Volmer-Weber mechanism [35], forming a discontinuous layer of La islands in the initial stages of the layer deposition at a substrate temperature close to room temperature. As lanthanum can easily form either lanthanum oxide or lanthanum hydroxide, it was found that the residual oxygen in the deposition chamber is sufficient to oxidize the lanthanum islands, forming only La2O3 if water vapor is kept away under vacuum conditions. In order to completely oxidize the deposited La islands, the deposition process was interrupted after the deposition of each La layer for a certain time (25 to 60 min, depending on the La layer thickness). This waiting time was determined based on prior tests as well as by monitoring the base pressure in the chamber: after La deposition, it was taken as the time required for the pressure in the chamber to stabilize to the base pressure that the chamber had before deposition. This indirect method ensured that no more residual oxygen in the chamber was being taken up by either the La layer or the La target itself, thus indicating the complete formation of a lanthanum oxide film (see Section 3.1 for more details and the X-ray photoelectron spectroscopy (XPS) studies). In order to achieve an electrical conductivity of the multilayers as high as possible, only a low content of insulating lanthanum oxide in the Mo matrix is allowed, which, however, needs to be high enough to realize an effective strengthening of the Mo films. Furthermore, it is well known from the literature that the thickness of the individual layers in pure Mo multilayer stacks (without lanthanum oxide) has a significant influence on the mechanical and electrical properties of the Mo multilayers due to the size effect [36]. Accordingly, for this study, various film systems with up to 8 (Mo-La2O3) bilayers were prepared. In the following, the nomenclature Mo-(La2O3-Mo)n is used to denote the layer configuration of the samples, where the first-named Mo denotes the covering Mo layer and n represents the number of (La2O3-Mo) bilayers. The measured data of each multilayer system were compared with results observed for pure 100 nm Mo films. Figure 1 shows the architecture of all multilayer systems investigated in the present paper. The total thickness of the multilayer films was kept constant at 100 nm, so that the thicknesses of the individual Mo and La layers varied with the number of bilayers. The total thicknesses of La and Mo were 1 nm and 99 nm, respectively, corresponding to a constant value of about 1 wt.% La2O3 in the Mo matrix, which is in the range given in Reference [34].
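The per-layer thicknesses follow directly from this architecture (one covering Mo layer plus n bilayers), and the ~1 wt.% figure can be checked with a simple mass balance. The sketch below is illustrative only; the densities and molar masses are nominal handbook values (assumptions, not taken from the paper), and the mass balance yields roughly 0.7 wt.%, i.e., of the same order as the value of about 1 wt.% quoted above.

```python
# Layer thickness bookkeeping for Mo-(La2O3-Mo)_n stacks: total 100 nm,
# 1 nm La and 99 nm Mo in total, one covering Mo layer plus n bilayers.
TOTAL_LA_NM, TOTAL_MO_NM = 1.0, 99.0

def layer_thicknesses(n):
    """Per-layer La and Mo thickness for n (La2O3-Mo) bilayers."""
    return TOTAL_LA_NM / n, TOTAL_MO_NM / (n + 1)

for n in (1, 2, 4, 8):
    t_la, t_mo = layer_thicknesses(n)
    print(f"n = {n}: La {t_la:5.3f} nm per layer, Mo {t_mo:5.2f} nm per layer")

# Mass-based estimate of the La2O3 fraction; densities and molar masses are
# nominal handbook values (assumptions, not taken from the paper).
RHO_LA, RHO_MO = 6.16, 10.28           # g/cm^3
M_LA, M_LA2O3 = 138.91, 325.81         # g/mol
m_la = TOTAL_LA_NM * RHO_LA            # mass per unit area (arbitrary units)
m_la2o3 = m_la * M_LA2O3 / (2 * M_LA)  # La assumed fully converted to La2O3
m_mo = TOTAL_MO_NM * RHO_MO
print(f"~{100 * m_la2o3 / (m_la2o3 + m_mo):.2f} wt.% La2O3 in the stack")
```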
The multilayers were deposited on cleaned (100) Si substrates with 1 µm of thermally grown SiO2 on top, with a sample size of (10 × 10) mm². The sputtering was carried out in a high vacuum (HV) chamber of a dedicated cluster tool, using matched DC generators (Magnetron Power Supply MP-2, Hüttinger Elektronik, Freiburg, Germany) and circular magnetrons (diameter 100 mm, Kurt J. Lesker Company, Jefferson Hills, PA, USA). Each magnetron source has its own pneumatically driven shutter system, which is controlled by the computer of the cluster tool. A load lock chamber of the cluster permits keeping the targets under high vacuum conditions at all times, to prevent oxidation of the target materials. Before deposition, the substrates were kept in high vacuum at the base chamber pressure of ≈3 × 10−6 mbar for a constant time duration for each experiment. During the deposition, the chamber pressure was 1.6 × 10−3 mbar, using Ar process gas (purity 99.999%) at a flow rate of ≈30 sccm. The DC power was 500 W and 15 W for Mo and La, respectively. The substrate temperature was close to room temperature. Before starting the experiments, the deposition rate was determined under these conditions for both target materials, being 29 nm min−1 and 2 nm min−1 for Mo and La, respectively, in the stationary state of the magnetron operation, with a linear behavior of the deposition rate versus time. Therefore, before each deposition, the conditions were stabilized for several minutes by depositing onto a closed shutter. The samples rotated at a constant speed of ≈10 rpm. The most relevant sputter parameters are summarized in Table 1. After deposition of the multilayer films, the samples were annealed at 800 °C under high vacuum conditions, at a chamber pressure in the range of 10−5 mbar (to exclude oxidation effects of Mo), for 24 h, 48 h and 120 h. The films were characterized with respect to their microstructure and electrical behavior in the as-deposited state and after heat treatment using scanning (SEM, Zeiss Ultra Plus, Oberkochen, Germany) and transmission electron microscopy (TEM, Tecnai F30, FEI Company, Hillsboro, OR, USA) with energy dispersive X-ray spectroscopy (EDX, Octane T Optima, EDAX Company, Mahwah, NJ, USA), X-ray diffraction (XRD, Philips X'Pert PW3040/00, Co-Kα), atomic force microscopy (AFM, Dimension Icon, Bruker, Billerica, MA, USA) and electrical sheet resistance measurements (van der Pauw technique, vdP). The X-ray photoelectron spectroscopy (XPS) measurements were done with a PHI 5600 CI system (Physical Electronics, Chanhassen, MN, USA) using non-monochromatic Mg-Kα radiation. Composition of the La Interlayers In a first step, the magnetron sputtered and subsequently oxidized La layers were investigated to ascertain their chemical composition. As mentioned above, lanthanum can form either lanthanum oxide (La2O3) or, if water vapor is present, lanthanum hydroxide (La(OH)3). Under vacuum conditions, the formation of either La2O3 or La(OH)3 depends on the leakage rate and total pressure level of the process chamber, including the residual water at the chamber walls, as well as the purity of the Ar process gas (purity 99.999% Ar was used). Before each deposition process, both targets were pre-sputtered for a certain time with a closed shutter to remove contaminations from the surfaces of the targets and to reach a steady-state condition for the deposition.
However, the chemical composition of the resulting La-based (La-X) films had to be verified under these conditions by XPS analysis. To provide reference La(OH)3 samples for the XPS analysis, a La2O3 powder sample was stored for several hours in wet air to make sure that it converted to the hydroxide. A second La2O3 powder sample was annealed at 800 °C for 24 h in high vacuum to create a La2O3 standard. In addition, a three-layer system consisting of 10 nm Mo on 2 nm La on 10 nm Mo was deposited on a SiO2/Si substrate; the deposition process was interrupted for 25 min after the deposition of the 2 nm La layer. All samples were subsequently investigated with XPS by sputter depth-profiling with Ar. Figure 2 shows the results of the XPS measurements of the La3d5/2 and O1s peaks during depth profiling of the La-X interlayer region in the Mo-(La-X)-Mo layer, in comparison with results measured on the powder references after 10 min of sputter cleaning. The shape of the La3d5/2 peak is similar for both the La(OH)3 and the La2O3 powder (Figure 2a). The peak measured for the La-X layer is quite similar, and its broadening can be explained by the very small thickness of this layer. The peak position of metallic La is at 836.8 eV and none of the measured samples shows this peak, which proves that there is no metallic La in the La-X film. However, from the measurement of the La3d5/2 peak, it is not possible to distinguish between lanthanum oxide and lanthanum hydroxide. This becomes possible by the analysis of the O1s peak. Lanthanum hydroxide is characterized by a high-energy O1s peak at 532.5 eV, which is clearly visible in Figure 2b in the case of the lanthanum oxide powder material that was stored for a long time in wet air and thus converted to lanthanum hydroxide. This peak is absent in the spectrum of the lanthanum oxide, in which a strong peak appears at a lower energy (530.5 eV). The measurement of the La-X layer only shows intensity at this lower energy, which indicates the formation of La2O3 in this interlayer. The measured intensities also allow a quantitative estimation of the ratio between La and O in the samples. The measured ratio of La to O in the La interlayer region was about 30-40 at% La to 70-60 at% O, which is close to the ratio found for the La2O3 reference powder material (30 at% La to 70 at% O; for comparison, La(OH)3: 20 at% La to 80 at% O). Both results, the composition analysis as well as the absence of the hydroxide peak in Figure 2b, prove that mainly lanthanum oxide is formed when a very thin La layer (thickness ≤ 2 nm) is deposited on top of a Mo layer with an interruption time of 25 min before the next Mo layer is added. Extrapolating from this result, since a 2 nm La film was completely oxidized, it can be expected that films of lower thickness (≤1 nm in the multilayers studied) are completely oxidized as well. Figure 3 presents SEM images of the surfaces of the different Mo-(La2O3-Mo)n multilayer systems. It can be seen that the multilayer stacks have a fine-grained polycrystalline microstructure after deposition, with grain sizes up to several tens of nanometers. No cracks or pores are visible. The pure 100 nm Mo film in Figure 3a shows a morphology typical of thin Mo films on SiO2/Si substrates in the as-deposited state [16,17,37].
In the as-deposited state (AD, left column in Figure 3b-e), the morphology of the covering Mo layers in all the multilayers, independent of the number of layers, is similar to that of pure Mo (Figure 3a, left). Upon annealing, strongly differing surface morphologies show up in all the films with increasing annealing time. In the case of the pure Mo film, the grain size initially increases significantly upon annealing for 24 h, followed by slower growth up to 120 h. In contrast, all the multilayer films exhibit much smaller grain sizes after annealing for 24 h than the pure Mo film. The grain size increase during annealing is significantly reduced with an increasing number of bilayers, indicating that the La2O3 distribution influences the grain growth. This is due to the pinning effect of the La2O3 interlayers (as explained below). In the cross-sections, a continuous La2O3 interlayer is clearly visible in the Mo-(La2O3-Mo) film (Figure 4b) in the as-deposited state as well as after all the annealing durations. This interface is not detectable in the other multilayer systems with thinner La2O3 layers. Upon annealing for 24 h, the out-of-plane structure becomes slightly clearer for all the films. The pure Mo film shows drastically grown columnar grains that span the entire film thickness. In the Mo-(La2O3-Mo) film, columnar grains extending only up to midway of the film thickness can be seen, indicating an interruption of the grain growth by the La2O3 interlayer. In contrast, the multilayers are found to be composed of grains that are small, tapered and of different heights, along with some grains that also span the entire thickness. With increasing annealing time, as seen in the top-surface view, the multilayer film with the highest number of La2O3 layers exhibits the lowest in-plane grain growth. It can also be observed that with an increasing number of La2O3 layers, the in-plane grain size dispersion becomes narrower. The Mo-La2O3-Mo layer is composed of some extremely large and some very small grains, while a more uniform grain size is seen in Mo-(La2O3-Mo)8. The layer structure, and especially the (La2O3-Mo) interface between two adjacent Mo layers, was studied in more detail using transmission electron microscopy (TEM). Figure 5a shows the continuous La2O3 interlayer in the Mo-(La2O3-Mo) film. In the case of the Mo-(La2O3-Mo)8 film, the layer structure is still visible (Figure 5b); however, it can be seen that the La2O3 layers are discontinuous and formed of an arrangement of distinctly separated particles. Unlike in the Mo-(La2O3-Mo) film, herein, along with discontinuous grains, several columnar grains of Mo extending across several sublayers or completely through the film thickness can also be seen (Figure 4e, 120 h). This could be due to the discontinuous La2O3 layers. Due to the low layer thickness of La2O3, it can be expected that the layers consist of La2O3 islands which are well separated from each other, as is typically observed for the nucleation of islands in the early stages of film growth [35]. Thus, it can be expected that the out-of-plane grain growth is likely to be interrupted only at locations around La2O3 particles, while at other locations the grains can grow continuously. The in-plane grain size also reduces with an increasing number of bilayer periods, which could be attributed to the presence of La2O3 at the top and bottom surfaces of the Mo grains, and probably also at the adjacent grain boundaries (triple junctions), pinning the grains in multiple directions.
Another reason for the decreased in-plane grain size could be a mixture of two types of grains that are interspersed, that is, uninterrupted grains (columnar, spanning multiple Mo layers) and grains that are pinned by La2O3 (interrupted across layers). The probability of this mixture of grains being uniformly dispersed increases with an increasing number of bilayer periods. With a higher number of La2O3 layers, a better uniformity of La2O3 particles throughout the film was achieved, which could also explain the more uniform in-plane grain size in the Mo-(La2O3-Mo)8 film. The EDX measurements also reveal that there is no oxidation of the Mo at the sample surface, even after annealing for 120 h at 800 °C in HV. Compared to other systems investigated with respect to their suitability for high-temperature applications in SAW devices, the oxidation resistance is strongly improved: in the case of RuAl thin films on thermally oxidized Si substrates, a 20 nm thick Al2O3 layer is formed on top of the sample during annealing at 800 °C for only 10 h in high vacuum [18], while in Ti-Al based films a strong oxidation of the Al takes place even if an AlN cover layer is applied [38]. Phase Formation The XRD measurements show that the Mo and the Mo-(La2O3-Mo)n layers in the as-deposited state are polycrystalline and exhibit XRD peaks associated with the body-centered cubic (bcc) crystal structure of molybdenum (powder reference). The films grow with a dominant (110) orientation. Figure 6a,b show the diffractograms around the Mo (110) peak position for the different samples in the as-deposited state and after annealing at 800 °C for 120 h. The position of the (110) Mo peak at 47.5° (Co-Kα) of the powder Mo material is marked as a reference by a vertical line. In the as-deposited state, there is a low peak intensity and the Mo peak position is slightly shifted to lower values as compared to the reference (Figure 6a). After annealing, the peak intensities are strongly increased as compared to the as-deposited state and the Mo peak position shifts to the theoretical value of the Mo powder material. The highest intensity is reached for the pure Mo film, which is in agreement with the presence of the largest grains for this sample, as visible from the cross-section images (Figure 4). The results of the XRD measurements for the pure Mo film and the Mo-(La2O3-Mo)8 multilayer system for the different annealing times are presented in Figure 6c,d. For the pure Mo films in Figure 6c, the peak intensity strongly increases after annealing for 24 h compared to the as-deposited state, but longer annealing times lead only to a small further increase of the peak intensity. The peak intensities in Figure 6c, measured after annealing of pure Mo, are significantly higher than those for the Mo-(La2O3-Mo)8 multilayer films in Figure 6d, which is explained by the larger grain size of the pure Mo films compared with that of the Mo-(La2O3-Mo)n multilayers, as already seen in Figure 4. For the Mo-(La2O3-Mo)8 multilayer films (and likewise for all the other multilayer systems), a continuous increase in peak intensity with annealing time is observed (Figure 6d). Roughness Diffuse scattering of electrons at surfaces is more pronounced the higher the roughness is; that is, the electrical sheet resistance generally increases with roughness, especially for thin films.
Roughness

Diffuse scattering of electrons at surfaces becomes more pronounced with increasing roughness; that is, the electrical sheet resistance generally increases with roughness, especially for thin films. The root mean square roughness (RMS) of the multilayer systems shown in Figure 7 was obtained on a constant measurement area of 2 µm × 2 µm. The RMS was found to be less than 1 nm for all the as-deposited films. In contrast to the multilayer films, the roughness of the pure Mo films already increases strongly after annealing for 24 h. The RMS values of the multilayers increase after annealing for 48 and 120 h. Among all the multilayers annealed for 120 h, the Mo-(La2O3-Mo)8 film exhibits the lowest RMS roughness.

Electrical Resistivity

In the as-deposited state, the resistivity of the 100 nm thick pure Mo layer deposited on SiO2/Si is approximately 18.5 µΩcm. For the multilayer systems, an integral resistivity is determined for the whole system, which also contains insulating La2O3 as a layer or as individual particles. The layers act as a parallel circuit of the continuous (conductive) Mo layers, shunted or interrupted by more or less discontinuous interlayers of La2O3 particles. Nevertheless, this measured overall resistivity of the film system can be used as a basis for designing electronic devices, for example those based on SAW structures. Generally, the resistivity values of the multilayer systems in the as-deposited state are found to increase with the number n of (La2O3-Mo) bilayers, that is, with decreasing individual Mo layer thickness. This tendency is consistent with theoretical models of thin-film resistivity, since the resistivity is affected not only by an increasing grain boundary volume and a higher number of defects and impurities as the number of interfaces increases, but also by electron scattering at the La2O3-Mo layer interfaces [39-41]. The effect of the mean free path of electrons (MFPE) in Mo, 39.5 nm [42], on the electrical resistivity should become predominant once the thickness of the individual Mo layers and the in-plane grain size approach this value, i.e., for all films with n ≥ 2.

Upon annealing for 24 h, the resistivity of the pure Mo layers decreases drastically, with only a slight further decrease upon annealing up to 120 h. In contrast, for the Mo-(La2O3-Mo)n multilayers, the reduction of the resistivity after annealing for 24 h is much smaller (for Mo-(La2O3-Mo)) or the resistivity even increases slightly (for Mo-(La2O3-Mo)2−8). However, a more significant and comparable decrease is observed for the long annealing time of 120 h. The resistivity of the 100 nm Mo layer on SiO2/Si reaches a value of 8.2 µΩcm due to the observed distinct grain coarsening and defect annihilation. This value is close to that of pure Mo bulk material (≈5.57 µΩcm [43]). It can be seen that the resistivity of the films increases with an increasing number of interlayers. A closer look reveals that, for the extremely thin interlayers, a higher number of layers does not increase the resistivity as strongly as would be expected from the MFPE according to the theory of Fuchs-Sondheimer [40,41]. As the La2O3 interlayers do not form a continuous layer for n > 2, the Fuchs-Sondheimer theory is not strictly applicable. Furthermore, the resistivity change can be attributed to the special microstructure of the multilayer films, composed of a grain distribution of different heights and widths. As seen in Section 3.2.1, more uniform grain dimensions were obtained with an increasing number of La2O3 interlayers; thus, increasing the number of interlayers helps to realize a more uniform microstructure.
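The parallel-circuit picture sketched above can be illustrated with a toy calculation. This is a minimal sketch under strong assumptions (the La2O3 interlayers are treated as perfectly insulating, and only a crude thick-film approximation of the Fuchs-Sondheimer size effect is applied to each Mo sublayer); the bulk resistivity and MFPE are the values quoted in the text, everything else is illustrative:

```python
# Toy parallel-resistor model of the Mo-(La2O3-Mo)n stack (my own illustration,
# not the paper's model). La2O3 interlayers are assumed perfectly insulating.

RHO_BULK = 5.57      # micro-ohm-cm, bulk Mo (value quoted in the text [43])
MFPE = 39.5          # nm, electron mean free path in Mo [42]

def layer_resistivity(t_nm: float, p: float = 0.0) -> float:
    """Approximate thin-film resistivity of one Mo layer of thickness t_nm,
    using the simple thick-film limit rho(t) ~ rho_bulk * (1 + 3/8*(1-p)*MFPE/t)
    of the Fuchs-Sondheimer result (p = specularity parameter)."""
    return RHO_BULK * (1.0 + 0.375 * (1.0 - p) * MFPE / t_nm)

def stack_resistivity(mo_thicknesses_nm: list[float]) -> float:
    """Parallel combination of the Mo sublayers, expressed as an effective
    resistivity of the whole stack."""
    total_t = sum(mo_thicknesses_nm)
    conductance = sum(t / layer_resistivity(t) for t in mo_thicknesses_nm)
    return total_t / conductance

# Example: a single 100 nm Mo film vs. an n = 8 stack of ~11 nm Mo sublayers.
print(stack_resistivity([100.0]))        # lower effective resistivity
print(stack_resistivity([11.0] * 9))     # higher, as thin sublayers scatter more
```

The toy model reproduces the qualitative trend reported above (effective resistivity rises as the individual Mo layer thickness falls below the MFPE), even though, as the text notes, Fuchs-Sondheimer is not strictly applicable once the interlayers are discontinuous.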
The resistivity in this case is governed by the grain size and the content of La2O3 in the film.

Conclusions

This paper presents a study of different Mo-(La2O3-Mo)n multilayer thin-film systems, deposited by magnetron sputtering on thermally oxidized (100) Si substrates to form ODS La2O3-Mo thin films for future applications in high-temperature wireless SAW devices. The multilayer thin films were investigated in the as-deposited state and after thermal treatment at 800 °C for up to 120 h under high-vacuum conditions. A content of about 1 wt.% La2O3 was distributed through the Mo-(La2O3-Mo)n multilayer systems, which were covered with a pure Mo film. The total thickness of the multilayer systems was kept constant at 100 nm for all configurations, varying the nominal thicknesses of the individual La and Mo layers from 0.125 to 1 nm and from 11.0 to 49.5 nm, respectively. La2O3 was formed by a waiting time of 25 to 60 min after the deposition of each La layer. To apply the investigated multilayer systems to SAW devices based on, for example, high-temperature stable CTGS or Langasite [44] substrates, we expect that diffusion barrier layers will be essential to prevent degradation effects. The influence of an electrical power load on structured electrode lines also needs to be investigated for long-term applications. Furthermore, the following conclusions are derived:

• All Mo-(La2O3-Mo)n multilayers deposited at room temperature (RT) are polycrystalline and free of significant cracks, and they exhibit a similar morphology in the as-deposited state.
• In all multilayers, the in-plane grain size decreases with an increasing number of bilayers, and thus with decreasing thickness of the Mo layers, both in the as-deposited state and after thermal treatment.
• There is no clear trend in the RMS roughness with the number n of (La2O3-Mo) bilayers. However, the roughness upon annealing at 800 °C for 120 h is lowest for the Mo-(La2O3-Mo)8 system.
• The influence of the increase in RMS roughness during annealing on the electrical resistivity is overcompensated by defect annihilation.
• Annealing of the multilayer systems at 800 °C for 120 h leads to a reduction in the resistivity due to grain coarsening, which results in reduced electron scattering. The annealed Mo layers show the lowest resistivity values of ≈8.2 µΩcm. For the Mo-(La2O3-Mo)n multilayer films with n = 4 and n = 8 (La2O3-Mo) bilayers, a slight increase in resistivity is observed after annealing at 800 °C for 24 h compared to the as-deposited state.
• The results show that the multilayer systems retain a clearly visible periodic structure of Mo and La2O3 along the growth direction, even after annealing at 800 °C for 120 h.
• In the case of a 1 nm La deposition thickness, a continuous and closed La2O3 layer was formed on top of the Mo. Thus, the La2O3 in Mo-(La2O3-Mo) provided a layer of complete chemical and physical discontinuity between the Mo layers. However, extremely thin La interlayers (<1 nm) were present as discontinuous layers of La2O3 particles.
• The results indicate that Mo-(La2O3-Mo)n multilayer films can be appropriate material systems for IDT electrodes for applications in the high-temperature range above 600 °C.
8,141
2019-08-21T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Generating operators between Banach spaces

We introduce and study the notion of generating operators as those norm-one operators $G\colon X\longrightarrow Y$ such that for every $0<\delta<1$, the set $\{x\in X\colon \|x\|\leq 1,\ \|Gx\|>1-\delta\}$ generates the unit ball of $X$ by closed convex hull. This class of operators includes isometric embeddings, spear operators (actually, operators with the alternative Daugavet property), and other examples like the natural inclusions of $\ell_1$ into $c_0$ and of $L_\infty[0,1]$ into $L_1[0,1]$. We first present a characterization in terms of the adjoint operator, make a discussion on the behaviour of diagonal generating operators on $c_0$-, $\ell_1$-, and $\ell_\infty$-sums, and present examples in some classical Banach spaces. Even though rank-one generating operators always attain their norm, there are generating operators, even of rank-two, which do not attain their norm. We discuss when a Banach space can be the domain of a generating operator which does not attain its norm in terms of the behaviour of some spear sets of the dual space. Finally, we study when the set of all generating operators between two Banach spaces $X$ and $Y$ generates all non-expansive operators by closed convex hull. We show that this is the case when $X=L_1(\mu)$ and $Y$ has the Radon-Nikod\'ym property with respect to $\mu$. Therefore, when $X=\ell_1(\Gamma)$, this is the case for every target space $Y$. Conversely, we also show that a real finite-dimensional space $X$ satisfies that generating operators from $X$ to $Y$ generate all non-expansive operators by closed convex hull only in the case that $X$ is an $\ell_1$-space.

Introduction

Let X and Y be Banach spaces over the field K (K = R or K = C). We denote by L(X, Y) the space of all bounded linear operators from X to Y and write X* = L(X, K) to denote the dual space. By B_X and S_X we denote the closed unit ball and the unit sphere of X, respectively, and we write T for the set of modulus-one scalars. Some more notation and definitions (which are standard) are included in Subsection 1.1 at the end of this introduction.
The concept of spear operator was introduced in [1] and deeply studied in the book [7]. A norm-one operator G ∈ L(X, Y) is said to be a spear operator if the norm equality

max_{θ∈T} ‖G + θT‖ = 1 + ‖T‖

holds for all T ∈ L(X, Y). This concept extends the properties of the identity operator in those Banach spaces having numerical index one, and it is satisfied, for instance, by the Fourier transform on L_1. There are isometric and isomorphic consequences on the domain and range spaces of a spear operator: for instance, in the real case, the dual of the domain of a spear operator with infinite rank has to contain a copy of ℓ_1. For more information and background, we refer the interested reader to the already cited book [7]. Even though the definition of spear operator given above does not need numerical ranges, it is well known that spear operators are exactly those operators for which the numerical radius with respect to them coincides with the operator norm. Let us introduce the relevant definitions. Given a norm-one operator G ∈ L(X, Y), the numerical radius with respect to G is the seminorm defined as

v_G(T) := sup{|φ(T)| : φ ∈ L(X, Y)*, φ(G) = 1} = inf_{δ>0} sup{|y*(Tx)| : y* ∈ S_{Y*}, x ∈ S_X, Re y*(Gx) > 1 − δ}

for every T ∈ L(X, Y) (the equality above was proved in [14, Theorem 2.1]). Observe that v_G(·) is a seminorm on L(X, Y) which clearly satisfies

v_G(T) ⩽ ‖T‖ for every T ∈ L(X, Y). (1)

Then, G is a spear operator if and only if v_G(T) = ‖T‖ for every T ∈ L(X, Y) (see [7, Proposition 3.2]).

Our discussion here starts with the observation that it is possible to introduce a natural seminorm between v_G(T) and ‖T‖ in Eq. (1): the (semi-)norm relative to G. Let us introduce the needed notation and definitions. Let X, Y, Z be Banach spaces and let G ∈ L(X, Y) be a norm-one operator. For δ > 0, we write att(G, δ) to denote the δ-attainment set of G, that is,

att(G, δ) := {x ∈ S_X : ‖Gx‖ > 1 − δ}.

If there exists x ∈ S_X such that ‖Gx‖ = 1, we say that G attains its norm and we denote by att(G) the attainment set of G: att(G) := {x ∈ S_X : ‖Gx‖ = 1}. We consider the parametric family of norms on L(X, Z) defined by

‖T‖_{G,δ} := sup{‖Tx‖ : x ∈ att(G, δ)} for δ ∈ (0, 1),

which are equivalent to the usual norm on L(X, Z) (this is so since att(G, δ) has nonempty interior). We are interested in the (semi-)norm obtained by taking the infimum over this parametric family.

Definition 1.1. Let X, Y and Z be Banach spaces and let G ∈ L(X, Y) be a norm-one operator. For T ∈ L(X, Z), we define the (semi-)norm of T relative to G by

‖T‖_G := inf_{δ>0} ‖T‖_{G,δ}.

When Z = Y, we clearly have that v_G(T) ⩽ ‖T‖_G ⩽ ‖T‖ for every T ∈ L(X, Y), and so ‖·‖_G is the promised seminorm extending Eq. (1). We may study the possible equality between v_G(·) and ‖·‖_G and between ‖·‖_G and the usual operator norm. We leave the first relation for a subsequent paper which is still in preparation [9]. The main aim of this manuscript is to study when the norm equality

‖T‖_G = ‖T‖ (2)

holds true.

Definition 1.2. Let X, Y be Banach spaces. We say that a norm-one G ∈ L(X, Y) is generating (or a generating operator) if equality (2) holds true for all T ∈ L(X, Y). We denote by Gen(X, Y) the set of all generating operators from X to Y.

Observe that both ‖·‖_G and the operator norm can be defined for operators with domain X and arbitrary range, so one may wonder if there are different definitions of generating requiring that Eq. (2) holds with other range spaces in place of Y.
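For quick reference, here is a compact restatement of the objects just introduced (my own summary, using the definitions as stated above):

```latex
\[
  \operatorname{att}(G,\delta) := \{x\in S_X : \|Gx\|>1-\delta\},\qquad
  \|T\|_{G,\delta} := \sup\{\|Tx\| : x\in\operatorname{att}(G,\delta)\},
\]
\[
  \|T\|_{G} := \inf_{\delta>0}\|T\|_{G,\delta},\qquad
  v_G(T)\ \le\ \|T\|_G\ \le\ \|T\| \quad\text{for all } T\in L(X,Y).
\]
% G is a spear operator when v_G(T) = ||T|| for all T;
% G is generating when ||T||_G = ||T|| for all T.
```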
As we will show in Section 2, this is not the case: a generating operator G satisfies ‖T‖_G = ‖T‖ for every T ∈ L(X, Z) and every Banach space Z (see Corollary 2.3). This is so thanks to a characterization of generating operators in terms of the sets att(G, δ): G is generating (if and) only if cl conv(att(G, δ)) = B_X for every δ > 0, see Corollary 2.3 again. When the dimension of X is finite, this is clearly equivalent to the fact that conv(att(G)) = B_X (actually, the same happens for compact operators defined on reflexive spaces, see Proposition 2.5). For some infinite-dimensional X, there are generating operators from X which do not attain their norm, even of rank two (see Example 3.2); but there are even generating operators attaining the norm such that cl conv(att(G)) has empty interior (see Example 3.4).

There is another characterization which involves the geometry of the dual space. We need some definitions. A subset F of the unit ball of a Banach space Z is said to be a spear set if max_{θ∈T} sup_{z∈F} ‖z + θz₀‖ = 2 for every z₀ ∈ S_Z. If z ∈ S_Z satisfies that F = {z} is a spear set, we just say that z is a spear vector, and we write Spear(Z) for the set of spear vectors of Z. We refer the reader to [7, Chapter 2] for more information and background. We will show that a norm-one operator G ∈ L(X, Y) is generating if and only if G*(B_{Y*}) is a spear set of X*, see Corollary 2.17. These characterizations appear in Section 2, together with a discussion on the behaviour of diagonal generating operators on c₀-, ℓ₁-, and ℓ∞-sums, and examples in some classical Banach spaces.

We next discuss in Section 3 the relationship between generating operators and norm attainment. On the one hand, we show that rank-one generating operators attain their norm (see Corollary 3.1) and, clearly, the same happens with isometric embeddings (which are generating), or with generating operators whose domain has the RNP (see Corollary 2.12), as every generating operator attains its norm at denting points (see Lemma 2.8). But, on the other hand, there are generating operators, even of rank two, which do not attain their norm (see Example 3.2). We further discuss the possibility for a Banach space X to be the domain of a generating operator which does not attain its norm in terms of the behaviour of some spear sets of X* (see Theorem 3.5).

Finally, Section 4 is devoted to the study of the set Gen(X, Y). We show that it is closed (see Proposition 4.1), and show that for every Banach space Y there is a Banach space X such that Gen(X, Y) = ∅ (see Proposition 4.2), but this result is not true for Y = C[0, 1] if we restrict the space X to be separable (Example 4.5). We next study properties of Gen(X, Y) when X is fixed. We first show that Gen(X, Y) ≠ ∅ for every Y if and only if Spear(X*) ≠ ∅ (see Corollary 4.6), and that the only case in which there is Y such that Gen(X, Y) = S_{L(X,Y)} is when X is one-dimensional (see Corollary 4.7). We then study the possibility that the set Gen(X, Y) generates the unit ball of L(X, Y) by closed convex hull, showing first that this is the case when X = L₁(µ) and Y has the RNP (Theorem 4.10) and when X = ℓ₁(Γ) and Y is arbitrary (see Proposition 4.12), and that this is the only possibility for real finite-dimensional spaces (see Proposition 4.14).

1.1. A bit of notation. Let X, Y be Banach spaces. We write J_X : X −→ X** to denote the natural inclusion of X into its bidual space. Given a bounded convex subset C ⊂ X, a slice of C is a set of the form

S(C, f, α) := {x ∈ C : Re f(x) > sup_{c∈C} Re f(c) − α},

where f ∈ X* and α > 0, and observe that every slice of C is of the above form.
For A ⊂ X, conv(A) and aconv(A) are, respectively, the convex hull and the absolutely convex hull of A; cl conv(A) and cl aconv(A) are, respectively, the closures of these sets. For B ⊂ X convex, ext(B) denotes the set of extreme points of B.

Characterizations, first results, and some examples

Our first result gives different characterizations of the equivalence of ‖·‖ and ‖·‖_G on L(X, Z). As one may have expected, this does not depend on the range space Z.

Proposition 2.1. Let X, Y be Banach spaces, let G ∈ L(X, Y) be a norm-one operator, and let r ∈ (0, 1]. Then, the following are equivalent:
(i) ‖T‖_G ⩾ r‖T‖ for every Banach space Z and every T ∈ L(X, Z).
(ii) There is a (non-null) Banach space Z such that ‖T‖_G ⩾ r‖T‖ for every T ∈ L(X, Z).
(iii) There is a (non-null) Banach space Z such that ‖T‖_G ⩾ r‖T‖ for every rank-one operator T ∈ L(X, Z).
(iv) ‖x*‖_{G,δ} ⩾ r‖x*‖ for every x* ∈ X* and every δ > 0.
(v) att(G, δ)° ⊆ (1/r)B_{X*} for every δ > 0.
(vi) cl conv^{w*}(J_X(att(G, δ))) ⊇ rB_{X**} for every δ > 0.

The remaining implication (v) ⇒ (vi) follows from the bipolar theorem. Indeed, for δ > 0, take x ∈ rB_X; we have to prove that J_X(x) belongs to att(G, δ)°°. For x* ∈ att(G, δ)° we have |x*(x)| ⩽ r‖x*‖ ⩽ ‖x*‖_{G,δ} ⩽ 1, where the second inequality follows from (iv) and the last one from the fact that x* ∈ att(G, δ)°. Therefore J_X(x) ∈ att(G, δ)°° = cl conv^{w*}(att(G, δ)).

Observe that item (vi) in the previous result just means that, for every δ ∈ (0, 1), the set att(G, δ) is r-norming for X*. This leads to the following concept, which extends the one of generating operator.

Definition 2.2. Let X, Y be Banach spaces, let G ∈ L(X, Y) be a norm-one operator and let r ∈ (0, 1]. We say that G is r-generating if cl conv(att(G, δ)) ⊇ rB_X for every δ > 0.

Of course, the case r = 1 coincides with the generating operators introduced in the introduction. For them, the following characterization deserves to be emphasized.

Corollary 2.3. Let X, Y be Banach spaces and let G ∈ L(X, Y) be a norm-one operator. Then, the following are equivalent:
(i) G is generating.
(ii) ‖T‖_G = ‖T‖ for every T ∈ L(X, Z) and every Banach space Z.
(iii) There is a (non-null) Banach space Z such that ‖T‖_G = ‖T‖ for every rank-one operator T ∈ L(X, Z).
(iv) B_X = cl conv(att(G, δ)) for every δ > 0.
In particular, if there exists A ⊆ B_X which satisfies cl aconv(A) = B_X and A ⊆ att(G, δ) for every δ > 0, then G is generating.

In the next list (Example 2.4) we give the first easy examples of generating operators.
(1) The identity operator on every Banach space is generating.
(2) Isometric embeddings are generating.
(3) Spear operators are generating since, in this case, v_G(T) = ‖T‖ for every T ∈ L(X, Y).
(4) Actually, operators with the alternative Daugavet property (i.e. those G ∈ L(X, Y) such that v_G(T) = ‖T‖ for every rank-one T ∈ L(X, Y), cf. [7, Section 3.2]) are also generating, by using Corollary 2.3 with Z = Y in item (iii).
(5) The natural embedding G of ℓ₁ into c₀ is a generating operator. Indeed, for every δ > 0, we have that {θe_n : θ ∈ T, n ∈ N} ⊆ att(G, δ), and cl aconv({e_n : n ∈ N}) = B_{ℓ₁}.
(6) The natural inclusion G of L∞[0,1] into L₁[0,1] is a generating operator. Indeed, for every δ > 0, notice that every f ∈ S_{L∞[0,1]} with |f(t)| = 1 almost everywhere belongs to att(G, δ), and the closed convex hull of such functions is B_{L∞[0,1]} (this should be well known, but in any case it follows from Lemma 4.11, which includes the vector-valued case).
We will provide some more examples in classical Banach spaces in Subsection 2.2.

The next result deals with compact operators defined on a reflexive Banach space.

Proposition 2.5. Let X be a reflexive Banach space, let Y be a Banach space, and let G ∈ L(X, Y) be a compact operator with ‖G‖ = 1. Then,

⋂_{δ>0} cl conv(att(G, δ)) = conv(att(G)).

Consequently, G is r-generating if and only if rB_X ⊆ conv(att(G)).

Clearly, the previous result applies when X is finite-dimensional.

Corollary 2.6. Let X be a finite-dimensional space, let Y be a Banach space, and let G ∈ L(X, Y) with ‖G‖ = 1. Then,

⋂_{δ>0} cl conv(att(G, δ)) = conv(att(G)).

Consequently, G is r-generating if and only if rB_X ⊆ conv(att(G)).
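As a concrete finite-dimensional illustration of Corollary 2.6 (my own toy example, not taken from the paper):

```latex
% Take X = \ell_1^2 (real), Y = \ell_\infty^2, and the formal identity G(x_1,x_2)=(x_1,x_2).
% Then \|G\|=1 and G attains its norm at the four extreme points of B_{\ell_1^2}:
\[
  \|G(\pm e_j)\|_\infty = 1 \quad (j=1,2) \ \Longrightarrow\
  \operatorname{conv}(\operatorname{att}(G)) \supseteq \operatorname{conv}\{\pm e_1,\pm e_2\} = B_{\ell_1^2},
\]
% so G is generating (r = 1 in Corollary 2.6). By contrast, G'(x_1,x_2) = (x_1, x_2/2)
% attains its norm only at \pm e_1, so conv(att(G')) is the segment [-e_1, e_1],
% which has empty interior, and G' is not r-generating for any r.
```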
The next result characterizes those operators acting from a finite-dimensional space which are r-generating for some 0 < r ⩽ 1.

Proposition 2.7. Let X be a Banach space with dim(X) = n, let Y be a Banach space, and let G ∈ L(X, Y) with ‖G‖ = 1. The following are equivalent:
(i) G is r-generating for some r ∈ (0, 1].
(ii) The set att(G) contains n linearly independent elements.

Proof. (i) ⇒ (ii). By Corollary 2.6, we have that rB_X ⊆ conv(att(G)). Therefore, att(G) contains n linearly independent elements. (ii) ⇒ (i). We start by proving that the set conv(att(G)) is absorbing. Indeed, let {x₁, ..., x_n} be a linearly independent subset of att(G). Then, given 0 ≠ x ∈ X, there are λ₁, ..., λ_n ∈ K such that x = Σ_{j=1}^n λ_j x_j. Calling ρ = Σ_{j=1}^n |λ_j| > 0, we can write

x/ρ = Σ_{j=1}^n (|λ_j|/ρ)(λ_j/|λ_j|) x_j ∈ conv(att(G)),

where we used that (λ_j/|λ_j|) x_j ∈ att(G), as this set is balanced. Hence, the set conv(att(G)) is absorbing. Besides, conv(att(G)) is clearly balanced, convex, and compact, so its Minkowski functional defines a norm on X which must be equivalent to the original one. Then, there is r > 0 such that rB_X ⊆ conv(att(G)) and, therefore, G is r-generating by Corollary 2.6.

We next would like to present the relationship of generating operators with denting points (and so with the Radon-Nikodým property, RNP in short). We need some notation. Let A be a bounded closed convex set. Recall that x₀ ∈ A is a denting point if for every δ > 0, x₀ ∉ cl conv(A \ B(x₀, δ)) or, equivalently, if x₀ belongs to slices of B_X of arbitrarily small diameter. We write dent(A) to denote the set of denting points of A. A closed convex subset C of X has the Radon-Nikodým property (RNP in short) if all of its closed convex bounded subsets contain denting points or, equivalently, if all of its closed convex bounded subsets are equal to the closed convex hull of their denting points. In particular, the whole space X may also have this property.

The following result tells us that generating operators must attain their norm at every denting point.

Lemma 2.8. Let X, Y be Banach spaces and let G ∈ L(X, Y) be a (norm-one) generating operator. Then, ‖Gx₀‖ = 1 for every x₀ ∈ dent(B_X).

The above result can be slightly improved by using the following definition.

Definition 2.9. Let x₀ ∈ S_X. We say that x₀ is a point of sliced fragmentability if for every δ > 0 there is a slice S_δ of B_X such that S_δ ⊆ B(x₀, δ).

Observe that this notion is weaker than that of denting point (for instance, points in the closure of the set of denting points are points of sliced fragmentability, but they do not need to be denting, even in the finite-dimensional case).

Lemma 2.10. Let X, Y be Banach spaces, let G ∈ S_{L(X,Y)} be a generating operator, and let x₀ ∈ S_X be a point of sliced fragmentability. Then, ‖Gx₀‖ = 1.

Proof. By our assumption, cl conv(att(G, δ)) = B_X for every δ > 0. This implies that, for fixed δ > 0, the set att(G, δ) intersects every slice of B_X. Applying this to the slice S_δ from Definition 2.9, we obtain that there is a point x ∈ att(G, δ) ∩ S_δ; then ‖x − x₀‖ ⩽ δ, hence ‖Gx₀‖ ⩾ ‖Gx‖ − ‖G‖‖x − x₀‖ > 1 − 2δ, and the arbitrariness of δ finishes the proof.

We do not know if Lemma 2.10 is a characterization, but in Proposition 3.6 we will characterize those points at which every generating operator attains its norm.

Proposition 2.11. Let X, Y be Banach spaces and let G ∈ L(X, Y) be a norm-one operator. Suppose that B_X = cl conv(dent(B_X)). Then, G is generating if and only if ‖Gx‖ = 1 for every x ∈ dent(B_X).

Corollary 2.12. Let X, Y be Banach spaces and let G ∈ L(X, Y) be a norm-one operator. Suppose that X has the Radon-Nikodým property. Then, G is generating if and only if ‖Gx‖ = 1 for every x ∈ dent(B_X).
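To see a typical situation where Corollary 2.12 applies, here is a short sketch (my own, in the real case for simplicity) of why the canonical unit vectors are denting points of the unit ball of ℓ₁:

```latex
% For the slice S = {x in B_{l1} : e_n^*(x) > 1 - \alpha} (0 < \alpha < 1) and any x in S:
\[
  \|x-e_n\|_1 = (1-x_n) + \sum_{k\neq n}|x_k|
  \;\le\; (1-x_n) + \bigl(1-x_n\bigr) \;<\; 2\alpha ,
\]
% so e_n lies in slices of arbitrarily small diameter, i.e. e_n \in dent(B_{\ell_1}).
% Since \ell_1 has the RNP, Corollary 2.12 says a norm-one G on \ell_1 is generating
% exactly when \|Ge_n\| = 1 for every n (this is Example 2.14 below).
```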
In the finite-dimensional case, the RNP comes for free and denting points and extreme points coincide. Therefore, the following particular case holds.

Corollary 2.13. Let X be a finite-dimensional space, let Y be a Banach space, and let G ∈ L(X, Y) be a norm-one operator. Then, G is generating if and only if ‖Gx‖ = 1 for every x ∈ ext(B_X).

The following particular case of Corollary 2.12 is especially interesting.

Example 2.14. Let Y be a Banach space and let G ∈ L(ℓ₁, Y) be a norm-one operator. Then, G is generating if and only if ‖Ge_n‖ = 1 for every n ∈ N.

When every point of the unit sphere of the domain is a denting point, Proposition 2.11 tells us that generating operators are isometric embeddings. Spaces with this property of the unit sphere are the average locally uniformly rotund (ALUR, for short) spaces. They were introduced in [20], and it can be deduced from [13, Theorem] that a Banach space is ALUR if and only if every point of the unit sphere is a denting point.

Corollary 2.15. Let X, Y be Banach spaces and suppose that X is ALUR. Then, every generating operator G ∈ L(X, Y) is an isometric embedding.

The next result gives another useful characterization of r-generating operators.

Theorem 2.16. Let X, Y be Banach spaces, let G ∈ L(X, Y) be a norm-one operator, let r ∈ (0, 1], and let A ⊂ B_{Y*} be a one-norming set for Y. Then, G is r-generating if and only if

max_{θ∈T} sup_{y*∈A} ‖G*y* + θx*‖ ⩾ 1 + r‖x*‖ for every x* ∈ X*.

Proof. If G is r-generating, fix x* ∈ X* and δ > 0; taking x ∈ att(G, δ) with |x*(x)| > r‖x*‖ − δ (which is possible by Proposition 2.1) and y* ∈ A with |y*(Gx)| > 1 − δ, a suitable choice of θ ∈ T aligning the phases allows us to write

max_{θ∈T} sup_{y*∈A} ‖G*y* + θx*‖ ⩾ |y*(Gx)| + |x*(x)| > 1 + r‖x*‖ − 2δ,

where the last inequality holds by Proposition 2.1. The arbitrariness of δ gives the desired inequality. To prove the converse, fix x* ∈ S_{X*} and δ > 0; it suffices to show that ‖x*‖_{G,δ} ⩾ r, by Proposition 2.1. We use the hypothesis for (δ/2)x* to get that

max_{θ∈T} sup_{y*∈A} ‖G*y* + θ(δ/2)x*‖ ⩾ 1 + rδ/2.

So, given 0 < ε < δ/2, there are y* ∈ A, θ ∈ T, and x ∈ B_X such that |y*(Gx) + θ(δ/2)x*(x)| > 1 + rδ/2 − ε; it follows that x ∈ att(G, δ) and |x*(x)| > r − 2ε/δ. The arbitrariness of ε gives ‖x*‖_{G,δ} ⩾ r, as desired.

Of course, one can always use A = B_{Y*} in Theorem 2.16 if no other interesting choice for A is available, and one still obtains a useful characterization of r-generating operators. In the case of generating operators, we emphasize the following result.

Corollary 2.17. Let X, Y be Banach spaces, let A ⊂ B_{Y*} be one-norming for Y, and let G ∈ L(X, Y) with ‖G‖ = 1. Then, the following are equivalent:
(i) G is generating.
(ii) G*(B_{Y*}) is a spear set of X*.
(iii) G*(A) is a spear set of X*.
(iv) max_{θ∈T} sup_{y*∈A} ‖G*y* + θx*‖ = 2 for every x* ∈ S_{X*}.
Only item (iv) is new, and it follows immediately from the following remark.

Remark 2.18. Let Z be a Banach space and F ⊂ B_Z. Then, F is a spear set if and only if max_{θ∈T} sup_{z∈F} ‖z + θz₀‖ = 2 for every z₀ ∈ S_Z.

Indeed, to prove the sufficiency, fix 0 ≠ z₁ ∈ X and observe that, by homogeneity, the triangle inequality allows us to deduce the required inequality for z₁ from the corresponding one for z₁/‖z₁‖. What we have shown is that it suffices to use elements x* ∈ S_{X*} in Theorem 2.16 when r = 1. However, the following example shows that this is not the case for any other value of 0 < r < 1.

Example 2.19. Let 0 < r < 1 be fixed, let X be the real two-dimensional Hilbert space, and let {e₁, e₂} be its orthonormal basis with {e₁*, e₂*} the corresponding coordinate functionals. The norm-one operator G ∈ L(X) given by G = r Id + (1 − r) e₁* ⊗ e₁ is not r-generating but satisfies the inequality of Theorem 2.16 for every x* ∈ S_{X*}. Observe that G attains its norm only at ±e₁, so Proposition 2.7 tells us that G is not r-generating (in fact, it is not s-generating for any 0 < s ⩽ 1).

If we are able to guarantee that G*(B_{Y*}) is a spear set of X*, Corollary 2.17 shows that G is generating. The most naive way to do so is to require that G*(B_{Y*}) ⊆ {λx₀* : λ ∈ K, |λ| ⩽ 1} for some x₀* ∈ S_{X*}. This obviously means that G is a rank-one operator; in this case, G*(B_{Y*}) is a spear set of X* if and only if x₀* is a spear vector of X*. In this particular case, Corollary 2.17 reads as follows.

Corollary 2.20. Let X, Y be Banach spaces, x₀* ∈ S_{X*}, and y₀ ∈ S_Y. Then, the rank-one operator G = x₀* ⊗ y₀ is generating if and only if x₀* ∈ Spear(X*).
Observe the similarity with [7, Corollary 5.9], which states that G = x₀* ⊗ y₀ is a spear operator if and only if x₀* is a spear functional and y₀ is a spear vector. Here the condition is easier to satisfy, of course.

2.1. Some stability results. The following result shows that the property of being generating is stable under c₀-, ℓ₁-, and ℓ∞-sums of Banach spaces.

Proposition 2.21. Let {X_λ : λ ∈ Λ} and {Y_λ : λ ∈ Λ} be families of Banach spaces, let G_λ ∈ L(X_λ, Y_λ) be norm-one operators, let E be c₀, ℓ₁, or ℓ∞, and let G ∈ L([⊕_λ X_λ]_E, [⊕_λ Y_λ]_E) be the diagonal operator given by G((x_λ)_λ) = (G_λ x_λ)_λ. Then, G is generating if and only if G_λ is generating for every λ ∈ Λ.

Proof. Suppose first that G is generating and, for fixed κ ∈ Λ, let us show that G_κ is generating. Observe that X = X_κ ⊕_E W, where W denotes the E-sum of the remaining summands. To prove the sufficiency when E = c₀ or E = ℓ∞, we have that B_X = cl conv(S_{X_κ} × S_W), and so, given T and ε > 0, we may find x₀ ∈ S_{X_κ} and w₀ ∈ S_W such that ‖P_κ T(x₀, w₀)‖ > ‖T‖ − ε. Take x₀* ∈ S_{X_κ*} with x₀*(x₀) = 1 and define the operator S ∈ L(X_κ, Y_κ) by Sx := P_κ T(x, x₀*(x)w₀), which satisfies ‖S‖ ⩾ ‖Sx₀‖ = ‖P_κ T(x₀, w₀)‖ > ‖T‖ − ε and ‖S‖_{G_κ} = ‖S‖ since G_κ is generating. Moreover, for fixed δ > 0, every x ∈ att(G_κ, δ) gives (x, x₀*(x)w₀) ∈ att(G, δ), so ‖T‖_{G,δ} ⩾ ‖S‖_{G_κ,δ}; hence ‖T‖_G ⩾ ‖S‖_{G_κ} = ‖S‖ > ‖T‖ − ε, and the arbitrariness of ε gives that ‖T‖_G ⩾ ‖T‖, as desired.

In the case when E = ℓ₁, for fixed δ > 0, consider the set ⋃_{λ∈Λ} att(G_λ, δ), each X_λ being viewed inside X; its elements belong to att(G, δ) and, by Corollary 2.3(iv), B_{X_λ} = cl conv(att(G_λ, δ)), where in the last equality we have used that G_λ is generating for every λ ∈ Λ. Therefore, B_X = cl conv(⋃_{λ∈Λ} att(G_λ, δ)) ⊆ cl conv(att(G, δ)), and the arbitrariness of δ gives that G is generating by Corollary 2.3(iv).

We next discuss the relationship of being generating with the operation of taking the adjoint. We show next that if the second adjoint is r-generating, then the operator itself is r-generating.

Proposition 2.22. Let X, Y be Banach spaces, let r ∈ (0, 1], and let G ∈ L(X, Y) be a norm-one operator. If G** is r-generating, then G is r-generating.

Proof. Given x* ∈ S_{X*}, δ > 0 and ε > 0, one obtains ‖x*‖_{G,δ} ⩾ r − ε, and hence ‖x*‖_{G,δ} ⩾ r since ε > 0 was arbitrary. So G is r-generating by Proposition 2.1.

We do not know if the converse of the above result holds in general, or even for r = 1. On the other hand, the following example shows that there is no good behaviour of the property of being generating with respect to taking one adjoint, as the property does not pass from an operator to its adjoint, nor the other way around. For any x ∈ S_{c₀} with x(1) ∈ T we have that ‖G(x)‖ = 1 and, consequently, x ∈ att(G, δ) for every δ > 0. Since such elements are enough to recover the whole unit ball of c₀ by taking the closed convex hull, G is generating by Corollary 2.3(iv).
• The adjoint operator G* ∈ L(ℓ₁, ℓ₁) is not generating by Example 2.14, since ‖G*e_n‖ < 1 for some n ∈ N.
• The second adjoint G** ∈ L(ℓ∞, ℓ∞) is again generating, following an argument analogous to the one used for G, using this time elements x ∈ S_{ℓ∞} with x(1) ∈ T.

2.2. Some examples in classical Banach spaces. Our aim here is to provide some characterizations of generating operators when the domain space is L₁(µ) or the range space is C₀(L), by making use of Corollary 2.17.

The question of which operators acting from L₁(µ) are generating leads to the study of spear sets in L∞(µ). We do so in the next result, which is valid for arbitrary measures.

Proposition 2.24 (Spear sets in B_{L∞(µ)}). Let (Ω, Σ, µ) be a positive measure space and let F ⊂ B_{L∞(µ)}. Then, the following are equivalent:
(i) F is a spear set of L∞(µ).
(ii) For every A ∈ Σ with µ(A) ≠ 0 and every ε > 0, there exist f ∈ F and B ⊆ A with µ(B) ≠ 0 such that |f(t)| > 1 − ε for every t ∈ B.

Proof. Suppose first that F is a spear set. Given A ∈ Σ with µ(A) ≠ 0 and ε > 0, since max_{θ∈T} sup_{f∈F} ‖f + θ𝟙_A‖∞ = 2, there exist f₀ ∈ F and θ₀ ∈ T such that ‖f₀ + θ₀𝟙_A‖∞ > 2 − ε and thus there exists B ⊆ A with µ(B) ≠ 0 such that |f₀(t)| > 1 − ε for every t ∈ B. To prove the converse implication, given x ∈ L∞(µ) and ε > 0, there is A ∈ Σ with µ(A) ≠ 0 such that |x(t)| ⩾ ‖x‖∞ − ε for every t ∈ A. By the hypothesis, there is a subset B of A with µ(B) ≠ 0 and f₀ ∈ F such that |f₀(t)| > 1 − ε for every t ∈ B. Now, thanks to the compactness of T, we can fix an ε-net T_ε of T; then we may find θ₀ ∈ T_ε and a subset C ⊆ B with µ(C) ≠ 0 such that |f₀(t) + θ₀x(t)| ⩾ |f₀(t)| + |x(t)| − ε(2 + ‖x‖∞) for every t ∈ C, and the arbitrariness of ε gives max_{θ∈T} sup_{f∈F} ‖f + θx‖∞ = 1 + ‖x‖∞.

As an immediate consequence, we get the following characterization of generating representable operators acting on L₁(µ).

Corollary 2.25. Let (Ω, Σ, µ) be a finite measure space, let Y be a Banach space, and let G ∈ L(L₁(µ), Y) be a norm-one operator which is representable by a function g ∈ L∞(µ, Y). Then, G is generating if and only if ‖g(t)‖ = 1 for almost every t ∈ Ω.

Remark 2.26. The restriction on the measure µ being finite in Corollary 2.25 can be relaxed to being σ-finite.
Indeed, given a σ-finite measure µ, there is a suitable probability measure ν such that L₁(µ) and L₁(ν) are isometrically isomorphic (see [3, Proposition 1.6.1], for instance).

Compare Corollary 2.25 with [7, Corollary 4.22], which says that a norm-one G ∈ L(L₁(µ), Y) which is representable by g ∈ L∞(µ, Y) is a spear operator if and only if it has the alternative Daugavet property, if and only if g(t) ∈ Spear(Y) for a.e. t ∈ Ω. It is then easy to construct generating operators from L₁(µ) which do not have the alternative Daugavet property: for instance, any representable operator whose density g satisfies ‖g(t)‖ = 1 a.e. while g(t) ∉ Spear(Y) on a set of positive measure.

2.2.2. Operators arriving to C₀(L). Let L be a Hausdorff locally compact topological space. It is immediate from the definition of the norm that the set A = {δ_t : t ∈ L} ⊂ C₀(L)* is one-norming for C₀(L). Hence, Corollary 2.17 reads in this case as follows.

Proposition 2.27. Let X be a Banach space, let L be a Hausdorff locally compact topological space, and let G ∈ L(X, C₀(L)) be a norm-one operator. Then, the following are equivalent:
(i) G is generating.
(ii) {G*(δ_t) : t ∈ L} is a spear set of X*.

We would like to compare the result above with [7, Proposition 4.2], where it is proved that G ∈ L(X, C₀(L)) has the alternative Daugavet property if and only if {G*(δ_t) : t ∈ U} is a spear set of X* for every open subset U ⊆ L. It is then easy to construct examples of generating operators arriving to C₀(L) spaces which do not have the alternative Daugavet property.

Generating operators and norm-attainment

We discuss here when generating operators are norm-attaining. On the one hand, it is shown in [7, Theorem 2.9] that every spear functional x* ∈ X* attains its norm. So rank-one generating operators also attain their norm, by Corollary 2.20.

Corollary 3.1. Let X, Y be Banach spaces and let G ∈ Gen(X, Y) be of rank one. Then, G attains its norm.

Besides, if B_X contains denting points, all generating operators with domain X are norm-attaining by Lemma 2.8. On the other hand, operators with the alternative Daugavet property are generating (see Example 2.4(4)), and there are operators with the alternative Daugavet property which do not attain their norm (see [7, Example 8.7]). The construction of the cited example in [7] is not easy at all, but we may construct easier examples of generating operators which do not attain their norm, even with rank two.

Example 3.2. Identify the real space ℓ₂² with C and consider the operator G ∈ L(L₁[0,1], ℓ₂²) represented by a function g ∈ L∞([0,1], C) such that |g(t)| = 1 for every t ∈ [0,1] and g covers a non-trivial arc of the unit circumference. Then, G is a rank-two generating operator which does not attain its norm.

Proof. Observe that G is generating by Corollary 2.25, as ‖g(t)‖ = 1 for every t ∈ [0, 1]. To prove that G does not attain its norm, recall that for an integrable complex-valued function f, the equality |∫₀¹ f(t) dt| = ∫₀¹ |f(t)| dt holds if and only if there is λ ∈ T such that f = λ|f| except on a set of zero measure. Suppose, to find a contradiction, that there is a non-zero x ∈ L₁[0, 1] satisfying ‖Gx‖ = ‖x‖. Then, as xg can be seen as a complex-valued function and we can identify the norm on ℓ₂² with the modulus in C, we have that |∫₀¹ x(t)g(t) dt| = ∫₀¹ |x(t)g(t)| dt. Therefore, there is λ ∈ T such that xg = λ|xg| = λ|x| except on a set of zero measure. But this is impossible, since x takes real values and g covers a non-trivial arc of the unit circumference.

Example 3.2 can be generalized to other two-dimensional spaces Y, but we need some assumptions on the shape of S_Y. If S_Y can be expressed as a finite or countable union of segments, then every generating operator G ∈ L(L₁[0,1], Y) attains its norm, leading to a complete characterization.

Proposition 3.3. Let Y be a real two-dimensional space. Then, the following are equivalent:
(i) S_Y is a finite or countable union of segments.
(ii) Every generating operator G ∈ L(L₁[0,1], Y) attains its norm.

Proof. (i) ⇒ (ii). If the assertions of (i) hold, any generating operator G ∈ L(L₁[0,1], Y) is representable by some g ∈ L∞([0,1], Y) with ‖g‖∞ = 1 and ‖g(t)‖ = 1 almost everywhere, by Corollary 2.25. Since S_Y is a finite or countable union of segments, we may find a partition π of [0, 1] into measurable subsets of positive measure such that, for every A ∈ π, g(A) is contained in a segment of S_Y almost everywhere. Then, for every ∆ ∈ π and every measurable subset A ⊆ ∆ of positive measure, consider x_A := 𝟙_A/|A|, where |A| denotes the Lebesgue measure of A, and let us show that G attains its norm at x_A. Indeed, as g(A) is contained in a segment of S_Y a.e., there exists y* ∈ S_{Y*} such that y*(g(t)) = 1 a.e.
in A; thus y*(Gx_A) = ∫_A y*(g(t)) x_A(t) dt = 1, and so ‖G(x_A)‖ = 1, as desired.

To prove (ii) ⇒ (i), suppose that S_Y cannot be written as a finite or countable union of segments, and let us construct a generating operator G ∈ L(L₁[0, 1], Y) not attaining its norm. Observe that the number of open maximal segments in S_Y is finite or countable, as S_Y is a curve of finite length in a two-dimensional space. Let ∆_n, n ∈ N, be the open maximal segments in S_Y and denote D = S_Y \ (⋃_{n∈N} ∆_n). Clearly, D is an uncountable compact metric subset of S_Y; hence it contains a homeomorphic copy of the Cantor set K [11, Chapter I], and so there exists an injective continuous function ϕ : K −→ D. Now, let us construct an injection from [0, 1] to K. To do so, recall that the Cantor set is the set of numbers of [0, 1] that have a triadic representation consisting purely of 0's and 2's, that is,

K = { Σ_{k⩾1} 2a_k/3^k : a_k ∈ {0, 1} }.

Recall also that every t ∈ [0, 1] has a dyadic representation t = Σ_{k⩾1} α_k(t)/2^k, where α_k(t) ∈ {0, 1}; this representation is unique except for a countable subset of [0, 1] consisting of those numbers with finite dyadic representation. Consider φ : [0, 1] −→ K given by

φ(t) = Σ_{k⩾1} 2α_k(t)/3^k,

where α_k(t) ∈ {0, 1} are the coefficients in the dyadic representation of t. The function φ is well-defined almost everywhere on [0, 1], injective, measurable, and its image lies in K. Then, the function g := ϕ ∘ φ : [0, 1] −→ D ⊂ S_Y is measurable and injective. Consider the operator G ∈ L(L₁[0,1], Y) represented by g, that is, Gx = ∫₀¹ x(t)g(t) dt. G is generating by Corollary 2.25, as ‖g(t)‖ = 1 almost everywhere, but it does not attain its norm. Indeed, suppose on the contrary that there is a non-zero x ∈ L₁[0,1] with ‖Gx‖ = ‖x‖. We may find y₀* ∈ S_{Y*} such that

y₀*(Gx) = ∫₀¹ x(t) y₀*(g(t)) dt = ∫₀¹ |x(t)| dt.

This equality implies the existence of a measurable subset A of [0, 1] with positive measure such that |x(t)| = x(t)y₀*(g(t)) for every t ∈ A; thus y₀*(g(t)) ∈ {1, −1} for every t ∈ A. Note that g(A) ⊆ {y ∈ D : y₀*(y) ∈ {1, −1}}. However, this leads to a contradiction. On the one hand, the latter set has at most four elements, as D does not contain open segments of S_Y. On the other hand, since g is injective and A has positive measure, g(A) has infinitely many elements. Thus, G cannot attain its norm.

The next example shows that, even in the case of norm-attaining operators, the set att(G) cannot be used to characterize when G is generating outside the case, covered by Proposition 2.5, when X is reflexive and G is compact.

Example 3.4. Let G ∈ L(X, Y) be a generating operator between two Banach spaces X and Y which does not attain its norm. Then, the operator G̃ ∈ L(K ⊕₁ X, K ⊕₁ Y) given by G̃(λ, x) = (λ, Gx) is generating by Proposition 2.21 and attains its norm, but cl conv(att(G̃)) has empty interior.

The following result characterizes the possibility of constructing a generating operator not attaining its norm acting from a given Banach space, somewhat extending Example 3.2.

Theorem 3.5. Let X be a Banach space. The following are equivalent:
(i) There exist a Banach space Y and a norm-one operator G ∈ L(X, Y) such that G is generating but att(G) = ∅.
(ii) There exists a spear set B ⊆ B_{X*} such that sup_{x*∈B} |x*(x)| < 1 for every x ∈ S_X.

Proof. (i) ⇒ (ii). Take B = G*(B_{Y*}); since G is generating, we can use Corollary 2.17 to deduce that B is a spear set. Besides, as G does not attain its norm, we have that sup_{x*∈B} |x*(x)| = ‖Gx‖ < 1 for every x ∈ S_X.

(ii) ⇒ (i). Consider the operator G ∈ L(X, ℓ∞(B)) given by (Gx)(x*) = x*(x) for x ∈ X and x* ∈ B. On the one hand, for x ∈ S_X, we have that ‖Gx‖ = sup_{x*∈B} |x*(x)| < 1. On the other hand, using that B is a spear set, for every ε > 0 we may find x* ∈ B with ‖x*‖ > 1 − ε, and so ‖G‖ ⩾ ‖x*‖ > 1 − ε. Therefore, ‖G‖ = 1 but the norm is not attained.

To show that G is generating, we start by claiming that, for every g ∈ ℓ₁(B) ⊂ ℓ∞(B)*, we have G*g = Σ_{x*∈B} g(x*) x*. Indeed, given g ∈ ℓ₁(B), observe that (G*g)(x) = g(Gx) = Σ_{x*∈B} g(x*) x*(x) for every x ∈ X. In particular, taking for g the canonical unit vector associated to x₀* ∈ B, we get G*g = x₀*. Therefore, by the arbitrariness of x₀* ∈ B, we get G*(B_{ℓ∞(B)*}) ⊃ B, so G*(B_{ℓ∞(B)*}) is a spear set and G is generating by Corollary 2.17.
The above proof, when read pointwise, allows us to give a characterization of those points at which every generating operator attains its norm.

Proposition 3.6. Let X be a Banach space and x₀ ∈ S_X. Then, the following are equivalent:
(i) ‖Gx₀‖ = 1 for every Banach space Y and every generating operator G ∈ L(X, Y).
(ii) sup_{x*∈B} |x*(x₀)| = 1 for every spear set B ⊆ B_{X*}.

(i) ⇒ (ii). Suppose that (ii) does not hold. Then, there is a spear set B ⊆ B_{X*} such that sup_{x*∈B} |x*(x₀)| < 1. The operator G ∈ L(X, ℓ∞(B)) given by (Gx)(x*) = x*(x) is generating (as shown in the proof of Theorem 3.5) and satisfies ‖Gx₀‖ = sup_{x*∈B} |x*(x₀)| < 1. Therefore, (i) does not hold.

The set of all generating operators

Our aim here is to study the set Gen(X, Y) of all generating operators between the Banach spaces X and Y. Recall, on the one hand, that Id_X ∈ Gen(X, X) for every Banach space X, so Gen(X, X) ≠ ∅ for every Banach space X. On the other hand, recall that Corollary 2.20 shows that Gen(X, K) = Spear(X*), so Gen(X, K) is empty for many Banach spaces X: those for which Spear(X*) = ∅, such as uniformly smooth spaces, strictly convex spaces, or real smooth spaces of dimension at least two (see [7, Proposition 2.11]). We will be interested in finding conditions ensuring that Gen(X, Y) is non-empty and, in those cases, in studying how big the set Gen(X, Y) can be. We start with an easy observation on Gen(X, Y).

Proposition 4.1. Let X, Y be Banach spaces. Then, Gen(X, Y) is a closed subset of L(X, Y).

Proof. Let {G_n} be a sequence in Gen(X, Y) converging to G₀ ∈ L(X, Y). Given x* ∈ S_{X*} and n ∈ N, we have

max_{θ∈T} sup_{y*∈B_{Y*}} ‖G₀*y* + θx*‖ ⩾ max_{θ∈T} sup_{y*∈B_{Y*}} ‖G_n*y* + θx*‖ − ‖G_n − G₀‖ ⩾ 2 − ‖G_n − G₀‖,

where the last inequality holds by Corollary 2.17, since G_n is generating. Now, it follows again from Corollary 2.17 that G₀ ∈ Gen(X, Y).

Next, we study the problem of finding out whether Gen(X, Y) is empty or not for the Banach spaces X and Y from two points of view: fixing the space Y and fixing the space X.

4.1. Gen(X, Y) when Y is fixed. We will show that for every Banach space Y there is a Banach space X such that Gen(X, Y) = ∅.

Proposition 4.2. For every Banach space Y there is a Banach space X such that Gen(X, Y) = ∅.

We need the following obstructive result for the existence of generating operators, which will serve our purpose. We are now able to provide the pending proof. For a Banach space X, let dens(X) denote its density character.

The above argument is based on the possibility of considering Banach spaces in the domain with a very big density character. It is then natural to raise the following question.

Question 4.4. Does there exist a Banach space Y with dens(Y) = Γ such that Gen(X, Y) ≠ ∅ for every Banach space X satisfying dens(X) ⩽ Γ?

This question is easily solvable for separable spaces. Indeed, the space Y = C[0, 1] contains isometrically every separable Banach space. Since isometric embeddings are generating, we get the following example. The question of whether the same trick works for all density characters is involved and depends on axiomatic set theory: on the one hand, assuming CH, ℓ∞/c₀ is isometrically universal for all Banach spaces of density character the continuum [17]; but, on the other hand, it is consistent that no such universal space exists [19], not even an isomorphically universal space, see [4].

For the moreover part, it is enough to see that extm(B_{X*}) is actually a James boundary for X, and so B_{X*} = cl conv(extm(B_{X*})) by [6, Theorem III.1].

Our next aim is to show that the set Gen(L₁(µ), Y) is quite big for every finite measure µ and many Banach spaces Y, and that in some cases it allows us to recover the unit ball of L(L₁(µ), Y) by taking the closed convex hull.

Theorem 4.10. Let (Ω, Σ, µ) be a finite measure space and let Y be a Banach space. Then, every norm-one representable operator in L(L₁(µ), Y) belongs to cl conv(Gen(L₁(µ), Y)). As a consequence, if Y has the RNP, then

B_{L(L₁(µ),Y)} = cl conv(Gen(L₁(µ), Y)).

Observe that the restriction on the measure µ to be finite can be relaxed to σ-finite, as in Remark 2.26.
The proof of the theorem follows immediately using Corollary 2.25 and the next lemma, which we do not know whether it is already known.

Lemma 4.11. Let (Ω, Σ, µ) be a finite measure space, let Y be a Banach space, and consider the set B := {g ∈ L∞(µ, Y) : ‖g(t)‖ = 1 for a.e. t ∈ Ω}. Then, B_{L∞(µ,Y)} = cl conv(B).

Step one. Let f ∈ S_{L∞(µ,Y)} and suppose that there are N ∈ N, numbers α₁ < ⋯ < α_N ∈ [0, 1], and pairwise disjoint subsets B_k ⊆ Ω with µ(B_k) ≠ 0 for k = 1, ..., N, such that ⋃_{k=1}^N B_k = Ω and ‖f(t)‖ = α_k for every t ∈ B_k and every k = 1, ..., N (observe that α_N = 1 as ‖f‖ = 1). Then, f can be written as a convex combination of 2^{N−1} functions in B. Indeed, we proceed by induction on N: for N = 1, the function f belongs to B. The case N = 2 gives the flavour of the proof. In this case we have that ‖f(t)‖ = α₁ for every t ∈ B₁ and ‖f(t)‖ = 1 for every t ∈ B₂. If α₁ ≠ 0, set λ₁ = (1 + α₁)/2 and λ₂ = (1 − α₁)/2, and define g₁ := f/α₁ and g₂ := −f/α₁ on B₁, with g₁ := g₂ := f on B₂. If otherwise α₁ = 0, fix y₀ ∈ S_Y, and define g₁(t) = y₀ and g₂(t) = −y₀ for every t ∈ B₁ (again with g₁ = g₂ = f on B₂ and λ₁ = λ₂ = 1/2). It is clear that in any case we have f = λ₁g₁ + λ₂g₂ and that g₁, g₂ ∈ B.

Suppose now that the result is true for N ⩾ 2 and let us prove it for N + 1. So, let f ∈ S_{L∞(µ,Y)} and suppose that there are numbers α₁ < ⋯ < α_{N+1} ∈ [0, 1] with α_{N+1} = 1, and pairwise disjoint subsets B_k ⊆ Ω with µ(B_k) ≠ 0 for k = 1, ..., N + 1, such that ⋃_{k=1}^{N+1} B_k = Ω and ‖f(t)‖ = α_k for every t ∈ B_k and every k = 1, ..., N + 1. Observe that, as N ⩾ 2, we have that α_N > 0. Then, we proceed as in the case N = 2 to write f as a convex combination of two functions f₁ and f₂, each of which takes at most N different values of the norm. So, we can apply the induction step to f₁ and f₂ to write each of them as a convex combination of 2^{N−1} functions in B. Therefore, the convex combination we are looking for is the resulting combination of 2^N functions in B, which finishes the induction process.

Step two. Every function f ∈ S_{L∞(µ,Y)} can be approximated by functions of the class described in the first step.

Let us now discuss the case of purely atomic measures. When µ is purely atomic and σ-finite (so L₁(µ) can be easily viewed as L₁(ν) for a suitable purely atomic and finite measure ν, see [3, Proposition 1.6.1], for instance), every operator in L(L₁(µ), Y) is representable for every Banach space Y (see [5, p. 62], for instance). So, Theorem 4.10 gives that B_{L(ℓ₁(Γ),Y)} = cl conv(Gen(ℓ₁(Γ), Y)) for every Banach space Y and every countable set Γ. Actually, the restriction of countability on the set Γ can be removed, and the proof in this case is much more direct.

Proposition 4.12. Let Γ be a non-empty set and let Y be a Banach space. Then, B_{L(ℓ₁(Γ),Y)} = cl conv(Gen(ℓ₁(Γ), Y)).

For finite-dimensional ℓ₁-spaces, we get a better result. The next result shows that the only finite-dimensional real spaces with this property are ℓ₁ⁿ for n ∈ N.

Proposition 4.14. Let X be a real Banach space with dim(X) = n and such that B_{L(X,Y)} = cl conv(Gen(X, Y)) for every Banach space Y. Then, X = ℓ₁ⁿ.

Proof. Proposition 4.8 tells us that X* is an almost CL-space, so n(X*) = n(X) = 1. Therefore, as X is real, the set ext(B_X) is finite by [16, Theorem 3.2]. Our goal is to show that ext(B_X) contains exactly 2n elements, as this clearly implies that X is isometrically isomorphic to the real space ℓ₁ⁿ. We suppose that ext(B_X) has more than 2n elements and we show that, in such a case, there is a Banach space Y (equal to X with a new norm) such that B_{L(X,Y)} ≠ cl conv(Gen(X, Y)). Since dim(X) = n and ext(B_X) has more than 2n elements, we may find {e₁, ..., e_n} ⊂ ext(B_X) linearly independent and e_{n+1} ∈ ext(B_X) satisfying e_{n+1} ∉ {±e_j : j = 1, ..., n}. For each j = 1, ..., n, as ext(B_X) is finite, we can pick f_j ∈ X* such that

1 = f_j(e_j) > c_j = max{f_j(x) : x ∈ ext(B_X) \ {e_j}}.
By Corollary 2.17, G*(B_{Y*}) is a spear set, so we can find a sequence {x*_n} in G*(B_{Y*}) and a sequence {θ_n} in T such that ‖θ_n x*_n + x₀*‖ → 2. Therefore, there is a sequence {x_n} in S_X satisfying Re x₀*(x_n) → 1 and |x*_n(x_n)| → 1. Since the norm of X* is Fréchet differentiable at x₀* ∈ S_{X*}, by Šmulyan's test we have that ‖x_n − x‖ → 0. Thus, we get |x*_n(x)| → 1, which contradicts (4).
10,743.6
2023-06-05T00:00:00.000
[ "Mathematics" ]
4-(2-Nitrobenzyl)-3-phenyl-3,4-dihydro-2H-1,4-benzoxazin-2-ol

The title compound, C21H18N2O4, crystallizes with two independent molecules (A and B) in the asymmetric unit. In both molecules the oxazine ring has an envelope conformation with the hydroxyl-substituted C atom as the flap. The nitrobenzyl ring and the phenyl ring are almost normal to the mean plane of the benzoxazine ring system, with dihedral angles of 85.72 (15) and 82.69 (15)°, respectively, in molecule A, and 85.79 (15) and 87.72 (15)°, respectively, in molecule B. The main difference in the conformation of the two molecules concerns the dihedral angle between the nitrobenzyl ring and the phenyl ring, viz. 79.67 (18)° in molecule A and 71.13 (18)° in molecule B. In the crystal, the A and B molecules are linked by an O—H⋯O hydrogen bond. These units are then linked via C—H⋯O hydrogen bonds, forming sheets lying parallel to (010). Further C—H⋯O hydrogen bonds link the sheets to form a three-dimensional network. There are also O—H⋯π and C—H⋯π interactions present, reinforcing the three-dimensional structure.

Table 1. Hydrogen-bond geometry (Å, °).

S1. Comment

Numerous natural and synthetic substances that have the core '1,4-benzoxazine' have been used in different fields of medicine. The 1,4-benzoxazine structure is an integral part of several naturally occurring substances. For example, various glycosides of the 2-hydroxy-2H-1,4-benzoxazine skeletons have been found to occur in gramineous plants such as maize, wheat, rye, and rice, and have been suggested to act as plant resistance factors against microbial diseases and insects (Ozden et al., 1992; Hartenstein & Sicker, 1994). Moreover, 3,4-dihydro-2H-1,4-benzoxazines have received a great deal of attention due to their wide range of biological and therapeutic properties (Ilas et al., 2005). For example, they have been investigated as antihypertensive agents (Touzeau et al., 2003), neuroprotective antioxidants (Largeron et al., 1999) and prostaglandin D2 receptor antagonists (Torisu et al., 2004). Herein, we report our results on the synthesis and the crystallographic study of 4-(2-nitrobenzyl)-3-phenyl-3,4-dihydro-2H-benzo[b][1,4]oxazin-2-ol, (I).

The molecular geometry and the atom-numbering scheme of the asymmetric unit are shown in Fig. 1. The asymmetric unit contains two molecules of (I). The crystal packing can be described as alternating connected layers parallel to the (001) plane along the c axis (Fig. 2).
These interactions link the molecules within the layers and also link the layers together, reinforcing the cohesion of the structure.

S3. Refinement

All H atoms were localized on Fourier maps but introduced in calculated positions and treated as riding on their parent atoms (C and O), with C—H = 0.97 Å (methylene), C—H = 0.93 Å (aromatic) or C—H = 0.98 Å (methine), O—H = 0.82 Å, and with Uiso(H) = 1.2Ueq(C_aryl, C_methine or C_methylene) and Uiso(H) = 1.5Ueq(O_hydroxy). In the absence of significant anomalous scattering effects, Friedel pairs have been merged. The number of Friedel pairs is 2686.

Figure 1. The title molecule (Farrugia, 2012) with the atomic labelling scheme. The displacement parameters are drawn at the 50% probability level.

4-(2-Nitrobenzyl)-3-phenyl-3,4-dihydro-2H-1,4-benzoxazin-2-ol

Crystal data

Special details

Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²)
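The ring-plane dihedral angles quoted in the Comment are standard mean-plane calculations. As a hedged illustration (a sketch with made-up coordinates, not the refined atomic positions from this structure), the angle between two ring planes can be computed from least-squares plane normals:

```python
import numpy as np

# Sketch: dihedral angle between two mean planes, from their unit normals.
# The coordinates below are random placeholders, not crystallographic data.

def plane_normal(points: np.ndarray) -> np.ndarray:
    """Least-squares mean-plane normal of a set of atomic positions (N x 3)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    return np.linalg.svd(centered)[2][-1]

def dihedral_deg(n1: np.ndarray, n2: np.ndarray) -> float:
    """Angle between two planes (0-90 deg), from the angle between normals."""
    c = abs(np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

ring_a = np.random.default_rng(0).normal(size=(6, 3))   # toy "ring" coordinates
ring_b = np.random.default_rng(1).normal(size=(6, 3))
print(f"{dihedral_deg(plane_normal(ring_a), plane_normal(ring_b)):.2f} deg")
```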
1,227.8
2014-07-11T00:00:00.000
[ "Chemistry" ]
Comparison and Evaluation of Bank Efficiency in Austria and the Czech Republic

This article compares and evaluates the efficiency of the banking sector in Austria and the Czech Republic in the period 2004-2011. The paper is divided into the following parts. It begins with a literature review dealing with bank efficiency generally and then with the efficiency of the banking sector in the chosen countries. The second section provides an overview of the methodology used: the non-parametric Data Envelopment Analysis (DEA) with an undesirable output is used for estimating efficiency. The undesirable output is usually omitted in the current literature. CCR and BCC models, which differ in their returns-to-scale assumptions, were used simultaneously. Section three summarizes the results, discusses them and compares the estimated efficiency rates in both states. This study also attempts to identify the main sources of inefficiency.

INTRODUCTION

Each country should try to build the most advanced banking system possible because, according to the study by Berger, Hassan and Klapper 2004 (An International Analysis of Community Banking and Economic Performance), the better the banking system a state has, the more competitive the state is. Their study found a significant relationship between the efficiency of the banking system and GDP growth (tested in 49 states). This result is also confirmed by Wachtel (2003) and by Kohler and Cecchetti (2009). According to them, a well-functioning banking system enables a better allocation of resources and investments. In the current, strongly competitive financial environment it is necessary to operate as efficiently as possible and not to carry unnecessary extra costs. The aim of this paper is to measure economic efficiency in Austria and the Czech Republic in the period 2004-2011, to compare and discuss the results, and to find the most important sources of inefficiency.

The term 'efficiency' can be understood as technical efficiency: producing the maximum output from a given set of inputs (having a good combination of inputs and outputs). The other type of efficiency is allocative efficiency, where optimal inputs and outputs are chosen based on market prices. Economic efficiency is then a combination of both of the above-mentioned concepts: the ability to choose the inputs and outputs so as to optimize the economic goal, usually to maximize profits and to minimize costs.

Measuring the level of economic efficiency of the banking system can help to identify the performance of the measured units and whether there is room for improvement. These measurements may provide valuable information for bank managers in their decision making. Inefficient banks have, according to the articles by Fiordelisi, Marques-Ibanez and Molyneux (2010), Williams (2004) and Altunbas et al. (2007), a tendency to take risky steps, which are dangerous for the entire financial system. Furthermore, the authors found that banks reaching high productivity operate with lower costs and do not tend to carry out operations that involve moral hazard (raising short-term funds to finance long-term activities). Banks with a balanced capital structure (financing short-term activities with short-term funds and long-term activities with long-term funds, and having a sufficient capital adequacy ratio) can afford to take on business with higher risk.
Standard performance indicators exist for estimating efficiency, for example ROA (return on assets), ROE (return on equity), ROI (return on investment) and other financial-analysis indicators. All these indicators have a big disadvantage: to evaluate bank efficiency it is necessary to compare many results and to know the recommended ranges within which the results are good or bad.

Most authors in the literature review deal with the estimation of banking systems in one country; only some authors have compared the efficiency of banking systems in several states. More recent studies concentrate on Asian countries and emerging markets (Barros, Managi, Matousek 2011; Yung-Ho Chiu, Chin-Wei Huang, Chung-Te Ting 2011; Fukuyama, Weber 2009). Authors have only rarely analysed bank efficiency in CEE countries, and only a few have analysed the efficiency of the Czech banking sector (Stavárek, Řepková 2011; Stavárek, Polouček 2004; Staněk 2010; Matoušek, Taci 2005; Taci, Zampieri 1998). All these works used parametric or nonparametric techniques without including risk (undesirable outputs). In this article, the undesirable output will be included as a risk factor. Indeed, all the former studies investigating bank efficiency in transition countries aim at estimating bank performance for transition countries, without any comparison to Western European countries. Staněk (2010) estimated efficiency in the Czech Republic and Austria, but he used Stochastic Frontier Analysis. To my knowledge, no author has yet compared efficiency in CEE countries and in Austria, even though all these countries are very closely connected because of the many Austrian branches (7) and subsidiaries (6) in these countries.

METHODOLOGY AND DATA

There are currently two basic methods of efficiency estimation: parametric (econometric) and nonparametric (mathematical programming). In both cases the measured efficiency is compared with the 'best practice frontier' within the group of investigated DMUs (decision-making units; in this study one DMU is one bank).

The most frequently used parametric method is SFA (Stochastic Frontier Analysis). This method has a big disadvantage: the model must be exactly specified. DEA (Data Envelopment Analysis) is a nonparametric method which quantifies efficiency in a single number; the frontier is formed as a piecewise linear combination of best-practice observations. The nonparametric approach is more suitable for ranking bank efficiency (Kamecka 2010; Apergis 2011; Holod and Lewis 2011; Ševčovič, Halická, Brunovský 2001).
An advantage of the DEA model is the identification of the sources and level of inefficiency for inefficient DMUs (Stavárek, Řepková 2011). One more advantage is that the technique works without the need for standardisation. Classical DEA models, described in Charnes, Cooper, Rhodes (1978), rely on the assumption that inputs are to be minimized and outputs maximized (outputs are maximized for given inputs in output-oriented models, and inputs are minimized for given outputs in input-oriented models). Many authors have applied this methodology in their articles: Casu and Molyneux (2000) investigated bank efficiency in the EU after accession; Fiordelisi and Marques (2010) examined bank risk and efficiency; Ševčovič, Halická and Brunovský (2001) investigated the level of performance of bank branches in Slovakia; and Stavárek and Řepková (2011) estimated the efficiency of the Czech banking industry. All these papers used the simple DEA CCR (constant returns to scale) and BCC (variable returns to scale) models. The simple models ignore undesirable outputs, but it is necessary to decrease these 'bad outputs' and increase the desirable outputs to improve the performance of a DMU. In bank accounting, 'loan loss provisions' are considered an undesirable output: a non-cash expense through which banks account for estimated potential losses on loan defaults in the loan portfolio and on single deals.

In this paper an output-oriented model is used, which maximizes output levels without increasing inputs. Banks usually cannot set their inputs independently and rather have to respect given market prices (salaries, deposit interest rates, etc.) (Kamecka 2010).

In the model there are n DMUs (banks) to be evaluated, indexed by j = 1, …, n. The input and output vectors of DMU j are X_j = (x_1j, …, x_ij) and Y_j = (y_1j, …, y_ij). In this article the indirect approach is used, i.e. a transformation of the undesirable outputs: we set d_i = max_j(y_ij) + 1 as the constant for recalculating the undesirable outputs to positive-sign values:

ψ_ij = d_i − y_ij, (1)

where ψ_ij are the transformed undesirable outputs; UO = undesirable outputs, DO = desirable outputs, I = inputs. The undesirable outputs are now positive, so we can consider them as normal outputs and maximize them:

max θ_q + ε (Σ_i s_i⁺ + Σ_i s_i⁻) (2)

subject to Σ_j λ_j x_ij + s_i⁻ = x_iq for all inputs, Σ_j λ_j y_ij − s_i⁺ = θ_q y_iq for all (desirable and transformed undesirable) outputs, λ_j ⩾ 0, (3)

where λ are intensity variables that form linear combinations of observed inputs and outputs, with variable returns to scale imposed by the constraint Σ_j λ_j = 1; θ_q is the degree of efficiency of the virtual unit (the system looks for the combination of virtual inputs and outputs which is better or worse than the inputs and outputs of the estimated unit); s_i⁺, s_i⁻ are slacks (distances from the production possibility frontier); and ε is an infinitesimal constant which ensures the inclusion of all inputs and outputs in the model at least at this value; it is usually 10⁻⁸.

The DMU is efficient if (x, y) ∈ T. In this situation, no smaller input can produce the same output, and the same input cannot produce more of any single output (Fukuyama, Weber 2009). Efficient units have efficiency equal to 1. Units with a higher level of measured efficiency are not efficient and have to improve their inputs, desirable outputs and undesirable outputs as follows: x_iq* = x_iq − s_i⁻* and y_iq* = θ_q* y_iq + s_i⁺*, where all symbols with * are the vectors of optimal values of the models.

The dataset for both states was obtained from the Bankscope (Bureau van Dijk) database. From each state, the 9-12 biggest banks by total assets were selected. The estimated dataset comprises approximately 75-80% of the whole market.
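To make the model concrete, here is a minimal, hedged sketch of the output-oriented envelopment program in Python (my own toy implementation with made-up data, not the paper's code; the slack terms and the ε constant are omitted for brevity, so it computes only the radial score θ):

```python
import numpy as np
from scipy.optimize import linprog

# Output-oriented DEA with an undesirable output handled by the indirect
# transformation psi = d - y from Eq. (1). All data below are toy numbers.
X = np.array([[3.0, 5.0], [2.0, 4.0], [4.0, 6.0], [3.5, 4.5]])   # inputs
Y = np.array([[4.0], [3.0], [5.0], [4.0]])                        # good outputs
U = np.array([[1.0], [0.5], [2.0], [1.5]])                        # loan loss provisions

d = U.max(axis=0) + 1.0
Yall = np.hstack([Y, d - U])      # transformed bad outputs are now maximized

def efficiency(q: int, vrs: bool = False) -> float:
    n = X.shape[0]
    c = np.zeros(n + 1); c[0] = -1.0                 # linprog minimizes -theta
    # input constraints: sum_j lam_j * x_ij <= x_iq
    A_in = np.hstack([np.zeros((X.shape[1], 1)), X.T])
    b_in = X[q]
    # output constraints: theta * y_rq - sum_j lam_j * y_rj <= 0
    A_out = np.hstack([Yall[q][:, None], -Yall.T])
    b_out = np.zeros(Yall.shape[1])
    A_eq = np.hstack([[0.0], np.ones(n)])[None, :] if vrs else None  # BCC: sum lam = 1
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.concatenate([b_in, b_out]),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
    return res.x[0]                                   # theta >= 1; 1 means efficient

for q in range(4):
    print(f"DMU {q}: CCR theta = {efficiency(q):.3f}, BCC theta = {efficiency(q, vrs=True):.3f}")
```

The CCR variant omits the convexity constraint (constant returns to scale), while passing vrs=True adds Σλ = 1 and yields the BCC (variable returns to scale) scores, mirroring the two models used in the paper.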
The paper focuses only on commercial banks; other specialized banks (central banks, investment banks, securities houses, etc.) were not included in this study. The selected inputs were personnel costs, deposits and fixed assets; the outputs were net interest revenue and loans, with loan loss provision as the undesirable output. This selection is consistent with the intermediation approach: in its traditional role, the bank collects deposits and funds from clients with a money surplus and distributes these funds to those who lack money for their investments and other needs, making a profit on these deals. The same approach and variables were also used by Apergis (2011), Holod, Lewis (2011), Andrie, Cocris (2010) and Stavárek (2004).

All data were taken from unconsolidated financial statements with annual periodicity. All data were adjusted for inflation (2005 = 100%). For the conversion of results into Czech crowns, the exchange rate from 31.12.20XX was used. The selected period was 2004-2011. In 2004 the Bank for International Settlements introduced the Basel II regulation; all financial institutions had to gradually adjust their financial statements, information and accounting systems, and methodology to this rule, and Basel II went into effect in 2008. Both states cooperate economically very closely. Austrian income from investment in the Czech Republic amounted to 1.363 mil. € in 2011 (income from all EU states: 7.705 mil. €) (source: Einkommen aus österreichischen Direktinvestitionen im Ausland nach Regionen, Oesterreichische Nationalbank). Austrian banks started to expand into the Czech Republic after the political changes in 1989, opening new branches and subsidiaries there. The main connections between these banking systems run through Erste Group, Raiffeisen Bank and Bank Austria; these banks hold approximately 35% of the total assets of the Czech banking sector (source: Banky a pobočky zahraničních bank, Česká národní banka). Both states have a universal banking system, with accounting reported under international accounting standards (all the banks estimated here prepared their financial statements according to IFRS). The dataset for the estimation was taken from the Bankscope database; the balance sheet date of all chosen banks was 31.12.20XX. Some banking system characteristics and macroeconomic indicators are presented below.

Results and Discussion

The comparison between the Czech and Austrian banking systems over the period 2004-2011 is displayed in Tab. 2 and Fig. 2. Efficient units have the score z_i* = 100%. The greater the distance between the achieved efficiency level and the 100% border, the more inefficient the unit is. The efficiency of the Czech banking sector is higher than that of the Austrian banking sector in all estimated years and in both models, CCR and BCC, with undesirable output (except the year 2006 in the BCC model).
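The deflation step described above can be sketched as follows; the CPI series and the bank figures are invented placeholders, not official statistics:

```python
# Illustrative deflation of nominal bank data to 2005 prices (2005 = 100%).
import pandas as pd

cpi = pd.Series({2004: 97.2, 2005: 100.0, 2006: 102.5, 2007: 105.4},
                name="cpi")                     # assumed price index, 2005 = 100
nominal = pd.DataFrame({"year": [2004, 2005, 2006, 2007],
                        "personnel_costs": [410., 425., 450., 470.]})

# Real values in constant 2005 prices
nominal["real_2005"] = nominal["personnel_costs"] / nominal["year"].map(cpi) * 100.0
print(nominal)
```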
Smaller and larger banks exist on the market, and they influence the whole banking sector differently because of their different market strength. Because of that, the SAE was used. It recalculates the data set, weighting each unit's score by its asset share:

SAE = Σ_i w_i · z_i*,

where w_i are weights according to the asset ratio in the estimated file and z_i* is the efficiency level of the analysed unit. The SAE also confirms that the Czech banking sector is more efficient. After recalculation according to the SAE, all efficiency values in all years and for both states deteriorated. These results show that it is mainly the biggest banks, which have the highest market power, that are more inefficient than smaller banks. Bigger banks very often have higher costs for corporate governance and operations.

After this conversion, efficient units have the value 100% and inefficient units lie in the range (0, 100%), which makes it more visible how efficient a unit is. (Tab. 2 reports the resulting yearly CCR, BCC, SAE and g* scores for CZE and AUT over 2004-2011.)

Fig. 2 shows that in 2004 banks in both states had approximately equal levels of efficiency, around 80% (CCR model, g*). For historical reasons, Austria has one of the most developed banking systems in the EU. The favourable economic situation in Austria in the first half of 2004 was driven primarily by exports, while investment activity and consumer spending rose at a subdued pace. The decision to invest in CEE countries was a very good step for Austrian banks. Banking markets in CEE benefit from strong demand for banking products, and in 2004 the subsidiaries in CEE generated 40% of the operating income of their parent companies in Austria (source: Financial Stability Report 2004, Oesterreichische Nationalbank). Increasing energy and oil prices and insufficient domestic demand decreased the efficiency of the Austrian banking system in the following years. Declining exports, increasing oil prices and a rising number of households unable to repay their loans caused a deterioration in bank efficiency (the number of personal bankruptcies increased from 5.2% to 16%). The number of new loans was going down. The main risks were new loans denominated in foreign currencies (32% of new loans were in foreign currencies, mainly Swiss francs). A favourable economic environment in 2006 caused a reduction of non-performing loans in Austria and had a very positive effect according to the CCR model with undesirable output. In the course of 2007, sustained financial turmoil worldwide led to a downward revision of the economic outlook for both industrialized and CEE countries. These imbalances, and also the relatively large share of domestic foreign currency loans, contributed to a further increase in both interest rate and exchange rate risks, which partly materialized in the following years. Financing via quoted shares almost dried up, and growth in bond-based financing slowed from a high level. Investment decreased in the adverse economic environment. Interest grew further in 2007, as did expenses for new loans because of the higher risk premium. Despite the unfavourable economic situation in 2008, the efficiency of Austrian banks increased; according to the CCR model the growth was 4%. The reason was still-increasing loans in the first half of the period and increasing bank profits (banks were able to pass on higher interbank rates arising from tight liquidity conditions). In 2009, the impact of the crisis on company balance sheets became increasingly visible and external financing also contracted significantly. Furthermore, the conditions and terms for approving loans (interest margins, collateral requirements, the size and maturity of loans granted, loan covenants as well as non-interest charges) were all markedly tightened during this period.
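The exact SAE expression was garbled in extraction; the sketch below implements the asset-weighted reading suggested by the definitions of w_i and z_i*, using invented scores and asset figures:

```python
# Minimal sketch of the asset-weighted aggregation (SAE) described above.
import numpy as np

z_star = np.array([105., 120., 160., 140.])   # unit scores, 100% = efficient (assumed)
assets = np.array([900., 400., 250., 150.])   # total assets per bank (assumed)

w = assets / assets.sum()                     # weights by asset ratio
sae = np.sum(w * z_star)                      # size-adjusted sector score
print(f"SAE = {sae:.1f}%")                    # large banks dominate the aggregate
```

Because the largest banks carry the largest weights, a high SAE relative to the unweighted mean signals that inefficiency is concentrated in the biggest units, which is the pattern the text reports.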
The rising risk costs represent a sizeable burden for the profitability of the Austrian banking system. Overall, the economic recovery, and with it an increase in bank efficiency, is visible in 2010 in the BCC model. This growth was primarily export-led, with continued sluggish momentum in domestic demand, which had to do with low lending expansion, among other factors. The undesirable output, loan loss provision, also decreased, which had a positive effect on banking sector efficiency in Austria. In the BCC model, a further decrease of efficiency is visible in 2011. Austrian banks exhibit very high foreign currency exposure in Austria and in CESEE, and the exchange rate risk materialized in 2011 (the Swiss franc appreciated against the euro and most CESEE currencies depreciated against the euro). The structural weaknesses affecting Austrian banks' domestic performance were, however, offset by the continued comparatively favourable performance in CESEE. In the first half of 2011, the operating profits of Austrian banks' subsidiaries, mainly interest income, rose marginally against the same period of the previous year. At the same time, credit risk provisions diminished, so that semi-annual profits were substantially higher than 2010 first-half profits. According to the CCR model the efficiency slackened in 2011; nevertheless, the BCC model (variable returns to scale) describes the real situation in the economy more closely. Both models were used in the study so that the results could confirm each other.

The starting position of the Czech banking sector was at the same level of efficiency according to the CCR model with undesirable output, and at a better level than Austria according to the BCC model with undesirable output. After the privatisation of the large banks and the clearing of their balance sheets of bad debts by transfer and sale to transformation institutions, Czech banks were in good financial condition. The Czech Republic's ratio of total banking sector assets to GDP, at 99.8%, is very high in comparison with other new CEE member states of the EU. This is a sign of a relatively developed banking sector, although this ratio is also decreasing for the Czech Republic. The development of ever more sophisticated products, services, sales channels and internal banking processes increased efficiency in the following years according to the BCC model with undesirable output. The traditional macroeconomic sustainability indicators recorded strongly positive developments in 2007, and bank efficiency rose accordingly. The public budget deficit fell to 1.6% of GDP and the ratio of public debt to GDP decreased to 28.7%. The current account deficit declined to 2.5% of GDP, while the surplus on the output balance increased. Client deposits remain the biggest source of financing for bank loans; at the end of 2007, they were 1.3 times higher than client loans, which, in turn, is more than twice the average of the original EU member countries. Deterioration came in 2008 and 2009, when the financial crisis also affected the Czech Republic. During the financial crisis the situation in the Czech financial market was generally stabilised (visible in the slight increase of inefficiency in the CCR model), although low liquidity, weak activity and higher volatility persisted in the money market. The deterioration of Czech banking system efficiency since 2009 was caused by the recession associated with the collapse of some large debtors, losses from securities holdings in the event of renewed financial market turmoil (e.g. due to restructuring of the sovereign
debt of some over-indebted euro area countries), potential liquidity problems in the building society sector, and the impact of new regulatory initiatives, mainly indirectly via links to parent companies abroad. The Czech banking sector also had sufficient capital during the crisis and its banks remained highly profitable. Following quite a robust recovery in 2010, the global economy recorded a modest slowdown in 2011. Weak economic growth and high unemployment formed a vicious circle from which it was difficult to escape. The Czech Republic maintained positive economic growth in 2011, but this growth gradually slowed. Despite the decline in the unemployment rate, the income situation of households deteriorated; real wage growth was among the lowest in recent history. This unfavourable situation affected the credit risk of households, and the impact was visible in the amount of loans provided. Loans are the main product of a bank according to our model, and because of this decrease the efficiency rate of the Czech banking sector also fell in 2011. Czech branches and subsidiaries still form an important part of Austrian bank profits on a consolidated basis today.

In Fig. 2 (BCC model) the one-year delay in the onset of the financial crisis is visible: 2007 in the Czech Republic versus 2006 in Austria. Between 2006 and 2007 the decline Δg* was about 10% in Austria and about 6% in the Czech Republic, so the financial crisis was apparent. The number of efficient units in Austria is smaller than in the Czech Republic in both models, and in 2010 and 2011 only one efficient unit was found in Austria, although there were more Austrian banks in the tested file (Tab. 3).

Tab. 3 - Descriptive statistics of measured units. Source: Author's calculation.

Tab. 3 presents the most frequently used descriptive statistics of relative efficiency. As mentioned, the Czech Republic had more efficient bank units than Austria in both models. Czech banks also show a smaller standard deviation than Austrian banks, so they are more similar and homogeneous. The 'Max' values in the table indicate this as well: in the Czech banking sector the 'Max' values are not as high as in the Austrian sector. The software for inefficiency estimation also determines the possible virtual inputs and outputs of each unit, i.e. the values the unit would need in order to lie on the efficiency frontier.
According to the analysis of virtual inputs and outputs, the most significant slacks (gaps between the real and the virtual input/output) were found in the size of fixed assets. Bigger banks in particular mostly hold buildings and other significant items of tangible assets that are not always fully utilized. In addition, banks have a significant number of branches, and the operating costs of running these branches are large. Another source of inefficiency is personnel costs: in Austria, 10% of total personnel costs were spent in excess; in the Czech Republic, 6%. High salaries of bank management and large broker provisions are the major problems. Nevertheless, both these inputs form only a small part of the bank balance sheet. While this is important information for improving efficiency and decreasing costs, the main source of inefficiency was deposits. Their inefficient utilization rate was 18% in Austria and 11% in the Czech Republic (the sketch after this section illustrates how such excess percentages follow from the virtual inputs). The ratio was bigger in Austria because of the existence of a high number of small and medium regional branches which are capitalized above average but cannot invest as needed; their free liquidity is not fully used and they do not have well-developed liquidity risk management. At the time of writing, many Austrian banks are undertaking extensive analyses, and some (Volksbank) have closed nearly half of their branches in Austria to cut operating expenses.

The analysis of virtual outputs detected a big gap between virtual outputs and real outputs. According to the results, both sectors should significantly increase their business, especially loans provided and interest margin achieved. An increase in loans is connected with an increase in loan loss provisions, because a new portfolio of loans would surely give rise to some new loan loss provisions; the virtual output loan loss provision should therefore also be increased to reach the efficiency frontier. According to the analysis, Austria should increase nearly twice as much as the Czech Republic. Of course, in the real market situation it is not so easy to double the amount of loans provided.

Comparing efficiency in the CCR and BCC models with and without undesirable output would not reflect the real situation on the market, because some loan loss provisions arise from every loan portfolio; omitting the undesirable output would not give a realistic description of the market situation. For this reason, efficiency was not computed with the two simple CCR and BCC models without undesirable output.

Empirical analyses by other authors studying the same countries brought similar results. Stavárek (2004) used the SFA method (profit and cost functions) and DEA (simple CCR model) and determined effectiveness in 2003 of around 80%. Stavárek, Řepková (2011) measured the performance of Czech banks at around 77%. They separated Czech banks into three groups (small, medium and large) and found that big banks had the worst efficiency, about 20% lower than small and medium-sized banks. Staněk (2010) used the SFA method and investigated the performance of the banking sectors in the Czech Republic and Austria in 2000-2009; according to him, the efficiency of Czech banks was between 67% and 97% (performance improved over the period) and in Austria 91-99%. According to Staněk, the main reason for the difference between the efficiencies of the Czech and Austrian banking sectors was the better-educated management of Austrian banks at the beginning of the investigated period.
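The virtual-unit comparison described above can be expressed simply: given the optimal intensities λ* from the DEA programme, the virtual input of the evaluated bank is the λ*-weighted combination of the observed banks, and the slack is the gap to the bank's own value. A sketch with invented numbers:

```python
# Sketch of the virtual-input and slack computation described above.
# lambda_star would come from the DEA solution; all values here are invented.
import numpy as np

X = np.array([[3., 5., 4.],        # fixed assets of 3 observed banks (assumed)
              [8., 7., 9.]])       # deposits of the same banks (assumed)
lambda_star = np.array([0.5, 0.3, 0.2])   # optimal intensities for bank q (assumed)
x_q = np.array([5., 9.])                  # bank q's own inputs

virtual_inputs = X @ lambda_star          # frontier benchmark for bank q
slack = x_q - virtual_inputs              # input excess relative to the frontier
excess_pct = 100 * slack / x_q            # e.g. share of deposits used inefficiently
print(virtual_inputs, slack, excess_pct)
```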
Conclusion

The aim of this paper was to estimate the level of bank efficiency in the Czech Republic and in Austria over the period 2004-2011 and to compare them. Two models were used for the survey: CCR and BCC with undesirable output. 'Loan loss provisions' were chosen as the undesirable output. The selected inputs were personnel costs, deposits and fixed assets; the outputs were loans and net interest revenue. Every year, the survey covered 19-24 banks, accounting for about 75% of the whole market in the estimated states. According to both models, the performance of the Czech banking sector was better than that of the Austrian banking sector. The Czech Republic also had more efficient units than Austria during the estimated period. According to the CCR model, the efficiency of both banking sectors was nearly at the same level in 2004 and, in accordance with this constant-returns-to-scale model, decreased significantly until 2007 (with a much larger decrease in Austria than in the Czech Republic). In 2008 came a slight improvement of efficiency in Austria, followed in the remaining estimated years by renewed deterioration in both states. The BCC model showed a better position of the Czech banking sector at the beginning of the period in 2004, but this deteriorated the next year. In Austria, the earlier onset of the financial crisis is visible compared with the Czech Republic: efficiency in Austria descended in 2007 and in the Czech Republic one year later. Except for the slight increase in the efficiency level in 2010, performance decreased for the rest of the estimated period. The Czech banking system was not affected as much as the Austrian one and was more stable. In both states, the biggest source of inefficiency was a huge amount of poorly managed free client deposits. Another source of inefficiency was not fully utilised fixed assets: banks own many buildings and have many branches. Nevertheless, many clients still demand personal contact at the branch (face to face with an account officer), and for this psychological reason even 'virtual banks' have begun to build branch networks. A further source was personnel costs, which were used in a more ineffective way in Austria than in the Czech Republic. Nevertheless, both models found that banks should greatly increase the amount of loans they provide (a finding also supported by the large amount of free deposits). Of course, in the real market situation it is not so easy to double the amount of loans provided. In general, the Austrian banking system was affected by the financial crisis much more than the Czech banking sector.

Fig. 2 - Efficiency level measured by CCR and BCC model with undesirable output. Source: Author's calculation.

Tab. 1 - Bank sector indicators. Source: International Monetary Fund.
... in the Czech Republic, which is why we could suppose a lower efficiency of the Czech banking system. On the other hand, the Czech Republic achieved a better return on assets and equity because of higher margins and a less saturated market. The interest margin is very similar in both states, except for 2011 in Austria (the ratio was much higher because of low income in that year). The Czech Republic has lower non-interest expenses in all years. In the estimated countries, liquidity and interest margins are at approximately the same level, with the Czech Republic more stable in this indicator. The source of investment is primarily local deposits. Because of the traditionally less risky behaviour of Czech banks, these banks also hold only a small number of open currency positions, which is not very different from Austria; the banks closed most open FX positions for fear of the impact of financial crisis risk. Despite small differences, it can be concluded that the Czech Republic and Austria are economically comparable systems.

Fig. 1 - Macroeconomic indicators 2008-2011. Source: International Monetary Fund.

Fig. 1 shows some macroeconomic indicators for both estimated states, averaged over the years 2008-2011. The employment rate is at the same level, GDP per capita in PPS is better in Austria, and the inflation rate is also nearly identical. Private debt is higher in Austria (as usual in Western countries). The current account balance is about zero in both countries. The real effective exchange rate is higher in the Czech Republic. The other two indicators, the Gini coefficient and net lending/borrowing, are nearly the same. Surprisingly, the Gini coefficient is slightly higher in Austria (despite the progressive income tax rate and other government social policies).

Tab. 2 - Efficiency level measured by CCR and BCC model with undesirable output. Source: Author's calculation.
Tenacity of Enterococcus cecorum at different environmental conditions

Our aim was to analyse the survival of Enterococcus cecorum (EC) at various temperatures and relative air humidities and on different substrates commonly found in broiler houses.

Introduction

Enterococcus cecorum (EC) is a Gram-positive, facultatively anaerobic, non-spore-forming bacterium. It can be isolated from many animals as well as from humans (Devriese et al. 1991a; Delaunay et al. 2015; Jung et al. 2017b), and it is known to be a commensal of the gut, especially of chickens (Devriese et al. 1991a,b). However, since 2002, disease outbreaks caused by EC have been reported in broiler chickens (Wood et al. 2002), and it is considered an emerging avian pathogen today (Jung et al. 2017a). Affected chickens show lameness or hock-sitting due to spondylitis and osteomyelitis of the 6th thoracic vertebra compressing the spinal cord, especially around week 5 (broilers) to week 13 (broiler breeders), and increased mortality (Stalker et al. 2010; Makrai et al. 2011; Borst et al. 2012). It is assumed that EC persists in the environment (De Herdt et al. 2009) and may infect subsequent broiler production cycles. Nonetheless, the source in the broiler houses has not yet been found (Borst et al. 2017). Consecutive outbreaks of pathogenic EC imply a farm-associated environmental or biological reservoir (De Herdt et al. 2009).

Tenacity can be defined as the robustness of micro-organisms to defined exogenous factors (Von Sprockhoff 1979). In the Anglo-American literature, the term 'tenacity' is uncommon; instead, terms such as 'resistance', 'sensitivity' or 'survival' are used (Egen 2000). Survival can be understood as maintaining viability under disadvantageous circumstances (Roszak and Colwell 1987). Enterococcus sp. generally have the ability to tolerate low temperatures such as 5 or 15°C; indeed, such temperatures favour their survival (Cools et al. 2001). However, many enterococci can also endure high temperatures of 45 up to 60°C, which distinguishes them from closely related genera such as the streptococci (Moreno et al. 2006). For example, Enterococcus faecalis and Enterococcus faecium were frequently isolated from environmental samples collected in poultry houses and from samples of affected birds such as tissues and swabs (Dolka et al. 2017). They persisted in the environment even under unfavourable conditions such as high temperature, drought or chemicals (Bradley and Fraise 1996; Cools et al. 2001; Liu et al. 2018).

Until now, not much has been known about the resistance of EC to environmental factors. The only study so far to examine the survival of EC demonstrated that it can grow at low temperatures (4, 10°C), survives at 60°C for 1 h, and that some strains survive at 70°C for 15-30 min (Dolka et al. 2016). Air humidity is known to affect the survival of bacteria in the environment (Gundermann 1972), but observations linking air humidity and the survival of EC are not available. Other Enterococcus species show resistance to environmental stresses, as demonstrated for E. faecalis with desiccation survival times of more than 60 days (Hartel et al. 2005; Lebreton et al. 2017). There are contradictory findings for Enterococcus sp.: on the one hand, these bacteria can persist for more than 11 weeks under desiccation (Bale et al. 1993); on the other hand, another study reports that drought stress leads to population decline in Enterococcus sp. (Cools et al. 2001). Different substrates are found in poultry houses.
Polyvinyl chloride (PVC) is a polymer and one of the most used synthetic plastics in the world (Vesterberg et al. 2005). It can be found in drinker cups and feeders in poultry houses. Survival of E. faecalis and E. faecium isolated from humans and from hospital environments was observed on PVC for up to 4 months (Wendt et al. 1998).

Broiler litter was investigated for its microbial composition by culture and molecular detection methods (Lu et al. 2003). In total aerobic bacterial counts, enteric bacteria such as enterococci accounted for only 0.1%; when 16S rDNA sequences were analysed, 2% of the sequences were assigned to the group of Enterococcaceae, including EC. Litter can be described as a mix of feather particles, faeces, feed and bedding components such as wood shavings in broiler houses (Kuntz et al. 2004). Enterococcus faecalis was rarely isolated from broiler litter, accounting for fewer than 2% of the isolates, but a high number of unspecified enterococci were detected. Enterococci such as E. faecalis, E. faecium and other unspecified Enterococcus sp. were isolated from poultry litter over a 120-day period, indicating high persistence in that matrix (Graham et al. 2009).

Dust is very common in poultry houses and originates from feed, feather components, bedding material and faeces (Carpenter 1986). Decay, amount, composition and formation of airborne particles like dust are influenced by many factors such as temperature and air humidity. Most bacterial micro-organisms isolated from dust are Gram-positive cocci (Hartung and Saleh 2007). EC has not yet been isolated from dust. However, it is suspected to invade via the respiratory tract (Kense and Landman 2011; Jung and Rautenschlein 2014), so that an infection via inhaled dust particles could theoretically be possible (Kense and Landman 2012; Jung et al. 2018).

The aim of the present study was to characterize the survival of specific EC isolates on litter, dust and PVC at different air humidities (32% RH, 78% RH) and temperatures (15, 25, 37°C), as can be found in the environment of broiler chickens at different stages of the production cycle.

Bacterial strains

For the tenacity studies, we used the pathogenic EC isolate 14/086/4/A (Jung et al. 2017a), designated EC14 in the following. The strain was isolated from the heart of a broiler with EC septicaemia. For selected substrate-temperature-air humidity combinations, the commensal EC strain 13/655/3/B (EC13), isolated from the intestine of a broiler chicken from a production cycle without classical EC infection, was also investigated (Jung et al. 2017a). In addition, two other pathogenic strains, 15/827/1/A (EC15) and 14/166/2/A (EC14.2) (Jung et al. 2017a), both isolated from the hearts of chickens, were tested in one selected combination of environmental parameters (PVC, 25°C, 32% RH). All strains were stored at -80°C using the cryobank system (Mast Diagnostica GmbH, Reinfeld, Germany). Before each experiment, one bead was thawed, plated on Columbia agar with sheep blood (COLSB; Oxoid Deutschland GmbH, Wesel, Germany) and incubated in a CO2-enriched atmosphere at 37°C for 24 h. Bacterial colonies from this plate were used for subcultivation on a further COLSB plate followed by another incubation step.
Subsequently, bacterial colonies were suspended in sterile physiological saline solution in a flat-bottom glass test tube with a Steristopper® (Heinz Herenz Medizinalbedarf GmbH, Hamburg, Germany), diluted to a McFarland standard of 3.3 (10^9 CFU per ml) and used for spiking the substrates.

Substrates

PVC (white PVC, Alt Industriebedarf, Neresheim, Germany) was cut into pieces of 1.0 × 1.0 × 0.1 cm, washed three times with distilled water, rinsed with 70% ethanol and dried in a laminar flow cabinet until inoculation (modified according to Egen 2000). Litter-faeces mixture ('litter'; bedding consisting of wood shavings) and dust were collected in a broiler house (Ross 308) during different production cycles on the same farm in Lower Saxony, Germany, at the third or fourth week of life. The litter was always collected at five locations along a fictive line near the walls on both sides and along another fictive line in the centre of the broiler house. Dust was obtained from window ledges, heating installations and pipelines at different locations. The litter was pooled, shredded in a commercial kitchen blender (model number 17956-56; Russell Hobbs Essentials Ltd, Manchester, UK), filled into plastic bags and autoclaved at 121°C for 15 min. The pooled dust was filtered with a sieve (1.0 × 1.0 mm opening), filled into 500 ml bottles and likewise autoclaved at 121°C for 15 min. Aliquots of 1.0 g litter or 0.5 g dust were weighed into glass test tubes with Steristoppers® and autoclaved again before use.

Inoculation

In a laminar flow cabinet, PVC plates were placed on blotting paper. The PVC samples were each inoculated with 10 µl (10^9 CFU) of the bacterial suspension. After 1 h of drying, PVC plates were transferred to glass test tubes and the tubes were sealed with Steristoppers (modified according to Egen 2000). For litter and dust, aliquots of 1.0 or 0.5 g were inoculated with 10 µl (10^9 CFU) of the bacterial suspension, dried in the open glass test tubes for 1 h and then sealed likewise. For each sampling time point, test tubes were processed in triplicate. Each trial included one sample inoculated with 10 µl of sterile physiological saline solution and another non-inoculated sample of the respective substrate as negative controls.

Experimental setup

All glass test tubes were transferred to desiccators (DWK Life Sciences GmbH, Wertheim, Germany) with a defined microclimate. Two different air humidities were created by saturated saline solutions (O'Brien 1948; OIML 1996; Gillespie et al. 2000; Lu and Chen 2007). To establish an air humidity of 32% RH, a saturated solution of anhydrous calcium chloride (AppliChem GmbH, Darmstadt, Germany) in water was prepared (O'Brien 1948), autoclaved at 121°C for 15 min and filled into a desiccator. The filled desiccator was pre-incubated for at least 72 h before the placement of samples (Gundermann 1972; Cimiotti 1980). The relative air humidity was confirmed using a digital hygrometer (TFA Dostmann GmbH & Co. KG, Wertheim-Reicholzheim, Germany). Sodium chloride (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) was used likewise to generate an air humidity of 78% RH. For every substrate and air humidity, three different temperatures, 15, 25 and 37°C, were tested (Cimiotti 1980; Egen 2000). The applied conditions were inspired by possible environmental conditions in a broiler production cycle (Aviagen 2018, accessed July 2020).
Although the typical parameters during a production cycle range between 50 and 70% RH for the relative air humidity and from 29-31°C down to 20°C for the temperature (Aviagen 2018, accessed July 2020), we tried to cover the whole range of environmental conditions to illustrate the potential differences more clearly. An overview of all temperature-humidity-substrate-strain combinations used is given in Table 1. Samples were removed in triplicate from the desiccator at defined time points, including at least 0, 24, 48, 72, 96 and 120 h post-inoculation for each trial. A set of three samples was processed directly after inoculation and 1 h of drying time, before placement in a desiccator, as the time point '0 h'. Additional time points (up to 4272 h) were included based on previous trials but chosen individually per sample and course of the trial (see also Table S1).

Determination of total viable count

Each glass test tube was filled with 3 ml (litter, dust) or 1 ml (PVC) of sterile physiological saline solution (Cimiotti 1980; Egen 2000). The tubes were shaken at a G-force of 2 g at room temperature for 15 min. After shaking, a 10-fold serial dilution of each sample was prepared. 100 µl of each dilution was plated on COLSB plates in duplicate and incubated at 37°C for 48 h in a CO2-enriched atmosphere. One test tube of each negative control was processed in the same manner at the first sampling date. After incubation, colony forming units (CFU) of all evaluable dilution levels were counted with ProtoCOL3 (Synbiosis, Cambridge, UK), and for each dilution level the count per millilitre was calculated. The count limit was set at 10-300 CFU per counting frame. The mean value for each time point was calculated from all counts (n = 3) at the sample dilution level. Multi-factor analysis of variance (ANOVA) was chosen (P ≤ 0.05) to evaluate the influence of the factors temperature and air humidity with regard to the time points and substrates (SAS-EG 7.1; SAS Institute Inc., Cary, NC). Two-sample t-tests were performed per time point and factor combination to compare strains and substrates (P ≤ 0.05). For analysing all three substrates in one test, one-way ANOVA with the least significant difference test and the Ryan-Einot-Gabriel-Welsch test as post-hoc tests was chosen (P ≤ 0.05).

Results

The measured bacterial survival times ranged from 48 to 4272 h (178 days) for EC14, depending on the substrate-humidity-temperature combination (Table 1). Using the linear regression model, even longer survival times of up to over 19 000 h (on litter) could be calculated, taking the coefficient of determination into consideration.

Litter

The shortest survival time of EC14 on litter was detected at 37°C and 32% RH, with 120 h, followed by the conditions of 37°C and 78% RH, with a survival time of 168 h (Fig. 1a, Table 1). Comparing the temperatures, EC14 generally survived for the shortest time at 37°C on litter. At 25°C, EC14 could be isolated for longer, with a measured survival time of up to over 3000 h depending on the relative air humidity (Fig. 1b). In contrast to 37°C, at 25°C EC14 survived longer at a relative air humidity of 32% RH than at 78% RH, with a time difference of over 500 h between the last measured time points (Table 1). The longest survival on litter was at 15°C (Fig. 1c). Survival times of 2784 h (78% RH) and 4272 h (32% RH) were measured before the trials had to be finished (Table 1). Similar to 25°C, EC14 was detected for longer at 32% RH than at 78% RH, with a time difference of over 1400 h (Fig. 1c).
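As a numerical illustration of the plate-count arithmetic described above, the sketch below converts duplicate colony counts at one dilution level into CFU per ml (counts within the 10-300 window, multiplied by the dilution factor and divided by the plated volume). The counts themselves are invented:

```python
# Illustration of the viable-count arithmetic described above.
# Counts and dilution level are invented example values.
import numpy as np

plated_volume_ml = 0.1          # 100 µl plated per plate
counts = [112, 124]             # duplicate plate counts at one dilution level
dilution_level = 4              # 10-fold serial dilution, i.e. factor 10**4

if all(10 <= c <= 300 for c in counts):          # evaluable counting window
    cfu_per_ml = np.mean(counts) * 10**dilution_level / plated_volume_ml
    print(f"{cfu_per_ml:.2e} CFU per ml")        # ~1.2e7 CFU per ml here
```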
PVC

The shortest survival time of EC14 on PVC was found at 37°C and 32% RH (Fig. 2a, Table 1). At 78% RH and the same temperature, EC14 could be detected 24 h longer (Fig. 2a). At 25°C, the bacteria could be isolated from the samples for up to 240 h (78% RH) and 336 h (32% RH), respectively (Fig. 2b). The longest survival time was found at 15°C, with 432 h (78% RH) and 2280 h (32% RH), respectively (Fig. 2c, Table 1). At 15°C and 78% RH, a higher number of CFU per ml was isolated at the last measurable time point in comparison with the other trials on PVC (3 log10 CFU per ml compared with about 2 log10 CFU per ml) (Fig. 2c). As for litter, the survival time of EC14 on PVC was longer at the higher relative air humidity (78% RH) only in combination with the highest tested temperature of 37°C (Fig. 2a-c).

Dust

For dust, two trials were conducted with the strain EC14 (Fig. 3). EC14 survived for 120 h at 37°C and 78% RH (Fig. 3). At 15°C and 32% RH, the trial was finished at 2784 h (Fig. 3). The time difference between the last measured time points was more than 2664 h (Table 1).

Comparisons of substrates

Overall, the longest survival time of EC14 was detected on litter, followed by dust and then PVC, depending on the combination of temperature and relative air humidity (Table 1). The significant differences per time point between the substrates are summarized in Table S1 (P ≤ 0.05).

Comparisons of EC strains

Different EC strains were tested under specific conditions (Figs. 4 and 5). The pathogenic EC isolate 14/086/4/A (EC14) was compared with the non-pathogenic EC isolate 13/655/3/B (EC13) on PVC and litter at 25°C and two different relative air humidities (Fig. 4). The pathogenic EC14 survived longer regardless of the conditions (Fig. 4). At 25°C and 32% RH, EC14 was isolated for up to 3288 h on litter, whereas the non-pathogenic EC13 was detected for up to 2784 h under the same conditions (Fig. 4a). At the same relative air humidity and temperature on PVC, EC14 was detected 240 h longer than EC13 (Fig. 4b, Table 1). At 78% RH on litter, EC14 survived for 600 h and EC13 for 96 h (Fig. 4c), the last measured time points differing by over 500 h. At 25°C and 32% RH, both strains survived longer on litter than on PVC, with a difference of 2952 h (EC14) and 2688 h (EC13) between the last measured time points (Fig. 4a,b, Table 1). At 25°C on litter, both isolates were detected for longer at 32% RH than at 78% RH, with a time difference of 2688 h for each strain (Fig. 4a,c). At 25°C and 32% RH, both strains (EC14 and EC13) and the two additional pathogenic EC strains EC14.2 and EC15 were tested on PVC (Fig. 5). The measured survival time for EC15, as for EC14, amounted to 432 h, but more CFU per ml of EC15 were detected at the last measured time point (Fig. 5). The strain EC14.2 survived for 336 h, followed by the non-pathogenic EC13 with 168 h (Fig. 5). Again, the significant differences per time point between the strains are summarized in Table S1 (P ≤ 0.05). When comparing the pathogenic strain EC14 and the non-pathogenic strain EC13, the significant differences per time point did not display a consistent pattern (P ≤ 0.05). At all time points, the pathogenic EC15 differed significantly from the other two pathogenic strains and the non-pathogenic strain (P ≤ 0.05).

Discussion

We investigated the ability of different EC strains to survive at different temperatures and relative air humidities on PVC, litter and dust. These substrates are commonly found in broiler flocks.
We tested the enterococcal survival time not only for one pathogenic strain but also for one non-pathogenic and two other pathogenic strains under selected conditions. To our knowledge, this is the first extensive study focusing on the survival of EC under different environmental conditions.

Temperature and relative air humidity

Regarding temperature, it was generally found that the lower the temperature, the longer the survival time of EC14. This is in accordance with the literature, where enterococci are postulated to survive better at low than at high temperatures (Cools et al. 2001; Dolka et al. 2016). Temperature was a very important influencing factor and significantly affected the number of surviving bacteria at nearly all of the measurable and calculable time points (Table S1). These findings suggest that the bacterial survival time depends to a large extent on the environmental temperature. In comparison with other bacteria, enterococci such as EC have high durability. One study compared the survival of E. coli with that of Enterococcus sp. (Cools et al. 2001): E. coli could survive for up to 68 days at 5°C, whereas Enterococcus sp. remained constant at 5°C for a period of 80 days. In our study, EC could be detected for 18 days (432 h) up to 178 days (4272 h) at 15°C, depending on the substrate and the relative air humidity. Bacteria such as E. coli also show longer survival times at lower temperatures, as demonstrated in our study for EC14 (Milling et al. 2005; Moretro et al. 2010; Chen et al. 2018). For example, E. coli could be isolated for only 7 h from pine-wood sawdust at 37°C, but for over 144 h (6 days) at 4°C (Milling et al. 2005). On plastic chips, survival times of over 144 h were shown for every temperature, but the bacterial titre at the last measured time point was over 3 log10 CFU per gram higher at 4°C than at 37°C (Milling et al. 2005). The same applies to Salmonella sp. (Pietronave et al. 2004; Vinnerås 2007; Chen et al. 2018). In composts, it can be found for over 168 days at 5°C and for up to 91 days at 22°C, depending on the inoculum level. In manure, no significant reduction of Salmonella sp. was found at 4°C, but a decimal reduction time of 25 days was shown at 14°C (Vinnerås 2007). However, it has to be considered that bacterial sensitivity to low temperatures varies widely and depends on the bacterial population (Postgate and Hunter 1963; Mackey 1984; Wesche et al. 2009).

Figure 3 - Enterococcus cecorum isolate 14/086/4/A (EC14, pathogenic) survival (log10 CFU per ml; mean of triplicate samples) plotted over time, grouped by the factor 'dust' when exposed to temperatures at different relative air humidities. Trials marked with * were finished before complete EC die-off.

For the relative air humidity, the tendency was partly dependent on the temperature. EC14 was inactivated faster at the high relative air humidity (78% RH) in combination with the temperatures 15 and 25°C. Only at 37°C was the EC decrease faster at the low relative air humidity, in contrast to the other results. Enterococci are known to be able to survive under dry conditions (Bale et al. 1993; Hartel et al. 2005; Lebreton et al. 2017), but drought stress should also lead to a population decline in enterococci (Cools et al. 2001). The results of our study are in accordance with these former findings when the different survival times under the various environmental conditions are taken into consideration.
However, since especially non-pathogenic EC strains are known as gut commensals of chickens, one explanation could be that EC can survive under conditions similar to the intestinal environment, with a high relative humidity (RH) such as 78% RH and at temperatures such as 37°C, close to the chicken body temperature of 40°C (Lidwell and Lowbury 1950; Menezes-Blackburn et al. 2015). Similarly to other enterococci, total inactivation of E. faecalis was observed at an RH close to 85% and 25°C within 30 min (Robine et al. 2000). The same trend was found for Bordetella avium and Campylobacter jejuni (Cimiotti 1980; Egen 2000). In contrast, other bacteria such as Salmonella sp. survived longer under humid conditions (Stine et al. 2005; Blessington et al. 2014; Margas et al. 2014). For E. coli, there is contradictory information regarding its survival time under different relative humidities (Milling et al. 2005; Stine et al. 2005; Moretro et al. 2010).

Besides the substrates tested in our study (litter, PVC, dust), other substrates are found in broiler houses, such as concrete floors, stainless steel on the drinking and feeding lines along with PVC, chick paper in the first days, bricks on the walls and, optionally, wood or plastics other than PVC. We selected only the most common substrates found in nearly all broiler houses. For the other substrates, our data could indicate a positive or negative impact on the survival of EC, because several substrates share similar characteristics. However, additional trials have to be conducted to obtain detailed and substantiated data on the ability of EC to survive on those substrates. It is known that other enterococci can be isolated from the walls and floors of broiler houses (Borgen et al. 2000). Isolation of E. faecium from other substrates such as feed equipment, the floor and walls, heaters, wooden partitions and scales in broiler houses was also successful (Garcia-Migura et al. 2007). Vancomycin-resistant E. faecalis and E. faecium survived for up to several weeks on stainless steel, whereas on copper alloys only a short survival time of 1 h was detected (Robine et al. 2000; Warnes et al. 2010).

When comparing the survival times of the tested EC strains on our tested substrates, survival was longest on litter and shortest on PVC. Dust, the third tested substrate in our trials, was excluded from several temperature-humidity combinations because the measured survival times of EC14 on dust lay between those on PVC and litter under the same conditions. One possible reason for the different survival times may be the different textures of these substrates (Heijnen et al. 1992; England et al. 1993). On a smooth surface like PVC, the bacteria are more exposed to the environmental conditions than on a substrate like litter, which consists of many small particles; the risk of air-drying damage could be higher on substrates like PVC, for example (Potts 1994; Billi and Potts 2002). Another possible reason may be the different water absorption abilities of the substrates (Billi and Potts 2002; Hanczvikkel and Tóth 2018). PVC, as a synthetic plastic, has a very low ability to absorb, let alone store, water (Kiani et al. 2011; Rimdusit et al. 2012). In contrast, litter can absorb water easily and deposit it (Deininger et al. 2000; Miles et al. 2011; Dunlop et al. 2015). Consequently, bacteria on litter most likely have a greater amount of essential water available than on PVC and can thus survive longer.
The nutrient amount is another factor in the survival of the enterococci, especially on used litter (Wehunt et al. 1960; Patterson et al. 1998). Other authors have confirmed a longer persistence of bacteria in the presence of organic components (Mallmann and Litsky 1951; Hirai 1991; Jawad et al. 1996). Additionally, it was suggested that a low survival rate of E. faecalis on PVC could be due to the high reactivity of PVC, with the presence of oxidizing sites, the release of hydrochloric acid and the presence of additives (Robine et al. 2000). A further point could be the influence of the dissemination of the inoculum on the survival time, including different surface tensions on the various substrates and the absorption of the inoculum (Egen 2000).

Because of the potential resistance of the tested EC strains, trials lasting unexpectedly longer than 2000 h had to be terminated in our experiments. However, the linear regression model was used to estimate the final die-off of the tested EC strains (Table 1). The calculated survival times of the tested EC strains ranged from over 71 h to nearly 20 000 h (over 2 years). The greatest difference between the calculated and the measured survival time amounted to nearly 15 000 h, under the conditions with the longest survival time at 15°C and 32% RH on litter. To assess this vast difference, the associated coefficients of determination were calculated. These varied between 0.483 and 0.988, indicating a variable goodness of fit. For all but one of the terminated trials, the coefficient of determination reached over 0.7 or even over 0.9, so the linear regression model had an acceptable goodness of fit for these trials, and the associated assumed survival times can be considered realistic. Nevertheless, linear regression is not always the most suitable model for calculating bacterial survival, as discussed in previous sources (Geeraerd et al. 2005).

After inoculation of the substrates, an initial 1-h drying procedure of the inoculum was conducted. This initial drying procedure reduces the amount of CFU in the sample, as also reported by other researchers (Wendt et al. 1998; Egen 2000; Redfern and Verran 2017). To compensate for the different drying losses and to compare the different trials, the CFU per ml recovered from the substrate samples directly after the drying time were logarithmized, and percentage change values of these data were used for further analysis (Table S1). Furthermore, it has to be considered that a culture-dependent method was used for determining viable counts. Some enterococci, such as E. faecalis, E. hirae and E. faecium, are known to be able to enter a viable but non-culturable (VBNC) state in response to environmental stress (Lleò et al. 2001; Oliver 2010; Gin and Goh 2013). However, this has not yet been investigated for EC. Consequently, it has to be taken into consideration that EC could have entered this state at some point during the trials and would then no longer have been detectable by the methods used. Other methods such as live staining may be of interest for further research (Heim et al. 2002).

EC strains

On comparing the different EC strains under the same conditions, the three pathogenic EC strains survived longer than the non-pathogenic one. Differences between the pathogenic strains were found with regard to longer survival times or higher isolation rates. Therefore, we conclude that pathogenicity may have an influence on the survival ability of EC.
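The die-off extrapolation described above can be illustrated as follows: fit a straight line to log10 CFU per ml versus time, report the coefficient of determination, and solve for the time at which the fit crosses a detection limit. The data and detection limit below are invented placeholders, not values from the study:

```python
# Illustrative log-linear die-off extrapolation with goodness of fit (R^2).
# Time points and counts are invented; the study's own data differ.
import numpy as np

t = np.array([0., 24., 48., 96., 168., 336.])          # h post-inoculation
log_cfu = np.array([7.1, 6.8, 6.2, 5.5, 4.6, 3.1])     # log10 CFU per ml

slope, intercept = np.polyfit(t, log_cfu, 1)           # linear fit
pred = slope * t + intercept
r2 = 1 - np.sum((log_cfu - pred)**2) / np.sum((log_cfu - log_cfu.mean())**2)

detection_limit = 1.0                                  # assumed log10 CFU per ml
t_dieoff = (detection_limit - intercept) / slope       # extrapolated die-off time
print(f"R^2 = {r2:.3f}, extrapolated die-off at ~{t_dieoff:.0f} h")
```

As the text cautions, such an extrapolation is only as trustworthy as the linearity assumption behind it, which is why the coefficients of determination were reported alongside the calculated survival times.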
Survival and/or multiplication of different bacteria in protected sites has been correlated with pathogenicity (Wilson et al. 1999). A correlation between virulence and durability for bacterial and viral pathogens such as Bordetella pertussis, Streptococcus pneumoniae or influenza virus is occasionally reported (Dennis and Lee 1988; Walther and Ewald 2004). A possible explanation could be the genetic diversity among the pathogenic strains (Potts 1994; Gastmeier et al. 2006). In contrast, other authors do not assume such a correlation, as they found no differences (Wendt et al. 1998; Neely and Maley 2000; Kramer et al. 2006). For enterococci, a detailed study of such a potential correlation has not yet been conducted. The mentioned genetic diversity was also found for E. faecalis (McBride et al. 2007), but not necessarily for EC (Jung et al. 2018). There are some reports of diversity among pathogenic EC, but a greater similarity among the pathogenic strains compared with the commensal strains is assumed (Wijetunge et al. 2012; Jung et al. 2018). However, two enterococcal traits are known, namely a relatively high durability (Bale et al. 1993; Cools et al. 2001; Hartel et al. 2005; Lebreton et al. 2017) and the frequent genetic exchange of putative virulence factors via conjugative plasmids and transposons (Descheemaeker et al. 1999; Angulo et al. 2006; Hammerum 2012; Jung et al. 2018). Such genes typically confer traits that provide survival advantages to organisms in unusual environments, such as virulence factors (Eberhard 1989; Jett et al. 1994). The ability to exchange genes can itself be considered an expression of virulence (Jett et al. 1994). Consequently, a correlation between the high durability and the pathogenicity or virulence of enterococci could be possible.

In broiler houses, a relative air humidity of even under 25% RH can be found in houses with whole-house heating and nipple drinkers, although a higher relative air humidity of about 60% ± 10% RH, depending on the temperature, is recommended (Aviagen 2018, accessed July 2020). As mentioned before, the typical parameters during a production cycle range between 50 and 70% RH for the relative air humidity and from 29-31°C down to 20°C for the temperature, depending on the age of the broilers (Aviagen 2018, accessed July 2020): the older the broilers, the colder and drier the environmental conditions should be. In this study, conditions beyond the recommended profile were also tested to illustrate the potential influences of the parameters on the survival of EC more clearly. Additionally, in this context, not only the parameters during the housing of the broilers but also those in the downtime between cycles may be important. The demonstrated possible longevity of pathogenic EC strains can pose a risk of carry-over and infection of subsequent broiler cycles in the same house, particularly when cleaning and disinfection are performed inadequately (Heuer et al. 2002). Therefore, the choice of cleaning and disinfection agents and procedures is important. Since a high temperature combined with a low relative air humidity over a couple of days is disadvantageous for EC, these conditions may be useful to consider when aiming to eliminate EC in broiler houses. At the beginning of a broiler production cycle, it is recommended to preheat the broiler house to achieve an air temperature of around 30°C with an RH of 60-70% (Aviagen 2018, accessed July 2020). These environmental conditions are partly similar to those in this study.
At conditions of 25 or 37°C with 78% RH, EC survived between 72 and 600 h, depending on the substrate. Hence, if carry-over of EC occurs, the bacteria will survive long enough in the preheated house to infect the broiler chickens. By prolonging the vacancy period between two broiler production cycles in combination with disadvantageous conditions, better control of EC outbreaks may be possible. Further research is needed to test these considerations.
Light power resource availability for energy harvesting photovoltaics for self-powered IoT

As the Internet of Things (IoT) expands, the need for energy-efficient, self-powered devices increases, and so a better understanding of the available energy resource is necessary. We examine the light power resource availability for energy harvesting photovoltaics (PV) in various environments and its potential for self-powered IoT applications. We analyse light sources, considering spectral distribution, intensity and temporal variations, and evaluate the impact of location, seasonal variation and time of day on light power availability. Additionally, we discuss human and building design factors, such as occupancy, room aspect, sensor placement and décor, which influence light energy availability and therefore the power available to IoT electronics. We propose a best-case and a non-ideal scenario in terms of light resource for energy harvesting and, using a commercially available organic PV cell, show that the energy yield generated and available to the IoT electronics can be anywhere between 0.7 mWh and 75 mWh per day, depending on the lighting conditions.

Introduction

Billions of Internet of Things (IoT) devices are predicted to be installed within the coming decade, and nearly half of those devices will be deployed indoors [1]. The improvement in functionality of these devices has resulted in a decrease in overall power consumption and has coincided with the development of several photovoltaic (PV) technologies, such as dye-sensitized solar cells [2], organic solar cells [3] and perovskite solar cells [4], that are highly efficient at harvesting ambient light, opening the possibility of developing perpetually self-powered, battery-less IoT. One of the advantages these newer generations of PVs have over more established technologies is the ability to tune the spectral responsiveness of the PV device to the varied light conditions [5] found in homes [6-8], offices [9], factories [10,11], hospitals [12-14], retail stores [15,16] and other indoor locations [17-24].

The types of sensors employed in IoT vary widely in both their function and power consumption. These include temperature sensors (lowest reported power consumption 71 nW [25]), motion sensors (32 µW [26]), light sensors (57.6 µW [27]), gas sensors (1-100 mW [28]) and humidity sensors (150 µW [29]). Different communication protocols also have varied power consumption, which we have summarized in previous work [30]. Ultimately, the overall power consumption will be a balance between what is required (i.e. how often to 'sense' and transmit data) and how much power is available in a certain location. Algorithmic control based on the available light resource will almost certainly be a feature of self-powered IoT.

Characterizing these indoor PVs (IPVs) is a major challenge, as testing is typically carried out under low-illuminance artificial light [31]. The recognized standard for solar cell testing for outdoor applications is ISO 9845-1:2022, and at the time when the majority of this work was carried out, no such standard existed for ambient light testing. Fortunately, in the intervening period between submission, review and final publication, a standard on indoor PV testing has been published, IEC TS 62607-7-2:2023, allowing us to comment on our findings and how they relate to the newly published standard.
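To make the lux-versus-irradiance distinction concrete (it is developed further in the Results section), the sketch below computes both quantities from a sampled spectral irradiance: broadband irradiance is the integral of E(λ) over all wavelengths, whereas illuminance weights the same spectrum by the photopic luminosity function V(λ) and the 683 lm/W constant, so power outside roughly 400-700 nm contributes to the PV-relevant irradiance but not to lux. The LED-like spectrum and the Gaussian approximation of V(λ) below are illustrative placeholders, not measured data:

```python
# Illustrative comparison of broadband irradiance vs photopic illuminance.
import numpy as np

wl = np.arange(350.0, 900.0, 1.0)     # wavelength grid, nm (1 nm steps)
dl = 1.0                              # grid spacing, nm

# Toy warm-white-LED-like spectral irradiance, W m^-2 nm^-1 (assumed shape)
E = 0.002 * np.exp(-((wl - 450.0) / 15.0) ** 2) \
  + 0.004 * np.exp(-((wl - 600.0) / 60.0) ** 2)

# Photopic luminosity function approximated by a Gaussian centred at 555 nm
V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

irradiance = np.sum(E) * dl                    # W m^-2, all wavelengths
illuminance = 683.0 * np.sum(E * V) * dl       # lux (683 lm/W at 555 nm)

print(f"irradiance  = {irradiance * 1e3:.1f} mW m^-2")
print(f"illuminance = {illuminance:.0f} lux")
```

Two spectra giving the same lux reading can therefore carry quite different usable power for a PV cell whose absorption extends beyond 700 nm, which is why efficiency claims quoted only against lux should be treated cautiously.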
One reason that the standard took a long time to materialize is that, unlike for the sun, the variable spectral characteristics of ambient indoor light made it difficult to reach a consensus on what exactly is meant by the term 'ambient light'. For example, office lighting often makes use of diffuse light to provide safe and comfortable working environments [32] or utilizes blue-enriched lighting to improve alertness [33]. Hospital lighting may incorporate relatively high levels of UV-A (315 nm to 400 nm) to disinfect surfaces and destroy pathogens [12,34], or may be optimized to assist clinicians in promoting healthy circadian rhythms for patient recovery [35]. Supermarkets also modify the colour of lighting to prolong shelf-life and influence our perception of fresh produce [36-39].

The lack of agreement on the spectral characteristics of a measurement standard was further complicated by the common use of illuminance values (lux) to describe ambient light intensities instead of absolute spectral irradiance (usually mW cm^-2 nm^-1). Illuminance is a measurement of intensity as perceived by the human eye, typically between wavelengths of 400 nm and 700 nm, whereas spectral irradiance is a measurement of power density across a wider range of wavelengths (typically 300 nm-1500 nm). Since many solar cell technologies absorb light above 700 nm, claims of record-breaking ambient-light solar cell efficiencies must be treated with caution.

A variety of illuminance values are commonly chosen by researchers (figure S1). The two most common are 200 lux and 1000 lux, and it is not entirely clear why these values have been chosen. Perhaps researchers have been influenced by the standard EN 12464-1 [74], which specifies 200 lux as the minimum lighting level in public venues such as lecture theatres, public lounges and transportation hubs, while 1000 lux seems to be a value specified for specific inspection tasks. If this is the origin of such values, then it must be noted that EN 12464-1 refers to the illuminance level perceived by a human operator at a workstation performing a specific task, which is not necessarily where one might expect to find or place an energy-harvesting IoT node.

In order to aid our understanding of the light resource availability in typical locations, a comprehensive study of different lighting environments in offices and laboratories located on Swansea University's Bay Campus was undertaken. These buildings were constructed between 2015 and 2019 and, as such, should conform to the regulatory lighting standards set out in EN 12464-1, so the locations chosen in this work can be considered typical of modern office environments.

Logging illuminance meters were placed in locations where one would expect to find an IoT node, i.e. on walls and ceilings, and the illuminance values were logged over periods of several days. This work will show that in these locations illuminance values are often lower than those specified by the standards, meaning that researchers developing PV cells for ambient light energy harvesting could be overestimating the light resource availability when quoting typical performance outputs. The effects on light resource availability of human factors, such as office occupancy, location of the IoT node and interior decoration, are also explored and discussed.
Experimental

The low light solar simulator consists of a Thorlabs DC2200 High-Power 1-Channel LED Driver with Pulse Modulation driving an LED array fitted with an acrylic diffuser. A Keithley 236 source meter is used to measure IV curves. Illuminance values are logged using Onset HOBO MX2202 illuminance meters. For in-situ room illuminance measurements, the MX2202 illuminance meters are configured to log illuminance at 1-minute intervals. They are deployed on various indoor surfaces, i.e. walls, ceilings and desks, for several days. Deployment locations are chosen to be representative of potential IoT node positions (e.g. sensors, communication beacons). To determine the correction factor for the illuminance meters, their response is measured under low-light conditions in a solar simulator and compared to the response of a CEM DT8809A handheld illuminance meter. A calibration factor of 1.5 was determined, with an introduced error of 2% for quoted lux values (figure S3). For experiments to determine the directionality of ambient light, a GY30 (BH1750FVI) digital ambient light intensity sensor mounted on an SG90 servomotor is used. The servomotor is rotated in increments of 1 degree, taking the GY30 through a full 180 degrees in the vertical plane. Spectral irradiance is measured using a factory-calibrated Ocean Insight FLAME VIS-NIR spectrometer with diffuser (diameter 7140 µm). The spectral irradiance of dimmable ambient LED room light is measured using the Ocean Insight FLAME VIS-NIR spectrometer. The spectral irradiance of other common indoor light sources, namely CCFL, halogen lamp and natural daylight, is measured for comparison. These irradiance spectra are then integrated between 400 nm and 820 nm to give their respective absolute irradiances. Room décor is modelled within the low light solar simulator by placing coloured card on three walls of the solar simulator. An MX2202 illuminance meter and the Ocean Insight spectrometer are placed in the centre of the solar simulator on the same illumination plane (approx. 15 cm from the diffused LED light source). IPV external quantum efficiency (EQE) measurements are carried out using a custom-built EQE measurement system in AC mode. Prior to performing the measurements, the spectrum of the lamp is measured using a calibrated silicon reference cell. The IPV organic photovoltaic modules used for energy yield calculations are supplied by Epishine (part number: LEH350X50610). They consist of six cells and cover an area of 50 mm × 50 mm, typically providing an open circuit voltage of 3.8 V with a short circuit current of 147 µA at 500 lux warm white LED on a white background. Amorphous silicon solar cells (AM-1417) are supplied by Sanyo, providing an open circuit voltage of 2.4 V and short circuit current of 13.5 µA at 200 lux fluorescent white light.
Results & discussion

PV efficiency is a simple ratio of the electrical power density output of a PV device to the irradiance power density of the incident light. Indoor and ambient light PVs, however, are almost exclusively measured using illuminance values (lux). Illuminance, based on the spectral response of the human eye, is considered a poor way to estimate irradiance; however, illuminance meters are relatively cheap and widely available and so are often employed to characterize ambient light sources. Ambient light sources can be natural sunlight filtered by glazing, LED lighting and, increasingly less common, fluorescent and halogen lighting, all shown in figure 1 below. Figure 1 shows the relationship between the illuminance values of a range of light sources and their spectral irradiances. It can be observed that for these particular light sources, over the range of illuminances measured, the spectral distribution scales linearly with light intensity, i.e. the spectral 'shape' is maintained. Care must still be taken, however, when determining cell performance under ambient conditions with LED lighting, as it is known that the LED spectral distribution is not always consistent across varying light intensities due to changing driving currents altering the junction temperature [75]. The spectral match between the emission of the light source and the indoor PV cell's light absorption characteristics, determined by its band gap, has been the topic of much discussion in recent years, but the adoption of LED lighting over fluorescent and incandescent light sources has allowed PV researchers to focus their efforts on designing materials for optimum absorption of a mixture of artificial and diffuse natural lighting.

When employing illuminance meters to estimate irradiance, it is advised to proceed with caution, as there can be large discrepancies between different illuminance meters under identical test conditions. In this work we used HOBO MX2202 illuminance meters from Onset. We compared these to a handheld illuminance meter (CEM DT8809A) in a low illuminance solar simulator. The data are shown in figure S2, where the MX2202 datalogger illuminance measurements are observed to be 40% lower than the DT8809A measurements. This is because the MX2202 under-reads illuminance due to its lack of cosine correction. As the responses of both devices are linear within the light intensity regime measured, we can apply a correction factor of 1.50 to the MX2202 datalogger measurements for a more accurate estimation of real illuminance values. Another factor to consider is repeatability across different devices of the same type. In figure S3 we show that measurements from five identical MX2202 illuminance meters in the solar simulator exhibit a spread of 5% in measured illuminance across the light intensities considered. Despite these issues, because of their low cost and ease of use, several datalogging illuminance meters (MX2202) can be deployed simultaneously across multiple indoor locations, whereas the cost of doing so with spectroradiometers would have been prohibitive. The locations chosen include offices, laboratories, corridors, and stairwells. The loggers were positioned in places where one might consider placing wireless IoT nodes: places where they will be most unobtrusive, inconspicuous and discreet.
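As a minimal illustration of the meter correction described above, the sketch below applies the 1.5 calibration factor to some hypothetical raw MX2202 readings and attaches the roughly 5% device-to-device spread as an uncertainty; the raw values are invented for the example.

```python
# Minimal sketch: correcting logged MX2202 readings, assuming the 1.5
# calibration factor and the ~5% device-to-device spread reported above.
raw_lux = [33.0, 41.0, 97.0]            # hypothetical raw datalogger readings

CAL_FACTOR = 1.5                        # MX2202 under-reads by ~40%
DEVICE_SPREAD = 0.05                    # ~5% spread across identical units

for r in raw_lux:
    corrected = r * CAL_FACTOR
    err = corrected * DEVICE_SPREAD
    print(f"raw {r:5.1f} lux -> corrected {corrected:6.1f} +/- {err:.1f} lux")
```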
Figure 2(a) shows a typical open plan office with zoned lighting. The locations of the logging illuminance meters are shown on the photograph and designated as W (office wall), K (office kitchen) and C (office ceiling). Figures 2(c) and (d) show the illuminance measured at these locations over two different time periods. The data illustrate several important factors to consider when deploying indoor PVs in the office environment. Firstly, the maximum illuminance measured is 150 lux and, in some locations, 50 lux, far below the recommended task-oriented lighting standards. In this work it has been found that a wall illuminance of 50 lux is reasonably common in office-based lighting. It is therefore recommended that researchers working on indoor PV materials and devices consider measuring their cells' efficiencies at 50 lux, or at least investigate the low-light operating limit of their devices: a device that appears to perform well at 200 lux may not generate any power at 50 lux or below, and so would be unsuitable for powering IoT sensor nodes in these very low light locations.

In modern buildings, spaces are lit by a mixture of LED lighting and natural daylight. The spatial distribution of illuminance (and therefore irradiance) varies considerably because of human-behavioural and building design factors. For example, in some laboratories, smart lighting automatically dims according to the amount of daylight entering the room and switches off when no movement is detected after a long period of time. In larger open plan offices, lights are manually controlled so that sections of the office are selectively illuminated according to current occupation. This is illustrated in figures 2(c) and (d), where the effects of zoned lighting can be observed: certain locations experience very low illuminance, dependent on the occupancy of different zones of the office at different times of day. This effect is particularly highlighted in figure 2(d), where all zones measured are <20 lux when the office is vacated over the weekend. Because of this, self-powered IoT devices must be designed with energy storage elements to provide enough power over a weekend when such office spaces are vacated (a minimal storage-sizing sketch is given after the list below). The relationship between light resource availability and human behaviour is perhaps best shown in figures 2(b) and (e). Illuminance meters were deployed in a corridor environment where the lighting is controlled by passive infrared (PIR) sensors. The light resource availability is highly dependent on human activity, and for periods of time the corridor experienced almost complete darkness. An understanding of human behaviour in the environment and the occupancy frequency of certain locations will be critical in predicting the available light to power IoT nodes. Other factors that affect light resource availability include the position of the IoT node with respect to the light sources in that location. In a typical working environment, lighting can be classified into four categories:

1. Direct artificial light, where the light fittings cast beams onto walls and floors, e.g. spotlights. This kind of lighting is typically found in corridors and toilets and results in non-uniform illumination of surfaces.
2. Diffuse artificial light, where light fittings are intentionally diffuse to give more uniform lighting throughout the room. This lighting is most often found in offices and laboratories.
3. Natural light, which can be either direct sunlight or diffuse daylight.
4. A superposition of two or more of the above.
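As flagged before the list, vacated offices can sit below 20 lux for an entire weekend, so a node must carry stored energy across that gap. The sketch below sizes the store for an assumed dark window and node power; all numbers are hypothetical placeholders, not measurements from this work.

```python
# Minimal sketch: sizing the energy storage needed to ride through a dark
# weekend. The node power draw and storage efficiency are hypothetical; the
# 64 h dark window approximates Friday evening to Monday morning.
dark_hours = 64.0          # assumed weekend dark period, h
node_power_uW = 20.0       # assumed average IoT node draw, microwatts
storage_eff = 0.85         # assumed round-trip efficiency of the store

required_uWh = node_power_uW * dark_hours / storage_eff
print(f"storage needed: {required_uWh:.0f} uWh "
      f"(~{required_uWh/1000:.2f} mWh) plus any safety margin")
```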
The greatest variability in illuminance comes from rooms with external windows, where seasonal changes dictate the amount of natural light entering the room. Although there is more sunlight in the summer months, the sun is higher in the sky, so there is limited direct sunlight entering south-facing rooms. However, in the winter months, the sun is much lower in the sky, allowing direct sunlight to penetrate farther into south-facing rooms on cloudless days. Other human factors also come into play: for example, occupants closing blinds to prevent screen glare drastically reduces the amount of natural light available for harvesting. Figure 3 shows data from an office with a south-facing window. The data were recorded in August at a latitude of 51.6° and longitude of −3.9°, so the maximum solar elevation angle was approximately 55°. This means the sun does not shine directly into the office at any point during the day. The data show two consecutive days: one sunny and the next overcast. It can be observed that the highest illuminances are measured when the meter is placed horizontally on the desk. This result is to be expected, as the illuminance meter is directly incident to the artificial lighting and has a favourable position with respect to the window and the angle of incidence for natural light. The result also concurs with indoor lighting standards, which are focused primarily on the illuminance experienced by the worker at their workstation. Why then do we not simply place light harvesting IoT nodes on workstations and desks to take advantage of the higher illuminance values? Apart from the fact that self-powered IoT nodes are likely to be passive sensors or low-power communication relays, requiring them to be unobtrusive and discreet, the answer can be seen in figure 3(d), where the measured illuminance is prone to sudden and prolonged dropouts. Desktops and workstations can be cluttered with objects, e.g. papers, books, coffee mugs, etc, and these can occasionally obscure any desktop-mounted IPV devices. One such event is visible in figure 3(d), where the dropout was caused by the author accidentally placing their notepad over the illuminance meter. This again shows how critical human behaviour is to light resource availability. While placing a self-powered IoT node on a workstation or desktop may seem like the best option in terms of light resource, it may not be the ideal location in terms of interior design factors, and indeed the IPV may be covered through human error. Figure 3(b) shows the data from the same room with the desktop illuminance data removed for greater clarity. It can be observed that the aspect and location of the illuminance meters have very little influence on the illuminance values measured. This can be explained by the fact that, apart from the desktop meter, none of the remaining meters have either natural or LED lighting directly incident on their detectors. They are all therefore measuring diffuse light. The fact that the walls of the office are white means that the light resource is scattered and uniformly distributed regardless of location within the office.
While it may be obvious that a room with a south-facing window experiences significantly higher illuminance than a room without any natural light, further consideration must be given to the different spectral characteristics and how these might change during the day. In modern workspaces without natural light, the spectral illuminance will be dominated by white LED lighting, whilst a workspace with mixed lighting will experience a change in spectral characteristics throughout the day and with varying meteorological conditions. This is illustrated in figure 4, which shows the spectral evolution of light in a room with a south-facing window throughout the course of a typical day. The room is illuminated by daylight as well as LED lights that automatically brighten and dim according to the amount of daylight available within the room, further adding to the spectral complexity. This can be observed in figure 4: in the early morning, the spectrum is dominated by natural light. As the room becomes occupied later in the morning, the PIR sensors switch on the LED lighting and the LED blue emitter peak becomes identifiable in the mixed spectrum; by late afternoon the spectrum is dominated by LED lighting and the emitter and phosphor peaks are clearly identifiable. Figure 4(c) shows the averaged spectral irradiance throughout the day; while the influence of the natural light spectrum can be observed, it is the spectral distribution of the LED lighting that dominates. The position of the self-powered IoT node within a space also becomes important in such mixed light environments. Figure 4(b) shows the spectral variance as a function of distance from the window. The spectra were recorded at 10:30 and, as one would expect, the natural daylight spectrum dominates nearest the window, while the LED spectrum is dominant furthest from the window.

The spectral characteristics of the light source are important when considering the band gap of the light-absorber materials used in energy harvesting PVs. For example, LED lighting with colour temperatures ranging from 1700 K to 6500 K corresponds to ideal band gaps in solar cell materials of 1.60 eV-1.97 eV [76]. For natural daylight filtered by modern building glazing, a band gap of 1.10 eV [71] would be more suitable. To calculate the optimum band gap for the scenario outlined in figure 4, a detailed balance analysis is used, based on that originally outlined by Shockley and Queisser [77]. For the spectra shown in figure 4(b), the optimal band gap varies slightly (1.7 eV-1.8 eV, see figure S6) depending on the amount of daylight present in the spectrum. The thermodynamic limit of the power conversion efficiency (PCE) likewise changes from 37% to 39% at an illuminance of 100 lux. The full details of these calculations can be found in a recent work [78]. These upper limits of PCE represent the maximum achievable values and do not include losses associated with parasitic resistances, defects, or device size [30]. In practice, tuning the band gap for each individual environment would be impractical: having a materials system that is easy to manufacture, stable and able to operate sufficiently (even if not optimally) in low-light environments is more important than designing the ideal band gap for each individual scenario.
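The sketch below illustrates the detailed-balance procedure just described: for a given incident spectrum, it computes the radiative efficiency limit as a function of band gap and picks the maximum. The LED-like toy spectrum is a stand-in for the measured spectra of figure 4, so the numbers are indicative only.

```python
# Minimal detailed-balance sketch (after Shockley & Queisser) for finding the
# band gap that maximizes the efficiency limit under a given indoor spectrum.
import numpy as np

q, h, c, kB, T = 1.602e-19, 6.626e-34, 2.998e8, 1.381e-23, 300.0

wl = np.linspace(300e-9, 1500e-9, 2000)            # wavelength grid, m
# Toy LED-like spectrum, W m^-2 m^-1: blue emitter + phosphor band (hypothetical).
E_lam = 1e7 * (np.exp(-0.5*((wl-450e-9)/10e-9)**2)
               + 2.0*np.exp(-0.5*((wl-600e-9)/60e-9)**2))
P_in = np.trapz(E_lam, wl)                         # incident power, W m^-2

def pce_limit(Eg_eV):
    lam_g = h*c/(Eg_eV*q)
    m = wl <= lam_g
    # short-circuit current density: one carrier per above-gap photon
    Jsc = q*np.trapz(E_lam[m]*wl[m]/(h*c), wl[m])
    # radiative dark current from the 300 K blackbody photon flux
    phi_bb = (2*np.pi*c/wl[m]**4)/np.expm1(h*c/(wl[m]*kB*T))
    J0 = q*np.trapz(phi_bb, wl[m])
    V = np.linspace(0, Eg_eV, 2000)
    P = V*(Jsc - J0*np.expm1(q*V/(kB*T)))          # ideal-diode power curve
    return P.max()/P_in

gaps = np.linspace(1.1, 2.2, 45)
best = max(gaps, key=pce_limit)
print(f"optimal gap ~ {best:.2f} eV, PCE limit ~ {pce_limit(best):.1%}")
```

With a blue-plus-phosphor spectrum concentrated between roughly 420 nm and 750 nm, the optimum lands well above the 1.1 eV that suits daylight, in line with the 1.7 eV-1.8 eV range quoted above.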
Lighting in offices and laboratories is designed to radiate out at certain angular distributions, which can result in a light resource that is height dependent. Figure 5(a) shows the variation in illuminance as a function of height below the ceiling. In this location, ceiling-mounted LED luminaires produce diffuse lighting which illuminates the walls. Illuminance can vary by as much as 30% depending on the vertical placement on the wall; installers of these technologies should be aware of this when optimizing the positioning of self-powered IoT nodes. In most scenarios, to maximize the light harvesting opportunity for wall-mounted devices, it would be unfavourable to mount the device parallel to the wall; instead it should be angled to some degree toward the ceiling-mounted lighting. Figures 5(b) and (c) show the angular dependence versus the height below the ceiling in two different locations. In figure 5(b), the LED luminaires are 0.6 m from the wall, and in figure 5(c) the luminaires are 1.2 m from the wall. It can be observed that the maximum illuminance depends on both the height below the ceiling and the horizontal distance from the light fittings. In figure 5(b), maximum illuminance is observed at an angle of 70° and is practically independent of height. In figure 5(c), maximum illuminance is observed at an angle between 34° and 54°, depending on height. It can be concluded, therefore, that the angular dependence becomes more critical with increasing horizontal distance from the ceiling light source. Given the data in figures 5(b) and (c), it is recommended that designers of indoor PV powered IoT nodes take this into consideration by choosing a fixed angle of, for example, 45°, or by designing a tilt system to allow maximum light harvesting in a range of scenarios.
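A simple point-source model reproduces the trend just described. In the sketch below, a luminaire sits on the ceiling at horizontal distance d from the wall and a detector is mounted a depth z below the ceiling; illuminance follows an inverse-square, cosine-of-incidence law. The single-point-source assumption (ignoring the diffuse component and the luminaire's emission pattern) is a simplification, and z = 1 m is an arbitrary choice.

```python
# Minimal geometric sketch of the angular dependence: a point-source luminaire
# at horizontal distance d from the wall, a detector z below the ceiling,
# tilted by `tilt` degrees from the wall plane toward the ceiling. Intensity I
# is arbitrary; only the shape of E(tilt) matters here.
import numpy as np

def illuminance(tilt_deg, z, d, I=1.0):
    r = np.array([d, z])                    # horizontal, vertical offsets to source
    dist = np.linalg.norm(r)
    to_src = r / dist                       # unit vector detector -> source
    t = np.radians(tilt_deg)
    normal = np.array([np.cos(t), np.sin(t)])   # tilt=0 -> parallel-to-wall mount
    cos_inc = max(0.0, float(np.dot(normal, to_src)))
    return I * cos_inc / dist**2            # inverse-square point source

for d in (0.6, 1.2):                        # luminaire offsets as in figure 5
    tilts = np.arange(0, 91, 1)
    E = [illuminance(t, z=1.0, d=d) for t in tilts]
    print(f"d = {d} m: max at tilt ~ {tilts[int(np.argmax(E))]} deg")
```

With these assumptions the optimum tilt is simply the angle that points the detector normal at the luminaire, which steepens as the luminaire moves closer to the wall, consistent with the trend in figures 5(b) and (c).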
Differences in reflectivity and colour are essential design elements within modern buildings because they provide vital contrast. Colour also influences the subjective visual perception of light and is associated with psychological, physiological, and social reactions [79]. In architecture, Light Reflectance Values are a measure of the percentage of visible and usable light that is reflected from a surface when illuminated by a light source [80]. Interior designers use these values to design spaces that meet indoor lighting standards. To investigate whether room décor influences light resource availability, the spectral irradiance was measured in two otherwise identical offices, one with blue walls and one with green walls. Figure 5(d) shows the normalised irradiance, and indeed there are subtle differences in the measured light spectrum. The office with green walls shows a reduction in the peak associated with the LED emitter and an increased apparent irradiance in the region associated with the LED phosphor, indicating increased absorption (by the walls) in the blue portion of the spectrum and increased reflection in the green portion. The irradiance measured in the blue office shows no change in the emitter peak but decreased emission in the phosphor region, due to increased absorption of green light by the walls. The spectral variations due to room décor may influence predicted PV performance through spectral mismatch with the PV cell's band gap, as discussed previously. To investigate further, coloured card was used to surround the measurement area of a solar simulator set up to measure PV cells at low light. Before measuring PV cells, a spectrometer and an illuminance meter were placed in the solar simulator on the same horizontal plane, facing the LED light source. Light is received by the spectrometer and illuminance meter as a superposition of direct LED light, generally normal to the receiving devices, and diffuse reflected light from the solar simulator walls, which reaches the receiving devices at more oblique angles of incidence. Even though most light is received directly from the LED light source, the irradiance measurements, shown in figure 5(e), show a clear spectral dependence, with red and orange background card resulting in higher apparent irradiances compared to blue and green backgrounds.
Choosing appropriate PV technologies is crucial for energy harvesting strategies in IoT applications. Two types of PV module commonly specified for energy harvesting IoT, organic photovoltaic (OPV) modules and amorphous silicon (a-Si) modules, were then measured in the solar simulator with the coloured card backgrounds. The results are shown in figure 5(f), where the maximum power output (PMAX) of each module is shown as a percentage loss compared to the module's PMAX measured with a white background. Coloured walls reflect less radiant energy than white walls, so in all cases PMAX is less than what would have been achievable with white walls. Red and orange walls reflect more light at the red end of the irradiance spectrum, and since the OPV module's spectral responsiveness (external quantum efficiency, see figure S4) extends further into this longer wavelength region, it is more efficient at harvesting this energy than the a-Si module, whose spectral responsiveness is reduced significantly at wavelengths >550 nm and which is therefore unable to harvest as much of this longer-wavelength energy. Given these results, the recommendation to interior designers wishing to maximise light harvesting efficiency for IoT objects is to paint your walls white; if you must add colour, choose colours at the red end of the spectrum!

Next-generation PV technologies such as OPV or perovskite PV (PPV) offer several advantages for IoT applications. Firstly, as solution-processable technologies, they are amenable to low-energy manufacturing techniques. This provides opportunities for lowering the overall energy footprint of IoT devices. Secondly, the band gap tuneability of next-generation PV materials such as OPV and PPV means they can be optimized for various lighting scenarios. Under typical indoor light sources, IPVs have the potential to achieve PCEs surpassing 50%, significantly larger than the predicted PCE limit of 33.7% for AM1.5G sunlight. However, to achieve such high PCEs, semiconductors with wider optical band gaps between 1.7 eV and 1.9 eV are required. This is considerably wider than the band gaps of traditional solar cell materials such as crystalline silicon, gallium arsenide, and cadmium telluride [81].
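The OPV/a-Si contrast above comes down to the overlap integral between each module's EQE and the reflected spectrum, since the photocurrent is J = q ∫ EQE(λ) Φ(λ) dλ. The sketch below makes this concrete with step-function EQEs and Gaussian spectra; these are illustrative stand-ins, not the measured curves of figure S4.

```python
# Minimal sketch of the spectral-mismatch effect: a red-shifted reflected
# spectrum benefits the wider-absorbing OPV more than a-Si. Step-function EQEs
# and Gaussian toy spectra stand in for the real curves.
import numpy as np

q, h, c = 1.602e-19, 6.626e-34, 2.998e8
wl = np.linspace(380e-9, 900e-9, 1000)

def photon_flux(center_nm, width_nm, scale=1.0):
    lam0, w = center_nm*1e-9, width_nm*1e-9
    E = scale*np.exp(-0.5*((wl-lam0)/w)**2)        # W m^-2 m^-1 (toy)
    return E*wl/(h*c)                              # photons m^-2 s^-1 m^-1

eqe_opv = np.where(wl < 750e-9, 0.7, 0.0)          # responds out to ~750 nm
eqe_asi = np.where(wl < 550e-9, 0.7, 0.1)          # weak beyond ~550 nm

for name, centre in (("white-ish", 550), ("red-shifted", 640)):
    flux = photon_flux(centre, 80)
    j_opv = q*np.trapz(eqe_opv*flux, wl)
    j_asi = q*np.trapz(eqe_asi*flux, wl)
    print(f"{name:12s}: J_OPV/J_aSi = {j_opv/j_asi:.2f}")
```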
As previously mentioned, during the submission and review of this manuscript, the indoor PV measurement standard IEC TS 62607-7-2:2023 [82] was released by the International Electrotechnical Commission, and we would like to briefly comment on the standard with respect to our findings. The standard recommends two light sources, one fluorescent and one LED, and measurements at 1000 lux, 200 lux and 50 lux. We are pleased to see the inclusion of the 50 lux recommendation, as our data show this is not an uncommon illuminance value for an IoT node that might be placed on a wall, and we have already noted why good IPV performance at 50 lux may be important. Additionally, it is not a value that many in the IPV community readily choose for their cell measurements, and we are happy that the authors of the standard recommend that they should. The choice of including fluorescent lighting is interesting: whilst still very common, fluorescent lighting is mostly being phased out in favour of LEDs. For example, the Swansea University campus used for this work was built in 2015 and we could not find a single installation of fluorescent lighting in any of the locations considered across the campus estate. There is also no mention of the mixed spectrum that would be a feature of any room with a window, but this is understandable given the complexity of the issue: the main purpose of the standard, at least from the perspective of the IPV community, is to enable reliable comparison of measurements across different laboratories, and so the spectral characteristics should be specified for a single light source, just as they are for AM1.5. In a similar fashion, the standard recommends collimated as opposed to diffuse light; whilst a diffuse light source might be more realistic for the indoor PV scenario, a reliable, repeatable diffuse measurement may be more difficult to recreate across multiple laboratories than a collimated measurement. Lastly, the LED light source chosen is CIE LED-B4, which has a colour temperature of approximately 5000 K, typically called 'cool white'. A slightly warmer colour temperature (4000 K) is more common in office spaces, and warmer still (2700 K) in domestic environments.
However, 5000 K is common in hospitals and other clinical settings, and ultimately a standard must be chosen for reliable comparison within the community. After all, we still use AM1.5 to characterise our outdoor solar cells, even though in Swansea, sitting at 52° on Europe's North Atlantic coast, we may only experience '1 Sun' for a few minutes each day in June, if we are lucky!

So far, this work has focussed on the variability of the light resource available to energy harvesting PVs designed to power the next generation of self-powered IoT nodes. What does this mean, therefore, for the designers of the low-power electronics that will be used in conjunction with energy harvesting solar cells? How much usable electrical power is available, considering the light resource availability, the light harvesting efficiency of the PV module and the efficiency of the energy harvesting electronics? To answer this question, it is perhaps best to identify a best-case but realistic scenario and a non-ideal scenario. As the best-case scenario, we have identified a semiconductor manufacturing clean room (see figure S5). In this scenario no natural light is present, so locations with a large degree of natural light may offer higher light intensities; however, in the clean room the lighting is relatively bright for artificial light and is on 24 h per day, making it very predictable and stable. A wall-mounted IoT node can expect to receive an illuminance of around 3300 lux, 24 h per day, seven days a week. To evaluate the usable electrical power, we used Epishine OPV modules, as these are among the only OPV modules that are commercially available and made specifically for indoor lighting conditions. Under the clean-room test conditions, the maximum power output of the Epishine OPV module is measured as 3477 µW. If we assume the latest energy harvesting electronics operating at an efficiency of 90%, then 3129 µW of useable power could be available to the IoT electronics. We propose as the non-ideal case the corridor without natural light, shown in figures 2(b) and (e), where an average of 80 lux can be expected for a device on the wall. Here the Epishine OPV module gives a power output of 91 µW (82 µW available to the electronics), but the light is also determined by the number of people using the corridor. In our data the lighting is on approximately 50% of the time, Monday to Friday. The corridor appears to have no traffic at the weekends, so the illuminance is then negligible. The total available light is therefore approximately 60 h per week. When illumination is not constant, as it is in the cleanroom, it is perhaps more useful to think in terms of total energy yield over a period of time rather than power output at a given point in time. The total useable energy yield for a device in the corridor is therefore 4.9 mWh per week, versus 526 mWh per week for a device located in the cleanroom. This corresponds to average daily energy yields of 0.7 mWh per day (∼0.98 mWh d−1 Mon-Fri, 0 mWh d−1 at weekends) and 75 mWh per day, respectively.

The PV power output under specific conditions and the average energy yield are important parameters to consider when designing self-powered IoT nodes, both for what can be powered and for what the energy storage requirements may be. For manufacturers of self-powered IoT nodes, it may be necessary to specify minimum lighting conditions to potential customers, in order to avoid devices consuming more power than can be generated by the solar cell.
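The two scenarios reduce to straightforward arithmetic, reproduced in the sketch below using the module powers and the assumed 90% harvester efficiency quoted above.

```python
# Back-of-envelope reproduction of the two yield scenarios, assuming the
# module powers quoted in the text and a 90% harvester efficiency.
harvest_eff = 0.90

# Best case: clean room, ~3300 lux, lights on 24 h/day, 7 days/week
p_clean_uW = 3477 * harvest_eff                  # ~3129 uW to the electronics
e_clean_mWh_week = p_clean_uW * 24 * 7 / 1000

# Non-ideal: corridor, ~80 lux when lit, lit ~50% of Mon-Fri (~60 h/week)
p_corr_uW = 91 * harvest_eff                     # ~82 uW to the electronics
e_corr_mWh_week = p_corr_uW * 60 / 1000

print(f"clean room: {e_clean_mWh_week:.0f} mWh/week "
      f"({e_clean_mWh_week/7:.0f} mWh/day)")
print(f"corridor:   {e_corr_mWh_week:.1f} mWh/week "
      f"({e_corr_mWh_week/7:.1f} mWh/day)")
```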
Conclusions

We have shown that predicting the performance of energy-harvesting PVs for indoor applications is significantly more complex than for outdoor PV cells. We have also shown that in many cases the realistic light resource can be much lower than many researchers of ambient-light PV may realise, and that they should consider measuring the performance of their PV cells down to 50 lux or even less. The primary source of this complexity stems from the variability in the light power resource, which is influenced by factors such as spectral distribution, intensity, and temporal and seasonal fluctuations. Moreover, human and architectural elements, including room orientation, IoT node placement relative to windows, the presence of PIR sensors, and room occupancy, further compound the intricacy of performance prediction. To accurately estimate the energy yield for PVs powering IoT nodes, all of these factors must be taken into consideration. To estimate the range of energy that can be generated, we proposed best-case and non-ideal scenarios and, using a commercially available Epishine OPV cell, showed that the energy yield generated and available to the IoT electronics can be anywhere between 4.9 mWh and 526 mWh per week, depending on the lighting conditions and human and architectural factors. The data presented here will prove valuable for the design of ambient light-harvesting PVs, as the observed spectral variations can guide the selection of an optimal band gap for maximum PV cell efficiency. Additionally, electronic engineers can utilize this information to better comprehend the power availability under both ideal and non-ideal conditions, thus facilitating more informed design and power management decisions for PV-powered ambient light-harvesting IoT nodes.

Reflecting on the variability of light power resources, it becomes evident that approaches to ambient light powered IoT require a nuanced perspective. Electronic engineers will need to develop adaptive IoT systems that adjust their power consumption in response to harvested energy, so they can ensure consistent functionality in diverse light conditions. Delving deeper into human behaviour patterns will shed light on the intricacies of light availability, fostering more intuitive energy harvesting strategies. Additionally, utilising machine learning or AI algorithms to analyse varied environmental data may pinpoint optimal placement of IoT nodes, enhancing energy capture and device efficiency.

Figure 1. Spectral irradiance at a range of illuminance values for (a) natural daylight measured through glazing; (b) cool white LED, colour temperature 5000 K; (c) compact fluorescent tube; (d) halogen lighting.

Figure 2. Illuminance meter locations in (a) office environment, C = ceiling, W = wall, K = kitchen wall; and (b) corridor with no natural light. (c)-(e) illustrate how energy efficient lighting controlled by PIR sensors, combined with human behaviour, can lead to complex and difficult to predict light energy resource availability.

Figure 3. (a) and (b) Illuminance meter data from several locations within (c), an office with south facing windows. (d) The effects of human error in accidentally covering a self-powered IoT node.

Figure 4. (a) Evolution of light spectra during a typical August day in an office with south facing windows. (b) Spectral variation with distance from the window at 10:30. (c) Average lighting spectrum, compared to spectra at 09:00 and 21:00.
Figure 5. (a) Illuminance as a function of height below the ceiling; (b) and (c) illuminance as a function of PV module angle from the vertical (i.e. parallel to the wall), at (b) 0.6 m and (c) 1.2 m horizontal distance from the LED luminaires; (d) normalized irradiance spectra in two identical meeting rooms with different green and blue coloured walls; (e) irradiance spectra inside a low-light solar simulator with different coloured walls; (f) calculated percentage PMAX loss, compared to PMAX measured with white walls, of an OPV module and an a-Si module in a low-light solar simulator with coloured walls.
The Performance Assessment of a Semisubmersible Platform Subjected to Wind and Waves by a CFD/6-DOF Approach

School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
State Key Laboratory of Ocean Engineering, Shanghai 200240, China
Shanghai Key Laboratory for Digital Maintenance of Buildings and Infrastructure, Shanghai 200240, China
Shanghai Construction Group (SCG), Shanghai 200120, China
Key Laboratory of Hydrodynamics of Ministry of Education, Shanghai 200240, China

Introduction

Because of their limited deployment range, fixed platforms can hardly meet the needs of deep exploitation of marine resources, which include renewable energy, oil and gas, etc.; recent focus has therefore shifted to floating offshore platforms, especially the semisubmersible offshore platform [1]. At present, it can serve as a drilling production platform or as an offshore wind turbine foundation [2]. The towing and motion performances, which are among the vital indicators of offshore platform performance, largely affect the operation efficiency and safety of a semisubmersible platform [3]. Hence, it is important to investigate the effects of different platform component designs on the towing and motion performances.

The effects of wind, waves, and currents on platform performance have attracted a number of experimental, field, and numerical investigations [4][5][6][7]. In general, experiments and field tests have some limitations. For instance, they are usually too expensive to be performed for a single type of semisubmersible platform. Further, field tests also require harsh environmental conditions and dedicated test devices. For a full-scale platform or a scaled platform model, not all tanks are sufficiently large to perform model tests of a floating platform. Regarding the numerical investigation aspect, with the enhancement of computing capability, a number of numerical methods and software packages (e.g., AQWA [8], Fluent [9], WAMIT [10], and FAST [11]) are currently available for time- and frequency-domain analyses of a floating platform under wind and waves. However, in several software packages (e.g., ANSYS/Fluent), the dynamic characteristics and relations between wind and waves are still not well considered [12]. To address this problem, we developed a computational program incorporating the wind loading and stochastic waves based on our previous studies [13, 14]. It was realized by integrating modules of MATLAB, ANSYS, and Fluent to investigate the motion and towing performances.

Computational fluid dynamics (CFD) combined with a volume of fluid (VOF)/6-DOF solver is a powerful tool for investigating rigid-body motion hydrodynamics. According to a comparison study, unsteady CFD simulations can capture the complete physical phenomena (motion response, towing resistance, wave run-up, etc.) of a floating platform [2]. In particular, just by reading the geometry and mass properties of the platform into the computational code, a CFD simulation is effective in providing accurate solutions [15]. Further, a number of 6-DOF solvers have been developed over the past 20 years [16] and applied to record aircraft, ship, and submarine motions [17][18][19][20]. More recently, they have been extended to offshore platform dynamics [21][22][23]. However, the computational instabilities caused by the artificial added mass are hard to resolve in these traditional 6-DOF solvers [24, 25].
In some open-source CFD software packages, such as OpenFOAM, a 6-DOF solver can be implemented to form a tightly coupled algorithm for simulating wave slamming on ship hulls and on a wind turbine platform [15, 26, 27]. The objective of the present study is to assess the effects of different structural component designs on the towing and motion performances under wind and waves. By compiling a VOF/6-DOF solver in a Fluent UDF Library, a tightly coupled CFD/6-DOF algorithm is constructed and used to monitor the towing resistance and motion response of a semisubmersible platform under combined wind and wave loadings. The paper is organized as follows: the structural models together with their corresponding parameters are presented in Section 2; numerical methods for conducting the CFD simulations are provided in Section 3; validations of the mesh strategy and the VOF/6-DOF solver are included in Section 4; and numerical evaluations of the towing and motion performances are conducted in Section 5.

Platform Model

The original platform model is a 6th-generation semisubmersible platform, primarily composed of a deck, four columns, two pontoons, and four braces. As this study focuses on assessing the towing and motion performances, the derrick and other deck structures are simplified as mass sources acting on the deck. The platform model is established in ANSYS/Workbench as shown in Figure 1. The dimensions and survival conditions of this model are listed in Tables 1 and 2. It is noted that the center of gravity of the simplified model will be adjusted in Section 3.2 using the ANSYS/AQWA software.

The Definition of the Component Form

The in-depth investigations are focused on the effects of different structural patterns on the platform's towing and motion performances. Numerical simulations are performed on three types of existing semisubmersible platform components: the pontoon end shapes, the cross section of the columns, and the longitudinal section of the braces. Based on the original platform model in Figure 1, six numerical cases and their corresponding parameters are presented in the following. Table 3 lists the structural parameters of three common pontoon end shapes at the same displacement: half round, sharp angle, and rounded cube, which are in turn defined as Case 1, Case 2, and Case 3. The corresponding structural forms are shown in Figure 2. Case 3 is the basic platform model as in Figure 1, where the sharp angle is 30° and the chamfer radius is 6 m. Based on the model of Case 3, the column sections of Case 3 and Case 4 are modified as shown in Table 4, namely square and circular cross sections, respectively. Moreover, their pontoon end shapes and the longitudinal sections of the braces are, respectively, rounded cube and circle. Finally, three longitudinal sections of the braces are recorded in Table 5: the circle (Case 3), the half-round plate shape (Case 5), and the plate shape with a chamfer radius of 0.3 m (Case 6). In these subsimulations, the pontoon end shape and the column cross section are rounded cube for Case 5 and circle for Case 6. Since these sections are not complicated, the platform models of the different column and brace sections are not displayed here.

Computational Method

These numerical simulations are completed by solving the unsteady Reynolds-averaged Navier-Stokes (RANS) equations. The discrete equations are solved by the pressure-implicit with splitting of operators (PISO) method. The time and space discretizations are both treated by second-order discrete schemes.
Bounded central differencing is adopted for momentum discretization. Herein, the shear stress transport (SST) k-ω model, which is applicable in many CFD simulations [12], is utilized to resolve the turbulent behaviors. The governing equations of the RANS/SST k-ω model take the form

$$\frac{\partial(\rho u_i)}{\partial t}+\frac{\partial(\rho u_i u_j)}{\partial x_j}=-\frac{\partial p}{\partial x_i}+\frac{\partial}{\partial x_j}\left(\Gamma\frac{\partial u_i}{\partial x_j}\right)+S_i,$$

$$\frac{\partial(\rho k)}{\partial t}+\frac{\partial(\rho k u_j)}{\partial x_j}=\frac{\partial}{\partial x_j}\left(\Gamma_k\frac{\partial k}{\partial x_j}\right)-Y_k+S_k,$$

$$\frac{\partial(\rho\omega)}{\partial t}+\frac{\partial(\rho\omega u_j)}{\partial x_j}=\frac{\partial}{\partial x_j}\left(\Gamma_\omega\frac{\partial\omega}{\partial x_j}\right)-Y_\omega+D_\omega+S_\omega,$$

where ρ is the fluid density, p is the fluid pressure, u is the fluid velocity, k is the turbulent kinetic energy, ω is the dissipation rate, Γ, Γ_k, and Γ_ω are, respectively, the effective diffusion coefficients of u, k, and ω, S_i, S_k, and S_ω are the custom source terms of the three abovementioned equations, Y_k and Y_ω are the dissipation terms of k and ω, and D_ω is the cross-diffusion term.

A VOF/6-DOF Solver

To describe the motion and towing performances of a semisubmersible offshore platform under combined wind and wave loadings, the volume of fluid (VOF) model together with a tightly coupled CFD/6-DOF solver, named the VOF/6-DOF solver, is employed. A self-compiled Fluent User-Defined Function Library is developed to effectively realize the transient interface motion. The VOF model is suitable for analyses of two immiscible fluid phases (i.e., wind and waves). Under the assumption that wind and waves share velocity and pressure fields, the governing equations describing the momentum, mass, and energy transport are solved for an equivalent fluid, treating the whole domain as a single-phase fluid. Therefore, the transport equation of the volume fraction is

$$\frac{\partial V_i}{\partial t}+u_x\frac{\partial V_i}{\partial x}+u_y\frac{\partial V_i}{\partial y}+u_z\frac{\partial V_i}{\partial z}=\frac{\mathrm{MST}}{\rho_i},$$

where V_i is the volume fraction of fluid i (i = 1, 2). The total volume fraction in one control volume is unity (i.e., V_1 + V_2 = 1). u_x, u_y, and u_z are the velocity components of u in the Cartesian coordinates O_xyz, whose origin is at the interface. MST is the additional mass source, and ρ_i is the density of the corresponding fluid i.

Later on, in the CFD simulations, the platform is considered as a 6-DOF rigid body, whose motion can be decomposed into translation and rotation as depicted in Figure 3(a). Thus, the transient position of the platform body refers to that of the center of mass x_r, whose rotational motion is sketched in Figure 3(b). Its instantaneous position update follows the decomposition

$$x_r = x_{r,t} + x_{r,r},$$

where x_{r,t} and x_{r,r} are, respectively, the positions due to translation and rotation, x_p is the position of the center of mass, v_p is the linear velocity, ω_p is the angular velocity, Δθ is the rotation angle from time i to time i + 1, and e_t and e_n are, respectively, the tangential and normal unit vectors.

Different from a traditional 6-DOF solver, a relaxation factor η is applied to relax the body force F and moment M [15]:

$$\tilde{F} = \eta F + (1-\eta)\tilde{F}_{\mathrm{prev}},\qquad \tilde{M} = \eta M + (1-\eta)\tilde{M}_{\mathrm{prev}},$$

where the sign "∼" refers to the corresponding relaxed value. Combined with the Aitken method for dynamic relaxation, the motion response/towing resistance at each time step is recorded before the next time step, which effectively avoids the calculation instabilities of traditional 6-DOF solving algorithms. The algorithm of the present VOF/6-DOF solver in the iterative calculations is summarized as follows:

Step 1. Solve the body forces in the equivalent fluid field.
Step 4. Update the mesh based on the new transient position.
Step 5. Correct the fluid field variables for the mesh motion.
Step 7. Record the towing resistance or motion response by UDFs.
Step 8. Check the solution convergence, update the initial relaxation factor, and return to Step 1.
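To make the relaxation idea concrete, the sketch below runs a toy force-coupling loop: the raw force is blended with the previous relaxed value via the equation above, and η is adapted between iterations by Aitken's method. The linear fluid_force response, the clamping bounds, and the position update are hypothetical stand-ins for the CFD evaluation and the 6-DOF equations, not the authors' UDF code.

```python
# Minimal sketch of Aitken-relaxed force coupling in a 6-DOF outer loop.
import numpy as np

def fluid_force(x):
    # hypothetical stiff dependence of fluid force on body position
    return np.array([100.0, 0.0, -50.0]) - 25.0 * x

eta, F_rel = 0.5, np.zeros(3)           # initial relaxation factor, relaxed force
x, r_prev = np.zeros(3), None
for it in range(20):
    F = fluid_force(x)
    r = F - F_rel                       # coupling residual
    if r_prev is not None:
        dr = r - r_prev
        denom = float(dr @ dr)
        if denom > 1e-30:
            # Aitken dynamic relaxation: adapt eta from successive residuals
            eta = -eta * float(r_prev @ dr) / denom
            eta = min(max(eta, 0.05), 1.0)   # keep eta in a safe range
    F_rel = eta * F + (1.0 - eta) * F_rel    # relaxed update, as in the equation
    x = F_rel / 1000.0                  # stand-in for the 6-DOF position update
    r_prev = r
    if np.linalg.norm(r) < 1e-8:
        break
print(f"converged force: {F_rel}, iterations: {it + 1}")
```

Without the relaxation (η = 1) a stiff force-position feedback of this kind can oscillate or diverge, which is the artificial-added-mass instability the tightly coupled solver is designed to avoid.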
Computational Parameters

Before establishing the computational domain, the center of gravity of the simplified platform is determined using ANSYS/AQWA. Taking Case 3 as an example, the position changes of the center of gravity in still water within 4 iterations are depicted in Figure 4. As the center of gravity is raised by 0.22 m, the air gap is modified to 14.22 m accordingly. Using this method, the relative positions of the interface and each platform in Cases 1 to 6 can be calculated.

The CFD simulations of the full-scale semisubmersible platform models are performed in a hexahedral computational domain with dimensions of 1200 m × 500 m × 300 m (length × width × height), shown in Figure 5(a). The platform model is located 300 m away from the inlet. Tetrahedral grids are generated in the neighborhood of the model surfaces as in Figure 5(b). The maximum and minimum grid lengths are, respectively, 0.65 m and 0.11 m, and the grid stretching ratio is kept below 1.05. In consideration of the computing capability and time cost, the minimum value of y+ is less than 3.5, which is sufficient to satisfy engineering requirements [28]. The total grid numbers for Cases 1 to 6 are presented in Table 6. All the computation procedures are carried out on a computer server with an Intel Xeon E3-1220 v3 CPU at 3.1 GHz with 32 GB of memory.

The UDF Library of the VOF/6-DOF solver requires the structural mass, the moments of inertia, and other parameters, which are also computed in ANSYS/AQWA. Following Geng's [29] previous work, the total mass M, the centroid coordinates, and the moments of inertia I_ij are obtained, where x_c, y_c, and z_c are the centroid coordinates in the present position and x_0, y_0, and z_0 are the centroid coordinates in the initial position. For instance, the mass matrix [M] of Case 3 is obtained in this way, and the parameters of the other cases in the VOF/6-DOF solver are calculated in the same manner.

The Boundary Conditions

As shown in Figure 5, a velocity inlet and a pressure outlet are included in the computational domain. Thus, the present study makes use of the velocity inlet to read the time series of wave speed and the average wind speed through the self-compiled UDFs and the VOF model, as sketched in Figure 5(a). On the basis of our previous study [14], the relationship between the wind and waves (i.e., the relationship between the average wind speed V_0 and the significant wave height H_s) is used to guarantee that the strengths of the wind and waves match. The relations between H_s and V_0 and the derived power spectral density function of the waves S(ω) are, respectively, written as equations (7) and (8), where ϖ = ϑω/ω_p is the modified natural frequency, ϑ is a correction coefficient of ω, ω_p is the peak frequency, A(ϖ) is the amplitude, c is a correction coefficient of A(ϖ), and β(ϖ) is the energy transfer coefficient. The time series of wave velocity is generated by the MS method and self-compiled MATLAB programs, which have been validated and applied in our previous studies. The computational procedure of the velocity inlet is as follows:

Step 1. Determine V_0 and the corresponding wave PSD by (7) and (8).
Step 2. Generate the time series of wave speed of the nodes at the inlet.
Step 3. Read V_0 by the VOF model and the time series of waves by the self-compiled UDFs at the ith time step.
Step 4. Conduct the CFD calculation at the ith time step.
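Step 2 above amounts to spectral synthesis: harmonic components whose amplitudes follow the target PSD are superposed with random phases. The sketch below illustrates the idea with a generic peaked wave spectrum standing in for equation (8); the actual MS-method implementation is the authors' MATLAB code, and per-component wave velocities would follow from linear wave theory.

```python
# Minimal sketch: generating a wave time series from a target PSD by
# superposing harmonics with random phases. The Pierson-Moskowitz-like PSD
# is a generic stand-in for equation (8).
import numpy as np

rng = np.random.default_rng(0)

def psd(omega, Hs=3.0, Tp=6.0):
    # generic peaked wave spectrum (stand-in), m^2 s
    wp = 2 * np.pi / Tp
    return (5/16) * Hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp/omega)**4)

omega = np.linspace(0.2, 4.0, 400)         # component frequencies, rad/s
dw = omega[1] - omega[0]
amp = np.sqrt(2 * psd(omega) * dw)         # amplitude of each component
phase = rng.uniform(0, 2*np.pi, omega.size)

t = np.arange(0, 300, 0.05)                # 300 s sample of the series
eta = (amp[:, None] * np.cos(omega[:, None]*t[None, :] + phase[:, None])).sum(0)
# sanity check: significant height of the synthesized series vs the target
print(f"significant height from series ~ {4*eta.std():.2f} m (target 3 m)")
```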
Validation Tests

In this section, three types of validation tests are performed: the grid independence tests, the time step independence tests, and the VOF/6-DOF solver test. Firstly, the grid and time step independence tests are conducted to balance the time cost against the computing accuracy. Then, the VOF/6-DOF solver compiled in the Fluent UDF Library is verified to be applicable to the present numerical simulations. It should be noted that all these validations are based on Case 3, whose sea state, described by average wind speed, significant wave height, and wave period, is given as 11 m/s, 3 m, and 6 s, respectively.

The Grid Independence Tests

To verify the grid generation method and its implementation in the present study, we refine the computational mesh and perform the CFD simulations using four types of mesh: the coarse mesh, the mesh given in the present study, the medium mesh, and the fine mesh, which contain approximately 3 × 10⁶, 4 × 10⁶, 5 × 10⁶, and 6 × 10⁶ computational cells, respectively. Figures 6 and 7 show the towing and motion performance assessments using these four types of mesh. As seen from Figures 6(a) and 7(a), the curves for all four types of mesh have a relatively similar trend, which preliminarily illustrates the reliability of the computational results obtained with the present grid generation method. Furthermore, Figures 6(b) and 7(b) investigate the effect of the mesh density based on the average error and time cost. In Figure 6(b), compared to the present study, the average error of the coarse mesh is larger than those of the medium and fine meshes, the errors being 11.8%, 4.27%, and 4.18%, respectively. It is also found that the numerical accuracy shows no significant improvement with increasing grid number. However, with regard to the time cost ratio, shown on the top x-axis and the right y-axis in Figure 6(b), the time cost increases rapidly with the grid number. In addition, to further quantify the influence of the different meshes on the heave response, the root mean square (RMS) of the heave response (Z_RMS) for the four mesh methods is analyzed in the present section. As given in Table 7, with the increase of the number of cells, the results show a tendency to converge. A discrepancy of only 0.29% is noticed when the mesh changes from the medium to the fine method. Combined with the above analysis, it can be confirmed that the grid generation method in the present study is reasonable and applicable to the following numerical simulations.

The Time Step Independence Tests

To justify the time step setting, we also perform the CFD simulations using three different time steps: 0.01 s (the present study), 0.005 s, and 0.0025 s. Similarly, Figures 8 and 9 show the evaluation of the towing and motion performances for these three time steps. As depicted in Figures 8(a) and 9(a), all the curves have a relatively similar trend, which indicates that Δt = 0.01 s is an applicable time step. The average error and time cost ratio statistics in Figures 8(b) and 9(b) show that the numerical accuracy changes little with decreasing time step (i.e., both average errors are less than 5%). However, the time cost increases rapidly as the time step decreases. Besides, Table 8 further shows that the numerical accuracy is not sensitive to a smaller time step. Therefore, in a comprehensive consideration of numerical accuracy and efficiency, the present time step is applicable to the following numerical simulations.
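The convergence metrics used above are easy to reproduce. The sketch below computes Z_RMS and an average relative error for two heave time series; the sinusoidal series are synthetic stand-ins for the CFD heave histories, chosen only to show the bookkeeping.

```python
# Minimal sketch of the independence-test metrics: RMS of heave response and
# average error of one time series against a reference.
import numpy as np

t = np.linspace(0, 60, 6001)
heave_fine = 0.5 * np.sin(2*np.pi*t/6.0)            # reference (fine mesh)
heave_coarse = 0.52 * np.sin(2*np.pi*t/6.0 + 0.02)  # coarser result (toy)

z_rms_fine = np.sqrt(np.mean(heave_fine**2))
z_rms_coarse = np.sqrt(np.mean(heave_coarse**2))
rms_discrepancy = abs(z_rms_coarse - z_rms_fine) / z_rms_fine

# average relative error over the series (masking near-zero reference values)
mask = np.abs(heave_fine) > 0.05
avg_err = np.mean(np.abs(heave_coarse[mask] - heave_fine[mask])
                  / np.abs(heave_fine[mask]))

print(f"Z_RMS discrepancy: {rms_discrepancy:.2%}, average error: {avg_err:.2%}")
```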
The VOF/6-DOF Solver

Dunbar et al. [15] developed and validated, in OpenFOAM, a tightly coupled CFD/6-DOF method with improved accuracy and stability through motion performance investigations of a floating offshore wind turbine platform, comparing three methods. In consideration of the larger size and more complex structures of the present production platform, we perform two types of numerical simulations to validate the accuracy and stability of the present solver.

Numerical Simulation I. The offshore wind turbine platform is selected for the validation tests. Figure 10(a) plots the time series of heave displacement computed by ANSYS/Fluent with the present method and by OpenFOAM. The present results are in good agreement with the previous studies [12, 15]. Both curves exhibit relatively stable and similar oscillations, and only several peak and valley values differ slightly. The in-depth error analyses in Figure 10(b) show that the average error is approximately 7%, which is acceptable within engineering criteria. Thus, the accuracy and stability of the present method and implementation are validated.

Numerical Simulation II. The object is selected as Case 3. Figure 11(a) plots the time series of heave displacement computed by ANSYS/Fluent with the present method and by OpenFOAM. Similarly, the two curve trends preliminarily validate the applicability of the present study. The average error in Figure 11(b) is 0.52%, which further demonstrates the accuracy of the present method. In addition, Table 9 summarizes the Z_RMS results from numerical simulations I and II. It can be concluded that the results of the two methods are basically the same. Therefore, the tightly coupled VOF/6-DOF method compiled in the Fluent UDF Library proves more accurate and stable than the traditional method for assessing the towing and motion performances of Cases 1 to 6.

Numerical Examples

In this section, numerical simulations are performed to evaluate the effects of the different structural forms on the towing and motion performances under various environmental conditions.

Towing Performance Evaluation

The environmental conditions are as follows: the sea state, described by average wind speed, significant wave height, and wave period, is 11 m/s, 3 m, and 6 s, while the towing speed v_t ranges from 1 kn to 8 kn, with an interval Δv_t = 1 kn.

Pontoon End Shape. Under the combined loadings of wind and waves, Figure 12(a) depicts the relationship between the towing resistance of the platform and the towing speed for the three different pontoon end shapes. All the towing resistances of Cases 1 to 3 increase with the towing speed. At the same platform draft, the towing resistances rank as Case 3 > Case 1 > Case 2. Compared to Case 3 (rounded cube), the towing resistance of Case 2 (30° sharp angle) shows an approximately 14.6% decrease. Moreover, Figure 12(b) records the time series of the towing resistance coefficients of the pontoons alone in Cases 1 to 3 at a towing speed of 4 kn. The time-varying resistance coefficients of the pontoons in Case 2 are always less than those of Cases 1 and 3, which similarly illustrates that Case 2 has the better towing performance. However, as shown in Table 3, the pontoon length of Case 2 is 16.16 m longer than that of Case 3 and 13.38 m longer than that of Case 1. Thus, for a uniform design length of pontoon across the three cases, the draft of Case 2 would be much less than those of Cases 1 and 3. Under this assumption, Case 1 turns out to be the relatively ideal scheme.
Therefore, the pontoon end shape selection depends on both the towing performance and the design requirements. Figure 12(c) investigates the towing resistances of each structural component as well as the whole platform at a towing speed of 4 kn. The total towing resistance of Case 3 is 1310 kN. The contributions of the three structural components, in descending order, are pontoons, columns, and horizontal connections (braces), which account for approximately 45%, 33%, and 22% of the total resistance, respectively. Therefore, in consideration of towing performance optimization, the pontoon end shape should have priority over the other two components.

Column Cross Section. Figure 13(a) investigates the varying towing resistances of Cases 3 and 4 at different towing speeds. Both curves show that the towing resistance increases with the towing speed. Meanwhile, the resistances of Case 4 are always less than those of Case 3, with differences ranging from 16.7% to 19.4%. For towing speeds less than 4 kn, the towing resistances of Cases 3 and 4 differ little, while for speeds larger than 4 kn, the difference increases significantly. Figure 13(b) records in detail the time series of the towing resistance coefficient of the columns alone in Cases 3 and 4 at a towing speed of 4 kn. The towing resistance coefficients of the columns in Case 4 are at all times less than those in Case 3. Their maximum percentage difference reaches 51.7%. Given that the contribution of the columns to the total resistance is 33%, the total maximum difference in the towing resistance coefficient between Cases 3 and 4 is about 17.1%. This estimate is relatively close to the result in Figure 13(a). To sum up, compared to Case 3, Case 4 with a circular cross section has a better towing performance. The column selection still needs to take the draft and motion response into consideration.

Brace Longitudinal Section. Figure 14 investigates the towing resistances of Cases 3, 5, and 6, which differ in brace longitudinal section. All the towing resistances of Cases 3, 5, and 6 increase with the towing speed. Meanwhile, the towing resistances of Case 3 are larger than those of Cases 5 and 6; compared to Case 3, the differences are 16.1% and 9.3%, respectively. Figure 14(b) records the time series of the towing resistance coefficient of the braces alone in Cases 3, 5, and 6 at a towing speed of 4 kn. The resistance coefficients of Cases 5 and 6 are reduced by approximately 45.7% and 39.1%, respectively, compared to Case 3. Given that the contribution of the braces to the total resistance is 22%, the resistances of Cases 5 and 6 show a 10% and 8% decrease relative to Case 3. Therefore, the towing performance of the plate shapes in Cases 5 and 6 is better than that of the circular longitudinal section in Case 3.

Motion Performance Evaluation

The environmental conditions are as follows: the wind and wave direction is 0°, the wave period T_wave ranges from 3 s to 123 s with an interval ΔT_wave = 1 s, and v_t = 0.

Pontoon End Shape. The periodic motion responses of Cases 1 to 3 are calculated using the time-frequency conversion method, as shown in Figure 15. The curves for the three cases in roll, pitch, and heave responses have similar trends, and the response amplitudes of Cases 1 to 3 show no significant difference. The maximum roll response amplitudes of Cases 1 to 3 are 0.67788, 0.6853, and 0.6891. The maximum pitch response amplitudes of Cases 1 to 3 are 0.9846, 1.0014, and 1.0012, and the maximum heave response amplitudes of Cases 1 to 3 are 1.9388, 1.9728, and 2.0137.
Motion Performance Evaluation. The environmental conditions are as follows: the wind and wave direction is 0°, the wave period T_wave ranges from 3 s to 123 s with interval ΔT_wave = 1 s, and v_t = 0.

Pontoon End Shape. The periodic motion responses of Cases 1 to 3 are calculated using a time-frequency conversion method, as shown in Figure 15. The curves of the three cases for the roll, pitch, and heave responses follow similar trends, and the response amplitudes of Cases 1 to 3 show no significant difference: the maximum roll response amplitudes of Cases 1 to 3 are 0.67788, 0.6853, and 0.6891; the maximum pitch response amplitudes are 0.9846, 1.0014, and 1.0012; and the maximum heave response amplitudes are 1.9388, 1.9728, and 2.0137. Thus, the pontoon end shape can be considered to have little influence on the motion performance of the whole platform, and in pontoon design its effect on the motion response can be neglected.

Column Cross Section. In contrast to the pontoon end shape, the roll, pitch, and heave responses all change significantly for different column cross sections, as shown in Figure 16. As seen from the roll and pitch periodic response curves in Figure 16, Cases 3 and 4 differ appreciably. In sum, for motion response, Case 3 and Case 4 each have their own advantages and disadvantages. Thus, the column selection should rest on a comprehensive consideration of the towing performance, the motion response requirements of each degree of freedom, etc.

Brace Longitudinal Section. Figure 17 plots the roll, pitch, and heave periodic responses of Cases 3, 5, and 6, which differ in brace longitudinal section. Similar to the pattern observed for the pontoon end shape, all the curves are relatively close: the maximum roll response amplitudes of Cases 3, 5, and 6 are 0.6823, 0.6835, and 0.6831; the maximum pitch response amplitudes are 0.9916, 0.9969, and 0.9949; and the maximum heave response amplitudes are 1.9892, 2.0032, and 1.9986. The peak response amplitudes differ little and occur at the same wave periods. Therefore, Cases 3, 5, and 6 have essentially similar motion performance. In summary, the effect of the brace longitudinal section on the motion performance can be ignored, and the brace design need only consider the towing performance and other design requirements.

Conclusions. In this study, the effects of different structural components on the towing and motion performances are investigated using an integrated CFD method comprising RANS with the SST k-ω turbulence model, a VOF/6-DOF solver (i.e., a joint application of the VOF model and a tightly coupled 6-DOF solver), and a UDF Library for reading the time series of wave speed. The numerical simulations lead to the following conclusions:

(a) The VOF/6-DOF solver compiled in a new Fluent UDF Library has been validated to be accurate and stable. By comparing the heave responses of the wind turbine platform used in a previous study and the production platform of Case 3 computed with different methods, it is verified that the solver adopted in the present study agrees well with OpenFOAM; it improves computational stability and accuracy compared with the traditional 6-DOF solver in ANSYS/Fluent.

(b) Towing performances of Cases 1 to 6 are evaluated at towing speeds ranging from 1 kn to 8 kn. All towing resistances increase with the towing speed. The platform components rank, in descending order of contribution, pontoons, columns, and braces, sharing approximately 45%, 33%, and 22% of the total resistance, respectively. From the perspective of towing performance alone, the in-depth investigation above suggests the following. For the pontoon end shape, the proper form is Case 2 (30° sharp angle); however, for the same platform draft, the proper form is Case 1 (half round). For the column cross section, Case 4 (circle) has a better towing performance than Case 3 (square), particularly when the towing speed exceeds 4 kn. For the brace longitudinal section, the towing resistances of Cases 5 and 6 (plate shape) are relatively close and less than those of Case 3 (circle).
Thus, the brace design should preferably adopt the plate shape.

(c) Motion responses of Cases 1 to 6 are analyzed at 121 sea states (i.e., the wave period ranges from 3 s to 123 s with an interval of 1 s). The roll, pitch, and heave periodic responses lead to the following suggestions. First, the pontoon end shape and the brace longitudinal section have no significant influence on the motion response of the whole platform, whereas the motion responses for different column cross sections differ greatly. The roll and pitch responses of Case 4 are greater than those of Case 3, which may threaten the safety of the platform; conversely, the heave responses of Case 4 are less than those of Case 3, which improves it. Therefore, the column design should comprehensively consider the response amplitude limitations of each degree of freedom, the towing performance, and other design requirements.

Data Availability. The data used to support the findings of this study are included within the article. Previously reported data were used to support this study and are available at

Disclosure. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Conflicts of Interest. The authors declare no conflicts of interest regarding the publication of this paper.
Large and moderate deviations for kernel-type estimators of the mean density of Boolean models

Abstract: The mean density of a random closed set with integer Hausdorff dimension is a crucial notion in stochastic geometry; in fact, it is a fundamental tool in a large variety of applied problems, such as image analysis, medicine, computer vision, etc. Hence the estimation of the mean density is a problem of interest from both a theoretical and a computational standpoint. Several kinds of estimators are nowadays available in the literature; here we focus on a kernel-type estimator, which may be considered a generalization of the traditional kernel density estimator of random variables to the case of random closed sets. The aim of the present paper is to provide asymptotic properties of such an estimator in the context of Boolean models, which are a broad class of random closed sets. More precisely, we are able to prove large and moderate deviation principles, which allow us to derive the strong consistency of the estimator of the mean density as well as asymptotic confidence intervals. Finally, we underline the connection of our theoretical findings with the classical literature concerning density estimation of random variables.

Introduction. The mean density of lower dimensional random closed sets, such as fiber processes and surfaces of full dimensional random sets, is an important quantity arising in different scientific fields. As a consequence, its evaluation and estimation have attracted growing interest during the last decades [6,19]. Recent areas of application include pattern recognition and image analysis [40,28], computer vision [42], medicine [1,8,15,16,17], and material science [14]. We remind that, given a probability space (Ω, F, P), a random closed set Θ in R^d is a measurable map

Θ : (Ω, F) → (F, σ_F),

where F denotes the class of closed subsets of R^d, and σ_F is the σ-algebra generated by the so-called Fell topology, or hit-or-miss topology, that is, the topology generated by the set system

{F_G : G ∈ G} ∪ {F^C : C ∈ C}, with F_G := {F ∈ F : F ∩ G ≠ ∅} and F^C := {F ∈ F : F ∩ C = ∅},

where G and C are the systems of open and compact subsets of R^d, respectively (e.g., see [36]). We say that a random closed set Θ : (Ω, F) → (F, σ_F) satisfies a certain property (e.g., Θ has Hausdorff dimension n) if Θ satisfies that property P-a.s.; throughout the paper we shall deal with countably H^n-rectifiable random closed sets, having denoted by H^n the n-dimensional Hausdorff measure. A random closed set Θ_n of locally finite n-dimensional Hausdorff measure induces a random measure μ_Θn(A) := H^n(Θ_n ∩ A), A ∈ B_R^d, and the corresponding expected measure is defined as

E[μ_Θn](A) := E[H^n(Θ_n ∩ A)], A ∈ B_R^d,

where B_R^d is the Borel σ-algebra of R^d. (The important issue of the measurability of the random variable μ_Θn(A) has been addressed in [5,45].) Whenever the measure E[μ_Θn] is absolutely continuous with respect to the d-dimensional Hausdorff measure H^d, its density (i.e., its Radon-Nikodym derivative) with respect to H^d is called the mean density of Θ_n and, according to the notation of previous works (e.g., see [18,20]), is denoted by λ_Θn. It is worth mentioning that, while the estimation of the mean density in stationary settings has been widely studied in the literature (see, e.g., [6,23]), the non-stationary case has been addressed only recently and, to the best of our knowledge, a general density estimation theory for random sets is still missing. The aim of the present paper is the investigation of this area.
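For orientation, a standard worked example (a general fact about stationary fibre processes, stated here under our notation but not taken from this paper): for a stationary Boolean model of segments Θ_1 in R^2 with germ intensity λ and i.i.d. segment lengths L, the expected length measure is proportional to Lebesgue measure, so the mean density is constant:

```latex
% Mean density of a stationary Boolean segment model in R^2
% (germ intensity \lambda, i.i.d. lengths L with finite mean).
\mathbb{E}\big[\mathcal{H}^{1}(\Theta_1 \cap A)\big]
  = \lambda\,\mathbb{E}[L]\,\mathcal{H}^{2}(A)
\qquad\Longrightarrow\qquad
\lambda_{\Theta_1}(x) \equiv \lambda\,\mathbb{E}[L].
```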
As a matter of fact, the problem of the local and global approximation of λ_Θn for non-stationary random sets has been tackled by the authors in [2,18,19,20,44]. More specifically, given an i.i.d. random sample Θ_n^(1), ..., Θ_n^(N) of size N for the random closed set Θ_n, the authors have provided two different kinds of estimators for the mean density of Θ_n: the so-called "Minkowski content"-based estimator, introduced in [43] through the notion of the Minkowski content of a set (see, e.g., [3]), and the so-called kernel-type estimator, introduced in [10] and denoted here by λ^κ,N_Θn (for its precise definition see Eq. (6) below). We refer to [10] for a discussion of the similarities and differences between them; we mention here that, even if the evaluation of λ^κ,N_Θn(x) is a non-trivial issue for very general random sets, it has been shown in [11] that it approaches the true value of λ_Θn(x) much faster than the "Minkowski content"-based estimator. We point out that the importance of the estimator λ^κ,N_Θn(x) arises in the general theory of random sets, because it may be regarded as a generalization of the classical kernel density estimator of random variables to the case of random sets (see also Section 6); this is the reason why we shall refer to λ^κ,N_Θn(x) as the "kernel-type" estimator (or briefly kernel density estimator), and why its investigation plays a pivotal role in the whole theory of random sets, providing a unifying approach to density estimation. While the asymptotic properties of the "Minkowski content"-based estimator, as well as asymptotic confidence intervals and central limit theorems, have been studied in [13], no analogous results are yet available for the kernel-type estimator of the mean density. Hence the main aim of the present paper is the investigation of large and moderate deviation principles for λ^κ,N_Θn(x) for a large class of random closed sets, known as Boolean models, leaving extensions to more general classes to subsequent works. The analysis we carry out is much in the spirit of [31,35], who proved similar results for kernel estimators of random variables. Even if Boolean models do not cover all the variety of random sets, as stated in [4] they are usually considered the basic random set models in stochastic geometry. So the present paper may be seen as a first step in extending large and moderate deviation principles for kernel density estimators of random variables to the case of kernel-type estimators of the mean density of random sets. The theorems we are going to prove are interesting in their own right; in addition, they provide tools to derive asymptotic normality and strong consistency of kernel-type estimators, which are useful to determine asymptotic confidence intervals as well. The paper is organized as follows. In Section 2, we depict the general framework of the Boolean models that we handle in this paper; besides, we briefly recall all the results on stochastic geometry and large deviation theory that are necessary for the aims of the present paper. Large and moderate deviation principles for the kernel-type estimator of the mean density are presented in Section 3, namely in Theorem 2 and Theorem 3, respectively. These theorems are the basic building blocks to derive statistical properties of such an estimator: indeed we are able to prove its strong consistency and to derive asymptotic confidence intervals (see Section 4). Some noteworthy examples of Boolean models are discussed in Section 5.
Finally, Section 6 contains a discussion of relevant connections with the literature and paves the way for future developments of the present work. For the reader's convenience, the proofs of the main theorems, and some related technical lemmas, are deferred to Appendix A.

Preliminaries and notations. This section gathers some basics of stochastic geometry and large deviations which are necessary to understand our main results. Clearly the treatment here is not exhaustive; thus, throughout the paper, we provide references for those readers who wish to deepen the results we recall.

Point processes, intensity measure and Boolean models. Roughly speaking, a point process, denoted here by Φ, is a locally finite collection {ξ_i}_{i∈N} of random points; more formally, Φ is a random counting measure, that is, a measurable map from a probability space (Ω, F, P) into the space of locally finite counting measures on R^d. Throughout the paper we deal with simple point processes, that is, Φ({x}) ≤ 1 for all x ∈ R^d, P-a.s. The measure Λ(A) := E[Φ(A)] on B_R^d is called the intensity measure of Φ; whenever it is absolutely continuous with respect to H^d, its density is called the intensity of Φ. Marked point processes may be regarded as a generalization of point processes. They are collections of random points ξ_i in R^d, each one associated with a mark K_i, which usually belongs to a complete and separable metric space (c.s.m.s.) K. Hence the resulting collection of random points Φ = {(ξ_i, K_i)}_{i∈N} is a point process on R^d × K, with the property that the unmarked collection {ξ_i}_{i∈N} is itself a point process on R^d. A common assumption (e.g., see [33]) is that there exist a measurable function f : R^d × K → R_+ and a probability measure Q on K such that Λ(d(x, K)) = f(x, K) dx Q(dK). We also recall that point processes can be considered on quite general metric spaces. In particular, a point process in C^d, the class of compact subsets of R^d, is called a particle process (see [4] and references therein). It is well known that, by a center map, a particle process can be transformed into a marked point process Φ on R^d with marks in C^d, by representing any compact set C as a pair (x, Z), where x may be interpreted as the "location" of C and Z := C − x as its "shape" (or "form"). In this case the marked point process Φ = {(X_i, Z_i)} is also called a germ-grain model. Every random closed set Θ in R^d can be represented as a germ-grain model by means of a suitable marked point process, namely Θ = ∪_i (X_i + Z_i). In a large variety of applications the random sets Z_i are uniquely determined by a suitable random parameter S ∈ K. Typical examples include: unions of random balls, where K = R_+ and S is the radius of a ball centered at the origin; and segment processes in R^2, in which K = R_+ × [0, 2π] and S = (L, α), where L and α are the random length and orientation of the segment attached to the origin, respectively. In order to be consistent with the notation used in previous works (e.g., [44,10]), we shall consider random sets Θ_n described by marked point processes Φ in R^d with marks in a suitable mark space K, so that Z = Z(S) is a random set containing the origin:

Θ_n = ∪_{(ξ_i, S_i) ∈ Φ} (ξ_i + Z(S_i)).

Whenever Φ is a marked Poisson point process, Θ_n is said to be a Boolean model. Since we consider Boolean models here, we also recall that a marked Poisson point process in R^d with marks in K may be seen as a Poisson point process on R^d × K with intensity measure Λ if Λ(· × K) is continuous and locally bounded.
For an exhaustive treatment of point processes we refer to [24,25], and to [34] for an elegant presentation of Poisson processes. Further, we mention [36,37,38,39] for a unified theory of germ-grain models.

Basics on large and moderate deviations. The theory of large deviations is concerned with the asymptotic estimation of the probabilities of rare events, giving asymptotic computations of small probabilities on an exponential scale. Assume that (X, X) is a Polish space equipped with its Borel σ-algebra. The large deviation principle characterizes the asymptotic behavior, as N goes to infinity, of a family of probability measures {μ_N}_{N≥1} on (X, X) in terms of a rate function. A rate function is a lower semicontinuous map J* : X → [0, +∞], i.e., the level sets {x : J*(x) ≤ α} are closed for every α ≥ 0; J* is said to be a good rate function if its level sets are compact. The set {x : J*(x) < +∞} is the domain of J*. Let v_N be a velocity, namely a function such that v_N → +∞ as N → ∞. A family of probability measures {μ_N}_{N≥1} is said to satisfy a Large Deviation Principle (LDP) with rate function J* and velocity v_N if and only if, for any A ∈ X,

− inf_{x ∈ Å} J*(x) ≤ liminf_{N→∞} (1/v_N) log μ_N(A) ≤ limsup_{N→∞} (1/v_N) log μ_N(A) ≤ − inf_{x ∈ Ā} J*(x),

where Å and Ā are the interior and the closure of A, respectively, and with the convention that the infimum over the empty set equals +∞. We say that a sequence of random variables satisfies the LDP when the sequence of measures induced by these variables satisfies the LDP. The Gärtner-Ellis Theorem [26, Theorem 2.3.6] is the main tool to prove large deviations results. For our purposes, we consider the case X = R^m, with m ≥ 1, and X = B_R^m. In what follows, a · b := Σ_{j=1}^m a_j b_j denotes the scalar product between two generic vectors a = (a_1, ..., a_m) and b = (b_1, ..., b_m) of R^m. We also remind that a convex function f : R^m → (−∞, ∞] is said to be essentially smooth (see, e.g., Definition 2.3.5 in [26]) if the interior of its domain is non-empty, f is differentiable there, and f is steep. The term moderate deviations is used when, for a sequence {a_N} of positive numbers such that a_N → 0 and w_N a_N → +∞, where w_N → ∞ as N → ∞, a LDP holds for suitable centered random variables with speed v_N = 1/a_N and the same quadratic rate, which does not depend on the choice of {a_N}. Moderate deviations may be employed to obtain the weak convergence to a centered normal distribution whose variance is determined by a suitable application of the Gärtner-Ellis Theorem (e.g., see also [9]). This will be clarified in Section 4, where we shall apply the LDP and MDP to show that, for every x ∈ R^d, the kernel estimator λ^κ,N_Θn(x) of λ_Θn(x) is strongly consistent and asymptotically normal, respectively.

Notations and assumptions. To fix the notation, b_n denotes the volume of the unit ball in R^n, and B_r(x) is the closed ball centered at x ∈ R^d with radius r > 0. For any A ⊂ R^d and r > 0, its Minkowski enlargement at size r is denoted by A_⊕r := {x ∈ R^d : dist(x, A) ≤ r}. For further definitions and properties of rectifiable sets we refer to [3,29,30].
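A large deviation principle can be made concrete with a toy Monte Carlo experiment. The sketch below (a minimal illustration, not part of this paper's framework) uses Cramér's theorem for Bernoulli sample means, whose rate function is the relative entropy; note that (1/N) log P only approaches −I(a) slowly in N, and crude Monte Carlo works only while the rare event is still observable.

```python
import numpy as np

def rate_bernoulli(a, p):
    """Cramer rate function I(a) for Bernoulli(p) sample means
    (relative entropy of Bernoulli(a) with respect to Bernoulli(p))."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

rng = np.random.default_rng(0)
p, a = 0.5, 0.7
for N in (20, 50, 100):
    means = rng.binomial(N, p, size=500_000) / N
    prob = np.mean(means >= a)
    # (1/N) log P(mean >= a) should approach -I(a) as N grows
    print(N, np.log(prob) / N, -rate_bernoulli(a, p))
```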
In the sequel, we will say that Θ_n satisfies a certain property if such a property is satisfied for P-almost every ω ∈ Ω; in particular, Θ_n will be a Boolean model driven by a Poisson point process Φ in R^d × K with intensity measure Λ(d(x, s)) = f(x, s) dx Q(ds), satisfying the following assumptions:

(A1) for any s ∈ K, Z(s) is a countably H^n-rectifiable and compact subset of R^d, satisfying suitable regularity and boundedness conditions for some γ, γ' > 0 independent of s;

(A2) for any s ∈ K, H^n(disc(f(·, s))) = 0, where disc(f(·, s)) denotes the set of discontinuity points of f(·, s), and f(·, s) is locally bounded and suitably dominated on compact sets.

These assumptions may seem a little technical at first glance, but they are natural hypotheses fulfilled by a wide class of germ-grain models, and their meaning has been extensively discussed in [10,44]; indeed, for the reader's convenience, we use here the same notations (A1) and (A2) introduced in [10] and in [44], respectively. We also recall that assumption (A1) guarantees (see Remark 4 and Proposition 5 in [44]) that the measure E[μ_Θn] defined in (1) is locally bounded and absolutely continuous, with density

λ_Θn(x) = ∫_K ∫_{x−Z(s)} f(y, s) H^n(dy) Q(ds).   (5)

In order to define the kernel density estimator of the mean density, we remind that a multivariate kernel is a probability density function κ : R^d → R which is radially symmetric. Summing up, throughout the paper, unless otherwise specified, we suppose the validity of the following: {Θ_n^(i)}_{i∈N} is a sequence of i.i.d. random closed sets distributed as Θ_n, and κ is a continuous kernel with compact support supp(κ) ⊂ B_R(0), such that κ(x) ≤ M for all x ∈ R^d and some M > 0. The kernel-type estimator λ^κ,N_Θn(x) of the mean density λ_Θn(x) at a point x ∈ R^d is defined as follows [10]:

λ^κ,N_Θn(x) := (1/N) Σ_{i=1}^N (κ_{r_N} * μ_{Θ_n^(i)})(x) = (1/(N r_N^d)) Σ_{i=1}^N ∫_{Θ_n^(i)} κ((x − y)/r_N) H^n(dy),   (6)

where * stands for the usual convolution product, while κ_{r_N}(·) := r_N^{−d} κ(·/r_N) and r_N > 0 is the bandwidth. It can be shown (see [10, Corollary 7]) that if the bandwidth r_N is such that r_N → 0 and N r_N^{d−n} → ∞, then λ^κ,N_Θn(x) is an asymptotically unbiased and weakly consistent estimator of λ_Θn(x). The notion of approximate tangent space shall appear in the expression for the rate function both in the LDP and in the MDP stated in Theorem 2 and Theorem 3, respectively. Such a notion is borrowed from geometric measure theory and is recalled below for the reader's convenience. Denoting by G_n the set of unoriented n-dimensional subspaces of R^d, and by C_c(R^d; R) the space of all real-valued continuous functions with compact support in R^d, we remind that an H^n-rectifiable compact set A ⊂ R^d admits an approximate tangent space π_x A ∈ G_n at x ∈ A whenever

lim_{r↓0} r^{−n} ∫_A φ((y − x)/r) H^n(dy) = ∫_{π_x A} φ(y) H^n(dy) for all φ ∈ C_c(R^d; R).   (7)

By Theorem 2.83 and Proposition 1.62 in [3], π_x A exists for H^n-a.e. x ∈ A; moreover, (7) holds for any bounded Borel measurable function φ : R^d → R with compact support such that H^n|_{π_x A}(disc(φ)) = 0. For the sake of simplicity, we have assumed that κ is continuous: this allows us to directly apply Eq. (7) in the sequel. We refer to [10, Remark 9] for a more detailed discussion of the non-continuous case.

Large and moderate deviations for the kernel-type estimator. In this section we state large and moderate deviation principles for the kernel density estimator defined in (6), deferring their proofs to the Appendix. Such results will be useful to derive statistical properties and confidence intervals for the estimator, as we will see in Section 4.

Theorem 2 (LDP). Let Θ_n and κ be as in the Assumptions. Then the sequence of kernel estimators {λ^κ,N_Θn(x)}_N satisfies a LDP with speed v_N = N r_N^{d−n} and good rate function J*_x.

Theorem 3 (MDP).
Let Θ_n and κ be as in the Assumptions, and let {b_N}_{N≥1} be a sequence of positive real numbers satisfying the conditions in (9). Then the corresponding sequence of centered and rescaled estimators satisfies a MDP with good rate function expressed in terms of the quantity C_Var(x) defined in (10).

Statistical properties and confidence intervals. In the previous section we stated large and moderate deviation principles for the kernel estimator of the mean densities of random closed sets; these results allow us to derive useful statistical properties for such an estimator. Indeed, proceeding along the same lines as [12, Remark 2], we can show that an estimate of the rate of convergence of λ^κ,N_Θn(x) to λ_Θn(x) follows as a byproduct of Theorem 2, and that an immediate application of the Borel-Cantelli lemma leads to a strong consistency result.

Proposition 4 (Convergence rate). Let Θ_n and κ be as in the Assumptions, and let Γ*_δ be defined as in Theorem 2; then the exponential upper bound (11) below holds for any η ∈ (0, Γ*_δ).

Proof. It is known that, when the Gärtner-Ellis Theorem (see Theorem 1) applies, the rate function J*(y) uniquely vanishes at y = y_0, where y_0 := ∇J(0). Denoting C_δ := {y : |y − y_0| ≥ δ} for any δ > 0, we have that inf_{y∈C_δ} J*(y) > 0, since J* is non-negative and uniquely vanishes at y_0. Therefore, as a consequence of the large deviation upper bound in (3) for the closed set C_δ, we obtain (11). By virtue of Theorem 2, the previous bound holds true for Z_N = λ^κ,N_Θn(x) and v_N = N r_N^{d−n}. Besides, using equations (5) and (29), it can easily be seen that in our setup y_0 = λ_Θn(x). Hence, in view of these remarks and (11), one concludes that for all η with 0 < η < Γ*_δ there exists N_0 such that the stated bound holds for all N ≥ N_0.

Corollary 5 (Strong consistency). Let Θ_n and κ be as in the Assumptions; then λ^κ,N_Θn(x) → λ_Θn(x) P-a.s.

Proof. Let H := Γ*_δ − η, with Γ*_δ defined as in Proposition 4 and η ∈ (0, Γ*_δ). Then H is a positive quantity independent of N, and Σ_{N≥1} exp(−H N r_N^{d−n}) < +∞. Thus the result follows from Proposition 4 and a standard application of the Borel-Cantelli lemma.

At the end of Section 2.2, we mentioned that the term moderate deviations is used when, for a sequence {a_N} of positive numbers satisfying the conditions in (4), a LDP holds for suitable centered random variables with speed v_N = 1/a_N. If we choose w_N = N r_N^{d−n}, we may observe that by Theorem 3 we are in the case a_N = N r_N^{d−n}/b_N², with b_N satisfying the conditions in (9). We also mention that the cases a_N = 1/w_N (so here b_N = N r_N^{d−n}) and a_N = 1 (so here b_N = √(N r_N^{d−n})) should correspond, respectively, to the convergence to zero and to the weak convergence to a centered normal distribution of the associated centered random variables. This is in accordance with the corollary above and with the proposition below.

Proposition 6 (Asymptotic normality). Let Θ_n and κ be as in the Assumptions. Then the sequence √(N r_N^{d−n}) (λ^κ,N_Θn(x) − E[λ^κ,N_Θn(x)]) converges weakly, as N → +∞, to the normal distribution N(0, C_Var(x)), where C_Var(x) is the quantity defined in (10).

Proof. One can proceed as in the proof of Theorem 3 with b_N = √(N r_N^{d−n}), noticing that the proof remains valid even if the first condition in (9) is violated. As a consequence, one obtains the required convergence of the log-moment generating functions, which is tantamount to the claimed weak convergence to N(0, C_Var(x)) as N → +∞.

We conclude the investigation of the statistical properties related to λ^κ,N_Θn(x) by providing asymptotic confidence intervals for λ_Θn(x), relying on Proposition 6. In order to do this we have to choose a specific bandwidth r_N, which is assumed to be the optimal bandwidth determined in [10]. Here we recall some useful results in this direction.
We remind that the best choice of r_N would be the one minimizing the mean square error (MSE), MSE(x) := E[(λ^κ,N_Θn(x) − λ_Θn(x))²]. The minimization of the MSE is a quite challenging problem, which cannot be solved in closed form even in the simplest case of kernel density estimators of random variables. Hence one looks for an r_N which minimizes the asymptotic mean square error (AMSE). For Θ_n and κ as in the Assumptions, an asymptotic approximation of the variance, expressed in terms of the quantity C_Var(x) defined in (10), may be deduced from the proof of Theorem 8 in [10] (see (12)). As concerns the asymptotic approximation of the bias, further differentiability assumptions on f are required. To fix the notation (the same used in [10], for the reader's convenience), in the sequel α := (α_1, ..., α_d) will denote a multi-index of N_0^d, with |α| := α_1 + ... + α_d; besides, for all s ∈ K, we will put D^(α)(s) := disc(D^α_y f(·, s)). From now on we assume that f(·, s) is at least twice differentiable and that the following assumption is fulfilled for any |α| = 2:

(A2bis) for any s ∈ K, H^n(D^(α)(s)) = 0, and D^α_y f(y, s) is locally bounded and suitably dominated on any compact set.

An asymptotic approximation of the bias has been proved in [10, Theorem 8] (see (13)). From (12) and (13), the asymptotic optimal bandwidth r_N^{o,AMSE} in (14) follows for H^d-a.e. x ∈ R^d, provided that C_Bias(x) ≠ 0. (For a discussion of the case C_Bias(x) = 0 we refer to [10].)

Proposition 7. Let Θ_n and κ be as in the Assumptions and such that (A2bis) is fulfilled. If r_N is the asymptotic optimal bandwidth r_N^{o,AMSE} in (14), then the corresponding centered and rescaled estimator is asymptotically normal.

Proof. First of all, note that the decomposition in (15) holds, and its first term converges weakly to the standard normal distribution as N → +∞ by Proposition 6; one then checks that the non-random bias term in (15) converges to a finite limit.

Corollary 8 then provides the asymptotic confidence interval; its proof rests on Proposition 7. The asymptotic confidence intervals derived in the present section are based on the assumption C_Bias(x) ≠ 0, which does not hold for stationary Boolean models. However, in such a situation the kernel estimator is unbiased (see [10]), and Proposition 6 immediately yields the corresponding asymptotic normality, which is the basic building block to determine asymptotic confidence intervals for stationary Boolean models as well. Indeed, proceeding along the same lines as in the proof of Corollary 8, an asymptotic confidence interval for λ_Θn of level α follows; note that here r_N can be any bandwidth (a sketch of the resulting normal-approximation interval is given below).
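To make the normal-approximation interval concrete, here is a minimal sketch. It assumes, as suggested by Proposition 6, that √(N r_N^{d−n})(λ̂ − λ) is asymptotically N(0, C_Var(x)) (the unbiased/stationary setting) and that a consistent plug-in estimate of C_Var(x) is available; the function name, inputs, and the convention alpha = significance level are illustrative, not from the paper.

```python
import math
from statistics import NormalDist

def asymptotic_ci(lam_hat, c_var_hat, N, r_N, d, n, alpha=0.05):
    """Normal-approximation confidence interval for the mean density.

    Assumes sqrt(N * r_N**(d - n)) * (lam_hat - lam) ~ N(0, C_Var(x)),
    i.e. the unbiased (e.g. stationary) setting of Proposition 6.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * math.sqrt(c_var_hat / (N * r_N ** (d - n)))
    return lam_hat - half_width, lam_hat + half_width

# e.g. a fibre process in the plane (d = 2, n = 1), with made-up inputs:
print(asymptotic_ci(lam_hat=3.2, c_var_hat=1.5, N=500, r_N=0.05, d=2, n=1))
```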
Noteworthy examples. Here we discuss some relevant examples of Boolean models, in particular a Boolean segment process, the Poisson point process, and the Matérn cluster process.

A Boolean segment process. As a simple example of the applicability of the previous results, we discuss the Boolean segment process already introduced in [10]. Let n = 1 and assume that Θ_1 is an inhomogeneous Boolean model of segments in R^2 with random length L and uniform orientation, so that the mark space is K = R_+ × [0, 2π]. For all s = (ℓ, α) ∈ K, let Z(s) := {(u, v) ∈ R^2 : u = τ cos α, v = τ sin α, τ ∈ [0, ℓ]} be the segment with length ℓ ∈ R_+ and orientation α ∈ [0, 2π]. Denoting by P_L(dℓ) the probability law of the random length L, we assume that E[L³] < +∞. Finally, the segment process Θ_1 is driven by the marked Poisson process Φ in R^2 × K having intensity measure Λ(d(y, s)) = f(y) dy Q(ds), where f(y) = f(y_1, y_2) = y_1² + y_2² and Q(ds) = (1/2π) dα P_L(dℓ). We consider the kernel κ(z) = 1_{B_1(0)}(z)/π, which is not continuous; nevertheless, the theory developed here applies to this kernel thanks to [10, Remark 9]. More precisely, λ^κ,N_Θ1(x) takes an explicit form in this setting, and in [10] both this expression and the asymptotic optimal bandwidth are computed. Hence Proposition 7 and Corollary 8 apply with these specifications of λ^κ,N_Θ1(x), r_N, and C_Var(x).

Poisson point processes. Let Ψ be a Poisson point process in R^d with a continuous intensity λ_Ψ. We recall that Ψ may be seen as a particular Boolean model of Hausdorff dimension n = 0 and mean density λ_Ψ, by choosing K = R^d as mark space, Z(s) = s ∈ R^d as trivial typical grain, and Λ(d(y, s)) := λ_Ψ(y) dy δ_0(s) ds. As expected, the kernel-type estimator of λ_Ψ(x) then reduces to a classical kernel estimator of the intensity. In particular, by observing that π_y(x − Z(s)) = {0}, the rate function simplifies, and we can specialize the large and moderate deviation principles for λ^κ,N_Ψ(x) by a direct application of Theorem 2 and Theorem 3, respectively: the sequence of estimators satisfies a LDP with speed v_N = N r_N^d and good rate function as in (16), together with a MDP for any sequence {b_N} of positive real numbers satisfying (9). Finally, as a direct consequence of Proposition 6, it follows that the suitably centered and rescaled sequence converges weakly, as N → +∞, to the normal distribution N(0, ||κ||²₂ λ_Ψ(x)).
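The n = 0 case can be simulated directly; the sketch below (a minimal illustration with a hypothetical intensity, not from the paper) draws N i.i.d. inhomogeneous Poisson samples by thinning and evaluates the estimator (6) with d = 2, n = 0 and the uniform kernel κ(z) = 1_{B_1(0)}(z)/π used in the segment example above.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity(x, y):
    """Hypothetical (illustrative) intensity on the unit square."""
    return 100.0 * (1.0 + x)  # maximum value 200 at x = 1

def sample_poisson(lam_max=200.0):
    """One inhomogeneous Poisson sample on [0,1]^2 by thinning."""
    n = rng.poisson(lam_max)                  # candidates of the dominating process
    pts = rng.random((n, 2))
    keep = rng.random(n) < intensity(pts[:, 0], pts[:, 1]) / lam_max
    return pts[keep]

def kde_intensity(samples, x0, r):
    """Kernel-type estimator (6) with d = 2, n = 0 and kappa = 1_{B_1(0)} / pi."""
    total = 0.0
    for pts in samples:
        dist = np.linalg.norm(pts - x0, axis=1)
        total += np.count_nonzero(dist <= r) / np.pi  # sum of kappa((x0 - xi)/r)
    return total / (len(samples) * r ** 2)            # divide by N * r^d

samples = [sample_poisson() for _ in range(500)]
print(kde_intensity(samples, np.array([0.5, 0.5]), r=0.05), intensity(0.5, 0.5))
```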
Matérn cluster processes. Clustering is a fundamental operation on point processes, well known in stochastic geometry, which allows the construction of new point processes (see [23] for a more exhaustive treatment). The clustering operation consists in replacing each point x of a given point process Φ_p, called the parent point process, by a cluster N_x of points, called daughter points. Each cluster N_x is itself a point process, assumed to have a finite mean number of points. The point process given by the union of all the clusters N_x is said to be a cluster point process. Let us assume that the parent point process Φ_p is a homogeneous Poisson point process with intensity λ_p, and that the clusters are of the form N_{x_i} = N_i + x_i for each x_i ∈ Φ_p, where the sequence {N_i}_i is independent of Φ_p and independent and identically distributed as N_0 (the representative cluster, centered at 0). Assuming that the number of points of N_0 is distributed according to a Poisson random variable with parameter n_c, and that these points are independently and uniformly distributed in the ball B_R(0), where R is a further parameter of the model, the resulting cluster point process is called a Matérn cluster process. It follows that Φ has constant intensity λ_Φ = λ_p n_c and may be regarded as a Boolean model Θ_0 of dimension n = 0, with underlying Poisson point process Φ_p and typical grain Z_0 := N_0 given by a Poisson point process restricted to B_R(0). The resulting Boolean model Θ_0 ≡ Φ is driven by a marked Poisson point process in R^d × S having intensity measure Λ(d(ξ, η)) = λ_p dξ Q(dη), where the mark space K := S is the space of all finite sequences of points in R^d and Q is the probability distribution of N_0. Note that assumptions (A1) and (A2) are trivially fulfilled; as a consequence, all the previous results on λ^κ,N_Θ0(x) ≡ λ^κ,N_Φ(x) hold in this context. A LDP follows from Theorem 2; more specifically, the general expression for J*_x appearing in the statement of that theorem simplifies in the context of Matérn cluster processes. Hence the same large deviation principle stated in Section 5.2 for a Poisson point process Ψ holds for the Matérn cluster process Φ, replacing the intensity λ_Ψ with λ_Φ in the expression for the rate function J*_x (16). In a similar vein, one can prove the validity of the MDP stated in Section 5.2 for the Matérn cluster process Φ, where again the intensity λ_Ψ is replaced with λ_Φ in (17).

Discussion and concluding remarks. We have proved large and moderate deviation principles for kernel-type estimators of the mean density of Boolean models. Thanks to these results, we have been able to derive the consistency of the estimator, as well as asymptotic confidence intervals. Theorems 2 and 3 are connected with classical results concerning the kernel-type estimator of the density function of an absolutely continuous random variable due to [31,35]. Here we want to pinpoint the connection with the classical literature in view of future developments. More specifically, let X be a random variable taking values in R^d with probability density function f_X, and let X_1, ..., X_N be a random sample of X. The kernel density estimator f_N^X(x) of f_X(x) at a point x ∈ R^d is traditionally [27,32,41] defined as

f_N^X(x) := (1/(N r_N^d)) Σ_{i=1}^N κ((x − X_i)/r_N).

The scaling parameter r_N, known as the bandwidth, determines the smoothness of the estimator, and it has to be chosen such that r_N → 0 and N r_N^d → ∞ to obtain an asymptotically unbiased and weakly consistent estimator f_N^X(x). The kernel density estimator λ^κ,N_Θn(x) of λ_Θn(x) defined in Eq. (6) may be seen as the natural extension of f_N^X(x) to very general random geometric objects in R^d of Hausdorff dimension n > 0, i.e., not necessarily Boolean models (see also [10, Section 3.3.1]). Large and moderate deviation principles for kernel density estimators of f_X have been investigated in [31,35] with different techniques; in particular, in [31] the author establishes pointwise, as well as uniform, moderate and large deviation principles for the sequence f_N^X, even for more general kernel functions κ. We recall here the pointwise results for large and moderate deviations given in [31, Proposition 3.1] and [31, Proposition 2.1], respectively, specialized to our notation and assumptions on κ: the sequence of estimators satisfies a LDP with speed v_N = N r_N^d and rate function as in (18), together with a MDP for any sequence {b_N} of positive real numbers satisfying the analogue of (9). If Theorems 2 and 3 were true for a general germ-grain model (not only for Boolean models), then the results concerning random variables just recalled here would follow as a particular case. Indeed, a random variable X ≡ Θ_0 can be seen as a trivial germ-grain process driven by the marked point process Φ = {(X, s)} in R^d with mark space K = R^d, consisting of one point (X) only, with grain Z(s) := s and intensity measure Λ(d(y, s)) = f(y) dy δ_0(s) ds. With these choices, equation (5) implies that λ_Θ0(x) = f_X(x), i.e., the mean density of X amounts to its probability density function, and the expressions in (18) and (19) follow (formally) by replacing Λ(d(y, s)) = f(y) dy δ_0(s) ds and Θ_n = X in Theorem 2 and Theorem 3, respectively, in an analogous way as in Section 5.2. Note that the further term t f_X(x) appearing in (18) is due to the different centering considered there. Hence we may ask whether the theorems obtained for Boolean models extend to more general random closed sets, e.g., germ-grain models. In such a case, as just observed, the results of [31,35] would follow as a particular case of a more general theory. Otherwise, if the extension is not possible, the independence property of the underlying Poisson point processes would be peculiar in obtaining such expressions.
This problem remains open and requires different kinds of techniques with respect to the ones employed here, which are mainly based on the availability of the Laplace functional of a Poisson point process. Finally, it is worth underlining that the theoretical results proved in this paper and in [12] may be useful in many applications, for example to determine confidence intervals for the estimators. A future work in this direction, on which we are already working, will focus on simulation studies of the kernel-type estimator in comparison with other estimators, such as the "Minkowski content"-based estimator mentioned in the Introduction.

A.1. Proof of Theorem 2. Before proving Theorem 2, we provide two technical lemmas. For the sake of simplifying notation, we define the auxiliary function h_r(ξ, s), and we shall write h_N(ξ, s) if r = r_N in this definition.

Proof. First of all we remind (see [34, pg. 28]) that if Ψ is a Poisson point process on X with intensity measure μ, then, for any measurable function g : X → R and any complex number ϑ (under suitable integrability conditions),

E[exp(ϑ ∫_X g(x) Ψ(dx))] = exp(∫_X (e^{ϑ g(x)} − 1) μ(dx)).

By observing that min{h(ξ, s), 1} ≤ h(ξ, s), and that 1/r^n ≤ 1/r^d if r ≤ 1, we obtain the first bound. We remind that κ is a kernel with supp(κ) ⊂ B_R(0) and Z(s) ⊆ Ξ(s), and we notice that only points ỹ ∈ Z(s) sufficiently close to x contribute; thus we may bound the integral accordingly. Finally, Lemma 3 in [44] guarantees that the event that different grains of Θ_n overlap in a subset of R^d of positive H^n-measure has null probability; therefore we may claim the assertion.

Lemma 11. Let Θ_n and κ be as in the Assumptions. If r < min{1, 1/(2R)}, the bound stated in (25) holds for any s ∈ K, t ∈ R, w ∈ R^d, and H^n-a.e. y ∈ x − Z(s).

Proof. First of all consider the case t ≤ 0. The case t > 0 is less trivial, and we employ the Taylor series expansion of the exponential. Finally, the integrability condition (25) easily follows.

Proof of Theorem 2. The proof relies on the Gärtner-Ellis Theorem. First of all we show that the limit defining J exists; then we observe that J is a smooth function defined on R, hence satisfying the assumptions of the Gärtner-Ellis Theorem. Since {Θ_n^(i)}_{i∈N} is a sequence of i.i.d. random sets, for N sufficiently large that r_N < 1 we may compute the relevant moment generating function. It is worthwhile to multiply and divide the integrand by suitable normalizing factors; then, by suitable changes of variable, a chain of equalities holds. Denoting by D_f(s) the set of discontinuity points of f(·, s) for any s ∈ K, assumption (A2) implies H^n(D_f(s)) = 0; therefore the pointwise convergence holds for any s ∈ K, w ∈ R^d, and H^n-a.e. y ∈ x − Z(s). Thus (29) follows by a simple application of the dominated convergence theorem, whose validity is guaranteed by Lemma 11. To conclude the proof, we observe that J satisfies the assumptions of Theorem 1. More precisely, as a byproduct of the application of the dominated convergence theorem, J(t) < +∞ for any t ∈ R. Finally, we show that J is differentiable at any t_0 ∈ R. In order to prove this, fix t_0 ∈ R and δ > 0 sufficiently small; following [7, Theorem 16.8], we need to show that the integrand is bounded from above, for any t ∈ (t_0 − δ, t_0 + δ), by an integrable function. To this end, the definition of approximate tangent space and arguments similar to those in (27) give the required bound; hence the thesis follows.

A.2. Proof of Theorem 3. Before proving Theorem 3 we need some useful lemmas, in which the summation is extended over all positive integers that solve the equation q_1 + ... + q_n = n.
It is worth noticing that the summation is known as the Bell number, which amounts to the number of partitions of k objects into non-empty disjoint sets (see [21, pg. 292]). By [21, pg. 97] we have a useful representation, and the Stirling numbers of the second kind satisfy a convenient recurrence relation.

Proof. With the notation introduced in (20), the same argument as at the end of the proof of Lemma 10, together with standard combinatorial arguments, shows that, for any k ≥ 3, the k-th moment can be expanded as a sum in which (*) runs over all vectors (q_1, ..., q_i) of positive integers such that q_1 + ... + q_i = k; besides, we have used the fact that the marked point process is simple. Since there are i! possible permutations of the points ξ_1, ..., ξ_i, we can rewrite the expansion in terms of ν^[i], the i-th factorial moment measure of Φ (e.g., see [23]), and of the function τ defined in (31). By the end of the proof of Lemma 12 we know that the relevant bound holds whenever r is sufficiently small, i.e., r ≤ min{1, 1/(2R)}. Therefore, recalling the definition of the Bell numbers B_k (see (35)) and considering the summation in (36) for any t_0 > 0, the previous bound for the expectation yields

r^{d−n} Σ_{m≥1} e^{t_0 C m}/m! = r^{d−n} (exp(e^{t_0 C}) − 1),

and the right-hand side is O(r^{d−n}), which implies the assertion. Finally, we recall that the discrete version of the Hölder inequality can be written as

Σ_{i=1}^n x_i y_i ≤ (Σ_{i=1}^n x_i^p)^{1/p} (Σ_{i=1}^n y_i^q)^{1/q},   (37)

where x_i, y_i ≥ 0 for any i = 1, ..., n, and p, q > 0 are such that 1/p + 1/q = 1. By specializing (37) with n = 2, y_i = 1 for i = 1, 2, p = k, and q = k/(k − 1), it directly follows that (x_1 + x_2)^k ≤ 2^{k−1}(x_1^k + x_2^k) for any x_1, x_2 > 0 and any integer k > 0. We are now ready to prove the theorem.

Proof of Theorem 3. Let us define the relevant log-moment generating functions and J_x(t) as the limit of the corresponding rescaled quantities, after proving that the limit exists and is finite for any t ∈ R; then the good rate function follows as a direct application of Theorem 1. First of all observe that, for any t ∈ R, the expansion (40) holds. In order to bound the term R(N) appearing in that equation, we note that for any real-valued random variable X the inequality (E|X|)^k ≤ E[|X|^k] holds, by a standard application of the Hölder inequality. Hence, taking X = ∫_{Θ_n} κ((x − y)/r_N) H^n(dy), the remainder term R(N) in (40) can be controlled. By assumption, {Θ_n^(i)}_{i∈N} is a sequence of i.i.d. random closed sets distributed as Θ_n; therefore, by (12),

Var(∫_{Θ_n} κ((x − y)/r_N) H^n(dy)) = N r_N^{2d} Var(λ^κ,N_Θn(x)).

As a consequence, the rate function has the stated form, and the assertion follows.
Synthesis of “All-Cis” Trihydroxypiperidines from a Carbohydrate-Derived Ketone: Hints for the Design of New β-Gal and GCase Inhibitors

Pharmacological chaperones (PCs) are small compounds able to rescue the activity of mutated lysosomal enzymes when used at subinhibitory concentrations. Nitrogen-containing glycomimetics such as aza- or iminosugars are known to behave as PCs for lysosomal storage disorders (LSDs). As part of our research into lysosomal sphingolipidoses inhibitors, and looking in particular for new β-galactosidase inhibitors, we report the synthesis of a series of alkylated azasugars with a relative “all-cis” configuration at the hydroxy/amine-substituted stereocenters. The novel compounds were synthesized from a common carbohydrate-derived piperidinone intermediate 8, through reductive amination or alkylation of the derived alcohol. In addition, the reaction of ketone 8 with several lithium acetylides allowed the stereoselective synthesis of new azasugars alkylated at C-3. The activity of the new compounds towards lysosomal β-galactosidase was negligible, showing that the presence of an alkyl chain at this position is detrimental to inhibitory activity. Interestingly, 9, 10, and 12 behave as good inhibitors of lysosomal β-glucosidase (GCase) (IC50 = 12, 6.4, and 60 µM, respectively). When tested on cell lines bearing the Gaucher mutation, they did not impart any enzyme rescue. However, altogether, the data included in this work give interesting hints for the design of novel inhibitors.

Introduction. Lysosomal storage disorders (LSDs) are a group of more than 70 inherited orphan diseases caused by specific mutations in genes encoding lysosomal enzymes and characterized by progressive accumulation of substrates within the lysosomes, leading to organ dysfunctions [1-4]. Defective activity of lysosomal β-galactosidase (β-Gal), which is responsible for the hydrolytic removal of a terminal β-galactose residue from several glycoconjugates, leads to two different types of rare LSDs: the sphingolipidosis GM1-gangliosidosis and the mucopolysaccharidosis IVB (MPS IVB, also known as Morquio disease type B) [5,6]. Typical substrates of β-Gal are GM1-gangliosides, glycoproteins, oligosaccharides, and the glycosaminoglycan keratan sulfate, which accumulate inside the lysosomes when the enzyme malfunctions, usually because of a mutation in its natural amino acid sequence. To date, no pharmacological treatment is available for these severe diseases. Pharmacological chaperone therapy (PCT) is emerging as a promising therapeutic approach for the treatment of LSDs, especially for those that cannot be treated with enzyme replacement therapy (ERT). PCT is based on pharmacological chaperones (PCs), small molecules that can selectively bind mutant enzymes in the endoplasmic reticulum (ER) and stabilize their correct three-dimensional conformation when used at subinhibitory concentrations, thus improving lysosomal trafficking and rescuing enzymatic activity. PCs are reversible inhibitors and can be replaced by the natural substrate of the enzyme inside the lysosomes. Their main advantages include oral administration, broad body distribution (PCs have the potential to cross the blood-brain barrier), and minor side effects [7-11]. Nitrogen-containing glycomimetics, such as iminosugars (carbohydrate analogues with a nitrogen atom replacing the endocyclic oxygen), are the most investigated class of PCs for LSDs [12,13].
Recently, the first commercially available PC for treating lysosomal Fabry disease, Galafold™ (1-deoxygalactonojirimycin, DGJ, 1, Figure 1), has been marketed in Europe [14]. Previous studies have shown that the N-alkylated derivative of DGJ, N-nonyl-DGJ (2, Figure 1), is able to rescue the intracellular activity of mutant β-Gal in GM1-gangliosidosis patient fibroblasts, thus highlighting the potential of PC therapy for patients with brain pathologies [15,16]. In addition, several azasugars, carbohydrate analogues in which nitrogen formally replaces the anomeric carbon, have shown interesting biological activities, as recently reviewed by Simone and coworkers [17]. In particular, 4-epi-isofagomine (3, Figure 1) showed moderate inhibition of human lysosomal β-Gal (IC50 = 1 µM) [18]. Introduction of an alkyl chain, as in compounds 4 and 5 (Figure 1), enhanced the inhibitory potency to IC50 = 10 nM and IC50 = 0.4 nM, respectively, and these compounds were able to rescue mutant β-Gal activity in fibroblasts from GM1-gangliosidosis patients [19,20]. Moreover, the “all-cis” trihydroxypiperidines 6 and 7 (Figure 1) were able to increase β-Gal activity in GM1-gangliosidosis patient fibroblasts up to 2-6-fold [21]. A commonly employed strategy for the design of a potential PC is to modify a natural inhibitor of the compromised enzyme with the aim of increasing its efficacy and selectivity. However, since the inhibitory potency of a synthetic analogue does not always correlate with its performance as a PC, a reliable structure-activity relationship (SAR) for PCs is not easy to establish and needs to be demonstrated by experimental evidence [21].

Scheme 1. Compounds synthesized in this work through the functionalization of ketone 8 via: (i) reduction to the alcohol and Williamson reaction to ether 9; (ii) reductive amination with dodecyl amine to access 10; (iii) addition of organolithium derivatives to obtain compounds 11-15.

Synthesis. Ketone 8, the precursor of all the new compounds, was synthesized as reported from aldehyde 16, derived in turn from inexpensive D-mannose in four steps with a high overall yield (85%). The piperidine skeleton of 17 was obtained through a double reductive amination procedure (DRA) [25-27], followed by protection of the endocyclic nitrogen atom with a tert-butyloxycarbonyl (Boc) group. Oxidation of 17 with Dess-Martin periodinane (DMP) gave ketone 8, which was diastereoselectively reduced to the “all-cis” alcohol 18 with NaBH4 in EtOH (Scheme 2) [25].
The "all-cis" ether 19 was obtained by Williamson synthesis following a recently reported procedure on a similar substrate [28]. Treatment of alcohol 18 with sodium hydride in dry DMF, alkylation with 1-nonyl bromide gave ether 19 with a 53% yield. Concomitant deprotection of acetonide and Boc groups under acidic conditions (aqueous HCl in MeOH), followed by treatment with the strongly basic resin Ambersep 900 OH, gave ether 9 with a 70% yield (Scheme 3). In order The "all-cis" ether 19 was obtained by Williamson synthesis following a recently reported procedure on a similar substrate [28]. Treatment of alcohol 18 with sodium hydride in dry DMF, alkylation with 1-nonyl bromide gave ether 19 with a 53% yield. Concomitant deprotection of acetonide and Boc groups under acidic conditions (aqueous HCl in MeOH), followed by treatment with the strongly basic resin Ambersep 900 OH, gave ether 9 with a 70% yield (Scheme 3). In order to investigate the role of different configurations at C-3 on biological activity, the diastereomeric alcohol 17 was treated similarly to give protected ether 20 with a 54% yield. Deprotection of 20 as above yielded ether 21 quantitatively (Scheme 3). to investigate the role of different configurations at C-3 on biological activity, the diastereomeric alcohol 17 was treated similarly to give protected ether 20 with a 54% yield. Deprotection of 20 as above yielded ether 21 quantitatively (Scheme 3). Scheme 3. Synthetic strategies to obtain ethers 9 and 21 and the amine 10. Ketone 8 was also converted to the amine 22 through reductive amination employing dodecyl amine. The reaction was performed in dry MeOH under catalytic hydrogenation on Pd(OH)2/C and gave, after purification by flash column chromatography (FCC), the protected piperidine 22 with a 55% yield. Final deprotection under acidic conditions followed by treatment with Ambersep 900 OH resin gave the free amine 10 with an 83% yield (Scheme 3). The "all-cis" configuration in compound 22 was established by comparison of its 1 H-NMR spectrum with those of similar compounds with a different alkyl chain at the exocyclic nitrogen atom [29]. The broad singlet at 4.41 ppm observed for 4-H in the 1 H-NMR spectrum of 22 is consistent with eq-ax relationships with both 3-H and 5-H occurring in its chair conformations. The synthesis of "all-cis" trihydroxypiperidines alkylated at C-3 by addition of organometallic reagents to the carbonyl group of 8 was then addressed. Ketone 8 was first reacted with Grignard reagents (octylMgBr, ethylMgBr, and methylMgBr, Scheme 4) under conditions which had proved successful for Grignard additions to aldehyde 16 and a nitrone derived thereof [23,30]. The addition of ethyl magnesium bromide to a simpler N-Boc-protected 4-piperidone has been previously reported [31]. However, Grignard additions to ketone 8 proved difficult and sluggish. Variation of reaction conditions (temperature ranging from −78 °C to room temperature, Grignard equivalents ranging from 1 to 1.8, see the Supplementary Materials for further details) did not lead to substantial improvements, and the desired alcohols 23, 24, or 25 were obtained with yields of 12% or less, in complex mixtures difficult to purify. In case of 23 and 24, a single diasteromeric adduct was isolated (as assessed by ESI-MS and 1 H-NMR), while with methyl magnesium bromide a mixture of two diastereoisomers (about 12% yield) was observed in the 1 H-NMR spectrum. The low yields can be ascribed to formation of the reduction product 18. 
The synthesis of “all-cis” trihydroxypiperidines alkylated at C-3 by addition of organometallic reagents to the carbonyl group of 8 was then addressed. Ketone 8 was first reacted with Grignard reagents (octylMgBr, ethylMgBr, and methylMgBr, Scheme 4) under conditions which had proved successful for Grignard additions to aldehyde 16 and a nitrone derived thereof [23,30]. The addition of ethyl magnesium bromide to a simpler N-Boc-protected 4-piperidone has been previously reported [31]. However, Grignard additions to ketone 8 proved difficult and sluggish. Variation of the reaction conditions (temperature ranging from −78 °C to room temperature, Grignard equivalents ranging from 1 to 1.8; see the Supplementary Materials for further details) did not lead to substantial improvements, and the desired alcohols 23, 24, or 25 were obtained with yields of 12% or less, in complex mixtures difficult to purify. In the case of 23 and 24, a single diastereomeric adduct was isolated (as assessed by ESI-MS and 1H-NMR), while with methyl magnesium bromide a mixture of two diastereoisomers (about 12% yield) was observed in the 1H-NMR spectrum. The low yields can be ascribed to formation of the reduction product 18. Reduction of ketones by Grignard reagents through β-hydride addition is a known undesired side reaction that can occur with particularly encumbered substrates, as a consequence of the reduced rate of nucleophilic addition [32,33]. Formation of 18 was attested by analysis of the 1H-NMR and ESI-MS spectra and confirmed, in the case of octyl magnesium bromide addition, after acetylation of the free OH group (see the Supplementary Materials) (Scheme 4).

Given the poor results obtained with alkyl Grignard reagents, we investigated the addition reaction with less bulky sp2 and sp organometallics. The reaction of 8 with vinyl magnesium bromide was less sluggish, and a single diastereoisomeric adduct 26 (see below for structural assignment) was obtained with a 51% yield (Scheme 4). Hydrogenation of 26 in the presence of Pd/C under acidic conditions (aqueous HCl in EtOH) allowed complete deprotection and reduction of the alkene. Final treatment with Ambersep 900 OH and purification by FCC gave the saturated azasugar 13 with a 52% yield (Scheme 4). Encouraged by this result, we turned our attention to the addition of alkynyl organometallic reagents, namely lithium acetylides generated in situ by treating terminal alkynes with butyl lithium. Compared to many C-nucleophiles, acetylides are less basic and less sensitive to steric congestion [34]. The results of the addition of structurally differentiated lithium acetylides to ketone 8 are reported in Table 1. The alkynes were treated with BuLi (1.5 equiv.) at −78 °C for 30 min, followed by the addition of ketone 8 at −78 °C. The reaction mixture was allowed to warm to room temperature and left to react for 2.5-3 h, then worked up. In all reported cases, FCC of the crude mixtures afforded good yields (65-88%) of adducts with both simple and functionalized alkynes (Table 1). A single diastereoisomer was obtained in all cases, which was ascribed the (S) configuration at the newly formed C-3 stereocenter on the basis of the following considerations. Alkynyl lithium derivatives are small nucleophiles, which typically prefer an axial rather than an equatorial attack on cyclohexanones [35-37] in order to avoid torsional strain [38,39]. In Scheme 5, the two more stable chair conformations of ketone 8 are depicted. Only in the 6C3 conformation does nucleophilic axial attack at C-3 experience a stabilizing interaction with the low-lying σ* orbital of the antiperiplanar C-O bond at C-4, according to a favorable Felkin-Anh model [40,41]. Thus, based on stereoelectronic and steric considerations, we assumed that nucleophilic attack occurred selectively at the Re face of ketone 8, giving the “all-cis” 3,4,5-trihydroxypiperidines (Scheme 5).
A careful analysis of the 1H-NMR spectra of derivatives 26-31 and of the products of their transformations supports this structural assignment (see below).

Scheme 5. Stereochemical outcome of the addition of lithium acetylides to ketone 8.

Compounds 27, 28, and 30 were subjected to acidic treatment for the concomitant removal of the acetonide and Boc protecting groups, leading to the "all-cis" trihydroxypiperidines 32, 33, and 34 in good yields (51-71%) after treatment with the strongly basic resin Ambersep 900 OH. The triple bond was subsequently reduced by catalytic hydrogenation in the presence of Pd(OH)2/C in EtOH to give the azasugars 11, 12, and 14 (39-98%) (Scheme 6).
Due to the presence of an additional acid-labile acetal moiety, adduct 29 was expected to give complications arising from possible interactions of the free amine with the aldehyde and was therefore not subjected to acid-induced deprotection. The unsuccessful deprotection of 31 with different acids (CF3COOH, CH3COOH, and HCl) was ascribed to the higher reactivity of its triple bond. Therefore, hydrogenation of the alkyne under neutral conditions (H2, Pd/C in EtOH) was first carried out to give piperidine 35, which was subsequently deprotected with aqueous HCl in MeOH. Final treatment with the strongly basic resin Ambersep 900 OH gave trihydroxypiperidine 15 in 56% yield (Scheme 6).

Configuration Assignment

Relevant chemical shifts and coupling constants of H-4, H-5, and H-6 in the 1H-NMR spectra of selected compounds of the two series of protected (in CDCl3) and deprotected (in CD3OD) trihydroxypiperidines are reported in Tables 2 and 3, respectively.
These values show regularities (the same applies to the H-2 signals), which allowed us to ascribe the same configuration at C-3 to all compounds (see the Supplementary Materials for the full table). Moreover, the shapes of the signals and their coupling constants, where detectable, are consistent with the (S) absolute configuration tentatively assigned above on the basis of mechanistic considerations. Indeed, the signal of H-5 appears as a broad singlet (or as a narrow multiplet), in agreement with its equatorial position in a preferred chair conformation that places the R substituent equatorially, i.e., the 6C3 conformation of the (3S)-configured alcohol in Scheme 5. The lack of large ax-ax coupling constants is confirmed by the signals of H-6. For example, in piperidine 13 (R = ethyl), the two hydrogens at C-6 display vicinal coupling constants J = 3.4 and J = 2.4 Hz, typical of ax-eq and eq-eq relationships, respectively. The same applies to the other derivatives when the signals are well resolved, as in compounds 11, 12, 14, and 15. The observed upfield shift (0.3-0.5 ppm) of H-4 within the two series of compounds on going from the alkynyl to the saturated substituents (see for instance 32 vs. 11), consistent with H-4 falling in the deshielding cone of the triple bond in the former derivatives when in a cis relationship, further supports this assignment.
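The step from coupling constants to ring geometry can be made concrete with a Karplus-type estimate. The Python sketch below uses a generic textbook parameterization of the Karplus relation (the coefficients are illustrative assumptions, not values from this work) to show why ax-ax proton pairs give large vicinal couplings (~10 Hz) while ax-eq and eq-eq pairs give the small couplings (2-4 Hz) observed here.

```python
import math

def karplus_3j(phi_deg, A=7.76, B=-1.10, C=1.40):
    """Vicinal 3J(H,H) in Hz from the H-C-C-H dihedral angle (degrees),
    using a Karplus-type relation J = A*cos^2(phi) + B*cos(phi) + C.
    The coefficients are generic textbook values (an assumption)."""
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

# Idealized chair dihedral angles on a six-membered ring
for label, phi in [("ax-ax", 180.0), ("ax-eq", 60.0), ("eq-eq", 60.0)]:
    print(f"{label}: 3J = {karplus_3j(phi):4.1f} Hz")
```

With these generic coefficients, the ax-ax geometry gives about 10 Hz, while the 60° dihedrals give about 2.8 Hz, in line with the small J values reported for H-6.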
Biological Screening

Compounds 9-15, 21, 32, and 33 were first evaluated as inhibitors of human lysosomal β-Gal at 1 mM in human leukocyte homogenates; the results are shown in Table 4 and compared with previously published data. Unfortunately, none of the tested compounds strongly inhibited β-Gal (only a moderate 22% inhibition was found for the "all-cis" ether 9). These data demonstrate that both the alkylation of the hydroxy or exocyclic amine group and the introduction of a substituent at C-3 of the trihydroxypiperidine skeleton are highly detrimental to β-Gal inhibition. However, screening against a panel of 12 commercial glycosidases (see the Supplementary Materials) showed that trihydroxypiperidine 10 (bearing a dodecyl chain connected at C-3 through a nitrogen atom) was the only compound able to inhibit β-glucosidase from almonds. In particular, compound 10 showed an IC50 of 85 µM towards this enzyme, which prompted us to evaluate compounds 9-15, 21, 32, and 33 also against human lysosomal β-glucosidase (GCase) (Table 4). Point mutations in the gene encoding this enzyme cause Gaucher disease, the most common autosomal recessive LSD [42,43].
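For context, an IC50 such as the 85 µM quoted above is typically extracted by fitting the residual enzyme activity against the inhibitor concentration. A minimal sketch in Python follows; the concentrations and activities are invented for illustration and are not measurements from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def residual_activity(conc_uM, ic50_uM, hill):
    """Fraction of enzyme activity remaining at a given inhibitor
    concentration (logistic model with top = 1 and bottom = 0)."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# Hypothetical dose-response data (fractions of the uninhibited control)
conc = np.array([1.0, 10.0, 30.0, 100.0, 300.0])   # inhibitor, in uM
activity = np.array([0.97, 0.83, 0.63, 0.35, 0.14])

(ic50, hill), _ = curve_fit(residual_activity, conc, activity, p0=(50.0, 1.0))
print(f"fitted IC50 = {ic50:.0f} uM, Hill slope = {hill:.2f}")
```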
Conversely, compound 33, bearing the triple bond and a longer aliphatic substituent, showed 65% inhibition, which increased considerably after hydrogenation to 12 (Table 4, entry 10 vs. 4). This latter compound showed a moderate IC50 of 60 µM, close to that observed for the trihydroxypiperidines bearing an octyl chain at C-2 previously synthesized in our group (37 and 39, Table 4, entries 12 and 14) [23,24]. Overall, these data show that GCase inhibition higher than 90% requires the presence of a long alkyl chain (8, 9, or 12 carbon atoms; compounds 9, 10, 12, and 21). This parallels previous studies, which showed that alkylated imino- and azasugars are strong GCase inhibitors due to the favorable interaction of the alkyl chain with the hydrophobic domain of the enzyme [24,44,45]. However, in terms of IC50 values, a remarkable difference was found between compounds 9 and 21, in agreement with previous observations by Compain and coworkers on differently configured ethers derived from 1,5-dideoxy-1,5-imino-D-xylitol (DIX) [46]. Moreover, good GCase inhibitors can be identified among piperidines with only two free hydroxy groups (e.g., 9, 10, and 21), as previously observed with different azasugars [46].
We then assayed the strongest GCase inhibitors 9, 10, and 12 (IC50 lower than 100 µM) in human fibroblasts derived from Gaucher patients bearing the N370S mutation. None of the compounds gave enzyme rescue when tested at six different concentrations (10 nM, 100 nM, 1 µM, 10 µM, 50 µM, and 100 µM) (see the Supplementary Materials). In particular, compound 10 showed remarkable toxicity at the highest concentrations (50 and 100 µM). Notably, the stronger inhibitory activity of 10 with respect to 36 (see Table 4) does not correspond to a higher chaperoning activity (indeed, compound 36 was able to rescue GCase activity 1.5-fold at 100 µM) [22]. The same behavior is evident when comparing the C-2 alkylated trihydroxypiperidines 38 and 37: the best inhibitor was the dodecyl-alkylated compound 38, but the best chaperone was the octyl derivative 37 [24].
The inefficacy of the dodecyl-alkylated compounds 10 and 38 as PCs can be ascribed to the cytotoxicity imparted by the 12-carbon alkyl chains. These data are also consistent with other reports on amphiphilic N-alkylated iminosugars, which suggest that cytotoxicity is strongly chain-length dependent and that potent inhibitors with chains longer than C8 can be toxic when assayed in cell lines [47,48].

Synthesis of Compound 19

NaH (7 mg, 0.3 mmol, 60% on mineral oil) was added to a solution of 18 [25] (22 mg, 0.08 mmol) in dry DMF (1.2 mL) at 0 °C. The mixture was stirred at room temperature for 30 min, then 1-bromononane (54 µL, 0.28 mmol) was added, and the reaction mixture was stirred at room temperature for 72 h, until the disappearance of the starting material was observed by TLC (CH2Cl2/MeOH/NH4OH (6%) 10:1:0.1). Then, water was slowly added, and the reaction mixture was extracted with AcOEt (3 × 3 mL). The combined organic layers were washed with saturated NaHCO3 and brine, dried over Na2SO4, and concentrated. The crude residue was purified by flash column chromatography on silica gel (hexane/AcOEt 8:1) to give 17 mg of 19 (Rf = 0.3, hexane/AcOEt 8:1, 0.04 mmol, 53%) as a colorless oil.

3.1.2. Synthesis of (3S,4R,5R)-4,5-Dihydroxy-3-(nonyloxy)piperidine (9)

A solution of 19 (15 mg, 0.04 mmol) in MeOH (3 mL) was left stirring with 12 M HCl (60 µL) at room temperature for 18 h. The crude mixture was concentrated to yield 9 as the hydrochloride salt. The corresponding free amine was obtained by dissolving the residue in MeOH (4 mL); the strongly basic resin Ambersep 900 OH was then added, and the mixture was stirred for 45 min. The resin was removed by filtration, and the crude product was purified on silica gel by flash column chromatography (DCM/MeOH/NH4OH (6%) 10:1:0.1) to afford 7 mg of 9 (Rf = 0.2, DCM/MeOH/NH4OH (6%) 10:1:0.1, 0.03 mmol, 70%) as a pale-yellow oil.

Synthesis of Compound 20

NaH (11 mg, 0.46 mmol, 60% on mineral oil) was added to a solution of 17 [25] (57 mg, 0.21 mmol) in dry DMF (3 mL) at 0 °C. The mixture was stirred at room temperature for 30 min, then 1-bromononane (140 µL, 0.73 mmol) was added, and the reaction mixture was stirred at room temperature for 40 h, until the disappearance of the starting material was observed by TLC (CH2Cl2/MeOH/NH4OH (6%) 10:1:0.1). Then, water was slowly added, and the reaction mixture was extracted with AcOEt (3 × 5 mL). The combined organic layers were washed with saturated NaHCO3 and brine, dried over Na2SO4, and concentrated. The crude residue was purified by flash column chromatography on silica gel (hexane/AcOEt 10:1) to give 45 mg of 20 (Rf = 0.4, hexane/AcOEt 7:1, 0.11 mmol, 54%) as a colorless oil.

Synthesis of Compound 21

A solution of 20 (54 mg, 0.14 mmol) in MeOH (6 mL) was left stirring with 12 M HCl (150 µL) at room temperature for 18 h.
The crude mixture was concentrated to yield 21 as the hydrochloride salt. The corresponding free amine was obtained by dissolving the residue in MeOH (5 mL); the strongly basic resin Ambersep 900 OH was then added, and the mixture was stirred for 45 min. The resin was removed by filtration to give 36 mg of 21 (0.14 mmol, 100% yield) as a pale-yellow oil.

Synthesis of Compound 22

Ketone 8 [25] (63 mg, 0.23 mmol) and dodecylamine (65 mg, 0.35 mmol) were dissolved in MeOH (3 mL), and molecular sieves (3 Å pellets; 25 mg) were added. The reaction mixture was stirred at room temperature for 1 h, and then Pd(OH)2/C (30 mg) was added. The mixture was further stirred at room temperature under a hydrogen atmosphere for 51 h. The catalyst and the molecular sieves were removed by filtration, the solids were washed several times with MeOH, and the solvent was evaporated under vacuum. The crude residue was purified by flash column chromatography on silica gel (gradient elution from hexane/AcOEt 5:1 to 2:1) to afford 56 mg of 22 (Rf = 0.4, hexane/AcOEt 2:1, 0.13 mmol, 55%) as a colorless oil.

Synthesis of Compound 26

Vinyl magnesium bromide (288 µL, 0.29 mmol) was added dropwise at 0 °C under a nitrogen atmosphere to a solution of ketone 8 (52 mg, 0.19 mmol) in dry THF (1 mL). The solution was stirred at 0 °C for 5 h, when the disappearance of 8 was attested by TLC (hexane/AcOEt 2:1). A saturated aqueous NH4Cl solution was added at 0 °C, and the mixture was stirred for 10 min. The reaction mixture was extracted with AcOEt (3 × 3 mL). The combined organic layers were washed with water, saturated NaHCO3, and brine, dried over Na2SO4, and concentrated. The crude residue was purified by flash column chromatography on silica gel (gradient elution from hexane/AcOEt 5:1 to 2:1) to give 29 mg of 26 (Rf = 0.2, hexane/AcOEt 5:1, 0.10 mmol, 51%) as a pale-yellow oil.

3.1.8. Synthesis of (3S,4R,5R)-3-Ethyl-3,4,5-trihydroxypiperidine (13)

Compound 26 (26 mg, 0.09 mmol) was dissolved in EtOH (5 mL), and 12 M HCl (150 µL) and Pd/C (13 mg) were added. The reaction mixture was stirred at room temperature under a hydrogen atmosphere for 24 h. The catalyst was removed by filtration through Celite, and the filtrate was concentrated under vacuum to give the hydrochloride salt of 13. The corresponding free amine was obtained by dissolving the residue in MeOH; the strongly basic resin Ambersep 900 OH was then added, and the mixture was stirred for 45 min. The resin was removed by filtration, and the crude product was purified on silica gel by flash column chromatography (gradient elution from DCM/MeOH/NH4OH (6%) 10:1:0.1 to 1:1:0.1) to give 8 mg of 13 (Rf = 0.2, DCM/MeOH/NH4OH (6%) 10:1:0.1, 0.05 mmol, 52%) as a colorless oil.

General Procedure for the Addition of Lithium Acetylides to Ketone 8

To a dry THF solution (0.24 M) of the alkyne (2 eq.), n-BuLi (1.5 eq.) was added dropwise over 5 min at −78 °C under a nitrogen atmosphere. The solution was allowed to warm to 0 °C over 1 h and held at 0 °C for an additional 30 min. The solution was then recooled to −78 °C, and ketone 8 (1 eq.) was added in one portion. The solution was allowed to warm to room temperature and stirred until the disappearance of ketone 8 was attested by TLC (EtP/AcOEt 2:1). A saturated aqueous solution of NH4Cl was added, and the reaction mixture was extracted with AcOEt. The combined organic layers were washed with water, saturated NaHCO3, and brine, dried over Na2SO4, and concentrated. The crude compound was purified by flash column chromatography.
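As an arithmetic aid to the general procedure above, the short Python helper below converts the stated stoichiometry (alkyne 2 eq. and n-BuLi 1.5 eq. with respect to the ketone, 0.24 M alkyne solution in THF) into practical amounts. The 1.6 M n-BuLi stock concentration and the 0.20 mmol scale are assumptions for illustration, not values taken from this work.

```python
def acetylide_addition_amounts(ketone_mmol, alkyne_equiv=2.0,
                               buli_equiv=1.5, alkyne_conc_M=0.24,
                               buli_stock_M=1.6):
    """Amounts for the general lithium-acetylide addition to ketone 8:
    mmol of alkyne, mL of dry THF to reach the stated 0.24 M alkyne
    concentration, and mL of n-BuLi stock (1.6 M is an assumed value).
    Note that molarity in mmol/mL equals mol/L."""
    alkyne_mmol = ketone_mmol * alkyne_equiv
    thf_mL = alkyne_mmol / alkyne_conc_M
    buli_mL = ketone_mmol * buli_equiv / buli_stock_M
    return alkyne_mmol, thf_mL, buli_mL

# Example on an assumed 0.20 mmol scale of ketone 8
alkyne_mmol, thf_mL, buli_mL = acetylide_addition_amounts(0.20)
print(f"alkyne: {alkyne_mmol:.2f} mmol, THF: {thf_mL:.1f} mL, "
      f"n-BuLi: {buli_mL:.2f} mL")
```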
Effects of colloidal nanoSiO2 on the hydration and hardening properties of limestone calcined clay cement (LC3)

This research investigates the influence of colloidal nanosilica (CNS) on the hydration and hardening properties of limestone calcined clay cement (LC3). The sulfation degree of LC3 was first optimized based on the hydration heat; the results suggested that gypsum increases the cumulative heat release during the hydration of LC3, with a dosage of 2% by weight leading to the highest heat release. The effects of CNS on the hydration reaction, fluidity, mechanical properties, and microstructure of LC3 were then investigated. According to the results obtained from isothermal calorimetry and thermogravimetric analysis, CNS can considerably accelerate the reaction rate of the LC3 system. Dosages of 3% and 5% by weight CNS can significantly improve the compressive strength of LC3 blends, especially at the early ages of 3 and 7 days. The findings from this study lead to a better understanding of the modification effects of CNS on LC3 and subsequently provide an insight into the regulation mechanism of CNS on LC3.

Introduction

Portland cement has demonstrated its reliability in construction across a wide range of loading and service environments, with a proven service history spanning about two centuries. Despite its comparatively low cost and low energy consumption compared to other construction materials, the large scale of its production worldwide results in 6-8% of anthropogenic CO2 emissions, which mainly come from the decarbonation of limestone during the clinkering process [1-5]. Consequently, dramatically reducing the CO2 emissions associated with cement production is the most critical and pressing challenge faced by the cement industry [6].

The use of supplementary cementitious materials (SCMs) to partially substitute clinker has huge potential to reduce carbon emissions and the consumption of precious natural resources in the cement industry [7]. So far, over 80% by weight of the SCMs adopted to reduce clinker in cement are ground granulated blast furnace slag (GGBS) and fly ash [8]. However, with the growing demand for cleaner fuels in power plants, the supply of traditional SCMs (i.e., GGBS and fly ash) is expected to decrease dramatically in the near future [9]. Hence, it is crucial to broaden the range of low-carbon SCM resources, such as limestone and calcined clay, for cement production.
Limestone calcined clay cement (LC3) is a new type of cement based on a blend of clinker, limestone, calcined clay, and gypsum [10]. The feasibility of LC3 as an alternative to PC is attributed to the fact that kaolinitic clay and limestone, the constitutive components of LC3, are widely distributed materials in many regions around the world [11]. Clays containing a certain proportion of kaolinite have proven to be highly pozzolanic if calcined between 700 and 850 °C [12]. In LC3, calcined clay reacts as a pozzolanic material with portlandite, water, and sulfate to form C-A-S-H, ettringite, and AFm phases [13,14]. When limestone is added to cement, calcite reacts with the C3A from clinker and the aluminate phase from calcined clay to form hemi- and monocarboaluminate (Hc, Mc) phases [15-17]. The sequence of reactions can be described by Eqs. (1)-(4). This feature enables a greater substitution of clinker, resulting in a more refined and less interwoven microstructure that enhances the mechanical and durability performance of the LC3 system [18].

It is important to note that the optimal sulfate level for LC3 may differ from that of traditional Portland cement. Various studies have noticed that LC3 often necessitates extra gypsum to achieve the highest compressive strength within 24 h [19,20]. Without sulfate adjustment, the aluminate peak may precede the alite peak, leading to a reduced and broader silicate peak. The introduction of additional gypsum shifts the aluminate peak to later stages and recovers the silicate peak.

Despite possessing numerous commendable advantages, LC3 exhibits a lower early strength and slower strength gain than Portland cement [21]. Nano-additives have been incorporated into cement and concrete to attain more robust and long-lasting concrete [22,23]. NanoSiO2 (NS) is the most frequently utilized nanomaterial as an additive for cement and concrete and has been extensively studied [24-27]. It has been widely documented that the inclusion of NS can significantly enhance the properties of cementitious materials [23-27]. Siang et al. [24] reported that 3% by weight NS can improve the compressive strength of mortar by 38% at 28 days; Meng et al. [25] found that NS can increase the value and shorten the occurrence time of the hydration exothermic peak of cement; Shih et al. [26] found that the maximum increase in compressive strength is about 60.6% at an age of 14 days and reduces to 43.8% at an age of 56 days with the addition of 0.6 wt% NS. This can be explained in three respects. Firstly, due to their small particle size and high surface area, NS particles can fill the gaps between cement particles and improve the packing density of the paste. Secondly, the large surface area of NS particles allows for a higher degree of interaction with cement particles. Finally, NS particles can act as nucleation agents for the formation of C-S-H gel, which can lead to a denser microstructure and a reduction in porosity, resulting in improved mechanical properties such as strength and durability.
Currently, there is very limited research regarding the use of NS to regulate LC3. This paper aims to improve the performance of LC3 with NS. Sulfation degree adjustment in the case of undersulfation, hydration acceleration mechanisms, mechanical properties, fluidity, and hydration products were investigated, all of which help provide a better understanding of the modification effects of NS on LC3 systems.

Materials

The blended cement used in this paper was made from Portland cement (PC), quartz (Q), and three types of SCMs, i.e., metakaolin (MK), limestone (LS), and gypsum (Gyp). A Type I Portland cement (Grade 52.5) conforming to BS EN 197-1 was used in this study. The MK was purchased from BASF. The limestone was acquired from Scientific Laboratory Supplies with 98% purity. Gypsum (CaSO4·2H2O) sourced from Sigma-Aldrich (98+ grade) was used for the formulations. To facilitate the homogeneous distribution of NS in the LC3 paste, colloidal nanoSiO2 (CNS), instead of NS powder, was used in the mixtures; it was obtained from Merck Life Science UK Limited as a 50 wt% suspension in water. The sand used for making the mortars was graded river sand with a 4 mm nominal maximum aggregate size. The quartz was made by grinding sand in a ball mill. It was used as an inert material together with metakaolin to simulate a calcined clay with 50% calcined kaolinite. A polycarboxylate ether (PCE) based superplasticizer was used to achieve the same fluidity for all mortars. The chemical (by XRF) and phase (by XRD-Rietveld) compositions of the materials used in this study are given in Table 1. The particle size distribution by laser diffraction, obtained after dispersing the powders in isopropanol, is presented in Fig. 1, which shows that the particle size of quartz is finer than that of metakaolin. The size difference between quartz and metakaolin may affect the contact and interaction between these materials, potentially reducing the overall pozzolanic activity and hindering the formation of additional C-S-H gel.

Mixtures and preparation of samples

In this paper, LC3 blends were made of 55% PC, 15% MK, 15% quartz, and 15% limestone, all by weight. The 15% MK and 15% quartz together were used to simulate 30% calcined clay with a 50% calcined kaolinite content. The dosage of gypsum was optimized in the LC3 system, as described in Section 3.1. For all blended pastes, CNS was added to the mixture as an additive but at different dosages, as specified in Table 2. Samples were prepared using a water-to-binder ratio of 0.5. Each sample was mixed for 2 min at 1600 rpm and poured into the molds. The molds were put in the curing chamber and demolded after one day. After the specified days of standard curing (relative humidity (RH) > 95%, 20 °C), the hydration of the pastes was stopped. To do so, all samples were immersed in pure isopropanol for 3 h, moved to fresh isopropanol for another 24 h, and then dried in a vacuum oven at 40 °C for 3 days. The dried hardened pastes were then stored in a vacuum chamber, ready for the thermogravimetric analysis (TGA), X-ray diffraction (XRD), and scanning electron microscopy (SEM) analyses.
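To make the mix design concrete, the Python sketch below converts the proportions just described (55/15/15/15 binder split, water-to-binder ratio 0.5, gypsum and CNS dosed as percentages of binder weight) into batch masses. How the water carried by the 50 wt% CNS suspension was accounted for is not stated in the text, so subtracting it from the mix water here is an assumption, as are the example dosages.

```python
def lc3_paste_batch(binder_g=1000.0, wb=0.5, gypsum_pct=2.0, cns_pct=3.0):
    """Batch masses (g) for an LC3 paste: 55% PC, 15% MK, 15% quartz,
    15% limestone by weight of binder; gypsum and CNS solids dosed as
    extra percentages of binder. The CNS is a 50 wt% aqueous suspension,
    so twice the solids mass is weighed out and half of it is counted
    toward the mix water (an assumption)."""
    masses = {
        "PC": 0.55 * binder_g,
        "MK": 0.15 * binder_g,
        "quartz": 0.15 * binder_g,
        "limestone": 0.15 * binder_g,
        "gypsum": gypsum_pct / 100.0 * binder_g,
        "CNS suspension": cns_pct / 100.0 * binder_g * 2.0,
    }
    water_from_cns = masses["CNS suspension"] / 2.0
    masses["added water"] = wb * binder_g - water_from_cns
    return masses

for component, grams in lc3_paste_batch().items():
    print(f"{component:>14}: {grams:7.1f} g")
```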
Hydration heat

The heat release during hydration was measured in an isothermal calorimeter (TAM Air) for 72 h. A 10 g sample was poured into a glass ampoule, which was then sealed and put in the calorimeter.

Fluidity and compressive strength

For the fluidity test, mortars were made with a binder-to-sand weight ratio of 1:3; the mix proportions of the binders are specified in Table 3. The fluidity of the samples was measured in accordance with BS EN 12390-16:2019. For mortar strength, superplasticizer was added to the mortars to obtain similar fluidity. In accordance with BS EN 12390-16:2019, prismatic mortar samples of 4 cm × 4 cm × 16 cm were made, and their compressive strengths were measured at 3, 7, and 28 days.

XRD

XRD tests (Bruker D8 Advance diffractometer) were carried out on freshly cut slices of hardened paste at a voltage of 40 kV and a current of 25 mA to assess the phase assemblage of the samples. The samples were crushed to a fine powder and pressed into a sample holder. The scanning program consisted of rotation between 5° and 70° 2θ with a step size of

SEM

Microscopic examinations of the hardened pastes were performed using an ultra-high-resolution scanning electron microscope (Supra 35VP, Carl Zeiss, Germany). Prior to the microscopic observations, the surfaces of the samples were coated with gold using an Edwards S150B sputter coater.

TGA

TGA of the pastes was performed using a simultaneous DSC/DTA/TGA system (C-Therm Technologies, New Brunswick, Canada). The heating rate was set at 10 °C/min, and the temperature range was between 30 and 950 °C.

Sulfation degree adjustment

The main purpose of adding gypsum to the LC3 system is to achieve the desired reactivity balance between the silicate and aluminate phases. The mix compositions for the sulfate adjustment of the LC3 blends are shown in Table 3. The rate of heat evolution showing the effect of gypsum on LC3 is presented in Fig. 2. It can be seen that the silicate and aluminate peaks are separated by gypsum, and that the aluminate peaks are depressed and delayed with increasing gypsum dosage. Gypsum also increases the cumulative heat release during cement hydration, and a dosage of 2% by weight results in the highest heat release.

The onset of the acceleration stage was independent of the gypsum content. The intensity of the silicate reaction peak (I) decreased slightly with increasing sulfate content, but this could be attributed to the separation of the overlapping aluminate peak (labeled II). Peak II, which is associated with the depletion of sulfate [28], showed the expected strong dependence on the sulfate content. It was retarded with increasing sulfate content, reaching its maximum after 20, 30, and 40 h at gypsum contents of 2%, 3%, and 4% by weight, respectively. During the initial stages of hydration, the high concentration of sulfate in the pore solution is stabilized by the presence of calcium sulfates, leading to sulfate adsorption on the C-S-H [29]. Once the gypsum is depleted, the sulfate concentration in the pore solution drops, with sulfates desorbing from the C-S-H. The continuing desorption of sulfate delays the sulfate depletion/aluminate reaction peak (II) [30].
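The cumulative heat curves referred to above are obtained by integrating the measured heat flow over time. A minimal sketch in Python follows; the time/heat-flow record is invented for illustration and is far coarser than a real calorimetry log.

```python
import numpy as np

def cumulative_heat(time_h, heat_flow_mW_per_g):
    """Integrate isothermal-calorimetry heat flow (mW per g of cement)
    over time (h) to obtain the cumulative heat (J per g of cement),
    as plotted in Fig. 2(b)."""
    time_s = np.asarray(time_h) * 3600.0
    power_W = np.asarray(heat_flow_mW_per_g) / 1000.0  # mW -> W (J/s)
    return np.trapz(power_W, time_s)

# Hypothetical coarse record over 72 h
t = [0, 1, 6, 12, 24, 48, 72]
p = [0.0, 2.5, 3.8, 2.2, 1.1, 0.5, 0.3]
print(f"cumulative heat ~ {cumulative_heat(t, p):.0f} J/g")
```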
Kinetics of hydration

The rate of heat evolution showing the effect of CNS on LC3 is presented in Fig. 3. The incorporation of 1% and 3% by weight CNS advanced and intensified the silicate hydration peak (the first peak). With the incorporation of a small amount of CNS, the degree of hydration of the cement is promoted, making the cement matrix produce more C-S-H gel, which can improve the pore structure of the hydrating cement. At the same time, crystal nuclei are formed during hydration, so that C-S-H can grow directly on their surfaces rather than being adsorbed on the surfaces of unhydrated cement particles, thus affecting the rate and degree of hydration. However, the silicate peak was depressed by 5% by weight CNS. The depression of the silicate reaction with the incorporation of a large amount of CNS may be due to a phenomenon called the "particle packing effect" [31]. CNS particles have a very high specific surface area and tend to agglomerate, which can increase the viscosity of the cementitious system and hinder the movement of water molecules. As a result, the effective water-cement ratio may be reduced, decreasing the availability of water for cement hydration. In addition, the high surface area of CNS particles can lead to the formation of a thick gel-like layer around the cement particles, which can limit the diffusion of ions and delay the hydration process [32].

It can also be seen that the aluminate peaks (the second peak) were depressed by the incorporation of CNS. The depression of the aluminate reaction is possibly due to the consumption of aluminum ions in the formation of C-S-H gel [33]. CNS particles can react with the calcium hydroxide produced during cement hydration to form additional C-S-H gel, which consumes calcium and aluminum ions [34]. As a result, fewer aluminum ions are available for the formation of aluminate hydrate phases such as C3AH6 (tricalcium aluminate hydrate), leading to a depression of the aluminate peak [35]. Additionally, the presence of CNS can modify the microstructure of the cementitious system, changing the distribution and availability of aluminum ions, which may also contribute to the depression of the aluminate peak [36].

Workability and compressive strength

The fluidity of fresh LC3 mortars with different ratios of CNS is shown in Table 4. With the incorporation of CNS, the fluidity of the LC3 mixture decreases greatly. The addition of CNS to LC3 reduces its workability due to the high surface area and small particle size of the nanosilica. Nanosilica particles tend to agglomerate and form clusters, which can reduce the mobility of the cement particles, resulting in a decrease in workability [37]. Additionally, the surface charge of the nanoparticles can interact with that of the cement particles, resulting in the formation of electrostatic bonds. These bonds can decrease the dispersion of the cement particles and therefore make the cement more viscous, further reducing its workability [38]. Furthermore, the addition of CNS can cause the cement to set more rapidly, leading to a shorter working time and making it difficult to place and finish. This is because the high surface area and small particle size of nanosilica can accelerate the hydration of the cement, leading to an earlier onset of setting [39].

The compressive strength of the LC3 composite with four dosages of CNS at four ages is presented in Fig. 4.
Furthermore, Table 5 shows the increase in compressive strength observed for LC3 blends with various dosages of CNS compared to the control group. Overall, the compressive strength at the four ages exhibited a positive correlation with the amount of CNS, especially at early ages. Notably, the incorporation of 5% by weight CNS yielded the most substantial increase in compressive strength across all ages, with a rise of 31% at 3 days and 26% at 7 days. Nevertheless, at later ages the rate of increase decreases. The enhanced compressive strength of hardened LC3 resulting from the addition of CNS can be attributed to at least two plausible mechanisms. Firstly, the small CNS particles can penetrate the pore structure of the material and fill the voids, thereby reducing porosity and increasing the density and strength of the material. Secondly, the CNS particles can react with calcium hydroxide, producing additional calcium silicate hydrate (C-S-H) gel. This reaction leads to a denser matrix and an increase in strength. In addition, CNS reacts rapidly with calcium hydroxide to form calcium silicate hydrate in an alkaline environment such as the pore solution of LC3 paste. Thus, the contribution of the added CNS to the strength increase of the hardened LC3 blend becomes apparent in the early days of hydration.

Regarding the impact of CNS on the mechanical properties of cement, Hou et al. [40] reported that a 5% by weight addition of CNS can enhance the compressive strength of CEM I mortar by 45% at 7 days and 12% at 28 days. The influence of CNS on the strength development of LC3 is thus less pronounced than that of CEM I at early ages (7 days) but becomes more significant at later ages (28 days). This difference can be attributed to the increased involvement of cement hydration due to the nucleation effect of the nanoparticles during the early stages. However, as time progresses, a higher degree of metakaolin participates in the hydration process.

Hydration products

XRD patterns at 1, 3, and 28 days of hydration for LC3 blends with various dosages of CNS are shown in Fig. 5. After 1 day of hydration, as shown in Fig. 5(a), the ettringite and Mc peaks were observed at different diffraction angles. It can be seen that the intensity of the portlandite (CH) peaks decreased with increasing CNS. Mc has a layered structure in which positively charged calcium-aluminum hydroxide layers are balanced by negatively charged carbonate ions [41]. It contributes to the long-term strength of cement and can also help to improve its durability. The consumption of CH can be attributed to the pozzolanic activity of CNS, which consumed part of the portlandite.

At 3 days, as hydration progressed, Hc was observed. It contributes to the early strength of cement and can also help to stabilize the pH of the cement paste. Numerous studies have shown that carboaluminate hydrates can increase strength by filling the pore spaces [42,43].

At 28 days, portlandite was almost depleted as a result of the progression of hydration. The amounts of hydration products such as ettringite, Hc, and Mc were basically at the same level in all pastes, despite the different dosages of CNS. The degree of hydration cannot be deduced from the change in amorphous phase content, but the variation of portlandite is a good indicator of it. The portlandite content in the LC3 pastes with CNS was lower than in the control group, confirming that the calcined clay consumed CH to generate additional amorphous C-A-S-H gel.
Microstructure morphology

After curing for 28 days, the LC3-0CNS and LC3-3CNS samples, containing 0% and 3% by weight colloidal nanosilica, respectively, were observed by SEM, as depicted in Fig. 6. The SEM images revealed that the LC3-0CNS sample had a relatively loose structure with many pores, whereas the microstructure of LC3-3CNS was denser and contained fewer pores. These observations suggest that the filler effect of nanosilica and its pozzolanic activity contribute to increasing the density of the matrix. NS addition increased the quantity of C-(A)-S-H [44] in the cement paste and made the microstructure more compact [45,46]. Consequently, the addition of nanosilica can increase the compressive strength of LC3 blend pastes, which is supported by the compressive strength results showing the enhancing effect of CNS on LC3 blends.

Thermogravimetric analysis

3.6.1. Calcium hydroxide (CH) content

The weight loss observed between 450 °C and 550 °C was attributed to the decomposition of CH. The CH content determined by TGA is shown in Fig. 7 for LC3 with varying ratios of CNS at both 3 and 28 days. At 3 days, the CH content decreased in the samples with the addition of CNS, especially with 3% and 5% by weight CNS. At 28 days, there was no large difference in CH content between the control group and the CNS-added groups. In addition, the CH content decreased greatly with the progress of hydration.

NS, a pozzolanic material, can react with calcium hydroxide to form additional C-(A)-S-H, thereby reducing the amount of CH in the system. This reaction consumes calcium hydroxide and produces more stable calcium silicate hydrates (C-S-H), which can improve the strength and durability of the cementitious material. Besides, the pozzolanic reaction of metakaolin results in the formation of more C-(A)-S-H gel, which consumes CH during its formation, leading to a decrease in the overall CH content of the final product. Furthermore, high concentrations of NS can act as nucleation sites for the precipitation of CH [47], resulting in an increase in its content.

Bound water (BW) content

The weight loss observed between 40 °C and 550 °C was attributed to the decomposition of BW. Monitoring the change in the amount of BW is one of the most commonly used methods for following cement hydration. Comparing the 3-day and 28-day BW weight losses in Fig. 8, it can be seen that the BW content increased with the progression of hydration. Additionally, there was not much difference in the BW content of samples with different dosages of CNS at 3 days. However, at 28 days the BW content increased as the amount of CNS increased. NS particles, due to their pozzolanic activity, react with the CH produced during cement hydration to form additional C-S-H gel. This reaction consumes water and contributes to the increased formation of hydration products, including BW. The additional C-S-H gel formed with the incorporation of CNS requires more water for its formation, resulting in an overall increase in the amount of BW in the LC3 paste [40].
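The CH and BW quantities discussed above follow from the TGA mass losses by simple stoichiometry. The Python sketch below uses a stepwise evaluation of the losses and the molar masses of Ca(OH)2 and water; the paper does not state its exact evaluation method (tangential vs. stepwise) or reference mass, so these choices and the sample masses are illustrative assumptions.

```python
def ch_content_pct(m450_mg, m550_mg, m_ref_mg):
    """Portlandite (CH) content as wt% of paste, from the TGA mass loss
    between 450 and 550 degC: Ca(OH)2 -> CaO + H2O, so each 18.02 g of
    water lost corresponds to 74.09 g of CH."""
    return 100.0 * (m450_mg - m550_mg) * (74.09 / 18.02) / m_ref_mg

def bw_content_pct(m40_mg, m550_mg, m_ref_mg):
    """Bound water as wt% of paste, from the mass loss between
    40 and 550 degC, as defined in the text."""
    return 100.0 * (m40_mg - m550_mg) / m_ref_mg

# Hypothetical TGA masses (mg) for a 20 mg paste sample
print(f"CH: {ch_content_pct(18.90, 18.65, 20.0):.1f} wt%")
print(f"BW: {bw_content_pct(20.00, 18.65, 20.0):.1f} wt%")
```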
Conclusions

LC3, containing only around 50% clinker, is a promising new type of low-carbon, energy-saving cement; however, its lower early strength has been a major factor limiting its wider application in construction. In this study, CNS was added to the LC3 system with the aim of promoting the development of its mechanical properties. Based on the results and analysis presented in this paper, the following conclusions can be drawn:

(1) Gypsum adjustment and reaction control: Gypsum is added to adjust the sulfation degree of LC3, preventing flash setting by controlling the aluminate reaction. The optimal dosage of gypsum was found to be 2% by weight, which balances the silicate and aluminate reactions in the LC3 system.

(2) Influence of CNS addition: Isothermal calorimetry results reveal that the addition of 1% and 3% by weight CNS advances and intensifies the silicate peak, while the addition of 5% by weight CNS depresses it. The aluminate peaks are also depressed due to the consumption of aluminum ions during the formation of C-S-H gel. Incorporating CNS decreases the workability of the LC3 blend due to its high surface area and small particle size.

(3) Compressive strength and hydration products: The addition of 5% by weight CNS significantly increases the compressive strength of the LC3 system, particularly at early ages, with a 31% rise at 3 days and a 26% rise at 7 days. XRD analysis shows that CNS accelerates the formation of hydration products, increasing the amount of Mc and decreasing the amount of portlandite. SEM images reveal that the microstructure of the LC3 paste becomes denser with the addition of CNS. TGA results confirm a decrease in CH (calcium hydroxide) content with CNS incorporation, which aligns well with the XRD findings.

This study presents the effects of CNS on the hydration and hardened properties of the LC3 system. In future research, it will be essential to evaluate the impact of CNS on the degree of hydration of LC3, considering both the clinker and calcined clay components. Additionally, a comprehensive examination of the influence of CNS on the hardening and hardened properties of LC3 with varying qualities of calcined clay should be conducted.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 2. (a) Normalized heat release per gram of cement and (b) cumulative heat release per gram of cement for LC3 with different dosages of gypsum.

Fig. 3. (a) Normalized heat release per gram of cement and (b) cumulative heat release per gram of cement for LC3 with different ratios of CNS.

Fig. 7. Effect of CNS on the CH content of LC3 paste.

Fig. 8. Effect of CNS on the BW content of LC3 paste.

Table 4. Fluidity of mortars with different dosages of CNS.

Table 5. Increase in compressive strength of samples with various dosages of CNS compared to the control group.
Experiments on the Porch Swing Bearing of a Michelson Interferometer for Low-Resolution FTIR

The porch swing bearing for the linear motion of the mirror in a Michelson interferometer for a mid-infrared, low-resolution Fourier transform spectrometer was studied experimentally using the modulation depth of a collimated laser beam. The mirror tilting was measured to be lower than 5 µrad over 3 mm of mirror travel using two different bearing assemblies. Additionally, the manufacturing tolerances of this bearing type were shown to be loose enough not to limit the interferometer application. These results demonstrate that the porch swing, without any adjustment mechanisms, provides sufficient motion linearity.

Introduction

Inexpensive and robust portable Fourier transform infrared (FTIR) spectrometers for gas analysers are still needed, because the existing solutions have weaknesses such as cost, bulky size, or sensitivity to temperature variations. The most sensitive component in an FTIR spectrometer is the interferometer. We have aimed to design such an interferometer with a 25 mm beam diameter and 2 cm−1 resolution in the mid-infrared region. Numerous interferometer designs can be found, for example, in the papers by Jackson [1] and Kauppinen et al. [2] and in the book by Griffiths and de Haseth [3]. We have selected the Michelson interferometer with plane mirrors, because it has only a few, even inexpensive, optical components. Using the Michelson interferometer, it is possible to minimize the size and cost of the interferometer structure while keeping the beam diameter and the throughput constant. Additionally, we do not use any dynamic alignment of the optics, which is commonly applied in most solutions.

One of the key parameters in interferometer design is the modulation depth of the interferogram. In a Michelson interferometer built with plane mirrors, the modulation depth is in practice mainly affected by the tilting of the optics, which causes an angle between the output beams of the interferometer. Shearing, caused by a lateral shift between the output beams, is the second major phenomenon determining the modulation depth. With the plane-mirror setup, the shearing is usually negligible because the plane mirrors do not shift the beams with respect to each other. If cube corners were used as the end mirrors, the output beams would not tilt, but they could be shifted with respect to each other, which would decrease the modulation depth, especially when the size of the radiation source is finite. Maximizing the modulation depth sets the requirements for the movement of the mirror. The movable mirror has to remain parallel to the image of the second mirror formed by the beam splitter with extreme accuracy over the desired mirror travel. This linear motion of the mirror is achieved by a suitable bearing system, which is reviewed quite thoroughly by Jackson [1]. The air bearings, both compressed air and magnetic suspensions [4,5], provide very smooth movement and low friction but are usually bulky and expensive. Various sliding bearings, such as syringes or rail-and-guide-bar systems, have many advantages, including low cost, but in general they are quite sensitive to misalignment and can require special materials or extreme cleanliness in manufacturing [1].
The porch swing bearings, presented in Figure 1, have none of these disadvantages. They have a quite simple structure which can be made rather small. The porch swing is a four-bar parallelogram linkage where the pivots are usually flexure pivots, flat springs, or other flexure elements. They provide virtually frictionless driving and need no lubrication. Additionally, they show no wear if they are loaded correctly.

According to Griffiths and de Haseth [3], the porch swing bearing was first used in an interferometer by Walker and Rex [6] in the 1970s. However, Jones [7] described the porch swing mechanism for linear motion as "well known" already in 1951. Walker and Rex used flexure pivots, as in Figure 1(c), in their bearing design. This kind of interferometer has been operated successfully in harsh environments such as helicopters by Small et al. [8], in an air balloon by Huppi et al. [9], and during a rocket flight by Kemp and Huppi [10]. According to Hanel et al. [11], a kind of parallel-spring-suspended moving shaft was used in the interferometer on the Nimbus III space flight in 1969. Wishnow et al. [12] have described an interferometer with porch swing driving for a visible-band imaging spectrometer used in a telescope. Many patents, such as [13-17], are also related to the porch swing design. Porch swing driving is also used in a few commercial products, but these typically use a dynamic alignment system.

Because the linearity of the motion is essential, we have studied experimentally the tilting of our interferometer design, which has a porch swing as in Figure 2. We will demonstrate a nearly perfect porch swing, where the modulation depth shows almost no decrease over the 3 mm driving path. Additionally, we obtained quite good measurement results with a porch swing in which all the parts of the bearing have normally achievable machining tolerances. The tilting was estimated using a helium-neon laser beam and its modulation depth.

In addition to the tilting, there are many other phenomena, such as thermal expansion and stability under temperature changes, the force constant and the resonance frequencies of the mechanism, or the aging of the mechanics, which have to be considered in the design. The lifetime of the flexure elements, typically made of steel, is normally very long if they are loaded correctly. However, these are outside the scope of this article, and we concentrate on the driving stability.

Experiments on Motion of the Porch Swing Bearing

2.1. Requirements for the Linearity

The maximum allowed decrease of the modulation depth sets the limits for the tilt angle which has to be maintained during a single scan and from scan to scan, since the signal strength is proportional to the modulation depth. The modulation depth of the circular output beam of the interferometer can be expressed as

m = |2J1(2πναD)/(2πναD)|, (1)

where D is the beam diameter, α is the tilt angle, ν is the wavenumber (the reciprocal of the wavelength, ν = 1/λ), and J1 is the Bessel function of the first kind of order one; a uniform intensity distribution over the cross-section of the collimated infrared beam is assumed [2,18]. For small tilts, the decrease of the modulation depth is roughly proportional to the square of the tilt angle, which makes the interferometer very sensitive to changes in the tilt angle. To keep the modulation depth above 0.95, the tilt angle should be less than about 14 µrad with a beam diameter of 25 mm and a wavenumber of 3000 cm−1. This wavenumber is one common convention for reporting the modulation depth, and so we use it here as well.
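Equation (1) is easy to evaluate numerically. The Python sketch below reproduces the design point quoted above (m ≈ 0.95 for α = 14 µrad, D = 25 mm, ν = 3000 cm−1); the function names are ours, not from the paper.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def modulation_depth(nu_per_cm, alpha_rad, beam_diam_m):
    """Modulation depth of Eq. (1): m = |2 J1(x)/x| with
    x = 2*pi*nu*alpha*D (nu converted from cm^-1 to m^-1)."""
    nu_per_m = nu_per_cm * 100.0
    x = 2.0 * np.pi * nu_per_m * alpha_rad * beam_diam_m
    return 1.0 if x == 0.0 else float(np.abs(2.0 * j1(x) / x))

print(f"m = {modulation_depth(3000.0, 14e-6, 0.025):.3f}")   # ~0.95
```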
2.2. Stiffening Clamps in the Flexures Reduce Parasitic Motions. If the porch swing bearing is perfect, the arms stay parallel to each other during driving. The movable arm actually follows a circular trajectory, but if the travel is short, the motion is nearly linear. Imperfections of the porch swing structure cause unwanted rotations of the movable arm and thus tilting of the mirror during the travel. These undesired rotations are also called parasitic motions. When only plane mirrors are used in the interferometer, the modulation depth is in practice affected only by tilting about the two axes perpendicular to the direction of travel, that is, by the pitch and yaw rotations. The roll rotation about the travel axis and lateral shifts of the arm have either no or negligible effect on the modulation depth. Additionally, the arm always shifts in the vertical direction because of the circular trajectory, but with small travel this shift is negligible.

Length differences of the arms Δl or of the flexure elements Δh cause a pitch rotation of the arm, as illustrated in Figure 3. A nonparallelism angle α between the neutral axes of the flexures, or between the principal axes of inertia of the flexures, produces a yaw rotation. Tilting also results from any deviation of the driving force F from its ideal position, which lies along a line parallel to the direction of travel and running through the center of the cross-section of the porch swing. These errors are discussed in more detail in the appendix and in the literature [19-24]. In addition to the above errors, many other possible sources of parasitic motions are mentioned in the literature. These include, among others, friction at the contact point of the driving force, asymmetric mass distribution of the arm, gravity orientation, and variations in the spring material. In summary, the porch swing is most sensitive to unequal arm lengths Δl and to the nonparallelism angle α between the principal axes of inertia.

When flat springs are used as the flexure elements, the parasitic rotations can be substantially decreased by stiffening clamps in the middle of the springs, as in Figures 1(b) and 2 [7,16,20,25]. The clamps seem to make the flexure elements more similar to each other and to increase the lateral stiffness, which decreases the sensitivity to tilting.

We demonstrated the effect of the stiffening clamps by measuring the tilt angle as a function of the mirror displacement using a porch swing bearing constructed of two l = 50.0 mm long arms and four steel springs with an h = 50.0 mm long flexible part. The springs were 12.7 mm wide and 0.30 mm thick. They were mounted in pairs at the ends of the arms, with a distance of 2.7 mm between them. The configuration of the spring elements was as in Figure 2.
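As a quick plausibility check on the statement that the arc-induced shift is negligible, the following back-of-the-envelope sketch (ours, assuming flexures with an effective swing length of h = 50 mm) computes the vertical drop of the arm from simple circular geometry; since plane mirrors are insensitive to a pure lateral shift, this drop does not affect the modulation depth:

```python
import math

def arc_drop(x0, h=0.050):
    """Vertical shift of a parallelogram arm swinging on links of length h
    after a horizontal displacement x0: drop = h*(1 - cos(asin(x0/h))),
    i.e. approximately x0**2 / (2*h) for small swing angles."""
    phi = math.asin(x0 / h)        # swing angle of the links [rad]
    return h * (1.0 - math.cos(phi))

print(arc_drop(0.002) * 1e6)  # ~40 um drop over a 2 mm travel
```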
The stiffening clamps were two pairs of aluminum plates with a length of 25 mm, which was half of the spring length h. They were mounted halfway along the springs, as in Figure 1(b). The lower arm of the porch swing was rigidly mounted on the table, and the upper arm could be pushed using a rod with fine threads. The light source was a red Helium-Neon laser whose beam was collimated to a plane wave. The coherence length of the laser light was roughly 20 cm, so it was not necessary to scan the interferometer exactly around the zero of the optical path difference. Instead, the zero of the mirror displacement, x₀ = 0, was taken as the equilibrium position of the spring elements. The optical setup is depicted in Figure 4. The end mirror M1 of the Michelson-type interferometer was mounted on top of the movable arm and the other optics on the table. The interference fringes were magnified by a diverging lens onto a screen with a millimeter scale attached to it. We measured the distances between adjacent fringes on the screen, as illustrated in Figure 5. The magnification of the lens was taken into account by scaling the distance values. The experiment was repeated with and without the stiffening clamps. The tilt angle can be calculated from the fringe count as

θ = N λ / (2 d), (2)

where N is the number of fringes across the beam with diameter d and λ is the wavelength of the beam. If the fringe distances s₁ and s₂ are measured along two orthogonal directions, the actual fringe distance is

s = s₁ s₂ / √(s₁² + s₂²). (3)

The results for the tilt angles are presented in Figure 6. Without the stiffening clamps the tilt angle was about 90 μrad after 4 mm of translation, but with the clamps the tilt always stayed well below 20 μrad. Thus, the results support the proposition that the clamps increase the stability of the moving arm.

2.3. Manufacturing Tolerances. We estimated the maximum allowed manufacturing tolerances of porch swing bearings with the typical dimensions l/h = 50/50, l/h = 100/50, l/h = 150/50, and l/h = 50/100 (in millimeters). The width of the bearings was w = 50 mm. As the limit for the errors, we used the conditions given in Section 2.1, namely M ≥ 0.95 and θ ≤ 14 μrad, which ensure that the bearing is suitable for an FTIR interferometer.

The porch swing structure is most sensitive to the difference in the arm lengths Δl and to the angle between the ends of the arms. Even the greatest allowed arm length difference of the worst case corresponds to a positioning accuracy that is well achievable in practice. According to (A.5) and (1), this causes a 0.015 decline in the modulation depth. However, the more common arm length in FTIR is about 150 mm, which leads to a decline of only about 0.002. Exerting the driving force on the movable arm is sometimes more practical than at the ideal position. According to (A.7), a force position of h + 6 mm produced a 14 μrad tilt angle, which decreased the modulation depth to about 0.94 in the worst case, when the arms were 50 mm long. In the other cases, the tilt was below 8 μrad and the modulation at least 0.99. Although these tolerances were very loose, even the resulting worst-case modulation depth values were acceptable.

Hatheway [21] has estimated tilt angles of over 300 μrad using worst-case tolerances. These are much greater than our estimates above, mainly because of significantly larger tolerance values and a longer, 5 mm stroke. Incidentally, over 80% of these angle values came from the arm length error and the nonparallelism of the arm ends.
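The fringe-counting relations (2) and (3) are straightforward to apply numerically. The sketch below is our illustration; the 10 mm test-beam diameter is an assumed value, since the beam diameter of the He-Ne setup is not stated here:

```python
import math

WAVELENGTH = 632.8e-9  # He-Ne laser [m]

def tilt_from_fringes(n_fringes, beam_diameter):
    """Eq. (2): theta = N * lambda / (2 * d)."""
    return n_fringes * WAVELENGTH / (2 * beam_diameter)

def fringe_distance(s1, s2):
    """Eq. (3): true fringe spacing from distances measured along two
    orthogonal directions on the screen."""
    return s1 * s2 / math.hypot(s1, s2)

# e.g. three fringes across an assumed 10 mm beam:
print(tilt_from_fringes(3, 0.010) * 1e6)  # ~95 urad, the order measured
                                          # without the stiffening clamps
```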
Clearly, the most essential properties of the porch swing are the length difference of the arms and the nonparallelism of the arm ends. This has already been noted by many authors, for example, by Walker and Rex [6] and Strait [14], who designed adjustment mechanisms to minimize these errors. However, the previous calculations show that adjustment mechanisms are not necessarily needed, which simplifies the construction and may reduce manufacturing costs.

2.4. Measured Modulation Depth with Porch Swing Bearings. We have studied a few porch swings by measuring the modulation depth as a function of the mirror position. In the following, we present two examples. In the first example, the porch swing had the dimensions (in millimeters) l = 133.4, h = 50.0, and w = 46.5. Two flat springs were mounted at each end of the porch swing. We used 30 mm stiffening clamps mounted halfway along the flat springs. The porch swing was part of a Michelson interferometer in a setup very similar to that in Section 2.2, but the projection of the fringe pattern was not enlarged. The distances between adjacent fringes were determined by using (2) and (3) and counting the number of fringes across the beam in the vertical and horizontal directions at a mirror travel of 2 mm from the equilibrium position of the spring elements. After the first assembly, the porch swing caused far too much tilting to be used in an interferometer, so some adjustment was required. First, we removed the yaw tilt by adding thin spacers under one flexure end, which effectively lengthened the other side of the arm and thus changed the angle between the arm ends. The spacer thickness that eliminated the yaw was about 0.16 mm. The flexures were then probably very close to parallel. The other measured spacer thicknesses and the corresponding values of α were shifted so that α = 0 corresponds to the point where the yaw tilt was about zero. The measured pitch and yaw tilt angles and the yaw angles calculated from (A.6) are presented in Figure 7. The measured yaw angles were in quite good agreement with (A.6). However, the pitch did not seem to be fully independent of the angle α, although the opposite could be expected. This is probably due to some uncertainty in the measurement. For example, only the upper arm ends were adjusted, so some errors might have remained in the lower arm dimensions.

Next, the pitch tilt was removed by adding more spacers under both springs at the same end of the upper arm, while keeping the difference in thickness between the two spacer stacks at about 0.16 mm to maintain nearly zero yaw. The pitch and the yaw were eliminated almost completely when the thicknesses of the spacer combinations were about 0.14 mm and 0.30 mm. This corresponds to the length difference Δl = 0.14 mm. The pitch was also calculated using (A.1a) and (A.1b) and is plotted together with the measured values in Figure 8. The measured length differences are shifted so that they are zero when the pitch is about zero. In addition to eliminating the tilt, we thus obtained some support for the equations of the pitch. According to Figure 8, the measured pitch increased at a slightly smaller rate than the calculated pitch. However, there was still some yaw tilt at almost every measurement point, which, among the other measurement uncertainties, may have affected the results.
The slight tilting that remained after the above fine tuning was eliminated by replacing the 30 mm clamps with 44 mm clamps and by careful reassembly and some minor changes to the spacers. Thus, the flexible parts shortened from 10 mm to 3 mm. The changes in the fringe pattern were no longer distinguishable by eye, so the tilting was estimated by measuring the modulation depth of the interferometer output beam, which was focused on a photodiode as depicted in Figure 9. The movable arm was displaced by a pushing rod with fine threads. The zero position of the mirror displacement, x₀ = 0, was the equilibrium position of the flat springs. At each mirror position, the rod mount was pushed carefully by hand to produce a movement of a few fringes, which caused a few complete sinusoidal cycles in the voltage signal from the photodiode circuit. Because the photodiode was DC coupled, the positive minimum and maximum voltages U_min and U_max could be recorded. The visibility, or the modulation depth, is then approximately

M ≈ (U_max − U_min) / (U_max + U_min). (4)

Although this is not the most accurate way to determine the modulation depth, since these voltage values are noisy, our experience has shown that it gives a very good approximation, especially when the signal noise is low, as in this case.

The results of these modulation depth measurements are presented in Figure 10. In the first two measurements, the initial modulation depth was aligned to about 0.92, which is as close to 1 as possible with the optical components used. Over 3.0 mm of travel, the modulation depth decreased by no more than about 0.04 units. The result corresponds to a tilt angle of about 4 μrad when the Gaussian distribution of the laser intensity is taken into account [18]. This tilt is below the 14 μrad limit set in Section 2.1. It would cause only about a 0.005 decrease in the modulation at 3000 cm−1 with a uniformly distributed beam of 25 mm diameter. In the other two measurements, an initial tilt was deliberately adjusted. The decrease of the modulation depth is roughly proportional to the squared tilt angle according to (1), so the initial tilt should cause a more rapid decrease in the modulation depth. However, this could not be observed, which is a sign of very low tilting.

The porch swing discussed above was clearly machined poorly, because much adjustment was required. In the following example, the porch swing was assembled from properly CNC-machined parts without modifying or tuning the parts in any way after machining. The dimensions (in millimeters) were l = 110, h = 53, and w = 65. The flat steel springs were 0.2 mm thick and 10 mm wide. The springs had 51 mm stiffening clamps in the middle. The drawing scale of Figure 2 corresponds to these dimensions, apart from the flexible parts, which are exaggerated for clarity. The measurement setup was similar to that in the previous experiment. The modulation depth of the interferometer decreased by about 0.05 units during the 3 mm mirror travel, as presented in Figure 11. This corresponds to a tilt angle of about 5 μrad when the intensity distribution of the laser beam is taken into account. The tilt is below the 14 μrad limit set in Section 2.1. With a 25 mm beam, this tilt produces about a 0.007 decrease in the modulation depth at 3000 cm−1. The error in determining the modulation depth from the photodiode signal was about 0.01. However, the results were not very repeatable. This was probably because of friction between the pushing rod and the upper arm, although the friction was significantly reduced by a glass plate acting as a slide bearing between the rod and the arm end.
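A minimal sketch (ours) of the photodiode-based visibility estimate (4), combined with a numerical inversion of (1) to express a measured modulation drop as an equivalent tilt angle; note that this uses the uniform-beam formula rather than the Gaussian-beam correction of [18], and the 10 mm He-Ne beam diameter is an assumption:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j1

def visibility(u_min, u_max):
    """Eq. (4): M ~ (Umax - Umin) / (Umax + Umin), DC-coupled photodiode."""
    return (u_max - u_min) / (u_max + u_min)

def tilt_for_ratio(m_rel, d, nu):
    """Solve 2*J1(x)/x = m_rel for the tilt angle; the bracket stays below
    the first zero of J1, where M(theta) is monotonic."""
    f = lambda th: 2 * j1(2*np.pi*nu*d*th) / (2*np.pi*nu*d*th) - m_rel
    return brentq(f, 1e-9, 2e-5)

print(visibility(0.2, 4.8))  # -> 0.92, the initial alignment in Figure 10
# A 0.04-unit drop from 0.92 over the 3 mm travel, assumed 10 mm beam:
print(tilt_for_ratio(0.88 / 0.92, 0.010, 1 / 632.8e-9) * 1e6)  # ~6 urad
```

This uniform-beam estimate lands in the same few-μrad range as the ~4 μrad obtained with the Gaussian correction, which is all the sketch is meant to show.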
Walker and Rex [6] used an interferometer with the flexure pivot bearing of Figure 1(c). They reported that the tilting right after assembly was about 150 μrad, but they managed to decrease it to an acceptable value of 5 μrad after adjusting the pivot centers with the adjustment mechanism they had designed. Kemp and Huppi [10] used a similar interferometer and reported a maximum tilt of 5 μrad over 5 mm of travel, but they did not mention whether any adjustments were required. However, we suspect that they also may have had to adjust the pivots to obtain sufficiently low tilting. As noted earlier, we have achieved a maximum tilt of 5 μrad using only carefully machined parts, without any tuning or adjustment mechanisms. The interferometer of Onillon et al. [26] used a porch swing that maintained the tilt below 5 μrad over a ±2 mm motion range and apparently had no arm length adjustment system, but it was not actually designed for an FTIR spectrometer. Auguson and Young [27] have reported a tilting of five fringes after 1 cm of travel. Their interferometer was, however, for the far infrared and used ball bearings. With the 3.75 inch beam they apparently used, the tilt was about 17 μrad, which would have been too much for the mid-infrared.

Several authors have demonstrated the tilting of porch swing bearings. Jones [7] reported a tilt of 34 μrad with a porch swing made with spring strips and asymmetrically mounted stiffening clamps. Hatheway [21] used monolithic flexures, where the flexure element and the clamps pressing it to the arm were machined in one piece. The smallest mentioned tilt values were a pitch of 5 μrad and a yaw of 39 μrad. However, he noted that these values may not always be repeatable, because reassembly increased the tilt significantly. It seems that neither Jones nor Hatheway used arm length adjustments. Muranaka et al. [20] built a porch swing with adjustable arm lengths and achieved a tilt angle of less than about 0.5 μrad with a maximum stroke of ±3 mm. However, their device is more appropriate for demonstrations of the parasitic motions than as an actual bearing in an interferometer. The sizes of all the above-mentioned interferometers and demonstration bearings were comparable to the example cases used in this article.

Conclusion

We have demonstrated experimentally that sufficient motion linearity of the mirror in a Michelson interferometer is well achievable using a porch swing bearing without the adjustment mechanisms that are often employed. We defined sufficient linearity as a maximum allowed decrease of the modulation depth of 0.05 units over 3 mm of mirror travel with a 25 mm beam at 3000 cm−1. The corresponding decrease was achieved experimentally using a Helium-Neon laser and a porch swing manufactured with normal machining and assembly tolerances. The estimated manufacturing tolerances for the porch swing were shown to be loose enough not to limit the application of the bearing in an FTIR interferometer. Additionally, the equations of the parasitic motions explained the tuning of the poorly machined porch swing. We used flat springs as the flexure elements of the bearing and observed that the tilting was substantially decreased and the driving stability improved by stiffening clamps mounted in the middle of the springs. The clamps also increased the force constant, or spring rate, of the bearing, which might help in the vibration control of the system.
Figure 1: Side view of typical porch swing bearing assemblies for realizing the linear movement of the mirror. Drawing (a) shows flat springs as the flexure elements and drawing (b) stiffener clamps in the springs. In drawing (c), flexure pivots are used. The driving force F is exerted on the upper arm.

Figure 3: Side view of the porch swing structure with manufacturing errors Δl in the arm and Δh in the flexure lengths, which produce the pitch angle when the upper arm is pushed to x₀. The errors are exaggerated.

Figure 4: Measurement setup for studying the fringe pattern of the output beam of the interferometer. The beam from a Helium-Neon laser was collimated in C and then pointed through the Michelson interferometer built from plane mirrors M1 and M2 and a cube beam splitter BS. The mirror M2 was rigidly mounted, and the mirror M1 was moved using the porch swing bearing. The diverging lens L was used to enlarge the fringe pattern on the screen S, which had a millimeter scale attached to it.

Figure 6: The stiffener clamps mounted on the spring strips decreased the tilt angle of the movable arm of the porch swing. The shaded area is the range of allowed tilt angle θ ≤ 14 μrad. The zero of the displacement was the equilibrium position of the spring elements. The error bars represent the maximum measurement error of the tilt angle. The maximum error of the mirror displacement was about ±0.1 mm.

Figure 7: Measured pitch and yaw tilt angles and the yaw angle calculated from (A.6) as a function of the angle α between the arm ends, or the angle of nonparallelism of the inertia axes of the flexures. The error bars represent the maximum measurement errors estimated from the readings of the scales on the measurement equipment.

Figure 8: The measured and calculated values of the pitch tilt as a function of the length difference Δl of the arms. The pitch was caused by the length difference of the arms. The error bars represent the maximum measurement errors estimated from the readings of the scales on the measurement equipment.

Figure 9: Measurement setup for determining the modulation depth of the output beam of the interferometer. The beam from a Helium-Neon laser was collimated in C and then pointed through the Michelson interferometer built from plane mirrors M1 and M2 and a cube beam splitter BS. The mirror M2 was rigidly mounted, and the mirror M1 was moved using the porch swing bearing. The converging lens L was used to focus the beam on the photodiode D.

Figure 10: The modulation depth of an interferometer with a well-adjusted porch swing bearing. Four different initial tilt angles were used. The zero of the displacement was the equilibrium position of the spring elements. The maximum measurement error in the modulation is about 0.01 and in the position about 0.03 mm.
A Secure and Efficient Group Key Management Protocol with Cooperative Sensor Association in WBANs

The wireless body area network (WBAN) is considered one of the emerging wireless techniques in the healthcare system. Typical WBAN sensors, especially implantable sensors, have limited power capability, which restricts their wide application in the medical environment. In addition, it is necessary for the healthcare center (HC) to broadcast significant notifications to different patient groups. Considering the above issues, in this paper a novel, practical WBAN system model with group message broadcasting is built. Subsequently, a secure and efficient group key management protocol with cooperative sensor association is proposed. In the proposed protocol, the Chinese remainder theorem (CRT) is employed for group key management between HC and the personal controller (PC), which also supports batch key updating. The proposed sensor association scheme is motivated by coded cooperative data exchange (CCDE). Formal security proofs are presented, indicating that the proposed protocol can achieve the desired security properties. Moreover, performance analysis demonstrates that the proposed protocol is efficient compared with state-of-the-art group key management protocols.

Introduction

The development of wireless communication and sensor technologies has enabled remarkable improvements in both academic research and practical applications of wireless body area networks (WBANs), which offer ubiquitous wireless communication services to users [1]. In the medical field, a WBAN is used to monitor patients' real-time health status and seamlessly transmit physiological data to medical institutions, including hospitals, community clinics and emergency centers. Consequently, a doctor can conduct remote diagnostics on the patients and provide timely medical assistance. Additionally, symptom detection, early warnings and precautionary measures for certain diseases, including asthma, AIDS, cancer and influenza, can be provided [2]. Nowadays, as a crucial part of the Internet of Things (IoT), WBANs have continuously attracted much attention. Their architecture varies greatly in order to adjust to the diverse requirements of different practical scenarios. In general, a typical WBAN designed for the healthcare system mainly consists of the healthcare center (HC), the personal controller (PC) and many low-power wireless medical sensors implanted inside or attached to the patient's body [1]. Through these sensors, vital biomedical information such as heartbeat and blood pressure can be measured and then transmitted to the healthcare center (HC) through the personal controller (PC). Therefore, the doctor or physician can be aware of a patient's real-time physical parameters by analyzing the acquired biomedical information. According to these analysis results, appropriate remote diagnostics and timely medical assistance are provided. The main contributions of this paper are summarized as follows:

1. A novel WBAN model with message broadcasting: In practical medical WBAN scenarios, patients who receive services from HC are allocated to different departments according to their physical conditions and diseases. As a result, it is necessary for HC to provide a notification service to different patient groups. To the best of our knowledge, we are the first to propose a system model providing a specific group communication channel for message broadcasting between HC and patients.
Moreover, the medical data transmission channel from sensors to PC is also taken into consideration in our design.

2. Group key management between HC and PC with CRT: The Chinese remainder theorem is employed for the group key management between HC and the PCs, which also supports batch key updating. In this way, HC is capable of broadcasting messages to different patient groups. Moreover, patients in the same group are capable of exchanging information about their physical conditions.

3. Group key management between PC and sensors with CCDE: In our design, the group key management between PC and sensors is motivated by coded cooperative data exchange, with the purpose of minimizing the number of communication rounds for group key generation. Hence, the communication and computation complexity can be drastically reduced, which is efficient for resource-limited wireless sensors in a WBAN.

The remainder of this paper is organized as follows. Section 2 briefly surveys the relevant research achievements. Section 3 introduces some necessary preliminaries and the designed system model in order to give the reader a better understanding of the topic. Section 4 presents the proposed sensor association and group key management protocol in detail. Section 5 demonstrates the security analysis. Section 6 displays the performance analysis. The conclusion is drawn in Section 7.

Related Works

To the best of our knowledge, many research achievements have been made on group key management for wireless body area networks. The traditional public key cryptosystem (TPKC) was implemented in wireless body area networks early on [10-15]. A certificate generated by a third party is required to bind the identity of the user to the associated public key. However, in TPKC-based schemes, complex modular exponentiations have to be calculated, so that more computation and storage are required in resource-limited wireless sensor devices. Therefore, these TPKC-based group key management schemes cannot meet the practical requirements. In order to alleviate the computation and storage burden on the sensor side, several authentication and group management schemes [4,16-18] based on elliptic curve cryptography (ECC) have been proposed, which provide the same security with a smaller key size compared to TPKC-based schemes. Many researchers have applied the idea of identity-based public key cryptography (ID-PKC) [19], a cryptographic technique first proposed by Shamir [20] in order to address the certificate management problem in TPKC. In ID-PKC, the public key of a user can be calculated from his/her publicly known identity, while the secret key of each user is generated by a fully trusted key generation center (KGC). In 2009, Yang et al. [21] proposed an ID-PKC-based key management scheme for mobile devices. However, Yoon and Chang [22] proved that the proposed scheme was vulnerable to impersonation attacks. Subsequently, several ID-based key agreement protocols were proposed [23-25]. Certificateless public key cryptography (CL-PKC) was first introduced by Al-Riyami and Paterson [26] in 2003. In CL-PKC, the private key of a user consists of two parts, which are respectively generated by a semi-trusted key generation center (KGC) and by the user himself/herself. Hence, the key escrow problem, as well as the certificate management problem, can be addressed. Liu et al. [2] proposed two certificateless authentication protocols for the WBAN environment.
However, Xiong [27] demonstrated that Liu et al.'s protocols could not provide forward security and scalability. Additionally, a new certificateless encryption scheme and a signature scheme with efficient revocation against short-term key exposure were proposed in [28]. Thereafter, He et al. [3] proposed an efficient certificateless public auditing (CLPA) scheme with the purpose of addressing integrity issues in cloud-assisted WBANs. Furthermore, the Chinese remainder theorem (CRT) has been applied in many existing group key distribution schemes [29-32]. Zheng et al. proposed two centralized group key management protocols based on CRT [29]. The main contribution of this work is that the transmission passes for group key distribution are minimized, which is valuable in wireless networks with resource restrictions. After that, Zhou et al. proposed a key-tree- and CRT-based group key distribution scheme [30]. Note that in this scheme, the key server uses the root keys of the group member subtrees and CRT for group key distribution. Moreover, the computation on the user side is minimized. Based on this, Vijayakumar et al. proposed a CRT-based centralized group key management scheme for secure multicast communication [33]. The proposed key management scheme can prominently reduce the computation complexity of the key server. Coded cooperative data exchange (CCDE) was first introduced by Rouayeb et al. [34] in 2010 and has drawn increasing attention [35-37]. Milosavljevic et al. proposed a deterministic algorithm for CCDE [38], where a novel divide-and-conquer-based architecture was presented in order to determine the number of bits each node should transmit over the public channel. Subsequently, Sprinston et al. [39] presented a randomized algorithm that minimizes, with high probability, the number of transmissions over the public channel. In 2016, Courtade et al. characterized the minimum number of public transmissions for key agreement [40] with an arbitrary key distribution.

The aforementioned group key management schemes vary greatly in their security techniques. The existing research emphasizes secure data transmission between sensors and PC, while the communication and access control for patients remain to be enhanced. In this paper, we design an integral system model involving both HC-PC and PC-sensor communication. In practical scenarios, a high turnover of patients brings frequent key updating in the hospital environment. In this case, we adopt the CRT for PC group key distribution, which provides fast and effective key updating. Additionally, CCDE is adopted for sensor group key distribution. Note that the decentralized cooperative key generation strategy drastically decreases the communication cost, which is suitable for resource-limited WBAN sensors. The corresponding security and performance analysis demonstrates that the proposed protocol provides adequate security assurance and efficiency.

Preliminaries and Model Definitions

This section introduces some necessary preliminaries to facilitate the reader's understanding, including bilinear pairing, the coded cooperative data exchange (CCDE) problem and the Chinese remainder theorem (CRT). Meanwhile, the system model and network assumptions are presented.

Coded Cooperative Data Exchange Problem

A set X = {x_1, ..., x_n} of n packets, each belonging to a finite alphabet A, needs to be delivered to a set of k clients C = {c_1, ..., c_k}.
Each client c_i ∈ C initially holds a subset X_i ⊆ X of the packets. We denote by n_i = |X_i| the number of packets initially available to client c_i and by X̄_i = X \ X_i the set of packets required by c_i. We assume that the clients collectively know all packets in X (∪_{c_i∈C} X_i = X). Each client can communicate with all its peers through an error-free broadcast channel capable of transmitting a single packet in A. The data are transmitted in communication rounds. For example, in round i, one of the clients c_j broadcasts a packet x to all its outgoing neighbors in C. The transmitted information x may be one of the original packets in X_j or some encoding of packets in X_j and of the information previously transmitted to c_j [34,37]. The problem is to find a scheme that enables each client c_i ∈ C to obtain all packets in X̄_i (and thus in X) while minimizing the total number of broadcasts [35].

Chinese Remainder Theorem

Let k_1, ..., k_n be positive integers that are relatively prime in pairs. Then, for any given integers a_1, ..., a_n, the system of congruences

x ≡ a_i (mod k_i), i ∈ [1, n],

has a unique solution modulo ∂_g = ∏_{i=1}^{n} k_i. The solution is given by

x ≡ ∑_{i=1}^{n} a_i x_i y_i (mod ∂_g),

where x_i = ∂_g / k_i and y_i is the modular multiplicative inverse of x_i modulo k_i.

System Model

As shown in Figure 1, the entire system model consists of three entities: the healthcare center (HC), the personal controller (PC) and the sensors. The description of these three entities is given below. The healthcare center (HC) is a trustworthy authority providing medical service to the patients. HC is assumed to have adequate storage and computation power. In our system model, HC communicates with the PCs to obtain physiological data of the patients. Hence, the patient's physical condition can be remotely monitored. The personal controller (PC) is a mobile device responsible both for biomedical information gathering from the sensors and for communication with HC. Note that each patient is paired with one PC. The PC employed in this paper is assumed to be professional equipment designed specifically for medical purposes. Sensors are low-power wireless medical devices either implanted inside or attached to a patient's body. These sensors have limited computation ability and restricted battery capacity. They are responsible for real-time measurement of various physiological parameters of the patients.

Network Assumption

According to Figure 1, there are several departments in the healthcare center. Patients with different diseases are assigned to different departments. In each department, the patients are arranged into one patient group. HC is assumed to provide service to all the departments (patient groups). A secure communication channel for data transmission between HC and PC is essential. Furthermore, as mentioned above, a specific group communication channel between HC and a particular patient group is indispensable. As for an individual patient, the secure association between the PC and multiple sensors is crucial so that the vital physical data from the sensors can be safely transmitted. In our system model, the PC is designed to communicate directly with HC through a wireless channel, which differs from other existing WBAN models using Internet communication between PC and HC [3,28]. The PC is designed as a professional medical device with appropriate treatment units. As part of the medical facility, it is assumed that the PC works within the effective range of HC [41,42]. After the patient fully recovers from the disease, his/her PC will be removed and assigned to other new patients.
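For concreteness, the constructive CRT solution quoted above is a few lines of Python; this sketch is ours and the helper name is arbitrary:

```python
from math import prod

def crt(residues, moduli):
    """Solve x = a_i (mod k_i) for pairwise-coprime moduli k_i;
    returns the unique solution modulo prod(k_i)."""
    n = prod(moduli)
    x = 0
    for a_i, k_i in zip(residues, moduli):
        x_i = n // k_i            # product of the remaining moduli
        y_i = pow(x_i, -1, k_i)   # inverse of x_i modulo k_i
        x += a_i * x_i * y_i
    return x % n

print(crt([2, 3, 2], [3, 5, 7]))  # 23: 23%3==2, 23%5==3, 23%7==2
```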
Proposed Schemes

In this section, we explain our cooperative sensor association and group key management protocol, which can be generally divided into two parts: the group key management between HC and the PCs affiliated with the same patient group, and the cooperative association between the sensors and the related PC. According to Figure 1, we assume that HC is in charge of r departments in total. Each department consists of multiple PCs. In this case, one PC is paired with one patient. Consequently, the patient and the relevant PC are considered as one entity in this paper. In department j (j ∈ [1, r]) with n PCs (patients) in total, PC_i (i ∈ [1, n]) is in contact with the corresponding patient P_i. As for P_i, m sensors are arranged in or on different parts of P_i's body in order to monitor various physiological parameters. In our design, we build a group key management scheme between HC and all the n PCs in department j. At the same time, group key agreement between PC_i and the m sensors is provided accordingly. We introduce our protocol based on department j; the design for the multiple-department situation is similar. The notations used in our protocol are described in the following subsection. Thereafter, a detailed description of our protocol is given, which contains four parts: group key generation for HC and PCs, PC join and leave operations, group key generation for PC and sensors, and sensor join and leave operations.

Notations

The notations used in our protocol and brief descriptions are listed in Table 1.

Group Key Generation for HC and PCs

In this section, the group key generation for HC and the PCs affiliated with department j is described. It is worth noting that the generation procedures for multiple departments are similar. The proposed group key generation for HC and PCs can be divided into three phases. The first phase is the registration phase, which is responsible for secret key allocation to each PC and other necessary precomputation. The second phase is the group key computation phase, where the group key is generated and distributed to the PCs. Finally, in the group key derivation phase, each PC derives the group key from the received keying message. The detailed descriptions of these three phases are as follows.

Registration Phase

Before the group key generation procedure, some essential operations are first conducted by HC in the registration phase [43]. Initially, let P_1, ..., P_n be the n patients who are assigned to department j (j ∈ [1, r]). First, each patient P_i (i ∈ [1, n]) registers with HC so that HC can acquire P_i's personal information, including name, age, gender, phone number, and so on. Thereafter, HC allocates PC_i to P_i. Next, HC generates the symmetric key hsk and the secret key PSK_i for each PC_i by conducting SecKeGen. Subsequently, HC executes PreCom for the necessary precomputation. The design of SecKeGen and PreCom is presented below.

• SecKeGen: HC conducts SecKeGen to generate information for PC_i (i ∈ [1, n]). Z*_p and Z*_s are defined as the sets of nonnegative integers less than p and s, respectively, where p and s are two large prime numbers. Additionally, G is defined as a multiplicative group of order p, and g is a generator of G. HC randomly chooses SSK and PSK_i (i ∈ [1, n]) from Z*_p, where PSK_i is the secret key of PC_i and SSK is the HC master key. Moreover, HC chooses hsk ∈ Z*_s for symmetric encryption. The HC temporary identity HID is then generated as

HID = g^SSK ∥ TS,

where TS is the current time stamp.
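Assuming the reading HID = g^SSK ∥ TS used above, HID amounts to a timestamped public commitment to the master key. A minimal sketch (ours) of the identity generation follows; the toy modulus and generator are assumptions made only for illustration:

```python
import time

P = 2**127 - 1   # a known Mersenne prime, used here as a toy modulus
G = 3            # assumed generator for illustration

def make_hid(ssk: int) -> str:
    """HID = g^SSK || TS: a group element bound to a freshness time stamp."""
    ts = int(time.time())
    return f"{pow(G, ssk, P)}||{ts}"

print(make_hid(123456789))
```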
During the registration phase, HC assigns ⟨PSK_i, HID, hsk⟩ to each PC_i (i ∈ [1, n]) of department j and keeps the master key SSK only in its own memory. In other words, HC maintains a key list for each department, in which SSK, hsk, HID and the PSK_i of the n PCs are stored. Each PC_i possesses ⟨PSK_i, HID, hsk⟩. Note that SSK is confidential information known only to HC, while HID and hsk are assumed to be known to all PCs in department j.

• PreCom: HC conducts PreCom to compute the essential intermediate values [44]. First, HC selects the PSK_i from the key list and computes the product

N = ∏_{i=1}^{n} PSK_i

involving the n registered PCs of department j. Then, for each PC_i (i ∈ [1, n]), HC computes

x_i = N / PSK_i = ∏_{v≠i} PSK_v

and obtains {x_1, ..., x_n}. That is, x_i for PC_i is the product of all the remaining PSKs. Subsequently, HC computes y_i for each x_i (i ∈ [1, n]), which satisfies

x_i · y_i ≡ 1 (mod PSK_i).

That is, y_i is the modular multiplicative inverse of x_i modulo PSK_i. Hereafter, HC acquires the variables var_i (i ∈ [1, n]) according to

var_i = x_i · y_i.

Thus, the intermediate value µ can be computed as

µ = ∑_{i=1}^{n} var_i.

Upon completion, HC stores the value of µ for the subsequent group key computation. At this point, the CRT-based precomputation is completed.

Group Key Computation Phase

In this phase, the group key of department j is generated by HC. Let q be a large prime number with q ≤ p/2. First, HC chooses a random value from Z*_q as the group key PGK_j. Then, PGKCom is conducted by HC in order to obtain the keying message. Finally, HC conducts SecHtoP to distribute the keying message to all PC_i (i ∈ [1, n]). The design of PGKCom and SecHtoP is described in detail below.

• PGKCom: In our design, HC conducts PGKCom to obtain the keying message γ_j for department j, illustrated as

γ_j = PGK_j × µ.

In particular, for department j, only one PGK_j and one µ are effective in the same time interval. Furthermore, the keying message γ_j is available to all PCs.

• SecHtoP: HC conducts SecHtoP to distribute the keying message γ_j to department j. First, HC encrypts the keying message, illustrated as

E(γ_j) = S_ENC_hsk(γ_j),

where S_ENC_x(M) denotes the symmetric encryption of M using key x. Next, HC computes the certificate according to

SIG_SSK(TS||HID||E(γ_j)) = H(HID||E(γ_j))^SSK.

Following the above calculation, the message

TS||HID||E(γ_j)||SIG_SSK(TS||HID||E(γ_j))

is finally broadcast to the PC_i (i ∈ [1, n]) of department j.

Group Key Derivation Phase

In this phase, the main task for PC_i is to verify the validity of the received message by employing AuthMess. Subsequently, PC_i derives the group key PGK_j using GrKeCom. The design of AuthMess and GrKeCom is described in detail below.

• AuthMess: PC_i conducts AuthMess to verify the received message from HC. First, PC_i checks the time stamp TS from the broadcast message. If TS matches the current time, PC_i checks whether

ê(SIG_SSK(TS||HID||E(γ_j)), g) ?= ê(H(HID||E(γ_j)), HID)

holds. The correctness follows from the bilinearity of ê: ê(H(HID||E(γ_j))^SSK, g) = ê(H(HID||E(γ_j)), g^SSK), and g^SSK is the group element contained in HID. If the certificate is valid, PC_i derives E(γ_j) from the message and decrypts it as

γ_j = S_DEC_hsk(E(γ_j)),

where S_DEC_hsk(M) denotes symmetric decryption using hsk. At this point, the keying message γ_j has been securely transmitted.

• GrKeCom: This algorithm is designed for group key derivation from the received keying message γ_j. In GrKeCom, a modulo division on the PC_i side is conducted as

PGK_j = γ_j mod PSK_i,

where PSK_i is the allocated secret key. As defined above, µ ≡ 1 (mod PSK_i) holds, which guarantees that the derived group key PGK_j is equal to the original one. At this point, the group key generation is finished. All the PC_i of department j share PGK_j with HC.
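The arithmetic core of PreCom, PGKCom and GrKeCom is easy to verify with toy numbers. The sketch below (ours; the tiny pairwise-coprime keys stand in for large secret keys) shows why broadcasting the single value γ_j = PGK_j × µ lets every registered PC recover the group key with one modulo division:

```python
from math import prod

def precom(psks):
    """HC side (PreCom): mu = sum of var_i = x_i * y_i, where
    x_i = product of the other PSKs and y_i = x_i^{-1} mod PSK_i.
    By the CRT, mu = 1 (mod PSK_i) for every registered PC_i."""
    n = prod(psks)
    mu = 0
    for psk in psks:
        x_i = n // psk
        mu += x_i * pow(x_i, -1, psk)
    return mu

psks = [101, 103, 107, 109]   # toy pairwise-coprime secret keys
mu = precom(psks)
pgk = 97                      # group key; must stay below every PSK_i
gamma = pgk * mu              # PGKCom: the broadcast keying message

for psk in psks:              # GrKeCom on each PC
    assert gamma % psk == pgk
print("all PCs derived PGK =", pgk)
```

Note the implicit requirement, mirrored in the protocol by choosing PGK_j below the prime bound q, that the group key be smaller than each PSK_i; otherwise the modulo division would return PGK_j reduced modulo PSK_i.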
PC Join and Leave Operations

In practical scenarios, patients frequently join or leave a department [4,45]. Assume patient P_i of department j has been restored to health after receiving treatment. PC_i must not be able to read the broadcast messages after revocation, for the purpose of privacy protection towards the remaining patients. Moreover, a newly joined patient needs to be allocated the group key. Consequently, the group key should always be updated when join or leave operations happen. In this section, the key updating scheme is illustrated from two aspects, namely the PC join operation and the PC leave operation. Note that we demonstrate the join and leave operations in the single-PC case; that is, only one PC joins or leaves the department at a time. The scenario of multiple PCs joining and leaving the same department is then studied in the batch updating phase. The detailed description of the join and leave operations, as well as the batch updating operation, is as follows.

PC Join Operation Phase

As mentioned above, the PC join operation in department j is considered in this section. Obviously, HC should update the group key PGK_j as soon as a specific patient, named P_join, joins department j. We would like to emphasize that P_join must register with HC first, which is in accordance with actual practice. Then, P_join is assigned PC_join and obtains its own necessary secret key set ⟨PSK_join, HID_join, hsk⟩ from HC. Subsequently, JoKeUpdate is conducted by HC to generate the rekeying message for PC_join and the other n PCs of department j. Finally, by conducting JoKeDerive, the updated group key is distributed to all the n + 1 PCs of department j. The design of JoKeUpdate and JoKeDerive is described in detail below.

• JoKeUpdate: HC conducts JoKeUpdate to generate the rekeying message for both PC_join and the current n PCs. A few steps are necessary, as introduced below. First, for PC_join, HC computes the corresponding x_join and y_join according to the PreCom algorithm in Section 4.2. Hence, the variable var_join can be computed as

var_join = x_join · y_join.

In this way, HC computes the intermediate value µ_join, defined as

µ_join = µ + var_join.

Consequently, HC selects a new group key PGK_{j-join} and generates the rekeying message γ_{j-join} by computing

γ_{j-join} = PGK_{j-join} × µ_join.

Thereafter, by conducting the SecHtoP algorithm introduced in Section 4.2, the rekeying message γ_{j-join} can be securely transmitted to the n + 1 PCs, which include the newly joining PC_join and the existing n PCs of department j.

• JoKeDerive: This algorithm is designed for the aforementioned n + 1 PCs to derive the updated group key PGK_{j-join} from γ_{j-join}. After the verification process through AuthMess in Section 4.2, PC_i (i ∈ [1, n] ∪ {join}) conducts a modulo division, illustrated as

PGK_{j-join} = γ_{j-join} mod PSK_i.

Note that the secret key PSK_join of PC_join is included in µ_join, so that the derived new group key PGK_{j-join} is equal to the original one. The process of JoKeDerive is similar to the group key derivation phase presented in Section 4.2.

PC Leave Operation Phase

In this section, we assume that patient P_leave has been restored to health. Hence, HC removes this patient and the corresponding PC_leave from department j. Moreover, if some PC in department j were compromised, HC would delete the compromised PC in the same way. In this case, an effective compromise detection strategy is necessary; some existing schemes can be applied in order to detect compromised PCs periodically [46,47].
In this phase, HC first conducts the LeKeUpdate algorithm to generate the rekeying message and transmits it securely to the remaining n − 1 PC_i (i ∈ [1, n] \ {leave}). Then, LeKeDerive is executed on the PC_i side. Hence, the updated group key PGK_{j-leave} is shared by HC and the remaining n − 1 PCs. The design of LeKeUpdate and LeKeDerive is described in detail below.

• LeKeUpdate: HC conducts LeKeUpdate to generate the rekeying message concerning the remaining n − 1 PCs. A few steps are necessary, as introduced below. First, HC obtains µ_leave by excluding PC_leave, demonstrated as

µ_leave = µ − var_leave,

where var_leave is stored in HC's memory. Consequently, HC selects a new group key PGK_{j-leave} and computes the rekeying message γ_{j-leave} according to

γ_{j-leave} = PGK_{j-leave} × µ_leave.

Thereafter, by conducting the SecHtoP algorithm introduced in Section 4.2, the rekeying message γ_{j-leave} can be securely transmitted.

• LeKeDerive: After the verification process with the AuthMess algorithm in Section 4.2, PC_i (i ∈ [1, n] \ {leave}) conducts LeKeDerive to derive the updated group key PGK_{j-leave}, illustrated as

PGK_{j-leave} = γ_{j-leave} mod PSK_i.

Note that the secret key PSK_leave of PC_leave is excluded from µ_leave, so that the removed patient P_leave cannot derive the correct group key. The process of LeKeDerive is similar to the group key derivation phase presented in Section 4.2.

Batch Updating Phase

Owing to the particular features of CRT, batch updating for multiple PCs can be achieved accordingly, which meets the practical requirements of medical WBANs. In this section, we present batch updating involving the join and leave operations of multiple PCs at the same time. Suppose that P_bj (bj ∈ [1, w]) denote w joining patients in department j. Similarly, P_bl (bl ∈ [1, z]) denote z leaving patients at the same time. P_bj and P_bl are respectively paired with PC_bj and PC_bl. Hence, after updating, the number of PCs in department j is n + w − z. In our design, HC first conducts the BaKeUpdate algorithm to generate the batch rekeying message γ_{j-batch} and uses SecHtoP to distribute it to all the n + w PCs. Afterwards, AuthMess is conducted for verification on the PC side. Finally, BaKeDerive is conducted so that the updated group key PGK_{j-batch} is obtained by the n + w − z PCs in department j. It is noteworthy that the SecHtoP and AuthMess algorithms are the same as the ones presented in Section 4.2. The design of BaKeUpdate and BaKeDerive is described in detail below.

• BaKeUpdate: HC conducts BaKeUpdate to generate the batch rekeying message for the n + w − z PCs. A few steps are necessary, as introduced below. First, with the aforementioned PreCom algorithm described in Section 4.2, HC computes the corresponding x_bj and y_bj of the w PC_bj (bj ∈ [1, w]). Hence, the variable for PC_bj is obtained as

var_bj = x_bj · y_bj.

Consequently, the sum var⁺_b involving all the w joining PCs can be computed as

var⁺_b = ∑_{bj=1}^{w} var_bj.

Similarly, the sum var⁻_b involving all the z leaving PCs can be computed as

var⁻_b = ∑_{bl=1}^{z} var_bl.

Hence, the intermediate value including the w joining PCs and excluding the z leaving PCs is defined as

µ_{j-ba} = µ + var⁺_b − var⁻_b.

As a result, HC chooses a new group key PGK_{j-batch} and generates the batch rekeying message γ_{j-batch}, demonstrated as

γ_{j-batch} = PGK_{j-batch} × µ_{j-ba}.

Afterwards, by conducting the SecHtoP algorithm introduced in Section 4.2, the batch rekeying message γ_{j-batch} can be distributed to all the n + w PCs.

• BaKeDerive: After the verification process using the AuthMess algorithm in Section 4.2, PC_i (i ∈ [1, n + w]) derives the updated group key PGK_{j-batch} from γ_{j-batch} using BaKeDerive.
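Continuing the toy example above, the following sketch (ours) illustrates LeKeUpdate and LeKeDerive: subtracting var_leave from µ breaks the congruence µ ≡ 1 (mod PSK_leave), so the revoked PC's modulo division no longer yields the new group key, while the remaining PCs are unaffected. Batch updating works the same way, adding the var terms of joining PCs and subtracting those of leaving PCs before multiplying by the new key:

```python
from math import prod

psks = [101, 103, 107, 109]
n = prod(psks)
var = {}
for psk in psks:
    x_i = n // psk
    var[psk] = x_i * pow(x_i, -1, psk)   # var_i = x_i * y_i
mu = sum(var.values())

leave = 109
mu_leave = mu - var[leave]   # LeKeUpdate: exclude the leaving PC
new_pgk = 89
gamma = new_pgk * mu_leave   # rekeying message gamma_{j-leave}

for psk in psks:             # LeKeDerive on each PC
    print(psk, gamma % psk == new_pgk)
# -> True for 101, 103, 107; False for the revoked 109
```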
The PC_i (i ∈ [1, n + w − z]) conducts a modulo division, illustrated as

PGK_{j-batch} = γ_{j-batch} mod PSK_i.

Note that the w secret keys PSK_bj of the newly joining PC_bj are included in µ_{j-ba}, so that the derived PGK_{j-batch} is equal to the original one. Additionally, the secret keys of the PC_bl (bl ∈ [1, z]) are excluded from µ_{j-ba}, so that the removed patients P_bl (bl ∈ [1, z]) cannot obtain the correct group key. At this point, the batch updating procedure involving w joining patients and z leaving patients is completed. The group key for all the n + w − z PCs in department j has been updated securely.

Group Key Generation for PC and Sensors

In this section, our design is motivated by coded cooperative data exchange (CCDE). Assume that k packages have previously been loaded onto t clients. In simple terms, the goal of CCDE is to recover the k packages for all t clients with a minimal number of transmissions. Upon completion, each client has obtained all k packages. So far, much research has addressed the CCDE problem. According to [38] and [48], if the t clients are fully connected, the CCDE problem can be solved in polynomial time. Inspired by the group key agreement designed in [5], we consider assigning in total k master keys to all the sensors in department j. The master key distribution follows the rule that every two sensors share at least one master key. Hence, the sensors of department j are fully connected with each other. With the assistance of the corresponding PC, the sensors can build the group key cooperatively. Based on Definition 1 in [5], the CCDE-based scheme is feasible for efficient sensor association with the purpose of achieving optimal transmission passes.

For a better description, we take a patient P_i with PC_i as an example, where P_i is in department j. Let C_i = {SN_v | v ∈ [1, m], m ∈ N*} be the set of m wireless sensors allocated to P_i. The association of these m sensors is conducted after PC_i successfully registers with HC. The proposed sensor association scheme can be divided into two phases: the setup phase and the key generation phase. The setup phase is responsible for secret key allocation and some necessary preparation. Thereafter, the group key is generated in the key generation phase. The detailed descriptions of these two phases are presented as follows.

Setup Phase

In this phase, PC_i assigns the necessary secret information to the m sensors. First, PC_i conducts SecKeDis to generate the temporary identity PID_i and the symmetric secret key nsk. Thereafter, PC_i conducts MasKeDis to distribute the predefined master keys to the sensors SN_v (v ∈ [1, m]). The design of SecKeDis and MasKeDis is described in detail below.

• SecKeDis: PC_i conducts SecKeDis to generate nsk and PID_i. Let Z*_h be the set of nonnegative integers less than h, where h is assumed to be a large prime number. Additionally, G_T is defined as a multiplicative group of order h, and u is a generator of G_T. First, PC_i randomly chooses nsk from Z*_h. Hence, PID_i is generated as

PID_i = u^{PSK_i} ∥ TS,

where PSK_i is the confidential information of PC_i. Thereafter, PC_i stores ⟨PSK_i, PID_i, nsk⟩ in its memory.

• MasKeDis: PC_i conducts MasKeDis to distribute a set of master keys among the m sensors. Let Q_i = {k_h | h ∈ [1, c], c > m ∧ c ∈ N*} be the c master keys to be allocated. According to our design, each sensor SN_v is assigned a subset B_v ⊆ Q_i such that B_v ∩ B_w ≠ ∅ for any other sensor SN_w. In this way, each sensor SN_v ∈ C_i shares at least one master key with each remaining sensor. Upon completion, PC_i assigns ⟨PID_i, nsk, B_v⟩ to sensor SN_v.
Key Generation Phase

In this phase, PC_i is responsible for distributing the keying message to all the sensors securely. First, PC_i conducts MasKeSel_1 to select the most widely shared master key k¹_Ψ ∈ Q_i among the m subsets B_v (v ∈ [1, m]) and computes the session key Sk¹_Ψ. Afterwards, PC_i transmits the session key Sk¹_Ψ to the sensors with SecPtoS. Subsequently, AuthSess is conducted by each sensor SN_v ∈ C_i so as to guarantee the validity of the received session key and to compare it with B_v. Hence, the sensors preloaded with k¹_Ψ are classified as one subset Λ_1 ⊆ C_i. The other sensors, without k¹_Ψ, discard the received message. Thereafter, PC_i runs MasKeSel_2 to select the second master key k²_Ψ. Similarly, the sensors preloaded with k²_Ψ are classified as the second subset Λ_2 ⊆ C_i. According to our design, Λ_1 ∩ Λ_2 ≠ ∅; in other words, at least one sensor is preloaded with both k¹_Ψ and k²_Ψ. PC_i then conducts GrKeEnc so that the sensors in Λ_2 \ (Λ_1 ∩ Λ_2) can also derive the session key Sk¹_Ψ. Note that Sk¹_Ψ is taken as the group key SGK_i. Now, Sk¹_Ψ has been distributed to the sensors in Λ_1 ∪ Λ_2. Subsequently, PC_i repeatedly conducts the above process in order to distribute Sk¹_Ψ to the remaining sensors in C_i \ (Λ_1 ∪ Λ_2). In this way, after several broadcast transmission passes, every SN_v ∈ C_i finally obtains Sk¹_Ψ as the group key, and the key generation phase is completed. The design of MasKeSel_1, SecPtoS, AuthSess, MasKeSel_2 and GrKeEnc is described in detail below.

• MasKeSel_1: This algorithm is designed for PC_i to select the master key k¹_Ψ. PC_i preferentially chooses the master key held by the most sensors. The corresponding session key Sk¹_Ψ is then derived from k¹_Ψ.

• SecPtoS: After the computation of the session key Sk¹_Ψ, PC_i conducts SecPtoS for session key distribution. First, Sk¹_Ψ is encrypted by PC_i following

E_1(Sk¹_Ψ) = S_ENC_nsk(Sk¹_Ψ).

As before, S_ENC_x(M) denotes symmetric encryption using key x. Next, PC_i computes the certificate SIG_PSK_i(TS||PID_i||E_1(Sk¹_Ψ)) according to Equation (9). After the above calculation, the message

TS||PID_i||E_1(Sk¹_Ψ)||SIG_PSK_i(TS||PID_i||E_1(Sk¹_Ψ))

is finally broadcast to the SN_v ∈ C_i. It is noteworthy that the entire process of SecPtoS is similar to the aforementioned SecHtoP.

• AuthSess: This algorithm is designed for the sensors to verify the received certificate from PC_i. The whole process is similar to the aforementioned AuthMess algorithm. SN_v checks whether

ê(SIG_PSK_i(TS||PID_i||E_1(Sk¹_Ψ)), u) ?= ê(H(PID_i||E_1(Sk¹_Ψ)), PID_i)

holds. The correctness follows analogously from the bilinearity of ê, with u^{PSK_i} as the group element contained in PID_i. If the certificate is valid, SN_v derives E_1(Sk¹_Ψ) from the message and decrypts it as

Sk¹_Ψ = S_DEC_nsk(E_1(Sk¹_Ψ)),

where S_DEC_nsk(M) denotes symmetric decryption using nsk. As a result, the keying message Sk¹_Ψ is securely transmitted.

• MasKeSel_2: This algorithm is designed for PC_i to select the second master key k²_Ψ. It is required that at least one sensor in Λ_1 stores the master key k²_Ψ in its master key subset; that is, ∃SN_Ω ∈ Λ_1 such that k²_Ψ ∈ B_Ω. Following this rule, PC_i chooses the master key held by the most sensors in C_i \ Λ_1. After that, the session key Sk²_Ψ is generated from k²_Ψ.

• GrKeEnc: PC_i encrypts Sk¹_Ψ under the session key Sk²_Ψ and broadcasts the resulting message; the transmission process is similar to the aforementioned SecPtoS. Finally, after the message checking process employing AuthSess, the sensors in Λ_2 \ (Λ_1 ∩ Λ_2) derive the session key Sk¹_Ψ. Hence, Sk¹_Ψ is distributed as the group key SGK_i.
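The MasKeSel/GrKeEnc loop is essentially a greedy covering of the sensor set by shared master keys. The following sketch (ours; the toy key allocation, the function name and the abstracted-away encryption are all assumptions) mimics the selection logic: pick the most widely held key first, then repeatedly bridge to the still-uncovered sensors through a key that is also held by an already-covered sensor:

```python
# Toy master-key subsets B_v: every two sensors share at least one key.
subsets = {
    "SN1": {"k1", "k2"},
    "SN2": {"k1", "k3"},
    "SN3": {"k2", "k3"},
    "SN4": {"k1", "k2", "k3"},
}

def broadcast_rounds(subsets):
    all_keys = {k for b in subsets.values() for k in b}
    # MasKeSel_1: the master key held by the most sensors
    k = max(all_keys, key=lambda k: sum(k in b for b in subsets.values()))
    covered = {v for v, b in subsets.items() if k in b}
    uncovered = set(subsets) - covered
    rounds = 1                       # SecPtoS broadcast under Sk1
    while uncovered:
        # MasKeSel_2, ...: a key shared by a covered and an uncovered sensor
        k = max((k for v in uncovered for k in subsets[v]
                 if any(k in subsets[c] for c in covered)),
                key=lambda k: sum(k in subsets[v] for v in uncovered))
        newly = {v for v in uncovered if k in subsets[v]}
        covered |= newly             # GrKeEnc: Sk1 re-encrypted under Sk2, ...
        uncovered -= newly
        rounds += 1
    return rounds

print(broadcast_rounds(subsets), "broadcast passes")  # here: 2
```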
The above process repeats until

Λ_1 ∪ Λ_2 ∪ ... ∪ Λ_ρ = C_i

holds, where ρ denotes the number of transmissions on the PC_i side. At this point, the group key generation for PC and sensors is completed.

Sensor Join and Leave Operations

In this section, the cases of sensors joining and leaving C_i are considered respectively.

Sensor Join Operation

In our system model, the sensor join operation should be available in order to offer continuous treatment to the current patient. Assume patient P_i in department j is equipped with m wireless sensors, and let SN_join denote the new sensor to be assigned. It is worth emphasizing that the existing m sensors have already been associated with PC_i through the generated group key SGK_i. In this case, the joining sensor SN_join first registers with PC_i and obtains ⟨PID_i, nsk, B_join⟩, where B_join denotes the master key subset allocated to SN_join. For all v ∈ [1, m], B_v ∩ B_join ≠ ∅ and B_v ∪ B_join ⊆ Q_i hold. After that, PC_i selects the master key k^join_Ψ. Note that k^join_Ψ is preloaded both in B_join and in at least one existing sensor in C_i; that is, for k^join_Ψ ∈ B_join, there exists v ∈ [1, m] such that k^join_Ψ ∈ B_v ∩ B_join. The subsequent process for the joining sensor is similar to Section 4.4. As a result, all the m + 1 sensors obtain the group key SGK_i, and the sensor join operation is completed. Furthermore, the case of multiple sensors joining the group is similar to the above single-sensor case. In conclusion, the above sensor join scheme emphasizes allocating the existing group key SGK_i to the newly joining sensor. However, in order to enhance the security properties, the existing group key should be updated whenever a new sensor joins C_i, which is supported by the aforementioned group key generation process.

Sensor Leave Operation

According to our system model, the sensors are assigned to each patient by the healthcare center and will not be frequently removed from the patient's body. In most cases, the allocated sensors remain paired with the related patient and keep working until the patient leaves the department. However, if a sensor is compromised or disabled, the current group key should be refreshed in a timely manner. It is notable that in our design the sensors are closely attached to or implanted in the patient's body, so the sensors are fully controlled by the patient, who is assumed to be a benign user. Hence, for security reasons, PC_i should assign new secret information and conduct the group key generation process again.

Security Analysis

In this section, we analyze the security properties of the proposed protocol. The security theorems, as well as the corresponding proofs, are given below.

Resistance to Replay Attack

An adversary can conduct a replay attack by reusing previous messages [49,50]. We analyze the resistance to replay attack of the proposed protocol.

Theorem 1. During the authentication process in the group key management between HC and PCs, a replay attack can be prevented. That is, the reuse of a previous message sent from HC cannot pass the current authentication process on the PC side.

Proof of Theorem 1. The security of replay attack resistance is formally defined through game G_1. Let A_1 be a probabilistic polynomial-time adversary. C_1 denotes the challenger, and h and H denote the random oracles. It is worth emphasizing that C_1 has the ability to simulate all the oracles and to output the signing message as a real signer [2,3].
In G_1, it is assumed that A_1 can conduct the following queries to C_1:

h query: A_1 can query the random oracle h at any time. C_1 simulates this random oracle by maintaining a list L_h of tuples {j, PC_i}, where L_h is initialized to be empty. When the oracle is queried with input j, if the query j is already in L_h, C_1 outputs the corresponding PC_i to A_1. Otherwise, C_1 generates a random PC_i and returns it to A_1, and {j, PC_i} is added to L_h.

Extract query: Upon receiving the query from A_1, C_1 executes the SecKeGen algorithm to generate the relevant secret information {TS, PSK_i, SSK, g, hsk}, where TS denotes the current time stamp. After that, C_1 computes HID and E(γ_j). Finally, {PC_i, HID, TS, g, E(γ_j)} is returned to A_1.

H query: A_1 can query the random oracle H at any time. C_1 simulates this random oracle by maintaining a list L_H of tuples {PC_i, Y_i}, where L_H is initialized to be empty. When the oracle is queried with input PC_i, if PC_i is already in L_H, C_1 outputs the corresponding Y_i to A_1. Otherwise, C_1 generates a random number Y_i and returns it to A_1; meanwhile, {PC_i, Y_i} is added to L_H.

SigGen query: C_1 simulates the signature oracle by responding to the signature query on message E(γ_j). C_1 executes the SecHtoP algorithm to generate the signature SIG(TS||HID||E(γ_j)) and returns it to A_1.

Replay query: Upon receiving the signature from A_1, C_1 simulates the replay operation by conducting the AuthMess algorithm to check the validity of the received signature. The received signature is compared with a newly-generated signature after a certain time interval ∆t by replaying the process. As a result, A_1 obtains the signature SIG(TS||HID||E(γ_j)), where the generated signature is valid and the equation ê(SIG(TS||HID||E(γ_j)), g) = ê(Y_i, HID) holds. At this point, TS||HID||E(γ_j)||SIG(TS||HID||E(γ_j)) is held by A_1, while the newly-generated signature SIG(TS_∆t||HID'||E(γ_j)), with HID' = g^SSK||TS_∆t, satisfies ê(SIG(TS_∆t||HID'||E(γ_j)), g) = ê(Y_i', HID'), where TS_∆t is the time stamp at time ∆t generated by C_1 (TS_∆t > TS). Accordingly, when C_1 runs the AuthMess algorithm on the replayed message, the verification ê(SIG(TS||HID||E(γ_j)), g) = ê(Y_i', HID') can hold only when Y_i = Y_i' and HID = HID'. That is, TS_∆t = TS, which contradicts the aforementioned definition. Hence, the replay attack is not feasible in the proposed group key management scheme between HC and PCs. A minimal code sketch of the timestamp-based check appears below.

Theorem 2. During the authentication process in the group key management between PC and sensors, the replay attack can be prevented. That is, the reuse of a previous message sent from PC cannot pass the current authentication process on the sensor side.

Proof of Theorem 2. The proof of Theorem 2 is similar to the above proof of Theorem 1.
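The essence of the replay check is that a stale or reused time stamp fails verification. The sketch below captures only this logic, using an HMAC as a stand-in for the paper's pairing-based signature SIG(TS||HID||E(γ_j)); the acceptance window and all names are assumptions made for illustration, not parameters from the protocol.

```python
import hmac, hashlib, time

MAX_SKEW = 30  # seconds; acceptance window, an assumed parameter

def sign(key: bytes, ts: int, hid: bytes, ciphertext: bytes) -> bytes:
    """Stand-in for SIG(TS || HID || E(gamma_j)) built from an HMAC."""
    msg = ts.to_bytes(8, "big") + hid + ciphertext
    return hmac.new(key, msg, hashlib.sha256).digest()

def auth_mess(key, ts, hid, ciphertext, tag, seen_ts):
    """Reject stale or reused timestamps, then verify the tag."""
    now = int(time.time())
    if abs(now - ts) > MAX_SKEW or ts in seen_ts:
        return False                      # replayed or expired message
    if not hmac.compare_digest(tag, sign(key, ts, hid, ciphertext)):
        return False                      # forged message
    seen_ts.add(ts)
    return True

seen = set()
ts = int(time.time())
tag = sign(b"psk_i", ts, b"HID", b"ciphertext")
assert auth_mess(b"psk_i", ts, b"HID", b"ciphertext", tag, seen)
assert not auth_mess(b"psk_i", ts, b"HID", b"ciphertext", tag, seen)  # replay
```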
Resistance to Forgery Attack

In this section, we analyze the resistance of the proposed protocol to the forgery attack.

Theorem 3. The proposed group key management scheme between HC and PCs is existentially unforgeable in the random oracle model.

Proof of Theorem 3. Similarly, the proof of forgery attack resistance is formally defined through game G_2. Let A_2 be a probabilistic polynomial-time adversary. C_2 denotes the challenger, and h and H denote the random oracles. It is worth noting that C_2 has the ability to simulate all the oracles and to output the signing message as a real signer.

In G_2, it is assumed that A_2 can conduct the following queries to C_2:

h query: This is the same as the definition in Theorem 1.

Extract query: Upon receiving the query from A_2, C_2 executes the SecKeGen algorithm to generate the relevant secret information {TS, PSK_i, SSK, g, hsk}. Note that TS denotes the current time stamp. {PC_i, HID, TS, g} is returned to A_2.

SyEnc query: C_2 maintains a list L_S of tuples {PC_i, γ_j, E(γ_j)}, where L_S is initialized to be empty. When queried by A_2, C_2 generates a random number as γ_j and checks the list L_S. If {PC_i, γ_j} is already in L_S, C_2 randomly chooses another value. Otherwise, C_2 computes E(γ_j) with hsk. Finally, {PC_i, γ_j, E(γ_j)} is returned to A_2 and also added to L_S.

H query: This is the same as the definition in Theorem 1.

SigGen query: This is the same as the definition in Theorem 1.

Replay query: Upon receiving the signature from A_2, C_2 conducts the AuthMess algorithm to check the validity of the received signature. The received signature is compared with the newly-generated signature of E(γ_j') (γ_j' ≠ γ_j). Finally, the adversary A_2 obtains the signature SIG(TS||HID||E(γ_j)), as well as {TS, HID, E(γ_j)} of PC_i, by querying C_2; as a result, the equation ê(SIG(TS||HID||E(γ_j)), g) = ê(Y_i, HID) holds. Furthermore, A_2 outputs another signature SIG(TS'||HID'||E(γ_j')). Assume this forged signature can pass the authentication, that is, ê(SIG(TS'||HID'||E(γ_j')), g) = ê(Y_i, HID). Then the forged signature can pass the authentication only when Y_i' = Y_i and HID' = HID. That is, g^SSK'||TS' = HID, so that SSK' = SSK, which contradicts the aforementioned assumption. Hence, forgery based on the acquired messages is not feasible in the proposed group key management scheme between HC and PCs.

Theorem 4. The proposed group key management scheme between PC and sensors is existentially unforgeable in the random oracle model.

Proof of Theorem 4. The proof of Theorem 4 is similar to the above proof of Theorem 3.

Forward Security

In this section, we analyze the forward security property of the proposed protocol.

Theorem 5. The proposed group key management scheme between HC and PCs provides forward security against an adversary. That is, the revoked PCs (patients) cannot get access to the current communication.

Proof of Theorem 5. This theorem is analyzed through game G_3. Let A_3 be the adversary colluding with the revoked PC_leave in department j. It is worth noting that A_3 obtains all the secret information stored in PC_leave and wants to derive the current group key PGK_{j-leave}. After receiving the keying message γ_{j-leave} = PGK_{j-leave} × µ_leave from HC, A_3 conducts the modulo operation to derive the group key. However, as described in the aforementioned sections, for the revoked PC_leave, HC subtracts var_leave from µ so that µ_leave = µ − var_leave. In this case, the rekeying message only involves the information of the remaining n − 1 PCs, and the revoked PC_leave cannot derive the correct group key; that is, PGK_{j-leave} ≠ γ_{j-leave} mod PSK_leave. Thereafter, the remaining n − 1 PCs in department j can update their new group key securely. We assume that the size of PSK_leave is ℓ bits. As a result, A_3 has to perform on the order of 2^ℓ trials in order to obtain one PSK_i of the remaining n − 1 PCs. Accordingly, the probability that A_3 can successfully obtain PGK_{j-leave} is (n − 1)/2^ℓ. Thus, forward security is provided in our protocol between HC and PCs. A toy construction of the CRT-based rekeying value µ is sketched below.
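The proof relies on the CRT structure of the rekeying message γ_j. The following toy sketch (with small, insecure numbers) shows one standard way to build µ = Σ var_i with µ ≡ 1 (mod PSK_i) for every member, so each member recovers the group key as γ_j mod PSK_i, while dropping var_leave locks the revoked PC out. The exact construction of var_i in the paper may differ; this is an assumption consistent with the notation used in the proof.

```python
from math import prod

def crt_mu(moduli):
    """mu = sum(var_i) with var_i = (P/p_i) * ((P/p_i)^{-1} mod p_i),
    so that mu = 1 (mod p_i) for every member modulus p_i."""
    P = prod(moduli)
    vars_ = []
    for p in moduli:
        Mi = P // p
        vars_.append(Mi * pow(Mi, -1, p))  # CRT basis element for p
    return vars_, sum(vars_)

psks = [10007, 10009, 10037]        # toy pairwise-coprime member keys PSK_i
pgk = 4242                          # group key, smaller than every PSK_i
vars_, mu = crt_mu(psks)
gamma = pgk * mu                    # broadcast rekeying message gamma_j
assert all(gamma % p == pgk for p in psks)

# Revocation: subtract var_leave, so mu_leave is no longer congruent to 1
# modulo PSK_leave, and the revoked PC cannot recover the new group key.
mu_leave = mu - vars_[-1]
new_pgk = 5353
gamma2 = new_pgk * mu_leave
assert gamma2 % psks[0] == new_pgk and gamma2 % psks[1] == new_pgk
assert gamma2 % psks[-1] != new_pgk  # revoked modulus yields garbage
```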
Theorem 6. The proposed group key management scheme between PC and sensors provides forward security against an adversary. That is, the revoked sensors cannot get access to the current communication.

Proof of Theorem 6. As illustrated above, the sensors are closely attached on or implanted in the patient's body and are fully controlled by the patient. Assume a sensor is removed from the patient's body. In this case, PC_i assigns new secret messages, including PID_i, nsk and a master key subset, to the remaining sensors. The whole group key generation process is then conducted again to refresh the group key. In this way, the revoked sensor cannot derive the new group key since the vital secret information has changed.

Resistance to Collusion Attack

In this section, we analyze the collusion attack resistance of the proposed protocol.

Theorem 7. The proposed group key management scheme between HC and PCs resists the collusion attack. That is, revoked PCs cannot collude to derive the current group key.

Proof of Theorem 7. We define the collusion attack through game G_4. Let A^1_4 and A^2_4 be the adversaries removed from department j at times t_1 and t_2 (t_1 < t_2), respectively. At time t_1, A^1_4 leaves the department with the acquired group key PGK_{t1−}. Meanwhile, the rekeying message γ_{t1} is obtained by A^2_4, and the updated group key PGK_{t1+} is derived by A^2_4, which remains in the group until t_2. By colluding, the two adversaries therefore hold PGK_{t1−}, PGK_{t1+}, γ_{t1} and γ_{t2}. With the above information, they attempt to compute the current group key according to PGK_{t2+} = γ_{t2} mod PSK_{A^η_4} with η ∈ {1, 2}. Assume the size of PSK_{A^η_4} is ℓ bits. The probability that A^1_4 and A^2_4 can successfully obtain the group key is then (n − 2)/2^ℓ. Hence, the collusion attack is prevented.

Performance Analysis

In this section, we present the performance analysis of the proposed protocol. As illustrated in the above sections, our scheme consists of two parts: the group key management between HC and PCs, and the group key management between PC and sensors. The performance of the two schemes is considered respectively, and the corresponding simulations and results are presented subsequently.

Group Key Management between HC and PCs

The proposed protocol is compared with two state-of-the-art group key management protocols: ESSA [4] and DAKM [44]. The comparisons of the computational cost and storage, as well as the communication cost, are presented as follows.

Computational Cost and Storage

The computational cost is defined as the total time consumption for group key generation [44]. Additionally, the storage mentioned here refers to the required memory size for the corresponding operations. The comparison with ESSA and DAKM is given in Table 2. We denote the modulo operation as mod, the exponential operation as Ex and the bilinear pairing as e. Enc and Dec refer to encryption and decryption. Additionally, H, M, D and A represent the one-way hash function, multiplication operation, division operation and addition operation, respectively. Finally, the point multiplication operation is denoted as p.

Communication Cost

The comparison of the communication cost is given in Table 3. Accordingly, both DAKM and our protocol require one broadcast for the whole process, which is efficient for resource-constrained wireless sensors.

Group Key Management between PC and Sensors

In this section, the proposed protocol is analyzed and compared with the ESSA protocol [4]. The comparisons of the computational cost and storage, as well as the communication cost, are illustrated as follows.
Computational Cost and Storage

The comparison with ESSA [4] on the computational cost and storage is given in Table 4; the notations used in the table are the same as those in Table 2. As illustrated above, the sensors in subset Λ_M ⊆ C_i obtain the session key Sk^M_Ψ. Note that the process repeats for ρ rounds, so that M ∈ [1, ρ]. For a better description, we assume that there are Θ_M sensors in Λ_M and Φ_M sensors in the subset Λ_M ∩ Λ_{M+1}. In this case, the computational cost on the PC_i side is (ρ + 1)Ex + 2ρH + ρEnc. On the sensor side, we consider the average computation required for message authentication and encryption. The detailed procedure is as follows. First, after receiving the first message from PC_i, the computation for each sensor in subset Λ_1 is 1e + 1H + 1Dec, so the total computation is Θ_1(1e + 1H + 1Dec). Similarly, in the second round, after receiving the message from PC_i, the computation for all the Θ_2 sensors in subset Λ_2 is Θ_2(1e + 1H + 1Dec). After that, the Φ_1 sensors in Λ_1 ∩ Λ_2 each broadcast the message to the others with computation 1Enc + 1H + 1Ex, and the Θ_2 − Φ_1 sensors in Λ_2 \ (Λ_1 ∩ Λ_2) each compute 1e + 1H + 1Dec. Hence, the total computation in the i-th round (i ≥ 2) is:

Θ_i(1e + 1H + 1Dec) + Φ_{i−1}(1Enc + 1H + 1Ex) + (Θ_i − Φ_{i−1})(1e + 1H + 1Dec).

In conclusion, the total computational cost for all the sensors is:

Θ_1(1e + 1H + 1Dec) + Σ_{i=2}^{ρ} [Θ_i(1e + 1H + 1Dec) + Φ_{i−1}(1Enc + 1H + 1Ex) + (Θ_i − Φ_{i−1})(1e + 1H + 1Dec)].

Consequently, the average computational cost on the sensor side, AveComp_Sen(i), is this total divided by m. We consider the extreme situation where PC_i needs to conduct m − 1 broadcasts; under this assumption the computational cost reaches its upper limit. In this way, the maximum average computational cost on the sensor side can be bounded, and according to the practical requirement m ≥ 6 we obtain:

AveComp_Sen(i) ≈ 4(e + H + Dec).

Subsequently, the storage comparison with ESSA is shown in Table 4. It is notable that the value k·PSK_i in the table denotes a certain storage allocated for the preloaded master keys on both the PC and sensor side. The comparison result shows that our protocol requires less memory than the ESSA protocol.

Communication Cost

The comparison of the communication cost is given in Table 5. In ESSA [4], the transmission type during the authentication between PC and sensors is unicast, after which broadcast is used for group key derivation; PC_i communicates with each sensor for four rounds, so the total communication cost is 4m + 1. As described above, in our protocol PC_i broadcasts ρ times to assign the necessary messages to the sensors. Moreover, each sensor in subset Λ_M ∩ Λ_{M+1} (M ∈ [1, ρ − 1]) broadcasts the keying message to the other sensors. In this way, the total communication cost is ρ + Σ_{i=1}^{ρ−1} Φ_i. Similar to the above section, we set ρ = m − 1 and Φ_i = 2, i ∈ {1, ..., ρ − 1}, to compute the maximum communication cost, which equals (m − 1) + 2(m − 2) = 3m − 5. Since 3m − 5 < 4m + 1, it is obvious that our protocol requires less communication cost for group key management between PC and sensors.
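The closed form 3m − 5 for the worst case is easy to sanity-check numerically; the snippet below encodes the count ρ + Σ Φ_i under the extreme-case assumptions ρ = m − 1 and Φ_i = 2 used above.

```python
def comm_cost(m, rho=None, phis=None):
    """Total broadcasts: rho PC-side rounds plus the relaying sensors."""
    rho = (m - 1) if rho is None else rho
    phis = [2] * (rho - 1) if phis is None else phis
    return rho + sum(phis)                # rho + sum of Phi_i broadcasts

for m in range(3, 8):
    assert comm_cost(m) == 3 * m - 5      # matches the closed form
    assert comm_cost(m) < 4 * m + 1       # cheaper than ESSA's 4m + 1
```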
Simulation Experiments and Results

In the previous two sections, a performance analysis and comparison emphasizing the computational and communication costs was provided, along with a mathematical discussion and estimation for the extreme cases. In addition, relevant simulations are presented in order to prove the efficiency of our protocol. It is worth noting that the time consumption for group key generation and distribution is of particular concern, as it is a crucial factor in the performance evaluation of WBANs. The experiments were conducted on Windows 10 with a 2.70-GHz Intel(R) Core i7-6820HK CPU and 16 GB memory. The two parts of the proposed protocol, namely the group key management between HC and PCs and the group key management between PC and sensors, were implemented in Visual Studio 2015 in C++. Moreover, the Pairing-Based Cryptography (PBC) library was adopted. The experiments on group key management between HC and PCs were conducted first. Note that the assignment of the necessary secret information is designed to be completed before the formal group key generation; hence, the time consumption for SecKeGen was not included. The simulation was performed several times with different numbers of PCs. The comparison results with ESSA [4] and DAKM [44] are presented in Figures 2 and 3. As shown in Figure 2, our protocol required less running time; when the number of PCs increased, the running times of our protocol and DAKM [44] became similar. Additionally, the running time for each PC was affected by the key size: as shown in Figure 3, our protocol required clearly less running time on the PC side when the key size was set to 512 bits. After that, the group key updating time of HC was measured in order to demonstrate the efficiency of our CRT-based group key updating scheme. Note that both joining and revoked PCs were counted as updated PCs. The key updating time is shown in Figure 4. Similarly, the comparison result with ESSA [4] on the group key generation time between PC and sensors is given in Figure 5. In summary, the above simulation results demonstrate that our protocol provides better performance than the state-of-the-art group key management protocols.

Conclusions

In this paper, a novel practical WBAN system model with a notification channel is designed. Moreover, an efficient group key management protocol employing the Chinese remainder theorem (CRT) between HC and PCs is introduced, which supports secure group key updating; in this way, the HC is capable of broadcasting messages to different patient groups. Additionally, the group key scheme between PC and sensors is designed, motivated by coded cooperative data exchange (CCDE). A formal security analysis is given, indicating that the proposed protocol achieves the desired security properties. Furthermore, the performance analysis demonstrates that the proposed protocol is efficient compared with the state-of-the-art.
CAVER: a new tool to explore routes from protein clefts, pockets and cavities Background The main aim of this study was to develop and implement an algorithm for the rapid, accurate and automated identification of paths leading from buried protein clefts, pockets and cavities in dynamic and static protein structures to the outside solvent. Results The algorithm to perform a skeleton search was based on a reciprocal distance function grid that was developed and implemented for the CAVER program. The program identifies and visualizes routes from the interior of the protein to the bulk solvent. CAVER was primarily developed for proteins, but the algorithm is sufficiently robust to allow the analysis of any molecular system, including nucleic acids or inorganic material. Calculations can be performed using discrete structures from crystallographic analysis and NMR experiments as well as with trajectories from molecular dynamics simulations. The fully functional program is available as a stand-alone version and as plug-in for the molecular modeling program PyMol. Additionally, selected functions are accessible in an online version. Conclusion The algorithm developed automatically finds the path from a starting point located within the interior of a protein. The algorithm is sufficiently rapid and robust to enable routine analysis of molecular dynamics trajectories containing thousands of snapshots. The algorithm is based on reciprocal metrics and provides an easy method to find a centerline, i.e. the spine, of complicated objects such as a protein tunnel. It can also be applied to many other molecules. CAVER is freely available from the web site . Background The shape of a protein is complicated by its many clefts, pockets, protrusions, channels and cavities. Protein concavities offer a unique microenvironment for biological functions, such as ligand binding or enzymatic catalysis. Protein shape is of great interest to medicinal chemists working in the drug discovery industry and looking for inhibitors, enzymologists interested in identifying substrate molecules based on the well known "lock and key" mechanism and protein chemists studying protein-protein or protein-DNA interactions. The identification of protein pockets and cavities has been the focus of several studies [1][2][3][4] and various algorithms have been developed for the calculation of protein volume and surface area. A large number of enzymes possess buried active sites that are connected to the external solvent environment by access routes (tunnels or channels). A catalytic step must always be preceded by the formation of an enzyme-substrate complex, which may require passage of the substrate through these routes. The size and shape of the access routes may become an important determinant of enzyme substrate specificity [5]. Changes in the diameter of the access tunnels during the dynamic movement of a protein play an important biological role, such as that described for acetylcholinesterase [6]. Two narrow active site gorges are positioned deep inside the protein core and movement of the residues making up the gorge walls is necessary to allow ligands access to the active site. A method based on molecular surface was used for the calculation of the gorge diameter in acetylcholinesterase. The diameter was defined as the maximum probe size that produces a continuous molecular surface between an active site and a solvent. 
Calculation of one diameter in this approach requires the generation of several molecular surfaces using a series of probes of increasing size [7]. A more effective method is implemented in the CAST program, which utilizes the alpha shape theory. CAST computation of pockets and their openings does not require direct human interaction; the required inputs are atomic coordinates, van der Waals radii, and the radius of the probe sphere [4]. The program VOIDOO, a component of the O package, utilizes a grid-based algorithm for detection, delineation, and measurement of protein cavities and solvent accessible pockets. The VOIDOO algorithm suffers from crude grid spacing and the "can-of-worms" phenomenon [1]. The central problem in the analysis of tunnels in protein structures is the identification of the centerline, i.e. spine, of a 3D object. Algorithms dealing with centerlines have been applied in medical procedures, for example in virtual colonoscopy and bronchoscopy [8][9][10][11]. The aim of this study was to develop a rapid and accurate algorithm for the identification of routes from buried active sites to the external solvent in static protein structures. We aimed to produce an algorithm that could also be applied to molecular dynamics trajectories. Further, the algorithm was intended to allow changes in the radius of a channel gorge with time to be monitored and the most probable access routes to be identified. Several other requirements were taken into consideration during development of the algorithm and its implementation: (i) speed, thus enabling rapid analysis of an entire trajectory from a molecular dynamics simulation, i.e. thousands of snapshots, in a few hours; (ii) easy identification of a starting point for the calculation; (iii) that the algorithm functions independently of the probe radius; (iv) storage of paths in PDB format; and (v) intuitive visualization.

The algorithm

The most accessible path from the protein cavity to the bulk solvent has to be found by systematic exploration of the protein interior, in order to calculate the access route gorge radius (Fig. 1). In our model, a protein consists of hard sphere atoms with appropriate van der Waals radii. The protein body is modeled on a discrete three-dimensional grid and all grid nodes are clustered into two classes: nodes located in the interior of the protein body (inside atomic vdW radii) and nodes located outside the protein body. Outer nodes can lie in the cavity, in access tunnels or in the external environment of the protein, e.g. a bulk solvent. The convex approximation of the protein, termed the 'convex hull', is used to distinguish nodes that lie either in the interior or exterior of the protein (Fig. 1). Nodes that are located outside of the convex hull are eliminated and not used in further calculations. Attention is paid to nodes that lie on a boundary of the modeled convex hull. These nodes are potential end-stops of the grid search algorithm because each boundary node can be treated as a putative outfall of the channel.

Figure 1: Sketch of the method implemented in CAVER. The black bold circle represents the starting point. The protein is visualized by gray circles with van der Waals atom radii mapped on a discrete grid (black dots). The solid line represents the boundary between the protein (convex hull) interior and its surroundings. Empty circles represent the maximally inscribed balls on the probable route (dashed line).
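As a concrete illustration of the hull-filtering step, the following sketch uses SciPy's Qhull binding as a stand-in for the qhull library used by CAVER itself; the facet-halfspace test is the O(NK) procedure described in the Implementation details below. The toy coordinates are random, not a real protein, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.spatial import ConvexHull

def nodes_inside_hull(atom_coords, grid_nodes, tol=1e-9):
    """Keep only grid nodes on the inner side of every hull facet."""
    hull = ConvexHull(atom_coords)               # facets as a.x + d <= 0 inside
    A, d = hull.equations[:, :3], hull.equations[:, 3]
    inside = (grid_nodes @ A.T + d <= tol).all(axis=1)
    return grid_nodes[inside]

atoms = np.random.rand(50, 3) * 50.0             # toy "protein" coordinates
grid = np.stack(np.meshgrid(*[np.arange(0, 50, 1.0)] * 3), -1).reshape(-1, 3)
print(len(nodes_inside_hull(atoms, grid)), "of", len(grid), "nodes kept")
```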
The mathematical object that is constructed is a vertex-weighted graph, and the search algorithm is applied to it to identify the shortest low-cost path. Each possible path from the active site to the exterior is evaluated as a positive value. This value represents the relative cost of navigating each path, in what could be described as a "highway toll": long and complicated paths are more "expensive", while short direct paths are "cheaper" (Fig. 2). More formally, the cost function C(P) is defined (Eq. 1) for a given path P as the sum of node-price-function values calculated for all nodes forming the path P. Let N(P) be the set of all points from the path P; then C(P) is expressed as

C(P) = Σ_{x ∈ N(P)} c(x).    (1)

This cost function depends on the number of nodes in the sum and, as such, is not suitable for the purposes of comparison. A normalized cost function is defined (Eq. 2) to avoid the dependence on the number of summands:

C_norm(P) = (1/N) Σ_{x ∈ N(P)} c(x),    (2)

where N is the number of summands. Next, the single node cost function c(x) must fulfill two requirements. First, it must provide a positive value for each node and a low value for nodes that are surrounded by empty space. Second, it must identify preferred nodes that are surrounded by sufficient empty space to allow a hypothetical substrate to pass through a channel without risk of collision. These low-weighted nodes are preferentially selected by the search algorithm. In our case, the cost function c(x) for a single node was chosen (Eq. 3) as

c(x) = 1 / (r_max(x) + ε),    (3)

where the function r_max(x) is equal to the maximal radius of a hypothetical ball that can be inserted at node x while just touching the protein surface. The small constant ε is included only for technical purposes, to remove the singularity of the function at points where r_max(x) equals zero. The graph-searching algorithm then establishes the lowest cost path from the active site to the external environment. The calculated path can be visualized (Fig. 3 and Fig. 4) using the r_max radii for each node of the path. The smallest radius represents the channel gorge and, as such, its point coordinates can be determined together with the gorge radius r_gorge.

Figure 3: Access path visualized by PyMol.

Implementation details

The method was implemented in the CAVER program (Additional file 1 and the web page http://loschmidt.chemi.muni.cz/caver/). The program uses the publicly-licensed software qhull [12], which quickly (in O(N log N) time) computes a convex hull for a given set of N points in three dimensions. The result of qhull was used to eliminate nodes located outside the convex hull. All points were tested to determine whether they lay inside or outside the convex hull. This can be achieved by traversing all facets of the convex hull and testing whether a point lies in the same halfspace as another convex hull point. This process takes O(NK) time, where N is the number of points inside the convex hull, and K is the number of facets forming the convex hull. Based on graph theory, several methods for the shortest path problem have been described. The most widely used algorithms are Dijkstra's [13], Bellman-Ford's [14,15], the A* search algorithm, and the Floyd-Warshall algorithm [16]. In our case, a positively vertex-weighted graph is plotted on a three-dimensional grid, where the source vertex is known while the destination vertex is not. We used a modified form of Dijkstra's algorithm, which effectively solves the problem of the shortest path from a single source vertex to the destination one; a sketch of this vertex-weighted search is given below.
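A compact way to realize the modified, vertex-weighted Dijkstra search is sketched below. It assumes precomputed callables for the grid neighborhood, r_max(x) and the boundary test, and it terminates as soon as a convex-hull boundary node is popped, per the early-termination modification discussed next; this is an illustrative reimplementation, not the CAVER source.

```python
import heapq

def caver_path(start, neighbors, r_max, is_boundary, eps=1e-6):
    """Lowest-cost path from `start` to any boundary node of the grid."""
    cost = lambda x: 1.0 / (r_max(x) + eps)           # Eq. 3, computed lazily
    dist, prev = {start: cost(start)}, {start: None}
    heap = [(dist[start], start)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue                                  # stale heap entry
        if is_boundary(x):                            # first boundary vertex
            path = []                                 # popped = lowest cost
            while x is not None:
                path.append(x)
                x = prev[x]
            return path[::-1]
        for y in neighbors(x):
            nd = d + cost(y)                          # vertex-weighted update
            if nd < dist.get(y, float("inf")):
                dist[y], prev[y] = nd, x
                heapq.heappush(heap, (nd, y))
    return None                                       # active site is sealed
```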
Dijkstra's algorithm was originally designed for edge-weighted graphs, but its vertex-weighted variation is easy to implement, and the algorithm can be used even if the destination vertex is unknown. In the main loop of the algorithm, the shortest path to the closest available vertex (measured from the source vertex) is determined; then, estimates of the shortest path for all the adjacent vertices are updated. This means searching can be terminated as soon as the nearest available vertex is a boundary vertex, indicating that the shortest path has been identified. To speed up the algorithm, a modification related to the cost function was implemented: the single node cost function is evaluated as part of the main loop of Dijkstra's algorithm, and only at the nodes where it is required. The identified path can be easily visualized since the program writes a PDB file containing the coordinates of the path nodes accompanied by the maximum probe radius that touches the vdW protein surface (Fig. 3). A user-friendly GUI was implemented in the graphic program PyMol [17].

Method performance

An algorithm to perform a skeleton search on a defined grid was developed and implemented in the CAVER program as described in the Methods section. The algorithm automatically finds the easiest path from the starting point, typically located inside the molecule, to the exterior of the molecule. The identified path resembles a tunnel that connects protein clefts, pockets or cavities with the surrounding bulk solvent. The tunnel characteristics, e.g. length, mean radius and gorge radius, are determined and can be further analyzed. In molecular dynamics trajectories it is possible to analyze time fluctuations of tunnel characteristics and construct a dynamic picture of tunnel behavior. The tunnel gorge radius r_gorge is one of the most important tunnel characteristics because the tunnel gorge can form a bottleneck for substrate access or product release to and from the active site of a protein. The radius r_gorge as estimated by the algorithm is always underestimated on finite grids. The maximal error ε_max of an r_gorge estimation is expressed by the equation (Eq. 4):

ε_max = (√3/2) d,    (4)

where d is equal to the length of the grid cell edge. The probability of realizing ε_max is equal to zero, so this error is an overestimate; therefore the mean error should be defined. The mean error of the r_gorge determination is equal to (Eq. 5):

μ_ε = 0.48 d,    (5)

and its variance and deviation are equal to (Eq. 6):

var(ε) = 0.019 d², σ_ε ≈ 0.14 d.    (6)

The r_gorge estimation should therefore be corrected by adding 0.48d to r_gorge. The corrected r_gorge estimate has a mean error value μ_ε = 0 and an error variance var(ε) = 0.019d². In the case of a globular protein (50 × 50 × 50 Å³), ε_max(r_gorge) amounts to 0.43 Å for a grid with d = 0.5 Å; however, the mean ε(r_gorge) equals 0.24 Å, and σ_ε = 0.07 Å. The results of the tests (depicted in Fig. 4) focus on the convergence of the identified paths with d decreasing from 1.0 to 0.3 Å. Performance of the method scales with the tunnel volume, i.e. the number of vertices searched, rather than with the number of atoms. In the case of haloalkane dehalogenases (active site volume ~200 Å³), the typical calculation of one tunnel takes about 10-12 sec, but in the case of cytochrome P450 2C9 or 3A4 (which have larger active sites of ~500-600 Å³) the calculation takes about 20-25 sec. In the case of very large cavities (e.g. RNA), a calculation may take several minutes at low resolution (d = 1-2 Å).
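The bias correction and its standard deviation reduce to a two-line helper; the constants 0.48 and 0.019 are those derived above, and the example values reproduce the d = 0.5 Å figures quoted in the text.

```python
def corrected_gorge(r_measured, d):
    """Correct the grid-biased gorge radius (Eqs. 5-6); d is the cell edge."""
    return r_measured + 0.48 * d, (0.019 * d ** 2) ** 0.5

r, sigma = corrected_gorge(1.20, 0.5)            # e.g. a 1.2 A raw estimate
print(f"r_gorge = {r:.2f} +/- {sigma:.2f} A")    # 1.44 +/- 0.07 A
```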
The program performance was tested on a Pentium IV 3.2 GHz machine with 2 GB RAM running the Windows XP Professional operating system.

Case study

Haloalkane dehalogenases (EC 3.8.1.5) are microbial enzymes that cleave a carbon-halogen bond in a broad range of halogenated compounds [18]. The molecular structures of three different haloalkane dehalogenases are known: DhlA from Xanthobacter autotrophicus GJ10 [19][20][21][22][23][24][25][26], DhaA from Rhodococcus sp. [27] and LinB from Sphingomonas paucimobilis UT26 [28][29][30][31]. The overall shape of haloalkane dehalogenases is globular, with the active site buried between the main domain of an α/β hydrolase fold and a cap domain with a uteroglobin fold. There are three access routes connecting the protein surface with the active site, denoted the main tunnel, the upper tunnel and the slot (Fig. 5). The three proteins differ in the number of routes that provide access to their active site: LinB has the most available active site, accessible through three tunnels; the active site in DhaA is accessible through the upper tunnel and the slot, while DhlA is believed to have a single access route via the main tunnel [32]. Here we used CAVER to conduct a thorough analysis of the access paths using all available X-ray structures and pre-calculated molecular dynamics trajectories.

Figure 5: Access routes of DhlA (A), DhaA (B) and LinB (C) identified by protein crystallography [19][20][21][22][23][24][25][26][27][28][29][30][31] and molecular dynamics simulations [32]. The slot in DhlA was described in this study.

Analysis of X-ray structures

Analysis of the DhlA crystal structures identified two access routes: the main tunnel and a route corresponding to the slot in DhaA and LinB. The gorge, i.e. the bottleneck, of the main tunnel is made up of W175, L262 and H289; the gorge of the slot is formed by P168, F164, F172 and the backbone atoms of G171, K261 and L262. In the next step, a more systematic analysis was conducted to identify additional paths. Four paths were calculated for each structure and averaged (Table 1). CAVER found that, in each case, two tunnels were equivalent to previously described paths, i.e. the main tunnel and the slot. Two other paths had significantly higher cost functions and narrower gorges (Table 1), making it less likely that they fulfill a biological role. We note that the crystallographic analysis [22] revealed one access tunnel, while two parallel access paths were deduced from the kinetic data [26]. The existence of the second tunnel in DhlA provides an additional explanation for the elevated activity of F172W with 1-chlorohexane, which is attributed to the increased flexibility of the 'helix-loop-helix' region [26]. The three crystal structures available for the DhaA enzyme (1BN6, 1BN7 and 1CQW) were analyzed for the presence of access routes. CAVER detected one clearly preferred route, which corresponded to the upper tunnel. The gorge of this tunnel is made up of W141, F144, A145, F159, V172 and C176 (numbered according to the DhaA sequence). The additional paths were located in the slot (Fig. 6B). The slots of the three structures studied showed slightly different spatial positions and variable sizes with respect to the mean gorge radius, mainly due to repositioning of the side-chain of R133. The cost function of these routes was almost twice that of the upper tunnel (Table 1), but still comparable with the main tunnel and the slot of DhlA, which are known to be involved in substrate binding and product release. Exchange of solvent molecules between the active site and the slot was observed during a 1 ns molecular dynamics simulation [32].
Analysis of the eleven available LinB crystal structures (1CV2, 1D07, 1K5P, 1K6E, 1MJ5, 1K63, 1IZ8, 1IZ7, 1G5F, 1G4H and 1G42) identified the lower tunnel as the most accessible route in nine of them. The gorge of this tunnel is made up of P144, D147, L177 and A247. The upper tunnel was identified as the most accessible route in the remaining two structures; its gorge is formed by D147, F151, V173 and L177. A systematic search for four tunnels in the structures revealed the slot as another possible access route (Fig. 6C). The cost function of this route is, however, twice that of the lower tunnel (Table 1). Its gorge is formed by D142, P144, L248, G251 and M253. Previous molecular dynamics simulations [32] demonstrated that all three access routes can be used for the exchange of water molecules between the active site and the bulk solvent. The existence of alternative export routes explains the activity of LinB mutants that carry a large amino acid residue at position L177 [5]. L177 is located inside the lower tunnel and its substitution may result in closure of this tunnel.

Analysis of molecular dynamics trajectories

A molecular dynamics simulation of the DhlA enzyme was analyzed using CAVER to determine the easiest routes from the active site. About 400 snapshots taken at 5 ps intervals were analyzed; the tunnels were identified and their corresponding gorges were further analyzed by visual inspection (Fig. 7A). CAVER identified two clusters of gorges that correspond to two different paths. The most populated tunnel gorges (78%) are located in the main tunnel and the remaining gorge clusters are located in the slot. As in the previous case, the molecular dynamics simulation of the DhaA enzyme was analyzed using the CAVER program. Two main clusters were identified by CAVER, one having two subclusters (Fig. 7B) of gorges, resulting in three different paths. The most populated access gorges (64%) were located in the upper tunnel and the two remaining gorge clusters were positioned in the slot. The two subclusters in the slot (Fig. 7B) have populations equal to 26% (cluster 2) and 10% (cluster 3), respectively. Analysis of the LinB molecular dynamics trajectory identified the main tunnel as the most accessible route to the active site (Fig. 7C).

Conclusion

A new algorithm for the identification of tunnels in large molecular systems was developed and implemented in the CAVER program, which is available within the public domain. The algorithm automatically explores a grid, which is constructed over the molecule and stripped to its convex hull. Nodes are evaluated using a cost function, which quantifies the amount of free space around the node. The grid search algorithm is used to find the lowest-cost centerline path between a given starting point and the surface of the molecule. The user needs only to provide the molecular geometry, atomic van der Waals radii and the designated starting point, enabling the analysis of any molecular system, be it protein, nucleic acid or inorganic material. The algorithm is sufficiently rapid and robust for the routine analysis of molecular dynamics trajectories that contain thousands of snapshots. The program is also available as a plug-in for PyMol and, additionally, a Web-based version of the program offers analysis of static protein structures online.
Authors' contributions

MP developed the algorithm for the search of access paths, wrote and tested the software, prepared the initial draft of the manuscript and prepared the web pages; MO developed the concept of the software, conducted performance tests, wrote parts of the manuscript and prepared graphic material for the web; PB contributed ideas on the algorithm and conducted statistical analyses; PK conducted performance tests; JK financially supported MP; JD contributed the fundamental biochemical concept and interpreted data, wrote parts of the manuscript, contributed ideas on the web pages and coordinated the project.

Notes to Table 1: (a) annotation of paths is provided in Figure 5. r_gorge: gorge radius; C: mean cost function; l: mean length of the tunnel. The values are averaged over all available X-ray structures.
Forensic Metrology: Its Importance and Evolution in the United States Forensic measurements play a significant role in the U.S. criminal justice system. Guilt or innocence, or the severity of a sentence, may depend upon the results of such measurements. Until recently, however, forensic disciplines were largely unaware of the field of metrology. Accordingly, proper measurement practices were often, and widely, neglected. These include failure to adopt proper calibration techniques, establish the traceability of results and determine measurement uncertainty. These failures undermine confidence in verdicts based upon forensic measurements. Over the past decade, though, the forensic sciences have been introduced to metrology and its principles, leading to more reliable measurement practices. The impetus for this change was driven by many forces. Pressure came initially from criminal defense lawyers challenging metrologically unsound practices and results relied upon by government prosecutions. Litigation in the State of Washington led this movement, spurring action by attorneys in other jurisdictions and eventually reform in the measurement practices of forensic labs around the country. Since then, the greater scientific community, other forensic scientists and even prosecutors have joined the fight. This paper describes the fight to improve the quality of justice by the application of metrological principles and the evolution of the field of forensic metrology.

Beginnings

Forensic measurements play a prominent role in the criminal justice system. They are relied upon to investigate and prove charges ranging from murder to simple traffic offenses. Some crimes and punishments, for example driving with a breath alcohol concentration (BrAC) in excess of a prohibited level, can even be established only by forensic measurements. As ubiquitous as forensic measurements are, though, historically they have often been poorly understood and misrepresented by forensic and legal professionals alike. This undermines the ability of fact finders to properly weigh such evidence and the public's confidence in verdicts based upon it. The fight to change this state of affairs through the imposition of a complete and coherent metrological framework upon forensic measurements was taken up by attorneys in Washington State defending individuals charged with the crime of Driving Under the Influence of Alcohol and Drugs (DUI). DUI defense was a natural place for such a movement to take hold, as almost all DUI prosecutions include the results of forensic measurements as evidence. These include breath and blood alcohol test results as well as measurements of the amount or concentration of drugs in an individual's blood. The lion's share of the litigation undertaken in this cause centered on BrAC tests, as they are by far the most frequent forensic measurement performed. Breath alcohol tests are used in the investigation and prosecution of DUI throughout the United States. Their purpose is to measure either an individual's breath or blood alcohol concentration. An important element of many breath test machines is a simulator. A simulator is a device that is filled with an aqueous-alcohol solution, referred to as a simulator solution. The simulator heats the solution to a specified temperature, producing a vapor of a specified alcohol concentration. This vapor is used both to calibrate breath test instruments and to check their accuracy during the performance of a breath test.
In order to ensure that the vapor created has the correct concentration, a thermometer is used to monitor the temperature of the solution from which it is produced. In early 2001, the Washington State Toxicology Lab claimed that the accuracy of the temperatures reported for the solutions was determined by a specified "margin of error" attributed to the thermometers used to measure them. An investigation by the defense turned up several problems with the State's claim, however. First, the uncertainty of the temperatures measured by the thermometers used was found to be significantly greater than the margin of error that had been claimed. Second, the thermometers themselves were not being used in a manner consistent with their validation, rendering any values reported unreliable even had the claimed margin of error been correct. After a day-long hearing wherein these issues were addressed, a Court suppressed the breath test results. [1] The victory wasn't about "just trying to get another guilty person off," as is so often lamented by critics, though. It was about preventing the government from using flawed science to deprive Citizens of their liberty. Every one of us is innocent until proven guilty. That is one of the safeguards against tyranny provided by our Constitution. When the government tells a judge or jury that science supports claims that it does not, it is tantamount to committing a fraud against our system of Justice. It doesn't matter whether the deception is purposeful or not because the result is the same: a Citizen's liberty is imperilled by a falsehood. The government sought to fix the problems that had been encountered in the use of the thermometers by requiring that the temperatures measured by such thermometers must, going forward, be traceable to standards maintained by NIST. Unfortunately, the government neither knew the definition of traceability nor understood how to establish it. The Defense Bar put an end to this bad government science by showing what was required to establish the traceability of the temperatures reported by these simulator thermometers. The Washington State Supreme Court suppressed breath tests state-wide in the first published decision explicitly recognizing metrology in a forensic context. In doing so it enunciated a clear and common sense principle: "If the citizens of the State of Washington are to have any confidence in the breath testing program, that program has to have some credence in the scientific community as a whole." [2]

2. Not Just Bad Science, Bad Tribunals

But bad government science doesn't necessarily arise from bad government scientists. Nor is the desire to ensure that science is used correctly to discover truth in the courtroom confined to defense attorneys. Forensic scientists, prosecutors and judges have sought the same goals. In 2004, the head of the Washington State Breath Test Program helped the defense keep the government from administratively suspending a woman's driver's license based upon a misleading test result. The woman had submitted to a breath test that yielded duplicate results, both in excess of the legal limit. Through the State expert, the defense was able to show that the uncertainty associated with the results established that there was actually a 56.75% probability that her true BrAC was less than the legal limit. The bad government science in this case was not that done by forensic scientists.
To the contrary, it was one of the State's top forensic scientists who used metrology to establish that this motorist had most likely not violated the law. Rather, the fault lay with the administrative tribunal prepared to rely upon the misleading result.

Metrology: A Tool for the Critical Analysis of All Forensic Measurements

Measurements involving the determination of a person's breath or blood alcohol concentration are quite common because the crime of DUI is defined by the results of these measurements. They are by no means the only type of forensic measurement, though. Determining the weight of seized drugs using a scale; the speed of a motor vehicle using a radar; the angle at which a bullet entered a wall using a protractor; and even the distance between a drug transaction and a school using a measuring wheel: these are just a few of the many types of forensic measurements that are performed. The same underlying metrological principles that allowed for the analysis of the breath alcohol measurements discussed above apply to every other forensic measurement as well. This leads to an astonishing conclusion. Since the science of metrology underlies all measurements, its principles provide a basic framework for the critical evaluation of all measurements, regardless of the field they arise out of. Accordingly, given a familiarity with metrology, scientists and police officers can better perform and communicate the results of the forensic measurements they perform; lawyers can better understand, present and cross-examine the results of forensic measurements intended to be used as evidence; judges will be better able to subject testimony or evidence based on forensic measurements to the appropriate gatekeeping analysis; and each of these participants will be better prepared to play their role in ensuring that the misuse of science doesn't undermine the search for truth in the courtroom.

Reform Through Litigation

This was the idea attorneys had in mind when, in the summer of 2007, a forensic scientist within the Washington State Toxicology Lab was discovered committing perjury, claiming to have performed measurements that she had not. Upon further investigation, though, the defense discovered that the Lab's problems went far deeper than perjury. The Lab's process for creating simulator solutions for the calibration and checking of breath test machines was in a state of disarray. Failures to validate procedures, follow approved protocols, adhere to scientifically accepted consensus standards, properly calibrate or maintain equipment, and even to simply check the correctness of results and calculations were endemic. In a private memo to Washington's Governor, the State Toxicologist explained that the measurement procedures in question "had been in place for over twenty years and had gone unchallenged, leading to complacency." What allowed the defense to find what others had missed over the years was, again, metrology. Viewed through the appropriate metrological framework, it became clear that complacency had led to the systemic failure of the Lab to adhere to fundamental scientific requirements for the acquisition of reliable measurement results. After a seven-day hearing that included testimony from nine experts, declarations from five others, as well as one hundred and sixty-one exhibits, a panel of three judges issued a thirty-page ruling suppressing all breath test results until the Lab fixed the problems identified. [4] Under new leadership, the Lab subsequently used the same metrological framework to fix its problems that had been used to discover them.
It did so by implementing fundamental metrological principles and practices and obtaining accreditation under ISO 17025, the international standard that embodies these principles and practices. Because of this, the Washington State Toxicology Lab now has one of the best breath test calibration programs in the country. The same metrological principles that can be such effective tools in the hands of legal professionals can be even more powerful when employed by competent forensic scientists. Criminal defense attorneys in other states, including Michigan, California, Minnesota, New Mexico, Pennsylvania and Arizona, have since begun to follow Washington's lead.

The National Academy of Sciences

It was at about this time, in February of 2009, that the National Academy of Sciences released a report on the state of forensic science in America. [5] The Report was very critical of the practices engaged in by many of the forensic sciences, identifying problems in the areas of method validation, adherence to appropriate practices as evidenced by consensus standards, the determination and reporting of measurement uncertainty, and many others. A majority of the scientific issues identified by the Report, though, are those that metrology addresses. What had been done in Washington State with respect to forensic measurement not only beat the Academy to the punch in discovering these issues, but also in identifying the appropriate framework for their solution. In the forensics community, metrology is now being relied upon to address many of the issues identified by the National Academy of Sciences. Its principles are helping to improve how forensic measurements are developed, performed and reported. Accreditation and adherence to international scientific standards are restoring confidence that forensic measurements comply with the same rigorous methodology followed in other sciences. And it is providing a common language for all those engaged in making or relying upon forensic measurements to communicate about them, regardless of application.

A Growing Movement

Despite having had its breath test calibration program accredited, by the summer of 2009 the Washington State Toxicologist had decided that the uncertainty associated with breath test results obtained in Washington would neither be determined nor provided to Citizens who were being prosecuted on the basis of those results. Accordingly, Washington defense lawyers engaged in the fight again. Their argument was that breath test results could not be properly weighed by fact finders absent their uncertainty. In other words, judges and juries could only understand the conclusions supported by a result if they were provided the uncertainty that revealed the range of values that could reasonably be attributed to an individual's BrAC based on that result. The Defense drove home its point during cross-examination of the State's primary expert. The witness was handed a breath alcohol test ticket containing results of 0.081 g/210L and 0.080 g/210L. Assuming proper quality assurance procedures and testing protocols were followed, all parties agreed that these were the results of an "accurate and reliable" test. The expert was then asked whether, given these results, he could state beyond a reasonable doubt that this individual's BrAC exceeded 0.080 g/210L (the per se limit in Washington State). The expert responded: "I would have to say yes based on these results here."
Similar evidence and testimony, concerning a range of forensic measurements, is introduced in courtrooms around the country every day. And based on such evidence and testimony, citizens accused of all manner of crimes plead guilty or are found guilty. The problem is that an accurate and reliable test doesn't necessarily mean what most lay fact finders presume. In fact, as the Court later noted, even an expert may be misled by what is deemed an accurate and reliable result. This was demonstrated by subsequently providing the expert with the breath test result's uncertainty and asking whether he still believed that these results supported, beyond a reasonable doubt, the conclusion that this individual's BrAC exceeded 0.080 g/210L. His response was no. The expert conceded that, despite the fact that the test was accurate and reliable, based on the result's uncertainty there was actually a 44% likelihood that this individual's BrAC was below 0.080 g/210L! Far from establishing the conclusion beyond a reasonable doubt, these "accurate and reliable" test results barely established it as more likely than not! After a five-day hearing that included testimony from four experts as well as 93 exhibits, a panel of three judges issued a thirty-page order explaining that breath test results would henceforth be inadmissible unless they were accompanied by their uncertainty. The rulings from this and other Washington cases concerning measurement uncertainty garnered nationwide attention. Lawyers, judges, forensic scientists and scholars from around the country began discussing and writing about the importance of providing a measured result's uncertainty when the result will be relied upon as evidence at trial. [7] Thomas Bohan, former president of the American Academy of Forensic Sciences, declared it to be "a landmark decision, engendering a huge advance toward rationality in our justice system and a victory for both forensic science and the pursuit of truth." [8] The battle was subsequently taken up by defense attorneys in several other State and Federal Courts, and continues to spread as of the time of this writing. Unfortunately, in a decision that defies reason, the Washington State Court of Appeals decided that it understood science better than scientists. It found that: 1) proper science does not require that the uncertainty of measured results be either determined or provided to the users of those results; and 2) despite the State's top expert having been fooled by breath test results that were not accompanied by their uncertainty, measured results are best understood without their uncertainty. [9] Given the embarrassment suffered by the State Toxicology Lab during the lower court proceedings, however, it now "voluntarily" determines and provides the uncertainty of the results of all breath tests performed within the State of Washington.
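For readers who want to reproduce the arithmetic behind figures like the 44% (or the earlier 56.75%), the sketch below applies the usual normal-distribution model of forensic metrology to the duplicate results from the cross-examination. The combined standard uncertainty u is a hypothetical value chosen here to match the reported likelihood; it is not a figure taken from the court record.

```python
from statistics import NormalDist

def prob_below_limit(results, u, limit=0.080):
    """P(true BrAC < limit), assuming a normal error model centered on
    the mean of the duplicate results with standard uncertainty u."""
    mean = sum(results) / len(results)
    return NormalDist(mu=mean, sigma=u).cdf(limit)

# With results of 0.081 and 0.080 g/210L and u = 0.0033 g/210L (assumed),
# the chance the true BrAC is below the per se limit is roughly 44%.
print(f"{prob_below_limit([0.081, 0.080], 0.0033):.0%}")
```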
The Prosecution Begins to See the Light

It's not just defense counsel who are concerned about the effects of bad and misleading forensic evidence in the courtroom, though. In a 2013 paper published in the Santa Clara Law Review, a California prosecutor provided the rationale for why all those advocating on behalf of the state should also be fighting for the imposition of metrological requirements on state-administered breath and blood tests. In fact, after a trial court denied a defense motion to require the reporting of uncertainty and traceability with blood test results, he worked with that jurisdiction's lab to make sure that this was done for all future test results despite the court's ruling. And he subsequently worked to make this a mandatory regulatory requirement. Why? Because he wants to ensure that the science presented by the state in court is "the best science regardless of what the law requires." [10]

Back to Fundamentals: The Measurand

The latest battles in this saga concern the most fundamental aspect of the measurement process: identification of the measurand. The problem at issue is the identification of measurands, in statutes drafted by scientifically unsophisticated legislators, that are either so significantly under-defined, or that vary by jurisdiction in such a way, that they create confusion for both forensic science and legal professionals. An example of the first type of problem arises under controlled substance statutes that prohibit certain substances and their "analogues." An analogue is typically defined as having a chemical structure "substantially similar" to the named controlled substance. What constitutes substantially similar, however, is a matter of opinion on which forensic scientists in the same laboratories often disagree. The second type of issue arises primarily in the DUI context, where statutes of different jurisdictions specify distinct measurands for a common breath alcohol test. [11] Depending upon the measurand identified, the result of a breath test means something very different between these jurisdictions, and the uncertainty of the results varies significantly as well. Statutes suffering from such infirmities create a confusing landscape. Given the role of lawmakers, as opposed to scientists, in the specification of a test's measurand, though, they are representative of the type of problems that arise at the intersection of law and science. With a lack of scientific sophistication, many lawmakers fail to realize the necessity of, or the required detail concerning, the measurand in a measurement. In fact, many have never even heard the term measurand before. The truth about any scientific measurement is that it can never reveal what a quantity's true value is. The power of metrology lies in the fact that it provides the framework by which we can determine what conclusions about that value are supported by measured results. It tells us how to develop and perform measurements so that high quality information can be obtained. It helps us to understand what our results mean and represent. And finally, it provides the rules that guide our inferences from measured results to the conclusions they support. Whether you are a prosecutor or defense attorney, judge or forensic scientist, or even a law enforcement officer who performs measurements as part of investigations in the field, forensic metrology provides a powerful tool for determining the truth when forensic measurements are relied upon. Forensic science, legal practice and justice itself are improved by a familiarity with the principles of forensic metrology. [12]
4,410
2016-01-01T00:00:00.000
[ "Law", "Physics" ]
Structured Learning for Temporal Relation Extraction from Clinical Records We propose a scalable structured learning model that jointly predicts temporal relations between events and temporal expressions (TLINKS), and the relation between these events and the document creation time (DCTR). We employ a structured perceptron with integer linear programming constraints for document-level inference during training and prediction, to exploit relational properties of temporality, together with global learning of the relations at the document level. Moreover, this study gives insights into the results of integrating constraints for temporal relation extraction when using structured learning and prediction. Our best system outperforms the state-of-the-art on both the CONTAINS TLINK task and the DCTR task. Introduction Temporal information is critical in many clinical areas (Combi and Shahar, 1997). A big part of this temporal information is captured in the free text of patient records. The current work aims to improve temporal information extraction from such clinical texts. Extraction of temporal information from clinical text records can be used to construct a time-line of the patient's condition (such as in Figure 1). The extracted time-line can help clinical researchers to better select and recruit patients with a certain history for clinical trials. Moreover, the time-line is crucial for making a good patient prognosis and clinical decision support (Onisko et al., 2015; Stacey and McGregor, 2007). Temporal information extraction can be divided into three sub-problems: (1) the detection of events (E_e); (2) the detection of temporal expressions (E_t); and (3) the detection of temporal relations between them. In the clinical domain, events include medical procedures, treatments, or symptoms (e.g. colonoscopy, smoking, CT-scan). Temporal expressions include dates, days of the week, months, or relative expressions like yesterday, last week, or post-operative. In this work, we focus on the last sub-problem, extraction of temporal relations (assuming events and temporal expressions are given). As a small example of the task we aim to solve, given the following sentence: In 1990 the patient was diagnosed and received surgery directly afterwards. in which we assume that the events diagnosed and surgery, and the temporal expression 1990 are given, we wish to extract the following relations: • CONTAINS(1990, diagnosed) • CONTAINS(1990, surgery) • BEFORE(diagnosed, surgery) • BEFORE(diagnosed, d) • BEFORE(surgery, d) where d stands for the document creation time. Our work leads to the following contributions: First, we propose a scalable structured learning model that jointly predicts temporal relations between events and temporal expressions (TLINKS), and the relation between these events and the document creation time (DCTR). In contrast to existing approaches, which detect relation instances separately, our approach employs a structured perceptron (Collins, 2002) for global learning with joint inference of the temporal relations on a document level. Second, we ensure scalability through using integer linear programming (ILP) constraints with fast solvers, loss-augmented subsampling, and good initialization. Third, this study leads to valuable insights on when and how to make inferences over the found candidate relations both during training and prediction, and gives an in-depth assessment of the use of additional constraints and global features during inference.
Finally, our best system outperforms the state-of-the-art on both the CONTAINS TLINK task and the DCTR task. Related Work There have been two shared tasks on the topic of temporal relation extraction in the clinical domain: the I2B2 Temporal Challenge (Sun et al., 2013), and more recently the Clinical TempEval Shared Task with two iterations, one in 2015 and one in 2016 (Bethard et al., 2015; Bethard et al., 2016). In the I2B2 Temporal Challenge eight types of relations were initially annotated. However, due to low inter-annotator agreement these were merged to three types of temporal relations, OVERLAP, BEFORE, and AFTER. Good annotation of temporal relations is difficult, as annotators frequently miss relation mentions. In the Clinical TempEval Shared Tasks the THYME corpus is used (Styler IV et al., 2014), with a different annotation scheme that aims at annotating those relations that are most informative w.r.t. the time-line, and gives less priority to relations that can be inferred from the others. This results in two categories of temporal relations. The first is the relation between each event and the document creation time (DCTR), dividing all events into four temporal buckets (BEFORE, BEFORE/OVERLAP, OVERLAP, AFTER). The second comprises relations between temporal entities that both occur in the text (TLINKS), built around the notion of narrative containers (Pustejovsky and Stubbs, 2011). TLINKS may occur between events (E_e × E_e), and between events and temporal expressions (E_e × E_t and E_t × E_e). The TLINK types (and their relative frequency in the THYME corpus) are CONTAINS (64.42%), OVERLAP (15.19%), BEFORE (12.65%), BEGINS-ON (6.15%), and ENDS-ON (1.59%). The relations AFTER and DURING are expressed in terms of their inverses, BEFORE and CONTAINS respectively. In our experiments, we use the THYME corpus for its relatively high inter-annotator agreement (particularly for CONTAINS). To our knowledge, in all submissions (4 in 2015, and 10 in 2016) of Clinical TempEval the task is approached as a classical entity-relation extraction problem, and the predictions for both categories of relations are made independently from each other, or in a one-way dependency, where the containment classifier uses information about the predicted document-time relation. Narrative containment, temporal order, and document-time relation have very strong dependencies. Not modeling these may result in inconsistent output labels that do not yield a consistent time-line. An example of inconsistent labeling is given in Figure 2. [Figure 2: the sentence "A colonoscopy (event) on September 27, 2008 (timex3) revealed a circumferential lesion (event)," annotated with an inconsistent label assignment.] The example is inconsistent when assigning the AFTER label for the relation between lesion and the document-time. It is inconsistent because we can also infer that lesion occurs BEFORE the document-time, as the colonoscopy event occurs before the document-time, and the lesion is contained by the colonoscopy. Temporal inference, in particular temporal closure, is frequently used to expand the training data (Mani et al., 2006; Chambers and Jurafsky, 2008; Lee et al., 2016; Lin et al., 2016b), usually resulting in an increase in performance, and is also taken into account when evaluating the predicted labels (UzZaman and Allen, 2011). Only very limited research regards the modeling of temporal dependencies in the machine learning model. Chambers and Jurafsky (2008) and Do et al. (2012), working with TimeML-style annotations (Pustejovsky et al., 2003), trained local classifiers and used a set of global temporal label constraints.
Integer linear programming was employed to maximize the score from the local classifiers, while satisfying the global label constraints at prediction time. For both, this gave a significant increase in performance, and resulted in consistent output labels. (Yoshikawa et al., 2009) modeled the label dependencies between TLINKS and DCTR with Markov Logic Networks (MLN), allowing for soft label constraints during training and prediction. However, MLN can sometimes be sub-optimal for text mining tasks w.r.t. time efficiency (Mojica and Ng, 2016). Quite recently, for a similar problem, spatial relation extraction, (Kordjamshidi et al., 2015) used an efficient combination of a structured perceptron or structured support vector machine with integer linear programming. In their experiments, they compare a local learning model (LO), a local learning model with global inference at prediction time (L+I), and a structured learning model with and without inference during training (IBT+I, and IBT-I respectively). In their experiments L+I gave better results than LO, but a more significant improvement was made when using structured learning in contrast to local learning. In this work, we aim to jointly predict TLINKS and DCTR in a structured learning model with inference during training and prediction, to assess inference with the temporal constraints of (Chambers and Jurafsky, 2008; Do et al., 2012) for the THYME relations, and to experiment with both local and document-level inference for temporal information extraction in the clinical domain. The Model For jointly learning both tasks on a document level, we employ a structured perceptron learning paradigm (Collins, 2002). The structured perceptron model uses a joint feature function Φ(X, Y) to represent a full input document X with a label assignment Y. During training the model learns a weight vector λ to score how good the label assignment is. Predicting a label assignment Y for a document X corresponds to finding the Y with the maximal score. In the following sub-sections we define the joint feature function Φ, describe the prediction procedure of the model, and describe how we train the model (i.e. learn a good λ). Joint Features To compose the joint feature function, we first define two local feature functions: φ_tl : (x, y) → R^p assigns features for the local classifications regarding TLINKS (with possible labels L_tl = {CONTAINS, BEFORE, OVERLAP, BEGINS-ON, ENDS-ON, NO LABEL}), and a second local feature function φ_dr : (x, y) → R^q, for local features regarding document-time relation classification (with labels L_dr = {BEFORE, BEFORE/OVERLAP, OVERLAP, AFTER}). The features used by these local feature functions are given in Table 1. From these, we define a joint feature function Φ_joint : (X, Y) → R^(p+q), which concatenates (⊕) the summed local feature vectors, creating the feature vector for the global prediction task (predicting all labels in the document for both sub-tasks at once). Φ_joint is defined in Equation 1, where C_tl(X) and C_dr(X) are candidate generation functions for the TLINK sub-task and DCTR sub-task respectively (further explained in Section 3.2).
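Equation 1 itself is not reproduced in the text; from the definitions just given (summing the local feature vectors over each candidate set, then concatenating), a plausible reconstruction is the following, where $y_x$ denotes the label that the assignment $Y$ gives to candidate $x$. This is our sketch, not the paper's verbatim formula:

$$
\Phi_{joint}(X, Y) \;=\; \Big( \sum_{x \in C_{tl}(X)} \phi_{tl}(x, y_x) \Big) \;\oplus\; \Big( \sum_{x \in C_{dr}(X)} \phi_{dr}(x, y_x) \Big) \qquad \text{(Eq. 1)}
$$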
Table 1: Features of the local feature functions of each sub-task, φ_tl for TLINKS, and φ_dr for DCTR:
• String features for tokens and POS of each entity
• String features for tokens and POS in a window of size {3, 5}, left and right of each entity
• Boolean features for entity attributes (event polarity, event modality, event degree, and type)
• String feature for the token and POS of the closest verb
• String feature for the token and POS of the closest left and right entity
• String features for the token {1, 2, 3}-grams and POS {1, 2, 3}-grams in-between the two entities
• Dependency path between entities (consisting of POS and edge labels)
• Boolean feature on whether the first argument occurs before the second (w.r.t. word order)
Local Candidate Generation For each document X, we create local candidates for both sub-tasks. In this work, we assume that event (E_e) and temporal expression (E_t) annotations are provided in the input. The DCTR candidates in document X are then given by C_dr(X), which returns all events in the document, i.e. E_e(X). C_tl(X) returns all TLINK candidates, i.e. (E_e(X) ∪ E_t(X)) × E_e(X). In our experiments we usually restrict the number of candidates generated by C_tl to gain training and prediction speed (without significant loss in performance). This is explained further in Section 4.3. Global Features We also experiment with a set of global features, by which we mean features that are expressed in terms of multiple local labels. The global features are specified in Table 2. Global features are defined by a feature function Φ_global : (X, Y) → R^r and have their corresponding weights in weight vector λ. When using global features, Φ_global is concatenated with the joint feature function Φ_joint to form the final feature function Φ, as shown in Equation 2. When not using global features, we use only the joint features, as shown in Equation 3. Prediction The model assigns a score to each input document X together with output labeling Y. The score for (X, Y) is defined as the dot product between the learned weight vector λ and the outcome of the joint feature function Φ(X, Y), as shown in Equation 4. The prediction problem for an input document X is finding the label assignment Y that maximizes the score S based on the weight vector λ, as shown in Equation 5. We use integer linear programming (ILP) to solve the prediction problem in Equation 5. Each possible local decision is modeled with a binary decision variable. For each local relation candidate input x_{i,j} (for the relation between i and j) a binary decision variable w^l_{i,j} is used for each potential label l that could be assigned to x_{i,j}, depending on the sub-task. The objective of the integer linear program, given in Equation 6, is to maximize the sum of the scores of local decisions. In all equations the constant d refers to the document creation time. The objective is maximized under two sets of constraints, given in Equations 7 and 8, that express that each candidate is assigned exactly one label, for each sub-task. For solving the integer linear program we use Gurobi (Gurobi Optimization, 2015). Temporal Label Constraints Because temporal relations are interdependent, we experimented with using additional constraints on the output labeling. The additional temporal constraints we experiment with are shown in Table 3. Constraints are expressed in terms of the binary decision variables used in the integer linear program.
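Equations 2 through 8 did not survive extraction either. The following LaTeX sketch reconstructs them from the prose; the notation $s^{l}_{i,j}$ for the local score of assigning label $l$ to candidate $(i,j)$ is our assumption. We also spell out one example of a Table 3-style constraint, BEFORE-transitivity ($C_{Btrans}$):

$$\Phi(X,Y) = \Phi_{joint}(X,Y) \oplus \Phi_{global}(X,Y) \;\;\text{(Eq. 2)}, \qquad \Phi(X,Y) = \Phi_{joint}(X,Y) \;\;\text{(Eq. 3)}$$

$$S(X,Y) = \lambda \cdot \Phi(X,Y) \;\;\text{(Eq. 4)}, \qquad \hat{Y} = \arg\max_{Y} S(X,Y) \;\;\text{(Eq. 5)}$$

$$\max \sum_{(i,j) \in C_{tl}(X)} \sum_{l \in L_{tl}} s^{l}_{i,j}\, w^{l}_{i,j} \;+\; \sum_{i \in C_{dr}(X)} \sum_{l \in L_{dr}} s^{l}_{i,d}\, w^{l}_{i,d} \;\;\text{(Eq. 6)}$$

$$\text{s.t.} \;\; \sum_{l \in L_{tl}} w^{l}_{i,j} = 1 \;\; \forall (i,j) \in C_{tl}(X) \;\;\text{(Eq. 7)}, \qquad \sum_{l \in L_{dr}} w^{l}_{i,d} = 1 \;\; \forall i \in C_{dr}(X) \;\;\text{(Eq. 8)}$$

A transitivity constraint such as $C_{Btrans}$ can then be encoded linearly as $w^{BEFORE}_{i,j} + w^{BEFORE}_{j,k} - w^{BEFORE}_{i,k} \leq 1$ for all entity triples $(i, j, k)$: if both antecedent variables are 1, the consequent must also be 1.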
[Table 3: Temporal label dependencies expressed as integer linear programming constraints (columns: Abbrev., Label Dependencies, Constraints). The variables i, j and k range over the corresponding TLINK arguments, and the constant d refers to the document creation time. CONTAINS_{i,j} indicates that entity i contains entity j.] In Table 3, constraints C_Ctrans and C_Btrans model transitivity of CONTAINS and BEFORE respectively. Constraints C_CBB and C_CAA model the consistency between the TLINK relation CONTAINS and the DCTR relations BEFORE and AFTER respectively (resolving the inconsistent example of C_CBB in Section 1, and Figure 2). Similarly, C_BBB and C_BAA model the consistency between the TLINK relation BEFORE and the DCTR relations BEFORE and AFTER. Constraints can be applied during training and prediction, as Equation 5 is to be solved for both. If not mentioned otherwise, we use constraints both during training and prediction. Training The training procedure for the averaged structured perceptron is given by Algorithm 1, for I iterations, on a set of training documents T. Notice that the prediction problem is also present during training, in line 6 of the algorithm. Weight vector λ is usually initialized with ones, and updated when the predicted label assignment Ŷ_k for input document X_k is not completely correct. The structured perceptron training may suffer from overfitting. Averaging the weights over the training examples of each iteration is a commonly used way to counteract this (Collins, 2002; Freund and Schapire, 1999). In Algorithm 1, c is used to count the number of training updates, and λ_a as a cache for averaging the weights. We also employ local loss-augmented negative sub-sampling, and local pre-learning to address class imbalance and training time. Loss-augmented Negative Sub-sampling For the TLINK sub-task, we have a very large negative class (NO LABEL) and a relatively small positive class (the other TLINK labels) of training examples. To speed up training convergence and address class imbalance at the same time, we sub-sample negative examples during training. Within a document X, for each positive local training example (x_positive, y_positive) we take 10 random negative examples and add the negative example (x_negative, y_no_label) with the highest score for relation y_positive, i.e. S(x_negative, y_positive). This cutting-plane-style optimization gives preference to negative training examples that are more likely to be classified wrongly, and thus can be learned from (in an online manner), and it provides only one negative training example for each positive training example, balancing the TLINK classes. Local Initialization To reduce training time, we don't initialize λ with ones, but we train a perceptron for both local sub-tasks, based on the same local features mentioned in Table 1, and use the trained weights to initialize λ for those features. A similar approach was used by (Weiss et al., 2015) for dependency parsing. Details on the training parameters of the perceptron are given in Section 4.3. Experiments We use our experiments to look at the effects of four modeling settings: 1. Document-level learning in contrast to pairwise entity-relation learning. 2. Joint learning of DCTR and TLINKS. 3. Integrating temporal label constraints. 4. Using global structured features. We will discuss our results in Section 4.4.
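Algorithm 1 is referenced above but not reproduced in the text. As a rough illustration of the training loop just described (online updates, an update counter c, and an averaging cache λ_a), here is a minimal Python sketch. The callables `phi` and `predict` stand in for the paper's joint feature function and ILP decoder and are assumptions, as is the sparse-dict weight representation:

```python
import random
from collections import defaultdict

def train_averaged_structured_perceptron(train_docs, phi, predict, n_iters=32):
    """Sketch of averaged structured perceptron training (Collins, 2002).

    train_docs: list of (X, Y_gold) pairs, one per document.
    phi(X, Y):  sparse feature dict for document X under labeling Y.
    predict(X, lam): constrained argmax over labelings (Equation 5, via ILP).
    """
    lam = defaultdict(float)    # weight vector lambda
    lam_a = defaultdict(float)  # averaging cache lambda_a
    c = 1                       # update counter
    for _ in range(n_iters):
        random.shuffle(train_docs)
        for X, Y_gold in train_docs:
            Y_hat = predict(X, lam)  # prediction is also solved during training
            if Y_hat != Y_gold:
                # additive update: promote gold features, demote predicted ones
                for f, v in phi(X, Y_gold).items():
                    lam[f] += v
                    lam_a[f] += c * v
                for f, v in phi(X, Y_hat).items():
                    lam[f] -= v
                    lam_a[f] -= c * v
            c += 1
    # averaging counteracts overfitting: final weights are lambda - lambda_a / c
    return {f: w - lam_a[f] / c for f, w in lam.items()}
```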
But first, we describe how we evaluate our system, and provide information on our baselines, and the preprocessing and hyper-parameter settings used in the experiments. Evaluation We evaluate our method on the clinical notes test set of the THYME corpus (Styler IV et al., 2014), also used in the Clinical TempEval 2016 Shared Task (Bethard et al., 2016). Some statistics about the dataset can be found in Table 4. F-measure is used as the evaluation metric. For this we use the evaluation script from the Clinical TempEval 2016 Shared Task. TLINKS are evaluated under the temporal closure (UzZaman and Allen, 2011). [Table 4: Dataset statistics for the THYME sections we used in our experiments.] Baselines Our first baseline is a perceptron algorithm, trained for each local task using the same local features as used to compose the joint feature function Φ_joint of our structured perceptron. We have two competitive state-of-the-art baselines, one for the DCTR sub-task, and one for the TLINK sub-task. The first is the best performing system of Clinical TempEval 2016 on the DCTR task (Khalifa et al., 2016). They experiment with a feature-rich SVM and a sequential conditional random field (CRF) for the prediction of DCTR and report the - to our knowledge - highest performance on the DCTR task. The competitive TLINK baseline is the latest version of the cTAKES Temporal system (Lin et al., 2016b; Lin et al., 2016a). They employ two SVMs to predict TLINKS, one for TLINKS between events, and one for TLINKS between events and temporal expressions, and recently improved their system by generating extra training data using extracted UMLS concepts. They report the - to our knowledge - highest performance on CONTAINS TLINKS in the THYME corpus. Hyper-parameters and Preprocessing In all experiments, we preprocess the text by using a very simple tokenization procedure considering punctuation or newline tokens as individual tokens, and splitting on spaces. For our part-of-speech (POS) features, and dependency parse path features, we rely on the cTAKES POS tagger and cTAKES dependency parser respectively (Savova et al., 2010). After POS tagging and parsing we lowercase the tokens. As mentioned in Section 3.2, we restrict our TLINK candidate generation in two ways. First, both entities should occur in a token window of 30, selected from {20, 25, 30, 35, 40} based on development set performance. And second, both entities should occur in the same paragraph (paragraphs are separated by two consecutive newlines). Our motivation for not using sentence-based candidate generation is that the clinical records contain many ungrammatical phrases, bullet point enumerations, and tables that may result in missing cross-sentence relation instances (Leeuwenberg and Moens, 2016). In all experiments, we train the normal perceptron for 8 iterations, and the structured perceptron for 32 iterations, both selected from {1, 2, 4, 8, 16, 32, 64} based on best performance on the development set. The baseline perceptron is also used for the initialization of the structured perceptron. Moreover, we apply the transitive closure of CONTAINS and BEFORE on the training data. Results Our experimental results on the THYME test set are reported in Table 5. In the table, the abbreviation SP refers to the structured perceptron model described in Section 3 but without temporal label constraints or global features, i.e. the joint document-level unconstrained structured perceptron, using local initialization, and loss-augmented negative sub-sampling.
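The two candidate-generation restrictions described above (a 30-token window, same paragraph) are simple to state in code. A minimal Python sketch follows; the entity record fields (`kind`, `token_index`, `paragraph`) are our assumed representation rather than the paper's, and the pair structure follows the candidate set given in Section 3.2 (any entity as first argument, an event as second):

```python
def tlink_candidates(entities, token_window=30):
    """Generate TLINK candidate pairs (arg1_id, arg2_id) for one document.

    entities: list of dicts with keys "id", "kind" ("event" or "timex"),
    "token_index" (position in the tokenized document), and "paragraph"
    (paragraph index; paragraphs are separated by two consecutive newlines).
    """
    candidates = []
    for a in entities:
        for b in entities:
            if a["id"] == b["id"]:
                continue  # no self-relations
            if b["kind"] != "event":
                continue  # second argument must be an event
            if a["paragraph"] != b["paragraph"]:
                continue  # same-paragraph restriction
            if abs(a["token_index"] - b["token_index"]) > token_window:
                continue  # token-window restriction
            candidates.append((a["id"], b["id"]))
    return candidates
```

Tightening `token_window` trades recall on long-distance relations for training and prediction speed, which is the trade-off the development-set selection of 30 resolves.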
We compare this model with a number of modified versions to explore the effect of the modifications. Document-Level Learning When we compare the local perceptron baseline with any of the document-level models (any SP variation), we can clearly see that learning the relations at a document level improves our model significantly (P<0.0001 for both DCTR and TLINKS). Furthermore, when comparing loss-augmented sub-sampling (SP) with random sub-sampling of negative TLINKS (SP_random-sub-sampling) it can be seen that a good selection of negative training instances is very important for learning a good model (again P<0.0001), and this enabled our model to improve the state-of-the-art by 1.4 points on the CONTAINS TLINK task. Jointly Learning DCTR and TLINKS When comparing the disjoint model (SP_disjoint) with our joint model (SP) it can be noticed that joint prediction gives only a very small improvement (P=0.0768 for TLINKS, and P=0.0451 for DCTR). However, joint learning on a document level provides the flexibility to formulate constraints connecting the labels of both tasks, such as the last four constraints in Table 3, resulting in a more consistent labeling over both tasks. Similarly, in the joint learning setting, we can define global features that connect both tasks (like Φ_drtl). Integrating Temporal Constraints We experimented with integrating label constraints in two ways: (1) both during training and prediction (SP^CC + C*), or (2) only during prediction (SP^UC + C*). In general it can be noticed that in our experiments using the temporal label constraints from Table 3 slightly increases DCTR performance, but slightly decreases TLINK performance. A reason for this can be that the model generally gives better predictions for DCTR, which might provide a better alternative to a constraint-violating solution. A difference in consistency of the annotation between both tasks could also be a reason. Furthermore, we can see that integrating the constraints both during training and prediction gives slightly lower performance compared to only integrating them during prediction. Using Global Structured Features We have two types of global features: Φ_sdr, which is only based on DCTR labels, and Φ_drtl, which is based on a combination of DCTR and TLINK labels. When we add Φ_sdr to our model, the overall F-measure on the DCTR task improves by 1.3 points (P<0.0001), improving the state-of-the-art by 0.3 points. A reason for this can be the sequential dependency of DCTR labels, also exploited by (Khalifa et al., 2016) using the sequential CRF. The second global feature, Φ_drtl, in fact models the same type of dependencies as the last four constraints in Table 3, relating the TLINK relations with the DCTR labels of each TLINK argument, however as a soft dependency and not as a hard constraint. In our experiments, this feature did not improve either of the two sub-tasks. It appears that training with cross-task constraints, or global cross-task features, is not trivial, and further research is needed on how to exploit these cross-task dependencies also during training. We assume that the lower-than-expected scores when modeling cross-task dependencies may be related to sub-sampling the negative TLINK training instances. [Table 5: Results on the THYME test set. SP refers to our structured perceptron model, without constraints or global features, using local initialization and loss-augmented negative sub-sampling. C* refers to using all constraints. Superscripts CC and UC refer to using constraints at training and prediction time, or only at prediction time, respectively.]
Conclusions In this work, we proposed a structured perceptron model for learning temporal relations between events and the document creation time (DCTR), and between temporal entities in the text (TLINKS) in clinical records. Our model efficiently learns and predicts at a document level, exploiting loss-augmented negative sub-sampling, and uses global features allowing it to exploit relations between local output labels. For the construction of a consistent output labeling, needed for time-line construction, we formulated a number of constraints, including those from (Chambers et al., 2007; Do et al., 2012), and assessed them during inference. Our best system outperforms the state-of-the-art on both the CONTAINS TLINK task and the DCTR task. Our code for this work is available at https://github.com/tuur/SPTempRels.
5,528.6
2017-01-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Study on water stability of asphalt binder with medium weathered igneous rock Aiming at the weak acidity of the medium weathered igneous rock found around Nairobi, Kenya, its poor adhesion to asphalt, and the resulting poor water stability of the asphalt binder, this article studied technical measures such as anti-stripping agent, cement, and hydrated lime to improve the water stability of asphalt binder made with medium weathered igneous rock. The results showed that the 48 h Marshall residual stability of the benchmark asphalt binder without any measures was 78.5%, which did not meet the standard requirements. The Marshall residual stability of medium weathered igneous rock binder can be significantly improved by adding anti-stripping agent, cement or hydrated lime. After freeze-thaw cycles, the splitting tensile strength of the asphalt binder with medium weathered igneous rock decreased obviously, and the TSR values of the asphalt binders with anti-stripping measures, from small to large, were K-4, K-3, K-2, K-6 and K-5. The water stability of the medium weathered igneous rock asphalt binder mixed with anti-stripping agent alone had relatively poor durability, and the medium weathered igneous rock asphalt binder mixed with cement and anti-stripping agent had the strongest ability to resist deformation when immersed in water. Introduction China is pursuing its "Belt and Road" construction. The Nairobi Expressway in Kenya is one of the important infrastructure projects in East Africa. The main line of the expressway has a total length of 27 km and a design speed of 80 kilometers per hour, and it is a national Class A highway. The rocks in Nairobi and surrounding areas are mainly medium weathered igneous rock. After investigation, the medium weathered igneous rock available along the project is a weakly acidic aggregate, and other rock quarries are far away and costly. The project therefore considers using medium weathered igneous rock along the highway for engineering construction. Since medium weathered igneous rock is a weakly acidic aggregate, and asphalt is also acidic, the adhesion of asphalt to this aggregate needs to be studied. In the application of acidic aggregates, Dong et al. [1] studied the effect of lime on the adhesion between asphalt and granite aggregates and found that lime could increase the surface energy of asphalt and enhance the adhesion work between asphalt and granite aggregates, thereby improving adhesion; the adhesion of asphalt and granite aggregate increased with the increase of lime fineness; and when the lime content was 10% of the asphalt mass, the adhesion between asphalt and granite aggregate was the best. Tan et al. [2][3][4][5][6][7] studied improving the adhesion of asphalt to acidic rocks by adding anti-stripping agent, cement, etc., thereby improving water stability. The research results also showed that different technical measures improved the adhesion of asphalt and aggregate to a certain extent. However, a survey of related research at home and abroad showed that there were few reports on the adhesion of medium weathered igneous rock and asphalt, especially on water stability.
In order to promote the application of medium weathered igneous rock in the Nairobi Expressway in Kenya, this research used test methods such as the immersion Marshall test, the freeze-thaw splitting strength test, and the immersion rutting test to investigate the improvement of anti-stripping agent, cement, hydrated lime and other materials on the water stability of asphalt binder with medium weathered igneous rock. Raw material (1) Aggregate: The aggregate used in this experiment was medium weathered igneous rock from Nairobi and surrounding areas. The rock was processed into coarse and fine aggregates using a laboratory small jaw crusher. For stone powder with a particle size of 0~0.075 mm, chemical titration was used for chemical composition analysis. According to the Chinese standard Methods of Aggregate for Highway Engineering (JTG E42-2005), the compressive strength, apparent density, firmness, and crush value of aggregates with a particle size of 5-25 mm were tested, and the test results are shown in Table 1. (2) Asphalt: The asphalt used in this test was road petroleum asphalt 90#, and its technical indicators are shown in Table 2. (3) Cement: CEM I 42.5 cement produced locally in Kenya was used. The setting time is 165 min for initial setting and 267 min for final setting. Water consumption for standard consistency is 26%, the 3 d compressive strength is 27.1 MPa, and the 28 d compressive strength is 52.3 MPa. (4) Hydrated lime: hydrated lime produced locally in Kenya with an effective CaO and MgO content of 68.2% was used. (5) Amine anti-stripping agent: a liquid amine anti-stripping agent was used, whose appearance is a milky white thick liquid, whose pH value is between 10 and 12, and whose freezing point is less than 0 °C. Its content is 0.4% of the asphalt mass. Testing Proportion In order to study and improve the road performance of medium weathered igneous rock asphalt binder, anti-stripping agent, cement, hydrated lime, and compounds of anti-stripping agent with cement or hydrated lime were used as admixtures to study their influence on the water stability of the asphalt binder. The amount of mineral powder accounts for 2% of the total amount of ore; the blending amounts of cement and hydrated lime each account for 50% of the total amount of mineral powder. When used alone, the amount of anti-stripping agent was 0.3% of the asphalt content; when it was compounded with cement or hydrated lime, the amount of anti-stripping agent was 0.2% of the amount of asphalt, with cement or hydrated lime accounting for 50% of the amount of mineral powder. [Table 3: Proportion of ore materials.] Testing method At present, the evaluation methods for the water stability of asphalt binder mainly include the immersion Marshall test, the freeze-thaw splitting strength test, and the immersion rutting test. The immersion Marshall test The main reason for water damage and poor water stability in an asphalt binder pavement structure is the decrease of adhesion between asphalt and aggregate. The adhesion decreases because water enters between the asphalt and the aggregate, weakening the bond until the asphalt separates from the aggregate surface. Furthermore, the binding force between water and aggregate is greater than that between asphalt and aggregate, so water can break the bond and strip the asphalt from the aggregate surface. Therefore, the immersion Marshall test was used to evaluate the water stability of the asphalt binder.
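For reference, the Marshall residual stability reported throughout the results is, as conventionally computed for this test (a sketch of the standard definition, not quoted from this article):

$$
MS_0 = \frac{MS_{1}}{MS} \times 100\%
$$

where $MS$ is the Marshall stability of unconditioned specimens and $MS_{1}$ is the stability measured after the prescribed immersion period (here 48 h or 96 h). A residual stability of 78.5% for the benchmark group therefore means that 48 h of immersion left only 78.5% of the original load-bearing stability.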
The test results of Marshall stability and residual stability for the different test schemes to improve the bonding performance of asphalt and aggregate are shown in Figure 1 and Figure 2. It could be seen that the Marshall stability of the different test groups gradually decreased with the increase of the immersion time of the test piece. The 48 h Marshall residual stability of the K-1 test group without any measures was 78.5%, which did not meet the requirement of the Technical Specifications for Construction of Highway Asphalt Pavements (JTG F40-2017) to be higher than 80%, and the Marshall stability could not be measured after the test piece was immersed in water for 96 h. Therefore, it was clear that for medium weathered igneous rock, appropriate anti-stripping measures need to be taken when it is used in asphalt binders. Combining Figure 1 and Figure 2, it could be seen that under different immersion times, the Marshall stability of the K-2 to K-6 groups was significantly higher than that of the K-1 group, indicating that the different technical methods all had a certain effect on improving the water stability of the medium weathered igneous rock asphalt binder. Comparing the different technical measures, it could be found that the Marshall stability of the asphalt binder mixed with the anti-stripping agent alone was not much different from that of the other technical measures at an immersion time of 48 h, but when the immersion time reached 96 h, the Marshall stability of K-2 decreased significantly. The main reason was that the anti-stripping agent is a liquid amine, which is organic; its anti-aging performance is relatively poor, and its anti-stripping effect weakens during long-term immersion. Hence, it was recommended to use cement or hydrated lime, or a combination of cement or hydrated lime with anti-stripping agent, for the medium weathered igneous rock asphalt binder, which can improve the water stability of the asphalt binder over the long term. Freeze-thaw test The main reason for the water damage of an asphalt binder pavement structure and the poor water stability of the asphalt binder is not only the decrease of the adhesion between asphalt and aggregate, but also the presence of water between asphalt and aggregate, which is prone to generating hydrodynamic pressure. This pressure repeatedly acts on the interface between asphalt and aggregate, causing aggregate detachment in the asphalt-bound pavement structure and forming potholes and other pavement diseases. For this reason, the study compares the splitting tensile strength of the asphalt binder after freezing and thawing with the splitting tensile strength of the asphalt binder without freezing and thawing to evaluate the water stability of the medium weathered igneous rock asphalt binder. The test results of freeze-thaw splitting tensile strength and splitting tensile strength ratio (TSR) for the different test schemes to improve the binding performance of asphalt and aggregate are shown in Fig. 3 and Fig. 4. It could be seen that the splitting tensile strength of medium weathered igneous rock asphalt binder decreased significantly after the freeze-thaw cycle, and the TSR of the benchmark test group K-1 without any measures was only 67.1%, which did not meet the requirement of more than 75% in JTG F40-2017. Therefore, it was necessary to take corresponding technical measures to improve the water stability when the medium weathered igneous rock is applied in asphalt binder.
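The splitting tensile strength ratio just introduced is likewise, as conventionally defined for the freeze-thaw splitting test (again a sketch of the standard definition, not quoted from this article):

$$
TSR = \frac{R_{T2}}{R_{T1}} \times 100\%
$$

where $R_{T1}$ is the average splitting tensile strength of unconditioned specimens and $R_{T2}$ is that of specimens subjected to the freeze-thaw cycle. The benchmark group's TSR of 67.1% thus means that freeze-thaw conditioning retained only about two thirds of the original splitting strength.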
It could be seen from Figure 4 that the TSR values of the asphalt binders with anti-stripping technical measures, from small to large, were K-4 (hydrated lime at 50% of the mineral powder), K-3 (cement at 50% of the mineral powder), K-2 (anti-stripping agent alone at 0.3% of the asphalt), K-6 (0.2% anti-stripping agent combined with hydrated lime), and K-5 (0.2% anti-stripping agent combined with cement). The TSR values of the asphalt binders K-2 to K-6 could all meet the requirements of JTG F40-2017; the minimum TSR value, 79.2%, belonged to the asphalt binder with hydrated lime alone, and the maximum TSR value, 87.3%, belonged to the asphalt binder with the compound measure of 0.2% anti-stripping agent and cement. In general, the single addition of anti-stripping agent, cement or hydrated lime can greatly improve the TSR of medium weathered igneous rock asphalt binder. The single addition of anti-stripping agent had the best effect, and the single addition of cement was better than hydrated lime. In addition, through the combination of anti-stripping agent with cement or hydrated lime, the water stability of the medium weathered igneous rock asphalt binder was improved most significantly. Immersion rutting test Traffic load is also one of the main reasons for the deterioration of the water stability of asphalt binder. From the perspective of macro mechanics, the asphalt pavement structure must withstand repeated traffic loads during its service life; under these loads, the interface between asphalt and aggregate undergoes repeated shearing, as do the contacts between different aggregate particles. If the interface between asphalt and aggregate fails in shear, water can quickly enter the asphalt-aggregate interface through the point of shear failure, resulting in the deterioration of the water stability of the asphalt binder. On the other hand, when water enters the asphalt binder, repeated loading turns static water into dynamic water, which further accelerates the deterioration of the water stability of the asphalt binder. Therefore, the immersion rutting test was used to further study the water stability of the medium weathered igneous rock asphalt binder under load. The results of the immersion rutting test under the different test schemes to improve the binding performance of asphalt and aggregate are shown in Figure 5. It could be seen that the rutting depth curve of the immersion rutting test showed an obvious slope change at 10 min. In the first 10 min of the immersion rutting test, the rutting depth of each test group increased greatly; afterwards, the rutting depth of each group increased only slightly, in a linear relationship with the extension of immersion time. For the K-1 group without anti-stripping measures, the rutting depth was the largest at every immersion time, which indicates that the application of medium weathered igneous rock in asphalt binder requires anti-stripping measures to improve its water stability. The rutting depths of the test groups with the different technical measures were basically similar within the first 10 minutes of immersion; then, with the extension of immersion time, the rutting depth of the K-2 group with anti-stripping agent became deeper than that of the other groups, which indicated that the durability of the water stability of the medium weathered igneous rock asphalt binder with anti-stripping agent alone was relatively poor.
In the 60 min immersion rutting test, the K-5 test group with cement and anti-stripping agent had the shallowest rutting depth, which indicated that the medium weathered igneous rock asphalt binder mixed with cement and anti-stripping agent had the strongest resistance to deformation. Conclusion (1) The 48 h Marshall residual stability of the benchmark asphalt binder without any measures was 78.5%, which did not meet the standard requirements. The Marshall residual stability of medium weathered igneous rock binder could be significantly improved by adding anti-stripping agent, cement or hydrated lime. (2) After the freeze-thaw cycles of the medium weathered igneous rock asphalt binder, the splitting tensile strength decreased significantly. The TSR values of the asphalt binders with anti-stripping technical measures were, in order from small to large, K-4, K-3, K-2, K-6 and K-5. (3) Single mixing of anti-stripping agent, cement or hydrated lime could greatly increase the TSR of medium weathered igneous rock asphalt binder. Single mixing of anti-stripping agent had the best effect, and single mixing of cement had a better effect than hydrated lime. By compounding the anti-stripping agent with cement or hydrated lime, the water stability of the medium weathered igneous rock asphalt binder was most significantly improved. (4) The durability of the water stability of the medium weathered igneous rock asphalt binder with anti-stripping agent alone was relatively poor, while that of the medium weathered igneous rock asphalt binder with cement and anti-stripping agent was the strongest.
3,301.2
2021-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
The Impact of Airline’s Smart Work System on Job Performance of Cabin Crew Extant studies in the medical and educational fields have demonstrated that employees’ device use (smartphones, tablet PCs, etc.) can enhance job performance. Correspondingly, global airline companies have made substantial investments to enhance passenger services. This study examined the impact of flight attendants’ technology usage on job satisfaction by investigating the causal relationship between the benefits of tablet PC use, job performance, and its consequences. Based on the literature review, four advantages of technology use were derived: (1) efficiency, (2) convenience, (3) service effectiveness, and (4) pride. Additionally, three consequences of job satisfaction were derived: (1) team performance, (2) organizational commitment, and (3) turnover intention. Empirical data were collected from 208 flight attendants working for a South Korean airline, which provided tablet PCs for its employees. Data analysis revealed that work efficiency, convenience, and pride had a significant and positive impact on job satisfaction. However, flight preparation did not show a similar impact. This study is the first to investigate the benefits of using technology in the airline industry. Furthermore, it examined the convergence of airline management and information technology. The findings provide managerial implications for airline companies that are considering providing tablet PCs to flight attendants. Introduction The service industry has made protracted attempts to use technology to enhance efficiency. Undoubtedly, the aviation industry is extensively dependent on information and communication technology (ICT) [1][2][3], not only for optimizing the procedures and processes of airline operations management [4][5][6][7], but also for in-flight entertainment and customer services. In recent years, it has become customary to search and book flights, check in, and obtain information on departures and arrivals through mobile devices such as smartphones or laptops. An increasing number of airline companies are also providing Wi-Fi on board. Technical innovations have been introduced across the aviation sector [8]. Presently, airline companies are becoming more app-based platform enterprises through collaborations with startups. Recently, airline companies have also been making attempts to incorporate new technology in order to distinguish themselves from their rivals and survive intense competition. These companies are now providing personal tablets to their employees working at the forefront. This is currently being implemented in approximately 20 airlines worldwide (Delta Airlines, United Airlines, JetBlue Airways, British Airways, KLM, Air France, Lufthansa, Alaska Airlines, Iberia, Emirates Airlines, Etihad Airways, Qantas, ANA, etc.). Among these, A Airlines (pseudonym used for anonymity) is the first Korean airline to provide tablet PCs to all flight attendants. A Airlines introduced the "A-tab" system, a smart work platform through which cabin crew can submit work-related information and reports via tablet PC, to increase the cabin crew's work efficiency [9]. Many studies have been conducted on how incorporating personal devices such as smartphones or tablets into jobs improves work performance. However, most of the existing research has focused on medical and educational institutions, and none on flight attendants in the aviation field.
Accordingly, the purpose of this study is to (1) create and test a cognitive positive effect model for the introduction of new technology in which flight attendants use tablet PCs for work; (2) determine the factors that maximize job satisfaction among flight attendants; and (3) investigate the impact of the cognitive positive effect of adopting new technology on team performance, organizational commitment, and turnover intention. Consequently, this study will provide theoretical and practical guidance for the adoption of new technology in flight attendants' work in the airline industry. Adaptation of New Technology, Use of Tablet PCs by Flight Attendants A tablet PC refers to a computer equipped with a flat touch screen that uses a digital pen or finger as the main input device, as opposed to a keyboard or mouse; the screen is usually 5-14 inches. Tablet PCs generally support Wi-Fi, 3G, or faster wireless internet connections, and are equipped with a mobile operating system such as Apple's iOS or Google's Android. Tablet PCs are convenient to carry and, compared to other media and communication technologies, have been gaining popularity, as consumers' preference for them seems to be increasing. Therefore, the adoption of tablet PCs for work is increasing in the aviation industry. As mentioned earlier, about 20 airlines worldwide have begun using tablet PCs at work. Considering that safety is extremely important in the aviation industry, flight attendants are required to always carry safety manuals, comprising over 1000 pages, when flying. As the Aviation Act mandates that safety manuals be immediately revised whenever circumstances change or a new aircraft is introduced, these manuals need to be updated and printed several times a year. However, the dissemination of tablet PCs can reduce the cost of printing and distributing the training manuals, while the latest information can be updated instantly and applied in the field. Mirvis, Sales, and Hackett (1991) studied the implementation and adoption of new technology in organizations with regard to the impact on work, people, and culture. The implementation of new technology may improve work relationships, skill utilization, and job performance. These in turn improve organizational effectiveness and the quality of the work life of employees [10]. In the next sections, we discuss in detail the types of cognitive positive effects that occur when flight attendants adapt to new technology. These can be largely divided into four categories: (1) efficiency, (2) convenience, (3) service effectiveness, and (4) pride. Efficiency Many studies have demonstrated that employees' use of technology-intensive devices for work simplifies work preparation and increases efficiency. Employee efficiency in the workplace may be increased through bring-your-own-device (BYOD) programs [11]. Ghosh et al. defined the benefits of BYOD, which refers to the use of personal mobile devices instead of handouts printed by employees or the necessity to log into websites to prepare for work [12]. BYOD increases employee productivity, as corporate information and organizational data are readily available on personal mobile devices such as smartphones, PCs, and laptops. Mohsen Hakami conducted a survey of 69 female students from Sharura Science and Arts University at Nazran University in Saudi Arabia to gauge their learning satisfaction with Nearpod, an educational application, used on personal mobile devices [13].
If a teacher uploads lecture materials to Nearpod's application, the students can study using their personal mobile devices or interact with other students and teachers in real time for interactive classes. The study found that the students expressed satisfaction with the various multimedia learning materials tailored to individual needs on their mobile devices, in a switch from the traditional teaching method that involves the use of textbooks and paper [13]. Additionally, Singh, Chan, and Zulkefli (2017) found that the use of mobile computing devices among students in higher education provides an opportunity for better performance, productivity, and convenience, and a promise of mobility [14]. In addition, as work-related information that would otherwise be obtained from various sources can be accessed from a single integrated device, efficiency can be maximized in terms of time [12]. In the case of the aviation industry, prior to the use of tablet PCs, flight attendants prepared a manual for flight attendant safety, accessed an intranet that could only be read within the company, selected the necessary information from various kinds of flight-related information, and wrote it down manually. Subsequently, a group of flight attendants would participate in the briefing. Currently, all the information, including the safety manual, can be contained in a single application, making the task of preparing briefings easy, which increases efficiency [9]. Based on these studies, we proposed the following hypothesis. Convenience The second positive cognitive effect obtained by introducing the new technology is convenience. Convenience has not been clearly defined in the literature, and it is appropriate to understand convenience as a multidimensional construct. To examine the convenience of services, Brown proposed a conceptual framework with five dimensions: time, place, acquisition, use, and execution [15]. Based on Brown's theory, this study defines the dimensions of perceived convenience for flight attendants regarding the use of tablet PCs. (1) Time dimension: This refers to the degree of perception according to which the use of a tablet PC allows a person to perform work at a convenient time. In most airlines, flight attendants can check flight information only from their company premises. However, using a company-issued tablet PC allows them to verify flight information at a convenient time, even before going to work. However, it should be noted that this dimension does not mean "time saving." (2) Place dimension: This refers to the degree of perception according to which work can be performed in a more convenient place using a tablet PC. While flight attendants are on their way to the airport, they can update flight information through their personal mobile devices even while traveling. In addition, if they use Wi-Fi on a flight, they can communicate with the ground staff and send necessary information during flights. Building on this research, self-service technology (SST) theory was developed, which implies that consumers can obtain services on their own without the help of staff [15]. The remaining dimensions are summarized on the premise that cabin attendants can be considered customers in terms of technology use. (3) Execution speed dimension: This refers to the degree of recognition according to which a tablet PC (or iPad) is convenient to use in the process of performing work [16]. Speed has been identified as an important factor in numerous qualitative studies on SST [17][18][19].
Task processing speed is defined as the time required for users to actively complete a selection through SST [20]. "Perceived convenience" in handling tasks with self-service can have a strong impact on perceived speed. Accordingly, Farquhar and Rowley emphasized that the convenience of a service is related to the concept of execution, a process involving the time taken to perform a task [21]. With the use of tablet PCs, flight attendants feel that the speed of execution of tasks has increased because they can receive the latest information through updates rather than waiting on an aircraft for information delivered by ground staff [14]. (4) Accuracy dimension: This refers to the degree of perception according to which the use of a tablet PC has increased accuracy in the process of one's performance. Transaction accuracy, or the ability to process customer needs, is an important factor in evaluating the quality of the service experience [22,23]. A qualitative study by Wolfinbarger and Gilly showed that customers who had the convenience of starting and stopping online transactions on their own had a higher perception of information accuracy than those who had offline interactions with service employees [24]. According to Airline Trend (2017), the Emirates airline started taking flight meal orders from business class passengers using an MOD (meal ordering device) system [25]. In the case of in-flight meal service, the awareness of information accuracy among flight attendants is higher when passengers order a meal through a personal monitor than when flight attendants take meal orders directly from the passengers. This also makes flight attendants' work easier. Correspondingly, the convenience of using a tablet PC for flight attendants is expected to increase. Based on these studies, we proposed the following hypothesis. Service Effectiveness Information technology (IT) has been increasingly used to support collaborative work in a variety of business contexts. Among the supporting tools, group support systems (GSSs) allow participants in a collaborative group to interact simultaneously and anonymously to generate ideas, make decisions, and resolve issues. GSSs can be very effective when used by groups to perform tasks that do not require information-rich communication, such as planning and creativity tasks involving generating ideas. Combining Daft and Lengel's information richness theory with McGrath's "task circumplex" results in a theory of "task/technology fit," which suggests that GSS-mediated communication may be very effective for certain task types, but less effective or detrimental for other task types [26,27]. Specifically, techniques that provide simple information, such as yes or no, for general tasks that do not require expertise or creative discussion seem effective. Therefore, the use of GSSs, which provide simple information, can have a significant impact on handling tasks efficiently [26][27][28]. Effectiveness and job satisfaction are increased because employees are working with devices of their own choosing and are accordingly more familiar with the technology [9]. The consumerization of information technology (CoIT) suggests that organizations can benefit from the implementation of the BYOD concept and boost employees' functionality at work [29].
CoIT is a natural phenomenon, given that mobile devices have become ubiquitous. In terms of flight attendant work, the use of tablet PCs is also helpful because it simplifies ordering in-flight meals or duty-free items. BYOD has brought significant convenience and advantages to business activities, enhancing work flexibility and efficiency [30]. For example, in the healthcare industry, medical staff use IT devices at work. When a doctor or nurse uses a personal mobile device for work purposes, it reduces the time spent on paper-based documentation so that patient care and treatment receive enough attention. Therefore, studies have shown that this has a positive effect on employees' perceived work productivity [31][32][33][34]. In addition, research has suggested that productivity can be improved owing to faster communication and information retrieval [35][36][37][38]. Eslahi, Naseri, Hashim, Tahir, and Saad (2014) found that personal mobile devices can be used to promote employees' satisfaction and work efficacy [30]. Based on these studies, we proposed the following hypothesis. Pride Psychological ownership refers to the cognitive-affective state in which a consumer experiences a sense of ownership, regardless of actual legal ownership, believing "This is mine!" about the target. It denotes the relationship between a person and a target that has become closely tied to the self [39]. Targets can include objects, ideas, and other people. Three distinct paths lead to psychological ownership: (1) dealing with an ownership target, (2) having a close relationship with the target, or (3) putting money or effort into the target [40]. Furthermore, psychological ownership is experienced more strongly when the target activates one's identity, enhances one's sense of self-efficacy, provides stimulation, or makes one feel familiar, like being at home [39]. A psychological as well as physical possession becomes a part of the expanded self [41], and by giving value to their possessions, people intrinsically take pride in them [42]. In the IT field, when consumers choose technology artifacts such as applications and communication functions, the psychological ownership of the technology can be strengthened. The ownership of technology provides a means to value oneself. Therefore, it motivates one to increase one's value through pride, which is an evaluation according to which one's ability is superior to that of others [43]. By comparing oneself with others, one can achieve satisfaction and confidence or experience disappointment and frustration [44]. In addition, developmental psychologists have noticed that for young children, simply owning an object before others is enough to elicit feelings of ownership [44]. Therefore, it is highly likely that the effect will be strengthened in public (as opposed to private) consumption situations, because pride can be self-conscious and social. People who act in public places have stronger pride than those who act in the absence of others, and behaviors in public places can trigger implicit social comparisons [44][45][46]. Therefore, ownership, whether psychological or legal, is essentially a social construct, as it has limited meaning in the absence of social comparison. Accordingly, research has shown that individuals can vicariously experience pride through the accomplishments of other team members in the group [47,48].
Similarly, Lee and Hyun suggested that flight attendants who feel proud of their organization can have a psychologically positive effect on other members, which also positively influences active service behavior [49]. Among Korean airlines, A Airlines was the first to introduce tablet PCs into flight attendants' work. In the airline industry, flight attendants provide in-flight services with tablet PC applications, which helps them gain psychological ownership and increases their pride. We suggest that flight attendants will take pride in the responses of passengers and of flight attendants from other airlines. Based on these studies, we proposed the following hypothesis. Job Satisfaction Job satisfaction refers to a positive emotional state resulting from the appraisal of one's job or job experiences as enjoyable or positive [50][51][52][53]. In addition, some scholars define it as a combination of feelings and beliefs that organizational members experience toward their current duties [54,55]. According to Lyons et al., job satisfaction can be effectively enhanced by using implicit correction factors (e.g., personal development, useful technology) rather than explicit encouragement factors (e.g., wages) [56]. Pitichat found that authorizing the use of a smartphone for work can result in the following outcomes: (1) freedom to choose the method of working, (2) deeper relationships formed among co-workers through internal social communication, and (3) a convenient and practical way of sharing information [57]. The results of the study indicated that these factors induce positive job satisfaction, which eventually leads to higher productivity [57]. Jeong et al. conducted a study on 113 hotel employees from seven five-star hotels in U.S. cities (New York, Los Angeles, San Francisco, Miami, Chicago, Philadelphia, and Washington DC). They found that using a mobile device at work (BYOD) is perceived as beneficial to overall work performance, as it brings a sense of self-efficacy, which leads to greater satisfaction and positive results in extending the tenure of employees [58]. Team Performance A team is defined as a group of people who work interdependently and in a trusting manner to achieve a common goal [59]. Kalisch et al. suggested that teamwork consists of four essential components: involving two or more employees working together to achieve a shared goal or objective, having clear and established roles within the team, ensuring that each member of the team understands the roles of all members, and collaborating to achieve the stated goal [60]. Teamwork is an essential attribute of the aviation industry. Many studies have shown that the service rendered by flight attendants is enhanced through teamwork and the synergy it creates among the staff, rather than through individual abilities exercised independently [60][61][62][63][64]. Ku, Chen, and Hsu showed that flight attendants with high job satisfaction are more active in dealing with complex and newly updated service manuals [65]. Park conducted a study on 322 flight attendants of K Airlines in Korea and found that the greater the team members' job satisfaction, the higher the team's overall job performance [66]. Therefore, to improve team performance, it is imperative to identify the individual tendencies of flight attendant team members, form appropriate teams accordingly, and assign suitable tasks.
Based on these studies, we proposed the following hypothesis. Organizational Commitment Organizational commitment refers to the degree of unity that individual members of an organization experience toward their organization [67]. It means identifying oneself with a particular organization in an active and positive way and expressing a willingness to contribute significantly to it. This leads to commitment and attachment toward the organization and a strong desire to maintain a long-lasting relationship with it [67,68]. A study on 275 flight attendants in Taiwan by Ku et al. found that flight attendants with high job satisfaction show strong organizational commitment toward their work and are more proactive in responding to complex and newly updated airline service manuals and behavioral guidelines [65]. Testa (2001) examined the relation between job satisfaction and organizational commitment in the context of a service environment; the findings suggest that an increase in job satisfaction increases organizational commitment and service effort [69]. Russ and McNeilly (1995) studied the relationship between job satisfaction and organizational commitment using experience, performance, and gender as moderators. The results indicated that performance and experience moderate the relationship between job satisfaction and organizational commitment [70]. Based on these studies, we proposed the following hypothesis. Turnover Intention Turnover intention refers to the intention of an employee affiliated with a company to leave it in the near future after working for a certain period [71]. Considering that flight attendants' service duties are intensive in nature and involve relatively high labor costs, managing turnover is an important issue for most airlines. Flight attendants, who work as frontline employees, play a critical role in directly engaging with passengers and delivering flight services [72]. Lee and Lee investigated the factors of job satisfaction that may lower turnover intentions [73]. Their study of 201 Korean flight attendants of an airline subdivided job satisfaction into work, company, and performance satisfaction. They found that the job satisfaction obtained by establishing friendly relationships with co-workers and by introducing and learning new technology greatly reduces the turnover rate [73]. Based on these studies, we proposed the following hypothesis. Study Design and Participants In this study, we used a self-report questionnaire survey and convenience sampling to obtain responses from flight attendants of A Airlines. The inclusion criteria were as follows: (1) flight attendants currently working for the airline, and (2) flight attendants with experience using a tablet PC provided by the airline. During data collection, all flight attendants who participated in the survey were informed that the collected information would remain confidential and would be destroyed after the analysis was completed. After the participants gave their consent, they were provided with a link to an online survey via social networking sites or email. In addition, face-to-face questionnaires were administered in a flight attendant briefing room in the company or through individual meetings. Overall, 215 questionnaires were administered through both online and face-to-face methods between 15 August and 15 September 2020.
Of these, seven responses that seemed unreliable were eliminated, while the remaining 208 questionnaires were included in the analysis. Figure 1 presents the research model. Measures To empirically measure the nine theoretical concepts proposed in this study, measurement items verified in the existing literature in various fields (flight attendant competency, communication, psychology, etc.) were applied as follows. Three questions on flight preparation were derived from a previous study on the four dimensions of the cognitive positive effects of new technology introduction by Ghosh et al. [11], four questions on convenience from Brown [15] and Farquhar and Rowley [21], three questions on efficiency from Niehaves et al. [29], and three questions on pride from Dommer and Swaminathan [42] and Lee and Hyun [49], all measured using an interval scale. Job satisfaction was measured using three questions adopted from Kristensen and Nielsen [55] and Lyons et al. [56]. Team performance was evaluated using three questions adopted from Boshoff and Allen [63]. Organizational commitment was measured using three questions adopted from Mowday et al. [67] and Ku et al. [65]. Turnover intention was measured using three questions adopted from Mobley [71] and Chen [72]. After creating the initial questionnaire based on the aforementioned measures, we asked participants to respond to each question on a 5-point Likert scale ranging from "strongly disagree" (1 point) to "strongly agree" (5 points). To ensure the validity of the measures used in this study, we conducted a preliminary interview survey with a focus group consisting of flight attendants of A Airlines before administering the questionnaire. Prior to the field survey, we pilot-tested the questionnaire with 10 flight attendants of A Airlines in Korea, each with more than 3 years of experience. Since A Airlines introduced tablet PCs in July 2019, flight attendants with more than 3 years of experience could assess the advantages of tablet PCs against earlier practice. Four core factors were derived through the focus group interview (efficiency, convenience, service effectiveness, pride); the environmental factors were not suitable for the content of the study, so the items were modified and, finally, four cognitive positive factors were derived. Next, we conducted a pilot test with 30 flight attendants to check the readability of the questionnaire. Based on the preliminary survey, we made several improvements and adjustments, and administered the questionnaire after a final check by a professional group specializing in the subject. We obtained a Cronbach's α > 0.7, suggesting that the scales used in this study were reliable. For the development of the measurement tools in this study, four benefits of technology adoption were derived: (1) efficiency, (2) convenience, (3) service effectiveness, and (4) pride. Additionally, three consequences of job satisfaction were derived: (1) team performance, (2) organizational commitment, and (3) turnover intention.
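As a brief illustration of the reliability check reported above, Cronbach's α for one scale can be computed directly from raw item responses. This is a minimal sketch, not the study's actual analysis script (the authors used SPSS and AMOS); the item names and simulated responses below are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 208 respondents, three 5-point Likert items for one scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=208)             # shared component so the items correlate
items = pd.DataFrame({
    f"job_satisfaction_{i}": np.clip(base + rng.integers(-1, 2, size=208), 1, 5)
    for i in range(1, 4)
})
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # > 0.7 indicates acceptable reliability
```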
Each item used a 5-point Likert scale (1 = "not at all," 5 = "very much so"), where a value of 1 reflects a respondent's strong negative view and a value of 5 a strong positive view. Data Analysis First, a frequency analysis was performed to understand the general characteristics of the study participants. Second, a confirmatory factor analysis (CFA) was conducted to evaluate the validity of the measurement model, convergent and discriminant validity were verified, and Cronbach's α coefficient was checked to verify the reliability of the measurement tool. Third, a structural equation modeling (SEM) analysis was conducted to verify the relationships between the variables. For statistical analysis, IBM SPSS 25 and AMOS 25 were used, and statistical significance was determined at the 5% level. Table 1 shows the respondents' characteristics. To achieve the objectives of our study, we collected a sample of 208 individuals, of which 24 were male (11.5%) and 184 were female (88.5%). Table 2 presents the results of the CFA. In this study, the CFA of the measurement model was performed to verify the unidimensionality of the latent variables. As a result of the analysis, the chi-square values for the model fit were CMIN = 677.930, DF = 314, and p = 0.000, indicating that the model fit was satisfactory (chi-square/df = 2.159). CFI = 0.919, IFI = 0.920, and TLI = 0.903 satisfied the incremental fit indices, and the model was judged to be suitable [72]. In addition, an RMSEA value of 0.05 or less indicates a good fit (values between 0.05 and 0.10 are acceptable), so the obtained RMSEA = 0.078 was judged acceptable [73]. Note: All factor loadings are significant at p < 0.001. Table 3 presents the results of the construct validity analysis. Composite reliability (C.R.) was above 0.7 and the average variance extracted (AVE) was above 0.5, so internal consistency and convergent validity were secured. Discriminant validity was then verified using the AVE values of the latent factors [74]. As shown in Table 3, discriminant validity was secured because each AVE value was larger than the square of the correlation coefficient between the corresponding variables. Multicollinearity Verification In the construct validity analysis in Table 4, no multicollinearity was found because the correlation coefficient between any two constructs did not exceed 0.8. Analysis of Structural Models and Hypothesis Validation SEM and Goodness of Fit In this study, SEM analysis was performed to verify the theoretical hypotheses presented in previous studies. Several factors can affect the magnitude of changes in fit statistics, such as sample size, pattern of non-invariance, model complexity, and the ratio of sample sizes [75]. According to the analysis, CMIN = 746.136, DF = 332, and p = 0.000, indicating that the model fit was appropriate (chi-square/df = 2.247). An RMSEA value of 0.05 or lower indicates a good fit (values between 0.05 and 0.10 are acceptable); RMSEA = 0.078 was therefore considered acceptable [76]. CFI = 0.908, IFI = 0.909, and TLI = 0.895 were judged appropriate [77]. Table 5 presents the hypothesis testing results. All the hypotheses were supported except H1. Figure 2 shows the proposed model and the SEM results with standardized path coefficients. According to the SEM, seven out of eight hypotheses were statistically supported.
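To make the convergent and discriminant validity criteria above concrete, the sketch below computes AVE and composite reliability from standardized factor loadings and applies the Fornell-Larcker check (each AVE must exceed the squared inter-construct correlation). The loadings and correlation are hypothetical placeholders, not the estimates from Tables 2-4.

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

# Hypothetical standardized loadings for two constructs and their correlation.
load_convenience = [0.78, 0.81, 0.74, 0.70]
load_satisfaction = [0.80, 0.76, 0.83]
r = 0.55  # inter-construct correlation (kept below 0.8 to avoid multicollinearity)

for name, lam in [("convenience", load_convenience), ("job satisfaction", load_satisfaction)]:
    print(f"{name}: AVE = {ave(lam):.2f}, CR = {composite_reliability(lam):.2f}")

# Fornell-Larcker criterion: every AVE must exceed the squared correlation.
assert min(ave(load_convenience), ave(load_satisfaction)) > r ** 2, "discriminant validity not supported"
```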
Therefore, H1 is rejected. Discussion and Implications Major airlines across the globe have been adopting new technologies and work platforms, such as equipping flight attendants with a tablet PC, in order to provide diverse and differentiated services and enhance the travel experience of passengers. However, there have not yet been academic studies showing that adopting new technology enhances these services and experiences. The CoIT suggests that organizations can benefit from the implementation of the BYOD concept and boost employee functionality at work [31]. This CoIT proposition has been supported mainly in industries such as healthcare and educational services, but it also applies to the airline industry and, specifically, to in-flight attendants. This study examines how the introduction of new technology and the use of tablet PCs by in-flight attendants can deliver potential cognitive effects/benefits to their job satisfaction. In addition, it illustrates how team performance, organizational commitment, and the willingness to search for another job opportunity (turnover intention) are influenced by adopting these new technologies and work platforms. Based on the study, the potential cognitive effects/benefits are as follows: (1) convenience, (2) service effectiveness, and (3) pride. However, the adoption of new technology had little or no significant impact on the simplification of flight preparation. The theoretical implications of this study are as follows. Previous studies on the cognitive positive effects of the introduction of new technologies include a study in which the use of mobile devices by medical staff improved work efficiency in the healthcare field [32], a study in which the use of mobile devices by students in the educational field increased environmental sustainability [12], and research showing that pride in the organization manifests as active service behavior [50].
However, as there are no prior studies on the aviation industry, this study is meaningful in that it is the first to theoretically reveal the cognitive positive effects of introducing new technologies to the work of flight attendants. Because many airlines are making financial investments to apply tablet PCs to flight attendants' work, identifying which of the four types of cognitive positive effects plays a major role in bringing about job satisfaction provides academic value. In addition, combining IT and cognitive psychology will lay the foundation for the introduction of artificial intelligence technology in the future. The practical implications of this study are as follows. First, we were able to understand the overall global trends in which various airlines are increasingly adopting new technology for their flight attendants and considering how to positively influence their job satisfaction. Accordingly, by combining ICT and aviation management, this study's results have implications for further related research. Second, the positive effects of the adoption of new technology demonstrated in this study can increase not only flight attendants' competency but also customer satisfaction with airline services. Third, the findings suggest strategies for airlines' human resource management of flight attendants by reasoning about their job satisfaction through the adoption of new technologies, and they can help improve flight attendants' working conditions, corporate culture, and interpersonal relationships. In conclusion, the results provide practical implications for airlines that are currently considering whether to adopt new technology and leverage tablet PCs in the workplace. The COVID-19 pandemic has accelerated non-face-to-face services, and Koreans have preferred non-face-to-face travel to minimize the risk of COVID-19 infection [78]. In preparation for the post-COVID era, more airlines are creating a smart work environment with tablet PCs for passengers and flight attendants who prefer non-face-to-face services. This study provides theoretical support for airlines worldwide to create a smart working environment in order to deliver efficient services to passengers in the post-COVID-19 era. Conclusions We interpreted the results through the following aspects. First, the efficiency gained by adopting new technology does not positively influence job satisfaction. H1 is not a reasonable explanation of the potential cognitive effects/benefits of the adoption of new technology, because the actual time spent collecting flight-preparation information from various resources, including flight attendant manuals, averages approximately 5 to 10 min. Therefore, it can be interpreted that the use of new technology such as a tablet PC under the BYOD concept does not significantly simplify flight preparation or free up additional time prior to departure. Second, the convenience of adopting new technology positively influences job satisfaction. The execution and task processing speed and transaction accuracy resulting from the adoption of new technology are critical factors in job satisfaction. It is suggested that airlines continuously monitor their technology to keep it updated (e.g., processing speed of a tablet PC, battery life expectancy) and make additional investments in developing advanced mobile applications and software on their newly introduced work platforms. Third, the service effectiveness generated by the adoption of new technology at work positively influences job satisfaction.
Previous research shows enhanced work efficiency among medical staff who utilize a personal electronic device, as such devices enable fast and effective communication and improve information retrieval [32][33][34][35][36][37][38][39]. For in-flight attendants, service effectiveness is expected to be the most influential among the potential cognitive effects/benefits of adopting new technology. It is important to prioritize the efficiency of delivering exceptional onboard services by optimizing communication and service processes. For example, the introduction of SkyTab tablets for in-flight attendants at Condor Airlines in 2020 successfully optimized the airline's onboard sales process with cashless services using credit cards, Apple Pay, and Google Pay. Through SkyTab's in-flight sales technology, receipts can be provided digitally at Condor Airlines upon request. This new technology allows contactless services between in-flight attendants and their passengers while maintaining optimized communication with efficient logistics and onboard services. Fourth, the pride of using new technology positively influences job satisfaction. Airlines should now consider how the introduction and adoption of a tablet PC can help employees take pride in their work while promoting higher job satisfaction. When cabin attendants use a tablet PC, the intuitive, visible aspects of its use elicit positive internal and external feedback that increases flight attendants' pride and brings job satisfaction. For example, if the external design is exceptional (a color similar to the corporate identity, a size that is easy to carry, a recognizable brand, etc.), customers may feel that the airline is leading in the latest IT technology, in line with the trend of global airlines, and that they are receiving a luxurious and meticulous service. Consequently, they will be satisfied with the service quality, and cabin attendants who receive this positive internal and external feedback will experience pride, bringing greater job satisfaction. This study has several limitations, based on which we suggest some directions for future research. First, the sample for this study consisted only of current in-flight attendants at one Korean airline who had experience using tablet PCs in the workplace. These sampling limitations can affect the generalizability of this research. To mitigate this limitation, future work should increase the sample size to include flight attendants at different airlines with tablet PC experience and conduct comparative analyses across airlines equipped with tablet PCs in the workplace. Second, the sample for this study consisted of in-flight attendants and assistant pursers in the 30-39 age group only. Considering the difficulties and barriers of interacting with new technology among different age groups and ranks in the workplace, future researchers should include demographic information as a controlling factor for a more detailed study. Third, COVID-19 was widespread at the time of the survey, which was conducted between August and September 2020. Therefore, job insecurity and economic difficulties related to an uncertain future may have had a psychological effect on the flight attendants' survey responses. It would be preferable to repeat the survey after the aviation industry normalizes.
A Mobile Suitcase for Informatics Teachers Related to the "Digital" Didactic Goals of the 21st Century This study deals with the optimal equipment of a mobile suitcase for computer science teachers, which offers the possibility to teach the skills of the 21st-century curricula from primary to high school. First, the single-board computers (SBCs) in question are filtered out from previous studies, and the required accessory parts are determined through a quantitative market analysis. Then, by combining the results with a qualitative analysis according to Mayring, the degree of curricular coverage of individual accessories is determined and binarized. Afterwards, the optimal equipment of the mobile suitcase is evaluated and established on the basis of a cumulative cost estimate, obtained by horizontal summation and vertical inclusion of the necessary accessories after recording the prices against the budget. The results are clearly presented in network diagrams and lists. This study thus provides computer science teachers and computer science professors with a budget-dependent basis for making decisions about the contents of a mobile suitcase for computer science lessons, or of a computer science laboratory, for learning the skills of the 21st-century curricula from primary to high school. The study closes with a summary and an outlook. Introduction The teaching of digital content and processes has been included in the curricula from primary to high school in recent years. The implementation of teaching digital content in education therefore requires new technical aids and special hardware. To meet these requirements, single-board computers have become more important in the field of education. In the study "Digital didactic objectives of primary, secondary, and higher education curricula in the 21st century executable with a single-board computer" by Nothacker & Lavicza, the 21st-century skills in computer science teaching were presented based on the curricula, and it was examined whether they could be realised with a single-board computer (Nothacker et al., 2020: 350). In the subsequent study of 2021, "Low-costs computer learning sets and the relation to the digital didactic goals of the 21st century," Nothacker carried out a quantitative market analysis of currently available single-board computers and determined the degree of congruence between the two parameters using a combinatorial approach, matching properties from the technical documentation of individual products against the digital competences and skills to be taught in the curricula from primary to high school levels. Nothacker provides a list of suitable single-board computers that meet the requirements of the curricula and allow a product to be selected according to the available budget. In these studies, however, it was not clarified what a configuration of a learning set for computer science teachers that corresponds to the 21st-century curricula looks like. This is precisely where this study comes in. The question investigated in this study is: "What must a mobile suitcase for modern computer science teaching contain in order to meet the latest requirements of the curricula as fully as possible?" The budget is limited to $100 per participant. An internet connection via Wireless Local Area Network (WLAN) is assumed. Research Design and Methods In order to fulfil the objectives of this study, the "Mixed Methods" approach according to Creswell is applied (Creswell, 2014: 50).
The MAXQDA2020 Analytics Pro software, in combination with Microsoft Excel, was used to record the hardware and its characteristics, to apply the corresponding filters, and to create the evaluations and graphics. First, the necessary accessories are filtered out through a quantitative market analysis, and then the curricular coverage of individual accessories is determined and binarized with a qualitative analysis according to Mayring (Mayring, 2014: 26). Considering the studies by Nothacker & Lavicza from 2020 and Nothacker from 2021, the corresponding SBC model with the associated accessories is determined by combining the summed costs, while observing the budget limit of $100 per participant. Those combinations that fall within the given budget limit are of particular interest in this study. The study closes with a justified recommendation for the hardware content of each mobile suitcase in relation to the content to be taught in computer science at different types of schools. The author points out that the quantitative survey is a snapshot of the items and prices available on the market at the time of the survey. It is possible that by the time of publication of this study, new products will have added to the variety of choices. For this reason, the quantitative and qualitative selection criteria were chosen in such a way that new products can easily be included in the evaluation scheme. Results In the following, the examination results are listed according to the flow chart of Figure 1. Analysing SBCs for the Mobile Suitcase The study by Nothacker (2021) shows the recommended single-board computers to be used depending on the budget of the individual institutions. In this study, the selection was made for the SBCs in places 1-4, and thus for the SBCs of the Raspberry Pi family in different versions, so that teachers and learners can concentrate exclusively on task creation or problem solving without having to familiarise themselves with a new platform each time. The models all have a 40-pin GPIO port, which is a prerequisite for connecting various accessories for experimental use. Thus, the models Raspberry Pi Zero W (...) were selected. Analysing SBC Accessories It was decided to use accessories that are complementary to each other and that fully cover the curricular specifications. It was also decided that all the interfaces provided by the device should be covered so that the participants could get to know them all. For this purpose, the internet platform "pinout.xyz" was used to find suitable accessories (Raspberry Pi HATs, 2021), which are known as Hardware Attached on Top (HAT). The result set was restricted according to the $100 limit. The following HATs were identified: "Animated Eyes," "Automation HAT" (Automation HAT Mini, 2021), "BrainCraft HAT" (BrainCraft HAT, 2021), "DC & Stepper Motor" (DC et al., 2021), "Pi-Finger" (The Pi Hut, 2021), "Rainbow HAT" (Rainbow HAT, 2021), "Sense HAT" (Raspberry Pi Sense HAT, 2021), "Servo HAT," "SIM7000E IoT HAT" (The Pi Hut, 2021), "Touch pHAT" (Touch pHAT, 2021), "Traffic Lights" (4tronix, 2021), "Breakout Garden" (Breakout Garden for Raspberry Pi, 2021), "CYBERDECK," and "DUAL HAT Extension" (The Pi Hut, Raspberry Pi 400 Dual HAT, 2021). According to Nothacker & Lavicza (2020), this means that all competences must be covered by a certain hardware constellation. In other words, the hardware must have certain characteristics and must fulfil certain capabilities.
In this study, the skills were limited to the "modern skills," such as 3D-Printing, Robots, Sensors, Actuators, Industry 4.0, Big Data, Artificial Intelligence, Machine Learning, Deep Learning, High Performance Computing, Blockchain, Data Mining, and Simulation (Nothacker et al., 2020: 356). Figure 2 lists all selected HATs. In addition, the programming languages to be used are listed. The Deep Learning, High Performance Computing, and Blockchain categories depend on a network connection and can be covered accordingly depending on the SBC model used. To determine the coverage of existing properties, the HATs were analysed with the properties given as categories and listed in Figure 3. Accordingly, the Rainbow HAT and the BrainCraft HAT are followed by the Sense HAT and the Automation HAT as modules with very diverse possibilities. Evaluate and Rank Accessories by Skill, Properties and Price To be able to rank the accessories, they were evaluated according to their fulfilment of the skills, their existing properties, and the price information at the time of consultation. In Figure 4, the capabilities and features of the accessories have been sorted in descending order and the price details in ascending order. To obtain a meaningful graph depending on the budget, the skill, property, and price values must be added horizontally and ranked vertically under the influence of the price; a sketch of this procedure is shown after this section. The result is a ranking of accessories according to their suitability, with price taken into account. This is shown in tabular form in Figure 5. Figure 5 shows which HATs are best suited to cover the 21st-century skills while containing the maximum variety of interfaces, taking price into account as a percentage. Depending on the budget, the percentage influence of price was included in the analysis from SPP(100), where price is given full weight (favouring lower-priced items), to SPP(0), where price is given no weight. In Figure 5, a horizontal separator line was drawn between no. 11 and no. 12, as accessories no. 12, 13, and 14 are adapters for better local positioning, either relocating the 40-pin port interface or converting to another plug-in system, "Breakout Garden." For this reason, the accessories with the numbers 12, 13, and 14 were not included in the graphic representation in Figure 6. These accessories are to be considered in addition to the various HATs for better operability; for example, accessory no. 14, "DUAL HAT Extension," offers the possibility of operating two HATs in parallel or plugging an additional display into one of the ports, thus managing without an external screen. Which HAT combinations make sense and are possible within the budget of $100 per participant is discussed in section 3.4. Assembling Sets for $100 per Participant To filter out the best configuration for a mobile suitcase, the various HATs and the SBCs from section 3.1 must be combined with the accessories from section 3.3. To do this, the individual models offered are differentiated once again by design, combined with the various HATs, cumulated horizontally, and ranked vertically. The list of results was vertically supplemented by kit offers from the supplier Pimoroni, which offers not only the keyboard, an SD card with the NOOBS operating system, a power supply unit, and a housing for the corresponding SBCs, but also "The Official Raspberry Pi Beginner's Guide" to the SBC in a set.
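The horizontal summation and price-weighted vertical ranking described above can be sketched programmatically. The following is a minimal illustration with placeholder coverage counts and prices rather than the market data recorded in Figures 2-5; SPP is modeled here, as one plausible reading, as a linear weight on the normalized price.

```python
from dataclasses import dataclass

@dataclass
class Hat:
    name: str
    skills: int      # number of 21st-century skills covered (binarized, summed)
    properties: int  # number of interface/property categories covered
    price: float     # list price in USD at survey time

# Placeholder entries; the study's actual coverage counts come from Figures 2-4.
hats = [
    Hat("Rainbow HAT", 7, 6, 32.0),
    Hat("BrainCraft HAT", 8, 5, 45.0),
    Hat("Sense HAT", 6, 5, 38.0),
    Hat("Automation HAT", 5, 6, 30.0),
]

def rank(hats, spp):
    """Rank HATs by summed coverage, discounting price with weight spp in [0, 100]."""
    max_price = max(h.price for h in hats)
    def score(h):
        coverage = h.skills + h.properties               # horizontal summation
        discount = (spp / 100) * (h.price / max_price)   # SPP(100): full price influence
        return coverage * (1 - discount)
    return sorted(hats, key=score, reverse=True)         # vertical ranking

for h in rank(hats, spp=50):
    print(h.name, h.price)
```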
The provider was selected only as an example and serves only as orientation in this study. Individual parts (add-ons), such as power supply units, an SD card with operating system, and a keyboard, were added horizontally to enable a comparison with the kits offered. To operate the "BrainCraft HAT," or to explore the CSI interface, a camera is necessary. Therefore, a day-and-night-vision camera, as well as a touch screen display, were added horizontally to the list (see the sketch after this section for the assembly step). The additional touchscreen display is used to enable autonomous work without other devices, effectively as a monitor replacement. The individual solutions were considered for 10, 20, or 30 participants with corresponding budgets of $1000-$3000. It was assumed that the group size per unit consists of at least 2 but no more than 3 persons. The results are presented in tabular form in Figure 7. Mobile Suitcase Content The list in Figure 7 shows the recommended contents of the mobile suitcase. In order to achieve the maximum equipment with optimal budget utilisation, the Pimoroni kits should be used for the Raspberry Pi Zero (Figure 7, No. 12). This results in the greatest savings potential, at around $240. The number of group participants plays a subordinate role here; even smaller groups of 10 or more participants with a budget surplus of about $240 are still justifiable. When choosing the Raspberry Pi Zero (No. 11 or 12), however, one must do without interesting HATs such as the "BrainCraft HAT" and the "Animated Eyes" due to its lower performance. These HATs are particularly useful only for topics such as "Artificial Intelligence" and are therefore only suitable for secondary or high school levels. The "Breakout Garden" adapter has been included as a special expansion option, into which sensors, displays, or other actuators can be plugged via sockets using individual plug-in cards. For the selection of the plug-in cards, the budget is about $192 for the 5-piece set, about $731 for the 10-piece set, and over $1200 for the 15-piece set; these are not dealt with in detail in this study. The prices for the plug-in cards range from $5.50 (temperature sensor) to $61.88 (thermal imaging camera), depending on the sensor or actuator. For secondary or high school, it should be considered whether existing devices such as screens, tablets, keyboard and mouse, or PCs can be used. If at least keyboards and screens are already available, the Raspberry Pi with 4 GB main memory (Figure 7, No. 8) with the specified accessories is recommended. Depending on the size of the group, a set of 10 or 15 devices is recommended. In order to be as independent as possible of other equipment and the environment, a mobile suitcase must allow the full variety of SBCs and HATs to be used without the presence of other equipment. For example, the Raspberry Pi 400 (Figure 7, No. 10) can be used to equip a mobile suitcase with 10 units. This can cover group sizes of 2-30 people, with a subgroup size of 2-3 people. If particular emphasis is placed on subgroups of 2, the number of SBCs should be increased to 15. In the absence of an internet connection, at least one 4G LTE router must be included in the suitcase, which can provide all participants with high-speed internet access.
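The set-assembly step referenced above can be sketched in the same way: each base kit is combined with candidate add-ons, and only combinations within the $100-per-participant limit are kept. The prices below are placeholders, not the quotes recorded in Figure 7.

```python
from itertools import combinations

# Placeholder prices in USD; the study's Figure 7 lists the actual market data.
base_kits = {"Raspberry Pi Zero kit": 35.0, "Raspberry Pi 4 (4GB) kit": 75.0}
addons = {"Rainbow HAT": 32.0, "Sense HAT": 38.0, "Automation HAT": 30.0,
          "camera": 12.0, "touch display": 20.0}
BUDGET = 100.0  # per participant unit

def affordable_sets(base_kits, addons, budget):
    """Yield (kit, addon names, total cost) combinations within the budget."""
    for kit, kit_price in base_kits.items():
        for n in range(len(addons) + 1):
            for combo in combinations(addons.items(), n):
                total = kit_price + sum(price for _, price in combo)
                if total <= budget:
                    yield kit, [name for name, _ in combo], total

# Rank affordable sets by budget utilisation (fullest sets first), show the top 5.
for kit, combo, total in sorted(affordable_sets(base_kits, addons, BUDGET),
                                key=lambda t: -t[2])[:5]:
    print(f"${total:6.2f}  {kit} + {combo}")
```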
In case there is no mains power at the place of use, a power source in the form of a power bank should be included in the purchase planning, such as a 13000 mAh power bank from "Intenso," which costs about $25 per device and can supply the devices with power for up to about 15 hours. The mobile suitcase can be usefully supplemented by one or two "robot cars" for about $70 each, which usually contain a distance sensor, a line sensor, RGB LEDs, DC motors, and servos as learning objects. The "robot car" can also serve as a success check for the participants after they have learned the individual functions and interfaces, and it can be combined with other interesting tasks to replicate authentic real-life scenarios, e.g., recognising traffic signs or other objects with the camera, or other exciting projects. Conclusions and Outlook This study deals with the optimal equipment of a mobile suitcase for computer science teachers related to the digital didactic goals of the 21st century, which offers the possibility to teach the skills of the curricula from primary to high school. In order to make a concrete statement here, the study by Nothacker and Lavicza (2020) was used as a basis for the skills analysis, and the study by Nothacker (2021), "Low-cost Single-Board-Computers and Learning-Sets and the Relation to the "Digital" Didactic Goals of the 21st Century," with its result list of suitable SBCs, served as a basis for this study. The SBCs to be used and the corresponding accessories were combined, analysed, and listed on the basis of the two studies. Subsequently, under the influence of the budget, which was limited to $100 per person, the optimal equipment of the mobile case was filtered out on the basis of the cumulative cost estimate of the necessary accessories. The result was clearly presented in network graphics and lists. The result of this study thus gives computer science teachers and computer science professors a basis for decision-making about the contents of a mobile suitcase for computer science teaching, or of a computer science laboratory, for learning the skills of the 21st-century curricula from primary to high school. This study can serve as the basis of a comprehensive teaching concept for the use of a mobile suitcase in connection with learning tasks and assessments. The author recommends developing such a scenario through station learning, where the participants can get to know the individual accessories of the case and their interface types in more depth.
Overcoming Challenges of Lignin Nanoparticles: Expanding Opportunities for Scalable and Multifunctional Nanomaterials Conspectus The increasing demand for polymeric materials derived from petroleum resources, along with rising concerns about climate change and global plastic pollution, has driven the development of biobased polymeric materials. Lignin, which is the second most abundant biomacromolecule after cellulose, represents a promising renewable raw material source for the preparation of advanced materials. The attractive properties of lignin include its high carbon content (>60 atom %), high thermal stability, biodegradability, antioxidant activity, absorbance of ultraviolet radiation, and slower biodegradation compared to other wood components. Moreover, the advent of lignin nanoparticles (LNPs) over the last ten years has circumvented many well-known shortcomings of technical lignins, such as heterogeneity and poor compatibility with polymers, thereby unlocking the great potential of lignin for the development of advanced functional materials. LNPs stand out owing to their well-defined spherical shape and excellent colloidal stability, which is due to the electrostatic repulsion forces of carboxylic acid and phenolic hydroxyl groups enriched on their surface. These forces prevent their aggregation in aqueous dispersions (pH 3-9) and provide a high surface area to mass ratio that has been exploited to adsorb positively charged compounds such as enzymes or polymers. Consequently, it is not surprising that LNPs have become a prominent player in applied research in areas such as biocatalysis and polymeric composites, among others. However, like all ventures of life, LNPs also face certain challenges that limit their potential end-uses. Solvent instability remains the most challenging aspect due to the tendency of these particles to dissolve or aggregate in organic solvents and at basic or acidic pH, thus limiting the window for their chemical functionalization and applications. In addition, the need for organic solvent during their preparation, the poor miscibility with hydrophobic polymeric matrices, and the nascent phase regarding their use in smart materials have been identified as important challenges that need to be addressed. In this Account, we recapitulate our efforts over the past years to overcome the main limitations mentioned above. We begin with a brief introduction to the fundamentals of LNPs and a detailed discussion of their associated challenges. We then highlight our work on: (i) preparation of lignin-based nanocomposites with improved properties through a controlled dispersion of LNPs within a hydrophobic polymeric matrix, (ii) stabilization of LNPs via covalent (intraparticle cross-linking) and noncovalent (hydration barrier) approaches, (iii) the development of an organic-solvent-free method for the production of LNPs, and (iv) the development of LNPs toward smart materials with high lignin content. Finally, we also offer our perspectives on this rapidly growing field. This study shows that controlling the degree of esterification significantly improves the stability of hybrid lignin oleate nanoparticles in acidic and basic aqueous dispersions owing to the accumulation of acyl chains close to the particle surface, producing a hydration barrier. • Pylypchuk, I.; Sipponen, M. H. Organic solvent-free production of colloidally stable spherical lignin nanoparticles at high mass concentrations. Green Chem. 2022, 24, 8705−8715.
This work describes an organic-solvent-free method for the production of lignin nanoparticles from poorly water-soluble lignins in the presence of sodium lignosulfonate. The lignin nanoparticle dispersions exhibit shear-thinning behavior and undergo gelation within well-defined pH and concentration regions. • Moreno, A.; Delgado-Lijarcio, J.; Ronda, J. C.; Cádiz, V.; Galià, M.; Sipponen, M. H.; Lligadas, G. Breathable Lignin Nanoparticles as Reversible Gas Swellable Nanoreactors. Small 2023, 19, 2205672. This study shows the preparation of gas-responsive lignin nanoparticles exceeding 75 wt % in lignin content. The reversible swelling behavior of the particles upon O2/N2 bubbling was demonstrated for the fabrication of gas-tunable nanoreactors for the synthesis of gold nanoparticles. INTRODUCTION In nature, lignin reinforces plant cells by embedding cellulose and hemicellulose, adding rigidity to the cell walls and protecting against biological stresses.9 From a chemical point of view, once isolated from wood, lignin consists of amorphous, three-dimensionally branched aromatic molecules containing methoxy groups, aliphatic and phenolic hydroxyl groups, and some terminal carboxylic acid groups located at the side chains (Figure 1a). The structural differences between lignins depend on their botanical source and the extraction process from lignocellulosic biomass.5,10,11−15 LNPs are typically prepared via a solvent-exchange methodology, where lignin is dissolved in an organic solvent and poured rapidly or gradually into water, or vice versa.12,13 The formation of LNPs proceeds via aggregation of lignin, induced by hydrophobic interactions and π−π stacking of aromatic rings when the volume fraction of organic solvent is reduced. Other noncovalent interactions, such as intra- and intermolecular hydrogen bonding and van der Waals forces, contribute to the stabilization of the formed aggregates. Therefore, the formation of LNPs is essentially governed by the molecular size and (in)solubility of the lignin molecules, in such a way that the stable particles have relatively more hydrophobic cores composed of higher molecular weight lignin molecules and surfaces consisting of relatively smaller lignin molecules enriched with hydrophilic groups (Figure 1b). This LNP formation via a nucleation−growth mechanism has been validated by GPC and SEM analyses,16 while 1H liquid-state nuclear magnetic resonance spectroscopy has proved the presence of hydrophilic hydroxyl (aliphatic and phenolic), carboxylic acid, and methoxy groups at the surfaces of the LNPs, arising mainly from the S- and G-units and β-O-4′ substructures.17 Here, it is important to note that the presence of carboxylic acid groups in their ionized form results in an increase in the surface charge of LNPs, which is crucial for their stabilization via electrostatic repulsion. Additionally, there are some cases where hemicelluloses can stabilize lignin particles.11,18 This stabilization of LNPs by attached polysaccharide chains is due to increased osmotic pressure when the particles approach each other, as the concentration of polysaccharide segments locally increases, causing a repulsive force. Recently, DFT calculations have also supported the view that the molecular structure of lignin strongly influences the formation of LNPs, so that flexible interunit linkages, specifically the β-O-4′ substructures, allow molecular folding resulting in intramolecular π−π stacking, which presumably supports the assembly process.19
Solvents such as tetrahydrofuran, acetone, dimethyl sulfoxide, and ethanol are commonly used to dissolve lignin.20,21 However, they usually need to be combined with small amounts of water (3:1 w/w ratio) in order to achieve complete solubility of lignin before particle formation. Alternatively, it is possible to harness the partial solubility of lignin in polar organic solvents to prepare LNPs from specific lignin fractions. For instance, solvent fractionation of softwood kraft lignin (SKL) with polar solvents such as ethanol offers the possibility to separate an insoluble high molecular weight (MW) fraction from a soluble low MW lignin fraction, with the latter producing smaller LNPs. The high MW lignin fraction promotes a faster and more efficient dense packing via hydrophobic π−π stacking interactions. In the same manner, differences in the distribution of functional groups present on the surface of LNPs can also be detected since, for example, soluble, low molecular weight lignin fractions are usually more enriched with carboxylic acid groups. Although the solvent-exchange methodology is the most popular approach for the preparation of LNPs, aerosol technology is an alternative approach, in which the solvent is vaporized, forming supersaturated lignin aerosol droplets that collapse into a spherical shape at the hydrophobic solvent−air interface.22,23 Other approaches, albeit less common, include the use of emulsion templates through self-driven encapsulation of hydrophobic compounds (oils)24,25 or precipitation of lignin by adjustment of pH, which typically leads to the formation of irregular particles.26 For more information about the preparation of LNPs using either "dry" (aerosol technology) or "wet" (solvent exchange) processes, we direct the readers to excellent recent reviews.12,13 The attention that colloidal lignin materials have captivated is based on the superior properties of LNPs in contrast to bulk lignin. Among them, a well-defined spherical shape, accompanied by the presence of negatively charged functional groups (phenolic hydroxyl and carboxylic acid) and a large surface area to mass ratio, makes them suitable for the adsorption of positively charged compounds such as enzymes or polymers.27,28 In addition, LNPs resist aggregation in aqueous dispersions at neutral to slightly acidic pH owing to their submicrometer size and the electrostatic repulsion between the aforementioned negatively charged surfaces.12,29 In this regard, LNPs are able to circumvent challenges of crude lignins such as their poor interfacial binding within the polymeric matrix and aggregation during the preparation of lignin-based polymeric composites.14,30 However, LNPs also face some challenges, such as (i) the use of a considerable amount of organic solvents for their production, which hinders their transfer from academia to industry, (ii) incompatibility with hydrophobic polymeric matrices when they are used as fillers for the preparation of polymeric composites, (iii) solvent instability, i.e., dissolution or aggregation at alkaline and acidic pH and in organic solvents, and (iv) the lack of complementary stimuli in the design of smart functional nanomaterials (Figure 2).
Over the last years, our group has focused on tackling the above-mentioned challenges in order to unlock and expand the potential of LNPs for different applications. In the next sections of this Account, we discuss our and others' contributions on these frontiers. This Account is structured following the chronological developments carried out in our laboratory. We begin with a discussion of the different strategies to overcome the incompatibility of LNPs with hydrophobic polymeric matrices, followed by the current synthetic strategies for the stabilization of LNPs and their chemical functionalization in the dispersion state. Thereafter, we introduce an alternative approach to prepare LNPs without the need for organic solvents, and the preparation of stimuli-responsive LNPs with higher lignin content. Furthermore, we provide our perspectives on the upcoming challenges and opportunities in this rapidly growing field. DISPERSING LNPS INTO HYDROPHOBIC POLYMERIC MATRIXES Synthetic polymeric nanoparticles (SPNPs) are widely used as reinforcing agents during the production of polymeric composites.31−34 Hence, given the aforementioned intriguing properties of LNPs, one of their most common applications is as fillers in the preparation of polymeric nanocomposites.14 In this way, LNPs have been combined with cellulose nanofibrils (CNF),35,36 poly(vinyl alcohol) (PVA),37 and chitosan,38 among others,39,40 to produce polymeric nanocomposites with improved photothermal, UV-shielding, mechanical, and antioxidant properties. Here, it is important to note that all the aforementioned cases have in common that the polymeric matrix is composed of a water-soluble or hydrophilic polymer, allowing LNPs to be well dispersed and to interact efficiently within the polymeric matrix. In contrast, LNPs disperse poorly within hydrophobic polymeric matrices.41−43 Consequently, the polymeric composites may not exhibit enhancement in properties, and in certain instances a decrease can be observed, notably in mechanical properties. In order to overcome this limitation, we reported a material-efficient method for the fabrication of hydrophobic polymeric composites that incorporated LNPs and exhibited improved mechanical, UV-shielding, and antioxidant properties.1,44
Our system is based on the fabrication of enzyme-coated LNPs and their application as functional surfactants for biocatalytically degassed radical polymerization of hydrophobic monomers in Pickering emulsions. After the polymerization, the latex dispersions were converted to hydrophobic polymeric composites with a homogeneous distribution of LNPs by a simple melting process (Figure 3a). The fabrication of the enzyme-coated LNPs involved a two-step adsorption process in which chitosan (chi) and glucose oxidase (GOx) were adsorbed onto LNPs to produce biocatalytic hybrid particles (GOx-chi-LNPs) capable of circumventing the oxygen inhibition of the radical polymerization process. The successful adsorption of chitosan and GOx onto LNPs was confirmed by dynamic light scattering (DLS) measurements, which showed a gradual increase in particle size from 97 to 215 nm with an associated reversal of the zeta potential from negative (−29 mV) to positive (+42 mV). These hybrid colloidal particles were used to stabilize hydrophobic monomer (styrene or butyl methacrylate)-in-water Pickering emulsions at a concentration of 9 g L−1 (particles/monomers), while enabling efficient thermally initiated free radical or copper-catalyzed controlled radical polymerization in an open-air system, demonstrating the robustness of the approach. After the polymerization, analysis of the latex dispersions revealed polymeric beads efficiently covered by GOx-chi-LNPs. Melting of the dried polystyrene (PS) or poly(butyl methacrylate) (PBMA) latexes produced polymeric composite films with excellent distribution of the nonmelting lignin particles as fillers in the polymer matrix (Figure 3b). The evaluation of the mechanical properties of polymeric composites with different concentrations (wt %) of GOx-chi-LNPs revealed a substantial improvement in toughness. Specifically, at 15 wt % of hybrid particles, toughness was boosted by a factor of 3.5 and 15 compared to pristine PS and PBMA, respectively (Figure 3d). We postulated that the enhancement in mechanical properties stems from both the effective dispersion of hybrid particles in the polymeric matrix and their favorable surface-area-to-mass ratio. Additionally, the effective noncovalent interactions within the matrix likely contribute by acting as sacrificial bonds, forming new bonds during deformation, thus explaining the positive reinforcing effect observed in our polymeric composites (Figure 3c). In addition to improving mechanical properties, the hybrid particles also conferred efficient UV-blocking and antioxidant properties on the polymeric composites, crucial for sectors like food packaging. However, potential safety issues arising from the migration process should also be considered, although so far we are limited to evidence from the antimicrobial activity of chi-LNPs.45 In summary, our approach not only integrated LNPs into hydrophobic polymeric systems but also enhanced mechanical properties while adding UV-blocking and antioxidant properties, overcoming a significant challenge in lignin-hydrophobic polymer composite preparation. Continuing in the same direction, Kimiaei et al. also took advantage of the surfactant properties of LNPs to prepare cellulose-polycaprolactone (CNF-PCL) nanocomposites with improved mechanical properties.46
In their system, an aqueous CNF dispersion was combined with hydrophobic polycaprolactone (PCL) using LNPs as the emulsion stabilizer. The CNF-PCL films containing 10−30 wt % of LNPs exhibited a remarkable improvement in dry strength, showing around five to six times higher strain compared to the reference nanocomposites without LNPs. Additionally, the wet strength reached up to 87 MPa, significantly surpassing the previously reported wet strength of CNF cross-linked with tannic acid, epoxies, or multivalent metal ions, which ranged between 30 and 70 MPa. The superior properties of the nanocomposites were attributed to the capability of LNPs to form noncovalent bonds with both cellulose and PCL, thus serving as an interfacial compatibilizer. The ease with which this methodology can be applied to other hydrophobic polymers exemplifies the potential of LNPs in crafting hydrophobic polymeric nanocomposites with a favorable carbon footprint. More recently, Wang et al. also exploited the surfactant properties of LNPs in a seeded free-radical emulsion copolymerization of butyl acrylate and methyl methacrylate.47 In their approach, lignin was allylated prior to the formation of LNPs to include polymerizable allyl groups on the surface of the LNPs. The resulting allylated LNPs were then used as active interfacial-modulating surfaces to control the emulsion polymerization, forming multienergy-dissipative latex film structures with a lignin-dominated core (16% on a dry weight basis) via a simple casting method. The LNP-integrated latex film demonstrated exceptional toughness exceeding 57.7 MJ m−3, achieved through an optimized allyl group concentration of 1.04 mmol g−1. This enhancement in mechanical properties represents the most significant improvement reported in the literature so far. However, the necessity for solution-stage chemical modification of lignin could impede scalability and the transition to industrial processes. OVERCOMING SOLVENT INSTABILITY OF LNPS: ACCESS TO CHEMICAL FUNCTIONALIZATION OF LNPS IN DISPERSION STATE Unlike modifying lignin in solution, chemical modification directly on solid particle surfaces could be more effective and open avenues to improve the compatibility of LNPs with polymeric matrices, as described in the previous section. However, enhancing the stability of LNPs under harsh conditions is necessary to develop advanced LNP-based materials via acid/base catalysis and reactions in organic solvents. In this sense, chemical functionalization of LNPs in the dispersion state has been viewed as a restricted area owing to the solubility of LNPs at pH > 10 due to ionization of the phenolic hydroxyl groups. There are also challenges under acidic conditions, since LNPs have a point of zero charge and aggregate at pH < 3 due to the protonation of their carboxylic acid groups.48 In addition, switching from aqueous to organic solvents either solubilizes the LNPs or leads to their aggregation. In this regard, the vast majority of functionalized LNPs necessitate the chemical modification of lignin before the particles are formed. Pioneering works to overcome these considerable challenges include the work by Nypelö et al., who combined Kraft lignin with epichlorohydrin in a water-in-oil microemulsion to create intraparticle-cross-linked LNPs.49 The resulting LNPs exhibited strong resistance to dissolution when exposed to a highly alkaline environment at pH 13. Afterward, Mattinen et al.
reported the use of laccases to achieve the stabilization of LNPs by means of a radical-mediated oxidative process, resulting in LNPs that are resistant to dissolution in organic solvents such as THF.50 Despite progress in particle stabilization, the chemical functionalization of LNPs in the dispersion state has remained relatively unexplored. The aforementioned methods relied on emulsion templates or enzyme-catalyzed cross-linking processes, effective only at low LNP concentrations. The first chemical functionalization of LNPs in the dispersion state was reported by Zou et al., who demonstrated a simple route to prepare internally cross-linkable epoxy-lignin hybrid particles.51 In their approach, an epoxy cross-linker (bisphenol A diglycidyl ether, BADGE) was dissolved with lignin in an acetone:water (3:1 w/w) solvent mixture, and hybrid particles with 10−40 wt % of cross-linker were formed following the solvent-exchange methodology. Thermally induced ring-opening reactions were demonstrated for intra- and interparticle cross-linking. The authors demonstrated that by using BADGE concentrations ≤20 wt %, it is possible to confine the cross-linking within the particles, thus preserving their colloidal stability. The covalently stabilized particles remained intact even after being rinsed with aqueous acetone of a similar composition to that employed in particle production. Furthermore, these particles could be covalently functionalized via a base-catalyzed ring-opening reaction employing a quaternized epoxide, resulting in particles with a net surface charge that responds to pH (positively charged at pH < 5; negatively charged at pH > 5). Additionally, the hybrid particles containing 30 wt % BADGE were utilized as thermally curable particulate adhesives, exhibiting dry strength comparable to, and wet strength surpassing, that of a commercial epoxy adhesive. Overall, these findings suggest that adding a cross-linker during the supramolecular assembly of LNPs is a successful strategy for achieving stable and functionalized LNPs, a method later extended by our group and others.52,53 Inspired by the preceding work, our team began developing environmentally friendly alternatives to BADGE. Our approach involved esterifying lignin with an oleoyl fatty acid derivative to obtain lignin oleate, which could then be cross-linked using free radical chemistry.2
In this way, oleic lignin nanoparticles (OLNPs) were prepared via the solvent-exchange methodology from lignin oleates with different degrees of esterification (DE = 20%, 50%, and 80%) (Figure 4a). Initially, we speculated that the oleic fatty acid chains would be restricted to the inner core of the core−shell structure of the OLNPs, and that internally stabilized particles would be obtainable via radical cross-linking of the double bond present in the unsaturated oleic chain. DLS analysis revealed no significant differences in particle size among the three OLNPs (around 200 nm). However, direct comparison between LNPs and OLNPs pointed to a significant difference in particle size (100 nm vs 200 nm, respectively), which was attributed to the unsaturated oleic chains hampering efficient, dense molecular packing during the self-aggregation process. Stability studies under basic and acidic conditions (pH = 12 and pH = 2) revealed unprecedented stability for OLNPs without thermal curing, which increased with the DE of the lignin-oleate precursor. OLNPs20 (DE = 20%) remained colloidally stable for 48 h under basic conditions, while OLNPs80 (DE = 80%) exhibited stability for more than 100 h (Figure 4b). Based on these observations, and supported by TEM imaging of the core−shell structures of OLNPs (Figure 4c), we hypothesized that the exceptional stability of the OLNPs stemmed from a hydration barrier created by the oleic fatty acid chains collapsed on the particle surface.

As previously mentioned, the formation of LNPs is influenced by the molecular size distribution and hydrophobicity of lignin. Esterification of lignin with long fatty acids like oleic acid (C18) enhances its hydrophobicity and alters its structure. In this context, we proposed that high molecular weight lignin oleate molecules reside in the particle interiors, while low molecular weight esters containing hydrophilic carboxylic acid groups are oriented toward the hydrophilic surfaces, exposing them to the water phase. This arrangement prompts the hydrophobic effect, causing the oleate chains in low molecular weight fragments to collapse and associate at the surface, minimizing exposure to water (Figure 4a). Consequently, OLNPs display charged surfaces, where the deposited oleate chains form an effective hydration barrier that retards the ionization of phenolic groups under alkaline conditions and the protonation of carboxylic groups in acidic media. Encouraged by these findings, we also conducted, for the first time, covalent functionalization of non-cross-linked LNPs in the dispersion state via base- and acid-catalyzed ring-opening reactions (Figure 5a). Methacrylated OLNPs (MA-OLNPs) were utilized to create anticorrosive coatings for aluminum. Curing of MA-OLNPs resulted in a particulate coating that reduced the corrosion current density (CCD) by 3 orders of magnitude, providing effective corrosion protection (Figure 5c). Additionally, cationized OLNPs (c-OLNPs) were demonstrated as fast and effective pH-switchable adsorbents for water treatment (Figure 5b).
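The DLS sizes quoted throughout this section rest on the Stokes−Einstein relation, which converts a measured translational diffusion coefficient into a hydrodynamic diameter. The sketch below evaluates that relation; the diffusion coefficient is an assumed illustrative value, not one reported for these particles.

```python
# Stokes-Einstein relation underlying DLS sizing: d_H = k_B*T / (3*pi*eta*D).
# The diffusion coefficient D below is an assumed illustrative value.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # absolute temperature, K (25 C)
eta = 0.89e-3        # dynamic viscosity of water at 25 C, Pa*s
D = 2.2e-12          # assumed translational diffusion coefficient, m^2/s

d_H = k_B * T / (3.0 * math.pi * eta * D)  # hydrodynamic diameter, m
print(f"Hydrodynamic diameter: {d_H * 1e9:.0f} nm")  # ~220 nm for this D
```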
More recently, our group reported an alternative methodology that involves the preparation of hydroxymethylated lignin nanoparticles (HLNPs) followed by a catalyst-free hydrothermal curing to trigger internal cross-linking reactions.54 In addition to allowing for dispersion-state modification of the HLNPs, this methodology preserves the phenolic groups that are key functionalities defining the biodegradability, redox activity, and antimicrobial properties of lignin. In a follow-up study, HLNPs were used to adsorb phospholipase D, allowing repeated use of this expensive enzyme over four cycles of transformation of phospholipids to polar headgroup-modified derivatives.55 This approach simplifies the use of LNPs in enzyme immobilization; previously, enzyme-coated LNPs had been stabilized by encapsulation in calcium alginate beads, for instance,39 or as cationic LNPs coated with chitosan.44

Both strategies outlined above, using cross-linkers during self-assembly or providing a hydration barrier from fatty acids, are crucial for obtaining functionalizable LNPs and enabling access to lignin-based advanced materials. Intrigued by the possibility of combining the synergies of these strategies, our group recently reported the use of Urushi (oriental lacquer) as a sustainable component to achieve stabilization of LNPs via both an internal cross-linking process and a hydration barrier.56 Hybrid particles containing ≤25 wt % Urushi exhibited stability following the thermally triggered cross-linking of its unsaturated hydrocarbon chains, attributed to Urushi's deposition in the inner core of the particles. Conversely, hybrid particles with Urushi content >25 wt % showed enhanced stabilization via a thermally triggered interparticle cross-linking effect owing to the surface exposure of Urushi's hydrophobic chains. These particles show great potential for particulate coatings that protect wood from water under harsh conditions such as extreme pH.

ORGANIC-SOLVENT-FREE WET PROCESS: A PARADIGM SHIFT IN THE PREPARATION OF LNPS

Formation of LNPs by adding a nonsolvent (water) into a lignin solution in organic solvent, or vice versa, causes the formation of spherical particles via hydrophobically induced aggregation of lignin (Figure 1b).12 Despite ongoing efforts, critical challenges remain to be overcome when it comes to scaling up these production processes. For instance, the solvent-exchange methodology lacks cost-efficient methods for organic solvent recovery, as techno-economic assessments have shown.21 In addition, the range of concentrations at which LNPs can be produced as colloidally stable dispersions is limited to ∼2 wt %. Meanwhile, aerosol technologies require a careful evaluation of the risks involved in the production, transportation, and handling of dry LNP powders.22 The contributions described in sections 2 and 3 were achieved with LNPs and hybrid LNPs prepared via the solvent-exchange methodology, which still face the aforementioned challenges. To address these challenges, our next step was to develop a robust method for the production of LNPs without the need for organic solvents. In this context, we proposed an approach that relies on the combination of two lignins with different aqueous solubility, as is the case for the two most important technical lignins: the poorly water-soluble softwood kraft lignin (SKL) and the water-soluble sodium lignosulfonate (LS).3
Our methodology involves dissolving both LS and SKL in aqueous alkali (pH > 10); adjusting the pH to slightly acidic (pH = 5.5) then gives rise to a free-flowing micellar solution or gel, as shown in Figure 6a. From a mechanistic point of view, as the pH decreases, poorly water-soluble lignins like SKL gradually precipitate and form spherical nuclei through hydrophobic interactions, while also associating with LS. The sulfonate groups of LS prevent close molecular packing and maintain a loosely packed micellar structure due to repulsive electrostatic interactions (Figure 6b). Systematic studies demonstrated that a 4:1 mass ratio of LS:SKL is the minimum requirement to obtain stable colloidal dispersions, while a 5:1 mass ratio produces the lowest particle size of 82 nm (Figure 6c). Instead of SKL, it is possible to use other poorly water-soluble lignins such as organosolv lignin (OLS) or soda lignin (SL). Unlike the aqueous−organic solvent-based methods discussed previously, where lignin concentration is limited to 2 wt % to prevent particle size increase and subsequent agglomeration,16 the hydrodynamic diameter of the colloidal particles did not increase but decreased as the concentration of lignin increased from 2 wt % to 14 wt % at a fixed LS:SKL ratio. This can be attributed to the increased viscosity of the system and the resulting shear forces that effectively counteract particle growth. This method notably extends the working window for lignin particle concentrations compared to prior methods.12 Rheological experiments revealed a distinct gelation point dependent on lignin concentration and pH (Figure 6d). Specifically, lignin concentrations around 26 wt % at pH 5−6 promote the formation of a continuous particulate network based on intra- and intermolecular interactions of the two lignin types. TEM images of the colloidal dispersions revealed spherical particles (25 nm) sensitive to beam exposure, supporting the micellar nature of particles stabilized internally by hydrophobic interactions (e.g., π−π stacking) and externally by repulsive electrostatic interactions arising from the sulfonate groups (Figure 6e). Overall, in comparison to traditional methods, the key advantages of this approach include the elimination of organic solvents, the ability to operate at high concentrations (up to ∼50 wt %), and the simplicity of preparing shear-thinning lignin nanoparticle gels with self-supporting properties. Conversely, the softness of the micellar particles distinguishes them from the denser LNPs, indicating potentially divergent paths toward different applications.

UNLOCKING THE POTENTIAL: TOWARD STIMULI-RESPONSIVE, PHOTONIC, AND CIRCULAR LNPS

Stimuli-responsive materials, sometimes referred to as "smart" materials, have the ability to "sense" external stimuli such as pH, light, gas, or temperature and translate them into an observable response based on physicochemical changes.7,8 Among stimuli-responsive nanomaterials such as polymeric nanoparticles, most efforts have focused on imparting "programmable" degradation by introducing labile chemical groups (e.g., acetal or disulfide linkages) to develop advanced drug delivery platforms.57,58 Stimuli-responsive LNPs have been explored for drug delivery systems that harness the inherent properties of lignin. Dai et al.
combined the UV-blocking properties of lignin and the temperature responsiveness of poly(N-isopropylacrylamide) to develop temperature-responsive LNPs able to deliver, on demand, trans-resveratrol, a light-sensitive drug.59 Another impressive effort is the work reported by Qian et al., where the well-known surfactant properties of lignin were combined with the ability of poly(dimethylaminoethyl acrylate) to interact with CO2 and N2 to develop reversible emulsifiers for Pickering emulsion processes.60 While these works exemplify the synergistic integration of lignin with stimuli-responsive functionalities, they typically require polymer grafting, resulting in a low lignin content (25 wt %) in the final material. In this sense, our group contributed an alternative methodology to prepare gas (O2/N2)-responsive LNPs exceeding 75 wt % lignin mass content.4 Our approach involves the solvent-exchange of SKL in the presence of a fluorinated lignin oleic acid ester (SKL-OlF), resulting in the formation of hybrid LNPs (Figure 7a). The coaggregation of unmodified lignin with hydrophobic lignin derivatives produced hybrid particles whose inner core is composed of the associated and collapsed, more hydrophobic fragments. Consequently, hybrid LNPs containing SKL-OlF contents ranging from 10 to 50 wt % exhibited reproducible, reversible swelling upon exposure to O2/N2, with a volume increase of approximately 35% (Figure 7b). This change in volume also led to a morphological shift from spherical to core−shell (Figure 7c). The swelling behavior and the change in morphology were ascribed to the effective interaction of O2 with C−F bonds, promoting a decrease in the hydrophobicity of the fluorinated oleate chains. These polarity changes prompt the lignin-fluorinated oleic chains to migrate from the particle interior to the particle surface, increasing particle swelling and enhancing stability under acidic conditions (pH < 2.5). We also showcased the potential of these LNPs as tunable gas nanoreactors for preparing gold-lignin hybrid nanoparticles. This approach offers exciting prospects for designing advanced nanomaterials based on LNPs, potentially serving as catalytic vessels for asymmetric chemical reactions.

Building on the regulated assembly of lignin particles, photonic lignin materials exhibit unique optical properties, including structural coloration, due to the periodic arrangement of their nanoscale components. First demonstrated by Wang and co-workers,61 photonic lignin materials are gaining traction due to their ability to produce individual and rainbow colors as an alternative to photonic materials prepared from synthetic polystyrene latex particles.62,63 These materials hold great potential for various applications from biomedicine to environmental monitoring.
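For structural coloration of the kind mentioned above, the peak reflected wavelength of a colloidal photonic crystal is commonly estimated with the Bragg−Snell relation. The sketch below is a generic illustration of that relation, not a calculation from the cited works; the refractive index, packing fraction, and particle size are all assumed values.

```python
# Generic Bragg-Snell estimate of the structural color of an fcc colloidal
# crystal: lambda = 2 * d_111 * sqrt(n_eff^2 - sin^2(theta)).
# All parameter values are assumptions for illustration.
import math

def bragg_snell_wavelength(d_particle_nm, n_particle=1.61, n_medium=1.0,
                           fill_fraction=0.74, theta_deg=0.0):
    d_111 = 0.816 * d_particle_nm  # (111) plane spacing for fcc close packing
    n_eff_sq = (fill_fraction * n_particle**2
                + (1.0 - fill_fraction) * n_medium**2)
    theta = math.radians(theta_deg)
    return 2.0 * d_111 * math.sqrt(n_eff_sq - math.sin(theta)**2)

# ~200 nm particles with an assumed lignin refractive index of ~1.61
# would reflect in the blue-green region at normal incidence:
print(f"{bragg_snell_wavelength(200):.0f} nm")
```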
SUMMARY AND OUTLOOK

In this Account, we have summarized the different approaches to overcoming the main challenges associated with LNPs and highlighted our contributions to the field. Despite the exciting progress achieved, challenges still exist. First, the preparation of hydrophobic nanocomposites remains restricted to low amounts of LNPs (15−30 wt %) and relies mainly on acrylic monomers. In this sense, stabilized LNPs would be crucial to explore routes such as reverse emulsion processes, where the stability of the particles in organic solvent would promote interactions within the polymeric matrix, allowing for increased LNP concentrations. Second, there are currently two main methodologies for the stabilization of LNPs: (i) entrapment of a cross-linker during the self-assembly process and (ii) formation of a hydration barrier provided by fatty acids. Both strategies have allowed the chemical modification of LNPs in the aqueous dispersion state under acidic and basic conditions. However, so far there are no examples of chemical functionalization of LNPs in organic solvents. Therefore, future works should assess the possibility of conducting chemical reactions with LNPs in green organic solvents. Among them, polymerization-induced self-assembly (PISA) processes would be of broad interest to obtain hybrid nanostructures with multiple morphologies based on LNPs, possibly allowing access to advanced materials such as nanomotors, where morphology control is crucial. Third, harnessing the valuable properties of lignin along with added stimuli-responsive functionalities is worth investigating to develop advanced materials with a favorable carbon footprint. However, thus far, these systems are limited to the introduction of a single stimulus, constraining their range of application. In this sense, the next generation of stimuli-responsive LNPs should address the introduction of multiple stimuli in a predictable manner. If successful, these advancements will allow for "programmable" control of various stimuli and even their combination, enabling the development of complex cascade processes that mimic biological systems. First steps have already been taken with the development of multistimuli-responsive lignin microcapsules for the delivery of pesticides,64 but much effort introducing complementary stimuli is still needed. Such advancements could be of interest for the creation of advanced drug delivery systems based on LNPs (e.g., nanotheranostics). It is important to note that, since some of the presented materials function at the interface between materials science and biology, evaluating their biodegradability, toxicity, and recyclability still requires more understanding and effort. For example, recycling our lignin-polymeric composites (section 2) could be challenging, involving basic extraction to solubilize the biocomponents. Additionally, the degradation of fluorinated lignin esters (section 5) could release small fluorinated synthons into the environment. The biodegradability of these systems also remains a challenge, as chemical modification of lignin is expected to affect its biodegradation, given that free phenolic hydroxyl groups are the primary sites of enzymatic degradation of lignins in nature.65
Therefore, it is clear that further research is necessary to elucidate the end-of-life and environmental impacts of lignin-based nanomaterials in various emerging applications. Last but not least, we hope that this Account will inspire researchers to develop new methodologies aimed at maximizing and unlocking the potential of LNPs across disciplines, including chemical biology and materials science.

Figure 1. (a) Lignin distribution in lignocellulosic biomass and an example of the lignin structure. (b) Schematic representation of the colloidal self-assembly process of lignin into LNPs. Note: not drawn to scale.

Figure 3. (a) Schematic illustration of the preparation of biocatalyst-loaded LNPs (GOx-chi-LNPs) and their application as functional surfactants in enzyme-degassed Pickering emulsion polymerization to produce particulate lignin-polymeric nanocomposites. (b) Schematic illustration of the preparation of the GOx-chi-LNPs-polymeric composites by the melting process, and SEM micrographs of top and cross-sectional surfaces of PS-GOx-chi-LNP composite films (scale bars: 1 μm). (c) Schematic illustration of the proposed interactions between hybrid LNPs and polymeric chains before and after deformation in tensile testing. Note: not drawn to scale. (d) Tensile stress−strain curves of PBMA and PS, and their composites with GOx-chi-LNPs. Adapted from ref 1. Available under a CC-BY 3.0 DEED license. Copyright 2021 Royal Society of Chemistry.

Figure 4. (a) Illustration of the preparation of oleic lignin nanoparticles (OLNPs): 1) base-catalyzed esterification of SKL with oleoyl chloride; 2) production of OLNPs via solvent-exchange precipitation from lignin-oleic acid esters. The orange color around the OLNP surface indicates the hydration barrier produced by the oleate chains. (b) Evolution of particle size for LNPs and OLNPs at pH 12.0. The colored dashed sections indicate the time-dependent aggregation/dissolution of the different particles. (c) TEM images of LNPs and OLNPs50 (scale bar: 100 nm). Inset digital images correspond to LNPs and OLNPs colloidal dispersions. Adapted from ref 2. Available under a CC-BY 4.0 DEED license. Copyright 2021 Wiley.

Figure 5. (a) Surface covalent functionalization of OLNPs50: left, base-catalyzed ring-opening of GTMA under basic conditions (pH 12.0); right, acid-catalyzed ring-opening reaction of GMA under acidic conditions (pH 2.0). OLNPs50 was used as a nucleophile for oxirane ring-opening. (b) Application of c-OLNPs50 in dye adsorption from aqueous solutions: illustration of the electrostatic interaction between c-OLNPs50 and negatively charged Congo Red, and digital images of dye removal from aqueous solutions. (c) Application of MA-OLNPs50 as an anticorrosion coating for metal surfaces: SEM image of a diagonally scratched surface of an MA-OLNPs50-coated Al specimen, and digital images before and after exposure of the cured MA-OLNPs50-coated Al specimen to saline water (5% NaCl) for 15 h. Potentiodynamic polarization curves (Tafel plots) of coated (MA-OLNPs50-coated Al, red line) and noncoated aluminum substrates (reference, black line) after exposure to a 5% NaCl solution at 25 °C for 15 h. Adapted from ref 2. Available under a CC-BY 4.0 DEED license. Copyright 2021 Wiley.
Figure 6. (a) Preparation of micellar particle gels and colloidal dispersions from sodium lignosulfonate (LS) and a poorly water-soluble lignin such as softwood kraft lignin (SKL, pictured). (b) Schematic model of the formation of micellar particles of lignosulfonate in the presence of softwood kraft lignin or other lignin grades poorly soluble below neutral pH. (c) Effect of LS:SKL mass ratio on particle size (hydrodynamic diameter, Z-average values based on DLS) and observed colloidal stability of the dispersions. (d) Rheological properties of colloidal lignin gels: dependency of the dynamic viscosity of an LS-SKL (5:1 w/w) dispersion on total lignin concentration, expressed as wt %, while maintaining a constant pH of 4.8. (e) Transmission electron microscopy (TEM) image of an LS + SKL colloidal dispersion (5:1 w/w) (scale bar: 100 nm). Adapted from ref 3. Available under a CC-BY 3.0 DEED license. Copyright 2023 Royal Society of Chemistry.
8,127
2024-07-04T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Insights into the Role of Biopolymer-Based Xerogels in Biomedical Applications

Xerogels are advanced, functional, porous materials consisting of ambient-dried, cross-linked polymeric networks. They possess characteristics such as high porosity, large surface area, and an affordable preparation route; they can be prepared from several organic and inorganic precursors for numerous applications. Owing to these desirable properties, xerogels have been found suitable for several medical and biomedical applications; their high drug-loading capacity and ability to maintain sustained drug release make them highly desirable for drug delivery. As biopolymer-based, chemical-free materials, they have also been utilized in tissue engineering and regenerative medicine due to their high biocompatibility, non-immunogenicity, and non-cytotoxicity. Biopolymers have the ability to interact with, cross-link, and/or trap several active agents, such as antibiotics or natural antimicrobial substances, which is useful in wound dressing and healing applications; they can also be used to trap antibodies, enzymes, and cells for biosensing and monitoring applications. This review presents, for the first time, an introduction to biopolymeric xerogels, their fabrication approaches, and their properties. We present the biological properties that make these materials suitable for many biomedical applications and discuss the most recent works regarding their applications, including drug delivery, wound healing and dressing, tissue scaffolding, and biosensing.

Introduction

In the past few years, we have witnessed the development of various novel functional materials from different precursors. Xerogels and aerogels are two examples of porous, structured materials that result from different drying techniques applied to wet gels [1]. The attractive and unique properties of such porous materials arise from the extraordinary flexibility and resilience of the sol-gel development process, which is combined with either ambient drying (xerogel) [2] or supercritical drying (aerogel) [3]. These materials have been prepared from several precursors, including silica [4], carbon [5], synthetic polymers [6], and biopolymers [7]. Biopolymeric xerogels possess different physical, chemical, mechanical, and biological properties, depending on several factors, including the precursor material/s, solvent medium, and drying conditions [7]. These factors also influence the shrinking of the biopolymeric gels, leading to increased density and reduced porosity [8]. The structure, shape, and morphology of xerogels can be controlled in both the synthesis and drying phases, but their porosity remains less than that of aerogels of the same precursor/s.

A xerogel is defined as a porous, structural material that can be obtained via the evaporative drying of any precursor's wet gel. Although the porosity and surface area of xerogels are lower than those of aerogels, they are characterized by their easy and inexpensive fabrication, better mechanical stability, and higher density compared with aerogels [28].

Fabrication Techniques

The fabrication of xerogels generally consists of forming the polymeric hydrogel and drying that hydrogel in a way that retains (at least in part) its porous texture after drying [29]. The process varies from one polymer to another, and drying conditions also differ based on the solvent and precursor material/s used. Pectin xerogel has been prepared from its alcogel.
The authors used a mild temperature (60 °C) for drying under vacuum conditions for 4 days until the complete drying of the alcogel [12]. The authors reported that, in order to prevent a major collapse during the drying process, ionic gelation is a necessary step. A massive shrinkage of around 90 vol% commonly occurs after evaporative drying due to structural collapse, leading to an increase in the density of the material and a reduction in its porosity. The attractive properties of biopolymeric porous hydrogels arise from their extraordinary flexibility during the sol-gel phase, which is mostly combined with various drying techniques, leading to the formation of the desired xerogel. Cellulose xerogel has been fabricated using a facile approach consisting of three steps: partial ionic liquid dissolution of a cellulose suspension, non-solvent rinsing, and drying [30]. In a different study, cellulose nanofiber xerogels were fabricated by Toivonen et al. [31] through a solvent exchange process (with octane), vacuum filtration of their solvent dispersion, and finally, ambient drying. The authors reported a mesoporous xerogel with good porosity and surface area. Melone et al. [32] suggested a new, economically affordable synthesis protocol for the design of novel xerogels based on the cross-linking of TEMPO-oxidized cellulose nanofibers (TOUS-CNFs) and branched polyethyleneimine. The xerogel exhibited high adsorption capability for different organic pollutants, indicating its potential for water decontamination. In a different work, the authors were able to prepare different xerogels with attractive properties by cross-linking TEMPO-oxidized and ultra-sonicated cellulose nanofibers [33]. The drying step is the most important in most cases of biopolymeric xerogel fabrication. It directly affects most of the physical and morphological properties of the material.
Xerogels and aerogels are the two closest relatives among polymeric substances, with slight differences in terms of fabrication approaches and properties. Unlike aerogels, xerogels cannot be formed from pure nanocellulose or other non-gel-forming polymers [27,34]. Such biopolymers require cross-linking in order to form gels, followed by drying of these gels to obtain xerogels [35]. Chitosan-silica xerogel was prepared by sol-gel and emulsification-cross-linking [36]. The addition of 20 wt% of SiO2 was found to be enough to make the xerogels exhibit a regular spherical shape with sufficient dispersity and a uniform microstructure for drug delivery applications. Moreover, compared with the pure chitosan xerogel-based microspheres, this hybrid showed significantly improved in vitro bioactivity in addition to good drug loading capacity and sustained release. Figure 1 presents an illustration of biopolymeric xerogel fabrication and the difference between biopolymeric xerogels and biopolymeric aerogels.

Properties and Advantages of Xerogels

A xerogel is a solid, porous material resulting from the slow drying of hydrogels at room temperature, with unconstrained shrinkage depending on the type of precursor/s. Xerogels differ from aerogels in many aspects, including their shrinkage ratio, porosity, specific surface area, and bulk density [37]. Xerogels generally undergo greater shrinkage than aerogels and thus have lower porosity, lower surface area, and greater bulk density. Groult et al. [12] compared the properties of pectin xerogels and aerogels and found that, in order to prevent a major collapse during the drying process, ionic gelation is a necessary step. The xerogels had a bulk density and porosity of 1.057 g/cm3 and 29.5%, respectively, compared with the pectin aerogels, which had 0.083 g/cm3 and 94.4%, respectively. The xerogels exhibited a higher loading efficiency of 94% compared with the aerogels' loading efficiency, which was recorded to be 62%. The mechanical properties of xerogels vary depending on the type of precursor materials; in most cases, xerogels possess better mechanical properties than aerogels due to their lower porosity and higher bulk density [38]. Similarly, Ganesan et al.
[39] prepared cellulose-based xerogels and aerogels and compared their characteristics, as presented in Figure 2. The authors found that the aerogels possessed significantly higher porosity, ranging between 92.7 and 96.4%, while the xerogels only possessed a porosity of 70.2 to 80.3%. The properties of biopolymeric xerogels are highly influenced by two main factors, the precursor material/s and the liquid-vapour interface, in addition to the solvent medium, which affects the drying process [13]. Thus, changing these factors will lead to xerogels with different physical and morphological properties. Solvents such as ethanol have surface tension values similar to isopropanol, and research has reported that using these two solvents to prepare xerogels under the same conditions could yield xerogels with different physical properties due to the difference in vapour pressures [40]. Pramanik et al. [9] used nanocellulose in different mass ratios to improve the mechanical strength of polyvinyl alcohol xerogels. The authors reported that increasing the nanocellulose content led to a significant enhancement in the thermal properties of the xerogel. However, xerogel rupture occurred at a higher nanocellulose content (18%) due to the formation of weak cellulose-rich regions. The addition of this much nanocellulose to the polymeric matrix increased the brittleness of the xerogels, which is the main cause of xerogel fracture.
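The porosity figures above follow directly from bulk density once the skeletal density of the polymer is known, via porosity = (1 − ρbulk/ρskeletal) × 100. A quick sketch of this arithmetic, using the pectin values quoted earlier and an assumed skeletal density of about 1.5 g/cm3 (inferred here, not reported), reproduces both numbers:

```python
# Porosity from bulk and skeletal density: (1 - rho_bulk/rho_skeletal) * 100.
# The skeletal density of pectin (~1.5 g/cm^3) is an assumed value here.
def porosity_percent(rho_bulk, rho_skeletal=1.5):
    return (1.0 - rho_bulk / rho_skeletal) * 100.0

print(f"pectin xerogel: {porosity_percent(1.057):.1f} %")  # ~29.5 %
print(f"pectin aerogel: {porosity_percent(0.083):.1f} %")  # ~94.5 %
```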
Silk fibroin-based xerogels possess great water absorption capacity; Cheng et al. [23] reported that their xerogels were able to absorb up to 90 times their own mass of water within a minute, in addition to having great hemostatic properties, making such materials suitable for absorbing other body exudates. Several attempts have been made to produce aerogel-like xerogels under ambient conditions to minimize the shrinkage. However, the resulting xerogels in most cases inevitably took the form of thin films with relatively low porosity [7]. Prakash et al. [41] developed a unique approach of exchanging the hydrogel's solvent for a solvent with a lower polarity than water, such as pentane or hexane, to reduce the capillary force and thus produce xerogels with higher porosity. Other materials, such as organosilicons, have been introduced into xerogels to enhance their optical transparency and make them exhibit rubbery compression [42]. Cellulose nanofiber xerogels were fabricated through a solvent exchange process, yielding mesoporous, film-like materials [31]. The xerogel possessed 60% porosity and a 200 m2/g specific surface area, which is considered close to the properties of aerogels. The characteristics of biopolymeric xerogels are highly influenced by the preparation conditions, as they directly affect the shrinkage of the hydrogels.

Suitability of Biopolymeric Xerogels in Biomedical Applications

Toxicity evaluation is very important for any medical application in which the material will directly contact human or animal cells. Although many natural materials do not show significant toxicity to living cells, the preparation conditions may alter the chemistry of these materials and thereby their biological effects [43,44]. Biopolymeric xerogels are dried forms of the biopolymer precursor/s; they retain the chemical and biological characteristics of that biopolymer/s [45]. Several xerogels have been prepared without any need for further chemical addition or modification, but in other cases, natural compounds are added to extend the applications, such as adding essential oils as antibacterial agents. Biopolymers are known for being biocompatible and non-cytotoxic; they have been evaluated in several forms, including raw biopolymers [46], films [47], membranes [48], composites [49], hydrogels [50], aerogels [51], and even xerogels [14]. Although the number of cytotoxicity evaluations of biopolymeric xerogels is limited compared with aerogels, aerogels and xerogels are prepared on the same principle, differing only in the drying process, and thus both are highly biocompatible, non-cytotoxic, and allow the attachment and migration of cells [52]. Refer to Table 1 for a summary of the cytotoxicity and biocompatibility evaluations of biopolymeric xerogels.

Table 1. Cytotoxicity and biocompatibility evaluations of biopolymeric xerogels (material; evaluation; cell type or model; main outcome; reference):
- Chitosan-gelatin xerogel; hemostasis and biocompatibility evaluation; in vitro and in vivo; improved blood clotting compared with commercial hemostats [14]
- Collagen-silica xerogel; cell culture experiments; human monocytes; the xerogel promoted the differentiation of monocytes into osteoclast-like cells [53]
- Carbon xerogel; cytotoxicity test; fibroblast cells; the xerogel was biocompatible, and the presence of carbon fibers increased cell proliferation [54]
- Chitosan-coated mesoporous silica xerogel; cytotoxicity assays; mouse myoblast cell line; no obvious cytotoxicity even after 7 days of exposure [55]
- Silk fibroin protein xerogel; hemostasis experiments; in vitro and in vivo (rabbit ear); good hemostatic properties both in vitro and in vivo [23]
- Chitosan-poly(vinyl alcohol) xerogel; cytotoxicity and migration rate; mouse embryonic fibroblasts; significant cell proliferation and migration rates and high biocompatibility [56]
- Alginate-hydroxyapatite aerogel; cytotoxicity, viability, and migration; mesenchymal stem cells; highly biocompatible, allowed attachment and migration [57]
- Collagen-silica xerogel; cell proliferation assay; preosteoblast cells; good biocompatibility and a high level of osteoblast differentiation [58]

Biopolymeric Xerogels in Biomedical Applications

Biopolymeric xerogels are porous networks with many unique and desirable properties that have been widely studied for different biomedical applications, including controlled and sustained drug delivery, wound dressing and healing, tissue engineering scaffolds, and other applications [23]. Owing to the biocompatibility, non-cytotoxicity, and non-immunogenicity of biopolymers, biopolymeric xerogels are considered a safer option than inorganic and synthetic materials in medical applications [1,59].

Drug Delivery

Xerogels have been extensively studied for their potential use in drug delivery since their discovery. Owing to their porous texture, the ability to control their pore structure, and their large surface area, they have attracted the attention of scientists in many pharmaceutical applications. Such desirable characteristics favour drug loading and allow for better control of drug release behavior [60]. Zhou et al. [16] used a poly(ε-caprolactone)-chitosan-silica xerogel prepared by a green fabrication route for tetracycline hydrochloride delivery. The presence of silica in the xerogel significantly enhanced its thermal stability and endowed it with good in vitro bioactivity and drug release behavior. The ability to modify the surfaces of biopolymers within the xerogel facilitates drug incorporation at higher loading capacity and more sustained release. In a recent study, an alginate-based xerogel was modified using g-poly(methacrylic acid; AGM2S) for insulin delivery toward wound care [61]. The authors reported significant improvement in physical stability, good swelling, and low degradation of the modified xerogel. More than 70% of the loaded insulin was released from the xerogel within two days, which modulated the healing response [61]. In a different study, a novel xerogel was prepared from silica and poly(ethylene glycol) by a facile sol-gel route and showed sustained release of the antibiotic enrofloxacin [62]. The unique properties and facile fabrication of xerogels permit the slow release of drugs, making them a better option for sustained drug delivery applications. Different precursors consisting of naturally available diatomaceous earth microparticles have been used for the first time in xerogel fabrication [20]. These unique xerogels were modified to enhance their drug loading capacity using a facile sol-gel method, resulting in a pH-sensitive micro drug carrier, which was evaluated for diclofenac sodium delivery. The authors reported a significant increase in drug loading capacity and sustained drug release fitting the zero-order model.
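In a zero-order model, the cumulative amount released grows linearly with time, Q(t) = k0·t, so fitting reduces to estimating a single rate constant. The sketch below fits hypothetical release data (not data from the cited studies) by least squares:

```python
# Fitting cumulative drug release to a zero-order model Q(t) = k0 * t.
# The time/release data below are hypothetical, for illustration only.
import numpy as np

t = np.array([0.0, 6.0, 12.0, 24.0, 36.0, 48.0])   # time, h
Q = np.array([0.0, 9.5, 18.0, 36.5, 54.0, 71.0])   # cumulative release, %

# Least-squares slope of a line through the origin: k0 = sum(t*Q) / sum(t^2)
k0 = np.sum(t * Q) / np.sum(t * t)
Q_pred = k0 * t
r2 = 1.0 - np.sum((Q - Q_pred) ** 2) / np.sum((Q - Q.mean()) ** 2)
print(f"k0 = {k0:.2f} %/h, R^2 = {r2:.3f}")
```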
Križman et al. [63] fabricated silk fibroin-based xerogels and evaluated their potential for long-acting delivery of the hormone estradiol (Figure 3). Ethanol was used in the preparation process, acting both as a dissolving agent for the drug and as an accelerator of the gelation process. The authors were able to achieve sustained drug release of up to 129 days from the xerogel delivery system, suggesting the great potential of such biopolymeric xerogels for the prolonged release of hydrophobic drugs.

Antibacterial and Wound Healing Applications

The process of wound healing is a complex and dynamic process consisting of several stages and lasting days or even weeks, depending on multiple factors such as the type of wound, its depth, microbial colonization, and the patient's immune system, to enable the injured skin to restore itself [64]. Hydrogel-based antibacterial materials [65-67] have been widely used in wound healing applications, but they have the drawback of requiring gauze or other adjuvants to be applied to a bleeding wound. Furthermore, the overly moist environment created by hydrogels is not conducive to promoting wound healing and the scabbing effect, especially at the early stages of wound formation [68]. Deep wounds may favor the growth of anaerobic bacteria, leading to severe inflammation and suppuration [69]. Xerogels, which can be customized into super-hydrophobic and/or super-adhesive functional materials, have been used to overcome these drawbacks [70]. In a recent investigation, Huang et al. [71] fabricated a novel xerogel with good mechanical properties, using silver nanoparticles as an antibacterial agent. The hybrid xerogel was able to rapidly capture bacteria and kill 99.9% of E. coli and 99.85% of S. aureus through the electrostatic interactions of the disulfide groups. Although silver nanoparticles have been linked with minor adverse health effects, the authors reported good biocompatibility and non-toxicity for the xerogel [72]. Natural antibacterial agents, such as plant essential oils and extracts, can also be loaded into xerogels and used for wound healing. Plant polysaccharide-based xerogels are characterized by their high biocompatibility, high biodegradability, and high water absorption capacity [34]. Owing to the abundant distribution of surface functional groups, they have the potential to cross-link with natural antibacterial agents. Chitin and chitosan are the most widely used animal-based biopolymers in wound healing applications due to their special properties, including bactericidal and antifungal characteristics, high permeability to oxygen, and healing activity through stimulating fibroblast proliferation [51]. Deon et al. [73] used a silica/titania magnetic xerogel to immobilize chitosan-stabilized gold nanoparticles as an antibacterial system. Owing to the synergistic effect of chitosan and gold nanoparticles, the surface reactivity of titania, and the porous structure and magnetic response of silica, the xerogel system possessed strong antibacterial activity, even at an extremely low gold content. Using two or more biopolymers in xerogel fabrication was found to enhance the properties of the material and limit the shrinkage after drying; a porous xerogel was fabricated using chitosan in combination with sodium polyacrylate and polyethylene glycol for wound treatment and hemorrhage control [74].
Chitosan was used as an antimicrobial agent and was typically cross-linked with different organic or inorganic materials, such as gelatin and tannic acid, which played a hemostatic role [75]. Gelatin is a biopolymer that is extensively used in wound and skin care applications due to its ability to activate platelet aggregation, and it can also act as an absorbable hemostatic agent [75]. Patil et al. [14] used the two biopolymers to prepare a highly porous xerogel as an efficient, multimodal topical hemostat (Figure 4). The authors ionically cross-linked gelatin and chitosan with sodium tripolyphosphate and were able to achieve in vitro >16-fold improved blood clotting compared to available commercial materials. The xerogel displayed good platelet activation and promoted the generation of thrombin, which is very important in wound healing applications. The same authors conducted an in vivo study of their xerogel on a lethal femoral artery injury and reported hemostasis in 2.5 min, significantly faster than commercial Gauze (4.6 min) and Celox (3.3 min), in addition to easy removal from the wound. Although xerogels have not yet been commercialized for this purpose, in the coming years we will likely witness the utilization of these materials in wound healing applications, as they have great potential as topical hemostatic agents and can be used to save precious lives.

Figure 4. Chitosan-gelatin xerogel composite loaded with silica nanoparticles and calcium for rapidly halting blood loss, showing the interaction between the biopolymers and its wound healing properties. Adapted with permission from Patil et al. [14]. Copyright 2022 Elsevier.

Tissue Engineering

Porous biopolymeric xerogels have also been used in tissue engineering scaffolds, as the easy adjustment of pore size and structure, in addition to their high biocompatibility, makes them a highly favorable form of these materials for such applications. A porous chitosan/berberine hydrochloride composite xerogel was prepared for tissue regeneration and hemostatic applications [76]. This biopolymeric xerogel exhibited good antibacterial activity, hemostatic properties, and fast degradability after immersion in phosphate-buffered saline. The authors reported good biocompatibility and strong hemostatic potential, as it was composed only of natural materials, which implies that it is a promising material for skin regeneration and hemostatic applications. The unique properties of some biopolymers, such as the antimicrobial activity of chitosan and the promotion of cell growth by collagen and silk fibroin, have made their xerogels highly favorable in tissue engineering and regenerative medicine [77]. Wu et al. [78] fabricated a novel bioactive hybrid xerogel based on silk fibroin as the precursor material, with silica to enhance the mechanical properties and CaO-P2O5 to enhance the xerogel's properties for bone regeneration applications.
The authors reported excellent porosity and pore structure for their xerogel, and adding silica significantly enhanced its mechanical properties. The xerogel exhibited profound bioactivity once immersed in a simulated fluid due to the hydroxyapatite layers formed on its surfaces. The xerogel was biocompatible, although it showed slight toxicity to MC3T3-E1 cells, which was attributed to the effect of silica on the cells. In a similar study, Lee et al. [58] fabricated a hybrid xerogel from calcium, silica, and collagen for bone regeneration applications. The authors used calcium to promote the proliferation of bone cells and silica to enhance the mechanical properties of collagen. Owing to the homogeneous mixing and the incorporation of silica into the collagen matrix, the xerogel did not form any by-products, and it showed excellent bioactive characteristics. The hybrid xerogel expressed a better osteoblastic phenotype than xerogels of pure collagen or pure silica. Elshishiny & Mamdouh [56] reported the fabrication of novel tri-layered, asymmetric, porous xerogel scaffolds for skin regeneration applications. The scaffold consisted of an upper layer of electrospun chitosan-poly(vinyl alcohol) and a lower layer of their regular xerogel, fixed together with a third material, fibrin glue, as a middle layer. This novel fabrication showed promising scaffold-swelling capability in addition to a high absorption capacity for wound exudates. The porosity of the xerogel provided an optimal environment for fibroblast migration and proliferation. In a recent study, Rößler et al. [79] used three-dimensional (3D) plotting of a silica-collagen hybrid xerogel scaffold in another biopolymeric matrix consisting of alginate (Figure 5). The authors used viscoelastic alginate as a matrix to enhance the biocompatibility and binding properties of the xerogel scaffold, and they reported that alginate concentration is the key to controlling the shape regularity of the xerogel granules.

Biosensing

Biopolymeric xerogels have also been utilized in sensing applications for many medically important parameters, such as glucose level, uric acid, and cholesterol. Xerogels possess desirable biosensor properties, such as a porous structure and high surface area, making them highly advanced detection tools [80]. Khattab et al. [13] developed an easy-to-use, smart, microporous cellulose xerogel-based colorimetric sensor by immobilizing a bromocresol purple chromophore into a cross-linked carboxymethyl cellulose xerogel matrix. Proton shifting from the hydroxyl group in the bromocresol purple dye to ammonia nitrogen enabled the identification of ammonia gas. Unlike dense materials and metal-based xerogels, biopolymeric xerogels are distinguished by their non-toxicity, lighter weight, and larger surface area; thus, they are suitable for the identification of different parameters in both liquid and gaseous analytes [81]. The home-based detection and quantification of common analytes, such as glucose monitoring, in addition to routine environmental monitoring, is exceedingly challenging and requires high measurement accuracy.
Xerogel-based biosensors have attracted attention for this purpose, as they are inexpensive, robust, and reusable materials able to meet all the requirements of biosensors [27]. The fabrication of xerogel-based biosensors begins with the immobilization of active agents that are able to detect the desired parameters. Numerous active compounds, such as antibodies, active receptors, enzymes, cells, and regulatory proteins, have been used for this purpose [82]. Three main approaches have been reported for the immobilization of active agents in xerogels: entrapment, physisorption, and covalent attachment [83]. The physisorption approach is the simplest, but it has the drawback of random orientation of the active agents on the xerogel, which could leave them unable to access the target molecule, thus lowering the accuracy of the xerogels [84,85]. To solve this issue, covalent attachment, which generally forms more stable interfaces, was developed. However, this approach also suffers from the partial orientation of some kinds of active agents, in addition to being more expensive and time-consuming [82]. Freeman et al. [86] prepared the first generation of novel amperometric glucose biosensors, but they used several synthetic materials instead of biopolymers, which has the drawback of toxicity. To address this, Alharthi et al. [87] recently used a nanocellulose acetate-based xerogel for the colorimetric detection of urea. The authors reported that their sponge-like, microporous xerogel was highly sensitive to urea because it used a urease enzyme as a catalytic agent and triarylmethane as a spectroscopic chromophore. The porous xerogel allowed for the in-situ integration of the triarylmethane probe, which enhanced the detection process and increased the accuracy of detection. Similarly, Abdelrahman et al. [88] developed a highly sensitive, reversible, and cost-effective biopolymeric xerogel for ammonia vapor detection. The microporous cellulose xerogel exhibited naked-eye colorimetric responsiveness immediately upon exposure to ammonia vapour.
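Colorimetric sensors of this kind are typically quantified with a linear calibration curve relating absorbance to analyte concentration (Beer−Lambert behavior), followed by inverse prediction for unknown samples. The sketch below illustrates that workflow with hypothetical numbers; it is not data from the cited sensors.

```python
# Linear calibration of a colorimetric sensor (Beer-Lambert behavior) and
# inverse prediction of an unknown. All numbers are hypothetical.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # standards, mM
absorbance = np.array([0.02, 0.11, 0.20, 0.41, 0.79])

slope, intercept = np.polyfit(conc, absorbance, 1)   # A = slope*c + intercept

def predict_concentration(a_sample):
    """Invert the calibration line to estimate concentration."""
    return (a_sample - intercept) / slope

print(f"Unknown sample: {predict_concentration(0.30):.2f} mM")
```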
However, this approach also suffers from the partial misorientation of some kinds of active agents, in addition to being more expensive and time-consuming [82]. Freeman et al. [86] prepared the first generation of novel amperometric glucose biosensors, but they used several synthetic materials instead of biopolymers, with the drawback of toxicity. To address this issue, Alharthi et al. [87] recently used a nanocellulose acetate-based xerogel for the colorimetric detection of urea. The authors reported that their sponge-like, microporous xerogel was highly sensitive to urea because it used a urease enzyme as the catalytic agent and triarylmethane as the spectroscopic chromophore. The porous xerogel allowed the in-situ integration of the triarylmethane probe, which enhanced the detection process and increased the accuracy of detection. Similarly, Abdelrahman et al. [88] developed a highly sensitive, reversible, and cost-effective biopolymeric xerogel for ammonia vapor detection. The microporous cellulose xerogel exhibited a naked-eye colorimetric response immediately upon exposure to ammonia vapor. The application of biopolymeric xerogels in biosensing has not yet been extensively studied; only limited work has been published, but we believe that these materials have great potential for this application. Challenges and Future Prospects Biopolymeric porous materials such as xerogels and aerogels are still at an early experimental stage in many biomedical applications. Only a limited number of materials have entered clinical trials, and most are still in the developmental and laboratory phases. Although a significant number of studies have shown that biopolymeric xerogels are highly suitable for biomedical applications, clinical and long-term evaluations of these materials are necessary before they can be commercialized. Smart and controlled delivery has been achieved in some experiments, especially in the case of cancer [89], but long-term evaluation and evaluation across different cell types have yet to be explored. Numerous challenges remain for other pathologies, as most current research focuses only on delivering specific drugs, particularly anticancer, anti-diabetic, and antimicrobial agents, and many studies are in vitro or only short-term in vivo, without considering the effect of these materials on other biological parameters of the human body. Toxicity and biocompatibility experiments are, in most cases, carried out using one cell type under simulated conditions [90]; the real conditions inside our bodies may differ, so such materials may not be as biocompatible as they seem. The full effects of these biopolymeric materials on the human body have not yet been determined. Future generations of therapeutic biopolymeric materials carrying antibiotics, antibodies, hormones, peptides, genes, and other agents should minimize undesirable side effects, not increase them. The future of biopolymeric xerogels requires serious collaboration among researchers worldwide, industry, and regulatory agencies to ensure the safety and effectiveness of these therapeutic platforms, and to evaluate whether xerogels can be produced in adequate quantities and of adequate quality to meet the expected demands of society.
7,751.8
2022-05-29T00:00:00.000
[ "Biology", "Materials Science" ]
Predicting New TV Series Ratings from their Pilot Episode Scripts Empirical studies of the determinants of the ratings of new television series have focused almost exclusively on factors known after a decision has been made to broadcast the series. The current study directly addresses this gap in the literature. Specifically, we first develop a parsimonious model to predict the audience size of new television series. We then test our model on a sample of 116 hour-long, scripted television series that debuted on one of the four major US television networks during the 2009-2014 seasons. Our key predictor is the size of the main component of the text network developed from the script of the pilot episode of each series. As expected, this size measure strongly explains the number of viewers of the new series' first several episodes. Introduction In the spring of 2011 an article appeared in the Wall Street Journal entitled "The Math of a Hit TV Show" (Chozick, 2011). Of particular note was the article's detailed description of the gauntlet that is the new TV show development process. In its first stage, which we are told begins in early summer, each of the big four US networks receives about 500 "elevator pitches" or loglines, each of which describes the basic idea for a new series. Next, through a process unique to each network, the review of the 500+ pitches results in about 70 scripts being commissioned. In the third stage, approximately one-third of these scripts get the green light, i.e., the show creators are given money to produce a proof-of-concept pilot episode. Once the proof-of-concept pilots have completed filming, they are subjected to varying levels and kinds of marketing research, focus-group testing being one of the most common. Depending on a variety of factors, not the least of which is the strength of a network's current slate of shows, anywhere from 4 to 8 of these completed pilots are given slots in the network's lineup. At this point the show's creative team ramps up its staff, particularly with writers who begin penning the next several episodes of the series. No sooner do episodes appear on the air than their ratings and viewership are carefully scrutinized. Based on how well those episodes are received, changes may be made to the characters and story lines. Even the order of episodes may be altered. In the worst-case scenario, shows falling far below ratings or audience expectations can be cancelled or replaced after as little as 2-3 weeks. The networks' post-New Year schedules always contain a raft of mid-season replacements and, like the survivors from the fall line-up, they may be subject to changes right up to and including the airing of the season's final episode. In mid-May, at the industry confab known as the "up fronts," the networks announce their lineups for the coming fall's season. A few weeks later, as the networks once again start receiving pitches, the development process for the next year begins anew. Also of note in the article was its description of the strategies that executives, producers, and show creators use to improve their odds of first getting a show on the air and then keeping it there. One such strategy was actually spelled out in the article's subtitle: "For New Shows, Networks Try Familiar, with a Little Twist." Just such an example was Grimm (2011), a cop show (familiar) with a twist (characters inspired by Grimm's fairy tales). Chief among the other strategies are focus-group testing and multiple rounds of script rewrites and revisions.
Networks are also said to differ in the mix and structure of the strategies they employ. For example, CBS, then and still the top-ranked network in total prime-time viewers, limits to just four the number of people providing input on new series: the CEO, the president of the entertainment division, the head of development, and the show runner. At other studios, the process is known to involve many more people, leading some to invoke the old adages about too many cooks spoiling the broth and camels being horses designed by committees. But no matter the structure of the decision process or the number of decision makers, one inarguable mathematical fact remains: the large majority of new shows fail within two seasons, thus falling very short of becoming hits (Bielby & Bielby, 1994; Nathanson, 2013). Such high failure rates matter because television networks earn a large portion of their revenue from the sale of blocks of time to advertisers. The prices that they charge for new shows are a function of the projected audience, in terms of both size and demographics. When it comes to series that have already been on the air for one or more seasons, there is already a wealth of information upon which to base those projections (Danaher, Dagger, & Smith, 2011). But with new shows, little if any of that information is known either when key development decisions are being made or when advertisers begin buying time slots, which is shortly after the up fronts. When a new series fails to deliver the projected audience, it runs the risk of cancellation. In addition, advertisers who bought time slots on the underperforming series are entitled to partial refunds or airtime on other shows. They are not, however, compensated for the opportunity cost of their wrong decisions. Because new shows typically comprise 20-40% or more of a network's fall lineup, they are a source of substantial uncertainty for both buyers (advertisers) and sellers (studios and networks). Those who can manage that uncertainty stand to profit handsomely. As Litman (1979) noted, "program executives who can successfully predict how viewers will respond to different types of programs can be expected to make fewer development and scheduling mistakes, hold down programming costs, and win the ratings game." Unfortunately for both the networks and their advertisers, predictive models of the ratings performance of new television series are both scarce and inaccurate (Napoli, 2001). Further complicating matters is the dearth of empirical models for predicting new series performance, either from the pre-production stage or after the fact. There is, however, a small but burgeoning literature in a closely-related field, that of cultural economics, that has developed models for predicting box office revenues using only information known during pre-production. Of particular importance in those models is the utilization of variables derived from what appears on the page, i.e., what is found through the textual and content analysis of the scripts. These are factors that have gone all but unnoticed in the academic literature on television ratings. To bridge this gap, we draw upon the film studies and cultural economics literature to develop an early-stage model for predicting total network viewership of new dramatic television series. In particular, we focus on a text-analytical measure developed by Hunter, Smith, & Singh (2016).
As predicted, we find that even when controlling for the track record of success of a new show's creators, the originality of that show's concept, and the television network on which the show appears, this text-analytical measure is a statistically significant predictor of audience size through at least the first five episodes of a new series' first season. The remainder of this paper is organized as follows. In the next section we summarize the academic literature on television ratings and argue for the adaptation of its models to the study of new television show ratings. In the third section we describe our data and statistical methods. The fourth section contains a discussion of the results of the analysis, while in the final section we discuss their implications. Literature Review As noted in the introduction, whether measured in absolute or relative terms, the failure rate of new television shows is very high and has been for decades (Bielby & Bielby, 1994). Despite the fact that such failure rates are costly to industry participants, particularly the television networks and the advertisers, the determinants of success and failure remain poorly understood (Littlejohn, 2007). Compounding this problem is the fact that empirical research in the area of television studies has provided few, if any, useful insights or solutions. But as also noted in the introduction, there is a small body of relevant work in a companion literature, the field of cultural economics, that sheds light upon the prediction of television ratings. Specifically, we refer to three recent studies that explain variation in box office revenues using only variables that are known during the pre-production stages. The first of the three is authored by Goetzman, Ravid, & Sverdlove (2013), who investigated the "forward looking" nature of prices paid by movie studios for screenplays (p. 277). The authors predicted and found that prices were positive and highly significant predictors of the ensuing film's box office receipts. Because the screenplays were purchased very early in the development process, the implication is that price serves as a "signal for the perceived quality of the subsequent project" (p. 297). The second of the relevant studies is one by Eliashberg, Hui, & Zhang (2014), who used four groups of variables derived through textual and content analysis of screenplays. The first of these four groups was the film's genre, i.e., whether the film was a comedy, western, action, drama, etc. The second pertained to story line or content variables such as the likability of the protagonist, early exposition of information about the same, the presence of a surprise ending, or an unambiguous resolution to the central conflict. The third group consisted of semantic features of the text such as the total number of scenes and the average length of dialogs. The fourth and final group of predictors were two "bag-of-words" measures that captured styles and frequencies of individual words in the text. The authors used both human coders and computational methods to determine the levels of these variables in a sample of 300 shooting scripts of films released between 1995 and 2010. As predicted, they found that one or more variables in all four categories were strongly predictive of box office revenues. A third study attempting early-stage prediction of box office revenue is one by Hunter, Smith, & Singh (2016).
Similar to Eliashberg et al., they relied on text-derived variables in their analysis of the screenplays of 170 US-produced feature films released in 2010 and 2011. The specific method they employed was "network text analysis," a software-supported approach for constructing networks of interconnected concepts from documents. Consistent with research in the fields of educational psychology and socio-linguistics, they found that the size of the text network created from selected words in each film's screenplay was positively and quite significantly associated with opening weekend box office, even when controlling for several other covariates. Taken together, these three studies show that there are reliable predictors of box office performance whose values can be known or reasonably inferred at very early stages in the film development process. While film and television development are not identical, they are similar enough in both structure and intent, and sometimes personnel, that it is not unreasonable to examine whether the same relationships might also hold between textual properties of television scripts and subsequent performance. That said, one thing about television development and production that does not have its analog in film is the critical importance placed upon the pilot episode (Littlejohn, 2007). In particular, pilot episodes are supposed to "set the tone for the series" (Lindauer, 2011) and "to establish the characters and situations" that will recur episode after episode (Anders, 2012). And because the initial performance of the pilot so strongly impacts whether the series will get either an early cancellation notice or a full-season order (MacNabb, 2015; Kissell, 2015), we believe that the text-derived properties of the pilot episode in particular will be determinative of the ratings performance of the all-important first several episodes of a new series. Specifically, we thus hypothesize that, all else equal, the size of the text network of the teleplay of a new series' pilot episode will be positively associated with the series' initial ratings performance. Methods & Data In order to investigate the aforementioned hypothesis, data were collected on the total number of viewers of new prime-time, hour-long television series debuting during six recently-completed broadcast seasons (2009-2014). Following Napoli (2001), only shows debuting on the Big Four US television networks (ABC, CBS, NBC, and FOX) were included. We used several sources to determine which shows appeared during those seasons. These included TV Series Finale, TV.com, TV Guide, and Wikipedia, particularly the latter's series of "US Network Television Schedule" pages for these seasons. We identified a total of 136 new, hour-long, dramatic series that debuted in prime-time as part of the 2009-2014 television seasons. Six of these shows were eliminated from consideration because they were (co-)produced with or produced by foreign television networks and debuted in those countries before being seen on US network television. They were Rookie Blue (2009) and five others. We also eliminated five "back-door" pilots, i.e., episodes of long-running shows that introduced one or more guest characters for what would become a new series in the next television season. The back-door pilots were NCIS: Los Angeles and four others. Dependent Variable Our measure of the ratings performance for the 116 new series was the total broadcast viewership, as measured in millions of viewers, in each of the first five episodes of the first season.
We obtained the data from a number of sources including TV Series Finale, TV.com, TV by the Numbers, and the Wikipedia pages for each show, particularly the "Episodes" sub-sections which provide summary descriptions of each episode along with the viewership numbers. Because of the highly-skewed distribution of viewership, we log-transformed those quantities and assigned the resulting variable the name LOGVIEW. As shown in Table 1, below, LOGVIEW (the base-10 log of total viewers) averaged 6.79 for the first five episodes of the 116 new series in the sample. This value corresponds to 6.17 million viewers. The minimum and maximum values of LOGVIEW were 6.20 and 7.22, respectively. These correspond to the 1.58 million viewers who tuned in for the fourth episode of the ill-fated medical drama Do No Harm (2013) and the 16.5 million viewers of the pilot episode of the cyber-themed, action-adventure series Intelligence (2014). Independent Variable Several distinct approaches exist for creating networks from texts (Nerghes, Lee, Groenewegen, & Hellsten, 2015). They differ along a number of dimensions including the level of automation, whether and how words are abstracted to higher-order conceptual categories, and the nature of the underlying relationship used to connect the words or concepts. In this study we opted for Hunter's (2014) morpho-etymological approach, one which is semi-automated, which abstracts words into higher-order conceptual categories defined by common etymological root, and which connects conceptual categories according to their co-occurrence in "multi-morphemic compounds" (MMCs). MMCs may include, but are not necessarily limited to, open compounds (middle class, attorney general), closed compounds (parkway, gunshot), abbreviations and acronyms (WASP, HQ, SUV), blend words (brunch, biopic, guesstimate), hyphenated multiword expressions (state-of-the-art, glow-in-the-dark), infixes (un-bloody-believable, fan-blooming-tastic), appositional compounds (attorney-client, actor/model), hyphenated compounds (rapid-fire, wide-eyed), selected clipped words (internet, wi-fi), and pseudo-compound words (misunderstanding, overrated). As Hunter (2014) noted in his study of a sample of Academy Award-nominated original screenplays, MMCs are highly complex, as measured by the number of characters, and consist largely of unique, context-specific terminology, conceptual vocabulary, jargon, or lexicon of the kind that distinguishes film genres from one another. The first step in the creation of the text networks involved identifying the MMCs in each script. To accomplish this we first used the Generate Concept List and the Identify Possible Acronyms routines in the CASOS Institute's Automap software program (Carley & Diesner, 2005) to generate word lists for each script in the sample. Each word list was analyzed by a pair of the authors with the intent of identifying all of the MMCs contained therein. Then each pair, in conjunction with the corresponding author, reconciled all differences in coding choices. Across the 116 scripts in the sample, we identified 5861 unique MMCs appearing a total of 14,013 times. That makes for an average of almost 121 MMCs per script, or about 2-3 per page. The second step involved decomposing every MMC in each list into its constituent words. For example, the closed compound policeman comprises two words, police and man. Next, each constituent word was assigned to a conceptual category defined by its most remote etymological root.
Typically, the most remote root was Indo-European, as defined in the 3rd edition of the American Heritage Dictionary of Indo-European Roots (AHDIER). That source assigns over 13,000 English words to over 1,300 Indo-European (IE) roots. For example, the word police descends from the IE root pele-3, which means "citadel, fortified high place," while the word man descends from the IE root man-1, which means "man." This stage of the analysis was software-supported. Specifically, we first created a database containing the entire contents of the AHDIER. It maps over 13,000 unique words to nearly 1,300 different roots whose descendants co-occur in tens of thousands of MMCs, many of which were contained in our sample. We then automatically assigned over 83% of the constituent words to one of 752 Indo-European roots. The remaining 17% were instances where the etymological roots of constituent words were not Indo-European or did not exist. In the former case, the etymological roots provided in the American Heritage Dictionary of the English Language were used; most typically these were Latin, Greek, Germanic, or Old English. In the latter case, where words had no known etymological root, the base form of the word was used. The final stage was to calculate the size of the resulting network of concepts with the use of the UCINet software program (Borgatti, Everett, & Freeman, 2002). In social network analysis, the largest cluster of mutually-reachable nodes in a network is referred to as the "main component." Our measure of size was the number of links contained in the main component of the text network constructed from the MMCs contained in the script of a new series' pilot episode. Figure 1, below, depicts a portion of the main component of the text network constructed from the script of the pilot episode of Fox's dramatic series The Following (2013), as well as several of the network's minor components. The main component has 28 nodes, while the six minor components have a total of 25 nodes among them, with a range of 2-6 nodes apiece. As noted above, the nodes in the network are etymological roots, while MMCs are associated with the links between pairs of nodes. The MMCs in the displayed portion of the text network of The Following included closed compounds (madman, classroom, bloodbath, courtroom, Nevermore), acronyms (GPS, SUV, BAU, CNN), a hyphenated compound (college-aged), and two clipped words (ethernet and internet). As noted in the descriptive statistics displayed in Table 1, the average of the log of the number of links in the 116 text networks is 1.65, a value that corresponds to an average size of 56 links. The variable that contains the measures of network size for the series in the sample is named LOGLINK. From this variable we created a categorical variable, TOPLINK, which was coded "1" if the value of LOGLINK fell in the top quartile and coded "0" otherwise.
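To make this pipeline concrete, here is a minimal, hypothetical Python sketch of the network-size computation, using networkx in place of the Automap/UCINet toolchain the authors actually used. The police/man root assignments come from the paper's own AHDIER examples; the other words, roots, and MMCs are illustrative placeholders, not the authors' data.

```python
import math
import networkx as nx

# Hypothetical word-to-root lookup. The "police" and "man" entries
# follow the paper's AHDIER examples; the rest are placeholders.
root_of = {
    "police": "pele-3",     # "citadel, fortified high place"
    "man": "man-1",         # "man"
    "mad": "ROOT-mad",      # placeholder root
    "blood": "ROOT-blood",  # placeholder root
    "bath": "ROOT-bath",    # placeholder root
}

# A few MMCs from a script, already decomposed into constituent words
# (e.g., "policeman" -> police + man, "madman" -> mad + man).
mmcs = [("police", "man"), ("mad", "man"), ("blood", "bath")]

# Each MMC contributes a link between the roots of its constituents.
G = nx.Graph()
for w1, w2 in mmcs:
    G.add_edge(root_of[w1], root_of[w2])

# The main component is the largest set of mutually reachable nodes;
# the paper's size measure is the number of links it contains.
main = G.subgraph(max(nx.connected_components(G), key=len))
loglink = math.log10(main.number_of_edges())
print(main.number_of_edges(), loglink)
```

In this toy example the main component joins pele-3 and ROOT-mad through the shared man-1 node (2 links), while the bloodbath edge remains a minor component; TOPLINK would then simply flag the scripts whose resulting LOGLINK values fall in the top quartile.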
Concept Originality Our review of the television studies literature was unable to identify any prior empirical research that evaluated the influence of concept originality on a television series' ratings, be that series new or ongoing. That said, research in film studies, most notably that concerned with the determinants of box office revenues, has examined a closely-related question. Specifically, research has found that sequels in particular (Basuroy & Chatterjee, 2008) and adapted premises more generally (Hunter, Smith, & Singh, 2016) are associated with higher box office performance. As such, we include as a control variable in our statistical model a dummy variable named ADAPT, which was coded "1" if the show was adapted from prior source material, be that a novel, a comic book, a film or film franchise, another TV series (past or present), a stage play, etc., and coded "0" otherwise. Track Record Again, our review of the television ratings literature returned no empirical studies examining the impact of the track record of the creative team on a series' ratings performance. But as above, research on the determinants of box office has addressed a closely-related matter. In particular, Nelson & Glotfelty (2012) reported that the star-power of directors has a positive and statistically significant impact on the box office revenue of film projects to which they are attached. Further, Hunter, Smith, & Singh (2016) reported that the box office of a screenwriter's last film project is a positive and statistically significant predictor of the box office revenues associated with their current film project. In the present study, we developed a measure of the prior success of the new series' creator(s). Specifically, we used the Internet Movie Database (IMDb) to first determine the number of series for which the creator(s) had earned a writing credit for the script of the pilot episode. From among those credits, we determined how many of the shows had been renewed, i.e., had aired for at least two seasons. We created a Likert-scaled variable, RECORD, where creative teams with no prior successes were assigned a score of zero (n = 80), those with one prior success were assigned a score of one (n = 25), and those with two or more prior successes were assigned a score of two (n = 11). Broadcast Network Following Napoli (2001), we created dummy variables to capture unexplained heterogeneity among the four networks whose series are the object of this analysis. Because CBS has been the ratings and audience leader among the four networks over the entire span of our observations, we created three dummy variables representing the three other networks. Specifically, the first we created was named "ABC" to represent the American Broadcasting Company. It was coded "1" if the new series appeared on ABC and coded "0" otherwise. The two variables representing FOX and NBC were constructed in an analogous fashion. Table 2, below, contains the correlation matrix for all of the aforementioned variables. Excluded, however, are the correlations of the network dummy variables with one another. Results Table 3, below, contains the results of two random effects, generalized least squares (GLS) regression models and five ordinary least squares (OLS) regression models used to test our hypothesis. In all models the dependent variable is LOGVIEW, the log of the total broadcast viewing audience, while the key independent variable is TOPLINK. In all models, data from only the first five episodes of the new series' first season are used. The random effects regression is the appropriate choice in the first model because the independent and control variables are time-invariant. That is to say, their values don't change across the five episodes.
In the latter models, an OLS regression is appropriate because only one episode is considered at a time and thus there is no question of (in)variance across episodes. In short, the results of the regression analyses show very strong support for our hypothesis. In each model the coefficient associated with our key independent measure, TOPLINK, is positive and highly significant statistically. The first of the seven models described in Table 3 specifies a random effects model of all five episodes. In this model, coefficients were estimated on a sample of 571 observations from all 116 new dramatic television series, an average of about 4.9 episodes per series. While all variables in the model are significant, one of the two most highly significant is TOPLINK (β = 0.152, p < 0.0001, 1-tailed test), and in the predicted direction. The same holds true for the remaining five single-episode OLS regression models. In each instance, the coefficient for TOPLINK is positive and highly statistically significant (0.134 < β < 0.173, 0.0001 < p < 0.001, 1-tailed test). Two notable trends are evident. First, there is a substantial increase in the coefficient value, statistical significance, and proportion of variance explained (R²) between the models for Episodes 1 and 2. From that point on, all of these values monotonically decrease. When we recall that, almost without exception, the first episode's audience is the largest of the season, this suggests that the inclusion of an additional dummy variable to distinguish initial episodes from the others might be in order. As shown in the second GLS model, which includes just such a variable, this supposition is confirmed. Specifically, the beta coefficient of that variable was very highly significant (p < 0.0001), the overall R² of the model increased from 32.7% to 38.9%, and the significance of all other covariates stayed the same or improved. And because this variable is, by definition, not the same for all episodes, the within-sample R² value climbed from 0% in the first model to 44.5% in this one. More generally, it can be observed that all other covariates were significant at or beyond the p < 0.05 level (1-tailed test). With one notable exception, the signs of the coefficients were in the expected direction. That exception was associated with the dummy variable ADAPT. Instead of the expected positive relationship, like that found in film studies between adapted concepts and box office, our results showed that new series with adapted concepts had significantly smaller initial audiences than did new series with original concepts. Also noteworthy is the significance of the dummy variables representing the three broadcast networks, ABC, FOX, and NBC. They are each negative and highly significant in every model that we specified. This result confirms that, in comparison to CBS, which was the reference category, new series on these three networks had much smaller initial audiences.
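As a hedged illustration of this estimation strategy, the sketch below uses Python's statsmodels. The paper reports random-effects GLS and single-episode OLS models; a random-intercept mixed model stands in here for the former, and the file name and column names (series, episode, viewers, LOGLINK) are assumptions rather than the authors' actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per series-episode.
df = pd.read_csv("new_series_episodes.csv")

# Dependent variable: base-10 log of total viewers (LOGVIEW).
df["LOGVIEW"] = np.log10(df["viewers"])

# TOPLINK = 1 if a series' LOGLINK is in the top quartile of series.
cutoff = df.groupby("series")["LOGLINK"].first().quantile(0.75)
df["TOPLINK"] = (df["LOGLINK"] >= cutoff).astype(int)

formula = "LOGVIEW ~ TOPLINK + ADAPT + RECORD + ABC + FOX + NBC"

# Pooled model over episodes 1-5 with a random intercept per series,
# standing in for the paper's random-effects GLS specification.
re_model = smf.mixedlm(formula, df, groups=df["series"]).fit()
print(re_model.summary())

# Separate OLS model for each of the first five episodes.
for ep in range(1, 6):
    ols = smf.ols(formula, df[df["episode"] == ep]).fit()
    print(ep, ols.params["TOPLINK"], ols.pvalues["TOPLINK"])
```

The second GLS model described above would simply add an episode-one indicator (e.g., `+ FIRSTEP` built as `(df["episode"] == 1).astype(int)`) to the same formula.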
Discussion & Conclusion The results presented strongly support our hypothesis that there should be a positive and significant relationship between the size of the text network of the pilot episode of a new series and the size of the series' initial audience. There are several important implications of these findings that merit further discussion. First, recall that, unlike research on the determinants of box office, there is a dearth of research attempting to explain ratings or other measures of performance using only factors known during the early stages of production. This study is the first of which we are aware that addresses this gap in the literature. Like those studies of box office performance, this one emphasizes that both characteristics of the creative team and properties of the text of the script itself are significant predictors of performance, especially the latter. What this study adds to that small but burgeoning literature is further confirmation of the decision to model performance using a limited number of early-stage factors. Secondly, while our model does explain a high proportion of the variance in initial audience size, we should make clear the distinction between explaining variance with a model constructed from a sample of shows from the past and using that model to predict the audience size of shows currently appearing on network television. The results in Table 4, below, present our model's predictions for the seven series debuting in the fall of 2015 for which we were able to locate pilot scripts by December 1st, 2015: Blindspot, Code Black, Limitless, Minority Report, Quantico, Rosewood, and Wicked City. The second column contains the model's predictions of average viewership of the first five episodes. The third column reports the observed viewership numbers. The fourth column contains the difference between the observed and predicted amounts, expressed as a percentage. The smallest differences were for Limitless (+0.45%) and Quantico (+3.7%). The largest differences in absolute terms were negative. The two series in question were Wicked City (-57.8%) and Minority Report (-50.7%). Both under-performed the modest audience sizes projected for them. The fifth and sixth columns indicate the percentile ranking in the 2009-14 sample of the predicted and observed audience numbers. For example, the 8.98 million viewer prediction for Limitless falls just below the 80th percentile of initial audience figures for the 116 series in the sample. Minority Report's observed average of 2.32 million viewers is in the lowest 5% of that sample. The final column provides information about each series' status as of December 20th, 2015. Here we can see how early within the season important decisions were made concerning the fate of the shows. After just their third episodes, Wicked City was cancelled and Minority Report's initial order was reduced from 13 episodes to 10. Industry observers are taking the latter to be a signal of impending cancellation (Surette, 2015; Wagmeister, 2015; Piester, 2015). Both of these shows were performing well below even the modest audiences predicted for them by our model. On the positive side, Quantico, Rosewood, and Limitless received full-season orders after their third, fourth, and fifth episodes respectively, while Blindspot was renewed for a second season after its eighth. Further, after its fourth episode, Code Black received an order for six additional scripts. Notably, all five of these shows were performing above the audience levels that our model predicted. Taken together, the information contained in this table underscores the value of being able to anticipate a new series' early audience size and performance. Third, recall that very little empirical work, if any, has examined the series development process and the decision-making occurring in or across its early stages.
And while we are certainly not suggesting that an audience estimate by our model would be the only factor taken into account in key decisions such as whether and when to air a new series, to order new episodes, to cancel, or to renew, we very much mean to suggest that such estimates could be taken into account. Specifically, we maintain that our sample of new series, taken as a whole, represents an expanded set of "comps," i.e., a set of TV scripts that are comparable on many dimensions, one of which is our measure of text network size. Both predicting future performance and evaluating observed performance could potentially be informed by the estimates of our model. Fourth, we might further add that in a media landscape that saw a reported 409 scripted series in 2015, a number up 9% from 2014 and up 100% since 2009, the quality of scripts is a very real and pressing concern (Littleton, 2014). As Hibberd (2015) of Entertainment Weekly recently opined, "The problem is that as the number of shows increases, the typical audience size for each show declines… (and) at a certain point, it will theoretically be impossible for networks to keep making higher and higher quality shows for an audience that's increasingly divided." That comment came on the heels of FX Networks chief John Landgraf's recent lament that "there is simply too much television… (and that)…there is too much competition… (too) hard to find good shows…and…impossible to maintain quality control" (Littleton, 2015). These issues of quality and competition suggest another possible use for the analytical approach outlined herein. In particular, the possibility exists to use it in that stage of the development process where decisions are being made about which pilots should have proofs-of-concept commissioned. Given the results of this study, it is certainly possible that the application of our method could identify the potentially weaker scripts in a sample. It might also help to quantify differences in rewrites and revisions of scripts as they move through the development process. Finally, this matter of quality may also be addressed in the post-broadcast period as well. Recall that the decision to broadcast a series is typically made on the basis of the proof-of-concept pilot episode. Only after that decision is made does the creative team, assisted by a team of newly-hired writers, begin work on penning the next several episodes needed to complete the network order. This study made no use of these subsequent episodes, for two reasons. First, because of the focal importance of pilot episodes and their scripts, and because those scripts are so widely available, we rightly focused our attention there. Second, the scripts of subsequent episodes are almost never found online or made available for sale; thus, our analysis was hampered by the lack of availability of scripts for later episodes. If, however, we were able to obtain them, it would be a relatively straightforward task to compare both the size, and even the content, of the pilot and later episodes, and thus their quality. Differences in the sizes might also explain some of the variation within and across series' viewership. We anticipate undertaking research in the near future that directly examines this question.
7,620.2
2016-09-23T00:00:00.000
[ "Computer Science" ]
Bacterial communities in the natural and supplemental nests of an endangered ecosystem engineer Supplemental nests are often used to restore habitats for a variety of rare and endangered taxa. However, though they mimic the function of natural nests, they vary in design and construction material. We know from previous research on human buildings that these differences in architecture can alter the types of microbes to which inhabitants are exposed, and these shifts in microbial interactions can be detrimental for individual health and well-being. Yet, no one has tested whether bacterial communities in supplemental structures are distinct from those found in natural nests. Here, we sampled the bacteria from inside supplemental nests of the endangered Key Largo woodrat (Neotoma floridana smalli). We then compared the diversity and composition of those bacteria to the bacteria collected from natural stick-nests and the surrounding forest environment in Key Largo, Florida. In addition, we sampled woodrat bodies to assess the microbiota of nest inhabitants. We observed distinct bacterial communities in Key Largo woodrat nests, relative to the forest environment; however, we could not differentiate between the bacterial communities collected from supplemental and natural nests. Furthermore, when we considered the potential accumulation of rodent-associated bacterial pathogens, we found no evidence of their presence in supplemental nests, in natural nests, or on the forest floor. Where we expected to see an accumulation of pathogens, we instead observed high relative abundances of bacteria from antimicrobial-producing groups (i.e., Pseudonocardiaceae and Streptomycetaceae). The bacteria on Key Largo woodrat individuals resembled those of their nests, with a low relative abundance of potential pathogens (0.3% of sequence reads) and a high relative abundance of bacteria from antimicrobial-producing groups. Our results suggest that, although there is some microbial interaction between nests and nest inhabitants, there are no detectable differences in the types of bacteria to which Key Largo woodrats are exposed in supplemental and natural nest structures. INTRODUCTION Supplemental nests are often used in the conservation and management of threatened and endangered species to increase and restore available nesting habitat and to provide protection from predators, competitors, and the environment (Newton 1994, Spring et al. 2001, Libois et al. 2012). However, though supplemental nests model the function of their natural counterparts, they are commonly built with relatively little attention to mimicking the intricate details of natural nest design and are constructed from manufactured materials, such as metal and plastic. For example, supplemental nests of the Scarlet Macaw (Ara macao), a tree cavity nesting species, are constructed from wood, polyvinyl chloride tubes, and 55-gallon polyacrylamide barrels (Vaughan et al. 2003). Despite these differences in construction design and nesting substrate, it has not been previously considered whether species interactions are altered in supplemental nest environments. For instance, alternative materials (e.g., plastic) could potentially limit the dispersal of environmental species into nests and alter microclimate conditions, affecting the overall diversity and succession of the microbial communities that are able to colonize and persist inside.
These shifts in species interactions could then lead to negative health outcomes for individuals, ultimately hindering conservation efforts. We already know that captivity can result in a loss of body-associated bacterial diversity, resulting in adverse health consequences, a trend that has been observed among terrestrial mammals, aquatic mammals, and amphibians (Becker et al. 2014, Loudon et al. 2014, Cheng et al. 2015, Wan et al. 2016). One example of this comes from research on the critically endangered Panamanian golden frog (Atelopus zeteki), in which it has been shown that captivity reduces the species richness and phylogenetic diversity of bacteria on the skin (Becker et al. 2014). This frog is now found only in captive environments, and the observed changes in body-associated communities have been linked to an increase in the risk of infection. Further, the effects of captivity are not restricted to the skin or discrete external body sites. The Tasmanian devil (Sarcophilus harrisii) gut, skin, pouch, and mouth microbiomes all exhibit compositional differences, based on whether an animal is captive or wild (Cheng et al. 2015). However, though we know that the environment is an important determinant of the microbes that live on animals (and hence individual health), no one has characterized the microbial communities in the supplemental nests themselves or compared how those communities vary from those found in the natural environment. Therefore, to better predict how the design and use of supplemental nests might alter microbial species interactions, we pull from the literature on human dwellings. We know from the study of human-built structures (e.g., homes and office buildings) that building material and architectural design strongly influence the diversity and types of microbes found on interior surfaces. In recent years, there have been a number of studies suggesting that as we have moved into more modified homes, we have lost exposures to diverse environmental species (e.g., Haahtela et al. 2015, Stein et al. 2016, Thoemmes et al. 2018). For example, contemporary houses have far fewer environmental microbes than do more open traditional homes, such as thatched houses in the Amazon (Ruiz-Calderon et al. 2016). While the absence of some bacteria in our daily lives is beneficial, the absence of others is associated with negative health outcomes. For example, a decrease in the abundance of soil bacteria on the skin is directly linked to an increase in the prevalence of atopic sensitization and autoimmune disorders in humans (Fyhrquist et al. 2014, Ruokolainen et al. 2015). Additionally, as indoor microbial diversity decreases there is a subsequent increase in the abundance of bodily microbes, such as those from feces and skin (Dunn et al. 2013, Lax et al. 2014), and pathogens found in both homes (e.g., Staphylococcus aureus; Gandara et al. 2006) and hospitals (Kembel et al. 2014). These findings highlight the importance of understanding how human-built supplemental nests might alter the microbial communities to which species of concern are exposed. A loss in microbial diversity or the accumulation of pathogens in supplemental nests could have detrimental effects, particularly for species at a high risk of extinction, such as the Key Largo woodrat (Neotoma floridana smalli). The Key Largo woodrat is a federally endangered subspecies endemic to Key Largo, Florida (US Department of the Interior 1984).
Once ranging throughout the tropical hardwood hammock, historical habitat loss and land alterations during the agricultural era have limited their distribution to North Key Largo and reduced the availability of natural nesting substrate in the environment (Winchester et al. 2009, Cove et al. 2017). This loss of habitat and nesting sites has been detrimental to the survival of these ecosystem engineers, as they build substantial stick-nests by layering forest debris at the bases of trees (Fig. 1a), in fallen tree throws, or in solution holes (Cove and Maurer 2019). Additionally, recent evidence suggests that woodrat distributions have been further limited by the presence of feral and free-ranging cats (Felis catus), resulting in a shift away from their natural stick-nest building behavior. Once estimated to number fewer than 100 individuals (McCleery et al. 2005), Key Largo woodrats have benefitted greatly from conservation management practices, including nest supplementation and exotic predator removal, and there are now more than 2000 supplemental Key Largo woodrat nests located in their protected habitats (Cove et al. 2017). These nests are constructed from large plastic culvert pipes and covered with rocks or chunks of fossilized coral (Cove et al. 2017). On the exterior, Key Largo woodrats maintain supplemental and natural nests in the same way (i.e., stick-stacking behavior; Cove et al. 2017), but supplemental nest interiors are more enclosed, with comparatively little air flow and moisture penetration (Barth 2014). Here, we examine the diversity and composition of bacteria in natural and supplemental Key Largo woodrat nests to assess whether there are differences in bacterial communities associated with nest supplementation. Based on what we know from contemporary human homes, we might expect supplemental nests to have less bacterial diversity than natural nests. Similarly, since supplemental nests are composed of materials that could potentially restrict the colonization of environmental microbes, we might expect there to be a difference in which bacterial taxa are most abundant, including an increase in the accumulation of body associates and pathogenic bacteria. Finally, as we know there is an interaction between the microbiota of the body and the built environment (Hospodsky et al. 2012, Dunn et al. 2013, Becker et al. 2014, Loudon et al. 2014, Lax et al. 2014, Cheng et al. 2015, Gibbons et al. 2015, Wan et al. 2016), we characterize the bacteria found on the bodies of Key Largo woodrat individuals. METHODS The Crocodile Lake National Wildlife Refuge is located in North Key Largo, Florida, USA. This refuge is composed of mangroves, coastal wetlands, and part of the last remaining large tract of tropical hardwood hammock habitat. When combined with the Dagny Johnson Key Largo Hammock Botanical State Park, this forest type covers <1000 ha (Frank et al. 1997, US Fish and Wildlife Service 1999). However, despite its limited area, the tropical hardwood hammock is home to a variety of endemic and endangered species, including the Key Largo woodrat, the Key Largo cotton mouse (Peromyscus gossypinus allapaticola), and the Stock Island tree snail. Natural nests that have been previously identified and all supplemental nests are individually marked as part of a long-term monitoring project. From these, we visited 10 natural (Fig. 1a) and 10 supplemental nests
(Fig. 1b; n = 20), focusing on the area of the refuge that has the highest Key Largo woodrat population density. We determined nest occupancy based on visual surveys of active stick-stacking behavior (Balcom and Yahner 1996, Cove et al. 2017), camera trap surveys, and/or Sherman live traps baited with a mixture of peanut butter powder and rolled oats. Once occupancy was confirmed, we swabbed each nest with dual-tipped sterile rayon BBL CultureSwabs (Becton, Dickinson and Company, Franklin Lakes, New Jersey, USA). To standardize the distance into each nest, as well as to avoid contamination of sample swabs on exterior building material, we inserted a PVC pipe (approximately 0.5 m in length; seen in Fig. 1a) into each nest prior to sample collection. We targeted entrances that appeared to be used most frequently by the nest inhabitant(s), placed the pipe into the nest interior, and threaded each swab through to the sample location. To lengthen each swab, we wedged a wooden dowel into the cross hatches found on the exterior of the swab cap, avoiding any contact between the dowel and the sample swab itself. After sampling, we removed the pipe and swab from the nest at the same time and sterilized the interior and exterior of the pipe between each sampling event. We then swabbed the forest floor approximately 0.5-0.75 m from the outer edge of all natural nests (n = 10), targeting an area that did not appear to be trafficked by humans or wildlife. Finally, we collected bacteria from the flank and ventral side of Key Largo woodrat individuals to investigate the microbial interaction between nests and nest inhabitants (n = 10). All individuals were captured near sampled nests with Sherman live traps, and to prevent repeated sampling of individuals, we verified identity with double-marked monel 1005 ear tags (National Band and Tag Company, Newport, Kentucky, USA) and subcutaneous PIT tags (Biomark, Boise, Idaho, USA). All environmental and animal samples were collected for approximately 15 s and subsequently stored at -20°C until processing for DNA analyses. Molecular methods and analyses We performed DNA extractions with a DNeasy PowerSoil Kit (Qiagen, product #12888-100), with the modifications described in Fierer et al. (2008). PCRs were performed in triplicate and all amplicons were pooled in equimolar concentrations prior to sequencing on the Illumina MiSeq platform at the University of Colorado Boulder, using the 515f/806r primer pair to amplify the V4-V5 region of the 16S rRNA gene (Flores et al. 2012). We demultiplexed and quality-filtered the resulting sequence data using default parameters in the QIIME2 pipeline (version 2019.7.10; Bolyen et al. 2019). We identified amplicon sequence variants (ASVs) with Deblur (via deblur denoise-16S; Amir et al. 2017) and assigned taxonomy with the naive Bayes classifier (Bokulich et al. 2018), trained on the Greengenes 13_8 99% OTUs reference database (version 8.15.13; McDonald et al. 2012). When compared to other methods (e.g., OTU clustering algorithms), ASV assignment provides a more accurate characterization of bacterial communities (Caruso et al. 2019) and accurately identifies microbes to the species or even subspecies level of taxonomic resolution (Callahan et al. 2017). We then rarefied our data to 4000 sequences per sample and analyzed all data in R (version 3.4.4) with the mctoolsr, vegan, PMCMRplus, and FSA packages (Oksanen et al. 2013, R Core Team 2015, Leff 2016, Ogle 2018, Pohlert and Pohlert 2018).
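The authors ran these diversity analyses in R (vegan and related packages). As a rough illustration of the same workflow, the following Python sketch uses scikit-bio and SciPy; the input file names and metadata column are assumptions for illustration, not the authors' data.

```python
import pandas as pd
from scipy.stats import kruskal
from skbio.diversity import alpha_diversity, beta_diversity
from skbio.stats.distance import permanova

# Hypothetical inputs: a samples-by-ASVs count table and a metadata
# table with a "sample_type" column, sharing the same sample index.
asv = pd.read_csv("asv_table.csv", index_col=0)
meta = pd.read_csv("metadata.csv", index_col=0).loc[asv.index]

counts = asv.values
ids = asv.index.tolist()

# Alpha diversity: observed ASV richness and Shannon diversity.
richness = (asv > 0).sum(axis=1)
shannon = alpha_diversity("shannon", counts, ids=ids)

# Kruskal-Wallis test of richness across sample types.
groups = [richness[meta["sample_type"] == g]
          for g in meta["sample_type"].unique()]
print(kruskal(*groups))

# Beta diversity: Bray-Curtis dissimilarities, then a PERMANOVA
# comparing natural vs. supplemental nests (999 permutations).
bc = beta_diversity("braycurtis", counts, ids=ids)
nests = meta[meta["sample_type"].isin(["natural", "supplemental"])]
bc_nests = bc.filter(nests.index.tolist())
print(permanova(bc_nests, nests["sample_type"], permutations=999))
```

An NMDS ordination of the same Bray-Curtis matrix, as in the paper's Figure 3, could then be produced with any standard multidimensional scaling routine.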
One forest floor and one body sample did not meet the rarefaction threshold. We compared differences in ASV richness and Shannon diversity between nest and forest floor samples with Kruskal-Wallis tests, using Dunnett's test and the Benjamini-Hochberg method for multiple comparisons (n = 29; Dunnett 1955; Benjamini and Hochberg 1995). We then quantified differences in the composition of bacterial communities among samples with Bray-Curtis dissimilarity, weighted by ASV abundance (Bray and Curtis 1957). We examined differences in bacterial community composition with permutational multivariate analysis of variance (PERMANOVA), where we compared differences between nest types (i.e., natural and supplemental nests) and between natural nest and forest samples separately. We then calculated the percent relative abundance of previously described bacterial pathogens in rodents, excluding zoonotic species that are known only to be infectious in humans (Table 1). Since we do not know the physical condition of the individuals that use and/or live in the woodrat nests sampled in this study, we included opportunistic species that have been shown to have increased virulence potential when rodents are immunocompromised (e.g., Pasteurella pneumotropica; Heyl 1963, Towne et al. 2014). Additionally, since the bacteria associated with Key Largo woodrats have not been previously characterized, we included all bacterial pathogens described from rodents, regardless of host species. Though there is likely to be some variation in the pathogens found on Key Largo woodrats compared to other rodents, the bacterial taxa included in our analyses encompass 55 species across 29 genera, including those of well-known rodent-associated pathogens (e.g., Yersinia pestis; Butler et al. 1982). We also included 5 genera for which all (or nearly all) mammal-associated species studied to date have been described as pathogenic (e.g., Leptospira; Picardeau 2017). Though we may have missed fine-scale interactions (e.g., previously undescribed pathogens or species-specific associations), we believe this representative dataset has captured generalized patterns in the accumulation of potentially pathogenic bacteria in Key Largo woodrat nests. Differences in the relative abundance of all bacterial taxa of interest were compared with Kruskal-Wallis tests. Finally, we characterized the bacteria found on Key Largo woodrats and compared bacterial community composition between nests and woodrat bodies with Bray-Curtis dissimilarity (Bray and Curtis 1957). We visualized community data with non-metric multidimensional scaling (NMDS) ordination plots and quantified observed differences with PERMANOVA, using an FDR correction for multiple comparisons. We then calculated the percent relative abundance of potential rodent-associated pathogens and other bacterial taxa of interest recovered from Key Largo woodrat individuals (n = 9).
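The pathogen screen described above amounts to summing, per sample, the relative abundances of ASVs assigned to a fixed list of taxa. A minimal, hypothetical pandas sketch follows; the genus names shown are examples drawn from the paper's text, standing in for the full 60-taxon list of its Table 1, and the file and column names are assumptions.

```python
import pandas as pd

asv = pd.read_csv("asv_table.csv", index_col=0)      # samples x ASVs
taxonomy = pd.read_csv("taxonomy.csv", index_col=0)  # ASV -> genus

# Example screening list (the paper's Table 1 holds the full set).
pathogens = ["Treponema", "Streptobacillus", "Yersinia", "Leptospira"]

# ASVs whose assigned genus matches the screening list.
flagged = taxonomy[taxonomy["genus"].isin(pathogens)].index

# Percent relative abundance per sample, then the flagged total.
rel_abund = asv.div(asv.sum(axis=1), axis=0) * 100
pathogen_pct = rel_abund[rel_abund.columns.intersection(flagged)].sum(axis=1)
print(pathogen_pct)
```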
RESULTS After rarefaction, we observed a total of 714 ASVs among natural Key Largo woodrat nests (n = 10), with an average of 254 ASVs represented per individual nest. ASV richness in natural nests did not differ significantly from that of supplemental nests (average of 278 ASVs per individual nest; P = 0.21) or the forest floor (average of 275 ASVs per sample location; P = 0.39; Fig. 2a). We found a similar pattern for Shannon diversity, with no significant difference between natural and supplemental nests (n = 20; P = 0.65) or between natural nests and the forest floor outside of those nests (n = 19; P = 0.62; Fig. 2b). In natural and supplemental nests, the three most abundant phyla were Proteobacteria (natural 31%, supplemental 32%), Actinobacteria (natural 30%, supplemental 33%), and Bacteroidetes (natural 11%, supplemental 13%). At the genus level of identification, Candidatus Nitrososphaera, a group of ammonia-oxidizing archaea (Zhalnina et al. 2014), was the most abundant taxon in natural nests (3.8% of bacterial sequence reads) and accounted for 2.4% of bacterial sequences in supplemental nests. In supplemental nests, the most abundant genus was Streptomyces (5% of bacterial sequence reads), a common source of antibiotic medications (de Lima Procopio et al. 2012), and Streptomyces was the second most abundant genus recovered from natural nests. Additionally, we observed no significant clustering of the bacterial communities of natural and supplemental nests (n = 20; PERMANOVA: P = 0.495; Fig. 3); however, we did detect differences between Key Largo woodrat nests overall and the surrounding forest environment (n = 19; PERMANOVA: P = 0.004; Fig. 3). Differences between Key Largo woodrat nests and the forest floor were not driven by an accumulation of rodent-associated pathogens. Of the 60 taxa considered (Table 1), we did not detect any in natural nests, supplemental nests, or forest floor samples. However, while we saw no accumulation of potential pathogens, we saw a high relative abundance of Pseudonocardiaceae and Streptomycetaceae in Key Largo woodrat nests (Fig. 4). These bacterial families contain important antimicrobial-producing groups (Platas et al. 1998, Kämpfer et al. 2014) and include bacteria that produce many of our common commercial antibiotics, such as erythromycin and vancomycin (Sakoulas et al. 2004, Jafari et al. 2014, Kämpfer et al. 2014). We might have expected to detect these bacteria in nest and forest samples, as they are found in diverse environments and are abundant in soils globally. However, it is the high abundance of these taxa that is notable. Relative to all other taxa, Pseudonocardiaceae and Streptomycetaceae were the most abundant bacterial families in both natural and supplemental nests, accounting for 10.5% of all sequence reads from natural nests and 13.3% of all sequence reads from supplemental nests (n = 20; Fig. 4). Further, they were significantly more abundant in nests than on the forest floor, where they represented only 3.6% of all sequence reads (χ² = 9.68, P = 0.002). Both Pseudonocardiaceae and Streptomycetaceae were recovered from all nest and forest samples collected in our study (n = 29). When we examined Key Largo woodrat body microbiota (n = 9), the three most abundant phyla were Bacteroidetes (33%), Firmicutes (24%), and Proteobacteria (19%), and the most abundant taxa were S24-7 (28.1% of bacterial sequences) and Lactobacillus (8.2% of bacterial sequences; Fig. 4). S24-7 bacteria are almost exclusively found (and are highly abundant) in the guts of mammals (Ormerod et al. 2016), and Lactobacillus is commonly found in mammalian guts and vaginas (Hartemink et al. 1997). Woodrat bacterial communities were distinctly different from those found in nests (PERMANOVA: P < 0.001), even more so than were nests from the forest environment (PERMANOVA: P < 0.002; Fig. 3), and they exhibited the greatest amount of variation among samples compared to all other sample types (Fig. 3).
Overall, the relative accumulation of potential pathogens on Key Largo woodrat bodies was very low. We observed only two of the 60 taxa examined, accounting for just 0.3% of all bacterial sequences. These included Treponema spp. and Streptobacillus moniliformis. Additionally, due to the high relative abundance of Pseudonocardiaceae and Streptomycetaceae bacteria in nests, we tested for their presence on Key Largo woodrat bodies. Pseudonocardiaceae was the third most abundant family (average of 6% of bacterial sequences; range of 1-16.5%). Streptomycetaceae was less abundant, however, representing an average of 1.2% of bacterial sequences among individuals (Fig. 4). Despite the range in abundance between these taxa and among samples, Pseudonocardiaceae and Streptomycetaceae were found on all individuals.
Fig. 3. Non-metric multidimensional scaling (NMDS) ordination plot representing bacterial communities collected from natural and supplemental Key Largo woodrat nests, the forest floor outside each natural nest, and Key Largo woodrat bodies. There were distinct bacterial communities in Key Largo woodrat nests compared to the forest floor (P = 0.004), but there was no difference between natural and supplemental nests (P = 0.495). The composition of bacteria on Key Largo woodrats was different from those found in their nests (P < 0.001), and individuals exhibited a greater amount of variation in bacterial communities than observed from their nest and forest environments.
DISCUSSION We found the bacterial communities in Key Largo woodrat nests to be distinct from the forest environment, but there were no significant differences in bacterial diversity (Fig. 2) or community composition (Fig. 3) between natural and supplemental nests. Additionally, we did not detect any potential pathogens from nest or forest environments. Key Largo woodrats were host to potential pathogens, but we only detected two of the 60 taxa considered, which comprised a small portion of the total bacteria recovered. Instead, we found a high abundance of bacteria from common antimicrobial-producing bacterial groups (Pseudonocardiaceae and Streptomycetaceae), both in nests and on individuals. However, though Key Largo woodrat body bacteria differ significantly from those of their nests overall (Fig. 3), the high prevalence of these groups suggests some sharing of microbes between individuals and their environment. This microbial exchange might account for some of the similarity among natural and supplemental nests (Fig. 3), despite their differences in construction design and materials. We expected supplemental nests to be similar to other structures built by humans, in that they might have a less diverse, unique assemblage of bacteria compared to natural nests (Dunn et al. 2013, Lax et al. 2014). If supplemental nests alter bacterial species interactions, this could have detrimental effects on woodrat health. For example, exposure to a greater diversity of microbes increases immune response and the ability to fight off infectious disease in rodents (Beura et al. 2016), other mammals (Becker et al. 2014, Fyhrquist et al. 2014, Ruokolainen et al. 2015), and even amphibians (Loudon et al. 2014). However, we found no evidence of such an effect. Relative to the forest environment, nests had a similar diversity of bacteria, regardless of whether they were natural or supplemental (Fig. 2).
Nests did diverge from the forest environment in the composition of those communities, hosting distinct assemblages of bacteria that were not significantly different from one another based on nest type (Fig. 3). This suggests that, likely through some combination of nest design or pattern of use, supplemental nests maintain a bacterial community that is no different from their natural counterparts. One explanation might be that the culvert pipes used in supplemental nest construction have open ends. These openings could act in a similar way to the gaps in natural nests or to open windows in human homes, in which bacterial diversity increases on surfaces compared to those that are more closed off from the outdoor environment (Kembel et al. 2012, Barberán et al. 2015). We also found no evidence of potential pathogens in natural or supplemental nests. High bacterial diversity and a high relative abundance of Pseudonocardiaceae and Streptomycetaceae in nests are likely contributing factors, as we know that high bacterial diversity and the application of antimicrobials in human buildings are associated with a decrease in the abundance of pathogenic microbes and a reduction in exposure risk (Lax et al. 2014, Ruokolainen et al. 2017). On the other hand, the application of antimicrobials in homes has favored antibiotic-resistant strains (Hartmann et al. 2016), and therefore, we might expect to find antibiotic-resistant bacteria in Key Largo woodrat nests, particularly since they are typically used for several generations and persist over long periods of time (Rainey 1956). To understand whether there were shared bacterial taxa between nests and nest inhabitants, we also characterized the bacteria found on Key Largo woodrat bodies. The high abundance of gut-associated bacteria suggests that woodrats are in close contact with their feces. Overall, Key Largo woodrat bacterial communities were distinctly different from those of their nest environments (P < 0.001; Fig. 3). Additionally, the variation in bacterial community composition among individuals was much greater than what we observed between and among nest and forest samples (Fig. 3). One likely explanation is that woodrats harbor bacteria unique to the body compared to environmental bacteria. However, this does not account for the lack of variation we would also expect to see among soil communities. Therefore, another potential explanation might be the convergence of soil-associated and soil-adjacent bacterial communities in response to the landfall of Hurricane Irma in September 2017 (3 months prior to sample collection). Catastrophic weather events can homogenize biological communities (Savage et al. 2018), and therefore, this hurricane event might account for the similarity between and among nest and forest communities. Individuals were host to potential pathogens, including Treponema spp. and Streptobacillus moniliformis. However, though these taxa were detected, they accounted for only 0.3% of total sequences. As with nests, the low abundance of pathogens on bodies might be due, in part, to the high abundances of Pseudonocardiaceae and Streptomycetaceae. Further, although these taxa are described as infectious agents among rodents (Baker 1998, Whary et al. 2015), we are unable to confirm whether the presence of these specific bacteria increases infection risk for the Key Largo woodrat.
Without further investigation, it would be imprudent for us to directly attribute the bacteria recovered from the Key Largo woodrats (or from their nests) to pathogenesis; rather, we can use these results as a proxy for understanding broadscale patterns of pathogen accumulation. One of our more unusual findings was the high prevalence of bacteria from Pseudonocardiaceae and Streptomycetaceae in Key Largo woodrat nests. Associations between animals and antimicrobial-producing bacteria have been described in social insects (e.g., ants and wasps; Currie et al. 1999, Cafaro and Currie 2005, Madden et al. 2013), but to our knowledge, such a relationship has never been observed among non-human mammals. Based on our study design, we cannot ascribe a causative relationship between the Key Largo woodrats and the presence of these bacteria. However, due to their high relative abundance and ubiquity among natural and supplemental nests, we propose the possibility that the bodies and/or behaviors of Key Largo woodrats promote the colonization and accumulation of these bacteria. Unlike other human-built structures, such as homes and hospitals, supplemental nests do not appear to alter bacterial species interactions in the ways we would predict to be detrimental to individual health. This includes the loss of diverse species interactions that are important for immune development in rodents and other animals, as well as the subsequent accumulation of noxious organisms (Kembel et al. 2012, Becker et al. 2014, Loudon et al. 2014, Cheng et al. 2015, Beura et al. 2016, Stein et al. 2016, Wan et al. 2016). However, due to the variation in the types of supplemental nests and nest boxes used in threatened and endangered species conservation, we recommend more research prior to extrapolating these results to nests constructed for other species of concern. As animals increasingly inhabit supplemental structures that are different from those in which they evolved to live, breed, or seek refuge, it is important that we more fully incorporate microbiome research into conservation biology (Trevelline et al. 2019, West et al. 2019) and consider the comprehensive implications of conservation management practices on animal health. ACKNOWLEDGMENTS This research was supported by the Brevard Zoo's Quarters for Conservation Fund, The Florida Keys Wildlife Society, USFWS, the Florida Keys National Wildlife Refuge Complex, and the NC Cooperative Fish and Wildlife Research Unit. Special thanks to J. Dixon, S. Sneckenberger, M. Jee, R. DeGayner, and C. DeGayner for their efforts and continued support building supplemental nests and conducting visual, live trap, and camera trap surveys. This research paper includes original research conducted by the manuscript co-authors, and any related work has been fully acknowledged herein. All authors contributed equally to the design, implementation, and resulting manuscript for this project, and we have no competing interests to declare.
6,519.8
2020-09-01T00:00:00.000
[ "Environmental Science", "Biology" ]
R&D accounting treatment, firm performance, and market value: Biotech firms case study. This study examines the correlation between R&D accounting treatment and market value in association with firm performance, with a focus on biotech firms. Firstly, the results of the analysis show that capitalized R&D has a positive correlation with market value, consistent with existing literature. In the case of biotech firms, capitalized R&D has a higher value relevance compared to other industries. Secondly, this study examines the effect of a decrease in capitalized R&D on market value. It is found that the decrease in capitalized R&D has a negative effect on market value; however, this is not the case for biotech firms. In particular, in years when major biotech firms acknowledge and correct their accounting errors, a decrease in capitalized R&D seems to have a more positive effect on market value. Additionally, this study extends the inquiry to firm performance and finds that a decrease in capitalized R&D has a significant positive association with market value when firms perform better. When a firm's performance worsens, a decrease in capitalized R&D adversely affects market value. However, this is not the case for biotech firms, suggesting excessive expectations around the R&D process of biotechnology firms. Because transformation is at the core of biotechnology, this industry in Europe is now attracting many promising young entrepreneurs. The Korean bioindustry has also experienced explosive growth over the past 2-3 years. Multinational pharmaceutical companies are now paying attention to Korean bioventures. Global investors are weighing investments in emerging markets such as South Korea in addition to existing investment destinations such as the US, Europe and Israel. In 2018, eight out of ten stocks showing rapid growth were biotech (biopharma) stocks. Seven of the 10 largest Korean companies listed on KOSDAQ (Korean Securities Dealers Automated Quotations) by market cap were also biotech firms. This is because, despite uncertainty, there is great anticipation for the potential growth of biotech firms. There are, however, some voices of concern that current stock prices are already overvalued. R&D spending is an important productive input; however, it is also a major source of information asymmetry (Aboody and Lev, 2000; Moehrle and Walter, 2008). For biotech companies, R&D investment is essential in developing innovative new drugs and creating future sustainable value. The top three global biotech companies by market cap, Johnson & Johnson, Pfizer, and Roche, each invest more than USD 8 billion a year in R&D. In recent years, Korean biotech stock prices have fluctuated sharply, recording both record highs and record lows. A frequently controversial issue here is how to treat R&D in the biotech sector: as capitalization or as expenses? According to the Korean FSS (Financial Supervisory Service), as of the end of 2016, 55% of biopharma companies (83 firms) were capitalizing their R&D investments. While the ratio of capitalized R&D to total assets of all listed companies amounts to a mere 1%, the corresponding ratio for the biotech industry is said to reach 4%. In Korea, R&D capitalization is allowed when the conditions required by the KIFRS (Korean International Financial Reporting Standards) are met, but it is debatable whether these accounting standards are being applied appropriately.
However, it has been confirmed that capitalization eases the tendency to reduce R&D investments (Oswald and Zarowin, 2007), and R&D expensing has been shown to cause under-investment in R&D (Dukes et al., 1980; Shehata, 1991; Wasley and Linsmeier, 1992). As such, in an industry such as biotech where R&D investment is essential, a decrease in R&D investment can serve as an obstacle to new drug development. This raises the question of how to interpret the market's reaction to biotech firms' surging stock prices based on the anticipation of successful development of new drugs, even when the forecast for revenue or profit is uncertain. Such excitement over biotech stocks is driven by the anticipated future benefits from successful R&D. This study examines the market value of R&D capitalization, with a focus on biotech firms in particular. As consistently shown in previous studies, the value relevance of R&D capitalization is higher than that of R&D expensing (Lev and Sougiannis, 1996; Chambers, Jennings and Thompson, 1999; Healy, Myers and Howe, 1999; Monahan, 1999). When a firm decreases the amount of capitalized R&D, whether voluntarily or compulsorily, one may then ask whether this leads to a decline in value relevance. Some major Korean biotech firms have acknowledged and corrected their accounting errors by restating their financial statements or recognizing impairment losses. In general, when the amount of capitalized R&D decreases, there is a likelihood that it will have a negative impact on market value. But in the case of biotech firms, a decrease in capitalized R&D may not deter the market from valuing a firm highly, given high expectations for the future success of massive R&D investment. As such, this study also examines the market's reaction to adjustments made to R&D capitalization. The remainder of the paper is organized as follows. Chapter 2 provides the literature review and hypotheses. Chapter 3 discusses the research samples and methodology. Chapter 4 presents descriptive statistics, correlations, and regression results. Chapter 5 discusses the results and suggests conclusions from the analysis. R&D accounting treatment, earnings management and value relevance Studies on R&D investment, which has a positive correlation with future earnings or market value (Chan et al., 2001; Joos and Zhdanov, 2008), have followed two main streams. In one stream, the literature finds that discretionary R&D capitalization is used for opportunistic earnings management. Companies' investment in R&D causes cash outflows and is thus geared toward an increase in long-term value rather than short-term profit. But in the case of companies with low performance or profit, capitalizing R&D spending is opportunistically used as a means of earnings management through discretionary accounting treatment (Aboody and Lev 1998; Cazavan-Jeny et al. 2011; Markarian et al. 2008; Dinh et al., 2016). Nelson et al. (2003) conduct a survey and find that firms mostly use R&D capitalization as an earnings management strategy. Using Italian listed companies, Markarian et al. (2008) also find that companies tend to capitalize R&D expenditures to smooth earnings. Cazavan-Jeny et al. (2011) find that R&D capitalization is negatively associated with stock prices when R&D capitalization is used opportunistically. Dinh et al. (2016) analyze the association between R&D capitalization and benchmark beating.
They find a negative association between market values and the strategic use of R&D capitalization for benchmark beating. However, they also find that the market values R&D capitalization positively for well-performing firms that do not appear to use R&D capitalization for opportunistic earnings management purposes. The reduction of discretionary spending on R&D is also found to be used for real earnings management (Dechow and Skinner, 2000; Graham et al., 2005; García Osma and Young, 2009). Several studies have confirmed that the market lowers the value of firms that discretionarily cut R&D investments for earnings management purposes (Baber et al., 1991; Bushee, 1998; Dechow and Sloan, 1991; García Osma and Young, 2009; Mande et al., 2000). But as mentioned in the introduction, it has been verified that R&D accounting treatment affects the size of R&D investment, and as such, there are also arguments that support R&D capitalization. Especially for corporations in their early stages, R&D capitalization can be seen as a signal of future success. Therefore, many studies have been conducted on the information usefulness or value relevance of R&D capitalization. Many preceding studies have shown that R&D capitalization is considered more value-relevant information compared to R&D expensing. Lev and Zarowin (1999) confirm that the capitalization of R&D offers useful information for users of financial information. Oswald and Zarowin (2007) demonstrate the value relevance of R&D capitalization for UK firms and suggest that R&D capitalization may be informative. Depending on the life cycle stage of a firm, R&D capitalization may affect the value relevance of a firm differently. For firms in the growth stage of R&D activities, capitalization may be more value relevant (Oswald, 2008). Lev and Sougiannis (1996) find that the contribution of capitalized R&D to profits is sustained over five to nine years. By examining the value relevance of R&D reporting in France, Germany, the UK, and the US, Zhao (2002) finds that capitalized R&D has a greater association with stock price than expensed R&D. Chambers et al. (2003) analyze the value relevance of accounting information in cases where R&D spending is capitalized or expensed, and find that capitalization increases profits, enhances the explanatory power for the stock price, and better explains firm value. Using Australian company data, Ahmed and Falk (2006) also analyze the value relevance of R&D accounting treatment and verify that capitalized R&D is more value relevant than expensed R&D. Meanwhile, spending more on R&D than the market expects can be interpreted as managers providing a positive signal about future profits and future investment opportunities in a situation where information asymmetry exists (Qian et al., 2012). Qian et al. (2012) measure discretionary R&D expenditures and find that they support the signaling hypothesis rather than the managerial over-optimism hypothesis. In addition, markets are found to react more favourably to increases in R&D investment in high-tech industries than in low-tech industries (Chan et al., 1990; Eberhart et al., 2004). Wang et al. (2016) also find that R&D capitalization leads to higher market value. Zakari and Saidu (2017) examine the impact of accounting treatments of R&D spending on financial statements.
Though their results clearly show the reduction of net assets and equity caused by expensing R&D, they suggest that potential investors and other financial information users would take notice of a great amount of R&D spending as a sign of probable future benefits. R&D and value relevance of biotech firms In the case of biotech firms, R&D investment is essential and the amount is much greater than in any other industry. As for pharmaceutical firms, it is said that an average of $800 million is needed to develop one drug (Kaitin, 2003), and R&D investment is constantly needed as research diversification is required (Nivoix & Nguyen, 2018). In terms of firm size, it is found that large firms are likely to invest more in R&D activities (An & Wang, 2010; Choi & Lee, 2018; Khoshnevis & Teirlinck, 2018). Since such a large amount of money is required, both the company and investors have high expectations for R&D investment, as well as concerns about its uncertainty. Chan et al. (1990) confirm that the value relevance of R&D investments varies across industries and that R&D investments have a positive effect on stock price in the high-tech industry, but a negative effect in the non-high-tech industry. R&D investment has been found to positively impact firms' performance (Jin et al. 2018). Xu and Sim (2018) find that R&D intensity is positively related to firm performance in emerging markets. Several preceding studies examine the value relevance of biotech industry R&D spending. Given the nature of the biotech industry, R&D progresses over a long period of time that includes several stages, and thus the value relevance may differ for each stage. Hand (2005) proves that a firm's maturity, R&D growth rate, and R&D intensity affect the value relevance of R&D expenditures. A common finding is that R&D outlays are more value relevant in the development or maturity stage (Ely et al., 2003; Xu et al., 2007). This may be due to the belief that as the R&D stages progress, the likelihood of success increases. Ely et al. (2003) confirm that R&D spending has greater equity valuation implications in high-potential firms than in low-potential firms, and show that value relevance is higher in later development stages than in the early stage. Xu et al. (2007) examine the value relevance of both R&D expenditures (financial) and uncertainty measures (nonfinancial) of biotech firms, an industry characterized by high uncertainty. They find that nonfinancial uncertainty information is more value relevant in the maturity stage. Guo et al. (2005) also find that the product development stage, as well as the total number of products and the percentage of drug indications protected by patents, affect the valuation of biotech firms completing an IPO. This study analyzes the correlation between capitalized R&D and value relevance, based on the findings of preceding studies, and compares the case of Korean biotech firms. Compared to other industries, biotech firms have a greater scale of R&D spending, but they also face higher expectations for success. Based on this, Hypothesis 1 is as follows: H1. Capitalized R&D will be value relevant, and the capitalized R&D of biotech firms will be more value relevant than that of firms in other industries. Meanwhile, as regulation and supervision of biotech firms have been strengthened and firms themselves have admitted their errors, capitalized R&D has been reduced, either voluntarily or compulsorily.
In 2017, some major biotech firms acknowledged their past accounting errors and restated financial statements by converting capitalized R&D to expenses. Some firms recognized impairment losses. As has been verified in preceding studies, capitalization of R&D has a higher value relevance than expensing. Therefore, if capitalized R&D decreases, it is likely that its information usefulness or value relevance in the market will also drop. However, in the case of biotech firms, because they admitted to their accounting errors, they were able to build further trust and confidence in the market, as well as anticipation for the future, which can lead to rather higher value relevance. Meanwhile, when capitalized R&D decreases or existing capitalized R&D is converted to expenses, this affects the firm's performance. Therefore, in this study, additional verification is carried out by linking a decrease in capitalized R&D with performance. Even if capitalized R&D decreases, if the firm's performance does not drop, then it is anticipated that the market value for capitalized R&D will not drop. But if capitalized R&D decreases and performance drops, it is anticipated that this will have a negative effect on market value. Exceptionally, even in such a case, we expect that the market value for the R&D of biotech firms will not decrease, as these firms still retain the market's confidence. Accordingly, this study sets the following Hypothesis 2 and sub-hypotheses 2-1 and 2-2. H2. A decrease in capitalized R&D has a negative effect on value relevance, but this is not the case for the decrease in capitalized R&D of biotech firms. In a given year in which biotech firms decrease capitalized R&D through accounting correction, the decreased capitalized R&D has even greater value relevance. H2-1. Even in cases where capitalized R&D decreases, if performance increases, the decreased capitalized R&D does not have a negative effect on value relevance. H2-2. If capitalized R&D decreases and performance also decreases, the decreased capitalized R&D has a negative effect on value relevance. But this is not the case for biotech firms. Regression model and measurement of variables For the empirical analysis, an OLS model is employed with Tobin's q as the dependent variable. The first regression model, for Hypothesis 1, is: Tobin's q_it = α + β1 RDCAP_it + β2 RDCAPbio_it + Σj αj X_j,it + Σk αk IND_k + Σl αl YEAR_l + ε_it. Tobin's q is computed as the market value of equity plus liabilities, all divided by total assets. Tobin's q is employed to assess a firm's value as in prior studies (McConnell and Servaes, 1990; Simon and Sullivan, 1993; Rao et al., 1994; Dahya et al., 2007). RDCAP_it is the capitalized amount of R&D, divided by total assets. RDCAPbio_it is RDCAP interacted with a biotech firm dummy variable. X_j,it are the other factors affecting Tobin's q (explained below), IND is the set of industry indicator variables, and YEAR is the set of year indicator variables. The model includes control variables that can affect firm value: size, leverage, sales growth, market-to-book ratio, and investment. Size, measured as the natural log of the book value of total assets, is included to control for size effects and may have a positive association with market value. Leverage is total liabilities divided by total assets and may have a negative association with market value (Jensen, 1986). Sales growth is included to control for growth. The market-to-book ratio is calculated as the market value of equity divided by the book value of equity. A firm's investment decisions might have an effect on firm value, and therefore, investment is used as a control variable. Finally, industry dummy variables, defined by the one-digit Korea Standard Industry Code, and year dummy variables are included as control variables.
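To make the Hypothesis 1 specification concrete, the sketch below estimates it by ordinary least squares with Python's statsmodels on a synthetic firm-year panel. This is an illustration only, not the authors' code; all column names and the generated data are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel with the variables defined in the text.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "TQ":    rng.lognormal(0.0, 0.4, n),   # Tobin's q: (market equity + liabilities) / total assets
    "RDCAP": rng.uniform(0.0, 0.05, n),    # capitalized R&D / total assets
    "BIO":   rng.integers(0, 2, n),        # biotech firm dummy
    "SIZE":  rng.normal(12.0, 1.5, n),     # ln(total assets)
    "LEV":   rng.uniform(0.1, 0.7, n),     # total liabilities / total assets
    "GROW":  rng.normal(0.05, 0.2, n),     # sales growth
    "MTB":   rng.lognormal(0.2, 0.5, n),   # market-to-book ratio
    "INV":   rng.uniform(0.0, 0.4, n),     # PP&E (net) / total assets
    "IND":   rng.integers(1, 10, n),       # one-digit industry code
    "YEAR":  rng.integers(2014, 2018, n),  # fiscal year
})
df["RDCAPbio"] = df["RDCAP"] * df["BIO"]   # interaction: RDCAP x biotech dummy

# H1: regress Tobin's q on RDCAP and RDCAPbio with controls
# plus industry and year indicator variables.
h1 = smf.ols(
    "TQ ~ RDCAP + RDCAPbio + SIZE + LEV + GROW + MTB + INV + C(IND) + C(YEAR)",
    data=df,
).fit()
print(h1.summary())
```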
For the analysis of Hypothesis 2, explanatory variables including RDCAPdec, RDCAPdecbio, and RDCAPdecbioYR17 are used. RDCAPdec is a dummy variable coded as 1 if the firm decreases its capitalized R&D amount in year t, and 0 otherwise. RDCAPdecbio is RDCAPdec interacted with the biotech firm dummy variable. RDCAPdecbioYR17 is the RDCAPdecbio dummy variable specifically for the year 2017, where YR17 is a dummy variable coded as 1 if the year is 2017 and 0 otherwise. The regression model for Hypothesis 2 takes the same form as the Hypothesis 1 model, with these as the explanatory variables: Tobin's q_it = α + β1 RDCAPdec_it + β2 RDCAPdecbio_it + β3 RDCAPdecbioYR17_it + Σj αj X_j,it + Σk αk IND_k + Σl αl YEAR_l + ε_it. For the analysis of Hypothesis 2-1, RDCAPdecPS and RDCAPdecPSbio are used. RDCAPdecPS is the RDCAPdec dummy variable for positive sales, where PS is coded as 1 if the change in sales is positive, and 0 otherwise; RDCAPdecPSbio is the same variable, but specific to biotech firms. The regression model for Hypothesis 2-1 replaces the explanatory variables accordingly: Tobin's q_it = α + β1 RDCAPdecPS_it + β2 RDCAPdecPSbio_it + Σj αj X_j,it + Σk αk IND_k + Σl αl YEAR_l + ε_it. For the analysis of Hypothesis 2-2, explanatory variables RDCAPdecNS and RDCAPdecNSbio are used. RDCAPdecNS is the RDCAPdec dummy variable for negative sales, where NS is coded as 1 if the change in sales is negative, and 0 otherwise; RDCAPdecNSbio is the same variable, but specific to biotech firms. The corresponding regression model is: Tobin's q_it = α + β1 RDCAPdecNS_it + β2 RDCAPdecNSbio_it + Σj αj X_j,it + Σk αk IND_k + Σl αl YEAR_l + ε_it.
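The indicator variables for Hypotheses 2, 2-1 and 2-2 can be built mechanically from year-over-year changes in a panel. Continuing the hypothetical panel from the previous sketch (the firm identifier, capitalized R&D level and sales columns added here are likewise placeholders), one possible construction is:

```python
# Hypothetical additional columns needed for the H2-family dummies.
df["firm_id"] = rng.integers(1, 126, n)       # firm identifier
df["RDCAP_amt"] = df["RDCAP"] * 1000.0        # capitalized R&D level
df["SALES"] = rng.lognormal(4.0, 1.0, n)      # sales

df = df.sort_values(["firm_id", "YEAR"])
d_rdcap = df.groupby("firm_id")["RDCAP_amt"].diff()   # change in capitalized R&D
d_sales = df.groupby("firm_id")["SALES"].diff()       # change in sales

df["RDCAPdec"] = (d_rdcap < 0).astype(int)            # 1 if capitalized R&D decreased in year t
df["RDCAPdecbio"] = df["RDCAPdec"] * df["BIO"]        # decrease x biotech dummy
df["YR17"] = (df["YEAR"] == 2017).astype(int)         # 1 if the year is 2017
df["RDCAPdecbioYR17"] = df["RDCAPdecbio"] * df["YR17"]

# PS/NS variants: a decrease combined with a positive / negative sales change.
df["RDCAPdecPS"] = df["RDCAPdec"] * (d_sales > 0).astype(int)
df["RDCAPdecPSbio"] = df["RDCAPdecPS"] * df["BIO"]
df["RDCAPdecNS"] = df["RDCAPdec"] * (d_sales < 0).astype(int)
df["RDCAPdecNSbio"] = df["RDCAPdecNS"] * df["BIO"]
```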
Descriptive statistics Mean values of the variables are reported for the sample, where MTB denotes the market-to-book ratio (market value of equity divided by book value of equity) and INV denotes plant, property, and equipment (except land and construction in progress) divided by total assets. The Pearson correlation results are reported in Table 3. Significant correlations are observed between market value (TQ) and some of the explanatory variables (RDCAP, RDCAPbio, RDCAPdecbio, RDCAPdecbioYR17, RDCAPdecPS, RDCAPdecPSbio, RDCAPdecNSbio) (p < 0.01). Table 3. Correlations. Note: See Table 2 for variable definitions. Significant positive correlations are also seen between firm value and some of the control variables (SIZE, GROW, MTB, INV) (p < 0.01). Significant negative correlations are observed between firm value and some of the explanatory variables (RDCAPdec, RDCAPdecNS) (p < 0.01). Significant positive correlations are also seen between firm value and LEV (p < 0.01). To test for multi-collinearity, variance inflation factors (VIFs) are computed; no multi-collinearity problems are evident. Table 4 presents both the OLS regression and the fixed effects regression results for the association between capitalized R&D and the firm's market value. The results of the OLS regression show that capitalized R&D has a significant positive association with market value (p < 0.01) and that capitalized R&D for biotech firms has a more significant positive association with market value (p < 0.01) than that of firms in other industries. Thus, the results provide support for H1. The results imply that R&D capitalization information for biotech firms appears to be more value relevant. These results confirm that biotech companies receive positive feedback, through aggressive R&D accounting treatment, on the high expectations of R&D investment success. The large amount of R&D invested in the biotech industry relative to other industries, and its capitalization, are considered well aligned with the future sustainable success of biotech firms. Significant associations are also seen between market value and the control variables. Some of the control variables (SIZE, GROW, MTB, INV) have a significant positive association with market value, and LEV has a significant negative association with market value. The results of the fixed effects regression remain consistent with the OLS results for the explanatory variables. Note: See Table 2 for variable definitions. t-values are shown in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. Table 5 presents both the OLS regression and the fixed effects regression results for the association between the decrease in capitalized R&D and the firm's market value. The results of the OLS regression show that the decrease in capitalized R&D is negatively associated with market value (p < 0.01). However, the decrease in capitalized R&D for biotech firms is positively associated with market value (p < 0.01). In addition, the decrease in capitalized R&D for biotech firms in the year when major biotech firms switched their R&D accounting treatment from capitalizing to expensing, by restating their financial statements or recognizing impairment losses on capitalized R&D, has a stronger positive association with market value (p < 0.01). Thus, the results provide support for H2. The results suggest that a decrease in the capitalized R&D of biotech firms is considered to improve accounting transparency, and therefore does not hurt the positive evaluation of the future value of biotech firms aiming at sustainable technology development through R&D investment. Rather, these firms seem to have received a better evaluation in the market. The results suggest that biotech firms' accounting error correction, regardless of whether it was done voluntarily or compulsorily, appears to be a reliable indication of R&D success and positively affects a firm's market value. Significant associations are also seen between market value and the control variables. Some of the control variables (SIZE, GROW, MTB, INV) are positively associated with market value, whereas LEV is negatively associated with market value. The results of the fixed effects regression remain consistent with the OLS results for the explanatory variables except for RDCAPdecbio. Note: See Table 2 for variable definitions. t-values are shown in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. Panel A of Table 6 presents the OLS regression results for the comparative effect of the decrease in capitalized R&D on the firm's market value, depending on the firm's performance. The results for Model 1 show that the decrease in capitalized R&D has a significant positive association with market value (p < 0.01) when firm performance improves. The results apply to all industries and provide support for H2-1. When a firm's performance is good, the decrease in R&D capitalization does not seem to weaken faith in R&D success. However, the results for Model 2 show that the decrease in capitalized R&D is negatively associated with market value (p < 0.01) when the firm's performance worsens. In conclusion, when capitalized R&D decreases, the market tends to focus more on the firm's performance. Exceptionally, the decrease in capitalized R&D of biotech firms is positively associated with market value (p < 0.01), even when performance worsens. Thus, the results provide support for H2-2.
Regardless of the decline in R&D capitalization or the decline in firms' performance, the market's positive evaluation of biotech firms' future success remains unchanged. This may imply enormous expectations of, and unconditional affirmation around, the R&D process of biotechnology firms. Policymakers should recognize the special nature of biotechnology companies and create an environment in which they can cultivate their sustainable growth potential, without interfering with the decisions of investors. Significant associations are also seen between market value and the control variables. Some of the control variables (SIZE, GROW, MTB, INV) are positively associated with market value, whereas LEV is negatively associated with market value. Corresponding results are reported in Panel B of Table 6. Robustness regression An analysis is carried out on the regression models using robust regression techniques to eliminate the influence of outliers in all specifications. As can be seen in Table 7, the results of the study remain consistent under this robustness check. Note: See Table 2 for variable definitions. t-values are shown in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01. CONCLUSION While the accounting treatment of R&D has long been a topic of debate, it has taken on even more significance as Korean biotech firms have seen their stock prices surge. R&D capitalization is allowed as long as certain conditions are met, but some have questioned the legitimacy of this treatment and argued for new guidelines on the application of more transparent accounting rules. However, as has been verified in preceding studies, R&D accounting treatment affects R&D investment, and expensing of R&D triggers under-investment in R&D (Dukes et al., 1980; Shehata, 1991; Wasley and Linsmeier, 1992; Oswald and Zarowin, 2007). Cuts in R&D may undermine a company's competitiveness, with the blow being especially hard on biotech firms, where R&D investments are usually larger than in other industries. As such, this study first analyzed the correlation between capitalized R&D and its value relevance, as in preceding studies, and conducted a comparative analysis of Korean biotech firms. The analysis showed that, in line with the findings of preceding studies, capitalized R&D had a positive correlation with market value, and compared to firms in other industries, the capitalized R&D of biotech firms had greater value relevance. This seems to be due to the market's high expectations for the future success of the massive R&D spending of biotech firms. Next, based on the empirical studies showing that capitalization of R&D has a higher value relevance than expensing, a second hypothesis was set: when capitalized R&D decreases, its information usefulness or value relevance in the market would also drop. For this hypothesis, the case of biotech firms was also compared. The analysis results showed that a decrease in R&D capitalization had a negative effect on market value, but this was not found in the case of biotech firms. Moreover, specific years were examined in which major biotech firms, voluntarily or because of an audit, corrected accounting errors. It was found that, perhaps due to increased expectations for accounting transparency and future R&D success, such moves had a positive effect on market value. In a further analysis, the decrease in capitalized R&D was associated with firm performance.
If firm performance does not drop despite a voluntary or forced decrease in capitalized R&D, then the decreased capitalized R&D is found to remain value relevant. When both capitalized R&D and firm performance drop, there is a negative correlation with market value. However, in the case of biotech firms, even in such situations, the market value of R&D information did not drop. In most cases, it appears that the market places much confidence in biotech firms and the usefulness of their R&D information. Nonetheless, current and potential investors are required to make careful judgments. In particular, policymakers should create an environment that helps investors distinguish between the positive and negative activities of biotech companies, while respecting the particular circumstances of biotechnology companies and not impeding their potential for future growth. The share of biotech firms in the larger market is relatively small, which may raise the concern that the analysis is less robust. Despite such limitations, this study is meaningful in that it conducted a comparative analysis of biotech firms by associating R&D accounting treatment, firm performance, and the firm's market value. Future research may explore whether practices have changed since then. The issue of limited data availability may be tackled by considering companies internationally and exploring different practices across countries.
6,336.4
2019-05-01T00:00:00.000
[ "Business", "Economics" ]
Integrated ‘all-in-one’ strategy to stabilize zinc anodes for high-performance zinc-ion batteries Abstract Many optimization strategies have been employed to stabilize the zinc anodes of zinc-ion batteries (ZIBs). Although these commonly used strategies can improve anode performance, they simultaneously induce specific issues. In this study, through the combination of structural design, interface modification, and electrolyte optimization, an ‘all-in-one’ (AIO) electrode was developed. Compared to a three-dimensional (3D) anode in routine liquid electrolytes, the new AIO electrode can greatly suppress gas evolution and the side reactions induced by active water molecules, while retaining the merits of a 3D anode. Moreover, the integrated AIO strategy achieves a sufficient electrode/electrolyte interface contact area, so that the electrode can promote electron/ion transfer and ensure a fast and complete redox reaction. As a result, it achieves excellent shelving-restoring ability (60 hours, four times) and 1200 cycles of long-term stability without apparent polarization. When paired with two common cathode materials used in ZIBs (α-MnO2 and NH4V4O10), full batteries with the AIO electrode demonstrate high capacity and good stability. This ‘all-in-one’ architectural design strategy offers a promising route to solving the issues of zinc anodes in advanced Zn-based batteries. INTRODUCTION Aqueous zinc-ion batteries (ZIBs), with their cost efficiency, high safety, nontoxic features and high energy density, are quite competitive and popular in the fields of large-scale energy storage and wearable electronics [1][2][3]. Since the demonstration of reversible zinc-ion storage in aqueous systems, numerous breakthroughs have been made in research into cathode materials [4][5][6][7][8][9]. Commercial zinc foil has been used as the anode material, but little has been done to overcome its inherent problems [10]. In the past two years, zinc metal anodes have been attracting more attention, with several studies summarizing the issues and proposing relevant optimizations [11][12][13]. Recent reviews on the anodes of ZIBs described the main issues as the formation of zinc dendrites, hydrogen evolution, corrosion and passivation [14,15]. Current modification strategies include structural design [16,17], surface modification [18], electrolyte optimization [19] and zinc alloying [20]. Structural design is a widely employed method of modification. The essence of this method is to increase the specific surface area of the electrode so that the electrolyte and current are distributed uniformly over the electrode surface, thereby achieving uniform deposition of zinc ions [21]. Accordingly, 3D zinc anodes have received more research attention than non-3D anodes [22]. Despite these advantages, however, traditional 3D anodes employed in routine liquid electrolytes show an increase in specific surface area, which in turn means a reduction in local current density. According to the Tafel formula, the hydrogen evolution overpotential should therefore decrease. In addition, as the specific surface area increases, there will inevitably be more reactive sites on the anode surface, and with this comes an increased probability of hydrogen evolution and other side reactions. The increase in these reactions will greatly reduce the Coulombic efficiency (CE) of zinc deposition/stripping, and thus the cycle life of the zinc anode, thereby affecting the cycle performance of the battery.
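For reference, the Tafel relation invoked above links the overpotential of an electrode reaction to the local current density. In its usual empirical form (the symbols below are conventional assumptions, not notation taken from this paper):

$$\eta = a + b \,\log_{10}\lvert j \rvert$$

where η is the overpotential, j is the local current density, and a and b are empirical constants (b is the Tafel slope). At a fixed total current, enlarging the electrode surface area lowers |j|, and hence the magnitude of the hydrogen evolution overpotential, so hydrogen evolution becomes easier to trigger; this is the trade-off described above.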
Figure 1. Schematics of structural design, interface modification, electrolyte optimization and the integrated ‘all-in-one’ system; advantages and disadvantages are also listed. Interface modification is a commonly used strategy to reduce side reactions caused by active water molecules [23]. This strategy avoids direct contact between the electrolyte and electrode. Most interface modification strategies can realize uniform zinc deposition, selective ion transfer and anti-corrosion properties [24,25]. However, the introduced coating layer increases the internal impedance and hinders the rapid transport of ions and electrons. Flexible batteries are a promising developmental direction for ZIBs [26,27], and gel electrolytes account for a large proportion of the electrolytes they employ [28,29]. However, because of differences in electrolyte fluidity, their development is significantly restricted by electrode/electrolyte interface issues [30]. This is because of the limited contact area, volume change and morphology change of the electrode during cycling. A solution to these problems is required to achieve sufficient and close contact between the electrolyte and electrode. For the above three anode modification strategies, an optimized approach is urgently required to combine their strengths. Therefore, we designed an ‘all-in-one’ (AIO) electrode by combining the strategies of structural design (3D skeleton), interface modification (sufficient interface contact) and electrolyte optimization (mixed gel electrolyte). This integrated AIO strategy should increase the electrode/gel electrolyte contact area, facilitate ion transportation and redox reactions, and improve the adaptability of the electrode to volume changes to alleviate the interface stress problem. Additionally, the AIO electrode retains the advantages of a high specific surface area while effectively suppressing hydrogen evolution and side reactions, consequently achieving better stability (Fig. 1). In a symmetrical battery, the as-prepared AIO electrode achieved 600 hours of stable cycling without significant polarization, as well as ultra-stable shelving-restoring ability. In Zn/α-MnO2 and Zn/NH4V4O10 cells, the AIO electrode also exhibited better electrochemical performance than traditional 3D anodes in routine liquid electrolytes. RESULTS AND DISCUSSION The preparation of the AIO electrode is shown in Fig. 2a. In step I, Cu foam and a Zn sheet are used as the working and counter electrodes, respectively, for Zn plating to obtain Cu foam@Zn [31]. In step II, using sodium alginate as the main component and palygorskite powder as an additive, a mixed electrolyte ‘plating’ suspension is obtained. Subsequently, the Cu foam@Zn and a Zn sheet are utilized as the working and counter electrodes, respectively, such that the Zn on Cu foam@Zn loses electrons and transfers into the electrolyte ‘plating’ suspension.
Sodium alginate in the electroplating suspension completes ionic cross-linking on the surface of the Cu foam@Zn. This is seen in Fourier transform infrared (FT-IR) spectra from the asymmetric stretching vibrations of the -COO- groups, which shift from 1615 cm-1 to a higher value of 1640 cm-1 through the formation of coordinate bonds between the carboxylate groups and zinc ions (Fig. S1, Supplementary data) [32,33]. This cross-linking also brings the palygorskite together with the electrode to form an AIO electrode. Palygorskite has been proven to effectively improve battery performance through ion exchange [34]. Surface and cross-sectional energy dispersive X-ray (EDS) mappings (Al, Zn, Si, Mg) confirm the uniform distribution of the palygorskite material [MgAlSi4O10(OH)·4H2O] at the surface and body of the gel membrane (Fig. S2). Photographs of the Cu foam, Cu foam@Zn and AIO electrodes are shown in Fig. 2a. The distributions of Zn on the Cu foam and of the mixed gel membrane on the Cu foam@Zn are both quite uniform. The cross-sectional photograph and scanning electron microscope (SEM) image of the AIO electrode show that the gel membrane penetrated the electrode and was tightly bonded to the Cu foam@Zn (Fig. 2b and c), achieving sufficient electrode/electrolyte interface contact. Such an AIO electrode can function as an anode, electrolyte and separator simultaneously, as shown in Fig. 2d. The X-ray diffraction (XRD) results show that Cu foam@Zn was obtained during the electroplating process without the formation of by-products (Fig. 2e). Analysis of the peeled-off electrolyte membrane by FT-IR shows that its infrared peaks (Fig. 2f) matched well with its two components, palygorskite and zinc alginate (Fig. S3a). To illustrate the feasibility of this optimization strategy, the electrochemical performances of the AIO electrode and the Cu foam@Zn in liquid electrolytes (2 M ZnSO4 or 2 M ZnSO4 + 0.1 M MnSO4) were compared. In terms of shelving-recovery performance, the AIO system delivered a small polarization voltage after undergoing 60 hours of shelving three times and maintained a normal open-circuit voltage (0.006 V) during the fourth shelving (Fig. 3a). In contrast, in the liquid system, after going through 60 hours of shelving twice, a significant polarization increment occurred in the third cycle. During the third shelving, the open-circuit voltage increased sharply, accompanied by battery failure. In a symmetric battery, the AIO system exhibited better reversibility and stability. Cyclic voltammogram tests of a full battery (Cu foam@Zn/α-MnO2) show that the ΔV between the oxidation and reduction peaks in the AIO system is smaller than that in the liquid system (2 M ZnSO4 + 0.1 M MnSO4). This is observed in both the first (Fig. 3b) and second cycles (Fig. S3b), indicating that the AIO system exhibits better reversibility. The floating charge current can be used to evaluate the amount of energy required to maintain a battery at 100% charge. Generally, the smaller the floating charge current, the better the stability of the system [35,36]. As shown in Fig. 3c, the floating charge current of the liquid system was 0.00777 mA, whereas that of the AIO system was reduced by 36.3% (to roughly 0.0049 mA). In addition, the AIO system effectively suppressed self-discharge (Fig. 3d). The faradaic reaction, which includes decomposition of the electrolyte, is the main cause of self-discharge. This suggests that the AIO system inhibits unwanted side reactions [37,38].
Zn4SO4(OH)4·xH2O is a common by-product in aqueous ZIBs, and its formation can be indicative of the severity of the side reactions. A comparison of the XRD patterns of the anode in the AIO and liquid systems after 100 cycles at a current density of 500 mA g-1 shows that the AIO system can effectively inhibit the formation of Zn4SO4(OH)4·xH2O (Fig. 3e). This conclusion can be verified by the SEM images of the anode in the AIO (Fig. 3f) and liquid (Fig. 3g) systems, and is further confirmed by low-magnification SEM images (Fig. S4). Moreover, the morphology of the anode after cycling in the AIO system is much flatter than that in the liquid system. To explore the reasons for the improved stability and reversibility of the AIO electrode, a linear polarization test was performed to compare the corrosion properties of Zn metal in the different systems. As shown in Fig. 4a, there was a smaller corrosion current in the gel system. It is generally believed that a lower corrosion current indicates a lower corrosion rate [39], which means that Zn metal exhibits greater stability in the AIO system. Comparing the stable electrochemical windows of the gel electrolyte and the liquid electrolyte, the former has both a higher O2 evolution potential and a lower H2 evolution potential (Fig. 4b). Thus, the AIO system can effectively relieve gas evolution [40]. To observe the hydrogen evolution more intuitively, symmetrical batteries were assembled in a transparent container to perform repeated Zn deposition/stripping. As shown in Fig. 4c, after five cycles under the same conditions (Fig. S3c), obvious bubbles can be observed on the electrode surface in the liquid system, whereas the formation of bubbles can barely be seen in the AIO system. With respect to the nucleation overpotential (NOP, Fig. 4d), that of the AIO system is 53 mV larger than that of the liquid system, which is mainly attributed to the interaction between zinc ions and carboxyl groups [33]. The larger NOP value provides a sufficient nucleation driving force to form finer nuclei [23]. The change in current with time under a constant potential is sensitive enough to reflect the nucleation process and surface changes [41]. To analyze the deposition behavior of zinc ions on copper and zinc in the two systems, chronoamperometry comparisons of the Cu foam@Zn/Cu and Cu foam@Zn/Zn batteries were conducted. It can be seen from Fig. 4e that Zn deposition on the copper substrate in the AIO system is densely arranged. In the chronoamperometry of the Cu foam@Zn/Zn battery, two-dimensional diffusion corresponds to the unrestricted diffusion of zinc ions on the anode surface. The AIO system undergoes only a fast, constrained 2D diffusion before entering a stable 3D diffusion stage (Fig. 4f); that is, zinc ions tend to be reduced at the position where they adsorb, which greatly increases the number of nucleation points and improves the deposition distribution. This limited 2D diffusion can also be attributed to the coordination between carboxyl groups and zinc ions [33]. The reasons for the improved stability of the AIO system are shown in Fig. 1. In detail, the increase in surface area through structural design causes the local current density to diminish, and consequently the hydrogen evolution overpotential to decrease [42]. In most 3D systems, the number of reactive sites inevitably increases because of surface area enlargement.
Hence, H3O+ is more likely to obtain electrons to generate hydrogen (with the generated bubbles adhering to the electrode surface), which hinders the migration path of Zn2+. As hydrogen evolution and Zn2+ deposition are competitive reactions, the easier reduction of H3O+ means that it is more difficult for Zn2+ to obtain electrons [43]; that is, the zinc stripped from the Cu foam@Zn is more difficult to re-deposit. The macroscopic phenomenon is that zinc on Cu foam@Zn gradually dissolves as the cycle number increases. Moreover, the occurrence of hydrogen evolution means that the partially remaining OH- aggravates side reactions and generates by-products such as Zn4SO4(OH)4·xH2O. For interface modification, introducing a coating layer usually leads to an increase in internal impedance, which hinders fast ion/electron transfer. Because most of the water molecules are fixed inside the gel, electrolyte optimization can effectively reduce water-induced side reactions, but poor interface contact blocks its further development. In contrast, within the AIO electrode, these advantages are cleverly combined. Most of the water molecules are fixed inside the mixed gel, and the number of active water molecules is markedly reduced. As a result, side reactions such as hydrogen evolution caused by active water molecules are greatly inhibited. Meanwhile, the role of the 3D structure in homogenizing ion deposition is retained, as the interaction between the gel electrolyte and Zn2+ has been strengthened to a certain extent. Moreover, the close contact between the gel membrane and the electrode enables fast electron/ion transportation [44]. Because the AIO system is more stable than the liquid system, it is believed that batteries with an AIO electrode should also exhibit better electrochemical performance. In 70 cycles of a Zn/Cu battery under a 20% depth of discharge (Fig. S5), the CE of the AIO system was maintained at ∼100%, while the liquid system showed a relatively obvious voltage fluctuation at 41 cycles and then completely failed in the 68th cycle (CE dropped to 0%). Because of the lack of a rigid substrate, the zinc foil completely failed after only 14 cycles under a 20% depth of discharge. In the Zn/Zn symmetric battery, the polarization voltage of the AIO system can be stabilized within ±0.06 V after cycling for 600 hours at a 7% depth of discharge, whereas the polarization voltage of the liquid system more than doubles (Fig. 4g). In addition, when the battery was disassembled to compare the electrodes (Fig. S6), the zinc on the Cu foam@Zn in the AIO system was still visible, whereas zinc dissolution could be clearly observed in the liquid system, which is essentially a result of the reduction in CE caused by the severe hydrogen evolution mentioned above. Surface and cross-sectional SEM images (Fig. S7) of the AIO electrode after cycling also confirmed that the gel electrolyte was still coated onto the 3D structural anode, with an obvious layer configuration, the same as before cycling. By comparing the stability of symmetrical batteries in the two systems at different current densities, it was found that the AIO system also shows better rate performance (Fig. S8). At a current density of 4 mA cm-2, the polarization voltage of the liquid system increased sharply, followed by system failure, whereas the AIO system maintained stable performance.
The difference in rate performance can be attributed to the difference in stability of the two systems, as well as the higher ionic conductivity of the AIO system (Fig. S9a). The high ionic conductivity may be associated with the abundant ion transfer channels of the nano-palygorskite materials [34]. As the electrochemical reaction is a coordinated process of electron transmission and ion migration, under a large current density ion migration is limited by concentration polarization; that is, systems with higher ionic conductivity tend to obtain better high-rate performance. Notably, the contact area provided by the close proximity of the gel membrane and electrode enables the high ionic conductivity of the gel electrolyte to be effectively utilized. Moreover, in terms of electron transmission, the AIO system possesses a smaller charge-transfer resistance, according to the Nyquist plots (Fig. S9b and c). In summary, fast ion/electron transfer can be realized using an AIO electrode. The superiority of the rate performance is also reflected in the Cu foam@Zn/NH4V4O10 (NVO) (Fig. S9d) and Cu foam@Zn/α-MnO2 (Fig. S9e) full cells. Comparing the cycle performance of the Cu foam@Zn/α-MnO2 system at a current density of 0.5 A g-1 (Fig. S9f), the specific discharge capacities of the two 3D systems were similar, but the liquid system started to suffer from obvious capacity fading after 150 cycles. In contrast, the capacity retention rate of the AIO system was much better. The non-3D anode (a Zn-foil-based AIO electrode) delivers inferior endurance to that of the 3D anode. The XRD patterns and SEM images of the two cathodes are shown in Fig. S10. For the NVO system at a current density of 10 A g-1, the liquid system first maintained stable cycling for ∼500 cycles, but then a 'cliff' capacity decline occurred at the 530th cycle, which was associated with severe H2 evolution and the dissolution of zinc metal (inset of Fig. S11), as reflected in the CE shown in Fig. S11. In addition, because of the increase in internal pressure, electrolyte leakage occurred in the corresponding button cell of the liquid system (Fig. S12). Because the AIO system can effectively inhibit hydrogen evolution, there was no obvious capacity decline even when the cycle number reached 1000 (Fig. 4h), and the corresponding button battery showed no electrolyte leakage. To demonstrate the potential applications of the AIO electrode, we conducted the following experiments. We assembled a soft-pack battery with an AIO electrode, and its first-cycle CE reached ∼100% (Fig. S13a). Two AIO-based ZIBs were connected in series to power an LED bulb (rated voltage: 3 V). To simulate situations that may be encountered in actual applications, bending experiments (Fig. S13b and c; Video S1), piercing experiments (Video S2) and impact experiments (Video S3) were conducted. In all of the above situations, the AIO ZIBs exhibited consistent stability. CONCLUSION In summary, an AIO electrode inheriting the advantages of the 3D zinc anode and gel electrolyte, with almost no hydrogen evolution, was prepared by two-step electroplating. In contrast to the point-to-surface contact between gel membranes and 3D zinc anodes in the past, the gel electrolyte here is tightly integrated with the Cu foam@Zn, providing more active sites and channels for redox reactions and fast ion transportation. As most water molecules in the gel electrolyte are constrained, hydrogen evolution is greatly suppressed.
Therefore, the AIO electrode can effectively reduce side reactions, improve stability and maintain a relatively flat morphology. Consequently, compared with a 3D anode in a routine liquid electrolyte, the AIO electrode exhibited a more stable CE (99.6%) at 20% depth of discharge. The stability of the AIO electrode was confirmed using full batteries with NVO and α-MnO2 cathodes. In liquid electrolytes, zinc dissolution caused by strong gas evolution induced a sharp capacity decline; in contrast, the capacity retention of the AIO system was as high as 85.4% (charging capacity), even after 1000 cycles. With this integrated AIO strategy, we hope to point out a way to combine modification methods and promote the development of next-generation Zn-based batteries.
SUPPLEMENTARY DATA
Supplementary data are available at NSR online.
AUTHOR CONTRIBUTIONS
Z.J. and S.L. proposed and supervised the project. C.L. and X.X. designed and carried out the synthesis, characterizations and electrochemical tests. J.Z., X.X. and C.L. co-wrote the manuscript. H.L., P.W., C.D., B.L. and S.L. discussed the results and participated in analyzing the experimental results.
Transcription at a Distance in the Budding Yeast Saccharomyces cerevisiae: Proper transcriptional regulation depends on the collaboration of multiple layers of control simultaneously. Cells tightly balance cellular resources and integrate various signaling inputs to maintain homeostasis during growth, development and stress, among other signals. Many eukaryotes, including the budding yeast Saccharomyces cerevisiae, exhibit a non-random distribution of functionally related genes throughout their genomes. This arrangement coordinates the transcription of genes that are found in clusters, and can occur over long distances. In this work, we review the current literature pertaining to gene regulation at a distance in the budding yeast.
Overview and Background
Transcription is the production of an RNA intermediary that links the genetic information stored in the nucleus to a specific phenotype, as outlined in the 'Central Dogma of Molecular Biology' [1,2]. In all cells, transcriptional regulation is essential for the maintenance of homeostasis and adaptation to a changing environment, allowing the cell to maintain an equilibrium within the niche that it occupies. In the case of single-celled organisms, transcription is balanced with the intracellular and extracellular signaling cues received, allowing coordination of growth with adaptation to stressors [3-5]. Proper gene expression is required for health, survival, adaptation, and development [6,7]. In all organisms, myriad layers of transcriptional regulation collaborate to modulate the transcriptome. The loss or dysfunction of regulation in even a single layer of this balance can result in severe cellular disorders, disease states, or even death. Canonical mechanisms that collaborate to regulate gene expression include regulatory nucleotide sequences as well as regulatory DNA-binding proteins [8-10]. Overlaid with these are epigenetic mechanisms, one example of which is lysine acetylation, which is required for normal development in evolutionarily divergent eukaryotes (and is the subject of several excellent reviews) [11-14]. Abnormal lysine acetylation has long been recognized as a characteristic of diseases, including cancers [15-17]. In addition to these mechanisms, there are further layers of regulation in many species, including modification of DNA nucleotides, regulation of transcription by microRNAs, and RNA turnover and degradation, which collaborate to coordinate mRNA abundance and spatial positioning in both two and three dimensions (within the nucleoplasm), among others [18-21]. The focus of this work is to review recent advances in the literature surrounding transcriptional regulation at a distance, using the budding yeast Saccharomyces cerevisiae as the model system. The budding yeast, S. cerevisiae, is an exceptional model system for molecular and genetic studies, and lends itself to insights in other eukaryotes [22-24]. While budding yeast has its own species-specific quirks, there is extensive conservation on a genetic level to humans [25,26]. Recent work has revealed valuable insights into the chromosomal distance constraints that limit transcriptional activation and repression across broad genomic regions.
Overview of Transcriptional Regulation in the Budding Yeast, Saccharomyces cerevisiae
One fundamental layer of transcriptional regulation is local cis regulatory nucleotide sequences, which include promoters, enhancers, upstream activating sequences (UAS), and upstream repressive sequences (URS). The promoter sequence is what directly interacts with the RNA polymerase to form the pre-initiation complex (PIC) [27]. The formation of the PIC is stabilized by the UAS to increase transcription (and conversely, the URS inhibit and destabilize PIC formation) [27]. S. cerevisiae contains a compact genome, and regulatory sequences are frequently in close proximity to the open reading frame (ORF) for a gene, about 300 base pairs away on average [28,29]. Promoters largely fall into two distinct families: those that are constitutively expressed under enriched nutrient growth (about 55% of promoters) and those that are induced under specific conditions (about 45% of promoters) [30]. Examples of constitutively active promoters include those for genes that are necessary for ribosome biogenesis, including NOP12, which are regulated by a UAS for Abf1p and URSs such as the polymerase A and C (PAC) and ribosomal RNA processing element (RRPE) (Figure 1). These sequences balance the production of the 200+ genes that are components of the ribosome biogenesis (Ribi) regulon [31]. During periods of rapid growth and division, the Ribi genes are upregulated and highly expressed to meet cellular demands, but during stress they are rapidly downregulated as cellular resources are diverted elsewhere [3]. Conversely, there are inducible promoters, including the GAL1 and CUP1 promoters. Transcription of genes that are associated with these promoters is typically repressed during rapid growth, but they are activated by the presence of galactose and copper, respectively, within the growth environment [32].
The proximal regulatory sequences are often binding sites for trans-acting transcription factors (TFs) that can alter the recruitment of RNA polymerases. There are roughly 270 verified and predicted TFs in the budding yeast [33,34]. TFs work in collaboration with one another for PIC formation, as seen with the ribosomal biogenesis transcription factors Abf1p, Stb3p, Tod6p, and Dot6p, and they function to maintain stoichiometric levels of expression of the Ribi genes during ribosome biogenesis (Figure 1A) [35-37]. The establishment of differential chromatin states modulates the accessibility of the cis and trans factors within a spatial and temporal context. The presence of nucleosomes can inhibit transcription, and most transcribed genes have a 150-200 bp nucleosome-free region (NFR, also called a nucleosome-depleted region, or NDR) upstream of the ORF [38-41]. Epigenetic markers of active chromatin, including acetylated histones, are found at the 5′ end of actively expressed genes [42]. Induction of the stress response in budding yeast results in the upregulation of genes to adapt to a stressor, such as the heat shock proteins that act to maintain proteostasis and to modulate tRNA abundance to regulate transcription [5,43]. TF binding can alter nucleosome dynamics at the promoter region of corresponding genes and favors PIC formation [40,44]. The spatial arrangement and positioning of genes along the chromosome contribute to the absolute levels of expression due to position effects within a genomic locus. These effects were initially characterized based on proximity to heterochromatin, including that found at the telomeres [45]. These positional effects are not limited to the proximity of heterochromatin, but are prevalent throughout the genome as well [46]. Such position effects can result in transcriptional regulation at a distance, as is seen in adjacent gene co-regulation, a phenomenon that links transcription of functionally clustered genes via shared regulatory mechanisms [47,48]. This phenomenon results in the clustering of genes whose transcripts are required in roughly equivalent stoichiometric levels by the cell, as seen in shared biosynthetic pathways and protein complexes [49].
Transcriptional Interference and Gene Repression at a Distance
Gene proximity can influence the transcription of neighboring genes via transcriptional interference, as seen at the SRG1-SER3 locus (Figure 1B) [50]. The SRG1 transcript is a non-coding RNA species that represses the expression of SER3 when transcribed [51]. The spatial arrangement of these two genes in a tandem orientation (→→) results in intergenic transcription of the SRG1 locus into the regulatory region of SER3 [51]. This overlap of transcription results in repression of SER3 as part of a serine-responsive transcriptional circuit [50,51]. Transcriptional interference is a potent regulator of gene expression, and thus can favor genome organization that allows mutually exclusive transcription patterns [52]. Proximity of a regulatory element to a gene correlates with expressional regulation. The closer a promoter, enhancer, or regulatory sequence is located to a gene, the greater the influence of the regulatory sequence on the transcription of the neighboring gene(s) [53]. Simply separating a regulatory sequence from a gene with an increasing spacer size causes a decrease in the resulting expression of a reporter gene [53,54].
Activation drops off to nil when approximately 600 base pairs separate a regulatory element and a gene. Interestingly, the Mediator complex imposes one of the distance constraints that limit transcriptional activation at a distance [53]. The Mediator protein complex is a multi-subunit complex that associates with transcriptional activators and components of the PIC to help modulate transcription [55]. The S. cerevisiae Mediator complex has three distinct domains (head, middle, and tail) and comprises 21 subunits [56]. One such subunit is Sin4p, which plays a role in UAS-core promoter specificity as a subunit of the tail domain of the Srb/Mediator coactivator complex. SIN4 null mutants display an ability to activate transcription at distances of up to two kilobases [53]. Thus, the Mediator coactivator complex limits long-distance activation under normal conditions. The Mediator components Sin4p, Rgr1p, and Cdk8p are responsible for repression of long-distance transcription, and this repression is dependent on Med2p and Med3p [57]. In a sin4 null background, the Mediator tail components can be recruited independently of the rest of Mediator [57]. An elegant genetic screen for polygenic mutants that can transcribe at greater distances found causative mutations in MOT3, GRR1, MIT1, MSN2, and PTR3 that allow long-distance activation at distances that are otherwise impermissible for transcriptional activation [57]. These isolated polygenic mutants transcribe effectively at distances outside the range of the wild type; however, they cannot activate transcription as efficiently at distances within the range typical of a wild-type promoter element. Consistent with these observations, the authors reason that multiple factors regulate activation (or repression) at a distance, and that in other, larger eukaryotes, the regulation of long-distance activation may be coordinated by multiple additional factors [57].
Gene Activation at a Distance
Budding yeast has a compact genome for a eukaryote, so it is important that activation occurs only over a short distance [58,59]. In S. cerevisiae, UASs are typically found within 450 base pairs of the TSS, whereas in metazoans with larger genomes, enhancers can be located at greater distances and are often several kilobases away [60]. Gene proximity can influence transcription throughout a chromosomal region. This results in 'pockets' of correlated gene expression genome-wide [61]. This likely occurs via the activation of genes by promiscuous promoter and enhancer elements that exert activation at a distance, oftentimes on genes that are located far away [62]. In budding yeast, this distance constraint has been characterized, with a global activation distance of roughly one kilobase, although there is extensive variance depending on the genomic locus queried [62]. The orientation of genes is important for transcriptional regulation. In simple prokaryotic organisms, functionally related genes are often clustered to allow polycistronic transcription and regulation of a gene family [63,64]. Operons are not a characteristic of most eukaryotes, with the characterized exception of C. elegans, which contains clustered genes that are transcribed as a polycistronic mRNA species [65,66]. This orientation represents an efficient manner to co-regulate multiple genes simultaneously. One feature of yeasts, including
S. cerevisiae, is the prevalence of extensive clustering of functionally related genes as neighbors throughout the genome [49,67,68]. This clustering is present in a vast number of gene families whose protein products are components of the same metabolic pathways, and has been extensively characterized in the ribosomal protein (RP) and ribosome biogenesis (Ribi) families [48]. This clustering of the RP and Ribi gene families is extensively conserved throughout evolutionarily divergent eukaryotes [69]. The orientation of co-expressed, clustered genes likely facilitates the mechanism underlying expression regulation. Clusters can be found in divergent (← →), tandem (→→ and ←←), or convergent (→ ←) orientations. Divergent promoters activate multiple genes simultaneously, such as the shared GAL1-GAL10 promoter (Figure 1C) [70]. Many functionally clustered genes in yeasts are oriented in a divergent manner, allowing for a shared bidirectional promoter [67,68,71]. Many yeast promoters have been characterized as bidirectional in nature and can function regardless of orientation relative to a gene [72,73]. The prevalence of bidirectional promoters results in pervasive 'cryptic' transcription in yeast, which is normally limited at select loci by the activity of Rap1p [74,75]. Tandem and convergent orientations can also help to modulate transcription of functionally clustered genes. When oriented in tandem, there is the possibility of a single mRNA intermediary that contains coding information for both genes. A recent analysis of the entire S. cerevisiae genome found that a small, but significant, fraction of the genome is transcribed in a bicistronic manner, such as the RTC4-GIS2 locus (Figure 1D) [76]. Bicistronic transcripts account for approximately 10% of the genes in the genome [76]. A convergent arrangement lends itself to co-expression via mechanisms that may include chromatin remodeling or long-range looping interactions within the nucleoplasm [54].
Conclusions
Regulation of gene expression throughout a genomic region has important implications for our understanding of gene functions and for biotechnological applications. A paucity of data pertaining to this phenomenon has led to missed annotation of gene functions due to transcriptional disruption across a genomic region. A representative example is the attribution of a genetic interaction between CDC50 and PAN2, rather than the bona fide interaction between PAN2 and CDC39, which neighbors CDC50 [77,78]. Such effects are especially important for geneticists exploring gene functions, who frequently employ reporter genes that may disrupt the transcriptional patterns throughout a region via the neighboring gene effect. Likewise, researchers working to engineer or manipulate specific metabolic pathways for pharmaceutical and industrial uses should take heed: the choice of location can have unintended secondary effects, depending on the locus chosen for manipulation [46].
Spatial geometry and special relativity: a comparative approach
In this work, we show the interplay of relative and absolute entities, which are present in both spatial geometry and special relativity. In order to strengthen the understanding of special relativity, we first discuss an instance of geometry and the existence of both frame-dependent and frame-independent entities. We start from a subject well known to students, three-dimensional geometric space, in order to compare it afterwards with the treatment of four-dimensional space in special relativity. The differences and similarities between these two subjects are also presented in an explicit way, with the goal of improving the comprehension of newcomers to the theory of relativity.
Introduction
The theory of relativity was proposed in 1905 by Einstein, and its world view is quite different from that of Newtonian mechanics. The uncontrolled popularization of relativity generated many preposterous views about its content. Perhaps the most improper of them is the assumption that relativity shows that 'everything is relative'. On the contrary, the main idea of the theory is to determine what is relative in the material world, in order to have a better understanding of what is not. The absolute aspects of the physical universe are, in fact, the major target of Einstein's relativity. Special relativity deals with the observation of physical phenomena made in different reference frames in relative uniform motion. The theory relates the descriptions made of these phenomena and aims at finding absolute laws underlying the different descriptions. An important feature is that there is no privileged frame, which allows one to say that the idea underlying the importance of the reference frame is the desire to describe the same physics in different frames [1]. In this way, the theory distinguishes and correlates two different realms. One of them is related to frame-dependent observations, whereas the other regards absolute quantities which cannot be directly reached by either measurements or experiments. The idea of relativity gives rise to a split of the world into two parts: what really exists in nature and what can be observed. This dichotomy is the essence of relativity, since there are both relative and absolute entities in nature. This was emphasized by Minkowski, when he stated that 'since the postulate comes to mean that only the four-dimensional world in space and time is given by phenomena, but that the projection in space and time may still be undertaken with a certain degree of freedom, I prefer to call it the postulate of the absolute world (or briefly, the world-postulate)' ([1], p. 83). This dichotomy is present in physics, mathematics, language and, especially, in our daily life. It is a kind of game based on the distinction between an object and the perceptions of the object. For example, if one takes a concrete object, like a cube, there are many ways of knowing what a cube is. This knowledge can begin with sensory contact with a physical cube, such as dice or boxes. In doing so, a child experiences the cube through touch, sight, taste and so on. After the cube is manipulated, we may say that somehow we know or have learned about the cube.
This learning about the cube occurs in an abstract way in our mind, since after this experience we recognize all the infinite cubes that exist in the world. There is a pattern which allows one to identify different objects and categorize them as 'cube'. The acquired idea of the cube encompasses particular cubes. With this idea one is able to recognize all cubes around us. Therefore, the idea of the abstract cube is an absolute and lives in our mind, without mediations. On the other hand, the representations of the cube are different and can be made, for example, by means of words, which are relative representations, since they depend on the place, language and culture. Thus, the same idea of cube may be represented by different words, namely Würfel, in German, küp, in Turkish, or even more complex symbols, as in the Chinese and Japanese languages [2]. We can also represent the cube with figures, such as those in Fig. 1. One notes that it is not possible to draw the whole cube. The partiality of representations necessarily incorporates a perspective, which is always particular to an observer. In both geometric and special relativities, the description of physical phenomena depends on the observer, who is in a given frame. An observer is not able to see the whole cube directly. However, he can see each face separately and reconstruct the whole cube in his mind. The idea of the cube and the relationships between its vertices are frame-independent. Hence, there is an absolute entity underlying the relative descriptions. The notion of the absolute has always been present in physics, even in geometric objects. In order to understand relativity applied to geometry, we discuss next how the cube is described geometrically.
The geometric cube
Take as an example the eight vertices of a cube, seen by three different observers. A geometric theory of relativity is based on the idea that each observer describes the same point in a different way. However, these different descriptions keep well-defined relationships between the points. In analytical geometry, a frame is the xyz system. In this paper we use people's names to designate the frames [2], for example, Mary's frame, denoted by S_M and shown in Figure 2(a). For a cube with edge L, in S_M, the vertices sit at all combinations of 0 and L:

P_1^M: (0, 0, 0), P_2^M: (L, 0, 0), ..., P_7^M: (0, L, L), ... (1)

Taking Anna's frame S_A, shown in Figure 2(b), one can note that S_A is displaced by an arbitrary distance d in relation to S_M, which can be written as a vector d = (−a, −b, −c), so that each vertex is shifted accordingly:

P_1^A: (a, b, c), P_2^A: (L + a, b, c), ... (2)

Taking another frame, John's frame S_J, shown in Figure 2(c), the x and y axes are now rotated by 30° in relation to S_M, and the descriptions of the same points become

P_1^J: (0, 0, 0), P_2^J: (L cos 30°, −L sin 30°, 0), ..., P_4^J: (0, 0, L), ... (3)

The aim of these examples is to verify the dialectical relation between the object and the perceptions of the object. The cube is the same for the three observers, whereas its description is not. One could also imagine being born in a world without concrete cubes, such as small boxes, dice, and so on; a world in which cubes can only be known by means of their analytical descriptions, such as the sets of points (1), (2) and (3). It would be hard to reach the concept of the cube in such a world having only these descriptions. However, there are stable and well-established features of the cube which are common to all particular descriptions. A theory which is able to distinguish relative from absolute entities in front of a description is called a 'theory of relativity'.
The geometric invariants
To maintain the idea of the cube, the relations between the vertices should have the same measure in all frames, and one says that the distance between two vertices is an invariant, such as the length of the diagonal. For example, the relation between vertices 7 and 2 in each of the three frames is given by the difference of their coordinate triples. One notes that these differences are not equal across the three frames; however, if we consider each subtraction as a generic vector A, the length of the diagonal is given by the modulus of the vector,

|A| = √(x² + y² + z²),

and in all three frames this modulus equals L√3. The geometric meaning of this result is simple: L√3 is the distance between vertices 7 and 2, which is an invariant. In the first two cases, this invariant arises between translated frames (S_M and S_A), whereas in the third, the invariant arises under a rotated frame (the 30-degree rotation of the x and y axes in S_J). The invariants arising under rotated frames are quite relevant in geometric relativity. For example, the single point P in Figure 3(a) is written as

(x, y, z) = (r cos θ, r sin θ, z), (4)

where r is the modulus of the projection of the position vector onto the xy plane, whereas in the x′y′z′ system, which is rotated by an angle α in relation to the xy axes, the new description of P, shown in Figure 3(b), is

(x′, y′, z′) = (r cos ϕ, r sin ϕ, z), (5)

where ϕ = θ − α. Although the two descriptions are different, one shows that the modulus of A is an invariant by calculating |A| = √(x² + y² + z²). In the xyz system we have |A|² = r² cos² θ + r² sin² θ + z² = r² + z², and in the x′y′z′ system the same operation gives |A′|² = r² cos² ϕ + r² sin² ϕ + z² = r² + z². These results show that the modulus of a vector depends neither on the reference system nor on the rotated angle. A theory of relativity is one which transforms the coordinates from one frame into another, i.e., it transforms (x, y, z) of P into (x′, y′, z′). We note that the vector A does not depend on α in the two frames and, hence, we have to relate α with θ and ϕ by means of the following trigonometric relations:

cos ϕ = cos θ cos α + sin θ sin α,  sin ϕ = sin θ cos α − cos θ sin α. (6)

In order to obtain the set of equations which transforms xyz → x′y′z′, we replace eqs. (6) in (7), yielding

x′ = x cos α + y sin α,
y′ = −x sin α + y cos α,  (8)
z′ = z,

which can also be represented in matrix form:

(x′)   ( cos α   sin α   0) (x)
(y′) = (−sin α   cos α   0) (y)
(z′)   (   0       0     1) (z)

These equations are well known from rotation geometry and are quite useful, since they determine the coordinates of the same vector in different frames and allow one to change frames analytically, without constructing the geometric figure. Therefore, the figure of the cube can be abandoned and the calculation made only with the points. Returning to vertices P_2^M and P_7^M, described by (L, 0, 0) and (0, L, L) in S_M, the result (8) allows one to obtain P_2^J and P_7^J in S_J, which is rotated by an angle θ in relation to S_M. Figure 4 shows these points in S_M and their corresponding vectors.
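The invariance of the diagonal under rotation can be checked numerically. A minimal Python sketch follows, assuming edge L = 1, a 30° rotation, and the vertex labels used above (P2 = (L, 0, 0), P7 = (0, L, L)); both frames give the same diagonal length L√3.

```python
import numpy as np

L, alpha = 1.0, np.deg2rad(30)  # edge length and rotation angle (assumed values)

# Rotation of the x and y axes by alpha about z, as in eq. (8):
# new coordinates of a fixed point when the frame is rotated.
R = np.array([[ np.cos(alpha), np.sin(alpha), 0.0],
              [-np.sin(alpha), np.cos(alpha), 0.0],
              [0.0, 0.0, 1.0]])

P2 = np.array([L, 0.0, 0.0])   # vertex 2 in Mary's frame S_M
P7 = np.array([0.0, L, L])     # vertex 7 in S_M

P2_J, P7_J = R @ P2, R @ P7    # the same vertices described in John's frame S_J

# The diagonal length is frame-independent: both norms equal L*sqrt(3).
print(np.linalg.norm(P7 - P2), np.linalg.norm(P7_J - P2_J), L * np.sqrt(3))
```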
Using eqs. (8), the same vectors can be written in S_J, yielding

A′₂ = (L cos θ, −L sin θ, 0),

and, similarly, the vector which represents point 7 in S_J,

A′₇ = (L sin θ, L cos θ, L).

It is easy to see that they are different in each frame, but the distance between P₇ and P₂ remains unchanged, since it corresponds to the length of the diagonal. Calculating |A₇ − A₂| and |A′₇ − A′₂|, we have

|A₇ − A₂| = |A′₇ − A′₂| = L√3.

The main goal of these demonstrations is to emphasize that in spatial geometry there are frame-independent entities. These entities are invariants and are obtained by means of some mathematical operation between two frames. In general, the invariants are hidden behind the descriptions, and to obtain them it is necessary to perform an operation with the coordinates. In spatial geometry it is usual to call the theory of coordinate transformations 'relativity by rotations'. In Einstein's special relativity there is also a set of equations which transforms descriptions of physical events between two frames. In the following section we show the differences and similarities between spatial relativity and special relativity.
The relativistic cube
In geometry, the game between the object and its representations can be complex and depends on the observers. Einstein's theory of relativity is similar, and its purpose is to find the absolute laws which determine the apparent description of phenomena. The great difference in this theory is that it relates the descriptions of moving observers. Events are particularly important in relativity. They are occurrences which can be described by four coordinates in a frame: one instant in time and one point in three-dimensional space. Minkowski introduced this concept as a world-point, saying that 'the objects of our perception invariably include places and times in combination. Nobody has ever noticed a place except at a time, or a time except at a place. [...] A point of space at a point of time, that is, a system of values x, y, z, t, I will call a world-point. The multiplicity of all thinkable x, y, z, t systems of values we will christen the world' ([3], p. 76). The occurrence of an event is something absolute, i.e., if it occurs in a given frame, it will also occur in all the other ones. However, the instant and the position at which it occurs depend on the frame. The theory is based on the idea that different observers describe the same event in different ways, by means of different coordinates. A frame is a system with three spatial axes plus a time-like coordinate and, differently from spatial geometry, the space is four-dimensional. Therefore, the description of an event is made by means of a point with four coordinates. An event in S_M, S_A and S_J is described as (c t_M; x_M, y_M, z_M), (c t_A; x_A, y_A, z_A) and (c t_J; x_J, y_J, z_J), where the time coordinate was multiplied by c so that all components have the same dimension. An event in relativity is similar to a vertex of the cube in spatial geometry. In the case of geometry, the idea of the cube is defined by a specific relation between the vertices, which is always equal for different frames and, therefore, absolute. We used the calculation of the diagonal to elucidate the existence of invariants. In special relativity, there are also relationships between events which preserve and maintain some abstract entity that is analogous to the idea of the cube.
In order to understand this abstract relativistic cube, we consider another example, which in turn cannot be drawn in the same way, since we are dealing with a four-dimensional space. Moreover, one needs to consider the time coordinate in the vertices of this 'relativistic cube', i.e., the object is constructed from events. Take a ruler of length L with two pens coupled at its ends, one red and the other blue. The pens can mark a paper with two dots of different colors. The ruler comes down and makes these two marks for the first time at an instant t₁. Then, the ruler changes its position and repeats the colored marks at a different y-coordinate, at the instant t₂. This situation is illustrated in Figure 5, where a and b are the distances of the ruler from the origin at the instants t₁ and t₂, respectively. The two descents of the ruler generate four events, i.e., four occurrences in the four-dimensional space, which are described by:

E₁: (c t₁; 0, a, 0), E₂: (c t₁; 0, a + L, 0), E₃: (c t₂; 0, b, 0), E₄: (c t₂; 0, b + L, 0). (10)

One can also situate these four events in a four-dimensional space in the same way that we represented the vertices of the geometric cube in a three-dimensional space. One notes that the coordinates x and z are always zero, since the ruler is placed along the y-axis. This facilitates the construction of these events, which are shown in Figure 6. These four events seem to be the vertices of a cube, together representing one of its faces; however, it is a face of an imaginary cube, since one of the axes is time and the figure is a parallelogram. The purpose of this example is to show that events in relativity are represented by means of four coordinates, i.e., there is one more coordinate than in the description of a point in analytical geometry. In both physics and mathematics the change of frame is quite important. We showed that the coordinates of the vertices of a cube are different for each frame and that it is possible to transform a set of coordinates from one frame into another. In special relativity, the set of equations which does this is the Lorentz transformations. In relativity we have four coordinates and, therefore, the Lorentz transformations transform (ct; x, y, z) → (ct′; x′, y′, z′). There are two essential differences between eq. (8) and the Lorentz equations: (i) in relativity there are four equations involving four coordinates, and (ii) there is no angle of rotation between frames; instead there is a relative velocity between frames. Thus, the angle in eq. (8) is replaced by a relative velocity v between the two frames. To simplify the calculation, we take the relative velocity v along the y-axis only and, therefore, there are no changes in the x and z coordinates. Taking S_M and S_J, their descriptions of an event are (c t_M; x_M, y_M, z_M) and (c t_J; x_J, y_J, z_J). If John moves relative to Mary with a constant velocity v along the y-axis, and they adopt a single space and time origin, the mathematical operations which relate their sets of four coordinates are

c t_J = γ (c t_M − (v/c) y_M),
x_J = x_M,
y_J = γ (y_M − v t_M),
z_J = z_M,

where γ = 1/√(1 − v²/c²) is called the Lorentz factor [4]. Considering the four events (10) as being in S_M, one can obtain their descriptions in S_J, which moves with relative velocity v. Calculating the y and t coordinates, we have, for example,

E₃: Mary (c t₂; 0, b, 0) → John (γ(c t₂ − v b/c); 0, γ(b − v t₂), 0),
E₄: Mary (c t₂; 0, b + L, 0) → John (γ(c t₂ − v(b + L)/c); 0, γ(b + L − v t₂), 0),

and similarly for E₁ and E₂ with a and t₁.
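The Lorentz transformation of the four events can be worked through numerically. A minimal sketch follows; the numerical values of v, L, a, b, t₁ and t₂ are assumed for illustration, not taken from the text.

```python
import numpy as np

c = 3.0e8                      # speed of light (m/s)
v = 0.6 * c                    # assumed relative velocity along y
gamma = 1 / np.sqrt(1 - (v / c) ** 2)

L, a, b = 0.30, 0.10, 0.50     # assumed ruler length and positions (m)
t1, t2 = 0.0, 1.0e-9           # assumed marking instants (s)

def boost_y(ct, y):
    """Lorentz boost along y: (ct; y) in S_M -> (ct'; y') in S_J."""
    return gamma * (ct - (v / c) * y), gamma * (y - (v / c) * ct)

events_M = {"E1": (c * t1, a), "E2": (c * t1, a + L),
            "E3": (c * t2, b), "E4": (c * t2, b + L)}

for name, (ct, y) in events_M.items():
    ct_J, y_J = boost_y(ct, y)
    print(f"{name}: Mary (ct={ct:.3f}, y={y:.3f}) -> John (ct'={ct_J:.3f}, y'={y_J:.3f})")
```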
One notices that the descriptions in S_J are quite strange, and classical intuition does not grasp their meaning, since we are dealing with a four-dimensional space and the spatial and temporal coordinates appear mixed together. On the other hand, underlying all these descriptions there must be something common to both frames. This abstract entity is the absolute which does not depend on the change of frame and is an invariant under Lorentz transformations. The relativistic invariant is hidden, and it is necessary to perform some mathematical operation to find it.
The relativistic invariants
In the geometric cube, the invariant is found by means of the relation between two vertices, and we took as an example the diagonal length. The operation to obtain the diagonal length is the modulus of the vector A, calculated as √(x² + y² + z²). In relativity there is one more coordinate, which leads one to ask whether the distance between two events is calculated by the same formula, just adding the t-coordinate term, as represented in Figure 7. The answer to this question is no, and it was introduced into relativity by Minkowski [3], who realized that the distinction between space-like and time-like coordinates is made by means of a change of sign. In relativity, the distance between two events is defined by

ds² = c² dt² − dx² − dy² − dz²

and is called the relativistic interval. This strange sum has an important meaning within the theory and represents the relativistic invariant, analogous to the one in geometry. In Euclidean space, a vector squared corresponds to the operation A² = A·A, the dot product of A with itself. This result, as we already mentioned, is an invariant, a scalar entity. The great novelty of relativity is the redefinition of the dot product, which yields invariant entities. In a four-dimensional space, the dot product of a 4-vector A = (A₀; A_x, A_y, A_z) with itself is

A² = A₀² − A_x² − A_y² − A_z².

The minus signs in front of the spatial coordinates are essential for the dot product A² to be frame-independent. All scalar products of four-vectors are relativistic invariants; however, only some of them are relevant and have meaning within the theory. The first invariant we present is the four-distance √A², which replaces the typical Euclidean notion of distance [5]. One can show that this abstract entity is equal for both S_M and S_J by calculating the interval in each frame. For S_M it is

(ds²)_M = c²Δt_M² − Δx_M² − Δy_M² − Δz_M², (14)

whereas for S_J the sum is

(ds²)_J = c²Δt_J² − Δx_J² − Δy_J² − Δz_J². (15)

In order to compare these two intervals, we take the events E₄ and E₁ of the previous example and calculate the interval between them. Replacing their coordinates in eqs. (14) and (15), the relativistic intervals for S_M and S_J are as follows. For Mary,

(ds²)_M = c²(t₂ − t₁)² − (b + L − a)².

For John, using the transformed coordinates, the development of the sums of squares cancels the mixed terms, which yields

(ds²)_J = γ²(1 − v²/c²)[c²(t₂ − t₁)² − (b + L − a)²] = c²(t₂ − t₁)² − (b + L − a)²,

which corresponds to the same value as for S_M.
Concluding remarks
In this work, we showed differences and similarities between three-dimensional geometry and special relativity. The treatment of physical phenomena in a four-dimensional space requires a new kind of mathematics, since the spatial and temporal coordinates are mixed together. Although the mathematical operations are different, the way of thinking is similar. The entities existing in Euclidean geometry have their corresponding analogues in special relativity, which are summarized in Table 1.
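Continuing the sketch above, the invariance of the interval between E₄ and E₁ can be verified directly (x and z are zero throughout, so only ct and y enter):

```python
# Relativistic interval ds^2 = (c dt)^2 - dy^2 between two events,
# reusing events_M and boost_y from the previous sketch.
def interval(ct_a, y_a, ct_b, y_b):
    return (ct_b - ct_a) ** 2 - (y_b - y_a) ** 2

ct1, y1 = events_M["E1"]
ct4, y4 = events_M["E4"]
ds2_M = interval(ct1, y1, ct4, y4)                            # Mary's frame
ds2_J = interval(*boost_y(ct1, y1), *boost_y(ct4, y4))        # John's frame
print(ds2_M, ds2_J)   # identical up to floating-point rounding
```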
We mentioned in this paper only one relativistic invariant, namely the four-distance. However, there are other important invariants, generated from different dot products, for example, the proper time τ, coming from a specific time-like interval [5], and the mass, which is defined by the square of the 4-momentum p² [6]. This comparison between geometry and relativity is a starting point for students to plunge into the four-dimensional world of physics. It aims at making the subject more understandable for students not acquainted with the theory of relativity. On the other hand, both teachers and students have to be aware that the interpretation of four-dimensional entities in relativity is quite complex, since it deals with abstract and absolute concepts. In this sense, there is no continuous path from a lower to a higher dimension and, hence, the understanding of four-dimensional entities can only be achieved by means of mathematical constructions.
Figure 3: Description of a vector in rotating frames. Figure 4: Vectors A₂ and A₇ in S_M. Figure 5: Ruler in different positions.
Direct observations of anomalous resistivity and diffusion in collisionless plasma
Coulomb collisions provide plasma resistivity and diffusion, but in many low-density astrophysical plasmas such collisions between particles are extremely rare. Scattering of particles by electromagnetic waves can lower the plasma conductivity. Such anomalous resistivity due to wave-particle interactions could be crucial to many processes, including magnetic reconnection. It has been suggested that waves provide both diffusion and resistivity, which can support the reconnection electric field, but this requires direct observation to confirm. Here, we directly quantify anomalous resistivity, viscosity, and cross-field electron diffusion associated with lower hybrid waves using measurements from the four Magnetospheric Multiscale (MMS) spacecraft. We show that anomalous resistivity is approximately balanced by anomalous viscosity, and thus the waves do not contribute to the reconnection electric field. However, the waves do produce an anomalous electron drift and diffusion across the current layer associated with magnetic reconnection. This leads to relaxation of density gradients at timescales of order the ion-cyclotron period, and hence modifies the reconnection process. Most of the visible universe is composed of plasma, consisting of ions and electrons. The behavior of plasma is governed by electromagnetic forces. In low-density solar and astrophysical plasmas Coulomb collisions are typically extremely rare, meaning that collisions between particles do not play a role in the behavior of the plasma and cannot provide plasma resistivity and diffusion. However, the scattering of particles by electromagnetic waves can introduce effective collisions, lowering the plasma conductivity 1,2. Such anomalous resistivity due to wave-particle interactions is thought to be crucial to a wide variety of collisionless plasma processes 3-5. One process where anomalous effects are thought to be important is magnetic reconnection, a fundamental plasma process providing explosive energy releases by reconfiguring the magnetic field topology 6,7. In particular, it has been suggested, based on theoretical and numerical results, that waves can provide both diffusion and resistivity, which can potentially support the reconnection electric field 8,9, the out-of-plane electric field responsible for sustaining reconnection. One wave that has received significant attention as a source of anomalous effects is the lower hybrid wave 10-12. Lower hybrid waves are found at frequencies between the ion and electron cyclotron frequencies and are driven by plasma gradients and the associated cross-field currents 11,13. Previous attempts to calculate anomalous terms concluded that the anomalous resistivity was small 14,15, while the associated cross-field particle diffusion could be significant 16,17. However, these estimates relied on density fluctuations inferred from the spacecraft potential, electron velocities inferred from the electric and magnetic fields assuming electrons remain frozen in, and often single-spacecraft measurements. An external electric field can modify the spacecraft potential, making density fluctuations associated with waves inferred from the spacecraft potential unreliable 18,19. Similarly, it is unclear how well the frozen-in approximation works without direct measurements.
Recent observations have shown that electrons remain close to frozen in, although pressure fluctuations associated with the waves can cause some deviation from the ideal frozen-in condition 20. Thus, calculations of anomalous resistivity, viscosity, and cross-field diffusion based on direct particle measurements are needed to determine the role of lower hybrid waves. In this work, we directly measure and quantify anomalous resistivity, viscosity, and cross-field electron diffusion associated with lower hybrid waves using the high-resolution fields and particle measurements from the four MMS spacecraft 21. We show that anomalous resistivity (drag) is balanced by viscosity (momentum transport), and thus the waves do not contribute to the reconnection electric field. However, the waves do produce an anomalous electron drift and diffusion across the current layer associated with magnetic reconnection. This can lead to the relaxation of density gradients at timescales of order the ion-cyclotron period, which counteracts the steepening of density gradients caused by magnetic reconnection and hence modifies the process.
Results
Magnetic reconnection and case study. A region where reconnecting current sheets and potential anomalous effects can be found is the terrestrial equatorial magnetopause, the boundary between the shocked solar wind in the magnetosheath and the magnetosphere (Fig. 1a). Magnetic reconnection occurs between the high-density magnetosheath and the more tenuous magnetosphere. This results in reconnection being asymmetric, with strong density gradients across the boundary. Figure 1b shows the result of a numerical simulation (see Methods, subsection Simulation description) designed to illustrate the magnetopause reconnection event presented in Fig. 2. The approximate orbit of MMS moving from the magnetosheath to the magnetosphere is indicated, with the turbulent magnetopause separating the two regions. The density fluctuations on the low-density side of the reconnection region are due to lower hybrid waves, which are driven by the strong density gradients in this region. Figure 2 provides an overview of a magnetopause reconnection event observed by MMS. We use MMS electric 22,23 and magnetic field data 24,25, and electron and ion data 26. In particular, to investigate fluctuations in the electron and ion distributions associated with waves, we use particle moments sampled at 7.5 and 37.5 ms, respectively 27. The electron sampling rate is high enough to resolve the local lower hybrid frequency and is unique and essential for comparisons with the lower hybrid waves. Magnetic field data from one MMS spacecraft are shown in Fig. 2a in a local current sheet coordinate system: the current sheet normal points along N, L is along the anti-parallel magnetic field direction, and M = N × L completes the right-handed coordinate system. The local coordinates are determined using a minimum variance analysis of B. The magnetopause crossing is characterized by a reversal in B_L from negative in the high-density magnetosheath to positive in the low-density magnetosphere (Fig. 2b). MMS crosses the magnetopause close to, but southward of, the electron diffusion region (EDR), as indicated by the electron and ion jets reported previously in ref. 28. Based on four-spacecraft observations, we estimate the current sheet velocity to be ≈40 km s−1 sunward in the N direction. The components of the electric field E perpendicular and parallel to the magnetic field B are shown in Fig. 2c.
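The minimum variance analysis used to construct the LMN frame can be sketched in a few lines. This is a minimal illustration on synthetic data, not the MMS processing chain: the tanh reversal and the random components standing in for the other field components are assumptions.

```python
import numpy as np

def minimum_variance(B):
    """Minimum variance analysis (MVA) of a magnetic field time series.

    B: (n_samples, 3) array. Returns the eigenvalues (ascending) and the
    eigenvectors (as columns) of the magnetic variance matrix; the
    minimum-variance eigenvector estimates the boundary normal N and the
    maximum-variance eigenvector the reconnecting field direction L.
    """
    M = np.cov(B.T)                  # 3x3 variance matrix of the components
    vals, vecs = np.linalg.eigh(M)   # eigh sorts eigenvalues in ascending order
    return vals, vecs

# Synthetic current-sheet crossing standing in for MMS data: the first
# component reverses (L-like), the others are weaker fluctuations.
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 400)
B = np.c_[20 * np.tanh(t / 0.2),           # reversing component -> L
          3 * rng.standard_normal(400),    # intermediate-variance component
          0.5 * rng.standard_normal(400)]  # low-variance component -> N

vals, vecs = minimum_variance(B)
N_dir, L_dir = vecs[:, 0], vecs[:, 2]
print("eigenvalues:", vals)
print("N estimate:", np.round(N_dir, 2), "L estimate:", np.round(L_dir, 2))
```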
The most intense waves are observed on the low-density side of the current sheet, mainly perpendicular to B in the N and M directions, with some intermittent smaller-amplitude, higher-frequency fluctuations parallel to B (close to the L direction). We identify the waves as lower hybrid drift waves driven by the diamagnetic current at the density gradient (Fig. 2b). Lower hybrid waves occur between the ion and electron gyrofrequencies. The waves have a frequency of around ~10 Hz, a phase speed of v_ph ≈ 140 km s−1, and a wavenumber of kρ_e ≈ 0.4 28, where ρ_e is the thermal electron gyroradius. The analysis techniques used to determine the wave properties are detailed in ref. 20. These waves have been proposed as a source of anomalous resistivity and can be important for magnetic reconnection 29. Some recent studies concluded that the waves are relatively unimportant 5,30,31, while others conclude that the waves are important for ongoing reconnection 9,12,32. Figure 2d shows the perpendicular and parallel components of V_e associated with the lower hybrid waves. The fluctuations are well resolved, and the electron moments can be used to calculate the associated anomalous terms. Large V_e fluctuations are observed not only perpendicular but also parallel to B, indicating that the wave vector is not exactly perpendicular to B. This means the waves can potentially heat electrons. Figure 2e shows E and the electron and ion convection terms −V_e × B and −V_i × B in the M direction. Throughout the interval E ≈ −V_e × B, meaning the electrons move together with the magnetic field (are approximately frozen in), as expected for lower hybrid waves. In contrast, −V_i × B remains close to zero. Although the ion moments do not fully resolve the waves, this is consistent with the ions being unmagnetized, with only small perturbations in V_i. This results in large-amplitude fluctuating currents, which are in turn responsible for fluctuations in B (Fig. 2a). Figure 2f displays density fluctuations, normalized to the background density, associated with the waves. Large normalized perturbations, δn_e/n_e > 0.1, and electric field fluctuations suggest that anomalous resistivity may be significant.
Anomalous terms associated with waves. To evaluate the effects of waves on the plasma we divide the quantities into fluctuating and quasi-stationary components, Q = ⟨Q⟩ + δQ, where ⟨Q⟩ corresponds to spatial or temporal averaging over fast fluctuations and δQ corresponds to fluctuations. Anomalous resistivity is effectively a force on charged particles due to waves, so we analyse a momentum equation. The electron momentum equation for a collisionless plasma is

m_e n_e dV_e/dt = −e n_e (E + V_e × B) − ∇·P_e, (1)

where e, n_e, m_e, V_e, and P_e are the unit charge, electron density, mass, bulk velocity, and pressure tensor, respectively, and E and B are the electric and magnetic fields. Introducing fluctuations, neglecting time derivatives, and averaging yields

⟨E⟩ + ⟨V_e⟩ × ⟨B⟩ + ∇·⟨P_e⟩/(e⟨n_e⟩) = D + T + I. (2)

Here D, T, and I are the anomalous drag (sometimes called resistivity), anomalous viscosity (momentum transport), and anomalous Reynolds stress, respectively. These quantities are defined as

D = −⟨δn_e δE⟩/⟨n_e⟩, (3)

T = −⟨n_e V_e × B⟩/⟨n_e⟩ + ⟨V_e⟩ × ⟨B⟩, (4)

and I, the contribution of the fluctuating electron inertial term, whose M component is approximated by eq. (12) in the Methods. We define the total anomalous contribution to equation (2) as R = D + T + I. We find that the contributions of I are negligible compared with D and T, so they are neglected in the following analyses (see Methods, subsection Estimating the anomalous terms for an example and details).
We study the electron continuity equation to find anomalous flows due to fluctuations,

∂⟨n_e⟩/∂t + ∇·(⟨n_e⟩⟨V_e⟩ + ⟨δn_e δV_e⟩) = 0, (6)

which defines an anomalous electron flow V_anom = ⟨δn_e δV_e⟩/⟨n_e⟩. A cross-field diffusion coefficient D⊥ relates the electron density and velocity fluctuations to the density gradient in the direction normal to the boundary,

⟨δn_e δV_eN⟩ = −D⊥ ∂⟨n_e⟩/∂N. (7)

A gradient relaxation timescale can be estimated as

τ ≈ L_n²/D⊥, (8)

where L_n is the density-gradient scale length. For lower hybrid waves, it has not previously been possible from observations to directly evaluate the terms involving electron density or velocity fluctuations, such as ⟨δn_e δE⟩.
Anomalous contributions from lower hybrid waves. Figure 3a shows the lower hybrid waves from one MMS spacecraft, and Fig. 3b-e display the anomalous terms D, T, the anomalous electron flow V_N,anom in the N direction, and the diffusion coefficient D⊥ in the N direction, obtained by combining data from all four spacecraft (see Methods, subsection Estimating the anomalous terms). The terms D and T have a maximum amplitude of 0.8 ± 0.2 mV m−1 (Fig. 3b, c), a small fraction (~2%) of the amplitude of the waves. For comparison, the reconnection electric field associated with magnetopause reconnection is expected to be ~1 mV m−1 for fast reconnection, comparable to the peak magnitudes of D and T. Both D and T are predominantly in the M direction and D ≈ −T. This is similar to the result found from the simulation in ref. 33. The directions of D and T remain the same while the lower hybrid waves are observed, although some residual fluctuations remain. These fluctuations result from the four-spacecraft averaging used to approximate the spatial averaging needed to compute D and T, and provide an indicator of the uncertainty in the averaging. Similarly, the magnitudes of D and T are larger than the estimated uncertainties based on the fields and particle measurements (indicated by the shaded regions associated with each anomalous term). The anomalous terms are significant only when large-amplitude waves are present and are localized to the density gradients on the low-density side of the boundary. Thus, the anomalous terms are negligible at the neutral point (B_L = 0).
Fig. 1 Magnetic reconnection at the magnetopause. Sketch of the magnetosphere (from https://mms.gsfc.nasa.gov/science.html) and a numerical simulation, showing an overview of a region with lower hybrid waves, anomalous plasma effects, and magnetic reconnection. a Sketch of the magnetosphere around Earth (blue circle) and the regions where magnetic reconnection is expected to occur (indicated by red-shaded regions). The black lines show the magnetic field lines associated with the interplanetary magnetic field (IMF) and Earth's magnetosphere (indicated by the green-shaded regions). The dark green region corresponds to the Van Allen radiation belt region. Magnetic reconnection occurs at the magnetopause, the boundary between higher-density solar wind/magnetosheath and lower-density magnetospheric plasma. b Three-dimensional simulation of magnetic reconnection at the magnetopause (see Methods, subsection Simulation description for details on the simulation parameters). Reconnection at the magnetopause is asymmetric, meaning the upstream conditions on the left and right differ significantly. The gray lines indicate the magnetic field lines and the color shading indicates electron density n_e. Lower hybrid waves at the density gradient drive fluctuations in n_e, which can cause significant electron diffusion and broadening of the layer, consistent with MMS observations. The black arrow indicates the approximate MMS trajectory through the reconnection event in b.
Overall, the contribution to the reconnection electric field is small because R ≈ 0, which results from E ≈ −V_e × B for lower hybrid waves, and because the waves do not penetrate into the center of the current sheet. Figure 3d shows a significant anomalous electron flow V_N,anom with a magnitude of up to 20 km s−1 toward the lower-density side. Because the electrons are approximately frozen in, V_anom ≈ −D × ⟨B⟩/⟨|B|⟩². Figure 3e shows a related large diffusion coefficient D⊥, which peaks at about 10⁹ m² s−1, suggesting that significant broadening of the current layer can occur. Overall, the anomalous electric field [equation (2)] is not likely to affect the reconnection electric field and the reconnection rate. Rather, the lower hybrid waves can produce anomalous diffusion of electrons from higher- to lower-density regions, thus broadening the current layer. This can in turn affect the reconnection process by modifying the Hall electric and magnetic fields and contributing to the electron heating observed in the magnetospheric inflow region 17,31. From equation (8) the estimated relaxation time is ~1 s (comparable to the ion-cyclotron period).
Examples of anomalous terms from lower hybrid waves. Figure 4 shows two different magnetopause crossings observed on 02 December 2015 (Fig. 4a-d) and 14 December 2015 (Fig. 4e-h). In Fig. 4a-d the spacecraft crossed from the magnetosphere to the magnetosheath. The spacecraft crossed the EDR at around 01:14:56 UT, close to the neutral point 34. The lower hybrid waves are observed for ≈10 s on the magnetospheric side of the boundary. In Fig. 4e-h the spacecraft crossed the EDR from the magnetosheath to the magnetosphere 35,36, with the lower hybrid waves observed on the magnetospheric side for 1 s. The properties of the lower hybrid waves were investigated in detail in ref. 20. Overall, the properties of the magnetopause crossings are similar to those of the event in Figs. 2, 3. Namely, large-amplitude lower hybrid waves are observed on the magnetospheric side, D_M < 0 and T_M > 0, such that D + T is small, and a significant V_N,anom < 0 is observed. For the 14 December 2015 event, D and T have a significant component in the L direction due to the significant B_M (guide field). In both events, the anomalous terms are negligible at the neutral point (indicated by the magenta lines in Fig. 4).
Fig. 2f Normalized density fluctuations δn_e/n_e. The cyan dashed line indicates the magnetospheric separatrix, which is the boundary between magnetospheric and reconnected field lines. The separatrix was identified by the strong electron jet directed away from the X line, as described in ref. 28 and seen in panel (d).
Although the amplitudes of the lower hybrid waves are comparable in these two events, |V_N,anom| is significantly larger for the 14 December 2015 event. This is likely because B is smaller, corresponding to a larger-amplitude δV_e,⊥ ≈ δE × B/|B|². At the times when |V_N,anom| peaks we estimate D⊥ = 0.58 × 10⁹ m² s−1 and D⊥ = 1.21 × 10⁹ m² s−1, respectively, for the 02 December 2015 and 14 December 2015 events. This corresponds to the diffusion of electrons across B from the magnetosheath to the magnetosphere in both cases. Thus the diffusion coefficients are significant and comparable to the values obtained in Fig. 2. In both cases, the uncertainties are smaller than the peak values of the anomalous terms.
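The ~1 s relaxation estimate from equation (8) can be reproduced with one line of arithmetic. A minimal sketch; the 30 km gradient scale length is an assumed, magnetopause-typical value, not a number quoted here.

```python
# tau = L_n^2 / D_perp, eq. (8): time for the waves to relax the density gradient.
D_perp = 1.0e9   # m^2 s^-1, peak diffusion coefficient (Fig. 3e)
L_n = 30.0e3     # m, assumed density-gradient scale length (hypothetical value)

tau = L_n ** 2 / D_perp
print(f"gradient relaxation time ~ {tau:.1f} s")  # ~0.9 s, of order the ion-cyclotron period
```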
To summarize, the two events presented in Fig. 4 show the same qualitative behavior as the 06 December 2015 event: D and T both reach about 0.5 mV m−1 but have opposite signs, so the contribution to E is small. Large anomalous flows and cross-field electron diffusion toward the magnetosphere are observed. In all three cases the peak values of |D| and |T| are ~2% of the maximum amplitude of δE.
Statistical results. Figure 5 shows statistics from magnetopause crossings where high-resolution particle moments are available and lower hybrid waves are observed. We divide each event into (1) EDR crossings, where the waves are observed adjacent to EDR regions identified in ref. 37, and (2) crossings without an identified EDR. In each case the largest D was in the −M direction and the largest T was in the M direction. Figure 5a shows the maximum T (|T|_max) versus the maximum D (|D|_max) and the associated uncertainties for each event. Here |D|_max and |T|_max can reach ≈1.5 mV m−1, with |T|_max increasing approximately linearly with |D|_max. Both peak at approximately the same time, so in all cases R ≈ 0. We find that |T|_max tends to be slightly smaller than |D|_max, possibly due to small deviations from the frozen-in condition for electrons caused by fluctuations in the electron pressure due to density fluctuations. The largest D and T correspond to magnetic reconnection events and EDRs, although more non-reconnection events need to be analyzed. Figure 5b shows the values of D⊥ where |V_N,anom| peaks versus the maximum −V_N,anom. Here D⊥ tends to increase as −V_N,anom increases. Each case corresponds to diffusion from higher to lower densities. We find that D⊥ ranges from 0.05 × 10⁹ to 2 × 10⁹ m² s−1, i.e., from small to very significant diffusion 38,39. The largest −V_N,anom and D⊥ tend to occur close to EDRs, although there are cases where −V_N,anom and D⊥ are also small near EDRs. Thus, cross-field diffusion and the associated broadening are expected to be highly variable during magnetopause reconnection. The results suggest that D⊥ may be largest close to the EDRs.
Discussion
We find that for lower hybrid waves the anomalous terms D, T, and V_anom can be accurately determined from the data and that R = D + T ≈ 0, so the contribution to the reconnecting electric field is negligible because electrons are approximately frozen in. However, the diffusion coefficient D⊥ and V_N,anom can often be significant, corresponding to transport from the higher-density magnetosheath to the lower-density magnetosphere, producing significant broadening of the magnetopause density gradient. Overall, these direct observations of anomalous terms in collisionless plasma open a new window onto fundamental plasma physics. Directly evaluating all terms involved in wave-particle interactions will show which processes are important, and which are not, in many astrophysical plasmas. In many reconnection events, the lower hybrid waves are observed at large amplitude over several seconds, which suggests that the density gradient is driven by ongoing reconnection, while lower hybrid waves counter this.
Methods
Estimating the anomalous terms. For each event we rotate the vector quantities into LMN coordinates, where N is normal to the magnetopause pointing sunward, L is along the reconnecting magnetic field direction, and M completes the coordinate system and is close to the guide-field direction. We determine the coordinate system using a minimum variance analysis of B across the magnetopause.
The reliability of the N direction is confirmed by determining the boundary normal velocity using four-spacecraft timing analysis, as well as by minimum variance analysis of the current density J. In most cases, the uncertainties in the coordinate system directions are small and do not significantly affect the results. Ideally, the quantities in equation (2) are computed from an ensemble average in the M direction. With MMS we must use a four-spacecraft average to estimate these quantities. For all events, the spacecraft were in tetrahedral configurations with spacecraft separations ranging from ~5 to ~15 km. These separations are well below ion spatial scales at the magnetopause, but larger than electron spatial scales, which is ideal for studying lower hybrid waves. To calculate the anomalous and background quantities we use the following procedure: (1) We resample all field data to the sampling frequency of the high-resolution (7.5 ms) electron moments and perform a four-spacecraft timing analysis on B_L at the current sheet to determine the boundary normal velocity and the time delays between the spacecraft. Typical boundary normal speeds range from ~10 to ~100 km s−1. (2) We use the time delays to offset the spacecraft times so that all spacecraft cross the boundary layer at the same time as MMS1. (3) To obtain the non-fluctuating terms ⟨Q⟩ we average the time-shifted quantities over the four spacecraft and bandpass filter below 5 Hz. At the magnetopause the lower hybrid waves are typically found at frequencies 10 Hz < f < 30 Hz. (4) To obtain the δQ associated with the lower hybrid wave fluctuations we bandpass filter Q above 5 Hz. The specific bandpass frequency does not significantly modify the results, as long as it is not so high that it removes significant lower hybrid wave power. (5) We obtain ⟨δQ₁δQ₂⟩ by averaging δQ₁δQ₂ over the four spacecraft and then low-pass filtering the result below 5 Hz to remove any remaining higher-frequency fluctuating components. To evaluate equation (4) we expand it into its constituent correlation terms. All terms are calculated to determine T, although we find that only the components involving ⟨δn_e δV_e⟩ are significant. The uncertainties in the anomalous terms are calculated from the uncertainties in the electron moments, assuming a 10% uncertainty in the gain of the electric field. The electric-field uncertainty reflects the fact that the gain is validated by comparing the measured electric field with the DC convection field caused by the spacecraft moving relative to a magnetized plasma, and that small changes in the gain can occur as plasma conditions change. The uncertainties in the electron moments are based on the counting statistics of the particle distributions. Counting-statistics information is only available at 30 ms sampling, so we assume that the uncertainties are four times larger for the 7.5 ms moments we use, due to the reduced azimuthal sampling. The magnitude of the uncertainties of the particle moments is compared with the magnitude of the envelope of the fluctuating quantities to estimate the relative uncertainties. Estimates of the anomalous contributions from the electron inertial term and the time derivative in equation (1) indicate that they are much smaller than D and T due to the m_e/e dependence.
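The filtering and averaging procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' pipeline: the variable names are ours, the inputs are assumed to be already time-shifted, uniformly sampled arrays, and the definitions follow the forms quoted in the text (a ⟨δn_e δE⟩/⟨n_e⟩-type correlation for D, and V_anom ≈ −D × ⟨B⟩/⟨|B|⟩² for the frozen-in drift).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1.0 / 7.5e-3   # 7.5 ms electron-moment cadence -> ~133 Hz
FC = 5.0            # Hz; background/fluctuation split used in the text

def split_background(x, fs=FS, fc=FC, order=4):
    """Return (<x>, dx): low-pass background and the residual fluctuation."""
    b, a = butter(order, fc / (fs / 2), btype="low")
    bg = filtfilt(b, a, x, axis=0)
    return bg, x - bg

def correlation(dq1, dq2, fs=FS, fc=FC, order=4):
    """<dq1 dq2>: product of fluctuations, low-pass filtered below fc."""
    b, a = butter(order, fc / (fs / 2), btype="low")
    return filtfilt(b, a, dq1 * dq2, axis=0)

def anomalous_D(ne_sc, E_sc):
    """D ~ <dn_e dE>/<n_e>, four-spacecraft averaged (form assumed from eq. (2)).

    ne_sc : list of four (N,) density time series, one per spacecraft [m^-3]
    E_sc  : list of four (N, 3) electric-field time series [V/m]
    """
    dn = [split_background(n)[1] for n in ne_sc]
    dE = [split_background(E)[1] for E in E_sc]
    corr = np.mean([correlation(n[:, None], e) for n, e in zip(dn, dE)], axis=0)
    ne_bg = split_background(np.mean(ne_sc, axis=0))[0]
    return corr / ne_bg[:, None]

def anomalous_drift(D, B):
    """V_anom = -D x <B>/<|B|>^2 for approximately frozen-in electrons."""
    return -np.cross(D, B) / np.sum(B * B, axis=-1, keepdims=True)
```

As an order-of-magnitude check, D ≈ 0.5 mV m−1 in the −M direction with ⟨B⟩ ≈ 20 nT gives |V_N,anom| ≈ 25 km s−1 from anomalous_drift, consistent with the flows of up to ~20 km s−1 reported above.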
The M component of the anomalous inertial term (the anomalous Reynolds stress 5) I can be well approximated by assuming that the anomalous terms in I vary primarily in the N direction; the resulting expression is equation (12). Since the method of obtaining the anomalous terms in equation (12) relies on four-spacecraft averaging, the four spacecraft cannot also be used to calculate the gradient associated with these terms. Therefore, the gradient is approximated by assuming that these quantities move past the spacecraft at the boundary normal velocity, such that ∂N = −v_N ∂t, where v_N is the boundary normal velocity estimated from the four-spacecraft timing of the current sheet. The values of I_M obtained from equation (12) are significantly smaller than D and T and do not contribute significantly to R. As an example, Fig. 6 shows I_M estimated from equation (12). Figure 6a shows the electric field associated with the lower hybrid waves from MMS1. In Fig. 6b we plot the anomalous terms ⟨n_e⟩⟨δV_M δV_N⟩, ⟨V_M⟩⟨δn_e δV_N⟩, and Γ = ⟨n_e⟩⟨δV_M δV_N⟩ + ⟨V_M⟩⟨δn_e δV_N⟩. We find that ⟨V_M⟩⟨δn_e δV_N⟩ < 0, due to this term being proportional to V_N,anom. In contrast, ⟨n_e⟩⟨δV_M δV_N⟩ fluctuates with very little offset from zero, owing to the lack of a consistent correlation between the δV_M and δV_N associated with the lower hybrid waves. As a result, Γ fluctuates and is similar to ⟨n_e⟩⟨δV_M δV_N⟩. In Fig. 6c we plot I_M for Γ and for Γ bandpass filtered below 1 Hz to remove the fluctuations. We find that I_M fluctuates around zero when the 5 Hz low-pass filter is used, with negligible large-scale offset, as seen in the <1 Hz case; thus I_M obtained with the 5 Hz low-pass filter is overestimated. In Fig. 6d we plot D_M, T_M, and I_M for the 5 Hz bandpass filter. We find that I_M is much smaller than D_M and T_M. Both D_M and T_M have clear background components, in contrast to I_M. Similar results are found for the other events, and there is no clear evidence that I can significantly contribute to R for lower hybrid waves. We conclude that, based on MMS observations, the contribution of I_M to the total anomalous electric field is negligible. This differs from the results of three-dimensional simulations 5,9,31,32, which have found that I_M can be significant. Possible reasons for these differences are: (1) Artificial plasma conditions, such as a reduced electron-to-ion mass ratio and a reduced ratio of electron plasma to cyclotron frequency, are needed to run 3D simulations. (2) When spatially averaging over the M direction in simulations, very low-frequency fluctuations, such as current sheet kinking, are typically included, which can lead to large anomalous terms that are not due to lower hybrid waves 5. In the observations, we used a high-pass filter of 5 Hz, which removes such low-frequency fluctuations if they are present. Simulation description. We model the 06 December 2015 event using the fully kinetic iPIC code 40. The code uses an implicit moment method, which allows the cell size to exceed the Debye length 41. (1) Asymmetric magnetic reconnection is first run in two dimensions in x − y coordinates in a double periodic domain 42. The size of the domain is L_x × L_y = 2822 × 1058 km² and is resolved by 1728 × 648 cells. A weak localized perturbation at (L_x/2, L_y/4) is used to initiate reconnection 43. (2) The three-dimensional (3D) simulation is initialized at time tΩ_ci = 35, once steady-state reconnection is reached, where Ω_ci is the angular ion-cyclotron frequency.
The initial conditions of the 3D simulation are the fields and particle information from the 2D run, replicated in the z-direction. The computational domain is L_x × L_y × L_z = 2822 × 1058 × 117.5 km³ and is resolved by 1728 × 648 × 72 cells. This replicated geometry is suitable for investigating instabilities, such as the lower hybrid drift instability, with wavelengths short compared with L_z. Data availability MMS data are available at https://lasp.colorado.edu/mms/sdc/public. The data can be found in the following directories: mms#/edp/brst/l2/dce/ for electric fields, mms#/fgm/brst/l2/ for the background magnetic field, mms#/scm/brst/l2/scb/ for the fluctuating magnetic field, mms#/fpi/brst/l2/des-moms/ for the background electron moments, mms#/fpi/brst/l2/des-qmoms/ for the highest resolution electron moments, and mms#/fpi/brst/l2/dis-qmoms/ for the highest resolution ion moments. Source data required to generate the figures in this paper can be found at https://github.com/danbgraham/anomres 44 and are available on request. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
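The gradient approximation used for the inertial term in the Methods (∂N = −v_N ∂t) is straightforward to sketch numerically. The following is our illustration under stated assumptions, not the authors' code: Γ is the combination ⟨n_e⟩⟨δV_M δV_N⟩ + ⟨V_M⟩⟨δn_e δV_N⟩ defined above, and the −(m_e/e)/⟨n_e⟩ prefactor is assumed from the m_e/e scaling mentioned in the Methods.

```python
import numpy as np

M_E = 9.109e-31   # electron mass [kg]
Q_E = 1.602e-19   # elementary charge [C]

def inertial_term_IM(gamma, t, v_N, ne_bg):
    """Estimate I_M [V/m] from the N-gradient of Gamma via d/dN = -(1/v_N) d/dt.

    gamma : time series of <ne><dV_M dV_N> + <V_M><dn_e dV_N>
    t     : time stamps [s]
    v_N   : boundary normal speed [m/s] from four-spacecraft timing
    ne_bg : background electron density <n_e> [m^-3]
    """
    dgamma_dN = -np.gradient(gamma, t) / v_N   # advected-structure gradient
    return -(M_E / Q_E) * dgamma_dN / ne_bg
```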
6,591.2
2022-05-26T00:00:00.000
[ "Physics", "Geology" ]
A Framework for the Use of Immersive Virtual Reality in Learning Environments Immersive Virtual Reality (iVR) technologies can enrich teaching and learning environments, but their use is often technology-driven and instructional concepts are missing. The design of iVR-technology-supported learning environments should be based both on an evidence-based educational model and on features specific to iVR. Therefore, this article provides a framework for the use of iVR in learning environments based on the Cognitive Theory of Multimedia Learning (CTML). It outlines how iVR learning environments could and should be designed based on current knowledge from research on multimedia learning. Keywords—Virtual reality, instructional design, immersive virtual reality, pedagogical framework Introduction Virtual reality (VR) technology is increasingly promoted as a promising educational tool in various training settings [1,2], like health care [3-6] or engineering [7-9]. While the educational use of VR is growing, little is known about the learning processes occurring in VR environments [10]. Asking "Where is the pedagogy?", Fowler [11] urges the development of models explaining learning in VR environments. Using VR technology in training should be a balanced decision that considers both the positive and the restrictive attributes of VR. Several educational benefits of implementing VR have been reported in the literature: it is highly motivating, increases student engagement, provides high-quality visualizations, and creates the feeling of being present [1-3, 12-14]. However, the additional educational value of VR differs in intensity between (1) immersive VR (iVR) environments and (2) 3D environments presented via a 2D display. The literature typically distinguishes between low immersive VR, which is based on traditional devices like mouse and keyboard, and high immersive VR, which generally involves a head-mounted display (HMD) [15,16]. This paper focuses on how to design iVR learning environments that support meaningful learning. iVR technology is, however, subject to certain restrictions. Disadvantages of using iVR relate to the time and costs necessary for developing hardware and software, possible health and safety effects, the discomfort of wearing HMDs, possible reluctance to use the technology, and the effort of integration into learning scenarios [17]. Additionally, immersive VR environments in particular are more likely to distract and overload users, resulting in lower levels of learning. Depending on the instructional goals, integrating iVR may consume working-memory capacity and thereby interfere with learning processes. Thus, the use of iVR technology should provide added value for learning outcomes [18]. By following instructional guidelines, the opportunities of iVR can be exploited and the challenges of using iVR addressed. Developing a powerful learning environment requires consideration of features specific to iVR technology. So far, research on iVR applications is often technology-driven and focuses on anecdotes, case studies and demonstrations of technical prototypes. Learning processes in iVR are rarely examined, and instructional methods seldom form the basis of training applications [1,2,11,19].
This leads to situations in which iVR is uneconomical, ineffective or excessive, i.e., too complex or inappropriate for the training goal, and other learning media (e.g., simulations, pictures) would have been a better choice in terms of costs and benefits [20]. Consequently, instructional designers and computer scientists should work closely together to develop iVR learning environments that are based on educational decisions. This paper argues that iVR technology used in educational settings should be designed according to multimedia design principles in order to benefit from its promising characteristics. Hence, an approach is needed that offers hands-on instructional guidelines on how to design iVR learning environments that take full advantage of the technology and overcome its obstacles. Theoretical Background The development of our framework is based on two assumptions. First, we understand learning in iVR as multimedia learning, because in virtual worlds images and texts are presented in combination. Second, learning is an active process that goes beyond the mere repetition and reproduction of information or central concepts. We therefore follow the distinction between rote and meaningful learning postulated by Mayer [21]. While rote learning leads to superior performance in retention, meaningful learning supports transfer processes as well. Meaningful learning refers to application, i.e. the transfer of knowledge to solve problem-based tasks. This is illustrated by the revised taxonomy of educational objectives [22], according to which meaningful learning addresses the goals of understanding, applying, analyzing, evaluating and creating [21]. To enable meaningful learning, Mayer suggests designing instructional media according to the principles stated in the cognitive theory of multimedia learning (CTML) [21,23]. These two assumptions guide the development of our framework. We therefore first introduce the CTML, how multimedia learning works and the consequences for instructional design intended to help learners learn from multimedia instruction. Subsequently, we describe the key features of iVR technology. In a final step, we synthesize the two, CTML and the features specific to iVR, and present the meaningful iVR learning (M-iVR-L) framework. Based on this framework, we identify design guidelines to enable meaningful iVR learning. Multimedia learning The combined presentation of words (spoken or written) and pictures (static or animated) for the purpose of learning is known as multimedia learning [24]. The term originates from the empirical work of Mayer and colleagues, is widespread, and influences research on various instructional media like computer games, simulations, video and also iVR [25-27]. The CTML is a theoretical framework of how people learn with instructional media [24,28]. It rests on three principles [29]. The first principle, rooted in dual coding theory [30], is that people process information in two different channels: one channel is responsible for processing verbal information, the other for visual information. The second principle, the limited capacity of each channel, builds on the findings of cognitive load theory (CLT) [31,32]. This instructional theory has shown that human working-memory capacity is limited and that instruction therefore has to avoid inappropriate instructional approaches.
This is achieved by reducing unnecessary strain on learning, i.e. extraneous cognitive load, as much as possible [33]. These two rather cognitivist principles are supplemented by a third, constructivist one, namely learning as a generative activity [29]. According to the generative learning theory of Wittrock [34,35], learning is an interplay between already stored information and new stimuli, and it is effective when learners' active cognitive processing is stimulated. In CTML, active cognitive processing is stimulated through the engagement of learners in selecting the relevant material, organizing it into a coherent structure, and integrating it with prior knowledge [29,36]. Learning with instructional media according to the principles stated in CTML is then what Mayer calls meaningful learning, in which learners acquire knowledge and skills for the purpose of effective problem-solving [21,23]. Over the last 30 years, empirical research has repeatedly confirmed the assumptions made in CTML, and from these results three major instructional design goals emerged that should be considered when designing multimedia learning environments [29]. Instructional design goals Instructional design goals are based on the scientific study of how to help people learn, i.e. the science of instruction. The current assumption here is that hands-on activities by themselves cannot foster meaningful learning, but cognitively guided active processing can do so [29]. With the principles of CTML in mind, three instructional design goals are essential to help learners learn with instructional media. The first design goal is the reduction of extraneous processing. This means dispensing with distracting aspects of the multimedia learning environment, like background music (coherence principle) or presenting on-screen text during narration (redundancy principle) [37]. Another way is the spatially and temporally synchronous display of information; the positive effect of the temporal and spatial contiguity principles on learning outcomes has been confirmed in meta-analyses [38]. The same applies to the signaling principle, where symbols and colors are used to guide the learner's attention to relevant material; the positive effect of signaling on learning outcomes is well documented and robust [39]. The second design goal refers to scaffolding, which helps learners manage essential processing and avoid cognitive overload. Principles that help here are the modality, segmenting and pretraining principles. The modality principle states that it is better to present images with spoken instead of written text; this presentation format leads to better learning, at least for less complex content [40,41]. The distribution of complex material into smaller learning units is recommended by research on the segmenting principle: although the learning time increases, cognitive load is reduced and a positive effect on both memory and transfer tasks occurs [42]. For learners with little prior knowledge of the content to be learned, pretraining is an effective principle: learners study the basic concepts of a lesson before interacting with the multimedia instruction, which in turn frees up working-memory capacity for essential processing [43]. Making sense of the material through generative processing is the third instructional design goal. Here, the use of social cues and generative learning strategies is recommended.
Social cues include the use of conversational language during narration (personalization principle), the delivery of information or instructions with a friendly, human voice (voice principle), and the application of human-like gestures to animated content (embodiment principle) [44]. Generative learning strategies in multimedia learning include self-explanation [45] and drawing [46]. Other strategies like self-testing, summarizing, mapping and teaching are already well investigated for traditional media like textbooks and are now gradually finding their way into the design and research of more emergent instructional media (for an overview see [47]). The use of learning strategies is based on the aforementioned generative learning theory [48] and the idea that learning is an active construction of knowledge. The positive effect of such strategies has already been demonstrated for instructional videos: with the help of strategies, learning with video becomes an active engagement with the content instead of purely passive consumption [26]. The aim of the outlined instructional goals is to help learners gain skills and knowledge that are applicable to new problems and tasks. This claim regarding the transfer of learning outcomes marks the difference between retention-based, or rote, learning and meaningful learning as understood by Mayer and colleagues [21]. Key features of VR technology The application of multimedia principles and instructional goals within iVR learning environments requires a profound understanding of the medium itself and of the factors that affect individual perceptions of iVR technology. VR can be described as "the sum of the hardware and software systems that seek to perfect an all-inclusive, sensory illusion of being present in another environment" [49]. This distinguishes VR from other reality-enhancing technologies such as augmented reality (AR) and augmented virtuality (AV), which are placed on the reality-virtuality continuum of Milgram and Kishino [50] between the real environment and the entirely computer-simulated environment (Fig. 1). Fig. 1. The reality-virtuality continuum [50] VR learning environments, especially the more immersive ones, allow the realistic visualization of three-dimensional (3D) data and support an exciting real-time learning experience. They can improve performance outcomes, enable high interactivity with objects and persons, present a virtual environment that resembles the real world, offer feedback from the simulation to the learner, and foster conceptual understanding by providing an effective and unique way to learn and motivate learners [51]. Learning environments built on this technology offer authentic learning activities that other media (e.g. video) cannot provide appropriately (e.g. turning and rotating elements of mechanical installations that are not accessible in the real world). There are different ideas about the key characteristics that distinguish VR from other educational media [52-56]. Burdea and Coiffet [52] define VR by the "I3" triad (Immersion-Interaction-Imagination). Immersion: Immersion is one factor that contributes to the capabilities and impact of VR, as it can bridge the technical features of a 3D environment, the experience of presence and the educational affordances of a task. Immersion can be classified into 1) mental immersion and 2) physical immersion, and it plays an important part in creating a successful personal experience within a VR environment.
When the user moves, the visual, auditory or haptic devices that establish physical immersion in the scene change in response. A user can interpret cues to gather information while navigating and controlling objects. Naturally, the more sensory inputs are present in a virtual environment, the easier it is for the user to visualize and feel incorporated into the world [57]. Mental immersion refers to the state of being deeply engaged within a VR environment [58]. Hence, immersive environments can offer learners rich and complex content-based learning while also helping them improve their technical, creative, and problem-solving skills [51]. Slater and Wilbur [59] identify five characteristics describing immersion: inclusiveness (diversion of focus from the real world), extensiveness (extent of sensory input), surroundingness (extent of panoramic display), vividness (richness of features) and proprioceptive matching (alignment of perceptual means with the virtual interface). Interaction: Another feature that contributes to the success of learning in 3D environments is interaction, or interactivity [56,60,61]. A VR system can detect an input (e.g., a user's gesture) via multiple sensory channels (e.g., haptic, visual) and provide a real-time response to the new activity instantaneously. At the same time, users can see the activity change on the screen based on their commands, as captured in the simulation [51]. Interactivity includes the ability to move freely around in a virtual environment, to experience it "first-hand" and from multiple points of view, to modify its elements, to control parameters, and to respond to perceived affordances, environment cues and system feedback. Interaction has also often been linked to immersion, indicating that user control over the environment is important for the experience of being present in VR [62]. VR learning environments enable several kinds of interaction (e.g., navigation, selection, manipulation). When using an HMD, the user can navigate freely as long as he or she does not leave the range of the tracker, is not hindered by cables and does not hit a wall of the real room. Selection can be made by a simple touch with the input device, whose position in the virtual world is represented by a 3D cursor, for example in the shape of a human hand. If objects are too far away, techniques such as laser pointers or crosshairs can be used to point at distant objects. The manipulation of objects in the real world is manifold (touching, lifting, rotating, turning on, etc.); in VR, it is usually precisely defined which objects allow which interactions, and special 3D widgets are required (e.g., spotlight manipulator, through-the-lens camera control, etc.) [63-65]. Imagination: A further construct specific to VR is imagination [52]. It refers to the human mind's capacity to perceive non-existent things. VR supports the user in elaborating on thoughts and engaging in meaningful learning. This requires learners to wilfully put themselves into a suitable frame of mind; it takes active attention as well as active mental modelling of what one is perceiving [66]. For Jonassen [67], VR technologies can activate cognitive tools that help learners elaborate on their thoughts and engage in meaningful learning. A VR environment therefore triggers the human mind's capacity to imagine, in a creative sense, non-existent things. Hence, VR technologies are well suited to conveying abstract concepts (e.g., the inside of a machine) due to their visualization abilities [51].
To stimulate the imagination of direct experiences, sensory information should be balanced with prior knowledge to avoid under- or overstimulation, which would impede imagination [68]. Depending on their particular instructional goals, many educators regard it as unnecessary to deploy all three features. Hence, numerous virtual learning applications integrate interaction and immersion, whereas imagination seems to be underrepresented [69]. However, the focus of current research in the field still often seems technology-driven, whereas it is crucial to explore how to design an iVR learning environment that accomplishes learning objectives and enables meaningful learning. In the following section, we therefore develop a framework that provides guidelines for the instructional design of iVR learning environments and encourages stakeholders to implement iVR for their own learning scenarios on this basis. Meaningful iVR Learning (M-iVR-L) Framework In this section, we bring together the principles of CTML, the instructional design goals to support learning, and the key features of iVR technology described in section 2. As a result, we postulate the meaningful iVR learning framework (see Fig. 2), which treats the design of iVR learning environments as a process: the principles found in CTML inform the instructional design goals [29], and these must be taken into account with respect to the technical features of iVR technology to enable meaningful learning. Consequently, we propose six evidence-based recommendations within our M-iVR-L framework that should be considered when designing iVR learning environments. Learning first, immersion second With the rise of iVR technology, the key feature of immersion was claimed to support learning, e.g. because of its possibility to provide situated learning through authentic contexts and tasks [70]. Recently, studies comparing learning scenarios in low immersive VR media (like desktop computer games) with iVR media have drawn a contradictory picture. On the one hand, the study in [12] found that an iVR simulation leads to a higher feeling of being present in a virtual lab but to less learning compared to the low immersive desktop condition; the same was found in [71]. On the other hand, the studies in [72-74] found evidence for a positive influence of the feeling of immersion on learning outcomes. In line with the instructional goal of reducing extraneous processing, we recommend thinking carefully about the degree of immersion necessary: if a higher degree of immersion is not relevant to achieving the learning objective, less is more. Provide learning-relevant interactions Learning-relevant physical activities can positively impact declarative knowledge acquisition and are unavoidable if procedural knowledge, i.e. skills, is to be acquired. This learning strategy is known as enactment and can foster generative processing [47]. However, it should be noted that enacting is only beneficial if the movements performed are relevant to the learning task [75]. This also holds for iVR learning, where, for example, the use of controllers lets learners perform object manipulation with virtual representations of their hands. In [76], high levels of interactivity were found to be helpful for learning, while in [77], compared to a video condition without generative processing, no advantage of iVR was found for procedural knowledge gain or transfer performance.
To optimize iVR learning in terms of interaction, we postulate two recommendations: first, avoid unneeded and learning-irrelevant interactions; second, give learners pretraining, not only on basic concepts, but also on how to use the iVR interaction tools. Segment complex tasks into smaller units Content in iVR learning environments is an extremely complex form of multimedia instruction with a high risk of overwhelming learners. The influence of this possible distraction was tested in two studies, including EEG measurements of cognitive load. Both studies found that the iVR group's cognitive load was higher and, at the same time, its scores in retention and transfer tests were lower compared to a slide-show presentation group [25,78]. Similar results for cognitive load levels were found in [12] and [79]. The authors of the mentioned studies point out that iVR can increase extraneous load, the type of load that hinders learning [70]. Providing scaffolding to manage essential processing is one way to overcome this issue. For example, in [25] an iVR simulation on the human body was divided into six smaller segments with a summarizing phase after each segment; the iVR group with segmented lessons outperformed an iVR condition without segmenting and reached performance levels similar to those of the slide-show group. We conclude that breaking down complex tasks into small segments is also effective for managing essential processing in iVR. Guide immersive learning The role of guidance is still a debated topic in educational psychology and beyond. Even if there is at least some agreement that completely unguided discovery learning is not useful, due to cognitive overload issues, the debate now concerns the timing and form of guidance for effective learning (for an overview see [80]). As mentioned in section 3.1, iVR itself increases cognitive load, so it is the responsibility of instructional designers to provide appropriate guidance; if they do not, novices especially will feel overloaded and thus not learn [32]. Evidence for this claim was found in [81], where elementary students reported high levels of presence, but not of perceived learning, during an iVR field trip. Here, highlighting essential material (signaling principle) as well as pedagogical agents designed according to the personalization and voice principles can guide learners through the iVR learning environment. Guidance can also foster generative processing through just-in-time information that fades away once the learner has built up the knowledge and skills to solve the next learning task [82]. For example, in vocational education novices practice car painting in an iVR simulation with hints and information provided during the process; after reaching a certain skill level the hints fade away, while still giving the learner the chance to call for help if needed [83]. Build on existing knowledge To foster learning activities, new information should be balanced with prior knowledge to avoid under- or overstimulation [51,68]. Worked examples and tutorials may help learners with a low level of prior knowledge but hinder learners with a high level of prior knowledge. This phenomenon is called the expertise reversal effect [84] and is also valid in iVR learning scenarios [85]. We recommend determining learners' current level of knowledge in order to adjust both the difficulty and the amount of support; this needs to be an ongoing process as learning progresses.
Depending on their current level of knowledge, learners need preparation inside and/or outside iVR (pretraining principle), which frees up working-memory capacity for essential processing within the iVR learning task. Supportive information helps to keep cognitive load low, especially for learners with little prior knowledge [43]. This principle has already been tested within iVR: compared to groups with video instruction, with and without pretraining, the iVR group with pretraining achieved the greatest learning success in both memory and transfer [79]. Provide constructive learning activities Today it is commonly accepted that learning is an active process that engages learners in knowledge construction. Some construction processes are visible, like hands-on activities that often result in a self-designed artifact or product [86]. Others are not visible, like linking prior knowledge with newly acquired information, which is grounded in human cognitive architecture [32,87]. What they have in common is the assumption that learning takes place through learning activities. Several learning activities have been found to be effective in iVR learning. In [72], learners used the strategy of memory palaces in an HMD and outperformed a desktop-based control condition. In [25], the generative learning strategy of summarizing was used to foster processing: the summary was written by the learners, outside of the HMD, after each segment of an iVR simulation on the human body. The same applies to the study in [77] for the learning strategy of enactment: learners used a virtual lab in iVR and afterwards enacted, with physical objects on a table, the same laboratory tools they had manipulated before in iVR. Of interest in these last two studies is that the learners' enjoyment during the iVR lesson was not diminished by adding generative learning strategies. This means that iVR has the potential to be effective for learning while at the same time making learning more enjoyable than traditional media like slide-show presentations. We conclude with the words of David Merrill, who stated that "information alone is not instruction" [88,89]. Learning is an active process of knowledge construction, and even the most impressive, immersive and realistic iVR environment will not promote learning if learners do not engage in learning activities. Therefore, we recommend providing constructive learning activities that enable learners' knowledge construction and its application to new problem-based tasks inside or outside of iVR. Conclusion and Future Research iVR offers new learning experiences based on vivid and lifelike learning environments [90-92]. So far, there are only few examples that demonstrate the usefulness of iVR in learning applications. We have therefore worked out an evidence-based framework grounded in the widespread and proven cognitive theory of multimedia learning (CTML) and its consequences for instructional design goals, additionally taking into account the key features of iVR that make this technology unique. Our framework, consisting of six recommendations, is not to be understood as final; it has been developed based on current empirical findings on learning with iVR. The key findings are that effective and enjoyable learning does not need high degrees of immersion in most cases, but profits from guidance and from the breakdown of iVR lessons into smaller units.
Interactions must meet the learning objectives; if they do not, they can distract and therefore hinder learning. Taking prior knowledge into account to enable learners' efficient knowledge construction applies to iVR learning as it does to every learning environment, and learner preparation inside as well as outside iVR is recommended. Because learning is more than just consuming information, constructive learning activities must be integrated, inside or outside the virtually designed world, if meaningful learning with iVR is to happen. Teachers and instructional designers need not fear that iVR learning will no longer be perceived as joyful if they follow our recommendations: the current findings show that learning strategies do not diminish learners' positive affective states towards learning with iVR. However, even though our framework is based on an analysis of current literature, we need to point out that the proposal still rests largely on assumptions. For example, we have not incorporated the social dimension of learning in iVR [93] or other aspects like gamification and game-based learning mechanisms [94]. To learn more about these aspects, more carefully planned and rigorously designed research is necessary, both in real classrooms and in laboratory settings [95-97]. For example, not all principles of CTML have been tested in iVR learning, nor have the learning strategies proposed in generative learning theory [47]. It is also noticeable that the added learning strategies used in the outlined studies were always deployed outside the iVR environment; research studying their use inside an immersive virtual world is completely lacking. For instance, it would be interesting to know whether self-explaining or teaching others (humans or avatars) is affected by the features of iVR and hence impacts learning outcomes. The creation of an empirical basis on how learning happens in iVR should also include replication studies like [78], where the authors found no beneficial effect of adding the learning strategy of practice testing to an iVR simulation, compared to a traditional slide-show presentation, for either retention or transfer. Further open questions concern the already mentioned social interaction possibilities in iVR. It may be possible to reduce extraneous cognitive load in collaborative iVR learning environments, based on the claims made in collaborative cognitive load theory [98,99]. Design elements found to be distracting in other studies would then lose this negative significance, and collaborative iVR learning environments would have to be designed differently.
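Since the framework is meant to guide concrete design decisions, the six recommendations can also be operationalized as a simple review checklist. The sketch below is our own illustration, not part of the M-iVR-L framework itself, and the field names are shorthand we introduce here:

```python
from dataclasses import dataclass

@dataclass
class MiVRLChecklist:
    """Design-review checklist for the six M-iVR-L recommendations."""
    immersion_justified: bool = False       # learning first, immersion second
    interactions_relevant: bool = False     # learning-relevant interactions only
    tasks_segmented: bool = False           # complex tasks split into small units
    guidance_provided: bool = False         # signaling, agents, fading hints
    prior_knowledge_assessed: bool = False  # pretraining / adjusted support
    constructive_activities: bool = False   # generative learning activities

    def open_issues(self):
        """Return the recommendations not yet addressed by the lesson design."""
        return [name for name, ok in vars(self).items() if not ok]

lesson = MiVRLChecklist(immersion_justified=True, tasks_segmented=True)
print(lesson.open_issues())   # remaining design work before deployment
```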
6,280.4
2020-12-22T00:00:00.000
[ "Computer Science" ]
Pyroelectric response of inhomogeneous ferroelectric-semiconductor films We have modified the Landau-Khalatnikov approach and shown that the pyroelectric response of inhomogeneous ferroelectric-semiconductor films can be described by six coupled equations for the average displacement, its mean-square fluctuation and its correlation with charge-defect density fluctuations, and for the average pyroelectric coefficient, its fluctuation and its correlation with the density fluctuations of charged defects. The coupled equations demonstrate the inhomogeneous reversal of the pyroelectric response, in contrast to equations of Landau-Khalatnikov type, which describe homogeneous reversal with sharp pyroelectric coefficient peaks near the thermodynamic coercive field. Our approach explains the pyroelectric loops observed in Pb(Zr,Ti)O3 films. Introduction The main peculiarity of ferroelectric materials is the hysteretic dependence of their dielectric permittivity ε, spontaneous displacement D and pyroelectric coefficient γ on the electric field E0 applied to the sample [1]. The pyroelectric hysteresis loops of inhomogeneous ferroelectric-semiconductor films have several characteristic features, depicted in figure 1a. Such typical ferroelectric-semiconductors as slightly doped Pb(Zr,Ti)O3 solid solutions, their films, multilayers and heterostructures are widely used in actuators, electro-optic, piezoelectric and pyroelectric sensors, and memory elements [2-4]. However, the task of creating a ferroelectric material with pre-determined dielectric and/or pyroelectric properties is solved mainly empirically. A correct theoretical treatment could answer fundamental questions as well as help to tailor new ferroelectric-semiconductor materials, saving time and expense. Conventional phenomenological approaches with material parameters obtained from first-principles calculations give a significantly incomplete picture of the pyroelectric hysteresis in doped or inhomogeneous ferroelectric-semiconductors (compare figure 1b with figure 1a). In particular, the Landau-Khalatnikov approach, developed for single-domain perfect ferroelectric-dielectrics, describes homogeneous polarization reversal but does not describe domain nucleation and movement [5,6]. Therefore, its modification for inhomogeneous ferroelectric-semiconductors seems necessary [7,8]. In our recent papers [9-12] we considered the displacement fluctuations caused by charged defects and modified the Landau-Khalatnikov approach for inhomogeneous ferroelectric-semiconductors. In this paper we develop the proposed model for the pyroelectric response (see figure 1c). Let us consider an n-type ferroelectric-semiconductor with sluggish, randomly distributed defects. The charge density of the defects ρ_S(r) is characterized by a positive average value ρ̄_S and random spatial fluctuations δρ_S(r), i.e. ρ_S(r) = ρ̄_S + δρ_S(r) (the bar designates average values). The average distance between the quasi-homogeneously distributed defects is d. Screening clouds δn(r,t) with Debye screening radius R_D surround each charged center, so the free-carrier charge density is n(r,t) = n̄ + δn(r,t). The screening clouds are deformed in the external field E0, and the system "defect center δρ_S + screening cloud δn" causes displacement fluctuations δD(r,t) in accordance with Maxwell's equations div D = 4π(n + ρ_S) and div(∂D/∂t + 4π j_c) = 0 (see figure 2).
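For orientation, the Debye screening radius introduced here is easy to estimate numerically. The sketch below uses the SI form of R_D for nondegenerate carriers and purely illustrative parameter values (the paper itself works in Gaussian units and does not quote sample parameters):

```python
import numpy as np

K_B = 1.380649e-23       # Boltzmann constant [J/K]
Q_E = 1.602176634e-19    # elementary charge [C]
EPS_0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def debye_radius(n, eps_r, T=300.0):
    """Debye screening radius R_D for nondegenerate carriers (SI form).

    n     : free-carrier concentration [m^-3]
    eps_r : relative permittivity of the ferroelectric
    """
    return np.sqrt(EPS_0 * eps_r * K_B * T / (n * Q_E**2))

# Hypothetical values for a doped perovskite ferroelectric-semiconductor:
print(debye_radius(n=1e24, eps_r=500))   # ~2.7e-8 m, i.e. tens of nm
```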
[The coupled equations (1)-(6) for these averaged quantities, containing relaxation terms of the form Γ_R ∂⟨δγδD⟩/∂t and Γ_R ∂⟨δγδρ_S⟩/∂t, are not reproduced here.] The built-in electric field E_i in (1) and its deviation δE_i in (2) are inversely proportional to the film thickness l, and thus vanish in a bulk material. For a finite film the field is induced by the space-charge layers accommodated near the non-equivalent boundaries z = ±l/2 of the examined heterostructure/multilayer (e.g. near the substrate with the bottom electrode and the free surface with the top electrode depicted in figure 2). Such layers are created by the screening carriers [5-7]. In the general case the field E_i(l,t) can be time-dependent, and its amplitude is proportional to the space-charge fluctuations |δn + δρ_S|. E_i also diffuses the paraelectric-ferroelectric phase transition; in particular, it shifts and smears the temperature maximum of the dielectric permittivity. Bratkovsky and Levanyuk [8] predicted the existence of a built-in field in a finite ferroelectric film within a phenomenological framework. Our approach confirms their assumption and gives an expression for the field existing in the inhomogeneous ferroelectric-semiconductor film. The renormalization Γ_R ≡ Γ + τ_m of the Khalatnikov kinetic coefficient is connected with the contribution of free-carrier Maxwellian relaxation. The renormalization of the coefficients, α_R ≡ α + (γ + k_B T/(4πne))d² and α_RT ≡ α_T + (k_B/(4πne))d², is connected with the contribution of correlation and screening effects [9,13]. The coefficient α = −α_T(T_C − T) is negative in the perfect ferroelectric phase without random defects (⟨δρ_S²⟩ = 0). For the partially disordered ferroelectric with charged defects, the correlator R²(t) ∼ ⟨δρ_S δn(t)⟩/n̄² was calculated in [9] at small external field amplitude and low frequency. Under the condition of prevailing extrinsic conductivity, n̄ ≈ −ρ̄_S, the correlator R²(t) varies in the range (0; 1), because its amplitude is proportional to the charged-defect disorder ⟨δρ_S²⟩/ρ̄_S². Hereinafter we discuss only the pyroelectric response near equilibrium states. The quasi-equilibrium behavior of the system (1)-(6) is described by the dimensionless built-in field amplitude E_m = E_i/E_C ∼ (δn + δρ_S) and frequency w = −Γω/α, as well as by the aforementioned parameters ξ, R²(w), g and the temperature T/T_C, where E_C ∝ (−α)√(−α/β) is the thermodynamic coercive field. Under the conditions w < 1, g ≪ 1 and ξ ≫ 1, the scaling parameter gR²ξ determines the system behavior [9]. Figure 3 demonstrates the typical changes of the pyroelectric hysteresis loop caused by an increase of charged-defect disorder (note that gR²ξ ∼ ⟨δρ_S²⟩/ρ̄_S²). It is clear that the increase of gR²ξ leads to an essential decrease and smearing of the pyroelectric coefficient peaks near the coercive field, as well as to a decrease of the coercive field value (compare the Landau-Khalatnikov loops (R² = 0) with the dashed curves (R² ≈ 0.2)). Let us underline that we are not aware of any experiment in which pyroelectric coefficient peaks near the coercive field have been observed. Moreover, pyroelectric hysteresis loops in doped ferroelectrics usually have a typical "slim" shape, with coercive field values much lower than the thermodynamic one [14,15]. A quantitative comparison of our results with typical PZT pyroelectric loops is presented in the next section.
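The contrast drawn here, sharp Landau-Khalatnikov peaks versus smeared experimental loops, can be reproduced in the homogeneous limit (R² = 0) with a few lines of numerical integration. The sketch below is ours, written in an arbitrary dimensionless normalization rather than the paper's, and the pyroelectric coefficient is approximated by a finite temperature difference of two displacement loops:

```python
import numpy as np

# Homogeneous Landau-Khalatnikov limit (R^2 = 0), dimensionless form:
#   dD/dtau = -(a*D + D**3) + E0*sin(w*tau),   a ~ (T - T_C) < 0
def lk_loop(a, E0=1.2, w=0.05, n_periods=6, steps=40000):
    tau = np.linspace(0.0, n_periods * 2 * np.pi / w, steps)
    dt = tau[1] - tau[0]
    D = np.empty_like(tau)
    D[0] = np.sqrt(-a)                     # start in a polarized state
    for i in range(steps - 1):
        E = E0 * np.sin(w * tau[i])
        D[i + 1] = D[i] + dt * (-(a * D[i] + D[i] ** 3) + E)  # explicit Euler
    return tau, D

# gamma = dD/dT estimated from two loops at slightly different reduced
# temperatures; it develops the sharp peaks near the coercive field that
# characterize the Landau-Khalatnikov curves of figure 1b.
tau, D_cold = lk_loop(a=-1.00)
_, D_warm = lk_loop(a=-0.98)               # slightly warmer film
gamma = (D_warm - D_cold) / 0.02
```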
Discussion Dopants, as well as numerous unavoidable oxygen (O²⁻) vacancies, can play the role of randomly distributed charged defects in "soft" PZT. In this case the ferroelectric and pyroelectric hysteresis loops have relatively high remnant values of γ and D but reveal low coercive fields [3]. Usually the pyroelectric hysteresis loops of PZT are rather slim and sloped even at low frequencies ω ∼ (0.1-10) Hz [3], and no pyroelectric coefficient maximum near the coercive field is observed [14-16]. The pyroelectric response of the PZT films was registered by means of dynamic pyroelectric measurements (see [14,16] for details). During the measurements, the quasi-static voltage U varied in the range (−11 V, +11 V) at the low frequency ω ∼ 0.01 Hz, while the temperature T was modulated near room temperature at a frequency of about 20 Hz. Pyroelectric hysteresis loops for U_π1 ∼ γ and U_π2 ∼ γ/ε are presented in figure 4. It is clear from the figures that our model describes, both qualitatively and quantitatively, the pyroelectric hysteresis loops in thick "soft" PZT films. The modelling of ferroelectric and dielectric hysteresis loops was performed earlier (see e.g. [9]). Earlier we proved [9-13] that the effect of random defects leads to non-zero average values of ⟨δD²⟩ even at D̄ = 0. This means that the sample is divided into polar regions with opposite polarization, i.e. the domain structure originates from the charged defects. In our model we neither consider the spatial distribution of the emerging domain structure nor incorporate its initial distribution; we calculate the average values only. The initial distribution of polar regions determines the initial conditions of the system (1)-(6), which do not affect the equilibrium hysteresis loop shape [9]. Surely, domain formation can be caused by many other factors besides the considered charged defects, e.g. by local inhomogeneous stresses and elastic defects. In particular, the presence of elastic defects or other pinning centers undoubtedly causes additional domain splitting, domain wall movement and pinning. Allowing for the piezoelectric effect, the displacement fluctuations caused by random elastic defects could be included in the system of coupled equations. Thus, one could assume that their contribution leads to additional smearing of the hysteresis loop and changes the coercive field and the saturation law at high external fields [1]. Thus, the modelling based on the coupled equations (1)-(6), which takes into account the inhomogeneous reversal of the spontaneous polarization, gives realistic coercive field values and a typical pyroelectric hysteresis loop shape. Figure 1. Pyroelectric γ(E0) hysteresis loops. The plots correspond to data obtained for a semiconductor-ferroelectric film (a), the Landau-Khalatnikov model (b) and our coupled equations (c) for a bulk sample (ω1 and ω2 are two frequencies of the applied electric field).
1,930.6
2007-03-01T00:00:00.000
[ "Physics" ]
Acoustic phonon propagation in ultra-thin Si membranes under biaxial stress field We report on stress-induced changes in the dispersion relations of acoustic phonons propagating in 27 nm thick single-crystalline Si membranes. The static tensile stress (up to 0.3 GPa) acting on the Si membranes was achieved using an additional strain-compensating silicon nitride frame. Dispersion relations of thermally activated hypersonic phonons were measured by means of Brillouin light scattering spectroscopy. The theory of Lamb wave propagation is developed for anisotropic materials subjected to an external static stress field. The dispersion relations were calculated using the elastic continuum approximation and taking into account the acousto-elastic effect. We find excellent agreement between the theoretical and the experimental dispersion relations. One such area concerns the thermal conductivity of nonmetals, which results from the cumulative contribution of the transport of phonons with a broad range of wavevectors, mainly long-wavevector phonons. Thus, heat transport at room temperature can be influenced by a reduction of the membrane thickness, as the dispersion relations of short-wavelength phonons start to be affected by new vibrational modes arising from the boundary conditions at the membrane surfaces [22]. At low temperatures, however, the wavelength of the dominant thermal phonons is large enough that their dispersion relation is strongly affected by characteristic sizes of the order of a micrometer [23]. The introduction of a controlled stress in the membrane presents an additional degree of freedom to modify the phonon dispersion relation, which could lead to tuning of the thermal conductivity [16]. Understanding the effect of stress on the characteristic mechanical modes of ultra-thin membranes offers an excellent avenue for experimental and theoretical research on the thermal properties of low-dimensional structures. The vibrational acoustic properties of nanomembranes have been studied by inelastic light scattering; recently, synchrotron x-ray thermal diffuse scattering was used to probe phonons with wavevectors spanning the entire Brillouin zone of nanoscale silicon membranes [24]. In this paper we present experimental evidence of stress-induced changes in the dispersion relations of hypersonic acoustic phonons propagating in ultra-thin Si membranes. Experimental results obtained by means of Brillouin light scattering (BLS) are compared with theoretical calculations based on the elastic continuum approach and acousto-elastic theory. Materials and experimental methods Stress-dependent optical or electrical measurements are typically carried out following many diverse approaches. For example, compressive hydrostatic stress measurements are usually conducted using a diamond anvil cell, typically with He or alcohol mixtures as pressure-transmitting media [25]. Experiments under uniaxial compressive stress are done using a lever arm or gas chamber to increase the uniaxial force acting on the sample [26]. The case of biaxial stress is usually approached using the ball-on-ring technique, as described in [27-29]. A fundamental difference between these techniques is that whereas hydrostatic or uniaxial measurements are usually done applying compressive stress, biaxial experiments are based on tensile stress. An alternative approach to applying stress is given by heteroepitaxy, i.e.
making use of the lattice mismatch between two materials to apply biaxial stress [30,31]. Herein, we use a method based on a similar concept, taking advantage of the controllable tensile stress of low-pressure chemical vapour deposited (LPCVD) Si3N4 films. The biaxial static tensile stress acting on the membranes was achieved by the strain-tuning method of [32], where a Si3N4 frame generates a well-defined biaxial stress (see figures 1(a) and (b)). The tensile stress can be adjusted by changing the strain compensation ratio R, defined in [32] in terms of the frame and membrane widths w, giving tensile stresses σ0 of 0.1 ± 0.025 GPa, 0.2 ± 0.025 GPa and 0.3 ± 0.025 GPa as obtained by Raman spectroscopy. Details of the sample fabrication and the Raman stress measurements can be found in [32]. BLS spectroscopy allows the investigation of thermally activated acoustic phonons (waves) in the GHz frequency range. BLS is a well-established technique commonly used in the nondestructive testing of elastic properties of bulk materials and thin films [33-37]. BLS has also been found to be an excellent tool to characterize phonon propagation in nm-scale systems, such as ultra-thin free-standing membranes [13] and phononic crystals [3,5,38]. BLS experiments provide information on the relative change in the frequency (Stokes and anti-Stokes components) of laser light undergoing inelastic coherent scattering by acoustic phonons. Brillouin spectroscopy measurements were performed on a six-pass tandem-type Fabry-Perot interferometer (JRS Scientific Instruments) in the p-p (incident and scattered light polarization parallel to the plane of incidence) backscattering geometry, which ensured the best intensity of the inelastically scattered light [39]. The light source was a solid-state laser generating light at λ = 532 nm. The light was focused and collected using an Olympus ×10 microscope objective with a numerical aperture of 0.5. The laser spot diameter was approximately 12.5 μm and the incident power was kept below 2 mW. In the backscattering geometry (see figure 1(c)), the angle of incidence of the laser beam onto the studied surface is equal to the scattering angle, denoted θ. For opaque or semi-transparent materials, the main contribution to the scattered light comes from the surface-ripple mechanism. Therefore, momentum conservation holds only for the in-plane components, and the magnitude of the scattering wavevector q is given by q = (4π/λ) sin θ (1) [33,34,40]. BLS measurements were performed at room temperature for scattering angles θ in the range 11°-50°, which according to (1) corresponds approximately to the range of wavenumbers q: 0.0045-0.0180 nm⁻¹. Lamb waves in pre-stressed anisotropic membranes In the case of plates and membranes, normal acoustic modes are classified, in terms of the spatial symmetry of the deformation with respect to the plate mid-plane, into asymmetric (flexural), symmetric (dilatational) and SH (shear-horizontal) families of Lamb waves (LW). Although they were described for the first time by Lamb in 1917 [41], they are still of particular interest, resulting in a large number of both theoretical and experimental papers, monographs and technological applications [8-10, 13, 42, 43]. The undeformed, original state of a membrane in the absence of any strain and stress is called the natural state. According to the linear theory of elasticity, the velocity of any possible acoustic wave depends only on elastic parameters defined at the natural state.
Therefore, constant values of these parameters during a deformation cannot lead to any change of the acoustic wave velocity. The initial state (further denoted by the superscript 0) describes the pre-deformed state of the membrane under the action of the applied static stress. The motion of an acoustic wave in the membrane in the initial state leads to the final state with an additional dynamic deformation, which is assumed to be small in comparison to the static one. The problem of acoustic wave propagation requires applying the nonlinear theory of elasticity, due to the large deformation from the natural to the initial state [14,44]. A schematic representation of the considered problem in the adopted coordinate system (x1, x2, x3) is shown in figure 1(b), where x1 and x2 are the directions parallel to the diagonals of the square defined by the membranes. The initial biaxial stress acting on the membrane, given by the Cauchy stress tensor σ⁰_ij, after rotation to the coordinate system associated with the membrane takes the equal-biaxial form σ⁰_11 = σ⁰_22 = σ0, with all other components zero. The initial pre-deformation of the membrane is static, so σ⁰_ij satisfies the equation of equilibrium, which, using the convention of summation over repeated subscripts, reads ∂σ⁰_ij/∂x_j = 0. Additionally, we assume that σ⁰_ij can be related to the initial strain tensor u⁰_kl by means of Hooke's law, σ⁰_ij = C_ijkl u⁰_kl. Here C_ijkl is the fourth-order elastic tensor, which can be expressed as a 6 × 6 matrix of second-order elastic (SOE) constants in the Voigt notation (11→1, 22→2, 33→3, 23→4, 13→5, 12→6). The nonzero and independent SOE constants C_IJ for the natural state of silicon (cubic symmetry) are C11, C12 and C44 [45]. Considering the above, the equation of motion for the pre-stressed hyperelastic body is given in [44]. The dynamic deformation (associated with the wave propagation) satisfies Hooke's law, which for this case is given by σ_ij = C′_ijkl u_kl, where C′_ijkl stands for the elastic tensor modified by the static pre-deformation. In the general case, the calculation of dispersion relations for LW propagating in an anisotropic free-standing membrane requires a numerical procedure based on the partial-wave approach [47]. The main goal of this method (see appendix) is to find the phase velocities of LW satisfying simultaneously the elastodynamic equation of motion (3) and the stress-free boundary conditions given by (7). Due to the symmetry of the problem one can separate the numerical solutions, in terms of the displacement about the midplane of the membrane (the x1x2 plane at x3 = 0), into the so-called symmetric (S), asymmetric (A) and shear-horizontal (SH) families of waves. In the general case, A and S modes in anisotropic materials can have a small component of the displacement perpendicular to the sagittal plane (here, the plane containing the phonon wavevector and perpendicular to the x1x2 plane). Pure A and S waves are typical of isotropic media or of high-symmetry directions in anisotropic materials. Results and discussion Angle-resolved BLS measurements were performed on the 27 nm thick membranes with the orientation shown in figures 1(b), (c). Figure 2 presents several representative spectra differing in the scattering angle, obtained for the membrane with tensile stress magnitude σ0 = 0.3 GPa. Peaks identified as coming from inelastic light scattering by A0 (fundamental flexural) acoustic waves were fitted using a Lorentzian function.
Here, the position of a particular peak indicates the value of the phonon frequency, i.e. the Brillouin shift ΔfB (in GHz). Thus, using (1), the phase velocity is given by

ν = 2π ΔfB / q = ΔfB λ / (2 sin θ).   (8)

Apart from the value of the Brillouin shift, figure 2 also contains information related to the intensity and the spectral full width at half maximum (FWHM) of the scattered light. The intensity depends on the mean-squared amplitude of the out-of-plane surface displacement, while the FWHM can be expressed as a function of the phonon lifetime [34,40]. However, both also depend strongly on the characteristics and alignment of the optical setup; therefore, any quantitative conclusions concerning the absolute out-of-plane displacement and the phonon lifetime require further measurements (e.g. reference samples) and calculations. In the range of observed reduced wavenumbers (qd), the total displacement of the S0 (fundamental dilatational) waves is dominated by the in-plane longitudinal component [48]; therefore they are not visible in the BLS spectra. Figure 3 shows the experimental dispersion curves in the form ν(q) compared to the theoretical calculations for three exemplary values of σ0. As expected, the measured changes in ν depend progressively on the stress magnitude. The theoretical curves in figure 3 are calculated using the formalism presented in section 2 and appendix A without any free parameter. The elastic properties used can be found in table 1 and are taken from [46], while the stress magnitude σ0 is estimated independently by Raman spectroscopy on the same samples [32]. Flexural waves are not detectable for small θ due to the presence of an elastic scattering peak (see figure 2); thus the smallest measurable values of ΔfB were about 0.3 GHz. The agreement between experimental data and theory also confirms the values of the stress tensor adopted in the calculations and measured by Raman spectroscopy. Dispersion relations can also be presented in the ω(q) form (see insets in figure 3), which is more convenient as a starting point for studying phonon group velocities, density of states, heat capacity and thermal conductivity [19,49]. Based on the formalism presented in section 2 and appendix A, we can investigate theoretically the changes of A0 mode propagation under tensile or compressive stresses. In the appendix we also show dispersion relations for the families of A, S and SH waves propagating in the Si membrane subjected to static positive or negative biaxial stress. Figure 4 presents dispersion curves in the forms (a) ν(q) and (b) ω(q), calculated for a 27 nm thick membrane for the wavenumbers which coincide with the range reachable by angle-resolved BLS following (1). Here we can find remarkable changes of the A0 mode both for compressive and tensile stresses. For small values of q and σ0 = 0 (black line in figure 4(a)) the phase velocity of the A0 mode is proportional to the wavenumber, with ν(q→0) = 0. From figure 4(a) we can see that both tensile and compressive stresses lead to nonlinear behaviour of ν(q), which was also predicted recently for graphene [19,20]. Considering the stress influence on the dispersion curves in the ω(q) form (see figure 4(b)), we find that for small q values ω(q) cannot be approximated by a quadratic function, as it can in the σ0 = 0 case. We can validate the theory further by calculating the limiting cases and recovering well-known physical effects as corollaries. In the tensile stress case we see that as q tends to 0, ν attains values different from 0.
Solving (A.5) with q → 0 we find that for the A0 mode

ν = √(σ⁰₁₁/ρ′),   (9)

that is, ν depends only on the stress component parallel to q and on a single material parameter, the density ρ′. It is interesting to note that (9) is also the solution of the well-known problem of the wave velocity in a stretched (guitar) string. Moreover, for compressive loads we find ranges of forbidden wavenumbers q. For the range of wavenumbers between 0 and qb, the velocities ν and frequencies ω of the A0 modes were found to take imaginary values. This behaviour can be explained by taking into account the stability of the membrane subjected to compressive forces. Therefore, for small negative values of σ0 and ν = 0, we find from (A.5) that

σ0 = -D qb²/d,   (10)

which is the Euler load equation [43,50] describing the critical compressive load necessary to produce buckling of a plate, beam or membrane (here D denotes the bending rigidity and d the thickness). On the other hand, for a given membrane of thickness d, (10) determines the width of the membrane (wm < 2π/qb) that does not undergo the buckling instability due to the compressive stress (see figure 1). At this point it is worth noticing that BLS allows directional measurements of stress, since the value of (9) depends only on the stress component parallel to the wavevector. This offers the possibility of further measurements in the presence of a non-uniform biaxial stress to obtain, e.g., a stress mapping of the sample.

Concluding remarks

Using BLS we measured the fundamental flexural mode of 27 nm thick membranes subjected to different homogeneous tensile biaxial stresses. We showed that the dispersion relations of LW in ultra-thin silicon membranes can be tuned in a controlled manner by means of applied stress. The experimental results show good agreement with theoretical calculations based on the elastic continuum approach without any free parameter. The experimentally proven stress tuning of the flexural phonon dispersion relations opens the door to thermal management and energy conversion in low-dimensional structures. It is expected that this tuning of the dispersion relation translates into changes in the thermal properties when the wavelength of the phonons involved in heat transport is commensurate with the thickness of the membrane [15,16,49] or at low temperatures [23]. In addition, we showed how Brillouin spectroscopy provides a contactless and nondestructive tool for stress measurements in nm-scale systems. Our findings are essential for the development of micro- and nano-electromechanical systems (MEMS and NEMS) and have deep implications for the engineering of thermal conductivity.

Appendix

In the general case, the adopted iterative search procedure sweeps ν and qd as parameters, looking for the phase velocities of LW for a given qd, corresponding to the minima of the boundary-condition determinant. Moreover, these solutions can be classified in terms of polarization or symmetry by means of the evaluated weighting factors A_n and (A.1). All calculations were performed for Si membranes oriented as shown schematically in figure 1(b). Figure A1 presents the dispersion curves for exemplary values of both tensile and compressive biaxial load for the A, S and SH modes, respectively, with q ∥ [110]. From figure A1 we can see that the magnitude, or even the sign, of the applied load does not lead to simple and unambiguous changes in the dispersion relations. In general, the change of phase velocity depends on the direction of propagation with respect to the applied load, on qd, on the type of wave/polarization and on the order of the given mode.
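To make the two limiting cases above concrete, the following short numerical sketch evaluates the A0 branch over the BLS-accessible wavenumber range together with the string-limit velocity (9) and the buckling wavenumber from (10). It is only an illustration: it uses the isotropic thin-plate approximation ρdω² = σ0 d q² + D q⁴ rather than the full anisotropic partial-waves calculation of appendix A, and the material constants are generic representative values for Si, not those of table 1.

import numpy as np

# Isotropic thin-plate estimate of the A0 (flexural) branch under biaxial
# stress sigma0: omega(q) = sqrt((sigma0/rho)*q**2 + (D/(rho*d))*q**4),
# with bending rigidity D = E*d^3 / (12*(1 - nu_P^2)).
E, nu_P, rho = 169e9, 0.22, 2329.0   # representative Si values (assumed)
d = 27e-9                            # membrane thickness (m)
D = E * d**3 / (12.0 * (1.0 - nu_P**2))

def a0_ghz(q, sigma0):
    """A0 frequency (GHz); NaN where omega^2 < 0 (buckled wavenumber range)."""
    w2 = (sigma0 / rho) * q**2 + (D / (rho * d)) * q**4
    return np.where(w2 >= 0.0,
                    np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi) / 1e9,
                    np.nan)

theta = np.deg2rad(np.linspace(11.0, 50.0, 5))
q = 4.0 * np.pi / 532e-9 * np.sin(theta)      # BLS backscattering, eq. (1)

for s in (0.3e9, 0.0, -0.05e9):               # tensile, unstressed, compressive (Pa)
    print(f"sigma0 = {s/1e9:+.2f} GPa:", np.round(a0_ghz(q, s), 2), "GHz")

print("string-limit velocity at 0.3 GPa:", (0.3e9 / rho) ** 0.5, "m/s")    # eq. (9)
print("buckling wavenumber at -0.05 GPa:", (0.05e9 * d / D) ** 0.5, "1/m")  # eq. (10)

With these representative numbers the sketch yields sub-GHz A0 frequencies at the smallest scattering angles, of the same order as the smallest measurable Brillouin shifts (about 0.3 GHz) quoted above.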
3,950.8
2014-07-17T00:00:00.000
[ "Physics" ]
Transformation of the model for estimating the balance reliability of the UPS of Russia to account for correlation of the consumption regime

The characteristics of the electric power consumption regime are used when determining the indicators of adequacy in managing the development of the UPS of Russia. The influence of changed conditions of their representation on the probabilistic indicators of the balance reliability of the system-forming links between the territorial zones (UPS) of the computational model of the UPS of Russia is shown. On this basis, we propose a transformation of the model for estimating the balance reliability indicators which increases computational efficiency.

Introduction

When planning the development of the industry, the so-called balance of power for the UPS of the country as a whole and for its unified power systems (UPS) [1,2] was developed earlier, is being developed today and will be developed in the future. Its form includes two parts (Figure 1): the expenditure part, "demand" (maximum load, export and the standard power reserve), and the incoming part, "cover" (installed power, power unavailable during the period of passage of the maximum, power inputs after the maximum has passed, and underutilization of power). An issue that concerns both consumers and suppliers of electrical energy is the justification of the normative power reserve, which is one of the components of the expenditure part of the balance. In the tasks of long-term planning of power systems (EES) it is called the full (normative) reserve and is conventionally divided into three components: repair, strategic and operational, the last intended to compensate for unscheduled (emergency) withdrawals of the main generating and network equipment for repair. Therefore, to avoid confusion with the concept of operational reserve used in current and short-term planning, this component of the full reserve has recently been called the compensation reserve.

Market relations in the electric power industry exacerbate the issues of justifying the power reserve. In short-term planning (3 to 4 years ahead), the price of capacity in long-term (competitive) capacity take-off depends on the requirements for the full (normative) reserve under market conditions.

The difficulty in determining the total capacity reserve is mainly related to finding one of its three components, the operational (compensation) power reserve, which depends on a variety of factors and events, including randomly determined ones [3-6], such as:
- the model of the calculation scheme of the UPS;
- the structure of the available generating capacity;
- scheduled repairs of equipment;
- reduction of the generating capacity of territorial zones due to emergency damage to power plant units;
- regular and irregular maximum loads of the territorial zones and the graphs of their variation over the year and the day;
- random deviations and irregular load fluctuations;
- reserves of transfer capability between territorial zones in normal and emergency modes.

The article considers the influence of modern possibilities for receiving and processing probabilistically determined information on power consumption modes, and of the models of the calculation schemes of the UPS of Russia in use, on the transformation of the methodology for estimating indicators of adequacy (IA) and the means of ensuring them.
Representation of the power consumption mode

The representation of the power consumption regime over the settlement period, which is usually a year, has a significant impact on the IA and on the means of ensuring them when planning the development of power systems. Obtaining an hourly prospective schedule of the power consumption of the territorial zones of the EES for the settlement year has always caused considerable difficulties.

Studies of balance reliability in our country began at the turn of the 1960s. A generalization of these studies for concentrated power systems is given in [7]. As the yearly load graph, the load-duration curve with hourly resolution (8760 hours) was used. As the criterion for deciding on the justified power reserve, an integral probability of deficit-free operation of 0.999 was adopted, i.e. an integral probability of occurrence of a power shortage of 0.001 [7]. In that work, the specific reduced costs for reserve capacity were taken equal to 5 rub./kW, and the specific losses from non-supply of electricity to consumers to y0 = 0.6 rub./(kW·h). In the same work, for the first time in the country and taking foreign experience into account, the concepts of the regular maximum of the load and of its random deviations due to irregular oscillations and prediction errors were introduced. These concepts are included in almost all approved normative technical documents (NTD), the methodological instructions for the design of power system development (hereinafter MI), used in our country to this day. It should be noted that these NTD introduced the concept of the average daily load schedule of the days of maximum power consumption in December [2]. This schedule is extended to all 250 working days of the year (Figure 1). In those days, the reasons justifying such an approach were clear to the designers and to the scientific community; they manifested themselves as follows.

Methodologically, there was a transition from considering the UPS of the country as a concentrated association to a multi-zone one, with the interconnected power systems (UPS) separated out as zones. This required the development of a new approach to modeling the random states of generating capacity and load in them, with subsequent evaluation of whether all consumers are satisfied. Considering 8760 hourly load changes became, at that time, an impossible task. The value of the decision criterion for justifying the capacity reserve also changed from 0.999.
Owing to a change in the principles of determining the unit costs for reserve capacity (operating costs began to be taken into account), the integral probability of the appearance of a power shortage was adopted as 0.004. The deficit of generating capacity observed in the country's UPS at that time did not allow capital repairs during the hours of the load maximum (a constraint which, unfortunately, is no longer observed today); repairs always fitted into the seasonal load dips. Thus, the load in those intervals effectively increased (treating repairs as an addition to the load in the seasonal interval of its decline) up to the December maximum, and the annual schedule of seasonal load dips was equalized. This was the main reason for extending the December average daily load schedule to all working days of the year. It should be noted that the same happened abroad; for example, in the power systems of North America the balance reliability standard is still applied in the form LOLE = 0.1 days/year, obtained from an analysis not of 8760 hourly load values but only of the maximum values for each of the 365 days of the year.

At present, the processes of obtaining the initial information on power consumption levels and on the schedules of their variation differ significantly from those in place when the guidelines and MI were developed. Today, real-time schedules of load changes in the power systems of the UPS of Russia can be obtained for any time interval thanks to information:
- within the retrospective period, on the actual capacity at the end of each hour(1), the actual daily average and monthly average temperatures, and the actual annual consumption of electrical energy;
- within the forecast period, on the annual power consumption, set in the annual work on the schemes and programs of long-term development of the UPS of Russia.

The presence of such information allows the necessary data packet to be formed. It includes:
- the shape of the forecast daily average load curves;
- the random deviations of the load from the mean values;
- the magnitudes of the forecast regular load maxima.

A description of the procedure for obtaining the above information is given in detail in [8]. A few words about the random component of load deviations from the mean values. With respect to the problem of balance reliability, it has always been associated mainly with fairly well-established, probabilistically determined events caused by two factors. The first is the appearance of deviations from the mean values of the daily load schedule (Figure 1), obtained by processing a huge set of information on power consumption modes for the working days of December over the retrospective period. The second, most significant factor is connected with taking into account the influence of temperature: the prospective values of the maximum loads are reduced to the average annual outside-air temperatures of the territory under consideration.

(1) Daily dispatching sheets of the operational information complex (OIC) of the main dispatch centre (MDC) of SO UES JSC.
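As an illustration of how such a data packet could be assembled from hourly retrospective records, the following sketch computes the average daily load curve, the hourly standard deviations of the irregular deviations, and the regular and irregular maxima. The load series here is synthetic, standing in only as a placeholder for the OIC dispatching lists.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical retrospective data: hourly loads (MW) for the working days of
# December, shape (n_days, 24). Real inputs would come from the OIC lists.
loads = (20000.0
         + 3000.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
         + rng.normal(0.0, 400.0, (22, 24)))

avg_daily_curve = loads.mean(axis=0)   # average daily load schedule
deviations = loads - avg_daily_curve   # irregular deviations from the mean
sigma_hourly = deviations.std(axis=0)  # st. dev. per hour of the day
regular_max = avg_daily_curve.max()    # regular load maximum
irregular_max = loads.max()            # observed (irregular) maximum

print(f"regular maximum:   {regular_max:8.0f} MW")
print(f"irregular maximum: {irregular_max:8.0f} MW")
print(f"mean sigma:        {sigma_hourly.mean():8.0f} MW")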
Statistical processing of the power consumption modes over the six-year period from 2010 to 2015, carried out by specialists of JSC "SO UPS" [8], made it possible to determine, for the annual electricity consumption volumes of the considered territorial zones (UPS) set in the annual work on the schemes and programs of long-term development of the UPS of Russia for 2016-2022:
- the regular load maxima;
- the random deviations of the load from the mean values;
- the average graphs of daily load variation;
- the correlation dependencies of the random load deviations.

The information obtained on the first two items for 2022 is given in Table 1. The same table shows the differences between the regular and irregular load maxima obtained using different techniques. It can be seen that these differences are quite significant.

It should be understood that the methodology for obtaining the regular maximum from the analysis of retrospective information is not entirely perfect, since it uses retrospective information on power consumption that has not been reduced to the long-term average temperature norms. Studies show that the power consumption mode can be described using the shape of the load curve, with the irregular maximum taken as the maximum. The last two lines of the table show the values of the random load deviations (σ_п) obtained by different methods. It can be seen that the values obtained by processing retrospective information are almost 40% higher than those given in the 2003 and 2011 MI. The same trend is observed for the individual UPS, except for the Middle Volga UPS.

The correlation coefficients of consumption between the territorial zones, identified in the form of UPS as a result of processing the retrospective information (the last of the items listed above), fluctuate within fairly wide limits. Of the 15 possible coefficients (6 UPS, without the UPS of the Far East), more than 10 exceed the threshold of 0.5. This indicates a strong dependence between the random load deviations in several UPS at once, which greatly affects the IA and the means of ensuring them [8].

3 Peculiarities of forming the model of the calculation scheme of the UPS of Russia for adequacy tasks

Zones of equal reliability (hereinafter territorial zones; each is a concentrated system(4)), monitored in the analysis of balance reliability both under a vertically integrated system and under market management principles, should be understood as sets of EES territorial zones for which network restrictions do not affect the IA. Below, this term will also be described by the phrase "territorial area", i.e. a territory within which network restrictions do not cause consumers to be restricted in power and electricity.

In the long-term planning of the development of the UPS of Russia, a number of assumptions have always been adopted to simplify the solution of the problem. This applies to the representation of the electricity consumption regime (December only), the accounting for scheduled repairs, the composition and limitations of the generating equipment, and many other issues. The simplifications introduced in long-term planning naturally also concern the aggregation of the scheme of electrical connections of the main generating and network objects.

(4) A concentrated system is a power system within which there are no restrictions on power transmission in any possible emergency situation caused by unreliability of the generating and network equipment.
It is clear that the simplifications introduced, which have a certain influence on the decisions taken to manage the development of power systems, must have a certain justification. In any case, the model of the calculation scheme of the UPS of Russia for determining the IA, used to justify the reserve capacity values, should be represented by a graph containing nodes (territorial zones) and the links between adjacent zones. A territorial zone is a kind of association within which network restrictions do not affect the power distribution. Links are understood as the set of power transmission lines between territorial zones. The links of the calculation graph should carry information on the maximum permissible overflows (MPO) of capacity (i.e. the transfer capability of the link) in the forward and reverse directions for normal and emergency operation (a minimal sketch of such a graph is given after this paragraph).

In recent years (2010-2015), opinions have been voiced in the reserve-research community that, in calculations aimed at determining the requirements for the operational reserve capacity over the short-term period (1 to 7 years), the scheme of the UPS of Russia should be more detailed, including up to 100 or more territorial zones. For power and capacity balances developed for a more distant future, there was no doubt that territorial zones could be represented in a more aggregated form, for example as united power systems (UPS) or free power transfer zones (FPTZ) [6], possibly broken down by the sections that limit emergency mutual assistance within them. In any case, the number of territorial zones did not exceed 20.

Clearly, the correct resolution of these issues requires appropriate studies. In 2012-2016 such studies, addressing a set of issues related to the development of the UPS of Russia, were carried out at the Joint Stock Company "Scientific and Technical Center of the Unified Power System" (JSC "STC UPS"), the basic scientific centre of JSC "SO UPS" [9]. Unfortunately, at the present time the information necessary for the scientific substantiation of these issues is practically closed to researchers dealing with adequacy. In this article, the authors relied on information support received in the course of a number of works with JSC "SO UPS of Russia" in 2011-2016.

In 2011-2012, on the instructions of JSC "SO UES", JSC "STC UPS" performed work on forming the model of the calculation scheme of the UPS of Russia for solving the problems of ensuring adequacy [9]. It is quite obvious that, for evaluating the IA given the actual composition of the generating equipment, its failure statistics and the projected maximum loads, such schemes are of some interest from the perspective of obtaining the IA of the territorial zones and identifying "weak" links in the system. At the same time, solving the task of justifying the means of ensuring adequacy, the operational (compensation) capacity reserves, for such schemes is, in our opinion, rather problematic.
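The sketch below shows one possible data structure for such a calculation graph; the zone names and MPO values are hypothetical and serve only to illustrate the directional limits described above.

from dataclasses import dataclass

@dataclass
class Link:
    """Tie between two territorial zones with directional MPO limits (MW)."""
    a: str
    b: str
    mpo_forward_normal: float
    mpo_reverse_normal: float
    mpo_forward_emergency: float
    mpo_reverse_emergency: float

# Hypothetical subset of zones and links, not the scheme of Figure 2.
links = [
    Link("Urals", "Center", 5000, 4200, 5600, 4700),
    Link("Middle Volga", "Center", 3500, 3000, 3900, 3400),
    Link("South", "Center", 2000, 1800, 2300, 2000),
]

def transfer_limit(link: Link, src: str, emergency: bool = False) -> float:
    """MPO for power flowing out of `src` over `link`."""
    forward = (src == link.a)
    if emergency:
        return link.mpo_forward_emergency if forward else link.mpo_reverse_emergency
    return link.mpo_forward_normal if forward else link.mpo_reverse_normal

print(transfer_limit(links[0], "Urals"))            # normal-mode limit, 5000 MW
print(transfer_limit(links[0], "Center", True))     # emergency reverse, 4700 MW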
On the basis of the developed 56-node model of the calculation scheme of the UPS of Russia, and with the participation of experts from the regimes service of JSC "SO UPS", a calculation scheme consisting only of the UPS was developed (Figure 2). The information filling for the territorial zones (the composition of the generating equipment and its failure rates, the shape of the daily load schedule, the forecast errors and their correlation dependencies) and for the links (transfer capability and outage statistics) was obtained partly from the work on the schemes and programs of long-term development of the Russian Unified Power System for 2016-2022, and partly from processing retrospective information on the reported parameters of system operation for 2010-2016.

Opportunities for transforming the model for justifying the operational reserve of capacity under the current conditions of development of the UPS of Russia

Analysis of the results of determining the operational (compensation) capacity reserve, presented in Tables 2 and 3 for 2022 (the schemes and programs of long-term development of the Russian Unified Power System for 2016-2022), allows a slightly different look at the formulation of the task of estimating the IA when planning the development of the UPS of Russia under the current conditions of receiving and processing retrospective information on power consumption regimes. Specifically, this concerns the dependence, revealed by statistical processing of retrospective information, between the changes in the power consumption mode caused by random load-forecast errors in different territorial zones. In Tables 2 and 3 the effect of this factor is labelled "with" and "without" correlation.

The results of the distribution of the operational capacity reserve over the UPS of Russia presented in Table 2 are of a research nature. The reason is that the questions of forming the initial information on the maximum permissible power flows in the models of the calculation scheme of the UPS of Russia provided by JSC "SO UPS" are insufficiently developed and have hardly been discussed at conferences. The results given in Table 2 show that, when the MPO levels of the links both inside the UPS (not shown here) and between them are those received from JSC "SO UPS" (Table 3), the differences in the reserve levels of the territorial zones are extremely insignificant. The explanation is quite simple: within power system associations, the MPO (transfer capability) of the links practically do not limit the transfer of power from one node to another. Moreover, it is worth noting that when the correlation dependencies between the random load deviations in the territorial zones (UPS) are taken into account, the influence of the MPO of the links on the IA decreases drastically. Table 3 shows such relationships. For example, if the integral probability of a power shortage in a UPS is close to 0.004 for the optimal variant of the development of the UPS of Russia, then for the Urals-Center link (1-5 in Figure 2), without taking the correlation into account, the probability of overloading its MPO is 0.000119 (multiplicity 0.004/0.00018 = 22 in Table 3). Taking the correlation into account reduces the probability of MPO overloading by more than a factor of three: for the link in question, it became 0.000048.
All this leads to the possibility of considering the UPS of Russia as a concentrated power system, in which the random states of generation and, especially, of the load are simulated at the level of the territorial zones, taking into account the dependencies between their random deviations. The evaluation of each simulated state with regard to covering the load is made by simply comparing the totals of generation and load over all territorial zones of the model of the calculation scheme of the UPS of Russia (a toy Monte Carlo illustration of this check is given after this section).

Adopting such an approach dramatically reduces the time required to obtain a preliminary result. The savings come from excluding from the IA estimation model its most time-consuming block, the estimation of the random state of the network [5,6]. The preliminary result can then be corrected taking into account the maximum permissible power flows. It can be seen from Table 2 that the simplifications introduced, in the variant taking the correlation into account, decrease the compensation power reserve by only 0.75% relative to the irregular load maximum, or by only 5.5% relative to the reserve capacity.

Fig. 2. Model of the calculation scheme of the UPS of Russia: 1 - UPS of the Urals; 2 - UPS of the Middle Volga; 3 - UPS of the South; 4 - UPS of the North-West; 5 - UPS of the Center; 6 - Kazakhstan; 7 - UPS of Siberia; 8 - UPS of the East.

(2) Scheme and program for the development of the UPS of Russia for 2016-2022.

The annual power consumption mode is represented by the characteristic daily schedule of December. It is no accident that this representation is used in the unapproved 2011 guidelines of the Ministry of Energy of Russia and in the Order of the Ministry of Energy of Russia(3).

(3) Order of the Ministry of Energy of Russia of 07.09.2010, No. 431 (edited on August 17, 2017) "On approving the Regulations on the procedure for determining the amount of demand for capacity for conducting long-term power take-off ..." (registered with the Ministry of Justice of Russia on 29.09.2010, No. 18578).

Table 1. Some load characteristics for the 2022 forecast year.
Table 2. Distribution of the operational (compensation) reserve over the UPS with different representations of the model of the calculation scheme of the UES of Russia for 2022 (information from the work on the schemes and programs of long-term development of the UES of Russia for 2016-2022).
Table 3. Transfer capability of the links (MW) and the multiplicity of the ratio of the optimal integral probability of power shortage in the UPS (0.004) to the integral probability of link overloading.
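The "concentrated system" check proposed above can be illustrated with a small Monte Carlo sketch. All numbers here are hypothetical and do not come from Tables 1-3: zone loads are sampled with and without cross-zone correlation of the random deviations, and the loss-of-load probability is estimated by comparing total load with total available generation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-zone data (MW): mean loads and available generation.
load_mean = np.array([22000., 18000., 15000., 12000., 30000., 9000.])
gen_avail = load_mean * 1.08            # generation with an 8% reserve margin
sigma = 0.03 * load_mean                # st. dev. of irregular load deviations

# Correlation of load deviations between zones (strongly positive, cf. > 0.5).
corr = np.full((6, 6), 0.6)
np.fill_diagonal(corr, 1.0)
cov = corr * np.outer(sigma, sigma)

def lolp(n_trials, correlated=True):
    """Loss-of-load probability for the concentrated-system check:
    total generation vs. total load over all zones."""
    if correlated:
        loads = rng.multivariate_normal(load_mean, cov, size=n_trials)
    else:
        loads = rng.normal(load_mean, sigma, size=(n_trials, 6))
    deficit = loads.sum(axis=1) > gen_avail.sum()
    return deficit.mean()

print("LOLP with correlation:   ", lolp(200_000, True))
print("LOLP without correlation:", lolp(200_000, False))

Even this toy example shows why the correlation matters: positively correlated deviations inflate the variance of the total load and hence the estimated shortage probability, which is exactly the factor labelled "with correlation" in the tables.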
4,797.8
2018-10-01T00:00:00.000
[ "Computer Science" ]
Calcitriol Reduces the Inflammation, Endothelial Damage and Oxidative Stress in AKI Caused by Cisplatin

Cisplatin treatment is one of the most commonly used treatments for patients with cancer. However, thirty percent of patients treated with cisplatin develop acute kidney injury (AKI). Several studies have demonstrated the effect of bioactive vitamin D, or calcitriol, on the inflammatory process and endothelial injury, essential events that contribute to the changes in renal function and structure caused by cisplatin (CP). This study explored the effects of calcitriol administration on the proximal tubular injury, oxidative stress, inflammation and vascular injury observed in CP-induced AKI. Male Wistar Hannover rats were pretreated with calcitriol (6 ng/day) or vehicle (0.9% NaCl). The treatment started two weeks before i.p. administration of CP or saline and was maintained for another five days after the injections. On the fifth day after the injections, urine, plasma and renal tissue samples were collected to evaluate renal function and structure. The animals of the CP group had increased plasma creatinine levels and fractional sodium excretion and decreased glomerular filtration rates. These changes were associated with intense tubular injury, endothelial damage, reductions in antioxidant enzymes and an inflammatory process observed in the renal outer medulla of the animals from this group. These changes were attenuated by treatment with calcitriol, which reduced the inflammation and increased the expression of vascular regeneration markers and antioxidant enzymes.

Introduction

Cisplatin (CP) is one of the most potent and effective anticancer drugs used in clinical practice [1]. Despite its effectiveness, its use is limited by its nephrotoxicity, which leads to acute kidney injury (AKI); it has been shown that cisplatin accumulation in the kidney leads to selective damage to the S3 segment of the proximal tubule located in the outer stripe of the outer medulla [2-4]. AKI is characterized by an abrupt drop in renal function, a decline in the glomerular filtration rate (GFR) and the accumulation of metabolic waste [5]. Several mechanisms have been studied to evaluate the determinants of the nephrotoxic effect of CP. The kidney toxicity of cisplatin has been associated with basolateral uptake along the proximal tubule via the organic cation transporter 2 (OCT2), leading to an intracellular cisplatin concentration up to five times higher than the plasma level [6]. To maintain cellular homeostasis, a balance between ROS production and antioxidant defense activity is necessary [13,14]. CP interferes with this balance by increasing the production of ROS [15] and reducing the production of antioxidant enzymes.

Table 1. Plasma creatinine (Pcreat), fractional sodium excretion (FE Na+) and glomerular filtration rate (GFR) 5 days after injection of CP or vehicle in the SAL (n = 6), SAL + calcitriol (n = 6), CP (n = 8) and CP + calcitriol (n = 8) groups.

Cisplatin Provoked Renal Injury and Inflammation That Was Ameliorated by Calcitriol Treatment

The typical histological features of CP-induced AKI (characterized by loss of the brush border, cell necrosis, tubular dilation, sloughing and obstruction) and a higher tubular damage score were observed following the CP treatment. Nevertheless, these abnormalities were markedly reduced by treatment with calcitriol (CP + calcitriol group) (Figure 1A-E).
The epithelial cells express vimentin before differentiation or during the transdifferentiation that occurs through the process known as the epithelial-mesenchymal transition (EMT) [30]. During this process, these cells can proliferate, migrate and produce extracellular matrix; therefore, this protein can be used as a marker of cellular damage [27,28]. The immunohistochemical analysis showed an increase in the expression of vimentin in the renal outer medulla of the animals in the CP group compared to the control groups (SAL and SAL + calcitriol) (Figure 2A-D). Treatment with calcitriol decreased vimentin expression in the renal outer medulla of these animals (Figure 2I-K). The tubular injury caused by CP was also evaluated through cell proliferation. The number of PCNA-positive cells was analyzed in the renal outer medulla (Figure 2E-H). PCNA is a nuclear antigen of cells in the proliferation phase and is indicative of recent tubular injury. The number of PCNA-positive cells was increased in the renal outer medulla of the animals injected with CP compared to the control groups (SAL and SAL + calcitriol), showing an intense tubular lesion in these animals. However, such alterations were attenuated by treatment with calcitriol (Figure 2H-L).

[Figure 1 caption, in part] The score for TIL in the renal outer medulla (E) of all experimental groups. Data are expressed as mean ± SEM (n = 5-8 for each group). ** p < 0.01; **** p < 0.0001; magnification, ×400.

An inflammatory process, evidenced by a large infiltration of macrophages (ED1-positive cells) in the tubulointerstitium of the outer medulla of the kidneys of the cisplatin-treated rats, was also observed (Figure 3A-D). Calcitriol treatment prevented macrophage infiltration in this tubulointerstitial area (Figure 3E).
Analysis of IL-1β expression in renal tissue showed higher levels of this cytokine in the CP group than in the controls (Figure 3F), an increase that was attenuated by calcitriol treatment. Interleukin-10 (IL-10), a cytokine with anti-inflammatory and immunomodulatory functions, was downregulated in the CP group, while upregulation was observed in the calcitriol group (Figure 3G).

[Figure 2 caption, in part] The densitometric ratio between vimentin and GAPDH was calculated, and data are expressed in comparison with the control group, with the mean control value (±SEM) designated as 100% and expressed as mean ± SEM (n = 5-8 for each group). Blots are representative images of independent experiments. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001; magnification, ×100.

[Figure 3 caption, in part] Values are given as the mean ± SEM. Renal tissue levels of IL (interleukin)-1β (F) and IL (interleukin)-10 (G) from the control (SAL and SAL + calcitriol) and experimental (CP and CP + calcitriol) groups. Data are expressed as mean ± SEM (n = 5-8 for each group). * p < 0.05; ** p < 0.01; **** p < 0.0001; magnification, ×400.

Evaluation of 25-OH Vitamin D, VDR and the Cubilin Receptor

There was no significant difference between groups either in the mean serum concentration of 25-OH vitamin D or in the classification of levels as deficient (levels < 20 ng/mL) or sufficient (levels > 30 ng/mL) (Figure 4F). However, the Western blot analysis demonstrated reduced VDR expression in renal tissue from the CP group compared with the control groups (SAL and SAL + calcitriol) (Figure 4G,H). The reduction in the expression of this protein induced by CP was attenuated by treatment with calcitriol.
We also observed by immunohistochemical analysis that the number of tubules with a brush border marked with cubilin receptors was smaller in the CP group than in the control groups (SAL and SAL + calcitriol) (Figure 4A-D). Calcitriol treatment attenuated these alterations (Figure 4E).

[Figure 4 caption, in part] Western blot analysis of vitamin D receptor (VDR) and GAPDH (G) in the renal tissue from all experimental groups (SAL, SAL + calcitriol, CP, CP + calcitriol). VDR densitometry (H). The densitometric ratio between VDR and GAPDH was calculated, and data are expressed in comparison with the control group, with the mean control value (±SEM) designated as 100% and expressed as mean ± SEM (n = 5-8 for each group). Blots are representative images of independent experiments. * p < 0.05; ** p < 0.01; **** p < 0.0001; magnification, ×400.

The Endothelial Damage Induced by CP Was Also Improved by Calcitriol

The immunohistochemistry studies with JG12, a marker of endothelial cells, showed that the CP animals had fewer capillaries in the renal cortex and outer medulla (Figure 5A-D). The effect of CP on the renal endothelium, evaluated by Western blot using a specific marker for endothelial cells (CD34), showed that CD34 expression was reduced in the CP groups compared to the control groups (SAL and SAL + calcitriol) (Figure 5F). The calcitriol treatment improved these alterations (Figure 5E,G). Western blot analysis also demonstrated a reduction in tissue NOS3 (Figure 5H,I) and p-NOS (Figure 5J,K) expression in the CP group compared to the CP + calcitriol group.

[Figure 5 caption, in part] The densitometric ratio between CD34, NOS3, p-NOS and GAPDH was calculated, and the data are expressed in comparison with the control group, with the mean control value (±SEM) designated as 100% and expressed as mean ± SEM (n = 5-8 for each group). Blots are representative images of independent experiments. * p < 0.05; ** p < 0.01.

Vascular endothelial growth factor (VEGF) is a significant proangiogenic factor in angiogenesis. We observed that the expression of VEGF was reduced in the animals of the CP group compared with that in the control rats (SAL and SAL + calcitriol) (Figure 6A,B). This alteration was reversed by treatment with calcitriol, as evidenced by the increased expression of VEGF in the CP + calcitriol group. VEGF exerts its actions through its VEGFR2 receptor on target cells.
We observed that the CP + calcitriol group also showed an increase in the expression of VEGFR2 in the kidney tissue (Figure 6C,D). CXCR4 is a receptor present on endothelial cells and pericytes of hypoxic tissues. We observed that the animals in the CP group showed an increase in the renal expression of this receptor compared to controls (SAL and SAL + calcitriol) (Figure 6E,F). This increase in expression was attenuated in the CP + calcitriol group. A reduction in antioxidant enzymes is one of the mechanisms generating the oxidative stress triggered by CP, and the increase in oxidative stress leads to endothelial damage. To assess the participation of this mechanism, we analyzed the expression of the antioxidant enzyme EC-SOD in kidney tissue (Figure 6F-H). We observed that CP reduced the expression of EC-SOD, and the antioxidant action of calcitriol was confirmed by the increased expression of EC-SOD in the SAL + calcitriol group. This action was also observed in the animals of the CP + calcitriol group, as evidenced by the maintenance of EC-SOD levels in this group.

[Figure 6 caption, in part] The densitometric ratio between VEGF, VEGFR2, CXCR4, EC-SOD and GAPDH was calculated, and the data are expressed in comparison with the control group, with the mean control value (±SEM) designated as 100% and expressed as mean ± SEM (n = 5-8 for each group). Blots are representative images of independent experiments. * p < 0.05; ** p < 0.01; **** p < 0.0001.

Discussion

The data presented in this manuscript provide evidence that calcitriol can attenuate the inflammation, endothelial injury, oxidative stress and epithelial cell injury in cisplatin-induced AKI. In addition, our study demonstrates that calcitriol may also display an anti-inflammatory action through modulation of IL-1β and IL-10, leading to a downregulation of IL-1β and an upregulation of IL-10.
Consistent with our data, previous studies have shown that cisplatin (CP) nephrotoxicity is associated with increased expression of IL-1β [31-33], whose inhibition alone does not protect against CP-induced AKI [31]. A recent study revealed that renal tubular epithelial cell-derived IL-1β polarizes renal macrophages toward a proinflammatory phenotype that stimulates salt sensitivity through an increase of renal IL-6 [34]. IL-10 is a cytokine known for its anti-inflammatory actions [35] and is produced by many immune cells [36,37]. Its anti-inflammatory actions are attributed to its ability to inhibit the infiltration of monocytes and neutrophils and the production of inflammatory cytokines [38-40]. A reduction in IL-10 levels has already been observed in CP nephrotoxicity [31], and its increased expression has been suggested to be protective against CP-induced kidney injury [41,42]. Amirshahrokhi et al. (2015) observed that the reduction in the renal toxicity caused by CP might be related to the inhibition of proinflammatory cytokines [32]. Our results showed that this exact mechanism could be involved in calcitriol's downregulation of IL-1β and reduction of inflammation. In addition, vitamin D induces an increase in IL-10 secretion by regulatory T cells, an effect observed in a study with a patient with systemic sclerosis [43]. Increased vimentin expression was also observed in the tubular cell injuries in the renal outer medulla of the CP-injected rats. Tubular cells only express vimentin when proliferating, demonstrating recent lesions of these cells.
The increased number of PCNA-positive cells confirmed this result. Calcitriol treatment decreased the tubular cell injury and the expression of vimentin and PCNA in the animals injected with CP. Tan et al. (2006) observed that treatment with paricalcitol (a synthetic vitamin D analog) significantly reduced the expression of PCNA and attenuated renal interstitial fibrosis in a model of obstructive nephropathy [44]. The authors also observed that vitamin D treatment restored the expression of the VDR receptor, blocked the epithelial-mesenchymal transition and inhibited cell proliferation, demonstrating that vitamin D plays a protective role in cellular integrity against this cell injury process. Previous results from clinical and animal studies have suggested that VDR activation has beneficial effects in various renal diseases [45,46]. To address this question, we examined the VDR in kidney tissue and found lower expression in the animals from the CP group, while the CP group that received calcitriol showed an increase in its expression. In the present study, we observed that the lesions in the renal outer medulla were associated with decreased cubilin receptor expression in the apical region of the tubule cells of the CP group. The reduction in the number of tubules expressing cubilin in the cell brush border could lead to disturbances in vitamin D activation [47,48]. Our results showed that calcitriol-treated rats present preserved cubilin receptors, demonstrating the renoprotective role of calcitriol. Additionally, studies have shown that vitamin D deficiency is a risk factor for contrast-induced AKI due to an imbalance in intrarenal vasoactive substances and oxidative stress [49]. Our results showed that the decreased expression of EC-SOD (an antioxidant enzyme) induced by CP was attenuated by calcitriol treatment. A similar effect was observed in [50], where pretreatment with cholecalciferol, an inactive form of vitamin D3, partially protected against ischemia-reperfusion-induced AKI through regulation of oxidant enzymes and suppression of oxidative stress. CP-induced nephrotoxicity is due in part to vascular damage and to the vasoconstriction associated with endothelial dysfunction and abnormal vascular self-regulation [51]. Vascular injury results in decreased renal blood flow and GFR, causing hypoxic tubular damage [52]. We have previously shown the participation of the endothelium in the development of AKI induced by CP [53]. We used JG12 to assess changes in capillary density in the outer medulla of the kidney. JG12 is a specific marker for blood vessel endothelium and is constitutively expressed by the endothelial cells of tubulointerstitial vessels in the kidney [54,55]. Using a unilateral ureteral obstruction model, Sun et al. (2012) observed an alteration in peritubular capillary density detected using JG12 immunostaining [56]. In the present study, we observed that JG12-positive peritubular capillaries were markedly diminished in the outer medulla regions with significant interstitial expansion and tubular atrophy. Along with the loss of the peritubular capillaries, the CP group also presented decreased production of NOS3 and p-NOS. In the vascular endothelium, NOS3, also known as endothelial nitric oxide synthase (eNOS), is an enzyme that produces NO. The decreased p-NOS expression in the renal tissues in our study may be due to decreased NOS3 expression, which can lead to increased vasoconstriction and contribute to alterations in blood pressure [57].
Renal vasculature quiescence is tightly regulated by the balance between pro- and antiangiogenic factors in healthy kidneys. However, this quiescence can be disrupted during AKI, resulting in an antiangiogenic environment with the loss of peritubular capillaries [58]. The increase in the expression of VEGF and VEGFR in the calcitriol group reinforces the role of calcitriol as an endothelium promoter, since several studies demonstrate that VEGF promotes the growth of endothelium and protects endothelial cells from apoptosis. We next evaluated the effects of CP on CXCR4 expression in the kidney tissue. The increased expression of CXCR4 observed in the CP group could also contribute to epithelial and endothelial cell damage. In recent work, Chang et al. (2021) observed that suppression of the SDF-1/CXCR4 pathway resulted in increased tubular cell regeneration, reduced cell death and attenuation of microvascular rarefaction in the kidneys of IR-AKI mice [59]. This finding is consistent with our data, which show that calcitriol suppressed CXCR4 expression in the CP + calcitriol group, followed by the amelioration of endothelial and epithelial cell dysfunction. In this study, we demonstrate that calcitriol attenuates the morphological and functional changes that occur in CP-induced AKI. However, some limitations must be recognized. First, in vitro experiments using primary kidney endothelial cells and angiogenesis assays should be performed to better evaluate these events in the renal microvasculature. Second, we measured plasma creatinine levels with routine clinical laboratory methods rather than with high-performance liquid chromatography, a more reliable method for evaluating plasma creatinine levels. Third, studies are needed to evaluate how calcitriol can reduce inflammatory cell infiltration and epithelial cell proliferation and accelerate the resolution and repair of epithelial cell injury. Despite these limitations involving experimental models, the present study adds to the literature concerning the effects of calcitriol administration on the proximal tubular injury, oxidative stress, inflammation and vascular injury observed in CP-induced AKI. In conclusion, our study suggests that calcitriol attenuates the tubular injury, endothelial damage, reductions in antioxidant enzymes and inflammatory process observed in the renal outer medulla in CP-induced AKI.

Animal Model and Experimental Design

The protocols were approved by the Animal Experimentation Committee of the University of São Paulo at the Ribeirao Preto Medical School (COBEA/CETEA/FMRP-USP, protocol no. 115/2018). This study used male Wistar Hannover rats (200-300 g). The rats were housed four per cage according to the groups, at a room temperature of 22 ± 2 °C and a 12 h light/dark cycle, with a chow diet and water ad libitum. The animals were divided into four groups: (1) SAL (0.9% saline, n = 6), (2) SAL + calcitriol (0.9% saline + calcitriol, n = 6), (3) CP (cisplatin 5 mg/kg, n = 8) and (4) CP + calcitriol (cisplatin 5 mg/kg + calcitriol, n = 8). Calcitriol (6 ng/day, Calcijex, Abbvie Laboratories, North Chicago, IL, USA) or vehicle (0.9% NaCl) was administered using mini-osmotic pumps (model 2004, Alzet, Cupertino, CA, USA) implanted subcutaneously under isoflurane anesthesia (Cristalia, Brazil). Calcitriol or vehicle supplementation was started two weeks before the injection of CP and was maintained for another five days after it, corresponding to the period evaluated.
The dose of calcitriol was selected according to previous studies [27,28,60]. None of the rats died after cisplatin administration. All animals were used in the study.

Renal Function Studies

On the fourth day after CP injection, the animals were placed in metabolic cages for 24 h to collect urine samples. On the fifth day after CP injection, the animals were anesthetized (xylazine 0.1 mL/100 g and ketamine 0.05 mL/100 g, i.p.), the aorta was cannulated and blood samples were collected. Renal function was assessed using the 24 h urine and blood samples. Plasma and urinary creatinine were determined by the colorimetric method using picric acid as a chromogen [61]. Urinary and plasma sodium were analyzed using the ion-selective electrode quantification technique (9180 Electrolyte Analyzer, Roche Diagnostics GmbH, Mannheim, Germany, 2004). Fractional sodium excretion was calculated by dividing sodium clearance by creatinine clearance. The results of the plasma and urinary creatinine quantification were used to determine the glomerular filtration rate (GFR).

Serum 25-Hydroxyvitamin D (25-OHD) Levels

We assessed 25-OHD with a direct competitive test based on the chemiluminescence principle (CLIA) (DiaSorin, Liaison®, Saluggia, Italy); this test was performed in the clinical analysis laboratories of the School of Medicine of Ribeirao Preto Hospital and Clinics, which participates in national and international quality assurance certification.

Histological Studies

Histological sections (4 µm thick) were stained using Masson's Trichrome and examined under light microscopy (Axion Vision Rel. 4.3; Zeiss, Oberkochen, Germany). Tubulointerstitial changes (interstitial infiltration of inflammatory cells, atrophy of the renal tubular cells and dilation of the tubular lumen) were evaluated. Lesions in the renal outer medulla were graded [53] on a scale of 0-4 as follows: 0 = normal; 0.5 = small focal areas; 1 = involvement of <10% of the renal outer medulla; 2 = 10-25%; 3 = 25-75%; 4 = extensive damage involving more than 75% of the renal outer medulla. Thirty grid fields measuring 0.1 mm² were evaluated in the renal outer medulla of each kidney (Axion version 4.8.3, Zeiss, Oberkochen, Germany), and the mean values per kidney were calculated. Counterstaining of the sections was then performed with methyl green, followed by dehydration and mounting. The immunoperoxidase staining for ED1 and PCNA was quantified by counting the number of positive cells in the renal outer medulla. JG12 staining was quantified as the number of positive peritubular capillaries in the renal outer medulla. The reaction to cubilin was evaluated by counting the number of intact tubules with a cubilin-marked brush border in the renal outer medulla. Vimentin and α-SMA were semiquantitatively graded in the outer medulla, and the mean score per kidney was calculated. The scores depended on the percentage of a grid field showing positive staining, as follows: 0 = absent or <5% staining, 1 = 5-25%, 2 = 25-50%, 3 = 50-75% and 4 = >75% staining. Thirty consecutive fields (0.1 mm² each) in the outer medulla were evaluated and the average score per kidney was calculated. All fields were analyzed under 400× magnification.

ELISA Studies

Levels of IL-1β and IL-10 were measured in kidney tissue samples, which were stored at −70 °C until analysis. The content was determined using ELISA kits according to the manufacturers' guidelines (Alpco, Keewaydin Drive, Tulane, USA; Pierce, Waltham, MA, USA, respectively).
Statistical Analyses
The Kolmogorov-Smirnov test was used to assess the normality of the dependent variables. For data with a normal distribution, analysis of variance followed by the Newman-Keuls multiple comparison test was applied. For data that were not normally distributed, the Kruskal-Wallis nonparametric test followed by Dunn's posttest was used. Data are presented as mean ± SEM. GraphPad Prism version 9.0 for Windows (GraphPad Software, San Diego, CA, USA) was used to perform the statistical analysis and construct the graphs. Statistical significance was established at p < 0.05.
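A minimal sketch of this testing workflow (normality check, then parametric or nonparametric comparison) using SciPy is shown below; the group data are placeholders, and the Dunn's and Newman-Keuls post hoc tests are omitted because they are not part of SciPy.

```python
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: list of 1-D arrays, one per experimental group."""
    # Kolmogorov-Smirnov test of each (standardized) group against a normal
    # distribution; approximate, since parameters are estimated from the data.
    normal = all(
        stats.kstest((g - g.mean()) / g.std(ddof=1), "norm").pvalue > alpha
        for g in groups
    )
    if normal:
        stat, p = stats.f_oneway(*groups)   # one-way ANOVA
        test = "ANOVA"
    else:
        stat, p = stats.kruskal(*groups)    # Kruskal-Wallis
        test = "Kruskal-Wallis"
    return test, stat, p

rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, scale=1.0, size=8) for m in (0.0, 0.5, 2.0, 1.0)]
print(compare_groups(groups))
```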
6,882.4
2022-12-01T00:00:00.000
[ "Medicine", "Environmental Science", "Chemistry" ]
A numerical study of the influence of radiation on turbulence in a 2D axisymmetric turbulent flame
This paper presents a study of the influence of thermal radiation on turbulence in the simulation of a turbulent, non-premixed methane-air flame. In such a problem, two aspects need to be considered for a precise evaluation of the thermal radiation: the turbulence-radiation interactions (TRI), and the radiative properties of the participating species, which are treated here with the weighted-sum-of-gray-gases (WSGG) model based on recently obtained correlations from the HITEMP2010 database. The chemical reaction rates were taken as the minimum values between the Arrhenius and Eddy Break-Up rates. A two-step global reaction mechanism was employed, while the turbulence modeling was considered via the standard k-ε model. The source terms of the energy equation consisted of the heat generated in the chemical reactions as well as in the radiation exchanges. The discrete ordinates method (DOM) was employed to solve the radiative transfer equation (RTE), including the TRI. Comparisons of simulations with and without radiation demonstrated that radiation influenced turbulence-related properties (root mean square of velocity and temperature fluctuations, and turbulent kinetic energy of the velocity fluctuations). Radiation smoothed the turbulence-related fields, and its influence was more important on the temperature fluctuations than on the velocity fluctuations.
Introduction
Combustion problems involve a number of coupled phenomena, such as fluid mechanics, heat transfer, and chemical kinetics of gaseous species and soot, in which thermal radiation can be the dominant heat transfer mode. Heat transfer directly affects the temperature field and, therefore, chemical kinetics and thermophysical properties (such as density, heat capacity, and viscosity).
An important phenomenon to be considered in turbulent combustion simulations is the so-called turbulence-radiation interactions (TRI). Turbulence and radiation are physical phenomena of high complexity even when analyzed independently. In turbulent flow, it is not possible to deal with these phenomena in an independent way, but only in a coupled form. In turbulent reactive flows, temperature and species concentration fields can undergo high levels of fluctuation, leading to variations in the radiative field. Since the radiative field is present in the energy conservation equation as a source term, and turbulent fluctuations influence radiative transfer, turbulence also influences the temperature and density fields. Therefore, since the density field influences the velocity field and thus the scalar fluctuations, it can be concluded that turbulence influences radiation and radiation influences turbulence. Little attention has been devoted to investigating the influence of radiation on turbulence. To the best of the authors' knowledge, the only investigation dealing with such influence in high temperature flows is Soufiani (1991), where it was found that radiation may smooth the intensity of temperature fluctuations. On the other hand, the influence of turbulence on radiation has received much more attention. The first coupled calculation of radiative transfer in reactive flow to investigate TRI was reported in Song and Viskanta (1987), in which property functions were prescribed for the combustion gases. The most recent literature has focused on analyzing the most important TRI correlations (temperature self-correlation, absorption coefficient-temperature correlation, absorption coefficient self-correlation, and absorption coefficient-radiation intensity correlation). Some examples of coupled investigations were reported in Li and Modest (2002a), Habibi et al. (2007a), Poitou et al. (2012) and Gupta et al. (2013). Results pointed out that the absorption coefficient-temperature correlation and the temperature self-correlation are the most important TRI terms in reactive flows (Li and Modest, 2002a, 2002b; Gupta et al., 2013; Habibi et al., 2007b). Furthermore, it was found in Gupta et al. (2013) and in Modest and Mehta (2006) that the absorption TRI term (the correlation between absorption coefficient and radiation intensity fluctuations, which is neglected in the optically thin fluctuation approximation, OTFA) is important only for optically thick media. An accurate description of radiative heat transfer is of great importance for simulations of combustion systems. Modeling thermal radiation exchanges in combustion gases
(such as water vapor and carbon dioxide) is a difficult task, due to the highly complex dependence of the absorption coefficient on the wavenumber, which is typically characterized by hundreds of thousands or millions of spectral lines. Thus, the integration of the radiative transfer equation (RTE) over the spectrum would be very expensive or even impossible without the use of spectral or global models. As a first simplification, the RTE is frequently solved with the gray gas (GG) model, where the dependence of the absorption coefficient on the wavenumber is simply neglected. In order to provide realistic results, however, more refined models are needed. As one advance over the GG model, the weighted-sum-of-gray-gases (WSGG) model (Hottel and Sarofim, 1967) makes perhaps the best compromise between accuracy and computational demand, especially in global simulations of combustion processes in which the RTE is solved together with the fluid flow, chemical kinetics and energy equations. In the WSGG model, the entire spectrum is represented by a few bands having uniform absorption coefficients, each band corresponding to a gray gas. The weighting coefficients account for the contribution of each gray gas, and can be interpreted as the fractions of the blackbody energy in the spectrum regions where the gray gases are located. In practice, those coefficients are obtained by fitting total emittances computed from experimental gas data, such as those presented in Smith et al. (1982) and Smith et al. (1987). In a recent study, Demarco et al. (2011) assessed several radiative models, such as the narrow band, wide band, GG and global models, the latter including the WSGG and the spectral-line-based WSGG (SLW). According to the authors, the WSGG is very efficient from a computational point of view, and can yield accurate predictions, although significant discrepancies can appear for high soot loadings. Simplified radiative property models, such as the WSGG or GG models, are often used in computational fluid dynamics (CFD) to simulate combustion problems. The main reason is that implementing more sophisticated models may become excessively time consuming when fluid flow, combustion and radiative heat transfer are coupled. This study presents a numerical RANS (Reynolds Averaged Navier-Stokes) simulation of a turbulent non-premixed methane-air flame in a cylindrical combustion chamber, taking into account the radiation effect of non-gray gases by means of WSGG correlations (Dorigon et al., 2013) generated from the HITEMP 2010 database (Rothman et al., 2010) and including TRI (Snegirev, 2004), with the objective of evaluating the influence of radiation on turbulence, since such influence has received much less attention in the literature than the influence of turbulence on radiation.
Problem statement
The physical system consists of the natural gas combustion chamber described in Garréton and Simonin (1994), which presents several challenges for thermal modeling in the sense that the flame is turbulent, with a highly non-isothermal, non-homogeneous medium. Keeping the same conditions as described in Garréton and Simonin (1994), the cylindrical chamber has a length and diameter of 1.7 m and 0.5 m, respectively, as shown in Fig. 1.
Natural gas is injected into the chamber by a duct aligned with the chamber centerline, leading to a non-swirling flame. The burner provides the necessary amounts of air and natural gas as required by the process. In all cases, a fuel excess of 5% (equivalence ratio of 1.05) was prescribed. For a fuel mass flow rate of 0.01453 kg/s at a temperature of 313.15 K, this requires an air mass flow rate of 0.1988 kg/s at a temperature of 323.15 K. The fuel enters the chamber through a cylindrical duct of 0.06 m diameter, while air enters the chamber through a centered annular duct with a spacing of 0.02 m. For such mass flow rates, the fuel and air velocities are 7.23 and 36.29 m/s, respectively. The Reynolds number at the entrance, approximately 1.8×10⁴, indicates that the flow is turbulent. The inlet air is composed of oxygen (23% in mass fraction), nitrogen (76%) and water vapor (1%), while the fuel is composed of 90% methane and 10% nitrogen. The burner power is about 600 kW. The fan and the other external components are not included in the computational domain, although their effects are taken into account through the inlet flow conditions. Buoyancy effects are neglected due to the high velocities provided by the burner. To complement the boundary conditions, Figure 1 depicts the thermal boundary conditions of the cylindrical chamber: symmetry at the centerline, and a prescribed temperature of 393.15 K on the walls. In addition, impermeability and no-slip conditions were assumed on the walls. On the symmetry line, it was assumed that both the radial velocity and the velocity gradient were null. The same procedure was adopted for the turbulent kinetic energy and its dissipation rate, the enthalpy, and the species concentrations on the symmetry line. At the outlet, null diffusive fluxes were assumed for all variables, the axial velocity component was corrected by a factor to satisfy mass conservation, and the radial velocity was imposed to be null. For radiation modeling, the chamber walls and the inlet and outlet ducts were modeled as black surfaces. The temperature at the inlet duct was prescribed as the fuel and oxidant temperatures, while the temperature at the outlet duct was set equal to the outlet flow bulk temperature. In addition, at the inlet, the velocity and concentration profiles were assumed uniform in the axial direction, while the turbulent kinetic energy was computed as k = (3/2)(u_in i)², where i is the turbulence intensity (prescribed as 6% and 10% for the air and fuel streams, respectively) and u_in is the inlet axial mean velocity; for the turbulent kinetic energy dissipation rate, the relation ε = C_μ^(3/4) k^(3/2) / l was employed, where l is the turbulence characteristic length scale (taken as 0.04 m and 0.03 m for the air and fuel streams, respectively). For both the energy and momentum conservation equations, standard wall functions were applied for the treatment of the combustor walls, which take into account the viscous layer dominated by molecular diffusion close to the walls (Patankar, 1980).
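A short sketch of these inlet turbulence boundary conditions, using the flow values quoted above; C_μ = 0.09 is the usual standard k-ε constant and is an assumption consistent with the model named in the text.

```python
C_MU = 0.09  # standard k-epsilon model constant (assumed)

def inlet_k(u_in, intensity):
    """Turbulent kinetic energy: k = 3/2 * (u_in * i)^2."""
    return 1.5 * (u_in * intensity) ** 2

def inlet_epsilon(k, length_scale):
    """Dissipation rate: eps = C_mu^(3/4) * k^(3/2) / l."""
    return C_MU ** 0.75 * k ** 1.5 / length_scale

for stream, u, i, l in [("air", 36.29, 0.06, 0.04), ("fuel", 7.23, 0.10, 0.03)]:
    k = inlet_k(u, i)
    print(f"{stream}: k = {k:.3f} m2/s2, eps = {inlet_epsilon(k, l):.3f} m2/s3")
```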
Mathematical formulation
The proposed work is stated as follows: considering a steady turbulent non-premixed methane-air flame in a cylindrical chamber, compute the temperature, species concentration and velocity fields, and verify the influence of radiation on the main turbulence parameters, taking into account the WSGG model based on HITEMP 2010 data (Dorigon et al., 2013) and TRI effects (Snegirev, 2004).
Governing equations
Conservation equations for mass, momentum in the axial and radial directions, the k-ε turbulence model, energy, and chemical species (CH4, O2, CO2, CO, H2O) for steady low-Mach flow in 2D axisymmetric coordinates are solved. Detailed information about the governing equations can be found in Centeno et al. (2014a).
Combustion kinetics
As a basic assumption, it is considered that the combustion process occurs at finite rates, with methane oxidation taking two global steps: CH4 + 3/2 O2 → CO + 2 H2O and CO + 1/2 O2 → CO2. The rate of formation or consumption, R_α,c, of each α-th species in each c-th reaction (there are two reactions in the present study, so c = 1, 2) is obtained by the Arrhenius-Magnussen model (Eaton et al., 1999; Turns, 2000; Fluent, 2009), in which the rate of formation or consumption of each chemical species is taken as the smallest value between those obtained from Arrhenius kinetics and from Magnussen's equations (Eddy Break-Up) (Magnussen and Hjertager, 1977). The investigation in Silva et al. (2007), which considered the same combustion chamber, assessed the relative importance of the combustion kinetics by computing the Damköhler number, and found that the combustion process is governed by Arrhenius rates in the flame core and by Magnussen's rates in all other regions. This formulation was also successfully employed in Silva et al. (2007), Nieckele et al. (2001) and Centeno et al. (2013, 2014a). The average volumetric rate of formation or consumption of the α-th chemical species, R_α, which appears in both the energy and species conservation equations, is then computed from the summation of the volumetric rates of formation or consumption over all the c-th reactions in which the α-th species is present, i.e., R_α = Σ_c R_α,c.
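The sketch below illustrates the minimum-rate selection described above; the Arrhenius parameters and Eddy Break-Up constants are generic placeholders, not the values used in the article.

```python
import math

R_U = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, Ea, T, conc_product):
    """Generic Arrhenius rate: A * exp(-Ea / (R T)) * product of concentrations."""
    return A * math.exp(-Ea / (R_U * T)) * conc_product

def eddy_breakup_rate(rho, eps_over_k, Y_fuel, Y_ox, s, A_ebu=4.0):
    """Magnussen Eddy Break-Up rate, limited by the deficient species."""
    return A_ebu * rho * eps_over_k * min(Y_fuel, Y_ox / s)

def reaction_rate(**kw):
    """Arrhenius-Magnussen model: take the smaller of the two rates."""
    return min(arrhenius_rate(kw["A"], kw["Ea"], kw["T"], kw["conc"]),
               eddy_breakup_rate(kw["rho"], kw["eps_over_k"],
                                 kw["Y_fuel"], kw["Y_ox"], kw["s"]))

# Placeholder values for illustration only
print(reaction_rate(A=2.8e9, Ea=2.0e5, T=1800.0, conc=1.0,
                    rho=0.2, eps_over_k=50.0, Y_fuel=0.01, Y_ox=0.05, s=4.0))
```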
The weighted-sum-of-gray-gases (WSGG) model
The original formulation of the WSGG model (Hottel and Sarofim, 1967) consists of expressing the total gas emittance as a weighted sum of gray-gas emittances. The emission weighting factors, a_j(T), and the absorption coefficients, k_j, for the j-th gray gas are in general determined from the best fit of the total emittance, with the constraint that the a_j must sum to 1. From a more general point of view, the WSGG model can be applied as a non-gray gas model (Modest, 1991), solving the radiative transfer equation (RTE) for the N_G gray gases plus one (j = 0, representing spectral windows where H2O and CO2 are transparent to radiation, i.e., a clear gas): dI_j/ds = k_j [a_j(T) I_b − I_j], (1) in which the emission weighting factor a_j(T) is given by a_j(T) = Σ_{i=1..N} b_{j,i} T^(i−1), (2) with j varying from 0 to N_G, and I = Σ_{j=0..N_G} I_j. The functional dependence of the weighting factors on temperature is generally fitted by polynomials, Eq. (2), where the polynomial coefficients (b_{j,i}) as well as the absorption coefficients for each gray gas can be tabulated. For H2O/CO2 mixtures, these coefficients are generally established for particular ratios of the partial pressures, p_H2O/p_CO2, which could limit the application of the method. In the present study, the weighting-factor polynomial coefficients and the absorption coefficients were taken from Dorigon et al. (2013) for p_H2O/p_CO2 = 2. Such WSGG correlations were fitted from HITEMP 2010 (Rothman et al., 2010), which is the most recent molecular spectroscopic database for high temperatures. In the same study, Dorigon et al. (2013) compared results obtained with the new coefficients against line-by-line (LBL) benchmark calculations for one-dimensional non-isothermal and non-homogeneous problems, finding consistently satisfactory agreement between the LBL and WSGG solutions, with maximum and average errors of about 5% and 2% for different test cases. Centeno et al. (2013) tested the coefficients presented in Dorigon et al. (2013) against the older ones presented in Smith et al. (1982) for an axisymmetric cylindrical combustion chamber, and found that the new coefficients provided better agreement with experimental data. It is assumed here that the contribution from other radiating species, such as CO and CH4, is negligible. The contribution from CO in the combustion gases is negligible, since its molar concentration is not expected to exceed 0.1%, while the contribution from CH4 is even lower (Coelho et al., 2003).
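A minimal sketch of the WSGG machinery described above, evaluating the polynomial weighting factors a_j(T) of Eq. (2) and a total emittance of the form ε = Σ_j a_j(T)(1 − exp(−k_j p L)); the coefficient arrays below are placeholders, not the Dorigon et al. (2013) correlations.

```python
import numpy as np

# Placeholder WSGG data for illustration only; real correlations
# (e.g., Dorigon et al., 2013, for p_H2O/p_CO2 = 2) tabulate these values.
K_J = np.array([0.4, 12.0])                   # pressure absorption coefficients (1/(atm m))
B_JI = np.array([[0.25, 1.0e-4, -3.0e-8],     # polynomial coefficients b_{j,i}
                 [0.35, -5.0e-5, 1.0e-8]])

def weighting_factors(T):
    """a_j(T) = sum_i b_{j,i} * T^(i-1), per Eq. (2); clipped to stay non-negative."""
    powers = T ** np.arange(B_JI.shape[1])    # [1, T, T^2]
    return np.clip(B_JI @ powers, 0.0, None)

def total_emittance(T, p_atm, L):
    """eps = sum_j a_j(T) * (1 - exp(-k_j * p * L)); the clear gas
    (j = 0, k = 0) contributes nothing to emission."""
    a = weighting_factors(T)
    return float(np.sum(a * (1.0 - np.exp(-K_J * p_atm * L))))

print(total_emittance(T=1500.0, p_atm=0.3, L=1.0))  # illustrative numbers
```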
Turbulence-radiation interactions
The radiative transfer equation (RTE) applies to instantaneous quantities that fluctuate in a turbulent flow, while a RANS turbulence model can only provide time-averaged (mean) quantities and, possibly, their mean square fluctuations. Considering the spectrally integrated form of the RTE and time-averaging it (with ⟨·⟩ denoting the time average) results in: d⟨I⟩/ds = −⟨κI⟩ + (σ/π)⟨κT⁴⟩. (3) The absorption coefficient-radiation intensity correlation, i.e., the first term on the right-hand side of Eq. (3), is expressed as ⟨κI⟩ = ⟨κ⟩⟨I⟩ + ⟨κ'I'⟩. Several studies have neglected the second term of this expression (⟨κ'I'⟩) based on the arguments of Kabashnikov and Kmit (1979), known as the optically thin fluctuation approximation (OTFA), which relies on the assumption that the absorption coefficient fluctuations are weakly correlated with the radiation intensity fluctuations, i.e., ⟨κ'I'⟩ ≈ 0, if the mean free path for radiation is much larger than the turbulence integral length scale. In the second term on the right-hand side of Eq. (3), which is proportional to ⟨κT⁴⟩, the instantaneous values of κ and T are correlated in a turbulent flow. In the present study, the approximation proposed in Snegirev (2004) is applied, in which both the absorption coefficient-temperature correlation and the temperature self-correlation are considered. These two TRI correlations were found to be the most important in reactive flows (Li and Modest, 2002a, 2002b; Habibi et al., 2007b; Gupta et al., 2013). Decomposition of the temperature and absorption coefficient into average and fluctuating components, T = ⟨T⟩ + T' and κ = ⟨κ⟩ + κ', followed by time averaging and neglecting higher-order terms, allows ⟨κT⁴⟩ to be written as (Snegirev, 2004): ⟨κT⁴⟩ ≈ ⟨κ⟩⟨T⟩⁴ (1 + 6 C_TRI ⟨T'²⟩/⟨T⟩²) + 4⟨T⟩³ (∂⟨κ⟩/∂⟨T⟩) ⟨T'²⟩, (4) which accounts for the temperature self-correlation and the absorption coefficient-temperature correlation. The value of C_TRI was initially suggested by Snegirev (2004) from data fitting of ⟨T⁴⟩/⟨T⟩⁴ against ⟨T'²⟩/⟨T⟩², as presented in Burns (1999), followed by an adjustment leading to a value of 2.5 for C_TRI. To evaluate ⟨T'²⟩, required for Eq. (4), an additional transport equation for the temperature fluctuation variance is solved.
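The sketch below evaluates this TRI correction for a given mean temperature, temperature variance, and a user-supplied absorption coefficient function; the Planck-mean absorption curve used here is a made-up placeholder, and the formula follows the reconstruction of Eq. (4) given above, which is itself an assumption about the exact published form.

```python
def kappa_mean(T):
    """Placeholder mean absorption coefficient (1/m) vs temperature (K)."""
    return 0.3 * (1000.0 / T)

def tri_kappa_T4(T_mean, T_var, c_tri=2.5, dT=1.0):
    """Snegirev-type closure for <kappa*T^4> (Eq. (4) as reconstructed):
    temperature self-correlation plus kappa-T correlation terms."""
    k = kappa_mean(T_mean)
    dk_dT = (kappa_mean(T_mean + dT) - kappa_mean(T_mean - dT)) / (2.0 * dT)
    self_corr = k * T_mean ** 4 * (1.0 + 6.0 * c_tri * T_var / T_mean ** 2)
    kT_corr = 4.0 * T_mean ** 3 * dk_dT * T_var
    return self_corr + kT_corr

no_tri = kappa_mean(1600.0) * 1600.0 ** 4
with_tri = tri_kappa_T4(T_mean=1600.0, T_var=150.0 ** 2)
print(f"TRI enhancement factor: {with_tri / no_tri:.3f}")
```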
Results and discussions
The set of equations was solved using the finite volume method (Patankar, 1980) by means of a Fortran code. The power-law scheme was applied as the diffusive-advective interpolation function on the faces of the control volumes. The pressure-velocity coupling was handled by the SIMPLE method. The resulting system of algebraic equations was solved by the TDMA algorithm, with block correction in all equations except those for k and ε. A grid with 140 volumes in the axial direction and 48 volumes in the radial direction was used. The numerical accuracy was checked through the grid convergence index (GCI) method (Roache, 1994; Celik et al., 2008), comparing predicted results calculated using this grid with results obtained using coarser grids. As found, the 140×48 grid provided grid-independent results and required reasonable computational effort. The grid is uniformly spaced in both the radial and axial directions. The radiative transfer calculations were performed with the discrete ordinates method using the same spatial grid and the S6 quadrature. Convergence criteria were based on the imposition that the normalized residual mass in the SIMPLE method was below 10⁻⁸; for the other equations, the maximum relative variation between iterations was 10⁻⁶. In order to study the effect of gas radiation heat transfer inside the combustion chamber, and so to analyze its influence on turbulence, two different scenarios were considered. In the first scenario, radiation was completely ignored, while in the second scenario radiation was fully considered, including TRI. Comparisons were made to verify how the different radiative scenarios affect some turbulence-related parameters, such as the root mean square (RMS) of the temperature fluctuations and the turbulent kinetic energy of the velocity fluctuations. Figures 2 and 3 present fields of the turbulent kinetic energy of the velocity fluctuations and of the root mean square of the temperature fluctuations (computed as the square root of the temperature fluctuation variance: T'_rms = √⟨T'²⟩), respectively, computed in both scenarios, i.e., neglecting and considering radiation. These two turbulence-related properties were selected to verify the influence of radiation on turbulence. As observed, the different radiative scenarios investigated in the present work did not importantly affect those turbulent properties. However, the turbulent fields were smoothed when comparing the results obtained without radiation (fields "a" in Figures 2 and 3) against those obtained with radiation (fields "b" in Figures 2 and 3), in agreement with the findings in Soufiani (1991). Additionally, Figures 4 and 5 present profiles of the root mean square of the temperature fluctuations and of the root mean square of the velocity fluctuations (considering fully developed isotropic turbulence, the RMS of the velocity fluctuations can be computed as the square root of the turbulent kinetic energy: v'_rms = √k). In these figures, profiles are shown along the axial direction at the chamber symmetry line and along the radial direction at the axial position z = 1.3 m. It can be observed that the influence of radiation on those turbulence-related properties was small, but not negligible (for example, a difference of nearly 70 K in the RMS temperature fluctuation was noticed at r = 0.0 m and z = 1.3 m). Radiation tended to smooth the turbulent fluctuations of temperature and velocity. Moreover, the influence of radiation was more pronounced on the temperature fluctuations than on the velocity fluctuations; such behavior can be especially important in problems involving transition from laminar to turbulent flow, which in general is determined considering isothermal flows.
Conclusions
This study presented an analysis of the influence of thermal radiation on turbulence in a turbulent non-premixed methane-air flame in a cylindrical combustion chamber. The radiation field was computed with the WSGG model using recently obtained correlations (Dorigon et al., 2013) based on the up-to-date HITEMP2010 database and considering TRI effects (Snegirev, 2004). A two-step global reaction mechanism was used, and turbulence was modeled via the standard k-ε model. The RTE was solved employing the discrete ordinates method. This work showed the influence of radiation on turbulence in a combustion problem by means of two scenarios: radiation neglected in the calculations, and radiation included in the calculations. Comparison of the results obtained from the different radiative scenarios showed that radiation did not importantly influence the turbulence-related properties (root mean square of the temperature and velocity fluctuations, and turbulent kinetic energy of the velocity fluctuations), but such influence, despite being small, was not negligible. Radiation tended to smooth the turbulent fields, in agreement with results reported in the literature for high temperature flows. The influence of radiation on the temperature fluctuations was more important than its influence on the velocity fluctuations. Possible future advances in the radiation-turbulence analysis include testing different turbulence models (other than the standard k-ε) and performing simulations with a different turbulence methodology (other than RANS).
Figure 2. Turbulent kinetic energy fields of the velocity fluctuations: (a) radiation neglected; (b) radiation computed. Source: figure extracted from Centeno et al. (2014b), courtesy of ABCM, Rio de Janeiro, Brasil.
Figure 4. RMS of the temperature fluctuations and of the velocity fluctuations: profiles at the chamber symmetry line (axial direction). Source: figure extracted from Centeno et al. (2014b), courtesy of ABCM, Rio de Janeiro, Brasil.
Figure 5. RMS of the temperature fluctuations and of the velocity fluctuations: profiles at z = 1.3 m (radial direction). Source: figure extracted from Centeno et al. (2014b), courtesy of ABCM, Rio de Janeiro, Brasil.
5,273.8
2016-03-08T00:00:00.000
[ "Physics", "Engineering" ]
Bioinspired rational design of bi-material 3D printed soft-hard interfaces
Durable interfacing of hard and soft materials is a major design challenge caused by the ensuing stress concentrations. In nature, soft-hard interfaces exhibit remarkable mechanical performance, with failures rarely happening at the interface. Here, we mimic the strategies observed in nature to design efficient soft-hard interfaces. We base our geometrical designs on triply periodic minimal surfaces (i.e., Octo, Diamond, and Gyroid), collagen-like triple helices, and randomly distributed particles. A combination of computational simulations and experimental techniques, including uniaxial tensile and quad-lap shear tests, is used to characterize the mechanical performance of the interfaces. Our analyses suggest that smooth interdigitated connections, compliant gradient transitions, and either decreasing or constraining strain concentrations lead to simultaneously strong and tough interfaces. We generate additional interfaces where the abovementioned toughening mechanisms work synergistically to create soft-hard interfaces with strengths approaching the upper achievable limit and toughness values enhanced by 50%, as compared to the control group. At the same time, we used the simple parametric equations of a helix for the design of the collagen-inspired specimens 2. We defined these equations in the Cartesian system as: x = c·t; y = r·sin(t + φ); z = r·cos(t + φ) ... (4), where t ∈ [0, 2π) defines the unit-cell discretization, c = 2.032/(2π) mm is a constant that describes the separation of the helix, r = 0.508 mm is the radius of the helix, and φ is the phase angle that defines the rotation of each of the three helices in the design. After repeating each of these cubic 2.032 mm edged unit cells across the total dimensions of the interface, we used a thickness function to define the thickness of the helical beams, thereby achieving the proper material discretization.
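A small sketch of how such triple-helix centerlines could be generated from the parametric equations above; the symbol names (t, c, r, φ) follow the reconstruction given in the text, and the three phase angles are assumed to be evenly spaced.

```python
import numpy as np

C = 2.032 / (2.0 * np.pi)   # helix separation constant (mm), as reconstructed above
R = 0.508                   # helix radius (mm)

def helix_points(phase, n=100):
    """Centerline points of one helix of the collagen-inspired unit cell."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack((C * t, R * np.sin(t + phase), R * np.cos(t + phase)))

# Three helices, with phases assumed evenly spaced (0, 120, 240 degrees)
triple_helix = [helix_points(phase) for phase in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
print(triple_helix[0][:3])  # first few centerline points of the first helix
```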
We processed an additional set of simulations to study the effects of the number of modeled unit cells on the simulation results. In particular, we tried to establish whether models including only a single unit cell could capture the behavior of the actual experimental tensile tests. To this end, we prepared simulations of three different designs: the non-graded control (gradient length of 0 mm), the long OC (12 mm), and the long PA (12 mm) (Supplementary Figure 5). We selected these designs because they are quadrant-symmetric, allowing us to simulate only two unit cells of the interface (i.e., one along the thickness and two across the width). These simulations were prepared under the same conditions as described in the main article; however, symmetric boundary conditions were additionally applied on the x = 0 and y = 0 planes. The estimations made by the quadrant-symmetric (Quad) model of the control group were remarkably similar to those of the single unit cell (UC) models and to the DIC-measured strain distributions (Supplementary Figure 5a). In both types of simulations, the deformations were highly concentrated at the interface, with the Quad and UC estimations showing peak values of 2.28 and 1.7, respectively. Although the absolute strain values differed between the two types of models, the peaks occurred at the corners of the specimens regardless of the model type. The corners of the specimens are the locations where high values of shear deformation tended to be present. These peak strain values also appeared in the DIC measurements, where, as expected, the value was lower than those predicted computationally. This is due to the averaging effects caused by the lower resolution of the DIC measurements as compared to the computational models, and to the potential photopolymer mixing prior to curing. Finally, both the UC and Quad simulations showed diminished values of the von Mises strains at the center of the cross-sections of the specimens, which is expected because, in this region, only volumetric strains occur. In the case of the long OC geometries, the strain patterns of the first layer of the Quad and UC simulations were almost identical to those of the DIC measurements (Supplementary Figure 5b). Similarly, the patterns of the von Mises strain distribution within the first layer of the specimens presented the same transitions as observed in the experimental results, indicating that both types of models capture the essential features of the experimental observations. Consistent strain patterns were also present within the layers exhibiting the maximum strain value, regardless of the type of FEM model. The FEM-predicted maxima were 6.36 and 4.53 for the Quad and UC simulations, respectively. Although the magnitudes of these strain peaks differed, the shapes of the maximum von Mises strain plots followed the same trends, with the strain peaks occurring in the regions with a discontinuous transition from the hard to the soft material. Similarly, the elastic modulus functions estimated by the UC and Quad models were similar both in values and in trends, indicating that the number of modeled unit cells does not alter the essence of the predicted mechanical behavior. Considering the long PA simulations, both the Quad and UC models estimated somewhat higher peak strain values within the top layer of the interface than those measured through DIC (Supplementary Figure 5c). As discussed in the main text of the article, the absence of these peaks in the DIC images was likely due to its lower resolution and to the potential material mixing between single droplets of the photopolymers. Despite these differences, the von Mises strain values and their localizations were remarkably similar in the Quad and UC models. As for both other designs, the strain values predicted by the Quad models were higher than those resulting from the UC simulations (maximum FEM-predicted strains of 1.84 and 1.79, respectively). In both cases, such strains were localized within the middle region of the interfaces and along the edges of the specimens. As described in the main text, the strain peaks observed within the regions made mostly from the hard material are of relatively minor importance for the PA designs, because any cracks initiated in these regions are likely to be arrested. Moreover, the plots of the von Mises strain were remarkably similar in the regions closest to the soft material, indicating that the maximum strains estimated by the UC model are representative of the full specimens in the regions where failure is most likely to occur. As in the case of the long OC specimens, little to no difference was present between the trends seen in the elastic modulus plots corresponding to the UC and Quad models, further corroborating the claim that UC models can successfully capture the main mechanisms behind the observed differential mechanical behaviors of the various considered groups. Overall, the main differences observed between the UC and Quad models were the maximum FEM-predicted strain values, which were somewhat higher for the quadrant-symmetric simulations.
The localization of these peaks was, however, highly consistent between the two types of models. Moreover, the differences in the peak strain values tended to disappear in the regions closest to the soft material, which are the most critical locations (because failure due to a sub-optimal interface is likely to occur within those regions). Finally, the similarity between the elastic modulus functions predicted by both types of models allowed us to conclude that the UC FEM models are representative of the full-size models, justifying the application of such computationally efficient simulations for the analysis of the remaining designs.
Supplementary Note 3
An additional set of measurements was performed to compare the three-dimensional behavior of different interface designs under loading and to corroborate the performance of our computational models. For these experiments, we used a TESCAN CoreTOM micro-CT scanner (TESCAN, Brno, Czech Republic) for micro-CT imaging of a non-graded (Ctrl), a PA (12 mm), and an OC (12 mm) design. To enable in situ assessment of the deformation behavior of the specimens, we scanned the specimens under two loading conditions: without load and under a 6 N load (Supplementary Figure 6a). A load of 6 N was selected because this was the load used for the Quad FEM simulations presented in Supplementary Note 2. The specimens were scanned over a 360° rotation with a 3D pixel size of 31 µm and an angular rotation step of 0.25°. The CT images were acquired at 120 kV and 260 µA, with a total duration of ~5 min per imaging cycle. The image analysis of the specimens was performed using the software Fiji (v1.53). Since no internal material pattern was present within the 3D images and no difference was present between the greyscale values of the hard and soft material regions, no digital volume correlation (DVC) analysis could be performed. However, we measured the cross-sectional areas along the length of the specimens and compared them with their computational equivalents as per the FEM simulations reported in Supplementary Note 2 (Supplementary Figure 5). When deformed, the Ctrl design presented a region of sharp deformation at the location of the soft-hard transition (Supplementary Figure 6b). Moreover, its surface area pattern under loading showed an abrupt transition in magnitude at the same region, confirming the presence of high strain concentrations at the edges of the interface. Contrary to these results, the PA and OC designs did not show the aforementioned high deformations at the corners of the interface, and their surface area plots showed a smooth transition at the interface edges. The measured surface area curves were highly similar to those estimated with the FEM simulations, with smooth transitions present for the PA and OC designs and the sharp transition characterizing the Ctrl one. Furthermore, two gaps were detected in the middle of the interface when loading the OC design (Figure S4d). The locations of these gaps corresponded to the regions where the highest strain concentrations of this study were found, namely at the hard material discontinuity of the long OC interface. Overall, these results confirm that our FEM models and the study of strain concentrations at the soft material regions are capable of capturing the internal 3D behavior of the presented 3D soft-hard interface designs.
Moreover, these results call for future studies with in situ micro-CT scanning under continuous loading, where the 3D-printing equipment is modified to include micro- or nanoparticles that can be visualized with a micro-CT scanner, allowing the internal structure of the specimen to be recognized and DVC to be performed.
Supplementary Note 4
We performed additional simulations to determine the failure modes that can occur within the various soft-hard interface designs. These simulations were performed for the control (i.e., non-graded), short (4 mm) GY, and long (12 mm) CO, PA, and GP designs with the same meshes as the ones used for the initial hyperelastic simulations. In this case, however, we introduced plasticity and ductile damage with element deletion into the material models used for the soft material, allowing us to study the failure route of each design. These failure considerations were chosen since they yield an efficient and simple method to analyze the failure mode of the interfaces at multiple voxel locations, without the need to predefine a single propagating crack or to include cohesive zone elements between every hard and soft voxel of the interface. Due to software (i.e., Abaqus) limitations, such models required changing the elastic material properties of the soft material from hyperelastic to linear elastic (i.e., an elastic modulus E_soft = 1.3 MPa and ν_soft = 0.48) and the type of analysis from quasistatic to dynamic explicit. The yield stress of the soft material was set to 1.2 MPa under a von Mises yield criterion, followed by a linear strain hardening modulus of 0.58 MPa, with element deletion set for a plastic strain of 5% (Supplementary Figure 10a). These values are based on the existing coarse-graining models for these materials 3.
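A sketch of this bilinear elastic-plastic soft-material law with an element-deletion cutoff, using the parameters quoted above; the functional form is the usual linear-hardening idealization, assumed here rather than taken from the authors' input files.

```python
E_SOFT = 1.3        # elastic modulus (MPa)
SIGMA_Y = 1.2       # von Mises yield stress (MPa)
H = 0.58            # linear strain-hardening modulus (MPa)
EPS_P_FAIL = 0.05   # plastic strain at element deletion

def soft_stress(strain):
    """Uniaxial stress (MPa) for the idealized soft material; None once deleted."""
    eps_yield = SIGMA_Y / E_SOFT
    if strain <= eps_yield:
        return E_SOFT * strain                     # linear elastic branch
    # past yield: total strain splits into elastic and plastic parts
    eps_plastic = (strain - eps_yield) * E_SOFT / (E_SOFT + H)
    if eps_plastic > EPS_P_FAIL:
        return None                                # element deleted
    return SIGMA_Y + H * eps_plastic               # linear hardening branch

for eps in (0.5, 0.95, 0.97, 1.0):
    print(eps, soft_stress(eps))
```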
Regarding the boundary conditions, we applied a displacement to the hard edge of the specimens until mesh separation, while applying symmetric boundary conditions to the soft edge of the mesh. Following the post-processing stage (Supplementary Videos 10 to 14), we found that the stress vs. strain curves of all the designs show similar increases in stress followed by failure due to element deletion (Supplementary Figure 10b). The long GP design showed the highest values of the strain energy density, while the short GY design underperformed when compared to the other designs (Supplementary Figure 10c). Regarding the plastic energy dissipation prior to failure, the non-graded design showed hardly any capacity for energy dissipation (Supplementary Figure 10d), instead featuring high strain concentrations at the edges of the interface, which caused its sudden failure (Supplementary Figure 10e). Similarly, in the short GY models, the sharp-ended features at the edges of the specimens introduced strain concentrations that caused the separation of the mesh (Supplementary Figure 10e), confirming the design guideline presented in the main text that recommends avoiding such features. It is, however, important to emphasize that the short GY designs showed the lowest strain concentration values among all the TPMS designs in the initial hyperelastic FEM simulations. Indeed, potential photopolymer blending effects within the 3D-printed specimens may have ameliorated the negative effects of these features, explaining their improved performance in the initial experiments. The long CO and GP designs showed comparatively high strains along their interfaces, with regions of relatively low strain concentration values that resulted in multiple regions of plastic deformation, confirming how relatively compliant transitions can help accommodate strains through plastic energy dissipation and prevent the catastrophic failure of more periodic interfaces. Moreover, both the PA and GP designs showed multiple regions of soft material separation along their interfaces prior to failure, where cracks were arrested by neighboring hard material voxels. These observations confirm how particle-based designs provide the additional toughening mechanism of crack deflection. However, it is important to mention that for such a mechanism to be applicable, the distribution of the randomly positioned particles must be periodic 4, unlike in these simulations, where particles were present only in a very small cross-sectional area. All in all, this comparative study allowed us to examine how the presented design guidelines, such as avoiding sharp-ended geometries, operate. We further characterized the zone in which the different photopolymer phases were mixed, and performed additional ductile fracture computational simulations that considered photopolymer blending. The mechanical characterization of the soft-hard mixing zone was performed using atomic force microscopy (AFM, JPK Nanowizard 3, Berlin, Germany) on a non-graded specimen (Supplementary Figure 12a). We utilized a TESP-HAR probe (Bruker, Billerica, Massachusetts, USA) with a length of 125 μm, a width of 40 µm, a thickness of 4 µm, and a nominal spring constant of 42 N/m to measure the change in elastic modulus at the transition between the hard and the soft 3D printed material. The calibration of the probe was performed using the thermal tuning method and resulted in a sensitivity of 26.63 nm/V and a stiffness constant of 49.953 N/m. We indented an area of 250 × 100 μm² with an indenter separation of 0.39 μm, yielding 250 × 100 indentation force-displacement curves. The indentation curves were acquired for a force setpoint in the range of 1-4 μN. For each curve, the elastic modulus was calculated using the Hertz-Sneddon model, assuming the AFM tip to be a flat cylinder with a tip radius of 10 nm (the nominal value), with the relation F = 2 [E/(1 − ν²)] R δ, where E is the elastic modulus, ν is the Poisson's ratio, R is the nominal tip radius, and δ is the displacement.
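Under the flat-punch relation reconstructed above, the modulus follows from the slope of each force-displacement curve; the sketch below assumes that relation and uses synthetic data.

```python
import numpy as np

R_TIP = 10e-9   # nominal tip radius (m)
NU = 0.45       # assumed Poisson's ratio of the polymer

def modulus_flat_punch(force, displacement):
    """E from F = 2*[E/(1-nu^2)]*R*delta, via a least-squares slope fit."""
    slope, *_ = np.linalg.lstsq(displacement.reshape(-1, 1), force, rcond=None)
    return float(slope[0]) * (1.0 - NU ** 2) / (2.0 * R_TIP)

# Synthetic indentation curve: 0-100 nm indentation at an assumed E = 1 GPa
delta = np.linspace(0.0, 100e-9, 50)
force = 2.0 * (1e9 / (1.0 - NU ** 2)) * R_TIP * delta
print(f"recovered E = {modulus_flat_punch(force, delta) / 1e9:.2f} GPa")
```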
After processing the data, a sigmoidal function was used to characterize the mixing zone (Supplementary Figure 12b), similar to what other studies have proposed 5. This function is defined as E(x) = E_soft + (E_hard − E_soft) / (1 + e^(−k(x − x₀))), where E(x) is the elastic modulus along the interface, E_hard and E_soft are the hard and soft material elastic modulus values, respectively, and k and x₀ are constants. E_hard and E_soft were calculated as the average values over the farthest 10 μm from the interface on their respective sides. The values of k and x₀ were obtained by fitting the function using the nonlinear least-squares method. The size of the mixing zone was calculated as the distance between the regions exhibiting elastic moduli corresponding to 0.5% and 99.5% of the elastic modulus of the hard material in the fitted sigmoid. The obtained size was 151.56 μm, which is in good agreement with the ~150 μm transition reported in a previous study 5. Moreover, 90% of the material transition (i.e., 5% to 95% hard material) occurs over a span of 84.37 μm, which matches the nominal dimension of each voxel (i.e., 84 μm).
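A sketch of this nonlinear least-squares fit with SciPy, using the sigmoid form reconstructed above and synthetic modulus data (all values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_modulus(x, e_soft, e_hard, k, x0):
    """E(x) across the soft-hard interface (logistic transition)."""
    return e_soft + (e_hard - e_soft) / (1.0 + np.exp(-k * (x - x0)))

# Synthetic AFM-style profile: positions in micrometers, moduli in MPa
x = np.linspace(-125.0, 125.0, 250)
rng = np.random.default_rng(2)
e_true = sigmoid_modulus(x, 1.3, 1000.0, 0.07, 0.0)
e_meas = e_true * (1.0 + 0.05 * rng.standard_normal(x.size))

popt, _ = curve_fit(sigmoid_modulus, x, e_meas, p0=[1.0, 900.0, 0.1, 0.0])
e_soft, e_hard, k, x0 = popt
# Mixing-zone size, approximated as the logistic-fraction span 0.5%-99.5%
half_span = np.log(0.995 / 0.005) / k
print(f"fitted mixing zone: {2 * half_span:.1f} um")
```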
Additional ductile fracture analyses were performed to unravel the potential effects of photopolymer blending on the performance of soft-hard interfaces. Toward this end, we introduced an algorithm that allowed us to create an intermediate phase between the soft and hard phases throughout the entire geometry of the specimens in our computational models (Supplementary Figure 12c). For this intermediate phase, we first changed the element-to-voxel ratio from 1 to 27 (3×3×3), to account for the potential blending at the edges of each voxel 5. We then updated the hard-material fraction of each element (initially 0 for the pure soft and 1 for the pure hard material) by averaging it with those of its adjacent elements across the ijk directions. The resulting hard-material fractions, which were in accordance with the mixing zone fit presented above (Supplementary Figure 12d), were then used as input for obtaining the mechanical properties of each element. For this purpose, coarse-graining models for large deformations similar to those existing in the literature were used as the constitutive models 3. We applied this process to the non-graded control design and the long PA one, and compared their resulting mechanical responses with those of their non-blended variants. Since increasing the element-to-voxel representation by a factor of 27 would make the computational models infeasible to run, we restricted the meshes to interfaces of 12×12 voxels in cross-sectional area and 96 voxels in length, yielding 373,248 C3D8 elements per mesh. The resulting four simulations were processed under the same conditions as described in Supplementary Note 4 of this document. After post-processing the results (Supplementary Videos 15-18), the blended versions of both the control and the PA gradients showed improvements in their ultimate stress and strain energy density (Supplementary Figure 12e-g). For the non-graded design, the strength increased from 0.382 MPa to 0.399 MPa, while the strain energy density increased from 20.04 kJ/m³ to 22.37 kJ/m³. As for the PA design, the strength increased from 0.41 to 0.45 MPa, whereas the strain energy density improved from 35.96 to 44.6 kJ/m³. The relatively larger improvements observed in the particle-based design can be partially attributed to the increased contact surface area between the polymer phases. It is, however, important to acknowledge that the separation of the blend phases highly depends on the selected coarse-graining model. In this case, we assumed that the ultimate strain before the separation of the intermediate phase is given by the rule of mixtures between the hard and soft material phases. This assumption was necessary because no experimental data exist regarding the ultimate strain of the sub-voxel blended phase. That said, the overall strain distributions of the blended simulations and the failure modes of both designs were remarkably similar regardless of whether blending was implemented in the models (Figure S13h). In the case of the PA designs, multiple non-critical cracks were present prior to failure in both simulations, indicating that the crack deflection capacity of this design remains present regardless of whether blending is implemented in the models. In short, these analyses showed that while photopolymer mixing plays a potential toughening role by reducing interfacial strain concentrations, the overall behavior and the toughening mechanisms achieved by varying the geometry of the interface remain consistent between the models incorporating such a blending effect and those neglecting it. To properly account for the additional toughening mechanisms resulting from blending, one would need to perform additional experimental characterizations of the effects of photopolymer blending on these complex particle-based composites, building on this and other existing analyses of soft-hard slab connections 5,6. In an additional set of fracture analyses, we evaluated the effects of varying the length of the gradient as well as permuting the interface geometry present at the end of the crack tip.
We defined a symmetrical hard-soft-hard interface geometry for the fracture toughness test specimens, with nominal in-plane dimensions of 24.4 × 24.4 mm² (length × width, L × W) and an out-of-plane thickness of 24.4 mm (B) to promote critical crack propagation (Supplementary Figure 13a). The crack, with an initial length of a₀ = 6.1 mm, was prescribed at the soft end of the interface, which was the location where cracks propagated for most of the inefficient interfaces under uniaxial tensile deformation. The selected FG geometries were GY, PA, and GP (gradient length of 8.13 mm), plus a non-graded control (Ctrl). After 3D-printing three specimens of each design, we tested them under mode I deformation using the same equipment and setup as the tensile tests described in Section 2.3 of the main manuscript. For post-processing, we calculated the fracture toughness of each test in terms of the J-integral, J₀, utilizing the standard for ductile homogeneous materials, ISO 12135. This standard expresses J₀ in terms of the maximum recorded force of the test, F_max; the elastic modulus E, taken as the slope of the stress-strain curve of the crosshead measured between 0% and 50% of the maximum stress; the Poisson's ratio ν of the specimen, assumed to be 0.45 for simplicity; and geometrical factors of the specimen. The remaining term, U_p, represents the plastic strain energy of the system and is defined in terms of the total strain energy, U, and the elastic strain energy, U_e, where the CMOD is the crack mouth opening displacement of the specimen, measured with digital extensometers within the DIC software, the elastic slope is measured on the F-CMOD curve between 0% and 50% of F_max, and U is the area under the F-CMOD curve. After post-processing the data, the results showed a distinct behavior between the non-graded specimens and the FG designs (Supplementary Figure 13b). The non-graded design showed the lowest mean J₀ values of the study (mean = 64.07, SD = 7.6 kJ/m²). It was followed by the GP (mean = 70.22, SD = 5.9 kJ/m²), the GY (mean = 73.79, SD = 3.9 kJ/m²), and the PA graded design (mean = 78.91, SD = 7.5 kJ/m²), which showed the highest performance. These results indicate that introducing a graded interface improved the performance of the specimens by 9.6-23.1%. However, it is essential to note that, when inspecting the equivalent von Mises strain fields of the interfaces (Supplementary Figure 13c), only the control group specimens presented a secondary crack that initiated at an interface edge and was not included in the original design. This observation corroborates the finding that unwanted strain concentrations at the edges of soft-hard connections play a critical role in their performance. Overall, this analysis contributes to the definition of a suitable geometry for studying the fracture behavior of soft-hard interfaces and confirms the improved behavior of our transition geometries as compared to their gradient-less equivalents. However, future studies on the configuration of the fracture test specimen require testing configurations that guarantee that no additional cracks propagate during the analysis.
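A sketch of the energy bookkeeping described above: the total strain energy is integrated from the F-CMOD record, and an elastic part, estimated here from the initial compliance slope as U_e = F_max²/(2m), is subtracted to give the plastic energy U_p. The elastic-energy estimate and the synthetic curve are assumptions for illustration; the full ISO 12135 expression for J₀ additionally involves the geometrical factors mentioned in the text.

```python
import numpy as np

def plastic_energy(force, cmod):
    """U_p = U - U_e from an F-CMOD record; force in N, CMOD in mm -> energy in N*mm."""
    U = float(np.sum((force[1:] + force[:-1]) / 2.0 * np.diff(cmod)))  # area under F-CMOD
    half = force <= 0.5 * force.max()              # initial, nominally elastic portion
    m = np.polyfit(cmod[half], force[half], 1)[0]  # slope between 0% and 50% of F_max
    U_e = force.max() ** 2 / (2.0 * m)             # assumed linear-elastic energy at F_max
    return U - U_e

cmod = np.linspace(0.0, 2.0, 200)                  # synthetic, monotonic F-CMOD curve
force = 120.0 * cmod - 25.0 * cmod ** 2
print(f"U_p = {plastic_energy(force, cmod):.1f} N*mm")
```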
Since it is important to prescribe the crack at a position within the interface where the resulting performance is expected to be minimal, performing a permutation analysis of the crack position through computational techniques may be necessary. We performed a pilot analysis with such a method using linear elastic XFEM simulations. These XFEM analyses comprised two different sets of tests. In the first one, we varied the position of the prescribed crack along the interface and estimated the corresponding stress intensity factor, K_I (Supplementary Figure 13d). For this purpose, the crack position was permuted between 5 voxels behind the interface (i.e., on the soft material side) and 6 voxels within the interface geometry (i.e., toward the hard material side). In the second set of tests, the crack position was fixed at the edge of the interface geometry while the geometry of the FG was shifted between 0 and 12 voxels along the direction of the crack (Supplementary Figure 13e). Such a shift in the interface geometry allows us to evaluate how much the local material configuration ahead of the crack tip affects the K_I value. The selected interface designs for testing included OC, DI, GY, CO, PA, and GP, with the addition of a non-graded specimen as a control group. This yielded 84 XFEM simulations for the first set of tests and 91 simulations for the second. To make the computational models feasible, we discretized only one interface quadrant (i.e., 144×144 voxels) and only one unit cell in thickness (i.e., 24 voxels) of the fracture toughness specimen design, leading to 497,664 hexahedral elements with full integration per simulation. Symmetric boundary conditions were applied at the soft material end, at the edge opposite to the interface, and at the back of the specimen, while a linear stress of 0.186 MPa was prescribed at the hard material end. After post-processing the simulations, large variations of K_I were present when changing the crack position along the interface length (Supplementary Figure 13f). The maximum K_I value of the non-graded design was 1086 MPa·mm^(1/2), estimated at one voxel within the hard material region, which was similar to the value at the edge of the interface, 924 MPa·mm^(1/2).
Contrary to this, the maximum estimated K_I values of the FG designs varied between 472 and 646 MPa·mm^(1/2), with most of these estimations found for cracks located well within the interface geometry. Moreover, when the crack was positioned within the interface region, some of the K_I values of the FGs were as low as 3.76 MPa·mm^(1/2). These differences of up to two orders of magnitude indicate a high dependence of the fracture toughness value on the positioning of the crack. Similarly, shifting the FG geometry while leaving the crack stationary showed substantial variations of K_I (Supplementary Figure 13g). In this case, the behavior of all the geometries was within the range of 3 to 6 MPa·mm^(1/2). However, one FG design can have a lower or higher K_I than the other designs depending on which part of its geometry is directly ahead of the crack tip. For example, a lattice shift of half a unit cell (i.e., 12 voxels) of a CO design can make its K_I increase from 3.8 to 5.27 MPa·mm^(1/2) and change its rank from the most efficient geometry to the fourth most efficient. Therefore, these results corroborate our hypothesis that measuring the most critical K_I of any interface design would require extensive permutation analyses to test all the possible configurations of crack positioning within a geometry, making this a highly laborious evaluation method. Consequently, and as suggested in the main text, analyzing the performance of FG designs in terms of the strain concentration factor is a far more efficient methodology than the one presented in this section.
Table 1. The quasi-static tensile test results corresponding to the various types of functional gradients (mean ± standard deviation).
Table 2. The quasi-static shear test results corresponding to the various types of functional gradients (mean ± standard deviation).
6,252.4
2023-12-12T00:00:00.000
[ "Materials Science", "Engineering" ]
Some results regarding the ideal structure of C*-algebras of étale groupoids
We prove a sandwiching lemma for inner-exact locally compact Hausdorff étale groupoids. Our lemma says that every ideal of the reduced C*-algebra of such a groupoid is sandwiched between the ideals associated to two uniquely defined open invariant subsets of the unit space. We obtain a bijection between ideals of the reduced C*-algebra and triples consisting of two nested open invariant sets and an ideal in the C*-algebra of the subquotient they determine that has trivial intersection with the diagonal subalgebra and full support. We then introduce a generalisation to groupoids of Ara and Lolk's relative strong topological freeness condition for partial actions, and prove that the reduced C*-algebras of inner-exact locally compact Hausdorff étale groupoids satisfying this condition admit an obstruction ideal in Ara and Lolk's sense.
Introduction
The purpose of this paper is to investigate the ideal structure of the reduced C*-algebras of locally compact Hausdorff étale groupoids. This very broad class of C*-algebras contains all reduced crossed products of commutative C*-algebras by discrete groups. It also includes graph C*-algebras [KPRR97], higher-rank graph C*-algebras [KP00], the models described by Spielberg [Sp07] and Katsura [Ka2008] for Kirchberg algebras, the stable and unstable Ruelle algebras of Smale spaces (up to Morita equivalence), and many self-similar action C*-algebras [EP2017]. Among the more natural invariants of a C*-algebra, but also among the most difficult to compute, is its lattice of ideals. In the situation of étale groupoid C*-algebras, definitive theorems are available for C*-algebras of amenable groupoids that are essentially principal in the sense of Renault [Re91], for graph C*-algebras [aHR97, HS04], and for C*-algebras of single local homeomorphisms [Ka2021], but few other truly general results about the ideal structure of groupoid C*-algebras are available. The analysis of ideals in étale groupoid C*-algebras typically has two components. The first is concerned with what we call here dynamical ideals, and is well understood. The continuous functions on the unit space G^(0) of an étale groupoid G embed as a σ-unital subalgebra D of the groupoid C*-algebra. So each ideal I of C*(G) yields an ideal I ∩ D of D and hence an open subset U_I of G^(0) on which it is supported. This U_I is invariant in the sense that if s(γ) ∈ U_I then r(γ) ∈ U_I. If I is generated as an ideal by I ∩ D, we call it a dynamical ideal. The assignment I → U_I is a lattice isomorphism between dynamical ideals of C*(G) and open invariant sets of G^(0), giving a complete description of the dynamical ideals. In particular, by identifying the essentially principal (now sometimes referred to instead as strongly effective) and amenable groupoids for which every ideal of C*_r(G) is dynamical, Renault gives a complete description of the ideal structure for this class of groupoid C*-algebras [Re91].
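In symbols, the correspondence between ideals and open invariant sets sketched above can be written as follows; this is a standard formulation, included here for reference, with notation matching the text.

```latex
% For an ideal I of C*(G) and an open invariant U \subseteq G^{(0)}:
\begin{align*}
  U_I &:= \{\, x \in G^{(0)} : f(x) \neq 0 \text{ for some } f \in I \cap D \,\},\\
  I_U &:= \overline{\operatorname{span}}\, C_c\big(G|_U\big) \subseteq C^*_r(G),
  \qquad G|_U = \{\gamma \in G : s(\gamma), r(\gamma) \in U\},
\end{align*}
% and I is dynamical exactly when it is generated by I \cap D, i.e. I = I_{U_I}.
```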
For full C*-algebras, this is, in general, hopelessly intractable: there is a zoo of ideals contained in the kernel of the regular representation, which has trivial intersection with D, alone. So we are led to restrict our attention to reduced C*-algebras. Another problem arises almost immediately: given an open invariant set U, the restriction map f ↦ f|_{G \ G|_U} on C_c(G) extends to a homomorphism C*_r(G) → C*_r(G \ G|_U) whose kernel contains the dynamical ideal I_U associated to U. In the setting of full C*-algebras, this containment is an equality, but for reduced C*-algebras it need not be: as Willett's example [Wi15] shows, the quotient C*_r(G)/I_U can coincide with the full C*-algebra C*(G \ G|_U), and we encounter the same zoo of ideals as before. So we restrict attention further to groupoids that are inner-exact in the sense that C*_r(G)/I_U ≅ C*_r(G \ G|_U) for every open invariant set U. Lest this seem overly restrictive, note that this includes all amenable étale groupoids G, and therefore all nuclear étale groupoid C*-algebras [A-DR00].

In this setting, existing results rely, explicitly or otherwise, on a kind of sandwiching lemma. This technique was developed by an Huef and Raeburn [aHR97] to analyse Cuntz-Krieger algebras. Here the dynamical ideals are better known as gauge-invariant ideals (see Proposition 3.9). To understand the ideals of a Cuntz-Krieger algebra, an Huef and Raeburn concentrate on primitive ideals and demonstrate that for each primitive ideal I there are a unique smallest gauge-invariant ideal K containing I and a unique largest gauge-invariant ideal J contained in I. They then analyse the quotient K/J, which is itself Morita equivalent to a graph algebra, but of a graph consisting of just one vertex and one edge. The C*-algebra of this graph is C(T), so its ideal structure is well understood, and their analysis proceeds from there. A similar idea was used in [HS04], and again in [Ka2021], to understand ideal structure first for graph C*-algebras and then for topological-graph algebras, viewed as C*-algebras associated to singly generated irreversible dynamics.

Another instance of the same idea appears in Ara and Lolk's very interesting work on partial actions [AL18]. They identify a relative strong topological freeness condition that generalises Renault's topologically principal condition in the setting of transformation groupoids for partial actions. They show that relative strong topological freeness guarantees the existence of an obstruction ideal: a smallest dynamical ideal of C*_r(G) that contains every ideal with trivial intersection with D. This can again be regarded as a kind of sandwiching result, but with the quantifiers switched: there exists a pair of dynamical ideals, namely the zero ideal and the obstruction ideal, that sandwich every ideal that has trivial intersection with D. One of our motivations in writing this paper is that, because this particular aspect of Ara and Lolk's paper appears as a technical step along the way to their main objective, it is in danger of receiving less attention than we think it deserves, and we want to advertise the idea more broadly.
In this paper, we take up the idea of the sandwiching lemma and of Ara and Lolk's relative strong topological freeness condition and obstruction ideal. We first establish a general sandwiching lemma for groupoid C*-algebras (Lemma 3.4): given any inner-exact locally compact Hausdorff étale groupoid G, and any ideal I of C*_r(G), there are a unique smallest dynamical ideal K containing I and a unique largest dynamical ideal contained in I. As a result, the ideals of C*_r(G) are parameterised by triples (U, V, J) consisting of open invariant sets U ⊆ V ⊆ G^(0), and an ideal J of C*_r(G|_V \ G|_U) that has trivial intersection with D and vanishes nowhere on G|_V \ G|_U (Theorem 3.7).

We then adapt Ara and Lolk's notions of topological freeness and strong topological freeness at a point (see also Renault's notion of discretely trivial isotropy [Re91]), and of relative strong topological freeness, from their setting of partial actions of groups to the setting of étale groupoids. We identify a condition on étale groupoids, which we phrase as being jointly effective where they are effective, that ensures that C*_r(G) admits an obstruction ideal in the sense of Ara and Lolk (see Theorem 4.12 and Corollary 4.14). We also show that this obstruction ideal is minimal in the strong sense that there exists an ideal that has trivial intersection with D and whose support exhausts the support of the obstruction ideal. We show that any groupoid whose isotropy groups are all either trivial or infinite cyclic is jointly effective where it is effective. This includes all graph groupoids and groupoids arising from single local homeomorphisms. In our companion paper [BCS23], we show how to use our results to give a complete description of the ideal structure of a large class of Deaconu-Renault groupoid C*-algebras, including those considered in [aHR97, HS04, Ka2021] and all C*-algebras of rank-2 graphs.

The paper is arranged as follows. We introduce the background we need in Section 2. In Section 3 we prove our sandwiching lemma and explore its consequences. In Section 4 we introduce the notions of a groupoid being effective at a unit, jointly effective at a unit, and being jointly effective where it is effective. We then prove that such groupoids admit an obstruction ideal, and discuss some examples. Finally, in Section 5, we present examples of groupoids that are jointly effective where they are effective, and describe the support of the obstruction ideal; in particular, we devote Section 5.2 to showing exactly how our work in Section 4 generalises the ideas of Ara and Lolk.

Preliminaries

2.1. Hausdorff étale groupoids. We will always be working with topological groupoids that are locally compact, Hausdorff, and étale, and we shall adopt most of the notation and terminology from [Si20] (see also [Re80]). We consider the unit space G^(0) as a locally compact Hausdorff subspace of G, and we denote the range and source maps by r, s : G → G^(0). A bisection is a subset B of G such that both r and s restrict to injective maps on B.
That G is Hausdorff means that the unit space G^(0) is a closed subset of G, and that G is étale (in the sense that the range and source maps are local homeomorphisms) implies that G^(0) is also open, that G has a basis consisting of open bisections, and that the range and source fibres over a unit x ∈ G^(0), given by G^x = {γ ∈ G : r(γ) = x} and G_x = {γ ∈ G : s(γ) = x} respectively, are discrete in the relative topology. In particular, the isotropy group over a unit x ∈ G^(0), given as the intersection I(G)_x := G^x ∩ G_x, is discrete. The isotropy subgroupoid is then the group bundle I(G) := {γ ∈ G : r(γ) = s(γ)}. We write I°(G) for the topological interior of the isotropy of G. A Hausdorff groupoid G is said to be effective if I°(G) = G^(0), that is, the interior of the isotropy subgroupoid with the subspace topology coincides with the unit space. When G is second-countable this coincides (using a Baire category argument) with the notion of G being topologically principal in the sense that G^(0) has a dense set of points with trivial isotropy.

2.2. Reduced groupoid C*-algebras. We will be working with the reduced groupoid C*-algebras of locally compact Hausdorff étale groupoids. We follow the exposition of [Si20].

The convolution algebra C_c(G) of a locally compact Hausdorff étale groupoid G is the set of compactly supported complex-valued functions on G equipped with the convolution product (f * g)(γ) = Σ_{η ∈ G_{s(γ)}} f(γη^{-1}) g(η) for all f, g ∈ C_c(G) and γ ∈ G, and the involution given by f*(γ) = conj(f(γ^{-1})). Since G^(0) is both open and closed in G, the commutative algebra C_0(G^(0)) sits naturally as a subalgebra of C*_r(G), and we refer to C_0(G^(0)) as the diagonal subalgebra. Note that C_0(G^(0)) need not be a C*-diagonal (in the sense of Kumjian [Ku86]) nor a Cartan subalgebra (in the sense of Renault [Re08]).

Renault [Re80, Proposition II.4.2] shows (Renault makes the standing assumption that the groupoids considered there are second-countable, but that assumption is not needed for the following) that any element in the reduced groupoid C*-algebra may be thought of as a function on the groupoid. More precisely, there exists a linear and norm-decreasing map j : C*_r(G) → C_0(G) such that |j(a)(γ)| ≤ ‖a‖ for all a ∈ C*_r(G) and γ ∈ G, and j is the identity on C_c(G). The reduced groupoid C*-algebra admits a faithful conditional expectation E : C*_r(G) → C_0(G^(0)) onto the diagonal given by restriction of functions, in the sense that j(E(a)) = j(a)|_{G^(0)} for all a ∈ C*_r(G) [Si20, Proposition 10.2.6]. Renault shows that for a, b ∈ C*_r(G), the convolution formula for j(a) * j(b) is a convergent series that converges to j(a * b).

A subset U of G^(0) is G-invariant (or simply invariant) if r(GU) ⊆ U. For such U, the reduction G|_U := {γ ∈ G : s(γ), r(γ) ∈ U} is an open subgroupoid of G (and hence locally compact, Hausdorff, and étale), and the inclusion C_c(G|_U) ⊆ C_c(G) extends to an embedding of C*_r(G|_U) onto an ideal I_U of C*_r(G). This is an ideal with the property that I_U ∩ C_0(G^(0)) = C_0(U), and I_U is generated as an ideal by C_0(U). We shall refer to such ideals as dynamical ideals (see Definition 3.1).
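For orientation, when G is a discrete group Γ, so that G^(0) is a single unit and every fibre equals Γ, the convolution and involution above reduce to the familiar group-algebra operations. This specialization is our illustration, not taken from the paper:

```latex
% Specialization of the convolution and involution to a discrete group \Gamma:
(f * g)(\gamma) \;=\; \sum_{\eta \in \Gamma} f(\gamma\eta^{-1})\, g(\eta),
\qquad
f^*(\gamma) \;=\; \overline{f(\gamma^{-1})},
% so C_c(\Gamma) is the usual group *-algebra, and its completion in the
% left-regular representation is the reduced group C*-algebra C^*_r(\Gamma).
```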
The complement G^(0) \ U is a closed invariant set of units, and there is a *-homomorphism π_U : C*_r(G) → C*_r(G|_{G^(0)\U}) extending restriction of compactly supported functions; its kernel contains I_U, yielding a sequence

0 → I_U → C*_r(G) → C*_r(G|_{G^(0)\U}) → 0 .    (2.1)

The groupoid G is inner-exact if the sequence (2.1) is exact for every open invariant U ⊆ G^(0) (see [A-D19, Definition 3.7] and also [BL17, Definition 3.5]). Any amenable groupoid is inner-exact. Combining Proposition 4.23 and Theorem 7.10 in [A-D21], we also see that the (partial) crossed product groupoid of an exact group acting (partially) on a second-countable locally compact Hausdorff space is inner-exact. Willett's example of a nonamenable groupoid whose full and reduced C*-algebras coincide is not inner-exact [Wi15].

Remark 2.2. The empty set satisfies the axioms defining a locally compact Hausdorff étale groupoid. By convention, we take the C*-algebra of the empty groupoid to be the zero C*-algebra; in particular, for U = G^(0) the sequence (2.1) collapses to the exact sequence 0 → C*_r(G) → C*_r(G) → {0} → 0. We thank the referee for pressing us on this point.

A sandwiching lemma for Hausdorff étale groupoids

The characterisations of the primitive-ideal spaces of graph C*-algebras of [HS04] and [aHR97] were founded on the "sandwiching lemmas" [HS04, Lemma 2.6] and [aHR97, Lemma 4.5] that show that every primitive ideal is sandwiched between a pair of uniquely determined gauge-invariant ideals. Here we observe that a similar sandwiching lemma holds for ideals of reduced Hausdorff étale groupoid C*-algebras.

Definition 3.1. We say that an ideal I in a reduced groupoid C*-algebra C*_r(G) is dynamical if it is generated as an ideal by its intersection with the diagonal subalgebra C_0(G^(0)), and purely non-dynamical if its intersection with the diagonal subalgebra is {0}. By convention, we regard the zero ideal {0} as being both a dynamical ideal and a purely non-dynamical ideal. Though linguistically unsatisfactory, this convention simplifies the statements of our key results: in Proposition 3.3, treating {0} as a dynamical ideal avoids treating the open invariant set ∅ as a special case; but later, in Theorem 3.7, treating {0} as a purely non-dynamical ideal avoids treating dynamical ideals as a special case; see Remark 3.8.

In the context of Deaconu-Renault groupoids, the dynamical ideals are precisely the usual gauge-invariant ideals; see Proposition 3.9.

Proposition 3.3. Let G be a locally compact Hausdorff étale groupoid. The map U ↦ I_U is a lattice isomorphism from the lattice of open invariant subsets of G^(0) to the lattice of dynamical ideals of C*_r(G). For each open invariant U ⊆ G^(0), we have supp(I_U) = G|_U.

Proof. The map U ↦ I_U is always an injection [Si20, Theorem 10.3.3], and surjectivity follows from the definition of dynamical ideals. Proposition 10.3.2 of [Si20] shows that I_U is the closure of C_c(G|_U) ⊆ C_c(G). In particular, supp(I_U) ⊆ G|_U, and the reverse containment holds because I_U contains C_c(G|_U).

Since lattice isomorphisms preserve least upper bounds and greatest lower bounds, it follows from Proposition 3.3 that, for example, I_U ∩ I_V = I_{U∩V} and I_U + I_V = I_{U∪V} for all open invariant U and V.

We now state our sandwiching lemma.

Lemma 3.4 (The sandwiching lemma). Let G be a locally compact Hausdorff étale groupoid that is inner-exact and let I be an ideal of C*_r(G). Then there are a unique largest open invariant set U ⊆ G^(0) such that I_U ⊆ I, and a unique smallest open invariant set V ⊆ G^(0) such that I ⊆ I_V.

Proof. Consider the open and invariant subsets U := {x ∈ G^(0) : f(x) ≠ 0 for some f ∈ I ∩ C_0(G^(0))} and V := {x ∈ G^(0) : E(a)(x) ≠ 0 for some a ∈ I}. For each x ∈ U, choose f_x ∈ I ∩ C_0(G^(0)) such that f_x(x) ≠ 0. Then {f_x : x ∈ U} generates C_0(U) as an ideal of C_0(G^(0)), and it is contained in I. Hence I_U ⊆ I. Suppose that U′ is an open subset of G^(0) strictly containing U and fix x ∈ U′ \ U and f ∈ C_c(U′) with f(x) ≠ 0. Then f ∈ I_{U′} but f ∉ I_U by definition of I_U. In particular, f ∉ I, so I_{U′} is not contained in I. This proves that I_U is the largest dynamical ideal contained in I.
The set V is open because j(a) is continuous for every a ∈ C*_r(G). We claim that V = s(supp(I)). That V ⊆ s(supp(I)) is obvious. For the reverse inclusion, suppose that a ∈ I and j(a)(γ) ≠ 0. For any open bisection B containing γ^{-1} and any g ∈ C_c(B) with g(γ^{-1}) ≠ 0, we have a * g ∈ I with E(a * g)(r(γ)) = j(a)(γ)g(γ^{-1}) ≠ 0, so r(γ) ∈ V; applying the same argument to a* ∈ I gives s(γ) ∈ V. Let E : C*_r(G) → C_0(G^(0)) be the faithful conditional expectation onto the diagonal and observe that E(I) ⊆ C_0(V). Since G is inner-exact, it follows from [BL17, Lemma 3.6] that I is contained in the ideal in C*_r(G) generated by E(I), so we find that I ⊆ I_V as wanted. To see that V is minimal with this property, suppose that I ⊆ I_{V′} for some open invariant V′ ⊆ G^(0); then E(I) ⊆ C_0(V′), and so V ⊆ V′.

Remark 3.5. If the ideal I in Lemma 3.4 is a purely non-dynamical ideal of C*_r(G), then U is empty, and then I_U = {0}; if I is a dynamical ideal, then V = U and I_U = I.

Consider a pair of nested open invariant subsets U ⊆ V ⊆ G^(0). There are canonical maps ι_{V\U} : C*_r(G|_{V\U}) → C*_r(G|_{G^(0)\U}) and π_U : C*_r(G) → C*_r(G|_{G^(0)\U}) extending the canonical inclusion and restriction of algebras of compactly supported functions, and for these maps the natural diagram commutes.

Lemma 3.6. Let G be a locally compact Hausdorff étale groupoid that is inner-exact. Let I be an ideal of C*_r(G) and let U and V be the open invariant sets of Lemma 3.4. Then J := ι_{V\U}^{-1}(π_U(I)) is an ideal of C*_r(G|_{V\U}) that is purely non-dynamical and has full support.

Proof. One checks first that J is purely non-dynamical, using that U is the largest open invariant set with I_U ⊆ I. Next we show that J has full support. Clearly supp(J) ⊆ G|_{V\U}; since V is the smallest open invariant set with I ⊆ I_V, we conclude that supp(J) = G|_{V\U}.

Let T(G) be the collection of triples (U, V, J) where U ⊆ V ⊆ G^(0) are nested open and invariant subsets and J is a purely non-dynamical ideal in C*_r(G|_{V\U}) with full support.

Theorem 3.7. Let G be a locally compact Hausdorff étale groupoid that is inner-exact. There is a bijection Θ from T(G) to the collection of ideals of C*_r(G), given by Θ(U, V, J) := π_U^{-1}(ι_{V\U}(J)); its inverse carries an ideal I of C*_r(G) to the triple (U, V, J) ∈ T(G) consisting of the sandwich sets U ⊆ V of Lemma 3.4 and the purely non-dynamical ideal J ⊲ C*_r(G|_{V\U}) with full support of Lemma 3.6.

Remark 3.8. It is important in the statement of Theorem 3.7 that ∅ is a groupoid, that its reduced C*-algebra is {0}, and that {0} is a purely non-dynamical ideal of C*_r(G): the dynamical ideals of C*_r(G) are in the range of Θ because each I_U = Θ(U, U, {0}).

Proof of Theorem 3.7. The map Θ takes values in the ideals of C*_r(G) by definition. To see that Θ is injective, fix (U, V, J) ∈ T(G) and let I = Θ(U, V, J). We will prove that U and V are the sandwiching sets U_I, V_I obtained from Lemma 3.4 applied to I, and that J = ι_{V\U}^{-1}(π_U(I)). This defines a left inverse to Θ, defined on the image of Θ, which implies that Θ is injective.

We have π_U(Θ(U, V, J)) = ι_{V\U}(J), and the latter has trivial intersection with C_0(G^(0) \ U) (since J is purely non-dynamical). As π_U implements restriction of functions, we see that Θ(U, V, J) ∩ C_0(G^(0)) = C_0(U), so U = U_I. Observe that E(Θ(U, V, J)) ⊆ C_0(V). By [BL17, Lemma 3.6], Θ(U, V, J) is contained in the ideal generated by E(Θ(U, V, J)), so we see that Θ(U, V, J) ⊆ I_V. In particular, supp(Θ(U, V, J)) ⊆ supp(I_V) = G|_V. On the other hand, since supp(J) = G|_{V\U} we have supp(Θ(U, V, J)) = G|_V; so if Θ(U, V, J) ⊆ I_{V′} for an open invariant V′ properly contained in V, then supp(Θ(U, V, J)) ⊆ G|_{V′}, which contradicts our observation above. Therefore, V is the smallest such open invariant subset. Finally, observe that ι_{V\U}^{-1}(π_U(Θ(U, V, J))) = J, and this completes the proof that Θ is injective.
To see that it is surjective, fix an ideal I of C*_r(G), let U ⊆ V be the open invariant sets obtained from Lemma 3.4, and let J := ι_{V\U}^{-1}(π_U(I)), so that (U, V, J) ∈ T(G) by Lemma 3.6. We claim that K := Θ(U, V, J) is equal to I, which will establish surjectivity of Θ. By definition, both I and K are ideals of C*_r(G) that contain I_U, so it suffices to show that π_U(K) = π_U(I); and indeed π_U(K) = ι_{V\U}(J) = π_U(I).

To link Lemma 3.4 back to the results [HS04, Lemma 2.6] and [aHR97, Lemma 4.5] that inspired it, we observe that for Deaconu-Renault groupoids, the dynamical ideals employed above are precisely the gauge-invariant ideals of the C*-algebra of a Deaconu-Renault groupoid. The result is certainly well-known, but we are not aware that it has been recorded explicitly elsewhere in this generality. For the case of finitely aligned higher-rank graphs, this was observed in [Li21, Lemma 7.5].

Recall that if T : N^d ↷ X is an action by d commuting local homeomorphisms, then we let G_T denote the Deaconu-Renault groupoid of T as in, for example, [SW16, Section 3]. An ideal I of C*(G_T) is gauge-invariant if the canonical gauge action γ of T^d on C*_r(G_T) satisfies γ_z(I) ⊆ I for all z ∈ T^d.

Proposition 3.9. Let X be a locally compact Hausdorff space and suppose T : N^d ↷ X is an action on X by d commuting local homeomorphisms. The map that carries an open invariant subset U of X to the ideal I_U generated by C_0(U) is a lattice isomorphism from the lattice of open invariant subsets of X to the lattice of gauge-invariant ideals of C*(G_T).

Proof. Since γ_z(f) = f for all z ∈ T^d and f ∈ C_0(G^(0)), the ideals of C*(G_T) generated by subsets of C_0(G^(0)) are gauge-invariant. In particular, each I_U is gauge-invariant. The map U ↦ I_U is an injection [Si20, Theorem 10.3.3]. For surjectivity, we follow the second paragraph of the proof of [Si20, Theorem 10.3.3], dropping the assumption that G is strongly effective but fixing a gauge-invariant ideal I, until its penultimate sentence. At that point, while GW need not be effective, we observe that GW is identical to the groupoid of the topological higher-rank graph Λ defined by Λ^n = X × {n} for all n, whose range and source maps are given by s(x, n) = (T^n(x), 0) and r(x, n) = (x, 0), and with the factorisation rules (x, m)(T^m(x), n) = (x, m + n) = (x, n)(T^n(x), m). We may now apply the gauge-invariant uniqueness theorem of [CLSV11, Corollary 5.21] in place of [Si20, Theorem 10.3.3] to see that π is injective, and the surjectivity of U ↦ I_U follows. The final statement follows from Proposition 3.3.

Effectiveness at a unit and the obstruction ideal

In this section we introduce the notions of effectiveness at a unit and joint effectiveness at a unit for étale groupoids. The key property that emerges is that of being jointly effective where effective. This is inspired by the notions in [AL18, Section 7] of (strong) topological freeness at a point for a partial group action. The points in the unit space of a groupoid that are not effective comprise an open invariant set and hence determine a dynamical ideal that we call the obstruction ideal. Our main results in this section (Theorem 4.12 and Corollary 4.14) say that if a Hausdorff étale groupoid is inner-exact and its full and reduced C*-algebras coincide (Anantharaman-Delaroche calls this the weak containment property [A-D19]), then the obstruction ideal contains all purely non-dynamical ideals, and is minimal with this property.

Recall that a groupoid G is effective if the interior I°(G) of the isotropy is equal to the unit space G^(0). For x ∈ G^(0), we write I°(G)_x for the intersection of G_x with I°(G).

Definition 4.1. A locally compact Hausdorff étale groupoid G is effective at a unit x ∈ G^(0) if I°(G)_x = {x}. Equivalently, G is effective at x if for any nontrivial isotropy element γ ∈ I(G)_x \ {x} and any open bisection B in G \ G^(0) containing γ there exists y ∈ s(B) such that r(By) ≠ y. When the groupoid is understood, we may just say that the unit is effective. We let G^(0)_eff denote the collection of effective units. Any unit with trivial isotropy is effective. An isolated unit with nontrivial isotropy is not effective.
We have the following general description of the units that are not effective. This also shows that our terminology is consistent with the literature on effective groupoids.

Lemma 4.2. Let G be a locally compact Hausdorff étale groupoid. We have

G^(0) \ G^(0)_eff = s(I°(G) \ G^(0)),    (4.1)

and this is an open and invariant subset of G^(0). Consequently, G^(0)_eff is closed and invariant. Moreover, G is effective if and only if G is effective at each of its units.

Proof. Suppose that G is not effective at x ∈ G^(0). Then x has nontrivial isotropy in I°(G), so x ∈ s(I°(G) \ G^(0)); conversely, any x ∈ s(I°(G) \ G^(0)) has some γ ∈ I°(G)_x \ {x} and so is not effective, giving (4.1). For any such γ, any open bisection B ⊆ I°(G) \ G^(0) containing γ has s(B) an open neighbourhood of x consisting of points that are not effective, so the set in (4.1) is open.

In order to see invariance, let x ∈ s(I°(G) \ G^(0)) and take γ ∈ G with x = s(γ) and r(γ) = z ≠ x. We will show that z is not effective. Choose η ∈ I°(G)_x \ {x} and an open bisection B_η ⊆ I°(G) containing η, and an open bisection B_γ containing γ. Then B_γ B_η B_γ^{-1} is an open bisection containing γηγ^{-1} (which is isotropy over z), and it consists only of isotropy elements, because B_η consists only of isotropy elements. Therefore, z is not effective. The final statement is a direct consequence of (4.1).

The obstruction ideal defined below will play a central role in Theorem 4.12.

Definition 4.3. Let G be a locally compact Hausdorff étale groupoid. The set of all units that are not effective is an open and invariant subset of G^(0), so it determines a dynamical ideal I_{G^(0) \ G^(0)_eff} of C*_r(G). We call this the obstruction ideal and denote it by J_ob. This terminology is explained in Remark 4.16.

We let G_eff denote the reduction of G to the closed invariant subset of effective points. The unit space of G_eff then coincides with G^(0)_eff. We require a groupoid analogue of the notion of strong topological freeness introduced in [AL18, Section 7].

Definition 4.4. A locally compact Hausdorff groupoid G is jointly effective at a unit x ∈ G^(0) if for any finite collection of nontrivial isotropy elements γ_1, ..., γ_n ∈ I(G)_x \ {x} and any open bisections B_1, ..., B_n in G \ G^(0) such that γ_i ∈ B_i, there exists y ∈ ∩_{i=1}^n s(B_i) such that r(B_i y) ≠ y for all i = 1, ..., n.

Remark 4.5. If G is effective, then it is jointly effective at every unit. More generally, any unit in an open set of effective points is jointly effective.

For the first assertion, first suppose that G is effective, and fix x ∈ G^(0) and γ_1, ..., γ_n ∈ I(G)_x \ {x}. Fix open bisections B_i in G \ G^(0) containing γ_i. By shrinking if necessary, we can assume that W := s(B_i) = s(B_j) for all i, j. Since G is effective, each B_i ∩ I(G) has empty interior. So for each i, the set W_i := {y ∈ W : r(B_i y) ≠ y} is open and dense in W. Hence ∩_i W_i is open and dense, and in particular nonempty. Now any y ∈ ∩_i W_i satisfies r(B_i y) ≠ y for all i.

For the second assertion, suppose only that U is an open subset of G^(0) contained in G^(0)_eff, and fix x ∈ U. Since G^(0)_eff is invariant, V := r(GU) is open and invariant with U ⊆ V ⊆ G^(0)_eff. The first assertion applied to G|_V shows that x is jointly effective in G|_V, and hence in G.

It is possible for a groupoid to be effective at a unit but not jointly effective at that unit; see Example 4.8. This leads us to an analogue of Ara and Lolk's notion of relative strong topological freeness.

Definition 4.6. Let G be a locally compact Hausdorff étale groupoid. We say that G is jointly effective where it is effective if G is jointly effective at every point in G^(0)_eff.

(1) By Remark 4.5, if G is effective then it is jointly effective where it is effective. In particular, if G is principal, then it is jointly effective where it is effective.
(2) Suppose G is a Hausdorff étale group bundle (for example, G is a nontrivial discrete group). Since G consists entirely of isotropy, and since G is trivially effective at x when G^x_x = {x}, it follows that G is jointly effective where it is effective. We have G^(0)_eff = {x : G^x_x = {x}}, and the obstruction ideal is generated by C_0({x : G^x_x ≠ {x}}).

(3) In particular, Willett's groupoid [Wi15] consists entirely of isotropy, and hence is jointly effective where it is effective. It is not inner-exact. The obstruction ideal is the whole reduced groupoid C*-algebra.

The next examples show that groupoids need not be jointly effective where they are effective, and that the property of being jointly effective where effective does not necessarily pass to reductions to closed invariant subsets. This latter permanence property does hold in groupoids all of whose nontrivial isotropy groups are infinite cyclic (see Section 5.1).

Example 4.8 (Exel's cross). In this example the interior of the isotropy can be computed directly, and the only effective unit is (0, 0) ∈ X. Every point in X has nontrivial isotropy (so G_{φ⊕ψ} is not effective). More specifically, the isotropy group of every point that is not the origin is isomorphic to Z/2Z, while the isotropy group at the origin is isomorphic to Z/2Z ⊕ Z/2Z. The origin is the only point that is effective, but it is not jointly effective. Therefore, G_{φ⊕ψ} is not jointly effective where it is effective.

Ara and Lolk [AL18, Section 7] exhibit an example of a partial action that shows that their relative strong topological freeness is not automatic, and their example can be adapted to our groupoid setting.

Example 4.9. We can extend Exel's cross to see that being jointly effective where effective does not pass to closed invariant subgroupoids. To see this, let X be as in Exel's cross, and let Y = X × [−1, 1]. Neither a nor b fixes any point in Y \ X because both invert the t-coordinate. Since the only point in X fixed by ab is the point (0, 0) ∈ X, the only points in Y fixed by ab are those of the form ((0, 0), t). So G_{φ,ψ} := Y ⋊ (Z/2Z)^2 is effective, and in particular jointly effective where it is effective. However, its reduction to the closed invariant set X is Exel's cross, which is not jointly effective where it is effective.

The next lemma is an easy adaptation of [Ex17, Lemma 29.4] from partial actions of groups to groupoids, so we give just a fairly succinct proof.

Lemma 4.10. Let G be a Hausdorff étale groupoid, let x ∈ G^(0) be a unit, and let B be an open bisection such that B ∩ I(G)_x = ∅. Let f ∈ C_c(G) be such that f has support in B. Given ε > 0 there exists h ∈ C_0(G^(0)) satisfying 0 ≤ h ≤ 1, h is constantly 1 on a neighbourhood of x, and ‖hfh‖ < ε.

The next two results say that when an inner-exact groupoid G whose full and reduced C*-algebras coincide is jointly effective where it is effective, its obstruction ideal J_ob is the minimal dynamical ideal that contains all purely non-dynamical ideals of C*_r(G). The proof of the first result closely follows that of [AL18, Theorem 7.12] (which does not require the weak containment property) with only minor modifications.
Remark 4.11. The hypothesis below that the sequence 0 → J_ob → C*_r(G) → C*_r(G_eff) → 0 is exact holds if, for example, G is inner-exact (in particular, if it is amenable). However, it also holds trivially if G is effective, and we invoke it in that situation in Proposition 4.15. So we have stated Theorem 4.12 accordingly.

Theorem 4.12. Let G be a locally compact Hausdorff étale groupoid that is jointly effective where it is effective. Let J_ob be the obstruction ideal in C*_r(G) and suppose the sequence 0 → J_ob → C*_r(G) → C*_r(G_eff) → 0 is exact. Then J_ob contains every purely non-dynamical ideal of C*_r(G).

Proof. Let I be a purely non-dynamical ideal of C*_r(G). We suppose that I is not contained in J_ob and derive a contradiction. Fix a ∈ I \ J_ob. In particular, f := E(a*a) does not vanish identically on G^(0)_eff; fix x_0 ∈ G^(0)_eff with f(x_0) ≠ 0 and an open set V ⊆ G^(0) on which f is nonzero and which contains x_0. By Urysohn's lemma we may pick a function u ∈ C_0(G^(0)) such that 0 ≤ u ≤ 1, u(x_0) = 1, and u vanishes outside V. Set z := u a*a ∈ I \ J_ob and observe that E(z) = uE(a*a) = uf.

We claim that there exists h ∈ C_0(G^(0)) satisfying 0 ≤ h ≤ 1, h(x_1) = 1, ‖E(z)‖ − ε < ‖hE(z)h‖ (this is (4.4)), and ‖h(z − E(z))h‖ < ε (this is (4.5)). For each i such that B_i ∩ I(G)_{x_0} = ∅, we can apply Lemma 4.10 to obtain a function h_i that is constantly 1 near x_0; and for each i such that B_i ∩ I(G)_{x_0} ≠ ∅, Lemma 4.10 for B_i at x_1 yields a function h_i that is constantly 1 near x_1. Altogether we have constructed functions h_1, ..., h_k that all satisfy (4.8). Set h := h_1 h_2 ⋯ h_k and note that 0 ≤ h ≤ 1 and h(x_1) = 1. It remains to verify (4.4) and (4.5); we do this by direct computation. Using (4.3) and the fact that u(x_0) = 1, then the choice of g and the choice of x_1 from (4.7), and finally remembering that h(x_1) = 1, we obtain ‖hE(z)h‖ ≥ |E(z)(x_1)| > ‖E(z)‖ − ε. This means that ‖E(z)‖ − ε < ‖hE(z)h‖, so (4.4) follows. For (4.5), we use the decomposition (4.6) and then (4.8) to see that ‖h(z − E(z))h‖ < ε, and this proves (4.5).

The lemma below uses the full groupoid C*-algebra C*(G). We refer the reader to [Wil19] for a discussion of this C*-algebra that does not assume second-countability.

Lemma 4.13. Let G be a locally compact Hausdorff étale groupoid whose full and reduced C*-algebras coincide. Let J_ob be the obstruction ideal in C*_r(G). There is a *-representation ε of the full groupoid C*-algebra C*(G) such that ker(ε) is purely non-dynamical and such that supp(J_ob) ⊆ supp(ker(ε)).

Proof. To see that supp(ker(ε)) contains supp(J_ob), by Lemma 2.1 it suffices to show that G^(0) \ G^(0)_eff ⊆ supp(ker(ε)). So fix x ∈ G^(0) \ G^(0)_eff, choose an open bisection B ⊆ I°(G) \ G^(0) with x ∈ s(B), fix f ∈ C_c(s(B)) with f(x) ≠ 0, and let f̃ ∈ C_c(B) be the function given by f̃(η) = f(s(η)) for all η ∈ B. By extending by zero, both functions can be regarded as elements of C_c(G). Direct calculation on basis elements (see the proof of [BCFS, Proposition 5.5(2)]) shows that f − f̃ ∈ ker(ε). So x ∈ supp(ker(ε)).

Corollary 4.14. Let G be a locally compact Hausdorff étale groupoid that is jointly effective where it is effective. Suppose the sequence 0 → J_ob → C*_r(G) → C*_r(G_eff) → 0 is exact and that the full and reduced groupoid C*-algebras of G coincide. Then there is a purely non-dynamical ideal whose support is equal to that of J_ob, and J_ob is the minimal dynamical ideal that contains all purely non-dynamical ideals of C*_r(G).

Proof. Lemma 4.13 gives a purely non-dynamical ideal I such that supp(J_ob) ⊆ supp(I). Theorem 4.12 shows that J_ob contains all purely non-dynamical ideals in C*_r(G), and in particular contains I. Hence supp(I) ⊆ supp(J_ob), and we obtain equality. Now suppose that I_U is a dynamical ideal that contains every purely non-dynamical ideal. Then in particular I_U contains I, so supp(J_ob) = supp(I) ⊆ G|_U, and therefore J_ob ⊆ I_U.

To finish the section, we observe that our results can be used to recover [BCFS, Proposition 5.5(2)], without the assumption that G is second-countable. This is not new. For example, it can be recovered from a special case of [KM19, Theorem 7.29]. We include it here only to illustrate how our results relate to effective groupoids.

Proposition 4.15. Let G be a locally compact Hausdorff étale groupoid.

(1) If G is effective, then every nontrivial ideal of C*_r(G) contains a nonzero element of C_0(G^(0)).
(2) If every nontrivial ideal of the full C*-algebra C*(G) contains a nonzero element of C_0(G^(0)), then the full and reduced C*-algebras of G coincide and G is effective.

Proof. (1) Fix an ideal I of C*_r(G) that contains no nonzero element of C_0(G^(0)); we must show that I = {0}. Since G is effective, it is jointly effective where it is effective, and J_ob is trivial. The sequence 0 → J_ob → C*_r(G) → C*_r(G_eff) → 0 is then trivially exact, and Theorem 4.12 implies that I ⊆ J_ob = {0}.

(2) We prove the contrapositive. First suppose that the full and reduced C*-algebras of G do not coincide. Then the kernel of the regular representation λ : C*(G) → C*_r(G) is a nonzero purely non-dynamical ideal. Now suppose that the full and reduced C*-algebras of G coincide but that G is not effective. Then J_ob is nontrivial, and Lemma 4.13 implies that there is a purely non-dynamical ideal I of C*_r(G) whose support contains that of J_ob, and in particular is nonzero.

Remark 4.16. A mainstay of the theory of étale groupoid C*-algebras is the diagonal uniqueness theorem, dating back to [Re80]: for amenable effective étale groupoids, any *-homomorphism that is injective on the diagonal is injective (see Proposition 4.15). If G is a groupoid that does not satisfy the conclusion of this theorem, then there is a *-homomorphism φ of C*_r(G) whose kernel is purely non-dynamical. So if G is also an inner-exact Hausdorff étale groupoid whose full and reduced C*-algebras coincide, then the kernel of φ is contained in the obstruction ideal. This justifies the terminology obstruction ideal: the obstruction ideal measures how far away a groupoid is from satisfying a diagonal uniqueness theorem.

For example, if G is the groupoid of a higher-rank graph in the sense of [KP00], then the obstruction ideal is zero if and only if the higher-rank graph is aperiodic (so its C*-algebra satisfies the Cuntz-Krieger uniqueness theorem) [RS07, Proposition 3.6].

Examples

5.1. Groupoids from local homeomorphisms. First we consider the groupoid constructed from a local homeomorphism T on a locally compact Hausdorff space X. The associated semi-direct product groupoid, usually called the Deaconu-Renault groupoid, is

G_T = {(x, p − q, y) ∈ X × Z × X : p, q ∈ N, T^p(x) = T^q(y)},

where the product of (x, p, y) and (y′, q, z) is defined precisely if y = y′, in which case (x, p, y)(y, q, z) = (x, p + q, z), while inversion is (x, p, y)^{-1} = (y, −p, x). The unit space is naturally identified with X, and the range and source maps are then r(x, p, y) = x and s(x, p, y) = y. We first verify that this groupoid is jointly effective where it is effective. For open subsets U and V of X, the sets of the form

Z(U, p, q, V) = {(x, p − q, y) : x ∈ U, y ∈ V, T^p(x) = T^q(y)}

comprise a basis for a locally compact Hausdorff étale topology on G_T. The groupoid G_T is amenable, and hence inner-exact [SW16, Section 3].

For the rank-one Deaconu-Renault groupoids, we can describe explicitly the points that are not effective. For p ∈ N_+, let

P_p = {x ∈ X : T^p pointwise fixes a neighbourhood of x}    (5.1)

and let P = ∪_{p=1}^∞ P_p. Then P is open and invariant in X and the restricted system (P, T) is reversible.

Lemma 5.1. For a local homeomorphism T on a locally compact Hausdorff space X, we have

X \ X_eff = {x ∈ X : orb_T(x) ∩ P ≠ ∅}.    (5.2)

Proof. Let V := {x ∈ X : orb_T(x) ∩ P ≠ ∅} and let G = G_T be the Deaconu-Renault groupoid of T. It is straightforward to verify that V is open and invariant in X. We verify (5.2) one inclusion at a time. Let x ∈ V and choose l ∈ N such that x′ := T^l(x) ∈ P_p for some p ∈ N_+. Pick an open set U ⊆ X containing x′ all of whose points are p-periodic, and consider the open bisection given by

B = {(y, p, y) ∈ G : y ∈ U}.    (5.3)

Note that B ⊆ I°(G) \ X. In particular, I°(G)_{x′} contains (x′, p, x′), so x′ is not effective, and by invariance x is not effective.

For the other inclusion, suppose x is not effective. Then (x, p, x) ∈ I°(G)_x for some p ∈ N_+. Fix an open bisection B in I°(G) \ X containing (x, p, x). We may assume that T^p(x) = x, so by shrinking B we may assume that B ⊆ Z(U, p, 0, U) for some open subset U of X. Then T^p(y) = y for every y ∈ s(B) since B ⊆ I(G). Therefore, x ∈ V.
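As a quick illustration of Lemma 5.1 (a worked example of ours, not taken from the paper), consider a rational rotation of the circle, where every point lies in P:

```latex
% X = \mathbb{T}, \quad T(z) = e^{2\pi i p/q} z \text{ with } \gcd(p,q) = 1.
% Then T^q = \mathrm{id}_X, so T^q pointwise fixes all of X:
P_q = X, \qquad P = X, \qquad \operatorname{orb}_T(x) \cap P \neq \emptyset \ \text{ for every } x,
% and Lemma 5.1 gives X \setminus X_{\mathrm{eff}} = X: no unit of G_T is effective,
% matching the fact that every unit has isotropy \{(x, nq, x) : n \in \mathbb{Z}\} \cong \mathbb{Z}.
```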
Next we show that any Deaconu-Renault groupoid G_T is jointly effective where it is effective. The result actually only depends on the nontrivial isotropy being infinite cyclic, so we record this more general result here.

Lemma 5.2. Any Hausdorff étale groupoid G whose nontrivial isotropy is infinite cyclic is jointly effective where it is effective.

Proof. Let x ∈ G^(0) be a point with nontrivial isotropy and suppose B_1, ..., B_N are open bisections in G such that each B_i contains an element γ_i ∈ I(G)_x \ G^(0). Since the isotropy group at x is infinite cyclic there are minimal integers p_1, ..., p_N such that γ_i^{p_i} = γ_j^{p_j} for all i, j = 1, ..., N. Put γ := γ_1^{p_1}. Then B := ∩_{i=1}^N B_i^{p_i} is an open bisection containing γ.

Assume now that x is effective. So whenever U ⊆ G^(0) is an open neighbourhood of x, there is a point y ∈ U such that r(By) ≠ y. Applying this to a neighbourhood basis of x, we can find a sequence (y_n)_n in G^(0) such that y_n → x and r(By_n) ≠ y_n for all n. We show that G is jointly effective at x. It suffices to show that for large n, we have r(B_i y_n) ≠ y_n for all i = 1, ..., N. Since B_1^{p_1} contains γ, we have x ∈ s(B_1^{p_1}), so y_n ∈ s(B_1^{p_1}) for large n, and since B ⊆ B_1^{p_1} we see that r(B_1^{p_1} y_n) = r(By_n). If r(B_1 y_n) = y_n, then y_n = r(B_1^{p_1} y_n) = r(By_n), which contradicts our choice of y_n. So for large n we have r(B_1 y_n) ≠ y_n as required, and the same argument applies to each i. Hence G is jointly effective at x.

As an immediate corollary we see that the groupoids built from a local homeomorphism T on a locally compact Hausdorff space X, called rank-one Deaconu-Renault groupoids, are covered by the above result.

Corollary 5.3. Any rank-one Deaconu-Renault groupoid is jointly effective where it is effective.

5.2. Partial actions. Our notion of being jointly effective for groupoids is directly inspired by Ara and Lolk's notion of relative strong topological freeness for partial actions [AL18, Section 7]. A partial action θ : Γ ↷ X of a countable discrete group Γ on a locally compact Hausdorff space X is topologically free at x ∈ X if whenever θ_g(x) = x for some g ∈ Γ \ {1}, for any open neighbourhood U of x there exists y ∈ U such that θ_g(y) ≠ y. We say θ is strongly topologically free at x if for any finite collection g_1, ..., g_k ∈ Γ \ {1} such that θ_{g_i}(x) = x and any neighbourhood U around x, there exists y ∈ U such that θ_{g_i}(y) ≠ y for all i = 1, ..., k. Finally, θ is relatively strongly topologically free if it is strongly topologically free at all points at which it is topologically free.

Following [Ab04, Section 2], a partial action θ : Γ ↷ X has an associated groupoid G_θ whose unit space G_θ^(0) is naturally identified with X. Elements (x, g, y) and (y′, g′, z) in G_θ are composable if and only if y = y′, in which case (x, g, y)(y′, g′, z) = (x, gg′, z). Inversion is given by (x, g, y)^{-1} = (y, g^{-1}, x). The source and range maps s, r : G_θ → X are s(x, g, y) = y and r(x, g, y) = x. The groupoid G_θ carries a locally compact Hausdorff étale topology.

Lemma 5.4. Let θ : Γ ↷ X be a partial action of a countable discrete group Γ on a locally compact Hausdorff space X. Then θ is topologically free at x ∈ X if and only if G_θ is effective at x. Moreover, θ is strongly topologically free at x if and only if G_θ is jointly effective at x. In particular, θ is relatively strongly topologically free if and only if G_θ is jointly effective where it is effective.

Proof. Suppose that θ is not strongly topologically free at x.
There exist g_1, ..., g_k ∈ Γ \ {e} that all fix x, and a neighbourhood U of x such that for every y ∈ U there exists i such that θ_{g_i}(y) = y. For each i, define B_i := {(θ_{g_i}(y), g_i, y) : y ∈ U}. Then each B_i is a bisection containing (x, g_i, x), and there is no y ∈ ∩_i s(B_i) = U such that r(B_i y) ≠ y for all i. So G_θ is not jointly effective at x. Taking k = 1 shows that if θ is not topologically free at x then G_θ is not effective at x.

Now suppose that G_θ is not jointly effective at x. Fix elements γ_1, ..., γ_k ∈ I(G_θ)_x \ {x} and open bisections B_i containing γ_i such that for each y ∈ ∩_i s(B_i) there exists i such that r(B_i y) = y. By definition of G_θ, each γ_i = (x, g_i, x) for some g_i ∈ Γ \ {e}. By definition of the topology on G_θ, for each i there is an open neighbourhood U_i of x such that {(θ_{g_i}(y), g_i, y) : y ∈ U_i} ⊆ B_i. Now U = ∩_i U_i is a neighbourhood of x, and for each y ∈ U there exists i such that r(B_i y) = y, that is, θ_{g_i}(y) = y. So θ is not strongly topologically free at x. Again, taking k = 1 throughout shows that if G_θ is not effective at x then θ is not topologically free at x.

The final statement follows by definition.
11,291
2022-11-11T00:00:00.000
[ "Mathematics" ]
Link prediction in social network based on local information and attributes of nodes

Link prediction is essential to both research areas and practical applications. In order to make full use of the information of the network, we propose a new method to predict links in the social network. Firstly, we extract topological information and attributes of nodes in the social network. Secondly, we integrate them into feature vectors. Finally, we use the XGB classifier to predict links using the feature vectors. By expanding the information source, experiments on a co-authorship network suggest that our method can improve the accuracy of link prediction significantly.

Introduction

Link prediction plays an important role in the field of data mining [1]. The traditional methods are based on topological information and have been widely used in various fields [1], like social recommendation, information retrieval and other fields. It is also very helpful to solve the problem of information overload [2][3]. The traditional methods can be divided into two categories: the first category is based on local information, like CN [4], and the other is based on global information, like Katz [5].

Centrality is used to describe the importance of nodes in the network [6][7]. If the centrality of a node is higher than others, it is easier for it to attract attention and construct new links with other nodes in the network, so researchers improve the accuracy of link prediction methods with the help of centrality [8]. Some researchers construct feature vectors using characteristics including centrality of nodes to predict links [9][10], and some researchers use machine learning methods to improve the performance of link prediction methods [11].

It has become more convenient for people to communicate with others using social media websites, which produce a huge amount of information. In order to improve the accuracy of link prediction, researchers have begun to pay attention to the relevance between the content and structure of the social network, and then design new methods to predict links effectively [12][13]. Many researchers have used characteristics of social networks to improve the accuracy of link prediction, with good results [14][15]. According to [12], people belonging to the same social network tend to form a local community, so they defined a new index, CAR, which can improve the effect of the classical CN index significantly. CAR is also better than the AA index [14] and the RA index [15].

The relationship in social networks is dynamic with the change of time and network context [9]. In a co-authorship network, if two authors have no co-authorship at the moment, how can we predict whether they will construct a relationship in the future? In this paper, we focus on the topology and context of the network and consider attributes of nodes, then design feature vectors to describe characteristics of nodes. In addition, this kind of framework is suitable for most social networks since the calculation of the features is very easy. Our method is based on classification using the XGB classifier, and experiments on a part of DBLP (Digital Bibliography Library Project) show that our method can predict links effectively.

Related Definitions and Concepts

In order to formalize the characteristics of a social network, researchers usually use graph theory to model it. For a social network, each individual is considered as a node, and the relationship between two individuals is considered as an edge. Here we give some basic definitions.

Definition 1. Social network. A social network is modeled as a graph G = (V, E), where each node in V corresponds to an individual and each edge in E corresponds to a relationship between two individuals.
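A minimal sketch of the graph model in Definition 1, using the networkx library; the toy author lists are our own placeholders, not data from the paper:

```python
import networkx as nx

# Each paper contributes an edge between every pair of its co-authors.
papers = [["alice", "bob"], ["bob", "carol", "dave"], ["alice", "carol"]]

G = nx.Graph()
for authors in papers:
    for i in range(len(authors)):
        for j in range(i + 1, len(authors)):
            G.add_edge(authors[i], authors[j])  # nodes are added implicitly

print(G.number_of_nodes(), G.number_of_edges())  # 4 nodes, 5 edges
```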
If nodes v_i and v_j have no direct edge at the moment, then the CAR index [12] is defined as in equation (2), which multiplies the number of common neighbors of v_i and v_j by the number of links among those common neighbors.

We choose the XGB classifier [16] to predict links. Babajide [17] proposed an algorithm to predict molecular activities based on XGB, and experiments show that it is more accurate than some popular algorithms at present, like support vector machines, random forests, naive Bayes and so on.

Methods and Experiments Setup

Common neighbors of two users often have an impact on the construction of new links between strangers, so we first propose the concept of relevant force to describe the influence of CN nodes, and design the modified CN index (MCN) by combining relevant force. Then we design some other features to describe attributes of nodes. Finally, we construct feature vectors using those features to predict links. The RF index is defined as in equation (5).

Experiments Setup

DBLP is a collection of English literature in the field of computer science which lists information of authors and papers. Link prediction on DBLP refers to predicting co-author relationships between authors in the future based on information within a specific period. An author corresponds to a node, and a co-author relationship corresponds to an edge. We pay attention to authors' research fields, and integrate co-authorship information into MCN. Here are some definitions based on DBLP.

Similarity of research area. In the field of computer science, different international academic conferences focus on different research areas. We select 15 international conferences in DBLP and divide them into four areas. The details are as follows: 1) Data mining: KDD, ICDM, PAKDD; 2) Database: SIGMOD, VLDB, ICDE, PODS; 3) Information retrieval: SIGIR, ECIR, WWW, CIKM; 4) Machine learning: ICML, NIPS, AAAI, CVPR.

We assume that if an author published a paper at an international conference which belongs to a specific research area, there is a link between the author and the research area. Then we regard the number of links between authors and research areas as a feature which expresses the author's interests. We use cosine similarity of feature vectors to represent the similarity between authors. We regard A = (A1, A2, ..., An) and B = (B1, B2, ..., Bn) as the authors' feature vectors; then the cosine similarity is defined as in equation (6):

sim(A, B) = (Σ_{i=1}^{n} A_i B_i) / (√(Σ_{i=1}^{n} A_i²) · √(Σ_{i=1}^{n} B_i²)).

We define the vector Confer to represent an author's interests, so the similarity of research areas is defined as ConfSim, which is calculated as in equation (7). Unlike the MCN index, CoPub and ConfSim are indexes designed according to the characteristics of DBLP. According to the specific conditions, we can extract different kinds of domain knowledge from different fields and integrate them into the feature vectors, all of which will be helpful to improve the accuracy of link prediction methods.

Results and Discussion

After obtaining information such as the network topology and attributes of nodes, we can calculate the features described above and construct the feature vectors.

Data Set

We select a part of DBLP and define it as DBLP_fourarea, which contains 1500 authors and 6303 co-author relationships. It includes information of authors and papers published in the 15 international conferences mentioned in 3.2.1 during the period from 2004 to 2010. Then we select information of DBLP_fourarea from 2004 to 2008 as the train set, and regard the information from 2009 to 2010 as the test set. Our aim is to predict co-author relationships which appear in the test set but do not exist in the train set.
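The feature pipeline and the XGB step can be sketched as follows. This is our hedged reconstruction: the 4-dimensional Confer vectors (one count per research area) and the toy feature rows are assumptions for illustration, not the paper's actual features or data.

```python
import numpy as np
from xgboost import XGBClassifier

def conf_sim(confer_a, confer_b):
    # Cosine similarity of two authors' research-area count vectors, per eq. (6)/(7).
    a, b = np.asarray(confer_a, float), np.asarray(confer_b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# One row per candidate author pair: [MCN, CAR, ConfSim, CoPub]; label 1 means
# the pair co-authors a paper in the test period (all values are placeholders).
X = np.array([
    [4.0, 9.0, conf_sim([3, 0, 1, 0], [2, 0, 2, 0]), 2.0],
    [0.0, 0.0, conf_sim([1, 2, 0, 0], [0, 0, 3, 1]), 0.0],
    [1.0, 1.0, conf_sim([0, 1, 1, 0], [0, 2, 1, 0]), 0.0],
    [3.0, 4.0, conf_sim([2, 0, 0, 2], [1, 0, 0, 3]), 1.0],
])
y = np.array([1, 0, 0, 1])

clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict_proba(X)[:, 1])  # predicted link probabilities
```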
Evaluation Standards

In this paper, we choose the F-value and AUC to evaluate the performance of our method. The F-value combines precision and recall as their harmonic mean, so it is more reliable than either alone. The traditional F-value is defined as in equation (9):

F = 2 × precision × recall / (precision + recall).

The area under the curve (AUC) describes the ability of the classifier to classify the patterns submitted correctly. It is defined as in equation (10):

AUC = Σ_i (x_{i+1} − x_i)(y_i + y_{i+1}) / 2,

where x_i and y_i represent the X-axis and Y-axis values of the ROC curve respectively.

Experiments and Analysis

The experimental results of the single indexes based on DBLP_fourarea are shown in Table 1. As can be seen from Table 2, the feature vector which combines several indexes improves the performance of link prediction dramatically, which indicates that expanding the information source is effective. In addition, it can be seen that the feature vector combined with attributes of nodes yields a certain improvement compared with any single index, so we should utilize attributes of nodes and network context to enhance the accuracy of link prediction.

Conclusion

We propose a new method to predict links in social networks. We calculate some indexes representing topological information and attributes of nodes to construct feature vectors, and predict links based on the XGB classifier. Experiments show that our method improves the accuracy of link prediction effectively. In addition, the components of the feature vector are easily calculated, so the method can be applied to many types of networks and allows us to pay more attention to domain knowledge.

Acknowledgements

This work was supported by National Natural Science Foundation of China (NSFC Grant No.
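Equations (9) and (10) above translate directly into code; the ROC points below are invented to exercise the functions, not results from Tables 1-2.

```python
import numpy as np

def f_value(precision, recall):
    # Harmonic mean of precision and recall, eq. (9).
    return 2 * precision * recall / (precision + recall)

def auc(xs, ys):
    # Trapezoidal area under the ROC curve through points (x_i, y_i), eq. (10).
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return float(np.trapz(ys, xs))

print(f_value(0.8, 0.6))                      # 0.6857...
print(auc([0.0, 0.2, 1.0], [0.0, 0.9, 1.0]))  # 0.85
```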
1,958.6
2017-08-01T00:00:00.000
[ "Computer Science" ]
Type II chiral affine Lie algebras and string actions in doubled space

We present affine Lie algebras generated by the supercovariant derivatives and the supersymmetry generators for the left and right moving modes in the doubled space. Chirality is manifest in our doubled space as well as the T-duality symmetry. We present gauge invariant bosonic and superstring actions preserving the two-dimensional diffeomorphism invariance and the kappa-symmetry, where the doubled spacetime coordinates are chiral fields. The doubled space becomes the usual space by dimensional reduction constraints.

Introduction

The low energy effective field theory of string theory has the T-duality symmetry. The worldsheet origin of T-duality is the mixing of momentum and winding modes of a string. These string modes are generalized to the supercovariant derivatives for a superstring, which satisfy the affine super-Lie algebra [1]-[3]. Doubling the spacetime coordinates makes the T-duality manifest [4]-[7]. A manifestly T-duality-symmetric formulation based on the affine Lie algebra in the doubled space was proposed in [7]. The affine Lie algebra determines the new Lie bracket, or equivalently the C-bracket, which gives rise to the stringy modification of the general coordinate transformation. The generalized geometry proposed in [8] is described by the Courant bracket, which is reduced from the C-bracket by dimensional reduction. Generalized geometry and double field theory have been widely studied [9]-[15]; review articles are [16,17]. Recently we have proposed a manifestly T-duality formulation for a type II superstring [18,19] and the one with the Ramond-Ramond gauge fields [20].

For a type II supersymmetric extension of the manifestly T-duality formalism, the chiral separation of affine Lie algebras is an essential problem. We specify chirality as left and right moving modes on the two-dimensional worldsheet. Chiral currents for a bosonic string in a nonabelian background are constructed by the Wess-Zumino-Witten model [21]. A group element g = g(σ^+)g′(σ^−) is considered, where σ^± are the left and right moving two-dimensional coordinates. The left moving current is constructed as the right-invariant current, ∂_+g g^{-1} = ∂_+g g^{-1}(σ^+), while the right moving current is constructed as the left-invariant current, g^{-1}∂_−g = g′^{-1}∂_−g′(σ^−). However, it is known that for supersymmetric theories the local supersymmetry generator (supercovariant derivative) is obtained from the left-invariant current, while the global supersymmetry generator is obtained from the right-invariant current. For a type II superstring theory both the supercovariant derivative and the supersymmetry generator must have both the left and right moving modes, not one for each. Although the chiral separation of the supercovariant derivative algebra for a superstring on the anti-de Sitter space is obtained on the constrained surface [22], the chiral separation of currents for a superstring in nonabelian backgrounds is in general still difficult.

In our formulation we begin with two independent Lie groups G and G′, which are parameterized by Z^M and Z^{M′}. For a direct product of the groups G×G′ a group element satisfies g = g(Z^M)g′(Z^{M′}) = g′(Z^{M′})g(Z^M). Therefore the left-invariant and the right-invariant one forms contain both the left and right moving modes, as g^{-1}dg = g^{-1}dg + g′^{-1}dg′ = J(Z^M) + J(Z^{M′}) and dg g^{-1} = dg g^{-1} + dg′ g′^{-1} = J̃(Z^M) + J̃(Z^{M′}). This is similar to the nonabelian currents given by Tseytlin [5].
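The chiral split of the WZW currents quoted above follows from the equations of motion; for orientation we record the standard one-line statement (textbook material, not a result of this paper):

```latex
% WZW equations of motion in light-cone worldsheet coordinates \sigma^\pm:
\partial_-\bigl(\partial_+ g\, g^{-1}\bigr) = 0
\quad\Longleftrightarrow\quad
\partial_+\bigl(g^{-1}\partial_- g\bigr) = 0 ,
% so the right-invariant current \partial_+ g\, g^{-1} depends only on \sigma^+
% (left moving), while the left-invariant current g^{-1}\partial_- g depends
% only on \sigma^- (right moving).
```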
The chiral scalar action used in that paper does not preserve the two-dimensional diffeomorphism invariance [23]. In our formulation a chiral scalar action preserves the two-dimensional diffeomorphism invariance, allowing the κ-symmetry for the superstring action. The doubled coordinates are chiral fields, Z^M(σ^+) and Z^{M′}(σ^−). The stringy geometry is governed by the affine Lie algebras generated by the chiral supercovariant derivatives. The covariant derivatives are still manifestly chiral even after reducing to the usual space by dimensional reduction constraints, since the dimensional reduction constraints are given by the auxiliary symmetry generators.

In order to construct a doubled space there are two types of doubling of a group G:

1. Semidirect product, G → G⋉G*: A Lie group G is generated by g corresponding to derivative operators which include momenta, while another group G* is generated by g* corresponding to one form currents which include winding modes. This is a conventional way of doubling discussed in, for example, [24,8,33]. It gives the following inhomogeneous algebra

[g, g] = g , [g, g*] = g* , [g*, g*] = 0 . (1.1)

2. Direct product, G → G×G′: G and G′ are independent copies generated by g and g′ corresponding to the left and right moving modes respectively. They satisfy the following algebra

[g, g] = g , [g′, g′] = g′ , [g, g′] = 0 . (1.2)

It is straightforward to construct one form currents and derivative operators which satisfy (1.1). In terms of them we present a general construction of chiral currents satisfying (1.2). This construction requires that the Lie group must have a nondegenerate group metric and the Lie algebra must have grading by the canonical dimension.

The organization of the paper is the following. In the next section we present a general construction of chiral affine Lie algebras generated by the supercovariant derivative and the symmetry generator. We begin with a Lie algebra with a nondegenerate group metric [27,28]. It is necessary to include the Lorentz generator S_mn and its nondegenerate partner Σ_mn for the construction of two independent affine Lie algebras generated by the covariant derivative, P_m, and the symmetry generator, P̃_m. The Lie algebra is graded by a dilatation operator which plays an important role in this construction. Generators of the chiral affine algebras include the B field. For a flat background the B field can be written in terms of the dilatation operator. In section 3 we present chiral affine Poincaré algebras in the doubled space and concrete expressions of the generators. A set of dimensional reduction constraints is examined which reduces the doubled space to the usual space while preserving the local geometry. We present a gauge invariant action for a bosonic string in the doubled space. Then we demonstrate that the string action in the doubled space is reduced to the usual action. In section 4 we present chiral affine super-Poincaré algebras in the doubled space. Then a gauge invariant action for a type II superstring in the doubled space is given. The super-doubled space is also reduced to the usual space.

Chiral affine Lie algebras

In this section we present a general construction of two sets of affine Lie algebras generated by the covariant derivatives and the symmetry generators.
• Affine Lie algebras: particle → string.

Lie algebra G_I:
covariant derivative ∇_I → ⊲_I(σ)
symmetry generator ∇̃_I → ⊲̃_I(σ)

Next we double the Lie algebra as a direct product:

• Doubled chiral affine Lie algebras

The doubled coordinates make chirality manifest as well as the T-duality symmetry. In subsection 2.1, for a given Lie algebra we first construct the left-invariant current J, the right-invariant current J̃, the particle covariant derivative ∇, and the particle symmetry generator ∇̃. The derivatives and the currents satisfy the case 1 algebra in (1.1). The canonical dimensions of operators are expressed by an eigenvalue matrix of the dilatation operator. In subsection 2.2 we give the general construction of the affine Lie algebras for the string covariant derivative ⊲ and the string symmetry generator ⊲̃. They are linear combinations of the particle derivatives, ∇, ∇̃, and the σ-components of the currents, J_1, J̃_1, with the B field as coefficients. In subsection 2.3 the Lie algebra is doubled. This doubling corresponds to the case 2 algebra in (1.2), and this way of doubling gives chiral affine Lie algebras.

Derivative operators and one form currents

A Lie algebra is generated by G_I with [G_I, G_J} = f_IJ^K G_K, where [A, B} = AB − (−)^{AB} BA is the graded commutator. A nondegenerate group metric η_IJ is introduced in such a way that the structure constant f_IJK with lowered indices becomes totally graded antisymmetric. We introduce a dilatation operator N̂ whose eigenvalues are canonical dimensions n_I, as [N̂, G_I] = N_I^J G_J with N_I^J = n_I δ_I^J (no sum). The Jacobi identity of the dilatation operator N̂ and the Lie algebra generators G_I gives the identity

N_I^L f_LJ^K + N_J^L f_IL^K − f_IJ^L N_L^K = 0 .    (2.4)

The sum of canonical dimensions of a nondegenerate pair is set to be n_0, and the sum of canonical dimensions of the three lowered indices of the structure constant in (2.4) becomes also n_0. We take n_0 = 2 in order to choose n_P = 1 for η_PP = 1.

Particle derivatives of the Lie algebra, ∇_I and ∇̃_I, are constructed first. An element of the Lie group g = g(Z) is parameterized by the coordinates Z^M. There are two kinds of currents which are invariant under right or left multiplicative actions:

• The left-invariant current J = g^{-1}dg and the particle covariant derivative ∇_I generating the right multiplication g_0 → g_0 g.

• The right-invariant current (Noether current) J̃ = dg g^{-1} and the particle symmetry generator ∇̃_I generating the left multiplication g_0 → g g_0.

• Independence of the right and left multiplications: [∇_I, ∇̃_J} = 0.

Particle derivatives are extended to the string affine algebra generators: ∇_I → ∇_I(σ) and ∇̃_I → ∇̃_I(σ). The τ and σ components of the currents are denoted by J^I = dσ^i J^I_i = dτ J^I_0 + dσ J^I_1 and J̃^I = dτ J̃^I_0 + dσ J̃^I_1. The currents J^I_1 and J̃^I_1 carry the canonical dimension 2 − n_I, since ∂_σ carries the canonical dimension 2, where α′ is suppressed. The indices of currents are lowered with η_IJ as J_I ≡ J^L η_LI and J̃_I ≡ J̃^L η_LI, and they are covariant under ∇_I and ∇̃_I respectively. Derivatives and currents satisfy the case 1 semidirect product G⋉G* in (1.1), where ∇ ∈ g, J^I_1 ∈ g* and ∇̃ ∈ g, J̃^I_1 ∈ g*.

Affine Lie algebras and B field

We construct two independent sets of affine Lie algebras generated by the covariant derivative and by the symmetry generator.
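The grading identity (2.4) is a one-line Jacobi computation; in the diagonal basis N_I^J = n_I δ_I^J it reduces to a scalar statement. The following explicit check is ours, under the convention [N̂, G_I] = n_I G_I reconstructed above:

```latex
[\hat N, [G_I, G_J\}] \;=\; (n_I + n_J)\,[G_I, G_J\} \;=\; (n_I + n_J)\, f_{IJ}{}^{K} G_K ,
% while writing [G_I, G_J\} = f_{IJ}{}^{K} G_K first and then acting with \hat N
% gives f_{IJ}{}^{K}\, n_K\, G_K; comparing coefficients yields
(n_I + n_J - n_K)\, f_{IJ}{}^{K} = 0 \qquad (\text{no sum over } K),
% i.e. the structure constants only connect generators whose canonical
% dimensions add up correctly.
```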
• Covariant derivative ⊲_I:
\[
[⊲_I(\sigma_1),\, ⊲_J(\sigma_2)\} = i f_{IJ}{}^{K}\, ⊲_K\,\delta(\sigma_2-\sigma_1) + i\,\eta_{IJ}\,\partial_\sigma\delta(\sigma_2-\sigma_1)\ . \tag{2.18}
\]

• Symmetry generator ⊲̃_I:
\[
[⊲̃_I(\sigma_1),\, ⊲̃_J(\sigma_2)\} = i f_{IJ}{}^{K}\, ⊲̃_K\,\delta(\sigma_2-\sigma_1) - i\,\eta_{IJ}\,\partial_\sigma\delta(\sigma_2-\sigma_1)\ . \tag{2.19}
\]

• The covariant derivative ⊲_I and the symmetry generator ⊲̃_I commute with each other,
\[
[⊲_I(\sigma_1),\, ⊲̃_J(\sigma_2)\} = 0\ . \tag{2.21}
\]

The algebras (2.18), (2.19) and (2.20) give conditions on b_IJ and b̃_IJ. The symmetric parts of b_IJ and b̃_IJ are uniquely determined from the signature of the Schwinger terms, the terms including ∂_σδ(σ_2 − σ_1), of the affine Lie algebras (2.18) and (2.19). Requiring the coefficient of δ(σ_2 − σ_1) in (2.20) to vanish and using (2.16) leads to a further condition. A simple solution for b_IJ and b̃_IJ is obtained from the Jacobi relation for N_I{}^J given in (2.4); in this solution B_IJ is constant and M_I{}^J depends on the parameters. There is an ambiguity in the solutions for b_IJ and b̃_IJ, which can be interchanged. The generators of the affine algebras then take the form given in (2.26). The B field enters the Wess-Zumino term as in (2.24). The three-form H = dB follows from (2.24) and is closed, dH = 0. "Chirality" from doubling Then we double the affine Lie algebra generated by the covariant derivative (2.18) and the one generated by the symmetry generator (2.19). The doubled affine Lie algebras are given as follows:

• Covariant derivative ⊲_M:
\[
[⊲_M(\sigma_1),\, ⊲_N(\sigma_2)\} = i f_{MN}{}^{L}\, ⊲_L\,\delta(\sigma_2-\sigma_1) + i\,\eta_{MN}\,\partial_\sigma\delta(\sigma_2-\sigma_1)\ , \tag{2.31}
\]
where the doubled index M runs over both copies.

• The covariant derivative ⊲_M and the symmetry generator ⊲̃_M commute with each other.

The signature of the Schwinger term ∂_σδ(σ_2 − σ_1) in (2.31) corresponds to the left or right chirality. Two-dimensional diffeomorphisms are generated by the Virasoro operators H_τ and H_σ, which satisfy the Virasoro algebra (2.35). The two-dimensional diffeomorphism transformation of a function Φ(Z^M) of the doubled coordinates and the two-dimensional left and right derivatives are defined accordingly. Therefore the covariant derivatives satisfy left or right chiral conditions, which follow from the chiral property of the doubled coordinates Z^M. For example, the left-invariant currents J^M_i(σ⁺) are functions of σ⁺ only, satisfying the Maurer-Cartan equation. The affine Lie algebra (2.31) transforms covariantly, preserving the structure of G×G′. Under the global O(n,n) transformation the left and right moving modes are mixed, and they are no longer chiral operators in general. Under the O(n,n) transformation the two-dimensional σ-diffeomorphism constraint, H_σ, is inert, but the τ-diffeomorphism constraint, H_τ, induces the O(n,n) transformation on the gravitational background fields E_A{}^M. The O(n,n) transformation is recognized as a coordinate transformation in the doubled space. A group element g(Z) is transformed under the right multiplicative action of O(n,n), denoted g → g∆. Under this right multiplicative action the left-invariant one-form J = g⁻¹dg and the right-invariant one-form J̃ = dg g⁻¹ transform accordingly; the right-invariant one-form is inert under the global O(n,n) transformation, d∆ = 0 ⇒ δJ̃ = 0. Hence the symmetry generator is inert under the global O(n,n) transformation, which rotates the covariant derivative. Bosonic string action in doubled space We begin with the nondegenerate Poincaré algebra including both the Lorentz generator and its nondegenerate partner [22,28]. Then it is doubled to construct chiral affine Poincaré algebras [26,27], [18]-[20]. Concrete expressions of the covariant derivatives and the symmetry generators for the left and right moving modes are given. The dimensional reduction constraints are examined for consistency, chirality and behavior under the O(n,n) transformation. A gauge invariant bosonic string action is presented, and the dimensional reduction of the doubled space action is demonstrated.
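For orientation, the global O(n,n) transformations referred to here are, in the standard doubled-space convention (a textbook definition, not a result of this paper), the linear maps preserving the off-diagonal doubled metric:
\[
\Lambda \in O(n,n):\qquad \Lambda^{T}\eta\,\Lambda = \eta\ ,\qquad
\eta = \begin{pmatrix} 0 & \mathbf{1}_n \\ \mathbf{1}_n & 0 \end{pmatrix}\ ,
\]
acting on the doubled index M. In this convention H_σ is built from the invariant metric η and is therefore inert, while H_τ involves the background fields and transforms, consistent with the statements above.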
Doubled chiral Poincaré generators Generators of the nondegenerate Poincaré algebra are given by G_I = (s_mn, p_m, σ_mn) with canonical dimensions (0, 1, 2) respectively. The algebra is given in (3.1), and the nondegenerate group metric pairs s_mn with σ_mn and p_m with p_m. The nondegenerate Poincaré algebra (3.1) is extended to doubled affine Lie algebras. The covariant derivatives and the symmetry generators with the constant b_IJ solution in (2.25) are given as follows:

• Covariant derivatives: flat left sector and flat right sector.

• Symmetry generators: these include the coefficients c_N{}^M, which are determined from (2.23) and (2.24).

The rescaled currents with parameters α and β satisfy the algebra modified from (2.18), with the analogous rescaling for the other sector. The usual notation of the left and right moving modes is P_m = ⊲_P + J_{P1} for α = 2, β = 1, or P_m = (1/√2)(⊲_P + J_{P1}) for α = 1, β = 2. The Virasoro operators for a bosonic string in a flat space, H_τ and H_σ, satisfy the Virasoro algebra in (2.35). The two-dimensional chirality is determined by the Virasoro operators (2.40). A group element for the nondegenerate Poincaré algebra in (3.1) is then parameterized accordingly, and the s_mn component of the left-invariant current is computed; the detailed computation is given in appendix B. From that expression it is obvious that there is no difference between J^P and J̃^P if u is absent. The covariant derivatives ∇_M and the symmetry generators ∇̃_M are obtained accordingly. Dimensional reduction constraints In this section the procedure of the dimensional reduction of the doubled space to the usual space is presented. The doubled space is defined by the covariant derivatives ⊲_M given in (3.3). Fields are functions of the doubled coordinates Z^M. This enlarged space contains auxiliary dimensions, which are reduced if we impose the following constraints. Section condition (strong constraint): in the curved space the covariant derivative operators are multiplied by the vielbein superfield E_A{}^M [27], [18]-[20], which can be chosen orthonormal in the doubled space. The Virasoro operators (3.8) in the curved space are written in terms of these curved-space derivatives. Dimensional reduction constraints: for the nondegenerate pairs of generators (S_mn, Σ_mn) and (S_m′n′, Σ_m′n′), the Σ_mn and Σ_m′n′ directions are auxiliary dimensions, so these dimensions should be reduced. But Σ_mn = Σ_m′n′ = 0 cannot be imposed as first class constraints, since they do not commute with the isotropy constraints (3.22). Instead, the corresponding symmetry generators can be imposed as first class constraints [3,27,18,20]:
\[
\tilde\Sigma_{mn} = \tilde\Sigma_{m'n'} = 0\ . \tag{3.23}
\]
Left/right mixing dimensional reduction constraint: we also impose a further dimensional reduction constraint, which mixes the left and right sectors, to reduce the doubled space to the usual space. The covariant derivatives P_m and P_m′ are dynamical degrees of freedom subject to the Virasoro constraints, while one combination of the symmetry generators can be used as a first class constraint [27]:
\[
\tilde P_m + \gamma \tilde P_{m'} = 0\ . \tag{3.24}
\]
The commutation relation between the constraints in (3.24) requires the dimensional reduction constraints Σ̃_mn = Σ̃_m′n′ = 0 in (3.23), and the Schwinger term cancels out only for γ² = 1. If the constraint is chosen as P̃_m − P̃_m′ = 0 with γ = −1, then the sum of the left and right momenta, P̃_m + P̃_m′, corresponds to the total momentum of the usual space.
Summarizing the above, the dimensional reduction constraints for a bosonic string in the doubled coordinate space are the isotropy constraints (3.22), the dimensional reduction constraints (3.23) and the left/right mixing dimensional reduction constraint (3.24), in addition to the section constraint (3.21). The two-dimensional diffeomorphism generators are modified by the dimensional reduction constraints as in (3.26). The covariant derivatives are still manifestly chiral up to a trivial Lorentz rotation. Bosonic string action The Hamiltonian for a bosonic string in the doubled space and the corresponding action are constructed as follows; in this section we obtain a gauge invariant action without specifying a solution for the B field. From (2.21) and (2.25) we use the form in which the *'s denote nonzero elements. The matrices M_P{}^P and M_P′{}^P′ are Lorentz rotation matrices; they are functions of only the Lorentz parameters, with canonical dimension 0. The triangular property of M_I{}^J leads to the corresponding relations for the tilded quantities. Using (3.34) and (3.36), the Lagrangian becomes the sum of the kinetic part L_0, the Wess-Zumino term L_WZ, the boundary term of the Wess-Zumino term L_WZ;0 and the constraint part L_const, after redefining the Lagrange multipliers. Variation of the action with respect to μ̃^P gives the left/right mixing dimensional reduction constraint. M_P′{}^P is a Lorentz rotation matrix which relates the left and the right spaces; a similar matrix was introduced in [25]. After integrating out both P and P′, the kinetic part is obtained, where α and β are the normalization parameters in (3.6) and (3.7). The two Lagrange multipliers correspond to the worldsheet metric. As a result, the gauge invariant action for a bosonic string based on the doubled nondegenerate Poincaré algebra is given in (3.42). If we impose the simple solution for the B field in (2.25), the Wess-Zumino term L_WZ is cancelled by the boundary term of the Wess-Zumino term L_WZ;0. For a general solution of the B field, L_WZ + L_WZ;0 is a total derivative. Taking the variation of the action (3.42) with respect to Z^M, the following first class constraints are derived: the Virasoro constraints in (3.8), the isotropy constraints (3.22), the dimensional reduction constraints (3.23) and the left/right mixing dimensional reduction constraint (3.24) (3.43). The gauge invariance generated by the above first class constraints is preserved. Gauge fixing Corresponding to the first class constraints (3.43) we can choose the following gauge fixing conditions. For the isotropy constraints (3.22) and the dimensional reduction constraints (3.23) the simplest gauge is a unitary gauge (3.44). Let us introduce two kinds of coordinates, X^m and Y^m, corresponding to the usual space coordinate and the dual coordinate respectively. The unitary gauge (3.44) gives J̃^S = J̃^S′ = 0, ∇̃_P = ∇_P, ∇̃_P′ = ∇_P′, J̃^P = J^P and J̃^P′ = J^P′. The momentum operators, rewritten from (3.17) and (3.18) in the unitary gauge in terms of X^m and Y^m, become the usual "dual coordinate relation" P̃_m ± P̃_m′ ⇔ ∂_i Y_m − ε_ij ∂^j X_m in a flat space. In contrast to the conventional relation ∂_i Y_m − ε_ij ∂^j X_m = 0 [4,5], we set only one of them to zero, P̃ − P̃′ = 0. This is the left/right mixing dimensional reduction constraint, which allows the following gauge fixing condition on Y^m. The momentum operators then become expressions in which P_m and P_m′ are the left and right moving modes in the usual space, and P̃_m + P̃_m′ is the total momentum of the space.
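As a flat-space illustration of this dual coordinate relation (our normalization, chosen only to make the chirality manifest), one can split the usual and dual coordinates into left and right movers:
\[
X^m = x_L^m(\sigma^+) + x_R^m(\sigma^-)\ ,\qquad
Y^m = x_L^m(\sigma^+) - x_R^m(\sigma^-)
\;\Rightarrow\;
\partial_\tau Y^m = \partial_\sigma X^m\ ,\quad
\partial_\sigma Y^m = \partial_\tau X^m\ ,
\]
so a relation of the schematic form ∂_i Y − ε_ij ∂^j X = 0 simply states that Y is the worldsheet dual of X. Imposing only the single combination P̃ − P̃′ = 0 then leaves the orthogonal combination free to act as the total momentum, as described above.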
The section condition (3.21) becomes simpler in the unitary gauge, where ≈ uses the local Lorentz constraints (isotropy constraints). A solution of (3.46) at the second quantized level is given by Ψ(X, Y) = e^{(i/2) Y·∂_σ X} Φ(X). The section condition reduces to the usual σ-diffeomorphism constraint (3.48) in the simple gauge (3.46). There is another simple gauge, u = u′, chosen such that M_P′{}^P = 1. In the unitary gauge u = u′ = 0 the currents become very simple in terms of the usual space coordinates (3.45): J^{P+}_i = ∂_i X and J^{P−}_i = ∂_i Y. The Lagrangian for a bosonic string in the unitary gauge u = u′ = 0 is rewritten as (3.49). The second term in (3.49), including the dual coordinate, is a total derivative in a flat space. This term gives the first class constraint ∂_i Y_m − ½ ∂_σ X_m = 0 in (3.46). In the further simple gauge Y^m = 0 for the section condition (3.21), the action reduces to the usual bosonic string action. Superstring action in doubled space In this section we construct a gauge invariant action for the type II superstring in the doubled space. At first we present the chiral affine super-Poincaré algebras. The dimensional reduction constraints are extended to the supersymmetric case. Then we write down a gauge invariant action without using a specific solution of the B field. Doubled chiral super-Poincaré generators For a superstring we use a nondegenerate super-Poincaré algebra generated by G_I = (s_mn, d_µ, p_m, ω^µ, σ_mn) with canonical dimensions (0, 1/2, 1, 3/2, 2) respectively. The algebra is given in (4.1) and the nondegenerate group metric in (4.2). The nondegenerate super-Poincaré algebra (4.1) is extended to doubled affine Lie algebras. The covariant derivatives and the symmetry generators (2.26) are given for the left and right sectors; they satisfy (2.18) and (2.19) with the superspace metric (4.2), up to the rescaling of the currents. The currents are rescaled with parameters α and β in such a way that they satisfy the algebra (3.6); the same rescaling is done for the other sectors. Superstring action In this section the supersymmetric extension of section 3 is presented. The doubled space is defined by the supercovariant derivatives ⊲_M, which are given in (4.3) and parameterized by the doubled super-coordinates Z^M. This enlarged space contains auxiliary dimensions, which are reduced by a set of first class constraints as in the bosonic case: constraint 1, the section condition (strong constraint), constraint 2, the isotropy constraints, and constraint 4, the left/right mixing dimensional reduction constraint, are the same as before; only constraint 3, the dimensional reduction constraints, acquires additional fermionic pieces. Dimensional reduction constraints: in order to describe the correct physical degrees of freedom, the unphysical dimensions introduced by the nondegeneracy of the group are eliminated. In order to preserve the isotropy constraints and the κ-symmetry, the symmetry generator currents are chosen as the constraints reducing the auxiliary dimensions [27,18,20]:
\[
\tilde⊲_M = 0 \ \ \text{for}\ n_M > 1 \quad\Leftrightarrow\quad \tilde\Sigma_{mn} = \tilde\Sigma_{m'n'} = \tilde\Omega^{\mu} = \tilde\Omega^{\mu'} = 0\ .
\]
For superstrings the Virasoro constraints are extended to the κ-symmetric Virasoro constraints, the ABCD constraints [2,22,20], with the same set of constraints for the right sector. The first class constraints for a type II superstring in a flat space are the ABCD constraints (4.7), the isotropy constraints (3.22), the dimensional reduction constraints (3.23) and the left/right mixing dimensional reduction constraint (3.24), together with the similar constraints for the right sector.
The Hamiltonian for a type II superstring is given in T-duality covariant form. The matrices ρ_MN are nilpotent metrics introduced to represent the BCD constraints [18], with the similar relation for the right sector. An action for a type II superstring in the doubled space follows. The relations analogous to (3.34) and (3.35) hold in the supersymmetric case; from the triangular property of the matrix M_I{}^J they can be rewritten, and the Lagrangian is then rewritten by a suitable redefinition of the Lagrange multipliers similar to (3.38), except for the ρ's. The first class constraints BCD are included in the ρ·D and ρ′·D′ terms in L_const. In order to compare it with the Green-Schwarz action, we use the second class constraints D_µ = D_µ′ = 0 instead of the first class constraints BCD = 0; these fermionic second class constraints are imposed in addition to the first class constraints. The kinetic term becomes the same as the bosonic one, L_0, while the Wess-Zumino term includes bilinears in the fermionic currents. The resultant gauge invariant action for a type II superstring in the doubled space is given in (4.14). For the normalization parameters in (3.6) and (3.7) it is natural to set αβ = 2. The Lagrange multipliers ρ and ρ′ are given explicitly. Taking the variation of the action (4.14) with respect to the super-coordinates Z^M, the following constraints are derived: the fermionic second class and the κ-symmetry first class constraints, the Virasoro constraints in (3.8), the isotropy constraints (3.22), the dimensional reduction constraints (3.23) and the left/right mixing dimensional reduction constraint (3.24) (4.17). Type IIA or IIB is determined by the gamma matrix chiral projection obtained from the algebra between the supercharges D̃_µ and D̃_µ′ and the total Lorentz charge (S̃ − S̃′)_mn. Under the global O(n,n) transformation, the S_mn and Σ_mn components of the O(n,n) matrix are treated in the same way as in the bosonic case (3.30). Its fermionic components, D_µ and Ω^µ, involve the Ramond-Ramond dimensions Υ_µν′ and its nondegenerate partner introduced in [20]. This issue will be discussed in another paper. With the simple solution for the B field in (2.25) the Lagrangian simplifies. If the further simple gauge M_P′{}^P = 1 and a constant B field are used, then it reduces to the (p,q)-brane action proposed by Sakaguchi [30], which is obtained from the centrally extended superalgebra. Manifest SL(2) S-duality has been proposed for the (p,q)-brane action [31]; a manifestly S- and T-duality covariant action should be unified in F-theory [32]. With the gauge fixing condition J^{P−}_i = 0 and ignoring the surface term, it reduces to the usual Green-Schwarz superstring action. Conclusions In this paper we have presented a general construction of chiral affine Lie algebras generated by the supercovariant derivatives and the symmetry generators for a type II superstring. The covariant derivatives and the symmetry generators have the general form given in (2.26), where the B field is determined from the relations (2.23) and (2.24). There is a constant solution of the B field (2.25), in which the nondegenerate group metric and the dilatation operator play essential roles. The obtained covariant derivatives and symmetry generators become chiral by doubling the Lie group. Chirality is manifest in the doubled space; each coordinate is a function of only the left or right moving coordinate on the string worldsheet, as Z^M(σ⁺) and Z^M′(σ⁻). Nondegeneracy of the group gives unique chiral representations.
The supercovariant derivatives are manifestly chiral even after the dimensional reduction to the usual space. The doubled space is reduced to the usual space by a set of dimensional reduction constraints. The auxiliary directions introduced for the nondegeneracy of the group are reduced by using the symmetry generators; since the symmetry generators commute with the covariant derivatives, the local geometry governed by the covariant derivatives, with its manifest T-duality, is preserved under the dimensional reduction. Gauge invariant actions for a bosonic string and a type II superstring in the doubled space are obtained in (3.42) and (4.14) respectively. The resultant actions include the kinetic term, the Wess-Zumino term and the boundary term of the Wess-Zumino term. There is a winding mode contribution through the term J^{P+} ∧ J^{P−}, which can be gauged away by the constraint. T-duality transformations on branes in the doubled space, as well as the M-theory and F-theory extensions, are interesting future issues. The solution C_1 is found in explicit form. B Nonlinear realization of nondegenerate Poincaré groups The left-invariant one-form for a Lorentz group is calculated as follows. Using the abbreviated notation u·s = ½ u^mn s_mn, the canonical dimension zero part is given by g⁻¹dg = e^{−iu·s} d e^{iu·s} + e^{−iu′·s′} d e^{iu′·s′}, where each factor e^{−iu·s} d e^{iu·s} is expanded in the standard way.

• Right-invariant currents, flat right: J̃_M′ = (J̃_S′^m′n′, J̃_P′^m′, J̃_Σ′^m′n′). They satisfy the Maurer-Cartan equations for the right-invariant currents, where the Lorentz indices are contracted with nearest-neighbor indices.
7,058.8
2015-07-11T00:00:00.000
[ "Physics" ]
Separability Measure Supervised Network for Radar Target Recognition In the radar automatic target recognition (RATR) field, radar high-resolution range profiles (HRRPs) have garnered significant attention. While traditional methods focus on extracting features with physical explanations, including power spectra, FFT magnitudes, etc., the effectiveness of these features relies heavily on personal experience and skill. In contrast, deep learning networks have shown strong competence in extracting discriminative features of HRRPs. However, the feature extraction procedure of deep learning networks is based solely on the targets' label information, which has almost no correlation with feature separability. As a result, this approach can lead to poor convergence and limited recognition performance. To address this issue, we propose a Separability Measure Supervised Network (SMSN), which integrates a separability measure based on the rate-distortion function into the loss function to direct the training of the network. Comparative experiments on the airplane electromagnetic simulation HRRP dataset demonstrate that SMSN achieves higher recognition accuracy compared to the backbone networks, with significantly improved feature separability. Introduction In the Radar Automatic Target Recognition (RATR) field, the high-resolution range profile (HRRP) is a crucial data source due to its efficiency and low complexity [1]. It provides valuable knowledge regarding target structure, including scattering distribution, geometry size, and target shape [2]. Additionally, HRRP data are easy to obtain and place low requirements on radar systems. As a result, research on HRRP-based radar target recognition has garnered widespread attention in the radar community as a promising approach. Generally, the HRRP-based approach to radar target recognition involves three steps: data preprocessing, feature extraction, and classifier design. In recognition, data quality and feature separability are crucial for improving performance. Various methods have been proposed for addressing the amplitude, translation, and orientation variability present in HRRP data [3]. However, current feature extraction algorithms do not optimize feature separability. Early traditional HRRP recognition methods extract features with physical explanations, including the power spectrum, FFT magnitude, etc. Although these features are interpretable, their separability depends heavily on personal experience and skill. Recent studies suggest that neural networks can extract more discriminative features of HRRPs than traditional shallow feature learning methods. The variational autoencoder (VAE), as a deep feature learning method, was among the first to be applied to HRRP recognition. For example, to learn hierarchical features of the HRRP, Feng et al. [4] introduced a stacked corrective auto-encoder (SCAE) model. However, the VAE may not create a discriminative subspace without label information because it is a typical unsupervised generative model. To address this issue, Du et al. [5] used the conditional variational auto-encoder (CVAE) with label information to obtain discriminative latent representations. Similarly, Liao et al.
[6] constructed a complex VAE that exploits the magnitude and phase information of HRRP echoes. In addition to VAE models, the convolutional neural network (CNN) has been applied to extract the corresponding spectrogram features of HRRPs [7]. Additionally, recurrent neural networks (RNNs) have been used to learn the sequential information over time between HRRP range cells [8]. In general, the deep feature learning models described above improve recognition performance by incorporating label information into the loss function. However, with this approach the convergence of the loss function depends only on the gap between the predicted label and the actual label. As a result, such label-guided models fail to fully utilize the separability information of latent features during training, thus limiting recognition performance. To address this issue, the proposed model introduces the separability measure, an inherent property that describes how data points belonging to different classes are mixed with each other [9]. In [10], existing classification complexity measures are summarized from a data separability perspective, primarily assessing the Euclidean distance between intra-class and inter-class data. However, such distance-based methods raise an important question, namely, whether different distance metrics, such as the Euclidean, Manhattan, and Minkowski distances, are suitable for assessing data separability in high-dimensional spaces [11]. In contrast, the separability measure based on the rate-distortion function [12] is more robust to changes in feature dimensionality, as it utilizes the singular values of the feature matrix to measure the distance between inter-class samples. Recent research has applied rate-distortion theory to explain neural network models, optimal feature learning methods, and so on [13]. In this study, we propose the SMSN, a separability measure supervised network for HRRP recognition. The SMSN is composed of three modules. The backbone module follows the Autoencoder (AE) structure and extracts latent features of the input HRRP data in an unsupervised manner; this unsupervised latent feature generation process is driven by reconstruction loss optimization. The Separability Measure Module evaluates the latent feature separability and applies the rate-distortion function to calculate the separability loss. The Loss Fusion Module combines the two losses stated above with predetermined weights to construct the total loss. The SMSN extracts separable latent features for the training and test data, and a Linear Support Vector Machine (SVM) classifies the separable features to complete the classification process. Extensive experiments on an airplane electromagnetic simulation dataset demonstrate that SMSN improves the backbone networks' recognition capability with enhanced feature separability. This paper's primary contributions are: 1) Our proposed method extracts more separable features. We introduce the rate-distortion-based separability measure to quantify feature separability. By optimizing the separability loss during the training process, intra-class samples become more clustered, and inter-class samples become more dispersed. 2) Our proposed model achieves higher recognition accuracy. Ablation experiments demonstrate that it achieves a significant improvement in recognition performance with fewer iterations compared to the simple AE and VAE models.
Backbone module of the AE and VAE structure The backbone module extracts hidden layer features for recognition with an AE or VAE framework consisting of fully connected neural network units, as shown in Figure 1. To extract separable latent features, a separability measure is introduced into the loss function. Let X̂ denote the output of the AE model; the reconstruction loss function is then expressed as
\[
L_{AE} = \lVert X - \hat{X} \rVert_2^2 . \tag{1}
\]
The VAE model is a variant of the AE model, which is actually a variational inference model. Its loss function appends the Kullback-Leibler (KL) divergence to the reconstruction error, constraining the latent feature Z to obey a Gaussian distribution. The VAE loss function can be defined as
\[
L_{VAE} = -\mathbb{E}_{q_\Phi(Z|X)}\!\left[\log p_\theta(X|Z)\right] + D_{KL}\!\left(q_\Phi(Z|X)\,\|\,p(Z)\right), \tag{2}
\]
where q_Φ(Z|X) is defined as the probability distribution of the hidden variable Z generated by the input X, p_θ(X|Z) denotes the probability distribution of the output X sampled from the reparametrized Z, and Φ, θ respectively hold the parameters of the encoder f and the decoder g. Separability measure module The separability measure module quantifies the separability of the latent features extracted by the backbone module. Specifically, assume an HRRP latent feature Z = [z_1, …, z_m] ∈ R^{n×m} with m samples of n dimensions and an encoding precision ε > 0, and let Π = {Π_j ∈ R^{m×m}}_{j=1}^{k} be the label matrices of Z for the k classes, with Π_j(i, i) indicating the membership of z_i in class j. Our data separability measure based on the rate-distortion function is
\[
L_{SEP} = R(Z, \varepsilon) - R(Z, \varepsilon \mid \Pi), \tag{3}
\]
where R(Z, ε|Π) and R(Z, ε) denote the local and global coding rates of the data. According to Cover and Thomas' [14] definition of rate-distortion, with the anticipated decoding error less than ε, the rate-distortion R(Z, ε) is the minimum number of binary bits needed to encode Z. The estimated coding rate of zero-mean Z can be defined as
\[
R(Z, \varepsilon) = \frac{1}{2}\log\det\!\left(I + \frac{n}{m\varepsilon^2}\, Z Z^{\top}\right). \tag{4}
\]
Furthermore, suppose Z has k-class samples, so that Z = Z_1 ∪ Z_2 ∪ … ∪ Z_k, and the data Z_j of each class j also occupy a certain volume in their low-dimensional subspace. By applying the coding rate equation (4) to each subset, R(Z, ε|Π) can be given by
\[
R(Z, \varepsilon \mid \Pi) = \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\log\det\!\left(I + \frac{n}{\operatorname{tr}(\Pi_j)\,\varepsilon^2}\, Z \Pi_j Z^{\top}\right). \tag{5}
\]
The loss function in Equation (3) is employed to guide the network toward learning a more separable feature of lower dimension than the original HRRP data. During the training process, maximizing L_SEP means that a higher R(Z, ε) and a lower R(Z, ε|Π) are expected. Consequently, the volume of the full feature set Z expands to its maximum, and each class Z_j compresses to its minimum. Thus, the hidden layer features are discriminative between classes while preserving intra-class similarity. Loss fusion module The loss fusion module combines the reconstruction loss L_AE or L_VAE with the separability loss L_SEP. Considering the magnitude gap between the different types of loss functions, a preset hyperparameter λ is applied to balance the loss values. For the AE model λ equals 0.00001, and for the VAE model λ is 100. The complete loss function can then be represented as
\[
L = L_{AE/VAE} - \lambda\, L_{SEP}. \tag{6}
\]
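The following is a minimal PyTorch sketch of the separability loss, assuming the reconstructed equations (3)-(5) above; the function names are ours, and the class memberships are passed as integer labels rather than as the Π_j matrices.

import torch

def coding_rate(Z, eps=0.5):
    # Z: (n, m) matrix of m latent features of dimension n.
    # R(Z, eps) = 1/2 log det(I + n/(m eps^2) Z Z^T), cf. equation (4).
    n, m = Z.shape
    I = torch.eye(n, device=Z.device)
    return 0.5 * torch.logdet(I + (n / (m * eps ** 2)) * Z @ Z.T)

def local_coding_rate(Z, labels, num_classes, eps=0.5):
    # R(Z, eps | Pi): sum of per-class coding rates weighted by class size,
    # cf. equation (5); tr(Pi_j) is simply the number of samples in class j.
    n, m = Z.shape
    rate = 0.0
    for j in range(num_classes):
        mask = (labels == j)
        mj = mask.sum().item()
        if mj == 0:
            continue
        Zj = Z[:, mask]
        I = torch.eye(n, device=Z.device)
        rate += (mj / (2.0 * m)) * torch.logdet(I + (n / (mj * eps ** 2)) * Zj @ Zj.T)
    return rate

def separability_loss(Z, labels, num_classes, eps=0.5):
    # L_SEP = R(Z, eps) - R(Z, eps | Pi), maximized during training.
    return coding_rate(Z, eps) - local_coding_rate(Z, labels, num_classes, eps)

Calling separability_loss on a latent batch (transposed to the n×m layout) yields the quantity that the loss fusion module subtracts, scaled by λ, from the reconstruction loss.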
Experimental results and analysis In this section, we outline the implementation and results of our experiments. First, we introduce the airplane electromagnetic simulation dataset. Next, we preprocess the data using L2 normalization and maximum correlation alignment to address issues related to HRRP amplitude and translation and to improve data quality. Finally, we conduct an ablative experiment to demonstrate the superiority of our proposed separability measure supervised network. Our comparative results, which include the LSVM's recognition accuracy, the cosine similarity matrix between samples, and t-distributed stochastic neighbor embedding (t-SNE) visualizations, demonstrate that our proposed method (SMSN) can extract more separable latent features and achieve better recognition performance. We used PyTorch to carry out all of our experiments on a laptop with an NVIDIA GeForce MX150 graphics card. Dataset In this study, we experiment with the F-35, F-117, and P-51 aircraft types from the aircraft electromagnetic calculation dataset. Figure 2 displays their 3D models, and Table 1 lists the specific simulation settings for the different types of aircraft. The aircraft data are simulated on X-band radar with a simulated frequency range of 9.5 GHz to 10.5 GHz in 5 MHz steps. The dataset size is 901×101×201, where 901 represents the number of HRRP samples taken every 0.1 degrees in the radar azimuth angle range from 0 to 90 degrees, 101 represents the number of HRRP samples taken every 0.1 degrees in the radar pitch angle range from 0 to 10 degrees, and 201 represents the dimension of one HRRP. For the training set, we select the 46th to 49th pitch angles of the HRRPs from all azimuth angles for the three types of aircraft, while the 50th pitch angle is used for the test set. The training set consists of 10,812 instances, and the test set contains 2,703 instances. Figure 3 illustrates the three types of HRRPs in the training and test data, with the 46th to 49th pitch angles of the training HRRPs plotted in the same figure. As shown in Figure 3, each HRRP has a varying amplitude and center translation, which makes it differ from other intra-class samples and results in a challenging recognition problem. Data preprocessing results To enhance the quality of the HRRP data and address issues related to amplitude and translation, we apply two preprocessing techniques, namely amplitude L2 normalization and the Maximum Correlation Alignment (MCA) method. The effectiveness of these techniques is validated by visualizing the 48th pitch angle of the F-117 HRRPs before and after preprocessing. Figure 4 illustrates the results of this comparison. Specifically, as shown in Figures 4(a) and 4(b), the amplitude of the HRRPs is normalized by the L2 normalization technique, resulting in a more prominent magnitude feature for each HRRP. Furthermore, as shown in Figures 4(c) and 4(d), the MCA method is applied to align the HRRP sequences, resulting in a more symmetrical center of gravity for the HRRPs.
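A small NumPy sketch of these two preprocessing steps follows, assuming the common FFT-based implementation of maximum correlation alignment; aligning every profile to the first sample, as well as the variable names, are our assumptions.

import numpy as np

def l2_normalize(x):
    # Amplitude L2 normalization of one HRRP (1-D array of range cells).
    return x / (np.linalg.norm(x) + 1e-12)

def max_correlation_align(x, template):
    # Maximum Correlation Alignment (MCA): circularly shift x so that its
    # cross-correlation with a reference template is maximized.
    # The circular correlation is computed via the FFT.
    corr = np.fft.ifft(np.fft.fft(template) * np.conj(np.fft.fft(x))).real
    shift = np.argmax(corr)
    return np.roll(x, shift)

# Example usage on a batch of raw profiles (hrrps: (num_samples, 201) array,
# a hypothetical variable):
# hrrps = np.apply_along_axis(l2_normalize, 1, hrrps)
# template = hrrps[0]
# aligned = np.array([max_correlation_align(x, template) for x in hrrps])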
Comparative and ablative experiment results On the basis of the aforementioned simulated dataset, comparison experiments are conducted in this section using the conventional AE and VAE methods, together with ablative experiments using the proposed method. Our backbone network consists of a series of fully connected layers. For the encoder, the number of neural units per layer is set to 201, 1500, 500, and 50, respectively, and for the decoder the unit numbers are 500, 1500, and 201. The Adam optimizer is applied to train the network, using a learning rate of 0.001. Each loss function summarized in the previous section corresponds to one of the methods shown in Table 2: L_AESEP and L_VAESEP are our proposed methods trained with the feature separability measure, while the traditional L_AE and L_VAE models are considered as comparison methods in the ablative experiment. The highest recognition accuracy for each approach is displayed in Table 2. Additionally, we provide the recognition accuracy curves over 100 iterations for each model in Figure 5.

Table 2. The highest accuracy of models with different loss functions (columns: loss, accuracy).

In general, the proposed SMSN model outperforms the AE and VAE models in terms of recognition accuracy. Specifically, the method L_VAESEP has nearly 10% higher accuracy than the method L_VAE, and the method L_AESEP also achieves a significant accuracy improvement, arriving at the highest accuracy of 0.9926. Furthermore, from Figure 5 it is evident that the separability measure supervised methods show outstanding performance at the beginning of the training process, which means that the separability loss function is optimized and the SMSN model focuses on mining a more separable latent feature. To demonstrate how our proposed method SMSN makes a difference in extracting more separable features and improving recognition accuracy, the cosine similarity metric is utilized to calculate the similarity between instances, and a feature reduction map is generated using the t-SNE method. The resulting cosine similarity matrix and t-SNE map of the latent features for each model are presented in Figure 6 and Figure 7, respectively.

Figure 1. The framework of the separability measure supervised network.
Figure 2. The three types of aircraft 3D models.
Figure 3. The three types of HRRPs in the training and test data.
Figure 5. The accuracy comparison results during 100 iterations.
Figure 6. The cosine similarity matrix between instances.
Figure 7. The 2D t-SNE visualization results of the test data's latent features.
Table 1. Simulation settings for the three distinct types of aircraft.
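To make the training setup above concrete, here is a PyTorch sketch of the described backbone, using the reported layer sizes and optimizer settings and the separability_loss sketched earlier; the activation functions, the MSE reconstruction loss, and the sign convention for combining the losses are our assumptions.

import torch
import torch.nn as nn

class AEBackbone(nn.Module):
    # Fully connected AE backbone with the layer sizes reported in the paper:
    # encoder 201-1500-500-50, decoder 50-500-1500-201.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(201, 1500), nn.ReLU(),
            nn.Linear(1500, 500), nn.ReLU(),
            nn.Linear(500, 50),
        )
        self.decoder = nn.Sequential(
            nn.Linear(50, 500), nn.ReLU(),
            nn.Linear(500, 1500), nn.ReLU(),
            nn.Linear(1500, 201),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical training step (x: (batch, 201) HRRPs, labels: (batch,) ints,
# lam: the balancing hyperparameter lambda):
# model = AEBackbone()
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# x_hat, z = model(x)
# loss = nn.functional.mse_loss(x_hat, x) - lam * separability_loss(z.T, labels, 3)
# loss.backward(); optimizer.step()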
3,164
2023-11-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Tumor Radiosensitization by Gene Electrotransfer-Mediated Double Targeting of Tumor Vasculature Targeting the tumor vasculature through specific endothelial cell markers involved in different signaling pathways represents a promising tool for tumor radiosensitization. Two prominent targets are endoglin (CD105), a transforming growth factor β co-receptor, and the melanoma cell adhesion molecule (CD146), also present on many tumors. In our recent in vitro study, we constructed and evaluated a plasmid for simultaneous silencing of these two targets. In the current study, our aim was to explore the therapeutic potential of gene electrotransfer-mediated delivery of this new plasmid in vivo, and to elucidate the effects of combined therapy with tumor irradiation. The antitumor effect was evaluated by determination of tumor growth delay and the proportion of tumor-free mice in the syngeneic murine mammary adenocarcinoma tumor model TS/A. Histological analysis of tumors (vascularization, proliferation, hypoxia, necrosis, apoptosis and infiltration of immune cells) was performed to evaluate the therapeutic mechanisms. Additionally, potential activation of the immune response was evaluated by determining the induction of the DNA sensor STING and selected pro-inflammatory cytokines using qRT-PCR. The results point to a significant radiosensitization and a good therapeutic potential of this gene therapy approach in an otherwise radioresistant and immunologically cold TS/A tumor model, making it a promising novel treatment modality for a wide range of tumors. Introduction To grow beyond a limited size, solid tumors require a proper vasculature that grants oxygen, nutrients, and waste disposal. Therefore, activation of angiogenesis is needed to maintain the proliferation of tumor cells and their dissemination to distant sites. In recent decades, this therapeutic area has been extensively researched. Hypoxia, followed by a lack of nutrients, triggers the angiogenic switch and promotes the expression of inflammatory signals and cytokines, as well as the transcription of genes mandatory for angiogenesis, such as vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) [1]. Antiangiogenic therapies using antibodies or tyrosine kinase inhibitors have been approved to treat several types of cancer. Despite the ever-growing list of FDA-approved drugs, the efficacy of these therapies is short-term, leading to resistance and modest survival benefits [2]. Therefore, targeting the tumor vasculature through specific endothelial cell markers involved in different signaling pathways and with different therapeutic approaches represents a promising tool for cancer treatment. One of the promising targets is CD105 (endoglin), a transforming growth factor β coreceptor that is crucial in developmental biology and tumor angiogenesis. It is highly expressed on tumor vessels and correlates with poor survival prognosis. The endoglin-neutralizing antibody (TRC105; carotuximab), after being tested in many preclinical cancer models, has entered clinical studies (phase I-III) for cancer patients as mono- or combination therapy [3]. Despite very promising preclinical studies, pivotal trials of TRC105 did not demonstrate clinical benefit. Another notable target is CD146 (melanoma cell adhesion molecule). It is an adhesion molecule and a VEGFR-2 coreceptor that promotes the transcription of many proangiogenic factors.
CD146 is also expressed in many tumors, and its levels correlate with aggressiveness and invasiveness; therefore, it is also used as a marker of poor prognosis. An antibody targeting only CD146 expressed in tumor cells has already been tested in preclinical studies, demonstrating its effectiveness in reducing proliferation, migration, and tube formation in vitro and tumor growth and metastasis formation in vivo [4][5][6][7][8]. The main disadvantages of the therapeutic approaches described above are the instability of the antibodies and their short duration of action. Therefore, it is reasonable to aim at specific markers with more sustainable, local, and specific approaches, which can be achieved using gene electrotransfer (GET). This is a safe nonviral gene therapy approach, which has demonstrated its efficacy in many preclinical studies and has also reached clinical application [9,10]. In oncology, several clinical trials are using plasmid DNA encoding different therapeutic molecules, such as interleukin 12 [11][12][13][14][15] or tumor-associated antigens [11,13,[15][16][17][18]. Plasmids can also be used to express shRNA for gene silencing. Studies have shown that plasmid DNA allows longer and more stable silencing than siRNA [19,20]. It is also easier to produce and is more stable and resistant to nucleolytic degradation than RNA alone. Furthermore, an important advantage of plasmids is their capability to accommodate large genetic payloads, allowing co-transfection of multiple plasmids or transfection of larger polycistronic plasmids [21,22]. One possible limitation of plasmid DNA is the antibiotic resistance gene present in the backbone of conventional plasmids, which is needed for their production in bacteria [23]. However, this can be circumvented with several new technologies, one of which is operator repressor titration (ORT) technology, which was also successfully implemented by our research group [24][25][26]. In our previous studies, we have shown that GET of plasmids encoding shRNA against either CD105 or CD146 results in significant vascular reduction and pronounced antitumor effects in several tumor models [27][28][29][30][31][32]. By combining this therapy with irradiation, the therapeutic effect was further improved [29,30,33], most likely due to the normalization of tumor vasculature that promotes a better oxygenation status of the tumor and, therefore, tumor radiosensitization [34,35]. Additionally, the combined therapy can also promote activation of the immune system, since many mice remained complete responders after a secondary tumor challenge [30,33]. However, some complete responders were also obtained in the groups using plasmids devoid of therapeutic genes, indicating the action of the foreign plasmid DNA introduced via GET and of DNA released from the cells after therapy-induced damage, which can act as pathogen-associated molecular patterns (PAMPs) or danger-associated molecular patterns (DAMPs), respectively [36][37][38][39]. These can activate cytosolic pattern recognition receptors (cytosolic DNA sensors), an important example being the stimulator of interferon genes (STING), which triggers type I interferon and proinflammatory cytokine production. In our previous studies, we demonstrated activation of DNA sensors after GET and after irradiation (IR) of tumors separately; however, we did not determine this activation after our combined treatment with GET of shRNA-encoding plasmids and IR [36,37].
In our recent in vitro study, we evaluated a newly constructed plasmid for simultaneous GET-mediated silencing of CD105 and CD146, which was devoid of an antibiotic resistance gene [25]. The study confirmed the functionality of the plasmid, as both of the targets were successfully silenced. This prompted a new study to explore the therapeutic potential in vivo and to elucidate the effects of combined therapy with tumor IR. The results of this study point to significant radiosensitization and immunostimulatory effectiveness of this gene therapy approach in an otherwise radioresistant and immunologically cold murine mammary adenocarcinoma TS/A tumor model [40,41]. Altogether, a good therapeutic potential of the simultaneous silencing of the two targets responsible for distinct angiogenic pathways in combination with irradiation was demonstrated, making it a promising novel treatment modality for a wide range of tumors. Dual Silencing of CD105 and CD146 Promotes Radiosensitization of TS/A Tumors Injection of plasmids, application of electric pulses (EP), or IR alone did not affect tumor growth delay (TGD). GET of the dual silencing plasmid pDual resulted in 11.3 days of TGD, which was statistically significantly longer than after GET of the control plasmid pEmpty (3.5 days). When GET of either of the plasmids, or even EP alone, was combined with tumor IR, the TGDs were statistically significantly prolonged. The longest TGD, i.e., 23.9 days, was obtained after combining GET of pDual with IR, which was significantly longer than that after GET pDual monotherapy (13 days) (Table 1, Figure 1a,b). Additionally, 50% of mice were cured in this therapeutic group, although this was not statistically significant compared with the EP + IR, GET pDual, and GET pEmpty + IR groups, where there were also some complete responders (1-2 mice) (Table 1, Figure 1c). The tumor-free mice were challenged with an injection of tumor cells 100 days from the beginning of the treatment to determine the possible induction of immune memory. In general, the mice were not resistant to the secondary challenge, with the exception of one mouse in the GET pDual + IR group. No body weight loss over 10% or any other side effects were observed, except for temporary hair loss in the irradiated area without skin desquamation, proving the safety of the treatment. (Statistical annotations in Table 1 and Figure 1: p < 0.05 vs. all groups except pDual, EP + IR and GET pEmpty + IR; +, p < 0.05 vs. all groups except EP + IR, GET pDual and GET pDual + IR; n = 6-8 mice.)
Histological Analyses Indicate Vascular Targeted Effects Combined with Immune Activation Immunohistochemistry (IHC) analyses of the tumor samples obtained on day 6 after the beginning of the therapy were performed to evaluate the levels of tumor vascularization, proliferation, hypoxia, apoptosis, immune infiltration (granzyme B-positive cells), and necrosis (Figure 2a). In general, regression of the tumor vasculature was observed in the therapeutic groups (Figure 2b). The strongest, statistically significant effect was observed in the GET pDual + IR group, where vascularization was reduced to 17%. The same trend was also observed for proliferation, which was the most reduced after GET of either of the plasmids (pEmpty, pDual) in combination with IR, i.e., to 33.5% or 29%, respectively (Figure 2c). On the other hand, the percentages of hypoxia, apoptosis, necrosis, and granzyme B-positive cells were elevated after the GET monotherapies and their combinations with IR. Specifically, hypoxia levels were significantly increased compared with the control and IR alone in all groups where EP was used either alone, in combination with plasmids (GET), or in combination with IR (Figure 2d). The highest elevation (62.4% of hypoxic cells) was achieved after GET of the therapeutic plasmid combined with IR. Very similar results were also obtained for necrosis (Figure 2e). Correlating with tumor size, the percentage of the necrotic area was higher in the groups receiving GET of either of the plasmids alone or combined with IR. The induction of apoptosis was moderate but still statistically significant compared with the control and IR alone in the groups combining GET of either of the plasmids with IR; the highest induction was achieved after GET of the therapeutic plasmid combined with IR (19.1% of apoptotic cells, Figure 2f). Regarding the infiltration of immune cells, the number of granzyme B-positive cells was significantly increased in the groups receiving GET of either plasmid (Figure 2g). When GET was further combined with IR, the increase was even more pronounced (23.3% granzyme B-positive cells). DNA Sensor Sting and Proinflammatory Cytokine Levels Indicate Induction of Immune Response The expression levels of the cytokines Il1β, Ifn-β1 and Tnf-α and of the DNA sensor Sting were measured in TS/A tumors on day 6 after the beginning of treatment (Figure 2h). All the cytokines were detected in the control, untreated TS/A tumors. The average CT value (n = 6) was 27.7 for Il1β, 33.8 for Ifn-β1 and 28.5 for Tnf-α. All cytokines were significantly increased in the groups where GET of the control plasmid (pEmpty) or the therapeutic plasmid (pDual) was combined with IR (GET pEmpty + IR and GET pDual + IR). A 20.3-fold and 10.4-fold increase in the expression of the proinflammatory cytokine Il1β, a 75-fold and 83.4-fold increase in the expression of the cytokine Ifn-β1, and a 10.4-fold and 10.5-fold increase in the expression of the proinflammatory cytokine Tnf-α were observed after GET pEmpty + IR and GET pDual + IR, respectively. Ifn-β1 was also significantly increased (68.2-fold) after GET of pEmpty alone. The DNA sensor Sting was detected at a CT level of 25.5 in the control tumors. After GET of pEmpty or pDual in combination with IR, a statistically significant 2.6-fold increase in Sting expression was detected.
Discussion The study aimed to evaluate the therapeutic effectiveness of gene therapy with a new plasmid DNA devoid of an antibiotic resistance gene, simultaneously silencing two independent targets, CD105 and CD146, both of which have demonstrated great potential in preclinical and clinical studies. Gene therapy was performed using GET and was further combined with single-dose IR (15 Gy) in a murine mammary adenocarcinoma TS/A tumor model that does not express either of the targeted markers, allowing the effects on the vasculature to be distinguished from direct effects on tumor cells [33,42]. The results showed radiosensitization of the treated tumors, which resulted in prolonged tumor growth delay and up to 50% tumor-free mice. Histological analyses demonstrated a significant decrease in tumor vascularization, mainly due to GET, which in combination with IR reduced the proliferation of tumor cells, resulting in an increase in necrosis, hypoxia, apoptosis and infiltration of immune cells. The treatment was accompanied by the activation of DNA sensors and proinflammatory cytokines, which were upregulated in the groups combining GET and IR, indicating activation of the immune system. However, since the tumors regrew after the secondary challenge, with the exception of one mouse in the therapeutic group, the induced innate immune response was evidently not converted into specific immunological memory. Generally, these findings are in line with the results of our previous study [29], which was also performed in an immunologically cold TS/A tumor model using plasmids targeting only one vascular marker, either CD105 [29,43] or CD146 [33]. However, with the current therapy, targeting both vascular markers simultaneously, the overall tumor growth delays and numbers of complete responders were more pronounced. Combining vascular-targeted therapies with irradiation has proven to be an effective treatment modality, since regulating tumor oxygenation can enhance the effectiveness of irradiation. Namely, antiangiogenic agents can improve tumor oxygenation status through normalization of tumor vasculature, vessel depletion and even immune activation [2,34], whereas vascular-disrupting agents can promote a more oxygenated tumor rim through disruption of already formed tumor vasculature [34]. These phenomena were demonstrated in different studies and resulted in prolonged tumor growth delay and tumor-free mice. In one of our studies, targeting the tumor vasculature by silencing the expression of CD105 by GET resulted in both vascular disruption and prevention of the formation of new tumor vasculature. Furthermore, this resulted in a prolonged tumor growth delay (6 days) [32], which was also confirmed in this study, where in addition to silencing CD105, we also silenced CD146 simultaneously. CD146 is a marker involved in different signaling pathways; in the case of our study, it is important primarily as an alternative to CD105 in endothelial signaling. In our previous study, the efficacy of CD146 silencing was already evaluated in a TS/A tumor model, where a growth delay of 8.9 days was observed [33]. In our current study, we also obtained 12.5% tumor-free mice, which was not observed in our previous studies in this tumor model.
However, the therapeutic effectiveness in the TS/A tumor model was still lower than that in the murine melanoma tumor model B16F10 using GET of plasmids silencing either CD105 or CD146 alone; specifically, silencing CD105 in melanoma resulted in pronounced tumor growth delay (8.6 days) and tumor-free mice (44%), of which the majority were also resistant to secondary challenge (75%) [30]. Silencing CD146 in the same melanoma tumor model resulted in a more pronounced tumor growth delay (13.2 days) and tumor-free mice (35.7%), of which 60% remained tumor free after the secondary challenge [33]. These differences could be attributed to the different expression statuses of the targeted markers in the specific tumor models: while adenocarcinoma TS/A lacks the targeted markers CD105 and CD146, melanoma B16F10 expresses both of the markers; therefore, GET of the plasmid silencing either of these markers also has an impact on the tumor cells themselves, in addition to the antiangiogenic and vascular-disrupting effects on the endothelial cells of the tumor vasculature. The therapeutic effectiveness was reflected in the histological analyses of the tumors; the levels of vascularization and proliferation were decreased, while elevated levels were observed for hypoxia, apoptosis, necrosis and granzyme B-positive cells in the GET pDual group compared with the pertinent control group. The same phenomenon was observed in the other aforementioned studies [30,32,33], again indicating the vascular-targeted effect of this treatment in the TS/A tumor model. It is well known that improving the oxygenation status of the tumor can improve radiation therapy [34], which is also the case when silencing either of the markers CD105 or CD146 [30,33]. To improve the oxygenation status of tumors, we tested the gene silencing of two independent targets, CD105 and CD146. In the past we silenced each target separately, first using siRNA and later, as an improvement, using GET of shRNA-encoding plasmids; simultaneous targeting of the two genes was performed for the first time in the current study. For silencing, we used a novel double targeting plasmid that was prepared and tested in our previous in vitro study [25]. In the current study, we observed that a single 15 Gy dose had a moderate impact on proliferating tumor cells and tumor vasculature, and the tumor growth delay was not significant, since the TS/A tumor model is known to be radioresistant [24,33,44]. However, IR combined with EP alone already resulted in a moderate but significant tumor growth delay (12.8 days) and 12.5% tumor-free mice, which is also in accordance with our previous studies [29,30,33,45], confirming once again the radiosensitizing effect of EP on tumors. By adding GET of the therapeutic plasmid, the effectiveness was significantly improved (37.3 days of tumor growth delay and 50% tumor-free mice), which is a significantly better outcome than that observed in previous studies targeting either CD146 alone (28.7 days TGD and 26.7% CR) [33] or CD105 alone (15 days TGD and 40% CR) [29]. These observations were in line with the histological analyses of the tumors, where statistically significant reductions in vasculature (to 38.7%) and proliferating tumor cells (to 64.4%) were observed, while the levels of hypoxia, apoptosis, necrosis and granzyme B-positive cells were statistically significantly elevated compared with the control group and the monotherapies.
These results indicate that GET of the novel dual tumor vasculature-targeting plasmid is an improved treatment modality, which could have a great impact on future studies targeting radioresistant tumor models. One important observation of the current study is also the effectiveness of GET of the control plasmid combined with irradiation (GET pEmpty + IR). Namely, compared with the control group and IR alone, it resulted in pronounced antitumor effectiveness, with a significant tumor growth delay (13.9 days) and 25% tumor-free mice. The histological trends were more moderate than after GET of pDual and IR, but still followed the trend observed in the therapeutic group. These results were not completely unexpected, as similar results were already reported in other studies using GET of different control nontherapeutic plasmids, which resulted in up to 20% tumor-free mice in the TS/A tumor model [29,33]. In more immunogenic tumor models, such as melanoma, this effect was even more pronounced and resulted in up to 89% tumor-free mice [30,33], which were mostly also resistant to secondary challenge. The observed therapeutic effectiveness after GET of the pEmpty plasmid correlated with increased expression of the DNA sensor Sting and pro-inflammatory cytokines. The rationale behind the effectiveness of the combination of GET and IR lies in the stimulation of the immune system through foreign DNA introduced into cells by gene therapy, as well as DNA fragments released from damaged tumor cells after IR, which together act as PAMPs and DAMPs, activating DNA sensors [37][38][39][46]. Namely, DNA sensors are activated by the abnormal presence of DNA in the cytosol, and their activation leads to the upregulation of proinflammatory cytokines and different types of cell death [47]. In our previous studies, we confirmed that cytokines and DNA sensors were upregulated in different tumor and nontumor cell types after GET of control plasmids and after IR of cells [36,37,48]. The stimulator of interferon genes (STING) is an important DNA sensor triggering type I interferon and proinflammatory cytokine production, and its importance in the tumor radiation response has been confirmed in numerous studies [11][12][13][14][44]. As expected, its levels were also significantly increased in our study, regardless of the plasmid that was used. Additionally, in the GET pEmpty + IR and GET pDual + IR groups, all tested cytokines (Il1β, Ifn-β1 and Tnf-α) were upregulated 6 days after treatment, further confirming DNA sensor activation and adding to the therapeutic effectiveness of our treatment. Taken together, we achieved significant tumor growth delay in the groups combining GET and IR; however, the outcome was better when the therapeutic plasmid was used, since it also had a significant effect on the targeted tumor vasculature, thereby promoting radiosensitization of the tumors. Additionally, by combining GET and IR, which unselectively target the vasculature and tumor cells, we were able to stimulate the immune system, even in the case of the control plasmid, and further improve the therapeutic outcome. The results of this study are encouraging, since we obtained 50% tumor-free mice in an otherwise immunologically cold and radioresistant TS/A tumor model. Furthermore, we have even greater expectations for curing tumors that are immunologically hot; therefore, we believe that this therapy could be a promising novel treatment modality for a wide range of tumors.
Cell Lines
The mouse mammary adenocarcinoma cell line TS/A [49] (American Type Culture Collection, Manassas, VA, USA) was cultured in Advanced DMEM (Dulbecco's Modified Eagle Medium, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 5% fetal bovine serum (FBS), 10 mM L-glutamine (GlutaMAX) and 1% penicillin-streptomycin (all Thermo Fisher Scientific) in a 5% CO2 humidified incubator at 37°C. Cells at 80% confluence were trypsinized using 0.25% trypsin/ethylenediaminetetraacetic acid in Hank's buffer (Thermo Fisher Scientific), washed with Advanced DMEM with FBS, and collected by centrifugation (470× g, 5 min). The cell line was regularly tested for mycoplasma contamination with the MycoAlert™ PLUS Mycoplasma Detection Kit (Lonza, Basel, Switzerland) and was confirmed to be mycoplasma free.
Plasmids
Two plasmids constructed and evaluated in vitro by our research group were used in the experiments [25]. Both were prepared by ORT technology to obtain antibiotic resistance gene-free plasmids. The therapeutic plasmid pDual (pU6-antiCD105-CD146-ORT) encodes two shRNAs against CD105 and CD146, while the plasmid pEmpty (pEmpty-ORT) has no homology to any gene in the mouse genome and was used as a control plasmid. The plasmids were isolated using an EndoFree Plasmid Mega Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Plasmid purity and concentration were measured spectrophotometrically (Epoch microplate spectrophotometer, Take3 microvolume plate, BioTek). Additionally, the concentration and identity were confirmed by restriction analysis on an electrophoretic gel. The final concentrations were fine-tuned using a Qubit 4 fluorometer (Thermo Fisher Scientific). The working concentration of 4 mg/mL was prepared in the endotoxin-free water supplied with the kit.
Animals
Female BALB/cOlaHsd mice were purchased from Envigo RMS SrL (San Pietro al Natisone, Italy) and subjected to an adaptation period of 1 week. The mice were housed in individually ventilated cages under specific pathogen-free conditions at a temperature of 20-24°C, a relative humidity of 55 ± 10%, and a 12 h light/dark cycle. Food and water were provided ad libitum. All procedures were performed in compliance with the guidelines for animal experiments of the EU directive (2010/63/EU) and with permission from the Veterinary Administration of the Ministry of Agriculture and the Environment of the Republic of Slovenia (permission no. U34401-3/2022/11). The experiments were repeated twice, with 6-8 animals per group.
In Vivo Gene Electrotransfer
Subcutaneous tumors were induced on the backs of the mice by injecting 100 µL of 0.9% NaCl containing 2 × 10⁶ TS/A cells (one tumor per mouse). In vivo experiments were performed as described previously [29]. Briefly, when tumors reached 6 mm in the longest diameter, they were treated with an intratumoral injection (12.5 µL) of pDual (therapeutic plasmid), pEmpty (control plasmid) or endotoxin-free water (mock control). Ten minutes after the injection, electric pulses (EP) were delivered through two parallel stainless-steel electrodes; after the delivery of 4 pulses, the electrodes were rotated by 90° for the delivery of 4 additional pulses to expose the whole tumor. The EP parameters were as follows: 8 square-wave electric pulses with a voltage-to-distance ratio of 600 V/cm, a pulse duration of 5 ms, and a frequency of 1 Hz.
The EP were generated by an ELECTRO CELL B10 electric pulse generator (Leroy Biotech, Saint-Orens-de-Gameville, France). GET was performed on days 0, 2, and 4.
Tumor Irradiation
One day after the first GET, tumors were irradiated. The mice were placed in special lead holders with apertures for local exposure of the tumors. A single dose of 15 Gy at a dose rate of 1.92 Gy/min was delivered by a Gulmay MP1-CP225 X-ray generator (Gulmay Medical Ltd., Suwanee, GA, USA) operating at 200 kV and 9.2 mA with Cu (0.55 mm) and Al (1.8 mm) filtering.
Tumor Growth Delay
The antitumor effect was determined by measuring three orthogonal diameters (a, b, c) of the tumors with a Vernier caliper every second to third day. The tumor volume was calculated using the formula V = a × b × c × π/6. From the tumor volumes, arithmetic means for each group were calculated, and tumor growth curves were drawn with error bars representing the standard error of the mean. The tumor doubling time was defined as the time at which tumors doubled in volume from the initial day of the experiment. The TGD was calculated as the difference between the tumor doubling times of the therapeutic and control groups. In all groups, tumor growth was followed until the tumors reached 350 mm³, which represented an event for the generation of Kaplan-Meier survival curves. Animals with tumors in regression were examined weekly for tumor presence for 100 days after the treatment. Animals were considered cured, or complete responders, if they were tumor-free at day 100. The TGD of complete responders was set at 30 days for the generation of TGD graphs. The cured mice were challenged with a secondary subcutaneous injection of tumor cells in the right flank as described above. Animals with no tumor growth 30 days after the injection of tumor cells were considered resistant to secondary challenge. Animal weight loss was monitored as a sign of systemic toxicity of the treatments. In addition, in the irradiated animals, acute skin reactions in the irradiated field were monitored as previously described [45].
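To make the volume and TGD computations above concrete, the following is a minimal Python sketch; all measurement days and group values are invented for illustration, and the linear interpolation between measurement days is an assumption rather than the authors' stated procedure.

```python
import numpy as np

def tumor_volume(a, b, c):
    """Volume from three orthogonal diameters (mm): V = a*b*c*pi/6."""
    return a * b * c * np.pi / 6.0

def doubling_time(days, volumes):
    """Day on which the mean volume reaches twice the initial volume,
    using linear interpolation between measurement days (an assumption)."""
    target = 2.0 * volumes[0]
    for (d0, v0), (d1, v1) in zip(zip(days, volumes), zip(days[1:], volumes[1:])):
        if v0 < target <= v1:
            return d0 + (target - v0) * (d1 - d0) / (v1 - v0)
    return None  # tumors never doubled during follow-up

days = [0, 2, 4, 6, 8, 10]             # invented measurement days
control = [40, 55, 80, 120, 180, 260]  # invented mean volumes, mm^3
treated = [40, 42, 46, 55, 70, 95]
tgd = doubling_time(days, treated) - doubling_time(days, control)
print(f"TGD = {tgd:.1f} days")         # TGD = difference in doubling times
```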
Histology
From each experimental group, six tumors were collected on day 6 from the beginning of the experiment to evaluate their histological properties. The tumors were fixed in zinc fixative (BD Biosciences, San Diego, CA, USA), embedded in paraffin blocks, and cut into six consecutive 2-µm-thick sections for immunohistochemical (IHC) analysis. The first section was stained with hematoxylin and eosin (H&E) to estimate the percentage of necrotic tumor area, and the other five sections were stained immunohistochemically to determine the percentages of apoptosis, hypoxia, immune cells (granzyme B-positive cells) and proliferation, and the number of blood vessels. Apoptosis was detected with antibodies against cleaved caspase-3 (Ca-3; Cell Signaling Technology, Danvers, MA, USA) at a dilution of 1:500, and hypoxia was determined with antibodies against hypoxia-inducible factor-1α (ab2185, Abcam, Cambridge, MA, USA) at a dilution of 1:2000. For staining of immune cells (cytotoxic T lymphocytes and natural killer cells), antibodies against granzyme B (ab4059, Abcam) were used at a dilution of 1:1600. Proliferation was determined with antibodies against Ki-67 (clone SP6, Thermo Fisher Scientific) at a dilution of 1:1250. Blood vessels were visualized with antibodies against CD31 (ab28364, Abcam) at a dilution of 1:1000. Primary antibodies were detected with a peroxidase-conjugated streptavidin-biotin secondary antibody (Rabbit-specific HRP/DAB (ABC) detection IHC kit ab64261, Abcam) and counterstained with hematoxylin, as described previously [30]. The whole area of the H&E-stained tumor section was captured with a DP72 CCD camera connected to a BX-51 microscope (Olympus, Hamburg, Germany) under 10× magnification (numerical aperture 0.40). The necrotic area was evaluated by two independent researchers and presented as the percentage of necrotic area in the tumor section. From the remaining five immunohistochemically stained sections, at least five viable parts of each tumor sample were captured under 40× magnification (numerical aperture 0.85). The captured images were analyzed by two independent researchers and presented as the percentage of positive cells (apoptosis, hypoxia, immune cells, proliferation) or the number of blood vessels, as described previously [30].
Quantitative Reverse Transcription-Polymerase Chain Reaction (RT-qPCR)
From each experimental group, six tumors were collected on day 6 from the beginning of the experiment to determine the expression of the DNA sensor Sting and the cytokines Il1β, Ifn-β1 and Tnf-α. The tumors were ground, and total RNA was isolated using TRIzol™ Reagent (Thermo Fisher Scientific) and purified with a peqGOLD Total RNA kit (PEQLAB, VWR™, Life Science, Leuven, Belgium) according to the manufacturer's instructions. RNA concentration and purity were quantified spectrophotometrically using a Cytation 1 Imaging Multi-Mode Reader (Agilent/BioTek, Santa Clara, CA, USA). Total RNA (500 ng) was reverse transcribed into complementary DNA (cDNA) using a SuperScript VILO cDNA Synthesis Kit (Thermo Fisher Scientific). Tenfold-diluted mixtures of transcribed cDNA were used as templates for RT-qPCR using PowerUp SYBR Green Master Mix (Thermo Fisher Scientific) and the primers (IDT, Newark, NJ, USA) specified in Supplementary Table S1. The reaction was performed on a QuantStudio™ 3 Real-Time PCR System (Thermo Fisher Scientific) under the cycling conditions specified in Supplementary Table S2, and the results were analyzed with QuantStudio® Design & Analysis Software v1.1 (Thermo Fisher Scientific). Expression was quantified using the ΔΔCt method [50] relative to the reference β-actin and glyceraldehyde 3-phosphate dehydrogenase mRNAs and normalized to the control group.
Statistical Analysis
GraphPad Prism (GraphPad, San Diego, CA, USA) was used for statistical analyses. All data were tested for normality of distribution with the Shapiro-Wilk test. Data are presented as the arithmetic mean (AM) ± the standard error of the mean (SEM). Differences between the experimental groups were statistically evaluated by one-way analysis of variance (one-way ANOVA) followed by Fisher's LSD test for multiple comparisons. A P value of less than 0.05 was considered statistically significant. Survival was estimated by the Kaplan-Meier method, and survival curves were compared by the log-rank test. Tumor volumes of 350 mm³ were counted as events for the construction of the survival curves.
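As an illustration of the ΔΔCt quantification and the group comparison described above, here is a small Python sketch; the Ct values and group data are invented, and combining the two reference genes by averaging their Ct values is an assumption, not the authors' stated procedure.

```python
import numpy as np
from scipy import stats

def fold_change(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    dct = ct_target - np.mean(ct_refs)                 # normalize to reference genes
    dct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)  # same for the control group
    return 2.0 ** (-(dct - dct_ctrl))                  # relative expression (ddCt)

# Invented Ct values: Sting in a treated tumor vs. the control group
print(fold_change(24.1, [18.0, 19.2], 26.5, [18.1, 19.0]))  # ~5.5-fold

# Invented tumor volumes for three groups: normality check, then one-way ANOVA
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(m, 10, 8) for m in (100, 80, 60))
print(stats.shapiro(g1).pvalue, stats.f_oneway(g1, g2, g3).pvalue)
```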
Image Retrieval System using Fuzzy-Softmax MLP Neural Network
Many databases contain huge volumes of data, mostly in the form of digital images. Digital images, such as vector or raster images, and medical images, such as X-rays, MRI and CT scans, are extensively used in research, diagnosis and treatment planning. Large medical institutions produce gigabits of image data every month. For effective utilization of archived medical images for diagnostic, research and educational purposes, an efficient image retrieval system is essential. Image retrieval systems extract features from an image into a feature vector and use similarity measures to retrieve images from a group of images. Thus, the effectiveness of an image retrieval system depends largely on the feature selection and on the way the features are classified. The aim of this paper is to implement a novel feature selection mechanism using the Discrete Wavelet Transform (DWT) with Information Gain for feature reduction. Classification results obtained from the proposed method using existing classifiers are compared with the proposed neural network model. The results show that the proposed neural network classifier outperforms conventional classification algorithms and the multilayer perceptron neural network.
Introduction
Visual information is extensively used in multimedia, medical imaging and numerous other applications. Managing this visual information is challenging, as the quantity of available data is very large and growing exponentially. Digital images play a vital role in the diagnosis and treatment planning of disease, providing visual information for diagnosis and for monitoring treatment progress. Retrieval of digital medical images from archives is a widely researched challenge. Textual annotations of images were the basis on which images were retrieved during the early 1980s [1,2], with images retrieved using semantic queries. A system that can automatically classify images and retrieve them based on a query image is required for efficient use of archived medical image data. Earlier works in the literature include the use of visual features together with text annotations for image retrieval [3,4]. Modern radiology techniques such as CT, PET, MRI and X-rays provide medical professionals with essential information to diagnose disease and plan treatment [5]. Thus, efficient storage and image retrieval systems are required for the utilization of these images for diagnostic, research and educational purposes. In image retrieval based on visual features, or query by image, the retrieval system responds to a query image by retrieving similar images from the archive. In such a system, the images in the database are automatically preprocessed to extract features, and the images are classified on the basis of these features. The query image is preprocessed in the same way to extract its features, and appropriate images are retrieved from the database based on similarity measures. Figure 1 shows the block diagram of an image retrieval system.
Figure 1: Overview of the image retrieval process
Image retrieval plays a fundamental role in handling large amounts of visual information in medical applications [6].
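The generic pipeline just described can be summarized in a few lines of Python; the histogram "features" below are a deliberately simple stand-in for the DWT/IG features developed later in the paper, and all images are synthetic.

```python
import numpy as np

def extract_features(image):
    """Toy feature vector: a 16-bin intensity histogram (stand-in for DWT/IG)."""
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / hist.sum()

def retrieve(query_img, database_imgs, k=3):
    q = extract_features(query_img)
    feats = np.array([extract_features(img) for img in database_imgs])
    dists = np.linalg.norm(feats - q, axis=1)  # Euclidean similarity measure
    return np.argsort(dists)[:k]               # indices of the k best matches

rng = np.random.default_rng(1)
db = [rng.integers(0, 256, (64, 64)) for _ in range(10)]
print(retrieve(db[0], db, k=3))                # best match should be index 0
```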
The effectiveness of an image retrieval system depends on:
· forming a multi-magnitude feature vector from the information extracted from the images,
· computing distance metrics,
· identifying the images in the database with the lowest distance metrics from the query image,
· selecting features to achieve the highest discrimination,
· combining the features effectively,
· applying proper distance metrics,
· finding the optimal classifier configuration for the classification problem,
· scaling/adapting the classifier when many classes/features are incrementally introduced, and finally,
· training the classifier to maximize classification accuracy.
Generally, features such as color, texture, shape, size and spatial relationships are used to classify images. In medical imaging, color is an effectively used feature; in fields such as dermatology [8], color is used extensively. MRI images and X-rays are in grayscale, so color may not be an effective feature for their retrieval. Similarity measures computed from low-level image features are mainly used for image retrieval. To automatically categorize medical images, data mining techniques such as decision trees, Bayesian networks, neural networks and support vector machines are widely used [9]. In this paper, it is proposed to extract the feature vector from medical images using the Discrete Wavelet Transform (DWT), with feature reduction by Information Gain (IG). The proposed Fuzzy-Softmax Multilayer Perceptron (FS-MLP) neural network is used to classify the obtained feature vectors into the given classes.
Rigau et al. [9] proposed a two-step mutual information-based algorithm for medical image segmentation. In the first step, binary space partitioning splits the image into relatively homogeneous regions. The second step involves clustering around the histogram bins of the partitioned image; the clustering is done by minimizing the mutual information loss of the reserved channel. The algorithm preprocesses the images for multimodal image registration, which integrates the information of different images of the same or different subjects. Experimental results on different images show that the segmented images perform well in medical image registration using mutual information-based measures.
Previous Research
Rajkumar et al. [11] proposed a two-step medical image retrieval framework to retrieve similar images: a content-based image retrieval framework based on PCA and wavelets. A wavelet filtering process is used to create a subset of images. Energy-efficient wavelet decomposition is used to decompose the images, and the corresponding energies are extracted. The retrieval system uses this subset to search for similar images. Further dimensionality reduction is obtained by applying PCA to the extracted features. Similarity matches between the query image and the database images were obtained using the Euclidean distance, and the calculated eigenvectors and similarity measures were applied to retrieve the medical images. Due to the reduction of the search space, efficiency and retrieval accuracy are improved. Experiments conducted using 200 medical images showed that the proposed method has better retrieval accuracy in terms of recall rate and precision.
Kambhatla et al. (1997) [12] developed local nonlinear extensions of PCA for dimensionality reduction. The algorithm was applied to both speech and image data. The proposed algorithm is fast to compute and provides accurate representations of the data.
PCA and neural network implementations of nonlinear PCA were compared with the proposed algorithm. Results showed that nonlinear PCA performed better than PCA, and the proposed local linear techniques performed better than the neural network implementations.
Park et al. (2003) [13] proposed a method of image classification using a neural network. In the preprocessing stage, the object region is extracted using region segmentation techniques, and the images are transformed using wavelet transforms. Shape-based texture features are extracted from the transformed images and used for classification. The neural network was trained using the backpropagation learning algorithm on 300 training samples composed of 10 images from each of 30 classes. A classification rate of 81.7% accuracy was achieved.
Su et al. (2003) [14] proposed a new feedback approach with progressive learning capability, based on a Bayesian classifier. Positive and negative feedback are treated with different strategies: positive examples are used to refine the image retrieval results, and negative examples are used to modify the ranking of the retrieved images. Images are retrieved by estimating a Gaussian distribution over the positive examples that represents the desired images for a given query, and a Bayesian network is used to re-rank the images in the database. PCA is used to update the feature subspace during the feedback process, thus reducing the subspace dimensionality; the feedback process thereby improves retrieval. Experimental results show that the proposed method improves the speed, memory usage and accuracy of the retrieval process.
Research Method
This section briefly introduces the Discrete Wavelet Transform (DWT), Information Gain (IG) and the Multilayer Perceptron (MLP) neural network.
Discrete Wavelet Transform (DWT)
The discrete wavelet transform (DWT) is an implementation of the wavelet transform using a discrete set of wavelet scales and translations obeying some defined rules. In other words, this transform decomposes the signal into a mutually orthogonal set of wavelets, which is the main difference from the continuous wavelet transform (CWT), or its implementation for discrete time series, sometimes called the discrete-time continuous wavelet transform (DT-CWT). The feature vector of each image was extracted using the discrete wavelet transform. Pixels that are one length away from each other are selected. The pseudocode is given below:
1. Compute the image size M×N
2. For each alternate row index i, with i less than M (or M+1)
3. For each alternate column index j, with j less than N (or N+1)
4. Compute DWT(array[xi, yj])
5. Store the computed value in a one-dimensional array
6. Repeat from step 1 until all images are processed
The discrete wavelet transform is preferred over the fast Fourier transform due to its simplicity and the reduced time needed to compute the image coefficients.
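A runnable sketch of this feature-extraction step, assuming the PyWavelets library is available, is given below; the alternate-pixel sampling follows the pseudocode above, and the choice to flatten all four coefficient sub-bands into one vector is an illustrative assumption.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(image):
    sub = image[::2, ::2]                      # select pixels one length apart
    cA, (cH, cV, cD) = pywt.dwt2(sub, "haar")  # single-level 2-D Haar DWT
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
print(dwt_features(img).shape)  # one 1-D feature vector per image
```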
Haar Wavelet
In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal function basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example. It was proposed in 1909 by Alfréd Haar, who used these functions to give an example of a countable orthonormal system for the space of square-integrable functions on the real line. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as D2. The Haar wavelet is also the simplest possible wavelet. Its technical disadvantage is that it is not continuous, and therefore not differentiable. This property can, however, be an advantage for the analysis of signals with sudden transitions, such as the monitoring of tool failure in machines. The Haar wavelet's mother wavelet function can be described as
ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = −1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise. (3)
Its scaling function can be described as
φ(t) = 1 for 0 ≤ t < 1, and φ(t) = 0 otherwise. (4)
Figure 2: The Haar wavelet
To calculate the Haar transform of an array of n samples:
1. Find the average of each pair of samples (n/2 averages).
2. Find the difference between each average and the samples it was calculated from (n/2 differences).
3. Fill the first half of the array with the averages.
4. Fill the second half of the array with the differences.
5. Repeat the process on the first half of the array. (The array length should be a power of two.)
Two samples, l and r, can be expressed as an average, a, and a difference, d, as in mid-side coding:
a = (l + r)/2, d = (l − r)/2.
This is reversible, since
l = a + d, r = a − d.
Information Gain
The main aim of the information gain criterion is to discover how much unique information a feature adds to the whole feature set. The information gain of a feature f can be computed as F(S ∪ {f}) − F(S), where F(·) is the evaluation criterion and S is the selected subset of features. The feature with the greater information gain is preferred. Bayes error rate, conditional probability and information gain are a few such criteria. Quinlan proposed a classification algorithm called ID3 that introduced the information gain concept. Information gain is a measure-based method used for selecting the best split attributes in decision tree classifiers; it indicates the extent to which the data's entropy is reduced, and it evaluates the values of each particular attribute. Each feature receives an information gain value, which is used to decide whether the feature is selected or deleted. Hence, a threshold value for feature selection must be established first; a feature is chosen when its information gain value is greater than the threshold value.
Let A be a set of s instances and let B be the set of k classes. Let P(Bi, A) be the fraction of the examples in A that have class Bi. Then the expected information for the class membership is given by:
Info(A) = −Σi P(Bi, A) log2 P(Bi, A)
If a particular attribute X has y distinct values, the anticipated information for the decision tree with X as root is the weighted sum of the expected information of the subsets of A according to the distinct values of X. Let Ai be the set of instances whose value of attribute X is Xi; then
InfoX(A) = Σi (|Ai| / |A|) · Info(Ai)
The difference between Info(A) and InfoX(A) gives the information gained by partitioning A according to the test on X:
Gain(X) = Info(A) − InfoX(A)
The higher the information gain, the higher the chance of obtaining pure classes in the target class if the split is based on the variable with the highest gain. Information gain selects the feature vectors that are essential for the classification process. On the coefficients computed from the DWT, the information gain can be computed based on the class attribute.
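The following short Python sketch, on an invented toy dataset, computes Info(A), Info_X(A) and Gain(X) exactly as defined above.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(attr_values, labels):
    total = entropy(labels)                          # Info(A)
    n = len(labels)
    split = 0.0
    for v in set(attr_values):
        subset = [y for x, y in zip(attr_values, labels) if x == v]
        split += len(subset) / n * entropy(subset)   # Info_X(A)
    return total - split                             # Gain(X)

x = ["low", "low", "high", "high", "high", "low"]          # invented attribute
y = ["benign", "benign", "tumor", "tumor", "benign", "benign"]
print(round(info_gain(x, y), 3))                           # ~0.459
```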
The information gain for an attribute X with respect to the class attribute Y is given via the conditional entropy of Y given X, H(Y|X):
IG(Y|X) = H(Y) − H(Y|X)
The conditional entropy of Y given X is
H(Y|X) = −Σx P(X = x) Σy P(Y = y | X = x) log2 P(Y = y | X = x)
Multilayer Perceptron (MLP)
The multilayer perceptron (MLP) is the most favored supervised learning network model. The network consists of one input layer, one or more hidden layers and an output layer. The connections between the layers are typically formed by connecting each node in a given layer to all neurons in the next layer. During the training phase, the scalar weight of each connection is adjusted. The outputs are obtained from the output nodes of the network. A feature vector x is presented at the input layer, and each output represents a discriminator between its class and all of the other classes. In training, the training examples are fed in and the predicted outputs are computed. Each output is compared with the target output, the measured error is propagated back through the network, and the weights are adjusted. The training set of size m can be represented as TM = {(x1, y1), …, (xm, ym)}, where xi ∈ R^a are the input vectors of dimension a, yi ∈ R^b are the output vectors of dimension b, and R denotes the set of real numbers. Let fw represent the function computed by the neural network with weights w. Supervised learning adjusts the weights such that fw(xi) ≈ yi for all training examples. After the neural network has been trained with all feature vectors and is tested on new samples, its output will be correct to a certain extent. The activation function in a neural network controls the amplitude of the output, so that the output range is between 0 and 1 or between −1 and 1. Mathematically, the internal activity of neuron k can be written as
vk = Σj wjk xj
where xj are the inputs and wjk are the weights. The output of the neuron, yk, is therefore the outcome of some activation function applied to the value vk. The most common activation function used to construct neural networks is the sigmoid function, which is given as
σ(v) = 1 / (1 + e^(−v))
Proposed FS-MLP
The softmax activation function (Bridle, 1990), applied to the network outputs, ensures that the outputs conform to the mathematical requirements of multivariate classification probabilities [15]. If the classification problem has C classes or categories, then each category is modeled by one of the network outputs. If zi is the weighted sum of products between the weights and inputs for the i-th output, then
softmaxi = e^(zi) / Σj e^(zj)
The softmax activation function thus ensures that all outputs conform to the requirements for multivariate probabilities, that is,
0 < softmaxi < 1 for all i = 1, 2, …, C, and Σi softmaxi = 1. (16)
Back Propagation Algorithm
The standard way to train a multilayer perceptron is a method called backpropagation. It is used to solve a basic problem called credit assignment, which arises when we try to figure out how to adjust the weights of edges coming from the input layer. Recall that in the single-layer perceptron we could easily tell which weights were producing the error, because we could directly observe the weights and the output from those weighted edges. With a hidden layer, however, the contribution of the input weights to the error is obscured by the fact that the data pass through a second set of weights [16].
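As a concrete illustration of a sigmoid hidden layer feeding a softmax output, here is a hedged sketch of one forward pass; the layer sizes (40 features and 5 classes, matching the dataset described later) and the random weights are invented, and this is not the paper's trained FS-MLP.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()             # outputs in (0,1), summing to 1

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(40, 20)), np.zeros(20)  # 40 DWT/IG features -> 20 hidden
W2, b2 = rng.normal(size=(20, 5)), np.zeros(5)    # 20 hidden -> 5 classes

x = rng.normal(size=40)                           # one feature vector
probs = softmax(sigmoid(x @ W1 + b1) @ W2 + b2)   # class probabilities
print(probs.round(3), probs.sum())
```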
a. First case: function composition
In the feed-forward step, the information coming into a unit is used as the argument for the evaluation of the node's primitive function and its derivative. In this step the network computes the composition of the functions f and g. Figure 3 shows the state of the network after the feed-forward step: the correct result of the function composition has been produced at the output unit, and each unit has stored some information on its left side.
Figure 3: Result of the feed-forward step
In the backpropagation step, the input from the right of the network is the constant 1. Incoming information to a node is multiplied by the value stored in its left side, and the result of the multiplication is transmitted to the next unit to the left. We call the result at each node the traversing value at this node. Figure 4 shows the final result of the backpropagation step, which is f′(g(x))g′(x), i.e., the derivative of the function composition f(g(x)) implemented by this network.
Figure 4: Result of the backpropagation step
The backpropagation step thus provides an implementation of the chain rule. Any sequence of function compositions can be evaluated in this way, and its derivative can be obtained in the backpropagation step. We can think of the network as being used backwards with the input 1, whereby at each node the product with the value stored in the left side is computed.
b. Second case: function addition
The next case to consider is the addition of two primitive functions. Figure 5 shows a network for the computation of the addition of the functions f1 and f2. An additional node has been included to handle the addition of the two functions. The partial derivative of the addition function with respect to either of the two inputs is 1. In the feed-forward step the network computes the result f1(x) + f2(x). In the backpropagation step the constant 1 is fed from the right side into the network. All incoming edges to a unit fan out the traversing value at this node and distribute it to the connected units to the left. Where two right-to-left paths meet, the computed traversing values are added. Figure 6 shows the result f′1(x) + f′2(x) of the backpropagation step, which is the derivative of the function addition f1 + f2 evaluated at x. A simple proof by induction shows that the derivative of the addition of any number of functions can be handled in the same way.
Weighted edges could be handled in the same manner as function compositions, but there is an easier way to deal with them. In the feed-forward step the incoming information x is multiplied by the edge's weight w, giving the result wx. In the backpropagation step the traversing value 1 is multiplied by the weight of the edge, giving w, which is the derivative of wx with respect to x. From this we conclude that weighted edges are used in exactly the same way in both steps: they modulate the information transmitted in each direction by multiplying it by the edge's weight.
After choosing the weights of the network randomly, the backpropagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following four steps:
1. Feed-forward computation
2. Backpropagation to the output layer
3. Backpropagation to the hidden layer
4. Weight updates
The algorithm is stopped when the value of the error function has become sufficiently small.
Backpropagation to the output layer
The backpropagation path starts from the output of the network and is propagated from the output layer backwards; the remaining steps are detailed below.
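As a compact numeric illustration of the whole four-step loop on a toy two-layer network (the layer shapes match the previous listing; the learning rate and data are invented), consider the following sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(40, 20)) * 0.1
W2 = rng.normal(size=(20, 5)) * 0.1
x = rng.normal(size=40)
t = np.eye(5)[2]                       # one-hot target class

sig = lambda v: 1.0 / (1.0 + np.exp(-v))
for _ in range(100):
    h = sig(x @ W1)                    # 1. feed-forward computation
    a = h @ W2
    z = np.exp(a - a.max())
    y = z / z.sum()                    # softmax output
    d2 = y - t                         # 2. error backpropagated to the output layer
    d1 = (W2 @ d2) * h * (1 - h)       # 3. error backpropagated to the hidden layer
    W2 -= 0.5 * np.outer(h, d2)        # 4. weight updates (negative gradient direction)
    W1 -= 0.5 * np.outer(x, d1)
print(y.round(3))                      # probability mass moves to class 2
```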
Backpropagation to the hidden layer
Each unit in the hidden layer is connected to each unit in the output layer with an edge of weight W. The error backpropagated to a unit in the hidden layer must be computed taking into account all possible backward paths. The backpropagated error can be computed in the same way for any number of hidden layers.
Weight updates
It is very important to make the corrections to the weights only after the backpropagated error has been computed for all units in the network. Otherwise the corrections become intertwined with the backpropagation of the error, and the computed corrections no longer correspond to the negative gradient direction.
Training Dataset
Nearly 100 images containing five class labels were used in the experimental setup. The top 40 relevant attributes were selected using information gain. Figure 8 shows some of the images used in this work. The results obtained from the standard MLP neural network and the proposed FS-MLP neural network are shown in Figure 10.
Figure 10: Classification accuracy measured in percentage
Summary and Conclusion
In this paper it was proposed to extract features using the Discrete Wavelet Transform (DWT) and to select the top attributes, based on the class attribute, using information gain. The extracted features were used to train the existing MLP neural network classifier, which was compared with the proposed FS-MLP neural network. The classification accuracy of the proposed method improved by 3.45%. Using fewer features in the proposed method also decreases the overall processing time for a given query.
Comparative genomics of Klebsiella michiganensis BD177 and related members of Klebsiella sp. reveal the symbiotic relationship with Bactrocera dorsalis
Background
Bactrocera dorsalis is a destructive polyphagous and highly invasive insect pest of tropical and subtropical species of fruit and vegetable crops. The sterile insect technique (SIT) has been used for decades to control insect pests of agricultural, veterinary, and human health importance. Irradiation of pupae in SIT can reduce the ecological fitness of the sterile insects. Our previous study showed that a gut bacterial strain, BD177, could restore ecological fitness by promoting host food intake and metabolic activities.
Results
Using long-read sequencing technologies, we assembled the complete genome of the K. michiganensis BD177 strain. The complete genome of K. michiganensis BD177 comprises one circular chromosome and four plasmids with a GC content of 55.03%. A pan-genome analysis was performed on 119 genomes (the strain BD177 genome and 118 of 128 published Klebsiella sp. genomes, ten of which were discarded). The pan-genome includes a total of 49,305 gene clusters, a small number (858) of core genes, and a high number (10,566) of accessory genes. Pan-genome and average nucleotide identity (ANI) analyses showed that BD177 is more similar to the type strain K. michiganensis DSM25444 and more distant from the type strain K. oxytoca ATCC13182. Comparative genome analysis with 21 K. oxytoca and 12 K. michiganensis strains identified 213 unique genes in the BD177 genome, several of them related to amino acid metabolism, metabolism of cofactors and vitamins, and xenobiotics biodegradation and metabolism.
Conclusions
Phylogenomic analysis reclassified strain BD177 as a member of the species K. michiganensis. Comparative genome analysis suggested that K. michiganensis BD177 has the strain-specific ability to provide three essential amino acids (phenylalanine, tryptophan and methionine) and two B vitamins (folate and riboflavin) to B. dorsalis. The clear classification status of the BD177 strain and the identification of its unique genetic characteristics may contribute to expanding our understanding of the symbiotic relationship between the gut microbiota and B. dorsalis.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12863-020-00945-0.
Background
Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) is a destructive polyphagous and highly invasive insect pest of tropical and subtropical species of fruit and vegetable crops. While insecticides have been used to control this pest, insecticide resistance and environmental pollution by chemical pesticides have severely limited this type of control [1]. Moreover, B. dorsalis has a powerful biological invasion ability: its invasion, spread, and establishment in sub-Saharan Africa have caused about $2 billion in economic losses in the horticultural export markets of Africa [2]. The current movement of B. dorsalis into Central China, without any apparent intense selective pressure, poses a deep concern for other temperate regions of the world, especially Europe and North America [3]. The sterile insect technique (SIT) has been used for decades to control insect pests of agricultural, veterinary and human health importance [4,5]. Compared with insecticide control strategies, SIT has several attractive features, including species specificity and environmental friendliness.
Ionizing irradiation is used to sterilize insects, which are subsequently handled, transported, and released in the field, ideally only the males [6]. Thus, SIT can be an alternative strategy for the management of B. dorsalis. Previously, SIT has been used to control pest fruit fly species including Ceratitis capitata [7], B. tryoni [8], B. cucurbitae [9] and B. dorsalis [10]. However, SIT may have some limitations related to the ecological fitness of sterile male adult flies due to domestication, mass rearing, irradiation, and handling [11]. These procedures also impact the tephritid gut microbiome, with detrimental effects on physiology, behavior, and fitness [12,13]. Thus, the deleterious impact on the ecological fitness of the released insects has been one of the most considerable issues of SIT applications [11,14].
Gut microbiota is strongly connected with the biology of the host and contributes to its health [15]. Gut microbiota affects insects in several ways, such as aiding food digestion and detoxification [16], providing essential nutrients [17], and protecting against infectious pathogens [18]. Much recent research on tephritids suggests that gut microbiota reduce larval development time [19,20], increase pupal weight [21], produce larger males [22], improve male performance [13,23], and increase female fecundity [24], longevity [23,25] and chilling resistance [26]. Mass rearing and irradiation processes affect the gut microbial community structure in tephritids [13,23,27,28]. Compared to wild flies, the abundance and diversity of Enterobacteriaceae, the major gut microbiota community, are reduced in mass-reared irradiated flies, and the abundance of minor members (e.g., Pseudomonas or Bacillaceae) is increased [13,23]. This disturbance of gut microbial homeostasis may be causally related to the competitive disadvantage of sterile males. The above research shows that manipulation of the gut microbiota has great potential and can be introduced into SIT facilities to improve the efficiency of pest control. Gut bacteria can be used as probiotics to prevent pathogens and to promote larval growth and male performance during all production stages of the sterile male fly, from the egg to the released fly.
The gut symbiotic bacterial community of B. dorsalis has been investigated [23,27,29]. Enterobacteriaceae were the predominant family in different B. dorsalis populations and different developmental stages, in both laboratory-reared and field-collected samples [27,29]. Our previous study found that irradiation causes a significant decrease in the Enterobacteriaceae abundance of the sterile male fly [23]. We succeeded in isolating a gut bacterial strain, BD177 (a member of the Enterobacteriaceae family), that can improve the mating performance, flight capacity, and longevity of sterile males by promoting host food intake and metabolic activities [23]. However, the probiotic mechanism remains to be further investigated. The genomic characteristics of BD177 may therefore contribute to an understanding of the symbiont-host interaction and its relation to B. dorsalis fitness. The present study aims to elucidate the genomic basis of the beneficial impacts of strain BD177 on the sterile males of B. dorsalis. Insight into the genome features of strain BD177 will help us make better use of probiotics or gut microbiota manipulation as an important strategy to improve the production of high-performing B. dorsalis in SIT programs.
Results
Assembly description and genome information of K. michiganensis BD177
The genome of K. michiganensis BD177 was sequenced using Illumina HiSeq and PacBio technology. Raw reads (~1.3 Gbp) were processed to remove SMRTbell adapters and short and low-quality reads (< 80% accuracy) using SMRT Analysis version 2.3 (https://www.pacb.com/products-and-services/analytical-software/whole-genome-sequencing/). A total of 155,828 filtered reads (average length, 8.9 kb) were used for de novo assembly using the Celera Assembler [30], with self-correction of the PacBio reads. A total of 1091 Mb of paired-end Illumina reads from a 500-bp genomic library were mapped using SOAP [31] to the bacterial plasmid database to trace the presence of any plasmid and to fill the gaps. The de novo assembly resulted in five contigs, representing the complete K. michiganensis BD177 genome of 6,812,698 bp with a GC content of 55.03%, in a single chromosome and four plasmids. The annotated genome contains 6714 genes, 25 rRNA genes, and 86 tRNA genes (Table 1).
16S rRNA phylogenetic analysis and phenotype of the isolated strain BD177
Maximum parsimony and maximum likelihood phylogenetic trees based on the 16S rRNA gene sequence showed that the closest species to BD177 are K. michiganensis and K. oxytoca (Additional file 1: Fig. S1, Additional file 2: Table S1). The evolutionary divergence of the 16S rRNA gene sequence between BD177 and K. michiganensis E718 was the minimum value among all sequences (BD177 vs K. michiganensis E718: 0.0079). Biochemical tests were performed on BD177 by API 20E to confirm its position relative to K. oxytoca and K. michiganensis. Biochemical characteristics of K. oxytoca ATCC13182(T) (= NCTC13727) and K. michiganensis W14(T) (= DSM25444) were obtained from BacDive Web services [32]. Compared with K. oxytoca ATCC13182(T) and K. michiganensis W14(T), BD177 matches W14 in being β-galactosidase positive and urease negative, and matches ATCC13182(T) in being arginine dihydrolase positive. However, unlike both type strains, BD177 cannot utilize citrate as the sole carbon source (Table 2).
Pan-genome analysis
In this study, we considered the 128 publicly available genome assemblies for Klebsiella sp. (Additional file 3: Table S2). Of these genomes, 26 were originally annotated as K. aerogenes, 13 as K. michiganensis, 27 as K. oxytoca, 15 as K. pneumoniae, 25 as K. quasipneumoniae, 1 as K. quasivariicola, and 21 as K. variicola. The type strain genome of each species is included in these genomes. The genome assemblies were subjected to strict quality control (N75 values of > 10,000 bp, < 500 undetermined bases per 100,000 bases, > 90% completeness and < 5% contamination) using QUAST and CheckM (Additional file 4: Table S3). This resulted in a total of 118 Klebsiella sp. strains being studied, with ten low-quality genome assemblies discarded. K. pneumoniae, K. variicola, K. quasivariicola, and K. quasipneumoniae were considered a high GC content group, and K. aerogenes, K. michiganensis and K. oxytoca a low GC content group. The genome of our new isolate, which we designated K. michiganensis BD177, with 55.03% GC content (Table 1), is similar to the low GC content Klebsiella sp. group (Fig. 1a). The pan-genome shape of the 119 analyzed Klebsiella sp. genomes is presented in Fig. 1b. Hard core genes are found in > 99% of genomes, soft core genes in 95-99% of genomes, shell genes in 15-95%, and cloud genes in less than 15% of genomes.
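As an illustration of this classification, the following Python sketch partitions an invented gene presence/absence matrix (rows = gene clusters, columns = genomes) into the four categories defined above; the matrix is random, not the study's actual pan-genome output.

```python
import numpy as np

def classify_pangenome(presence):  # boolean matrix, shape (n_clusters, n_genomes)
    freq = presence.mean(axis=1)   # fraction of genomes carrying each cluster
    return {
        "hard core (>99%)": int(np.sum(freq > 0.99)),
        "soft core (95-99%)": int(np.sum((freq > 0.95) & (freq <= 0.99))),
        "shell (15-95%)": int(np.sum((freq >= 0.15) & (freq <= 0.95))),
        "cloud (<15%)": int(np.sum(freq < 0.15)),
    }

rng = np.random.default_rng(0)
matrix = rng.random((5000, 119)) < rng.random((5000, 1))  # invented matrix
print(classify_pangenome(matrix))
```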
A total of 49,305 gene clusters were found, 858 of which comprised the core genome (1.74%), 10,566 the accessory genome (21.43%), and 37,795 (76.66%) the cloud genome (Fig. 1b). Comparative genomic analysis showed that the 119-genome Klebsiella sp. pan-genome can be considered "open", since nearly 25 new genes are added for each additional genome considered (Additional file 5: Fig. S2). To study the genetic relatedness of the genomic assemblies, we constructed a phylogenetic tree of the 119 Klebsiella sp. strains using the presence and absence of core and accessory genes from the pan-genome analysis (Fig. 2). The tree structure reveals six separate clades within the 119 analyzed Klebsiella sp. genomes (Fig. 2). In this phylogenetic tree, the type strain genomes originally annotated as K. aerogenes, K. michiganensis, K. oxytoca, K. pneumoniae, K. variicola, and K. quasipneumoniae in the NCBI database fall into six different clusters. Some non-type strain genomes originally annotated as K. oxytoca in the NCBI database cluster in the clade of the type strain K. michiganensis DSM25444. The K. oxytoca group, including the type strain K. oxytoca NCTC13727, has the unique gene cluster 1 (Fig. 2), and the K. michiganensis group, including the type strain K. michiganensis DSM25444, has the unique gene cluster 2 (Fig. 2). Gene clusters 1 and 2, based on uniquely present genes from the pan-genome analysis, can distinguish between non-type strains of K. michiganensis and K. oxytoca (Fig. 2). Our new isolate BD177 clusters in the type strain K. michiganensis clade (Fig. 2).
In the current era of high-throughput sequencing, with easy access to bacterial genomes, the average nucleotide identity (ANI) of genome-wide comparisons has been recommended as a standard method to improve the accuracy of taxonomic identification of prokaryotic genomes [33]. Figure 3a shows the ANIm (ANI calculated using a MUMmer3 [34] implementation) between all 119 Klebsiella sp. genomes. The distance metrics support the grouping of the genomes into the six clades of Fig. 2. For species delimitation, Ciufo et al. [35] advise the use of an ANI cutoff of 96% to define species boundaries. When comparing genomes belonging to different Klebsiella species, we observed ANI values of ≤ 94.8% (Fig. 3b and Additional file 6: Table S4), indicating that each clade consists of one species (except for K. oxytoca P620, K. oxytoca NCTC9146, and K. oxytoca JKo3) distinct from the other clades (Fig. 3b). The ANI value between BD177 and the type strain K. michiganensis DSM25444 was 98%, above the 96% cutoff (Fig. 3b), while that between BD177 and the type strain K. oxytoca NCTC13727 was 93%.
Fig. 3: a Pairwise ANIm values between all genomes (Additional file 6: Table S4). b Heatmap and hierarchical clustering of average nucleotide identity (ANI) scores between BD177, 21 K. oxytoca and 12 K. michiganensis strains.
Genetic potential of BD177 for a symbiotic relationship
Based on the pan-genome analysis of the 119 Klebsiella sp. strains, we focused on the genomic relatedness of K. michiganensis BD177, 21 K. oxytoca strains, and 12 K. michiganensis strains. In total, 201,774 genes were present in their genomes, with an average of 5934 genes per genome.
These genes were clustered into 9643 orthogroups by OrthoFinder [36], where an orthogroup is defined as the group of genes descended from a single gene in the most recent common ancestor of a group of species. Of these orthogroups, 3833 were identified as core orthogroups and 3351 as single-copy orthogroups (Additional file 7: Table S5). In K. michiganensis BD177, 213 strain-specific orthogroups were identified. Mapping of these orthogroups to the eggNOG database (v4.5) [37] revealed unique features in the predicted functional capacity (Additional file 8: Table S6). The metabolic features of K. michiganensis BD177 were also investigated by cumulatively mapping the unique functional KEGG genes of K. michiganensis BD177 to the KEGG pathways (Fig. 4). The KEGG pathway analysis showed that BD177 harbors complete phenylalanine, tyrosine, and tryptophan biosynthesis pathways (KEGG genes K04518, K04093, K11646) and cysteine and methionine metabolism pathways (KEGG gene K13060). It also showed that BD177 harbors metabolism of cofactors and vitamins pathways for riboflavin metabolism (KEGG gene K19286) and folate biosynthesis (KEGG gene K06920). In addition, BD177 harbors genes of the xenobiotics biodegradation and metabolism pathways (KEGG genes K01061 and K05797) for chlorocyclohexane and chlorobenzene degradation, fluorobenzoate degradation, and toluene degradation.
Fig. 4: Metabolic pathways of the BD177 strain. The pathways were generated using the iPath (ver. 3) module and are based on the KEGG Orthology numbers of orthogroups identified from the pan-genome analysis of 34 genomes. Metabolic pathways identified from unique orthogroups of BD177 are depicted in blue, red and green, respectively.
Discussion
Klebsiella sp. strains are among the dominant symbiotic bacteria in the gut of Tephritidae, with the ability to play diverse roles [11]. In this study, we describe the complete genome sequence of K. michiganensis BD177, isolated from a wild adult B. dorsalis. We found that the taxonomic status of the B. dorsalis gut bacterial strain BD177 could not be identified based only on the 16S rRNA gene phylogenetic tree and biochemical analysis. Hence, we used Illumina HiSeq and PacBio technology to assemble the complete 6.8 Mbp genome of K. michiganensis BD177. Using genome-wide information, we identified the taxonomic status and the strain-specific genome features of the BD177 strain to explore its symbiotic relationship with B. dorsalis.
The phylogenetic analysis of the 16S rRNA gene shows that the closest species to BD177 are K. oxytoca and K. michiganensis (Fig. S1). Bootstrap values were more than 70%, which corresponds to a probability of ≥ 95% that the corresponding clade is real [38]. In addition, the biochemical indicators of the BD177 strain are not the same as those of the type strains of K. oxytoca and K. michiganensis (Table 2). Thus, from these data alone, the species status of strain BD177 remained unclear. The 16S rRNA gene is often used as a putative marker for species circumscription, but the conservative nature of the gene does not provide enough resolution at this taxonomic scale [33]. Phenotypic properties can be unstable at times, and their expression can depend on changes in environmental conditions, e.g., growth substrate, temperature, and pH [39]. Furthermore, biochemical properties do not accurately reflect the entire extent of the genomic complexity of a given species [39].
As whole-genome sequencing has become more widely accessible due to the introduction of cost-effective high-throughput DNA sequencing technology, genome sequence similarity has become a routine taxonomic parameter. We compared the genome features of BD177 with 118 currently available high-quality genomic assemblies of Klebsiella sp., comprising the species K. aerogenes, K. michiganensis, K. oxytoca, K. pneumoniae, K. variicola, and K. quasipneumoniae. Here we clarified the classification status of K. michiganensis BD177 by comparison with the six taxonomic clades on the basis of (i) differences in whole-genome GC content (Fig. 1a), (ii) a phylogenetic tree constructed on the presence and absence of core and accessory genes from the pan-genome analysis (Fig. 2), and (iii) pairwise ANI (Fig. 3). All 118 Klebsiella sp. genomes were divided into a low GC content group (including the K. aerogenes, K. michiganensis and K. oxytoca species) and a high GC content group (including the K. pneumoniae, K. variicola and K. quasipneumoniae species). Strain BD177, with 55.03% GC content, belongs to the low GC genome group. The GC content of complex microbial communities seems to be globally and actively influenced by the environment; similar environments tend to have similar GC-content patterns [40]. A phylogenetic tree constructed on the presence and absence of core and accessory genes confirms the position of strain BD177 within the Klebsiella sp. strains. The K. michiganensis group, including the type strain K. michiganensis DSM25444, has the unique gene cluster 2, and the K. oxytoca group, including the type strain K. oxytoca NCTC13727, has the unique gene cluster 1. Non-type strains of K. michiganensis and K. oxytoca are distinguishable based on gene clusters 1 and 2 from the pan-genome analysis. Whole-genome sequence data as a basis for taxonomic assignment display greater discriminatory power than 16S rRNA gene sequence analysis alone [41]. In addition, pairwise genome comparison metrics such as the average nucleotide identity (ANI) are a reliable method to verify taxonomic identities in prokaryotic genomes, for both complete and draft assemblies [33]. Based on the ANI values with the type strains of K. michiganensis and K. oxytoca, BD177 belongs to the species K. michiganensis rather than K. oxytoca (Fig. 3b). This result is consistent with the phylogenetic analysis based on the pan-genome: strain BD177 belongs to K. michiganensis.
To explore the probiotic potential of K. michiganensis BD177, an in-depth comparative genomic analysis of 34 genomes, including 21 K. oxytoca strains, 12 K. michiganensis strains and K. michiganensis BD177, was performed with OrthoFinder. A total of 213 strain-specific orthogroups of strain BD177 were identified from the 9643 orthogroups in the comparative genomic analysis. Predicted functional capacity analysis showed that these unique orthogroups include key metabolic enzymes for amino acids, vitamins and xenobiotics. Of potential importance to the symbiosis of strain BD177 with its insect host is the bacterium's encoded ability to biosynthesize phenylalanine, tyrosine, tryptophan, cysteine and methionine. Our previous research also showed that supplementation of K. michiganensis BD177 to sterile male B. dorsalis improved total free amino acid levels in the hemolymph [23]. The obligate primary endosymbionts of many sap-feeding insects provide their hosts with essential amino acids [42-44].
The symbiotic fungi of Drosophila melanogaster promote amino acid harvest to rescue the lifespan of undernourished flies [45]. This suggests that K. michiganensis BD177 can provide amino acids, especially essential amino acids such as phenylalanine, tryptophan and methionine, to B. dorsalis. Additionally, K. michiganensis BD177 was found to encode the ability to biosynthesize the B vitamins riboflavin (B2) and folate (B9). In humans, the gut microbiota can synthesize and supply B vitamins to their hosts, which lack the biosynthetic capacity for most vitamins [46]. Recent studies have implicated the Drosophila microbiota in supplying folate [47], riboflavin [48] and thiamine [49]. The riboflavin and folate biosynthesis ability of K. michiganensis BD177 suggests these B vitamins may be of particular importance, especially in adult life stages feeding on nutrient-poor nectar and dew [50]. In D. melanogaster, Acetobacter pomorum provides thiamine to its host to promote larval development [49]. Folate (B9) biosynthesis by Wigglesworthia glossinidia plays a role in Glossina morsitans maturation and reproduction [51]. Our previous research also showed that K. michiganensis BD177 improved the mating competitiveness and lifespan of sterile male B. dorsalis [23]. Additionally, a recent study reported that K. oxytoca can affect the foraging decisions [52] and mate selection [53] of B. dorsalis. This suggests that the riboflavin and folate synthetic ability of K. michiganensis BD177 may contribute to the sexual performance and lifespan of B. dorsalis.
Compared with the other 33 Klebsiella sp. genomes, some strain-specific genes from the xenobiotics biodegradation and metabolism pathways were also identified in the K. michiganensis BD177 genome. Recent research shows that the gut bacteria of insects contribute significantly to resistance against xenobiotics, including phytotoxins and pesticides [54]. Pseudomonas fulva can assist with the digestion and detoxification of caffeine, an alkaloid allelochemical, in the coffee berry borer [16]. The gut symbiont Burkholderia of Riptortus pedestris has gained the ability to hydrolyze the insecticide fenitrothion [55]. The gut symbiont Citrobacter sp. can degrade trichlorphon and confers host insecticide resistance in B. dorsalis [56]. Interestingly, the gut symbiont Lactobacillus plantarum of D. melanogaster can significantly enhance the toxicity of the insecticide chlorpyrifos [57]. It is hypothesized that K. michiganensis BD177 may play a role in insect resistance against pesticides in the field. However, the potential mechanisms of insect resistance or sensitivity to insecticides remain unclear.
Conclusions
Supplementing gut symbiotic bacteria as probiotics in the larval or adult diet is encouraging for their potential application in SIT programs to produce high-quality insects. However, an understanding of the bacterial probiotic mechanism is important for the selection and application of probiotics for different insect species. Using long-read sequencing technologies, we assembled the complete genome of the K. michiganensis BD177 strain. The comparison of its genome sequence against other Klebsiella species revealed a set of genes unique to the BD177 strain, including key metabolic enzymes for amino acids, vitamins, and xenobiotics, which could play an important role in the resistance and fitness of B. dorsalis. These findings extend our previous work [23] and improve our understanding of the relationship between gut bacteria and B. dorsalis.
In the future, gut bacterial symbionts could be engineered by strengthening their probiotic genetic elements, which would improve the application efficiency of the gut microbiota in pest management programs incorporating SIT.
16S rRNA gene analysis and biochemical characterization
The Klebsiella michiganensis BD177 reported in this study was isolated in a previous study from the gut of a male B. dorsalis adult, collected from the Institute of Urban and Horticultural Pests of Huazhong Agricultural University [23]. The bacterial DNA was extracted with the HiPure Bacterial DNA Kit (Magen), following the protocol for Gram-negative bacteria, and used for amplification of the 16S rRNA gene with the primers 27F (5′-GTTTGATCCTGGCTCAG-3′) and 1492R (5′-GGTTACCTTGTTACGACTT-3′) [16]. Subsequently, the 1.4 kb PCR product was purified using a PCR purification kit (Axygen) and subjected to bidirectional Sanger sequencing. The 16S rRNA gene sequence of strain BD177 was compared with the reference sequences using the "Identify" tool of the EzBioCloud database for taxonomic assignment [58]. The similarity of the 16S rRNA gene between strain BD177 and the Klebsiella michiganensis type strain W14 and the Klebsiella oxytoca type strain JCM 1665 was 99.25% and 99.15%, respectively. In addition, 16S rRNA gene sequences longer than 1300 nucleotides for the type strains of K. oxytoca, K. michiganensis, K. pneumoniae, K. quasipneumoniae, K. aerogenes, Klebsiella variicola, and Pseudomonas aeruginosa were downloaded from the EzBioCloud database [58]. Alignments of the sequences were performed using the MUSCLE software [59]. A neighbor-joining phylogenetic tree based on sequences of the 16S rRNA gene was constructed with Molecular Evolutionary Genetics Analysis (MEGA X) [60], using the Kimura 2-parameter model with 1000 bootstrap replicates. The maximum parsimony and maximum likelihood trees, with the inclusion of an outgroup, were also constructed with MEGA X, and the distance matrix of evolutionary divergence between 16S rRNA gene sequences was estimated with MEGA X. Strain BD177 was biochemically characterized using the API 20E system, a biochemical panel for the identification and differentiation of members of the family Enterobacteriaceae (bioMerieux Inc., Hazelwood, MO), according to the manufacturer's instructions [61].
DNA isolation and genome sequencing
The K. michiganensis BD177 strain was grown on LB with incubation at 37°C under aerobic conditions. For DNA extraction, the strain was grown in 8 ml of medium overnight, followed by pelleting at 6000× g for 10 min, and genomic DNA was obtained using the E.Z.N.A.® Bacterial DNA Kit (OMEGA). DNA quantity and quality were estimated using a NanoDrop (Thermo Scientific), and aliquots were visualized on a 1.2% agarose gel stained with ethidium bromide to verify DNA integrity. Whole-genome sequencing of K. michiganensis BD177 was performed on the PacBio RS II platform and the Illumina HiSeq 4000 platform at the Beijing Genomics Institute (BGI, Shenzhen, China). For Illumina sequencing, genomic DNA was sheared randomly with a Bioruptor ultrasonicator (Diagenode, Denville, NJ, USA) and physicochemical methods to construct three read libraries with 500 bp insert lengths. DNA was sequenced on a HiSeq sequencer (Illumina) with paired-end 125 bp reads. Low-quality trimming and adapter removal for the Illumina reads were performed using Trimmomatic [62], resulting in a total of 1091 Mb of clean data.
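For reference, read-length summary statistics such as the subread N50 reported below can be computed with a few lines of Python; the read lengths here are invented.

```python
def n50(lengths):
    """Length L such that reads of length >= L cover half the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

lengths = [11000, 9000, 8000, 5000, 2000]         # invented subread lengths
print(n50(lengths), sum(lengths) / len(lengths))  # N50 and mean length
```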
For PacBio sequencing, the program pbdagcon (https://github.com/PacificBiosciences/pbdagcon) was used for self-correction. These reads were then filtered using the RS_Subreads protocol (minimum subread length = 1 kb, minimum polymerase read quality = 0.8), resulting in a total of 1394 Mb of usable data (total number of subreads = 155,828; mean subread length = 8897 bp; subread N50 = 11,669 bp).
Genome assembly and annotation of K. michiganensis BD177
Draft genomic unitigs, which are uncontested groups of fragments, were assembled using the Celera Assembler [30] from the high-quality, corrected circular consensus subread set. To improve the accuracy of the genome sequence, GATK (https://www.broadinstitute.org/gatk/) and the SOAP tool packages (SOAP2, SOAPsnp, SOAPindel) were used to make single-base corrections [31]. To trace the presence of any plasmids, the filtered Illumina reads were mapped using SOAP [31] to the bacterial plasmid database (last accessed 5 March 2018) [63]. Gene prediction was performed on the K. michiganensis BD177 genome assembly with glimmer3 [64] using Hidden Markov Models. tRNAs, rRNAs, and sRNAs were identified using tRNAscan-SE [65], RNAmmer [66] and the Rfam database [67]. The tandem repeats annotation was obtained using Tandem Repeats Finder [68], with minisatellite and microsatellite DNA selected based on the number and length of repeat units. The Genomic Island Suite of Tools (GIST) was used for genomic island analysis [69] with the IslandPath-DIMOB, SIGI-HMM, and IslandPicker methods. Prophage regions were predicted using the PHAge Search Tool (PHAST) web server [70], and CRISPRs were identified using CRISPRFinder [71]. Seven databases were used for general function annotation: KEGG (Kyoto Encyclopedia of Genes and Genomes) [72], COG (Clusters of Orthologous Groups) [73], NR (Non-Redundant Protein Database) [74], Swiss-Prot [75], GO (Gene Ontology) [76], TrEMBL [75], and EggNOG [37]. A whole-genome BLAST search (E-value below 1e−5, minimal alignment length percentage above 40%) was performed against these seven databases. Virulence factors and resistance genes were identified based on the core datasets in the VFDB (Virulence Factors of Pathogenic Bacteria) [77] and ARDB (Antibiotic Resistance Genes Database) [78] databases. Molecular and biological information on genes involved in pathogen-host interactions was predicted with PHI-base [79]. Carbohydrate-active enzymes were predicted with the Carbohydrate-Active enZYmes Database [80]. Type III secretion system effector proteins were detected by EffectiveT3 [81]. Default settings were used in all software unless otherwise noted.
Unique genes inference and analysis
Orthogroups of BD177 and 33 Klebsiella sp. (K. michiganensis and K. oxytoca) genome assemblies were inferred with OrthoFinder [36]. All protein sequences were compared using a DIAMOND [87] all-against-all search with an E-value cutoff of 1e−3. A core orthogroup is defined as an orthogroup present in 95% of the genomes. The single-copy core genes, pan gene families, and core genome families were extracted from the OrthoFinder output files. "Unique" genes are genes that are present in only one strain and were unassigned to a specific orthogroup. Annotation of the BD177 unique genes was performed by scanning against a hidden Markov model (HMM) database of eggNOG profile HMMs [37]. KEGG pathway information for the BD177 unique orthogroups was visualized in iPath 3.0 [88].
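As a sketch of the orthogroup bookkeeping described above, the gene-count table written by OrthoFinder (Orthogroups.GeneCount.tsv in most versions) can be filtered for core and strain-specific orthogroups. The file layout and the "BD177" column name are assumptions here, not details confirmed by the paper.

```python
import pandas as pd

# Orthogroup-by-genome gene counts from the OrthoFinder output directory
# (file name assumed; newer OrthoFinder versions may organize output differently).
counts = pd.read_csv("Orthogroups.GeneCount.tsv", sep="\t", index_col=0)
counts = counts.drop(columns=["Total"])  # keep one column per genome

presence = counts > 0                    # is the orthogroup present in each genome?
n_genomes = presence.shape[1]

# Core orthogroups: present in at least 95% of the 34 genomes.
core = presence[presence.sum(axis=1) >= 0.95 * n_genomes]

# Orthogroups found only in BD177 (column name is an assumption).
bd177_only = presence[presence["BD177"] & (presence.sum(axis=1) == 1)]

print(f"{len(core)} core orthogroups, {len(bd177_only)} BD177-specific orthogroups")
```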
6,469.4
2020-12-01T00:00:00.000
[ "Biology" ]
Bond strength repair of a bulk-fill composite using different adhesive systems and resin composites
This study evaluated the effect of different adhesive systems and resin composites on the microtensile bond strength of repairs using a bulk-fill composite. Ninety half-hourglass specimens of the composite Filtek Bulk Fill were prepared using a silicone matrix. Specimens were randomly distributed into 9 experimental groups (n=10) according to adhesive [Single Bond Universal (SBU), Scotchbond Multipurpose Adhesive (SBMP), and Single Bond 2 (SB2)] and resin composite (Filtek Bulk Fill, Aura Bulk Fill, and Filtek Z250). For the control group, hourglass specimens were used to measure the ultimate bond strength. Specimens were submitted to thermal cycling (5,000 cycles, 5 and 55°C, 30 s) to simulate restoration aging, and then the repair procedure was performed. After roughening the surface to be repaired with a diamond tip, the adhesive protocol was performed according to group, the specimen was placed in an hourglass-shaped mold, and the other half was filled with the repair composite. After 24 h, the bond strength of the specimens was obtained by microtensile testing using a universal testing machine at a speed of 0.5 mm/min. Data were statistically analyzed by two-way ANOVA, Tukey's and Dunnett's tests (α=0.05). SBU showed higher bond strength than SB2, while SBMP showed intermediate values. However, all experimental groups showed lower bond strength than the ultimate bond strength. In conclusion, bulk-fill composite repair using a universal or a conventional solvent-free adhesive improved adhesion independent of the composite tested.
Introduction
Adhesive Dentistry enables minimally invasive restorative procedures, restoring the shape, function, and esthetics of teeth and replacing dental structure lost to carious processes or trauma, using direct and indirect restorations. Thus, the clinical use of resin composite as a restorative material has increased considerably in recent years because of its excellent esthetic properties and simplified adhesion procedures, especially in direct restorations (Duran et al., 2015; Fornazari et al., 2017; Kiomarsi et al., 2017; De Medeiros et al., 2019). However, resin composites are subject to failures that may lead to clinical failure of adhesive restorations. Among the most common problems are fractures, margin staining, color change, deficiency in restoration anatomy, recurrent caries, and postoperative sensitivity (Duran et al., 2015; Kiomarsi et al., 2017). The difficulty of differentiating between restoration margins and cavity walls, as well as the need to remove previously conditioned enamel and dentin in order to make a new adhesive restoration (Shahdad & Kennedy, 1998), may result in larger and larger cavities each time a deficient restoration is replaced (Duran et al., 2015; Kiomarsi et al., 2017; Cuevas-Suárez et al., 2020). Thus, repairs of resin composite restorations have been increasingly used due to their favorable features: they require less clinical time to perform, offer a satisfactory cost-benefit ratio and acceptable esthetics, and represent a more conservative procedure in some situations (Cuevas-Suárez et al., 2020). Success in repair procedures depends on obtaining an adequate adhesive interface between the aged and new composite.
Several studies (Baur & Ilie, 2013; Lima et al., 2014; Baena et al., 2015; Ahmadizenouz et al., 2016; Lima et al., 2016; Fornazari et al., 2017) have shown that combining treatment of the surface to be repaired with the use of adhesive systems can increase the bond strength between resin composites. Furthermore, the repair of composite restorations can be performed regardless of the composite type and adhesive technique employed (Turner & Meiers, 1993). In an attempt to speed up the restorative process, bulk-fill composites with low polymerization shrinkage allow increments of 4-5 mm in thickness to be built and polymerized in one step, reducing the time that would be necessary with the incremental technique (Ayar et al., 2019; De Medeiros et al., 2019; Rocha et al., 2020). Clinically, large and deep cavities can be restored more easily and quickly with these materials (Ilie et al., 2014; Carvalho et al., 2020). In addition, universal or multimode adhesive systems, which can be used with any application technique and on any surface, some including the bifunctional silane molecule, could promote adequate adhesion between the aged restoration and the composite used in the repair procedure (Cura et al., 2016; De Medeiros et al., 2019). However, after a restoration is performed, there is no way to identify which resin composite it was made with, leading the clinician to attempt a simpler, faster, and more conservative intervention, such as repair, in cases where it is necessary. Therefore, this study evaluated the effect of different adhesive systems and resin composites on the bond strength of repairs using a bulk-fill composite. The null hypotheses were that there would be no difference in bond strength among the (I) adhesive systems and (II) resin composites used in the repair procedure.
Experimental design
The experimental design of this study was a two-factor randomized complete block arrangement. The factors considered were adhesive system at three levels (Single Bond Universal - SBU, Scotchbond Multipurpose Adhesive - SBMP, and Single Bond 2 - SB2) and resin composite at three levels (Filtek Z250, Aura Bulk Fill, and Filtek Bulk Fill).
Specimen preparation
For this study, 90 blocks of a bulk-fill composite (Filtek Bulk Fill - B1 shade; 3M ESPE, St. Paul, MN, USA) were prepared using a silicone matrix. The half-hourglass matrix was filled with the resin composite and then covered with a polyester strip and a glass slide. To compact the material and prevent bubble formation, the glass slide was gently pressed to remove excess material. The resin composite was light cured for 20 s using a polywave LED (light emitting diode) Bluephase N unit (Ivoclar Vivadent, Schaan, Liechtenstein) at an irradiance of 1,200 mW/cm², monitored by a radiometer (model L.E.D.; Demetron/Kerr, Danbury, CT, USA). After 24 h, the specimens were submitted to thermal cycling (MSCT-3; Elquip, São Carlos, SP, Brazil) in distilled water (5,000 cycles, 5 and 55°C, 30 s bath at each temperature) to simulate restoration aging. Then, specimens were randomly distributed into nine experimental groups (n = 10) according to the adhesive system and resin composite used for repair. The control group was obtained by making the specimen in hourglass format for measurement of the ultimate bond strength of the restorative material.
Prior to repair, surface treatment was performed using a #3098 diamond tip (KG Sorensen, Barueri, SP, Brazil) at constant high speed under water cooling, applied only once and in a single direction over the whole interface area (Ahmadizenouz et al., 2016). The surface was then washed and dried for 15 s, and the adhesive system was applied according to the following protocols: (A) SBMP (3M ESPE, St. Paul, MN, USA) - conditioning with 35% phosphoric acid (Ultra-Etch; Ultradent Products Inc., South Jordan, UT, USA) for 15 s, washing, and drying, after which a layer of adhesive was applied using a microapplicator and light cured for 10 s; (B) SB2 (3M ESPE) - acid conditioning for 15 s, washing, and drying, followed by application of two layers of adhesive using a disposable micro-applicator brush, drying for 5 s, and light curing for 10 s; and (C) SBU (3M ESPE) - surface conditioning using 35% phosphoric acid for 15 s, washing, and drying, followed by application of two layers of adhesive, drying for 5 s, and light curing for 10 s. A similar matrix was used for the repair procedure, but hourglass-shaped. After roughening of the surface to be repaired and application of the adhesive protocol, the specimen was placed in the matrix and the other half was filled with the composite used to perform the repair: Filtek Bulk Fill (A2 shade - 3M ESPE), Aura Bulk Fill (Universal shade - SDI Limited, Bayswater, VIC, Australia), or Filtek Z250 (A2 shade - 3M ESPE). The repair composite was used in a different shade from the restoration in order to facilitate the evaluation of failure mode.
Bond strength
Firstly, the adhesive interface area was measured using a digital caliper. After 24 h, the bond strength of the specimens was obtained by microtensile testing in a universal testing machine at a speed of 0.5 mm/min.
Failure mode
After the microtensile test, the fractured interfaces of each specimen were evaluated under a stereomicroscope (45x, Meiji 2000; Meiji Techno, Saitama, Japan) to determine the failure mode. The fracture pattern was classified according to the predominant failure mode: (a) adhesive failure at the restoration/repair interface, (b) cohesive failure in the restoration composite, (c) cohesive failure in the repair composite, or (d) mixed failure (combination of failure modes) (Fornazari et al., 2017).
Statistical analysis
The sample size was determined by preliminary tests compared with previously published values (Lima et al., 2016); this number of specimens yielded adequate power at α = 0.05, reaching a power of 1 for detecting statistically significant differences. Data normality and homogeneity were verified by the Kolmogorov-Smirnov and Levene tests, respectively. Data were analyzed by two-way analysis of variance (ANOVA) and Tukey's post hoc test (α = 0.05). For comparison of the experimental groups with the control group, Dunnett's test was used at a significance level of 5% (SPSS Version 20, IBM Corp., Armonk, NY, USA).
Results
A significant difference in bond strength was observed only for the adhesive system factor (p = 0.03). There was no significant difference for resin composite or for the factor interaction (p > 0.05). The distribution of failure modes varied according to the adhesive and composite used in the repair procedure (Figure 1). The predominant failure mode for all experimental groups was the adhesive type, especially when the repair was performed with SB2.
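For reference, the statistical pipeline described above can be sketched in Python rather than SPSS. The table layout and column names are assumptions, and scipy.stats.dunnett requires SciPy 1.11 or later; this is an illustration of the described analysis, not the study's actual script.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from scipy.stats import dunnett

# Hypothetical long-format table: one row per specimen with its microtensile
# bond strength (MPa = failure load in N / bonded area in mm^2).
df = pd.read_csv("bond_strength.csv")  # assumed columns: adhesive, composite, mpa

# Two-way ANOVA with the adhesive x composite interaction, as described.
model = ols("mpa ~ C(adhesive) * C(composite)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post hoc test on the adhesive factor (alpha = 0.05).
print(pairwise_tukeyhsd(df["mpa"], df["adhesive"], alpha=0.05))

# Dunnett's test: each experimental group against the control group
# (ultimate bond strength of the hourglass specimens); "control" label assumed.
control = df.loc[df["adhesive"] == "control", "mpa"].to_numpy()
groups = [g["mpa"].to_numpy()
          for _, g in df[df["adhesive"] != "control"].groupby("adhesive")]
print(dunnett(*groups, control=control))
```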
Discussion
Success in repair procedures depends on obtaining an adequate adhesive interface between the aged and new resin composite. Several studies (Baur & Ilie, 2013; Lima et al., 2014; Baena et al., 2015; Ahmadizenouz et al., 2016; Lima et al., 2016; Fornazari et al., 2017) have shown that combining surface treatment with the use of adhesive systems increases the bond strength between composites. In this study, restorations made with a bulk-fill resin composite were simulated, and the bond strength of repairs performed with different adhesive systems and resin composites was evaluated. An important step for the success of the repair procedure is surface roughening, which increases the surface roughness and consequently increases the bond strength through mechanical adhesion (Brunton et al., 2017). Several methods have been evaluated for this purpose, such as aluminum oxide blasting, diamond tip roughening, laser pretreatment, and hydrofluoric or phosphoric acid application, among others (Duran et al., 2015; Brunton et al., 2017; De Jesus et al., 2017; Fornazari et al., 2017; Ghavam et al., 2018; Ayar et al., 2019). Creating microporosities with a high-speed diamond tip promotes micro-retention of resin-based materials and increases adhesion effectively in a way that is practical and accessible for clinicians (De Jesus et al., 2017; Ghavam et al., 2018); this method was therefore selected in the present study for roughening the aged composite. After surface treatment to increase roughness, a silane coupling agent has usually been used as a bonding agent to improve the adhesion between the filler particles and the organic matrix of the aged and new resin composite, because it is a bifunctional molecule capable of binding to the carbon double bonds of the organic matrix through its non-hydrolyzable functional group and to the hydroxyl groups of the inorganic phase through its hydrolyzable alkoxy groups (Fornazari et al., 2017). However, the benefit of using silane is controversial: some studies (Ferracane, 2011; Tantbirojn et al., 2015; Fornazari et al., 2017; Cuevas-Suárez et al., 2020) reported an increase in the bond strength of composite repairs, while others (Cho et al., 2013; Lima et al., 2014) showed no difference with or without this bifunctional coupling agent. Accordingly, isolated silane application was not tested; instead, a universal adhesive containing this bifunctional molecule, which can be used with any adhesive technique and on any surface (Cura et al., 2016), was assessed, as it could promote adequate adhesion between the restoration and the composite used to perform the repair. As reported by previous studies (Fornazari et al., 2017; Çakir et al., 2018), the silane-containing universal adhesive alone was as effective as any combination of silane and adhesive, particularly when applied to abraded surfaces, while requiring fewer steps during the adhesive protocol, corroborating our study, in which the silane-containing adhesive system showed higher bond strength. The conventional solvent-free adhesive used in this study also resulted in adequate bond strength values, promoting adhesion similar to that of the universal adhesive evaluated. The absence of solvents in the adhesive probably explains this finding, since solvent-free adhesives have superior mechanical properties compared with solvated adhesives due to the formation of a polymer with higher crosslink density (Gaglianone et al., 2012). In addition, the use of more hydrophobic intermediate resin-based materials promotes higher bonding stability (Ferracane, 2011).
The conventional solvated adhesive showed lower bond strength than the universal adhesive, possibly because of a reduction in its mechanical properties due to the presence of solvents, which results in a polymer with more linear bonds (Gaglianone et al., 2012). The adhesive systems thus showed different behavior with respect to the bond strength of repairs to the aged bulk-fill composite, and the first null hypothesis was not accepted. Bulk-fill composites were modified in their composition, which resulted in increased translucency allowing greater penetration of light into the deeper layers of the restoration, more reactive photoinitiators, and the incorporation of different filler particles, such as pre-polymers and glass fibers (Benetti et al., 2015; Fronza et al., 2015). However, their monomeric composition is based on dimethacrylates, as in conventional resin composites, which probably explains the lack of difference in bond strength among the composites tested. Thus, the second null hypothesis was accepted. In general, the predominant failure mode for all experimental groups was the adhesive type (60%), especially for the group in which the conventional solvated adhesive system was used, which presented the lowest bond strength values and lower mechanical properties (Gaglianone et al., 2012). Cohesive failure in the restoration composite was the least frequent type (4.45%), and this fracture pattern was not found when the repair was performed using the solvent-containing adhesive. Failure in the repair composite and mixed failures were observed in 21.11% and 14.44% of cases, respectively. Therefore, the repair of composite restorations seems to be a viable alternative to replacement; it is considered a minimally invasive procedure, since it preserves dental structure, in addition to requiring less time and lower cost to perform (Brunton et al., 2017), and has shown success after 7 years of clinical use (Demarco et al., 2012). However, all experimental groups presented lower bond strength than the ultimate bond strength of the composite from which the restoration was made (control group). Thus, other surface treatments should be evaluated to improve the effectiveness of the repair procedure for adhesive dental restorations.
Conclusion
According to the results, it can be concluded that the universal and conventional solvent-free adhesive systems are preferable for the repair procedure because they promoted higher bond strength, and that the bulk-fill composite can be repaired adequately using bulk-fill or conventional resin composites.
3,350.2
2021-05-08T00:00:00.000
[ "Materials Science", "Medicine" ]
E-Learning Environment Based Intelligent Profiling System for Enhancing User Adaptation
Online learning systems have expanded significantly over the last couple of years. Massive Open Online Courses (MOOCs) have become a major trend on the internet. During the COVID-19 pandemic, learner enrolment increased on various MOOC platforms like Coursera, Udemy, Udacity, FutureLearn, NPTEL, Khan Academy, EdX, SWAYAM, etc. These platforms offer multiple courses, and it is difficult for online learners to choose a suitable course as per their requirements. In order to improve this e-learning environment and to reduce the drop-out ratio, online learners need a system in which the courses offered by all the platforms are compared and recommended according to the needs of the learner. So, there is a need to create a learner's profile in order to analyze these many platforms and fulfill the educational needs of the learners. To develop the profile of a learner or user, three input parameters are considered: personal details, educational details, and knowledge level. Along with these parameters, learners can also create their user profiles by uploading their CVs or via LinkedIn. The major innovation in this paper is the implementation of a user interface-based intelligent profiling system for enhancing user adaptation, in which feedback is received from a user and courses are recommended according to the user's/learner's preferences.
Introduction
To improve the learning experience of a novice learner on the internet, it is essential to propose a recommendation system that can recommend relevant courses from various MOOC platforms as per the preference of the learner [1]. In order to enhance user/learner adaptation, an intelligent profiling system for e-learning environments is required to deeply understand the needs of novice learners [2]. The objective of this paper is to build learners' profiles that increase the level of understanding of learners' needs [3]. Different kinds of filters are available for websites that divide users into different categories. For example, if a user likes articles on Coursera, EdX, SWAYAM, SkillShare, Lynda, Research Scholar, or any other online platform, it can be predicted that the user is interested in research, and this inference can be communicated to the user [4]. In the present scenario, learning systems usually do not act according to learners' profiles and preferences. When a learner searches for any course, they get a huge list of available platforms and courses that does not take the learner's learning requirements into consideration [5]. Therefore, it becomes a time-consuming process to decide which platform or course is most suitable for a learner. Hence, when they choose any random course, the probability of dropout increases, which creates a negative impact on the instructor [6]. Moreover, the essential features of online platforms include collaboration with various learning tools, brand integration, online course catalogs, responsive design features, and a natural user interface. Machine learning techniques and recommender systems are used to build personalized information filters for online platforms, and these filters are also on trend. The customization of any platform helps the system act as a smart agent for users [7]. There is a growing need to provide people with the opportunity to gain new knowledge by combining the internet with education. This pandemic has forced schools, colleges, universities, business houses, and companies to do
work remotely, which, in turn, demands e-learning platforms [8]. This has led to an increase in the availability of e-learning platforms and in the nature of the content on these platforms. Researchers are actively doing research in the field of MOOCs to improve the learning experience by analyzing the needs and learning behaviors of learners. Machine learning is a part of artificial intelligence and an approach that provides the ability to learn and improve from experience without being explicitly programmed. In Figure 1, the process of machine learning is elaborated [9]. A large dataset is required to train the machine, and based on the dataset, the model is trained and gives recommendations as an output. Providing relevant search options with fast delivery of search results is a major challenge, and there is a lack of advanced filters in the current scenario. The following are some other research gaps that were analyzed during this research:
Understanding Learner Behavior
In the present scenario, when a learner searches for any course, they get a huge list of available platforms and courses without taking into consideration the learning needs and competence of the user [10]. The list of available platforms is shown without knowing the requirements of the learner. Therefore, it becomes a time-consuming and random process to decide which platform/course is suitable for that individual learner. Hence, when learners choose any random course, the probability of dropout is increased [11].
Smart Recommendation System
Currently, recommendation systems are used on many platforms, where recommender systems recommend courses for online learners. But these recommendation systems work only for their particular platforms [12]. So, there is still a need for customization through which learners can opt for courses based on their needs, income, and levels (beginner, intermediate, or high level). With the help of a smart recommendation system, learners can also identify which topics are useful in their job or industry [13].
Usability
The measurement of the usability of platforms is difficult because it varies with the learner's experience, ability in dealing with technology, and understanding [14]. Learning behaviors are learned actions that enable students to access learning and interact with others productively in the community. They complement the curriculum content taught in the elementary grades and are a natural part of the process of learning about oneself while interacting with others [15][16][17]. Learner behavior, motivation, and engagement patterns are important factors in understanding the success of MOOCs. To understand this, researchers have been using many qualitative, quantitative, or mixed methods. MOOC user behavior is generally studied using data collected from interactions within the learning system or via outside social media platforms [18]. To analyze learners' behavior, user profiling plays an important role. User profiling is based on two approaches, namely knowledge-based or behavior-based [19]. On the one hand, questionnaires and interviews are needed for the knowledge-based approach; on the other hand, machine learning techniques are used in the behavior-based approach. Machine learning techniques are used to find patterns in user behavior [20]. Recommendation systems commonly use a behavior-based approach to determine the feedback of users. After developing a user profile, the next step is to move toward the recommendation system [21]. Recommender systems are software tools that provide a customized environment for users. The recommender system has brought revolutionary changes to e-commerce websites [22]. These websites need one click from the user to know their favorites. After that, the recommender system starts filtering similar items
and starts recommending them to learners [23]. Nowadays, recommender systems are used by popular sites like Amazon, Flipkart, YouTube, LinkedIn, Spotify, and many more. Sometimes, users find it difficult to search for appropriate choices because there is so much content available online [24]. Choices are good, but more choices do not necessarily mean better results, and this is where a recommendation system helps a lot [25]. The focus of this paper is to create learners' profiles based on learners' preferences to understand their requirements related to online e-learning platforms [26]. The major contributions of this paper are as follows:
• To identify the research gaps that learners face while opting for MOOC courses.
• To propose an intelligent profiling system for e-learning environments that deeply understands the needs of the learners, in order to enhance user adaptation to online courses.
• To develop learners' profiles and datasets extracted from various sources such as LinkedIn, Indeed, Google Forms, etc.
• To propose a recommender system that compares and recommends courses according to learner preferences.
• To design a user interface that helps to scrutinize learning behavior.
The rest of the paper is organized as follows. A review of recent related studies is carried out in Section 2. Section 3 elaborates on the proposed methodology, in which the parameters and survey results are presented. Section 4 highlights the experimental results, which are followed by the conclusion in Section 5.
Related Work
E-learning is a trending topic nowadays, with many researchers actively working in this area. The main benefit of MOOCs is that learners can learn at their own pace, on their own schedule, and in any location in the world. Nowadays, the challenges faced by MOOC platforms are to improve the quality of content, increase the number of enrolments, and keep learners engaged with the right course as per their preferences. Chen et al. [27] presented a personalized learning path recommender system for proposing paths that meet the preferences of learners. All the preferences were gathered through a LINE Bot. Further, an LSTM model was used to analyze video preferences, clusters of students, and learning paths. Chuang et al. [28] proposed a system that recommends personalized exercises for students by analyzing their behavior, knowledge, and course level. Farnadi et al. [29] proposed a statistical relational learning framework using hinge-loss Markov random fields to compile the user profile. Boussakssou et al. [30] showed that an e-learning system can generate adaptive paths from the learner's profile; indeed, the authors proposed a dynamically composed approach based on the behavior of learners, a kind of learning known as reinforcement learning. Kolekar and Pai [31] presented an approach to identify learning styles for adaptation as per the Felder-Silverman Learning Style Model (FSLSM). The data is used to cluster the learners according to the learning categories of FSLSM. Customization is provided on the portal by generating an adaptive user interface for each learner based on their FSLSM learning style. Pereira et al.
[32] introduced a framework to fetch users' profiles and educational details from Facebook and some educational resources. Information extraction techniques and semantic web technologies were used for the extraction, improvement, and description of users' profiles and interests. The proposed approach was based on learning repositories, linked data, and video repositories. Hagedoorn and Spanakis [33] concluded that online courses are impressive but that dropout, which depends on learners' profiles, poses a crucial challenge for MOOC platforms. To overcome this problem, the behavior of dropout learners was analyzed by extracting features of their behavior. In that work, three classifiers were compared, namely logistic regression, random forest, and AdaBoost; their accuracies were comparable, with logistic regression performing better than the other two. Liang et al. [34] analyzed the behavior of online learners, built student profiles, and proposed countermeasures. Their study uses students' profiles to guide e-learning and improve learning outcomes, with big data processing technology used for building the learners' profiles. This kind of profile helps learners understand their learning situations and problems, and also motivates them to improve the completion rate of online courses. Chrysoulas and Fasli [35] proposed two main sources of personalization information, namely learning behavior and personal learning style, based on an adaptive learning approach. Ali et al. [36] contributed to the recommendation of pedagogical resources within a learning ecosystem. Clemente [37] used a biased matrix factorization algorithm to improve the prediction of online latent Dirichlet allocation for building user and item profiles; in that paper, Yelp data was used, which is limited to the restaurant category. The authors analyzed and compared the applicability of different user modeling strategies in the context of MOOC recommendations, with research based on parameters such as degree below bachelor's, bachelor's degree, master's degree, and Ph.D. degree. A bootstrapped paired t-test was used to test the statistical significance of the results, which show that, in comparison to job- and education-based profiles, skill-based profiles performed best. Li and Kim [38] applied a clustering technique to build a framework for semantic content in user profiles and also suggested methods to construct user profiles from rating information and attributes to capture user preferences. In another paper, Bradley et al. [39] described and evaluated a two-stage personalized information retrieval system that combines a server-side similarity-based retrieval component with a client-side case-based personalization component. Although recommendation systems recommend courses on many platforms, the issue is that these systems only suggest courses for their particular platform. So, after studying the literature and the research gap, a platform is built with a system that recommends courses after analyzing the user's needs, whether the course is from Coursera, Udemy, or EdX.
Proposed Methodology
The proposed research work aims to create learners' profiles and build a recommendation model that compares different courses on online platforms and offers courses according to learners' preferences. For developing a learner's profile, three parameters have been considered, including personal details, educational details, and knowledge level, as depicted in Figure 2. In this proposed model, learners can also register with their existing LinkedIn profiles so that they can save time. Personal details are further divided into subcategories such as name, date of birth, email address, location, employment status (if any), occupation (student or employee), experience (if any), and earnings (if any), as shown in Figure 3.
The second parameter is the "education details" of the learner, which is further divided into sub-categories such as "Highest Qualification", "Name of School/University", "Board", and "Grade/Percentage", as shown in Figure 4. The knowledge level is further divided into subcategories such as course of interest, level of skill set (beginner, intermediate, and advanced), preferred language, and interest in theoretical or practical parts, as shown in Figure 5. The methodology for creating user profiles is depicted using a flowchart in Figure 6. An intelligent profiling system can be produced by creating a user profile.
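As a rough illustration of the profile schema described above, the three parameters can be represented as a simple data structure. The field names below are assumptions derived from the listed sub-categories; this is a sketch, not the paper's actual implementation (which uses PHP and MySQL).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalDetails:
    name: str
    date_of_birth: str
    email: str
    location: str
    employment_status: Optional[str] = None   # if any
    occupation: Optional[str] = None          # student or employee
    experience_years: Optional[float] = None  # if any
    earnings: Optional[float] = None          # if any

@dataclass
class EducationDetails:
    highest_qualification: str
    institution: str          # name of school/university
    board: str
    grade_or_percentage: str

@dataclass
class KnowledgeLevel:
    course_of_interest: str
    skill_level: str          # beginner, intermediate, or advanced
    preferred_language: str
    prefers_practical: bool   # theoretical vs. practical interest

@dataclass
class LearnerProfile:
    personal: PersonalDetails
    education: EducationDetails
    knowledge: KnowledgeLevel
```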
print ("Please enter the valid credentials"); When a user logs in, there are three ways in which a user's profile can be created via registration, uploading their Curriculum Vitae (CV), and access through LinkedIn.To develop a user profile, the first step is to fill out the registration form.The learner's details will then be entered into the database.This interface was developed in PHP and MySQL and is used as a database.The next step is to click on the Sign In form, and now the user can log in using the username and password that were earlier assigned in the registration step.After filling in the credentials, the user is required to click on the login button.The steps are mentioned in Algorithm 1. Procedure Login (EID, pwd) 2. Welcome.setVisible(true);//Create a welcome label and set it to the new page 7. print ("Please enter the valid credentials"); 12. End else 13.End Procedure Login To register as a new user, they need to click on the "New User" or "Register Here" table.The steps of registration are also mentioned in the Algorithm: Registration.The registration page, with the help of three parameters that were earlier discussed in the research methodology, is displayed on the screen, and users fill in their name, EID, pwd, DOB, Exp, status, and loc, then click on the submit button.The steps are mentioned in Algorithm 2. Procedure personalDetails (name, EID, pwd, DOB, Exp, Occ, Emp_status, Loc) 2. Enter When a user logs in through authorized credentials then the learners' details are displayed on the screen, as shown in Figure 7.The second step is to create a profile by uploading a CV.In this step, the CV is uploaded through the "Affinda API" option.In this step, PDF files are uploaded by a user on a website and, with the help of the Affinda API, parsing will be done and, further, the information will be saved into the database, as shown in Figure 8.The data will be received in text format and then it will be easy to manipulate that data using Regular Expressions and save them into the database.The steps are explained in Algorithm 3. Procedure upload_cv() 2. Browse file from local device 3. OCR will convert CV into text 5. Details saved in DB 6. End Procedure upload_CV The third step is to login through a LinkedIn account.With the help of an authorized email ID, a user will jump from a LinkedIn account to the "welcome.php"page, where The second step is to create a profile by uploading a CV.In this step, the CV is uploaded through the "Affinda API" option.In this step, PDF files are uploaded by a user on a website and, with the help of the Affinda API, parsing will be done and, further, the information will be saved into the database, as shown in Figure 8.The data will be received in text format and then it will be easy to manipulate that data using Regular Expressions and save them into the database.The steps are explained in Algorithm 3. Procedure upload_cv() 2. Browse file from local device 3. OCR will convert CV into text 5. Details saved in DB 6. 
The third way is to log in through a LinkedIn account. With the help of an authorized email ID, the user jumps from the LinkedIn account to the "welcome.php" page, where the details of the user are displayed. For this step, the LinkedIn API service is used. With the help of the LinkedIn developer portal, a client id and secret key are generated; using this client id and secret key, the data is displayed via the LinkedIn account. These steps are explained in Algorithm 4 (generate a request by clicking on the LinkedIn icon, request access with a token from the web application, and respond with the requested data). Once the user profile is created, it is saved in the database, which is used as input by the recommender system to make recommendations for online courses. Every e-learning platform has a recommendation system, and these platforms create profiles of the learners as well. However, the challenge is to understand the learner's needs, since every learner is different from the others. Every day, online platforms offer so many courses that understanding which course is relevant for a novice learner is a difficult task. When novice learners are not able to find a suitable course, they switch platforms, but filling out registration details every time is cumbersome, and by then the learner may also have lost interest in that particular course. The recommender system facilitates both the instructors and the learners. A user profile is created so that the user's correct information can be fetched and courses can be offered as per the learner's needs. If a learner is enrolled in a course matching their preference, the chances of dropout are automatically reduced, which also helps the instructors. The novelty of this work is to build a platform where all the online courses are available in a single place; this single platform offers only e-learning courses, not the platforms themselves. The motive of this research is to create competition between the platforms and maintain the quality of the courses.
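A minimal sketch of the LinkedIn sign-in flow (Algorithm 4) described above is given below. The endpoint URLs, scopes, and parameter names follow LinkedIn's OAuth 2.0 authorization-code flow as commonly documented, but should be treated as assumptions to verify against the current LinkedIn API documentation; the client id, secret, and redirect URI are placeholders.

```python
import requests

# Assumed LinkedIn OAuth 2.0 endpoints; verify against current docs.
AUTH_URL = "https://www.linkedin.com/oauth/v2/authorization"
TOKEN_URL = "https://www.linkedin.com/oauth/v2/accessToken"
CLIENT_ID = "your-client-id"          # generated on the LinkedIn developer portal
CLIENT_SECRET = "your-client-secret"  # hypothetical placeholder
REDIRECT_URI = "https://example.com/welcome.php"

def build_login_url(state: str) -> str:
    # Step 1: the user clicks the LinkedIn icon and is redirected here.
    return (f"{AUTH_URL}?response_type=code&client_id={CLIENT_ID}"
            f"&redirect_uri={REDIRECT_URI}&state={state}"
            f"&scope=r_liteprofile%20r_emailaddress")

def exchange_code_for_token(code: str) -> str:
    # Step 2: the web application exchanges the returned code for a token,
    # which is then used to request the user's profile data.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```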
Results & Discussion
After analyzing the literature on MOOC courses, it was found that, among all the MOOC platforms, learners mostly enrolled in Coursera courses. To study the dropout rate of Coursera courses, a dataset was fetched from Kaggle.com that contains information on 891 courses. The dataset contains the total number of students enrolled in each course. The course_difficulty field contains information regarding the level of the course: beginner, intermediate, or advanced. The certification_type field contains information regarding the certification, including whether the course counts as a professional degree. The course_rating field contains the rating of the course, and the course_dropout field contains the number of learners who left the course after enrolment. As shown in Figure 9, of the 891 Coursera courses, 487 are offered for beginners, 385 for intermediate learners, and 19 for advanced learners. As shown in Figure 10, out of the 891 courses, 297 hold a specialization, 12 offer professional certificates, and 582 help to enhance skills. In this paper, the authors applied regression analysis to the Coursera dropout dataset fetched from Kaggle.com to predict student dropouts from the Coursera platform. The regression analysis was performed using the IBM Watson Studio service. The relationship map of coursera_data is depicted in Figure 11, which highlights the pipelines, top algorithms used, and feature transformers. The training set consists of 90% of the data, and 3-fold cross-validation was performed for computation.
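The counts behind Figures 9 and 10 can be reproduced with a few lines of pandas. The file name and column names below are assumptions based on the field descriptions above; the exact Kaggle schema may differ.

```python
import pandas as pd

# Load the Coursera dataset described above (assumed file name).
df = pd.read_csv("coursera_data.csv")

# Figure 9: number of courses per difficulty level
# (e.g., 487 beginner, 385 intermediate, 19 advanced of the 891 courses).
print(df["course_difficulty"].value_counts())

# Figure 10: number of courses per certification type
# (e.g., 297 specializations, 12 professional certificates, 582 courses).
print(df["certification_type"].value_counts())
```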
In the progress map shown in Figure 12, eight pipelines are generated from the algorithms. Each pipeline contains the unique steps that generate each model candidate. The top algorithms, or estimators, are selected in order to train and test the pipelines; the top two algorithms are selected and trained on the chosen dataset. Feature transformers are applied to the features (columns) of the dataset during the feature engineering phase, and these transformers are applied to two pipelines per algorithm. The progress map covers the various steps that help in building a model. Firstly, the AI tool reads the dataset and divides it into two parts: training and testing. Then a preprocessing step is done in which various filters are applied to clean the dataset. Based on the dataset, IBM Watson Studio suggests a model that helps train the machine. Furthermore, eight pipelines are created for this model, and the dropout_students column is used as the prediction column. In comparison to the other algorithms, the XGB Regressor gives better results. The XGB Regressor used four features: course_student_enrolled, course_difficulty, certification_type, and course_rating.
• XGB Regressor stands for Extreme Gradient Boosting Regressor. It is an open-source library which renders an efficient and effective implementation of the gradient boosting algorithm.
• Hyperparameter optimization covers all the parameters that can be arbitrarily set by the user before starting training.
• Feature Engineering is the art of articulating the useful features (characteristics, properties, and attributes) from datasets and targets to be learned by the machine.
• Ridge regression is a way to create a frugal model when the number of predictor variables in a set exceeds the number of observations, or when a dataset shows correlations between predictor variables.
The model is evaluated using the eight pipelines, with metrics explained as follows:
Root Mean Squared Error (RMSE): This is the standard deviation of the prediction error on the dataset. The errors are squared before they are averaged, as in Equation (1), so RMSE is useful when large errors are present and affect the performance of the model; this metric avoids taking absolute values while penalizing large deviations:

RMSE = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(A_t - \hat{A}_t\right)^2}    (1)

R Squared (R2): This metric indicates how well the model fits a given dataset. R Squared values lie between 0 and 1, where 0 indicates that the model does not fit and 1 indicates that the model fits the dataset perfectly. R Squared is also known as the coefficient of determination.
Mean Squared Error (MSE): This metric is commonly used in machine learning and is useful when the dataset contains outliers. The values calculated by MSE are never negative. It is formally defined in Equation (2):

MSE = \frac{1}{T}\sum_{t=1}^{T}\left(A_t - \hat{A}_t\right)^2    (2)

where T is the number of samples used for testing and A_t and Â_t are the true and predicted values.
Mean Squared Log Error (MSLE): The MSLE metric measures the percentual difference between true and predicted values, as in Equation (3):

MSLE = \frac{1}{T}\sum_{t=1}^{T}\left(\log(1 + A_t) - \log(1 + \hat{A}_t)\right)^2    (3)

where Â denotes the predicted value.
Mean Absolute Error (MAE): The MAE is similar to the MSE; its result is never negative, as absolute values of the error are used. In comparison with the MSE, the MAE is not sensitive to outliers. This metric is used when the performance on continuous variable data is measured. The MAE has a slightly different definition from the MSE but provides almost exactly the opposite properties, as in Equation (4):

MAE = \frac{1}{T}\sum_{t=1}^{T}\left|A_t - \hat{A}_t\right|    (4)
Median Absolute Error (MedAE) MedAE is a non-negative floating-point metric whose best value is 0.0. Unlike classification models, regression models do not allow a simple true-or-false grading, so it is always challenging to choose the right metrics and to determine a realistic range of prediction errors; knowledge of the different error metrics is therefore a must.
After the analysis of all eight pipelines, model evaluation is done using the following measures: Root Mean Squared Error (RMSE), R Squared (R2), Explained Variance (EV), Mean Squared Error (MSE), Mean Squared Log Error (MSLE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and Root Mean Squared Log Error (RMSLE), as shown in Figure 13. A code sketch computing these measures follows at the end of this section.
In a nutshell, it is observed that although every MOOC platform has its own recommender system, these systems work for their particular platforms only. So, in order to provide a unified e-learning environment, online learners require a system in which all the courses are compared and recommended according to the learners' preferences. In this paper, a recommender system is proposed that compares all the courses in one interface after analyzing the user profile of a learner, so that relevant courses can be recommended to the learner according to their preferences.
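To make the eight evaluation measures concrete, here is a minimal sketch computing all of them with scikit-learn on placeholder arrays. RMSE and RMSLE are taken as the square roots of MSE and MSLE, matching the standard definitions assumed in Equations (1)-(4) above; the numeric values are placeholders, not results from the paper.

import numpy as np
from sklearn.metrics import (mean_squared_error, r2_score,
                             explained_variance_score,
                             mean_squared_log_error,
                             mean_absolute_error, median_absolute_error)

# Placeholder values standing in for true and predicted dropout counts.
y_true = np.array([10.0, 12.0, 8.0, 15.0])
y_pred = np.array([11.0, 11.5, 9.0, 14.0])

mse = mean_squared_error(y_true, y_pred)
msle = mean_squared_log_error(y_true, y_pred)

scores = {
    "RMSE": np.sqrt(mse),                                # Equation (1)
    "R2": r2_score(y_true, y_pred),
    "EV": explained_variance_score(y_true, y_pred),
    "MSE": mse,                                          # Equation (2)
    "MSLE": msle,                                        # Equation (3)
    "MAE": mean_absolute_error(y_true, y_pred),          # Equation (4)
    "MedAE": median_absolute_error(y_true, y_pred),
    "RMSLE": np.sqrt(msle),
}
for name, value in scores.items():
    print(f"{name}: {value:.4f}")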
Conclusions Today's world is facing the COVID-19 pandemic, and this pandemic has forced schools and universities to work remotely. Nowadays, schools and universities are opting for e-learning platforms to survive, but the problem here is that every user is different, and their learning patterns are also different. So, to improve the results and reduce the drop-out ratio, customization is a must. One of the solutions is to create a user profile for each learner. In this paper, the user profile is created through PHP. For developing a user profile, three parameters are used: personal details, education details, and knowledge level. The data is collected through three different modes: by registration on the website, by uploading a CV, and through a Google account. The main goal of this user profile is to recommend courses to learners by understanding their needs. The courses recommended to the user are not based on one platform; the recommender system draws on courses from Udemy, Udacity, Coursera, SWAYAM, and many more. Thus, a user profile is created that spans various platforms and recommends courses from all of them according to the learner's preferences. So, in order to improve the learning experience of novice online learners, a recommendation system is needed. In this paper, a recommender system is proposed that compares all the courses on one interface after analyzing the user profile of a learner, so that relevant courses can be recommended to a learner according to their preferences. In the future, results will be generated with the help of implementing the hybrid recommender system, which is a combination of content-based and collaborative filtering techniques. With the help of content-based and collaborative-based recommendation filters, a top-popular course list will be generated.
Figure captions: Figure 1. Process of Machine Learning. Figure 2. Parameters used for developing the Learners' Profile. Figure 3. Sub-categories of personal details parameter. Figure 4. Sub-categories of educational details parameter. Figure 5. Sub-categories of knowledge level parameters. Figure 6. Flowchart of developing the learners' profile. Figure 7. User profile of the registered user. Figure 9. Number of students enrolled in each level (19 courses are for advanced learners). Figure 10. Number of certification courses in each level (out of 891 courses, 297 hold specialization, 12 offer professional certificates, and 582 help to enhance skills).
9,660
2022-10-18T00:00:00.000
[ "Computer Science" ]
Polarization memory effect in the photoluminescence of nc-Si−SiOx light-emitting structures The polarization memory (PM) effect in the photoluminescence (PL) of porous nc-Si−SiOx light-emitting structures, containing nanoparticles of silicon (nc-Si) in the oxide matrix and passivated in a solution of hydrofluoric acid (HF), has been investigated. The studied nc-Si−SiOx structures were produced by evaporation of Si monoxide (SiO) powder in vacuum and oblique deposition on a Si wafer, and then the deposited silicon oxide (SiOx) films were annealed in vacuum at 975 °C to grow nc-Si. It was found that the PM effect in the PL is observed only after passivation of the nanostructures: during etching in HF solution, the initially symmetric nc-Si become asymmetric and elongated. It was also found that in the investigated nanostructures there is a well-defined orientational dependence of the PL polarization degree (ρ) in the sample plane, which correlates with the orientation of the SiOx nanocolumns forming the structure of the porous layer. The increase of the ρ values in the long-wavelength spectral range with the time of HF treatment can be associated with an increase in the anisotropy of large Si nanoparticles. The PM effect for this spectral interval can be described by the dielectric model. In the short-wavelength spectral range, the dependence of the ρ values agrees qualitatively with the quantum confinement effect. Background Thin-film structures containing nanoparticles of silicon (nc-Si) embedded in the silicon oxide (SiO x ) matrix attract the attention of many researchers because of their promising applications in advanced electronic and optoelectronic devices [1][2][3][4]. The intensity and spectral range of the photoluminescence (PL) of these nanocomposites are determined mainly by the size and structure (amorphous or crystalline) of the silicon nanoparticles, which in turn depend on the stoichiometry index of the oxide matrix and the temperature of the forming annealing. The main PL characteristics (spectra, kinetics) and the photoluminescence mechanism of nc-Si−SiO x structures have been studied sufficiently, while the polarization properties of the PL, unlike for porous silicon, have been sparsely investigated. In porous silicon, the polarization memory (PM) effect, i.e., the correlation between the polarization of the exciting light and the polarization properties of the PL, was found, and its features and mechanism were studied [5][6][7][8][9]. But in nc-Si−SiO x structures obtained using high-temperature annealing of non-stoichiometric SiO x in inert atmosphere or vacuum, the PM effect was not observed. This may be explained by the fact that, because of the isotropy of the amorphous oxide, silicon nanoparticles formed during annealing are also isotropic. Thus, the PM effect in these structures does not manifest itself. In previous works [10,11], it was shown that the treatment of these structures in a solution or vapor of hydrofluoric acid can significantly increase the PL intensity and shift the PL peak position to the short-wave region due to partial etching of the nc-Si and passivation of their surface. Especially effective etching and passivation take place in porous nc-Si−SiO x structures that are formed by oblique deposition of Si monoxide (SiO) in vacuum and the subsequent high-temperature annealing of the obtained SiO x layer [12][13][14]. These layers have a porous columnar structure with oxide nanocolumns inclined at a certain angle to the sample surface.
During high-temperature annealing of these films, the thermally stimulated formation of Si nanoinclusions occurs in the restricted volume of the SiO x columns. Due to the free space (cavities) between the oxide columns, the structures are more susceptible to chemical treatments, e.g., to etching in HF solution or vapor. Recently, it was shown for the first time [15] that after vapor fluorine-hydrogen treatment of such structures, the PM effect also manifests in them. This means that porous nc-Si−SiO x structures can be used for the fabrication of polarized light sources. The linearly polarized emission of these nanostructures may have potential uses as backlighting for flat-panel displays [16] and in biological labeling [17]. In this paper, we study the influence of the HF aqueous solution processing time of porous nc-Si−SiO x structures on their nc-Si morphology and the PL polarization properties. Methods The investigated nc-Si−SiO x light-emitting structures were produced by thermal evaporation of 99.9% pure silicon monoxide SiO powder (Cerac Inc., Milwaukee, WI, USA) in vacuum ((1-2) × 10^-3 Pa) onto polished c-Si substrates. The substrates were arranged so that the angle (α) between the normal to the substrate surface and the direction to the evaporator was 60° (oblique or glancing-angle deposition). The evaporation rate was monitored in situ by the KIT-1 quartz-crystal-oscillator monitor system. The deposited film thickness was measured using an MII-4 micro-interferometer and amounted to 900 to 950 nm. The films were annealed in vacuum for 15 min at a temperature of 975°C. Passivation of the nc-Si−SiO x structures obtained in this manner was carried out in an aqueous solution of HF. The effect of the treatment on the PL spectra was studied by varying the time of treatment at a fixed temperature (20°C) of the solution and at a fixed concentration (0.5 wt% HF). The PL spectra were excited using the linearly polarized emission of a semiconductor laser at the 415 nm wavelength. The polarization of the exciting light was rotated by an achromatic half-wave plate and cleaned by a linear polarizer. The orientation of the polarization vector of the emitted light was defined by a sheet polarizer (analyzer) placed in the detection path. PL was excited and detected in a direction nearly normal to the sample surface. PL spectra were measured at room temperature within the wavelength range from 550 to 850 nm. These spectra were normalized to the spectral sensitivity of the experimental system and were corrected with respect to the polarization-dependent response of the measurement system. Results and discussion The structure of the obliquely deposited SiO x films was studied by SEM (ZEISS EVO 50XVP, Oberkochen, Germany) in previous papers [10,12]. The film structure presents well-defined columns characterized by a certain orientation of growth; the column diameter varies in the 10-100 nm range. The dimensions of the columns, their orientation, and the porosity (the relative volume of pores) of the films depend on the angle of deposition. At the angle α = 60°, the porosity equals 34% and the inclination angle of the formed oxide nanocolumns relative to the normal to the sample surface is 26-29° [12,18]. The porosity of the films and the dimensions and inclination of the columns remained unchanged on annealing [14]. After annealing, the obtained nc-Si−SiO x structures show weak PL. The PM effect in the PL spectra is not observed, despite the fact that they possess biaxial optical anisotropy [18].
But the HF treatment of the samples is accompanied by a gradual change in the emission spectra and their polarization properties. The spectra were measured with the analyzer oriented parallel to the polarization of the exciting radiation; the sample is oriented so that the polarization of the exciting light is parallel to the projection of the inclined SiO x nanocolumns on the sample plane. The nc-Si−SiO x structure exhibits a broad PL band within the 550 to 820 nm wavelength range, which can be attributed to exciton recombination in nc-Si [19]. These spectra have similar line shapes; however, the positions of their emission peaks and their intensities are largely different. By increasing the etching time, we observe a gradual shift of the emission peak position to shorter wavelengths and an increase in the PL intensity. The blue shift of the PL band in HF-treated samples can be attributed to the selective-etching-induced decrease in the Si nanoparticle dimensions [13]. In the porous films studied here, HF penetrates deep into the film and dissolves SiO 2 at the surface of the oxide columns, thus stripping the nc-Si. After HF treatment, the nc-Si again form a thin native oxidized layer on the surface when exposed to atmospheric oxygen. Because of the oxidation of the nc-Si surface, the dimensions of the initial nc-Si core are reduced, resulting in the blue shift of the emission spectrum. The significant enhancement of the PL intensity after HF treatment is related to the passivation of Si dangling bonds (non-radiative recombination trap states) by hydrogen and oxygen [11,14]. As has been shown in our previous paper [15], etching in HF vapor can also change the shape of symmetric Si nanoparticles to an elongated anisotropic one, similar to that in porous silicon, and leads to the PM effect. Indeed, in such samples, the intensity of PL polarized parallel to the polarization of the excitation is higher than the PL component polarized in the perpendicular direction, i.e., in the investigated samples, the PM effect is really observed. This effect can be illustrated by the degree of linear polarization of the PL, which is defined by the expression ρ = (I|| − I⊥)/(I|| + I⊥), where I|| and I⊥ are the intensities of the photoluminescence with polarization parallel and perpendicular to that of the excitation light, respectively. The spectral dependences of ρ for the same samples as in Fig. 1 are shown in Fig. 2. For all investigated samples, a non-monotonic variation of the degree of linear polarization is observed over the whole spectral range. In the long-wave region of emission (730-800 nm), which corresponds to larger sizes of Si nanoparticles, the ρ value increases with the etching time and grows toward the long-wavelength part of the spectrum. For the sample etched 5 min (a), the ρ value grows more smoothly in comparison with the samples etched 10 (b) and 17 (c) min. Near the PL maxima, ρ has its minimal values; it then also increases when moving toward higher emitted energies. But unlike the long-wave region of the PL, the increase of the ρ value in the short-wave region occurs more sharply for the sample etched 5 min (a), i.e., for the shorter etching time. These results are somewhat different from the dependencies that were obtained in our previous works [15,20] for porous nc-Si−SiO x samples etched in HF vapor.
In the vapor-treated samples, ρ slightly decreases with increasing wavelength in the long-wavelength spectral region, similar to how it happens in porous silicon [5][6][7][8][9]. This difference in the observed behavior of ρ in the long-wavelength spectral region may be connected with the difference in the etching mechanism of the porous structure in the liquid or vapor phase. Figure 3 shows ρ spectra for the samples which were HF-treated for 5 (a, b) and 17 (c, d) min. The polarization of the exciting radiation is oriented parallel to the projection of the inclined SiO x nanocolumns on the sample plane (b, c) and in the perpendicular direction (a, d). It can be seen that for the sample that was HF-treated for 5 min, the PM effect is more pronounced for parallel orientation of the excitation polarization and the nanocolumns' projection than for perpendicular orientation. At the energy of the PL maximum, the ρ value for the parallel orientation exceeds that for the perpendicular one. This anisotropy is consistent with the conditions of etching of the porous matrix: dissolution occurs starting from the surface of the SiO x nanocolumns; the side surface of the nc-Si is dissolved first, since it is closer to the column surfaces, which leads to elongation of the nanoparticles along the axis of the column. The obtained results are consistent with the earlier studied angular dependencies of the PL intensity in porous nc-Si−SiO x structures passivated in HF vapor, which indicate a well-defined orientational dependence of ρ in the sample plane [20]. This result is similar to that observed in porous silicon formed by electrochemical etching of Si with orientation [100] in the presence of linearly polarized light illumination [6], which indicated the existence of PM anisotropy in the plane of the sample. For the sample that was treated for 17 min, the PM effect is almost the same for both orientations of the excitation polarization relative to the sample, and the ρ values at the PL maximum are equal to 0.20 (c) and 0.16 (d) for parallel and perpendicular orientation of the excitation polarization and the SiO x nanocolumns' projection, respectively. As has been shown previously by electron microscopic studies [14], long-duration etching of our samples leads to a reduction in their thickness, etching of the nanocolumns, and the formation of an isotropic disordered structure. As in the continuous (solid) nc-Si−SiO x structures [15], in the plane of this sample there are no preferential orientations of the anisotropic silicon nanoparticles. Several mechanisms have been proposed as responsible for the PM effect in silicon nanostructures, the two main ones being the following: modification of the energy spectrum and optical matrix elements by size quantization of carriers [21][22][23], and dielectric confinement of the optical electric field caused by the difference in the dielectric constants of the nanoparticle and the environment [5,24]. In [21], the linear character of PL polarization in porous Si is studied experimentally and theoretically using a quantum cylindrical model in the framework of effective-mass theory. From the experimental and theoretical results, it is concluded that the PL polarization anisotropy of elongated particles is associated with the structure of the valence band due to quantum confinement in two directions, parallel and perpendicular to the longer axis of the particle.
However, there exists a more common explanation of the PM effect, within the dielectric model, in which porous silicon is considered as a composite that includes elongated and flattened silicon nanocrystals [5][6][7]. The probability of optical absorption and emission is proportional to the square of the electric field inside the nc-Si, and therefore nanocrystals with their longest dimensions aligned along the polarization direction of the exciting light will preferentially absorb and emit photons. Then, the PM is the result of selective excitation of that part of the non-spherical silicon nanoparticles whose longer axes are parallel to the polarization of the exciting radiation [5,8,24]. The polarization properties of individual silicon nanorods with diameter around 5 nm, embedded in SiO 2 and oriented parallel to the Si substrate, were studied using an optical micro-spectroscopy setup [25]. The experimental results were compared with available theoretical models, leading to the conclusion that the high polarization degree is mostly due to surface charges (dielectric confinement) with a smaller contribution of quantum confinement effects. Both interpretations, on the basis of quantum size effects and within the dielectric model, associate the PM effect with asymmetric, elongated nanoparticles that emit PL. On the basis of these models, it is possible to explain the features of our results. The broad PL bands in our samples (Fig. 1) are the superposition of radiation from nc-Si of different sizes; the smaller particles correspond to shorter-wavelength radiation. The position of the PL maximum is determined by the maximum in the size distribution of the nanoparticles. The increase of the ρ values in the long-wave spectral range and with the time of HF treatment (Figs. 2 and 3) can be associated with an increase of the asymmetry of large Si nanoparticles, for example, with an increase of the eccentricity of elongated ellipsoidal nc-Si. The PM effect for this spectral interval is most likely associated with the dielectric model. Using the dielectric model, we estimated the asymmetry of the nanoparticles emitting near the PL maximum for the sample treated in HF solution for 5 min. We assume that the nc-Si are elongated ellipsoids of rotation with semi-axes a, b, c (a = b, c > a) and that the angle between the long axis of the ellipsoids and the normal to the sample surface is equal to the columns' inclination angle. Using formulas from the paper [5] and the values of the optical dielectric constants inside (ε i = 15 [26]) and outside the nc-Si (ε o = 1.5), we found that the ρ values are close to the experimental ones (Fig. 3, curves a, b) if the ellipsoids have a/c ≈ 0.36. Since the ρ values slightly increase toward longer wavelengths, the eccentricity of the larger nanoparticles somewhat increases too. With further increase of the HF treatment time, along with an increase in nc-Si asymmetry, their preferred orientation is also destroyed, as can be seen in Fig. 3(c, d). We interpreted the experimental data for the samples that were HF-treated for 17 min assuming that these samples can be considered as an ensemble of randomly oriented elongated nc-Si embedded in an effective dielectric medium.
The ρ value for such ensembles, with random orientation of the ellipsoid axes, was calculated in [27,28] (Equation (2)). The k value entering this expression is related to the depolarizing factor n of the ellipsoid (Equation (3)), where the depolarizing factor n is calculated according to the expression given in [29] (Equation (4)). Using formulas (1)-(4) and the same values of the optical dielectric constants, it was determined that, near the PL maximum, the ρ value is equal to 0.18 if the elongated ellipsoids have a/c ≈ 0.4. As seen, this calculated ρ value is close to the experimental ones (≈0.19). However, in the longer-wave spectral range, ρ increases rather sharply with increasing wavelength (Fig. 3, curves c, d). In this spectral range, the ρ values calculated by formulas (1)-(4) approximate the experimentally obtained ones if we assume that, as a consequence of the HF treatment, the emitting nc-Si ellipsoids of larger sizes have a/c ≈ 0.1-0.2, i.e., their eccentricity is essentially increased. As shown in [30], in a random system of nanowires excited by polarized light, the maximal ρ value is equal to 0.5. We also see that the increase of the HF treatment time, in the investigated interval of times, has a considerably smaller effect on the asymmetry of the Si nanoparticles which are responsible for the emission in the range near the PL maxima. With a further decrease of the nc-Si size, i.e., in the short-wave region of the PL band, the increase of ρ with decreasing wavelength, as in porous silicon, agrees qualitatively with the quantum-confined nanostructure model, in which the degree of polarization increases with decreasing nanostructure size [21,22]. Conclusions We investigated the polarization memory effect in the PL of nc-Si−SiO x light-emitting nanocomposites in which the silicon nanoparticles are embedded in the optically anisotropic SiO x matrix possessing a porous column-like structure. It was found that the degree of PL linear polarization depends on the treatment time of the samples in HF aqueous solution. During etching in HF, the nanoparticles with initially spherical shape become asymmetric and elongated, preferentially in a direction along the oxide nanocolumns. This results in the experimentally observed well-defined orientational dependence of the ρ value in the sample plane. But long-duration HF treatment of our samples leads to etching of the nanocolumns and the formation of an isotropic disordered structure. We have concluded that the observed spectral dependence of the ρ value in nc-Si−SiO x structures in the long-wavelength region can be explained on the basis of the classical surface-charge (dielectric) model. In the short-wave region of the emission spectra, it is necessary to take into consideration the contribution of the quantum confinement model, in which the degree of polarization increases with decreasing nanostructure size.
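As a numerical companion to the dielectric-model estimates above, the following sketch computes the textbook depolarization factor of a prolate spheroid and the corresponding internal-field screening factors, using the dielectric constants quoted in the text (ε_i = 15, ε_o = 1.5) and an aspect ratio a/c = 0.4. This reproduces only the standard ingredients behind Equations (3) and (4); the ensemble-averaged ρ formula of refs [27,28] is not reproduced here.

# Minimal sketch: depolarization factor of a prolate spheroid (a = b < c)
# and the resulting internal-field screening factors of the dielectric
# model. Only the textbook ingredients are implemented; the full
# ensemble-averaged rho expression of refs [27,28] is not.
import numpy as np

def depolarization_factor_long_axis(a_over_c):
    """Depolarizing factor n along the long axis c of a prolate spheroid."""
    e = np.sqrt(1.0 - a_over_c**2)               # eccentricity
    return (1.0 - e**2) / e**3 * (np.arctanh(e) - e)

eps_i, eps_o = 15.0, 1.5   # dielectric constants inside/outside nc-Si (text)
a_over_c = 0.4             # aspect ratio estimated near the PL maximum

n_c = depolarization_factor_long_axis(a_over_c)
n_a = (1.0 - n_c) / 2.0    # the three depolarizing factors sum to 1

# Field inside the ellipsoid relative to the external field, per axis:
f_c = 1.0 / (1.0 + n_c * (eps_i / eps_o - 1.0))
f_a = 1.0 / (1.0 + n_a * (eps_i / eps_o - 1.0))

print(f"n_c = {n_c:.3f}, n_a = {n_a:.3f}")
print(f"field factor along long axis  f_c = {f_c:.3f}")
print(f"field factor along short axes f_a = {f_a:.3f}")
print(f"field anisotropy (f_c/f_a)^2 = {(f_c/f_a)**2:.2f}")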
4,431
2016-06-02T00:00:00.000
[ "Physics" ]
Surface-Condition-Dependent Deformation Mechanisms in Lead Nanocrystals Serving as nanoelectrodes or frame units, small-volume metals may critically affect the performance and reliability of nanodevices, especially with feature sizes down to the nanometer scale. Small-volume metals usually behave extraordinarily in comparison with their bulk counterparts, but the knowledge of how their sizes and surfaces give rise to their extraordinary properties is currently insufficient. In this study, we investigate the influence of surface conditions on mechanical behaviors in nanometer-sized Pb crystals by performing in situ mechanical deformation tests inside an aberration-corrected transmission electron microscope (TEM). Pseudoelastic deformation and plastic deformation processes were observed at atomic precision during deformation of pristine and surface-oxidized Pb particles, respectively. It is found that in most of the pristine Pb particles, surface atom diffusion dominates and leads to a pseudoelastic deformation behavior. In stark contrast, in surface-passivated Pb particles where surface atom diffusion is largely inhibited, deformation proceeds via displacive plasticity including dislocations, stacking faults, and twinning, leading to dominant plastic deformation without any pseudoelasticity. This research directly reveals the dramatic impact of surface conditions on the deformation mechanisms and mechanical behaviors of metallic nanocrystals, which provides significant implications for property tuning of the critical components in advanced nanodevices. Introduction As the feature size of nanodevices gradually decreases to the sub-10 nm scale, surface conditions have been proven to dominate the properties and behaviors of nanomaterials [1][2][3]. Particularly, the surface-mediated mechanical deformation process of metals has attracted interest, since metallic materials are usually affected by external stress in practical applications [4][5][6]. Previous studies have shown that many factors, including temperature and material size, may affect and determine the deformation process of materials [4,5,[7][8][9][10][11][12]. Based on dislocation dynamics, which is common and dominant in bulk materials, traditional theories have been successfully applied to explain the deformation processes of many metals such as Au, W, Pt, Bi, and Sn [9,[11][12][13][14][15]. Moreover, it was recently found that as material size decreases to the nanometer scale, surface diffusion predominantly affects the deformation process of metals [12,16,17], which successfully explains several unusual mechanical deformation phenomena observed in metallic nanocrystals [7,12,16,18]. The mechanical deformation processes of different small-volume metals have been extensively studied. Most previous studies pursued the intrinsic mechanical properties of small-volume metals under ideal conditions where clean sample surfaces were obtained by either using inert metals such as Au and Pt [8,10,14,15,19,20] or adopting techniques including plasma cleaning, electron beam etching, and in situ welding [2,16,18,21,22]. In practical applications, however, devices often have complex surfaces that are oxidized or corroded after long-time exposure to their service environments, e.g., in air or liquids [1,3,6,[23][24][25][26]. Thus, the impact of surface conditions on the deformation of small-volume metals should not be neglected. Indeed, some studies have already shown that surface coating remarkably affects the sintering of metals [2,17].
Yet the effect of surface passivation on the mechanical behavior of metals, and the mechanisms involved therein, remains largely unknown. Thus, it is necessary to provide direct and real-time observation of the deformation processes of metallic materials with different surface conditions at atomic precision. Lead is one of the earliest metals to find wide technological applications in many fields such as alloys, glasses, lead-acid batteries, or photovoltaic devices [27][28][29][30]. The mechanical properties of Pb under distinct conditions have also been studied, since the deformation process of Pb can significantly influence the performance of devices [5,6,31]. Nevertheless, there is a lack of direct evidence on the deformation of Pb nanoparticles, and details of the surface-mediated deformation process have not yet been fully clarified. In this work, in situ mechanical deformation experiments were conducted on Pb particles with distinct surface conditions in an aberration-corrected Titan TEM/STEM. Surface-condition-dependent pseudoelastic deformation and plastic deformation were observed. In pure Pb nanoparticles, the prevailing surface diffusion process contributes to pseudoelastic deformation, while in passivated Pb particles with a PbO surface layer, surface diffusion is prohibited and gives way to crystal slips such as dislocations, stacking faults, and twinning, leading to irreversible plastic deformation. Results and Discussion Preparation of Pb particles with distinct surface conditions is described in Materials and Methods. Typical results are shown in Figure S1 and Figure S2. More details can be found in our previous research [32]. In situ mechanical deformation tests were conducted on pure Pb particles with clean surfaces, and the liquid-like pseudoelastic deformation process was captured, as shown in Figure 1 and Figure S3. Initially, a Pb particle supported on the PX-PbTiO 3 nanowire surface was relaxed nearly into an ellipsoid, where 49 layers of Pb(111) planes can be identified, as shown in Figure 1(a). The two diameters of the ellipsoidal particle were measured to be 11.7 nm and 12.9 nm, respectively. Then, a W tip was moved upward to press the Pb particle in Figure 1(b). As the W tip kept moving upward, the particle was gradually compressed into a thick slice in Figure 1(c) and Figure S3(d). The width increased to 13.9 nm and the thickness decreased to 5.3 nm. Then, the W tip was moved downward away from the nanowire, and the Pb particle was relaxed in Figure 1(d) and was even slightly stretched in Figure 1(e) due to the van der Waals forces. As the W tip continued moving downward, the van der Waals forces became too weak and the Pb particle lost contact with the tip. Meanwhile, as shown in Figure 1(f), the Pb particle quickly shrank into an ellipsoid which was almost the same as its initial shape in Figure 1(a). Moreover, the crystal fringes of Pb{111} can be observed clearly during the whole process, which proves that the particle remained in a solid state. These results clearly represent pseudoelastic deformation instead of the common plastic deformation process, as illustrated in Figure 1(g). A similar pseudoelastic deformation phenomenon has been observed in sub-10 nm Ag particles in a previous study, and it was proved to be induced by the diffusion of surface atoms [16]. It has been shown that such recoverable shape change is mediated by diffusion of surface atoms and driven by the minimization of surface energy. In fact, surface diffusion can lead to other phenomena.
As shown in Figures 1(h)-1(j), for instance, coalescence of two pure Pb particles during mutual extrusion was observed. The two particles were independently placed on a W tip and on the substrate in Figure 1(h), respectively. As the tip moved upward, the two particles touched and pressed against each other in Figure 1(i). Then, as the tip moved downward, the initial two particles had merged into one bigger particle in Figure 1(j). Similar diffusion-induced coalescence processes have also been observed in other materials [2,17]. Since surface diffusion plays a vital role during deformation of Pb nanoparticles, the surface condition of Pb particles may significantly affect these diffusive deformation processes. The influence of surface passivation during the mechanical deformation process was therefore studied. PbO x layers were introduced to the surface of Pb particles by in situ irradiation, and the details can be found in our previous research [32]. Examples are shown in Figures 2(a) and 2(b) and Figure S4. Mechanical deformation experiments were then conducted on surface-passivated Pb particles. As shown in Figures 2(c)-2(f) and Figure S5, when the surface of a Pb nanoparticle is oxidized, plastic deformation instead of pseudoelastic deformation was observed during similar in situ compression and stretching processes. Initially, a Pb particle supported on a PX-PbTiO 3 nanowire was completely covered with a PbO x surface layer in Figure 2(c). The particle was measured to be 13.8 nm in height and 14.3 nm in width. Then, a W tip was moved leftward and pressed the particle to a thickness of only 7.7 nm in Figure S5(b). After that, the W tip was moved rightward and thus stretched the particle due to the van der Waals forces. When the particle was stretched, some fresh surfaces were also formed in Figure 2(d). Then, similar to the results in Figure S4, a new PbO x layer grew on the clean surfaces, since the whole system is under the electron beam, as shown in Figure 2(e). Finally, a PbO x layer again covered the whole Pb particle in Figure 2(f). As a result, the Pb particle grew into an irregular shape, with its length increased to 16.7 nm and width decreased to only 10.0 nm, as shown in Figure S5(f). Different from the results shown in Figure 1, this particle did not recover its initial shape in Figure 2(c), indicating that surface passivation led to plastic deformation instead of pseudoelastic deformation of Pb particles. Other similar results are shown in Figure S6. In all cases, diffusion-induced liquid-like spontaneous shrinking of Pb particles after stretching was suppressed by surface oxidation, which led to irregular particle shapes after unloading. Moreover, we also found that surface passivation prevents coalescence of Pb particles (Figures 2(g)-2(i)), in stark contrast with the case in Figures 1(h)-1(j). Note that plastic deformation is observed in both particles after detachment in Figure 2(i). Figures 1 and 2 have shown dominant diffusive pseudoelastic deformation of the clean Pb particle and plastic deformation of the surface-passivated Pb particle. According to previous studies, the deformation mode of pristine nanometals is mediated by a size-dependent competition between diffusive and displacive processes [18]. In our case, based on the observations in Figure 1 (diffusion) and Figure 3(a) (slip), pure Pb nanoparticles are found to undergo a combination of both deformation modes.
To deeply understand the dominant deformation mechanism of Pb nanoparticles and quantify the influence of surface passivation, mechanical tests were conducted on more than 100 Pb particles with different surface conditions. Note that pseudoelasticity can mainly be attributed to the diffusion of surface atoms and thus can be viewed as an indicator of dominant diffusive deformation. That is, the existence or absence of substantial pseudoelasticity is used to differentiate between diffusive-dominant and displacive-dominant deformation modes. A statistical study (Figure 4(a)) reveals that ~78% of the clean Pb particles exhibited dominant diffusive pseudoelastic deformation, while only 22% showed certain degrees of displacive plasticity (such as the slip process in Figure 3(a)). By contrast, all surface-passivated Pb particles demonstrated displacive plastic deformation absent of pseudoelasticity. Such dramatic differences can be understood by considering the size-dependent competition between the diffusive and displacive deformation modes. In the case of pure Pb particles, a semi-quantitative analysis similar to a previous study on Ag nanoparticles [18] was performed by comparing the diffusive and displacive deformation rates of pure Pb particles with different sizes during the stretching process, as shown in Figure 4(b) and Figure S7. A threshold particle diameter of 67 nm was derived, below which diffusive deformation overwhelms displacive deformation, and vice versa. Such findings rationalize the prevalence of diffusive deformation in Pb nanoparticles with diameters below 25 nm (Figure 4(c)). In the case of surface-passivated Pb particles, however, the surface oxidation layer effectively suppresses the diffusion of Pb atoms at the layer interface and consequently diffusive deformation, making displacive deformation the exclusive dominant deformation mode for all tested Pb particles (Figure S8). These findings highlight the dramatic impact of surface condition on the mechanical behavior and properties of small-volume metals. Due to the confinement of the surface oxide layer, crystal slips in surface-passivated Pb particles proceed in a different manner compared to those in clean particles. A comparison between Figures 3(a) and 3(b) reveals that the particle shape during plastic deformation remains much smoother in passivated particles than in clean particles. Due to the lack of work hardening mechanisms, small-volume metals often suffer from plastic instability arising from localized dislocation dynamics or crystal slips (Figure 3(a)). Such localized deformation generates continuously thickening surface steps and thus a rough surface contour in pristine particles [10,14]. By contrast, in surface-passivated particles, the formation of partial-dislocation-mediated stacking faults and twins is observed more frequently (Figures 3(f)-3(h)) than in clean particles. This can be rationalized by the fact that partial dislocation-mediated deformation generates thinner surface steps and thus smoother surfaces compared to full dislocation slip, especially in the case of twinning/detwinning, where the particle surface contour is gradually changed by sequential migration of twinning partial dislocations on successive adjacent crystal planes [19,20]. In addition, surface passivation also increases the stress required for driving plastic deformation. As demonstrated in the half-surface-passivated Pb particle in Figure S9, upon tensile loading, the segment with a clean surface stretched noticeably, while the passivated segment remained stable.
These findings offer an effective approach for increasing the mechanical stability of small-volume metals. With clean surfaces, the diffusive deformation process is found to be dominant in metals with a low activation energy barrier for atomic diffusion, for example, Pb, Ag, and Sn [12,16,18,27,31]. However, in ambient environments, most metals, except a few inert ones such as Au and Pt, are prone to surface oxidation, whose impact on mechanical behavior has often been neglected and has yet to be clarified. Based on our observations, changes in surface condition can dramatically alter the deformation process and thus the mechanical properties of small-volume metals. As such, we believe that in addition to common factors such as temperature and material size, surface condition is another important factor that should be taken into account when investigating the mechanical properties of materials. The surface condition effect is expected to be most prominent in active metals with small sizes and good surface diffusivities, such as Cu, Bi, Sn, and Ag. Conclusions In this research, the mechanical deformation process of Pb nanoparticles with different surface conditions was studied by in situ experiments inside an aberration-corrected Titan TEM/STEM. It was found that clean and oxidized surfaces, respectively, lead to dominant pseudoelastic behavior and plastic deformation. During these experiments, the deformation process was determined by the probability of surface diffusion of Pb atoms. For Pb particles with clean surfaces, diffusion of Pb surface atoms prevails and leads to pseudoelasticity. When the surfaces of Pb particles are covered with PbO x , surface diffusion is suppressed and gives way to displacive deformation processes, including dislocations, stacking faults, and twinning. This research opens up new opportunities to tune the mechanical properties of small-volume metals by surface engineering and provides implications for developing new advanced nanodevices in the future. To prepare the Pb particles, the PX-PbTiO 3 nanowires were first dispersed in ethanol by ultrasonication. The suspension was then dropped onto a half Cu grid. The surface condition of Pb particles is tuned by the amount of electron beam irradiation time. As shown in our previous research [32], the pristine PX-PbTiO 3 structure was destroyed by electron beam irradiation. Pb atoms diffuse to the surface of the nanowires and form pure Pb particles, while most of the O atoms (or O 2 ) escape into the environment (only a small fraction of oxygen is adsorbed on the nanowire surface). Within a time period of ~10 minutes, the as-formed Pb particles can maintain clean surfaces, since the amount of adsorbed oxygen is not enough to generate an oxide layer. During prolonged irradiation, however, the concentration of adsorbed surface oxygen continuously increases, leading to the formation of passivated Pb particles with an intact surface oxide layer. Materials and Methods All the TEM characterization and the in situ experiments were carried out inside the aberration-corrected TEM (FEI Titan 80-300). In Situ Mechanical Tests. For the in situ mechanical tests, after the Pb particles were prepared on the half Cu grid, the grid was quickly transferred to a TEM-scanning tunneling microscopy holder by gluing it onto a gold wire with conductive epoxy.
Then the in situ mechanical experiments on the Pb particles were conducted using tungsten tips prepared by electrochemical etching: 2 mol/L NaOH was used for the etching, and the etching voltage was 2.5 V with a compliance current of 20 mA. All W tips were cleaned by plasma cleaning for 5 minutes before the in situ TEM experiments. The in situ compression and stretching processes of the Pb particles were conducted at a speed of approximately 0.3 nm/s. Data Availability All data are available in the manuscript, supplementary materials, or from the author. Supplementary Materials Semi-quantitative comparison of plastic and pseudoelastic deformation processes of pure Pb particles. Figure S1: in situ oxidation of Pb particles. Figure S2: characterization of a Pb particle covered by multilayers of PbO. Figure S3: liquid-like pseudoelastic deformation of a pure Pb particle. Figure S4: in situ repairing of PbO layers. Figure S5: plastic deformation of a surface-passivated Pb particle. Figure S6: behavior of clean and surface-passivated Pb particles after the stretching process by the W tip. Figure S7: illustration of slip and diffusion processes of pure Pb particles. Figure S8: statistics of plastic and pseudoelastic deformation of surface-passivated Pb particles with different diameters. Figure S9: mechanical deformation of a Pb particle with part of its surface covered with PbO layers.
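To illustrate how a threshold diameter like the reported 67 nm value can emerge from the rate competition described above, here is a toy numerical sketch. The inverse-fourth-power size scaling of the surface-diffusion rate (Herring-type kinetics) and the size-independent displacive rate are simplifying assumptions introduced here, not the paper's actual analysis, and the prefactor is chosen so the crossover lands at the reported threshold.

# Toy model of the size-dependent competition between diffusive and
# displacive deformation rates. The d^-4 scaling and the constant
# displacive rate are illustrative assumptions; the prefactor is set
# so the crossover falls at the 67 nm threshold reported in the text.
import numpy as np

d = np.linspace(5.0, 150.0, 300)          # particle diameter, nm
d_threshold = 67.0                         # reported crossover diameter, nm

displacive_rate = 1.0                      # arbitrary units, size-independent
diffusive_rate = (d_threshold / d) ** 4    # equals 1 exactly at d = 67 nm

crossover = d[np.argmin(np.abs(diffusive_rate - displacive_rate))]
print(f"diffusive deformation dominates below ~{crossover:.0f} nm")
for probe in (15.0, 25.0, 67.0, 100.0):
    ratio = (d_threshold / probe) ** 4
    print(f"d = {probe:5.1f} nm: diffusive/displacive rate ratio = {ratio:8.1f}")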
3,869.6
2022-07-27T00:00:00.000
[ "Materials Science" ]
One aptamer, two functions: the full-length aptamer inhibits AMPA receptors, while the short one inhibits both AMPA and kainate receptors. AMPA and kainate receptors, along with NMDA receptors, are distinct subtypes of glutamate ion channels. Excessive activity of AMPA and kainate receptors has been implicated in neurological diseases, such as epilepsy and neuropathic pain. Antagonists that block their activities are therefore potential drug candidates. In a recent article in the Journal of Biological Chemistry by Jaremko et al. 2017, we have reported on the discovery and molecular characterization of an RNA aptamer of a dual functionality: the full-length RNA (101 nucleotide) inhibits AMPA receptors while the truncated or the short (55 nucleotide) RNA inhibits both the AMPA and kainate receptors. The full-length RNA aptamer was isolated through a specially designed, systematic evolution of ligands by exponential enrichment (SELEX) using only a single type of AMPA receptors expressed in HEK-293 cells. The design feature and the results of our recent article are highlighted here, as they demonstrate the utility of the SELEX approach and the potential of using a single AMPA receptor type to develop potent, novel RNA aptamers targeting multiple subunits and AMPA/kainate receptor subtypes with length-dependent functionalities. groups [5,6] , and even an amino acid mutation [7] on a target have been reported. To achieve target specificity through SELEX, one routinely uses a strong chemical pressure, such as an inhibitor that binds to the same site or mutually exclusive sites on that single target, in order to displace and enrich target-specific aptamers through in vitro evolution. However, there could be a utility of using the same target in SELEX to purposely evolve RNA aptamers with intended actions against multiple targets for drug discovery. This approach is based on the assumption that these targets are all involved in a disease. This approach may be especially useful with the application of SELEX to isolating aptamers against membrane proteins that must be expressed in a heterologous expression system such as HEK-293 cells. Expressing a single target, i.e., one protein or receptor, to maximize the surface expression and density as opposed to expressing multiple receptors with "diluted" surface density for any one of the targets would be an advantage for this approach. In a recent article in the Journal of Biological Chemistry by Jaremko et al. 2017 [8] , we have reported such an approach for the discovery and molecular characterization of an RNA aptamer that has a length-dependent, dual functionality against AMPA and kainate receptors. AMPA and kainate receptors, along with the N-methyl-D-aspartate (NMDA) receptors, are distinct subtypes of the glutamate ion channel receptor family. These receptors mediate the majority of excitatory neurotransmission in the mammalian central nervous system (CNS) and are indispensable for brain development and function [9][10][11] . AMPA and kainate receptors are more alike in both sequence and structure, as compared with NMDA receptors [11,12] . AMPA receptors have 4 subunits, GluA1-4, while kainate receptors have five subunits, GluK1-5. Functionally, AMPA receptors are expressed post-synaptically and are involved in fast excitatory neurotransmission [9] . Kainate receptors are expressed both pre-and post-synaptically; they contribute to excitatory neurotransmission and also modulate network excitability by regulating neurotransmitter release [13,14] . 
AMPA and kainate receptors are involved in neurological diseases. A study of GluK2-deficient mice has revealed that hippocampal neurons in the CA3 region express both AMPA and kainate receptors, and both receptor types are involved in seizures [15]. Entorhinal cortex, a highly epilepsy-prone brain region, also expresses GluA1-4 and GluK5 [16]. In both human patients and animal models of temporal lobe epilepsy, the axons of granule cells that normally contact CA3 pyramidal cells sprout to form aberrant glutamatergic excitatory synapses onto dentate granule cells [17][18][19]. The formation of aberrant mossy fiber synapses onto dentate granule cells has been suggested to induce the recruitment of kainate receptors in chronic epileptic rats. These granule cells express AMPA receptors as well, especially the GluA1 and GluA2 subunits [20]. Another example of a neurological disorder that involves both AMPA and kainate receptor types is acute and chronic pain mediated through the anterior cingulate cortex [21,22]. Interestingly, AMPA and KA receptors have also been implicated in osteoarthritis and rheumatoid arthritis [23]. Specifically, both receptors are expressed in human arthritis tissue. In rat models of antigen-induced arthritis, the use of the AMPA/KA receptor antagonist NBQX was shown to alleviate inflammation, pain and joint degeneration [23]. These lines of evidence therefore suggest that antagonists capable of blocking the activity of both AMPA and kainate receptors in vivo should be therapeutically useful. In fact, a nonselective AMPA/kainate receptor inhibitor, tezampanel (NGX424; Torrey Pines Therapeutics), reduced both migraine pain and other symptoms in a Phase II trial. NS1209 (NeuroSearch A/S), another nonselective AMPA/kainate receptor antagonist, was also shown in Phase II studies to alleviate refractory status epilepticus and neuropathic pain [24]. In this context, RNA aptamers with dual actions on both AMPA and kainate receptors would be a class of water-soluble antagonists, alternative to small-molecule inhibitors. The hypothesis we tested was based on the assumption that an RNA exerts a variety of tertiary interactions with its target(s) (i.e., hydrophobic and electrostatic interactions, hydrogen bonding and van der Waals forces) [25], and the types and strengths of these interactions should be length (and sequence) dependent. If we can find an aptamer that covers a sufficient range of these interactions with two targets, it is possible that different subsets of these interactions may be differentially used for the two targets; truncation of the length, by fine-tuning these subsets of interactions, may decouple the differential molecular recognitions and specificities. To test this hypothesis, namely finding an aptamer that may act on both AMPA and kainate receptors while using a single receptor as the target of selection in SELEX, we designed our approach based on the following rationale. (i) AMPA and kainate receptors share a high degree of sequence and structural homology [10,12]. (ii) Given its size (100 nucleotides in length, as in our library), an RNA may bind to the surface of a receptor topologically. As a result, the larger area of interaction with the receptor, as compared with the interaction of a small molecule, may generate a range of size-dependent, multivalent binding interactions so that an RNA could bind to and inhibit AMPA and kainate receptors.
In contrast, using multiple targets would likely lead to the identification of individual aptamers with singular activity. (iii) We further decided to choose an AMPA receptor, rather than a kainate receptor, as that single receptor target for SELEX, based on the fact that there are far more inhibitors of AMPA receptors [26] than of kainate receptors [27]. Developing antagonists against kainate receptors in general has been far more challenging [27]. Among all possible AMPA receptor types, we chose GluA1/2R as the target of selection. GluA1/2R is an important channel type found in vivo [28][29][30][31]; here, GluA2R refers to the GluA2 subunit carrying an arginine (R) at the Q/R editing site. Using our design approach and SELEX, we successfully isolated an aptamer, which we termed the "AB9" aptamer (101 nt), and a series of truncations of the full-length RNA led to a 55-nt RNA, which we termed "AB9s" (Figure 1) [8]. There are several interesting features of AB9 and its short version AB9s. First, the full-length aptamer inhibits AMPA receptors (red columns symbolize all AMPA receptor subunits in Figure 1). In fact, AB9 is more selective towards GluA1/2R, the SELEX target. Not surprisingly, AB9 also inhibits the GluA1 and GluA2 AMPA receptor subunits. However, AB9s, the short, 55-nt RNA, inhibits both the AMPA and kainate receptors (Figure 1, the lower bar graph, purple columns for the kainate receptors). Second, AB9 and AB9s appear to have different binding profiles and dissociation kinetics, measured with the use of 32 P-labeled aptamers. AB9 is slower to dissociate from the target, whereas AB9s is fast, suggesting a difference in how they interact with their respective sites. Removing the central sequence segment (see the secondary structures as predicted by Mfold in Figure 1) has turned AB9 into a much better inhibitor of GluK1 and GluK2, the two key kainate receptor subunits, although the origin of this enhancement due to a smaller size, as compared with the full-size aptamer, is unclear at present. Nonetheless, these results are consistent with the assumption that both receptors share a high degree of structural homology and perhaps even a high degree of homology in sites or "druggable" sites. As expected, neither aptamer has any effect on NMDA receptors (Figure 1, bar graphs). Our aptamers are better than the existing AMPA and kainate receptor antagonists by several measures. First, virtually all known kainate receptor antagonists generally have a stronger selectivity towards GluK1 than towards any other kainate receptor subunit [27]. Almost all that inhibit GluK2 actually have stronger potency towards GluK1 [27]. In this context, it is unique that AB9s is nearly equally effective in inhibiting both the GluK1 and GluK2 kainate receptors. Second, AB9s further possesses a nearly identical potency for both the AMPA and kainate receptors (Figure 1). Yet, among the existing inhibitors of either AMPA or kainate receptors, NBQX does inhibit GluK1 and GluK2 roughly equally well; but NBQX is considered an AMPA receptor inhibitor, because it inhibits AMPA receptors >8-fold more strongly than it does kainate receptors [27]. Glutamylaminomethyl sulfonic acid (GAMS) marginally distinguishes kainate from AMPA receptors, based on various in vivo and in vitro tests, including a test in a seizure model [32][33][34][35]. Yet, GAMS shows significant antagonism on NMDA receptors [36]. In contrast, AB9s can block the activity of both AMPA and kainate receptors equally well without appreciable NMDA receptor activity.
In addition, because the aptamer is an RNA molecule, it is a water-soluble antagonist, different from almost all of the existing antagonists for either AMPA or kainate receptors. The experimental design, by which we used a single SELEX target (i.e., GluA1/2R) in a single SELEX operation to evolve a single RNA aptamer that acts on both the AMPA and kainate receptors depending on its length, turns out to be an effective way of generating RNA inhibitors with a desirable inhibitory versatility. It should be noted that the success of this approach relies on the high degree of sequence and structural similarity not only between the kainate and AMPA receptor subtypes but also within a single receptor subtype. More precisely, no place shows a higher structural similarity than the site to which AB9 binds, although at the moment we do not know where this site is. We do know, however, that this site is a noncompetitive one [8]. It is highly likely that the "footprint" of the AB9 site covers a larger surface area, which is needed to inhibit AMPA receptors more selectively. A short version (AB9s), however, uses perhaps only a partial footprint, enough for recognizing and effectively inhibiting kainate receptors. In fact, as seen in the two bar graphs, the enhancement of the kainate receptor antagonism in the short RNA aptamer is actually at the expense of slightly diminishing the AMPA receptor potency. Finally, the existence of this site(s), full or partial, further suggests the possibility of developing chemically modified RNA aptamers amenable to in vivo application as therapeutic RNAs. Abbreviations: NMDA, N-methyl-D-aspartate; nt, nucleotide; SELEX, systematic evolution of ligands by exponential enrichment. Figure 1. AB9 and AB9s inhibit AMPA and kainate receptors in a length-dependent manner. The Mfold-predicted structures for AB9 and AB9s are shown on the left. The red and blue regions represent sequence stretches essential for function (inhibition). AB9 (full length, 101 nucleotides) inhibits AMPA receptors more selectively, whereas AB9s (55 nucleotides in length) inhibits both AMPA and KA receptors (see the bar graphs on the right). In the middle are two cartoon drawings of the AMPA receptors (red/green) and kainate receptors (blue/purple). For the functional assay, each receptor was transiently expressed in HEK-293 cells. Whole-cell current recording was used to measure the whole-cell current amplitude in the absence, A, and presence of an aptamer, A(I) (in these bar graphs, 2 µM aptamer was used in each assay). The potency and the selectivity of an aptamer against the open-channel and the closed-channel states are assayed [8].
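For readers who want to reproduce the kind of secondary-structure prediction shown in Figure 1, a minimal sketch follows. The actual AB9/AB9s sequences are not given in this summary, so a placeholder RNA sequence is used, and the ViennaRNA package stands in for the Mfold web server used by the authors.

# Minimal secondary-structure prediction sketch. The sequence below is a
# placeholder, NOT the AB9/AB9s aptamer; ViennaRNA is used in place of Mfold.
import RNA  # ViennaRNA Python bindings

placeholder_rna = "GGGAGACAAGAAUAAACGCUCAAGCGGUCCGUUAGGUCAUAACCGAGUCC"

structure, mfe = RNA.fold(placeholder_rna)
print(placeholder_rna)
print(structure)  # dot-bracket notation: matched parentheses are base pairs
print(f"minimum free energy: {mfe:.2f} kcal/mol")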
Laminin-1 Peptides Conjugated to Fibrin Hydrogels Promote Salivary Gland Regeneration in Irradiated Mouse Submandibular Glands Previous studies demonstrated that salivary gland morphogenesis and differentiation are enhanced by modification of fibrin hydrogels chemically conjugated to Laminin-1 peptides. Specifically, Laminin-1 peptides (A99: CGGALRGDN-amide and YIGSR: CGGADPGYIGSRGAA-amide) chemically conjugated to fibrin promoted formation of newly organized salivary epithelium both in vitro (e.g., using organoids) and in vivo (e.g., in a wounded mouse model). While these studies were successful, the model's usefulness for inducing regenerative patterns after radiation therapy remains unknown. Therefore, the goal of the current study was to determine whether transdermal injection of the Laminin-1 peptides A99 and YIGSR chemically conjugated to fibrin hydrogels promotes tissue regeneration in irradiated salivary glands. Results indicate that A99 and YIGSR chemically conjugated to fibrin hydrogels promote formation of functional salivary tissue when transdermally injected into irradiated salivary glands. In contrast, when left untreated, irradiated salivary glands display a loss in structure and functionality. Together, these studies indicate that fibrin hydrogel-based implantable scaffolds containing Laminin-1 peptides promote secretory function of irradiated salivary glands. Regarding stem cells/progenitors, previous studies showed that c-Kit+ cells, which normally are found in very low numbers within salivary gland specimens (Nanduri et al., 2011; Nanduri et al., 2013), can be expanded ex vivo for restoring salivary gland function; however, further characterization (e.g., how they incorporate into host tissue, as well as long-term secondary effects such as tumorigenesis and survival rates) must be determined before translating this approach into humans. Another technology involves the use of embryonic organ culture transplantation, where embryonic salivary cells grown in culture can be transplanted in vivo (Ogawa et al., 2013); nonetheless, a diminished gland size and an absence of studies showing long-term outcomes following treatment significantly decrease the utility of this model for translational applications. Bioprinting strategies have shown the possibility of assembling glandular compartments (e.g., acinar/ductal epithelial, myoepithelial, endothelial, and neuronal) into salivary gland organotypic cultures; however, this technology does not mimic the salivary gland native architecture (e.g., cell polarity and organization; Ferreira et al., 2016; Adine et al., 2018). Cell sheets made of salivary gland cells have demonstrated positive results, as they promote cell differentiation and tissue integrity in wounded mouse submandibular gland (SMG) models, yet the main challenge facing this technology is the need to standardize cell composition within the sheets and thereby achieve greater reproducibility (Nam et al., 2019a; dos Santos et al., 2020). Regarding scaffolds other than fibrin hydrogels (FH), various biomaterials (Aframian et al., 2000; Sun et al., 2006; Cantara et al., 2012; Soscia et al., 2013; Hsiao and Yang, 2015; Yang and Hsiao, 2015) have been shown to promote cell growth and attachment, but the degree of structural organization, as demonstrated by hollow multi-lumen formation, cell polarity and functionality, has been modest.
Likewise, studies have shown that human cells grown on a hyaluronic acid-based scaffold and transplanted into a wounded mouse parotid gland led to improved secretory function (Pradhan-Bhatt et al., 2014); nevertheless, these results included neither monitoring for degradation of the scaffold nor evidence of new tissue formation, thus raising concerns about the stability of the biomaterial and its capacity for regeneration, respectively. Together, these technologies offer the potential for more advanced solutions to hyposalivation due to head and neck radiation therapy but have yet to truly deliver. In response to these needs and challenges, we developed FH with conjugated Laminin-1 peptides (L1p) A99 and YIGSR that were used successfully to repair salivary gland tissue in a wounded SMG mouse model (Nam et al., 2017a; Nam et al., 2017b; Nam et al., 2019b). To apply these results to a more translational setting, the goal of the current study is to determine whether transdermal injection of the L1p A99 and YIGSR chemically conjugated to FH can promote secretory function in irradiated salivary glands. Animals Female 6-week-old C57BL/6J mice weighing ∼17-20 g were purchased from Jackson Laboratory (Bar Harbor, ME). Power analysis was performed to determine mouse numbers using G*Power 3.1.9.7 software (Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany; http://www.gpower.hhu.de/). All calculations were conducted using a significance level of 0.05 with 95% power (a sketch of such a sample-size calculation is given after this section). Then, 105 mice were randomly distributed into three groups to receive the following treatments: non-irradiated (40 mice), irradiated without L1p-FH injection (40 mice), and irradiated while also receiving the L1p-FH injection (25 mice), comprising treatment groups 1-3, respectively. All animal usage, anesthesia and surgeries were conducted with the approval of the University of Utah Institutional Animal Care and Use Committee (IACUC) in compliance with the ARRIVE guidelines. Radiation Treatment Salivary gland tissue damage is a late degenerative response observed after radiation therapy (Wu and Leung, 2019; Jasmer et al., 2020). To confirm L1p-FH regenerative effects in a more clinically relevant animal model, a widely accepted head and neck irradiated mouse model was used for this study (Deasy et al., 2010; Varghese et al., 2018). Briefly, mice were anesthetized with a ketamine (100 mg/kg) and xylazine (5 mg/kg) solution administered intraperitoneally, with the head and neck area positioned over the 1 cm slit of a customized lead shield, thereby protecting other areas of the body from radiation. SMGs then received a single 15 Gy radiation dose using a JL Shepherd 137Cs irradiator (Figure 1A). Animals were allowed to recover for 3 days and received hydrogel treatment soon after, as detailed below. Transdermal Injection C57BL/6J mice were anesthetized with 3% isoflurane using an oxygen flow rate set at 2.0 L/min, and 10 μL of freshly mixed L1p-FH solution was transdermally injected with a 28-gauge insulin syringe into irradiated mouse SMGs at post-radiation day 3. L1p-FH effects were studied at days 8 and 30. Adding thrombin prior to transdermal injection causes rapid polymerization of L1p-FH, which clogs the needle. To overcome this issue, the mixture was applied in liquid form, relying on endogenous thrombin for internal polymerization.
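To reproduce the flavor of the power analysis referenced above without G*Power, a one-way ANOVA sample-size computation can be run in Python with statsmodels. This is a minimal sketch; the effect size (Cohen's f) is an assumption chosen for illustration, since the paper does not report the value entered into G*Power.

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# One-way ANOVA with three treatment groups, alpha = 0.05, power = 0.95.
# effect_size is Cohen's f; 0.40 is an assumed "large" effect for illustration.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.40, k_groups=3, alpha=0.05, power=0.95)
print(f"Total animals required: {ceil(n_total)}")
```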
To confirm scaffold implantation in vivo, FH was labeled with DyLight 680 and quantified within dissected glands using a Bio-Rad Chemi-Doc™ MP imaging system (Figure 1C). Hematoxylin and Eosin and Masson's Trichrome Stain SMGs were fixed in 10% formalin at room temperature overnight, dehydrated in 70% ethanol solution, embedded in paraffin wax and cut into 3 μm sections. Sections were then deparaffinized with xylene and rehydrated with serial ethanol solutions (100%, 95%, 80%, 70% and 50%, v/v) and distilled water. For hematoxylin and eosin staining, the rehydrated sections were stained with hematoxylin for 5 min, washed with distilled water for 5 min, tap water for 5 min and distilled water for 2 min. Next, slides were stained with eosin for 30 s, washed with tap water for 5 min and distilled water for 2 min. Finally, hematoxylin and eosin stained gland sections were dehydrated with 95% and 100% ethanol (v/v), cleared in xylene and mounted with a xylene-based mounting medium. As for Masson's trichrome staining, the rehydrated sections were re-fixed in Bouin's solution at 60°C for 1 h, then washed with running tap water for 10 min and distilled water for 5 min. Next, sections were stained with Weigert's iron hematoxylin solution for 10 min, then washed with running warm tap water for 10 min and distilled water for 5 min. For cytoplasm staining, sections were incubated with Biebrich scarlet-acid fuchsin solution for 5 min and washed three times with distilled water for 2 min. For collagen staining, sections were incubated in phosphotungstic/phosphomolybdic acid for 15 min, stained with aniline blue solution for 5 min and washed three times with distilled water for 2 min. Stained sections were then differentiated in 1% acetic acid solution for 1 min and washed two times with distilled water for 2 min. Finally, Masson's trichrome stained sections were dehydrated with serial ethanol solutions (95% and 100%), cleared in xylene and mounted with a xylene-based mounting medium. The samples were then analyzed using a Leica DMI6000B microscope (Leica Microsystems, Wetzlar, Germany) to determine tissue morphology. Confocal Analysis For antigen retrieval, the rehydrated and fixed tissue sections were incubated in Tris-EDTA buffer [10 mM Tris, 1 mM EDTA, 0.05% (v/v) Tween® 20, pH 9.0] for ZO-1 and E-cadherin or in sodium citrate buffer [10 mM sodium citrate, 0.05% (v/v) Tween® 20, pH 6.0] for TMEM16A, Na+/K+-ATPase, iNOS, Arg-1, VCAM-1 and ICAM-1 at 95°C for 30 min. Next, samples were permeabilized with 0.1% (v/v) Triton X-100 in PBS at room temperature for 45 min. Specimens were then blocked in 5% (v/v) goat serum in PBS for 1 h at room temperature and incubated at 4°C overnight with the following primary antibodies: rabbit anti-ZO-1, mouse anti-E-cadherin, rabbit anti-TMEM16A, mouse anti-Na+/K+-ATPase, rabbit anti-VCAM-1 or mouse anti-ICAM-1. Sections were then incubated with anti-rabbit Alexa Fluor 488 and anti-mouse Alexa Fluor 568 secondary antibodies in 5% goat serum at room temperature for 1 h, followed by 300 nM DAPI staining at room temperature for 5 min. For M1 and M2 marker staining, specimens were blocked in 3% (w/v) bovine serum albumin (BSA) in PBS for 1 h at room temperature and incubated with primary antibodies (rabbit anti-iNOS or rabbit anti-Arg-1) at 37°C for 1 h.
Then, sections were incubated with anti-rabbit Alexa Fluor 568 in 3% BSA at room temperature for 1 h, followed by 300 nM DAPI staining at room temperature for 5 min. Finally, specimens were analyzed using a STELLARIS confocal microscope (Leica Microsystems, Wetzlar, Germany). Macrophage Ratio M1 and M2 macrophage cells were determined using ImageJ. Specifically, the color threshold was set to isolate the colocalized signal of nuclei and M1 (Figure 4, white arrows)/M2 (Figure 4, red arrows) positive cells, which were counted and normalized by area. Statistical significance was assessed using one-way ANOVA (*p < 0.01) and Dunnett's post-hoc test for multiple comparisons to group 2 (irradiated with no L1p-FH injection at day 30). Saliva Flow Rate Measurements Mice were anesthetized with ketamine (100 mg/kg) and xylazine (5 mg/kg) followed by intraperitoneal injection of pilocarpine (25 mg/kg) and isoproterenol (0.5 mg/kg). Whole saliva was then collected using a micropipette for 5 min, and the flow rate was calculated using the following formula: saliva flow rate (μL/g/min) = stimulated saliva (μL) / [body weight of mouse (g) × collection time (5 min)]. A worked sketch of this calculation and the accompanying statistics is given at the end of this section. Statistical Analysis Experimental data were analyzed using one-way ANOVA and Dunnett's post-hoc test for multiple comparisons to the non-irradiated group 1 at day 30. All values represent means ± SD (n = 5), and p values <0.01 were considered statistically significant. These calculations were performed using GraphPad Prism 6. A Head and Neck Irradiated Mouse Model was Achieved To investigate whether L1p-FH could restore irradiated SMG structure and function, C57BL/6J mice were subjected to a single radiation treatment as described in Materials and Methods (Figure 1A). Mice treated with a single 15 Gy radiation dose displayed a significant reduction in saliva flow rates compared to non-irradiated controls (i.e., from 1.43 to 0.80 μL/g/min, n = 5, p < 0.01) in the first 8 days, and flow rates remained steady thereafter until day 30 (Figure 1B). These results demonstrated that the radiation dose utilized here caused significant loss of salivary secretory function and can thus be used as a head and neck irradiated preclinical model, consistent with previous studies (Lombaert et al., 2008; Varghese et al., 2018; Weng et al., 2018). L1p-FH was Successfully Implanted in Irradiated Mouse Submandibular Glands Our previous studies showed the biocompatibility of L1p-FH with host tissue when surgically implanted in a wounded mouse model (Nam et al., 2017a; Nam et al., 2017b). To avoid open-wound surgery, we delivered the L1p-FH to irradiated mouse SMGs via transdermal injection as described in Materials and Methods. For these experiments, we used a hydrogel fluorescently labeled with DyLight 680 and successfully implanted L1p-FH in irradiated mouse SMGs via transdermal injection (Figure 1C, white arrows). L1p-FH Preserved Epithelial Integrity After Radiation Treatment Our previous studies showed that L1p-FH promoted tissue repair in a wounded SMG mouse model (Nam et al., 2017a; Nam et al., 2017b; Nam et al., 2019b). To determine whether these effects occur in the head and neck irradiated mouse model, we randomly distributed mice into three groups and applied the scaffold as follows: non-irradiated, irradiated without L1p-FH injection, and irradiated receiving the L1p-FH injection, comprising treatment groups 1-3, respectively (see Materials and Methods).
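As referenced above, the flow-rate formula and the ANOVA/Dunnett comparison can be combined into a short script. The sketch below uses hypothetical per-mouse values, not the study's raw data; note that scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats  # stats.dunnett is available in SciPy >= 1.11

def saliva_flow_rate(saliva_ul: float, body_weight_g: float,
                     collection_min: float = 5.0) -> float:
    """Stimulated saliva (uL) / (body weight (g) x collection time (min))."""
    return saliva_ul / (body_weight_g * collection_min)

# Hypothetical per-mouse flow rates (uL/g/min), five mice per group.
non_irr = np.array([1.41, 1.45, 1.40, 1.46, 1.43])   # group 1
irr     = np.array([0.78, 0.83, 0.79, 0.81, 0.80])   # group 2
irr_fh  = np.array([1.30, 1.35, 1.28, 1.33, 1.34])   # group 3

f_stat, p_val = stats.f_oneway(non_irr, irr, irr_fh)
dunnett = stats.dunnett(irr, irr_fh, control=non_irr)  # vs. group 1
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.2e}")
print("Dunnett p-values vs. non-irradiated:", dunnett.pvalue)
```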
As shown in Figure 2, group 1 (non-irradiated glands) displayed intact lobules in which the parenchyma was separated by areas of thin connective tissue at days 8 (Figures 2A,B) and 30 (Figures 2C,D). As for cytologic features, serous acinar cells showed a typical pyramidal shape with basophilic cytoplasm and basal nuclei. In contrast, mucous cells showed a pale cytoplasm with flat basilar nuclei; intercalated ducts were lined by cuboidal and/or flat cells; striated ducts showed cuboidal to low columnar cells; and granular convoluted ducts were lined by tall columnar cells containing intracytoplasmic eosinophilic granules. Together, these features indicate that the non-irradiated glands in group 1 showed the morphology of a healthy epithelium. In contrast, group 2 (irradiated with no L1p-FH injection) demonstrated glandular parenchyma separated by thicker connective tissue strands, ductal areas with ectasia, intraluminal depositions and an increased presence of fibrosis when compared to controls (Figures 2E,F). Furthermore, tissue damage was even more severe at day 30 (Figures 2G,H), where SMGs showed an extensive disruption of the lobular architecture, as indicated by the replacement of acini and ducts with sheets of vacuolated cells, adipocytes and fibrosis. Together, these results indicated that irradiated glands with no L1p-FH injection (group 2) dramatically lost epithelial integrity. Remarkably, mice in group 3 (irradiated with L1p-FH injection) recovered many of the features of healthy glands. For instance, we observed the presence of serous acinar units with organized ductal structures surrounded by thin connective tissue strands (Figures 2K,L). These changes indicate that group 3 (irradiated glands treated with L1p-FH) had a morphology consistent with a healthy salivary gland epithelium, and the results in this section indicate that L1p-FH is a suitable scaffold for promoting epithelial integrity in irradiated SMGs. L1p-FH Maintained Epithelial Polarity and Preserved Ion Transporter Expression To determine whether L1p-FH maintained epithelial polarity in an irradiated mouse model, we stained the SMG sections with the apical tight junction marker ZO-1 and the basolateral marker E-cadherin. As shown in Figure 3A, group 1 (non-irradiated glands) displayed apical ZO-1 (green) and basolateral E-cadherin (red) after 30 days. However, in group 2 (irradiated glands with no L1p-FH injection), a mild residual ZO-1 signal was detected at day 8 (Figures 3B,F, blue solid line), and a weaker ZO-1 signal was expressed at day 30 (Figure 3F, blue dotted line), together with ZO-1 disorganization (Figure 3C), thereby indicating loss of epithelial polarity. In contrast, group 3 (irradiated glands treated with L1p-FH) showed apical ZO-1 and basolateral E-cadherin signals both at day 8 (Figure 3D) and at day 30 (Figure 3E), indicating that the scaffold treatment helps to maintain epithelial polarity (Figure 3F, red line and red dotted line). Regarding the presence of functional markers, group 1 (non-irradiated SMGs) showed apical TMEM16A (Figure 3G, green) and basolateral Na+/K+-ATPase localization (Figure 3G, red) at day 30, consistent with a healthy salivary epithelium. In contrast, group 2 (irradiated glands with no L1p-FH injection) showed a moderate TMEM16A signal (Figure 3L, blue solid line) at day 8 (Figure 3H, green) and a weaker TMEM16A signal (Figure 3L, blue dotted line) at day 30 (Figure 3I, green).
Interestingly, group 3 (irradiated glands treated with L1p-FH) expressed strong apical TMEM16A (Figures 3J,K, green; Figure 3L, red line and red dotted line) and basolateral Na+/K+-ATPase similar to non-irradiated glands, thus suggesting that L1p-FH treatment helps to maintain epithelial polarity and preserve ion transporter expression, both of which are critical for saliva secretion. L1p-FH Promoted Macrophage Polarization Our previous studies indicated that treatment with L1p-FH promoted macrophage polarization in a wounded SMG female mouse model (Brown et al., 2020). To determine whether similar effects occur in an irradiated mouse model, we identified the presence of M1 and M2 subtypes within the SMG using macrophage-specific antibodies (i.e., iNOS and Arg-1, corresponding to M1 and M2, respectively). As shown in Figures 4A,F, group 1 (non-irradiated glands) expressed iNOS-positive cells at approximately 0.94 macrophages per 100,000 µm² (Sroussi et al., 2017). In contrast, group 2 (irradiated glands with no L1p-FH injection) showed a significant increase in M1 macrophages (approximately 28.65 iNOS-positive cells) at day 30 (Figures 4C,F). Notably, group 3 (irradiated glands treated with L1p-FH) showed a significant decrease in M1 macrophages (approximately 5.92 iNOS-positive cells) at day 30 (Figures 4E,F) compared to group 2. Regarding the presence of M2 markers, group 2 (irradiated glands with no L1p-FH injection) expressed Arg-1-positive cells at approximately 5.92 macrophages at day 30 (Figures 4I,L), which is not a significant difference from group 1 (Figures 4G,L; 2.60 macrophages). Interestingly, group 3 (irradiated glands treated with L1p-FH) expressed a significant increase in Arg-1-positive cells at day 30 (approximately 11.37 macrophages; Figures 4K,L). Together, these results indicate that L1p-FH causes a decrease in M1 macrophages together with an increase in M2 macrophages in SMGs following radiation treatment. L1p-FH Increased Saliva Secretion After Radiation Treatment Our previous studies indicate that treatment with L1p-FH enhances saliva secretion in a wounded SMG mouse model (Nam et al., 2017a; Nam et al., 2017b; Nam et al., 2019b). To determine whether similar effects occur in an irradiated mouse model, we treated irradiated SMGs with a transdermal injection of L1p-FH as described in Materials and Methods. As shown in Figure 5, group 1 (non-irradiated glands) showed intact saliva flow rates (i.e., 1.43 μL/g/min), as expected. In contrast, group 2 (irradiated untreated glands) exhibited a significant reduction in saliva flow rates (i.e., 0.80 μL/g/min, n = 5, p < 0.01). Notably, group 3 (irradiated glands treated with L1p-FH) showed a significant increase in saliva flow rates (1.32 μL/g/min, n = 5, p < 0.01) at day 30, thereby demonstrating that L1p-FH restores saliva secretion after radiation treatment. DISCUSSION Our previous studies indicated that treatment with FH alone promotes neither cell polarity nor differentiation in salivary gland epithelium, either in vitro or in vivo (Nam et al., 2016; Nam et al., 2017a; Nam et al., 2017b; Nam et al., 2019b; Dos Santos et al., 2021). However, specific L1p sequences (A99: CGGALRGDN-amide, YIGSR: CGGADPGYIGSRGAA-amide) proved to be useful for improving salivary gland regeneration (Hoffman et al., 1998).
Specifically, freshly isolated SMG cells grown on L1p chemically attached to FH induced lumen formation and secretory function (Nam et al., 2016). Moreover, L1p-FH promoted salivary gland regeneration in an in vivo wound-healing mouse model (Nam et al., 2017a; Nam et al., 2017b), thus leading to increased saliva secretion. Such functional recovery indicates that FH-based scaffolds can be used to promote salivary gland function in radiation-induced hyposalivation. Additionally, we developed a transdermal delivery system specifically for this study with the aim of using the patient's own blood for polymerization to increase biocompatibility (Froelich et al., 2010; Dietrich et al., 2013), with the ancillary benefits of optimal rheological properties (i.e., softness) and of being less invasive than other delivery methods (i.e., retro-ductal delivery (Nair et al., 2016) and surgical application (Ogawa et al., 2013)), all of which indicates a greater degree of clinical applicability for our newly designed mouse model. Regarding the results of the current study, salivary gland morphology was significantly improved by L1p-FH (Figures 2I-L and Figures 3D,E) and saliva secretion (Figure 5) was likewise restored by day 30 post-radiation; however, such treatment gains cannot be counted on to persist, given the residual fibrosis noted (Figure 2L). Additionally, future studies will use growth factors specifically targeted at angiogenesis (i.e., VEGF and FGF9) (Nam et al., 2019b), in response to current results demonstrating that L1p-FH promoted macrophage polarization (Figure 4) but gave rise to no blood vessel formation (Supplementary Figure S1). Moreover, even should such gains prove persistent (e.g., maintained over long periods of time), we as yet have limited knowledge of the mechanisms responsible for this recovery. These issues notwithstanding, the results to date are important because they represent the first time that L1p-FH has been used in irradiated glands to restore their form and function.

FIGURE 4 | L1p-FH promotes macrophage polarization. Macrophage marker expression was analyzed using confocal microscopy with specific antibodies against iNOS (A-F), Arg-1 (G-L), and DAPI (blue; everywhere). Scale bars represent 100 µm. White and red arrows indicate iNOS- and Arg-1-positive cells, respectively. Representative images from a total of five mice per group. iNOS (F) and Arg-1 (L) positive cells were analyzed using ImageJ and GraphPad Prism 6. Data represent the means ± SD of n = 5 mice per condition, with statistical significance assessed using one-way ANOVA (*p < 0.01) and Dunnett's post-hoc test for multiple comparisons to group 2 (irradiated with no L1p-FH injection at day 30).

FIGURE 5 | L1p-FH increases saliva secretion after radiation treatment. Mice were anesthetized and stimulated with pilocarpine and isoproterenol at days 8 and 30, with saliva collected for 5 min. Data represent the means ± SD of n = 5 mice per condition, with statistical significance assessed using one-way ANOVA (*p < 0.01) and Dunnett's post-hoc test for multiple comparisons to group 1 (non-irradiated mice at day 30). The symbol (+) indicates L1p-FH injection, the symbol (−) indicates no L1p-FH injection, and n.s. indicates no significant difference from group 1 (non-irradiated mice at day 30).
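The ImageJ color-threshold counting described in the Figure 4 legend above and in Materials and Methods can be approximated in a few lines of Python. This is a minimal sketch only: it assumes two registered single-channel images (nuclei and iNOS or Arg-1), and the image, thresholds, and pixel size are made up rather than taken from the authors' macro.

```python
import numpy as np
from scipy import ndimage

def positive_cells_per_area(nuclei: np.ndarray, marker: np.ndarray,
                            nuc_thresh: float, marker_thresh: float,
                            um2_per_pixel: float) -> float:
    """Count objects where nuclear and marker signals colocalize,
    normalized per 100,000 um^2 (mimics the ImageJ color-threshold step)."""
    coloc = (nuclei > nuc_thresh) & (marker > marker_thresh)
    _, n_cells = ndimage.label(coloc)          # connected components
    area_um2 = coloc.size * um2_per_pixel      # total imaged area
    return n_cells / area_um2 * 1e5

# Synthetic two-channel image for illustration only.
rng = np.random.default_rng(0)
nuclei = rng.random((512, 512))
marker = rng.random((512, 512))
density = positive_cells_per_area(nuclei, marker, 0.99, 0.99, um2_per_pixel=0.4)
print(f"{density:.2f} positive cells per 100,000 um^2")
```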
Three major differences between our previous studies and the current work are worth noting. First, our previous studies used L1p in trimeric form (Dos Santos et al., 2021) and in combination with growth factors (Nam et al., 2019b), while the current work employs only monomeric forms and no growth factors. Second, our previous studies used a more invasive SMG surgical punch model (Nam et al., 2017a; Nam et al., 2017b; Nam et al., 2019b), compared to the transdermal injection implantation method used here. Finally, we replaced the SMG wounded mouse model of our prior studies with a radiation model for greater specificity in terms of clinical features and increased translational application. To expand on this work, future studies will perform extended saliva secretion measurements, track the appearance of fibrosis at multiple time points via histological studies, and investigate how the L1p used here (i.e., A99 (Mochizuki, 2003; Rebustini et al., 2007; David et al., 2008) and YIGSR (Caiado and Dias, 2012; Frith et al., 2012; Huettner et al., 2018; Motta et al., 2019)) bind to specific integrins, thus addressing the questions noted above in relation to treatment duration and mechanisms. Finally, should this treatment near the stage of clinical trials, it would be important to replace the current single dose of radiation used for proof of concept and early exploration with more clinically appropriate fractionated doses. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by the University of Utah IACUC.
Neuronal Lhx1 expression is regulated by DNMT1-dependent modulation of histone marks ABSTRACT Apart from the conventional view of repressive promoter methylation, the DNA methyltransferase 1 (DNMT1) was recently described to modulate gene expression through a variety of interactions with diverse epigenetic key players. We here investigated the DNMT1-dependent transcriptional control of the homeobox transcription factor LHX1, which we previously identified as an important regulator in cortical interneuron development. We found that LHX1 expression in embryonic interneurons originating in the embryonic pre-optic area (POA) is regulated by non-canonical DNMT1 function. Analysis of histone methylation and acetylation revealed that both epigenetic modifications seem to be implicated in the control of Lhx1 gene activity and that DNMT1 contributes to their proper establishment. This study sheds further light on the regulatory network of cortical interneuron development, including the complex interplay of epigenetic mechanisms. Introduction An increasing number of studies have challenged the textbook model of repressive DNA methylation catalysed by DNA methyltransferases (DNMTs). The identification of the diverse genomic locations that can be methylated, such as enhancer and intragenic loci in addition to promoter regions, has led to new findings on the functional implications of DNA methylation, such as alternative splicing and promoter choice [1][2][3]. Moreover, in contrast to the traditional view of DNA methylation preventing the binding of transcription factors, numerous reports indicated that DNA methylation signatures can even serve as binding motifs for particular factors, thereby mediating methylation-dependent biological processes [4]. Apart from this, DNA methylation can instruct histone-modifying complexes (HMCs) and vice versa [5][6][7][8]. In addition, DNMTs act on histone modifications through transcriptional control over genes encoding proteins implicated in HMCs or by interacting with protein complexes independently of their DNA methylating activity [9][10][11][12]. This diversity of actions requires detailed investigation to decipher the functional implications of distinct epigenetic mechanisms in directing cell- and stage-specific differentiation and maturation programmes, and to reveal causes of dysfunction in related diseases. Alterations in epigenetic signatures or in the function of epigenetic key players in neurons were reported to contribute to the pathophysiology of diverse neurological diseases and psychiatric disorders [13,14]. Altered Dnmt1 expression was observed post-mortem in inhibitory cortical interneurons of schizophrenia patients, which is suggested to be associated with altered expression of GABA-related transcripts [15,16]. By shaping the response of excitatory neurons through inhibitory actions, GABAergic interneurons are essential key players in cortical information processing. It is therefore not surprising that numerous neuropsychiatric diseases like schizophrenia, epilepsy, and autism involve defects in GABAergic interneuron function [17][18][19][20], which are suggested to be in part developmental in origin [18][19][20]. In support of this, prenatal stress alters DNA methylation networks in inhibitory cortical interneurons during development, which elicits a schizophrenia-like phenotype in offspring [21][22][23].
However, little is known so far about the stage- and context-specific effects of epigenetic transcriptional regulation during cortical interneuron development, which is a highly complex process [24]. A major step involves the long-range migration of post-mitotic interneurons from their sites of origin in the basal telencephalon towards cortical target areas [24][25][26][27]. This requires comprehensive control over cytoskeletal remodelling to achieve successful migration, a prerequisite for the correct number of cortical interneurons in the diverse cortical regions [26][27][28]. The strict regulation of cell survival during the different developmental steps is likewise critical for proper interneuron numbers in adults [26,27,29]. We have recently reported that the DNA methyltransferase 1 (DNMT1) orchestrates the post-mitotic maturation of POA-derived cortical interneurons by promoting their migratory morphology and survival, in part through the modulation of Pak6 expression [27]. Of note, we found that Pak6 transcription is not regulated by DNMT1-dependent DNA methylation [12,27], but non-canonically through interactions of DNMT1 with histone-methylating enzymes [12]. Apart from DNMT1, we identified the LIM-homeobox transcription factor LHX1 as a crucial transcriptional regulator of POA-derived inhibitory interneuron development [26]. LHX1 modulates the expression of major guidance receptors in migrating interneurons, facilitating their tangential and radial migration through the basal telencephalon and the developing cortex, respectively [26]. Like DNMT1, LHX1 acts on interneuron survival by controlling the expression of genes such as Bcl2 or Bcl6 [26]. Notably, the expression of Lhx1 is restricted to early post-mitotic stages, and timed Lhx1 silencing is critical for the proper regulation of interneuron survival and migration during development [26]. To this end, we here investigated whether and how Lhx1 expression is controlled by DNMT1 as a potential upstream regulator. We identified Lhx1 expression to be controlled by non-canonical DNMT1 activity. Besides evidence for a DNMT1-dependent bivalent regulation through H3K4 and H3K27 trimethylation, our data propose a contribution of DNMT1-mediated histone acetylation and deacetylation to the regulation of Lhx1 expression. This study emphasizes the complexity of epigenetic networks in the transcriptional control of key players relevant for cortical interneuron development. DNMT1 regulates the expression of Lhx1 non-canonically In the embryonic mouse brain, Lhx1 is very restrictively expressed in the mantle zone of the embryonic POA, partially overlapping with the local and post-mitotic expression of the transcription factor HMX3 (Figure 1(a) [26]). We have previously shown that LHX1-dependent transcriptional control is of great relevance for the regulation of survival and migration in post-mitotic HMX3-positive cortical interneurons originating in the POA [26]. For this immature interneuron subset, we further identified DNMT1 to be essential for orchestrating stage-specific gene expression [27]. Hence, in this study, we first aimed to investigate whether DNMT1 controls the expression of Lhx1. To this end, we checked for changes in the expression levels of Lhx1 in FACS-enriched Hmx3-Cre/tdTomato/Dnmt1 WT control and Hmx3-Cre/tdTomato/Dnmt1 KO interneurons isolated from the POA of mouse embryos at embryonic day (E) 16, at the peak of POA interneuron migration (Figure 1(a)).
Indeed, we detected a highly elevated Lhx1 expression in Dnmt1 knockout (KO) cells, which points to a DNMT1-mediated repression of Lhx1 in wild-type interneurons (Figure 1(b)). To check whether these transcriptional changes correlate with alterations in DNA methylation levels, we screened the MeDIP-sequencing data obtained from equally aged embryonic FACS-enriched control and Dnmt1 KO cells (E16), published previously in Pensold et al. [27]. However, the Lhx1 gene locus did not show any significant alterations in DNA methylation levels between the two genotypes (Figure 1(c)). Apart from this, the RNA-sequencing and MeDIP-sequencing datasets of FACS-enriched Dnmt1 KO and control cells (E16) [27] did not reveal altered methylation or expression levels of potential regulators of Lhx1 in Dnmt1-deficient interneurons. In agreement with the expression and DNA methylation analysis of POA-derived embryonic interneurons, elevated levels of Lhx1 expression were also detected upon Dnmt1 siRNA application in Neuro2a (N2a) cells, a cell culture model already applied in a previous study [12]. Of note, this increase in Lhx1 expression was not observed upon treatment with RG108, an inhibitor of DNA methylation (Figure 1(d)), which even led to a significant decrease, putatively by eliciting secondary effects. Together, our data suggest a DNA methylation-independent repression of Lhx1 by DNMT1. In general, direct effects of repressive DNMT1-dependent DNA methylation appear to play a rather subordinate role during embryonic development of POA-derived interneurons. First, by MeDIP and RNA sequencing we identified a non-significant overlap of genes displaying changes in both the methylation and expression profiles in Dnmt1-deficient Hmx3-Cre/tdTomato cells compared to control cells [12]. Second, among the overlapping genes, we identified only very few genes displaying reduced DNA methylation and increased expression in the Dnmt1-deficient cells (Figure 1(e), lower right quadrant), which would be consistent with the canonical function of DNMT1 performing repressive DNA methylation in controls. In turn, most genes were elevated in expression and displayed at the same time increased levels of DNA methylation (Figure 1(e), upper right quadrant), pointing to secondary or indirect effects caused by Dnmt1 deletion. This is consistent with the emerging new functional implications of DNA methylation being far more complex than just leading to gene repression. DNA methylation is described to mediate alternative splicing and promoter choice [1][2][3] and can even lead to the formation of binding motifs for particular factors that upon binding drive the transcription of particular genes [4]. In sum, our data so far suggest that DNMT1 exerts transcriptional control over Lhx1 in embryonic POA-derived interneurons, but rather independently of direct DNA methylation of the Lhx1 gene locus or of gene loci encoding known Lhx1 regulators.
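The quadrant reading of Figure 1(e) amounts to classifying each gene by the signs of its expression and methylation changes in KO versus WT. The sketch below illustrates this with pandas; the gene names, values, and column names are hypothetical, not the published pipeline output.

```python
import pandas as pd

# Hypothetical per-gene log2 fold-changes (KO vs. WT); illustration only.
df = pd.DataFrame({
    "gene":        ["GeneA", "GeneB", "GeneC", "GeneD"],
    "log2fc_expr": [ 2.1,     1.4,    -0.8,     0.9],
    "log2fc_meth": [ 0.6,     0.3,    -0.5,    -1.2],
})

def quadrant(row) -> str:
    expr = "up" if row.log2fc_expr > 0 else "down"
    meth = "hyper" if row.log2fc_meth > 0 else "hypo"
    return f"{expr}-regulated / {meth}-methylated"

df["quadrant"] = df.apply(quadrant, axis=1)

# Genes matching canonical repressive methylation (lower right quadrant):
# expression up while methylation is reduced in the KO.
canonical = df[(df.log2fc_expr > 0) & (df.log2fc_meth < 0)]
print(df[["gene", "quadrant"]].to_string(index=False))
print("canonical candidates:", canonical.gene.tolist())
```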
To evaluate a potential implication of DNMT1-dependent modulation of repressive histone methylation in the transcriptional control of Lhx1, we analysed Lhx1 expression in N2a cells upon treatment with 3-deazaneplanocin A (DZNep, Figure 2(a)), a potent inhibitor of the histone methyltransferase EZH2 [32,33]. We detected significantly elevated Lhx1 expression levels in DZNep-treated N2a cells (Figure 2(a)), suggesting a role of histone methylation in repressing Lhx1 transcription. As LHX1 was shown to influence POA cell migration [26], we next analysed whether DZNep treatment affects the migratory potential of N2a cells. To this end, we monitored the migratory speed of N2a cells on matrigel, which was significantly decreased upon DZNep treatment compared to control treatment with DMSO (Figure 2(b-d)). In line with this, we found DZNep-induced morphological alterations that could account for the reduced motility. By comparing the morphology of DZNep- and control-treated POA and N2a cells, we detected increased numbers of processes as well as higher numbers of branch points of the longest process of each cell (Figure 2(e-g)), which is indicative of the loss of the polarized migratory morphology. We previously showed that DNMT1 negatively acts on permissive H3K4me3 levels and promotes the establishment of repressive trimethylation of H3K27 at the global level [12]. Here we investigated whether DNMT1 is required to prevent or promote the setup of permissive H3K4- or repressive H3K27-trimethylation marks at regulatory sites of the Lhx1 gene locus, respectively. For this, we performed targeted chromatin immunoprecipitation (ChIP) with H3K4me3- and H3K27me3-specific antibodies, followed by qPCR with primers directed against regulatory and non-regulatory regions of the Lhx1 locus in control and Dnmt1 siRNA-treated N2a cells (Figure 2(h-l)). We identified an enhanced association of H3K4me3 with the promoter region of the Lhx1 gene locus in Dnmt1-depleted compared to control cells, whereas promoter-flanking and non-regulatory regions did not reveal detectable changes (Figure 2(h-j)). As Gapdh is often associated with H3K4me3 in neurons [34,35], it was used as a positive control. This suggests that DNMT1 negatively influences the establishment of permissive H3K4me3 marks at the Lhx1 promoter region. Many promoters are bivalently regulated by H3K4me3 and H3K27me3 [36][37][38][39][40][41], and we already showed a DNMT1-dependent establishment of H3K27me3 signatures in immature POA-derived interneurons and neuron-like N2a cells [12]. Consistent with the global reduction of H3K27me3 found upon Dnmt1 depletion [12], H3K27me3 association was significantly diminished site-specifically at the promoter region of the Lhx1 gene locus, which also displayed increased H3K4-trimethylation marks (Figure 2(h, k, l)). As a positive control, we included MyoD as a muscle-specific gene, which is usually marked by repressive H3K27 trimethylation in neurons [42]. Together, the targeted ChIP experiments presented here suggest that DNMT1 regulates Lhx1 expression by promoting the establishment of repressive H3K27me3 and the removal of permissive H3K4me3 signatures at regulatory Lhx1 gene regions. This is in part reminiscent of what we found for the DNMT1-dependent regulation of Pak6 expression, which is mediated by the concerted action of DNMT1 and EZH2, the core enzyme of the PRC2, promoting the setup of repressive H3K27me3 marks in regulatory regions of the Pak6 locus [12].
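The text does not state which quantification scheme was used for the targeted ChIP-qPCR, but a common readout is percent of input: the input Ct is first adjusted for the fraction of chromatin saved, then compared with the IP Ct. A minimal sketch with hypothetical Ct values, under that assumption:

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent-input for ChIP-qPCR, assuming e.g. 1% of chromatin was kept as input."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)  # dilution correction
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values at the Lhx1 promoter (illustration only).
h3k4me3_ctrl = percent_input(ct_ip=26.5, ct_input=22.0)
h3k4me3_kd = percent_input(ct_ip=24.8, ct_input=22.0)
print(f"H3K4me3 at the Lhx1 promoter: control {h3k4me3_ctrl:.3f}% vs. "
      f"Dnmt1 siRNA {h3k4me3_kd:.3f}% of input")
```

With these made-up values, enrichment is higher after Dnmt1 knockdown, in the direction of the enhanced H3K4me3 association reported above.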
DNMT1-mediated alterations in histone acetylation contribute to the modulation of Lhx1 expression Besides affecting H3K27 and H3K4 trimethylation [5][6][7][8][12], DNMT1 was further reported to act on histone acetylation, which leads to open chromatin [43,44]. In line with this, we detected significantly increased H3K9/K14/K18/K23/K27 acetylation levels in Dnmt1 siRNA-treated N2a cells compared to controls by performing immunocytochemistry with a pan-specific antibody (Figure 3(a-c)). This indicates that DNMT1 can also modulate transcription by negatively acting on permissive histone acetylation marks. The following set of experiments was designed to address whether the DNMT1-mediated modulation of histone acetylation contributes to the transcriptional regulation of Lhx1 as well as to its cellular effects. First, we examined whether histone acetylation affects Lhx1 expression by treating N2a cells with the histone acetyltransferase inhibitor anacardic acid, which causes global histone deacetylation [45,46]. Inhibiting histone acetylation resulted in significantly decreased Lhx1 expression levels compared to DMSO control treatment (Figure 3(d), left side of the diagram). To investigate whether DNMT1 controls Lhx1 expression by modulating histone acetylation, we determined whether the elevated Lhx1 expression levels seen upon Dnmt1 siRNA treatment (Figures 1(d) and 3(d), right side of the diagram) can be reversed by concurrent application of the histone acetyltransferase inhibitor anacardic acid. Indeed, administration of anacardic acid together with Dnmt1 siRNA reduced the boosted Lhx1 transcription levels seen after Dnmt1 depletion to levels even below the Lhx1 expression levels of the control treatment (control siRNA and DMSO; Figure 3(d)). This suggests that (i) permissive histone acetylation promotes Lhx1 expression and that (ii) DNMT1 represses Lhx1 in part by impeding the establishment of such permissive histone acetylation marks. We have previously characterized the relevance of DNMT1 and LHX1 function for survival regulation in POA-derived cortical interneurons [26,27]. We identified several downstream targets of LHX1 through which cell survival in immature POA-derived cortical interneurons is controlled. We showed that LHX1 drives the expression of pro-apoptotic genes and negatively acts on the transcription of pro-survival genes [26]. Hence, tight orchestration of Lhx1 expression during interneuron development is required to maintain the delicate balance of interneuron cell death and survival, and hence control over correct interneuron numbers, for which we propose DNMT1 as an upstream repressor. As we here provided evidence that the DNMT1-mediated repression of Lhx1 is in part achieved by impeding the establishment of permissive histone acetylation marks (Figure 3(a-d)), we next investigated whether manipulating histone acetylation affects cell survival regulation. For this, we determined cell death rates of N2a cells treated with the histone acetylation inhibitor anacardic acid by applying a live/dead assay. We indeed detected elevated proportions of living cells in anacardic acid-treated samples (Figure 3(e-g)). This is in line with the diminished Lhx1 levels observed upon anacardic acid treatment (Figure 3(d)) and the role of LHX1 in promoting cell death, as we previously reported [26].
Our data so far indicate that DNMT1-mediated modulation of histone acetylation could contribute to Lhx1 transcriptional control, with potential implications for cortical interneuron migration and survival. In support of this, numerous studies propose a DNMT1-dependent transcriptional regulation of genes encoding histone-modifying complexes as a potential mechanism for a crosstalk of DNMT1 with histone modifications [9][10][11][12]. Such transcriptional control could also affect the concerted actions of histone acetylases (HATs) and histone deacetylases (HDACs), and hence the balance of histone acetylation and deacetylation, respectively [47]. Indeed, by screening the RNA sequencing dataset of FACS-enriched embryonic (E16) wild-type and Dnmt1-deficient POA cells [27], we found evidence that DNMT1 regulates the expression of genes associated with histone deacetylation. In Dnmt1-deficient POA cells we observed significant changes in the expression of the histone deacetylase genes Hdac2, Hdac4 and Hdac8 (Figure 3(h)). In addition, numerous other genes associated with histone deacetylation were changed in expression upon Dnmt1 deletion. While the expression of Hat1, encoding a histone acetylase, was not significantly altered, the expression levels of other histone acetylation-related genes like Arid5a [48] and Jade2 [49] were changed prominently in Dnmt1-deficient POA cells compared to controls (Figure 3(h)). Together, these transcriptional alterations induced by Dnmt1 deletion support a role of DNMT1 in the transcriptional regulation of genes related to histone acetylation/deacetylation complexes. HDAC8 was already described as a potent inhibitor of Lhx1 transcription in cranial neural crest cells [50] and is implicated in cell survival regulation [51]. However, Hdac8 expression was significantly increased in FACS-enriched Dnmt1-deficient POA cells (E16) (Figure 3(h)). While being consistent with a repressive function of DNMT1, this does not provide a logical explanation for the elevated pan-histone acetylation induced by Dnmt1 siRNA in the N2a cell culture model, as augmented HDAC8 expression would rather be consistent with diminished histone acetylation. Moreover, Lhx1 expression levels were not changed in Hdac8 siRNA-treated N2a cells (Figure 3(i)), indicating that HDAC8 does not affect Lhx1 transcription in our cell culture model.

Figure 3 legend (fragment): Student's t-test was applied for the comparison shown in the left part of the diagram. Two-way ANOVA and Tukey's test were performed for the analysis depicted in the right part of the diagram. The two-way ANOVA revealed that the siRNA conditions, the (inhibitor) treatment conditions, as well as the combination of both were highly significant (***P < 0.001). The significances resulting from the post-hoc Tukey's test are indicated in the diagram. (e-g) Representative microphotographs of N2a cells treated with DMSO or anacardic acid and stained for living (green) and dead cells (red), analysed as percentage of total cell number (g). (h) Heat-map of differential expression levels for genes associated with the GO terms histone deacetylation and acetylation in FAC-sorted E16 control and Dnmt1-deficient POA cells, revealed by RNA sequencing (*DEG with p < 0.05, Bonferroni-corrected). (i) Lhx1 expression in Hdac8 siRNA-treated N2a cells compared to control cells. Scale bars: 10 µm in (a) and (b), 40 µm in (e) and (f). ***P < 0.001; Student's t-test. AnaA, anacardic acid; Ctrl, control; RNE, relative normalized expression.
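The two-way ANOVA described in the Figure 3 legend above (siRNA condition × inhibitor treatment, plus their interaction) can be reproduced with statsmodels. The sketch below uses hypothetical relative expression values for the 2 × 2 design; it is not the authors' analysis script.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical relative Lhx1 expression per replicate (illustration only).
df = pd.DataFrame({
    "sirna":     ["ctrl"] * 6 + ["dnmt1"] * 6,
    "treatment": (["dmso"] * 3 + ["anaA"] * 3) * 2,
    "expr":      [1.00, 1.05, 0.96, 0.55, 0.60, 0.52,
                  1.90, 2.05, 1.95, 0.70, 0.66, 0.74],
})

# Two-way ANOVA with interaction: main effects of siRNA and treatment,
# plus the siRNA x treatment interaction term.
model = ols("expr ~ C(sirna) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```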
Hdac2 was the only histone deacetylase-encoding gene that we found diminished in Dnmt1-deficient POA cells, indicating that DNMT1 directly or indirectly promotes its expression (Figure 3(h)). HDAC2 was already reported to regulate the expression of Lhx1 [52]. Hence, a DNMT1-dependent promotion of histone deacetylation through enhanced Hdac2 expression represents a potential scenario for the DNMT1-mediated transcriptional repression of Lhx1. Whereas deciphering the underlying mechanism requires further investigation, our data so far indicate that DNMT1-mediated gene expression regulation in immature interneurons could be realized via the modulation of histone acetylation, in addition to the previously reported crosstalk with histone methylation. Discussion The development of interneurons, including their long-range migration and their subtype-specific differentiation as well as their survival, is strictly controlled by various regulatory mechanisms [24]. Here we provided evidence that DNMT1 regulates the expression of the LIM homeodomain transcription factor LHX1, a relevant regulator of cortical interneuron development. Lhx1 was shown to be expressed in a proportion of POA interneurons during brain development, promoting proper migration from their subpallial origin towards cortical targets as well as controlling their survival [26]. Our current data support a role of DNMT1 in executing transcriptional control over Lhx1 through DNA methylation-independent mechanisms. Apart from DNMT1-dependent modulation of H3K4 and H3K27 trimethylation, partially organized in bivalent regions, DNMT1-mediated interference with histone acetylation may contribute to the regulation of Lhx1 gene activity in interneurons and neuron-like cells. Epigenetic mechanisms are key for neuronal development and function and are implicated in diverse neurological diseases and psychiatric disorders [13,14]. DNA methylation exerted by DNA methyltransferases (DNMTs) was shown to play a major role in the regulation of gene transcription and the modulation of neuronal differentiation programs [53][54][55]. Thereby, DNA methylation was often reported to silence gene transcription by preventing the binding of transcription factors to the DNA [53][54][55]. We recently reported that the function of DNMT1 is fundamental for the maturation of POA-derived cortical interneurons [27]. Interestingly, we observed that the majority of genes in post-mitotic POA interneurons that revealed elevated expression upon Dnmt1 deletion also showed increased methylation states. This is not in line with the proposed model of repressive DNMT1-dependent DNA methylation. Recent studies have added new aspects on how DNA methylation or DNMTs may contribute to the control of gene activity. These include a crosstalk with histone tail modifications or RNA silencing that concertedly contribute to the complex network of gene regulation [53,56]. For this reason, we asked whether the regulation of relevant DNMT1 downstream target genes depends on non-canonical DNMT1 functions. In this context, we recently showed that the expression of the gene coding for the serine/threonine protein kinase PAK6, which regulates interneuron morphology and survival [12,27], is modulated by DNMT1-dependent establishment of repressive H3K27me3 marks at gene promoter sites [12].
Following this line of research, we here asked whether Lhx1 is likewise regulated by DNMT1-dependent changes in the histone code, as, in contrast to the Dnmt1 deletion-induced alteration in Lhx1 expression, no respective changes in DNA methylation levels were found. Our data emphasize a bivalent regulation of Lhx1 by a DNMT1-dependent modulation of both repressive H3K27me3 and permissive H3K4me3 marks. An interaction of DNMT1 with enzymes involved in establishing H3K27me3 marks, like EZH2 as the main methyltransferase of the polycomb repressor complex 2 (PRC2), as well as a transcriptional regulation of associated genes, was already reported for non-neuronal cells in different studies [11,57,58]. We recently revealed that in POA interneurons and neuron-like N2a cells the DNMT1-dependent establishment of H3K27-trimethylation marks relies on protein-protein interactions between DNMT1 and EZH2 [12]. In turn, DNMT1-dependent repression of activating H3K4me3 marks in control cells appears to be achieved via transcriptional control of relevant key players. This assumption was based on the observation that in Dnmt1-deficient Hmx3-expressing POA cells an enhanced expression of H3K4me3-associated genes was revealed [12,27]. However, other modes of action are likewise conceivable. A simultaneous association of permissive H3K4me3 and repressive H3K27me3, as we and others observed for the Lhx1 promoter [59][60][61], is typically found for many developmental genes adopting a 'winner-takes-all' principle [36,38,39,62], with the decision about gene transcription being defined by the proportion of these histone modifications. Such bivalent gene regulation, initially reported for the repression of lineage-restricting genes in early embryogenesis (reviewed in [63]), enables the repression of genes until their expression is required. Since LHX1 regulates the survival and migration of specific interneurons from the POA within a given time window [26], a highly coordinated expression of this transcription factor is of great importance. A bivalent regulation of Lhx1 expression would allow for such a temporally and spatially limited expression, which seems to depend on DNMT1 function. Besides the connection to histone methylation, DNMT1 also interacts with key enzymes relevant for histone acetylation and deacetylation [43,44]. Acetylated histones are highly associated with euchromatic gene regions and activated gene transcription, while histone deacetylation results in 'closed' heterochromatin and gene repression [64,65]. Histone acetylation and deacetylation processes are intimately linked to the proper development and function of several cortical interneuron types and enable a dynamic change of gene accessibility [66,67]. The data presented here indicate a DNMT1-dependent repression of activating histone acetylation marks in immature POA interneurons, as well as a regulation of Lhx1 expression levels by the histone acetylation status. Based on this, we propose the hypothesis that DNMT1 represses Lhx1 transcription at least partly by contributing to changes in histone acetylation levels. DNMT1 has already been reported to be associated with HDAC activity, which removes histone acetylation marks to silence gene transcription. For example, Fuks et al. [43] identified a specific domain in the DNMT1 protein that partially contributes to transcriptional repression by recruiting histone deacetylase activity.
Apart from this, DNMT1 was also shown to directly bind HDAC2 and the co-repressor DMAP1 to form a repressive transcription complex [44]. Thus, a DNMT1-dependent removal of acetyl groups from histone tails could account for Lhx1 repression in POA interneurons and N2a cells. Interestingly, an HDAC-dependent repression of Lhx1 was already shown in cranial neural crest cells for the class I histone deacetylase HDAC8 [50], in which the enzyme prevents the aberrant expression of homeobox transcription factors during skull development. For the cell types investigated here, HDAC8, in turn, seems of subordinate relevance for the transcriptional control of Lhx1, as Hdac8 siRNA application had no effect on Lhx1 transcription levels. This underlines that the function of HDACs appears to be cell type-specific and likely depends on cell-specific cofactors and their integration into protein complexes. Apart from that, the regulation of Lhx1 expression was also described for the class I histone deacetylases HDAC1 and HDAC2 in non-neuronal progenitor cells [52]. Here, we detected reduced levels of Hdac2 expression in Dnmt1-deficient POA cells, which is in line with the increased pan-histone acetylation determined upon Dnmt1 depletion. Consequently, a DNMT1-dependent transcriptional regulation of HDAC2 could represent a possible scenario for how DNMT1 modulates Lhx1 transcription through a crosstalk with histone acetylation. Moreover, we identified numerous transcripts related to histone acetylation and deacetylation that were altered in expression in Dnmt1-deficient POA cells, indicating multiple levels of regulation. Besides, other mechanisms that would enable a crosstalk between DNMT1 and histone acetylation are likewise conceivable and the subject of further investigation, for example, transcriptional control over long non-coding RNA expression, which in turn can recruit or prevent the binding of chromatin-modifying complexes [68], or an interaction of DNMT1 with the histone acetylation machinery at the protein level, similar to what we identified for the DNMT1-dependent establishment of H3K27me3 marks [12]. Furthermore, since the regulation of gene expression through epigenetic mechanisms is based on a complex network of diverse factors that regulate different histone tail modifications, we cannot rule out that additional mechanisms like histone phosphorylation, ubiquitination, or deimination contribute to the DNMT1-dependent regulation of Lhx1 expression. We likewise cannot exclude that the effects we described could also partially represent secondary effects through intermediary factors, and not necessarily be due to the direct interaction of DNMT1 with histone-methylating complexes. For this, the identification of potential binding partners and whole protein complexes interacting with DNMT1 is of great relevance to fully understand the complex crosstalk at bivalent gene sites regulated by histone 3 trimethylation and histone acetylation. Together, transcriptional control by epigenetic mechanisms once more emerges as a complex interplay of numerous factors, likely acting in large complexes that integrate intracellular and extracellular cues to drive cell differentiation and maturation processes. Animals For all experiments, transgenic mice on a C57BL/6J background were used, including Hmx3-Cre/tdTomato/Dnmt1 wild-type as well as Hmx3-Cre/tdTomato/Dnmt1 loxP/loxP mice. Transgenic mice were generated by crossing Hmx3-Cre mice (obtained from Oscar Marin, King's College London, UK, and described in Gelman et al.
[25]) with tdTomato transgenic reporter mice (obtained from Christian Hübner, University Hospital Jena, Germany, and described in Madisen et al. [69]) and Dnmt1 loxP2 mice (B6;129Sv-Dnmt1tm4Jae/J, Jaenisch laboratory, Whitehead Institute, USA). Cre-mediated deletion in Dnmt1 mice leads to out-of-frame splicing from exon 3 to exon 6, resulting in a Dnmt1 null allele [70]. Transgenic mice are abbreviated as Dnmt1 WT and Dnmt1 KO in text and figures. Mice were housed under 12 h light/dark conditions with ad libitum access to food and water. All animal procedures were approved by the local government (Thueringer Landesamt, Bad Langensalza, Germany) and performed in strict compliance with the EU directives 86/609/EWG and 2007/526/EG for animal experiments. Study design and experiments followed the ARRIVE guidelines.

Preparation of POA single cells
For the preparation of cells of the embryonic preoptic area (POA), timed pregnant Dnmt1 WT and KO mice were killed by an intraperitoneal injection of 1x PBS (pH 7.4) with 2.5 μg chloral hydrate per g body weight. The embryonic POA was dissected under visual control and dissociated with 0.04% trypsin (Thermo Fisher Scientific) in Hank's balanced salt solution (Invitrogen) for 17 min at 37°C prior to trituration and removal of cell aggregates by filtering through a 200 µm nylon gauze. Preparations from Cre-positive embryos were used for fluorescence-activated cell sorting. POA cells from Cre-negative embryos were used for morphometric studies. They were seeded at densities of 300 cells/mm² on coverslips (Ø 12 mm) coated with 19 µg/µL laminin (Sigma-Aldrich) and 5 µg/µL poly-L-lysine (Sigma-Aldrich) and cultured according to Symmank et al. [12] at 37°C and 5% CO₂.

Fluorescence-activated cell sorting (FACS) of POA cells
FACS of tdTomato reporter-positive cells was performed as described in Pensold et al. [27]. FACS-enriched cell pellets were either frozen directly for DNA isolation or dissociated in TRIzol™ Reagent (Life Technologies) for RNA isolation.

RNA sequencing and MeDIP analysis of embryonic POA cells
RNA sequencing and methylated DNA immunoprecipitation (MeDIP) sequencing of FACS-enriched POA cells were performed and described by Pensold et al. [27] and reanalysed in this study. Briefly, pooled samples were tested in technical duplicates for RNA sequencing. Due to the larger quantities of material required for MeDIP sequencing, one pooled sample was evaluated per genotype, and a dedicated bioinformatic pipeline for the computational analysis of such rare samples was applied, as previously described in Pensold et al. [27,71]. The complete RNA sequencing and MeDIP data sets are provided by Pensold et al. [27] and uploaded at GEO under series number GSE146968. Heat-maps were generated using the R package pheatmap (https://CRAN.R-project.org/package=pheatmap). For heat-maps comparing two data sets, data were normalized to WT and the log2 fold-change to KO is depicted.

RNA isolation and expression analysis
RNA isolation of FACS-enriched POA cells was performed as described in Pensold et al. [27]. For expression analysis of siRNA- or inhibitor-treated N2a cells grown in six-well plates, cells were harvested with TRIzol™ Reagent (Thermo Fisher Scientific) following the manufacturer's guidelines.
RNA was isolated using 1-bromo-3-chloropropane, centrifuged for 30 min at 13,000 × g and 4°C, and the aqueous phase was purified with the RNA Clean & Concentrator-5 kit (Zymo Research) including DNase treatment according to the manufacturer's protocol. The SuperScript™ IV first-strand synthesis system (Thermo Fisher Scientific) was used for cDNA synthesis according to the manufacturer's instructions, with the same amount of input RNA in all compared probes. Quantitative reverse transcription PCR was performed with Luminaris Color HiGreen qPCR Master Mix (Thermo Fisher Scientific) according to the manufacturer's protocols, and the following primers were used (indicated as 5′ → 3′; fw, forward; rev, reverse): Lhx1 fw GGAGCGAAGGATGAAACAGC, Lhx1 rev TGCGGGAAGAAGTCGTAGTT, Rps29 fw GAAGTTCGGCCAGGGTTCC, Rps29 rev GAAGCCTATGTCCTTCGCGT. Rps29 was used as the housekeeping gene. Each sample was tested in three biological replicates analysed in separate qPCR runs with one to three technical replicates. The qPCR program included the following optimized steps: UDG pre-treatment at 50°C for 2 min, initial denaturation at 95°C for 10 min, denaturation at 95°C for 15 sec, and annealing/elongation at 60°C for 1 min. Denaturation and annealing/elongation steps were repeated 40 times, and primer dimers were excluded by melting curve analysis. Data were analysed with the ΔΔCt method [74] (a worked sketch of this calculation is given after the microscopy section below).

Immunocytochemistry
N2a cells cultured on coverslips were fixed with 4% PFA/1x PBS for 10 min, and immunocytochemistry was performed as previously described in Zimmer et al., 2011 [72]. A pan-specific rabbit anti-H3K9/K14/K23/K27 acetylation antibody (Abcam) was used as the primary antibody. As secondary antibody, Cy3 goat anti-rabbit IgG (1:1000; Jackson Laboratory) was used. For analysis of cell morphology, incubation with Alexa Fluor™ 488 Phalloidin (Thermo Fisher Scientific) was performed according to the manufacturer's guidelines.

Live-dead assay
N2a cells cultured on coverslips were stained after inhibitor treatment for living and dead cells with the LIVE/DEAD™ Cell Vitality Assay Kit for mammalian cells (Invitrogen) according to the manufacturer's protocol.

Migration assay with N2a cells
Standardized imaging plates (Eppendorf; 170 µm glass thickness) were coated with matrigel (GelTrex™; Thermo Fisher) according to the manufacturer's instructions using a working concentration of 0.1 mg GelTrex™ diluted in 1 mL N2a culture medium. One hundred microlitres of matrigel working solution were added per well and incubated for 60 min until hardening of the substrate. N2a cells were seeded at a density of 57 cells/mm² and treated with the inhibitor DZNep after 24 h as described above. To avoid phototoxic effects during imaging, the culture medium was exchanged for DMEM without phenol red (Invitrogen) with 10% FBS (Biowest), 100 U/mL penicillin (Gibco), and 100 µg/mL streptomycin (Gibco). Forty-eight hours after seeding, N2a cells were imaged every 15 min for 20 h at 37°C and 5% CO₂.

Microscopy and data analysis
Fluorescent images were taken with the inverted confocal laser scanning microscope TCS SP5 (Leica). Live-cell imaging of migrating cells and images of the live-dead assay were taken with the DMi8 with Thunder imaging platform (Leica). Photographs were analysed with Fiji (ImageJ) software [75]. Background correction was performed for fluorescence intensity measurements. Mean fluorescent intensity of Dnmt1 siRNA-treated cells was normalized to control siRNA-treated cells.
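As referenced in the expression-analysis section above, relative Lhx1 expression was computed by the ΔΔCt method. The following is a minimal sketch of that calculation in Python, with illustrative placeholder Ct values rather than measured data, and assuming 100% primer efficiency (the efficiency-corrected variant used for the final statistics rescales the base of the exponent accordingly).

# Minimal sketch of the delta-delta-Ct calculation; Ct values are illustrative
# placeholders, not measured data. Assumes 100% primer efficiency.
import statistics

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the delta-delta-Ct method [74]."""
    dct_treated = ct_target_treated - ct_ref_treated    # normalize to Rps29
    dct_control = ct_target_control - ct_ref_control
    return 2 ** (-(dct_treated - dct_control))          # fold change vs control

# Hypothetical mean Ct values for Lhx1 and the Rps29 housekeeper.
lhx1_kd   = statistics.mean([24.1, 24.3, 24.2])   # Dnmt1 siRNA
rps29_kd  = statistics.mean([18.0, 18.1, 17.9])
lhx1_ctl  = statistics.mean([25.6, 25.5, 25.7])   # control siRNA
rps29_ctl = statistics.mean([18.1, 18.0, 18.2])

fc = ddct_fold_change(lhx1_kd, rps29_kd, lhx1_ctl, rps29_ctl)
print(f"Lhx1 fold change (Dnmt1 siRNA vs control): {fc:.2f}")  # ~2.5, i.e. derepression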
Quantitative RNA results were analysed by the efficiency-corrected ΔΔCt method and are presented relative to control samples. Photoshop CC was used for image illustration. Significance was analysed with a two-tailed Student's t-test or two-way ANOVA with Tukey test; the Shapiro-Wilk test was used to check normality. Significance levels: P value <0.05 *; P value <0.01 **; P value <0.001 ***. Unless stated otherwise, experiments were repeated three times.

Disclosure statement
The authors declare that they have no competing interests.

Funding
The experimental realization of this study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 368482240/GRK2416 and ZI 1224/8-1; and the IZKF (Interdisciplinary Center for Clinic and Research) Jena.

Authors' contribution
JS: designed and performed experiments, data analysis, figure illustration, conceptual design of the study, wrote the manuscript; CB: performed experiments, data analysis, figure illustration, corrected the manuscript; JR: performed experiments, data analysis, figure illustration, corrected the manuscript; DP: performed experiments and data analysis, corrected the manuscript; GZ: conceptual design of the study, discussion of results, wrote the manuscript. All authors read and approved the manuscript.

Availability of data and materials
The reanalysed RNA sequencing and MeDIP data sets comparing Hmx3-Cre/tdTomato/Dnmt1 wild-type and Hmx3-Cre/tdTomato/Dnmt1 loxP2 mice used in this study are published by Pensold et al. [27] and uploaded at GEO under series number GSE146968.
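As a companion to the statistical procedure described above, the sketch below shows the same workflow (Shapiro-Wilk normality check, then a two-tailed Student's t-test with the star convention used here) in Python with SciPy. The input arrays are placeholders, and the two-way ANOVA/Tukey branch used for multi-factor comparisons is omitted for brevity.

# Minimal sketch of the significance testing described in the statistics
# paragraph; input arrays are illustrative placeholders for three experiments.
from scipy import stats

control   = [1.00, 0.95, 1.05]   # e.g. normalized fold changes, control siRNA
treatment = [2.31, 2.58, 2.49]   # e.g. normalized fold changes, Dnmt1 siRNA

# Shapiro-Wilk normality check on each group.
for name, sample in (("control", control), ("treatment", treatment)):
    w, p_norm = stats.shapiro(sample)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p_norm:.3f}")

# Two-tailed Student's t-test and the significance-star convention used above.
t, p = stats.ttest_ind(control, treatment)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "n.s."
print(f"t = {t:.2f}, p = {p:.4f} ({stars})")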
7,799.4
2020-05-22T00:00:00.000
[ "Biology" ]
Research on the radiation exposure "memory effect" in AlGaAs heterostructures

Radiation exposure and long operating times cause degradation of semiconductor structures as well as of devices based on them; in addition, long operating times can lead to partial annealing of radiation defects. The purpose of this work is to study the "memory effect" that arises during fast neutron irradiation of AlGaAs heterostructures. The objects of the research are infrared light-emitting diodes (IREDs) based on double AlGaAs heterostructures. In the experiments, LEDs were first irradiated with fast neutrons, the radiation defects were then annealed under current training at elevated temperature, and the emission power was measured. The research proved the existence of a "memory effect" that results in enhanced radiation stability under subsequent irradiation. Possible mechanisms of the "memory effect" are discussed.

Introduction
A brief overview of how microelectronic devices are used is helpful here. Microelectronic devices operate in a variety of environments; in particular, they serve in outer space, in the upper atmosphere, and at nuclear power plants [1,2], where they are subjected to different kinds of ionizing radiation. Under ionizing radiation, the active layers of microelectronic devices accumulate various radiation defects, which change device characteristics and ultimately cause failure. A similar effect occurs as a result of long-term service, when electric and thermal fields create defects in the devices' active layers. Consequently, both ionizing radiation and long-term operation lead to defects in the active layers of microelectronic devices and to degradation of their output characteristics. It is worth noting that defects caused by ionizing radiation can differ in structure and properties from defects caused by long operation. However, it is well known that long-term service factors can result in full or partial annealing of defects created by ionizing radiation (e.g., [3]). In this case, if we first subject microelectronic devices to ionizing radiation and then to long-term service, we obtain full or partial recovery of device characteristics, which is indeed observed in practice. The behavior of such prepared devices upon renewed irradiation is expected to differ from that of pristine ones: if ionizing radiation and/or annealing causes a defect to migrate to a sink region, where it can interact with other defects, there is little chance it will move back. Consequently, the difference in behavior between pristine and prepared devices upon renewed irradiation can be called the "memory effect". Similar effects were observed previously in metal alloys [4]. The purpose of this work is to study the "memory effect" in double AlGaAs heterostructures intended for the production of infrared light-emitting diodes (LEDs) irradiated with fast neutrons. First of all, it is necessary to prove the existence of this effect. Subsequent investigations of the "memory effect" (the conditions that cause it, its impact on the changes of the criterion parameters of irradiated devices, its stability over time, etc.) will allow us to forecast the behavior of microelectronic devices subjected to complex (simultaneous) or combined (spread out over time) ionizing radiation exposure or to long-term service factors.
Research objects and methods
Commercial LEDs based on double AlGaAs heterostructures with a 1 μm thick active layer, grown by liquid-phase epitaxy on single-crystal n-GaAs substrates, were taken as research objects. The LEDs were fabricated by standard sandwich technology, using deposition and layer metallization to form ohmic contacts, photolithography and chemical etching to form the crystals (chips), and scribing to divide the wafers into separate chips. The LEDs had packages and lens elements of optical compound to form the required directivity diagram of the light flux. According to preliminary measurements, the optical compound used to fabricate the lenses does not change its optical properties even at a fast neutron fluence of F_n = 2·10^14 cm^-2; hence all changes in the LEDs' optical properties resulting from irradiation up to this fluence can be attributed to changes in the optical properties of the diodes' active region. In continuous operation the forward operating current of the LEDs under study is I_op = 100 mA at a supply voltage not exceeding U_op = 3 V. The emission maximum lies in the 0.82-0.90 μm wavelength range. The watt-ampere characteristic of each LED was measured before and after irradiation, and the data obtained were analysed by mathematical statistics; each lot of LEDs was characterized by average measured values. The dispersion of the radiant power in pristine LEDs did not exceed ±10%, but increased to ±15% after fast neutron irradiation. It is worth noting that the equipment used to measure the watt-ampere characteristics provided a radiant power measurement repeatability of better than ±2.5% for each individual LED over the 1-200 mA operating current range. Fast neutron irradiation was delivered in passive mode without operating current, and the fast neutron exposure level was characterized by the particle fluence F_n (cm^-2). Each lot of LEDs received a sequence of fast neutron fluences with a defined step. Analysis of the different LED lots and of the watt-ampere measurements makes it possible to rule out any annealing of radiation defects during measurement; therefore the observed change in radiant power is determined by the fast neutron impact only. Three lots of LEDs were investigated, and the specifications of each lot are shown in Table 1. The experiments yielded the corresponding relations between radiant power decrease and fast neutron fluence. As Table 1 shows, the "memory effect" was studied by comparing the decrease in radiant power under fast neutron irradiation of pristine LEDs (lot LED-0) with that of LEDs (lots LED-1 and LED-2) prepared as follows: the LEDs were first irradiated at different fast neutron fluences, producing a decrease in radiant power (the grounds for the choice of fluence are given in the next section); the LEDs were then subjected to current training at an operating current of I_op = 75 mA for 24 hours at an ambient temperature of +65 °C, which led to recovery of the radiant power; finally, the decrease in the LEDs' radiant power under renewed fast neutron irradiation was investigated. During the experiments the exposure level was increased sequentially.
Experimental findings and discussion
First, consider the findings obtained for lot LED-0: the decrease of radiant power under fast neutron irradiation, as specified in Table 1. The radiant power P_F measured after irradiation was normalized to the value P_0 for the pristine LEDs, using the power level measured at an operating current of I_op = 100 mA. A similar normalization of the LEDs' radiant power is employed throughout, taking as the reference the value before the respective exposure. The radiation model of [5] for LEDs based on AlGaInP heterostructures with multiple quantum wells is the most relevant for describing the experimental findings. The data presented in Figure 1 show that the power decrease proceeds through three periods. During the first period, the radiant power decrease caused by radiation-stimulated reconfiguration of the pre-existing defect structure is described by Eq. (1), where P_F and P_0 are the radiant powers at I_op = 100 mA measured after and before irradiation, respectively; A is a proportionality coefficient characterizing the contribution of the first period to the overall radiant power decrease under fast neutron irradiation; and C_1 is the damage coefficient specifying the rate of radiant power decrease during the first period with growing neutron fluence [cm²]. The second period is characterized by a change in the LED's radiant power with growing neutron fluence due to the introduction of radiation defects and is described by Eq. (2), where P_min is the LED radiant power under low injection of electrons into its active region, independent of current flow; B is a proportionality coefficient characterizing the contribution of the second period to the overall radiant power decrease; and C_2 is the damage coefficient specifying the rate of radiant power decrease during the second period [cm²]. The third period corresponds to the limiting fast neutron exposure of the LED: the LED operates in a mode of low electron injection into its active region, and the radiant power is described by Eq. (3), P_F = P_min. (A numerical sketch of how these coefficients can be extracted from measured data is given at the end of this subsection.) Earlier investigations [6] of the radiant power decrease of AlGaInP heterostructure LEDs irradiated with gamma-ray quanta proved that at the boundary of the first and second periods of power decrease relaxation processes occur, resulting in partial recovery of the radiant power; there, the recovery of the LEDs' radiant power at this boundary was observable in explicit form. Summing up, the LEDs under consideration, when subjected to fast neutron irradiation, pass through a period in which the radiant power decrease is conditioned by radiation-stimulated reconfiguration of the pre-existing defect structure. This fact grounded the decision to take these LEDs as "memory effect" research objects, since this period offers a serious possibility of observing the "memory effect". These findings also explain the choice of fast neutron fluences for the preliminary irradiation (Figure 1): the first fluence (LED-1) was chosen so that the radiant power decrease falls within the first period, while the second fluence (LED-2) corresponds to the second period of the radiant power decrease under irradiation. The influence of the preliminary fast neutron irradiation mode, under identical annealing conditions, on the radiant power decrease under further fast neutron irradiation (LED-1, LED-2) is described next. Figure 2 shows the relative changes of radiant power for the given LEDs under further fast neutron irradiation.
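In practice, the proportionality coefficients (A, B) and damage coefficients (C_1, C_2) of Eqs. (1)-(3) are extracted by fitting the normalized radiant power against the neutron fluence. The sketch below illustrates such a fit in Python; the sum-of-two-exponentials model form and the data points are assumptions chosen to be consistent with the coefficient definitions above, not the authors' actual model or measurements.

# Hedged sketch: least-squares extraction of the proportionality (a, b) and
# damage (c1, c2) coefficients from normalized radiant power P_F/P_0 versus
# fast-neutron fluence F_n. Model form and data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def degradation(f_n, a, c1, b, c2):
    # First period: radiation-stimulated restructuring (contribution a, rate c1);
    # second period: radiation-defect accumulation (contribution b, rate c2).
    return a * np.exp(-c1 * f_n) + b * np.exp(-c2 * f_n)

f_n   = np.array([0.0, 1e12, 5e12, 1e13, 5e13, 1e14, 2e14])    # fluence, cm^-2
p_rel = np.array([1.00, 0.90, 0.69, 0.59, 0.36, 0.20, 0.06])   # P_F/P_0 (synthetic)

popt, _ = curve_fit(degradation, f_n, p_rel, p0=[0.3, 1e-13, 0.7, 1e-14])
a, c1, b, c2 = popt
print(f"A = {a:.2f}, C1 = {c1:.2e} cm^2, B = {b:.2f}, C2 = {c2:.2e} cm^2")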
In this case the LEDs' radiant power was normalized to its value after annealing (see Table 1), and the experimental findings obtained for lot LED-0 are included for comparison. The results in Figure 2 prove that the radiant power changes after irradiation are described by Eqs. (1)-(3). Table 2 contains the proportionality and damage coefficients of Eqs. (1)-(3) obtained from the analysis of the experimental findings presented in Figure 2. The data show that the "memory effect" changes the values of the coefficients A, B and C_1, which depend on the LED's history, whereas the coefficient C_2 is identical for all the lots presented. The fact that C_2 does not depend on the LEDs' history additionally proves that during the second period the radiant power decrease of fast-neutron-irradiated LEDs results from the introduction of radiation defects, the pre-existing defect structure of the LED's active layer playing essentially no role. Moreover, the "memory effect" exceeds the degree of radiant power recovery at the boundary of the first and second periods of the LED radiant power decrease under fast neutron irradiation. Thereby, the occurrence of a "memory effect" when AlGaAs heterostructures are irradiated with fast neutrons has been proved. The experimental findings in Figure 2 demonstrate the possibility of increasing the LEDs' stability against fast neutron irradiation by means of the "memory effect". One may object, of course, that the preconditioning itself results in a radiant power decrease in some cases, despite its partial (sometimes full) recovery during annealing. It is worth pointing out, however, that a microelectronic device with high ionizing radiation resistance need not possess peak specifications, since its basic property is parameter permanence: if parameters change under the maximum ionizing radiation exposure, they change only within limited boundaries. Moreover, since the purpose of this work is to prove experimentally that the ionizing radiation "memory effect" exists, the chosen conditions cannot be regarded as optimal. In future, optimization of the irradiation and annealing modes may make it possible to develop practical techniques applying the "memory effect" to improve the operating specifications of various microelectronic devices; ionizing radiation stability might then be increased without any significant degradation of the devices' initial parameters. The growth in stability against fast neutron irradiation can be demonstrated with the help of the relations presented in Figure 3, which shows the maximum allowable fast neutron fluence that keeps the radiant power decrease within the maximum permissible limit (P_F/P_0) for the LED lots under consideration. It is evident that the "memory effect" increases the stability against fast neutron exposure not only within the first period but also within the second. The maximum growth in LED stability occurs within the first period, owing to the reduced relative contribution of this period to the overall radiant power decrease under irradiation and to the decrease of the damage coefficient. Stability during the second period of radiant power decrease grows as well, but the damage coefficient remains unchanged.

Conclusion
1. For the first time, a hypothesis of a radiation exposure "memory effect" in semiconductor materials and devices was formulated.
2. For the first time, the existence of the "memory effect" under fast neutron irradiation was experimentally proved in double AlGaAs heterostructures intended for IRED production.
3. The "memory effect" increases the stability against fast neutron exposure. The growth in the LEDs' fast neutron radiation stability within the first period of radiant power decrease is preconditioned by the reduced relative contribution of this period to the overall radiant power decrease during irradiation and by the depletion of the damage coefficient. Stability during the second period of radiant power decrease grows as well, but the damage coefficient remains unchanged.
4. Taking the "memory effect" into account improves the quality of predictions of LED operation under complex (simultaneous) and combined (spread out over time) ionizing radiation exposure.
3,134.6
2015-04-23T00:00:00.000
[ "Materials Science" ]
Bioremediation and tolerance of zinc ions using Fusarium solani

Evaluating the mechanisms of tolerance and biotransformation of Zn(II) ions by Fusarium solani on the basis of different physiological responses was the objective of this work. The physical properties of the synthesized ZnONPs were determined by UV spectroscopy, transmission electron microscopy, and X-ray powder diffraction. The structural and anatomical changes of F. solani in response to Zn(II) were examined by TEM and SEM. From the HPLC profile, oxalic acid production by F. solani was strongly increased, by about 10.5-fold, in response to 200 mg/l Zn(II) compared to control cultures. The highest biosorption potential was recorded at pH 4.0 (alkali-treated biomass) and 5.0 (native biomass), at a Zn(II) concentration of 600 mg/l, an incubation temperature of 30 °C, and contact times of 40 min (alkali-treated biomass) and 6 h (native biomass). From the FT-IR spectroscopy, the main functional groups implicated in this remediation were C-S stretching, C=O, C=N, C-H bending, C-N stretching and N-H bending. From the EDX spectra, cellular sulfur and phosphorus compounds were the main fungal components involved in Zn(II) binding.

Introduction
Anthropogenic and natural activities discharge poisonous heavy metals into the surroundings (Singh et al., 2018) that may be non-degradable and long-persisting in the environment (Taamalli et al., 2014), causing severe harmful influences on the environment and food chains (Mahmood and Malik, 2014). Zinc (Zn(II)) is an important microelement for living organisms; however, elevated levels of Zn(II) are harmful and threaten the lives of many organisms (Salem and Fouda, 2020). Normal Zn(II) levels in soil and fresh water usually range over 10-300 mg/kg and <0.1-50 μg/l, respectively. Due to anthropogenic pollution and natural processes, total Zn(II) concentrations in soil and freshwater can rise up to 35000 mg/kg and 3900 μg/l, respectively (WHO, 2001). The principal sources of Zn(II) are steel and iron production, mining, zinc-containing pesticides, fertilizers, and corrosion of galvanized structures (Ferreira et al., 2018). The symptoms of Zn(II) poisoning are gastrointestinal pain, diarrhea, and vomiting, caused by the use of water kept in galvanized units. Thus, Zn(II) toxicity is a field of concern for environmental, physiological and ecological reasons (Jaishankar et al., 2014). Microbes display various approaches to overcome the hazardous effects of metals; the main metal resistance mechanisms are avoidance and tolerance (Fouda et al., 2018; Nedkovska and Atanassov, 1998). Fungal tolerance mechanisms involve mobilization, immobilization, and biotransformation. Mobilization of metals occurs by heterotrophic and autotrophic leaching and by complexation/chelation with various metabolites, whereas immobilization occurs through metal sorption to the biomass or exopolymers, intracellular sequestration, and precipitation as organic and inorganic compounds (Singh et al., 2018). Some microorganisms show an excellent biotransformation efficiency, converting a poisonous chemical into non-toxic forms and thereby overcoming the toxicity of pollutants (Mishra and Jha, 2009; El-Sayed and Shindia, 2011; Mohamed et al., 2019). The energy released from the reduction/oxidation of As(V)/As(III), for example, is mainly used for growth (Oremland and Stolz, 2003), and the potency of As(V) reduction was reported for T. asperellum, P. janthinellum, and Fusarium oxysporum (Su et al., 2012). This study aimed to assess F.
solani for its tolerance towards Zn(II). The biosorption of Zn(II) by living and alkali-treated F. solani was studied under various factors, and the mechanisms of tolerance and biosorption were elucidated by FTIR, EDX, SEM, TEM, and XRD.

Fungal isolate and zinc tolerance assay
Fusarium solani KJ 623702 had been isolated and molecularly identified in our previous work (El-Sayed, 2014; El-Sayed and El-Sayed, 2020a, b). The strain was conserved on PDA slants at 4 °C. PDA was supplemented with ZnSO4·2H2O to give final concentrations of 0, 2000, 4000, 6000, 8000, 9000, 10000, 11000, and 12000 mg/l and poured into petri plates. The plates were centrally inoculated with a fungal inoculum plug (6 mm, from 7-day-old cultures) and incubated at 25 °C for 6 days, and the fungal growth was measured. The Minimum Inhibitory Concentration (MIC) was defined as the lowest Zn(II) concentration that inhibited the growth of F. solani (Sabatini et al., 2016; El-Sayed et al., 2012a). All the experiments throughout this study were performed in triplicate.

Scanning electron microscopy (SEM) analysis
To evaluate the morphological deformation in response to Zn(II) stress, the fungus was treated at the sub-MIC dose, incubated, and investigated by SEM. The mycelia were fixed in 2.5% glutaraldehyde for 24 h at 4 °C, post-fixed in osmium tetroxide (1.0%) for 1 h, and then dehydrated with acetone. The gold-coated samples were examined with a scanning electron microscope (JEM-1200XII).

Energy dispersive X-ray (EDX) microanalysis
The collected mycelia were treated with Zn(II) at the sub-MIC dose, incubated, and subjected to EDX microanalysis for quantitative elemental analysis with an X-ray microanalyzer (model Oxford 6587 INCA Xsight) connected to the scanning electron microscope.

Transmission electron microscopy (TEM) analysis
To estimate the cytomorphological changes caused by Zn(II), the fungus was incubated with Zn(II) at the sub-MIC dose and the cellular organelles were investigated (El-Sayed and Ali, 2020). Samples were primarily fixed with 2.5% glutaraldehyde for 3 h at 4 °C, washed with 0.2 M phosphate buffer (pH 7.4) for 30 min, post-fixed in osmium tetroxide (1.0%) for 2 h at 4 °C, and washed with phosphate buffer for 30 min. Samples were dehydrated in gradient concentrations of ethanol (50-100%), transferred through three changes of acetone:ethanol (1:2, 1:1, and 2:0) for 10 min, and then embedded in epoxy medium. A diamond knife sectioned the blocks into ultrathin sections of 70 nm, which were placed on copper grids. The sections were contrasted with uranyl acetate and lead citrate for 30 min, and transmission imaging and photographing were conducted with the electron microscope.

Growth response of F. solani in Zn(II)-enriched media
To explore the response of F. solani to Zn(II) stress, the tolerance index (TI), dry weight, percentage of removal, contents of H2O2, lipid peroxidation, the concentrations of antioxidants, soluble protein, and thiols, and polyphenol oxidase (PPO) activity were determined (El-Sayed et al., 2015a-e). F. solani was evaluated for the Zn(II) tolerance index (TI) at concentrations extending from 1000 to 9000 mg of Zn(II)/l, with Zn(II)-free medium as a control. PDA plates were inoculated at the middle with six-mm agar plugs and kept at 25 °C for ten days. TI was determined as the radial growth of the Zn(II)-stressed strain divided by the growth on the Zn(II)-free plates (a worked numerical sketch of TI and the related removal quantities is given at the end of the biosorption results below).
The TI was evaluated as follows: 0.00 to 0.39 (very low tolerance), 0.40 to 0.59 (low tolerance), 0.60 to 0.79 (moderate tolerance), 0.80 to 0.99 (high tolerance) and 1.00 and above (very high tolerance) (Oladipo et al., 2018). To investigate Zn(II) bioremoval and the impact of Zn(II) on the fungal dry weight, sterilized ZnSO4·2H2O solutions were aseptically added to sterile PD broth (pH maintained at 5.8 with standard 0.1 N NaOH/HCl solutions) to final concentrations of 0, 200, 500, 1000, 2000, 4000, 6000, 7000, and 8000 mg/l, inoculated with spore suspension (10^6/ml), and incubated for seven days at 25 °C and 140 rpm. During growth, a white coalescence was noticed, suggesting the reduction of Zn(II) and the formation of zinc oxide nanoparticles (ZnONPs). The biomass was separated by centrifugation and dried at 60 °C, and the filtrates were used for characterization of the ZnONPs by UV-visible spectroscopy, TEM analysis (JEOL TEM-1400), and X-ray powder diffraction (XRD; Bruker D8 Advanced, Cu Kα target powder diffractometer, λ = 1.5418 Å) (El-Sayed and El-Sayed, 2020a, b). For TEM analysis, the samples were loaded on carbon-coated grids and dried; for XRD, a thin film on glass slides was dried at 45 °C. The residual Zn(II) was measured with an atomic absorption spectrophotometer (Unicam 969) (El-Sayed and El-Sayed, 2020a, b). The efficiency of removal (E, %) was determined according to the equation E = ((Ci - Cf)/Ci) × 100, where Ci and Cf are the initial and residual concentrations of Zn(II) (mg/l), respectively.

Antioxidants and enzymatic activities
For the antioxidative studies, the fungal biomass was pulverized in 50 mM phosphate buffer (pH 7.0) with 50 mM EDTA in an ice-cold mortar and centrifuged. The supernatants were used to clarify the tolerance mechanism.

2.6.1. Polyphenol oxidase (PPO)
Samples (200 μl) were subjected to reaction with 5 U/ml horseradish peroxidase, guaiacol (0.2 mM), and catechol (10 mM) in a final volume of one ml, kept at 30 °C for 60 min and frozen for 10 min. The developed color was read at 436 nm (Bergmeyer et al., 1974; El-Sayed et al., 2013a). The specific enzymatic activity was expressed in enzyme units (the amount of enzyme that releases 1 μmol H2O2/min under optimum conditions) per mg protein per min.

Catalase assay
The reaction solution, containing 3 ml of 10 mM phosphate buffer (pH 7), 0.2 ml of 0.2 M H2O2, and 0.1 ml of the enzyme extract, was incubated for 10 min and the absorbance measured at 240 nm (Abhishek et al., 2010; El-Sayed et al., 2016).

Total antioxidants
The total antioxidants were estimated by the ferric thiocyanate method (Gupta et al., 2004; El-Sayed et al., 2017a). In brief, the supernatant (1 ml) was mixed with 0.2 ml ferrous chloride (20 mM) and 0.2 ml ammonium thiocyanate (30%), kept for 10 min, and the red color measured at 500 nm.

Assay of total thiol content
The total thiols were determined with Ellman's reagent (1959) with some modifications (El-Sayed et al., 2019a-c). The fungal extracts (3 ml) were mixed well with 2 ml phosphate buffer (pH 7.0) and 5.0 ml distilled water. Three milliliters of the mixture were shaken with 0.01 M DTNB (20 μl) and the absorbance estimated at 412 nm.

Protein measurement
The soluble proteins were quantified with Folin's reagent (Lowry et al., 1951). Briefly, 1 ml of the prepared fungal extract was mixed with 1 ml of freshly prepared solution C (50:1 v/v, solution A to B) and incubated for 15 min at room temperature.
Folin's reagent (50 μl) was then added, the mixture was shaken for 20 min, and the developed blue color was measured at 650 nm. The actual protein concentration was calculated using bovine serum albumin as the standard (El-Sayed et al., 2018a, b).

Hydrogen peroxide (H2O2) content
The fungal mycelia were pulverized in 0.1% TCA and filtered; the mycelium extract (0.5 ml) was mixed with 2 ml of 1 M KI in bi-distilled water and 0.5 ml of 100 mM potassium phosphate buffer (pH 6.8) and left for 1 h in the dark (Alexieva et al., 2001). The hydrogen peroxide concentration was determined at 390 nm against a TCA baseline. From a standard curve prepared with known concentrations of H2O2, the amount of H2O2 was expressed as μg/g of fresh weight.

Determination of malondialdehyde (MDA) content (lipid peroxidation product)
0.2 g of the mycelia were homogenized in 5% TCA (1.5 ml) and centrifuged. A mixture of 0.5 ml of the supernatant, 1 ml of 20% TCA, and 1 ml of 0.5% thiobarbituric acid was placed in a water bath for 25 min at 95 °C, then cooled immediately and centrifuged. The absorbance was measured at 450, 532, and 600 nm (Li, 2000). To study the function of oxalic acid in Zn(II) tolerance, the concentrations of oxalic acid in Zn(II)-free and Zn(II)-stressed culture filtrates (4000 mg/l) were assessed by HPLC. The HPLC system consisted of a GBC UV/Vis detector and a GBC LC 1110 pump monitored by WinChrome chromatography software (Kromasil column, 100 × 4.6 mm). The samples were eluted with 85% acetonitrile and 15% water at a flow rate of 1 ml/min. The concentration of oxalic acid was assessed at 254 nm by comparison with known concentrations of authentic oxalic acid.

Preparation of the biosorbents
After culturing F. solani in PDB at 25 °C for 7 days under shaking conditions (120 rpm), the mycelia were harvested and rinsed with sterilized distilled water. Part of the mycelia was used directly for analysis of the biosorption potency of the native biosorbent. The other part was alkali-treated by mixing the mycelia with NaOH (0.2 N) for 1 h and washing until neutral pH (Kapoor and Viraraghavan, 1998). All sorption measurements were performed in 250 ml Erlenmeyer flasks with 50 ml of Zn(II) solution at 140 rpm, over a pH range of 2-6, biosorbent doses of 1.0-5.0 g/l, metal concentrations of 200-700 mg/l, contact times of 0-24 h, and incubation temperatures of 10-60 °C. The working solutions were centrifuged to determine the residual Zn(II) concentration, and the sorption capacity was calculated as q = (Ci - Cf) × V/M (Fan et al., 2008), where Ci and Cf are the initial and residual Zn(II) concentrations (mg/l), respectively, M is the biosorbent mass (g), V is the volume of the solution (l), and q is the sorption capacity (mg/g). The native and alkali-treated biomass before and after Zn(II) uptake was investigated by EDX and FTIR; the biomass was examined with a Perkin-Elmer FTIR 1650 at the Center of Microanalysis, Cairo University, Cairo, Egypt.

Statistical analysis
All the experiments were conducted in biological triplicates and the results are expressed as mean ± STDEV. Significance was calculated by one-way ANOVA with Fisher's Least Significant Difference post hoc test.

Zinc tolerance and its effect on the growth of F. solani
Metal resistance is the ability of microorganisms to withstand heavy metal toxicity through one or more mechanisms designed to respond directly to the metals involved (Iram et al., 2013). The utilization of mycoremediation to minimize metal pollution is based on the tolerance and bioaccumulation capacity of a specific fungus (Di Piazza et al., 2018). F.
solani displayed tolerance to Zn(II) up to 10000 mg/l. For comparison, the average daily Zn(II) intake from drinking water should be less than 0.2 mg/day (WHO, 2001). Patchy, irregular growth of F. solani was observed at >8000 mg of Zn(II)/l. Yazdani et al. (2010) reported that T. atroviride was highly tolerant of Zn(II) and could grow at 6000 mg/l. When assessing the TI, F. solani showed very high tolerance at 1000 mg/l (TI = 1.00), high tolerance at 2000-4000 mg/l (TI = 0.99 and 0.88, respectively), moderate tolerance at 6000 mg/l (TI = 0.61), low tolerance at 8000 mg/l (TI = 0.50) and very low tolerance at 9000 mg/l Zn(II) (TI = 0.17). Low Zn(II) concentrations increased the growth of eight litter-decomposing basidiomycetous fungi by 2%-272%, whereas high Zn(II) concentrations completely inhibited fungal growth (Hartikainen et al., 2012).

SEM, TEM, and EDX investigations
To recognize the effect of the metal on the biomass surface during Zn(II) bioaccumulation, mycelia of F. solani were subjected to SEM (Figure 1a-e) and EDX examinations (Figure 2a and b). The surface of the mycelia was smooth before exposure to Zn(II) (Figure 1a). As shown in Figure 1b, curling and formation of mycelial clusters were observed in response to Zn(II) stress. Moreover, the mycelia became covered by a substance that could be a Zn(II)-containing precipitate (Figure 1c). The surface of F. solani also acquired a rough texture with the formation of protrusions on the hyphae (Figure 1d, e). The gathering of mycelia and formation of coils could be due to the excretion of polysaccharides as a fungal resistance mechanism (Wan Maznah et al., 2012). The adaptation of fungi to metal stress causes modifications of the cell surface that depend on the type and concentration of the metal and are thought to be associated with intracellular detoxification of heavy metals (Kim et al., 2012; Luna et al., 2015). These changes reflect the formation of intracellular vacuoles that act as storage compartments for thiol-containing compounds, which can bind metal ions and accumulate them in the vacuoles, thereby increasing the pressure within the mycelia and leading to cell wall protrusions (Paraszkiewicz et al., 2010; Li et al., 2017; Gururajan and Belur, 2018; El-Sayed and El-Sayed, 2020a, b). EDX microanalysis is a valuable tool based on the production of characteristic X-rays, yielding semi-quantitative as well as semi-qualitative elemental data for the samples (Siddiquee et al., 2015). The EDX spectrum of the control biomass revealed a very weak signal for Zn(II) (Figure 2a). The 3.18-fold rise of Zn(II) relative to the control reflected metal adsorption on the surface of F. solani. There were 1.8-, 2.24-, 3.44-, 3.15-, and 1.7-fold increases in the element % of Na, P, S, K, and Cu, respectively (Figure 2b), which could be due to the participation of these ions and complexation in the bioaccumulation process. Treatment of fungi with high doses of metal ions causes an increase in cysteine synthesis and release of phosphorus; phosphorus and sulfur can sequester and chelate excess metal ions (Lima et al., 2013). Transmission electron microscopy was used to assess the mechanism of Zn(II) remediation (Figure 3a-f). TEM micrographs of metal-unloaded cells displayed an intact cell wall (170 nm in thickness) and homogeneous cytoplasm with few electron-dense granules, probably representing cytoplasmic deposits and genetic material (Figure 3a).
Zn(II)-stressed cells showed a ruptured wall with exclusion of some cellular contents and formation of intra- and extracellular precipitates, suggesting homogeneous Zn(II) compartmentalization (Figure 3b). Precipitation outside the cell appeared to be the first line of defense of F. solani against Zn(II). The biosorption of heavy metals depends on ionic species associating with the cell surface or with extracellular polysaccharides, proteins, and chitins (García-Hernandez et al., 2017). Complete lysis of cytoplasmic organelles and the formation of some precipitates within the darkened cell wall were observed (Figure 3c); this distortion may be due to the oxidative stress imposed by Zn(II). The tolerance and ability to detoxify metal ions involve processes such as valence transformation, extra- and intracellular precipitation, and active uptake (Siddiquee et al., 2015). Sequestration within vacuoles and formation of nanoparticles within the periplasmic space were also observed (Figure 3d). Zn(II)-loaded cells had relatively thinner (90 nm) and darker cell walls, and plasmolysis and lysis of internal organelles were observed (Figure 3e and f). Metal immobilization includes compartmentation in vacuoles and complexation by cytoplasmic proteins (González-Guerrero et al., 2008). Fungi can facilitate the biotransformation of metals by chemical reactions such as methylation, oxidation, reduction, and dealkylation, which reduce metal toxicity (Saha and Orvig, 2010). The dry weight increased by about 10.21% compared to Zn(II)-free media at 200 mg/l of Zn(II) and declined with increasing initial metal concentration from 500 to 6000 mg/l, while the fungal growth was decreased by 84.2% at 7000 mg/l of Zn(II) (Figure 1s). When the initial Zn(II) concentration increased from 200 to 4000 mg/l, the removal efficiency increased from 28.5 to 94.9% (Figure 2s). Microbial growth on solid media does not give a correct picture of metal tolerance: complexation, diffusion, and availability of metals differ from those in broth, and agar has protective effects and chelates metal ions. In consequence, the heavy metals become less available during growth, giving the false indication of a higher tolerance response (Moghannem et al., 2015); therefore, the tolerance assay was compared in both solid and liquid media. Simultaneously, a decrease in the uptake of Zn(II) by about 32.5% occurred at 6000 mg/l; metal uptake relies on the availability of active sites on the surface of the biomass and on the metal concentration. As long as active sites are free, the specific metal removal increases with higher Zn(II) concentration (Sharma et al., 2002). Bioaccumulation comprises many processes, such as complexation, electrostatic attraction, covalent binding, ion exchange, van der Waals forces, precipitation, and adsorption (Vaishaly et al., 2015). Uptake of metal ions by fungi has been stated to involve an initial rapid binding of metal ions to negative functional cell wall groups, such as amide, carboxyl, phosphate, hydroxyl, and sulfhydryl, followed by a slower energy-dependent entry (Cecchi et al., 2017a). The formation of ZnONPs was recognized by the appearance of a white coalescence at ≥500 mg/l. Surface Plasmon Resonance (SPR) peaks were observed at 368 nm (500 mg/l Zn(II)), 368 nm (1000 mg/l Zn(II)), 380 nm (4000 mg/l Zn(II)) and 388 nm (5000 mg/l Zn(II)) (Figure 4A). The concentration of ZnONPs was 300 mg/l as determined by UV analysis.
The position of SPR peaks depends on the particle shape and size and on the adsorption of electrophiles or nucleophiles to the particle surface (Umadevi et al., 2012). The diameter of the spherical ZnONPs ranged from 19.67 to 32.12 nm (25.22 ± 7.14 nm) (Figure 4B). The particles were confirmed as elemental Zn(0) by XRD (Figure 4C). Oxidoreductase activity could be among the detoxification mechanisms (Iravani, 2011).

Catalase and polyphenol oxidase activity
In response to heavy metals, an uncontrolled synthesis of reactive oxygen species (ROS) has been reported. The damaging influence of ROS on cellular constituents is diminished by antioxidant defense mechanisms, including enzymatic mechanisms based mainly on superoxide dismutase, catalase, polyphenol oxidase and glutathione S-transferase (Hu et al., 2015). The CAT activity was slightly induced (8.31%) by growth in 200 mg/l Zn(II)-enriched media and reached its highest value at 2000 mg/l. Compared to the control, the PPO activity was enhanced by 246.8% at 4000 mg/l Zn(II) (Figure 3s). Under metal stress, the level of ROS in the cells surpasses the tolerance level of the natural antioxidant systems (Bai et al., 2015). Under toxic metal stress, the activities of antioxidant enzymes can change in two ways: 1) a steady increase in enzyme activities with increasing metal concentration, or 2) an increase in enzyme activities to a maximum followed by a decline with a further increase in metal concentration (Kusvuran et al., 2016). According to the present findings, the changes in PPO and CAT activities belonged to the second type; similar results were reported by Feng et al. (2018).

Total antioxidants, thiol contents, and soluble protein content of F. solani
A linear increase in the total antioxidants was found in response to increasing concentrations of Zn(II) (Figure 4s). The total antioxidants could comprise organic acids, phenolic compounds, amino acids, vitamins and some metallic ions. A plateau region was noticed at 200 and 500 mg/l. A noticeable increase in extracellular total antioxidants (present in the fungal filtrate) (37.62% relative to control) and intracellular total antioxidants (112.67%) was noticed at 4000 mg/l. This reveals a remarkable effort by F. solani to reduce the excess toxicity by mobilizing non-enzymatic antioxidants to trap excess Zn(II) and remove it outside the cell. The highest intracellular and extracellular thiol contents (115.3 and 92.3 mM/g, respectively) were recorded at 1000 mg/l. The thiol content decreased drastically at 2000 mg/l and was inhibited at 4000 mg/l (Figure 5s). Thiols could be involved in metal homeostasis and metal detoxification (Kalsotra et al., 2018). The decrease in thiol content at 2000 mg/l Zn(II) showed the inability of F. solani to tolerate such stress and the disturbance induced by high Zn(II) concentrations in the cellular tolerance/detoxification mechanism (Mukherjee et al., 2010). The intra- and extracellular soluble protein contents were also determined.

Lipid peroxidation and H2O2 content
The reaction of ROS with the methylene groups of polyunsaturated fatty acids causes lipid peroxidation, releasing malondialdehyde (MDA) as one of the terminal by-products. MDA values usually reflect the level of damage to plasma membranes (Hu et al., 2015). F. solani exposed to 500 mg/l showed no accumulation of MDA, while exposure up to 4000 mg/l led to an increase in MDA content, followed by a plateau up to 6000 mg/l.
The H2O2 content increased with Zn(II) treatment but decreased markedly at 6000 mg/l (Figure 7s). The substantial increase in MDA titer revealed an elevated formation of ROS (Mukherjee et al., 2010).

Oxalic acid secretion
Oxalic acid produced by some fungal isolates is mainly used to immobilize potentially toxic metals by forming insoluble compounds such as metal-oxalate complexes (Siddiquee et al., 2015). In the HPLC analyses, the Zn(II)-free and Zn(II)-stressed samples displayed oxalic acid concentrations of 270 and 2820 μg/ml, respectively (Figure 5a and b); Zn(II) thus stimulated oxalic acid production by about 10.5-fold compared to the control. Oxalic acid plays a prominent role in the tolerance of a fungal consortium of Aspergillus niger, Penicillium sp., and Rhizopus sp. to Cu(II) and Pb(II) (Shivakumar et al., 2014).

Initial pH
The initial pH is a substantial factor influencing the biosorption process: it determines the solution chemistry and complexation of the ions (Frutos et al., 2016) and influences the nature of the biomass and the activity of the binding sites. Biomass can be regarded as a natural ion-exchange material carrying both positively and negatively charged groups. The Zn(II) removal capacities of native and alkali-treated biomass were low at pH 2.0 (2.5 and 4.21 mg/g, respectively) (Figure 8s). At pH below 3, poor ionization or protonation of the functional groups causes a weak complexation affinity between the ions and the cell wall (Iram et al., 2015). When the pH increased to 5 (native biomass) and 4 (treated biomass), the removal capacities were enhanced by 136.4% and 178.69%, respectively. As the pH increased, [H3O]+ levels decreased and the sites were deprotonated; the competitive effect of hydronium ions was therefore limited, and the exchange of protons with Zn(II) was favoured (Mrudula et al., 2016). The subsequent decline in biosorption ability was due to ion speciation and precipitation as metal hydroxides (Hlihor et al., 2014). Furthermore, the degree of ionization of organic molecular groups and the release of organic ligands from the cells increase at high pH; the ligands form soluble complexes with the ions and diminish the biosorption capacity. Pleurotus spp. had optimum biosorption capacities for Ni(II) and Cu(II) between pH 5 and 6 (Tay et al., 2012). The alkali treatment enhanced the biomass electronegativity by ionizing the functional groups and hence attracting more cations (Bux and Kasan, 1994). Wang and Chen (2006) suggested that deacetylation of the fungal cells affects the chitin structure, leading to the formation of chitosan-glucan complexes and improved metal affinity.

Initial metal ion concentration
The sorption capacities of native and treated mycelia increased exponentially (from 2.79 to 7.1 mg/g and from 5.81 to 12.5 mg/g, respectively) as the Zn(II) concentration increased from 200 to 600 mg/l (Figure 9s). Nevertheless, a further rise in the metal concentration to 700 mg/l resulted in a decline of Zn(II) biosorption to 6.3 mg/g (native biomass) and 10.1 mg/g (treated biomass), suggesting saturation of all binding sites and a balance between biosorbent and adsorbate. The concentration of metal ions plays a significant role as a driving force in overcoming the resistance to mass transfer between the solid and aqueous phases; bioremoval increases with an increase in initial concentration at a given biomass dose (Abbas et al., 2014).
At lower initial concentrations, the ratio of the initial moles of solute to the accessible surface area is at its minimum; because of this, the fractional biosorption does not depend on the initial metal concentration (Binupriya et al., 2007). The maximum capacity at 600 mg/l Zn(II) is associated with the higher mass transfer and kinetic energy and with the availability of metal ions, and thus the probability of collision between the biosorbent and the ions (El-Gendy et al., 2017). The reduced biosorption capacity at higher concentrations could be ascribed to the inadequacy of free accessible binding sites and to competition between ions (García-Hernandez et al., 2017).

Biosorbent concentration
The adsorbent concentration plays a key role in the uptake because of the strong dependence on the number of available sites and on the electrostatic interactions between biosorbent cells (Shamim, 2018). The uptake capacities were inversely proportional to the biomass dose. The highest uptake capacities of native (6.84 mg/g) and treated biomass (12.8 mg/g) were achieved at a biosorbent dose of 1.0 g/l (Figure 10s). At a given equilibrium, the biomass adsorbs more metal ions at low cell densities than at high densities. The uptake capacities progressively decreased with a further increase in biosorbent concentration and reached their lowest values (2.1 mg/g, native biomass, and 4.57 mg/g, treated biomass) at 5 g biosorbent/l. High biomass concentrations can exert a shell effect that restricts the access of metal ions to binding sites (Kanamarlapudi et al., 2018). Moreover, at higher biomass dosages the metal ions are insufficient for complete distribution over the accessible binding sites.

Effect of temperature
Temperature has an important impact on biosorption, as it can cause ionization of chemical moieties and influences the firmness and structure of the cell wall (Iram et al., 2015). There was a gradual increase in metal uptake with rising temperature (from 10 to 40 °C), reaching a maximum of 7.7 mg/g (native) and 13.2 mg/g (treated) at 30 °C (Figure 11s). However, the biosorption capacity was reduced by 76.62% (native) and 73.48% (treated) at 60 °C. As the collision frequency between F. solani and Zn(II) increased at 30 °C, more zinc ions were electrostatically sorbed on the biosorbent. It is usually assumed that the biosorption process performs best between 20 and 35 °C; temperatures above 45 °C may cause structural damage to proteins, which in turn impacts metal uptake (Deng and Wang, 2012). A. flavus and A. niger exhibited maximum sorption capacity for Cu(II) at 26 °C and 37 °C, respectively (Iram et al., 2015).

Effect of contact time
Time-course profiles of Zn(II) uptake by F. solani showed that saturation was reached within 40 min (treated biomass, q = 12.5 mg/g) and 6 h (native biomass, q = 7.9 mg/g) (Figure 12s). Plateau levels were accomplished within 2 h and 12 h for treated and native biomass, respectively; Zn(II) uptake then declined slightly after 12 h. The results indicate a two-stage process: a rapid initial stage assigned to surface adsorption, followed by a slow phase ascribed to membrane transport into the cell, reduced cell wall permeability, or slow intracellular diffusion (Shamim, 2018). The time needed to achieve maximum uptake depends on the type of biosorbent, the metals, and their interactions (Kanamarlapudi et al., 2018; Chatterjee et al., 2010).
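Before turning to the surface characterization, the three quantities used throughout these experiments (tolerance index, removal efficiency, sorption capacity) follow directly from the definitions given in the Methods. The sketch below restates them in Python; the input numbers are illustrative values chosen to be of the order reported above, not data from this study.

# Minimal sketch of the quantities defined in the Methods; the inputs are
# illustrative numbers of the order reported above, not measured data.

def tolerance_index(growth_stressed_mm, growth_control_mm):
    """TI = radial growth under Zn(II) / growth on Zn(II)-free plates."""
    return growth_stressed_mm / growth_control_mm

def classify_ti(ti):
    """Tolerance classes after Oladipo et al. (2018)."""
    if ti < 0.40: return "very low"
    if ti < 0.60: return "low"
    if ti < 0.80: return "moderate"
    if ti < 1.00: return "high"
    return "very high"

def removal_efficiency(ci, cf):
    """E (%) = (Ci - Cf)/Ci x 100; Ci, Cf in mg/l."""
    return (ci - cf) / ci * 100.0

def sorption_capacity(ci, cf, volume_l, biomass_g):
    """q (mg/g) = (Ci - Cf) x V / M."""
    return (ci - cf) * volume_l / biomass_g

ti = tolerance_index(44.0, 50.0)                       # radial growth in mm
print(f"TI = {ti:.2f} ({classify_ti(ti)} tolerance)")  # TI = 0.88 (high)
print(f"E  = {removal_efficiency(4000.0, 204.0):.1f} %")               # ~94.9 %
print(f"q  = {sorption_capacity(600.0, 587.5, 0.05, 0.05):.1f} mg/g")  # ~12.5 mg/g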
Surface characterization

FTIR
Fungal cell walls are composed of complicated macromolecules such as chitins, mannans, proteins, glucans, lipids, and pigments such as melanins. In general, polysaccharides are the main components and constitute about 90% of the wall. Various types of ionizable sites influence the metal absorption capacity: carboxyl (-COOH) and hydroxyl (-OH) groups on uronic acids and proteins, sulfhydryl (-SH) groups and nitrogen-containing ligands on proteins, chitin, and chitosan, and phosphate ((PO4)3-) groups (Shamim, 2018). The FTIR spectra of NU (native unloaded biomass), NL (native loaded biomass), TU (treated unloaded biomass), and TL (treated loaded biomass) are depicted in Figure 6a-d, respectively (Figure 6: FTIR spectra of F. solani: (a) native cells, (b) Zn(II)-loaded cells, (c) alkali-treated biomass, and (d) Zn(II)-loaded alkali-treated cells; biosorption conditions: initial pH 4 (alkali-treated biomass) and 5.0 (native biomass), initial Zn(II) concentration 600 mg/l, biosorbent dose 1.0 g/l, contact time 40 min (alkali-treated biomass) and 6 h (native biomass), temperature 30 °C at 140 rpm). The shift in wavenumber at 3424.96 cm-1 (NL) (Figure 6b) and 3429.78 cm-1 (TL) (Figure 6d) is assigned to the interaction of the -NH2 asymmetric stretching mode of amines and the -OH groups with the Zn(II) uptake (Gururajan and Belur, 2018). The disappearance of the 3008.4 cm-1 peak (NL) relates to C-H stretching frequencies (Bright et al., 2010). Changes in the peak intensity at 2924.52 and 2854.14 cm-1 (NU) and at 2925.48 and 2857.02 cm-1 (TL) can be attributed to CH3 symmetric stretching of proteins and lipids and to CH2 symmetric stretching, respectively (Zhang et al., 2015). The shift at 2363.34 cm-1 with an increase in intensity (NL) is due to asymmetric stretching of the -N=C=O- group (Mishra and Jha, 2009). The new band at 2065.39 cm-1, the disappearance of the 1745 cm-1 peak (NL and TL), and the shifts at 1641.13 cm-1 (NL) and 1644.02 cm-1 (TL) are due to the C=O stretching mode of the carbonyl group in esters, alcohols, and carboxylic acids. The noticeable shift from 1455.2 to 1417.4 cm-1 (TL) and 1422.24 cm-1 (NL) is due to C-N stretching, N-H bending vibration, and complexation with the N-H group (Hu et al., 2015); this shift also indicates that the acidic carboxyl and hydroxyl groups are chief agents in the uptake (El-Gendy et al., 2017). The shifts at 1547.59 cm-1 (Δ5 cm-1, NL) and 1562.06 cm-1 (Δ10 cm-1, TL) are attributed to N-H bending strongly coupled with C-N stretching (amide II band) (Feng et al., 2018). A marked shift at 1457.59 cm-1 (Δ35 cm-1, NU) and 1455.03 cm-1 (Δ38 cm-1, TL) is assigned to the CH3 asymmetric bending vibration of proteins (Ramalingam et al., 2014). The role of amide III, sulfonamide, and C(O)-O stretching vibrations is recognized in the disappearance of the peaks at 1380.78 and 1318.11 cm-1 (NU) and a new peak at 1317.14 cm-1 (TL). The shift at 1240.97 cm-1 is due to P=O asymmetric stretching of phosphodiesters in phospholipids. The disappearance of the peak at 1158.04 cm-1 (NL) is assigned to C-O stretching. The shift at 1075.12 cm-1 indicates Zn(II) interaction with sulfoxides (S=O stretching), sulfones, and sulfonic acids. The shift at 1031.73 cm-1 (NL) is due to the binding of heavy metals to phosphate groups (Mahmoud et al., 2011). El-Gendy et al. (2017) reported the binding of phosphorus compounds, C-N stretching, O-H bending, and sulfur compounds in the region 1000-1400 cm-1.
A very marked shift at 573.72 cm⁻¹ (Δ29 cm⁻¹, TL) and a new peak at 559.26 cm⁻¹ (NL) reveal C-S stretching. The absence of the peaks at 712.57 cm⁻¹ (NL) and 709.68 cm⁻¹ (TL) and the change in intensity of the peaks at 875 cm⁻¹ are assigned to the N-H wag of primary amines (Mishra and Jha, 2009). C-S stretching is also revealed by the appearance of a new band at 413.66 cm⁻¹ in the NL biomass. Similar results have been reported for soft metals, which form stable bonds with sulfur-containing (soft) ligands and with N-, S-, SH-, CN-, R-NH2-, and imidazole groups (Wang and Chen, 2006). The higher the covalent index (Xm²r, where Xm is the electronegativity and r the ionic radius), the greater the potential to form covalent bonds with biological ligands, in the order S > N > O (Chen and Wang, 2007). The electronegativity and ionic radius of Zn(II) are 1.65 and 139 pm, respectively, giving a covalent index of 3.78. After Zn(II) uptake, the total shifts in the TL biomass (Δ100 cm⁻¹) were more pronounced than in the NL biomass (Δ75 cm⁻¹); C-S stretching was more involved in the process for TL than for NL biomass, whereas the CH3 asymmetric bending vibration of proteins was involved equally. After Zn(II) biosorption, the intensities of all peaks increased in the case of NL but decreased in TL.
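As a quick arithmetic check of the covalent index just quoted (our sketch; it assumes the usual convention of expressing the ionic radius in angstroms, which reproduces the stated value of 3.78):

```python
# Covalent index Xm^2 * r for Zn(II), with the ionic radius converted
# from pm to angstroms (139 pm = 1.39 Å), matching the value in the text.
electronegativity = 1.65    # Pauling electronegativity of Zn
ionic_radius_pm = 139.0     # ionic radius used in the text, pm

covalent_index = electronegativity**2 * (ionic_radius_pm / 100.0)
print(f"covalent index of Zn(II) ≈ {covalent_index:.2f}")   # -> 3.78
```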
3.13.2. Energy dispersive X-ray (EDX) microanalysis
EDX analysis was used to confirm the presence of Zn(II) on the fungal cell surface. The EDX spectra of the NL (Figure 7b) and TL biomass (Figure 7d) were marked by the appearance of Zn(II) at 6.00 and 8.84 element%, respectively. At the same time, the P and S signals disappeared after Zn(II) uptake by the TL biomass, while the element% of P and S was reduced by 54.56% and 45.29%, respectively, after Zn(II) uptake by the NL biomass. It is reasonable to conclude that some organic sulfur and phosphorus compounds were released from the cells into the supernatant during Zn(II) uptake. Similarly, Na(I), Mg(II), K(I), Ca(II), and Cu(II) were released during biosorption. The release of such metal ions from biosorbents upon binding Zn(II) is usually regarded as an indicator of an ion-exchange mechanism of heavy metal binding (Patel et al., 2016; Reddad et al., 2002). Similar results were reported by Can and Jianlong (2008), who concluded that the exchange of K(I), Mg(II), Na(I), or Ca(II) for Zn(II) during biosorption by Saccharomyces cerevisiae indicated a certain degree of ionic binding interaction between Zn(II) and the biomass.

Conclusion
Fungi are among the most applicable microorganisms for the remediation of toxic heavy metals because of their powerful biosorption and biotransformation capacity; nevertheless, few studies have uncovered the mechanisms of fungal removal of heavy metals. Here, the patterns of growth, bioaccumulation, organic acid production, non-proteinaceous antioxidants, and antioxidative enzymes of F. solani in response to Zn(II) were determined. Oxalic acid production by F. solani increased 10-fold in the presence of Zn(II) relative to the control. The extent of Zn(II) biosorption depends strongly on the biomass treatment, pH, initial metal ion dose, incubation temperature, and contact time.

Author contribution statement
Manal T. El Sayed: Conceived and designed the experiments; Performed the experiments; Wrote the paper.
Ashraf S.A. El-Sayed: Analyzed and interpreted the data; Wrote the paper.

Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
8,498
2020-09-01T00:00:00.000
[ "Environmental Science", "Biology" ]
ESCAPE OF RESOURCES IN A DISTRIBUTED CLUSTERING PROCESS

In a distributed clustering algorithm introduced by Coffman, Courtois, Gilbert and Piret [1], each vertex of Z^d receives an initial amount of a resource, and, at each iteration, transfers all of its resource to the neighboring vertex which currently holds the maximum amount of resource. In [4] it was shown that, if the distribution of the initial quantities of resource is invariant under lattice translations, then the flow of resource at each vertex eventually stops almost surely, thus solving a problem posed in [2]. In this article we prove the existence of translation-invariant initial distributions for which resources nevertheless escape to infinity, in the sense that the final amount of resource at a given vertex is strictly smaller in expectation than the initial amount. This answers a question posed in [4].

Definitions and statement of the main result
Consider, for d ≥ 1, the d-dimensional integer lattice. This is the graph with vertex set Z^d, and edge set comprising all pairs of vertices (x, y) (= (y, x)) with |x − y| = 1. Here |·| denotes the 1-norm. We use the notation Z^d for this graph as well as for its vertex set; it will be clear from the context which of the two is meant.

The following model for 'distributed clustering' was introduced by Coffman, Courtois, Gilbert and Piret [1]. To each vertex x of the lattice Z^d, we assign a random nonnegative number C_0(x) ∈ [0, ∞] which we regard as the initial amount of a 'resource' placed at x at time 0. (The family (C_0(x); x ∈ Z^d) is not necessarily assumed independent.) Then we define a discrete-time evolution in which, at each step, each vertex transfers its resource to the 'richest' neighbouring vertex. More precisely, the evolution is defined recursively as follows. Suppose that, at time n, the amount of resource at each vertex x is C_n(x). Let N(x) = {y ∈ Z^d : |x − y| ≤ 1} be the neighbourhood of x (note that it includes x itself) and define

M_n(x) = { y ∈ N(x) : C_n(y) = max_{z ∈ N(x)} C_n(z) }.

Now let a_n(x) be a vertex chosen uniformly at random in M_n(x), independently for each x, and take

C_{n+1}(x) = Σ_{y ∈ N(x) : a_n(y) = x} C_n(y).

For a fixed vertex x, the random variable C_0(x) will be called the initial amount of resource at x, and the family (C_0(x); x ∈ Z^d) will be called the initial configuration. Analogously, (C_n(x); x ∈ Z^d) will be called the configuration at time n. Note that a_n(x) is the vertex to which the resources located at x at time n (if any) will be transferred during the (n + 1)-th step of the evolution. We say that there is a tie at x at time n if C_n(x) > 0 and the cardinality of M_n(x) is strictly greater than one. In case this occurs, a_n(x) is chosen uniformly at random among the vertices around x that maximize C_n. Note that, apart from those possible tie breaks, all the randomness is contained in the initial configuration. As soon as a vertex has zero resource, its resource remains zero forever. Also note that, when two or more vertices transfer their resources to the same vertex, these resources are added up. Thus this algorithm models a clustering process in the lattice starting from a disordered initial configuration. For a fixed vertex x, we use the notation C_∞(x) for lim_{n→∞} C_n(x) in case this limit exists. We write E for expectation with respect to the underlying probability measure. Our main result is the following theorem.

Theorem 1. For each d ≥ 2 there exists a translation-invariant distribution for the initial configuration such that, for every vertex x,

E[C_∞(x)] < E[C_0(x)].   (1)

The proof is given in Section 3.
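To make the update rule concrete, here is a small simulation sketch (ours, not from the paper): it runs the dynamics on a finite two-dimensional torus, which only approximates the infinite lattice Z^d, with i.i.d. uniform initial resources.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(C, rng):
    """One iteration: every vertex sends all of its resource to the
    richest vertex in its closed neighbourhood (ties broken uniformly)."""
    n = C.shape[0]
    new = np.zeros_like(C)
    for x in range(n):
        for y in range(n):
            # closed neighbourhood: the site itself plus its 4 lattice
            # neighbours, with periodic (torus) boundary conditions
            nbrs = [(x, y), ((x + 1) % n, y), ((x - 1) % n, y),
                    (x, (y + 1) % n), (x, (y - 1) % n)]
            values = [C[p] for p in nbrs]
            m = max(values)
            argmax = [p for p, v in zip(nbrs, values) if v == m]
            target = argmax[rng.integers(len(argmax))]
            new[target] += C[x, y]
    return new

C = rng.random((20, 20))           # i.i.d. uniform initial resources
total = C.sum()
for t in range(30):
    C = step(C, rng)
assert np.isclose(C.sum(), total)  # on a finite torus resource is conserved
print("occupied sites after 30 steps:", int((C > 0).sum()))
```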
Background and motivation
Here is some more terminology. If, for all sufficiently large n, we have that a_n(x) = x and a_n(y) ≠ x for all neighbours y of x, then we say that the flow at x terminates after finitely many steps. In that case, the limit C_∞(x) is attained after finitely many iterations and will be called the final amount of resource at x. If for all sufficiently large n we have a_{n+1}(x) = a_n(x), then we say that x eventually transfers its resource to the same fixed vertex.

The following stability questions for this process (formulated here similarly as in [4]) have been investigated in the literature:

Question 1. Does each vertex eventually transfer its resource to the same fixed vertex almost surely?
Question 2. Does the flow at each vertex terminate after finitely many steps almost surely?
Question 3. If the answer to the previous question is affirmative, is the expected final amount of resource of a vertex equal to the expected initial amount?

Of course the answers to the above questions may depend on the assumptions made about the distribution of the initial configuration. Note that if the answer to Question 2 is affirmative, then so is the answer to Question 1. In that case, answering Question 3 is equivalent to answering the question whether the resource quantity that started on a given vertex will eventually stop moving almost surely. So, informally, Question 2 is related to fixation while Question 3 is related to conservation.

Van den Berg and Meester [2] considered the case d = 2 and i.i.d. initial resource quantities. Using translation-invariance and symmetries of the system they proved that the answer to Question 1 is positive in the case that the initial quantities of resource have a continuous distribution. They also showed that, if the resources are integer valued, then Question 2 has a positive answer as well.

Later, van den Berg and Ermakov [3] considered again i.i.d. continuously distributed initial quantities of resource on the two-dimensional lattice. Using a percolation approach, they were able to relate Questions 2 and 3 to a finite (but large) computation. By Monte Carlo simulation, they obtained overwhelming evidence that the answer to these questions is positive for this case.

In [4] it was proved that, for every dimension and every translation-invariant distribution of the initial configuration, the answer to Question 2 is positive. However, Question 3 was left open. Our Theorem 1 says that, for some initial distributions in this class, the answer to that question is negative.

The conclusion of Theorem 1 is false for d = 1. To see that, suppose that the probability that the resource starting at the origin does not stop after finitely many steps is positive. Then, by translation invariance, there is, with positive probability, a positive density of vertices for which the initial resource will not stop after finite time. This implies that, with positive probability, there are infinitely many steps at which resource enters or leaves the origin, contradicting the fixation result of [4] mentioned in the previous paragraph. This argument can be generalized, for example, to any graph of the form Z × G, where G is a finite vertex-transitive graph. (For such graphs translation-invariance is replaced with automorphism-invariance.)

In order to prove Theorem 1, we will construct a random collection (forest) of one-ended trees, embedded in Z^d in a translation-invariant way, and then assign resource quantities to the vertices in such a way that, during the evolution, each resource follows the unique infinite self-avoiding path to infinity in the forest. In Section 2 we present a short discussion of the existence of certain random forests on Z^d. In Section 3, Theorem 1 is proved. In Section 4 we present some concluding remarks and open questions.
Translation-invariant forests on Z^d
Let G be an infinite graph. A forest of G is a subgraph of G that has no cycles. A tree is a connected forest. A subgraph spans G if it contains every vertex of G. A spanning forest (respectively tree) of G is a subgraph of G that is a forest (respectively a tree) and that spans G. The leaves of a forest T are the vertices of T that have only one neighbour in the forest. The number of ends of a tree is the number of distinct self-avoiding infinite paths starting from a given vertex. A tree is said to be one-ended if it has one end.

We choose the d-dimensional integer lattice as the underlying graph. For this choice, the literature provides several constructions of random spanning forests with translation-invariant distributions, for example the uniform spanning tree [5] and the minimal spanning tree [6]. To be explicit, we briefly discuss one construction, based on the two-dimensional minimal spanning tree. Let E be the set of edges of the lattice Z^2, and let (U_e; e ∈ E) be a family of independent random variables distributed uniformly in the interval [0, 1]. For each cycle of the lattice, delete the edge having the maximum U-value on the cycle. The resulting random graph is called the (free) minimal spanning forest and is known to be almost surely a one-ended tree which is invariant and ergodic under lattice translations (see [7]).

For d > 2, we can use the two-dimensional minimal spanning forest to construct a random forest in Z^d whose distribution is invariant under lattice translations and all of whose components are one-ended. We regard Z^d as Z^2 × Z^(d−2) and in each 'layer' Z^2 × {z} (where z runs over Z^(d−2)) we embed an independent copy of the two-dimensional minimal spanning tree T_z. The resulting subgraph of Z^d is a translation-invariant random spanning forest with one-ended components. This gives the following lemma.

Lemma 2. For each d ≥ 2 there exists a translation-invariant random spanning forest of Z^d all of whose connected components are one-ended almost surely.

Corollary 3. For each d ≥ 2 there exists a translation-invariant random forest T on Z^d for which the following two properties hold almost surely.
(i) Every connected component of T is one-ended.
(ii) Every edge of Z^d of which both endpoints are in T is an edge of T.

Proof. Let H be a spanning forest as in Lemma 2 and write F for the set of its edges and V for the set of its vertices. Let H' be the forest with vertex set {2x : x ∈ V} ∪ {x + y : (x, y) ∈ F} and edge set E' = {(2x, x + y) : (x, y) ∈ F}. Informally, H' corresponds to the forest which is obtained when H is scaled up by a factor 2. Thus to each edge (x, y) of H there correspond two edges, (2x, x + y) and (x + y, 2y), in H'. Note that H' is a random forest which is invariant under translations of 2Z^d, and which has the property that every pair x, y of its vertices satisfying |x − y| = 1 is connected by an edge of H'. To restore invariance under all translations of Z^d, let W be a uniformly random element of the discrete cube {0, 1}^d, independent of H, and set T = H' + W.

Proof of main result
In this section we fix d ≥ 2.
We will prove Theorem 1 by giving an explicit construction of an initial configuration (C_0(x); x ∈ Z^d) whose distribution is translation-invariant and for which (1) holds. Let T be a random forest on Z^d as given by Corollary 3. For vertices x and y of T we write x ∼ y if (x, y) is an edge of T. We define a (random) partial order ≤ on Z^d by setting y ≤ x if and only if x and y are vertices of T and x belongs to the unique infinite self-avoiding path in T starting at y. If y ≤ x we say that x is an ancestor of y and that y is a descendant of x. If y ≤ x and x ∼ y we say that x is a parent of y and that y is a child of x. Note that every vertex of T has a unique parent. Moreover, for every vertex x of T, exactly one vertex in {y : y ∼ x} is the parent of x, and the others are descendants of x. We now define, for each x ∈ Z^d, the initial quantity of resource at x by

C_0(x) = the number of descendants of x in T if x ∈ T, and C_0(x) = 0 otherwise.   (2)

Note that, if x ∈ T, then C_0(x) is the number of descendants of x. Since every connected component of T is one-ended, it follows from the definitions that this number is finite. Also note that, since the distribution of T is invariant under the translations of Z^d, so is that of the family (C_0(x); x ∈ Z^d). We now define a nested ('decreasing') sequence of forests that will be shown to describe the dynamics of resources when C_0(x) is given by (2). For a forest S, let φ(S) denote the forest obtained from S by deleting all its leaves. Let T_0 = T and, for n = 1, 2, ..., define inductively T_n = φ(T_{n−1}). The following observation follows easily from the definitions.

Observation 4. Let x be a vertex of T and n ≥ 0. Then x is in T_{n+1} if and only if some child of x is in T_n.

Lemma 5. For every vertex x in T, there is a finite index n_0 (depending on x) such that, for all n ≥ n_0, x does not belong to T_n.

Proof. By Observation 4, n_0(x) is at most 1 plus the number of descendants of x. As we mentioned before, this number is finite.

Lemma 6. Suppose that, for all x, C_0(x) is given by (2). Then for all n ≥ 0,

C_n(x) = #{y : y a descendant of x with d_T(x, y) > n} for x ∈ T_n, and C_n(x) = 0 for x ∉ T_n,   (3)

where d_T denotes graph distance in T.

Proof. We use induction on n. To verify (3) for n = 0 we note that, if x belongs to T_0 (= T) then, by (2),

C_0(x) = #{y : y a descendant of x} = #{y : y a descendant of x with d_T(x, y) > 0}.   (4)

Now, suppose that (3) holds for a given n. Since T was taken as in Corollary 3, two vertices of T_n which are adjacent in Z^d must be linked by an edge of T_n. By this and (3) it follows that, for each vertex z of T_n, a_n(z) is the parent of z. Therefore, and because C_n ≡ 0 outside T_n, we have

C_{n+1}(x) = Σ_{z ∈ T_n : z a child of x} C_n(z).   (5)

By applying (5), (3) and Observation 4 (and noting that (5) also holds for x ∉ T_{n+1}, since then both sides of (5) are equal to 0), we get, for x ∈ T_{n+1},

C_{n+1}(x) = #{y : y a descendant of x with d_T(x, y) > n + 1}.   (6)

Now (4) and (6) complete the induction step, and the proof of Lemma 6.

Proof of Theorem 1. Let the initial configuration be defined as in (2). Let x ∈ Z^d. By Lemma 6 and Lemma 5, we have that almost surely C_n(x) = 0 for all sufficiently large n. Hence C_∞(x) = 0 almost surely. On the other hand, it is clear that C_0(x) > 0 with positive probability, and hence E[C_0(x)] > 0.

Concluding remarks and open problems
At the end of the proof of Theorem 1 we mentioned the obvious fact that E[C_0(x)] > 0 for every x. It turns out that this expectation is even ∞. Indeed, we have

E[C_0(x)] = Σ_y P(y ≤ x) = Σ_z P(z ≤ 0) = Σ_z P(0 ≤ −z) = Σ_w P(0 ≤ w) = E[#{w : 0 ≤ w}] = ∞,

where the second and fourth equalities follow by relabelling, the third equality follows by translation-invariance, and the final divergence follows from the fact that x has infinitely many ancestors almost surely.
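As an informal numerical illustration of Lemma 6 (our own sketch, not from the paper): on a finite stand-in for the one-ended tree, give each vertex one unit per strict descendant and let every vertex ship its pile to its parent, which here is always its richest neighbour; the pile sizes then track the leaf-peeled forests T_n.

```python
# A path 0-1-2-3-4-5-6 oriented towards 6 (the surrogate for "infinity"),
# plus an extra leaf 7 attached to vertex 2.  parent[v] is the unique parent;
# vertex 6 has no parent and simply accumulates whatever reaches it.
parent = {0: 1, 1: 2, 7: 2, 2: 3, 3: 4, 4: 5, 5: 6}

children = {}
for v, p in parent.items():
    children.setdefault(p, []).append(v)

def descendant_distances(v):
    """Distances from v to each of its strict descendants."""
    dists, stack = [], [(c, 1) for c in children.get(v, [])]
    while stack:
        u, d = stack.pop()
        dists.append(d)
        stack.extend((c, d + 1) for c in children.get(u, []))
    return dists

# Initial configuration (2): one unit of resource per strict descendant.
C = {v: len(descendant_distances(v)) for v in range(8)}

for n in range(1, 4):
    new = dict.fromkeys(range(8), 0)
    for v, amount in C.items():
        new[parent.get(v, v)] += amount   # every pile moves to the parent
    C = new
    # Lemma 6 predicts C_n(v) = #{descendants of v at distance > n}.
    for v in (0, 1, 2, 3, 4, 5, 7):       # skip the absorbing vertex 6
        assert C[v] == sum(1 for d in descendant_distances(v) if d > n)

print("Lemma 6 verified on the toy tree; piles after 3 steps:", C)
```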
We have not been able to construct an example where resources escape to infinity but the initial amount of resource at a given vertex has finite expectation. It is an interesting question whether such examples exist. In particular, in our construction the initial configuration was chosen in such a way that, almost surely, the induced dynamics takes place in a forest with one-ended components, embedded in Z^d, and, at each step, the resources are transferred from every vertex with non-zero resource to its parent. It is not clear if for every initial configuration with these properties the expectation of the initial amount of resource of a vertex is infinite. We state these considerations more formally in the following two questions.

Question 4. Suppose that (C_0(x); x ∈ Z^d) has a translation-invariant distribution and is positive exactly on the vertices of a forest with one-ended components. Furthermore, suppose that during the n-th step of the evolution, every vertex x for which C_{n−1}(x) > 0 transfers its resource to its parent. Is it the case that E[C_0(x)] = ∞?

Question 5. Does there exist a translation-invariant distribution for the initial configuration for which E[C_∞(x)] < E[C_0(x)] < ∞?

A negative answer to Question 4 would yield a positive answer to Question 5.
3,836
2010-09-30T00:00:00.000
[ "Computer Science", "Mathematics" ]
Research on Algorithms for Multi-Vector Attitude Determination

Various attitude estimation methods are based on an optimization problem posed in 1965 by Grace Wahba, which is called Wahba's problem in the field of attitude estimation. As a key to attitude determination, many different algorithms for minimizing Wahba's loss function have been proposed in the past 60 years. Among the most representative are the Quaternion Estimator (QUEST), Singular Value Decomposition (SVD), and Fast Optimal Attitude Matrix (FOAM), which are briefly introduced in this research paper; new algorithms proposed in recent years, such as the Fast Linear Quaternion Attitude Estimator (FLAE), are also included. The calculation principles and derivation processes are given in the article, and simulations under high-noise conditions have been completed for the algorithms mentioned. Finally, several practical engineering applications related to Wahba's problem are introduced and the future development trend of attitude estimation is discussed.

Introduction
Attitude determination is a measurement technology based on relative positioning [1]. By observing two or more baselines at the same time, the three-dimensional attitude of the carrier can be calculated, which is widely used in aerial, marine, and land navigation. Generally, there are two kinds of methods to determine the three-axis attitude: (1) deterministic algorithms such as TRIAD; (2) optimization algorithms such as QUEST or SVD. When a double baseline is used to solve the attitude determination problem, the deterministic algorithm and several optimal algorithms can all work, but the various algorithms perform differently in specific application scenarios. In the deep study of the optimal algorithms, many scholars focused for years on reducing the computational power consumption of the algorithms. This is mainly because of the limited computing power of early computers. Researchers have proposed a series of improved fast quaternion optimization algorithms such as FOAM (Fast Optimal Attitude Matrix), ESOQ (EStimator of the Optimal Quaternion), and ESOQ2 (Second ESOQ), which are mainly intended to reduce the amount of calculation and improve robustness. In this period, researchers generally believed that quaternions involve fewer constraints than the full attitude matrix, which reduces the number of floating-point multiplications required for each operation [2]. The early research on attitude determination algorithms mainly comes from solving the problem of satellite attitude determination. The attitude of the body in the inertial coordinate system can be obtained from the installation position of the star sensor on the body. With the continuous progress of navigation technology, attitude determination is also widely used in the static coarse alignment of the IMU (Inertial Measurement Unit) and in information-fusion attitude determination for multi-source navigation systems. The projection of gravity in two inertial coordinate systems is used to construct the observation vectors and solve Wahba's problem to determine the attitude matrix at the initial time. In recent years, attitude determination algorithms have also been applied in new fields of machine vision, including image mosaicking, visual measurement, and so on. Geometric constraints constructed from image feature information compensate for relative attitude determination and observation errors among multiple unmanned platforms.
This paper aims to sort out the representative attitude determination algorithms proposed since the 1960s and to derive seven different attitude optimization algorithms such as SVD, QUEST, ESOQ, and FLAE. In the first section, this paper briefly describes Wahba's problem and introduces the development of attitude measurement algorithms. The second section focuses on comparing the various optimization algorithms based on Wahba's problem and summarizes their mathematical principles in terms of the different expressions of the loss function and the methods used to obtain a final solution. Then simulations are carried out to show the performance of each algorithm under high observation noise. The rest of the paper briefly illustrates the application of attitude optimization algorithms in several typical scenes, such as posture tracking, image mosaicking, and moving-base alignment, and explores future applications of attitude measurement algorithms in new scenes. Finally, we summarize the advantages and disadvantages of each algorithm in practical engineering applications. The applicability and validity of other intelligent algorithms that may be used to solve the attitude determination problem are also mentioned in the conclusion.

Development of Attitude Determination Algorithms
In 1964, Harold Black proposed the algebraic method TRIAD (Tri-axial Attitude Determination) to obtain the attitude transformation matrix from two sets of observation vectors. In 1965, Grace Wahba proposed the least-squares optimization problem (often called Wahba's problem) of star attitude determination using observation vectors. On this basis, many researchers carried out research on optimization algorithms for attitude calculation. The first solution of Wahba's problem, given in 1966 by J. L. Farrell, J. C. Stuelpnagel, R. H. Wessner, J. R. Velman, E. Brock, and R. Desjardins together with Wahba, was an immature algorithm requiring at least three groups of accurate observation vectors to calculate the attitude transformation matrix, placing an unnecessary burden on computing power and observation strategy [3]. The first algorithm able to deal properly with Wahba's problem at the technical level of the time was proposed by Davenport in 1968 [5]; it replaces the attitude transformation matrix with a quaternion to reduce the number of unknown parameters, so as to solve Wahba's problem at a lower computational cost. In 1979, in order to complete the Magsat (Geomagnetic Satellite) mission, the QUEST algorithm was designed [6]; it has been used until now and is the most widely used attitude measurement algorithm. SVD was proposed by Markley in 1988 [4], but it was not widely used in practice at the time because of its large amount of calculation when computational power was seriously limited. Using the adjoint matrix of the vector observation matrix, Markley proposed the FOAM algorithm in 1993, which does not need the singular value decomposition of the matrix either. The formulation ensures that FOAM is at least as fast as the QUEST algorithm when a Newton iteration is employed to calculate the maximum eigenvalue, because the iteration and normalization are much simpler [7].
In 1997, based on Shuster's proof, Mortari designed ESOQ [8], using the Gibbs vector and an analytical method to obtain the optimal quaternion; later in the same year, the improved ESOQ algorithm was launched as ESOQ2 [9]. Compared with the former, the improved algorithm uses only one sequential rotation to avoid introducing a singularity, making the new algorithm more robust and fast. In the relevant research before 2000, because of the limited computing power of computers and the extremely accurate observations provided by star sensors in the intended applications, algorithm improvements were more inclined to be fast and simple, and paid little attention to enhancing robustness in the presence of observation errors. In this century, with the rapid development of computing power, Wahba's problem has been applied in more fields. Yang used the general root of the quartic equation to solve for the optimal quaternion in 2013. The algorithm is fast, but there is still a problem: the characteristic polynomial may have no real root, which leads to a complex-valued quaternion [10]. So, in 2015, Yang introduced the idea of Riemannian manifolds and developed a more robust iterative method [11]. In 2018, Wu et al. named their proposed method the fast linear attitude estimator (FLAE) because it is faster than the known representative algorithms [12]. In this method, Wahba's problem is transformed into several one-dimensional equations based on quaternions. Then the linear solution of the multi-dimensional equation equivalent to the traditional Wahba's problem is established using the pseudo-inverse matrix. Analytical and iterative methods for solving the eigenvalues are provided at the same time. The main classical methods above are mostly based on Davenport's Q-method, which requires solving the matrix characteristic polynomial. In fact, some matrix operations (such as obtaining determinants and adjoint matrices) may be too complex for batch processing. We can also observe that it is difficult for all existing methods to balance robustness against time consumption, because fast methods are not always robust and vice versa.

Wahba's Problem and Loss Function. Wahba's problem, in short, converts direction measurements into an attitude estimate. Its specific statement is as follows: suppose there are n unit vectors v_i, i = 1, ..., n, known in the inertial (reference) coordinate system, and let b_i, i = 1, ..., n, be the values obtained by measuring these unit vectors in the IMU (body) coordinate system. The problem is to find the rotation matrix C that minimizes the loss function

L(C) = (1/2) Σ_{i=1}^{n} w_i ||b_i − C v_i||²,   (1)

where the w_i are weight factors for each observation vector; w_i, i = 1, ..., n, are a set of positive weights satisfying Σ_{i=1}^{n} w_i = 1, usually chosen as w_i ∝ 1/σ_i², with σ_i² the variance parameters of the measurement vectors. The rotation matrix has three degrees of freedom, and each vector measurement has two degrees of freedom. Therefore, when there is only one vector measurement this is an under-constrained problem; that is, there are countless rotation matrices C that realize L(C) = 0. When there are two or more vector measurements it is an over-constrained problem and, unless the vector measurements have no error at all, L(C) > 0.
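A small sketch of evaluating the loss (1) on synthetic data (our variable names; the random-rotation helper and the noise level are illustrative assumptions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def wahba_loss(C, v_ref, b_meas, w):
    """L(C) = 1/2 * sum_i w_i * ||b_i - C v_i||^2  (rows are unit vectors)."""
    residuals = b_meas - v_ref @ C.T
    return 0.5 * np.sum(w * np.sum(residuals**2, axis=1))

def random_rotation(rng):
    """Random rotation matrix via QR decomposition (a common sketch)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix column signs
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]            # force det = +1
    return q

C_true = random_rotation(rng)
v_ref = rng.normal(size=(5, 3))
v_ref /= np.linalg.norm(v_ref, axis=1, keepdims=True)   # unit reference vectors
b_meas = v_ref @ C_true.T + 0.05 * rng.normal(size=v_ref.shape)
b_meas /= np.linalg.norm(b_meas, axis=1, keepdims=True)
w = np.full(5, 1.0 / 5.0)                               # normalized weights

print("loss at true attitude:", wahba_loss(C_true, v_ref, b_meas, w))
print("loss at identity:     ", wahba_loss(np.eye(3), v_ref, b_meas, w))
```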
Besides, in Wahba's problem the vectors v_i have been normalized, so we only need to consider the direction of v_i, not its modulus. In mathematical statistics there is a method for converting a quadratic form into the trace of a matrix, the so-called 'trace trick.' Applying it to (1) gives

L(C) = Σ_i w_i − tr(C Bᵀ),  with  B = Σ_i w_i b_i v_iᵀ.   (2)

After this transformation, the problem of minimizing the loss function L(C) over the attitude matrix becomes the problem of maximizing tr(B Cᵀ). In addition, Davenport converted the direction cosine matrix into a quaternion to establish new constraints. By introducing a Lagrange multiplier, Wahba's problem is transformed into the following equation:

g(q) = qᵀ K q − λ (qᵀ q − 1),   (3)

where K is the 4 × 4 Davenport matrix defined in (11) below. In the different algorithms, Wahba's problem is transformed in different ways so as to improve performance, but they all adopt the same loss function, so the transformed forms are equivalent.

Introduction to Various Optimization Algorithms

3.2.1. SVD Algorithm. Expanding (1), and using the fact that the b_i and v_i are unit vectors, gives the reconstructed loss function of the SVD algorithm:

||b_i − C v_i||² = 2 (1 − b_iᵀ C v_i).   (5)

Substituting (5) into the original loss function we get

L(C) = Σ_i w_i − Σ_i w_i b_iᵀ C v_i.   (6)

Because the observations are fixed, Σ_i w_i in equation (6) is a known constant, so we need to maximize Σ_{i=1}^{n} w_i b_iᵀ C v_i in order to minimize (6). Suppose the matrix B is invertible, with singular value decomposition

B = U Σ Vᵀ.   (7)

Replacing B in tr(B Cᵀ) by its decomposition (7), the following is obtained:

tr(B Cᵀ) = tr(U Σ Vᵀ Cᵀ) = tr(Σ (Uᵀ C V)ᵀ).   (8)

Set C* = Uᵀ C V; then it is easy to see that C* is a unit orthogonal matrix, and tr(B Cᵀ) attains its maximum only when C* = I, i.e. for the optimal attitude matrix

C_opt = U Vᵀ.   (9)

3.2.2. Davenport's q-method. In 1968, Davenport provided a solution to Wahba's problem by parameterizing the attitude matrix with a unit quaternion. In transforming Wahba's problem, Davenport's method is consistent with Markley's SVD. Replacing the attitude matrix by the quaternion q = [q₁ q₂ q₃ q₄]ᵀ, we get the gain function

g(q) = qᵀ K q,   (10)

where

K = [[S − σI₃, z], [zᵀ, σ]],  S = B + Bᵀ,  σ = tr(B),  z = [B₂₃ − B₃₂, B₃₁ − B₁₃, B₁₂ − B₂₁]ᵀ.   (11)

Using a Lagrange multiplier to find the maximum of (10) under the constraint qᵀq = 1, we get

K q = λ q.   (12)

If and only if λ equals the maximum eigenvalue λ_max of the matrix K is the corresponding eigenvector the optimal quaternion q_opt.

3.2.3. QUEST. The QUEST algorithm likewise applies Davenport's quaternion formulation. As mentioned in paper [6], we can rearrange (12) as

(S − σI₃) q_{1:3} + q₄ z = λ q_{1:3},   zᵀ q_{1:3} + σ q₄ = λ q₄.   (13)

Describing q_{1:3} in terms of the Gibbs vector

y = q_{1:3} / q₄,   (14)

we get

y = [(λ + σ) I₃ − S]⁻¹ z,   (15)

and inserting (14) into (15) leads to an equation for the eigenvalues,

λ = σ + zᵀ [(λ + σ) I₃ − S]⁻¹ z.   (16)

Equation (16) is equivalent to the characteristic equation for the eigenvalues of K. Considering that the Gibbs vector becomes infinite when the rotation angle is π, Shuster developed a more accurate method that avoids the problems posed by this singularity, deriving an expression that permits the computation of q_opt without the intermediary of the Gibbs vector. Any eigenvalue ξ of the 3 × 3 matrix S satisfies the characteristic equation of S and, by the Cayley-Hamilton theorem [39], S satisfies this same equation; in this way we get the convenient expression

[(λ + σ) I₃ − S]⁻¹ = (α I₃ + β S + S²) / γ,  α = λ² − σ² + κ,  β = λ − σ,  γ = (λ + σ) α − det(S),

where σ = tr(B) and κ = tr(adj(S)), so that q_opt ∝ [((α I₃ + β S + S²) z)ᵀ, γ]ᵀ. In Shuster's paper [6] it is observed that when the loss function is at its minimum, λ_max is very close to unity (for weights normalized so that Σ w_i = 1). In this way the solution process is significantly shortened, and the Newton-Raphson method [39] applied to equation (23) with unity as the starting value allows λ_max to be computed to arbitrarily high accuracy.
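A compact sketch of Davenport's q-method as just described, with numpy's symmetric eigensolver standing in for the specialized root-finders discussed below (variable names are ours):

```python
import numpy as np

def davenport(v_ref, b_meas, w):
    """Davenport's q-method: the optimal quaternion q = [q1 q2 q3 q4]
    (vector part first, scalar last) is the eigenvector of K belonging
    to its largest eigenvalue."""
    B = (w[:, None] * b_meas).T @ v_ref          # B = sum_i w_i b_i v_i^T
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = sigma
    evals, evecs = np.linalg.eigh(K)             # ascending eigenvalues
    return evecs[:, -1], evals[-1]               # q_opt (up to sign), lambda_max

# Example: two noise-free orthogonal observations rotated 90 degrees about z.
v = np.array([[1.0, 0, 0], [0, 1.0, 0]])
Cz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
b = v @ Cz.T
q, lam = davenport(v, b, np.array([0.5, 0.5]))
# Noise-free, normalized weights: lambda_max = 1; the quaternion's vector
# part lies along z with |q3| = |q4| = 0.7071 (sign of q is arbitrary).
print("q_opt =", np.round(q, 4), " lambda_max =", round(lam, 6))
```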
Besides, Markley expresses Davenport's eigenvalue condition as [40]

[(λ_max + tr(B)) I₃ − S] q_{1:3} = q₄ z.   (24)

As mentioned in (15), we can also describe the optimal quaternion as

q_opt = (1 / √(1 + ||y||²)) [yᵀ, 1]ᵀ,   (25)

where y is the Gibbs vector of (14). Substituting (24) into (23) gives a quartic equation for the maximum eigenvalue,

λ⁴ − (a + b) λ² − c λ + (a b + c σ − d) = 0,  a = σ² − κ,  b = σ² + zᵀz,  c = Δ + zᵀ S z,  d = zᵀ S² z,   (26)

with Δ = det(S); (26) is just the characteristic equation of the matrix K. The QUEST algorithm finds the largest root by Newton-Raphson iteration of (26) with the starting value λ₀ = Σ_i w_i, and then solves (24) to find the optimal quaternion.

3.2.4. FOAM. As mentioned above, the SVD algorithm consumed a great deal of computing power when first proposed, because it involves the singular value decomposition of a matrix, and it was not popularized in that era. However, the optimal attitude matrix C calculated by singular value decomposition has excellent robustness, so in 1993 Markley expressed the singular value decomposition of the matrix using the adjoint matrix, the determinant, and the Frobenius norm. For an m × n matrix A, the Frobenius norm can be defined as

||A||_F = √(Σ_{i,j} a_{ij}²) = √(tr(A Aᵀ)).

By calculating the adjoint matrix, determinant, and Frobenius norm of the matrix B, its singular value decomposition can be expressed without being computed explicitly (28). Substituting (28) into (9), we get

C_opt = [ (κ + ||B||_F²) B + λ_max adj(Bᵀ) − B Bᵀ B ] / ζ,   (29)

where

ζ = κ λ_max − det(B),   (30)
κ = (λ_max² − ||B||_F²) / 2.   (31)

It can be seen that the problem again focuses on finding the parameter λ_max. From (3) one obtains an equation for λ_max (32), which simplifies to

(λ_max² − ||B||_F²)² − 8 λ_max det(B) − 4 ||adj(B)||_F² = 0.   (33)

Similar to the QUEST algorithm, an accurate solution of (33) can be obtained by the Newton-Raphson method. Finally, the optimal attitude matrix is calculated from (29)-(31).

3.2.5. ESOQ. ESOQ and its improved version ESOQ2 emphasize the rapidity of the algorithm; they were the fastest attitude solution algorithms, with the fewest floating-point operations, at the time. ESOQ analyzes the characteristic polynomial of the matrix K and obtains its eigenvalues. In reference [8], Mortari gave methods for solving for q using a four-dimensional cross product and matrix inversion, respectively. First, moving the right-hand term of equation (12) to the left, one obtains the matrix

H = λ_max I₄ − K,   (34)

where the λ's are the eigenvalues of K, ordered λ₄ ≤ λ₃ ≤ λ₂ ≤ λ₁ = λ_max. Therefore, all admissible q_k can be obtained after an appropriate matrix transformation of H:

(q_k)_i ∝ (−1)^(k+i) det(H_{ki}),   (35)

where H_{ki} denotes the 3 × 3 matrix obtained from H by deleting the k-th row and the i-th column. Four quaternions can be obtained by solving (34) and (35). These four quaternions should be exactly parallel, differing only in modulus. In order to enhance the reliability of the result, the quaternion with the largest modulus is selected as the optimal one; (34) and (35) constitute the four-dimensional cross product solution mentioned above. The matrix inversion method uses the adjoint matrix of H to calculate the optimal quaternion,

q_opt = c · adj(H) e_k,   (36)

where e_k picks any nonzero column of adj(H) and the parameter c in equation (36) is determined by normalizing the quaternion. The four-dimensional cross product method and the matrix inversion method do not introduce matrix singularities, so using H to solve for q is a more stable approach, and the low demand for floating-point operations makes it suitable for fast optimal attitude estimation or large observation sets. However, the eigenvalues computed in ESOQ while solving for the maximum eigenvalue may be complex; this scheme was therefore considered unsuitable for solving (33) by the Newton-Raphson method. In the improved ESOQ2 algorithm, a vector transformation is used to calculate the maximum eigenvalue of the matrix K.
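A QUEST-flavoured sketch of the two steps just described, Newton-Raphson for λ_max followed by quaternion recovery through the Gibbs vector (ours; for brevity the quartic coefficients are taken from numpy's characteristic polynomial rather than from the closed-form coefficients of (26)):

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy synthetic observations of a known 0.4 rad rotation about z.
th = 0.4
C_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
v = rng.normal(size=(6, 3)); v /= np.linalg.norm(v, axis=1, keepdims=True)
b = v @ C_true.T + 0.01 * rng.normal(size=v.shape)
b /= np.linalg.norm(b, axis=1, keepdims=True)
w = np.full(6, 1.0 / 6.0)

B = (w[:, None] * b).T @ v
S = B + B.T
sigma = np.trace(B)
z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
K = np.block([[S - sigma * np.eye(3), z[:, None]],
              [z[None, :], np.array([[sigma]])]])

# Newton-Raphson for lambda_max, started at sum(w) = 1 as in QUEST.
coeffs = np.poly(K)                 # monic quartic coefficients of K
dcoeffs = np.polyder(coeffs)
lam = w.sum()
for _ in range(20):
    lam -= np.polyval(coeffs, lam) / np.polyval(dcoeffs, lam)

# Quaternion via the Gibbs vector, eqs. (14)-(15); singular only for a
# 180-degree rotation, where QUEST's sequential-rotation trick is needed.
y = np.linalg.solve((lam + sigma) * np.eye(3) - S, z)
q = np.append(y, 1.0) / np.sqrt(1.0 + y @ y)

print("lambda_max:", lam, "(eigh check:", np.linalg.eigvalsh(K).max(), ")")
print("q_opt:", q)
```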
The improved algorithm is faster than ESOQ, but ESOQ2 adopts some geometric approximations that are not always accurate, which makes the new algorithm less stable than SVD, QUEST, and FOAM in practice, so we do not introduce it in detail in this paper. Aiming at the problem that solving the eigenvalue of the matrix K in ESOQ is neither fast nor stable enough, Yang optimized this step using the analytical solution of the quartic equation newly proposed by Shmakov in 2012 [13]. After using the new solution to obtain λ_max, the optimized ESOQ algorithm made clear progress in stability while maintaining the excellent high-speed characteristics of ESOQ.

3.2.6. Yang's Algorithm Based on Riemannian Manifolds. In 2007, Yang proposed a globally convergent geometric optimization algorithm based on Riemannian manifolds. The algorithm adopts the loss function (10) used by Davenport. On this basis, in order to avoid computing all the eigenvectors of the matrix K, a Newton-Raphson method on the Riemannian manifold is proposed to calculate the maximum eigenvalue and the corresponding eigenvector [15], avoiding the cumbersome process of obtaining all the eigenvalues of K. Yang's Newton-Raphson method on the Riemannian manifold can be summarized in the following steps: (1) establish the Newton vector in Rⁿ [14]; (2) project the vector onto the tangent space of the sphere where the optimal quaternion is located, where P_{qk} denotes the orthogonal projection onto that space, P_{qk} = I − q_k q_kᵀ. By calculating the Hessian matrix of the loss function (10), the Newton equation of the loss function can be obtained and the geodesic vector y can be calculated. The attitude quaternion algorithm based on Riemannian manifolds gives a robust iterative method for calculating the optimal quaternion, which outperforms the QUEST-type algorithms in numerical stability and robustness. However, the algorithm consumes too many floating-point operations in each quaternion iteration.

3.2.7. FLAE. Jin Wu of the University of Electronic Science and Technology of China argues that although QUEST, ESOQ, and other methods made progress in robustness and time consumption, Davenport's Q-method, on which these algorithms are based, requires too many matrix operations, including obtaining determinants and adjoint matrices, so it is difficult to balance robustness against time consumption. In the fast linear attitude estimator (FLAE) proposed by Wu, the quaternion is not used to transform Wahba's loss function; instead the attitude matrix is expanded directly and computed in the form of column vectors, using the transformation relationship between the quaternion and the attitude rotation matrix. Setting C₁ = P₁ q (and similarly for the other columns of C), we obtain the loss function used in FLAE. By calculating the pseudo-inverse of the matrix H, a homogeneous linear equation in q is obtained, and the basic solution system of the equation is given by elementary row transformations [37]. By Schmidt orthogonalization, the optimal quaternion can be calculated. In reference [12] the author gives the calculation equations for the multi-dimensional observation case. Similarly, the eigenvalue λ_max of the matrix is solved, and the optimal quaternion q_opt is obtained.
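All of the K-matrix methods above ultimately find the largest root of a quartic characteristic polynomial. As a generic stand-in for the analytical quartic solvers used by Yang's variant and FLAE (not their actual formulas), the largest real root can be extracted directly:

```python
import numpy as np

def lambda_max_from_K(K):
    """Largest eigenvalue of the 4x4 Davenport matrix K, obtained as the
    largest real root of its quartic characteristic polynomial.  A generic
    stand-in for the closed-form quartic solvers used by ESOQ/FLAE."""
    roots = np.roots(np.poly(K))          # roots of the monic quartic
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()

# Tiny self-check against a direct symmetric eigensolver.
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
K = (A + A.T) / 2                         # any symmetric 4x4 suffices here
assert np.isclose(lambda_max_from_K(K), np.linalg.eigvalsh(K).max())
print("largest quartic root matches eigvalsh")
```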
Simulation Verification of the Multi-Vector Attitude Determination Algorithms
The mathematical equivalence of the various attitude algorithms can be derived strictly, but strict mathematical derivation is complex and difficult. This section therefore verifies the above algorithms from the perspective of simulation. The verification vectors use cases 1-6 set by Markley [7]; the results are collected in Tables 1-3. During observation, the signal-to-noise ratio of the vector observations was set to 20 dB. From the tables we see that algorithms with poorer robustness, such as FOAM and ESOQ, have difficulty producing a stable attitude solution when the vector observations are not accurate enough. Owing to improvements in CPU performance, there is little difference in computing speed between the algorithms when the number of observation vectors is small. In the test environment set up in this paper, the SVD and FLAE algorithms showed the best performance.

Alignment of IMU for a Moving Base. Before entering the navigation task, a strapdown inertial navigation system (SINS) needs to complete initial alignment and establish an accurate initial attitude matrix. Alignment is usually carried out on a static base, but the rapid development of weapon systems places higher demands on this process. To address this, Yan et al. chose to reduce the gravity deflection error of the inertial frame caused by motion through displacement compensation, and presented a SINS positioning method based on an inertial reference datum [16]. Therefore, without any initial attitude information, data storage, or complex nonlinear modeling and filtering, the method realizes the initial alignment of the attitude matrix on a moving base and also provides real-time positioning and navigation capability during alignment [17]. On the basis of the method in reference [15], Weng Jun recast the attitude estimation problem of a vehicle on a moving base as Wahba's problem, using the FOAM algorithm, with its small amount of calculation and good robustness, to compute the attitude transformation matrix from the local geographic coordinate system n₀ to the carrier coordinate system b₀ at the initial time, thereby solving the problem of real-time attitude estimation during vehicle driving. In practice, the reference vector and the measurement vector are calculated, respectively, from the measurements of the on-board IMU, odometer, and accelerometer. Apart from the FOAM step used to obtain the attitude matrix, the remaining operations of the FOAM moving-base alignment algorithm require only simple integration and summation, and the computationally complex filtering is greatly reduced. Experiments show that the azimuth error of the FOAM algorithm converges quickly and its anti-interference capability is strong, which meets the requirement of vehicle systems to obtain correct attitude information quickly over a long time, and it can also provide high-precision position information for the transition from alignment to the integrated navigation stage. Aiming at the low precision and poor adaptability of traditional in-motion coarse alignment methods for vehicle strapdown inertial navigation systems (SINS) caused by inaccurate measurement noise, an optimal indirect in-motion coarse alignment method aided by the global navigation satellite system (GNSS) has been proposed [41], as shown in Figure 1. Wahba's problem is solved by SVD in the new method, and vector construction based on a sliding fixed interval is used to weaken the accumulation of constant errors.
In the practical application scenario of a low-precision SINS system, the alignment algorithm proposed in that paper can achieve high-precision in-motion alignment without any initial attitude information.

Star Sensor Attitude Determination. Wahba's problem was first proposed to meet the requirement that a star sensor provide high-precision attitude information relative to an inertial reference frame for satellites in space. Most attitude determination algorithms were advanced in response to problems encountered in star sensor applications, by optimizing the solution process and improving the representation of the unknown variables. In attitude determination using a star sensor, it is necessary first to obtain the star map image in a certain field of view, then to extract the centroids of the star points in the image, and finally to use a known navigation star catalog to identify the positions of the stars in the celestial coordinate system, so as to determine the star-point coordinate transformation between the image spatial coordinate system and the celestial coordinate system. In the field of star sensor attitude determination the attitude solution algorithms are relatively mature, so much research considers the influence of the number of star points distributed in the star map and focuses on this point to improve the accuracy of attitude measurement. Chen et al. [19] pointed out through experiments that the problem can be regarded as an 'over-determined Wahba's problem' under conditional redundancy when the field of view of the star sensor contains more than 3 stars. Zhang et al. [20] deduced that the attitude measurement accuracy is affected not only by the relative positions of the navigation stars, described by the condition number, but also by the position of the navigation star group within the star map. Song's experiments show that the four-star-point method can obtain higher accuracy than the double-star method when applying the QUEST algorithm; however, too many star points reduce computational efficiency, given the real-time requirements of the star sensor [21]. Xiao combined the optimal recursive quaternion estimation (ReQUEST) method [22,23] with the cubature Kalman filter (CKF). The attitude quaternion determined by the optimal ReQUEST method is used directly as the observation in the CKF, and the gyro drift is estimated by the CKF to compensate for the system error [24]. The method takes the CKF as the outer framework and embeds the optimal ReQUEST algorithm into it. According to the starlight observations, the initial value of the K matrix is determined by the ReQUEST algorithm. The gyro drift is compensated before the one-step prediction, and the time update is carried out according to the calculated cubature points and corresponding weights to obtain the predicted state and variance. When starlight information arrives, the optimal ReQUEST algorithm is used to calculate the K matrix; the quaternion is separated from the K matrix, and its vector part is taken as the observation for the CKF. Simulations prove that the measurement accuracy of the quantities used in the CKF is improved by embedding the ReQUEST algorithm, speeding up convergence and improving the filtering accuracy.

Image Mosaic. Wahba's problem is to find the best rotation aligning two sets of corresponding vector observations, which also plays a very important role in current computer vision and robotics applications.
In Heng Yang's paper, Wahba's problem is described by truncated least squares (TLS) and formulated as a quaternion-based quadratically constrained quadratic program (QCQP); finally the QUASAR algorithm, which is more robust than RANSAC, is proposed [25]. When SURF feature descriptors are used to establish point correspondences between different pictures for image stitching, two camera frames can be stitched accurately by the QUASAR algorithm even when the overlapping area is small, with results better than common algorithms such as RANSAC and FGR. Similarly, the improved TEASER (a fast and verifiable point cloud registration algorithm), also based on Wahba's problem, solves fast registration when there are a large number of outliers in the point cloud, providing robust post-processing for lidar scan matching and ensuring the accuracy of target pose estimation and positioning [26]. In Nasim Kayhan's paper [42], a content-based image retrieval (CBIR) system was presented which includes two stages: feature extraction and similarity matching. The most efficient similarity/distance measure with respect to the weights of the extracted features is used at the similarity matching stage, which follows a principle similar to image mosaicking; that means the multi-vector attitude determination algorithms used in QUASAR could be applied to this problem to obtain the weights of the extracted features. With the rapid development and use of all kinds of accurate sensors, attitude algorithms appear in an auxiliary role in many kinds of applications. Against the background of the vigorous development of intelligent navigation for unmanned systems, there is a great contradiction between the power consumption, payload, and cost of unmanned systems and the use of precision sensors.

Human Posture Tracking under High Dynamics. Human body motion tracking is a key technique in robotics, virtual reality, and other human-computer interaction fields. Duan proposed a novel simple-structure Kalman filter, the Second EStimator of the Optimal Quaternion Kalman Filter (E2QKF), to improve the accuracy of human body motion tracking by combining the Second Estimator of the Optimal Quaternion (ESOQ2) algorithm, a linear Kalman filter, and a joint-angle constraint method [27]. Besides the method designed by Duan, Yun and Zhang proposed new algorithms with a similar structure which employ EKF + QUEST and CF + ESOQ2 to realize real-time tracking of human body motion. In Duan's design, the measurements of the accelerometer and magnetometer are used as the input vectors of the ESOQ2 algorithm to produce the observation quaternion, which aims to eliminate the error caused by the acceleration of human motion in the measurement results; the structure of the system is given in Figure 1. Traditional attitude calculation algorithms have low accuracy and are not applicable under high dynamics, but solving the attitude problem through observation vectors does not diverge easily and produces no cumulative error, making it a decent compensation method for the gyroscope. The combination of a Kalman filter and an attitude solution algorithm addresses the reduced accuracy of the attitude solution under high dynamics, and its high accuracy under low dynamics can also correct the gyroscope output through the Kalman filter, so as to obtain an optimal attitude estimate in complex situations.
The ESOQ2 algorithm is used in a combined system of optical motion capture and IMU/magnetometer to calculate the human motion state and to design a simple-structure complementary filter (CF) [29]. In order to eliminate the effect of limb acceleration on high-speed human motion measurements, accelerometer compensation is added to the ESOQ2 algorithm, as shown in Figure 2. Finally, fuzzy logic is utilized to calculate the fusion factor of the complementary filter, so as to adaptively fuse the input quaternion with the reference quaternion. The whole design is simpler than the traditional method, and the efficiency and accuracy are greatly improved at the same time. Yun's design is similar to the two filters mentioned above: the QUEST algorithm preprocesses the measured values output by the accelerometer and magnetometer and generates a quaternion as the filter input [28]. This preprocessing reduces the dimension of the state vector and linearizes the measurement equation; real-time implementation and test results of the quaternion Kalman filter are given. The experimental results show that the designed filter can accurately track human motion, verifying the feasibility of using the QUEST algorithm to improve the real-time tracking accuracy of human motion.

Information Attitude Determination for Cooperative Navigation. At present, the rapid development of artificial intelligence puts forward new requirements for UAVs, such as light weight, autonomy, intelligence, and functional diversification. As a new working mode, UAV cooperative operation has attracted extensive attention. Cooperative reconnaissance, search, detection, and positioning by multiple unmanned platforms are typical practical applications of unmanned systems, which require sufficiently accurate relative attitude determination and positioning within the aircraft cluster as a basis. With the application of visual navigation systems to UAV formation control, autonomous aerial refueling, and spacecraft autonomous rendezvous and docking (RVD), this has become the focus of many scholars' research. Visual vector information is widely used in relative navigation, especially in relative attitude determination [35], which has drawn much attention in recent years. Zhang et al. proposed a relative attitude determination algorithm for a two-aircraft UAV formation that considers geometric constraints [30,31], establishing a solution model for the relative attitude problem from the geometric relationship of the triangles formed by the vector observations. The solution does not need the position of the target, only the lines of sight from the aircraft to the target; thus the error caused by position measurement is avoided, the unobservability caused by coplanar sight vectors is resolved by the triangular geometric constraints between aircraft, and the relative attitude accuracy is improved as well. The paper adopts a leader-follower mode as the dual-aircraft formation control strategy, as shown in Figure 3. For any target point outside the two aircraft, enough line-of-sight vectors can be observed to calculate the relative attitude without knowing the position information of the target, as long as the sight vectors between the aircraft and the target form a triangular constraint.
Conclusions
This paper briefly introduces the performance and derivations of several algorithms for solving Wahba's problem, including QUEST, SVD, ESOQ, FLAE, FOAM, and Davenport's method, and carries out simulation verification on a current mainstream personal notebook. A low signal-to-noise ratio (SNR) was set to simulate poor measurement accuracy. Theoretical analysis and simulation experiments show that when the vector observations are not accurate enough, an attitude solution algorithm should first pay attention to robustness, because the generation of singular values seriously affects solution time and accuracy. In engineering applications such as visual-vector attitude determination and moving-base alignment, the environment in which the vectors are obtained is very different from the algorithms' original scenario of star sensor attitude determination. With the accuracy of observation vectors decreasing significantly and computer performance developing rapidly, the future trend of multi-vector attitude determination is inclined toward enhancing stability and fault tolerance rather than speed alone. Besides, the versatility of the algorithms still needs development. Researchers at the State University of New York note that attitude determination algorithms are widely used in various engineering projects, but improvements are often aimed at a fixed background [43]. Although such optimization schemes improve the accuracy of the solution, constraints that depend on a specific problem are, after all, not portable. Therefore, in order to better apply attitude solution algorithms in different fields, we first need to improve algorithm performance for specific observation conditions; second, improvements should go deep into the mathematical level rather than relying on a single task scenario to set constraints, so as to ensure the universality of new optimization schemes. In addition, some of the most representative computational intelligence algorithms can be used to solve these problems, such as monarch butterfly optimization (MBO) [44], the earthworm optimization algorithm (EWA) [45], elephant herding optimization (EHO) [46], the moth search (MS) algorithm [47], the slime mould algorithm (SMA) [48], hunger games search (HGS) [49], the Runge Kutta optimizer (RUN) [50], the colony predation algorithm (CPA) [51], and Harris hawks optimization (HHO) [52]. The excellent performance of swarm intelligence algorithms in searching for optimal solutions makes it possible to solve attitude determination in more complex tasks [18, 32-34, 36, 38].

Data Availability
The data for the simulations used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.
8,212.4
2022-09-14T00:00:00.000
[ "Computer Science" ]
Enhanced Phosphorus Release from Phosphate Rock Activated with Lignite by Mechanical Microcrystallization: Effects of Several Typical Grinding Parameters

Recently, microcrystallization technology has gained much interest because of the enhanced dissolution of the target sample and the promotion of sustainable agriculture. Phosphorus (P) is one of the most important nutrients for increasing crop yield; increasing the effective P ratio directly from raw phosphate rock (PR) powder by mechanical grinding to increase its microcrystallinity is believed to be the best choice for this purpose. This study reports the improvement in the activation property of PR powder with different lignite ratios (1%, 2%, 3%, and 5%), particularly the relationship between particle-size distribution, specific surface area, granule morphology, and citric acid-soluble P. It was found that a 3% lignite addition was the optimal treatment for increasing the release of citric acid-soluble P. The maximum total amount of dry matter from rapeseed cultivation and the available P after the test increased by 56.1% and 89.6%, respectively, with direct use of PR and microcrystallized PR powder (PR2), compared with the control test without any addition of phosphate minerals.

Introduction
Phosphorus (P) deficiency is a common phenomenon in agricultural soils worldwide. Despite long-term application of phosphate fertilizers to increase crop yields, P availability is often low due to the high affinity of phosphate for the soil solid phase. Worldwide, ~80% of phosphate rock (PR) is mined for phosphate fertilizer production [1]. PR resources play a significant role in supporting the sustainable development of agriculture [2,3]. Most natural phosphate minerals are of medium-low grade, and it is not easy to concentrate them for the production of phosphate fertilizer. Weathering rubber phosphate (ω(P2O5) ≤ 20%) is a typical refractory medium-low-grade mineral [4]. PR is an important natural source of P for plant nutrition, but the low solubility and availability of P from PR limit its application in agriculture [5,6]. With the rapid depletion of high-grade PR resources, improving the utilization rate of medium-low-grade PR powder is the only way to increase organic food production and achieve sustainable development of the efficient slow-release phosphate fertilizer industry [7].

In recent years, mechanochemistry has been widely applied in the direct processing of medium-low-grade PR. Zhao et al. and Liu et al. [8,9] reported that a significant amount of P could be released from PR powder through microcrystallization treatment. Microcrystallization technology refers to the deep grinding of phosphate to the micron scale while decreasing the energy consumption [10]. Some activators have been used to accelerate and strengthen the transformation of P to bioavailable forms via chemical reactions and biological interactions [11]. Lignite is a typical activator that can decrease the angularity of PR powder particles and increase the proportion of middle-sized particles while reducing dispersion [8,9]. The fineness of the PR powder affects the effectiveness of P.
The surface area and viscosity of the humic acid in lignite are high, and its adsorption force is strong [12]. In the ultrafine pulverization of phosphate minerals, the addition of a highly surface-reactive P activator such as lignite plays an important role in improving product performance [13]. The positive effect of the humic acid introduced by lignite addition was confirmed by the generation of effective P when applied to soil [14]. Lignite humic acid was therefore used to modify PR powder to prepare humic acid-activated phosphate fertilizer.

The effect of microcrystallization on the performance of PR has been evaluated in previous studies [15,16]. However, few correlation analyses of the properties of PR powder after microcrystallization have been reported, especially after adding lignite [17]. In this study, humic acid-activated phosphate fertilizer was prepared by microcrystallization, using PR powder as the raw material and lignite as the activator. The relationship between the mineral characteristics of PR and P availability in plant cultivation was assessed. The particle size, specific surface area, and granule morphology of the PR were analyzed, and the effects of microcrystallinity on PR powder and lignite humic acid during the preparation of the humic acid-activated phosphate fertilizer were evaluated. After activation, the surfaces of the PR particles are covered with organic molecules. This significantly increases the surface exposure to the soil matrix and enhances the physical, chemical, and biological reactions on PR surfaces, thus ensuring a continuous release of P and a subsequent increase in utilization efficiency [18]. Compared with traditional P fertilizer production using acids, the activation method used in this study has several benefits, such as minimal waste generation, beneficial use of medium-low-grade PR resources, low cost, and environmental friendliness [19].

Sample Preparation
PR powder samples with different dosages (1%, 2%, 3%, and 5%) of lignite were treated using microcrystalline equipment (WJH-02), custom-built by Tsinghua University. The tank volume of the milling equipment was 2 L, and the grinding medium consisted of steel balls. The mass fractions of the slurry with the different dosages of lignite were 47.5%, 50.0%, 52.2%, and 56.0%. Grinding times were set at 5, 10, 15, 20, 25, 30, 45, 60, 75, 90, 105, 120, 135, and 150 min. Samples were dried at 105 °C for characterization and chemical analysis, and samples for XRD testing were passed through a 200-mesh sieve to remove large particles.

Pot cultivation tests for plant growth were conducted in a glasshouse at the Tsinghua University Organic Fertilizer Base in Beijing. A soil (aqui-cinnamon soil) with pH 8.2 and organic matter content 11.7 g/kg was used for pot cultivation; the available phosphorus, potash, and nitrogen contents were 18.2, 205.2, and 67.0 mg/kg, respectively. Three treatments were compared: control (no P), raw PR powder (PR1) at 16.0 g/pot, and microcrystallized PR powder with lignite (PR2) at 16.0 g/pot. Each treatment was repeated three times and arranged randomly. Rapeseed seeds were treated with hot water for 15 min and germinated at 25 °C for 3 days. The growth period was 57 days, and three plants were taken as samples after harvest. All standard cultivation practices were followed. The biomass weights after harvest were recorded, and post-harvest soil samples were analyzed to determine the available P content.
Characterization
XRF (X-ray fluorescence; PFX-235, Rh target, 60 kV, LiF200/LiF220/Ge111/AX03 crystals) was used to characterize the elemental composition of the sample. Particle-size distribution was measured using a JL-6000 laser particle-size analyzer (measurement range: 0.02-2000 µm; repeatability of D50 ≤ ±3% against standard powder). Specific surface area was measured with a JB-5-type surface-area analyzer using the BET method (the specific surface area is calculated, via the BET equation, from the adsorbed monolayer of N2; range 0.0005 m²/g with no upper limit; repeatability ≤ ±3% against standard powder). The BET measurement [20] was carried out as follows: the adsorbate was adsorbed on the solid surface at liquid-nitrogen temperature (−195.8 °C); once the nitrogen molecules were in contact with the solid surface [21], the specific surface area (m²/g) was obtained by calculating the total surface area of 1 g of material from the cross-sectional area of a nitrogen molecule. Scanning electron microscopy (SEM) was conducted using a KYKY SEM6200 microscope (SE detector, resolution 4.5 nm at 30 kV; magnification 15-250,000×). The citric acid-soluble P was assessed from the soluble P2O5 content in 2% citric acid solution following Bulgarian State Standard 13418-80 [22]. The soil-available P in the different treatments was measured using a 722S spectrophotometer (wavelength range 340-1000 nm). The data were analyzed with SPSS 17.0 (SPSS Inc., USA) [23]. All treatment effects were determined using Duncan's multiple-range test; treatment effects are reported as significant at P ≤ 0.05.

Particle Size of PR Powder
Particle size refers to a length dimension; D50 and D97 are two characteristic particle sizes of a powder, the sizes below which 50% and 97% of the cumulative distribution fall. D50 is often used to represent the average particle size of a powder sample, and D97 is commonly used to represent the particle size of the coarse fraction [24]. Because of the high grinding efficiency of the microcrystallization equipment, the particle sizes D50 and D97 of the PR powder sample first decreased rapidly during the first 25 min and then decreased slowly with increasing grinding time, as shown in Figure 1, much as in another study [25]. Compared with the raw PR powder, the D50 and D97 of the 25-min PR powder decreased by 94.2% and 81.8%, respectively; thus, the body strength increased [26]. The higher the surface energy, the stronger the agglomeration between particles, and the larger the apparent particle size. When these two effects [27,28] balanced at a certain time, microcrystallization and particle agglomeration reached a dynamic equilibrium, and the particle size became stable. After 60 min of grinding, D50 decreased slowly, whereas D97 increased sharply up to 120 min, reaching a particle size of up to 34.3 µm. When the PR powder was ground for 30 min, D50 and D97 were 1.9 µm and 18.0 µm, respectively. The equipment processing efficiency was very low from 30 min to 60 min; D50 and D97 were 1.5 µm and 13.8 µm at 60 min, respectively.
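As a concrete illustration of the multi-point BET evaluation described in the Characterization subsection above, the snippet below is a minimal, hypothetical sketch (not the JB-5 instrument's software) of turning an N2 adsorption isotherm into a specific surface area; the isotherm values are invented for illustration.

```python
import numpy as np

# Hypothetical N2 adsorption isotherm: relative pressures p/p0 and
# adsorbed volumes v (cm^3 STP per gram of sample); values are invented.
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([2.1, 2.6, 3.0, 3.4, 3.8, 4.2])

# BET linearization: p/(v*(p0-p)) = 1/(vm*c) + ((c-1)/(vm*c)) * (p/p0)
y = p_rel / (v * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)
vm = 1.0 / (slope + intercept)           # monolayer volume, cm^3 STP/g

# Surface area: vm/22414 mol N2 per gram, times Avogadro's number,
# times the cross-sectional area of one N2 molecule (0.162 nm^2)
N_A = 6.022e23
sigma_N2 = 0.162e-18                      # m^2 per molecule
S_BET = (vm / 22414.0) * N_A * sigma_N2   # m^2/g
print(f"BET specific surface area: {S_BET:.1f} m^2/g")
```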
Particle Size Uniformity of PR Powder
The uniformity coefficient (fractal dimension) is an indicator of the particle-size distribution of a powder sample and is one of the important parameters for evaluating the quality and usability of mineral powder products. The fractal dimension and the particle-size distribution are positively correlated: the larger the fractal dimension, the wider the particle-size distribution. According to fractal theory, the fractal dimension of the particle-size distribution can be expressed as follows [29]:

S(D) = K (D/Dmax)^(3−F)    (1)

where S(D) is the cumulative particle-size distribution function (in %); K is a constant; F is the fractal dimension of the particle-size distribution, a dimensionless constant; D is the particle size (µm); and Dmax is the maximum particle size (µm). Taking logarithms, ln S(D) is linear in (ln D − ln Dmax) with slope 3 − F. Using ln S(D) for the y-axis and (ln D − ln Dmax) for the x-axis, 11 characteristic particle sizes (D03, D06, D10, D16, D25, D50, D75, D84, D90, D97, and D98) of the PR powder were used to calculate the fractal dimensions (F) of the different treatments via Equation (1); the results are shown in Figure 2.

With increasing grinding duration, the fractal dimension of the PR powder particle size gradually increased. Because the PR powder was easily pulverized, the particle size decreased rapidly at the beginning of grinding, whereas the large particles did not completely disappear and still accounted for a considerable proportion. The width of the particle-size distribution therefore gradually widened, expressed as an increase in the fractal dimension. As grinding continued, the proportion of larger particles gradually decreased, and the proportion of fine particles (d ≤ 1.0 µm and d ≤ 10 µm) gradually increased. Because of particle agglomeration and the cladding effect of lignite on the PR powder, the particle-size distribution became stable after 45 min.
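A minimal sketch of how F can be estimated from laser-diffraction data under the fractal model above; the characteristic sizes and cumulative percentages are invented placeholders, not measured values from this study.

```python
import numpy as np

# Invented cumulative distribution: S(D) in %, at characteristic sizes D (um)
D = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
S = np.array([3.0, 8.0, 18.0, 40.0, 65.0, 85.0, 97.0, 100.0])
D_max = D[-1]

# Fractal model, Equation (1): S(D) = K * (D / D_max)^(3 - F)
# => ln S(D) = (3 - F) * (ln D - ln D_max) + ln K
x = np.log(D) - np.log(D_max)
y = np.log(S)
slope, intercept = np.polyfit(x, y, 1)
F = 3.0 - slope
print(f"fractal dimension F = {F:.2f}")
```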
In general, for particles with fractal boundaries (whose local and overall morphologies are similar and do not vary with magnification), the fractal dimension can be used to describe shape: the larger the fractal dimension, the rougher the grain contour [30]. The fractal dimension of the PR powder, and hence the roughness of the particles, increased with grinding duration, and the variation in roughness after 45 min of grinding was smaller than at the beginning. Sedimentary phosphate rock has a fractal structure with an uneven surface; the relatively small concave areas reflect the active sites on the surface of the phosphate rock. During the first 30 min of crushing, the main change was that large particles broke into small ones: large particles broke mainly along the fractal structure, the fracture surfaces were uneven, and the fractal dimension gradually increased. After 30 min, particles broke more slowly, and shape change became the main process: the corners of the particles were removed by friction, and fine particles adhered to the curved surfaces of large particles. The fractal dimension then showed a downward trend [31].

Homogeneous Degree of PR Powder
The homogeneity index (D90 − D10)/D50 represents the spread of the particle sizes: the closer to 1, the better the uniformity [32]. The results are shown in Figure 3. With increasing grinding duration, the particle size of the PR powder as a whole decreased rapidly, then slowly, with a slight increase at the end; from 30 min to 60 min, the particle size decreased slowly and continuously. The finer the particles, the higher the surface energy and the greater the tendency of particles to agglomerate; the change in particle size depends on the combination of these two effects [33]. With increasing grinding duration, the homogeneity of the PR powder gradually improved in the first 30 min and then slightly worsened until 60 min. Gai [34] reported that when the initial particle size is large, the interaction between particles is small; as grinding proceeds, the particle size becomes smaller and the interaction between particles increases, producing this combined phenomenon. When the crushing time was short, many coarse particles were present and only a small fraction of very fine particles. With increasing grinding duration, more and more small particles were produced, the particle size decreased, the homogeneity index also decreased, and the particle-size distribution became narrower and narrower.
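A minimal sketch of the homogeneity index just described, reading D10, D50, and D90 off a cumulative size curve by interpolation; the data are invented placeholders.

```python
import numpy as np

def homogeneity_index(sizes_um, cum_percent):
    """(D90 - D10)/D50 from a cumulative undersize distribution.

    sizes_um    : increasing particle sizes (um)
    cum_percent : cumulative undersize percentages at those sizes
    """
    d10, d50, d90 = (np.interp(q, cum_percent, sizes_um)
                     for q in (10.0, 50.0, 90.0))
    return (d90 - d10) / d50

# Invented cumulative curve for illustration only
sizes = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
cum = np.array([3.0, 8.0, 18.0, 40.0, 65.0, 85.0, 97.0, 100.0])
print(f"homogeneity index: {homogeneity_index(sizes, cum):.2f}")
```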
Correlation between Specific Surface Area and Characteristic Particle Size of PR Powder
The specific surface area and the characteristic particle sizes (such as D10, D50, D90, and D97) of a powder are often used as indexes of its fineness. The specific surface area of the PR powder, the surface area per unit of powder, fully reflects its fineness. The median diameter D50 was selected as the characteristic particle size to study its correlation with the specific surface area, as shown in Figure 4. The specific surface area of the PR powder was negatively correlated with particle size, following a logistic equation.

Roughness and Specific Surface Area of PR Powder
The specific surface area of the PR powder measured using the BET method was compared with the specific surface area calculated from the particle size. Figure 5 shows that with increasing grinding duration, the difference between the two shows an increasing trend, and the surface roughness increases. This is because, in most cases, powder particles are not smooth, equant particles: they comprise particles of different sizes and different surface states, with developed inner and outer surfaces, including protrusions, concave parts, fissures, microslits, and the walls of pores and cavities. These surface areas can add up to several times, or several orders of magnitude, more than the surface area of an equal-diameter smooth sphere. Therefore, the difference between the specific surface area measured by the BET method and that calculated from the particle size broadly reflects the roughness of the particle surface [35].
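A minimal curve-fitting sketch of the negative SSA-D50 correlation described above, assuming a generic logistic form; the (D50, SSA) pairs are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented (D50 [um], specific surface area [m^2/g]) pairs
d50 = np.array([16.0, 8.0, 4.0, 2.5, 1.9, 1.7, 1.5])
ssa = np.array([1.5, 4.0, 8.0, 11.0, 12.6, 13.5, 14.5])

# Generic logistic: SSA(D50) = A2 + (A1 - A2) / (1 + (D50/x0)^p)
def logistic(x, a1, a2, x0, p):
    return a2 + (a1 - a2) / (1.0 + (x / x0) ** p)

popt, _ = curve_fit(logistic, d50, ssa, p0=(15.0, 1.0, 3.0, 2.0),
                    maxfev=10000)
print("fitted parameters (A1, A2, x0, p):", np.round(popt, 2))
```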
Figure 6 shows the correlation between the specific surface area and the roughness of the PR powder: the greater the specific surface area, the greater the roughness.

Size Distribution of PR Powder with Lignite Addition
A pile of dispersed powder consists of many particles of different sizes and is quantified not by individual particles but by the particle-size distribution. Figure 8 shows typical particle-size distributions obtained using the laser particle-size analyzer for different grinding durations of the PR powder, at sizes of 0.0, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 7.3, 10.0, 10.9, 16.3, 18.7, 20.0, 24.4, 27.8, 47.5, 50.0, 70.9, and 100.0 µm. Grinding changed the particle-size distribution of the PR powder without lignite significantly over a large region. The composition ratio of fine particles (d ≤ 1.0 µm and d ≤ 10 µm) first increased rapidly and then gradually decreased, because big particles were broken and the small ones then adhered to the big ones. The values were 30.3% and 91.5% at 30 min, respectively; at 30 min the change in particle size was small, and these values were 36.1% and 94.4% higher than those of the raw PR powder, respectively; the increases were 36.1% and 92.2% at the maximum duration of 60 min, respectively.
However, compared with the raw PR powder and lignite, the composition ratio of fine particles (d ≤ 1.0 µm and d ≤ 10 µm) of the PR powder with 1%, 2%, 3%, and 5% added lignite changed significantly, first increasing and then gradually decreasing. With increasing amounts of added lignite, the grinding duration at which the fine-particle fractions (d ≤ 1.0 µm and d ≤ 10 µm) reached their maxima differed. With the addition of 1%, 2%, and 3% lignite, the grinding durations decreased to 25 min, 15 min, and 20 min, with 30.3%, 9.8%, and 20.6% of particles at d ≤ 1.0 µm, and 90.2%, 77.5%, and 87.7% of particles at d ≤ 10 µm, respectively. With the addition of 5% lignite, however, the grinding duration was 60 min, with 41.9% of particles at d ≤ 1.0 µm and 85.4% at d ≤ 10 µm. The optimal activation effect on the particles was obtained by adding 3% lignite to the PR powder at an appropriate grinding duration (20 min).

Grinding altered the size distributions of the mixture (PR and lignite) by decreasing particle sizes. For every treatment, the mixtures with 5% and 3% lignite contained the higher ratio of fine particles, whereas the mixtures with 2% and 1% lignite contained the lower ratio; correspondingly, the mixtures with 5% and 3% lignite released more citric acid-available P than the mixtures with 2% and 1% lignite. The mechanism is perhaps that lignite prevented small particles from adhering to the big ones.
Effect of Lignite Addition on Particle Size (D50, D97)
Figure 9 shows that the characteristic particle sizes (D50, D97) of PR powder with 1%, 2%, and 3% added lignite first decreased rapidly during the first 20 min, then decreased slowly until 105 min; a peak appeared at 135 min, followed by a sharp drop to 150 min. Compared with the other treatments, the D50 and D97 of the 5% lignite treatment behaved significantly differently: they first decreased exponentially during the first 30 min, decreased slightly until 90 min, and then rose sharply until 150 min. The addition of 5% lignite thus inhibited the decrease of D50 and D97 of the PR powder. The microcrystallization process not only decreases the particle size of the lignite but also activates it. According to statistics on 21 lignites, free humic acid accounts for about 80-99% of the total humic acid [36]. Because free humic acid molecules contain many carbonyl, phenolic hydroxyl, quinone, and other functional groups, as well as metal ions, oxides, minerals, and organic substances (including toxic and harmful substances), the environmental chemical behavior of these substances can be affected [37]. These behaviors mainly include the following. (1) Adsorption: the humic acid in lignite has a porous, sponge-like structure, and its molecular structure contains many active groups, providing a large surface area, high viscosity, and good adsorption properties [38]; PR powder coating and other complex factors, such as pellet agglomeration, grain refinement, and particle agglomeration, reach a dynamic balance, so that further crushing becomes more difficult. (2) Decomposition: humic acid promotes the decomposition of PR powder, so that water-insoluble P is converted into water-soluble P, benefiting crop absorption. (3) Complexation: the increase in the solubility of apatite in the presence of humic acid can be attributed to the formation of a P-HA (humic acid) complex; IR (infrared) and PNMR (proton nuclear magnetic resonance) analyses confirm the presence of such complexes, which immobilize and activate the nutrients in soil and fertilizer under suitable pH conditions [39].
Effect of Lignite Addition on Specific Surface Area of PR Powder
Specific surface area is an indicator of the macroscopic fineness of a powder [24]. After adding 1%, 2%, 3%, and 5% lignite, the specific surface area of the samples was measured, and the results are shown in Figure 10. A positive correlation was observed between the specific surface area of the samples and grinding duration, following a LangmuirEXT1 equation (0%, 5%) or an Allometric1 equation (1%, 2%, 3%), as shown in Table 1. According to the correlation equations, the specific surface area of all samples increases until it levels off. At 30 min, the ordering of the 0%, 1%, 2%, 3%, and 5% lignite additions was 5% (15.0 m²/g) > 2% (14.2 m²/g) > 1% (13.2 m²/g) > 3% (13.1 m²/g) > 0% (12.6 m²/g); the addition of 5% lignite thus had the maximum effect on the specific surface area of the PR powder within 30 min. After grinding for 30 min without lignite, the specific surface area of the PR powder increased rapidly until 90 min and, after a fluctuation, increased up to 26.2 m²/g at 150 min. The addition of lignite, however, decreased the grinding efficiency, and the specific surface area of the PR powder became smaller because of granule agglomeration and the cladding effect of lignite on the PR powder. Solid-state action under mechanical forces is often a combination of multiple phenomena and can be divided into two stages: (1) the particles collide, rupture, and become refined; the material surface area increases, along with crystal defects, lattice defects, and lattice displacement, so the system temperature rises and the free energy increases; (2) the free energy then decreases, and with it the chemical potential energy of the system; the powder agglomerates, the specific surface area decreases, and the material can recrystallize, resulting in mechanochemical effects [40].

Effect of Lignite Addition on Granule Morphology of PR Powder
PR powder shows obvious fractures: the particles are plate-shaped with smooth surfaces and a dense structure. Figure 11 shows SEM images of PR powder and lignite ground for different durations; the particle sizes of the PR powders are significantly different. After grinding for 150 min, the particle-size distribution is uneven, with relatively high dispersion: large particles predominate, with fewer intermediate-size particles, and the small particles show obvious agglomeration. This is closely related to the way the grinding medium acts on the material, generating forced vibration during the microcrystallization process. By extending the grinding duration, however, the particles rub against the grinding medium or against each other, and the angular edges of the flaky particles are ground off to form spherical or spheroidal particles. The particle-size distribution tends to become uniform, and the degree of amorphization increases.
With the addition of 3% lignite, the surface structural characteristics of the PR powder become more distinct with increasing grinding duration. The raw lignite has many layered particles, high viscosity, varied particle sizes, and rough surfaces; the particles contain a large number of primary cell pores, interchain pores, cracks, and voids. The pore-size distribution is wide, and the particle surfaces are rich in oxygen-containing functional groups. These pore structures and oxygen-containing functional groups make it easier for lignite to adsorb onto the surfaces of PR powder particles during grinding.

Effect of Lignite Addition on the Citric Acid-Soluble P of PR Powder
The PR powder was microcrystallized with 1%, 2%, 3%, and 5% added lignite, and the released amount of citric acid-soluble P in the different treatments was measured, as shown in Figure 12. Compared with no lignite, the 1% and 2% lignite additions gave a very low release of citric acid-soluble P from the PR powder, whereas the 3% and 5% additions increased it. The citric acid-soluble P of the PR powder with 3% lignite and with 0% lignite was 6.7% and 3.0%, respectively, at 30 min of grinding; 3% lignite thus increased the release of citric acid-soluble P by 125.3% at a grinding time of 30 min. This is the optimal proportion. The activation was very weak with 1% and 2% lignite addition, owing to the slow decrease in particle size, whereas the activation effect was stronger with 3% and 5% lignite. In an acidic environment, the dissolution of the fluorapatite can be written as

Ca10(PO4)6F2 + 12H+ → 10Ca2+ + 6H2PO4− + 2F−

Hydrogen ions produced by the humic acid in lignite react with the microcrystallized PR to promote the continuous release of phosphate ions [39]. In contrast to water-soluble chemical phosphate fertilizer, the phosphorus is not completely released at once: release from the mineral phosphate fertilizer is slow, so high local concentrations of phosphate ion, which reduce the effectiveness of trace elements such as zinc and iron for plant growth, are avoided. Correlations were observed between the citric acid-soluble P of the different treatments and the grinding duration in the first 30 min, following a Linear equation (+0% lignite), Allometric equations (+1% and +2% lignite), and Belehradek equations (+3% and +5% lignite), as shown in Table 2; however, only the correlations for the +0%, +3%, and +5% lignite samples were positive. The citric acid-soluble P of the +3% and +5% lignite samples increased significantly during the first 30 min. There are two causes of the increase in citric acid-soluble P. First, the interaction of surface phosphate with citric acid became easier as the particle sizes of the samples decreased and the specific surfaces increased during grinding. Second, lignite released H+, which activated the PR; this effect was rapid. However, lignite can also adsorb phosphate ions through metal ions (Ca2+) and thus decrease the P content dissolved in the citric acid solution. The number of sites on lignite that bind phosphate ions through metals increased during grinding, producing the later decreasing trend of citric acid-soluble P. The phosphate combined with lignite is nevertheless available to plants and is not easily fixed to soil colloids [41].
3.2.6. Dry Matter from Rapeseed Growth, Available P in Soil, and N, P, K Uptake
P fertilizer effectiveness was determined in the pot cultivation test from the amount of rapeseed biomass, including both the aboveground plant part and the underground root. P application promoted the accumulation of dry matter in rapeseed, and the differences in dry matter among treatments were significant, as shown in Figure 13a. Compared with the control treatment, the dry matter of the aboveground and underground parts in the PR1 treatment increased by 35.4% and 28.6%, respectively; in the PR2 treatment, the increases were 56.6% and 54.0% against the control, and 15.7% and 19.8% against PR1, respectively. Regarding the total amount of rapeseed dry matter, significant differences were observed among treatments; the maximum was obtained with the PR2 treatment, an increase of 56.1% over the control. This indicates that the application of PR powder promotes the growth and development of both the aboveground and underground parts of rapeseed, and that the effect of microcrystalline-activated PR powder is stronger than that of ordinary PR powder. The data clearly indicate that microcrystalline grinding improves the effectiveness of PR powder, as shown in Figure 12; this benefits the photosynthesis of rapeseed and promotes the accumulation of dry matter, as shown in Figure 13a. The results also show that fertilizing with microcrystallized PR increased the nutrient uptake of rapeseed, as shown in Figure 13c. The content of available P in the treated soil increased considerably with microcrystallization processing: PR2 was higher than PR1 by 28.3%, and PR2 and PR1 were significantly higher than the control by 89.6% and 69.8%, respectively, as shown in Figure 13b. The application of PR powder had a significant effect on soil-available P under rapeseed; all differences reached a significant level. The treatments fertilized with PR increased the soil-available P level compared with the control, so the rapeseed plants absorbed phosphorus easily, which in turn promoted the uptake of other nutrients (N and K). Since microcrystallization with added lignite made the PR release phosphorus more easily, the plants in treatment PR2 absorbed more P than those in the other treatments. Proper application of PR powder can promote the growth and development of plants and can have a positive effect on alleviating the shortage of PR.
Conclusions
The microcrystallization operation significantly affected the particle sizes, specific surface area, granule morphology, and release of citric acid-soluble P of the PR powder.

For the PR powder, grinding primarily broke big particles along the fractal structure, so that the fractal dimension of the PR increased; the grinding then cut the corners of particles and made fine particles adhere to big ones. The D50 of the PR powder first decreased and then levelled off, whereas its fractal dimension rose and then changed slowly; correspondingly, the particle homogeneity showed nearly the same trend as the fractal dimension. Over the 150 min of grinding, the specific surface area increased throughout.

With the addition of 1% and 2% lignite, the particle-size distribution of the PR powder became complicated; these mixtures produced a lower ratio of fine particles (diameter ≤ 1 µm) and released less citric acid-available P. PR powder with 3% and 5% added lignite prevented fine particles from adhering to big ones during grinding, so these mixtures produced a large number of independent fine particles and released more citric acid-available P.

Fertilizing with lignite-activated PR powder improved the soil-available phosphorus content, and the rapeseed promptly absorbed phosphorus along with the other nutrient elements (N, K); rapeseed production was increased as well.

[Figure captions: Figure 1. Particle size of phosphate rock (PR) powder at different grinding durations. Figure 2. Fractal dimension of PR powder at different grinding durations. Figure 3. Homogeneous degree of PR powder at different grinding durations.]
3.1.6. XRD Patterns of PR at Different Milling Times
Figure 7 shows that the mineral phases present in the sample are fluorapatite [Ca5(PO4)3(F,OH)] and quartz (SiO2), as confirmed by XRD analysis. The diffraction peaks from both phases in the PR sample changed clearly during milling. Although the PR sample came from weathered PR, with poor crystallinity and low diffraction peaks of [Ca5(PO4)3(F,OH)], the (002), (211), and (300) crystal-face diffraction peaks of [Ca5(PO4)3(F,OH)] clearly decreased compared with the raw material, and the diffraction peaks of the quartz (011) crystal faces also decreased significantly with milling time.

[Figure captions, continued: Figure 4. Correlation between specific surface area and median diameter of PR powder. Figure 5. Difference in specific surface area of PR powder at different grinding durations. Figure 6. Correlation between specific surface area and roughness of PR powder. Figure 7. XRD patterns of PR at different milling times. Figure 8. Size distribution of PR powder at different grinding durations. Figure 9. Particle size of PR powder after adding different amounts of lignite at different grinding durations. Figure 10. Specific surface area of PR powder after adding different amounts of lignite at different grinding durations. Figure 11. SEM images of PR powder and lignite at different grinding durations.]
[Figure captions, continued: Figure 12. Changes in the citric acid-soluble P content of PR powder with different amounts of added lignite at different grinding durations. Figure 13. Dry matter (a), available P (b), and N, P, K uptake content (c) of samples in different treatments (different letters within a column indicate statistically significant differences, p < 0.05). Table 1. Correlation of specific surface area and grinding duration. Table 2. Correlation of citric acid-soluble P and grinding duration.]
9,791.6
2019-02-18T00:00:00.000
[ "Materials Science" ]
Regimes and mechanisms of transient amplification in abstract and biological networks

We use upper triangular matrices as abstract representations of neuronal networks and directly manipulate their eigenspectra and non-normality to explore different regimes of transient amplification. Counter-intuitively, manipulating the imaginary distribution can lead to highly amplifying regimes. This is noteworthy because biological networks are constrained by Dale's law and the non-existence of neuronal self-loops, limiting the range of manipulations in the real dimension. Within these constraints we can further manipulate transient amplification by controlling global inhibition.

Recurrent network models are known to produce different types of dynamics, ranging from regular to irregular, and from transient to persistent activity [1-6]. Moulding network dynamics to resemble experimental observations usually involves changes in the network architecture, i.e., the existence of synapses and their efficacies [7-9]. With this approach, the eigenspectrum and the non-normality of the connectivity matrix are indirectly affected, and the relationship between changes in those qualities of the weight matrix and the network dynamics remains nebulous. Here, we manipulate the spectrum and non-normality of upper triangular matrices, such that their characteristics can be directly translated into dynamical properties (Fig. 1A). These matrices no longer represent the neuronal connectivity, but modes of activation that are arranged in a feedforward manner [10-12]. We are particularly interested in the different forms of transient amplification, a phenomenon that can resemble motor cortex activity during reaching [13-15] and also emulate long-lasting working memory dynamics [16-18]. After a dissection of the underlying mechanisms of transient amplification using general upper triangular matrices, we consider biological constraints on the spectral distributions and, consequently, on the dynamics. Finally, we show how our findings can be implemented in a biologically plausible connectivity matrix with excitatory and inhibitory neurons, i.e., a matrix satisfying Dale's law. Throughout the paper we use the following notation for the connectivity matrix: W for a generic connectivity matrix, W_U for a matrix given in upper triangular form, and W_B for a matrix following biological constraints. The dynamics of the recurrent network are defined by

τ dx(t)/dt = −x(t) + W r(t),

where x(t) is the internal state of the network at time t and can be understood here as the membrane potential of a given neuron. This internal state evolves with a characteristic time constant τ and is affected by the activity of other neurons of the network through the recurrent connections determined by W. Finally, the activation function f(x(t)) = r(t) represents the input-output relation between the internal state, x(t), and the firing-rate deviation, r(t), from the baseline activity r0. We take r = f(x) = x for the mathematical analysis, and compare to networks with richer dynamics using a known non-linear function [4,8]. In the linear case, the network dynamics can be described using the eigenvalues, λ_i, and eigenvectors, v_i, of the weight matrix W (with i = 1, ..., N; N the number of neurons in the network).
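To make the setup concrete, here is a minimal simulation sketch of the linear rate dynamics above (Euler integration, r = x); the network size and random-matrix scaling are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_linear(W, x0, dt=0.01, T=10.0, tau=1.0):
    """Integrate tau * dx/dt = -x + W x from x0; returns ||x(t)|| over time."""
    n_steps = int(T / dt)
    x = x0.copy()
    norms = np.empty(n_steps)
    for t in range(n_steps):
        x += (dt / tau) * (-x + W @ x)
        norms[t] = np.linalg.norm(x)
    return norms

rng = np.random.default_rng(0)
N = 100
# A non-normal toy matrix: strictly upper triangular feedforward structure
# (all eigenvalues zero, so the system is stable but can amplify transiently)
W = np.triu(rng.normal(0.0, 0.1, (N, N)), k=1)
x0 = rng.normal(size=N)
x0 /= np.linalg.norm(x0)               # unit-norm initial condition
norms = simulate_linear(W, x0)
print(f"max ||r(t)|| = {norms.max():.2f} (amplified if > 1)")
```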
To quantify whether and by how much the network can amplify specific inputs, we calculate the norm of the rate vector, ‖r(t)‖, by decomposing it in the directions of the eigenvectors of W,

‖r(t)‖ = ‖ Σ_{k=1}^{N} r_k(t) v_k ‖.

Here, r_k(t) = α_k e^{(λ_k − 1)t} is the solution of the system along the direction of the eigenvector v_k, which is associated with the eigenvalue λ_k (α_k is a constant, uniquely determined by the initial condition). In a stable regime, Re(λ_k) < 1 for all k, the system exhibits a single fixed point that represents the baseline activity. An increase of the response norm, ‖r(t)‖, with respect to the norm of the initial condition, ‖r(t_0)‖ (here always normalised to 1), defines the phenomenon of transient amplification. A necessary condition for this to happen is the non-normality of W, ⟨v_k, v_j⟩ ≠ 0 for some j ≠ k, i.e., the eigenvectors do not form an orthogonal basis [11]. To explore regimes of transient amplification, we thus focus on matrices of the form W_U = Λ + T (Fig. 1B), with the diagonal, Λ, containing the eigenvalues [10-12], and the strictly upper triangular part, T, representing the feedforward structure between patterns of activation. Note that Λ contains 2 × 2 blocks around the diagonal to accommodate complex eigenvalues in real-valued matrices: the real parts of the eigenvalues are on the diagonal, and the imaginary parts lie on the off-diagonal entries of the 2 × 2 blocks [19]. We create Λ by sampling the real and imaginary parts of the eigenvalues from different distributions, while keeping the proportion of complex versus real eigenvalues constant (here 3% real). The imaginary distribution needs to be symmetric with respect to zero (a condition imposed by the conjugacy of the complex eigenvalues), while the real distribution must be below 1 (and is here always set to have 0.5 as a supremum) for stability reasons. We create T in two different ways: from the Schur decomposition of a stability-optimised circuit [20], or sampled from a uniform distribution; we scale the norm of T after its structure is fixed. We start our investigation of how the eigenspectrum affects the dynamics by drawing both real and imaginary parts from uniform distributions with diameters d_re and d_im, respectively (Fig. 1C-F, top left). To quantify the dynamical response of the network, we find an optimal orthogonal basis of initial conditions, I_B = {a_1, ..., a_N}, ordered according to their evoked energy, E(a) [21].

[Figure 1 caption (fragment): In each panel, clockwise: the spectrum; linear dynamics; non-linear dynamics; the logarithm of the maximum norm of the firing rate per initial condition. The pink dotted line and arrow correspond to the last initial condition whose norm is amplified by at least 50%. The feedforward structure is taken from a SOC [8] and its Frobenius norm is fixed to 75. Real and imaginary parts follow uniform distributions with diameters d_re and d_im, respectively. C, When d_im = d_re = 10, only two conditions are slightly amplified. D, When d_im = 10 and d_re = 1, the system is capable of more amplification. E, Here d_im = 1 and d_re = 10, and surprisingly this also creates more amplification compared to the case shown in C. F, When d_im = d_re = 1, the system amplifies almost half of the initial conditions; given an initial condition of norm 1, the dynamics reach values of ∼10^5 in the linear case, and consequently long-lasting dynamics in the non-linear case.]
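A minimal sketch of how such a Λ + T matrix can be assembled, with 2 × 2 blocks for complex-conjugate eigenvalue pairs and a rescaled random feedforward part. For simplicity all eigenvalues are drawn as conjugate pairs (ignoring the paper's small fraction of real eigenvalues), and the sizes, diameters, and target Frobenius norm are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_lambda(N, d_re, d_im):
    """Block-diagonal Lambda: real parts on the diagonal, conjugate
    imaginary pairs in 2x2 blocks [[a, -w], [w, a]] (N assumed even)."""
    L = np.zeros((N, N))
    for i in range(0, N, 2):
        a = rng.uniform(0.5 - d_re, 0.5)   # real part, supremum 0.5
        w = rng.uniform(0.0, d_im / 2.0)   # imaginary part (symmetric pair)
        L[i:i+2, i:i+2] = [[a, -w], [w, a]]
    return L

def build_T(N, ffd_norm):
    """Strictly upper triangular feedforward part with fixed Frobenius norm."""
    T = np.triu(rng.uniform(-1.0, 1.0, (N, N)), k=1)
    # keep the superdiagonal entries inside Lambda's 2x2 blocks free
    for i in range(0, N - 1, 2):
        T[i, i + 1] = 0.0
    return T * (ffd_norm / np.linalg.norm(T))

N = 100
W_U = build_lambda(N, d_re=1.0, d_im=1.0) + build_T(N, ffd_norm=75.0)
eigvals = np.linalg.eigvals(W_U)
print("max Re(lambda):", round(float(eigvals.real.max()), 3))  # < 1: stable
```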
To make sure that the evoked energy is due to an amplified response rather than merely a slower exponential decay, we compute the maximum value of the norm of the firing-rate vector for all vectors in I_B (Fig. 1C-F, bottom left). With broad distributions, the system can slightly amplify a few conditions (Fig. 1C). When the range of the real-part distribution is decreased and pushed towards 0.5, the resulting network produces stronger amplification (Fig. 1D). This can mainly be attributed to the fact that the eigenvalues now have larger real parts and hence longer decay envelopes; indeed, clustering away from 0.5 leads to less amplification (not shown). More surprisingly, shrinking the imaginary distribution instead also leads to more amplification (Fig. 1E), and shrinking both produces very large amplification that, in the non-linear case, lasts for a long time, approximating the timescales of working-memory dynamics (Fig. 1F). Additionally, the percentage of conditions that are amplified is considerably increased, i.e., the ability of such a network to amplify orthogonal initial conditions is enhanced. Note that splitting and clustering the (positive and negative) imaginary parts away from zero gives rise to slightly different amplification regimes that also depend on the linearity of the system [19]. When we study the effects of the imaginary and real distributions more systematically, we find that the shape of the real distribution [19] has minimal effect on the amplification (Figs. 2Ai and 2Aii). Amplification emerges from the non-normality of W, which can be partly quantified by the angles between the eigenvectors (Eq. 2) [19]: if more pairs have overlaps, the matrix will be more non-normal. The imaginary distribution changes the geometry of the eigenvectors (Fig. 2Aiii), providing a mechanism for its drastic effect on the amplification in these networks (Figs. 2Ai and 2Aii). This is a surprising effect, given that we alter neither the feedforward norm nor the decay envelopes. The feedforward norm is more directly linked to the non-normality [11] and, as expected, larger values increase both the norm of the maximum response (Fig. 2Bi) and the percentage of amplified conditions (Fig. 2Bii). The percentage of eigenvector pairs with small angles also grows with increasing feedforward norm (Fig. 2Biii). Interestingly, there is a saturation point that depends on the imaginary distribution. Once the number of pairs saturates, increased amplification is mainly due to the increased matrix norm, ‖W‖, indicating that the pairwise eigenvector angles are not sufficient to explain the behaviour of the network. Next we study the relative positions of the directions of the overlaps in state space. If most eigenvectors point in similar directions, the dynamics will be biased towards these directions too. This does not mean that W or the eigenvector matrix V is not full rank; on the contrary, they almost always are. It means that, in order to quantify the global eigenvector geometry, we have to use the effective rank of V. The effective rank of V measures the average number of significant dimensions in its range, and is formally defined as the exponential of the spectral entropy of its normalised singular values [22]. Specifically, if σ_1, σ_2, ..., σ_N are the singular values of V, and p_i = σ_i / Σ_{j=1}^{N} σ_j, then

erank(V) = exp(H(p_1, ..., p_N)),

where H(p_1, ..., p_N) is the Shannon entropy, i.e., H(p_1, ..., p_N) = −Σ_{k=1}^{N} p_k log p_k.
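The effective-rank definition above translates directly into a few lines of numpy; a minimal sketch:

```python
import numpy as np

def effective_rank(M):
    """Effective rank: exp of the Shannon entropy of normalised singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                      # drop exact zeros; 0*log(0) := 0
    return float(np.exp(-(p * np.log(p)).sum()))

# Sanity checks: identity has full effective rank, a rank-1 matrix has ~1
print(effective_rank(np.eye(5)))                              # 5.0
print(effective_rank(np.outer(np.arange(1, 6), np.ones(5))))  # ~1.0
```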
The effective rank of V is indeed small in the highly amplifying regimes (Fig. 2C), revealing an underlying duality between amplification and output dimensionality. The consequence for the dynamics is that, even though the system may amplify many initial conditions, these nevertheless evolve in the same low-dimensional subspace. To identify the dimensionality of this subspace we compute the effective rank of the matrix P, constructed as follows: the j-th column of P is the first principal vector of the dynamics given the j-th amplified initial condition of the I_B basis [19]. We find that there is a discrepancy between the number of amplified directions and the effective rank when the system produces large amplification (Fig. 2D). This suggests that the dynamical responses evoked by orthogonal initial conditions evolve in the same subspace, which also means that noise will be amplified in the same direction as the signal. Additionally, different initialisations would potentially lead to similar linear readouts. There is thus a trade-off between the number of amplified conditions, i.e., the capacity, and the noise robustness of the system. To summarise, the system can be described by three regimes of amplification: weak, short transient, and long transient. In the weak case, the eigenvectors are effectively orthogonal to each other but span the entire output space equally. In the short transient regime, there is a good balance between amplification of orthogonal inputs and diversity in the responses. In the long transient regime, many initial conditions are amplified but the responses lie in the same low-dimensional subspace. Moreover, we found that the mechanism behind the different regimes of amplification depends on the difference between the norm of the eigenspectrum and the norm of the feedforward structure (Fig. 3A). Indeed, when we fix the norm of W and assign a continuously decreasing percentage of this norm to the diagonal and the rest to the feedforward structure, the network transitions from weakly to strongly amplifying (Fig. 3B).

[Figure 2 caption (fragment): A, (i) ... (ii) percentage of directions whose norm is amplified more than 50%, and (iii) percentage of angles (between pairs of eigenvectors) that are less than 45°; every line is a function of the imaginary diameter. Three real distributions are plotted: pink, a single-valued real distribution in which all real parts are equal to zero; purple, a distribution with a large real negative outlier at −20 and the rest of the real eigenvalues distributed between 0 and 0.5; green, a uniform distribution in which all real parts are distributed uniformly in the interval (−0.5, 0.5) [19]. B, (i-iii) Same as A, but plotted as a function of the feedforward Frobenius norm; different colours correspond to 5 different spectra, all with fixed single-valued real distributions (equal to zero) and different imaginary diameters. C, Effective rank of the eigenvector matrix V of W as a function of the imaginary diameter (orange) or the feedforward norm (red). D, Amplified directions and effective rank of the matrix P (see text) in the linear and non-linear cases; the feedforward structure is random, from a uniform distribution, and the real distribution is uniform on (−0.5, 0.5). (i) As a function of the imaginary diameter with feedforward norm equal to 75. (ii) As a function of the feedforward Frobenius norm, with imaginary diameter equal to 20.]
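A minimal sketch of the P-matrix analysis described above: simulate each initial condition, take the first principal direction of its trajectory, stack the directions as columns, and measure the effective rank. All parameters are illustrative, and for simplicity a random orthonormal basis stands in for the optimal evoked-energy basis I_B.

```python
import numpy as np

def trajectory(W, x0, dt=0.01, T=5.0):
    """Euler-integrate dx/dt = -x + W x; returns the (n_steps, N) trajectory."""
    n = int(T / dt)
    X = np.empty((n, x0.size))
    x = x0.copy()
    for t in range(n):
        x += dt * (-x + W @ x)
        X[t] = x
    return X

def first_pc(X):
    """First principal direction of a trajectory (leading right singular vector)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

def effective_rank(M):
    # as defined in the previous sketch
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(2)
N = 50
W = np.triu(rng.uniform(-1, 1, (N, N)), k=1)
W *= 20.0 / np.linalg.norm(W)          # illustrative feedforward norm

# Columns of P: first PC of the response to each orthonormal initial condition
basis = np.linalg.qr(rng.normal(size=(N, N)))[0]
P = np.column_stack([first_pc(trajectory(W, basis[:, j])) for j in range(N)])
print("effective rank of P:", round(effective_rank(P), 1))
```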
Thus, it is the relation between the diagonal (representing the spectrum) and the feedforward part of the matrix that shapes the dynamics of the network. Next, we investigate neuronal networks with certain biological constraints. First, we consider the effect of the absence of self-loops in the connectivity matrix. Neurons are typically not structurally connected to themselves, which means that the trace of the weight matrix of such a network is equal to zero. This unfolds as follows: since Σ_{i=1}^{N} Im(λ_i) = 0, due to the conjugacy of the eigenvalues, the zero trace of the weight matrix W_B of an upper triangular matrix without self-loops requires Σ_{i=1}^{N} Re(λ_i) = 0. This, together with the stability constraint max_i Re(λ_i) = 0.5, bounds the real distribution from below and above, restricting it to a very limited range. This observation explains why the spectrum of the SOC has its pancake shape after optimisation: not only the positive but also the negative eigenvalues are pushed towards the stability line. The result also highlights the importance of the imaginary spectral manipulations: if the real distribution is limited, the imaginary spectrum probably carries important information [5], and its role is likely critical for shaping the dynamics. As a last application, we explore how to navigate the regimes of transient amplification in networks with excitatory and inhibitory neurons, i.e., networks satisfying Dale's law. In this case, we have to design a biological matrix W_B whose real Schur transformation has a large feedforward norm and a small diagonal norm. We find that a mechanism to control this in the short transient regime is the strength of global inhibition. Larger global inhibitory strength leads to more amplified conditions and also to larger amplification per condition (Fig. 3C). By assigning larger values to the inhibitory weights, the feedforward norm increases and the spectrum norm decreases (Fig. 3D). Finally, the new amplified conditions induced by the strongest inhibition do not share the first principal-component directions of their dynamical responses, i.e., the noise robustness of the system is not compromised in this case (Fig. 3E). This is possible because we are still in the short transient regime; the long transient regime cannot be reached by solely increasing the global inhibitory strength [19]. In this letter we used upper triangular matrices as abstract representations of the dynamical properties of a connectivity matrix, to control the quantities that are relevant for the neural dynamics in the transient amplification regime. Although transient non-normal amplification has been previously studied [4,6,8,9,11], the entire dynamical regime that can be spanned by this kind of network had not been explored.

[Figure 3 caption (fragment): B, ... per initial condition for different percentages of the norm assigned to the spectrum, ranging from a matrix whose entire norm is assigned to the spectrum (yellow; 100% case, normal matrix) to a matrix whose entire norm is assigned to the feedforward part (dark green; 0% case, nilpotent matrix). C-E, Simulations for a connectivity matrix satisfying Dale's law (50% excitatory and 50% inhibitory neurons), with an initial spectrum of radius 10, random connections, and global inhibitory dominance of strength I/E; following the optimisation algorithm used in SOCs [8], the network is stabilised while keeping the initial I/E ratio. C, The amplification landscape for different I/E ratios; the purple dotted line corresponds to a response norm 50% larger than the norm of the initial condition. D, Evolution of the spectrum and feedforward norms for different values of I/E in the corresponding real Schur transformation. E, Percentage of amplified conditions and effective rank of the corresponding matrix P (defined in text) in the linear case.]
Usually, any alteration in the weights of a neuronal connectivity matrix has obscure effects on the spectrum and non-normality. By bypassing, temporarily, the connectivity matrix and focusing on a hypothetical Schur transformation, we found new dynamical regimes of large amplification that had not been reported before. We also showed that the amount of transient amplification a network can produce can be controlled by the ratio between the norms of the spectrum and the hidden feedforward structure that the Schur transformation unveils. Moreover, there is a trade-off between the capacity and noise robustness of those systems. The source of amplification, i.e., the overlaps of the eigenvectors, inevitably restricts the subspace in which the dynamical outputs evolve. Finally, we found that stronger global inhibitory dominance helps navigate amplification regimes in networks that satisfy Dale's law. Our work opens the door for the exploration of new questions related to neuronal dynamics, such as how the structure (besides the norm) of the feedforward part, as well as non-uniform imaginary distributions, affect the dynamics. We thank F. Zenke for his comments, and especially his contribution on the effect of the self-loops on the spectrum. We also thank G. Hennequin.

Supplementary material of Regimes and mechanisms of transient amplification in abstract and biological networks
Georgia Christodoulou, Tim P. Vogels, and Everton J. Agnes

Details for upper triangular matrix setup. To construct the upper triangular matrices we start with an example of a connectivity matrix known to create strong non-normal amplification, i.e., the Stability Optimised Circuit (SOC) 2. We take the real Schur transformation of this connectivity matrix. In this form, the matrix is upper triangular with some 2 × 2 blocks on the diagonal. These blocks have real entries and their eigenvalues are the complex eigenvalues of the initial matrix (a pair of conjugates). We fix the triangular part that is not involved in the eigenvalue blocks. In most of the manipulations of the feedforward coupling, we only change the norm of this feedforward part of the matrix by scaling all its entries accordingly. In some cases (for control) we also chose the feedforward connections to be drawn from a uniform distribution on the interval (−1, 1), and scale the norm accordingly as well. For the manipulation of the spectrum we construct our distributions by hand. Specifically, if we want the matrix to have the pair of complex eigenvalues $\alpha \pm \omega i$ in its spectrum, we add the block $\begin{pmatrix} \alpha & -\omega \\ \omega & \alpha \end{pmatrix}$ along the diagonal. For real eigenvalues, we just add the corresponding real value on the diagonal.

Why upper triangular? The idea behind the use of an upper triangular matrix arises from the real Schur decomposition. Given a connectivity matrix W, one can find the eigenspectrum using the basis of eigenvectors. However, the non-normality of the matrix is lost under this linear transformation. Since we are especially interested in the dynamical regime of transient amplification we have to go beyond the spectrum, and a better way to access the non-normality is to use the Schur decomposition.
Indeed, any square matrix is unitarily equivalent to an upper triangular one, and by definition, the minimum, over all such decompositions, of the norm of the strictly upper triangular part is its non-normality index. We follow the same idea, but use instead the real Schur transformation. The advantage is that we still have in our hands a real-valued matrix. The disadvantage is that we now have to deal with 2 × 2 blocks along the diagonal. However, the important thing is that W is still orthogonally equivalent to its real Schur transform. This means that the non-normality quantity we are interested in is still preserved, i.e., the dynamical characteristics of transient amplification between the two matrices are not qualitatively different.

Details on the real distributions. The real distributions we compare in the main text are the following:
• A single-valued distribution: all real parts are the same and equal to a fixed value. In the simulations shown in the main text, this value is taken to be zero for comparison reasons, and also to agree with the (subsequently introduced) idea of the zero trace condition. In Figure S1A we compare to cases where this fixed value is equal to −0.5 and 0.5.
• A distribution with a negative outlier: in this construction, we add a negative outlier at a specific point. Because we require that the sum of all real parts is equal to zero, we have to set some real parts to be equal to 0.5. The number of real parts set at 0.5 depends on the value of the outlier. The rest of the real values are set to zero.
• A uniform distribution on the interval (−0.5, 0.5): all real parts, except for the last one, are distributed uniformly between the values −0.5 and 0.5. As before, because of the zero trace condition, we have to add a small outlier (the last real eigenvalue) to compensate for the non-zero sum of the rest of the values.

In all cases the pairings with the corresponding imaginary parts are random, except for forcing the conjugacy of eigenvalues; that is, we make sure that the same real part is paired with conjugate imaginary parts. All simulations are run for 200 realisations, with respect to the randomness of the imaginary distribution, and final quantities are averaged across all realisations for plotting.

Details on the feedforward structure. In some simulations, the feedforward structure of the upper triangular matrices was taken to be equal to the upper triangular part of the Schur transform of a fixed stability optimised circuit. In other simulations, the upper triangular part of the matrix was simply drawn from a uniform distribution. In Figure S1B we compare the results for both structures when the real and imaginary distributions are identical.

Eigenvector overlaps. Recall that the eigenvectors are, in general, complex and come in conjugate pairs, and that in order to compute the overlap of the eigenvectors we need to consider their inner product. The inner product of two complex vectors is defined as

⟨a, b⟩ = Σ_i a_i b̄_i,  (S1)

and the angle between two complex vectors is given by

cos(θ) = Re⟨a, b⟩ / (‖a‖ ‖b‖).  (S2)

Therefore, to compute the angles between the eigenvectors we use Equation S2. In particular, we normalise the eigenvectors to unit norm and compute all pairwise angles. Finally, since cos(π − θ) = −cos(θ), when computing the percentage of small eigenvector overlaps (i.e., less than 45°), we consider as angle the minimum angle between θ and π − θ.
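A small sketch of this angle computation (the matrix here is an arbitrary upper triangular example, and the use of numpy's eigendecomposition is our choice, not the authors'):

```python
# Sketch: pairwise eigenvector angles via Eqs. (S1)-(S2), folded to [0, 90] degrees.
import numpy as np

def eigenvector_angles(W):
    _, V = np.linalg.eig(W)                    # columns are (complex) eigenvectors
    V = V / np.linalg.norm(V, axis=0)          # normalise each eigenvector to unit norm
    G = V.conj().T @ V                         # complex inner products (Eq. S1)
    theta = np.arccos(np.clip(G.real, -1, 1))  # angle from the real part (Eq. S2)
    theta = np.minimum(theta, np.pi - theta)   # since cos(pi - t) = -cos(t)
    i, j = np.triu_indices(len(W), k=1)        # keep each pair once
    return np.degrees(theta[i, j])

rng = np.random.default_rng(1)
N = 40
W = np.diag(rng.uniform(-0.5, 0.5, N)) + 20 * np.triu(rng.uniform(-1, 1, (N, N)), k=1)
angles = eigenvector_angles(W)
print(f"{(angles < 45).mean() * 100:.1f}% of eigenvector pairs below 45 degrees")
```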
We would like to note here that non-normality depends on the complex inner product between eigenvectors, and not only on its real part. However, we have chosen to compute this more intuitive version of an angle between two complex vectors (which is commonly used in the literature) as a characterisation of the amplification dynamics.

Imaginary clustering at different points. To understand whether the surprising effect of the imaginary spectrum is due to the clustering of the eigenvalues, we checked what happens when the imaginary clusters are not around zero, but at, e.g., the points {100, −100} (Fig. S2A inset). In this case the linear responses exhibit an interesting phenomenon, resembling beats in acoustics (Fig. S2A). Because the frequencies are close to each other (due to the clustering), the amplitudes of the different neuronal responses superimpose when in phase, to create a response of very high amplitude (which by our definition would count as amplification). Moreover, the differences in the frequencies create an envelope that modulates this amplitude over time. On the other hand, the nonlinear responses fail to capture most of the interesting dynamics seen linearly and do not amplify to the same extent (Fig. S2B). The very high frequency makes it impossible for any potentially amplifying mode to drive the rest of the nodes and create a large amplified response. Because of this discrepancy between linear and nonlinear behaviour, we will not consider these regimes as amplifying for the purposes of this letter. It is worth noting that behaviour similar to the ±100 example is seen when clustering the imaginary spectrum at other nonzero values (Fig. S2C).

Dimensionality of dynamics: effective rank of the eigenvector matrix. Here we briefly explain the intuition behind the effective rank of the eigenvector matrix V. This is understood as the number of significant dimensions in the range of a matrix. For example, if the effective rank is equal to κ, then a random trajectory in the range of V is sufficiently approximated by κ dimensions. The fact that the effective rank of the eigenvector matrix is small indicates that there are a few prevalent directions in the space spanned by the eigenvectors, which in turn indicates that dynamical trajectories will be biased towards a small subspace of the entire eigenvector space. This is further explored and verified with the computation of the dynamical matrix P defined below.

Construction of the matrix P. We constructed the matrix P to understand how correlated the dynamics of the network are given different initial conditions. This matrix represents the prevalent directions of the dynamics, given different initialisations. This is done as follows: after having identified the optimal orthogonal basis of initial conditions I_B, we initialise the network at each of the vectors in this basis, one at a time. For each such vector, if the induced dynamics are amplified, i.e., if the norm of the rate vector is at some point in time larger than 1.5 (the initialisation vectors always have unit norm), then we perform Principal Component Analysis on the dynamics. More specifically, we compute the eigenvectors of the covariance matrix of the neuronal dynamics for each of these simulations. From these eigenvectors we only consider the eigenvector corresponding to the largest eigenvalue and store it as a column in the matrix P. Once we have initialised the network at all vectors in I_B we are left with an N × M matrix P.
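The construction of P can be sketched as follows. The rate dynamics (assumed here to be dx/dt = −x + Wx), the Euler integration scheme, and the use of the right singular vectors of W as a stand-in for the optimal basis I_B are our assumptions for illustration, not the letter's exact procedure.

```python
# Sketch of the construction of P and of its effective rank (see assumptions above).
import numpy as np

def simulate(W, x0, dt=0.01, T=20.0):
    """Euler-integrate the assumed rate dynamics dx/dt = -x + W x."""
    xs = [x0]
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * (-xs[-1] + W @ xs[-1]))
    return np.array(xs)                               # time x neurons

def effective_rank(M):
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-np.sum(p * np.log(p))))

rng = np.random.default_rng(2)
N = 40
U = np.triu(rng.uniform(-1, 1, (N, N)), k=1)
W = 15 * U / np.linalg.norm(U)                        # feedforward-dominated toy matrix
basis = np.linalg.svd(W)[2]                           # rows: stand-in for I_B (unit norm)

columns = []
for x0 in basis:
    X = simulate(W, x0)
    if np.linalg.norm(X, axis=1).max() > 1.5:         # amplified by more than 50%
        eigval, eigvec = np.linalg.eigh(np.cov(X.T))  # PCA of the trajectory
        columns.append(eigvec[:, -1])                 # first principal component
if columns:
    P = np.column_stack(columns)
    print(P.shape[1], "amplified conditions; effective rank of P =",
          round(effective_rank(P), 1))
else:
    print("no initial condition amplified above the 1.5 threshold")
```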
The number M is the same as the number of conditions that lead to an amplified response and provides an upper bound for the effective rank of the matrix P. The effective rank of P thus gives us the effective dimensionality of the space spanned by the columns of P. If the effective rank is less than the number of columns, we can deduce that orthogonal initial conditions have first principal components that are close to each other in state-space. This implies that the initial network amplifies orthogonal initial conditions along the same low dimensional subspace.

Inhibitory dominance. To intuitively understand why the strength of inhibitory dominance might facilitate amplification regimes, we can first model global inhibitory dominance abstractly in a matrix W that is upper triangular. We do this by adding a real negative outlier to the real spectral distribution. The existence of the negative outlier together with the zero trace condition has a very interesting effect: the larger the value of the outlier (in absolute value), the bigger the amplification (Fig. S3A) and the number of amplified directions (Fig. S3B). On one hand, this can be explained by the fact that more real parts are pushed to the right, creating longer decay envelopes, hence prolonging the time for the hidden feedforward structure to be amplified. However, this is not the sole source of the increased amplification; a larger negative outlier has an additional, non-intuitive effect on the geometry of the eigenvectors, i.e., it gives rise to larger eigenvector overlaps (Fig. S3C). We confirm the above result in the main text, with a biologically plausible matrix W_B satisfying Dale's law (columns have either only positive or only negative entries) (Fig. 3C–E). Taking into account the effect of the negative outlier, we hypothesised that assigning the matrix strength to the inhibitory weights would result in this favourable real Schur decomposition. Larger inhibitory weights can lead to a larger global inhibitory dominance ratio, hence to a larger negative outlier, and therefore to more amplification according to the theoretical, upper triangular matrix results (Fig. S3). To highlight this finding, we note that in the example shown in Fig. 3C–E, when the inhibitory to excitatory ratio is large, I/E = 40, the strength of every nonzero excitatory-to-excitatory connection is 0.08, and yet the network is capable of stronger amplification compared to when I/E = 3, in which case the nonzero excitatory-to-excitatory weights are set to 1.05. This also explains why we cannot reach the long transient regime by increasing the inhibitory strength: since the overall norm of the matrix stays the same (for comparison reasons), increasing the inhibitory dominance any further would necessarily decrease the excitatory weights even more. Therefore, the amplification power of the network through this mechanism eventually saturates before reaching the long transient regime.
[Figure S1 caption, start truncated in source] A, We compare results for three real values: −0.5, 0 and 0.5. Top: maximum response norm for the preferred initial condition; naturally, a larger real part leads to more amplification as the decay envelope becomes slower. Middle: percentage of conditions that are amplified by at least 50%; this is also affected by the value of the real part, indicating that the amplification landscape changes its shape in a uniform way. Bottom: the percentage of pairwise eigenvector angles is independent of the real value, i.e., the increased amount of amplification is mainly a result of the slower decay times. Results in all cases are qualitatively similar in their evolution with respect to the imaginary radius. B, Comparing results for two different feedforward structures. One is the feedforward structure taken from the corresponding ffd part of a matrix constructed using the SOC algorithm (pink). The other has a uniform ffd entry distribution, with overall ffd norm equal to the SOC one (yellow). In both cases the spectra are identical and correspond to the spectral distribution of the pink curve from A, i.e., a single real value at zero, with the varying imaginary range represented on the x-axis.
Overexpression of GPX3, a potential biomarker for diagnosis and prognosis of breast cancer, inhibits progression of breast cancer cells in vitro

Growing evidence has demonstrated that glutathione peroxidases (GPXs) family genes play critical roles in the onset and progression of human cancer. However, a systematic study regarding the expression, diagnostic and prognostic values, and function of GPXs family genes in breast cancer remains absent. Several databases were employed to perform in silico analyses for GPXs family genes. qRT-PCR, western blot and immunohistochemistry staining were used to validate GPX3 expression in breast cancer. The functions of GPX3 in breast cancer cells were then determined. By combining receiver operating characteristic (ROC) curve analysis, survival analysis and expression analysis, GPX3 was considered a potential tumor suppressor and a promising diagnostic/prognostic biomarker in breast cancer. Next, low expression of GPX3 was confirmed in breast cancer cells and tissues when compared with corresponding normal controls. Overexpression of GPX3 markedly suppressed proliferation, colony formation, migration and invasion of breast cancer in vitro. Moreover, two potential mechanisms responsible for GPX3 downregulation in breast cancer were identified, namely hypermethylation of the GPX3 promoter and hsa-miR-324-5p-mediated suppression. Collectively, we demonstrate that GPX3 is markedly downregulated in breast cancer, possesses significant diagnostic and prognostic values, and attenuates in vitro growth and metastasis of breast cancer.

Background
Breast cancer is the most commonly diagnosed malignant tumor in women and also the second leading cause of cancer-related deaths in women worldwide [1,2]. Although a variety of advancements have been achieved in diagnosis and therapy, the overall outcome of patients with breast cancer remains unsatisfactory. Thus, developing effective therapeutic targets and promising biomarkers for diagnosis and prognosis prediction is very meaningful for improving the prognosis of breast cancer. Glutathione peroxidases (GPXs), consisting of eight members (GPX1-8), are ubiquitously expressed proteins that catalyze the reduction of hydrogen peroxide and organic hydroperoxides by glutathione [3]. GPX family members have been well demonstrated to be frequently aberrantly expressed and are also closely linked to progression of diverse types of human cancer, including kidney cancer [4], pancreatic cancer [5], hepatocellular carcinoma [6], cervical cancer [7] and gastric cancer [8]. However, a comprehensive study of the expression, function, and diagnostic and prognostic values of the GPXs family in breast cancer remains absent. In this study, we first assessed the roles of GPXs family genes in predicting diagnosis and prognosis of breast cancer and then determined the mRNA and protein expression of GPXs family genes in breast cancer using bioinformatic analysis. Next, the low expression of GPX3 was detected in breast cancer cells and tissues. Subsequently, the function of GPX3 in breast cancer cell growth and metastasis was also investigated. Finally, we explored the potential detailed mechanisms responsible for GPX3 downregulation in breast cancer.

ROC curve analysis
Using TCGA breast cancer and normal breast expression data, the diagnostic values of GPXs family genes were evaluated by ROC curve analysis as we previously described [9]. P-value < 0.05 was considered statistically significant.
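As an illustration of this step, a minimal sketch using scikit-learn on simulated expression values; the data and the sign convention (lower GPX3 expression marking tumours, so the score is negated) are illustrative assumptions, while the study itself used TCGA data.

```python
# Sketch of a per-gene ROC analysis on simulated expression values.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
labels = np.r_[np.ones(100), np.zeros(100)]                         # 1 = tumour, 0 = normal
expr = np.r_[rng.normal(1.0, 0.5, 100), rng.normal(2.0, 0.5, 100)]  # lower in tumours

auc = roc_auc_score(labels, -expr)        # negate: low expression predicts tumour
fpr, tpr, _ = roc_curve(labels, -expr)    # points of the ROC curve for plotting
print(f"AUC = {auc:.4f}")
```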
Kaplan-Meier plotter database analysis
The Kaplan-Meier plotter database (http://kmplot.com/analysis/), which can assess the effect of 54,000 genes on survival in 21 cancer types, including breast cancer, was employed to perform survival analysis for GPXs family genes and miRNAs in breast cancer [10]. Log-rank P-value < 0.05 was considered significant.

GEPIA database analysis
The GEPIA database (http://gepia.cancer-pku.cn/index.html), a newly developed interactive web server for analyzing the RNA sequencing expression data of 9736 tumors and 8587 normal samples from the TCGA and GTEx projects, was used to determine the mRNA expression profile of GPXs family genes in breast cancer [11]. P-value < 0.05 was considered statistically significant.

Oncomine database analysis
The Oncomine database (https://www.oncomine.org/), which is a cancer microarray database and integrated data-mining platform, was also utilized to analyze mRNA expression of GPXs family genes in breast cancer [12,13]. Fold change (FC) > 1.5, P-value < 0.05 and a gene rank in the top 10% were set as the thresholds for selecting the included datasets.

UALCAN database analysis
The protein expression levels of GPXs family genes in breast cancer were assessed using the UALCAN database (http://ualcan.path.uab.edu/index.html), which is a comprehensive, user-friendly and interactive web resource for analyzing cancer OMICS data [14]. The UALCAN database was also used to determine the promoter methylation level of GPX3 in breast cancer. P-value < 0.05 was considered to indicate a significant difference.

starBase database analysis
The starBase database (http://starbase.sysu.edu.cn/index.php), an open-source platform for miRNA-associated studies, was used to predict the upstream binding miRNAs of GPX3 [15,16]. The correlation of GPX3 with miRNAs in breast cancer and miRNA expression levels in breast cancer were also assessed with the starBase database. P-value < 0.05 was considered statistically significant.

Cell lines and clinical tissues
The human breast cancer cell lines MCF-7 and MDA-MB-231 and the normal breast cell line MCF-10A were purchased from the Shanghai Institute of Biological Science, Chinese Academy of Sciences (Shanghai, China). 59 breast cancer tissues and 59 matched normal tissues were obtained from 59 patients with breast cancer who received surgical resection in the First Affiliated Hospital of Zhejiang University, College of Medicine (Hangzhou, China). This study was approved by the ethics committee.

Protein extraction and western blot
Protein from breast cancer cells was extracted using RIPA buffer (Beyotime, China) supplemented with protease and phosphatase inhibitors (Thermo Scientific, USA). Western blot was performed as previously described [18]. The primary antibodies against GPX3 (1:1000) and GAPDH (1:1000) were purchased from Abcam, and the anti-rabbit peroxidase-conjugated secondary antibody was purchased from Sigma (1:5000). GPX3 band density was normalized to GAPDH and quantified with ImageJ software.

Immunohistochemistry (IHC) analysis
IHC was used to analyze the protein expression of GPX3 in breast cancer tissues and matched normal breast tissues as we previously reported [19].

Establishment of stably overexpressing cells
The full length of GPX3 was first amplified, after which the PCR product was cloned into the pcDNA3.1-PURO vector digested with BamHI and XhoI.
The GPX3 overexpression plasmid was transfected into breast cancer cells using Lipofectamine™ 3000 (Invitrogen, USA) according to the manufacturer's instructions. Then, stably overexpressing cells were selected using puromycin (2 μg/mL).

CCK-8 assay
2500 stably overexpressing cells were seeded into 96-well plates and cultured for various periods (24, 48, 72 and 96 h). At the end of each time point, 20 μl of CCK-8 solution was added to each well and incubated for another 4 h at 37 °C. Finally, the optical density (OD) value at 450 nm of each well was determined with a microplate reader.

Colony formation assay
1000 stably overexpressing cells were seeded into six-well plates and cultured for 2 weeks. At the end of the culture, the plates were washed twice with phosphate buffered saline (PBS). Next, the plates were fixed in methanol for 15 min and stained with 0.1% crystal violet solution for another 10 min. Finally, the visible colonies in each well were counted.

Wound healing assay
A wound healing assay was used to assess the migratory ability of breast cancer cells. 40 × 10⁴ stably overexpressing cells were seeded into six-well plates. When the cells had grown to 100% confluence, a wound cross was made using a micropipette tip. Photographs were taken under a microscope immediately or 24 h after wounding.

Transwell invasion assay
Cell invasion was determined by Transwell invasion assay. Briefly, Transwell inserts were first coated with Matrigel (BD, USA). Then, 10 × 10⁴ stably overexpressing cells suspended in 0.2 mL serum-free medium were added to the inserts, and 0.6 mL medium containing 20% FBS was added to the lower compartment as a chemoattractant. After culturing for 48 h, the cells on the upper membrane were carefully removed using a cotton bud, and cells on the lower surface were fixed with methanol for 15 min and then stained with 0.1% crystal violet solution for 10 min. Photographs were then taken under a microscope.

Statistical analysis
Statistical analysis of the bioinformatic data was performed by the online databases as mentioned above. The results of experimental data are shown as mean ± SD. Student's t-test was used to assess differences between two groups. The diagnostic value was determined by ROC curve analysis. A two-tailed value of P < 0.05 was considered statistically significant.

The diagnostic and prognostic values of GPXs family genes in breast cancer
To explore whether the expression of GPXs family genes possesses significant diagnostic value in patients with breast cancer, receiver operating characteristic (ROC) curve analysis was employed based on breast cancer data from the TCGA database (Fig. 1). As shown in Fig. 1, four GPXs family genes had significant ability to distinguish breast cancer tissues from normal breast tissues: GPX2, GPX3, GPX4 and GPX8. However, the other four GPXs family genes (GPX1, GPX5, GPX6 and GPX7) showed no statistically significant diagnostic value in breast cancer. Notably, these findings suggested that GPX3 was the most promising diagnostic biomarker for patients with breast cancer, with an area under the curve (AUC) of 0.9207. Next, we investigated the prognostic values of GPXs family genes in breast cancer using the Kaplan-Meier plotter database (Fig. 2). Increased expression of GPX1 (Fig. 2a) indicated poor prognosis of breast cancer. Breast cancer patients with higher expression of GPX2 (Fig. 2b), GPX3 (Fig. 2c) or GPX5 (Fig. 2e) had better prognosis.
GPX4, GPX6 and GPX7 had no significant predictive value for prognosis of breast cancer. All these findings together indicated that only GPX2 and GPX3 possessed significant diagnostic and prognostic values for breast cancer. [Fig. 2 caption: The prognostic values of GPXs family genes in breast cancer determined by the Kaplan-Meier plotter database.]

The expression levels of GPXs family genes in breast cancer
Next, we further studied the expression levels of GPXs family genes in breast cancer. First of all, the TCGA and GTEx databases were used to mine the mRNA expression of the 8 GPXs family genes in breast cancer. The mRNA expression profile of the GPXs family is shown in Fig. 3a (TCGA tumor tissues compared with TCGA normal tissues) and Fig. 3b (TCGA tumor tissues compared with TCGA normal tissues and GTEx normal tissues). We found that GPX2 and GPX3 were significantly downregulated in breast cancer (Fig. 3c-f). Next, the Oncomine database was used to further analyze mRNA expression of GPXs family genes in breast cancer (Fig. 4a). We performed a meta-analysis of the 15 included studies on GPX3, and found that GPX3 mRNA expression was markedly decreased in breast cancer (Fig. 4b). The downregulation of GPX3 mRNA expression in breast cancer in the 15 GPX3-associated studies is presented in Fig. 4c-q. However, we found that GPX2 was not significantly downregulated in breast cancer. Subsequently, the CPTAC database was used to assess the protein expression of GPXs family genes in breast cancer (Fig. 5). The results revealed that GPX1, GPX2, GPX3 and GPX4 protein levels were markedly decreased in breast cancer when compared with normal controls. GPX7 protein expression in breast cancer was significantly increased. GPX8 showed no statistical difference between breast cancer tissues and normal tissues. GPX5 and GPX6 were not found in CPTAC. Taken together, GPX3 was the most promising candidate among all GPXs family genes in breast cancer and was selected for the following research (Fig. 6).

The expression level of GPX3 was confirmed in breast cancer and negatively correlated with tumor progression
To further validate the results from the in silico analysis, we detected the mRNA and protein expression levels of GPX3 in breast cancer cells and tissues. As presented in Fig. 7a, b, GPX3 mRNA and protein were significantly downregulated in the two breast cancer cell lines, MCF-7 and MDA-MB-231, when compared with the normal cell line MCF-10A. We also found that GPX3 mRNA expression in breast cancer tissues was much lower than that in adjacent matched normal tissues (Fig. 7c). The protein expression of GPX3 was also detected using immunohistochemistry (IHC) analysis. The results showed that GPX3 protein expression was significantly decreased in breast cancer tissues (Fig. 7d). Collectively, GPX3 mRNA and protein expression levels were significantly downregulated in breast cancer, which was consistent with the bioinformatic analysis results. Furthermore, the Chi-square test revealed that low expression of GPX3 was significantly negatively correlated with ER/PR expression and positively linked to tumor size, histopathological grade and lymph node metastasis (Table 1). All these findings showed that GPX3 was negatively correlated with progression of breast cancer and might function as a tumor suppressor in breast cancer.

GPX3 overexpression suppressed proliferation and colony formation of breast cancer cells
Given the low expression of GPX3 in breast cancer, overexpression technology was used to study GPX3's functions.
We then constructed the GPX3 overexpression plasmid. After transfection of the GPX3 overexpression plasmid, GPX3 mRNA and protein expression levels were significantly upregulated in breast cancer cells (Fig. 8a, b). First, we explored the effect of GPX3 on growth of breast cancer cells. CCK-8 assay demonstrated that overexpression of GPX3 markedly suppressed in vitro proliferation of the breast cancer cells MCF-7 and MDA-MB-231 (Fig. 8c, d). Furthermore, colony formation assay also revealed that GPX3 upregulation led to inhibition of the clonogenic capacity of breast cancer cells (Fig. 8e, f). These findings indicated that GPX3 overexpression significantly suppressed in vitro proliferation and colony formation of breast cancer cells.

GPX3 overexpression inhibited migration and invasion of breast cancer cells
Metastasis is another hallmark of malignant tumors, including breast cancer. We intended to ascertain whether GPX3 affects metastasis of breast cancer. Wound healing assay was first employed to investigate GPX3's function in controlling migration of breast cancer cells, and the result demonstrated that overexpression of GPX3 obviously attenuated the migratory ability of breast cancer cells (Fig. 9a, b). Moreover, increased expression of GPX3 also suppressed invasion of breast cancer cells, as detected by Transwell invasion assay (Fig. 9c-f). Taken together, overexpression of GPX3 suppressed in vitro migration and invasion of breast cancer cells.

The potential mechanisms responsible for GPX3 downregulation in breast cancer
Finally, we preliminarily probed the possible molecular mechanisms that account for GPX3 downregulation in breast cancer. Promoter hypermethylation may be responsible for expression suppression of tumor suppressors. Intriguingly, we found that the promoter methylation level of GPX3 was significantly upregulated in breast cancer tissues compared with normal controls (Fig. 10a). Gene expression is also frequently negatively regulated by miRNAs at the post-transcriptional level. The miRNAs that potentially bind to GPX3 were predicted by the starBase database, and 79 miRNAs were found. For better visualization, a miRNA-GPX3 network was established (Fig. 10b). Based on the action mechanism of miRNAs, there should be a negative correlation between a miRNA and its target gene. We performed expression correlation analysis for the miRNA-GPX3 pairs. As listed in Table 2, four potential miRNAs (hsa-miR-324-5p, hsa-miR-328-3p, hsa-let-7a-5p and hsa-miR-449b-5p), which were inversely associated with GPX3 expression in breast cancer, were identified. The prognostic values of the four miRNAs in breast cancer were also evaluated by the Kaplan-Meier plotter database (Fig. 10c-f). Survival analysis revealed that, among the four miRNAs, only high expression of hsa-miR-324-5p indicated poor prognosis for patients with breast cancer (Fig. 10c). The expression levels of the four miRNAs in breast cancer were subsequently determined by starBase (Fig. 10g-j), which showed that hsa-miR-324-5p and hsa-miR-449b-5p were significantly upregulated whereas hsa-miR-328-3p and hsa-let-7a-5p were markedly downregulated in breast cancer compared with normal controls. By combining survival and expression analysis, hsa-miR-324-5p was considered the most likely upstream miRNA of GPX3 in breast cancer. The above results implied that promoter hypermethylation and miR-324-5p-mediated suppression are two potential mechanisms that may be responsible for GPX3 downregulation in breast cancer (Fig. 10l).
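A toy sketch of this expression-correlation screen follows; the arrays are simulated for illustration, whereas the study's correlations came from starBase.

```python
# Sketch of the miRNA-GPX3 correlation screen on simulated expression vectors.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
gpx3 = rng.normal(size=200)                                   # GPX3 expression
mirnas = {
    "hsa-miR-324-5p": -0.6 * gpx3 + rng.normal(0, 0.8, 200),  # inversely correlated
    "hsa-let-7a-5p": rng.normal(size=200),                    # uncorrelated control
}

for name, expr in mirnas.items():
    r, p = pearsonr(expr, gpx3)
    if r < 0 and p < 0.05:                                    # keep candidate upstream miRNAs
        print(f"{name}: r = {r:.2f}, P = {p:.2e}")
```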
Discussion
Breast cancer is the most common cancer type in women. The molecular mechanism of carcinogenesis of breast cancer is still unclear and needs to be further investigated. Increasing findings have shown that GPXs are critical regulators in the onset and progression of human cancer. However, knowledge of GPXs in breast cancer is still limited. ROC curve and survival analysis for the GPXs family revealed that some of them might serve as promising diagnostic and prognostic biomarkers for breast cancer, especially GPX2 and GPX3. Expression analysis demonstrated the significantly low expression of GPX3 in breast cancer. GPX3 has been reported to act as a tumor suppressor in several cancers: one study suggested that GPX3 arrested the cell cycle and functioned as a tumor suppressor in lung cancer [21]; Hua et al. showed that silencing GPX3 expression promoted tumor metastasis in human thyroid cancer [22]; Caitlyn et al. revealed that plasma GPX3 limited the development of colitis-associated carcinoma [23]. However, the function and mechanism of GPX3 in breast cancer have not been reported and need to be further elucidated. Next, we confirmed the low expression of GPX3 in breast cancer cells and tissues using qRT-PCR, western blot and IHC, which supported the results of the bioinformatic analysis. Functional experiments revealed that overexpression of GPX3 significantly inhibited in vitro proliferation, colony formation, migration and invasion of breast cancer cells. Previous studies have shown the effect of promoter methylation level in regulating gene expression [24]. Thus, we preliminarily evaluated the promoter methylation level of GPX3 in breast cancer, and found that it was significantly upregulated in breast cancer compared with normal breast tissues. Moreover, Mohamed et al. also demonstrated the link between promoter hypermethylation of GPX3 and inflammatory breast carcinogenesis [25]. That report, together with our finding, suggests that hypermethylation of the GPX3 promoter might be a potential mechanism responsible for GPX3 downregulation in breast cancer. miRNAs are involved in multiple biological processes by suppressing gene expression [2, 26-28]. We also explored the upstream regulatory miRNAs of GPX3. By combining correlation analysis, survival analysis and expression analysis for these miRNAs, miR-324-5p was regarded as the most likely candidate: it was overexpressed, negatively correlated with GPX3 expression, and associated with poor prognosis in breast cancer. Numerous studies have demonstrated that miR-324-5p serves as an oncogenic miRNA in human cancer. For example, miR-324-5p promoted progression of papillary thyroid carcinoma via microenvironment alteration [29]; miR-324-5p facilitated progression of colon cancer by activating the Wnt/beta-catenin pathway [30]. Moreover, the relationship between GPX3 and miR-324-5p has already been reported in lung cancer [31]. Thus, overexpressed miR-324-5p might be another mechanism that accounts for GPX3 downregulation in breast cancer. [Fig. 7 caption: The expression levels of GPX3 in breast cancer cells and tissues. The mRNA (a) and protein (b) expression of GPX3 in breast cancer cells was significantly lower than that in the normal breast cell line. c The mRNA expression of GPX3 was markedly decreased in breast cancer tissues compared with matched normal breast tissues. d IHC analysis of GPX3 expression levels in normal breast tissues and breast cancer tissues. Bar scale: 150 μm; *P < 0.05.]
In the future, the oncogenic roles of miR-324-5p need to be further investigated by in vitro and in vivo assays.

Conclusions
In summary, our current findings indicate that GPX3 is markedly downregulated in breast cancer, suppresses in vitro growth and metastasis of breast cancer cells, and serves as a promising diagnostic and prognostic biomarker for patients with breast cancer. Moreover, we also show that promoter hypermethylation and miR-324-5p-mediated suppression may be two potential mechanisms responsible for GPX3 downregulation in breast cancer. These results provide key clues for developing effective therapeutic targets and biomarkers for breast cancer. [Fig. 10 caption: The potential mechanisms responsible for GPX3 downregulation in breast cancer. a The promoter methylation level of GPX3 was increased in breast cancer compared with normal controls. b The miRNA-GPX3 network. c-f The prognostic values of four miRNAs in breast cancer. g-j The expression levels of four miRNAs in breast cancer. k The intersection analysis of survival analysis and expression analysis. l The model of GPX3's function and dysregulation mechanism in breast cancer. P < 0.05 was considered statistically significant.]

Authors' contributions
WL and PF designed this work, performed experiments, analyzed data and drafted the manuscript. BD performed some experiments. SW revised the manuscript. All authors read and approved the final manuscript.

Funding
Not applicable.

Availability of data and materials
The data in this work are available from the corresponding author on reasonable request.
LMM-22: An enhanced Linear Mixed Model (LMM) approach for Genome-wide Association Studies (GWAS) for the prediction of diseases and traits among humans from genomics data

Increasingly, genomics is being used for the prediction of specific traits and diseases (phenotypes) among humans. Wider availability of genomics data through multiple research projects (such as the International HapMap Project 1 and 1000 Genomes 2) has been a catalyst in that direction. With the recent advances in machine learning and big data analysis, the data computation resources and data models needed for genomics data analysis are readily available. However, the prediction of traits and diseases has its own challenges in terms of computational requirements and analysis, statistical analysis (for example, confounding variables), and the limited quality of data collection. Linear Mixed Models (LMMs, a type of linear regression) are a common approach in Genome-wide Association Studies (GWAS) for the prediction of common traits among humans using genomics. This paper reviews the existing LMM-based approaches for GWAS, describes an experiment performed with the FaST-LMM approach from Microsoft Research, and then proposes an enhanced approach (called LMM-22) to address the computational and statistical issues. LMM-22 focuses on the parallelization of LMM computations and the execution of LMM-22 on General Purpose Graphics Processing Units (GPUs) rather than CPUs to accelerate the LMM approach for GWAS studies.

INTRODUCTION
Genome-wide association studies (GWAS) are emerging as a commonly used method for scientists and medical researchers to identify genes involved in human diseases. This method searches the genomes for variations in single nucleotide polymorphisms, or SNPs, that occur more frequently in people with a particular disease than in people without the disease. Each study can look at hundreds or thousands of SNPs at the same time. Researchers use data from this type of study to pinpoint genes that may contribute to a person's risk of developing a certain disease. Linear Mixed Models (LMMs) have emerged as a common statistical and data science approach for performing GWAS studies.

BACKGROUND
Gene
A gene is the basic physical and functional unit of heredity. Genes are made up of DNA and act as instructions to make molecules called proteins. In humans, genes vary in size from a few hundred DNA bases to more than 2 million bases. Every person has two copies of each gene, one inherited from each parent. Most genes are the same in all people, but a small number of genes (less than 1 percent of the total) are slightly different between people. An allele is one of two or more versions of a gene. An individual inherits two alleles for each gene, one from each parent. If the two alleles are the same, the individual is homozygous for that gene. If the alleles are different, the individual is heterozygous.

Genome and Nucleotide
The genome is the genetic material of an organism. A genome is an organism's complete set of Deoxyribonucleic Acid (DNA), including all of its genes. Each genome contains all of the information needed to build and maintain that organism. In humans, a copy of the entire genome, more than 3 billion DNA base pairs, is contained in all cells that have a nucleus. Nucleotides are organic molecules that form DNA and RNA. A genome sequence is the complete list of the nucleotides (A, C, G, and T for DNA genomes) that make up all the chromosomes of an individual or a species.
Within a species, the vast majority of nucleotides are identical between individuals, but sequencing multiple individuals is necessary to understand the genetic differences between humans.

Chromosomes
In the nucleus of each cell, the DNA molecule is packaged into thread-like structures called chromosomes. Each chromosome is made up of DNA tightly coiled many times around proteins called histones that support its structure. In humans, each cell normally contains 23 pairs of chromosomes 3, for a total of 46.

Single Nucleotide Polymorphisms (SNP)
Single nucleotide polymorphisms, called SNPs (and pronounced "snips"), are the most common type of genetic variation among humans. Each SNP represents a difference in a single DNA building block, called a nucleotide. SNPs occur normally throughout a person's DNA. They occur once in every 300 nucleotides on average, which means there are roughly 10 million SNPs in the human genome. Most commonly, these variations are found in the DNA between genes. SNPs can act as biological markers, helping scientists locate genes that are associated with disease. When SNPs occur within a gene or in a regulatory region near a gene, they can play a more direct role in disease by affecting the gene's function. Most SNPs have no effect on health or development. Some of these genetic differences, however, have proven to be very important in the study of human health. Researchers have found SNPs that may help predict an individual's response to certain drugs and the risk of developing particular diseases.

Genotypes and Phenotypes
Genotype is an organism's full hereditary information expressed in terms of genomes. Genotype also refers to the set of genes carried by an individual. Phenotype is the observable physical or biochemical characteristics of an individual organism, determined by both genetic structure and environmental influences. Examples of phenotypes include height, eye color, IQ, and genetic diseases (prostate or colorectal cancer, breast cancer, Type-2 diabetes). Most phenotypes are influenced both by one's genotype and by the unique environment that one has lived in. The genes contribute to a trait, and the phenotype is the observable expression of the genes (and therefore of the genotype that affects the trait). The relationship between genotype and phenotype is commonly expressed as:

phenotype = genotype + environment

An example of a phenotype is eye color, which is an inherited trait influenced by more than one gene, including OCA2 and HERC2. The interaction of multiple genes, and the variation in these genes between individuals, helps to determine a person's eye color.

Genome-wide Association Studies
Genome-wide association studies (GWAS) have become a common way for scientists to identify genes involved in human diseases. This method searches the genome for small variations in SNPs that occur more frequently in people with a particular disease than in people without the disease. Each study can look at hundreds or thousands of SNPs at the same time. Researchers use data from this type of study to pinpoint genes that may contribute to a person's risk of developing a particular disease. GWAS studies typically focus on associations between SNPs and traits such as major human diseases, but they can equally be applied to any other organism. When applied to human data, GWAS studies compare the DNA of participants having varying phenotypes for a particular trait or disease.
GWAS studies have been used successfully to identify genetic variations that contribute to the risk of type 2 diabetes, Parkinson's disease, heart disorders, obesity, Crohn's disease and prostate cancer. For a GWAS study 4, researchers use two groups of participants: people with the disease being studied and similar people without the disease. Researchers obtain DNA from each participant. Each person's complete set of DNA, or genome, is then purified from the blood or cells, placed on tiny chips and scanned on automated laboratory machines. The machines quickly survey each participant's genome for selected markers of genetic variation, which are SNPs. If certain genetic variations are found to be significantly more frequent in people with the disease compared to people without the disease, the variations are said to be "associated" with the disease. The associated genetic variations can serve as pointers to the region of the human genome where the disease-causing problem resides. (4 A detailed explanation of GWAS is available here: https://www.genome.gov/20019523/)

International HapMap Project
This project is a scientific effort to identify common genetic variations among people. The HapMap (short for "haplotype map") is a catalog of common genetic variants, i.e., SNPs. Each SNP represents a difference in a single DNA building block, called a nucleotide. When several SNPs cluster together on a chromosome, they are inherited as a block known as a haplotype. The HapMap describes haplotypes, including their locations in the genome and how common they are in different populations throughout the world.

EXISTING APPROACHES FOR GWAS
GWAS studies require analysis of genomics data from tens of thousands of individuals. The human genome contains roughly 10 million SNPs. Hence, GWAS studies are difficult, time-consuming, and expensive, as they must look at such a large number of SNPs and then determine whether specific SNPs play a role in human disease. Statistical methods have become the common and widely adopted approach for GWAS studies. However, there are additional issues in the use of statistical methods for GWAS studies. Any observation in GWAS studies can be confounded by population structure, i.e., the presence of subgroups in the population with ancestry differences. In statistics, a confounding variable affects both the dependent and independent variables, causing spurious associations. See the diagram below 5 for an example of a confounding variable related to subgroups with ancestry differences. Ethnic groups often share similar dietary habits and lifestyle characteristics that lead to environmental factors that affect traits. For example, South Indians eat more rice than North Indians. Ignoring such ancestry differences among sample individuals can lead to false positives or incorrect associations. Furthermore, family relatedness (for example, alleles transmitted from parents to children) can also cause confounding problems. Initially, Linear Regression Models were used for GWAS studies. In linear regression, statistical analysis is done to model the relationship between a dependent variable and one or more independent variables. Linear regression models can be used to either a) fit a predictive model to an observed data set of y and X values and then predict values of y, or b) quantify the strength of the relationship between y and X. Linear regression is written in vector form as

y = Xb + e

where y is the vector of dependent variables, e is noise, b is the parameter vector, and X is a matrix of independent variables.
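A short numpy sketch of this vector form, fitting b by ordinary least squares on random data points (the coefficients and noise scale here are arbitrary illustration values):

```python
# Sketch: linear regression y = Xb + e on random data, solved by least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + one regressor
b_true = np.array([1.0, 0.5])
y = X @ b_true + rng.normal(scale=1.0, size=n)            # y = Xb + e

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)             # least-squares estimate of b
print("estimated b:", np.round(b_hat, 3))
```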
The following diagram shows an example of linear regression on random data points. However, the presence of confounding variables (such as population structure) in GWAS analysis requires more sophisticated models. Linear Mixed Models (LMMs) have emerged as a common statistical method. An LMM combines both fixed and random effects using a combination of fixed and random variables. An LMM is represented as

y = Xb + Zu + e

where y is a known vector of observations, b is an unknown vector of fixed effects, u is an unknown vector of random effects, e is an unknown vector of errors, and X and Z are design matrices relating the observations y to b and u, respectively. LMMs have emerged as a common approach for identifying causal features and predicting phenotypes. As with a standard linear model, LMMs include fixed effects for each genomic feature and any recorded covariates (also called feature vectors), such as age or sex. LMMs also include random effects: in the context of genomic models, these random effects are correlated between individuals on the basis of their genetic similarity. These random effects can account for heritable differences in phenotype that are not reflected by genomic features or covariates. LMM approaches for GWAS have been refined in the research community over the years to address issues related to confounding variables and computational complexity. The mathematical computation (for example, matrix and vector multiplications and variable computation) over massive amounts of SNP data (10M SNPs per individual and tens of thousands of individuals) is complex in both time and memory and is expensive in terms of computational capacity; for example, hundreds of computers may be needed over multiple weeks to perform such computations. Yu et al. [1] researched the use of a unified mixed-model method for association mapping that accounts for multiple levels of relatedness. To reduce the computational complexity and the number of SNPs to be processed, Lippert, C. et al. 6 [2] proposed to use only a subset of SNPs in the LMM. This approach relies on an estimate of the genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in the dataset. Lippert, C. et al. [2] also showed how estimating the GSM from fewer SNPs than individuals leads to computations which are linear in time and memory instead of cubic and quadratic, respectively. In a related approach [2], SNPs are chosen such that they are roughly equally spaced across the genome. The idea behind this approach is that linkage disequilibrium (non-random association of alleles at different loci in a given population) among the SNPs mitigates the need to use all of them. When the number of selected SNPs is less than the sample size of the data, the computation of P values becomes linear in sample size, rather than quadratic. This further reduces the computational complexity.

GRAPHICS PROCESSING UNITS (GPU)
Given that LMM algorithms for GWAS studies need to perform computation on large blocks of SNP data and matrices/vectors, GPUs have emerged as a more suitable approach for parallel data processing than general purpose CPUs. Initially, GPUs were designed for graphics and image processing. However, the ability of GPUs to do parallel processing of data (specifically computations involving vectors and matrices) has made them an ideal choice for parallel computation of massive amounts of data for applications such as machine learning, high performance computing and genomics.
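As a sketch of this idea, the GSM computation, which is essentially one large matrix product, can be offloaded to a GPU. The example below assumes a CUDA-capable GPU and the CuPy library; the matrix sizes are illustrative toy values.

```python
# Sketch: computing a GSM-style matrix product on a GPU with CuPy.
import numpy as np
import cupy as cp

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5000)).astype(np.float32)  # individuals x SNPs (toy sizes)

X_gpu = cp.asarray(X)                  # copy the genotype matrix to GPU memory
K_gpu = X_gpu @ X_gpu.T / X.shape[1]   # pairwise similarity, computed in parallel on the GPU
K = cp.asnumpy(K_gpu)                  # copy the GSM back to host memory
print(K.shape)                         # (1000, 1000)
```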
Cloud computing platforms offer general-purpose GPU compute instances on an on-demand and scalable basis. For example, GPU compute instances from AWS, the Amazon EC2 P3 instances 9, offer up to 8 NVIDIA Volta GV100 GPUs.

EXPERIMENT SETUP
1) I set up a Linux EC2 server on AWS and then connected to that Linux server via remote ssh.
2) I set up a Python 2.7 environment.
3) Next, I installed the numpy, pysnptools and fastlmm packages for Python.
4) FaST-LMM uses four input files containing (1) the SNP data to be tested, (2) the SNP data used to determine the genetic similarity matrix (GSM) between individuals, (3) the phenotype data, and (4) a set of covariates.
5) I downloaded genotype data from the International HapMap Project 11 to simulate phenotypes and perform GWAS analysis. The HapMap3 dataset, available in PLINK format from the HapMap Project website, contains genotypes from 1,184 persons.
6) I used PLINK to select relatively common variants (those whose minor allele has frequency >5% in this dataset) on chromosome 22.
7) I used the single_snp() function to perform single-variant association testing using LMM (a sketch of this call is given at the end of this section).

Manhattan plots are typically used to visualize the results of GWAS analysis. The following Manhattan plot shows the scatter plot of −log p vs. genomic position for each variant. Causal variants and their neighbors form peaks above a background of unassociated variants. In this example, the p-values for all true causal variants pass the threshold for significance: their p-values are sufficiently low that they lie above the threshold line on the −log p axis. The plot also reveals two common types of "false positives", i.e., variants which are not causal that nonetheless have significant p-values. Next, to assess the accuracy of phenotype predictions as per the FaST-LMM documentation, I split the 1,184 persons in my dataset into training and validation groups. FaST-LMM fits covariate and SNP effects based on the individuals in the training set, then generates predictions for never-before-seen individuals in the validation set. I then trained a FaST-LMM model using the randomly-chosen training set. Note that FaST-LMM extracts the phenotype and covariate information for training set individuals from the provided text files, which contain information on all individuals. Finally, I performed predictions on the validation set and compared these to the true simulated phenotypes. I used the coefficient of determination (R²), i.e., the proportion of variance in phenotype accounted for by the predictions, as my metric for accuracy. My LMM-22 approach is focused on the parallelization of LMM execution on GPU clusters. The general idea is to take the Python implementation of FaST-LMM and then change the implementation to map the matrix computations to stages that can be executed in parallel on GPUs (for example, using the NVIDIA Numba CUDA package for GPU acceleration 12: https://developer.nvidia.com/how-to-cuda-python).

LEARNING AND CHALLENGES
The statistical models and methods behind GWAS studies are more complex than I had originally assumed. Specifically, I had to understand matrix transformations and deeper statistical concepts such as confounding variables and L2 regularization methods. I had to use synthetic data for phenotypes instead of any real data. Also, the genomics data I could use for my experiment from the HapMap Project was limited. The cost of running LMM models on a GPU compute cluster on AWS was more than I had originally budgeted for the project.
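For reference, a minimal sketch of the association step from the setup above (step 7). The file names are placeholders, and the exact argument and result-column names should be checked against the FaST-LMM documentation for the installed version.

```python
# Sketch of single-variant LMM association testing with FaST-LMM.
from pysnptools.snpreader import Bed
from fastlmm.association import single_snp

test_snps = Bed("hapmap3_chr22", count_A1=False)  # PLINK .bed/.bim/.fam file prefix
pheno = "simulated_pheno.txt"                     # simulated phenotype file
covar = "covariates.txt"                          # optional covariates file

# single_snp fits one LMM per SNP; the GSM is estimated from the SNP data itself.
results = single_snp(test_snps, pheno, covar=covar)
print(results[["SNP", "PValue"]].head())          # column names per the FaST-LMM docs
```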
FURTHER RESEARCH
One of the stated objectives and hypotheses of my project is to try the existing and proposed LMM algorithms on massive amounts of synthetic and real SNP and phenotype data. Ideally, such data should cover ~10,000 individuals with a segmented set of ~100,000 SNPs. Such data should also account for confounding variables such as family relatedness and population structure, as I discussed in the background. I could not get to this part of my project within the current time frame. I plan to carry out such analysis as part of the next steps of my research. Another hypothesis of my project is that LMM algorithms can be accelerated in terms of time by using GPU-based computing resources instead of general-purpose CPU computing resources. To validate this hypothesis, I plan to take massive amounts of synthetic and real SNP and phenotype data, and then perform an LMM-based GWAS study across two environments: a) a cluster of CPU-based computing server instances on the AWS cloud, and b) a cluster of GPU-based computing server instances. I then need to compare the relative speed-up of computing on b) as compared to a) for a similar LMM analysis and data set. Given the dollar cost associated with provisioning CPU and GPU servers on the AWS cloud platform, I could not perform this experiment as part of my project within the limited budget I had allocated.

ACKNOWLEDGMENT
I would like to acknowledge and thank Mr. Kevyn Adams, my project mentor. I would also like to thank the research team at Microsoft who developed FaST-LMM for their contributions to the field of genomics.
A Pragmatic Ensemble Strategy for Missing Values Imputation in Health Records

Pristine and trustworthy data are required for efficient computer modelling for medical decision-making, yet data in medical care is frequently missing. As a result, missing values may occur not just in training data but also in testing data that might contain a single undiagnosed episode or participant. This study evaluates different imputation and regression procedures, identified based on regressor performance and computational expense, to fix the issue of missing values in both training and testing datasets. In the context of healthcare, several procedures have been introduced for dealing with missing values. However, there is still debate concerning which imputation strategies are better in specific cases. This research proposes an ensemble imputation model that is trained to use a combination of simple mean imputation, k-nearest neighbour imputation, and iterative imputation methods, and then leverages them in a manner where the ideal imputation strategy is selected among them based on the attribute correlations of the features with missing values. We introduce a unique Ensemble Strategy for Missing Values to analyse healthcare data with considerable missing values and to identify unbiased and accurate prediction statistical modelling. The performance metrics have been generated using the eXtreme gradient boosting regressor, random forest regressor, and support vector regressor. The current study uses real-world healthcare data to conduct experiments and simulations of data with varying feature-wise missing frequencies, indicating that the proposed technique surpasses standard missing value imputation approaches, as well as the approach of dropping records holding missing values, in terms of accuracy.

Introduction
Among the most prevalent problems in data science is the challenge of missing values [1]. This is especially true in health care records, where multiple missing values are common [2,3]. In recent years, there has been a greater emphasis on ensuring data quality and reusability and on automating data discovery and analysis procedures through the publication of data tags and statistical techniques [4]. The creation and use of automated decision support, which can improve reliability, accuracy, and uniformity [5,6], is a fundamental medical application of data science. Substantial training data is often utilised to produce a classifier, while test data is used to validate system correctness when creating a diagnostic prototype in a clinical decision support system (CDSS) [7]. The training and test data should, in principle, be complete, with no missing data for any parameters. In practice, it is not practicable or viable to recover the missing information to enhance data modelling, and missing values frequently occur in real-world therapeutic records. As a result, computational approaches must include, at the core of their analytical procedure, a methodology for dealing with missing values.

Motivation
In healthcare prediction, missing data raises serious analytical difficulties. If missing data is not treated seriously, it might lead to skewed forecasts. The challenge of dealing with missing values in massive medical databases still needs more effort to be addressed [8]. To minimise the harm to data processing outcomes, it is advisable to integrate multiple known ways of addressing missing data (or design new ones) for each system.
The demand for missing-data imputation approaches that produce better imputed values than conventional systems, with greater precision and smaller biases, is the driving force behind this study.

Missing Data Classification

Little and Rubin [9] described the missing data problem in terms of how missing values are generated, offering three categories: (1) missing completely at random (MCAR), (2) missing at random (MAR) and (3) missing not at random (MNAR). The classification is critical because it determines the biases that may exist in the data and the validity of procedures such as imputation. When the absence of a given parameter in an observation is unrelated to any other parameter, as well as to the missing values themselves, the data are missing completely at random (MCAR). Missingness under MCAR is unrelated to any other actual or perceived element in the research, making it the safest setting in which imputation may take place. When the likelihood of a missing value in a database depends on the observed data of other features, but not on the missing data themselves, the data are missing at random (MAR); MCAR may be regarded as a special case of MAR. Although MAR data carry certain inherent biases, it is possible to analyse such data without explicitly correcting for the incompleteness. When the chance of a record having a null value depends on unseen data, the data are missing not at random (MNAR). MNAR is prevalent in longitudinal data, such as a medical dataset where disease progression may cause patients to drop out of the research [10,11]; longitudinal studies on mental impairment (e.g., [12,13]) have high attrition rates. In general, medical records are susceptible to missing values of the MAR variety [14]. However, the likelihood of missing medical data is frequently influenced by the dependent variables, since ailment intensity might affect data collection possibilities [15].

Endeavours to Impute Missing Data

Data imputation methodologies replace a missing value with an appropriate value, such as a random value, the mean or median, a spatio-temporally regressed value, the most common value, or a prominent value recognised using k-nearest neighbours [16]. Further, data imputation methods such as Multivariate Imputation by Chained Equations (MICE) [17] have been established to fill incomplete data multiple times. Deep learning strategies such as Datawig [18] can predict significantly more precise outcomes than classic data imputation approaches [19] by exploiting GPUs and big data. However, as asserted in the statistical literature [20,21], as the volume of missing data increases, the variance of effect forecasts increases, and outcomes may not be accurate enough for hypothesis confirmation if over 40% of values are missing in relevant characteristics [11], implying that data imputation is not a good option when a considerable volume of data is missing. In addition, missing data in the healthcare domain do not occur randomly: some measured values are missing due to patient discontinuation, medication toxicity or complicated indicators [22]. Applying MAR data imputation methods to such healthcare data may therefore introduce biases into forecasting [23].
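To make the MCAR mechanism and the classic imputation endeavours concrete, here is a minimal, self-contained Python sketch assuming scikit-learn (an assumption of this illustration; the paper does not prescribe a library). The toy data, the 20% missingness rate and the RMSE comparison are illustrative only.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

rng = np.random.default_rng(0)

# Toy "complete" health-style data: 200 rows, 4 correlated features.
base = rng.normal(size=(200, 1))
X_full = np.hstack([base + rng.normal(scale=0.3, size=(200, 1)) for _ in range(4)])

# MCAR: every cell has the same 20% chance of being missing,
# independent of both observed and unobserved values.
X_mcar = X_full.copy()
X_mcar[rng.random(X_full.shape) < 0.2] = np.nan

# Three classic single/multiple imputation endeavours named above.
for name, imputer in [
    ("simple mean", SimpleImputer(strategy="mean")),
    ("kNN", KNNImputer(n_neighbors=5)),
    ("iterative (MICE-style)", IterativeImputer(max_iter=10, random_state=0)),
]:
    X_imp = imputer.fit_transform(X_mcar)
    rmse = np.sqrt(np.mean((X_imp - X_full) ** 2))
    print(f"{name:>22}: RMSE vs. ground truth = {rmse:.3f}")
```

Under MAR or MNAR the masking would instead depend on observed or unobserved values respectively, which is where the biases discussed above arise.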
Importance of Imputing Missing Health Data for Entropy

Entropy is extensively employed in the healthcare field for illness prediction, as a nonlinear indicator that quantifies the intricacy of the biological system [3]. Aside from routinely discovered signs, sample entropy can assist doctors in precisely confirming diagnoses and predictions, allowing them to make better therapy recommendations to patients. However, missing values, which are widespread in the massive volumes of data gathered through medical devices, can make it difficult to use analytic approaches such as sample entropy to extract information. One study [24] showed that sample entropy is highly vulnerable to missing data and that entropy variations become substantial once the dataset has missing items; unfortunately, as the fraction of missing values rises, these unexpected variations rise as well [3]. Handling missing values is therefore necessary before entropy can be calculated. Thus, the authors of the current research present a new approach for imputing missing values in health data to reduce the impact of missing data on sample entropy computation.
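Sample entropy is easy to state but sensitive to gaps, as the studies above report. The following sketch implements one common formulation of SampEn(m, r) and shows how crudely deleting missing samples shifts the estimate; the signal, the tolerance r = 0.2·SD and the deletion rate are illustrative assumptions, not values from [24].

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
    Chebyshev tolerance r, A counts the same for length m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count_pairs(m_len):
        # Restrict to n - m templates so A and B cover the same index range.
        templates = np.array([x[i:i + m_len] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = count_pairs(m), count_pairs(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.normal(size=500)
print("SampEn, full series:", sample_entropy(signal))

# Crude gap handling (deleting missing samples) distorts the estimate,
# which is why imputation is needed before entropy analysis.
mask = rng.random(500) < 0.2
print("SampEn, 20% deleted:", sample_entropy(signal[~mask]))
```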
Research Contributions

The current research provides the following key contributions.

• We introduce a unique Ensemble Strategy for Missing Values to analyse healthcare data with considerable missing values and to identify unbiased and accurate prediction statistical modelling. Overall, the suggested model offers four computational benefits: 1. It can analyse huge amounts of health data with substantial missing values and impute them more correctly than standalone imputation procedures such as the k-nearest neighbour and iterative methods. 2. It can discover essential characteristics in a dataset with many missing values. 3. It tackles the performance glitches of developing a single predictor to impute missing values, such as high variance, feature bias and lack of precision. 4. Fundamentally, it employs an extreme gradient-boosting method, which includes L1 (lasso regression) and L2 (ridge regression) regularisation to avoid overfitting.

• The current study uses real-world healthcare data (snapshot presented in Figure 1) to conduct experiments and simulations on data with varying feature-wise missing frequencies, indicating that the proposed technique surpasses standard missing value imputation approaches.

The paper is divided into various sections. Section 2 highlights the related work. Section 3 provides a detailed description of the proposed ensemble method. Section 4 details the experiments conducted and the results obtained. Section 5 provides a detailed discussion of the current research. Finally, Section 6 concludes this research.

Related Work

In recent years, strategies for dealing with missing values in large datasets have been established. Complete-case analysis (CCA) is the most basic and most used technique; it entails deleting the instances containing any missing data and thus concentrating exclusively on individuals who have a complete record for all variables [25]. Because there is typically a large gap between the true distribution of all participants and that of participants with complete details [26], excluding individuals with any missing data is likely to induce biases. Furthermore, the CCA technique dramatically lowers the data size available for training prediction models, leading to under-trained frameworks.

Data imputation is another typical approach for dealing with missing data. Imputation approaches are of two types: single and multiple imputation [27]. A single imputation replaces a missing value with one approximated value [28]; mean imputation [29], for example, replaces a missing item with the mean of the feature. Simple imputation techniques have the drawback of drastically underestimating data variation and ignoring intricate interactions among potential determinants [25]. The k-nearest neighbours (kNN) approach is often employed for missing value imputation: it substitutes mean values from the k closest neighbours for the relevant attributes. Many studies have sought to increase kNN's imputation accuracy; to improve imputation efficiency, Song et al. took comparable neighbourhoods into consideration [30]. More advanced single imputation strategies, including regression imputation and expectation-maximisation (EM), can also be used [29]. Regression models were used as a substitute to repair missing values in [31]. Instead of attempting to deduce missing values directly, Song et al. [32] recommend first estimating distances between missing and complete values, and then imputing values using the inferred distances. These techniques allocate a missing value by analysing the correlations between the dependent attribute and the remaining parameters in the dataset. Chu et al. [33] focused on data cleaning approaches, including various functional dependencies, in a unified framework. Breve et al. [34] proposed a novel data imputation technique based on relaxed functional dependencies that identifies candidate values which effectively preserve data integrity. However, in the case of healthcare data, we often encounter temporal functional dependencies for patient data collected over a time span [35]. On the other hand, multiple imputation approaches use several imputed values to approximate a missing value; multivariate imputation by chained equations (MICE) is an approach in which the statistical uncertainties of diverse imputed data are properly considered [36]. Unfortunately, no available imputation method beats all others on every database, implying that there is no standard framework [29] for missing value imputation.

Although most machine learning techniques can only impute missing data beforehand or employ CCA by default [29], XGBoost [37,38], a modern version of the gradient-boosting technique, has built-in features that autonomously manage missing data. XGBoost addresses missing values by learning a default path for missing data at each tree split. During the training phase, the best path for a missing value in every explanatory parameter at every node is discovered with the objective of minimising the regularised loss [37]. If there are no missing data in any explanatory parameter in the training examples but there are missing values in the testing dataset, the XGBoost model takes the pre-set default path. This reliance on the default path for parameter estimates on the testing set can be a concern: if the missing-data patterns in the training and test datasets are dissimilar, the forecast might be a rough estimate. This may well be the scenario when there is a significant quantity of missing values, particularly in the test dataset.
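XGBoost's default-direction handling of NaNs can be demonstrated directly. A minimal sketch, assuming the xgboost Python package: the model trains on complete data, then predicts on a test set with 30% missing cells, so every NaN is routed down the pre-set path discussed above. The data and rates are invented for illustration.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(2)
coef = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.normal(size=(1000, 5))
y = X @ coef + rng.normal(scale=0.1, size=1000)

X_test = rng.normal(size=(200, 5))
y_test = X_test @ coef
X_test_mv = X_test.copy()
X_test_mv[rng.random(X_test.shape) < 0.3] = np.nan  # missing only at test time

# No NaNs during training, so missing test values follow the pre-set path.
model = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
err_full = np.mean(np.abs(model.predict(X_test) - y_test))
err_mv = np.mean(np.abs(model.predict(X_test_mv) - y_test))
print(f"MAE, complete test set: {err_full:.3f}")
print(f"MAE, 30% missing test:  {err_mv:.3f}  (default-direction routing)")
```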
Overall, conventional machine learning algorithms face the challenge of not being adaptable enough to handle large amounts of missing data. Furthermore, the disparity between training and test data has not been adequately addressed when it comes to model inference. An ensemble model for data imputation is therefore introduced in this paper. Ensemble models are a machine learning methodology that combines numerous different models to produce a forecast; the models involved in the ensemble are referred to as base predictors. Ensemble approaches benefit from boosting weak learners into strong ones [39,40] and have been utilised in a variety of domains to improve system accuracy. Troussas et al. [41] suggested an ensemble classification approach that uses support vector machine, naive Bayes and kNN classifiers in combination with a majority-voting mechanism to categorise learners into appropriate learning styles: the model is first trained using a collection of data, and the category of an occurrence is then forecast by the base classifiers with the majority of votes. Zaho et al. [42] devised an ensemble technique integrating patch learning with dynamic-selection ensemble classification, wherein miscategorised data are used to train patch models in order to increase the diversity of base classifiers. Rahimi et al. [43] used ensemble deep learning approaches to construct a classification model that improved the accuracy and reliability of classifying software requirement specifications. On the same principle, the authors have devised a pragmatic ensemble technique for missing value imputation. This strategy likewise addresses the following technological obstacles of developing a single imputer.

• High variance, caused by rendering the model supersensitive to the inputs given to the acquired characteristics.
• Inaccuracy, because fitting the training data with a single model or technique may not be sufficient to meet expectations.
• Noise and bias, which cause single models to rely mainly on one or a few features when making predictions.

Materials and Methods

Ensemble learning is an amalgamation of various machine learning techniques that combines the estimates of several base machine learning models (base estimators) in order to achieve better predictive performance. Any machine learning algorithm can serve as a base estimator. If the base learners are homogeneous in nature, the ensemble strategy is termed a homogeneous ensemble learning method; otherwise, it is termed non-homogeneous or heterogeneous. Ensemble machine learning can be constructed on three sorts of mechanism, viz. bootstrap aggregation (bagging), boosting and stacking. Bootstrap aggregation comprises independently training weak learners (base estimators), with the outcome being the average of the results calculated by the different weak learners. In the boosting mechanism, the base estimators are trained one after the other, and the result is generated as the weighted average of the base estimators' outcomes. The stacking ensemble mechanism, on the other hand, feeds the same data to all chosen base estimators and then trains an additional machine learning model, called a meta-learner, to improve the model's overall performance. In this research, the authors have employed the stacking mechanism of the ensemble strategy to devise a novel methodology of missing data imputation for health informatics; a generic stacking sketch follows below.
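For readers unfamiliar with stacking, a generic two-level example using scikit-learn's StackingRegressor is given here. This shows only the general mechanism; the paper's own stack uses imputers as Level 1 base estimators and an XGB meta-learner, as described next. The data and estimator choices are illustrative assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=3)

# Level 1: heterogeneous base estimators all see the same data;
# Level 2: a meta-learner is trained on their out-of-fold predictions.
stack = StackingRegressor(
    estimators=[
        ("ridge", Ridge()),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
        ("forest", RandomForestRegressor(n_estimators=100, random_state=3)),
    ],
    final_estimator=Ridge(),
    cv=5,
)
stack.fit(X, y)
print("Stacked R^2:", round(stack.score(X, y), 3))
```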
This research uses different stand-alone imputers as individual base estimators in Level 1, then combines the outcomes of these base estimators and feeds them to a meta-learner machine learning model in Level 2 to make the final predictions. Figure 2 illustrates the conceptual schema of the proposed ensemble strategy. The proposed ensemble approach aims to discover unbiased and accurate prediction trends from healthcare data which, if trained on directly, might lead to biases due to significant missing values [44]. The suggested model has three stages: data pre-processing, model training and imputation.

Hereafter, the authors will use the matrix D ∈ R^(M×N) to represent the dataset, which has M observations and N characteristics. Further, d_(i,j), the parameter value for the jth characteristic of the ith observation, is the item of D at position (i, j). Many parameters' values are missing because of several intercurrent occurrences, including medication suspension or early cessation for multiple causes. The features that hold missing values have been identified and their feature indices placed in the vector Q; its complement, denoted Q̄ here, holds the indices of the features without any missing values. Also, the dataset D_train consists of p samples with no missing values in any of their rows.

Data Pre-Processing

In the data pre-processing phase, the raw data are processed to produce training data that will be used as input to a regressor model. Figure 3 depicts the entire data pre-processing procedure, which is accomplished as listed below. 1. Initially, the training data, i.e., D_train, do not contain any missing values.
A dataset D_train_mv is therefore prepared by randomly eliminating present values from the features listed in Q. 2. Three imputation techniques were chosen as mutually unrelated base predictors for the proposed ensemble methodology, since using unrelated base predictors can significantly reduce prediction errors in ensemble learning, as indicated in [40]. The D_train_mv data are passed to the three imputation methods chosen as base predictors in the current research: (1) the simple mean imputer, (2) the kNN imputer and (3) the iterative imputer. • Simple mean imputer: missing values are substituted by the mean of all non-missing values in the corresponding parameter. • kNN imputer: for every missing value, the kNN method seeks the k non-missing observations most comparable to the missing one by assessing the respective distance measurements. The missing datum is subsequently replaced by a weighted average of the k nearby non-missing values, with the weights determined by their Euclidean distances from the missing value. • Iterative imputer: multiple copies of the same data are generated and then integrated to obtain the "finest" predicted value. The MICE technique has been used to provide iterative imputation based on fully conditional specifications. 3. The values predicted by the base predictors for the missing data in D_train_mv are reserved in three 2-D matrices, Pred1, Pred2 and Pred3, for the simple mean, kNN and iterative imputers, respectively. 4. A regressor model is trained for each attribute index in Q. For training each of the q ∈ {1, 2, ..., |Q|} regressor models, a corresponding matrix P_q is provided as input, with the structure presented in Equation (1):

P_q row i = [ Pred1_(i,q), Pred2_(i,q), Pred3_(i,q), D_train_(i,q) ], i = 1, ..., p    (1)

where Pred1_(p,q), Pred2_(p,q) and Pred3_(p,q) represent the value of the qth attribute of the pth sample imputed by the simple mean, kNN and iterative imputers, respectively, and D_train_(p,q) depicts the actual known value of the qth attribute of the pth sample; a worked sketch of this pre-processing follows below.
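The pre-processing steps above can be sketched in a few lines of Python, assuming scikit-learn's SimpleImputer, KNNImputer and IterativeImputer as the three base predictors. The toy matrix, the set Q = {1, 4} and the 25% masking rate are this sketch's own assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

rng = np.random.default_rng(4)

# D_train: p complete samples; q_cols plays the role of Q (indices of
# features that have missing values elsewhere in the data).
D_train = rng.normal(size=(300, 6))
q_cols = [1, 4]

# Step 1: D_train_mv -- knock out ~25% of the values in the Q features.
D_train_mv = D_train.copy()
for q in q_cols:
    D_train_mv[rng.random(len(D_train)) < 0.25, q] = np.nan

# Steps 2-3: Pred1/Pred2/Pred3 from the three base predictors.
preds = [
    SimpleImputer(strategy="mean").fit_transform(D_train_mv),
    KNNImputer(n_neighbors=5).fit_transform(D_train_mv),
    IterativeImputer(max_iter=10, random_state=4).fit_transform(D_train_mv),
]

# Step 4: per Equation (1), one P_q per attribute in Q --
# columns = (simple, kNN, iterative, actual).
P = {q: np.column_stack([p[:, q] for p in preds] + [D_train[:, q]])
     for q in q_cols}
print({q: P[q].shape for q in q_cols})  # {1: (300, 4), 4: (300, 4)}
```

Each P_q then serves as the training table for the qth meta-regressor described next.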
Model Training

The proposed ensemble model employs the eXtreme Gradient Boost (XGB) regression technique for training purposes. An XGB model is trained for each attribute index in Q; thus, there will be |Q| XGB models. As detailed in the previous section, the training data P_q (for q ∈ {1, 2, ..., |Q|}) are provided as input to each XGB regression model for model training, as depicted in Figure 4. The value of the ith entry of P_q, i.e., â_i, is predicted using Equation (2), where a_i is the observed value of the ith entry and b_i is the sample input corresponding to {P_q(i,1), P_q(i,2), P_q(i,3)}:

â_i = Σ_(t∈T) score_t(b_i)    (2)

The function score_t represents an independent tree in the set of regression trees T, and score_t(b_i) refers to the score assigned to the ith sample by the tth tree. The objective function of the XGB, designated O_XGB, is calculated as presented in Equation (3):

O_XGB = Σ_i Δ(a_i, â_i) + Σ_t χ(score_t)    (3)

The regression tree model functions score_t can be trained by minimising the objective function O_XGB. The gap between the forecast value â_i and the true value a_i is evaluated by the training loss function Δ(a_i, â_i). Further, χ is employed to prevent overfitting by penalising model intricacy, as presented in Equation (4) for an independent tree t in the set of regression trees:

χ(score_t) = ϕξ + (η/2) Σ_(j=1..ξ) Θ_j²    (4)

where ϕ and η are the regularisation factors: ϕ dictates whether a particular node splits, depending on the anticipated loss minimisation after the split, and η is the L2 regularisation on leaf weights. ξ and Θ are the number of leaves and the scores on each leaf, respectively. The objective function can be approximated using a second-degree Taylor series [45]. Further, summation over a fixed tree structure is a useful mechanism for training the ensemble model. Let Φ_j = {i | t(b_i) = j} be the occurrence set of leaf j under the fixed structure t(b). Equation (5) gives the optimum weight Θ*_j of leaf j in terms of the first- and second-order gradients of the loss function, together with the optimum value of the associated loss function O*_XGB:

Θ*_j = − (Σ_(i∈Φ_j) r_i) / (Σ_(i∈Φ_j) s_i + η),   O*_XGB = − (1/2) Σ_(j=1..ξ) (Σ_(i∈Φ_j) r_i)² / (Σ_(i∈Φ_j) s_i + η) + ϕξ    (5)

The first- and second-order gradients of the loss function O_XGB are r_i and s_i, respectively. Further, O*_XGB can be used as a quality score for t: the lower the score, the more accurate the model. Because enumerating all tree topologies is impossible, a greedy approach is employed, starting with a single leaf and repeatedly adding branches to the tree. After splitting, let Φ_L and Φ_R be the occurrence sets of the left and right nodes, respectively. If the original set is Φ = Φ_L ∪ Φ_R, the loss reduction following the split, O_XGB_split, is as presented in Equation (6):

O_XGB_split = (1/2) [ (Σ_(i∈Φ_L) r_i)² / (Σ_(i∈Φ_L) s_i + η) + (Σ_(i∈Φ_R) r_i)² / (Σ_(i∈Φ_R) s_i + η) − (Σ_(i∈Φ) r_i)² / (Σ_(i∈Φ) s_i + η) ] − ϕ    (6)

where the first term depicts the sum of the scores associated with the left and right leaves, the second term depicts the score associated with the original leaf, i.e., the leaf before the splitting operation is performed, and ϕ is the regularisation term on the additional leaf introduced by the split. In practice, this approach is commonly used to evaluate split candidates. During splitting, the XGB model employs many simple trees, as well as the leaf-node similarity score.

Imputation

Utilising the trained ensemble model, the missing values are imputed for the test dataset. The test data are represented as D ∈ R^(M×N), with M instances and N attributes, Q again denoting the attributes with at least one missing value. The dataset is pre-processed in the same fashion as in the first phase of the proposed model, except that there are only three columns, since the actual value is the quantity to be predicted, resulting in a two-dimensional matrix of the form presented in Equation (7):

P_q_test row m = [ Pred1_test_(m,q), Pred2_test_(m,q), Pred3_test_(m,q) ], m = 1, ..., M    (7)

where Pred1_test_(m,q), Pred2_test_(m,q) and Pred3_test_(m,q) are the values of the qth attribute of the mth sample imputed by the base predictors, i.e., the simple mean, kNN and iterative imputers, respectively. The missing value within every feature may then be inferred using Equation (2) with the support of the trained ensemble models. Let Y_q denote the vector holding the values predicted by the proposed ensemble model's qth XGB regressor, as shown in Figure 5. Using the predicted set of vectors Y_1, Y_2, ..., Y_|Q| (as presented in Equation (8)), the missing values are imputed in the test dataset D:

D_(m,q) = Y_q(m)  if  D_(m,q) = nan    (8)

where m ∈ {1, 2, ..., M}, q ∈ {1, 2, ..., |Q|} and nan denotes a missing or empty value.
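Equations (5) and (6) reduce to a few sums over per-sample gradients. The tiny numeric sketch below evaluates them for squared-error loss (first gradient r_i = â_i − a_i, second gradient s_i = 1); the data, the candidate split and the regularisation values are arbitrary illustrations.

```python
import numpy as np

def leaf_weight(r, s, eta):
    # Equation (5): optimal leaf score from summed gradients.
    return -np.sum(r) / (np.sum(s) + eta)

def split_gain(r, s, left_mask, eta, phi):
    # Equation (6): loss reduction of splitting Phi into Phi_L / Phi_R.
    def term(rr, ss):
        return np.sum(rr) ** 2 / (np.sum(ss) + eta)
    return 0.5 * (term(r[left_mask], s[left_mask])
                  + term(r[~left_mask], s[~left_mask])
                  - term(r, s)) - phi

# Squared-error loss at an all-zero starting model.
y = np.array([3.0, 2.5, 4.0, 1.0, 0.5])
yhat = np.zeros(5)
r, s = yhat - y, np.ones(5)

left = np.array([True, True, True, False, False])  # candidate split
print("gain of candidate split:", split_gain(r, s, left, eta=1.0, phi=0.0))
print("left-leaf weight:       ", leaf_weight(r[left], s[left], eta=1.0))
```

A positive gain means the split lowers O_XGB; the greedy tree builder keeps the candidate with the largest gain.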
Algorithm 1 presents the procedure of the proposed ensemble model. The algorithm is partitioned into three sections: variable declaration, generation of the training dataset and model training, and application of the trained model to the testing dataset.

• In the first section (variable declaration), all the required datasets and matrices are initialised.

• In the second section, the algorithm performs two sequential tasks. a. The first task is the generation of the training dataset using the three imputation strategies, i.e., simple imputation, kNN imputation and iterative imputation; after applying each imputation method to the training dataset, the resultant datasets are stored in Pred1, Pred2 and Pred3, respectively. Then, for each attribute index in Q, a corresponding matrix P_q is formed that comprises four attributes (simple, kNN, iterative and actual). The first three attribute elements are represented by the vector B, denoting the values of the qth attribute's elements imputed by the simple, kNN and iterative imputation methods, and the fourth attribute element is represented by the vector A, denoting the known values of the qth attribute's elements. b. The second task is the training of a regressor model (XGB) using the generated training dataset. The vectors B and A are passed into the XGBRegressor method for training, and the resultant trained regressor associated with the qth attribute is denoted reg[q].

• In the third section, the algorithm performs three sequential tasks. a. The first task is pre-processing the testing dataset as in the previous section and transforming its representation into a P_q_test matrix associated with each missing-valued attribute q. The P_q_test matrix comprises three attribute elements represented by the vector B_test, denoting the values of the qth attribute's elements imputed by the simple, kNN and iterative imputation methods. b. The second task is the prediction of the missing values in the testing dataset using the trained regressor models reg[q] associated with each missing-valued attribute q; the predicted values are stored in a vector Y_q. c. Lastly, the third task substitutes the imputed results for the missing values of the qth attribute, as stored in Y_q, into the actual dataset D. After substitution the dataset is complete, and no missing values remain in it. A runnable sketch of the full procedure follows below.
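A compact, runnable sketch of Algorithm 1 is given below, assuming scikit-learn imputers for Level 1 and the xgboost package for the meta-regressors. The function names, toy data and masking rates are this sketch's own inventions; the original algorithm's variable names (Pred, B, A, reg[q], Y_q) are mirrored where practical.

```python
import numpy as np
import xgboost as xgb
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

def base_imputations(data):
    """Level 1: the three stand-alone base predictors."""
    return [
        SimpleImputer(strategy="mean").fit_transform(data),
        KNNImputer(n_neighbors=5).fit_transform(data),
        IterativeImputer(max_iter=10, random_state=0).fit_transform(data),
    ]

def train_ensemble(D_train, q_cols, missing_rate=0.25, seed=0):
    """Sections 1-2: mask known values, impute, fit one XGB per q in Q."""
    rng = np.random.default_rng(seed)
    D_mv = D_train.copy()
    for q in q_cols:
        D_mv[rng.random(len(D_mv)) < missing_rate, q] = np.nan
    preds = base_imputations(D_mv)
    reg = {}
    for q in q_cols:
        B = np.column_stack([p[:, q] for p in preds])   # (simple, kNN, iter)
        A = D_train[:, q]                               # actual values
        reg[q] = xgb.XGBRegressor(max_depth=10, n_estimators=100).fit(B, A)
    return reg

def impute_test(D_test, q_cols, reg):
    """Section 3: stack base imputations, predict with reg[q], substitute."""
    preds = base_imputations(D_test)
    D_out = D_test.copy()
    for q in q_cols:
        B_test = np.column_stack([p[:, q] for p in preds])
        Y_q = reg[q].predict(B_test)
        mask = np.isnan(D_test[:, q])
        D_out[mask, q] = Y_q[mask]
    return D_out

rng = np.random.default_rng(5)
D_train = rng.normal(size=(400, 6))
D_test = rng.normal(size=(100, 6))
q_cols = [0, 3]
for q in q_cols:                      # the test set really is incomplete
    D_test[rng.random(len(D_test)) < 0.2, q] = np.nan

reg = train_ensemble(D_train, q_cols)
D_imputed = impute_test(D_test, q_cols, reg)
print("remaining NaNs:", np.isnan(D_imputed[:, q_cols]).sum())  # 0
```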
Experiments and Results

The experimental environment was a PC with an Intel(R) Core(TM) i3-6006U CPU @ 2.00 GHz, 11.9 GB of RAM and the Windows 10 operating system. To assess the proposed ensemble imputation technique against the simple mean, kNN and multiple imputation methodologies, this research utilised the XGB, support vector and random forest regressors to quantify the accuracy of the decision support system obtained after imputing the missing values with each underlying imputation approach. Table 1 lists the configurations of the three regressors and four imputation techniques. Further, experiments are also conducted on the dataset with the records holding missing values simply dropped, to assess the effect of this strategy on prediction in comparison with the proposed ensemble method.

Real Time Dataset

This research utilised a real-time COVID-19 epidemic dataset [46], which includes missing values with varying missing percentages and varying quantities of characteristics and occurrences. This dataset contains information on the COVID-19 epidemic in the United States, with records from 3142 US jurisdictions from the start of the epidemic (January 2020) through June 2021. It draws on several publicly accessible databases and encompasses the daily counts of confirmed COVID-19 incidence and mortality, as well as 46 other attributes that could affect pandemic trends, such as each county's demographic, spatial, environmental, traffic, public health, socioeconomic, compliance and political characteristics. The underlying dataset comprises 750,938 records and 58 attributes, among which 12 attributes hold missing values. A total of 10K records were chosen randomly from the original dataset for model training. Further, three datasets of varying size (5K, 10K and 20K records) were chosen randomly for testing the proposed model. The randomly chosen test data were statistically analysed to quantify the missing values present, as depicted in Table 2. Moreover, missing values are observed in each of the 12 missing-valued attributes for every test dataset size, as presented in Table 3 and Figure 6; a small profiling sketch follows below.
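The feature-wise missingness profiling behind Tables 2 and 3 can be reproduced with pandas along the following lines. The file name and the sampling seeds are hypothetical placeholders; the actual dataset is the one referenced in [46].

```python
import pandas as pd

# Hypothetical path -- the dataset in [46] is not bundled here.
df = pd.read_csv("us_counties_covid19.csv")

# Feature-wise missing counts and percentages, as in Tables 2 and 3.
missing = df.isna().sum()
profile = pd.DataFrame({
    "missing_count": missing,
    "missing_pct": (100 * missing / len(df)).round(2),
})
print(profile[profile.missing_count > 0]
      .sort_values("missing_pct", ascending=False))

# Random train/test samples of the sizes used in the experiments.
train = df.sample(n=10_000, random_state=0)
tests = {n: df.sample(n=n, random_state=1) for n in (5_000, 10_000, 20_000)}
```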
Regressor Models

To determine the performance of the proposed ensemble framework, the authors selected three regression models: the Support Vector Regressor (SVR), the Random Forest Regressor (RFR) and the eXtreme Gradient Boost Regressor (XGBR). These regression models are built to check the performance of the different missing-data-handling methodologies discussed in the paper (i.e., the proposed ensemble imputation method, the simple mean imputation method, the kNN imputation method and the iterative imputation method). The covid_19_deaths attribute is chosen as the target attribute for training and testing these models, because it has no missing values and also happens to be the target variable in the dataset [46]. The regressor models are briefly described as follows:

1. eXtreme Gradient Boost Regressor (XGBR): XGBoost is a tree-based enactment of gradient boosting machines (GBM) used for supervised machine learning. XGBoost is a widely used machine learning algorithm in Kaggle competitions [47] and is favoured by data scientists for its high execution speed [37]. The key concept behind the boosting regression strategy is the consecutive construction of subsequent trees from a rooted tree, such that each successive tree diminishes the errors of the tree previous to it; the newly formed subsequent trees thereby update the preceding residuals so as to decrease the cost-function error. In this research, the XGB regressor model has a maximum tree depth of 10, and the L1 and L2 regularisation terms on weights are set to their defaults, i.e., 0 and 1, respectively.

2. Random Forest Regressor (RFR): Random forest is an ensemble tree-based regression methodology proposed by Leo Breiman. It is a substantial alteration of bootstrap aggregating that builds a huge assemblage of decorrelated trees and then aggregates them [48]. A random forest predictor comprises an assemblage of randomised regression trees T_i(A, Ψ_j, D_i), where Ψ_1, Ψ_2, ..., Ψ_j are independent and identically distributed (IID) outcomes of a randomising variable Ψ and j ≥ 1. An aggregated regression estimate is obtained by combining all these random trees, taking the expectation over the random variable Ψ conditionally on A and the dataset D_i. In this research, the maximum depth of the RFR trees is tuned to 5, and other parameters, such as the minimum sample split and the number of trees, are kept at their defaults, i.e., 2 and 1000, respectively.

3. Support Vector Regressor (SVR): A support vector machine (SVM) used for regression analysis is named a support vector regressor (SVR) [49]. In SVR, the input values are mapped into a higher-dimensional space by non-linear functions called kernel functions [50,51], so as to make the model linearly separable for making predictions. The SVR model is trained under the structural risk minimisation (SRM) principle [52] to perform regression; this minimises the VC dimension [53] instead of minimising the mean absolute error or the squared error. In this research, the SVR uses the radial basis function as its kernel and a regularisation parameter (C) of 1.5.

Evaluation Metrics

Evaluation is the standard way of determining a model's performance. After imputation of the missing values, this research employed the eXtreme Gradient Boost, Support Vector and Random Forest regressors to predict the target values, with the mean absolute error (Equation (9)) and mean squared error (Equation (10)) used to assess correctness over p records, where a_imputed and a_actual are the imputed and actual values:

MAE = (1/p) Σ_(i=1..p) | a_imputed,i − a_actual,i |    (9)

MSE = (1/p) Σ_(i=1..p) ( a_imputed,i − a_actual,i )²    (10)

Results

As stated above, experiments were conducted using the three regressors (XGBR, SVR and RFR) for the varying sizes of test data (5000, 10,000 and 20,000 records), employing the four imputation methods (proposed ensemble, iterative, kNN and simple mean) as well as simply dropping the instances holding missing values. The results obtained are presented in Table 4 and Figure 7 in terms of the two evaluation metrics, mean absolute error and mean squared error. To generalise the evaluation metrics for comparison, within each regression model the authors normalised the resultant value of every underlying imputer with respect to the resultant value of the proposed ensemble model, as given in Equations (11) and (12):

(mean absolute error_normalised)_regressor = (mean absolute error)_imputationMethod / (mean absolute error)_proposedMethod    (11)

(mean squared error_normalised)_regressor = (mean squared error)_imputationMethod / (mean squared error)_proposedMethod    (12)

where imputation method ∈ {iterative, kNN, simple mean, dropping instances} and regressor ∈ {XGB, RFR, SVR}. If the normalised value equals 1, the performance of the underlying imputation technique is identical to that of the proposed ensemble model. Further, if the normalised value is greater than 1, the corresponding imputation approach outperforms the proposed ensemble model; otherwise, the underlying imputation technique underperforms in comparison with the proposed ensemble model. The observed normalised values are presented in Table 5; a short sketch of these computations follows below.
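Here is a sketch of the evaluation loop, assuming scikit-learn and xgboost, with the three regressor configurations stated above. The toy regression data and the error values fed into the Equation (11) normalisation at the end are placeholders, not the paper's results.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.svm import SVR

# The three regressor configurations stated above.
regressors = {
    "XGB": xgb.XGBRegressor(max_depth=10, reg_alpha=0, reg_lambda=1),
    "RFR": RandomForestRegressor(max_depth=5, min_samples_split=2,
                                 n_estimators=1000),
    "SVR": SVR(kernel="rbf", C=1.5),
}

def scores(model, X_tr, y_tr, X_te, y_te):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    return mean_absolute_error(y_te, pred), mean_squared_error(y_te, pred)

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=6)
for name, model in regressors.items():
    mae_v, mse_v = scores(model, X[:300], y[:300], X[300:], y[300:])
    print(f"{name}: MAE={mae_v:.2f}  MSE={mse_v:.2f}")

# Equation (11): normalise per-method errors against the proposed ensemble.
mae = {"proposed": 10.0, "iterative": 7.8, "kNN": 8.1,
       "simple mean": 9.9, "dropping": 8.9}        # placeholder values
print({k: round(v / mae["proposed"], 3) for k, v in mae.items()})
```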
There are two key conclusions from the experimental comparison of the proposed ensemble model presented in Table 5 and the graphical analysis illustrated in Figure 8.

• Primitive imputation strategies, such as iterative, kNN and simple mean imputation, do not perform well when imputing the missing values of huge datasets.

• When the imputed dataset is submitted to the XGB regressor and the random forest regressor to predict target values, dropping the records with missing values appears highly promising, as demonstrated in Table 5.

Discussion

After analysing the evaluation metrics generated by the three regressor models, it was found that the proposed ensemble strategy is the most suitable option for the imputation of missing values. When the imputed dataset produced by the proposed ensemble approach is passed to the XGB regressor for performance evaluation, it yields the lowest mean absolute errors, i.e., 60.81, 54.06 and 49.38, and the lowest mean squared errors, i.e., 8266.08, 6046.26 and 4473.7, in all three test cases considered. Similarly, when the same dataset is passed to the RFR model, the model gives the lowest mean absolute errors, i.e., 112.8, 115.98 and 113.57, and the lowest mean squared errors, i.e., 23,966, 23,256.3 and 23,298.47, in all three test cases. However, when the same imputed dataset is passed to the SVR model, it gives the lowest mean absolute error, 188.31, in only one test case (20,000 records), and the lowest mean squared errors, 63,853.1 and 59,422.4, in two cases (10,000 and 20,000 records), as represented in Figure 8.

For the comparison with state-of-the-art missing-value-handling strategies, such as simple imputation, kNN imputation, iterative imputation and dropping the instances containing missing values, normalised error results were calculated using Equations (11) and (12) with respect to the proposed imputation method, as depicted in Table 5. It was observed that dropping the instances with missing values is the missing-value-handling method closest to the proposed ensemble model, as it results in normalised error estimates in the range of 0.7 to 1.0 in all three test cases. However, this method reduces the dataset size, so it should not be preferred for large and crucial datasets. Among the simple mean, kNN and iterative methods, iterative imputation is closest to the proposed imputation method, with a normalised MAE of 0.775, 0.742 and 0.679 in the three test cases (5000, 10,000 and 20,000 records, respectively) and a normalised MSE of 0.593 and 0.473 in two test cases (10,000 and 20,000 records, respectively), as computed by the XGB regressor model. By contrast, the simple mean imputation method is closest to the proposed imputation method, with a normalised MAE of 1.023, 1.011 and 0.994 and a normalised MSE of 1.021, 0.981 and 0.954 in the three test cases (5000, 10,000 and 20,000 records, respectively), as predicted by the SVR model, and with a normalised MAE and MSE of 0.768 and 0.678, as predicted by the RFR and XGB models. Similarly, the kNN imputation method is closest to the proposed imputation method with a normalised MAE of 0.782 in one test case (20,000 records) and a normalised MSE of 0.634 and 0.627 in two test cases (5000 and 20,000 records, respectively), as predicted by the RFR model.
Hence it can be said that when the dataset is small and has few missing values, dropping the records holding the missing values seems the most suitable approach, as predicted by almost all three regression models, while for large dataset sizes the simple mean, kNN and iterative methods give equivalent results in most cases but cannot match the performance of the proposed ensemble strategy, as estimated by the considered regressor models.

In the current research, the authors focused on establishing an ensemble technique for missing value imputation employing the mean value, kNN and iterative imputation techniques. In the near future, however, the authors aim to extend the current research along the following limiting parameters of the proposed model.

• Functionally dependent domains: the current research does not exploit the functional dependencies present in the dataset for the identification of missing values. The authors intend to apply the devised ensemble strategy to other healthcare datasets, including genomics-based and specific disease-diagnosis-based datasets, where the functional dependencies of attributes may be significant.

• Intelligent selection of base predictors: the base predictors chosen in the proposed model are fixed and thus do not consider other available base predictors. The authors intend to develop a system for the intelligent selection and hybridisation of different base estimators on the basis of the attributes, for instance domain dependency; categorical data should be addressed by classification-based machine learning models and continuous data by regression machine learning models. Further, a multiple-stacking approach can be integrated for the meta-learners in the proposed ensemble approach, wherein the XGB model can be replaced with kNN-based deep learning methods when handling complex healthcare datasets, which can help produce much better outcomes and be more reliable in terms of performance.

Conclusions

To efficiently model computer systems that aid medical decision-making, clean and reliable data are essential, yet data in medical records are usually missing. Leaving a considerable amount of missing data unaddressed frequently results in severe bias, leading to incorrect conclusions. In the current research work, an ensemble learning framework is introduced that (1) can handle large numbers of missing values in medical data, (2) can deal with various datasets and predictive analytics and (3) considers multiple imputed values as base predictors, utilising them to construct new base learners for the entire ensemble that maximise the correlation with the negative gradient of the loss function. The performance of the proposed ensemble method was evaluated against three commonly used data imputation approaches (simple mean imputation, k-nearest neighbour imputation and iterative imputation) and the basic strategy of dropping records containing missing values. Simulations on real-world healthcare data with varying feature-wise missing frequencies and numbers of instances, using three different regressors (the eXtreme gradient boosting regressor, random forest regressor and support vector regressor), revealed that the proposed technique outperforms standard missing value imputation approaches.
The role of spirituality as a coping mechanism for South African traffic officers

Traffic officers are faced with many stressful situations, yet each traffic officer might cope differently with these stressors. Spirituality is regarded as an essential defence in stressful situations. Therefore, this article provides a basic framework guiding traffic officers and practitioners on how spirituality can be used as a coping mechanism when faced with various work-related stressors. An interpretative, qualitative study was conducted utilising purposive sampling in which 10 traffic officers participated in in-depth interviews. In line with the interpretive paradigm, data were analysed using content analysis. The research findings indicate that, when utilising spirituality to various degrees in their workplace, traffic officers displayed adaptive coping capabilities. Traffic officers associated less spirituality, or a lack thereof, with weaker coping capability. Furthermore, spirituality in traffic officers is informed by their spiritual or religious foundation, their purpose in work and life, their connection to a spiritual source, and the fruits of spirituality. The coping ability of traffic officers is influenced by their upbringing and background, by stressors in their work environment and by their coping mechanisms. The role of spirituality in the coping of traffic officers culminated in their ability to interpret the meaning of spirituality, and then to implement spirituality as a coping mechanism.

Introduction

The work of traffic officers is very important yet quite stressful, thus the need for emotional stability, self-awareness and stress management skills in their field (Pienaar 2007:3). Interestingly, people in general respond differently even when they are confronted with identical stressors (Louw & Viviers 2010:1), and so do traffic officers. There might be various reasons for these different responses. One such reason might be situated in the concept of spirituality and how a person applies spirituality to cope more effectively in the place of work (Jacobs 2013:160; Moos 2002:67).

The context within which the South African traffic officer functions stems firstly from their appointment as prescribed by the National Road Traffic Act, No. 93 (1996). Accordingly, their duties include inspecting any vehicle for compliance with the provisions of this Act, temporarily forbidding a person from driving a vehicle should that person seem incapable, controlling and regulating traffic on any public road and requiring any person to furnish any particulars needed for identification. Actual law enforcement is done on all the public roads within the boundaries of a particular local municipal or provincial area (National Road Traffic Act 1996). The work environment of the traffic officer also includes the Law Enforcement Section within Traffic Services, where the officer carries out administrative duties. Secondly, traffic officers are appointed as peace officers in terms of section 334(1)(a) of the Criminal Procedure Act, No. 51 (1977), as published on 02 September 2011 (Gazette No. 34583, Regulation 707). These duties include all powers bestowed upon peace officers and all powers awarded to police officials, with the exclusion of specified sections and subject to certain provisions. Given the established overlapping nature of the duties of police officers and traffic officers, it is useful to consider findings from police research which are relevant to the current study.
Studies on policing and stress indicated that police work is indeed stressful (Anderson, Litzenberger & Plecas 2002:399). Police tasks contain a spiritual element because in their job officers pursue what is good, face threats to their lives and find their worth being ignored by the community (Smith & Charles 2010:320). Similarly, the road environment places growing demands on traffic officers (Pienaar 2007:3), as they may be attacked by disgruntled motorists, targeted for their firearms and thrown off motorcycles (Mushwana 1998:108). This may be detrimental to their well-being and affect the individual's perspective on life (Pienaar 2007:93; Smith & Charles 2010:320). Nonetheless, research indicated that people can affect their own well-being through practices of forgiveness, spirituality and religion (Boehm & Lyubomirsky 2009:8; Schreuder & Coetzee 2011:275). The differences between, and the overlapping aspects related to, spirituality and religion are acknowledged (Collett 2011:50; Jacobs 2013:164; Karakas 2010:8; Kourie 2009:153; Welzen 2011:37); the two terms are often applied interchangeably and complementarily to each other (Kourie 2009:153).

Spirituality

Defining the complex concept of spirituality seems challenging (Jacobs 2013:1; Naidoo 2014:4) because of its subjective nature (Karakas 2010:26). Among the various definitions of spirituality, some or all of the following elements seem to be included: meaning and purpose, ethical values and beliefs, relationships or connectedness, and transcendence (Jacobs 2013:159). Other definitions describe a personal experience (Lombaard 2011:77; Waaijman 2007:5): searching for a persistent understanding of the existential self, connecting with the sacred (Karakas 2010:8), believing in a higher power and understanding life through lived experience (Jacobs 2013:160; Welzen 2011:38).
Spirituality relates to the place of work as 'spirit at work' and entails becoming aware of your higher self, within which one seeks to be purposeful in life and work (Kinjerski & Skrypnek 2004:319; Schreuder & Coetzee 2011:11). It is also concerned with cherished relationships with both a higher force and other people (Mohan & Uys 2006:58). Connectedness with people is part of the need for self-actualisation in Maslow's hierarchy of needs, which motivates some people (Theron 2009:132). Motivation involves spiritual purpose, which includes being driven by a need for spiritual wholeness, deeper meaning and the search for creative self-expression through work and relationships (Coetzee & Roythorne-Jacobs 2007:162). 'Spirit at work' comprises interpersonal, physical, emotional, mental and spiritual characteristics, and this definite state entails physiological stimulation and positive affect (Kinjerski & Skrypnek 2004:26). However, Naidoo (2014:3) cites Ali and Gibbs (1998) in showing that the work ethics of believers may also bring about disadvantages, such as discrimination because of spiritual beliefs (Karakas 2010:26). Notwithstanding, adopting spirituality on a personal level also prompts people to consider ethical aspects, which leads to individual and societal transformation (Kourie 2009:168; Naidoo 2014:1). Hence, the benefits of spirituality in the workplace seem to outweigh the disadvantages (Van Tonder & Ramdass 2009:2), with its motivating energy facilitating service regardless of challenges (Kinjerski & Skrypnek 2004:28), which increases commitment and productivity (Naidoo 2014:1).

The post-modern approach recognises all these different views and truths on spirituality (Jacobs 2013:164) and emphasises the interconnectedness of living things (Kourie 2009:152). Even though the literature indicates that there is no clear definition of spirituality (Jacobs 2013:1), it can be argued that the issue may be less about clarity, as commonalities in perspectives are evident (Mohan & Uys 2006:58). Perhaps the issue is more about whether there is a common definition understood by all, as there may be too many people defining spirituality differently. For the purpose of this study, spirituality is defined as individuals' experiences of the Divine (Lombaard 2011:77) on their journey of meaningful authentic self-discovery (Karakas 2010:8), and entails how people express these experiences through actions and attitudes (Lombaard 2011:77).

As 'spirit at work' begins with the individual and requires a holistic consideration of existence (Gnanaprakash 2013:383; Kourie 2009:151; Krok 2008:643), more attention needs to be given to the human element of traffic officers and not just to the law enforcement aspects of their profession (Pienaar 2007:5). Human elements include the values that drive them and how they cope with the stress presented during the performance of their daily duties (Pienaar 2007:5). Research specific to the concept of coping among traffic officers seems to be limited (Pancheri et al. 2002; Pienaar 2007; Van Heerden 1990).
Coping

Generally, coping is a positive psychological construct (Coetzee & Viviers 2007:485), which refers to the perceptual, mental or behavioural efforts that people employ to deal with situations deemed potentially difficult and stressful (Schreuder & Coetzee 2011:383). Coping ability may be viewed by some as the presence or absence of coping skills (Ryan, Rapley & Dziurawiec 2014:1069), but views may differ in the extent to which contextual factors, such as social factors as opposed to individual factors, are considered (Ryan, Rapley & Dziurawiec 2014:1069). For the purpose of this study, coping is viewed as the efforts employed by individuals to reduce the negative influences of stress on personal well-being, which may involve individual resources as well as their perceptions of challenges (Cheng, Mauno & Lee 2014:73; Edwards 1988:243).

A lack of coping skills may negatively affect employees and organisations through poor service to society and increased employee absenteeism and turnover (Dewe, O'Driscoll & Cooper 2010). Pienaar (2007:93) proposes increased traffic officer training at departmental level, focused on assisting officers to cope better with stressors, as a solution. Generally, internal organisational stressors include red tape, internal politics and role uncertainty (Luthans 2008:249). External organisational stressors include technology, changes in the economy, globalisation, the effects of family on the community, and race and gender considerations (Luthans 2011:282). Traffic officers might be faced with internal organisational stressors such as rotating work shifts, tight controls and demanding work conditions, as well as external organisational stressors such as those listed above. According to Pienaar, Rothmann and Van De Vijver (2007:14), people can manage the effects of negative work experiences by confronting the stressors and looking for lessons in such experiences from a religious perspective.

Theoretical relationship

The theoretical relationship between spirituality and coping in the workplace has been explored and is depicted in Figure 1. Trends in the literature indicate the influence of spirituality on coping (Gnanaprakash 2013:383; Krok 2008:643) and that people may respond differently given the same challenges (Louw & Viviers 2010:1). The same stressor may be posed to different people, yet higher and lower levels of spirituality result in the application of positive and negative thoughts, respectively. Therefore, according to Rowe and Allen (2003:63) and Bryant-Davis et al. (2012), positive thoughts are associated with high spirituality and appear to result in stronger coping styles, while the application of negative thoughts is associated with lower spirituality, resulting in weak coping styles.

Subsequently, this study aimed to explore how traffic officers experience the role of spirituality in coping within the South African work context, and to provide a framework that can assist towards better understanding this phenomenon.

Methodological approach

Exploring the question 'how do traffic officers experience the role of spirituality in coping?'
required a methodology of the heart with characteristics of truth (Denzin, Lincoln & Giardina 2006:770). Therefore, the researcher considered the nature of the reality being studied (Ponterotto 2005:130) in selecting a qualitative study, specifically an interpretive approach, to explore participants' related lived experience in their specific context (Terre Blanche, Kelly & Durrheim 2006:276). A non-probability purposive sample of 10 traffic officers was used (Durrheim & Painter 2006:139), with a biographical composition consisting of one black, one white and eight mixed-race participants. The gender distribution consisted of six males and four females, of whom four were between the ages of 26 and 35, three between the ages of 36 and 45, and three between the ages of 46 and 55 years.

The research was located within the South African traffic law enforcement environment, comprising the Law Enforcement Section within the particular Traffic Service Centre, where participants perform administrative duties, as well as the public roads according to the area of jurisdiction, where actual law enforcement takes place.

Data were collected through semi-structured, in-depth interviews. Interviews were primarily conducted away from the work environment to avoid interruptions, ensure privacy and create a relaxed environment (Kelly 2006a:298; Terre Blanche, Kelly & Durrheim 2006:276). Permission was obtained from the participants to audio record the interviews and to make verbatim transcriptions (Koekemoer & Mostert 2010:3; Wassenaar 2009:75). Conversing with traffic officers on two or more occasions further contributed towards ensuring that findings about their perceptions were correct representations of their views (Kelly 2006b:380).

Findings

The data analysis yielded various themes and subthemes that relate to spirituality, coping and the role of spirituality in the coping of traffic officers.

Spirituality

A foundation of spirituality or religion plays a fundamental role in the development and formation of the traffic officer's spirituality. Participants indicated that the influence of a higher power was important to their spirituality (Kinjerski & Skrypnek 2008:319). In the challenging high-risk traffic environment (Mushwana 1998:134), traffic officers rely on this higher power for the protection of the self, the family and the different road users, who are all at risk of accidents and dangerous incidents. Upbringing within spirituality or religion by parents and others seems to play a fundamental role in the formation of participants' foundation and subsequent views on spirituality. Participants further indicated that spirituality provided values and morals (Jacobs 2013:159), helping them to distinguish between right and wrong and to explore ethical aspects (Kourie 2009:168).
Participants believe that spirituality informed the construction of their reality (Ponterotto 2005:130), and that spirituality or religion subsequently directs their lives through their actions and their reflection on the truth that they strive towards (Leedy & Ormrod 2010:135). Furthermore, participants feel that spirituality gives purpose and meaning to life. Participants described how the guidance obtained from spirituality added meaning to work and life (McLeod 2003:142; Mohan & Uys 2006:58), creating a feeling that work has purpose (Karakas 2010:20). This entails a desire to be of service and to stay the course even in difficult times. Through spirituality, participants are able to be passionate about their work and to view their career as a calling (Coetzee & Roythorne-Jacobs 2012:220; Hall & Chandler 2005:1; Karakas 2010:2). Spirituality, furthermore, seems to enable decision-making. Participants believe that they have the power to choose what they want to happen in their lives through mindful actions (Boehm & Lyubomirsky 2009:1), by making conscious decisions informed by prayer and spiritual beliefs. They take responsibility for what manifests in their lives, thus acknowledging their free will (Bergh 2009:317).

Participants apply various strategies to connect to their higher power or spiritual source (Mohan & Uys 2006:58). One such strategy is involvement in church. One traffic officer thought it unnecessary to go to church to experience this connection, thus viewing spirituality as independent of denomination (Karakas 2010:8). Traffic officers who value involvement in church use it as a coping mechanism by drawing support and learning spiritual lessons from the social connections. Another strategy is prayer. When connecting to their spiritual source (Kinjerski & Skrypnek 2008:319), traffic officers use prayer to talk to God and ask for help in various situations (Mohan & Uys 2006:58). Lastly, to sustain the connection with their higher power (Karakas 2010:8), the traffic officers use various spiritual resources, including listening to spiritual music, conversing with family and friends, or reading spiritual material.
Participants describe how spirituality manifests in the expression (Lombaard 2011:77) of values in their work and private lives. Individuals may be recognised by their actions or by the fruits of spirituality, which emerged primarily as calmness, positive attitudes, forgiveness, inner strength and awareness. Participants treasure the ability to become calm and exert self-control when faced with challenges. They must deal with the public's negative attitudes towards them (Pienaar 2007:63), hence the importance of dealing responsibly with emotions. A positive attitude is associated with spirituality, and the two cannot be separated (Rowe & Allen 2003:64). Spirituality facilitates coping by inspiring positive attitudes, as participants are able to tap into their best despite negative situations. Spirituality further fosters forgiveness, which apparently helps them to overcome disappointment and to move forward (Schreuder & Coetzee 2011:275). Forgiveness facilitates coping by helping traffic officers to voluntarily change how they feel about, and respond to, negative occurrences. Inner strength is important to participants, as they view their higher power as a source of strength (Mohan & Uys 2006:58), which makes it possible to withstand the demands of the job and thus to cope. Lastly, spirituality leads to an increased sense of awareness (Kinjerski & Skrypnek 2004:32), including awareness of the self, of God and of others. An awareness of negative consequences also emerged (Chopko 2007:89), involving the effects of one's actions or words on others. Coping Human context is vital to understanding coping (Moos 2002:67); participants' backgrounds and upbringing emerged as including communities plagued by gangsterism, crime, violence, alcohol, drugs, poverty and peer pressure. Traffic officers revealed that childhood circumstances developed their coping abilities, because beliefs and spiritual values were instilled during childhood. Most participants appear to have been taught adaptive coping mechanisms at a young age. The traffic work environment presents stressors (Mushwana 1998:134), the effects of which may manifest as aggression and frustration, adversely affecting relationships with others (Pienaar 2007:93). One such stressor is work conditions. Brutal incidents, crime and the possibility of being involved in accidents seem to pose a threat within these conditions, and may affect individuals emotionally and psychologically (Edwards 1992:245). Further, exhausting shift work and odd working hours may have a negative influence on physical well-being. A second stressor seems to be treatment by others. Traffic officers may face negative attitudes from road users (Pienaar 2007:63), poor treatment by colleagues and unfounded accusations. To cope, participants may alter their perceptions of this treatment (Edwards 1992:243), repair distressed relationships between themselves and others, and regulate the emotional distress experienced (Boehmer, Luszczynska & Schwarzer 2007:63). A further stressor appears to be inadequate coping mechanisms. Participants indicated that the coping mechanisms and counselling services available at departmental level are inadequate for dealing with these stressors, and that issues are consequently often left unattended.
Personal coping mechanisms comprise positive attitudes, thankfulness and recreation, involving strategies employed to deal with stressors, as illustrated below. Participants found it useful to surround themselves with positive people and to employ positive ways of thinking to assist them in coping with their work environment (Boehm & Lyubomirsky 2009:1). Coping is also made possible by being thankful to God and to people close to one in the midst of challenges. Traffic officers furthermore use physical activities and socialising to combat stress. The effects of stressors also manifest on a physical level (Edwards 1992:245), and recreational activities affect physical well-being and thus coping. The role of spirituality in the coping of traffic officers Participants' spirituality entails an interpretation of spirituality, which may be taught during childhood or through conversing with others about scriptures. Interpreting the meaning of spirituality deepens their understanding of the positive fruits of spirituality, and positive thoughts contribute significantly to effective coping ability (Rowe & Allen 2003:63). The ability to understand others and their views, and to respond to them in caring ways, is linked to the ability to cope effectively (Krishnakumar et al. 2015:21). The interpretation of spirituality thus facilitates better coping by enhancing social connections. Life is understood through lived experiences (Welzen 2011:38), as reflected by traffic officers' implementation of spirituality. They apply what spirituality means to them within the context of work and life, which enables them to cope better. They treasure viewing their work in a spiritual way in order to persist and remain peaceful amidst challenges. It is within the context of their external work environment that one participant introduced the concept of 'spirituality on the road', a concept that also complements the experiences of some other participants. Discussion Research findings revealed that the traffic officers in the sample all utilised spirituality to varying degrees and exhibited adaptive coping abilities when doing so. Traffic officers associated less spirituality, or a lack thereof, with weaker coping abilities. The following framework was derived (Figure 2) from the contributions of the participants and reflects their experiences regarding the role of spirituality in their coping as traffic officers. Future research may draw upon these findings and replicate the study with respondents from a wider variety of South African Traffic Services Centres. Research on the role of spirituality in the coping of traffic officers can also be expanded by exploring the influence of factors such as race, age and gender. Future research may also consider the relevance of spirituality in different religious contexts (Jacobs 2013:144), as well as a comparison with traffic officers who, in their own view, do not utilise spirituality at all.
Conclusion This article concludes that the utilisation of spirituality enables traffic officers to cope more effectively with the daily demands they face. Spirituality in traffic officers is informed by their foundation of spirituality or religion, their purpose in work and life, the connection to their spiritual source and the fruits of spirituality. Their coping ability seems to be influenced by their upbringing and background, as well as by the stressors encountered and the coping mechanisms employed. The role of spirituality in the coping of traffic officers is ultimately described by their ability to interpret the meaning of spirituality and their implementation of spirituality, which facilitates coping. Conventional content analysis was applied to analyse the data, as it assisted with reducing textual data of subjective experiences into meaningful categories (McMillan & Schumacher 2010:367; Muchinsky, Kriek & Schreuder 2009:31) through inductive reasoning (Yin 2010:94). FIGURE 1: Theoretical relationship between spirituality and coping. FIGURE 2: Framework towards understanding the role of spirituality in the coping of traffic officers: • Stressors occur: It is sometimes expected of traffic officers to remain humane and unaffected at the end of a shift, even when something stressful has happened to them. • Connection to spiritual source: In challenging times, individuals need to go to that emotional place where they can tap into their spiritual source or higher power. • Interpretation of spirituality: The connection to the spiritual source enables individuals to become aware of what their spiritual source means to them, and how they are being served spiritually by this higher power. • Extracting the fruits of spirituality: Individuals then take this spiritual meaning and extract what they need at the time, for example inner strength, self-confidence or perseverance to deal with the given stressor. • Implementation of spirituality: The spiritual resources received are then utilised in work and in life. • Coping made possible: With the implementation of what spirituality means to the individual, or by applying the fruits of spirituality to the stressful situation, adaptive coping is made possible.
4,974.2
2017-02-22T00:00:00.000
[ "Philosophy" ]
CaPSID: A bioinformatics platform for computational pathogen sequence identification in human genomes and transcriptomes Background It is now well established that nearly 20% of human cancers are caused by infectious agents, and the list of human oncogenic pathogens will grow in the future for a variety of cancer types. Whole tumor transcriptome and genome sequencing by next-generation sequencing technologies presents an unparalleled opportunity for pathogen detection and discovery in human tissues but requires development of new genome-wide bioinformatics tools. Results Here we present CaPSID (Computational Pathogen Sequence IDentification), a comprehensive bioinformatics platform for identifying, querying and visualizing both exogenous and endogenous pathogen nucleotide sequences in tumor genomes and transcriptomes. CaPSID includes a scalable, high-performance database for data storage and a web application that integrates the genome browser JBrowse. CaPSID also provides useful metrics for sequence analysis of pre-aligned BAM files, such as gene and genome coverage, and is optimized to run efficiently on multiprocessor computers with low memory usage. Conclusions To demonstrate the usefulness and efficiency of CaPSID, we carried out a comprehensive analysis of both a simulated dataset and transcriptome samples from ovarian cancer. CaPSID correctly identified all of the human and pathogen sequences in the simulated dataset, while in the ovarian dataset CaPSID’s predictions were successfully validated in vitro. Background Specific viruses have been proven to be etiologic agents of human cancer and cause 15% to 20% of all human tumors worldwide [1]. Moreover, epidemiological studies indicate that new oncogenic pathogens are yet to be discovered [2]. The International Cancer Genome Consortium (ICGC) [3], which intends to study 25 000 tumors belonging to 50 different types of cancer using next generation sequencing technologies, will allow for the first time an in-depth analysis of the viral sequence content of thousands of complete tumor genomes. However, this opportunity can be fully realized only in conjunction with the development of new genome-wide bioinformatics tools. In this context, several computational approaches have already been developed and successfully applied for the discovery and detection of known and new pathogens in tumor samples [4][5][6][7][8][9]. We present here CaPSID, a comprehensive open source platform which integrates a fast and memory-efficient computational pipeline for pathogen sequence identification and characterization in human genomes and transcriptomes, together with a scalable results database and an easy-to-use web-based software application for managing, querying and visualizing results. Implementation CaPSID implements an improved form of a computational approach known as "digital subtraction" [10] that consists of subtracting in silico known human short read sequences from human transcriptome (or genome) samples, leaving candidate non-human sequences to be aligned against known pathogen reference sequences. CaPSID differs from traditional digital subtraction (e.g., [8]), which is used as a filter, eliminating human sequences from the dataset before comparison with pathogen reference sequences.
By contrast, CaPSID matches reads against both human and pathogen reference sequences, dividing the reads into three disjoint sets per sample: a set that aligns to pathogen sequences, a set that aligns to both human and pathogen sequences, and a set that does not align to either human or pathogen sequences. This three-way division forms the basis for an exploratory environment for both known and unknown pathogen research. As shown in Figure 1, CaPSID consists of three linked components: • A pipeline to analyze and maintain sequencing datasets • A database which stores reference samples and analysis results • An interactive interface to browse, search, and explore identified candidate pathogen data Figure 1: CaPSID platform. The CaPSID platform is made of three components: a computational pipeline written in Python for executing digital subtraction, a core MongoDB database for storing reference sequences and alignment results, and a web application in Grails for visualizing and querying the data. The CaPSID Pipeline The CaPSID pipeline is a suite of command-line tools written in Python designed to identify, through digital subtraction, non-human nucleotide sequences in short read datasets generated by deep sequencing of RNA or DNA tumor samples. The pipeline can be conceptually divided into two distinct modules. The first module, called the Genomes Module, provides users with tools to create and update the in-house reference sequence database required by CaPSID for applying the digital subtraction. It uses BioPython [11] to efficiently parse GenBank files and loads whole genome reference sequences, as well as some of their annotations (e.g. gene and CDS locations), into CaPSID's database. Our database contains complete sets of human (GRCh37/hg19), viral (4015), microbial (bacterial and archaeal) (38035), and fungal (53098) genomes (as of December 2011) from UCSC [12] and NCBI [13]. This module also provides the tools to create customized reference sequence FASTA files needed by short read sequence alignment software. The second module, called the Analysis Module (see Figure 1), is responsible for executing the digital subtraction and for analyzing its results. It requires two BAM files as input for each sequenced sample to be analyzed: one containing the short read alignment results to the human reference sequences (HRS) and one containing the alignment results to all the pathogen reference sequences (PRS) found in the CaPSID database. CaPSID can directly process BAM files containing header lines formatted according to the NCBI FASTA specification. In order to produce BAM files that can be loaded into CaPSID, a user can also export genome reference files from CaPSID for passing to the short read aligner. Before processing the BAM files, the user can specify a minimum required MAPQ score, and all aligned reads that fall below that value will be removed from the analysis (a minimal sketch of such a filter is given at the end of this paragraph). CaPSID can process BAM files containing either single-end or paired-end reads and will automatically determine which analysis method is appropriate. Digital subtraction is not limited to a human reference sequence and can be executed with any host organism as long as its reference sequence has been loaded into the database. In theory, any short read alignment program is suitable for CaPSID, as long as it produces BAM files. However, one restriction applies, namely that multiple alignment locations for individual reads must be reported, as reads can map to more than one pathogen genome.
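As an illustration of the MAPQ cut-off mentioned above, here is a minimal pysam sketch; the threshold value and file names are placeholders, not CaPSID defaults:

```python
import pysam

MIN_MAPQ = 20  # illustrative cut-off; CaPSID leaves the value to the user

# Copy only alignments whose mapping quality meets the threshold.
with pysam.AlignmentFile("sample.bam", "rb") as bam, \
        pysam.AlignmentFile("filtered.bam", "wb", template=bam) as out:
    for read in bam:
        if read.mapping_quality >= MIN_MAPQ:
            out.write(read)
```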
We selected Novoalign (Novocraft Technologies [14]) for its flexibility (it allows both mismatches and gaps), accuracy and speed. For ABI SOLiD sequencing data, we chose to use BioScope (Applied Biosystems). Both of these aligners are commercial programs with costly licences; however, there are freely available and open source alternatives such as the recently released Bowtie 2 [15] (also allowing both mismatches and gaps) and SHRiMP2 [16]. As output, CaPSID's digital subtraction implementation produces three disjoint sets of short reads (or records) per sample: a set that aligns to any of the PRS, a set that aligns to both HRS and PRS, and a set of reads that does not align to any of the reference sequences (both human and pathogen). In Algorithm 1 we outline the digital subtraction method used by CaPSID, and how the three disjoint sets are constructed; a Python sketch of this partition is also given below. CaPSID uses the pysam Python module to read and process aligned short read sequences from two input BAM files (LHRS and LNHRS are read from the same BAM file, in two passes). The algorithm first processes alignment information for each read that maps to the PRS and stores it in CaPSID's database (lines 1 to 3). Next, the algorithm processes alignment information for reads that map to the HRS, storing it in CaPSID if the same read identifier also maps to the PRS (lines 4 to 8). To avoid memory performance problems, LHRS, which may be very large, is processed sequentially. Finally, with another pass through the same BAM file, this time selecting reads that do not align to the human reference sequence, CaPSID identifies unknown reads when they also do not map to the PRS by testing against an in-memory indexed copy of LPRS, which is expected to be relatively small (lines 9 to 13). Algorithm 1: CaPSID's digital subtraction method. Require: LPRS, the list of reads that align to the pathogen reference sequences; LHRS, the list of reads that align to the human reference sequence; LNHRS, the list of reads that do not align to the human reference sequence. To better evaluate the significance of the findings, the Analysis Module calculates four different metrics for each sample and for each project as a whole (defined as a collection of samples): (i) the total number of aligned reads (or hits) across any given pathogen genome, (ii) the total number of hits across genes only within a pathogen genome, (iii) the total coverage across each pathogen genome and (iv) the maximum coverage across any of the genes in a given pathogen genome. Here we define coverage as the number of genome nucleotides represented in aligned reads normalized by the genome length. The code that calculates the four metrics for each sample has been parallelized to run on multiple processors in order to make these calculations faster across samples. In addition to the Genomes and Analysis Modules, CaPSID includes a command-line script for manipulating FASTQ files that allows the filtering of low quality reads and the removal of duplicates before alignment to the reference sequences. The filtering is based on two parameters set by the user, the Phred quality threshold and the number of base pairs allowed below that threshold. Processing digital subtraction and calculating metrics on two BAM files of 34 GB and 31 GB respectively takes approximately 15 minutes when using 5 GB of RAM on a 16-core AMD 64-bit processor.
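The following is a minimal Python sketch of the three-way partition; it mirrors the structure of Algorithm 1 but is not the actual CaPSID code, and the in-memory sets standing in for the database, as well as the file-name arguments, are illustrative:

```python
import pysam

def digital_subtraction(prs_bam, hrs_bam):
    """Partition reads into CaPSID's three disjoint sets."""
    # Pass 1: index every read that aligns to the pathogen references (PRS).
    # CaPSID stores these records in its database; read names suffice here.
    pathogen = set()
    with pysam.AlignmentFile(prs_bam, "rb") as prs:
        for read in prs:
            if not read.is_unmapped:
                pathogen.add(read.query_name)

    # Pass 2: reads that align to the human reference (HRS) are recorded
    # only if the same read identifier also maps to the PRS.
    both = set()
    with pysam.AlignmentFile(hrs_bam, "rb") as hrs:
        for read in hrs:
            if not read.is_unmapped and read.query_name in pathogen:
                both.add(read.query_name)

    # Pass 3 (second pass over the same BAM file): reads mapping to
    # neither reference form the "unknown" set saved to FASTQ for
    # later de novo assembly.
    unknown = set()
    with pysam.AlignmentFile(hrs_bam, "rb") as hrs:
        for read in hrs:
            if read.is_unmapped and read.query_name not in pathogen:
                unknown.add(read.query_name)

    return pathogen, both, unknown
```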
The CaPSID Database One of the main unique features of CaPSID is that reference sequences and digital subtraction results are both stored as linked data in MongoDB, a scalable, high-performance, open source document-oriented database [17]. The CaPSID database records information on each read identified by the pipeline as part of the first two output sets described above, namely on each read that aligns either to the PRS alone, or to both the HRS and the PRS. The stored fields include, for example, the alignment location, length, score, average base qualities, alignment sequence, CIGAR (Compact Idiosyncratic Gapped Alignment Report) information, number of mismatches and whether the read aligns to the HRS. Storing both genome and read data makes it possible to run analyses quickly over known gene locations to determine which reads align over gene regions. The third output set of reads identified by the pipeline (reads which do not align to any of the reference sequences) is not stored in the database but is instead saved in FASTQ format to the local file system for further processing. Next generation sequencing produces large amounts of data, and because CaPSID stores information about each aligned read and each reference sequence, it needs to deal effectively with large data sizes. CaPSID uses MongoDB [17], a database that scales horizontally by sharding the data across a cluster of servers, while still enabling fast retrieval of large volumes of data. Thanks to this scalable architecture, there is virtually no limit to how many reads or experiments CaPSID can store in its database. MongoDB reports [17] examples of highly accessed production systems with more than 3.5 TB of data and over 20 billion records. MongoDB also provides the safety of having no single point of failure, as well as distributing both the processing load and the data storage requirements. We tested CaPSID on a single node with more than 25 million read records from 113 different transcriptome samples without a significant drop in performance. Another advantage of using MongoDB is that it offers API access from a number of programming languages (such as R, Python, Java and more), which, depending on users' needs, allows a broad range of custom data analyses. CaPSID also allows users to store meta-information about each project (defined as a set of samples) and sample. For example, the user can specify the type of disease, the type of cell and the sample source, together with alignment information (such as the aligner used, the sequencing platform, the type of sequencing and the location of the BAM files) for each sample to be processed. This database is a key component of the CaPSID platform since it allows users to store, organize and analyze the relevant information from individual samples (derived from the BAM files) in a simple and seamless way, without the burden of manipulating them programmatically. The CaPSID Web Application After digital subtraction, CaPSID's web interface enables research attention to be focused on those parts of the sequencing datasets that match the pathogen reference sequences. Its aim is to provide researchers with a rich and interactive interface to manage, query and visualize project results stored in the database. For example, it allows users to search for a specific pathogen and view its statistics across multiple samples, or to display sortable tables of coverage statistics for any sample or project (see Figure 2).
CaPSID also lets users rank genome hits in a given sample or project by using any of the four metrics described above. CaPSID integrates the genome browser JBrowse [18] to allow, with a simple mouse click, visualization and analysis of the distribution of read alignments from one or many samples across a given pathogen genome and its genes (see Figure 3). The browser can also differentiate between reads that align to the PRS only and those that align to both the HRS and the PRS, allowing users to quickly see what proportion of reads also align to the human reference. CaPSID was designed to be used as a platform for large collaborative projects, such as those between laboratories, and functions as a centralized repository of information. Separating the interface from the analysis pipeline is central to this, as it enables those without direct experience of high-throughput sequencing systems to participate in expert judgements on the digitally subtracted sample datasets. To enable this, projects in CaPSID are defined as collections of samples and allow for fine-grained control over user access levels. There are three levels of access to a project: users, collaborators and owners. Users have read-only access to projects, collaborators have permission to add, edit and remove samples, and owners have full access to the project, which includes the ability to give other users permission to access the project, and to remove the project and all associated data entirely from the platform. In addition to specific user-level access, projects on the platform can be made public, giving all CaPSID users read-access to the samples. The CaPSID web application is written in Groovy using the Grails web framework, and allows user authentication and authorization data to be defined in the CaPSID database, an LDAP server or a combination of the two. For example, specific user permissions can be stored in the CaPSID database, while login credentials come from an external LDAP server. This gives full project control to the CaPSID administrators, while still keeping the centralized access control of an LDAP server. CaPSID's manual and documentation We have provided comprehensive documentation for installing and using CaPSID, complete with a step-by-step tutorial that takes users from creating a project and loading sequencing data, to analyzing and visualizing aligned reads across pathogen genomes. The full documentation is available at https://github.com/capsid/capsid/wiki. Results and discussion Analyzing sequencing data using CaPSID Testing CaPSID pipeline accuracy In order to assess our pipeline and its efficiency in subtracting human sequences and detecting pathogen ones, we created a dataset by combining short read sequences from a real human transcriptome sample (publicly available from dbGaP) with sequences simulated at random from 10 viral reference genomes. The publicly available dataset consists of 9 million single-end reads (with a read length of 65 base pairs) sequenced from normal human tissue. We found that 97.3% of reads aligned to the human genome and 0.08% (7241) to viral genomes. The simulated dataset consisted of 10000 65-mers randomly generated from 10 viral genomes, and 270 reads known to map to both viral and human genomes. The 270 reads were not simulated but selected from one of our own sequenced datasets that had previously been aligned to both the HRS and PRS.
We note that the Novoalign algorithm, before it attempts to align short reads, by default filters out those that have low quality and low complexity. For users who want to use other short read aligners that do not include filtering options, CaPSID offers a way of filtering low quality short reads prior to the alignment (see the CaPSID Pipeline section). Users should choose their filtering criteria with care, as too stringent a filter could eliminate too many reads and produce a drop in sensitivity in detecting viral genomes. The two datasets (human and simulated) were combined into one single FASTQ file and all short reads were then aligned to the human and viral reference sequences using Novoalign. The two BAM files produced from the alignment step were then processed by CaPSID. CaPSID correctly identified all of the 17511 reads that mapped to viruses (i.e., 7241 from the original human sample and 10270 from the simulated dataset). As explained in the Implementation section, CaPSID also keeps a record of reads that align to both human and non-human reference sequences. Our simulated dataset contained 270 of these reads, and CaPSID correctly identified all of them as well. These results demonstrate the accuracy of our pipeline, that is, its ability to correctly discriminate all of the alignment results provided by the aligner. It is worth emphasizing, however, that the alignment accuracy is independent of the CaPSID pipeline and depends entirely on the choice of the alignment algorithm and the quality of the sequenced reads. The BAM files from this simulated dataset can be downloaded from the CaPSID homepage as part of its demo package. Ovarian cancer dataset Sixteen ovarian tumor transcriptomes were sequenced (for a total of 2.5 billion reads) at the Ontario Institute for Cancer Research using both the Illumina and AB SOLiD technologies. Sequenced samples were first aligned to human and pathogen genomes and then processed through CaPSID's analysis pipeline. After ranking all reported virus genomes in each sample according to the maximum gene coverage metric using CaPSID's web interface (a sketch of such a ranking query is given at the end of this section), one sample, OVCA0016, came to our attention. Figure 4 shows that in this sample, among the top four genome hits with gene coverage greater than 90% were simian virus 40 (SV40) followed by three group C human adenovirus genomes. Further analysis using CaPSID's genome browser revealed that SV40 reads were concentrated almost entirely across its small and large T-antigens (Figure 5A), whereas only the early region 1 E1A and E1B adenovirus (most likely Ad5) genes were expressed (Figure 5B). As it seemed highly unlikely to find a human ovarian tumor expressing both adenovirus and SV40 oncogene products, we hypothesized that the OVCA0016 cell line had been contaminated with, or accidentally replaced by, 293T cells, an SV40 T antigen-expressing human embryonic kidney cell line [19] derived from Ad5 E1A-/E1B-expressing 293 cells [20]. Processing the OVCA0016 sample, with approximately 255 million reads in two BAM files of 22 GB and 30 GB, takes approximately 21 minutes when using 5 GB of RAM on a 16-core AMD 64-bit processor.
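The kind of ranking query used here is simple to express against a document store. The following hypothetical pymongo sketch shows one way it could look; the database, collection and field names are invented for illustration and the real CaPSID schema may differ:

```python
from pymongo import MongoClient

db = MongoClient("localhost", 27017)["capsid"]  # names are illustrative

# Rank pathogen genomes in one sample by maximum gene coverage (metric iv).
pipeline = [
    {"$match": {"sample": "OVCA0016"}},
    {"$sort": {"maxGeneCoverage": -1}},   # highest coverage first
    {"$limit": 4},                        # top four genome hits
    {"$project": {"genome": 1, "maxGeneCoverage": 1, "_id": 0}},
]
for hit in db.metrics.aggregate(pipeline):
    print(hit["genome"], hit["maxGeneCoverage"])
```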
In vitro validation To examine this possibility, OVCA0016 cells as well as 293T (positive control) and human H1299 lung carcinoma (negative control) cells were harvested in lysis buffer and proteins were separated on SDS-PAGE gels, transferred onto PVDF membranes and probed for Ad5 E1A (Figure 6A, top panel), Ad5 E1B55K (middle panel), and SV40 large T (bottom panel), as described previously [21]. With each of these proteins, a signal was detected in OVCA0016 at intensities similar to those found in 293T cells, whereas no signal was observed in H1299 cells. The same three cell lines were also grown on cover slips, fixed with 4% paraformaldehyde, stained with the same antibodies and analysed by immunofluorescence on a confocal microscope as described previously [22]. In each case, about 400 cells were analyzed in multiple fields for the expression of the E1A protein, E1B55K or SV40 T antigen. Figure 6B shows representative fields examined; in the case of 293T cells, every cell observed expressed all three viral proteins, whereas with H1299 cells none of these species was observed in any cell. Interestingly, with OVCA0016, all cells examined appeared to express both Ad5 E1A and E1B; in the case of SV40 T antigen, only one field of all cells examined contained any cells that failed to express this viral protein (marked in Figure 6B with arrows). Figure 6: (A) Western blotting. Extracts from OVCA0016, 293T and H1299 cells were analyzed by western blotting using M73 (E1A), 2A6 (E1B55K) and Pab101 (SV40 T antigen, BD Pharmingen) antibodies, as described previously [21]. (B) Immunofluorescence microscopy. The same cell types were grown on coverslips and analyzed by confocal immunofluorescence microscopy [22]. Cells not expressing T antigen are indicated with arrows. As 293T cells are known to multiply in clumps, these results suggested that at most only 1-2% of cells failed to express these viral proteins, suggesting that the OVCA0016 cell line may indeed have been accidentally contaminated and eventually largely overgrown by 293T cells. This hypothesis was then confirmed by interrogation of the DSMZ STR database [23] using 9 short tandem repeat (STR) sequences of OVCA0016 cells generated at The Centre for Applied Genomics (TCAG). The table provided (see Additional file 1) shows that the top 10 results with the highest arbitrary evaluation value (EV) are 293T and other corresponding variants (HKb20, ProPak-X.36, ProPak-A.52...). These results indicated that the OVCA0016 cell line indeed appeared to consist almost entirely of 293T cells, with only a small percentage of the original tumor cell line remaining. Although these results are of no biological significance, this unfortunate contamination of the OVCA0016 ovarian tumor cell line by 293T cells offered an ideal blind test of the efficacy of CaPSID and demonstrated its ability to detect viral gene expression in transcriptome sequences from human tumor cell lines. Assembly of the unaligned reads As mentioned earlier, after digital subtraction CaPSID saves the reads that do not align to any of the reference sequences in a FASTQ file. The existence of these unaligned reads could be indicative of the presence of some unknown pathogen (or organism) in the sequenced sample. In order to further characterize the unaligned reads in the contaminated sample, we performed de novo assembly of the unmapped reads using the short-read assembler Trinity [24].
We note that when assembling whole genome sequencing (WGS) data, where the results of alternative splicing are not observed, users should choose a different assembly algorithm such as Velvet [25]. Of the total of 255 million reads, 9 million did not map to any reference sequence in the CaPSID database; of these, approximately 7.69% assembled into contigs. The Trinity assembler produced a total of 13395 contigs ranging from 201 to 6483 bp in length. Using BLAT [26] and MEGA-BLAST [27], we found that only 69 of the assembled contigs did not map to any of the sequences present in either CaPSID's or NCBI's reference databases. In order to identify possible novel pathogens, these 69 contigs were screened for the presence of known protein features, using the InterPro [28] database of protein families, domains and functional sites. Nucleic acid sequences were first translated in all 6 frames and then scanned using InterProScan (iprscan soappy.py -crc -goterms). We found that none of them displayed a protein feature similar to the pathogen protein features found in the InterPro database. We conclude that none of the remaining contigs derive from a known pathogen; more likely they result from sequencing artifacts, uncharacterized regions of the human genome, or the presence of some unknown organism having protein features that do not match any of the protein motifs in the InterPro database. Comparison of CaPSID to other software for the identification of pathogens in high-throughput sequencing As mentioned in the Background section, a number of computational approaches already exist [4][5][6][7][8][9] for the discovery and detection of known and new pathogens from high-throughput sequencing data. However, only two of these, namely PathSeq [6] and RINS [9], are available as integrated open source software similar to CaPSID. Specificity and sensitivity analysis comparison In this section we compare the accuracy and sensitivity of CaPSID to those of RINS [9], a recently published software package for the identification of non-human sequences in high-throughput sequencing datasets. In order to compare results obtained with CaPSID and RINS, we created a benchmark dataset composed entirely of simulated reads drawn at random from both the human and viral reference sequences. The benchmark dataset is composed of 10 million reads (read length of 100 bp) generated at random from the human reference sequence (GRCh37/hg19), spiked with 10000 reads (read length of 100 bp) generated at random from 10 viral genomes. Each viral genome was then mutated by random substitutions at 3 distinct rates (5%, 10% and 25%). One thousand reads from each mutated genome were then randomly generated to produce an additional 30000 viral reads (1000×10×3). Since the read composition of the benchmark dataset is exactly known, it serves as a good standard for evaluating the accuracy and sensitivity of both CaPSID and RINS. In the CaPSID analysis all reads from the benchmark dataset were aligned with the freely available Bowtie 2 [15] aligner. The three metrics used to compare the two software applications are defined in eqs. (1)-(3) below.
The sensitivity and specificity are defined as

$$\mathrm{Sensitivity} = \frac{n_{vg}^{NOT}}{n_{vg}^{Tot}} \qquad (1)$$

$$\mathrm{Specificity} = \frac{n_{hg}}{n_{hg}^{Tot}} \qquad (2)$$

and the average hit rate as

$$\text{hit rate (in \%)} = \frac{1}{10}\sum_{g=1}^{10}\frac{n_{vg,g}^{TRUE}}{n_{vg,g}^{Tot}} \qquad (3)$$

where $n_{vg}^{NOT}$ is the number of reads derived from viral genomes not mapping to the human reference, $n_{vg}^{Tot}$ is the total number of reads derived from viral genomes, $n_{hg}$ is the number of reads derived from the human genome mapping to the human reference, $n_{hg}^{Tot}$ is the total number of reads derived from the human genome, $n_{vg,g}^{TRUE}$ is the number of reads derived from the viral genome $g$ mapping to the viral reference sequence $g$, and $n_{vg,g}^{Tot}$ is the total number of reads derived from the viral genome $g$ (a short code sketch of these metrics is given after the feature comparison below). We found that the sensitivity and specificity of both CaPSID and RINS were 100% on the benchmark dataset (including both the non-mutated and mutated data). The results of our analysis indicate that CaPSID gives very similar results to those of RINS in terms of sensitivity, specificity and performance (CaPSID runs in < 20 min and RINS in < 14 min when performing the analysis on the benchmark dataset containing ≈ 10 million reads on a 16-core AMD 64-bit processor using < 6 GB of RAM). However, when using our third metric (see eq. 3) we find that CaPSID performs better than RINS by mapping significantly more reads derived from mutated genomes back to their original reference sequences, as shown in Table 1. This result demonstrates the ability of our approach to identify viral genomes with substantially divergent sequences (up to 25%) and indicates that it could be used for the identification of novel pathogens whose sequences diverge by up to 25% from the reference sequences stored in its database. Other feature comparison In addition to the comparison in accuracy and performance presented above, CaPSID includes additional important features that, to the best of our knowledge, are not part of either PathSeq or RINS. • Unlike PathSeq or RINS, which are primarily analysis tools, CaPSID has an easy-to-use and manageable database allowing users to organize, store, analyze and visualize information from each project and sample in a seamless way. CaPSID's database is accessible through CaPSID's user-friendly web application, which integrates a genome browser (complete with genome annotations) making the analysis and visualization of alignment results straightforward. • CaPSID provides coverage metrics that allow users to rank pathogens in significance based not only on the overall genome coverage but also on the gene coverage. • In CaPSID, users can align short reads using any aligner of their choice: having a single preferred aligner might be a limiting factor, especially as new, faster and more accurate aligners become available (for example, PathSeq uses the MAQ aligner followed by Mega BLAST and BLASTN in order to align those reads with additional mismatches or gaps that are not aligned by MAQ (see [6]), and RINS [9] uses a combination of BLAT and Bowtie). • Unlike PathSeq or RINS, CaPSID does not wholly remove reads that simultaneously map to both human and pathogen genomes but keeps them in the database, allowing rapid identification of pathogen-to-host integration sites. • CaPSID does not require a third-party commercial computing platform such as the one used by PathSeq (Amazon Elastic Compute Cloud, EC2) and can be run efficiently on either a desktop computer or a cluster.
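Returning briefly to the benchmark metrics, eqs. (1)-(3) reduce to a few lines of Python once the per-read ground truth from the simulation has been tallied; the sketch below uses illustrative variable names and is not code from the CaPSID pipeline:

```python
def benchmark_metrics(viral_not_human, viral_total,
                      human_mapped, human_total,
                      per_genome_true, per_genome_total):
    """Compute sensitivity, specificity and average hit rate, eqs. (1)-(3)."""
    sensitivity = viral_not_human / viral_total   # eq. (1)
    specificity = human_mapped / human_total      # eq. (2)
    # Eq. (3): average, over the 10 viral genomes, of the fraction of
    # reads mapping back to the genome they were simulated from, in %.
    hit_rate = 100.0 * sum(
        true_g / tot_g
        for true_g, tot_g in zip(per_genome_true, per_genome_total)
    ) / len(per_genome_total)
    return sensitivity, specificity, hit_rate
```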
PathSeq performs subtractive alignments using six different human genomes and then uses local aligners such as Mega BLAST and BLASTN to re-align reads (not aligned by MAQ during the initial alignments) to the two additional human sequence databases. In comparison, CaPSID's current subtraction is done in one pass against a single human reference genome with splice junctions. Thus CaPSID might potentially fail to subtract some of the reads aligned by PathSeq to its database of human references, although this is not a fundamental constraint of the CaPSID architecture, and further work to use multiple human reference databases would be relatively straightforward. However, because PathSeq uses local aligners, its approach can be computationally very intensive, and for users desiring to identify known pathogens with a large reduction in runtime, CaPSID's approach might be more advantageous, especially when handling the large datasets of next generation sequencing. In addition, CaPSID's three-way approach of retaining reads that map to pathogen genomes, even when they also map to human genomes, reduces the risk, present in both PathSeq and RINS, that pathogen reads are inadvertently omitted during subtraction. Conclusions In this article, we have presented CaPSID, a comprehensive bioinformatics platform for the detection of pathogen sequences in genome and transcriptome samples. We have demonstrated that CaPSID is an efficient tool that performs well on both simulated and real datasets. We have shown that CaPSID's predictions can be successfully validated in vitro, and that CaPSID offers new and useful features that are not available in any current software used for the identification of pathogens in high-throughput sequencing. Furthermore, and more importantly, CaPSID is suitable for collaborative projects between teams of scientists, for example between bioinformaticians and molecular virologists, through its web interface, which allows researchers without expert knowledge of computational techniques to analyze alignment results stored in the CaPSID database. The CaPSID platform is currently used in a real production environment to analyze sequencing data generated by the OICR laboratories from different tumor types. Since CaPSID was deployed at the OICR, it has been used in the analysis of more than 113 transcriptome samples across six different projects, with a total of 25 million aligned reads stored in the CaPSID database. We believe CaPSID to be a versatile research tool that we hope will be used by researchers for the detection and identification of known and new pathogens using next generation sequencing data. Availability and requirements Project Name: CaPSID Project home page: https://github.com/capsid/capsid Operating system(s): Linux, Mac OS X Programming language: Python 2.7 (no support for Python 3 at the moment), Groovy Other requirements: MongoDB, OpenJDK (≥ 1.6.0_20), BioPython License: GNU GPL3 Any restrictions to use by non-academics: None
7,336
2012-08-17T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
Insights into structure and redox potential of lignin peroxidase from QM/MM calculations † Ludovic Castro, Luke Crawford, Archford Mutengwa, Jan P. Götze and Michael Bühl* Redox potentials are computed for the active form (compound I) of lignin peroxidase (LiP) using a suitable QM/MM methodology (B3LYP/SDD/6-311G**//BP86/SVP:CHARMM). Allowing for dynamic conformational averaging, a potential of 0.67(33) V relative to ferrocenium/ferrocene is obtained for the active form with its oxoiron(IV) core. The computed redox potential is very sensitive to the charge distribution around the active site: protonation of titratable residues close to the metal center increases the redox potential, thereby rationalising the known pH dependence of LiP activity. A simple MM-charge deletion scheme is used to identify residues that are critical for the redox potential. Two mutant proteins are studied through homology modelling, E40Q and D183N, which are predicted to have an increased redox potential by 140 mV and 190 mV, respectively, relative to the wild type. These mutant proteins are thus promising targets for synthesis and further exploration toward a rational design of biocatalytic systems for oxidative degradation of lignin. Found in all vascular plants, lignin is one of the most abundant renewable sources of polymers on earth, making it an indispensable feedstock for carbon recycling. However, lignin is extremely recalcitrant to degradation because of its complex and heterogeneous structure, derived from oxidative coupling of monolignols. 1 Most microorganisms are ineffective at degrading it, as anaerobic processes do not attack the aromatic rings of the polymer and aerobic processes are slow. In nature, only basidiomyceteous white-rot fungi can degrade lignin in an effective way. 2 It has indeed been demonstrated that some white-rot fungi prefer to attack lignin rather than cellulose in wood, 3 leaving the latter pale and fibrous because of oxidative bleaching. This selectivity is extremely interesting for industrial applications, since many wood processes consist of the removal of lignin only (e.g. in biopulping). Lignin peroxidase (or ligninase, LiP) is one of the enzymes produced by white-rot fungi for the degradation of lignin. LiP belongs to class II peroxidases. 4 These enzymes can oxidise substrates by multi-step electron transfers (Fig. 1).
The general LiP-catalysed reaction is a two-step mechanism 5 involving (1) the ferric resting state of the native enzyme, (2) the radical-cation oxoferryl intermediate compound I (Cpd I), and (3) the neutral oxoferryl intermediate compound II (Cpd II). LiP is able to catalyse the H2O2-dependent oxidative depolymerisation of lignin 6 and is also known to oxidise non-phenolic aromatic substrates and organic compounds presenting high redox potentials (up to 1.4 V versus the standard hydrogen electrode (SHE)) when H2O2 is present, which is quite uncommon among peroxidases. 7 As in the P450 enzymes, the reactive species of the catalytic cycle is assumed to be Cpd I. In the case of LiP, the reduction of Cpd I by an electron-donating substrate to yield Cpd II is pH-dependent, the rate of the reaction increasing when the pH is low. 8 Kersten et al. compared the activity of lignin peroxidase, horseradish peroxidase and laccase in the oxidation of a series of methoxybenzenes presenting a range of redox potentials (from 0.81 V to 1.76 V versus the saturated calomel electrode). 9 The authors show that LiP can oxidise 10 out of the 12 congeners used in the study, while laccase and horseradish peroxidase can oxidise only 1 and 4 out of 12 congeners, respectively (the ones with the lowest redox potentials), making LiP the stronger oxidant. The broad substrate specificity of these enzymes, and in particular of LiP, suggests that they do not have specific binding sites and that the redox potentials of the enzymes are mostly responsible for their activity. The authors also demonstrate a very important pH-dependency for the three enzymes, with a maximum of activity around pH 3 and a very sharp decrease when the pH increases. They estimate the redox potential of LiP as more than 1.2 V/SHE at pH 3. 10 The present theoretical paper provides insights into the redox potential of the Cpd I/Cpd II couple of LiP. Calculations of redox potentials for molecular systems in solution are usually carried out with the use of a thermodynamic Born-Haber cycle, using quantum mechanics (QM) methods such as density functional theory (DFT) to evaluate the free energy of electron uptake by the oxidant (Cpd I in our case) in solution (more details in the ESI †); the working relations are summarised below. As LiP is too big for full QM calculations, the steric and electrostatic contributions of the surrounding environment on the active site are modelled through combined quantum mechanical/molecular mechanical (QM/MM) calculations, which have become a powerful theoretical tool to study proteins and enzymes. 11 In such calculations, the active site of the enzyme can be treated at a QM level while the protein environment and the solvent are simulated by means of molecular mechanics (MM). QM/MM calculations on LiP have been reported before, but with a different focus (namely the role of Trp radicals in the long-range electron transfer). 12 We now present the first QM/MM simulations of the redox potentials of the active Cpd I in LiP. 13 The geometries of Cpd I and Cpd II were optimised after 1000 ps of equilibration using classical MD; results obtained with the largest QM region (presented in Fig. 2) will be discussed preferentially. Geometrical differences between Cpd I and Cpd II are very subtle (see ESI † for details); also, formation of Cpd I from the wild type does not involve major changes in the global structure of the enzyme (Fig. 3). The redox potentials have been calculated for a range of different QM regions.
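Concretely, the potentials discussed below derive from such a cycle; for a one-electron couple the working relations take the standard textbook form (a summary under that assumption, not equations quoted from the paper's ESI):

$$E_{\mathrm{abs}}^{\circ}(\mathrm{Cpd\,I/Cpd\,II}) = -\frac{\Delta G_{\mathrm{red}}^{\circ}}{nF}, \qquad n = 1,$$

$$E_{\mathrm{rel}}^{\circ} = E_{\mathrm{abs}}^{\circ}(\mathrm{Cpd\,I/Cpd\,II}) - E_{\mathrm{abs}}^{\circ}(\mathrm{Fc}^{+}/\mathrm{Fc}),$$

where $\Delta G_{\mathrm{red}}^{\circ}$ is the computed free energy of electron uptake by Cpd I in its protein and solvent environment, $F$ is the Faraday constant, and the ferrocenium/ferrocene couple (whose absolute potential is calculated below as 5.03 V) serves as the reference.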
The size of the QM region does not have a large influence on the redox potentials. The latter have been computed with BP86/Def2-SVP at the QM/MM level, and single-point calculations have then been carried out with B3LYP/SDD/6-311G** and the same QM/MM setup. Because both levels afford the same trend (see Fig. S4 in the ESI †), we can discuss either the BP86 or the B3LYP computed redox potentials. From here on, we will only discuss the B3LYP results for consistency with QM/MM calculations in the literature. 14 They will be given relative to the absolute redox potential (ARP) of the ferrocenium/ferrocene redox couple in water, calculated as 5.03 V. For the "neutral" Cpd I/Cpd II couple, where all residues were in their "normal" protonation states expected for pH 7, a relative redox potential (RRP) of 0.48 V is computed (Table S3 in the ESI †). 15,16 Averaging over a number of snapshots increases this value to 0.67(33) V, which is in good qualitative agreement with the experimental estimate 9 (see Table S4 and discussion in the ESI †). In order to gain some insights into the relationship between the activity of the enzyme and the pH, we performed additional QM/MM calculations for systems where specific residues have been protonated. Three such systems have been prepared, with Asp238, or His47, or both protonated. 16 The concomitant predicted increase in the redox potential is 1.14 V, 1.22 V, and 1.73 V, respectively. 15 This increase arises because Cpd I is destabilised by the close proximity between the protonated residue and the electron-deficient porphyrin (see ESI † for details). In order to gain deeper insights into the impact of local charge distributions on the RRPs and to identify potential targets for subsequent mutation studies, we devised a simple protocol for rapid screening: we performed single-point B3LYP/SDD/6-311G** calculations on the optimised geometries of Cpd I and Cpd II, deleting the MM charges of individual specific residues. The QM region of the system used for this test only includes the heme and His176, employing the geometry from the QM/MM optimisation with BP86, Def2-TZVPP for iron and Def2-SVP for all other atoms. Fig. 2: The largest QM region used in this study (carbon: grey, hydrogen: white, oxygen: red, nitrogen: blue, iron: turquoise). The "outer" propionate is protonated in order to form an H-bond with Asp183. Fig. 3: Superposition of the optimised structure of Cpd I (in blue) and the experimental structure of the resting state (in red). A zoom into the active site is depicted; for a view of the full structure see Fig. S3 in the ESI. † With this methodology, the calculated B3LYP RRP with the complete set of charges is equal to 0.29 V. Calculated RRPs with selected charges deleted are compiled in Tables S5-S7 in the ESI. † When charges from neutral residues are deleted, the RRP is affected by only 0.14 V or less. When the charges of negatively charged residues (aspartates and glutamates) are deleted, the effect is stronger (see plot in Fig. 4a). Overall, the deletion of these negatively charged residues increases the redox potential of the Cpd I/Cpd II couple, suggesting that these residues stabilise Cpd I through electrostatic interactions. As expected, the increase is larger when the residue is closer to the active site, but it is still noticeable at long range. However, the magnitude of this effect is quite surprising. If it were possible to "knock out" the residue located 6 Å away from Fe, the RRP could exceed 2 V (leftmost data point in Fig. 4a).
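An unscreened Coulomb estimate shows why shifts of this magnitude are plausible. The sketch below is a deliberately crude caricature of the charge-deletion scheme: all charges and positions are invented placeholders, vacuum electrostatics is assumed (no dielectric screening or relaxation), and none of the numbers come from the actual calculations:

```python
import math

K = 14.3996  # Coulomb constant in eV * Angstrom / e^2 (vacuum)

def rrp_shift_on_deletion(delta_q, residue_charges):
    """Crude estimate of the RRP shift (in V) from deleting one residue.

    delta_q: list of (dq, (x, y, z)) giving the CHANGE in active-site
             charges between Cpd I and Cpd II (Cpd I minus Cpd II),
             in e and Angstrom.
    residue_charges: list of (q, (x, y, z)) MM point charges deleted.

    Only the residue's interaction with the charge difference between
    the two redox states survives in the energy gap, and hence in the
    redox potential.
    """
    shift = 0.0
    for dq, p1 in delta_q:
        for q, p2 in residue_charges:
            r = math.dist(p1, p2)
            # Deleting the residue removes this term from E(Cpd I) - E(Cpd II).
            shift -= K * dq * q / r
    return shift  # eV per electron transferred, i.e. volts

# Toy numbers: a net -1 e aspartate 6 Angstrom from a site that is +1 e
# more positive in Cpd I gives roughly +2.4 V, the order of magnitude of
# the leftmost data point in Fig. 4a.
print(rrp_shift_on_deletion([(+1.0, (0.0, 0.0, 0.0))],
                            [(-1.0, (6.0, 0.0, 0.0))]))
```

Dielectric screening by the protein and solvent, as well as structural relaxation, will reduce such bare estimates, in line with the smaller shifts found for the actual mutant proteins below.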
Deleting charges from negatively charged residues (equivalent to neutralising them) as far as 27 Å away from Fe should still strongly affect the RRP, since we still find an increase of ca. 0.5 V (rightmost data point in Fig. 4a). Similarly, the deletion of the charges of positively charged residues leads to a decrease of the calculated redox potentials. This means these residues destabilise Cpd I because of electrostatic repulsion; their absence stabilises Cpd I and decreases the redox potential. As for the negative residues, the effect is larger at short distance, but can be seen even at long range (Fig. 4b). All these results show that the electrostatic interactions between the active site and the protein residues have a very important influence on the stability of Cpd I with respect to Cpd II, and thus on the redox potential of the Cpd I/Cpd II couple. For example, a low pH will have the effect of protonating deeply buried residues such as Asp238. The neutralisation of the negative charge of these residues destabilises Cpd I and thus increases the redox potential substantially. This is most likely one of the reasons for the increase in activity of lignin peroxidase at low pH (another being the protons required to convert Cpd II back to the resting state, cf. Fig. 1). Whether this increase is due to protonation of a single residue close to the active site or to protonation of several ones further away is difficult to determine at this point. Site-specific targeted protonation is not possible experimentally. An alternative way to design an enzyme with increased activity would be to perform mutations replacing negatively charged residues by neutral ones, for example Asp by Asn and Glu by Gln. In order to test this idea, mutations have been carried out in silico. For this purpose, we looked for suitable residues to mutate. Since these have to be negatively charged residues, they are usually implicated in hydrogen bonds which are important for the integrity of the enzyme. Only two suitable residues were found: Asp183 and Glu40 (ca. 11 Å and 12 Å, respectively, away from Fe, see Table S6 in the ESI †). Asp183 forms an H-bond with the protonated propionate of the porphyrin ring, as well as H-bond interactions with surrounding water molecules. Replacing it by Asn183 still maintains the H-bond between the propionate of the heme and the oxygen of the asparagine, and the solvent can rearrange in order to form new interactions with the NH2 group. The carboxylic group of Glu40 only interacts with water molecules, so replacing it by Gln40 should not be problematic. For each of these residues, the structure of the enzyme was changed (Asp183 to Asn183 or Glu40 to Gln40) starting from the geometry corresponding to the 1000 ps snapshot. We then performed QM/MM optimisations with these two mutant proteins, using the smallest QM region. 17 As expected, the global structures of these mutant proteins (D183N and E40Q, respectively) are essentially unchanged from that of the wild type (see comparison of local optimised structures in Fig. 5 and S7 †). Single-point B3LYP/6-311G**/SDD calculations were carried out on the optimised geometries, and the redox potentials for D183N and E40Q were found to be 0.57 V and 0.52 V, respectively. By comparing them with the redox potential of the wild type at the same level of theory (0.38 V), it is clear that the mutations lead to a slight but noticeable increase of the redox potential.
However, the magnitude of this increase is clearly lower than the one induced by the protonation of the residues directly interacting with the active site, like His47 and Asp238. The actual predicted effect of these mutations is also less pronounced than expected from the simplistic charge-deletion scheme in Fig. 4a (where RRPs around 1.6 V were computed for Asp183 and Glu40). However, a predicted increase in oxidative power by up to ca. 200 mV could result in substantially increased activity toward lignin degradation (or alleviate the harsh pH requirement). The two mutant proteins that we have identified are thus promising targets for synthesis and further engineering. In summary, we have presented the first QM/MM study of the redox potential of lignin peroxidase (LiP) in its active form. This form corresponds to the famous compound I in cytochrome P450 enzymes with a heme-based oxoiron(IV) core. The computed redox potential is very sensitive to the charge distribution around the active site. Protonation of titratable residues close to the metal centre results in a substantial increase in the redox potential, qualitatively consistent with the observation that LiP has its highest oxidative power at low pH. An increase in the computed redox potential is also predicted upon "knocking out" negatively charged residues, either using a simple charge-deletion scheme (namely setting the MM charges of individual residues in the QM/MM minima to zero), or through actual homology modelling of suitable mutant proteins (i.e. by replacing negatively charged residues with neutral analogues). Two mutant proteins were identified which are predicted to have a redox potential increased by up to 190 mV relative to the wild type (while maintaining the structural integrity of the enzyme). Because such an increase could translate into a significant enhancement of activity, or the requirement of less harsh pH conditions, these mutants are promising targets for synthesis and further exploration. The oxidative power of LiP is arguably just one piece in the puzzle of lignin degradation by this enzyme, and much further work will be needed to deduce the detailed mechanism of this important process. As the present paper shows, quantum-chemical calculations can be a valuable tool along this way, and may ultimately be useful in the design of new and improved biocatalytic systems that can degrade lignin and tap into this vast, and vastly underused, resource. Experimental section Starting from the experimental crystal structure of the major LiP isozyme from Phanerochaete chrysosporium, 18 QM/MM calculations were performed using ChemShell, 19 where the QM part was treated with DFT while the MM part was described by the CHARMM force field. Geometry optimisations were carried out at the BP86/Def2-SVP level. The energies were recomputed by single-point calculations of the QM region surrounded by the optimised MM point charges, using the B3LYP functional, the Stuttgart-Dresden pseudopotential in combination with its adapted basis set for Fe, and the 6-311G** basis set for all other atoms. For details and references see the ESI. †
Penetration and bouncing during impact in shallow cornstarch suspensions The impact-activated solidification of cornstarch suspensions has proven to be a multi-faceted problem and a complete explanation of the different phenomena observed during this process remains elusive. In this work, we revisit this rich problem and focus on impact on shallow suspension baths where the solidification partly leads to bouncing of the impactor. We systematically vary the depth and solid fraction of the suspension, the mass of the impactor, and the impact velocity to determine which conditions lead to bouncing. For cases where bouncing occurs we observe distinctly different dynamics as compared to those cases without it. Our results allow us to connect the velocity oscillations and stop-go cycles that were observed during settling in a deep layer with more recent work dealing with high-force and high-speed impact on a cornstarch suspension. Introduction Suspensions of cornstarch in water have become a shear-thickening model system because they present continuous shear thickening, discontinuous shear thickening and shear jamming in a particularly clear manner [2,16]. Furthermore, the dramatic transition from a liquid-like to a solid-like material enables them to, for example, support a load [2], develop cracks [17], and transmit a force between boundaries [10], features that distinctly set them apart from other liquids. Many efforts have been made to explain this peculiar behavior, including both steady-state [2,16,18] and dynamic approaches [6-8, 12, 15, 19]. However, some phenomena remain to be understood completely; in particular, the impact-activated solidification of cornstarch suspensions has proven to be highly non-trivial and multi-faceted. It was shown that a localized jammed region grows ahead of the impactor [10,19], with a velocity larger than that of the projectile [6,15,19], and generates an added mass that, however, only partly accounts for the large decelerations that were measured. In fact, recent work from the Behringer group [9] provided evidence for the existence of two fronts, namely a fast pressure front followed by the slower, above-mentioned solidification front. This was realized by replacing one of the side walls by a photo-elastic material, thus creating a sensitive device to measure pressure and pressure wave speeds. Simultaneously, it has been observed that the velocity of a sphere settling in a deep bath of cornstarch suspension reaches an average terminal velocity around which it oscillates (bulk oscillations), while once the sphere approaches the bottom of the container, it undergoes what have been called stop-go cycles, in each of which the sphere comes to a complete stop and then re-accelerates [7,8]. Both observations were attributed to the building up and subsequent relaxation of a solid plug. The complete stop during the stop-go cycles would then be caused by the interaction of this solid plug with the bottom boundary. Direct interaction between the boundaries and the impactor is believed to occur only once the jammed front has reached the boundaries, after which the deceleration is observed to be much larger than that caused by added mass [15]. It has been suggested that this large deceleration is the result of the compression of the jammed plug between the impactor and the boundaries [1,11,13]. In this work we revisit the impact of a projectile on a water-cornstarch suspension.
We focus our attention on shallow baths and high impact velocities, which in many cases cause the impactor to bounce. We explore the conditions that lead to bouncing and compare the time evolution of position, velocity and acceleration for cases with bouncing, i.e., where the impactor velocity changes sign, and cases in which the impactor position decreases monotonically until the object comes to rest. As we will show below, in this way we are able to build a bridge between the studies that focused on the growth of the jammed front and those that deal with the settling of a sphere in a deep suspension. The paper is structured as follows. In the next section, we will describe our experimental setup. Subsequently, we will present our results and discuss under which conditions bouncing is observed. Turning to the trajectories of the impactor, we will discuss how these are affected by changing the depth of the suspension layer and how two very distinct regimes are observed, namely one that is dominated by a viscous response and the other by viscoelasticity (Sect. 3). We will conclude by discussing the different forces acting on the impactor in these two regimes. Methods The experimental setup consists of an acrylic box (of height 15 cm, and with a square bottom area of 30 × 30 cm 2 ) with transparent side walls, which contains the cornstarch suspension, as depicted in Fig. 1a. The depth of the suspension layer, H Cs , is varied from 1.0 to 12.0 cm. The impactors are cylinders of fixed diameter d = 2 cm and height h = 6 cm, with masses M cyl of 49.6 g and 129.9 g (consisting of aluminium and stainless steel, respectively). These were left to fall freely from different initial heights, H fall = 0−150 cm, resulting in velocities up to 5.4 m/s. To vary the falling height, the impactors were suspended from an electromagnet connected to a vertical translation stage. The impact event is recorded with a fast camera (Photron, FASTCAM Mini UX100) at 5000-6400 fps with a resolution of 65-68 pixels/cm. The suspensions are prepared using demineralized water and additive-free cooking cornstarch (250 g sealed boxes, Maizena, Duryea). For every suspension layer depth H Cs a new mixture was prepared. Before every single impact, the suspension was mixed thoroughly to avoid sedimentation. Four different cornstarch volume fractions φ Cs were tested, namely 0.38, 0.40, 0.42, and 0.44. We initially obtained the mass fraction of the suspension and then converted it to a volume fraction, taking the density of cornstarch as ρ Cs = 1,542 kg/m³ [14] and the density of water as ρ w = 994 kg/m³, measured with a density meter (DMA 35, Anton Paar). In this conversion, we have neglected the effect of the porosity of the cornstarch grains [5], which may cause the true volume fractions to be slightly different from the reported ones, but will not affect our conclusions. We tracked the position z of the top of the cylinder versus time using a home-made MATLAB code (see Fig. 1b for an example of the detected top) and from it we obtained its velocity v and acceleration a by differentiating a piecewise 3rd-degree polynomial fit to the position data. From the acceleration a, we obtained the force F Cs exerted by the suspension on the cylinder by subtracting the acceleration of gravity and multiplying by the mass of the cylinder. Typical time-evolution plots of the position z, velocity v, and force F Cs are shown in Fig. 1c.
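The velocity and acceleration extraction described above (differentiating a piecewise 3rd-degree polynomial fit) can be reproduced with a Savitzky-Golay filter, which fits a cubic polynomial to a sliding window of the position data and differentiates it analytically. A minimal Python sketch on synthetic data; the window length and noise level are assumptions, and the original analysis used a home-made MATLAB code rather than this snippet:

```python
# Savitzky-Golay differentiation of a tracked position signal: a local cubic
# fit (polyorder=3) is differentiated analytically to give v and a.
import numpy as np
from scipy.signal import savgol_filter

fps = 5000                      # camera frame rate [1/s]
dt = 1.0 / fps
t = np.arange(0.0, 0.05, dt)
z = -0.5 * 9.81 * t**2          # synthetic free-fall trajectory [m]
z += np.random.default_rng(1).normal(0.0, 1e-5, t.size)  # tracking noise

window = 51                     # samples per local cubic fit (assumed)
v = savgol_filter(z, window, polyorder=3, deriv=1, delta=dt)  # velocity
a = savgol_filter(z, window, polyorder=3, deriv=2, delta=dt)  # acceleration
print(f"mean fitted acceleration: {a.mean():.2f} m/s^2 (expect -9.81)")
```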
Results and analysis The time-evolution curves plotted in Fig. 1c are reminiscent of the trajectories shown in [19], in the sense that both have a strong force peak just after impact. In our case, however, the impactor is not only strongly decelerated but even bounces back due to the shallowness of the suspension layer. The rebound occurs at about 1.5 cm above the bottom of the container ( H Cs = 3.0 cm ), which means that there is no direct contact between the cylinder and the container, but suggests a solid-like stress transmission between them [10]. When we increase the suspension layer depth, the impactor continues to sink (Fig. 1b, bottom snapshots). To map out the bouncing behavior of the suspension layer, we systematically changed the layer depth H Cs and volume fraction φ Cs and for each case impacted it with different masses and velocities to explore under which conditions bouncing was observed. The result is summarized in Fig. 2, where we plot two phase diagrams showing for which conditions bouncing occurs as a function of φ Cs and H Cs . We combined the effects of cylinder mass and impact velocity by plotting the kinetic energy at impact, K i = M cyl g H fall . This allowed us to match the results from different cylinder masses for φ Cs ≥ 0.4 , as shown in Fig. 2a. Figure 2a shows the effect of solid fraction on bouncing (at a constant layer depth H Cs = 3.0 cm ). For φ Cs < 0.4 (50.8 wt%), except at K i = 0 , the impactor does not bounce, in accordance with the observations of Crawford et al. [3], where the kinetic energy of impact was large and the impactor simply sank at low mass fraction. Moreover, while at φ Cs = 0.4 the impactor bounces for low K i , for high impact kinetic energies the cylinder travels all the way to the bottom, to either completely stop or to bounce dramatically due to direct contact with the bottom wall. The effect of the suspension layer thickness is shown in Fig. 2b (for constant φ Cs = 0.4 ). For the deepest layer used, no bouncing is observed at any kinetic energy of impact (as long as we can continue to track the cylinder), while bouncing happens for lower layer depths. However, for H Cs = 8.0 cm , bouncing is observed only for sufficiently high impact kinetic energies, whereas lower K i show monotonic sinking of the impactor. For shallower cases, even towards zero impact kinetic energy a slight rebound was recorded. As described above, for the shallowest cases there is a maximum kinetic energy beyond which the impactor no longer bounces. The color code in Fig. 2 represents the height to which the cylinder bounced, showing that there is a maximum rebound height for intermediate kinetic energies. Moreover, close to the boundaries of the bouncing regions the rebound height becomes small, suggesting a smooth transition between both regions. Clearly, the impact kinetic energy plays a role in determining whether bouncing occurs or not. Part of the impact energy is dissipated due to viscous effects (although other effects like friction or cracking may also contribute) and another part is transferred into the solidified added mass in front of the impactor. Finally, once the front has reached the boundaries, the remaining energy may largely be transferred into plastic and elastic deformation of the solidified plug and may partly be recovered for bouncing. Note that the bounce is only on the order of a few millimetres as compared to tens of centimetres of H fall , which implies that only a tiny fraction of the impact kinetic energy is recovered during rebound.
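That last statement can be checked with a one-line estimate using representative values from the text (a few millimetres of rebound against tens of centimetres of fall height; the exact numbers vary per experiment):

```python
# Back-of-the-envelope check of the energy recovered in a rebound.
h_bounce, h_fall = 0.003, 0.50   # metres (representative values, assumed)
print(f"recovered fraction ~ {h_bounce / h_fall:.1%}")  # ~0.6%
```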
In addition, the increase of the upper boundary of the bouncing region with increasing φ Cs and H Cs suggests that for too dilute or too shallow cases, the suspension layer cannot dissipate enough energy to stop the cylinder before it reaches the bottom of the container, as also suggested by Roché et al. [17]. This implies that as the solid fraction or the layer depth increases, the suspension acquires a larger capacity for dissipating and storing energy. Now, what causes this peculiar behavior of the cornstarch suspension layer? To obtain additional insight we now turn to measurements with the same impact velocity (i.e., the same H fall ) but varying cornstarch layer depth H Cs . In Fig. 3a we plot the typical time evolution of the vertical position z of the cylinder for different suspension depths, fixing the other parameters to M cyl = 49.6 g , φ Cs = 0.42 , and H fall = 10 cm . Initially, all individual experiments appear to follow the same master curve but eventually deviate, and reach a minimum z min after which the position increases again (i.e., the cylinder bounces). The deeper the cornstarch layer is, the longer the position follows the master curve. In the deepest case, H Cs = 12.0 cm , no bouncing is observed. This is consistent with our earlier observations: Note that we travel a horizontal path through a phase diagram similar to Fig. 2b, where for small H Cs bouncing is observed, whereas for large H Cs the cylinders sink continuously. We interpret the time at which the deviation from the master curve occurs as the moment the solidified front has reached the bottom of the container, as was shown in [1,11]. This is consistent with the observation that shallower layer depths present an earlier time of deviation, since there the jammed front has to grow over a smaller distance before it reaches the bottom of the container. It is tempting to check whether the deviation always occurs at the same distance from the bottom. We did, and in fact this is not the case: The distance above the bottom increases with the layer depth, which is again consistent with the growth of a cornstarch plug below the cylinder in time. Turning to the velocity curves in Fig. 3b, two types of behavior can be distinguished: one reminiscent of the velocity oscillations around a terminal velocity reported in [7], and another where the velocity suddenly and rapidly goes to zero and becomes positive. Interestingly, the difference in the times at which deviation occurs between H Cs = 4.0 cm and 6.0 cm is much larger than the deviation-time difference between the cases of H Cs = 2.0 and 4.0 cm , which is in contradiction with the findings of [19], where the time for the front to reach the bottom scales close to linearly with the height of the suspension. The reason may lie in the following observation from Fig. 3b: It can be seen that for H Cs ≤ 4.0 cm the deviation happens before the velocity has reached the quasi-terminal velocity, while for the H Cs ≥ 6.0 cm case it happens afterwards, but just before the second oscillation. This indicates that whereas for small layer depths the cornstarch plug may hit the bottom during its growing phase (leading to the linear behavior of [19]), for large layer depths the plug is in a (quasi-)steady state between growing and melting, corresponding to the development of a terminal velocity. It is worth noting that a rebound may be followed by a second and even a third bounce or stopping event, reminiscent of the stop-go cycles reported in [7].
This can be best appreciated in Fig. 3c, where, e.g., for H Cs = 3.0 cm , one observes after the first rebound (signalled by v = 0 ) at a depth of ≈ 0.6 cm , a second and third event at depths of 0.9 and 1.2 cm , respectively. It should be noted that in our case the impactor is not completely immersed, so after bouncing there may be a period of ballistic free fall, whereas during the stop-go cycles for the fully immersed objects reported in [7] a drag force is always present. In Online Resource 3, we show that when keeping the layer depth H Cs fixed and increasing the fall height H fall (and thus the impact velocity), the time at which the curve starts to deviate decreases. This is not surprising, given that at a higher impact velocity the jammed front is expected to grow faster [6]. In the final part of this section we will concentrate on the force F Cs that the impactor experiences inside the suspension, which we obtain by subtracting the gravitational acceleration g from the measured acceleration a and subsequently multiplying by the cylinder mass M cyl . In Fig. 4 we plot this force as a function of the distance −z of the impactor to the surface and of its vertical velocity v. Looking at Fig. 4a, where F Cs is plotted versus −z , we retrieve the two regimes found in Fig. 3: There is an approximately linear master curve (inset Fig. 4a), and a strong force peak is observed as soon as the cylinder starts to bounce. We can now interpret these two behaviors as a regime, corresponding to the master curve, that is dominated by viscous and added-mass forces [19] and a regime that is dominated by viscoelastic forces. The height of this peak decreases with increasing suspension layer depth. This stands to reason, since in a deeper bath there is a larger time span in which viscous dissipation can occur, and in addition, a larger plug and therefore a larger added mass is created. Thus, by the time the front interacts with the boundary, it has less kinetic energy to deform the plug. Turning to Fig. 4b, in which F Cs is plotted as a function of velocity v, we see that the curves for H Cs ≤ 3.0 cm lie very close together, just as they do in (v, z)-space (Fig. 3). Once the bouncing cases start to deviate from the master curve, the dominant force should come from the compression of the jammed plug between the impactor and the boundary. If the compression were purely elastic, however, the force should be the same linear function of position on the increasing and decreasing path, with a maximum at the moment the impactor stops. Clearly, from Fig. 4a it appears that the force does not follow the same path before and after reaching the maximum; instead it decreases with a steeper slope. A closer look at the decreasing section of the force curve (inset of Fig. 4a) reveals that after reaching the maximum force, the impactor continues to sink a very small distance before it bounces back. This is also clear from the F Cs versus v plot (Fig. 4b), where the maximum force does not occur at v = 0 but at slightly negative v, i.e., the cylinder continues to sink after reaching the maximum force. This kind of hysteretic behavior of the force as a function of position resembles that of a viscoelastic solid. To obtain some quantitative order-of-magnitude estimates for the elastic and viscous properties of the cornstarch suspension, we therefore used the simplest linear viscoelastic solid model available, namely the Kelvin-Voigt solid.
This model consists of a dashpot and a spring in parallel, where the jammed plug can be thought of as a series of parallel springs immersed in a liquid (water). We solved the resulting equation of motion M cyl z̈ + bż + kz = 0 , where b and k are a drag and a spring constant, respectively, with initial conditions z(t d ) = 0 and ż(t d ) = v dev , where v dev is the velocity of the cylinder at the moment t = t d the curves start to deviate from the master curve. We fitted only the part of our experimental curves after the deviation, for φ Cs = 0.40, 0.42 and for H Cs = 1.0, 2.0, 3.0 cm (a more detailed explanation of the fit can be found in Online Resource 4). For the two values of φ Cs , this resulted in two estimates for the spring constant, k 0.40 = 8 ± 3 kN/m and k 0.42 = 25 ± 9 kN/m , and two estimates of the drag coefficient, b 0.40 = 7 ± 2 kg/s and b 0.42 = 15 ± 6 kg/s, where the error intervals represent one standard deviation. Furthermore, by looking at the position at which the deviation occurred, we estimated the initial size of the plug, L 0 , and with the base area of the impactor, A, we estimated the Young modulus E ≈ kL 0 /A as E 0.40 = 0.3 MPa and E 0.42 = 0.8 MPa . We emphasize that these should be considered as order-of-magnitude estimates only, due to the uncertainty in the exact moment the plug comes in contact with the boundaries and the fact that the Kelvin-Voigt model is a very crude approximation for describing the jammed plug. We should note that once the plug is in contact with the boundary and starts to be deformed, it is possible that some changes in the volume fraction occur; therefore the drag coefficient might not be the same for the whole process. Finally, we fitted the model by Maharjan et al. [11] to the force curves after deviation but before the force became maximal and we obtained for the elastic modulus E 0.40 = 0.3 ± 0.1 MPa and E 0.42 = 0.6 ± 0.3 MPa , which are very similar to those obtained with the Kelvin-Voigt model.
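For concreteness, the bounce stage predicted by this model can be integrated directly. A minimal sketch using the φ Cs = 0.40 parameter estimates quoted above; the deviation velocity and plug size are assumed values for illustration, not fitted ones:

```python
# Kelvin-Voigt stage of the motion: after the plug touches the bottom at
# t = t_d, the impactor obeys M*z'' + b*z' + k*z = 0 with z(t_d) = 0 and
# z'(t_d) = v_dev. Parameters: the phi_Cs = 0.40 estimates from the text.
import numpy as np
from scipy.integrate import solve_ivp

M = 0.0496          # cylinder mass [kg]
b, k = 7.0, 8e3     # drag [kg/s] and spring constant [N/m]
v_dev = -1.0        # velocity at deviation from the master curve [m/s] (assumed)

def kelvin_voigt(t, y):
    z, v = y
    return [v, -(b * v + k * z) / M]

sol = solve_ivp(kelvin_voigt, (0.0, 0.05), [0.0, v_dev], max_step=1e-4)
z = sol.y[0]
print(f"max penetration after contact: {-z.min() * 100:.2f} cm")

# Order-of-magnitude Young modulus from the spring constant, E ~ k*L0/A,
# with plug length L0 (assumed) and impactor base area A.
d, L0 = 0.02, 0.015
A = np.pi * (d / 2) ** 2
print(f"E ~ {k * L0 / A / 1e6:.2f} MPa")   # ~0.4 MPa, cf. 0.3 MPa in the text
```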
Conclusion In conclusion, we have revisited the impact solidification of a cornstarch-water suspension for different suspension volume fractions ( φ Cs ) and cornstarch layer depths ( H Cs ), as well as for different impactor masses and velocities. We found that for large depths, a minimum impact kinetic energy is necessary for the impactor to bounce. However, if the kinetic energy is too large and the suspension bath too shallow or too dilute, then no rebound is observed. Therefore, just as in [13] a minimum layer depth was found for the added-mass model to be able to allow an adult to run on top of a cornstarch pool, here we found that for a given kinetic energy there is a minimum layer depth above which the compression of the solidified plug is able to cause the impactor to jump. We found that the time-evolution curves for different layer depths follow the same master trajectory but eventually deviate, with the deviation time increasing with the depth of the suspension and decreasing with the impact velocity. Additionally, if the layer depth is large enough to allow the impactor to reach a quasi-terminal velocity [7], then the deviation time increases, presumably due to melting of the solidified front. In this way, we connect the literature that deals with the settling of an object in a deep cornstarch layer [7,8,14] with the work concentrating on (high-speed) impact on a cornstarch layer [6,11,13,19]. Finally, we observed that the force response for the cases with bounce resembles that of a viscoelastic solid, and we were able to provide order-of-magnitude estimates of its properties. We want to stress that models like the Kelvin-Voigt model used here are crude approximations to a complex reality; substantial further research is needed to capture the complex processes during the compression of the plug and just before bouncing. Close to publication we became aware of similar work studying the impact of a sphere on a thin layer of cornstarch suspension by Egawa and Katsuragi [4]. The results of that paper, where the focus lies on the interpretation of the results using a Kelvin-Voigt model, appear to be in accordance with ours.
Effect of contaminations and surface preparation on the work function of single layer MoS2 Summary Thinning out MoS2 crystals to atomically thin layers results in the transition from an indirect to a direct bandgap material. This makes single layer MoS2 an exciting new material for electronic devices. In MoS2 devices it has been observed that the choice of materials, in particular for contact and gate, is crucial for their performance. This makes it very important to study the interaction between ultrathin MoS2 layers and materials employed in electronic devices in order to optimize their performance. In this work we used NC-AFM in combination with quantitative KPFM to study the influence of the substrate material and the processing on single layer MoS2 during device fabrication. We find a strong influence of contaminations caused by the processing on the surface potential of MoS2. It is shown that charge transfer from the substrate is able to change the work function of MoS2 by about 40 meV. Our findings suggest two things: first, the necessity to properly clean devices after processing, as contaminations have a great impact on the surface potential; and second, that by choosing appropriate materials the work function can be modified to reduce contact resistance. Introduction Due to their unique properties, which can differ greatly from those of bulk materials, two-dimensional materials are being targeted in a variety of research areas like surface physics, electrical engineering, chemistry and biomedical applications [1][2][3][4]. The 2D material getting the most attention besides graphene is the single layer of molybdenum disulfide (SLM), which consists of a plane of molybdenum atoms sandwiched between sulfur atoms. The main reason for this is the transition from an indirect (bulk MoS 2 ) to a direct (single layer MoS 2 ) band gap semiconductor [5]. Single layer MoS 2 has a strong photoluminescence signal [5][6][7][8][9] and other interesting properties like a mechanical stiffness of 180 ± 60 N·m −1 , which is comparable to steel [10,11], charge carrier mobilities that are comparable to Si [12,13], and it is possible to grow these ultrathin layers using CVD [14][15][16]. The main advantage SLM has to offer compared to the model 2D material graphene is its direct band gap. It allows the facile integration of SLM in electronic devices, which has been demonstrated for highly flexible transistors, optoelectronic devices, small-signal amplifiers, MoS 2 integrated circuits and chemical vapor sensors [12,[17][18][19][20][21]. It has been reported that the performance of these devices can vary greatly with the choice of the contact material, the cleanliness of the SLM surface and a top-gated structure with a high-κ dielectric [22][23][24][25][26][27]. By choosing appropriate materials in 2D devices the work function can be tuned to, e.g., lower the contact resistance and improve their performance. First experiments addressing this issue for MoS 2 by using Kelvin probe force microscopy (KPFM) have already been reported [28,29]. However, those measurements were performed not on SLM but on bilayer MoS 2 (BLM) and higher layer numbers, and under ambient conditions using amplitude-modulated KPFM, both of which have a great impact on the results. In this work we study the work function of SLM on a standard SiO 2 /Si substrate using non-contact atomic force microscopy (NC-AFM) and Kelvin probe force microscopy in situ.
In our measurements we use a gold contact patterned on SLM in order to calibrate the work function of our AFM tip, which allows us to determine quantitative work function values for SLM, BLM and few-layer MoS 2 (FLM). Additionally, we use reactive ion etching to pattern holes into the SiO 2 substrate. By comparing the work function of SLM on etched and pristine SiO 2 substrates, we show that a significant change in the work function can be achieved by substrate effects. Experimental For our studies we exfoliated MoS 2 (HQgraphene, Netherlands) on a patterned Si sample that had been covered by a 90 nm SiO 2 layer (graphene supermarket, Calverton, NY, USA). The SiO 2 was patterned by using inductively coupled plasma reactive ion etching (ICP-RIE) with Cl 2 /N 2 chemistry. The etching mask used was a standard photoresist patterned by optical lithography. The etching was performed at 35 °C using 300 W of ICP and 150 W table power. The chamber pressure was adjusted to 8·10 −3 mbar during this procedure. Reactive ion etching was employed to locally alter the surface roughness and introduce defects in the SiO 2 substrate [30,31]. The resulting structures on the SiO 2 surface consist of etched holes with a depth of about 40 nm measured using AFM. Immediately after etching, the MoS 2 was exfoliated by mechanical cleavage [32]. Single layer MoS 2 flakes were located by their optical contrast and verified using Raman spectroscopy [33,34]. For Raman point measurements and mappings, a Renishaw InVia Raman spectrometer (λ = 532 nm, P < 0.4 mW, spectral resolution ≈ 1 cm −1 ) was employed. Because SLM is highly flexible, it does not span the etched hole. Instead, the SLM touches the etched SiO 2 surface at the bottom and follows the morphology like a membrane ( Figure 1). While this leaves the SLM heavily strained at the edge of the hole, it allows an experimental comparison of the effect of two differently treated substrates (SiO 2 and RIE SiO 2 ) on the same MoS 2 flake. After identification of SLM areas, a Ti/Au (5 nm/15 nm) contact was patterned on the MoS 2 flake by photolithography. We used the photoresist ARP-5350 (Allresist GmbH, Strausberg, Germany) with the developer AR 300-35 (Allresist GmbH, Strausberg, Germany). Acetone was used for the lift-off and finally the samples were boiled in isopropyl alcohol. The contact served two purposes: on the one hand, the sample was electrically connected to ground potential; on the other hand, the gold surface was used for calibrating the work function of the AFM tip during KPFM measurements. The contacted SLM sample was introduced into an ultra-high vacuum system with a base pressure of about 2·10 −10 mbar. Non-contact AFM measurements were performed using a RHK UHV 7500 system with the PLL Pro 2 controller. Simultaneously with NC-AFM, frequency-modulated KPFM measurements were conducted to probe the local contact potential difference (CPD) between the tip and the surface [35][36][37][38][39][40][41]. As force sensors, highly conductive Si cantilevers with a typical resonance frequency of f = 300 kHz (Vistaprobe T300) were utilized. During KPFM measurements an AC voltage is applied to the tip (U AC = 1 V and f AC = 1 kHz) and the built-in lock-in amplifier of the PLL Pro 2 is used to apply a DC voltage which minimizes the resulting electrostatic forces between tip and sample surface. This DC voltage corresponds to the local CPD.
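The nulling principle behind this measurement is easy to illustrate: the electrostatic force contains a component at the modulation frequency that is proportional to (V DC − CPD), and it vanishes when the applied DC bias equals the local CPD. A toy sketch, with all values invented for illustration and prefactors such as dC/dz omitted:

```python
# Toy illustration of the KPFM nulling principle: the force component at the
# AC modulation frequency scales as (V_dc - CPD)*V_ac, so sweeping V_dc and
# nulling that component recovers the local CPD.
import numpy as np

CPD_TRUE = 0.62     # V, pretend local contact potential difference
V_AC = 1.0          # V, modulation amplitude (as in the experiment)

def force_omega_component(v_dc):
    """Amplitude of the electrostatic force at the modulation frequency;
    proportional to (V_dc - CPD), up to a dC/dz prefactor."""
    return (v_dc - CPD_TRUE) * V_AC

v_dc_grid = np.linspace(-2.0, 2.0, 4001)
best = v_dc_grid[np.argmin(np.abs(force_omega_component(v_dc_grid)))]
print(f"nulling bias = {best:.3f} V (equals the local CPD)")
```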
Results and Discussion Raman spectroscopy characterization In Figure 2 we present an optical image of a sample prepared by the procedure described above together with additional Raman spectroscopy data. The SLM flake can be identified in the optical image in Figure 2a by its contrast, which is a transparent green tone. While the majority of the SLM flake is located on pristine SiO 2 , a small part of the flake lies at the bottom of a hole which was patterned by RIE. To unambiguously identify SLM we used Raman spectroscopy and compared the results to data from the literature [34]. In Figure 2b the Raman spectra of SLM on SiO 2 and on SiO 2 (RIE) as well as of FLM on SiO 2 are shown. The two prominent peaks, the E 2g and A 1g peaks, correspond to the opposite vibration of the two S atoms with respect to the Mo atom and to the out-of-plane vibration of only the S atoms in opposite directions, respectively [42,43]. For SLM on SiO 2 the Raman shifts obtained for the E 2g , ν = 386.1 cm −1 , and A 1g , ν = 403.0 cm −1 , are consistent with values reported by other groups. For higher layer numbers the E 2g has been reported to shift to lower wave numbers while the A 1g shifts to larger wave numbers, which is again in good agreement with our data. However, the SLM on RIE SiO 2 shows a different behaviour compared to SLM on pristine SiO 2 . The E 2g is slightly downshifted to ν = 385.2 cm −1 and the A 1g shows a minor shift to ν = 403.4 cm −1 . Shifts of the E 2g and A 1g modes of SLM can have multiple reasons. Uniaxial tensile strain has been observed to cause a splitting of the E 2g mode and a shift to lower wave numbers for the resulting E − and E + modes by 4.5 and 1 cm −1 /%, respectively [44,45]. While the A 1g mode shows no distinct sensitivity to uniaxial strain, a charge carrier dependency has been observed [46]: electron doping of 1.8·10 13 cm −2 leads to a linewidth broadening of 6 cm −1 and a decrease of the phonon frequency by 4 cm −1 . As our data show a shift in both Raman-active modes, we suggest that the RIE SiO 2 surface causes a slight strain and possibly local doping by charge transfer in the MoS 2 flake. The Raman mapping shown in Figure 2c corresponds to the evaluation of point spectra performed in the green box marked in Figure 2a. Plotted is the difference of the E 2g and A 1g mode positions. While the difference between SLM and FLM on SiO 2 is significant, with Δ = 8.2 cm −1 , the difference between SLM on SiO 2 and on RIE SiO 2 is relatively small, with Δ = 1.3 cm −1 . As can be seen in the Raman mapping, the difference in the SLM induced by the substrate is constant over the whole flake and not just present in single point measurements. In-situ KPFM on single layers of MoS 2 For the NC-AFM and KPFM measurements the sample was introduced into the UHV system. Before data collection the sample was heated in situ to 200 °C for 30 min to remove any adsorbates from ambience. In Figure 3a and Figure 3c the NC-AFM topography and the corresponding surface potential map are shown, respectively. On the right side the Ti/Au contact can be seen, which is about 20 nm high and shows a distinct contrast in the surface potential in comparison to the MoS 2 layers. In Figure 3d a surface potential histogram of SLM, FLM and the gold surface of the Ti/Au contact is given. We find a surface potential of 4.27 V for SLM, 4.37 V for FLM and 4.89 V for gold. The surface potential itself is always a relative value based on the local CPD between the AFM tip and the sample surface.
To obtain quantitative work function values, we calibrated the tip on the gold surface by using the known work function of gold, Φ Au = 5.10 eV [47,48]. With the relation Φ = Φ Au − e·(CPD Au − CPD MoS2 ), the work function of SLM, Φ SLM = 4.49 ± 0.03 eV, and of FLM, Φ FLM = 4.59 ± 0.03 eV, can be assigned. The given error bar reflects the experimental error of our system. Not included in this error are band bending, which occurs when performing KPFM measurements on a semiconductor surface, and a possible misestimation of the work function of the patterned gold contact. Besides graphite [49], gold is a common material for calibrating the work function of the AFM tip [48], but while the work function Φ Au = 5.10 eV is often used, other work function values in the range from 4.74 eV to 5.54 eV have been reported as well [50,51]. Surface roughness, homogeneity and humidity can have an effect on the measured work function of metal surfaces, as Guo et al. recently demonstrated [52]. The presented data were measured in situ after annealing and we are therefore confident that humidity can be neglected. We want to point out that an error in the work function calibration does not affect the work function values of SLM, BLM and FLM with respect to each other. While the surface potential on the Au contact in Figure 3 appears uniform, strong local variations can be observed on the MoS 2 flake. We attribute these features, marked in Figure 3a with green circles, to contaminations due to the patterning process. The height of these contaminations varies between 1 nm and 20 nm. These contaminations have a noticeable effect on the work function of SLM, as Φ SLM can be lowered by up to 0.15 eV. As the work function of these contaminations is clearly different from that of the Au contact, the contaminations are most likely resist residues which have not been completely removed. Such contaminations may act as scattering centers or charge puddles, which are likely to be detrimental to the performance of SLM devices [53]. For graphene and MoS 2 it has been shown that adsorbates due to ambient exposure can have a strong impact on the work function of these materials, for example by inducing an additional charge transfer or even redox reactions with water [29,54].
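The calibration amounts to a single offset correction. A minimal sketch using the surface potential values quoted above, with Φ Au = 5.10 eV as the assumed reference; the small differences from the quoted work functions come from rounding of the CPDs:

```python
# Tip calibration for quantitative KPFM: convert measured contact potential
# differences (CPD, in volts) into work functions via a gold reference.
PHI_AU = 5.10  # eV, assumed reference work function of the Au contact

def work_function(cpd_sample_V, cpd_au_V):
    """Phi_sample = Phi_Au - e*(CPD_Au - CPD_sample), in eV."""
    return PHI_AU - (cpd_au_V - cpd_sample_V)

for name, cpd in [("SLM", 4.27), ("FLM", 4.37)]:
    print(f"{name}: {work_function(cpd, cpd_au_V=4.89):.2f} eV")
# -> SLM: 4.48 eV, FLM: 4.58 eV (cf. 4.49/4.59 eV quoted in the text)
```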
In situ screening length of MoS 2 In the next step, we determine the work function of BLM and the screening length of MoS 2 . For this, the SLM/BLM/FLM section of Figure 3 was measured again in more detail and the work function analyzed by line profiles. Shown in Figure 4a-c are the NC-AFM topography, the work function map and the corresponding line profiles, respectively. The measured height for BLM is 0.92 ± 0.10 nm, which is slightly larger than the interlayer spacing of a bulk MoS 2 crystal [55]. For FLM we obtain two different heights, 2.96 nm (≈5 layers) and 7.89 nm (≈12-13 layers). In the work function map in Figure 4b, three contrasts can be observed: SLM, BLM and FLM. As the work function does not change between the 2.96 nm FLM and the 7.89 nm FLM, we conclude from our data that the substrate influence is fully screened by 2.96 nm, which is in good agreement with previous findings for annealed MoS 2 [29]. Li et al. compared the screening length of pristine MoS 2 flakes on SiO 2 with annealed MoS 2 flakes and found a decrease from approximately 5 nm down to 2.5 nm for annealed MoS 2 . Our measurements here yield a screening length between 1.6 and 2.96 nm, which is much lower than the value for pristine MoS 2 . We therefore conclude that the investigated MoS 2 is not affected by ambient adsorbates. In Figure 4c we used the line profile to quantify the work function of SLM and BLM. The work function of SLM determined this way is the same as that obtained from the histogram analysis in Figure 3, Φ SLM = 4.49 ± 0.03 eV. The work function of BLM is increased with respect to SLM by about 0.05 eV, to Φ BLM = 4.54 ± 0.03 eV. Again, contaminations on BLM appear to decrease the work function, as can be seen in Figure 4b. Substrate effects on the work function of single layer MoS 2 To study the effect of the substrate on the work function of SLM, we compare the work function of SLM on SiO 2 with SLM in the RIE SiO 2 holes in Figure 5. The work function map in Figure 5b shows an increased work function over the etched hole of about ΔΦ = 0.04 eV. This shift is caused by charge transfer from the etched substrate, which leads to an effective doping that has been shown to have a large impact on the optical properties of SLM [56]. The etched SiO 2 substrate has an effect on the surface potential distribution as well. By comparing histogram data of SLM on SiO 2 and RIE SiO 2 (see inset in Figure 5c) we find a surface potential fluctuation decreased by 0.02 eV for SLM on the etched SiO 2 . The potential fluctuation is related to charge impurities, which are detrimental to the performance of 2D devices, and KPFM is an efficient way to probe it [57]. Further, a lower potential fluctuation indicates a higher charge homogeneity. Charge inhomogeneity has been shown to play a crucial role in the oxidative reactivity of graphene [58]. At the edge of the etched hole, where SLM is heavily bent, a strong increase in the work function by another ΔΦ = 0.05 eV compared to SLM on the RIE SiO 2 substrate can be observed, caused by strain. It has been shown by Castellanos-Gomez et al. that heavy strain in SLM has a large impact on the band gap of SLM [59]. However, KPFM only measures the contact potential difference (from which we derive the work function). For insulating materials there is no straightforward relation between the contact potential difference and the band gap. Therefore, our results are not directly comparable. The plot in Figure 5c sums up our findings with respect to the work function of MoS 2 . The work function of FLM in ambient conditions has been determined previously by amplitude-modulated KPFM. The reported value of Φ = 5.25 eV [28] is significantly higher than the values found here. This difference is most likely due to the contaminations which are absent in our measurements. Our data should instead be compared to values determined by other means like ultraviolet photoelectron spectroscopy [60][61][62][63]. The excellent agreement again underlines the importance of UHV measurements if intrinsic properties are to be probed. Conclusion In conclusion, we have performed the first in situ Kelvin probe force microscopy measurements on single layers of MoS 2 on a SiO 2 substrate. We find work functions of Φ SLM = 4.49 eV, Φ BLM = 4.54 eV and Φ FLM = 4.59 eV for SLM, BLM and FLM, respectively. We observe a screening length between 1.6 and 3.5 nm, which indicates a clean MoS 2 flake. We have further investigated the effect of the substrate on the work function of MoS 2 by partly etching the SiO 2 substrate. Raman spectroscopy measurements suggest substrate effects such as strain, which increase the work function of SLM by ΔΦ = 0.04 eV on etched SiO 2 .
The next step is to investigate completely free-standing MoS2 flakes without a substrate in order to probe the intrinsic charge homogeneity and work function of SLM.
Exogenous Fluorescent Agents for the Determination of Glomerular Filtration Rate Glomerular filtration rate (GFR) is now widely accepted as the best indicator of renal function in the state of health and illness.1,2 Current clinical guidelines advocate its use in the staging of chronic kidney disease as well as in assessing the risk of kidney failure under acute clinical, physiological, and pathological conditions.3-6 Acute renal failure (ARF) is a major cause of complications in the post-surgical and post-intervention vascular and cardiac procedure patient populations. ARF is also a major public health issue because it may lead to chronic renal failure. Real-time, continuous monitoring of GFR in patients at the bedside is particularly important in the case of critically ill or injured patients, and those undergoing organ transplantation because most of these patients face the risk of multiple organ failure (MOF) resulting in death.7-10 MOF is a sequential failing of lung, liver, and kidneys and is incited by one or more severe causes such as acute lung injury (ALI), adult respiratory distress syndrome (ARDS), hypermetabolism, hypotension, persistent inflammation, or sepsis. The transition from early stages of trauma to clinical MOF is marked by the extent of liver and renal failure and a change in mortality risk from about 30% to about 50%.10 Accurate determination of GFR is also necessary for monitoring patients undergoing cancer chemotherapy with nephrotoxic anticancer drugs,11 or those at risk for contrast media induced nephropathy (CIN).12 Finally, GFR measurement is also useful for patients with chronic illness such as diabetes, hypertension, obesity, hyperthyroidism, cystic fibrosis, etc. who are at risk for renal impairment.13-15
Current GFR markers In order to assess the status and to follow the progress of renal disease, there is a need to develop a simple, accurate, and continuous method for the determination of renal function by non-invasive procedures. In current clinical practice, renal function is most often estimated from creatinine clearance.16-18 The results from this analysis are frequently misleading since the value is affected by age, state of hydration, renal perfusion, muscle mass, dietary intake, and many other anthropometric and clinical variables. Theoretical methods for estimating GFR (eGFR)19-21 from body cell mass and plasma creatinine concentration have also been developed, but these methods also rely on the above anthropometric variables. Moreover, creatinine is partially cleared by tubular secretion along with glomerular filtration, and, as Diskin17 recently remarked, "Creatinine clearance is not and has never been synonymous with GFR, and all of the regression analysis will not make it so because the serum creatinine depends upon many factors other than filtration." More recently, endogenous cystatin-C has been suggested as an improvement over creatinine,15,20 but this marker also suffers from the same limitations as creatinine, and thus it remains questionable whether it is really an improvement. Exogenous markers such as inulin, radiolabeled compounds, and iodinated contrast agents have also been employed for the determination of GFR.23-29 Unfortunately, all of these markers suffer from various undesirable properties including the use of radioactivity, ionizing radiation, the laborious ex vivo handling of blood and urine samples, and the need for HPLC analysis, which render them unsuitable for continuous monitoring of renal function in the clinical setting. Furthermore, inulin as well as other polysaccharides are polydisperse polymers, and the availability of these substances in reliable, uniform batches is a serious limiting factor for their use as GFR markers. Currently, iothalamate and iohexol are the accepted standards for the assessment of GFR. However, iothalamate requires the collection of blood samples and HPLC analysis, which is not well suited for continuous monitoring. Continuous monitoring of GFR has been accomplished via radiometric12 and magnetic resonance imaging30 techniques, but these are not suitable at the bedside. Hence, the availability of an exogenous marker for the measurement of GFR under specific yet changing circumstances would represent a substantial improvement over any currently available or widely practiced method. Moreover, a method that depends solely on the renal elimination of an exogenous chemical entity would provide an absolute and continuous pharmacokinetic measurement requiring less subjective interpretation based upon age, muscle mass, blood pressure, etc. Development of fluorescent tracer agents Hydrophilic, renally cleared fluorescent tracers have therefore been pursued for the optical measurement of GFR.32-37 The key requirements for an ideal fluorescent tracer agent are: (a) it must be excited at and emit in the visible region (λ ≥ ~425 nm); (b) it must be highly hydrophilic; (c) it must be either neutral or anionic; (d) it must have very low or no plasma protein binding; (e) it must not be metabolized in vivo; and (f) it must clear exclusively via glomerular filtration, as demonstrated by equality of plasma clearance with and without a tubular secretion inhibitor such as probenecid.38
The selection of the lead clinical candidate(s) may be based on secondary considerations such as the ease of synthesis, lack of toxicity, and stability. The secondary screening criteria should further take into account the tissue optics properties and the degree of extracellular distribution of the fluorescent tracers. Volume of distribution is an important parameter in the assessment of the hydration state of the patient, whereas the absorption/emission properties provide essential information for the design of the probe. This chapter focuses on the most recent developments in luminescent tracers for GFR measurement. There are basically two principal pathways for the design of fluorescent tracers for GFR determination. The first method involves enhancing the fluorescence of known renal agents that are intrinsically poor emitters, such as lanthanide metal complexes; the second involves transforming highly fluorescent dyes (which are intrinsically lipophilic) into hydrophilic, anionic species to force them to clear via the kidneys.32 In the first approach, several europium-DTPA complexes endowed with various molecular 'antennas' to induce ligand-to-metal fluorescence resonance energy transfer (FRET) were prepared and tested.32 Some of these metal complexes (e.g., compound 7) exhibited high (ca. 2000-fold) enhancement of europium fluorescence and underwent clearance exclusively through the kidneys, but whether they cleared exclusively via glomerular filtration remains uncertain. Moreover, the excitation maxima of these complexes remained in the violet or UV-A region.39-41 Pyrazine derivatives 8 containing electron donating groups (EDG) in the 2,5 positions and electron withdrawing groups (EWG) in the 3,6 positions, such as compounds 9-11, are shown to absorb and emit in the visible region with a large Stokes shift on the order of ~100 nm and with fluorescence quantum yields of about 0.4.39,40 For example, conversion of the carboxyl group in 8 to the secondary amide derivatives 9 produces a bathochromic (red) shift of about 40 nm, and alkylation of the amino group in 9 produces a further red shift of about 40 nm. Thus, the pyrazine nucleus offers considerable opportunity to 'tune' the electronic properties by even simple modifications. Furthermore, the relatively small size of pyrazine renders it an ideal scaffold for introducing hydrophilic substituents to bring about renal clearance. Based on the structure and properties of known GFR tracer agents, and on the primary and secondary considerations stated earlier, the set of GFR tracer agents can be divided into four categories as outlined in Table 1. The upper and lower quadrants address the tissue optics differences, and the left and right quadrants address volume of distribution (V d ) differences. V d is important not only in affecting clearance rates, but also in the assessment of the hydration state of a patient. Tissue optics parameters are important in instrument design in that the longer the wavelength of light, the deeper the penetration into the tissue. Recently, low and high molecular weight hydrophilic pyrazine derivatives 12-15 (Fig. 4) bearing neutral and anionic side chains such as alcohols, carboxylic acids, and polyethylene glycol (PEG) units were reported.41 The structures of the candidates from each of the four quadrants above are shown in Fig. 2.
Unlike inulin, dextran, and other polymers, compounds 13 and 15 are monodisperse. The photophysical and biological properties of these compounds are given in Table 2. Both the plasma protein binding and the urinary clearance properties are superior to those of iothalamate, which is a currently used 'gold standard' for clinical GFR measurement. Furthermore, all four compounds displayed insignificant biodegradation. An in vivo fluorescence image of the renal clearance of compound 13 is shown in Fig. 5. The panel contains images of three mice. The mouse in the middle was administered 300 μL of a 2 mM solution of compound 13 in phosphate-buffered saline (PBS). The other mice served as controls and received only PBS. Compound 13 distributed throughout the body and then concentrated in one spot in the abdomen. Surgery after the 60 minute time point verified that this highly fluorescent spot in the abdomen was the bladder. Thus, the observation of fluorescence appearing only at the bladder is a visual demonstration of the high percentage of the injected dose recovered in urine given in Table 2. Real-time monitoring of renal clearance In vivo noninvasive real-time monitoring of renal clearance, with eventual translation to commercial development, has been demonstrated in the rodent model. A schematic of an apparatus is shown in Fig. 6. A 445 nm solid-state laser was directed into one leg of a silica bifurcated fiber optic bundle, with the common end of this bifurcated bundle placed approximately 2 mm from the rat ear. The second leg of the bifurcated fiber optic bundle was fitted with a collimating beam probe. A long-pass filter and a narrow-band interference filter were placed in front of a photosensor module. A chopper was placed after the laser and before the launch into the bifurcated cable. The output of the photosensor was connected to a lock-in amplifier. The lock-in output was digitized and the digitized data were acquired by computer using data acquisition software. Anesthetized Sprague-Dawley rats of weight ~400 g were used. A volume of 1 mL of a 0.4 mg/mL solution of compound 12 in PBS was administered to a rat with normally functioning kidneys and to a rat with a recent bilateral nephrectomy. The continuously monitored fluorescent signal is shown in Figure 5. An increase in fluorescence at the ear is immediately seen in both rats. In the normal rat, the fluorescence decreases back to baseline as the kidneys remove compound 12 from the body. In the nephrectomized rat, the fluorescence remains elevated with time, as the body is unable to remove the compound with the kidneys not functioning.
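A minimal sketch of how such a trace can be turned into a clearance readout, assuming a simple mono-exponential elimination model; the synthetic data and parameters are illustrative and do not represent the actual instrument's algorithm:

```python
# Fit a mono-exponential elimination model, F(t) = F0*exp(-k*t) + baseline,
# to a (synthetic) transcutaneous fluorescence trace to extract a renal
# clearance rate constant k.
import numpy as np
from scipy.optimize import curve_fit

def model(t, f0, k, baseline):
    return f0 * np.exp(-k * t) + baseline

t = np.linspace(0.0, 120.0, 200)                     # minutes
truth = model(t, f0=1.0, k=0.05, baseline=0.1)
signal = truth + np.random.default_rng(0).normal(0.0, 0.02, t.size)

(f0, k, base), _ = curve_fit(model, t, signal, p0=(1.0, 0.1, 0.0))
print(f"clearance rate constant k = {k:.3f} 1/min "
      f"(half-life {np.log(2) / k:.1f} min)")
# A functioning kidney gives k > 0; the nephrectomized control would give
# k ~ 0, i.e., a persistently elevated signal.
```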
Conclusions On the basis of the fluorescence properties, the plasma protein binding data, the injected dose recovered in urine, the plasma clearance data, and the renal tubular secretion studies, the pyrazine derivatives 12-15 are promising candidates as exogenous fluorescent tracer agents for the determination of GFR under both chronic and acute settings. In the rat model, these compounds display superior properties compared to iothalamate, which is currently an accepted standard for the measurement of GFR. A prototype instrument for clinical trials has been developed based on the apparatus in Figure 4. A clinical trial with one of the pyrazine compounds is currently being planned. The clinical trial will test the safety and efficacy of the tracer agent, as well as refine the instrumentation. Optimization parameters for the instrument include the incident light power and power density, the light delivery and collection fiber optics, the light source and detector, the placement of the detector on the body, and the data acquisition and analysis algorithm. The addition of a fluorescent GFR tracer agent would be a major addition to the armamentarium of fluorescent compounds in clinical use today. Indocyanine green (ICG) is FDA-approved for use in angiography, cardiac output, and liver function.42 Currently, there are ongoing clinical trials for lymph node mapping and melanoma imaging using ICG.43 Fluorescein is the only other FDA-approved fluorescent agent, used for angiography.42 A near-infrared dye for attachment to targeting vectors for optical imaging has been studied for safety and pharmacology, and may soon be ready for human clinical trials as well.44 Table 2. Physicochemical and Pharmacokinetic Properties of Pyrazine Tracers.
New Insights about CuO Nanoparticles from Inelastic Neutron Scattering Inelastic Neutron Scattering (INS) spectroscopy has provided a unique insight into the magnetodynamics of nanoscale copper (II) oxide (CuO). We present evidence for the propagation of magnons in the directions of the ordering vectors of both the commensurate and the helically modulated incommensurate antiferromagnetic phases of CuO. The temperature dependence of the magnon spin-wave intensity (in the accessible energy range of the experiment) conforms to the Bose population of states at low temperatures (T ≤ 100 K), as expected for bosons; the intensity then increases significantly, reaching a maximum at about 225 K (close to T N ), and decreases at higher temperatures. These results can be related to a gradual softening of the magnon spin-wave dispersion curves and a decreasing spin gap as the temperature approaches T N on heating, and to the slow dissipation of short-range dynamic spin correlations at higher temperatures. However, the intensity of the magnon signal was found to be particle-size dependent, increasing with decreasing particle size. This "reverse size effect" is believed to be related either to the creation of single-domain particles at the nanoscale or to a "superferromagnetism" effect and the formation of collective particle states. Introduction Magnetism exhibited by nanoscale materials (so-called "nanomagnetism") is not a fully understood phenomenon, and despite evidence to suggest that ferromagnetism is a universal feature of nanoscale metal oxides [1], research in this field is still in its infancy. The magnetic behavior of nanoparticles is well known to deviate significantly from that of bulk materials due to pronounced surface effects and/or the creation of single-domain particles [2][3][4][5][6]. Oxygen vacancies [7,8], surface tension [9,10], uncompensated surface spins and the exchange interactions between these spins and those within the core of the particle [3,11] can all significantly alter the magnetic ordering and transition temperatures of magnetic materials. Despite the known toxicity of copper (II) oxide (cupric oxide, CuO) nanoparticles [12,13], they are finding application in numerous commercially viable arenas due to their demonstrated efficacy as catalysts for complex chemical reactions [14][15][16] and as biological mimetics for the sensing of small molecules [17,18]. However, the magnetic properties of CuO nanoparticles are currently underexploited, certainly in comparison with other magnetic transition metal oxides such as the iron oxides Fe 3 O 4 , α-Fe 2 O 3 , and γ-Fe 2 O 3 . A possible explanation for this is that the often complex magnetic behavior of the copper oxides is perhaps one of the least understood, particularly at the nanoscale. However, recent evidence of multiferroicity and electromagnons in CuO has provided impetus for renewed interest in this simple oxide [19][20][21]. Moreover, while most simple monoxides containing 3d transition metals adopt the cubic rock-salt structure, CuO adopts a monoclinic (space group C2/c) structure [22][23][24]. In the structure, Cu is surrounded by four O atoms in a square planar configuration, which form ribbons of edge-sharing chains running along the [101] and [101̄] directions [22][23][24]. This unique structure gives rise to the unusual magnetic properties of CuO [5,25].
The majority of magnetic oxides exist in a completely disordered paramagnetic state above their individual Néel temperatures (T N ) and convert completely to 3D antiferromagnetic states below these temperatures. However, bulk CuO behaves very differently, undergoing two magnetic phase transitions. Above the first Néel temperature at approximately (ca.) 230 K (T N1 ), antiferromagnetic dynamical spin correlations along the crystallographic [101] direction are still maintained up to high temperatures, but there is no long-range spin order and the material is in a paramagnetic state. Below T N1 , a 3D incommensurate state is created in which the spins are ordered in a non-collinear helical arrangement. The second magnetic transition at ca. 213 K (T N2 ) induces the spins to adopt a collinear configuration, and a fully commensurate 3D antiferromagnetic structure results below this temperature [26,27]. Importantly, T N is strongly suppressed in nanoparticulate CuO, being ca. 30 K for 5 nm particles and as low as 13 K for particles 2-3 nm in size [22]. This is believed to be a consequence of the dependency of T N on the strength and number of superexchange interactions, which are drastically reduced/weakened at the nanoscale relative to the bulk oxide; this is because of a significant increase in the number of uncompensated surface spins and a reduction in the number of exchange pathways (via oxygen orbitals) due to an increase in the number of under-coordinated surface Cu 2+ ions. Furthermore, the documented enhancement in magnetization with decreasing CuO particle size has also been attributed to the increasing number of uncompensated surface spins contributing to the net moment [7,27]. Materials and Methods The CuO nanoparticles employed in this study were prepared by a solvent-deficient method [28,29]. In a typical synthesis, 53 g of Cu(NO 3 ) 2 ·2.5H 2 O and 38 g of NH 4 HCO 3 were ground together in a mortar and pestle for approximately 1 min. Distilled H 2 O (10 mL) was then added to the mixture. The resulting precursor was rinsed with 0.5 L of distilled H 2 O before being calcined, in air, at 523 K (15 nm particles) for 1 h. Particles 8 and 25 nm in size were calcined at 523 K and 623 K, respectively. The phase purities of all samples were confirmed by powder X-ray diffraction (PXRD) analyses performed with a PANalytical X'Pert Pro diffractometer (Malvern Panalytical Inc., Westborough, MA, USA) operating with Cu-K α1 radiation set at 45 kV and 40 mA (λ = 1.540598 Å). Data were acquired over the 2θ range of 10-90°. Only monoclinic CuO was observed. The average size of the particles was determined from the powder X-ray diffraction data by application of the Scherrer method [30]. The lattice parameters determined from Rietveld refinements of 15 nm CuO at 295 K (a = 4.6823 Å; b = 3.4242 Å; c = 5.1294 Å; β = 99.457°) are in good agreement with the literature data collected at room temperature for bulk and nanoscale CuO [3,[22][23][24][25]. It should be noted that a range of values has been reported for the lattice parameters, which may reflect variations in the oxygen concentration of the samples [25]. This is important as oxygen vacancies have been shown to affect the magnetic properties of oxides containing 3d transition metals [8,9,31]. Variable temperature (7-400 K) Inelastic Neutron Scattering (INS) data were collected with the fine-energy high-resolution direct-geometry chopper spectrometer SEQUOIA situated at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory (ORNL) (Oak Ridge, TN, USA) [32,33].
Variable temperature (7-400 K) inelastic neutron scattering (INS) data were collected with the fine energy high resolution direct geometry chopper spectrometer SEQUOIA situated at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory (ORNL) (Oak Ridge, TN, USA) [32,33]. INS spectra were collected at several incident energies (Ei = 10, 25 and 50 meV) that were selected by means of a Fermi chopper. Scattered neutrons of all energies were identified by position-sensitive detectors covering a wide range of scattering angles (−30° to +60° in the horizontal plane and ±18° in the vertical direction). Background spectra for an empty container were collected and subtracted from the sample data. The raw spectra were transformed from the time-of-flight and instrument coordinate bases to the dynamical structure factor S(Q,E) and finally to S(Q,E)·E/[n(E,T) + 1], which relates directly to the intensity of magnons corrected for the Bose population factor [34,35].

Results and Discussion

Variable temperature spectra for 15 nm CuO nanoparticles are shown in Figures 1 and 2. At 7 K, there is a faint dispersed signal at Q ≈ 0.84 Å−1 that dramatically increases in intensity as the temperature increases to 250 K (Figure 1). This dispersed signal originates from a purely magnetic Bragg peak with the scattering vector Q ≈ (0.5 0 −0.5) [36]. Figure 3 shows the evolution of the intensity of this elastic peak with temperature. The gradual reduction in the intensity of this peak indicates that the 15 nm CuO particles undergo a magnetic phase transition that commences at approximately 150 K and is fully complete by 225 K. This observed decrease in intensity is consistent with the commensurate → incommensurate antiferromagnetic transition at TN2.
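The Bose-factor rescaling applied during the data reduction described above can be sketched as follows; this is a minimal illustration with a placeholder spectrum, not the actual reduction software used at SEQUOIA.

```python
# Hedged sketch of the S(Q,E) -> S(Q,E)*E/[n(E,T)+1] correction,
# where n(E,T) is the Bose population factor.
import numpy as np

K_B = 8.617333e-2  # Boltzmann constant in meV/K

def bose_factor(E_meV, T_K):
    """Bose population factor n(E,T) = 1/(exp(E/kT) - 1)."""
    return 1.0 / np.expm1(E_meV / (K_B * T_K))

def correct(S, E_meV, T_K):
    """Rescale a spectrum so magnon intensities at different T are comparable."""
    return S * E_meV / (bose_factor(E_meV, T_K) + 1.0)

E = np.linspace(1.0, 10.0, 10)   # energy transfer grid in meV
S = np.ones_like(E)              # placeholder spectrum
print(correct(S, E, 225.0))      # corrected intensities at 225 K
```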
In the low temperature (<TN2) commensurate phase of CuO, the antiferromagnetic spins are orientated parallel to [010] and are ordered along the wave vector Q = (0.5 0 −0.5) relative to the crystallographic basis of the atomic structure [23,24]. Conversion to the incommensurate antiferromagnetic phase that exists between TN1 and TN2 involves a 0.85° rotation of the antiferromagnetic ordering vector to (0.506 0 −0.483) [37,38]. In this incommensurate phase, the antiferromagnetic spins are helically modulated, and the magnetic moments rotate in a plane passing across the b axis and making an angle of approximately 74° with the ordering vector [39,40]. Propagation of magnons along these ordering vectors would be observed in the INS spectra at Q = 0.84 and 0.83 Å−1, respectively. Indeed, it is the excitation of spin precession waves in the direction of the antiferromagnetic ordering vectors that gives rise to the dispersed signal at Q ≈ 0.84 Å−1 in the CuO spectra shown in Figure 1 [25]. Unfortunately, as the directions of the ordering vectors in the commensurate and incommensurate phases are so similar, the expected shift in the Q-position of the magnons that would accompany the TN2 phase transition cannot be resolved in these data.
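As a consistency check of ours (not part of the original paper), the magnitude of the scattering vector for these ordering vectors can be computed from the monoclinic cell refined above; both land near the observed 0.84 and 0.83 Å−1.

```python
# |Q| for (h 0 l) ordering vectors of the monoclinic cell of 15 nm CuO
# (a, b, c, beta from the Rietveld refinement quoted above).
import math

a, b, c, beta = 4.6823, 3.4242, 5.1294, math.radians(99.457)

# reciprocal parameters; b* is not needed since k = 0 here
a_s = 2 * math.pi / (a * math.sin(beta))
c_s = 2 * math.pi / (c * math.sin(beta))
cos_beta_s = -math.cos(beta)  # angle between a* and c* is 180 deg - beta

def Q_mag(h, k, l):
    """|Q| in inverse Angstrom for an (h 0 l) vector of a monoclinic cell."""
    assert k == 0, "this sketch covers (h 0 l) vectors only"
    return math.sqrt((h * a_s) ** 2 + (l * c_s) ** 2
                     + 2 * h * l * a_s * c_s * cos_beta_s)

print(Q_mag(0.5, 0, -0.5))      # ~0.84, commensurate vector
print(Q_mag(0.506, 0, -0.483))  # ~0.83, incommensurate vector
```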
The degree of dispersion of the magnon signal is related to the strength of the exchange interactions between the magnetic spins [41]. The signals in the INS spectra of CuO show very large dispersion (an almost vertical line at Q = 0.84 Å−1 in Figure 1), which is consistent with the strong antiferromagnetic superexchange interactions in this direction (JAFM = 67-80 meV) that are mediated by the oxygen orbitals of the Cu-O-Cu bridges [42][43][44][45]. As magnons are bosons, the observed increase in spin-wave intensity with increasing temperature up to about 100 K (Figure 1 and Figures S1-S3) is a consequence of the number of magnons active within the CuO lattice being directly proportional to the Bose factor [41]. At T > 100 K, the intensity increases significantly, reaches a maximum at about 225 K (close to TN1), and decreases at higher temperatures. The magnon excitations extend up to rather high energies (around 65-80 meV according to Ref. [39]); therefore, our INS spectra show only their low energy part. The observed increase in magnon intensity at T > 100 K can be related to a gradual softening of the magnon dispersion curves and a decrease in the spin gap as the temperature approaches TN on heating (shifting the magnons to lower energies), while at higher temperatures the short-range dynamic spin correlations dissipate slowly. Between 200 and 225 K, the CuO particles undergo a second magnetic phase transition (Figures 1 and 3), and this third, paramagnetic phase is retained up to at least 400 K [36]. Our data also show a weak broad magnon signal persisting at Q = 0.84 Å−1 in the spectra recorded at 300 K that becomes weaker with increasing temperature (Figures 2 and 3).

INS spectra for CuO nanoparticles of different sizes are shown in Figure 4 (T = 183 K) and Figure 5 (T = 223 K). Consistent with the 7 K spectra for the 15 nm particles (Figure 1), magnons are very weak and almost invisible in the 7 K spectra for the 8 and 25 nm particles (see Figure S4). The intensity of the dispersed magnon signal at Q ≈ 0.84 Å−1 in the spectra recorded at 183 and 223 K is particle-size dependent for both the commensurate (Figure 4) and incommensurate (Figure 5) antiferromagnetic phases (for the larger particles) and the paramagnetic phase (for the 8 nm particles), and increases in intensity with decreasing particle size. This "reverse finite size effect" has also been observed for magnons within antiferromagnetic α-Fe2O3 (hematite) [47]. The origin of this effect is not clear, but we postulate that it could be a consequence of the creation of single-domain particles at the nanoscale that facilitates the propagation of magnons through the magnetic lattice. Alternatively, inter-particle exchange interactions mediated through the increasing number of uncompensated surface spins inherent at the nanoscale ("superferromagnetism effect") [48] could potentially enhance magnon propagation. This would be a result of the particles forming collective states with aligned magnetic moments that allow for collective magnon propagation [49] through multiple particles; in this effect, magnons are propagated through the inter-particle interface. Finally, it must be acknowledged that the role of lattice vacancies and their effect on the observations of this study are not fully known. Small particle sizes and lattice vacancies, for example, have been shown to affect the magnetic properties in both bulk and nanoscale CuO [7,27].
Figure 5. INS spectra for CuO nanoparticles of different sizes. All spectra were collected with Ei = 50 meV and at T = 223 K. At this temperature, the larger particles are in the incommensurate phase, while the small 8 nm particles are in the paramagnetic state (TN = 30 K and 50 K for particles of size 5 and 10 nm, respectively [26]). The spectra are plotted with the same intensity scale (right).

Conclusions

Newly measured INS spectra of CuO nanoparticles provided important new insights into the magnetic excitations of this material. The INS spectra revealed evidence of magnon propagation along the ordering vectors within the commensurate (T < TN2, Q = (0.5 0 −0.5)) and incommensurate (TN2 < T < TN1, Q = (0.506 0 −0.483)) antiferromagnetic phases of CuO. At low temperatures (T < 100 K), the intensity of the magnon signals increases with increasing temperature in accordance with the theory of the Bose population of states. Dynamic magnetic correlations are also clearly visible in the INS spectra of the particles in the paramagnetic state (at T > TN). The intensity of the magnon signals in the INS spectra is found to be particle-size dependent and increases with decreasing particle size. The origin of this "reverse size effect" could be related to the formation of collective states in which the magnetic moments of closely positioned particles align and allow for the propagation of collective magnetic excitations. Future studies of interest include the structure and magnetic properties of nanoparticulate CuO under hydrostatic pressure, to further elucidate the nature of its magnetic state (e.g., [50][51][52]).
5,334
2019-02-26T00:00:00.000
[ "Physics", "Materials Science" ]
Novel mutation in exon 11 of PRKCG (SCA14): A case report

Introduction: PRKCG mutations have been implicated in the pathogenesis of spinocerebellar ataxia type 14 (SCA14), a rare autosomal dominant disease marked by cerebellar degeneration, dysarthria, and nystagmus. To date, no patient with the c.1232G>C mutation has been reported worldwide. Case description: We report a case of a 30-year-old Chinese man with episodic dystaxia, a speech disorder, and cognitive impairment; his father, however, exhibited only a speech disorder despite carrying the same mutation. Whole-exome sequencing revealed a heterozygous c.1232G>C (p.G411A) variant of PRKCG. Conclusion: This case extends the genotype and phenotype of SCA14 and emphasizes the importance of gene sequencing in patients with spinocerebellar ataxia.

Introduction

Spinocerebellar ataxia type 14 (SCA14) [OMIM: 605361] (Yamashita et al., 2000; Brkanac et al., 2002; Rossi et al., 2014) is an autosomal dominant disorder characterized by progressive cerebellar degeneration, dysarthria, and nystagmus. Symptoms such as axial myoclonus (Yamashita et al., 2000), cognitive impairment (Wedding et al., 2013; Bolton and Lacy, 2019), tremors (Koht et al., 2012), and impaired sensibility (Klebe et al., 2005; Koht et al., 2012) may also be observed. Furthermore, Parkinson's disease (Sailer et al., 2012; Chen et al., 2022), which is characterized by muscle rigidity and tremors, has also been reported in some family pedigrees. Patients with SCA14 may exhibit further ataxic conditions such as dysphagia (Ueda et al., 2013). SCA14 accounts for 1% to 4% of all autosomal dominant spinocerebellar ataxias (Chelban et al., 2018). It was first reported in a Japanese family in 2000 (Yamashita et al., 2000) and subsequently in a four-generation American family of English and Dutch origin who displayed pure cerebellar ataxia (Brkanac et al., 2002). SCA14 has also been reported in various countries such as Australia (Kang et al., 2019), Norway (Koht et al., 2012), Germany (Ganos et al., 2014), Japan (Ueda et al., 2013), and China (Chen et al., 2022). There is no clear correlation between clinical manifestations and ethnicity, and the age of onset ranges from childhood to 60 years. Diagnosis is mainly based on clinical manifestations, physical examination, and laboratory tests, and genetic testing is required to confirm the diagnosis. SCA14 is caused by PRKCG variants encoding protein kinase C γ (PKCγ) (Yabe et al., 2003). Although missense and deletion mutations have been found in PRKCG (Chelban et al., 2018), the specific molecular mechanism underlying pathogenesis remains poorly understood (Shimobayashi and Kapfhammer, 2021). Here, we identified a c.1232G>C mutation in PRKCG in a Chinese family, extending the genotype and phenotype of SCA14. Our results emphasize the importance of detecting PRKCG mutations in patients with episodic ataxia.

Case description

Clinical characteristics

The proband (aged 30) had suffered from episodic ataxia for 3 years and was hospitalized for hypokalemia in the Endocrinology Department, Dushu Lake Hospital Affiliated with Soochow University, on 6 July 2022. The patient was experiencing fatigue and limb weakness after heavy sweating in hot weather. He had had a speech disorder and cognitive impairment since birth, and his speech was slow and slurred. A neurological examination revealed that his gait was ataxic and his tandem gait was impaired.
The tendon reflexes were normal, and the Hoffmann and Babinski signs were negative. No axial myoclonus or tremors had been observed during the past 30 years. He required assistance when transferring (activities of daily living score: 95). After admission, a complete examination revealed that the renin-angiotensin-aldosterone system was normal. The potassium level in the 24 h urine sample was 64.58 mmol/L (normal range: 25-125 mmol/L), and the calcium level was 0.59 mmol/L (normal range: 2.5-7.5 mmol/L). Because the patient suffered from headaches, a right-sided parietooccipital tumorectomy had been performed in 2009, and the postoperative pathology suggested an arteriovenous malformation. The postoperative magnetic resonance imaging (MRI) and computerized tomography (CT) scans (Figure 1A) suggested postoperative changes; however, no cerebellar atrophy was observed. The proband had no siblings, and his grandparents and aunt did not exhibit any clinical manifestations, but his father had a speech disorder. The pedigree of the proband is shown in Figure 1B. During hospitalization, the patient's hypokalemia was corrected, and the weakness in his limbs improved after potassium supplementation; however, the episodic ataxia remained without remission. The patient refused further follow-up and treatment, which limited the collection of additional clinical data. The timeline with relevant data is shown in Figure 2.

Genetic results

Peripheral blood in EDTA was collected for whole genomic DNA extraction. The proband and his parents underwent genetic testing, but his grandparents and aunt were not considered for genetic testing because they were asymptomatic. The mutation was verified in samples procured from family members by Sanger sequencing. The verification revealed the presence of a heterozygous mutation in PRKCG, c.1232G>C (p.G411A) (Figures 1C, 3). The proband's father had the same heterozygous mutation. According to the guidelines developed by the American College of Medical Genetics and Genomics (ACMG) for the classification of pathogenic or likely pathogenic variants, the PRKCG mutation was classified as of "uncertain significance" on the basis of the PM2 and BP4 evidence criteria. The PRKCG sequence (NM_002739) was obtained from the National Center for Biotechnology Information (NCBI) (https://www.ncbi.nlm.nih.gov/). The three-dimensional model of the PRKCG protein was obtained from the AlphaFold Protein Structure Database (https://alphafold.ebi.ac.uk/) (Jumper et al., 2021). The three-dimensional models of the wild-type and p.G411A (c.1232G>C) mutant proteins were generated with PyMOL 2.5, a protein three-dimensional structure visualization software.

Discussion and conclusion

The clinical manifestations of SCAs, a distinctive group of cerebellar degenerative diseases, range from typical ataxic disturbance to abnormal ocular movements. Many patients exhibit cerebellar ataxia, but this may not be an early symptom; in one previous case, dystonia was reported as the only symptom present in the early stages (De Michele et al., 2022). In this case, the proband had had a speech disorder and cognitive impairment since birth and had suffered from episodic ataxia over the last 3 years. Clinical manifestations and imaging results revealed no traces of postoperative complications such as hydrocephalus and herniation.
We also consulted a neurosurgeon and a neurologist at Dushu Lake Hospital Affiliated with Soochow University. Moreover, 10 years had passed since the right-sided parietooccipital tumorectomy, and the patient's cerebellar ataxia was not considered to be caused by the craniotomy. Considering the episodic ataxia, mild cognitive impairment, and speech disorder, it was recommended that the patient undergo genetic testing. Whole-exome sequencing showed a mutation in PRKCG, with no mutations in KCNA1 or CACNA1A, which are commonly implicated in episodic ataxia. Based on the genotype and phenotype, it was concluded that the repeated falls were caused by ataxia rather than hypokalemia. Moreover, during previous episodes of ataxia, the patient had not undergone strenuous exercise or exhibited loss of appetite, vomiting, diarrhea, or other causes of low potassium levels. Additionally, no relationship between SCA14 and hypokalemia has been reported. The absence of ataxia and cognitive dysfunction in the patient's father, who had the same genotype, may be explained by the decreased penetrance of the disorder (Yabe et al., 2003).

SCAs are a group of progressive neurogenetic disorders caused by gene mutations. SCA14 is a rare autosomal dominant inherited disease involving PRKCG mutation. PRKCG encodes protein kinase C γ (PKCγ), a member of the serine/threonine kinase family, which plays a vital role in cell growth and signal transduction and is highly expressed in Purkinje cells (Winkler et al., 2020; Shimobayashi and Kapfhammer, 2021). The activity of PKCγ also plays a major role in the dendritic development and synaptic maturation of cerebellar Purkinje cells (Winkler et al., 2020). In vitro studies and studies on transgenic mice (Winkler et al., 2020) support the gain-of-function hypothesis, which indicates that increased PKCγ activity leads to dendritic dysplasia, neuronal death, and aggregation effects.

Figure 3. Variants associated with ataxia that are described in the protein kinase C γ coded by PRKCG. The pathogenic or likely pathogenic variants identified in this study are highlighted in red.

The effect of PKCγ is related to the membrane residence duration (Wong et al., 2018). However, one study has shown that PKCγ can exert toxic effects by inducing endoplasmic reticulum stress (Seki et al., 2007). Shimobayashi and Kapfhammer (2017) reported that SCA14 depends on the activity of PKCγ, and that mutations in different domains act through different pathways. To date, many mutation sites in PRKCG have been reported (Chen et al., 2022; De Michele et al., 2022; Tada et al., 2022; Schmitz-Hübsch et al., 2021) (Figure 4); here, we identified a novel heterozygous mutation in PRKCG, c.1232G>C (p.G411A). PRKCG consists of two domains, namely, the regulatory domain and the catalytic domain. The regulatory domain consists of the C1 and C2 regions. The majority of the mutations discovered in the C1 region are missense variants (De Michele et al., 2022). Variants in the catalytic domain are rare, and the clinical phenotype is much more complex (Chelban et al., 2018) because of the inhibition of calcium ion influx (Adachi et al., 2008), which in turn impairs dendritic processes, as reported in this case. The mutation reported here is located in exon 11, which lies in the catalytic domain. The patient had had a speech disorder and cognitive impairment since birth; the disease progressed, and ataxia first occurred 3 years ago.
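As a brief consistency check on the variant nomenclature (our illustration, not part of the case report), the snippet below confirms that coding position 1232 falls in the middle of codon 411 and that a middle-base G>C converts any glycine codon (GGN) into alanine (GCN), matching p.G411A.

```python
# Consistency check: does c.1232G>C correspond to p.G411A?
# Coordinates follow the standard HGVS coding-sequence convention.

CODON_TABLE = {"GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
               "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala"}

def codon_of(cds_position):
    """Return (codon number, 0-based position within codon) for a CDS coordinate."""
    codon = (cds_position - 1) // 3 + 1
    offset = (cds_position - 1) % 3
    return codon, offset

print(codon_of(1232))  # -> (411, 1): c.1232 is the middle base of codon 411

# Any glycine codon with its middle G replaced by C becomes an alanine codon,
# consistent with the reported p.G411A substitution.
for g in ("GGT", "GGC", "GGA", "GGG"):
    mutated = g[0] + "C" + g[2]
    assert CODON_TABLE[g] == "Gly" and CODON_TABLE[mutated] == "Ala"
```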
His father, by contrast, carried the same mutation but manifested only a speech disorder. Transgenic mice (Ji et al., 2014) with a PKCγ mutation located in the catalytic domain (S361G) have shown pathological changes and motility defects typical of cerebellar ataxia, which is consistent with this case. However, the mutation (p.G411A) reported here requires further functional verification. Many patients diagnosed with SCA14 display progressive cerebellar syndrome, which is rarely associated with severe disabilities (Koht et al., 2012). Dysarthria, abnormal ocular movements, and dysmetria are the most common clinical manifestations, and cognitive impairment is rare and usually mild (De Michele et al., 2022) in these patients. The clinical features of SCA14 reported thus far can be divided into two categories: the classic type and the atypical type. The characteristics of the classic type are as follows: 1) cerebellar ataxia, with most patients presenting with slowly progressive ataxia; 2) dysarthria or a speech disorder; 3) ocular symptoms, abnormal ocular movements, and dysmetria; and 4) cerebellar atrophy on MRI. The characteristics of the atypical type include cognitive impairment, depression, and epilepsy. All affected subjects considered in this article had a speech disorder, and the proband exhibited progressive ataxia and mild cognitive dysfunction, which is consistent with the clinical manifestations of a previously reported case of SCA14 (Wedding et al., 2013). The proband had no cerebellar atrophy, as revealed by a CT scan on 1 October 2009, and he refused further MRI scans for personal reasons. However, if the patient develops cerebellar atrophy in the future, further examination will be required. To summarize, SCA14 should be considered when patients exhibit the following features: slowly progressive cerebellar ataxia, dysarthria or a speech disorder, abnormal ocular movement, extrapyramidal syndrome, and mild to severe atrophy of the cerebellum revealed by brain magnetic resonance imaging. Presently, diagnosis relies on gene detection. The majority of patients with SCA14 have a long disease course; thus, ordinary life is largely unaffected. However, many patients may die from dysphagia or falls. Currently, there is no targeted therapy for SCA14. Supportive treatment includes symptomatic measures, such as rehabilitation, and precautions against aspiration to reduce the risk of asphyxia. We report on this family of Chinese origin with episodic ataxia, a speech disorder, and cognitive impairment, extending the genetic profile and clinical features of SCA14, and emphasize the importance of gene sequencing in patients with episodic ataxia regardless of the absence of classic dysmetria and nystagmus, and independent of family history.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: MedGen UID: 343106; Concept ID: C1854369; Accession SCV003798486.

Ethics statement

Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

HS designed the study; RS and XT drafted and revised the manuscript; RS and XC acquired, analyzed, and interpreted the data. HS and XS critically reviewed the article. All authors have read and approved the final manuscript.
Figure 4. Difference between the wild-type and mutant protein (p.G411A) of PRKCG (regional model). The left model is the mutant, and the right model is the wild-type.
2,959.6
2023-03-08T00:00:00.000
[ "Biology", "Medicine" ]
A hybrid approach to Enhancing Process Scheduling in multiple core Systems

Process scheduling within computer systems, with regard to the CPU, frequently encounters bottlenecks due to over-reliance on single processor scheduling techniques. Multicore processor systems attempt to increase throughput without, however, operating at full optimum capacity, hence the need for a more efficient scheduling approach for use in multiple core systems. This paper conducted a comparative analysis of the efficiency of existing scheduling algorithms with the aid of secondary data. It then employed CPU UserBenchmark analysis to assess the effectiveness of the proposed approach. Quad-core processor systems are most suitable for the proposed approach, which implements two strategies: one where scheduling decisions are handled by a master processor while the other processors execute user code, thus ensuring that all processors are always busy and well utilized.

Introduction

For optimum performance to be achieved while using a computer system, all basic components must synchronize their activities and capabilities. This is, however, not usually the case; one reason is that CPUs still do not perform at their best when dealing with actual processes. The bottleneck is caused by the fact that multicore systems still tend to rely heavily on single processor scheduling techniques, denying them the capability to reach their full potential. This research aimed to propose a framework for optimizing multiple core processor systems by reducing idle time through the implementation of an enhanced process scheduling technique.

1.1 Single core processor systems vs multiple core processor systems

Computer systems of the past relied heavily on single processing core CPUs. What, then, is a single core processor system? Any microprocessor that contains a single core on a chip and is only able to run a single computing thread at any one time is considered a single core processor system (Bindal 2017). Modern computers, however, are generally dominated by multiple core processor systems, which have the ability to run multiple computing threads at any one time. This is made possible by the fact that multiple core processor systems have several independent processors on a chip (Gerrit 1997), which in turn provides the added advantage of increased throughput. Note, though, that the speed-up ratio with N processors is not N; it is less than N.

1.2 Process management and scheduling algorithms

Irrespective of its size, each and every program always has an alternating sequence of CPU instructions and periods of waiting for some form of input/output (John 2001). Single processor systems waste a lot of time waiting for input/output results, and in most cases those CPU cycles are never recovered. This leads to the need to implement some form of scheduling that allows one set of instructions to access the CPU while another is waiting for input/output results, thus almost eliminating idle time. To appreciate how scheduling works we must first understand the concepts of the CPU burst and the input/output burst. Consider a burst as a dash or a short sprint at an athletics meet: an athlete running as fast as they can until they can run no more.
A CPU burst occurs when a processor is executing instructions, and an I/O burst occurs when the processor is fetching data. A CPU burst commences when the processor starts running instructions from cache and ends when the processor needs to start fetching instructions or data from memory. An input/output burst, on the other hand, commences with the process of reading or writing data and ends when the requested data is written/read or when the space to store it runs out (Ahmet 2017). The balancing act of managing these activities with the aim of maximizing the use of resources and minimizing wait and idle time is what we refer to as scheduling. Because idle time is inevitable, the CPU scheduler has to pick another process from the ready queue to run next whenever the CPU is idle.

2. Methods and Materials

The research relied heavily on secondary data to perform a comparative analysis of common scheduling algorithms, but in testing the hybrid scheduling approach a number of tests were run on personal computers with Intel Core i5 and Core i7 processors with the aid of the CPU UserBenchmark tool.

2.1 Comparative analysis of the most common scheduling algorithms

First Come First Serve
Considered the simplest and easiest scheduling algorithm to implement, basically because, based on a FIFO queue, the process that requests the CPU first is the first to get CPU allocation (Abraham 2015). Consider the following table of arrival and burst times for three processes P1, P2 and P3:

Process | Arrival time (ms) | Burst time (ms)
P1 | 0 | 11
P2 | 2 | 7
P3 | 5 | 20
Table 2.1 First come first serve scenario arrival and burst times

Served in arrival order, the waiting times are 0 ms (P1), 9 ms (P2) and 13 ms (P3), giving an average waiting time of (0 + 9 + 13)/3 = 7.33 ms. Had the long process run first (reverse order), the waits would be (27 + 20 + 0)/3 = 15.67 ms. The reverse-order case is poor due to the convoy effect: latter processes are held up behind a long-running first process, as the average waiting time suggests.

Shortest Job First
In this scheduling algorithm, the process with the shortest execution time is the next scheduled for processing. The scheduling itself can be either preemptive (where timeslots of a CPU are created and divided among processes) or non-preemptive (where a process occupies the CPU until it terminates or is pushed to the waiting list). This significantly reduces the average waiting time of the other processes awaiting execution (Abraham 2015).

Process | Execution time (ms)
P1 | 10
P2 | 5
P3 | 8
Table 2.2 Shortest job first scenario execution times

The SJF scheduling algorithm allows P2 to be processed first, then P3, and finally P1.

Priority Scheduling
In this approach the scheduler relies on pre-assigned priorities, which in most cases are ranked as system internal processes, interactive processes (which can be further subdivided) and batch processes, with system internal processes having the highest priority and batch processes the least; jobs with equal priorities are carried out on a round-robin or FCFS basis. This approach, however, runs the risk of a low priority job never getting access to the processor (Deitel 2015).
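A minimal sketch of the waiting-time arithmetic used in the comparisons above follows; the process values come from Table 2.1, and the function name is ours.

```python
# FCFS waiting-time arithmetic for (name, arrival_ms, burst_ms) triples.

def fcfs_waits(procs):
    """Return per-process waiting times under first-come-first-serve."""
    t, waits = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        t = max(t, arrival)          # CPU may sit idle until the process arrives
        waits[name] = t - arrival    # time spent in the ready queue
        t += burst
    return waits

procs = [("P1", 0, 11), ("P2", 2, 7), ("P3", 5, 20)]
w = fcfs_waits(procs)
print(w, sum(w.values()) / len(w))   # {'P1': 0, 'P2': 9, 'P3': 13} 7.33...
```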
Round Robin Scheduling
Round robin is the oldest and most common multitasking algorithm. It employs a time-sharing system that relies on a preemptive scheduling scheme. It defines a small fixed unit of time called a quantum (or time-slice), typically 10-100 milliseconds, and has the following properties:
• Fair: given n processes in the ready queue and time quantum q, each process gets 1/nth of the CPU.
• Live: no process waits more than (n - 1)q time units before receiving a CPU allocation.
Processes are scheduled for execution in FCFS order. Under round-robin scheduling with a quantum of 4 ms, P1 (burst time = 20 ms) is executed first but is preempted after 4 ms, and the new process P2 (burst time = 3 ms) starts its execution, completing before the time quantum expires. The next process, P3 (burst time = 4 ms), then executes, and finally the remaining part of P1 is executed in time slices of 4 ms.

Multilevel Queue Scheduling
This algorithm separates the ready queue into several separate queues. In this method, processes are assigned to a queue based on a specific property of the process, such as the process priority or the size of its memory. However, this is not an independent scheduling algorithm, as it needs to use other types of algorithms in order to schedule the jobs (Maciej 2009).

3 Conclusion

In multiple processor systems, processors may be homogeneous (identical) or heterogeneous (non-identical). The scheduling itself can take two approaches: one where all scheduling decisions are handled by a single processor called the master server (processor) while the other processors execute user code, and a second where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. The proposed enhancement is suitable for quad-core processor systems, where in all cases the operating system views each CPU core as a separate processor. The algorithm takes two cores as slaves, which can combine with or detach from the other cores, by letting the slave cores be "low level" processing units and the other cores "high level" processing units. To schedule processes among high level and low level processing units, a burst time range is calculated by finding the difference between the process with the highest burst time and the process with the lowest burst time. The range is compared to each process's burst time:
i. If the burst time is greater than or equal to the range, that process is assigned to the high level processing units.
ii. Else, if the burst time is less than the range, that process is assigned to the low level processing units.
The performance principles of the proposed algorithm are as follows (a minimal sketch of the classification rule is given after this list):
i. The multiple core system should have four or more cores.
ii. The round robin technique is employed in both high and low level processing units.
iii. If the high level processing units have completed executing processes and the low level processing units still have ongoing processes, then the low level processing units merge with the high level units and they execute those processes as a single processing unit.
iv. If a new process arrives and it is found to be for the low level processing units, the low level units detach from the high level processing units, leaving all executing processes to them, so as to service the new process.
v. But if this new process is for the high level processing units, the cores resume as a single processing unit.
vi. If two or more processes arrive at the same time, a new range is computed and used to schedule those processes among the high level and low level processing units.
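The sketch below implements only the burst-time-range classification rule described above; the paper gives the rule, while the function and variable names are ours.

```python
# Burst-time-range classification for the hybrid approach.
# Range = max burst - min burst over the batch; bursts >= Range go to the
# high-level processing units, the rest to the low-level units.

def classify(batch):
    """Split a batch of (name, burst_ms) between high- and low-level units."""
    bursts = [b for _, b in batch]
    rng = max(bursts) - min(bursts)
    high = [n for n, b in batch if b >= rng]
    low = [n for n, b in batch if b < rng]
    return rng, high, low

# Worked example from the paper: P8, P9 and P10 arrive at the same time.
rng, high, low = classify([("P8", 12), ("P9", 8), ("P10", 5)])
print(rng, high, low)  # 7 ['P8', 'P9'] ['P10'], matching Table 3.4
```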
The Roaming Core concept thus implies that the low level processing units can merge with or detach themselves from the high level processing units, keeping the processor busy and well utilized. An analysis of the performance of the hybrid approach was conducted as follows. Given a set of processes P1, P2, P3, P4, P5 and P6 with known burst times [table of burst time comparison and processor allocation not recoverable from this copy], suppose a new process Px arrives with a burst time of 18 ms. The operating system will compare it with the burst time range; since 18 ms is above the range, Px will be assigned to the high level processing units. Also, suppose that P8, P9 and P10 arrive at the same time with the following burst times:

Process | Burst time (ms)
P8 | 12
P9 | 8
P10 | 5
Table 3.3 Burst times of three different processes under the hybrid scheduling approach

A new burst time range will be computed (since two or more processes have arrived at the same time), which will be (P8 - P10) = 7 ms. The processes will be assigned as follows:

HIGH LEVEL (>= 7 ms) | LOW LEVEL (< 7 ms)
P8, P9 | P10
Table 3.4 Burst time comparison and processor allocation for the three processes

The proposed algorithm can act as a solution to the problem of low priority processes being starved while higher priority processes remain active, to the point where low priority processes might never get service, since it is able to assign higher priority processes to the HIGH level processing units while low priority processes are given to the LOW level processing units. A framework for its implementation is illustrated in Figure 3.1.

Figure 3.1 A framework for the implementation of the hybrid approach to enhancing process scheduling in multiple core systems
2,821
2021-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Synergistic Flame Retardant Effect of Barium Phytate and Intumescent Flame Retardant for Epoxy Resin

Recently, widespread concern has been aroused about environmentally friendly materials. In this article, barium phytate (Pa-Ba) was prepared by the reaction of phytic acid with barium carbonate in deionized water; it was blended with an intumescent flame retardant (IFR) and added to epoxy resin (EP) as a flame retardant. Afterward, the chemical structure and thermal stability of Pa-Ba were characterized by Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis (TGA), respectively. On this basis, the flammability and flame retardancy of the EP composites were researched. It is shown that the EP/14IFR/2Ba composite has the highest limiting oxygen index (LOI) value of 30.7%. Moreover, the peak heat release rate (PHRR) of EP/14IFR/2Ba decreases by 69.13% compared with pure EP. SEM and Raman spectra reveal that the carbonization quality of EP/14IFR/2Ba is better than that of the other composites. The results prove that Pa-Ba can cooperate with IFR to improve the flame retardancy of EP, reducing the required amount of IFR in EP and thus expanding the application range of EP. In conclusion, adding Pa-Ba to IFR is a more environmentally friendly and efficient method compared with others.

Introduction

In recent years, safety and environmental protection requirements have become higher and higher for many materials used in industry and daily life. Epoxy resin (EP) offers significant advantages in mechanical properties, electrical insulation, heat resistance, corrosion resistance and so on, making it one of the most indispensable resins. It is extensively used in coatings, the electronic and electrical industries, handicrafts, and photoelectric industries. However, EP is composed of hydrocarbon chains with high flammability, which produce considerable toxicity during combustion. Therefore, there is an urgent need to improve the flame retardancy of EP [1][2][3][4][5]. To improve the flame retardancy of EP [6][7][8][9], several methods are commonly used, such as surface modification [10], superrefining [11][12][13], complex cooperation [14,15], and cross-linking. Many research works have been carried out, and the direct addition of flame retardants has the advantages of convenience and economy, making it the most frequently chosen approach. Halogen flame retardants have the advantages of high flame retardant efficiency [16], low dosage and good compatibility with materials; however, a large amount of smoke and poisonous, corrosive gases, such as dioxins, are produced during combustion, causing great harm to the environment. Metal hydroxides are also available as flame retardant synergists, being non-toxic and of good stability, but the large amounts required and their poor flowability reduce the mechanical properties of the materials. Therefore, we consider IFR an environmentally friendly flame retardant [17], which is halogen-free and produces little smoke. The most familiar and commercial IFR system is the combination of ammonium polyphosphate (APP) [18] and pentaerythritol (PER) [19][20][21]. Blending Pa-Ba with this IFR system not only alleviates the need for a high content of flame retardant in the EP matrix, but also is more economical, because it reduces the amount of APP, which originates from nonrenewable resources, in favor of Pa-Ba; this can make EP more widely usable in various fields.

Preparation of Pa-Ba

The synthesis diagram of Pa-Ba prepared from Pa and barium carbonate is shown in Scheme 1.
First, 11.96 g (0.06 mol) BaCO3 was suspended in 100 mL deionized water at 35 °C, and 9.43 g (0.01 mol) Pa (70% aqueous solution) was dissolved in 50 mL of deionized water. After complete dissolution, the Pa solution was placed in a constant pressure dropping funnel and added to the BaCO3 suspension at a rate of 5 drops per 10 s over 30 min, with mechanical stirring. The reaction was kept at constant temperature for 3 h, until no further precipitation occurred. Afterwards, the white precipitate that formed was filtered and rinsed with deionized water no fewer than 5 times, until the pH was equal to 7. Finally, the product was dried at 75 °C for 10 h, and a white powder, namely Pa-Ba, was obtained. The yield of Pa-Ba was about 85.88%.

Scheme 1. Synthesis route of Pa-Ba.

Preparation of EP Composites

The composition of the EP systems is shown in Table 1. At the very beginning, the EP systems were slowly stirred for 30 min at 75 °C after the addition of IFR and Pa-Ba, so that the flame retardant was uniformly dispersed in the epoxy resin. The curing agent PA651 (the mass ratio of EP to PA651 was 3:1) was added to the EP composites, with stirring until the mixture was uniform. Afterwards, the blends were dried in a vacuum oven at 100 °C for 3 h and injected slowly into the mold, which had been preheated for 10 min, and cured in a constant temperature drying oven using the curing schedule 110 °C/3 h + 130 °C/3 h + 150 °C/2 h. The EP composites were obtained after natural cooling.

Measurements

FTIR was performed with a Thermo Fisher Nicolet iS10 spectrometer (Beijing Ruili Analytical Instrument Co., Ltd., Beijing, China), recording 16 scans over the region 400-4000 cm−1. The sample functional groups were tested by the KBr pressing method. Thermogravimetric analysis (TGA) (Netzsch, Germany) was performed at a heating rate of 20 °C/min from 40-800 °C in an N2 atmosphere. The synthesized samples and the residual chars after burning were observed with a ZEISS EVO MA15 scanning electron microscope (Carl Zeiss, Germany). With a laser wavelength of 532 nm, Raman spectra were recorded in the range 200-2000 cm−1 using a Thermo Fisher DXR2xi confocal Raman spectrometer (Renishaw plc, Wotton-under-Edge, UK).

Characterization of Pa-Ba

In accordance with Scheme 1, Pa-Ba was produced. Figure 1 shows the FTIR spectra of Pa, BaCO3 and Pa-Ba [43]. For Pa, the band at 3416.67 cm−1 is associated with the O-H absorption of H2O, the O-P-O stretching vibration appears at 1639.30 cm−1, CH2 shows its characteristic vibration peaks at 2846-2942 cm−1, and the band at 2820.32 cm−1 is attributable to the stretching vibration of P-OH.
BaCO3 shows the absorption peak of C=O at 1447.34 cm−1, and 692.28 cm−1 is a characteristic band of the barium salt. As for Pa-Ba, characteristic peaks from both the barium salt and Pa are observed; for example, the peak at 692.28 cm−1 matches that of BaCO3, and the (PO3)2− absorption at 1007.18 cm−1 shifts to 1072.20 cm−1, which means that the interaction between the ions is changed. Moreover, the P-OH vibration at 2820.32 cm−1 and the C=O absorption peak at 1447.34 cm−1 both disappear. The results above suggest that Pa-Ba was synthesized. Barium phytate was further characterized by scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS), as shown in Figure 2 [44]. Figure 2a shows the SEM morphology of Pa-Ba, which has an irregular granular shape. The EDS data of Pa-Ba are depicted in Figure 2b-e, which demonstrate the composition and distribution of the main elements in Pa-Ba. It is evident that there are four main elements, oxygen (O), phosphorus (P), carbon (C), and barium (Ba), in Pa-Ba. In addition, the four main elements are distributed homogeneously, which further confirms that Pa-Ba was synthesized.

Thermal Stability

To investigate the reactions among Pa-Ba, EP and IFR, their thermal decomposition behaviors were examined using TGA in an N2 atmosphere, as displayed in Figure 3. The details are given in Table 2. The degradation of pure EP can be divided into the following two stages. The first stage occurs from 350 to 500 °C, which is also the main thermal degradation stage of EP. The degradation of EP releases a large amount of heat and produces CO, CO2, CH4 and other thermal degradation gases.
In the second stage, after 500 °C, EP continuously degrades and carbonizes to form a carbon layer. With the addition of IFR and Pa-Ba, the carbon layer increases further, which can better block the exchange of heat and gas and prevent combustion. Furthermore, there are some exothermic and endothermic events during the thermal analysis. An endothermic reaction occurs when APP is heated to 300-330 °C, as the side-chain structure decomposes and part of the amino groups are removed to form hydroxyl groups. An exothermic reaction occurs after the thermal decomposition of APP, which reacts with PER to form a homogeneous carbon layer. The char formation of IFR is instrumental in restraining the combustion of the EP composites, thus improving thermal stability. In addition, Pa-Ba reacts with IFR exothermically at 350-600 °C to form a more stable carbon layer. As exhibited in Figure 3a and Table 2, the initial decomposition temperature (T5 wt%) of pure EP occurs at 372 °C, and there is obvious weight loss at 300-500 °C. After adding Pa-Ba and IFR, the weight loss stage of the EP composites is identical to that of pure EP. However, the T5 wt% of the EP composites decreases to different degrees; for example, the T5 wt% for EP/16IFR and EP/14IFR/2Ba decreases from 372 °C to 320 °C and 313 °C, respectively, which reveals that IFR and Pa-Ba advance the decomposition of EP [43]. The residue of EP/IFR, 23.27 wt% at 800 °C, is remarkably enhanced in comparison with that of pure EP (7.25 wt%), which means that a stable carbon layer was generated by the reaction of APP and PER. After adding Pa-Ba, the residue of the EP composites increases further; in particular, the EP/14IFR/2Ba composite has the highest residue of 25.90 wt%. From the DTG curves in Figure 3b, the maximum weight loss rate and the Tmax of the EP composites both show downward trends. The EP/IFR/2Ba composite has the lowest maximum thermal decomposition rate, at a corresponding temperature of 347 °C, which is 67 °C lower than that of pure EP. This proves that IFR and Pa-Ba can reduce the Tmax of EP, generating more residues and forming more stable carbon layers. The lower initial decomposition temperature, the reduction in Tmax and the increase in the residues demonstrate that Pa-Ba can be used as a carbon source, which can protect the matrix from degradation, providing heat and oxygen insulation and thus producing more stable carbon layers.

Flame Retardancy

As demonstrated in Table 3, pure EP shows high flammability, with an LOI of only 19.1%. Obviously, the addition of IFR increases the LOI of EP from 19.1% to 24.3%, effectively improving the flame retardancy of EP and revealing that IFR can be considered a productive flame retardant. With Pa-Ba added to the EP composites, the LOI values first increase and then decrease; in particular, the EP/14IFR/2Ba composite has the highest LOI, reaching 30.7%. The results demonstrate that Pa-Ba can increase the flame retardant efficiency of IFR in EP, thus making EP widely usable in various fields. The cone calorimeter test (CCT) is adopted to assess the flammability of polymer materials; the combustion behavior in a fire can be evaluated from the experimental data [41].
Flame Retardancy As demonstrated in Table 3, pure EP is highly flammable, with an LOI of only 19.1%. The addition of IFR increases the LOI of EP from 19.1% to 24.3%, effectively improving the flame retardancy of EP and showing that IFR is a productive flame retardant. When Pa-Ba is added to the EP composites, the LOI values first increase and then decrease; in particular, the EP/14IFR/2Ba composite has the highest LOI, reaching 30.7%. These results demonstrate that Pa-Ba can increase the flame-retardant efficiency of IFR in EP, broadening the range of fields in which EP can be used. The cone calorimeter test (CCT) was adopted to test the flammability of the polymer materials; the combustion behavior in a fire can be assessed from the experimental data [41]. Many combustion parameters of combustible materials can be obtained by CCT. The flame retardancy in an actual fire can be evaluated by the heat release rate (HRR), peak heat release rate (PHRR), and total heat release (THR), while the smoke suppression properties can be estimated by the smoke production rate (SPR), peak smoke production rate (PSPR), and total smoke production (TSP). Curves of HRR and THR are displayed in Figures 4 and 5, respectively. Pure EP is highly flammable: its HRR shows a single peak that changes rapidly over time, and its PHRR is as high as 794.09 kW/m 2 . The HRR curves of the EP composites are similar in shape to that of pure EP. The addition of IFR slows down the heat release and decreases the PHRR value. After the addition of Pa-Ba, the PHRR decreases further; in particular, the PHRR of EP/14IFR/2Ba falls to the lowest level of 245.15 kW/m 2 , a decrease of 548.94 kW/m 2 and only 30.87% of the value for pure EP. As displayed in Figure 5, pure EP has a THR of 95.45 MJ/m 2 . The THR of EP/16IFR is markedly lower, at 41.18 MJ/m 2 . After adding Pa-Ba, the THR decreases further; in particular, the THR of EP/14IFR/2Ba drops to the lowest level of 27.16 MJ/m 2 , a decrease of 68.29 MJ/m 2 and only 28.45% of the value for pure EP. The HRR and THR curves thus show that the addition of IFR and Pa-Ba effectively reduces the PHRR and THR values, attesting that the flame retardancy of EP was improved [45]. Curves of SPR and TSP are exhibited in Figures 6 and 7, respectively. As shown in Figure 6, the PSPR of pure EP is 0.255 m 2 /s, much higher than that of the EP composites. The addition of IFR and Pa-Ba slows down the smoke production of EP and decreases the PSPR; in particular, the PSPR of EP/14IFR/2Ba falls to the lowest level of 0.109 m 2 /s, a decrease of 0.146 m 2 /s. From Figure 7, pure EP produces smoke continuously and rapidly, with a TSP as high as 28.62 m 2 /m 2 . After adding IFR and Pa-Ba, the TSP of the EP composites decreases dramatically; for EP/14IFR/2Ba in particular, the TSP is cut to the lowest level of only 6.48 m 2 /m 2 , a decrease of 22.14 m 2 /m 2 .
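To make the cone-calorimeter quantities above concrete, here is a minimal Python sketch deriving PHRR, THR and TSP from HRR and SPR traces; the single-peak curves are synthetic stand-ins, not the measured data.

import numpy as np

t = np.linspace(0, 600, 601)                         # s, 1 Hz sampling
hrr = 800 * np.exp(-0.5 * ((t - 120) / 40) ** 2)     # kW/m2, single peak
spr = 0.25 * np.exp(-0.5 * ((t - 120) / 60) ** 2)    # m2/s

def integrate(y, x):
    # trapezoidal rule, written out to stay NumPy-version independent
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

phrr = hrr.max()                    # peak heat release rate (kW/m2)
thr = integrate(hrr, t) / 1000.0    # total heat release (MJ/m2)
tsp = integrate(spr, t)             # total smoke production (m2/m2 of sample)
print(f"PHRR={phrr:.0f} kW/m2, THR={thr:.1f} MJ/m2, TSP={tsp:.1f} m2/m2")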
From the SPR and TSP curves, the addition of IFR and Pa-Ba effectively decreases the SPR and TSP values and the total smoke volume of the materials, which proves that the EP composites have excellent smoke suppression performance. All of this indicates that, during combustion, the presence of Pa-Ba promotes the cross-linking of IFR into char more efficiently, and the carbon layer formed acts as a barrier that blocks the transmission of heat and combustible gas, thus playing a better flame-retardant role. Residual Char Continuous and dense residual char provides heat insulation and oxygen insulation, thus forestalling secondary combustion of the EP matrix. Figure 8 displays digital photos of EP, EP/16IFR and EP/14IFR/2Ba after CCT. Pure EP is nearly burnt out and leaves little residual char. EP/16IFR generates an expanded carbon layer, but it is not compact enough and has a relatively low expansion height of 3.8 cm. By contrast, the carbon layer generated by EP/14IFR/2Ba, with an expansion height of 5.0 cm, is relatively dense and continuous, which means that the carbonization quality of EP/14IFR/2Ba is better than that of EP/16IFR. Furthermore, SEM images of the char residues are shown in Figure 9. For pure EP, although the surface is relatively smooth, numerous pores and cracks are found on the discontinuous residual char surface, making it impossible to delay the combustion of the underlying EP. In contrast, the continuity of the EP/16IFR carbon layer is markedly improved, with few pores and a compact, continuous structure. Compared with EP/16IFR and pure EP, the addition of Pa-Ba forms a denser, wrinkled carbon layer that acts as a more effective protective barrier, not only preventing molten dripping and the escape of combustible gas but also providing heat insulation and oxygen insulation. The residual chars after cone calorimeter testing were further investigated, as demonstrated in Figure 10.
Based on the Raman spectra, we analyzed the characteristic D-band and G-band of graphitic carbon, which appear at 1348 and 1590 cm −1 , respectively [46]. The D-band mainly corresponds to defects in the graphitized layer, while the G-band corresponds to the ordered graphite layer. The intensity ratio R of the D band to the G band (I D /I G ) reflects the graphitization degree; I D /I G is inversely proportional to the graphitization degree. The R values of pure EP, EP/16IFR and EP/14IFR/2Ba are 3.93, 3.52 and 3.15, respectively, indicating that the R value of EP/16IFR is lower and its degree of graphitization higher than those of pure EP. After adding Pa-Ba, the R value decreases further, which means that the graphitization degree of the carbon layer is higher than that of EP/16IFR. Therefore, the carbon layer formed by EP/14IFR/2Ba after combustion is more orderly and dense, and its quality is better than that of the other two materials, which is conducive to preventing the formation of cracks during and after combustion.
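As an illustration of how an I D /I G ratio like those quoted above can be extracted, the following Python sketch fits two Lorentzian bands near 1348 and 1590 cm −1 to a spectrum; the spectrum, peak parameters and noise level are all assumed for the example, not taken from the measured data.

import numpy as np
from scipy.optimize import curve_fit

def two_lorentz(x, a1, c1, w1, a2, c2, w2):
    # sum of two Lorentzian peaks (D band + G band)
    l1 = a1 * w1**2 / ((x - c1)**2 + w1**2)
    l2 = a2 * w2**2 / ((x - c2)**2 + w2**2)
    return l1 + l2

x = np.linspace(1000, 2000, 500)
y = two_lorentz(x, 3.5, 1348, 80, 1.0, 1590, 60)   # toy char spectrum
y += np.random.default_rng(0).normal(0, 0.02, x.size)

p0 = [3, 1348, 70, 1, 1590, 70]                    # initial guesses
popt, _ = curve_fit(two_lorentz, x, y, p0=p0)
R = popt[0] / popt[3]          # ID/IG from fitted peak amplitudes
print(f"ID/IG = {R:.2f}")      # lower R -> higher graphitization degree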
Figure 10. Raman spectra of EP, EP/16IFR and EP/14IFR/2Ba after combustion. Curves of the FTIR spectra of the residual chars are exhibited in Figure 11. In the EP/16IFR spectrum, the band at 3462.01 cm −1 is associated with O-H absorption; the C-H stretching vibrations appear at 2933.42 and 2863.26 cm −1 ; the characteristic C=C vibration peak lies at 1637.82 cm −1 ; the band at 1401.27 cm −1 is attributable to C-N stretching and N-H bending vibrations; and 1085.91 cm −1 is the characteristic band of P=O. The EP/14IFR/2Ba spectrum shows no obvious difference from that of EP/16IFR, demonstrating that the addition of Pa-Ba does not change the degradation products but only accelerates or delays the reactions.
Figure 11. FTIR spectra of EP/16IFR and EP/14IFR/2Ba after combustion. Conclusions Barium phytate (Pa-Ba) was prepared from phytic acid and barium carbonate and characterized by FTIR and SEM. EP composites were produced by adding the synthesized Pa-Ba and IFR. The thermal stability was studied by TGA: EP/14IFR/2Ba has the highest residue of 25.90 wt%, much higher than that of pure EP (7.25 wt%). The flame retardancy was then analyzed by LOI and CCT. The results show that Pa-Ba cooperates with IFR to flame-retard EP, with EP/14IFR/2Ba showing the highest LOI value of 30.7%. The PHRR of EP/14IFR/2Ba decreases dramatically, from 794.09 kW/m 2 to 245.15 kW/m 2 , while the PSPR decreases from 0.255 m 2 /s to 0.109 m 2 /s. In the residual char of the EP composites after combustion, the expanded carbon layer generated by EP/14IFR/2Ba is dense and continuous, with a height of 5.0 cm. SEM and Raman spectroscopy were adopted to investigate the residual char further; they reveal that the carbonization quality of EP/14IFR/2Ba is better than that of the other composites, which is conducive to preventing the formation of cracks during and after combustion. These results demonstrate that Pa-Ba can be used as a carbon source that protects the matrix from degradation, prevents the escape of combustible gas, and provides significant heat insulation and oxygen insulation, thereby forming more stable carbon layers. Meanwhile, Pa-Ba improves the flame-retardant efficiency of IFR in EP and reduces the total smoke volume of the materials, cooperating with IFR to improve the thermal stability, flame retardancy and smoke suppression performance of EP, thus reducing the probability of fire and expanding the applicable scope of EP. Data Availability Statement: All the data will be available to the readers.
7,086.8
2021-08-28T00:00:00.000
[ "Materials Science", "Environmental Science", "Chemistry" ]
Mesomorphic, Optical and DFT Aspects of a Near-to-Room-Temperature Calamitic Liquid Crystal A new liquid crystalline optical material based on a Schiff base core, with a mesophase near room temperature, (4-methoxybenzylideneamino)phenyl oleate (I), was prepared from a natural fatty acid derivative, and its physical and chemical properties were investigated by experimental and theoretical approaches. The molecular structure was confirmed by elemental analysis, FT-IR (Fourier-transform infrared) spectroscopy and NMR (nuclear magnetic resonance) spectroscopy. Optical and mesomorphic activities were characterized by differential scanning calorimetry (DSC) and polarized optical microscopy (POM). The results show that compound I exhibits an enantiotropic monomorphic phase comprising a smectic A phase within the near-room-temperature range. The ordinary and extraordinary refractive indices, as well as the birefringence, were analyzed as functions of temperature. Microscopic and macroscopic order parameters were also calculated. Theoretical density functional theory (DFT) calculations were carried out to estimate the geometrical molecular structures of the prepared compounds, and the DFT results were used to interpret the mesomorphic results and optical characteristics in terms of the predicted data. Three geometrical isomers of the prepared compound were investigated to predict the most stable isomer. Many parameters were affected by the geometrical isomerism, such as the aspect ratio, planarity, and dipole moment. Thermal parameters from the theoretical calculations revealed that the isomer with the most co-planar aromatic core is the most stable conformer. Introduction Liquid crystals (LCs) of low melting temperature are important materials for a wide range of applications, including temperature sensors and electro-optical displays [1][2][3][4][5][6]. These materials require certain characteristics to be manageable in device applications [1,2,7]. The applicable liquid crystalline compounds depend on various parameters, such as optical transmittance and absorption coefficient, which can be tuned in a different way by exchanging the mesogenic core. These findings encouraged us to study the synthesis and analysis of another calamitic molecule with an inverted azomethine linkage core. Based on the above considerations, and in order to achieve low melting temperatures near room temperature, a new two-ring calamitic compound with an azomethine central linkage, namely (4-methoxybenzylideneamino)phenyl oleate (I), was synthesized, and its mesomorphic and optical behaviour investigated using experimental and theoretical approaches. Additionally, the ordinary and extraordinary refractive indices, birefringence, thermal stability and order parameters were measured. The optical activity data were correlated with geometrical results from simulated DFT modeling calculations. 2.2. Synthesis of (4-methoxybenzylideneamino)phenyl oleate, I Compound I was prepared according to Scheme 1:
Synthesis of (4-methoxybenzylideneamino)phenol A: Equimolar amounts of 4-methoxybenzaldehyde (4.1 mmol) and 4-aminophenol (4.1 mmol) in ethanol (10 mL) were refluxed for 2 h. The reaction mixture was allowed to cool, and the separated product was filtered off. The solid obtained was recrystallized from ethanol. Results and Discussion Nuclear magnetic resonance (NMR) spectroscopy is a versatile analytical tool that has been used extensively in chemistry and in the identification of liquid crystals [39]. The great strength of NMR is its ability to distinguish the unique magnetic environments of the same type of nuclei (e.g., 1 H and 13 C) in different positions of the same molecule, enabling researchers to investigate molecules at the atomic level. NMR is thus a powerful tool for studying molecular dynamics; it can be used to elucidate different structures of the same molecule and to monitor the associated kinetics [40] and thermodynamics of the changes caused by varying the sample temperature. The molecular dynamics of antiferroelectric liquid crystals can likewise be investigated using different NMR techniques. The molecular formula of the prepared compound I was confirmed via its elemental analysis, FT-IR data and NMR spectroscopy, and the results were consistent with the proposed structure. The 1 H-NMR and 13 C-NMR signals of the methoxy group appeared at δ = 3.87 and 52.25 ppm, respectively. The alkenyl protons appeared at δ = 5.96 and 5.89 ppm as two multiplets. The signal at δ = 8.60 ppm was assigned to the CH=N proton, and the corresponding carbon appeared at 158.8 ppm. The 1 H peaks between 7 and 7.4 ppm were assigned to the aromatic resonances, while the aromatic 13 C peaks were observed at 158-110 ppm. Schiff bases, like azo derivatives, can exist in two forms, the E and Z isomers, but are present only in the E form in the solid state [41,42]. The Z form can be obtained either by UV irradiation or by thermal heating. In this study, the NMR spectra were recorded at different temperatures to investigate the molecular dynamics and to probe any conformational or structural changes in response to temperature variations (Supplementary Material). The NMR spectra recorded at different temperatures (see Figure S1) and the chemical shifts associated with each signal (Table S1) revealed no significant effect of temperature on the chemical shifts, indicating that compound I is thermally stable. As shown in Figure 1, thermal heating of a DMSO solution of the prepared compound at 365 K resulted in a new CH=N peak at δ = 9.9 ppm, providing clear evidence of the formation of the Z isomer. The percentage of E and Z isomers can be calculated from the relative integrated intensities of the two CH=N signals.
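A minimal Python sketch of that calculation, assuming the integrals of the E (δ ≈ 8.60 ppm) and Z (δ ≈ 9.9 ppm) CH=N signals are available; the integral values below are placeholders, not the measured ones.

def isomer_percentages(integral_E, integral_Z):
    # isomer fractions from the relative 1H signal integrals
    total = integral_E + integral_Z
    return 100 * integral_E / total, 100 * integral_Z / total

pct_E, pct_Z = isomer_percentages(0.85, 0.15)
print(f"E: {pct_E:.0f} %, Z: {pct_Z:.0f} %")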
Mesomorphic Behavior Studies The mesomorphic and optical activity of the synthesized oleic acid (natural fatty acid) derivative I were investigated by differential scanning calorimetry (DSC), and the textures were confirmed by polarized optical microscopy (POM). DSC thermograms of the present compound I during heating/cooling scans are presented in Figure 2. These show two endothermic peaks, corresponding to the crystal-smectic A and smectic A-isotropic transitions, during the heating scans, with the corresponding transitions observed on cooling. The POM showed a focal-conic fan texture characteristic of the SmA phase (Figure 3). Details of the transition temperatures and enthalpies, as well as the normalized entropies of transition derived from the DSC heating and cooling scans, are presented in Table 1. To ensure the stability of the synthesized compound, DSC measurements were performed for two heating-cooling cycles, and the thermal analyses of derivative I were recorded from the second heating scan. Moreover, the DSC measurements were confirmed by the POM texture observations. Figures 2 and 3 indicate that the prepared compound exhibits enantiotropic monomorphic behavior and possesses a smectogenic mesophase (SmA phase). Cr-I = transition from solid to isotropic phase. Cr-SmA = transition from solid to SmA phase. SmA-I = transition from SmA to isotropic phase. I-SmA = transition from isotropic to SmA phase. SmA-Cr = transition from SmA to solid phase. ∆S/R = normalized entropy of transition. As shown in Table 1 and Figures 2 and 3, the oleic acid derivative I is a mesomorphic compound with a melting temperature near room temperature (41.9 • C upon heating), a property promoted by the long alkenyl terminal chain. The terminal interactions play an important role in determining the SmA-to-isotropic behavior: as the terminal attractions become stronger, the smectic molecular order is enhanced, because the long alkenyl chain permits easy arrangement of the layers and favors the SmA-to-I transition. Furthermore, the smectic phase formation may be due to microphase separation between the alkenyl chains and the aromatic cores, which becomes more favorable as the length of the terminal chain increases [43,44]. The mesomorphic range of compound I is ≈ 20 • C upon heating and ≈ 26.5 • C on cooling. Generally, the mesomorphic behavior of calamitic mesogens is affected by many parameters, such as the dipole moment, aspect ratio, polarizability and the competitive interactions between terminal aggregations. The molecular geometry, which is influenced by the mesomeric configurations, also affects the molecule-molecule interactions. In our previous studies, we concluded that the molecular aggregation of rod-like molecules through the lateral attraction of planar molecules, reinforced by longer alkenyl chains, may play the main role in the mesophase activity of LC compounds with two aromatic rings [2,30]. Another factor is the end-to-end association of the terminal flexible chains, which differs according to mesomeric effects. These factors combine in different ratios to determine the mesomorphic properties. From the viewpoint of entropy, a dominant feature of the alkenyl chains is their lability: they can easily undergo multi-conformational changes [45]. Thus, the lower values of the estimated entropy changes relative to conventional low-molar-mass mesogens may be attributed to thermal cis-trans isomerization of the CH=N linkage, in agreement with previous reports [33,[46][47][48]].
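For the normalized entropy of transition quoted in Table 1, the standard relation ∆S/R = ∆H/(R·T) can be applied directly, as in this small Python sketch; the enthalpy and temperature below are placeholders, not the measured values for compound I.

R = 8.314  # J mol-1 K-1

def normalized_entropy(dH_kJ_mol, T_celsius):
    # dS/R = dH / (R * T), with T the transition temperature in kelvin
    T_K = T_celsius + 273.15
    return dH_kJ_mol * 1000.0 / (R * T_K)

print(normalized_entropy(2.0, 62.0))  # e.g. a SmA-I transition at 62 deg C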
Measurements of Refractive Index An Abbe refractometer (Bellingham, England) with a thermostated heating control unit (±0.1 • C around the prisms) was used to measure the refractive indices at given temperatures. The compound was illuminated with a sodium lamp (589.3 nm). To measure the ordinary and extraordinary refractive indices (no and ne) of the liquid crystalline sample, the prisms of the Abbe refractometer were aligned for planar and homeotropic orientations, respectively. The values of no and ne for compound I were recorded during the cooling process with an accuracy of ±0.0005, as shown in Figure 4. As the temperature increases, the no values increase and the ne values decrease. The effective geometry parameter α eg describes the dispersion of light in liquid crystals and is obtained from the measured refractive indices [49][50][51]. Figure 5 shows that, for compound I, the α eg values increase with increasing temperature within the mesophase. The α eg values reach unity in the isotropic phase because the molecular orientational order of the sample vanishes [52,53]. Birefringence Measurement using the Abbe Refractometer One of the critical parameters affecting the operation of electro-optic devices is the birefringence of the liquid crystal [50,53,54]. Figure 6 shows the values of the birefringence (∆n), the difference between the measured ne and no for compound I, at different temperatures using a sodium lamp (589.3 nm). As the temperature increases, ∆n gradually decreases [50][51][52][53][54][55][56][57]. Figure 6 also shows the best curve fit of the ∆n values using the Cauchy dispersion relationship.
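To make the temperature trend above concrete, here is a short Python sketch computing ∆n = ne − no over a cooling scan; the index values are invented for illustration, not the measured data of compound I.

import numpy as np

T = np.array([30, 35, 40, 45, 50])            # deg C, cooling scan
ne = np.array([1.620, 1.610, 1.600, 1.590, 1.570])
no = np.array([1.500, 1.505, 1.510, 1.515, 1.525])

dn = ne - no                                  # birefringence at each T
for Ti, d in zip(T, dn):
    print(f"{Ti} C: dn = {d:.3f}")            # dn decreases as T increases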
Birefringence Measurement by a Modified Spectrophotometer The phase transition temperatures and birefringence of mesomorphic materials can be obtained during heating and cooling from the transmission spectrum by the modified spectrophotometer (MS) method, as in Figure 7 [56,57]. The setup consists of the diffraction grating D, rotating disc R, mirror M, beam splitter B, and P 1 and P 2 as polarizer and analyzer. The sample S was placed between two glass slides with two polarizers and positioned in an electric oven with a heating control unit at a rate of 1 • C/min, as in Figure 7. The transmitted light intensity was measured as a function of wavelength (200-900 nm) at a given temperature using the MS technique. Figure 8 shows the variation of the light transmittance with wavelength at selected temperatures during cooling of sample I placed between two crossed polarizers. The transmittance changes occur because of the orientation of the compound's mesogens. Using the MS method, the transition temperatures were determined, and the results obtained were compatible with those determined by the DSC and POM techniques. The birefringence (∆n) was determined in the LC phase from the light transmission of the sample, using the measured transmittances T ⊥ and T ‖ of light between crossed and parallel polarizers, respectively [56][57][58][59]. Using the MS method, the T ⊥ and T ‖ values at a given wavelength and temperature were determined for compound I. A travelling microscope was used to determine the thickness t of the sample, which was 30 μm. The values of ∆n for compound I in the LC phase at given temperatures at a wavelength of 589.3 nm were estimated and compared with those obtained using the Abbe refractometer, as shown in Table 2. Interferometric Method The interferometric method is based on the interference of linearly polarized light travelling through a birefringent medium, exploiting a linear or circular polariscope [60][61][62].
The intensity I of the light leaving the polariscope after passing through the liquid crystal cell is given by [63][64][65][66]:

I = 2IoT² [cos²(θp − θa) − sin2θp · sin2θa · sin²(π∆nt/λ)] (3)

where Io is the intensity of the incident monochromatic light of wavelength λ on the polariscope, T is the transmission of the linear polariscope, θp and θa are the angles between the transmission axes of the polarizer and analyzer, measured in the plane of the principal section of the liquid crystal cell, ∆n is the birefringence, and t is the thickness of the sample. When θp = +45 • and θa = −45 • , or inversely θp = −45 • and θa = +45 • , which means that the polarizer and analyzer are perpendicular to each other, Equation (3) becomes:

I = Im sin²(π∆nt/λ) (4)

where Im = 2IoT², so that:

∆n = (λ/πt) sin⁻¹ √(I/Im) (5)

By measuring the intensities I and Im, the value of ∆n is obtained from the known wavelength λ and thickness t, as shown in Table 2. Order Parameter Measurement By applying the hypothesis of Vuks, the microscopic order parameter S was determined from the measured values of ne and no for the investigated compound I as follows [67]:

S (∆α/α) = (ne² − no²) / (⟨n²⟩ − 1) (6)

where ∆α and α are, respectively, the anisotropic and mean molecular polarizability, and ⟨n²⟩ is the mean square value of the refractive index. By using the extrapolation method of Haller, the scaling factor ∆α/α is obtained and, substituting it into Equation (6), the parameter S can be estimated for compound I [56,57,68]. In the crystalline and mesophase regions, the macroscopic order parameter Q is related to the birefringence values ∆n and ∆no by Q = ∆n/∆no (7) [69,70]. The Haller formula can be used to obtain the value of ∆no [56][57][58]:

∆n = ∆no (1 − T/T C )^β (8)

where T C is the transition temperature from the smectic A to the isotropic phase, and β is a constant of the material. The values of ∆no and β were determined from the relationship between ∆n and ln (1 − T/T C ), as listed in Table 2. Figures 9 and 10 show that S and Q for compound I vary inversely with temperature, whereas their relation with ∆n appears linear. The S and Q values for compound I are nearly the same.
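A small Python sketch of the Haller analysis described above, fitting ∆n = ∆no(1 − T/T C )^β to birefringence data to extract ∆no and β and then forming Q = ∆n/∆no; the data points and the assumed T C are synthetic placeholders.

import numpy as np
from scipy.optimize import curve_fit

Tc = 335.0  # K, SmA-to-isotropic transition temperature (assumed)

def haller(T, dn0, beta):
    return dn0 * (1.0 - T / Tc) ** beta

T = np.array([300.0, 305, 310, 315, 320, 325])        # K
dn = haller(T, 0.18, 0.20) + np.random.default_rng(1).normal(0, 1e-3, T.size)

popt, _ = curve_fit(haller, T, dn, p0=[0.2, 0.2])
print(f"dn0 = {popt[0]:.3f}, beta = {popt[1]:.3f}")
print(dn / popt[0])   # macroscopic order parameter Q = dn / dn0 at each T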
Molecular Polarizability Molecular polarizability is a significant parameter for liquid crystal materials. The ordinary (α o ) and extraordinary (α e ) polarizabilities describe the response to the electric vector perpendicular and parallel to the optical axis of the mesomorphic compound I, respectively, and can be calculated by the method of Vuks [56]:

α e = (3/4πN) (ne² − 1)/(⟨n²⟩ + 2), α o = (3/4πN) (no² − 1)/(⟨n²⟩ + 2)

where N is the number of molecules per unit volume and ⟨n²⟩ is the mean square value of the refractive index, given by [56,71]:

⟨n²⟩ = (ne² + 2no²)/3

The α e and α o values for the prepared compound I at different temperatures are presented in Figure 11. The α e values increase as the temperature decreases, while the α o values increase with increasing temperature; this temperature dependence mirrors that of the birefringence. Moreover, the α e /α o values depend linearly on temperature (Figure 12).
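A minimal Python sketch of the Vuks calculation above; the density and molar mass used to form the number density N are assumptions for illustration, not measured values for compound I.

import numpy as np

NA = 6.022e23  # Avogadro's number

def vuks_polarizability(ne, no, rho_g_cm3, M_g_mol):
    # alpha_i = (3 / (4*pi*N)) * (n_i**2 - 1) / (<n2> + 2)
    N = rho_g_cm3 * NA / M_g_mol              # molecules per cm3
    n2 = (ne**2 + 2 * no**2) / 3.0            # <n2> = (ne^2 + 2 no^2)/3
    ae = (3 / (4 * np.pi * N)) * (ne**2 - 1) / (n2 + 2)
    ao = (3 / (4 * np.pi * N)) * (no**2 - 1) / (n2 + 2)
    return ae, ao                             # cm3 (x 1e24 gives A^3)

print(vuks_polarizability(1.60, 1.51, 1.0, 450.0))  # assumed rho, M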
Molecular Modeling (DFT Calculations) Three configurational isomers are proposed according to the orientation of the -CH=N- group with respect to the MeO group and the C=O of the carboxylate, as shown in Figure 13. Molecular Geometry Three geometrical isomers of the prepared compound I were investigated to predict the most stable isomer (Ia-c). These isomers were built according to the orientation of the -CH=N- group with respect to the -OCH 3 group and the carbonyl group of the -COO- linkage of the oleate moiety. DFT calculations were carried out in the gas phase at the B3LYP/6-311G(d,p) level to estimate the stability of the proposed isomers (Ia-c). The absence of imaginary frequencies for all suggested conformers is proof of their stability (Figure 14). Table 3 shows selected structural parameters of the optimized geometries of the postulated isomers. The twist angles between the two phenyl rings were estimated and, for all isomers, are strongly affected by the orientation of the -CH=N- linkage with respect to the terminal groups. The twist angle is 49.1 • , 48.5 • and 55.4 • for the isomers Ia, Ib and Ic, respectively. Isomer Ib shows the most planar geometry, with the smallest twist angle, whereas Ic has the least planarity. Such planarity can affect the degree of packing of the molecules in the condensed liquid crystalline phase; moreover, the co-planarity of liquid crystals is an essential parameter affecting mesophase behavior [38]. On the other hand, the aspect ratios of the proposed isomers were calculated from the predicted dimensional parameters; the orientation of the -CH=N- linkage does not significantly affect the aspect ratio. The dipole moment and polarizability of the investigated isomers were calculated using the same method. Both the dipole moment and the polarizability are significantly affected by the geometrical orientation of the mesogenic group (C=N) with respect to the terminal groups: the configurational structure greatly affects the polarity and strongly impacts the polarizability of the molecules. Isomers Ia and Ib show almost the same dipole moment and polarizability, whereas isomer Ic differs by almost 20 Bohr 3 in the polarizability and by 1.5 Debye in the dipole moment. These results can be explained in terms of the co-planarity of the aromatic rings. Thermal parameters calculated by the same method with the same basis set are summarized in Table 4. The results of the theoretical calculations for the three geometrical isomers, Ia-c, reveal that conformer Ia, with the most co-planar aromatic core, is the most stable, whereas Ic, the least co-planar isomer, is the least stable, with an energy difference of 209 kcal mol −1 . In contrast, the energy difference between the Ia and Ib isomers is only 0.3 kcal mol −1 .
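As an illustration of what these energy differences imply, a short Python sketch of the relative Boltzmann populations at 298 K, using the reported relative energies (0, 0.3 and 209 kcal/mol) as inputs.

import numpy as np

kT_kcal = 0.0019872 * 298.15              # kcal/mol at 298 K

dE = np.array([0.0, 0.3, 209.0])          # Ia, Ib, Ic relative energies
w = np.exp(-dE / kT_kcal)                 # Boltzmann weights
pop = w / w.sum()
print(dict(zip(["Ia", "Ib", "Ic"], pop.round(3))))
# Ia and Ib coexist in comparable amounts; Ic is effectively absent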
The small energy difference between the more stable isomers Ia and Ib is evidence of an interconverting equilibrium between them. The high stability of conformer Ib could be attributed to geometrical parameters that permit a high degree of aromatic co-planarity. Figure 15 presents the estimated plots of the frontier molecular orbitals, HOMO (highest occupied) and LUMO (lowest unoccupied), of all postulated configurational isomers Ia-c of the prepared compound I. As shown in the figure, the electron densities of the sites forming the HOMOs and LUMOs are localized on the aromatic rings. Moreover, there is no obvious impact of the geometrical configuration of the mesogenic core or the terminal chains on the location of the electron densities of the FMOs. The orientation of the groups only slightly affects the energy gap between the FMOs; the configuration of the investigated groups in isomer Ic raises the energy levels of its FMOs. The predicted energy gap between the FMOs can be used to estimate the capability of electron transfer between the FMOs during any electronic excitation process. The global softness, S = 1/∆E, is a parameter that reflects the degree of polarizability of materials as well as their photoelectric sensitivity. The charge distribution maps of the proposed conformers Ia-c were calculated with the same basis sets according to the molecular electrostatic potential (MEP, Figure 16). The red regions (negatively charged atomic sites) are distributed over the aromatic rings, with the maximum on the carbonyl group of the ester linkage for all isomers, while the methoxy groups are the least negatively charged atomic sites (blue regions). As shown in Figure 16, the orientation of the C=N group relative to the C=O and OMe groups has significant effects on the mapping of the charge distribution. Since the stability of the enhanced mesophase and the other mesomorphic properties depend strongly on the degree of molecular packing, which is in turn affected by the geometrical structure, the orientation of the charge distribution could have an impact on these properties. Conclusions Herein, a thermotropic mono-azomethine liquid crystalline material based on a natural fatty acid derivative, operating in the near-room-temperature region, has been synthesized and characterized experimentally and theoretically. The mesomorphic and optical behavior were investigated, and computational DFT approaches were used to support the experimental data. Moreover, the ordinary and extraordinary refractive indices and the birefringence at different temperatures were analyzed and briefly discussed. The results show that the investigated material possesses an enantiotropic monomorphic phase comprising a smectic A phase within the near-room-temperature range. The geometrical isomerism affects many parameters, such as the aspect ratio, planarity and dipole moment. Thermal parameters from the theoretical calculations revealed that the isomer with the most co-planar aromatic core (Ia) is the most stable conformer. Isomers Ia and Ib showed the same dipole moment and polarizability, whereas isomer Ic differed by almost 20 Bohr 3 in the polarizability and by 1.5 Debye in the dipole moment.
6,923
2020-11-16T00:00:00.000
[ "Chemistry", "Materials Science", "Physics" ]
Impact and cost-effectiveness of strategies to prevent respiratory syncytial virus (RSV) disease in Vietnam: A modelling study Background New prevention strategies for respiratory syncytial virus (RSV) are emerging, but it is unclear if they will be cost-effective in low- and middle-income countries. We evaluated the potential impact and cost-effectiveness of two strategies to prevent RSV disease in young children in Vietnam. Methods We used a static cohort model with a finely disaggregated age structure (weeks of age <5 years) to calculate the RSV disease burden in Vietnam, with and without a single dose of maternal vaccine (RSVpreF, Pfizer) or of monoclonal antibody (Nirsevimab, Sanofi, Astra Zeneca). Each strategy was compared to no pharmaceutical intervention, and to each other. We assumed both strategies would be administered year round over a ten-year period. The primary outcome measure was the cost per disability-adjusted life year (DALY) averted, from a societal perspective. We ran probabilistic and deterministic uncertainty analyses. Results With central input assumptions for RSVpreF vaccine ($25/dose, 69 % efficacy, 6 months protection) and Nirsevimab ($25/dose, 77 % efficacy, 5 months protection), both options had similar cost-effectiveness ($3442 versus $3367 per DALY averted) when compared separately to no pharmaceutical intervention. RSVpreF vaccine had a lower net cost than Nirsevimab (net discounted cost of $213 m versus $264 m) but prevented fewer RSV deaths (24 % versus 31 %). Our results were very sensitive to assumptions about the dose price, efficacy, and duration of protection. At $5/dose and a willingness-to-pay threshold of 0.5 times the national GDP per capita, both prevention strategies have the potential to be cost-effective. Conclusions RSVpreF vaccine and Nirsevimab may be cost-effective in Vietnam if appropriately priced. Introduction Acute lower respiratory infections (ALRIs) are the leading cause of mortality in children younger than five years of age worldwide [1], with respiratory syncytial virus (RSV) being the most important pathogen [2]. Most of the deaths associated with RSV occurred in children less than 6 months of age living in low- and middle-income countries (LMICs) [2]. In 2017 in Vietnam, ALRIs were estimated to cause 13 % of deaths in children <5 years [3]. In southern Vietnam and in central Vietnam, RSV was also the leading pathogen in a population-based ALRI surveillance study of children younger than two years of age [4][5][6]. Palivizumab (AstraZeneca), a monoclonal antibody (mAb), is the only pharmaceutical intervention currently available to prevent RSV in young children, but it is expensive [7]. Two emerging pharmaceutical strategies have demonstrated high efficacy against severe RSV disease in clinical trials and may be feasible for use in LMICs. A single-injection long-acting mAb (Beyfortus™, Nirsevimab, Astra Zeneca and Sanofi) has been approved for use in infants in Europe and the United States (US) [8,9], but it is very costly. Meanwhile, a maternal vaccine (RSVpreF or PF-06928316, Pfizer) [10] has been approved in the US for use in the third trimester of pregnancy. Both strategies are designed to provide newborns with protection against RSV disease as early in life as possible.
Forthcoming reviews of Nirsevimab and RSVpreF by the World Health Organisation (WHO) Strategic Advisory Group of Experts on Immunisation (SAGE) will have implications for the potential approval and recommended use of both products in LMICs. National decision-makers will also need to assess whether to recommend the introduction of one or both RSV prevention strategies. A preliminary modelling analysis could help synthesise available country-level data, identify key drivers of cost-effectiveness, and establish future data needs. It should also provide a foundation for new analyses to be run in the future as new input data emerge. This paper provides a preliminary assessment of the potential impact and cost-effectiveness of the infant mAb (Nirsevimab) and the maternal vaccine (RSVpreF) strategies in Vietnam. Modelling approach We used version 1.6 of the universal vaccine decision-support model, UNIVAC [11], to evaluate the potential impact and cost-effectiveness of introducing Nirsevimab and RSVpreF over a ten-year period (2025-2034) in Vietnam. Without detailed estimates of the national seasonal RSV incidence, we assumed both strategies would be administered year-round. UNIVAC is an Excel-based static proportionate-outcomes cohort model with a finely disaggregated age structure (weeks of age for children <5 years). The UNIVAC-RSV model is described in detail elsewhere [12]. In brief, UNIVAC is populated with the United Nations (UN) (2022 revision) estimates of the number of individuals alive in each single calendar year and single year of life in Vietnam [13]. For each birth cohort, the numbers of life-years experienced between birth and age five years are multiplied by the rates of severe RSV disease outcomes (cases, clinic visits, hospital admissions and deaths) and non-severe RSV disease outcomes (cases, clinic visits). Rates are entered per 100,000 per year in children aged <5 years. The UNIVAC model calculates the numbers of cases, clinic visits, hospital admissions, deaths, and disability-adjusted life years (DALYs) expected to occur with and without each of the two RSV interventions over the lifetimes of each birth cohort. The expected numbers of disease outcomes aged <5 years are assigned to weeks of age <5 years. For each RSV prevention strategy, the percent reduction in each disease outcome is calculated for each week of age by multiplying the intervention coverage in the relevant week of age by the assumed efficacy against that outcome in the relevant week of age. For simplicity, all disease outcomes are assumed to be independent; hence, if a severe RSV case has an outpatient clinic visit and then later goes to inpatient care, this case is included in both the rate of clinic visits and the rate of hospital admissions. The primary outcome measure is the cost (US$) per DALY averted from the societal perspective, accounting for all costs and benefits aggregated over the ten birth cohorts (2025-2034). All future costs and health benefits were discounted at 3 % per year, and all costs represent 2022 US$ (exchange rate: 1 US$ = 23,195 Vietnam Dong).
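A minimal Python sketch of the proportionate-outcomes logic described above: for each week of age, averted outcomes = outcomes × coverage × efficacy. All inputs below are illustrative placeholders, not the model's actual data.

import numpy as np

weeks = np.arange(260)                        # weeks of age < 5 years
weekly_cases = np.full(weeks.size, 10.0)      # toy RSV case counts per week

coverage = 0.70                               # maternal strategy, set at birth
efficacy = np.where(weeks < 26, 0.69, 0.0)    # 69 % efficacy for ~6 months

averted = weekly_cases * coverage * efficacy
print(f"cases averted: {averted.sum():.0f} of {weekly_cases.sum():.0f}")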
Vietnam does not have a strict willingness-to-pay (WTP) threshold for determining if an intervention is cost-effective. In this study, we calculated the probability the vaccine would be cost-effective at 0.25, 0.5, and 1 times the national gross domestic product (GDP) per capita [14]. These thresholds are broadly consistent with the 0.26-0.89 range estimated by Ochalek et al. for Vietnam [15], based on a range of different approaches for estimating the health effects of changes in health expenditure. All inputs assumed for Vietnam are summarised in Table 1. Disease burden A study in Nha Trang by Yoshida et al. [16] estimated 4,620 (95% CI 3,410-6,270) RSV-ALRI cases per 100,000 per year in children aged <5 years. We estimated 1,400 (95% CI 800-2,420) severe RSV-ALRI cases (e.g., "with chest wall indrawing") per 100,000 per year in children aged <5 years based on a systematic review and meta-analysis of LMICs by Li et al. [2]. We estimated the rate of non-severe RSV-ALRI cases by subtracting the severe RSV-ALRI rate from the total RSV-ALRI rate (Table 1). We estimated 20 RSV-ALRI deaths per 100,000 per year in children aged <5 years using estimates from Li et al. for the same income stratum, i.e., lower-middle-income countries [2]. This is equivalent to around 5 % of all under-five deaths in Vietnam. We assumed that 66 % of RSV-ALRI cases would be associated with a clinic visit, based on WHO estimates of the mean percentage of children <5 years with pneumonia symptoms who were taken to a healthcare provider in lower-middle-income countries [17]. We estimated the annual rate of RSV-ALRI hospital admissions to be 644 (95% CI 368-1,138) per 100,000 per year in children aged <5 years by combining several local data sources. First, we calculated the number of ALRI admissions (ICD-10 codes J12, J20-22, and J18) in children less than 2 years of age attending a referral paediatric hospital in Ho Chi Minh City (Children's Hospital 1) over a two-year period (2018-2019). Second, we applied age-specific estimates of the fraction of ALRIs attributable to RSV based on two previous studies conducted in Ho Chi Minh City [4,5]. Third, we fit a parametric (Burr) age distribution to the derived counts of RSV-ALRI hospital admissions in children less than 2 years of age and extrapolated the curve to estimate the number of cases expected to occur between 2 and 5 years of age (Fig. 1). Finally, we assumed a hospital population catchment size of 5,941,573 children <5 years, obtained by multiplying the entire population in the catchment area (72,563,300, provided by the local administration) by the percentage of the total population that is children <5 years (8 %) estimated by the UN world population prospects (UNWPP) [18]. The RSV-ALRI admission rate estimated using this method (644 per 100,000 children <5 years) was consistent with the estimate by Li et al. for LMICs (610 per 100,000 children <5 years) [2].
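A rough Python sketch of the Burr-fitting and tail-extrapolation step described above, using scipy's Burr Type XII distribution; the "observed" ages are simulated for the example, not the hospital data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ages_months = rng.gamma(shape=1.5, scale=4.0, size=500)   # toy ages
ages_months = ages_months[ages_months < 24]               # children < 2 years

c, d, loc, scale = stats.burr12.fit(ages_months, floc=0)
# Share of admissions expected at 24-60 months under the fitted curve
tail = stats.burr12.sf(24, c, d, loc=loc, scale=scale) - \
       stats.burr12.sf(60, c, d, loc=loc, scale=scale)
print(f"estimated share aged 2-5 years: {tail:.1%}")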
When calculating DALYs, we assumed non-severe RSV-ALRI symptoms would last for five days, based on a previous publication [19]. We assumed seven days for severe RSV-ALRI symptoms, based on data from Children's Hospital 1 (2018-2019) and two previous studies [4,5]. We assumed disability weights of 5.1 % and 13.3 % for non-severe and severe RSV-ALRI disease cases, respectively, based on GBD 2019 DALY weights for moderate and severe lower respiratory infections [20]. We used UN estimates of life-expectancy at specific ages to determine premature mortality averted [13]. For example, life expectancy at age 1 year was 70 years for males and 79 years for females in the 2025 birth cohort.
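The DALY arithmetic implied by these inputs is compact enough to sketch; the weights and symptom durations below are those stated above, while the function itself and its discounted life-expectancy argument are our own assumptions.

```python
# Sketch of the DALY calculation under the stated assumptions.
def rsv_dalys(nonsevere_cases, severe_cases, deaths, disc_life_expectancy):
    """DALYs = years lived with disability (YLD) + years of life lost (YLL)."""
    yld = (nonsevere_cases * 0.051 * 5 / 365.25    # DW 5.1 %, 5 symptom-days
           + severe_cases * 0.133 * 7 / 365.25)    # DW 13.3 %, 7 symptom-days
    yll = deaths * disc_life_expectancy            # discounted remaining life-years
    return yld + yll
```

Because symptom episodes last only days, YLD is tiny relative to YLL, which is consistent with the later finding that around 99 % of DALYs averted stem from premature mortality averted.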
Cost of clinic visits and hospital admissions

We estimated US$ 52 (IQR 32-85) per RSV clinic visit and US$ 165 (IQR 95-249) per RSV hospital admission, based on a prospective study of children aged <2 years who sought care at a major paediatric referral hospital in southern Vietnam between September 2019 and December 2021. The societal costs in Table 1 include direct medical costs (e.g., hospitalisation billing), non-medical costs (e.g., transportation, accommodation), and indirect costs (e.g., the opportunity cost of missed work) incurred prior to admission, during the hospitalisation or medical visits, and after discharge [21].

RSV prevention strategy costs

The dose price of each intervention is highly uncertain, so we used $25/dose for the base-case analysis (Table 1) and ran separate scenarios assuming $5 and $15 per dose. Costs for international handling [22] and delivery [23], safety boxes, and syringes were taken from reference sources [24]. We assumed wastage of 5 % for doses, syringes, and safety boxes (Table 1). The percent wastage is converted into a factor [1/(1 − % wastage)], which is multiplied by the expected number of doses required to meet the anticipated level of coverage. The health system delivery costs associated with RSV interventions are uncertain, and adaptations to existing immunization delivery platforms may be required. The incremental health system delivery cost per dose will include the costs of additional training, transportation, cold-chain expansion, etc. Given the lack of empirical data on the cost of delivering these interventions, we derived estimates from the existing literature, specifically the Immunization Delivery Cost Catalogue (IDCC) repository: a health system delivery cost of US$ 2.02 per immunization, an average estimate for LMICs, was used [25] (Table 1).

RSV prevention strategy impact

We assumed maternal vaccination could achieve 70 % coverage of pregnant women each year, based on an estimate from Baral et al. [26]. This represents the proportion of pregnant women who attended at least one antenatal care (ANC) visit at 24-36 weeks of gestation, using data from the 2002 Demographic and Health Survey (DHS) in Vietnam [26]. We assumed infant mAbs could achieve the same coverage and timeliness (age-specific coverage) as reported for the Bacillus Calmette-Guérin (BCG) vaccine in the 2014 Multiple Indicator Cluster Survey (MICS) in Vietnam. We included coverage in each week of age, corresponding to 58 %, 82 %, and 88 % by 1, 3, and 12 months of age, respectively [27,28]. We assumed year-round RSV disease incidence and year-round mAb administration, so any doses given later in infancy were still assumed to have some effect on incident cases occurring later in infancy. Efficacy estimates were taken from November 2022 press releases of clinical trials for both the maternal vaccine (RSVpreF) [29] and the infant mAb (Nirsevimab) [30] (Table 1). RSVpreF efficacy was reported at three and six months after birth [29]. For Nirsevimab, efficacy was reported over five months of follow-up and involved combining evidence from the Phase 3 MELODY trial and the Phase 2b trial [30]. The end points were RSV-ALRI cases that were medically attended (used as a proxy for efficacy against non-severe RSV cases and clinic visits) and RSV-ALRI cases that were hospitalised (used as a proxy for efficacy against severe RSV cases, clinic visits, hospital admissions and deaths). In the base case we assumed these efficacy values would be fixed for the duration of follow-up in the clinical trials and fall to zero thereafter. We also ran alternative efficacy and waning scenarios in uncertainty analyses.

Uncertainty analysis

For both RSV intervention strategies, we ran the following alternative scenarios: (i) Efficacy = fitted gamma. For maternal vaccination, cumulative efficacy (cE) was reported to be 81.8 % at three months and 69.4 % at six months of age. We used previously described methods [12] to calculate the instantaneous efficacy (iE) at each week of age. This assumes very high initial efficacy followed by a gradual ebbing of protection that approaches zero by 12 months. We assumed a pooled age distribution of RSV disease from six LMICs and constrained iE to be non-negative. A similar method was used for the infant mAb. (ii) Efficacy = 3-month duration. For maternal vaccination we assumed efficacy of 81.8 % against severe RSV disease and 57.1 % against non-severe RSV disease for a three-month period, and zero protection thereafter. This scenario was not applicable to the infant mAb. (iii) Price reduced to $15 per dose. (iv) Price reduced to $5 per dose. Sensitivity analyses were also conducted to identify the parameters with the most influence on cost-effectiveness. Each parameter's central estimate was varied in turn by ±10 %, and the change in the cost per DALY averted was noted. We also ran a probabilistic uncertainty analysis to indicate the range of parameter uncertainty around the incremental costs and benefits (DALYs averted). In the absence of good-quality information on the correlation structure and distribution shapes for each input parameter, we assumed simple PERT-Beta distributions [31], informed by the ranges and most likely values outlined in Table 1. We ran 1000 Monte-Carlo simulations for each RSV prevention strategy and each fixed dose price scenario ($5, $15, $25). The results from this analysis were used to generate cost-effectiveness acceptability curves (CEACs) showing the probability that each intervention would be cost-effective at different willingness-to-pay (WTP) thresholds.
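A minimal sketch of this probabilistic step is shown below, assuming the standard PERT-to-Beta parameterisation. The input ranges are placeholders rather than the Table 1 values, and the GDP-per-capita figure is only implied by the text (where US$3,083 is said to equal 83 % of it).

```python
# PERT-Beta Monte Carlo and a cost-effectiveness acceptability curve (CEAC).
import numpy as np

rng = np.random.default_rng(0)

def pert_sample(low, mode, high, size):
    """Sample a PERT distribution via its Beta(alpha, beta) representation."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

n = 1000                                            # runs per scenario
incr_cost = pert_sample(200e6, 303e6, 400e6, n)     # US$, placeholder range
dalys_averted = pert_sample(60e3, 90e3, 120e3, n)   # placeholder range
icer = incr_cost / dalys_averted                    # cost per DALY averted

gdp_per_capita = 3083 / 0.83                        # ~US$3,714, implied by the text
for frac in (0.25, 0.5, 1.0):                       # three WTP thresholds
    wtp = frac * gdp_per_capita
    prob = (icer <= wtp).mean()
    print(f"WTP {frac:.2f} x GDP/capita: P(cost-effective) = {prob:.2f}")
```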
Lifetime costs and effects based on central input assumptions

Table 2 summarises the results based on our central input assumptions for the RSVpreF vaccine ($25/dose, 69 % efficacy, 6 months of protection) and Nirsevimab ($25/dose, 77 % efficacy, 5 months of protection). Nirsevimab was estimated to be more costly than the RSVpreF vaccine (net discounted cost of about $303 million versus about $243 million) but more impactful. Nirsevimab could avert more severe RSV cases in children <5 years than the RSVpreF vaccine (279,329 versus 220,992) and avert more RSV deaths in children <5 years (2,965 versus 2,345, equivalent to an RSV mortality reduction of 31 % versus 24 %). Nirsevimab was slightly more cost-effective than the RSVpreF vaccine ($3,367 versus $3,442 per DALY averted) when each intervention was compared to no pharmaceutical intervention. When we compared Nirsevimab to RSVpreF directly, the incremental cost per DALY averted for Nirsevimab was $3,083, which is 83 % of the national GDP per capita in Vietnam. Around 99 % of the DALYs averted were attributed to premature mortality averted, highlighting the important contribution of RSV mortality, rather than morbidity, to overall DALYs. Clinic visits were associated with around half the total societal healthcare costs averted by both interventions.

Fig. 1. Estimated age distribution of severe and non-severe RSV-ALRI cases in Vietnam. The parametric Burr distribution had the most favourable root mean square error (RMSE) compared to other standard distributions (e.g., log-logistic). The Burr distribution (Burr type XII) has shape 1 (γ), shape 2 (α), and scale (θ) parameters, all of which must be positive. Parameters for the severe disease age distribution were: shape 1 = 2.7, shape 2 = 0.2, scale = 9.8. Parameters for the non-severe age distribution were: shape 1 = 3.2, shape 2 = 0.4, scale = 21.4. The cumulative distribution function (cdf) of the Burr distribution (for x weeks of age) is: F(x) = 1 − [1 + (x/θ)^γ]^(−α).

Cost-effectiveness at different willingness-to-pay thresholds

Fig. 2 shows the incremental costs and benefits of both RSV prevention strategies assuming three different dose prices ($5, $15, and $25). Probabilistic uncertainty clouds represent 1000 runs per scenario and indicate the range of parameter uncertainty around the central estimates. Three WTP lines (0.25, 0.5, and 1.0 times the national GDP per capita) indicate the dose price that may be acceptable at different WTP thresholds. At all prices, the infant mAb is assumed to generate greater incremental health benefits than maternal vaccination, though at a slightly higher net incremental cost. At a WTP threshold of 1 times the national GDP per capita, a dose price of less than $15 would have a high probability of being cost-effective (probabilistic clouds are entirely below the WTP threshold line). However, should the WTP threshold be lower (e.g., 0.25 times the national GDP per capita), the dose price would need to be less than $5/dose to have a high probability of being cost-effective. Fig. 3 shows the probability that each strategy will be cost-effective compared to no intervention, at different WTP thresholds. At a WTP threshold of 0.25 times the national GDP per capita, the RSV prevention strategy with the most favourable cost-effectiveness would need to be priced at $5/dose or less to achieve a greater than 95 % probability of being cost-effective.
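As a rough consistency check on the Table 2 figures quoted earlier in this section (our own arithmetic; small discrepancies reflect rounding of the quoted values):

```python
# Back-of-envelope check of the incremental comparison (mAb vs. maternal).
cost_mab, icer_mab = 303e6, 3367      # net discounted cost (US$), $/DALY averted
cost_mat, icer_mat = 243e6, 3442

dalys_mab = cost_mab / icer_mab       # ~90,000 DALYs averted
dalys_mat = cost_mat / icer_mat       # ~70,600 DALYs averted

incremental = (cost_mab - cost_mat) / (dalys_mab - dalys_mat)
print(round(incremental))             # ~3,094, close to the reported $3,083
```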
Scenario and sensitivity analysis

Fig. 4 shows the robustness of the cost-effectiveness results by recalculating cost-effectiveness for a range of alternative deterministic 'what-if' scenarios. Three potential WTP thresholds (0.25, 0.5, and 1 times the national GDP per capita) were used. For maternal vaccination (RSVpreF), if the duration of protection is only three months (a scenario not applicable to Nirsevimab), the cost per DALY averted is unfavourable (higher than a WTP threshold of 1 times the national GDP per capita). In scenario 2 (fitted gamma, with protection gradually waning over time), the cost per DALY averted is more favourable than in the base-case scenario for both the maternal vaccine and the mAb. In the scenarios where the price was reduced, the cost per DALY averted only fell below 0.25 times the national GDP per capita when the dose price was $5/dose. The parameters with the greatest influence on the cost-effectiveness results were similar for the maternal vaccine and the mAb. Results were very sensitive to changes in efficacy, dose price, RSV mortality rate, duration of protection, discount rate, and the age distribution of severe RSV disease (Fig. 5).

Notes to Table 2: * Future costs/effects were discounted at a rate of 3 % per year. ** Dominated options have higher costs and lower benefits than alternative options. The least costly non-dominated option is maternal RSV vaccination; this is used as the comparator for the infant mAb when calculating the incremental cost-effectiveness of mAb versus maternal vaccination.

Discussion

In this paper we present a preliminary assessment of the cost-effectiveness of two strategies for preventing RSV disease in young children in Vietnam. Using plausible assumptions about the expected costs, health impact, and healthcare costs averted by each strategy, we find that RSVpreF and Nirsevimab each have the potential to be cost-effective in Vietnam, if appropriately priced. Our analysis suggests the RSV prevention strategies would need to be priced at less than $5/dose to satisfy the lowest (most pessimistic) estimate of WTP recently determined by Ochalek et al. for Vietnam (cost per DALY averted less than 0.25 times the national GDP per capita) [15]. With our central input assumptions, we find that Nirsevimab could be more costly and more impactful than RSVpreF, but both options would have a similar cost per DALY averted compared to no pharmaceutical RSV intervention. The higher coverage assumed for Nirsevimab (88 % versus 70 % for RSVpreF) is the main reason it would be more costly and impactful. However, the anticipated coverage of both strategies is highly uncertain, and our coverage assumptions would change substantially if the RSV prevention strategies were restricted to specific risk groups. A strength of our analysis is that we have included new analyses and data that are representative of the Vietnamese context. This includes estimates of the RSV hospital admission rate, the RSV disease age distribution in granular age bands (weeks of age), and the costs of RSV illness. Our analysis also includes the latest available evidence on the efficacy and duration of protection associated with both RSVpreF and Nirsevimab. It should also provide a foundation for new analyses to be run in the future as new input data emerge. We used the transparent and user-friendly UNIVAC decision-support model [12], which could be easily updated and reviewed by local stakeholders, including "non-modellers," to help increase local capacity and ownership of model results.
Our study is a preliminary assessment and should be updated as new evidence emerges, particularly on the price and duration of protection.

Fig. 3. Probability that each RSV prevention strategy (monoclonal antibody or maternal RSV vaccine) will be cost-effective compared to no intervention, at different WTP thresholds.

Fig. 4. Cost per DALY averted (US$) for several alternative scenarios for the maternal vaccine and the mAb.

Table 1. Input parameters used to evaluate the impact and cost-effectiveness of RSV prevention strategies in Vietnam for 10 birth cohorts (2025-2034). * Non-severe age distributions are assumed for RSV clinic visits by non-severe cases; a Burr distribution (UNIVAC model, step 2 of the inputs page) was used to calculate the age distribution by week of age (Fig. 1). ** Severe age distributions are also assumed for RSV hospital admissions, RSV deaths, and RSV clinic visits; a Burr distribution (UNIVAC model, step 2 of the inputs page) was used to calculate the age distribution by week of age (Fig. 1). *** Coverage was 88 % based on BCG coverage at 52 weeks; lower coverage was assumed in earlier weeks of age using timeliness data from the 2014 MICS.
Low Impact Development Design—integrating Suitability Analysis and Site Planning for Reduction of Post-development Stormwater Quantity

A land-suitability analysis (LSA) was integrated with open-space conservation principles, based on watershed physiographic and soil characteristics, to derive a low-impact development (LID) residential plan for a three-hectare site in Coshocton, OH, USA. The curve number method was used to estimate the total runoff depths expected from different frequency storms for: (i) the pre-development condition, (ii) a conventional design, (iii) an LID design based on the LSA with the same building size; and (iv) an LID design based on the LSA with reduced building footprints. Post-development runoff depths for the conventional design increased by 55 percent over those for the pre-development condition. Runoff depth for the same-building-size LSA-LID design was only 26 percent greater than that for the pre-development condition, and only 17 percent greater for the design with reduced building sizes. Results suggest that prudent use of LSA may improve the prospects and functionality of low-impact development, reduce stormwater flooding volumes and, hence, lower site-development costs.

Introduction

Because post-development hydrology is important to developers and municipalities that must comply with the USEPA National Pollution Discharge Elimination System Phase II regulations, a simple method is needed during planning to guide the placement of impervious surfaces on a landscape. To maximize the opportunities for economical low impact on hydrology and water quality, the method must consider the unique spatial distribution of physical, topographical, and climatological features of the watershed. The objective of the present study is to develop and test a simple, objectively applied method that integrates a land suitability analysis (LSA), incorporating landscape features, soils, and climatological data, with Low Impact Development (LID) to aid in reducing the anticipated increase in stormwater runoff flood volumes from a residential development. A land suitability analysis (LSA) considers relevant factors to identify proper locations for different land uses. Land suitability analysis is a systematic procedure for examining the combined effects of a related set of factors that the analyst assumes to be important determinants of locational suitability [1]. Suitability here means prioritizing areas in terms of their capacity to support the proposed land use, considering social, physical, spatial, or economic factors. The most suitable land is used for development first.
The foundational work of McHarg (1969) popularized overlays of natural resources and landscape physiography to analyze land suitability for, and impacts of, development plans [2]. Furthermore, land-use planning decisions that encompass wildlife habitat, aesthetic and recreational aspects, and demand for open space have recently been joined by the imperative of stormwater-runoff management [3]. These various planning and development themes are consistent with concepts of decentralized stormwater management infrastructure and runoff-source control, which attempt to maximize precipitation losses in the hydrological cycle (infiltration, evapotranspiration, interception, abstraction, and ground-water recharge) and minimize surface runoff. The philosophy and approach of decentralization and source control are brought together under green infrastructure (GI) [4,5]. One tenet of GI is to reconnect fragmented areas that have the potential to reduce high rates and volumes of runoff during storms. The end result of GI techniques is contiguous corridors and areas of vegetated landscape that are proximate to areas of development. Several researchers have suggested that land is more likely to be managed in a near-natural state if it satisfies multiple objectives, including stormwater management [3,6]. LID applies principles of green infrastructure to bring together site-planning and stormwater-management objectives [7][8][9][10]. The LID philosophy can be used to retrofit existing development and to plan new sites. Examples of this planning approach have been successfully implemented in municipalities throughout New England and the Mid-Atlantic states in the United States [11,12]. Some facets of LID include: (a) integrating conservation goals of wetlands protection, habitat preservation, or aesthetic requirements into the design; (b) minimizing development impacts on sensitive landscape locations (e.g., soils and landscapes prone to erosion) or preserving unique landscape characteristics (e.g., soils with high infiltration rates and good drainage, stands of mature vegetation) by using site-specific data in subsequent engineering design; (c) maintaining natural or pre-development timing of peak water flows through the watershed; (d) implementing multifunctional, small-scale, source-control stormwater management practices that can be integrated directly into existing stormwater infrastructure and landscape; and (e) reducing or eliminating pollution at its source, instead of allowing it to be conveyed downstream. Soils and topography play a significant role in minimizing stormwater runoff because these attributes vary considerably across even small landscapes, affecting infiltration, runoff, and drainage patterns. Although the spatial distribution of soils and their properties is usually considered in the planning process, the level of detail is often limited to a coarse county-level soil survey (e.g., the STATSGO2 database developed by the USDA-Natural Resources Conservation Service in the USA) [13]. The spatial resolution of these soils data is often not sufficient for designing stormwater management practices that rely on infiltration processes.
Soil maps ("order-1 surveys"), with much better spatial resolution than county-level maps, are often prepared for detailed studies on small tracts of land [14]. The more detailed order-1 survey provides better spatial resolution and offers the designer more opportunities to identify areas where development should be avoided, down to the scale of small development features (e.g., houses, driveways, and streets), in order to minimize the runoff potential of a proposed integrated pervious-impervious landscape drainage system. Although many rainfall-runoff models are in current use, hydrological models that incorporate the NRCS curve number method are useful for anticipating and comparing runoff quantities from different land uses [15]. The curve-number method is a rainfall-runoff model that lumps site characteristics (hydrologic soil group, land use, vegetative cover) into a quantity known as a "curve number" (CN). A CN represents the runoff potential of a watershed, with values ranging between 0 and 100 (larger CNs represent watersheds with higher runoff potential, such as a rooftop) [16]. The CN method is incorporated into many models widely used today at large and small spatial scales to estimate the total runoff volume from watersheds, and in subsequent methods to estimate peak runoff rates for storms of varying frequencies [17][18][19][20][21]. Applied to a watershed, the CN method utilizes the spatial delineation of soil map units, land use, and vegetative cover to compute an area-weighted average watershed CN for estimating runoff volumes. The present study focuses only on developing and testing a proof of concept; project economics are not considered. Total runoff depths expected from different frequency storms are compared for four scenarios: (i) the pre-development condition, (ii) a conventional design, (iii) an LID design based on the LSA with the same building size as the conventional design; and (iv) an LID design based on the LSA with reduced building footprints.

Study Site Characterization

The 3-hectare experimental watershed used in this study (WS185) is located at the watershed facility operated by the USDA-Agricultural Research Service-North Appalachian Experimental Watershed (NAEW) near Coshocton, Ohio, USA (Figure 1). The climate follows a continental pattern, and the site receives an annual average of approximately 1,000 mm of precipitation. The greatest amount of precipitation normally occurs from May through August, a period when vegetative cover is well developed [22]. The predominant land use at the site has been hay meadow from 1986 to the present. Data used in the study include order-1 soil survey data and a topographic map developed using 4-ft contours. An order-1 soil survey of WS185 was prepared by the USDA-Natural Resources Conservation Service (NRCS) in 2002. To prepare an order-1 soil survey map, the landscape is first visually divided into areas based on slope and landscape position. Survey map-unit boundaries are then estimated on the basis of known soil series in the area. Soil samples are obtained for each soil map unit to confirm and refine the initial classification of soil series and to obtain better resolution of soil boundaries. Information gained from the field-sampled soils includes evidence of redoximorphic horizons (drainage tendency), argillic zones (long-term leaching behavior), texture (spatial variability in hydraulic conductivity at scales < 1 m), soil depth (potential for drainage), and bedrock geology (potential for deep percolation), which together qualitatively characterize infiltration and drainage properties.
Files for soil mapping polygons, local roads, and topography were overlaid using ArcGIS 9.3 (ESRI International; Redlands, CA). Information from the soil survey was used to determine one of the four NRCS hydrologic soil groups (HSG) for each map unit, as required by the CN method to quantify infiltration characteristics. The four HSG categories are A, B, C, and D, in a sequence from higher to lower infiltration potential. The drainage characteristics of the soil map units were classified from the soil survey as well drained, moderately well drained, and well drained with localized spots of wetter soils (Figure 2). The 4-foot elevation contours were used to create a land-surface slope layer measured as percent slope. The percent slope was further reclassified into five classes, ranging from flat to the steepest slopes (Figure 3).

Land Suitability Analysis

Land suitability analysis was used to determine the degree of suitability for the proposed land use, based on factors deemed important (Table 1) [3,23]. The factors selected for the suitability analysis in this study were slope, hydrologic soil group (HSG), and soil drainage classification (Table 1). Overlaying those factors generated many small polygons throughout the watershed, and a suitability score was calculated for each polygon. The spatial variation of those values provided the basis for guiding land development. Each factor was scored on a scale from 0 to 10, with a maximum score of 10 representing the most suitable and 0 the least suitable for the proposed residential development. The slope factor score was maximized for areas with the flattest slopes. Development on soils belonging to HSG A (i.e., soils with low runoff potential) is discouraged, as their permeability is relatively high and these soil units would therefore be expected to mitigate runoff. On the other hand, pre-development soils belonging to HSG C and D have lower permeability, and their runoff potential is relatively larger, approaching that found for impervious areas; therefore, larger scores were given to encourage development in these areas. Finally, drainage capability quantifies how well overland runoff is drained from the property through the soil horizon, and high scores were assigned to well-drained areas to encourage development on areas with high runoff potential. Scores were then weighted to reflect their relative importance in determining the suitability of development activity in a given area of the watershed. In the absence of criteria to rate the importance of each factor, heuristic arguments were applied to weight the factors. Slope was assigned a weight of 10 because it affects construction practices and may therefore be a more meaningful factor to developers; slope also influences the potential for infiltration and the peak runoff rate along the landscape, with higher slopes limiting infiltration and increasing peak runoff flows perpendicular to landscape contours. The HSG was given a weight of 7 due to its effect on infiltration potential, which was considered a serious constraint on the prospects for development. Runoff control has typically not been accounted for in development plans and is therefore not a high priority for developers; accordingly, the drainage factor was given the lowest weight of 5.
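As a concrete illustration of this weighted scoring (a sketch of our own; the example polygon's factor scores are hypothetical, while the weights are those stated above):

```python
# Weighted land-suitability score, using the stated weights:
# slope = 10, HSG = 7, drainage = 5; factor scores range 0-10.
WEIGHTS = {"slope": 10, "hsg": 7, "drainage": 5}

def suitability_score(factor_scores: dict) -> int:
    """SS = sum of weight_i * score_i over the three factors."""
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS)

# Hypothetical polygon: flat (10), HSG C (8), well drained (9)
print(suitability_score({"slope": 10, "hsg": 8, "drainage": 9}))  # -> 201
```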
Suitability analysis was conducted with Scenario 360 software (Placeways, Boulder, CO), implemented on ArcGIS (Ver. 9.3; ESRI Inc., Redlands, CA). This software facilitated the calculation of a suitability score from the geospatial data (slope, HSG, and drainage scores and weights), which were attributed to each soil-survey map unit. The suitability score (SS) was a simple weighted sum calculated with a matrix method as SS = Σi Wi si, where Wi and si are the weight and the attribute score for factor i, respectively. A higher suitability score suggests an area appropriate for development, and areas with lower scores should be conserved for infiltration and maintained as pervious areas to minimize runoff generation.

Development Plans

Because there were only a few spots where slope exceeded 25%, there were no restrictions on the conventional development, and the layout of lots was based upon De Chiara et al. and suburban development guidelines published for Wayne and Coshocton counties, OH [24]. It has a checkerboard layout of large lots accessed by a wide street ending in a cul-de-sac. The typical cul-de-sac radii recommended by most city ordinances are equal to or greater than 15 meters. Guidelines for open-space conservation design principles were adapted from Arendt (1996, 1999) to create the LID plans [11,25]. The features that distinguish the LID design from a conventional design are:

Narrower streets: The American Society of Civil Engineers, in cooperation with the National Association of Home Builders and the Urban Land Institute, suggests that street design be based on the logical premise that a street should be appropriate to its function [25]. Streets with 5.5-6 meters (18-20 feet) of paved width are sufficient for roads serving rural subdivisions with few homes [11].

Smaller and more compact lots: Lot sizes are reduced so that they fit inside the zone designated for building construction. Reducing lot size helps preserve open space for common use and produces compact neighborhoods where neighbors can see and talk to each other more easily and more often.

Alternative to the cul-de-sac: Instead of a cul-de-sac design (as used in the conventional design), which converts a large amount of space to impervious surface, alternative designs are often used. For example, the LID design for this study uses a simple "hammerhead" or "turning-T" to serve the five houses, as illustrated by [11].

Reduced front setbacks: Because lots are smaller in size, front setbacks are reduced and houses can be closer to the access road. This helps to decrease the length of driveways and increase backyard space. Reducing front-yard length does not diminish the quality of the design because backyards are used more often for family recreation than front yards, and hence need to be bigger.

Bike/walking trail: Many people do in fact take advantage of opportunities to walk around the neighborhood when that choice exists [26]. Hence, a walking/biking trail is designed to link the houses with the common space and the access road. The trail can be enjoyed by everybody for a pleasant morning or evening walk around the neighborhood.

Common space: A part of the common space, where the slopes are relatively flatter, is designed as a small picnic ground/park accessible from the homes via the walking/biking trail. This space can be used to organize activities or for casual sitting and games.
Curve-Number Application

The NRCS curve-number (CN) method converts rainfall to runoff as a function of hydrologic soil group and land cover-type condition (Table 2) [15]. The pre-development land cover was assumed to be "pasture in good hydrologic condition". The pervious areas under the developed scenarios were treated as "grass cover greater than 50% and less than 75%". Impervious surfaces (e.g., roads, rooftops, and driveways) were assigned a CN of 98. Curve numbers from each land unit are then area-weighted to yield a composite curve number. 24-hour rainfall depths (P) corresponding to recurrence intervals ranging from 2 to 25 years were used to generate runoff depths through the CN method. Briefly, runoff depth (Q) is computed using the CN equation [16]:

Q = (P − 0.2S)² / (P + 0.8S) for P > 0.2S (otherwise Q = 0), (2)

where S is the depth of potential maximum watershed retention of rainfall after the initiation of storm runoff. The relationship between S and CN was developed in the CN method as a convenience, so that CN would range from 0 to 100, with larger CN corresponding to larger runoff potential:

S = (1000 / CN) − 10. (3)

The value of S is then substituted into Equation 2 to yield a runoff depth. Equations 2 and 3 require that Q, P, and S have units of inches, but Q and P were afterwards converted to cm. As is often assumed in hydrology, the runoff-depth frequency curve was assumed equal to that of rainfall depth. The magnitudes and frequencies of the 24-hour rainfall depths used in Equation 2 were obtained from Huff and Angel [27]. Designs for the undeveloped condition, the conventional development, and the two LSA-LID scenarios were compared using the runoff depths computed from Equation 2 for different precipitation frequencies.
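A small sketch of this calculation follows (our own illustration; the land-unit areas and the grass-cover CN of 79 are placeholder values, while the impervious CN of 98 is from the text):

```python
# NRCS curve-number runoff depth (Equations 2 and 3) with an
# area-weighted composite CN.
def runoff_depth_inches(p: float, cn: float) -> float:
    """Runoff depth Q (inches) for a 24-h rainfall depth p (inches)."""
    s = 1000.0 / cn - 10.0                 # Eq. 3: potential maximum retention
    ia = 0.2 * s                           # initial abstraction
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p + 0.8 * s)   # Eq. 2

def composite_cn(land_units) -> float:
    """Area-weighted CN over (area_ha, cn) pairs."""
    total = sum(area for area, _ in land_units)
    return sum(area * cn for area, cn in land_units) / total

# e.g., 2.5 ha of grass cover (assumed CN 79) plus 0.5 ha impervious (CN 98):
cn = composite_cn([(2.5, 79.0), (0.5, 98.0)])
q_cm = runoff_depth_inches(3.0, cn) * 2.54  # 3-inch design storm, result in cm
print(round(cn, 1), round(q_cm, 2))         # 82.2, ~3.5 cm
```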
Land Suitability Scores

The goal of the land suitability analysis was to find those areas that would accommodate development with the smallest increase in runoff from the watershed (Figure 4). The low areas near the outlet of the watershed represent the soils with the highest permeability (Figure 2). The majority of the runoff generated from the upslope areas is infiltrated in this central area in the toe slope of the watershed, which is underlain by a moderately drained, relatively permeable formation of Oxyaquic Udifluvent soil located slightly upstream from the outlet. This high capacity for infiltrating and detaining runoff has led to historically small amounts of flow measured at the outlet flume, based on over 40 years of data collection [22]. Priority for conservation of these areas was borne out by the results of the suitability analysis, which relied on the good detail in the spatial delineation of soils and their hydrologic properties from the order-1 survey. The use of these detailed soils data stands in contrast to using commonly available soil survey data [28], which indicated Coshocton silt-loam soils with moderate slope in the mid- and toe-slope areas, with the ridge top composed of Gilpin silt-loam soil. Both soil types are assigned to HSG C and are generally moderately well drained, with areas of poorer drainage due to shallow soil depth or nearby clay lenses. It stands to reason that the use of coarser-resolution data from the county survey would have entirely missed the soil features of the watershed that were appropriate for identifying runoff management opportunities, and therefore a suitability analysis incorporating this coarse spatial resolution of soil characteristics was not performed.

Development Design Scenarios

According to conventional subdivision design, the only restrictions that may render some of the land within a parcel "legally unfit for building" are those involving very steep slopes, wetlands, or location inside the floodplain. Floodplain and wetland restrictions do not apply to the study site; hence, for this scenario, the assumption is made that the developer designs the subdivision according to conventional large-lot subdivision regulations, using the entire site except areas with slope greater than 25%. Figure 5 shows the design in accordance with most conventional subdivisions: five 255-m² dwellings set upon lots between 0.46 and 0.60 ha in size. The lots are arranged in a circular pattern about an 18-m diameter cul-de-sac. The driveway width is approximately 6-7 m, and the access road is 15 m wide. Two LSA-LID plan scenarios were developed based on the land suitability analysis (Figures 6 and 7). In the first LSA-LID scenario (Figure 6), the building areas are sized similarly to the conventional plan. In the second LSA-LID scenario (Figure 7), each residential dwelling area is reduced to 140 m². House lots were decreased in area for the LSA-LID scenarios (0.063 ha to 0.086 ha), as it was assumed that the decreased house-lot area in the LSA-LID plans would be compensated for by natural greenways and open spaces in the surrounding neighborhood compared with the conventional development plan [29]. Larger open spaces serve multiple purposes and may therefore be more valuable to the inhabitants of this development [3]. The American Society of Civil Engineers, in cooperation with the National Association of Home Builders and the Urban Land Institute, suggests that street design be based on the premise that the design of a residential street should match its function [25]. For example, a 6-m paved width is thought to be sufficient for roads serving rural subdivisions with few homes [11]. Accordingly, a maximum road width of 6 m was implemented in the LSA-LID scenarios (compared with 15 m for the conventional design) to provide access to the five houses. Furthermore, instead of a cul-de-sac design (as used in the conventional design), which converts a proportionally large amount of open space to impervious area, a simple "hammerhead" or "turning-T" design was used in the LSA-LID plans. The lot sizes are reduced to help preserve open space for common use and to promote more interaction among neighbors. Because lots are smaller in size, front setbacks are reduced and houses can be closer to the access road. This helps to decrease the length of driveways and increase backyard space. Reducing front-yard length does not diminish the quality of the design because backyards are used more often for family recreation than front yards. A part of the common space in the LSA-LID scenarios, where slopes are relatively flatter, is designed as a small picnic ground/park. This space can be used to organize activities or for casual sitting and recreation. Many people do in fact take advantage of opportunities to walk around a neighborhood when that choice exists [26]. Hence, a walking/biking trail is provided in the LSA-LID plan scenarios, offering a contiguous path connecting various features such as the common space, the access road, and the houses.
Rooftop runoff in the LSA-LID scenarios is disconnected from the watershed outlet and allowed to flow out onto lawn areas, protected against erosion, for subsequent infiltration. A community septic system is located behind the residential areas and serves a dual purpose as a village green.

Hydrological Comparison between Scenarios

Runoff depth for each design storm increased from the pre-development condition under both the conventional and LSA-LID development scenarios (Table 3). For the same rainfall frequencies, the increase in runoff depth for the conventional development is appreciably more than that predicted for development under the LSA-LID scenarios. Compared with the pre-development (natural) condition, conventional development increased the runoff depth for a 2-year storm (a typical US design standard for municipal stormwater infrastructure) by 55 percent. Similar to projections from calibrated pre- and post-development rainfall-runoff models presented by Booth and Jackson, a runoff depth that would be expected to occur on average every 25 years under natural conditions would occur on average every 10 years after conventional development [30]. In the conventional development scenario, failure to conserve the highly permeable areas in the central toe-slope area of the watershed, lack of sufficient detention at the parcel level, and a predominance of impervious surface directly connected to the outlet would lead to the large increase in runoff depth. Higher runoff depths imply an increased risk of erosion and subsequent channel incision, increasing the amount of sediment transported and deposited at downstream locations. The increases in runoff depth were smaller with the LSA-LID development scenarios than with conventional development, due to a reduction in land disturbance and the conservation of areas better suited for infiltration and detention of storm runoff. For the 2-year recurrence interval storm, runoff depths under the LSA-LID development plans are only 26 percent greater than pre-development with the same building size (LSA-LID 1) and only 17 percent greater with the reduced building size (LSA-LID 2). The conservation of the more infiltrative and better-drained soils on the west side and near the outlet of the watershed, with a concomitant minimization and centralization of impervious area, are the predominant factors explaining this outcome. The outcome of the suitability analysis suggested positioning impervious surfaces in such a way that slope and soil factors contributing to the abstraction or infiltration of precipitation led to smaller quantities of runoff than estimated for conventional development. The smaller impervious area in the LSA-LID scenarios designated for buildings and roads (compared with conventional development, which had no limitations) led to denser development in part of the site and retained greater amounts of open space, which may enhance opportunities for abstraction and infiltration along longer runoff flow paths. LSA-LID management is meant to capture the smaller storm depths that make up the vast majority of total annual rainfall, as opposed to handling runoff from infrequent, larger flooding rainfall events. The anticipated decline in LSA-LID effectiveness for larger storm events is borne out in our results (Table 3), as the percent increase in runoff depth under either LSA-LID development scenario decreased and leveled out for storms above the 10-year recurrence interval.
Conclusions

In the present study we applied several facets of LID in a planning context, particularly conservation design, minimizing development impacts on sensitive or unique areas, and maintaining or improving the natural timing of water flows through the watershed. Regardless of the sizes of dwellings and the degree of imperviousness, a straightforward, comparative hydrologic analysis of low-impact development plans can be obtained from a land-suitability analysis based on important watershed hydrological characteristics. The study supports our assertion that detailed site physiographic data can improve on conventional site-development practice. An order-1 soil survey was foundational in the identification of regions with a high carrying capacity for runoff and drainage; this was accomplished without long-term pre-development monitoring of hydrology at the site. Yet, without actually implementing and monitoring such a development, our results stand as modeled approximations of the runoff response expected from the development plans. While a detailed survey may not be within the purview or budget of developers, we advocate its use in situations where soil variability is high, soil data quality is low, and the site has heterogeneity in slope. Cost information is limited to the experience of the primary author in commissioning NRCS to perform an order-1 survey on a 2-km² area in suburban Cincinnati, OH at a cost of US$12,000. Since indigenous knowledge of soils is generally limited to scientists and agricultural producers, and not passed on to or taken up by developers, an improvement of planning and development practice calls for additional integrative research as a joint effort by the soils and planning communities. While GIS-based software can be used effectively to perform suitability analysis, the use of decision-support software like CommunityViz makes it easier to carry out the analysis and display results in a visually attractive form with minimal effort. Since this software provides more dynamic, interactive, and user-friendly tools for analysis, it has the potential to attract more users, especially planners, developers, and policy makers. While the economics of the LSA-LID approach were not investigated, the present study suggests some potential cost savings. The allocation of development features to the parcels impacted least can be an important way to reduce the hydrological impacts of development without costly investment in structural controls (e.g., retention basins) that require large capital investment, commitment of larger tracts of land for their construction, and subsequent maintenance costs. Savings can also potentially be realized through a reduction in the stormwater infrastructure needed to convey water from the site. Furthermore, costs associated with compliance with water-quality regulations can be reduced because of the decrease in runoff and expected erosion. These savings are offset to some degree by potentially increased costs for detailed site characterization to quantify the inputs required by the land suitability analysis. The present study is a promising example of how site factors can be incorporated into a simple development-planning tool. Other factors could be incorporated and other response variables evaluated, such as peak runoff rate and water-quality constituents.

Figure 2. NRCS hydrologic soil groups and drainage categories for Watershed 185 at the NAEW.

Figure 3. Slope categories of Watershed 185 at the NAEW.

Figure 4. Land suitability scores on Watershed 185 at the NAEW.
Figure 5. Conventional site plan on Watershed 185 at the NAEW.

Figure 6. LSA-LID site plan 1 on Watershed 185 at the NAEW (same building size as conventional).

Table 3. Comparison of runoff depths and percentage increases for different scenarios.
Genetic Deletion of PGF2α-FP Receptor Exacerbates Brain Injury Following Experimental Intracerebral Hemorrhage

Background: The release of inflammatory molecules such as prostaglandins (e.g., PGF2α) is associated with brain damage following an intracerebral hemorrhagic (ICH) stroke; however, the role of PGF2α and its cognate FP receptor in ICH remains unclear. This study focused on investigating the FP receptor as a target for novel neuroprotective drugs in a preclinical model of ICH, aiming to determine the contribution of the PGF2α-FP axis to functional recovery and anatomical outcomes following ICH.

Results: Neurological deficit scores in FP−/− mice were significantly higher than in WT mice 72 h after ICH (6.1 ± 0.7 vs. 3.1 ± 0.8; P < 0.05). Assessing motor skills, the total time mice stayed on the rotating rod was significantly shorter in FP−/− mice than in WT mice 24 h after ICH (27.0 ± 7.5 vs. 52.4 ± 11.2 s; P < 0.05). Using grip strength to quantify forepaw strength, FP−/− mice had significantly less strength than WT mice 72 h after ICH (96.4 ± 17.0 vs. 129.6 ± 5.9 g; P < 0.01). In addition to the behavioral outcomes, histopathological measurements were made. In Cresyl violet-stained brain sections, the FP−/− mice showed a significantly larger lesion volume than the WT mice (15.0 ± 2.2 vs. 3.2 ± 1.7 mm³; P < 0.05). To estimate the presence of ferric iron in the peri-hematoma area, Perls' staining was performed, which revealed that FP−/− mice had significantly greater staining than WT mice (186.3 ± 34.4% vs. 86.9 ± 13.0% total positive pixel counts; P < 0.05). Immunoreactivity experiments on brain sections from FP−/− and WT mice post-ICH were performed to monitor changes in microgliosis and astrogliosis using antibodies against Iba1 and GFAP, respectively. These experiments showed that FP−/− mice had a trend toward greater astrogliosis than WT mice post-ICH.

Conclusion: We showed that deletion of the PGF2α FP receptor exacerbates behavioral impairments and increases lesion volumes following ICH compared with matched WT controls. The detailed mechanisms responsible for these novel results are actively being pursued.

INTRODUCTION

Each year, approximately 795,000 Americans suffer a stroke, of which approximately 13% are attributed to intracerebral hemorrhagic (ICH) stroke (Go et al., 2014). Other than clinical management of the patient with surgical and supportive methods, there are no effective therapies for the treatment of ICH. New methods of treatment for post-stroke patients are essential to improving outcomes following a hemorrhagic brain injury. An ICH stroke is caused by the rupture of a blood vessel within the brain. While the primary injury is caused by the mass effect, a secondary injury is caused by the components of blood that make up the hematoma. For example, edema and inflammation are major clinical concerns that are mediated by erythrocyte lysis and the release of hemoglobin and heme. Blood components released from the breakdown of the hematoma (e.g., hemoglobin, heme, and iron) result in irreversible neuronal cell death and neurological deficits (Xi et al., 2006). Following ICH, neuronal survival can be affected by changes in the phenotype and function of microglia after trauma and by the release of blood breakdown products.
Understanding the pathophysiology involved in the secondary injury of an ICH may provide further insight into inflammatory pathways and therefore potential novel targets for therapeutics. Microglia and infiltrating neutrophils travel to the site of the stroke and engulf blood components and cell debris (Zhao et al., 2007). Part of this repair response involves the secretion of prostaglandins. Prostaglandins are signaling molecules that are generated and released upon cell damage and are involved in the inflammatory cascade (Minghetti et al., 1997). In the aftermath of a stroke, some prostaglandins and their receptors can be neuroprotective, while others can contribute to the injury. For example, using the mouse model of ICH stroke, our lab has shown that activation of the PGE2 EP1 receptor can protect against neurotoxicity and that, when deleted, the same receptor can exacerbate neurological outcomes following ICH (Singh et al., 2013). Also, using the same mouse model of ICH, our lab has shown that EP2−/− and EP3−/− mice have less ICH-induced injury compared with WT control mice (Leclerc et al., 2015b,c). Despite the abundance of arachidonic acid in the brain, the function of its metabolite PGF2α is poorly understood. However, PGF2α is known to play a significant role in the initiation of parturition, renal function, control of cerebral blood flow (including autoregulation in newborn piglets), intraocular pressure (principally through an increase in uveoscleral outflow of aqueous humor), contraction of arteries, and myocardial dysfunction. Pathological conditions in humans influence PGF2α levels in cerebrospinal fluid; elevated levels of PGF2α have been measured following epilepsy, meningitis, brain injury, and stroke. The FP receptor is a G-protein-coupled receptor that binds selectively to PGF2α, which is synthesized from arachidonic acid (Sugimoto et al., 1994, 1997). Cyclooxygenase enzymes control the rate of transformation of arachidonic acid into the prostaglandin PGH2, which can then be converted by prostaglandin synthases into PGF2α and other prostanoids such as PGE2, PGD2, PGI2, and TxA2 (Doré, 2006). Activation of the FP receptor triggers Gαq protein-coupled mechanisms involving Ca2+ signaling, IP3 turnover, and activation of protein kinase C (Toh et al., 1995). This FP receptor-mediated increase in Ca2+ levels may have accounted for the increased brain injury and excitotoxicity measured by our group in a mouse model of ischemic stroke (Saleem et al., 2009; Kim et al., 2012). More recently, however, our group found that deleting the FP receptor can also attenuate brain injury, as shown in a mouse model of traumatic brain injury (Glushakov et al., 2013). Currently, the role of the FP receptor in hemorrhagic stroke is undetermined, and thus our goal is to elucidate the role of the FP receptor in a preclinical model of intracerebral hemorrhagic stroke.

Animals

Studies were performed on 2-4-month-old adult male WT (24-29 g) and FP receptor knockout (FP−/−) (15-21 g) C57BL/6 mice. The FP−/− mice developed normally, gained weight at a rate equal to that of the WT mice, and had no gross anatomical or behavioral abnormalities when compared with their WT littermates (Glushakov et al., 2013). Prior to all experiments, PCR genotyping was performed on all littermates, and the WT mice were separated from the FP−/− mice.
All animal protocols were approved by the Institutional Animal Care and Use Committee of the University of Florida and conducted in accordance with guidelines established by the National Institutes of Health. All mice were bred, maintained, and housed in the university's vivarium under controlled conditions (23 ± 2 °C; 12-h reverse light/dark cycle), with access to food and water ad libitum.

Collagenase-Induced ICH Model

ICH was induced in age-matched male WT and FP−/− mice using collagenase VII-S (0.04 units in 0.4 µL saline; Sigma-Aldrich, St. Louis, MO). All mice were anesthetized with isoflurane (4% induction, 2% maintenance) and immobilized on a stereotaxic frame. A single unilateral intrastriatal injection of collagenase was given at the following coordinates relative to bregma: 0.4 mm anterior, 2.4 mm lateral, and 3.4 mm below the dura, in both WT and FP−/− mice (Wang et al., 2006). Collagenase was infused at 0.2 µL/min using a stereotaxic automated injector (Stoelting, Wood Dale, IL). The needle was left in place for 10 min and then slowly removed over a 15-min period. Rectal temperature was monitored and maintained at 37.0 ± 0.5 °C using a homeothermic blanket system to prevent hypothermia throughout the surgery. After the surgical procedure, the incision was closed with a low-toxicity tissue adhesive, 3M Vetbond (Fisher Scientific, Pittsburgh, PA), and each mouse received a 1-mL intraperitoneal injection of warm saline to prevent dehydration. All mice were then transferred to incubators maintained at 37.0 ± 0.5 °C, monitored for 2-4 h during recovery, and allowed to survive for 72 h post-ICH.

Evaluation of Neurological Functional Outcomes

Neurological functions were assessed daily post-ICH (24-72 h) in the following order: neurological deficit scores (NDS), grip strength test, and accelerating rotarod test. All assessments were performed during the dark cycle (awake phase) by investigators blinded to genotype, and, for consistency, tests were performed during the same morning period of each day post-ICH. NDS were measured using a 24-point scale (Clark et al., 1998). Briefly, this NDS assessment includes six individual tests (body symmetry, gait, climbing, circling behavior, front limb symmetry, and compulsory circling), each scored from 0, indicating normal performance, up to 4 points on a basis of increasing severity. The sum of the scores from the individual tests was reported as the NDS. The accelerating rotarod test was used to assess motor deficits using the Rota Rod Rotamex 5 machine and software (Columbus Instruments International, Columbus, OH) following ICH injury (Jones and Roberts, 1968). The rotarod tests motor deficits and coordination and comprises a rotating barrel that accelerates from 4 to 30 revolutions/min over the course of 5 min. The time in seconds at which each animal fell from the barrel was recorded from a single trial using the provided software. Prior to surgery, mice were trained once daily over the course of 3 days, with the average of these training periods serving as the baseline. The grip strength test was used to assess forelimb strength using the Animal Grip Strength System (San Diego Instruments, San Diego, CA). Each mouse was suspended by the tail over a steel grid so that its forelimbs were allowed to grip a single steel bar, and it was then gently pulled backwards (away from the bar) by the tail until the grip was released. Each mouse had five consecutive trials with a 1-min rest period between trials.
The data were reported as the average maximal force recorded before the mouse released the bar.

Hemoglobin Levels

The hemoglobin content of each brain subjected to ICH was quantified with Drabkin's reagent (Sigma-Aldrich), as described previously (Choudhri et al., 1997; Wang and Doré, 2007a). Briefly, mice were anesthetized 5 h after ICH and transcardially perfused with 30 mL of normal saline. The brain was dissected into ipsilateral and contralateral sides, each treated individually as follows. Each sample was homogenized for 2 min in 1 mL of distilled water and then centrifuged at 13,000 × g for 30 min. Eighty microliters of Drabkin's reagent was added to a 20-µL aliquot of supernatant (which contained the hemoglobin) and allowed to stand for 15 min at room temperature. The concentration of cyanomethemoglobin produced was measured at 540 nm. A standard curve, reflecting the amount of hemoglobin present, was generated by adding incremental volumes of blood (0, 0.5, 1.0, 2.0, 4.0, and 8.0 µL), obtained by cardiac puncture of anesthetized control mice, to 100 µL of lysate from the tissue of normal caudate putamen. Results from at least three samples per mouse were averaged.
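The standard-curve step reduces to a linear interpolation of absorbance readings; a sketch of that arithmetic follows (our own illustration: the A540 values are placeholders, while the standard blood volumes are those listed above).

```python
# Sketch of hemoglobin quantification against a blood-volume standard curve.
import numpy as np

blood_uL = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # standards (text)
a540_std = np.array([0.02, 0.05, 0.09, 0.17, 0.33, 0.65])  # placeholder readings

slope, intercept = np.polyfit(blood_uL, a540_std, 1)       # linear standard curve

def blood_equivalent_uL(a540: float) -> float:
    """Convert a sample's A540 reading to its blood-volume equivalent (uL)."""
    return (a540 - intercept) / slope

print(round(blood_equivalent_uL(0.25), 2))  # e.g., an ipsilateral supernatant
```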
Statistical Analysis

All data are expressed as mean ± standard error of the mean, and statistical differences between two groups were analyzed using an unpaired two-tailed Student's t-test. Non-parametric data, such as the neurological deficit scores (NDS), were analyzed using the Mann-Whitney U test. When appropriate, statistical comparisons between multiple groups were made using one-way ANOVA followed by Tukey's multiple comparisons test. Statistical differences were considered significant if P < 0.05. All data were analyzed using GraphPad Prism 6.0 software (GraphPad Software Inc., La Jolla, CA).

Deletion of the FP Receptor Exacerbates Neurobehavioral Deficits Post-ICH

Neurobehavioral functional testing was performed at 24, 48, and 72 h post-ICH by investigators blinded to the genotype. The NDS of FP−/− mice was significantly higher than that of WT mice after ICH (6.1 ± 0.7 vs. 3.1 ± 0.8; P < 0.05) (Figure 1A). In addition to neurological deficit analysis, rotarod and grip strength tests were performed after ICH. At 24, 48, and 72 h post-ICH, mice showed reduced rotarod performance (in seconds) compared with baseline function (Figure 1B); however, only the FP−/− mice showed significantly reduced performance at 24 h post-ICH compared with baseline (30.3 ± 7.8 vs. 74.4 ± 15.3 s; P < 0.05) (Figure 1B). Additionally, at 24 h post-ICH, the FP−/− mice had significantly lower rotarod performance than WT mice (30.3 ± 7.8 vs. 58.7 ± 8.0 s; P < 0.05) (Figure 1C).

Deletion of the FP Receptor Exacerbates Lesion Volume and Increases Hemoglobin Level and Ferric Iron Deposition Post-ICH

Collagenase-induced ICH in mice consistently produces an intrastriatal hematoma, as evident from Cresyl violet staining (Figure 2A). Quantification showed that the FP−/− mice had a greater lesion volume than the WT mice 72 h post-ICH (15.0 ± 2.3 vs. 3.2 ± 1.7 mm³, respectively; P < 0.01). To better understand the potential cellular mechanisms of action in FP−/− mice, two additional independent measurements were taken. First, hemoglobin content was measured in WT and FP−/− mice 5 h post-ICH. Both the WT and FP−/− mice had greater hemoglobin levels at the site of collagenase injection (ipsilateral hemisphere). However, the FP−/− mice (10.63 g/dL) had significantly more hemoglobin than the WT mice (4.06 g/dL) (P < 0.01) (Figure 2B). No hemoglobin was measured in the contralateral tissue, and this region therefore served as an internal control. Second, brain sections were assessed for the deposition of ferric iron as estimated by Perls' staining (blue); ferric iron was noted primarily in the perihematomal regions. Quantification of the blue positive pixel count showed that FP−/− mice had more ferric iron in the ipsilateral hemisphere than WT mice (186.3 ± 34.4% vs. 100.0 ± 16.8%; P < 0.05) (Figure 2C). Perls'-positive staining was present only in or around the perihematomal region, while none was present in the contralateral hemisphere.
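The hemoglobin values reported above derive from the Drabkin's standard curve described in the Methods: absorbance at 540 nm is fit against known spiked blood volumes, and sample readings are interpolated from the fit. A minimal sketch follows; the absorbance values are hypothetical, while the blood volumes match the protocol.

```python
# Sketch of the Drabkin's standard-curve calculation (hypothetical OD values).
import numpy as np

blood_ul = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # spiked blood (µL)
od540    = np.array([0.02, 0.06, 0.10, 0.19, 0.37, 0.72])  # hypothetical OD540

slope, intercept = np.polyfit(blood_ul, od540, 1)  # linear fit: OD = m*vol + b

def blood_equivalent_ul(sample_od):
    """Invert the standard curve: OD540 -> blood-volume equivalent (µL)."""
    return (sample_od - intercept) / slope

print(round(blood_equivalent_ul(0.28), 1))  # -> 3.0 µL blood equivalent
```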
Microglia and astrocyte immunoreactivities were estimated in the cortical region of FP−/− and WT mouse brain sections using anti-Iba1 and anti-GFAP immunohistochemistry (Figure 3). ICH injury caused apparent microglial activation, detected as increased Iba1 immunoreactivity in the cortical region surrounding the lesion (area marked in red). The FP−/− mice had a trend toward greater Iba1 immunoreactivity of microglia than WT mice (4.1 ± 1.8% vs. 2.8 ± 0.4%; P = 0.09) (Figure 3A). To study astrogliosis, GFAP immunoreactivity was used. Similar to the microglial marker, GFAP levels were greater in the ipsilateral cortical region compared with the corresponding contralateral area in the ICH-treated animals. GFAP levels in the FP−/− mice (3.7 ± 0.4%) were not significantly different from those in WT mice (3.2 ± 0.3%) (Figure 3B).

DISCUSSION

This study investigated the role of prostaglandin F2α (PGF2α) signaling in a mouse model of ICH. The main finding of this study is that FP−/− mice have an increased susceptibility to ICH-induced injury compared to WT mice. Here, we have documented for the first time a unique and significant role of the FP receptor in ICH injury, potentially through interference with the functions of microglia against iron- and/or heme-induced neuronal death. Our data show that FP−/− mice had significantly greater neurological deficits compared to WT controls. For example, the NDS of FP−/− mice was significantly higher than that of WT mice after ICH (P < 0.05, Figure 1A). In addition to neurological deficit analysis, rotarod and grip strength tests were performed after ICH. For the rotarod, FP−/− mice showed significantly reduced performance at 24 h post-ICH compared to baseline (P < 0.05, Figure 1B). The grip strength test was used as an additional test to assess neuromuscular function following ICH by measuring the maximal muscle strength of the forelimbs. Both FP−/− and WT mice showed significant deficits in grip strength at 24, 48, and 72 h post-ICH, with the FP−/− mice showing a greater deficit than WT mice at 48 and 72 h post-ICH (Figure 1C).

FIGURE 1 | FP−/− mice showed significantly greater deficits in grip strength compared to the WT mice (P < 0.001). Both groups recovered, although the FP−/− mice recovered more slowly than the WT mice. All comparisons included n = 7-10 WT and n = 11 FP−/− mice, and statistics were calculated using a two-way repeated-measures analysis of variance with Newman-Keuls multiple comparisons test (NDS and rotarod) and a paired Student's t-test versus baseline (grip strength). ns = not significant; *P < 0.05; **P < 0.01; *** and #P < 0.001. FP = F prostanoid receptor subtype; WT = wild type.

Previous data have revealed that there is no significant difference in the morphology of the cerebral vasculature and anastomoses in FP−/− and WT mouse brains (Glushakov et al., 2013). Therefore, the difference in hemoglobin content 5 h post-ICH between FP−/− and WT mice may not be attributable to structural differences in the brain microvasculature. However, the greater amounts of hemoglobin following ICH might be attributed to the functions of the microvasculature.
For example, the FP−/− mice may have weaker blood vessel walls, which may therefore be more likely to rupture under stress, such as that induced by collagenase. The greater neurological deficits measured in the FP−/− mice may also have been due to the greater hemoglobin content. Hemoglobin in the brain following hemorrhagic stroke can not only disrupt the blood-brain barrier but can also upregulate nitric oxide synthase and peroxynitrite formation, which would lead to further neuronal death (Ding et al., 2014). Additionally, greater expression of hemoglobin proteins (α- and β-globin) has been measured and found to be localized in neurons and microglial cells following ICH in rats (He et al., 2011). The same group also found that levels of heme and iron may cause an increase in the expression of endogenous hemoglobin after ICH.

The mouse PGF2α-FP receptor is reported to have the highest homology to the PGE2-EP1 receptor, and when activated, this receptor can increase levels of IP3 and intracellular Ca²⁺ (Sugimoto et al., 1994; Mohan et al., 2012). This may explain why our findings with the FP−/− mice are consistent with our previous findings with the EP1−/− mice using the same ICH model; the EP1−/− mice showed worse outcomes compared to WT control mice (Singh et al., 2013). Our group also recently found that use of EP1 receptor agonists improved anatomical outcomes and functional recovery (Leclerc et al., 2015a). In contrast to ICH, the genetic deletion and/or pharmacological blockade of either the EP1 or FP receptor attenuated brain injury and improved neurological outcomes in excitotoxicity and mouse ischemic stroke models (Ahmad et al., 2006, 2008; Saleem et al., 2007, 2009). The difference in the role of these receptors between ischemic and hemorrhagic strokes demonstrates a uniqueness and dynamism in functionality that is determined by the type of brain injury. Differences in the vasculature might be a possible mechanism that could lead to greater lesion volumes in FP−/− mice post-ICH.

FIGURE 2 | Genetic deletion of the PGF2α-FP receptor increases brain injury after ICH. WT and FP−/− mice underwent ICH and were euthanized at 72 h for determination of lesion volume by Cresyl violet staining of brain sections. (A) Representative images of coronal brain sections from WT (upper panel) and FP−/− mice (lower panel) show FP−/− mice as having a greater lesion volume. Images were obtained from a single animal and demonstrate the characteristic hematoma profile for WT and FP−/− mice, captured adjacent to the needle insertion site and representing maximal hematoma size. Quantification of lesion volumes showed that FP−/− mice had significantly greater ICH-induced brain injury (WT: n = 7, FP−/−: n = 10, *P < 0.05). (B) In a separate cohort, quantification of hemoglobin content showed that FP−/− mice had significantly greater ICH-induced hemoglobin content compared to WT mice (WT: n = 3, FP−/−: n = 3, **P < 0.01). (C) Genetic deletion of the FP receptor increased brain ferric iron content, as represented by Perls' staining post-ICH. Representative high-magnification images of coronal brain sections show Perls' staining (blue) in perihematomal regions in WT (upper panel) and FP−/− mice (lower panel). Square selections in the insets denote magnified regions. Quantification of the blue positive pixel count in the ipsilateral hemisphere showed that FP−/− mice had significantly greater ferric iron deposition (WT: n = 6, FP−/−: n = 6, *P < 0.05).
We have previously shown that FP−/− mice do not present with any significantly altered gross vascular anatomy of the brain, although we cannot rule out changes in proteins responsible for the regulation of neovascularization. For example, stromal cell-derived factor 1 (SDF-1), which is controlled by endothelial cells, could alter ICH outcomes (Seo et al., 2009; Glushakov et al., 2013). These findings therefore direct our attention toward the mechanisms and cells involved in the clearance of blood when examining the etiology of increased cerebral injury in FP−/− mice post-ICH.

The role of the FP receptor in ischemic versus hemorrhagic stroke may be determined by the state and level of humoral neuroinflammation in intracerebral hemorrhage as compared to ischemic stroke. For example, the expression levels of the proinflammatory cytokines IL-1β and TNFα increase early (3 h post-ICH) following collagenase-induced ICH (Liesz et al., 2011). Microglia have been identified as the main producers of the early increased levels of intracerebral IL-1β and TNFα (Wang and Doré, 2007b; Wang et al., 2007). Furthermore, previous studies have investigated the activation of microglia/macrophages and leukocyte invasion after experimental ICH (Xue and Del Bigio, 2000, 2003; Loftspring et al., 2009). Nevertheless, the role of the FP receptor in microglia-mediated inflammation remains largely unknown, and further in vitro studies are therefore warranted. This study suggests that the mild activation state of microglia could differ in FP−/− mice compared to WT mice post-ICH. Whether changes in microglial activation translate into significant changes in microglial function remains to be explored. This study demonstrated that the FP−/− mice had a greater perihematomal Perls' pixel count, which may indicate that microglia had a reduced ability to remove ferric iron released from lysed blood cells post-ICH. Previous studies have shown that microglia and astrocytes express the FP receptor and, more recently, it has been demonstrated that PGF2α may enhance the clearance of β-amyloid through agonism of the liver X receptors (LXRs)/retinoid X receptors (RXRs) expressed on microglia (Zhuang et al., 2013). Previous evidence has shown that LXRs are expressed at high levels in the brain and, when stimulated, can cause changes in the expression of inflammatory genes in microglia and macrophages (Wang et al., 2002; Joseph et al., 2003; Zelcer et al., 2007; Cui et al., 2012). To conclude whether the LXR is involved in ICH, further studies would be necessary, as evidence suggests that PGF2α can regulate the LXR. However, in an ischemic stroke model, activation of the LXR promoted neuroprotection and reduced inflammation via the inhibition of nuclear factor κB (Morales et al., 2008; Cheng et al., 2010). Thus, it is possible that in this study the FP−/− mice showed greater ICH-induced lesions with greater blood and ferric iron accumulation because of diminished phagocytic capability, resulting in decreased clearance of red blood cells and therefore increased brain injury. In response to injury, astrocytes have diverse roles, and it is well known that reactive astrocytes (gliosis) form a glial scar that contains the damaged area, as observed following our ICH protocol.
However, following injury and/or neurodegeneration, reactive gliosis, which involves alterations in the functioning and phenotype of different glial cells, may augment brain damage. Studies have revealed that the astrocyte response to gross brain damage leads to anisomorphic (disorganized) astrogliosis that reinforces a cascade of events that eventually increases brain injury. Anisomorphic astrogliosis can inhibit neurite outgrowth and increase levels of the inducible form of nitric oxide synthase (iNOS) and nitric oxide (NO), which can possess cytotoxic properties and contribute to neuronal death (Gibbons and Dragunow, 2006). The elevated levels of NO released from activated astrocytes might be most relevant in ICH-induced injury, as NO is a vasodilator (Chi et al., 2015; Crobeddu et al., 2015; Muñoz et al., 2015). Therefore, increased vasodilation could lead to greater ICH-induced secondary injury. In this study, we saw increased ipsilateral astrogliosis in both FP−/− and WT mice post-ICH; however, no significant difference was found between the two genotype groups.

Microglia and astrocytes become activated following brain injury and release factors that contribute to the functional state of the blood-brain barrier. Here, we show that microglia may be more "reactive" than astrocytes in FP−/− mice compared to WT mice. Furthermore, the levels of micro- and astrogliosis in the FP−/− mice used in this study contrast with previously published work by our group using EP1−/−, EP2−/−, and EP3−/− mice post-ICH (Singh et al., 2013; Leclerc et al., 2015b,c). The decreased levels of micro- and astrogliosis in EP1−/−, EP2−/−, and EP3−/− mice were used to account for the changes in functional and anatomical outcomes post-ICH. We found that the FP−/− mice had a trend toward greater microgliosis, suggesting that this state of activation in the FP−/− mice could potentially be responsible for the changes in functional and anatomical outcomes presented here. Further in vivo and in vitro studies are necessary to elucidate the role of the FP receptor in glial cells. Proposed future experiments include in vivo studies using FP receptor-selective antagonists and agonists to measure any FP receptor-mediated neuroprotection in this and other ICH models. Neuropharmacological experiments designed to specifically study the FP receptor will help clarify the respective role of the glial-neuronal axis after ICH.

CONCLUSIONS

In this study, we have provided evidence that suggests a neuroprotective role for the FP receptor following ICH. Our results show that deletion of the FP receptor increases brain injury and functional deficits and increases the deposition of ferric iron post-ICH. However, without follow-up experiments that utilize FP receptor-selective antagonists and agonists, caution should be used when interpreting the potential of the FP receptor as a therapeutic target for the treatment of ICH. Our findings are similar to those found with EP1−/− mice post-ICH, and our group has recently shown that following activation of the EP1 receptor, astrogliosis, neutrophil infiltration, blood-brain barrier breakdown, and functional recovery all improved (Leclerc et al., 2015a). Based on these findings, we hypothesize that activation of the FP receptor will result in measurable improvements in functional and anatomical outcomes following ICH.
Until then, we remain hopeful that the FP receptor, similar to the related EP1 receptor, is a viable therapeutic target for the treatment of ICH.

AUTHOR CONTRIBUTIONS

SM and SD designed the study, analyzed and interpreted the results, and wrote the manuscript. SM performed the surgical procedures, performed the blinded behavioral testing, and harvested brains for analysis with the assistance of the other lab members. EK, JF, GD, and AP coordinated and performed tissue sectioning and contributed to histological staining, quantification, and data analysis. All authors read and approved the final manuscript.
Antimicrobial and Antibiofilm Peptides

The increasing onset of multidrug-resistant bacteria has propelled microbiology research towards antimicrobial peptides as new possible antibiotics from natural sources. Antimicrobial peptides are short peptides endowed with a broad range of activity against both Gram-positive and Gram-negative bacteria and are less prone to trigger resistance. Besides their activity against planktonic bacteria, many antimicrobial peptides also show antibiofilm activity. Biofilms are ubiquitous in nature, having the ability to adhere to virtually any surface, either biotic or abiotic, including medical devices, causing chronic infections that are difficult to eradicate. The biofilm matrix protects bacteria from hostile environments, thus contributing to bacterial resistance to antimicrobial agents. Biofilms are very difficult to treat, with options restricted to the use of large doses of antibiotics or the removal of the infected device. Antimicrobial peptides could represent good candidates for developing new antibiofilm drugs, as they can act at different stages of biofilm formation, on disparate molecular targets, and with various mechanisms of action. These include inhibition of biofilm formation and adhesion, downregulation of quorum sensing factors, and disruption of the pre-formed biofilm. This review focuses on the properties of antimicrobial and antibiofilm peptides, with a particular emphasis on their mechanisms of action, reporting several examples of peptides that over time have been shown to have activity against biofilms.

Introduction

In 1922, Alexander Fleming identified lysozyme from nasal mucus [1], which was considered the first human antimicrobial protein. This discovery was overshadowed when, in 1928, Fleming discovered penicillin, which, together with streptomycin in 1943, led to the beginning of the so-called "Golden Age of Antibiotics". In the 1940s, along with Howard Florey and Ernst Chain, he brought the therapeutic use of penicillin to fruition, which allowed these scientists to be awarded the Nobel Prize for Medicine in 1945. With the advent of the "Golden Age of Antibiotics", there was a loss of interest in the therapeutic potential of natural antimicrobial peptides (AMPs), such as lysozyme [2,3]. However, in the 1960s, due to the increase in the number of multidrug-resistant microbial pathogens, the attention of the scientific community turned to the study of antimicrobial peptides [4][5][6][7]. Antimicrobial peptides are small molecules (10-100 amino acids) produced by all living organisms that play an essential role in innate immunity [8,9]. Among the first groups of AMPs to be discovered were the magainins, isolated from the skin of the African clawed frog Xenopus laevis by Zasloff et al. [10][11][12].

A further aspect of AMP activity that has been much investigated in recent years, and that needs to be more deeply considered, is their ability to affect biofilm formation. Biofilms are a complex ensemble of microbial cells irreversibly associated with surfaces and enclosed in an essentially self-produced matrix consisting of polysaccharides, DNA, and proteins. They are ubiquitous in nature, having the ability to adhere to virtually any surface, either biotic or abiotic, including medical devices, causing chronic infections that are difficult to eradicate [17].
The biofilm matrix plays an active role in the development of antimicrobial resistance, protecting bacteria from the host immune system, hostile environmental conditions, and antimicrobial agents, including the majority of antibiotics. Biofilms are very difficult to treat due to their adaptive resistance to antibiotics compared to their planktonic counterparts [17]. Many AMPs show antibiofilm activity against multidrug-resistant bacteria, acting at different stages of biofilm formation, on disparate molecular targets, and with various mechanisms. This review focuses on antimicrobial peptides and their mechanisms of action against biofilm formation.

Structure

AMPs can be classified into four groups according to their secondary structure: α-helical, β-sheet, loop, and extended peptides [18]. α-helical and β-sheet peptides are more common, and AMPs endowed with α-helical structures are the most studied to date [19]. α-helical AMPs are linear in aqueous solution and assume amphipathic helical structures when they interact with bacterial membranes or in the presence of organic solvents [6]. Magainin-2 and LL-37 are examples of peptides that belong to this group (Figure 2a,b) [20,21]. In the α-helix conformation, the distance between two close amino acids is around 0.15 nm, while the angle between them with regard to the center is around 100 degrees from the top view [18]. β-sheet peptides are stabilized by at least two disulphide bridges, organized to create an amphipathic structure [19,22,23]. This class includes protegrins (from the cathelicidin family); defensins, the largest group of β-sheet AMPs; and tachyplesins (Figure 2c,d) [24,25].
Due to their rigid structure, β-sheet AMPs are more structured in solution and do not undergo major conformational changes when interacting with a membrane environment [26,27]. Thanatin and lactoferricin B are peptides with a loop structure, stabilized by disulfide, amide, or isopeptide bonds (Figure 2e,f) [19]. The extended AMPs class is populated by peptides that do not show a regular secondary structure. These peptides are rich in arginine, tryptophan, glycine, proline, and histidine residues [19,28]. The 13-residue Arg- and Trp-rich tritrpticin and indolicidin peptides (Figure 2g,h), from porcine and bovine leukocytes, respectively, belong to this group of AMPs [29]. Due to their short length, a simple residue substitution can lead to broad changes in both their structural and functional properties. As an example, replacing Pro residues with Ala in tritrpticin will transform the peptide structure into an α-helical conformation with improved antimicrobial activity but also with higher cytotoxicity [30].

Antimicrobial peptides have a wide spectrum of action against bacteria, viruses, cancer cells, fungi, and parasites [11,14], as described in the following sections.

Antibacterial Peptides

Antibacterial peptides are among the most studied and are characterized by both hydrophobic and hydrophilic domains. Most of them are cationic, and this positive net charge allows these peptides to interact with the negatively charged bacterial membranes [32]. Their mechanism of action has been widely studied. AMPs can lead to bacterial cell death through both membranolytic [33][34][35] and non-membranolytic mechanisms, interacting with intracellular targets such as DNA, RNA, and proteins [36][37][38][39]. Both Gram-negative and Gram-positive bacteria have molecules on the outer membrane that confer a negative net charge, allowing the electrostatic interaction with cationic peptides [24]. The AMPs then accumulate at the surface and, once a certain concentration is reached, they assemble on the bacterial membrane [40].
Three different putative models have been proposed to describe the action of antimicrobial peptides. In the barrel-stave model, peptides insert perpendicularly into the membrane, promoting peptide-peptide lateral interactions. In this mechanism, the AMPs' amphipathic structure plays a significant role because the hydrophilic residues generate the channel lumen while the hydrophobic side establishes a favorable interaction with membrane lipids [41]. To date, only a few peptides that act through this mechanism, such as pardaxin and alamethicin, have been identified [42,43]. The same event of peptide insertion into the membrane occurs in the toroidal model, although the pore formation does not originate from peptide-peptide interactions. In this model, the peptide induces a curvature in the lipid bilayer, and the pore is generated by both the peptide and the phospholipid head groups [44]. The essential difference between these two models is the arrangement of the lipid bilayer: in the toroidal model, the hydrophobic and hydrophilic arrangement of the bilayer is disrupted, while it remains intact in the barrel-stave model. Many AMPs acting via the toroidal model have been found, including magainin-2 [25], protegrin-1 [45], melittin [46], and lacticin Q [25]. In the carpet model, the AMPs adsorb onto the membrane, covering the entire surface until a threshold concentration is reached [26]. At this stage, a detergent-like effect occurs, leading to the loss of membrane integrity and eventually to disintegration by micelle formation. In this model, specific peptide-peptide interactions are not required, and peptides do not insert into the hydrophobic core to form transmembrane channels [26]. Antimicrobial peptides like LL-37 and cecropin are known to adopt the carpet model mechanism [47,48].

In the non-membranolytic mechanism, peptides can inhibit cell wall and protein synthesis, bacterial cell division, or DNA replication by interacting with specific proteins involved in these biological processes. As an example, Di Somma et al. [49] demonstrated that temporin-L (TL) interacts with E. coli FtsZ, a protein belonging to the divisome complex, leading to inhibition of Z-ring formation, thus impairing cell division and causing bacterial death without damaging the cell membrane. Graf et al. reported on the subclass of proline-rich AMPs (PrAMPs), which can penetrate the bacterial membrane and kill bacteria by inhibiting protein synthesis [39]. In particular, Mardirossian et al. tested the antimicrobial activity of Bac5, an N-terminal fragment of the bovine proline-rich antimicrobial peptide Bac5, on Escherichia coli, Acinetobacter baumannii, Klebsiella pneumoniae, Staphylococcus aureus, Salmonella enterica, and Pseudomonas aeruginosa, showing the inhibition of bacterial protein synthesis [40]. In addition, the synthetic peptide 35409 has been reported to inhibit cell division and induce filamentation, suggesting two different targets within a bacterial cell [41], and the lysine-peptoid hybrid LP5 binds DNA gyrase and topoisomerase IV, causing inhibition of DNA replication and ATP leakage from bacterial cells [42].

Anticancer Peptides

Antimicrobial peptides with anticancer activity, also called anticancer peptides (ACPs), are α-helical or β-sheet peptides and can be divided into two groups. Peptides such as insect cecropins and frog skin magainins belong to the first group, characterized by peptides active against both bacteria and cancer cells but not against normal mammalian cells [50][51][52].
Peptides toxic to bacteria and to both normal and cancer cells, including the bee venom melittin, insect defensins, and the human LL-37 peptide [53,54], belong to the second group. ACPs can lead to cancer cell death by membranolytic or non-membranolytic mechanisms according to the peptide characteristics and the peculiar target membrane features [55]. Cancer cells differ from normal mammalian cells due to their net negative membrane charge, which is conferred by anionic molecules such as the phospholipid phosphatidylserine (PS), heparan sulfate, O-glycosylated mucins, and sialylated gangliosides. In contrast, mammalian cell membranes are endowed with a zwitterionic character due to the molecules normally present on their membranes [14,45]. In healthy cells, the phosphatidylserine molecules are in the inner leaflet of the plasma membrane, while in cancer cells, the asymmetry between inner and outer membrane leaflets is lost, leading to the presence of PS in the outer leaflet [56,57]. The negative net charge exposed on the cancer cell outer membrane makes it similar to bacterial membranes, suggesting that AMPs and ACPs might share similar molecular principles for selectivity and activity [58]. Dermaseptins B2 and B3 have been reported to be active against the proliferation of human prostate, mammary, and lymphoma cancer cells [58]. A study conducted by Lin et al. on the cytotoxic effect of epinecidin-1 on normal and cancer cells showed that this peptide could inhibit the growth of both tumor and normal cell lines. It was also demonstrated that epinecidin-1 induces cytotoxic effects and membrane lysis, perturbing the cancer cell membrane. In addition, this peptide inhibits necrosis in HT1080 cells (a highly aggressive fibrosarcoma cell line) by downregulating necrosis-related genes [59].

Antiviral Peptides

Because of the emerging resistance of viruses and the limited efficiency of commonly used drugs, antiviral peptides represent good candidates as putative therapeutic agents [60]. Antiviral agents can act at different stages: by inhibiting the activity of viral reverse transcriptase or the pre-integration complex, or by preventing the transport of circular viral DNA to the nucleus. Alternatively, they can inhibit the action of viral integrase, preventing viral DNA from integrating into the cellular chromosome. In addition, antiviral compounds may inhibit viral proteases, blocking retroviral morphogenesis, because after transcription the proviral DNA is translated into a polyprotein that requires the activity of viral proteases to generate the proteins needed to assemble the viral capsid [61]. It has been demonstrated that both enveloped RNA and DNA viruses can be targeted by antiviral peptides [62]. AMPs can cause membrane instability by integrating into viral envelopes, thus preventing the viruses from infecting host cells [63]. Melittin, in addition to its anticancer activity, has also been reported to have inhibitory activity against enveloped viruses such as Junin virus (JV), HIV-1, and HSV-2. Melittin was suggested to suppress HSV-1 syncytial mutant-mediated cell fusion, very likely by interfering with the activity of the Na⁺/K⁺-ATPase, a cellular enzyme involved in the membrane fusion process [64]. Some antiviral AMPs can prevent viral particles from entering host cells by binding specific receptors on mammalian cells.
For example, some α-helical cationic peptides, such as lactoferrin, can prevent HSV infections by binding to the heparan sulfate molecules needed for the attachment of HSV viral particles to the host cell surface, thus blocking virus-receptor interactions [65,66].

Antifungal Peptides

According to their mechanism of action and origin, antifungal peptides can be grouped into membrane-traversing peptides, which can lead to pore formation or act on β-glucan or chitin synthesis, and non-membrane-traversing peptides, which interact with the cell membrane and cause cell lysis [67]. Antifungal peptides can lead to fungal death through different mechanisms of action, including inhibition of DNA, RNA, and protein synthesis; induction of apoptotic mechanisms; permeabilization of membranes; inhibition of cell wall synthesis and enzyme activity; or repression of protein folding and metabolic turnover [68,69].

Antiparasitic Peptides

Magainins and cecropins were the first identified antimicrobial peptides that exhibited antiparasitic activity [70]. Although some parasitic microorganisms are multicellular, the mechanism of action of antiparasitic peptides (APPs) is very similar to that of AMPs, directly interacting with the cell membrane [71]. Scorpine, a peptide derived from the venom of the scorpion Pandinus imperator, is able to inhibit the developmental stages of both the ookinete and gamete of Plasmodium berghei [72]. Bombinin H4 was reported to affect the viability of both insect and mammalian forms of Leishmania through perturbation of the plasma membrane at micromolar concentrations. The molecular mechanism consists of a rapid depolarization of the plasma membrane and a loss of integrity associated with bioenergetic collapse [73]. Cathelicidin is a further example of an APP, able to kill Caenorhabditis elegans through pore formation in the cell membrane [74].

Biofilm

Biofilm consists of a mixture of microorganisms embedded in self-produced extracellular polymeric substances (EPSs). The EPS constitutes a structural scaffold for other carbohydrates, proteins, nucleic acids, and lipids to adhere to. The presence of biofilms represents a severe problem in the environmental, food, and biomedical fields, as these architectures protect bacteria from hostile environments and prevent the effect of antimicrobial agents [75]. The exopolysaccharides' characteristics differ among various bacteria and depend on the growth conditions, medium, and availability of nutrients. In some forms of biofilm, mannose, galactose, and glucose are the most abundant carbohydrates, followed by N-acetyl-glucosamine, galacturonic acid, arabinose, fucose, rhamnose, and xylose, which occur in the composition of the biofilm matrix of Enterococcus faecalis, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa [76]. Most exopolysaccharides are not biofilm specific, but their production increases following a stress response, such as the production of colanic acid in Escherichia coli and the synthesis of alginate in P. aeruginosa [77]. Biofilm formation and development consist of four different stages: (i) aggregation or attachment; (ii) microbial adhesion; (iii) biofilm development and maturation; and (iv) biofilm aging [78]. The aggregation or attachment step is divided into a reversible and an irreversible phase. The reversible adhesion begins when the microorganisms come into contact with the target surface.
During this event, weak interactions, including van der Waals forces, electrostatic forces, and hydrophobic interactions, are established between the molecules occurring on microbial cells and those present on the target surface. Afterwards, the irreversible adhesion phase takes place, with the formation of covalent interactions and the initial production of exopolysaccharides. In the adhesion step, the formed microcolonies are protected by extracellular polysaccharides or by cellular organelles, such as pili and fimbriae, that allow bacterial cells to survive. During the third stage, the colony grows, acquiring a mushroom-like architecture, and cells undergo further adaptation to life in a biofilm. In particular, two properties are often associated with surface-attached bacteria: the increased synthesis of EPSs and the development of antibiotic resistance. These features appear to create a protective environment and cause biofilms to be a tenacious clinical problem. Finally, in the last stage, the biofilm is capable of releasing part of the colonies into the environment, and bacterial cells move on to colonize other surfaces under appropriate conditions, thus entering another biofilm cycle. Each stage of the biofilm formation process depends on the microbial genera and species, the characteristics of the attachment surface, the environmental conditions, and the physiological status of the microorganism [79]. Microorganisms' attachment occurs more commonly on surfaces that are hydrophobic, rough, and coated by conditioning films. On the contrary, attachment to surfaces is made more complicated by electrostatic repulsion between the negative organic molecules of surfaces and the bacterial membrane.

Antimicrobial Peptides and Biofilm

The antibiofilm activity of antimicrobial peptides has been less studied than their activity against planktonic microorganisms. Moreover, the assessment of a specific ability to impair biofilm formation, well apart from their antimicrobial activity, is quite difficult to achieve. An AMP can be considered antibiofilm if the minimum biofilm inhibitory concentration (MBIC) is below the minimum inhibitory concentration (MIC), with a distinct activity compared to the direct-killing antimicrobial capability. Eradication of preformed biofilms is much more difficult than inhibition [80], and the minimum biofilm eradication concentration (MBEC), i.e., the minimum concentration of an antimicrobial agent required to eliminate pre-formed biofilms, is generally higher than the MBIC. In all cases, it is fundamental to be able to distinguish between dead and living cells. Recently, Raheem and Straus [81] described many biological assays and biophysical methods and techniques to define the specific antibacterial and antibiofilm functions of peptides. For all these reasons, few peptides endowed with real antibiofilm activity have been identified so far; some of these peptides are listed in Table 1. Antibiofilm peptides have been demonstrated to affect biofilm formation or degradation at different stages and with different mechanisms of action, including inhibition of biofilm formation and adhesion, downregulation of quorum sensing, and killing of pre-formed biofilm [88,89] (Figure 3).

Figure 3. Biofilm formation consists of attachment, proliferation, maturation, and detachment stages, which can be inhibited by antimicrobial peptides.
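The MIC/MBIC/MBEC definitions above amount to a simple decision rule; the sketch below encodes it, with hypothetical concentrations that do not correspond to any specific published peptide.

```python
# Sketch of the antibiofilm classification logic (hypothetical values, µg/mL).

def classify(mic, mbic, mbec):
    """A peptide counts as antibiofilm when biofilm inhibition occurs below
    the planktonic killing concentration (MBIC < MIC); eradication of a
    pre-formed biofilm is typically hardest, so MBEC >= MBIC is expected."""
    return {"antibiofilm": mbic < mic, "eradication_harder": mbec >= mbic}

print(classify(mic=16.0, mbic=4.0, mbec=64.0))
# -> {'antibiofilm': True, 'eradication_harder': True}
```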
Nisin A is able to disrupt or degrade the membrane of biofilm-embedded cells of an MRSA strain of S. aureus, disturbing the membrane potential [90]. Human cathelicidin LL-37, one of the most studied antibiofilm peptides, is able to affect the bacterial cell signaling system. This peptide can inhibit P. aeruginosa biofilm formation at a concentration of 0.5 µg/mL by downregulating the genes related to the QS system, decreasing the attachment of bacterial cells to the surface, and stimulating twitching motility mediated by type IV pili [89,91]. Antimicrobial peptides can also lead to the degradation of the extracellular polymeric matrix of bacterial biofilms. Hepcidin 20 can reduce the extracellular matrix mass of Staphylococcus epidermidis and alter its biofilm architecture by targeting the polysaccharide intercellular adhesin (PIA) [92]. Antibiofilm peptides can also target the stringent stress response in both Gram-negative and Gram-positive bacteria or downregulate genes involved in biofilm formation and the transportation of binding proteins [93]. Biofilm formation in staphylococci depends on the synthesis of the polysaccharide intercellular adhesin (PIA), which is encoded by the icaADBC locus. Human β-defensin 3 was shown to be able to reduce the expression of the icaA, icaR, and icaD genes of S. epidermidis ATCC 35984, leading to a reduction of biofilm formation [94]. Gopal et al. [95] reported that NRC-16, a pleurocidin peptide analogue, showed MIC values ranging from 2.17 to 17.4 µg/mL against planktonic cells of different Gram-negative and Gram-positive bacteria and fungi. It is interesting to note that similar results were obtained with the melittin peptide. For both of them, minimal biofilm inhibitory concentration (MBIC) values ranging from 8 to 35 µg/mL against five clinical strains of P. aeruginosa have been obtained [95]. Moreover, Blower et al. [86] demonstrated that the SMAP-29 peptide is able to inhibit biofilm production in Burkholderia thailandensis by about 50% at peptide concentrations at or above 3 µg/mL. Anunthawan et al. studied KT2 and RT2, two synthetic tryptophan-rich cationic peptides, which showed activity against multidrug-resistant E. coli biofilms at sub-MIC levels [96]. Another peptide, CRAMP, is able to inhibit fungal biofilm formation [97]; interestingly, AS10, a shorter CRAMP fragment, was also able to inhibit biofilm growth of Candida albicans, E. coli, and P. aeruginosa [98]. Moreover, IDR-1018 showed antibiofilm activity against several Gram-positive and Gram-negative pathogens [99].
De la Fuente-Núñez et al. studied two synthetic peptides, DJK-5 and DJK-6, based on properties associated with IDR-1018, which showed a broad spectrum of antibiofilm activity and the ability to eradicate pre-existing biofilms [100]. Mataraci and Dosler designed the CAMA peptide, a hybrid peptide (cecropin (1-7)-melittin A (2-9) amide) containing the N-terminal region of cecropin A and the N-terminal portion of melittin A. Interestingly, this peptide was able to inhibit methicillin-resistant Staphylococcus aureus (MRSA) biofilm formation [101].

Biofilm Resistance to Antimicrobial Peptides

One explanation for biofilm resistance to AMPs relates to their interaction with the EPS, even if the mechanism is still not well understood. Most of the molecules making up the EPS have a negative charge, but the exopolymer PIA, composed of poly-N-acetylglucosamine, is positively charged, and it might protect the biofilm from AMPs through electrostatic repulsion of the positively charged peptides [102]. In fact, PIA was demonstrated to defend S. epidermidis and S. aureus from the action of the LL-37 and human β-defensin peptides [103]. Alginate, made up of the uronic acid D-mannuronate and its C-5 epimer L-guluronate, is an anionic extracellular polysaccharide secreted by Gram-negative bacteria that can interact with positively charged peptides, protecting biofilm-embedded cells. Alginate is able to trap antimicrobial peptides in hydrophobic microdomains consisting of pyranosyl C-H groups, which are induced when AMP-alginate complexes are formed, owing to the charge neutralization between the two species [104,105]. In Gram-positive bacteria, resistance to AMPs can be mediated by the membrane protein MprF, which is involved in the addition of alanine or lysine to phosphatidylglycerol (PG) to form alanyl-phosphatidylglycerol (APG) and lysyl-phosphatidylglycerol (LPG), respectively, and in the translocation of these compounds to the outer leaflet [106,107]. It was demonstrated that MprF mutants of S. aureus were more susceptible to AMPs, suggesting that the addition of lysine to the membrane could lead to a reduction in susceptibility to AMPs [108]. An MprF homolog involved in the addition of alanine to PG to form APG has been found in P. aeruginosa; this modification led to an increased resistance to antimicrobial agents [109]. In P. aeruginosa and Salmonella enterica, the PhoP/PhoQ genetic system is able to decrease the net negative charge of LPS by adding aminoarabinose to lipid A, conferring AMP resistance on bacterial biofilms [110]. In P. aeruginosa, a two-component regulatory system, pmrA-pmrB, has also been found, which regulates resistance to LL-37, polymyxin B, and polymyxin E. This system modifies LPS in the bacterial outer membrane, leading to a reduction of the AMPs' interaction with the outer membrane [111]. Moreover, it was found that the addition of an acyl chain to lipid A might contribute to bacterial resistance to AMPs. In S. enterica Typhimurium, the PagP enzyme adds an additional palmitate (C16:0 acyl chain) to the lipid A moiety. This acylation is thought to be responsible for the increase in hydrophobic interactions between lipid A and the acyl chains, thus leading to a higher outer membrane fluidity [112]. A higher membrane permeability in response to AMPs was observed in pagP mutants of S. enterica Typhimurium compared to the control strain [113].
It was also demonstrated that deacylation could increase bacterial susceptibility to AMPs, thus supporting the finding that lipid A acylation is involved in bacterial resistance to antimicrobial peptides. The PagL enzyme, located in the outer membrane of several Gram-negative bacteria, is responsible for the deacylation of lipid A, removing R-3-hydroxymyristate from position 3 of some lipid A precursors [114]. Two-component systems (TCSs) are used by bacteria to respond to environmental changes. TCSs consist of a membrane sensor, which detects signals from the environment, and a transcriptional response regulator, which is activated through phosphorylation or dephosphorylation. The sensor is usually a histidine kinase located in the cytoplasmic membrane that can be activated by environmental signals. The cytoplasmic protein is phosphorylated by the sensor and acts as a transcription factor. The response involves the activation of genes, such as membrane-remodeling genes, ion transporters, and virulence genes, which help Gram-negative and Gram-positive bacteria to better adapt to the environment. Several TCSs are known to respond to AMPs, thus helping bacteria to counteract their activity [115][116][117].

Discussion and Future Considerations

The identification of new therapeutic strategies to counteract biofilm-associated infections is among the main challenges in medicine. The high concentrations of antibiotics needed to disrupt or prevent biofilm formation can be associated with poor prognosis and cytotoxicity. For this reason, a promising strategy might consist in the use of alternative drugs to address biofilm-related infections. Because of their peculiar characteristics, antimicrobial peptides should be considered valid candidates in the fight against biofilms. However, AMPs' interaction with EPS components might affect their antimicrobial activity, representing an obstacle for the development of AMPs as antibiofilm drugs. Designed antibiofilm peptides could be used to interfere with signaling pathways involved in the synthesis of EPS components. Alternatively, EPS-AMP interactions could even be exploited in the design of AMP-based antibiofilm strategies, in order to sequester essential EPS components and interfere with the biofilm architecture. The strategy of combining biofilm-dispersing agents with conventional antibiotics could also be exploited. Bacterial invasions are often impossible to eradicate by the direct administration of antibiotics due to the protective effect exerted by biofilms, and the use of high concentrations of antibiotics is discouraged due to their toxicity. AMP-AMP and AMP-drug combinations induce biofilm matrix degradation, allowing the antibacterial agent to bypass this protection and reach the bacterial cells; such combinations may be potential areas of future antibiofilm study and research. Promising combinatorial strategies can then be foreseen, consisting in the use of AMPs with compounds able to dissolve the biofilm matrix, or of antimicrobial peptides in association with drugs used for anti-infective therapy, with anti-inflammatory or mucolytic agents such as salicylic acid or ibuprofen, or with inhibitors of QS [118].

Funding: This work was supported in part by MIUR grants ARS01_00597 Project "NAOCON" and PRIN 2017 "Identification and characterization of novel antitumoral/antimicrobial insect-derived peptides: a multidisciplinary, integrated approach from in silico to in vivo".
Conflicts of Interest: The authors declare no conflict of interest.
Small Heat-Shock Proteins, IbpAB, Protect Non-Pathogenic Escherichia coli from Killing by Macrophage-Derived Reactive Oxygen Species

Many intracellular bacterial pathogens possess virulence factors that prevent detection and killing by macrophages. However, similar virulence factors in non-pathogenic bacteria are less well characterized and may contribute to the pathogenesis of chronic inflammatory conditions such as Crohn's disease. We hypothesize that the small heat shock proteins IbpAB, which have previously been shown to reduce oxidative damage to proteins in vitro and to be upregulated in the luminal non-pathogenic Escherichia strain NC101 during experimental colitis in vivo, protect commensal E. coli from killing by macrophage-derived reactive oxygen species (ROS). Using real-time PCR, we measured ibpAB expression in commensal E. coli NC101 within wild-type (wt) and ROS-deficient (gp91phox−/−) macrophages and in NC101 treated with the ROS generator paraquat. We also quantified survival of NC101 and isogenic mutants in wt and gp91phox−/− macrophages using gentamicin protection assays. Similar assays were performed using the pathogenic E. coli strain O157:H7. We show that non-pathogenic E. coli NC101 inside macrophages upregulate ibpAB within 2 h of phagocytosis in a ROS-dependent manner and that ibpAB protect E. coli from killing by macrophage-derived ROS. Moreover, we demonstrate that ROS-induced ibpAB expression is mediated by the small E. coli regulatory RNA, oxyS. IbpAB are not upregulated in pathogenic E. coli O157:H7 and do not affect its survival within macrophages. Together, these findings indicate that ibpAB may be novel virulence factors for certain non-pathogenic E. coli strains.

Introduction

Pathogenic Escherichia coli are a major source of morbidity, and less commonly mortality, due to infections of the urinary tract, intestinal tract, and bloodstream. Most E. coli virulence factors identified to date target interactions with host intestinal epithelial cells. For instance, Esp and Nle Type III secretion system effectors from enteropathogenic (EPEC) and enterohemorrhagic (EHEC) E. coli disrupt internalization, protein secretion, NF-κB signaling, MAPK signaling, and apoptosis in eukaryotic cells [1]. Certain strains of pathogenic E. coli, including the enteroaggregative E. coli, also form biofilms in the intestine, secrete toxins that cause fluid secretion from intestinal epithelial cells, or inhibit eukaryotic protein synthesis, resulting in intestinal injury [2][3][4][5]. Pathogenic E. coli that breach the intestinal mucosal barrier are phagocytosed by innate immune cells such as lamina propria macrophages and neutrophils. Some pathogenic E. coli strains have also acquired virulence genes that allow them to avoid destruction within phagocytes and thereby promote disease [6]. For example, uptake of EHEC into macrophages is associated with increased expression of Shiga toxin, and Shiga toxin enhances intra-macrophage survival through an unknown mechanism [6,7]. Likewise, expression of nitric oxide reductase in EHEC enhances their survival within macrophage phagolysosomes, presumably by protecting them from reactive nitrogen species [8]. Similar to pathogenic strains of E. coli, resident intestinal (commensal) E. coli also encounter lamina propria macrophages in the intestine, especially during periods of epithelial damage and enhanced mucosal permeability in chronic inflammatory lesions associated with the inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis.
IBDs are associated with genetically determined defective innate immune responses, including disordered cytokine secretion and bacterial clearance in macrophages [9,10]. In addition, IBDs and experimental murine colitis are associated with increased numbers of luminal commensal E. coli [11]. Therefore, it is plausible that enhanced survival of E. coli in macrophages may play a role in the etiopathogenesis of IBDs. Indeed, others have shown that resident adherent-invasive E. coli are more prevalent in inflamed ileal tissue from Crohn's disease patients compared with controls, and that a specific adherent-invasive E. coli strain isolated from a human Crohn's disease patient causes experimental colitis in susceptible hosts in vivo and survives better in macrophages in vitro compared with laboratory reference E. coli strains [12][13][14]. The increased survival of the adherent-invasive E. coli strain in macrophages is due in part to expression of E. coli htrA, a gene that allows E. coli to grow at elevated temperatures and defend against killing by hydrogen peroxide in vitro [15]. Genes including htrA may therefore function as virulence factors in commensal E. coli by protecting the bacteria from the toxic reactive oxygen species (ROS) and/or reactive nitrogen species (RNS) found in macrophage phagolysosomes. Similar to HtrA, the E. coli small heat shock proteins IbpA and IbpB also protect bacteria from killing by heat and oxidative stress in laboratory cultures [16][17][18]. The role of the ibpAB operon in protecting E. coli from heat damage is reinforced by evidence that ibpAB are upregulated in E. coli cultures in response to heat treatment [19,20]. In addition, we have previously shown that a commensal adherent-invasive murine strain of E. coli (NC101), which causes colitis in mono-colonized Il10−/− mice, increases ibpAB expression in the inflamed vs. healthy colon, possibly due to the increased concentrations of ROS/RNS in inflamed colon tissue [21][22][23]. However, it is unknown whether ibpAB are upregulated in response to ROS/RNS or are important for the survival of non-pathogenic E. coli in macrophage phagolysosomes. We hypothesized that commensal E. coli upregulate ibpAB in response to ROS and that ibpAB protect E. coli from ROS-mediated killing within macrophages.

Bacterial Strains, Cell Lines, and Culture Conditions

The non-pathogenic murine E. coli strain NC101 was isolated as described previously [24]. E. coli strain O157:H7 was a kind gift from Dr. Ann Matthysse at UNC, Chapel Hill. E. coli were grown in Luria-Bertani (LB) broth at 37°C with shaking at 250 rpm. The J774 murine macrophage and L929 fibroblast cell lines were originally obtained from ATCC (Manassas, VA) and cultured in RPMI containing 10% fetal bovine serum (FBS), 100 U/mL penicillin, 1000 µg/mL streptomycin, and 10 mM glutamine in 37°C humidified incubators with 5% CO₂. Conditioned media from L929 cells was used as a source of macrophage colony-stimulating factor (M-CSF) for the production of bone marrow-derived macrophages (BMDMs) and was made as described previously [25]. The mutant E. coli NC101 strain lacking ibpA and ibpB (NC101ΔibpAB) used in this study had been generated previously using the λ-red recombinase method [23,26]. We used identical methods to create a mutant E. coli O157:H7 strain that lacks ibpA and ibpB (O157:H7ΔibpAB). However, since the pCP20 plasmid encoding Flp recombinase failed to induce recombination at the FRT sites in E. coli O157:H7, we used strains of NC101ΔibpAB and O157:H7ΔibpAB that still contained the kanamycin resistance gene.
Mutant E. coli NC101 lacking oxyS (NC101ΔoxyS) was also generated using the λ-red recombinase method. Primers 5'-GCATAGCAACGAACGATTATCCCTATCAAGCATTCTGACTGTGTAGGCTGGAGCTGCTTC-3' and 5'-ACCGTTACTATCAGGCTCTCTTGCTGTGGGCCTGTAGAATCATATGAATATCCTCCTTAGTTCC-3' were used to amplify the kanamycin resistance cassette from pKD4. Transformation and site-specific recombination of the PCR product into the oxyS locus on the E. coli NC101 chromosome, followed by excision of the kanamycin resistance gene using pCP20, were performed as previously described [23,26]. Recombinant bacterial cell lines were generated in accordance with procedures outlined by the Environmental Health and Safety Department at the University of North Carolina at Chapel Hill.

Mouse Strains and Production of Bone Marrow-Derived Macrophages

Wild-type, gp91phox-/-, and Inos-/- mice (all on the C57BL/6 genetic background) were originally obtained from Jackson Laboratories and maintained under specific-pathogen-free conditions in Laboratory Animal Medicine facilities at UNC, Chapel Hill. All animal protocols were approved by the UNC-Chapel Hill Institutional Animal Care and Use Committee.

Gentamicin Protection Assays

Intra-macrophage bacterial survival assays were performed as described previously [14,23]. Briefly, approximately 10 mid-log phase bacteria/cell were added to 5-7.5 × 10^5 BMDMs/well in 12-well plates in a total volume of 1 mL/well RPMI/10% FBS. Plates were centrifuged at 1,000 × g for 10 min and incubated for 60 min at 37°C in 5% CO2. The end of this incubation was considered time 0. Each well was washed and treated with media containing 100 μg/mL gentamicin for 60 min at 37°C in 5% CO2 to kill extracellular bacteria. Media was then replaced with media containing 20 μg/mL gentamicin for the duration of the experiments. At the indicated times, wells were washed 4× with 1 mL PBS, then incubated for 10 min at room temperature with 0.5 mL of sterile water containing 1% Triton X-100 to lyse the BMDMs. Viable intracellular bacteria were enumerated by counting colony forming units (CFU) in dilutions of lysates plated on LB agar. In some experiments, J774 cells were treated with 100 nM bafilomycin-A1 (Sigma), an inhibitor of the vacuolar H+-ATPase, 60 min prior to, and during, co-incubation with bacteria. Intra-macrophage bacterial gene expression assays were performed similarly, except that 6-well plates containing 2 × 10^6 BMDMs/well or 1 × 10^6 J774 cells/well were used, no centrifugation step was included, and time 0 was defined as the point immediately after addition of diluted bacteria to each well. At the indicated times, wells were washed as above, but instead of adding Triton X-100, 1 mL/well of Bacterial RNAProtect (Qiagen) was added to the BMDMs, incubated for 5 minutes at room temperature, and then transferred to microcentrifuge tubes. After centrifugation at 10,000 × g for 5 min, pellets were frozen at -20°C for future RNA isolation.

Stimulation of Bacterial Cultures with Paraquat

Mid-log phase 10 mL cultures of E. coli growing at 30°C in LB were treated for the indicated times with the indicated concentrations of the freshly prepared superoxide generator paraquat (Sigma), dissolved in water, or with a water control. At each time point, bacteria from a 1 mL aliquot of each culture were pelleted by centrifugation at 10,000 × g for 30 sec, after which 0.5 mL of Bacterial RNAProtect was immediately added. After a 5 min incubation at room temperature, bacteria were pelleted again and RNA was isolated as described below.
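As a worked illustration of the CFU arithmetic behind the gentamicin protection assays above, the sketch below back-calculates viable intracellular bacteria per well from dilution plating. All colony counts, dilutions, and plated volumes are hypothetical; only the 0.5 mL lysis volume comes from the protocol.

```python
# Back-calculate intracellular CFU/well from dilution plating in the
# gentamicin protection assay. Counts, dilution, and plated volume are
# hypothetical; the 0.5 mL lysate volume is from the protocol above.

def cfu_per_well(colonies: int, dilution_factor: int,
                 plated_volume_ml: float, lysate_volume_ml: float = 0.5):
    """Viable intracellular bacteria per well of lysed BMDMs."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * lysate_volume_ml

# Hypothetical example: 150 colonies from 0.1 mL of a 1:100 dilution at
# time 0, falling to 45 colonies at 4 h.
t0 = cfu_per_well(colonies=150, dilution_factor=100, plated_volume_ml=0.1)
t4 = cfu_per_well(colonies=45, dilution_factor=100, plated_volume_ml=0.1)
print(f"time 0: {t0:.0f} CFU/well; 4 h: {t4:.0f} CFU/well "
      f"({100 * t4 / t0:.0f}% survival)")
```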
RNA Isolation and Real-Time PCR

Bacterial RNA was isolated from cell pellets using Qiagen RNeasy Mini columns according to the manufacturer's instructions. Purified RNA was treated either with on-column DNase (Qiagen) or with Baseline-Zero DNase (Epicentre) according to the manufacturer's instructions. Complementary DNA synthesis and real-time PCR using primers for the E. coli 16S, oxyS, ibpA, and ibpB genes were performed as previously described [23]. Gene expression relative to the 16S rRNA bacterial housekeeping gene was calculated using the ΔΔCt method.
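A minimal sketch of the ΔΔCt calculation in Python follows; the Ct values are illustrative placeholders rather than data from these experiments, and the 2^-ΔΔCt form assumes roughly 100% amplification efficiency for both target and reference.

```python
# Delta-delta Ct calculation for expression relative to the 16S reference.
# Ct values are illustrative placeholders, not data from this study.

def fold_change(ct_target_treated: float, ct_16s_treated: float,
                ct_target_control: float, ct_16s_control: float) -> float:
    """Relative expression of a target gene, normalized to 16S rRNA."""
    d_ct_treated = ct_target_treated - ct_16s_treated  # normalize to 16S
    d_ct_control = ct_target_control - ct_16s_control
    dd_ct = d_ct_treated - d_ct_control                # treated vs. control
    return 2 ** (-dd_ct)  # assumes ~100% PCR efficiency (one doubling/cycle)

# Hypothetical ibpA Cts in paraquat-treated vs. untreated cultures:
print(round(fold_change(21.0, 12.0, 25.0, 12.2), 1))  # ~13.9-fold induction
```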
E. coli upregulate ibpAB following phagocytosis by macrophages

Since others have shown that ibpAB protect E. coli from oxidative damage [28,29], that E. coli upregulate other oxidative stress response genes upon phagocytosis by neutrophils [30], and that ROS are increased in macrophage phagolysosomes [31], we predicted that E. coli also upregulate ibpAB after phagocytosis by macrophages. To test this, we co-cultured immortalized J774 murine macrophages and murine BMDMs with the non-pathogenic murine adherent-invasive E. coli strain NC101. At the indicated times, we quantified ibpA and ibpB mRNA in gentamicin-resistant (i.e. intracellular) E. coli using real-time PCR. We found that E. coli ibpA and ibpB expression increased within 2 hrs of adding bacteria and remained elevated for at least 24 hrs (Fig. 1). These data indicate that factors within macrophages induce ibpAB expression in E. coli relatively soon after phagocytosis.

ROS mediate ibpAB expression in E. coli in cultures and macrophages

Next, we explored potential factors in macrophages that might upregulate E. coli ibpAB. To establish whether the acidic environment of the macrophage phagolysosome induces E. coli ibpAB, we measured ibpAB expression in E. coli within J774 macrophages that had been treated with bafilomycin-A1, an inhibitor of the vacuolar H+-ATPase that acidifies the phagolysosome. Inhibition of vacuolar acidification did not decrease E. coli ibpAB induction within macrophages but rather, unexpectedly, increased expression, suggesting that the acidic environment of the phagolysosome is not responsible for upregulation of E. coli ibpAB in macrophages (Fig. 2A). In addition to low pH, the phagolysosome also contains increased concentrations of ROS and RNS. Since ibpAB have been shown to protect cultured E. coli from killing by hydrogen peroxide [16], we predicted that E. coli upregulate ibpAB in response to phagolysosomal ROS or RNS. To test this, we incubated E. coli NC101 with BMDMs from gp91phox-/- mice, which have an impaired oxidative burst, Inos-/- mice, which are defective in nitric oxide production, and wild-type (wt) mice, and measured ibpAB mRNA in intracellular E. coli. Interestingly, ibpAB expression in E. coli within Inos-/- BMDMs was increased relative to wt BMDMs, whereas ibpAB expression in gp91phox-/- BMDMs was decreased compared with wt BMDMs (Fig. 2B-E). These data suggest that ROS, but not RNS, within BMDMs are partially responsible for the induction of ibpAB in intra-macrophage E. coli. To confirm that ROS enhance ibpAB expression in commensal E. coli, we treated mid-log phase E. coli NC101 with the superoxide generator paraquat for the indicated times and measured ibpAB expression. We detected a dose-dependent increase in ibpAB expression five minutes after addition of paraquat, but the degree of upregulation diminished substantially by ten minutes (Fig. 3A and B). To confirm that the bacteria were sensing the ROS generated by paraquat, we also measured expression of oxyS, a small regulatory RNA in E. coli that has previously been shown to be upregulated in response to hydrogen peroxide, to control expression of several stress response genes, and to protect E. coli from peroxide-induced DNA damage [32]. We observed a consistent dose- and time-dependent increase of oxyS mRNA in E. coli treated with paraquat (Fig. 3C). Interestingly, oxyS upregulation slightly preceded that of ibpAB.

E. coli ibpAB expression is positively controlled by the oxyS small regulatory RNA

Using a reporter-gene screen, others have previously shown that oxyS expression up- or down-regulates 20 genes in E. coli, several of which are stress response genes [32]. However, oxyS has not previously been described to regulate expression of the ibpAB operon. Since we determined that superoxides induce oxyS expression shortly before ibpAB expression (Fig. 3), we hypothesized that oxyS may upregulate ibpAB expression. To test this, we measured ibpAB expression in paraquat-treated E. coli NC101 or oxyS-deficient E. coli (NC101ΔoxyS) and found that ibpAB expression was significantly attenuated in unstimulated as well as paraquat-stimulated NC101ΔoxyS (Fig. 4A). To determine whether upregulation of ibpAB in macrophages was also dependent on oxyS, we incubated BMDMs with E. coli NC101 or NC101ΔoxyS for the indicated times and measured ibpAB mRNA levels in intracellular bacteria. At one hour after the addition of bacteria, ibpAB mRNA was significantly lower in NC101ΔoxyS than in NC101 (Fig. 4B and C). However, this difference was absent by 6 hours. Therefore, oxyS-dependent factors mediate ibpAB expression in intra-macrophage E. coli at early, but not late, stages of intracellular survival. The mechanisms by which the oxyS small regulatory RNA controls ibpAB mRNA levels are still unknown.

Expression of ibpAB is associated with enhanced E. coli survival within macrophages

Having determined that E. coli upregulate ibpAB in response to ROS in culture and in macrophages, we hypothesized that ibpAB expression protects E. coli from killing by ROS in macrophages. To address this hypothesis, we incubated BMDMs from wt or gp91phox-/- mice with E. coli NC101 or ibpAB-deficient NC101 (NC101ΔibpAB) for the indicated times and then quantified viable gentamicin-resistant (i.e. intracellular) bacteria by plating macrophage lysates on agar. At each time point examined after addition of bacteria, we detected significantly fewer intra-macrophage NC101ΔibpAB vs. NC101 in wt BMDMs (Fig. 5A). However, no significant differences in intra-macrophage NC101 vs. NC101ΔibpAB numbers were observed at any time point in gp91phox-/- BMDMs, suggesting that ibpAB expression protects intracellular E. coli NC101 from killing by macrophage-derived ROS. Interestingly, when we performed the same experiments with pathogenic E. coli O157:H7, we found that wt BMDMs kill E. coli O157:H7 more efficiently than E. coli NC101 and that ibpAB has no effect on intra-macrophage survival (Fig. 5B). However, unlike the results observed with E. coli NC101, gp91phox-/- BMDMs kill E. coli O157:H7 less efficiently than wt BMDMs at 1 and 4 hrs post infection. Therefore, ibpAB protect E. coli NC101, but not E. coli O157:H7, from ROS-mediated killing in macrophages.
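The pattern in Fig. 5 can be summarized as percent survival relative to time 0 for each strain/macrophage combination; the sketch below performs that normalization with made-up CFU counts chosen only to mirror the qualitative result, since the actual values appear only graphically in the figures.

```python
# Percent intracellular survival relative to time 0, per strain and
# macrophage genotype. CFU counts are made up to mirror Fig. 5 qualitatively.

counts = {
    # (strain, BMDM genotype): CFU/well at 0, 4, and 24 h
    ("NC101",       "wt"):          [7.5e4, 3.0e4, 9.0e3],
    ("NC101ΔibpAB", "wt"):          [7.5e4, 1.2e4, 2.0e3],  # killed faster
    ("NC101",       "gp91phox-/-"): [7.5e4, 4.0e4, 1.5e4],
    ("NC101ΔibpAB", "gp91phox-/-"): [7.5e4, 3.8e4, 1.4e4],  # no ibpAB effect
}

for (strain, genotype), cfu in counts.items():
    survival = [100 * c / cfu[0] for c in cfu]  # normalize to time 0
    print(f"{strain} in {genotype} BMDMs:",
          ", ".join(f"{s:.0f}%" for s in survival))
```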
Since E. coli O157:H7 are killed more efficiently by wt BMDMs than E. coli NC101, and since ibpAB-mediated protection from intra-macrophage killing presumably requires adequate expression of ibpAB, we asked whether E. coli O157:H7 upregulate ibpAB after phagocytosis to a similar degree as E. coli NC101. To answer this question, we compared ibpAB expression in phagocytosed E. coli NC101 and E. coli O157:H7 in wt BMDMs. Although E. coli O157:H7 slightly increase ibpAB expression after infection of BMDMs, they do so to a much lesser extent than E. coli NC101 (Fig. 6). Therefore, it is conceivable that the increased killing of E. coli O157:H7 compared with E. coli NC101 by wt BMDMs is due to insufficient ibpAB expression in E. coli O157:H7. These results support the concept that the E. coli ibpAB operon is a virulence factor that is upregulated in certain strains of E. coli, including NC101, during macrophage infection and protects E. coli from killing by macrophage-derived ROS.

Discussion

Several functions of E. coli ibpAB have previously been identified, including protection of bacteria from elevated temperatures, carbon monoxide, tellurite and copper toxicity, and oxidative stress [16-18,29,33]. However, all previously published studies have examined the roles of ibpAB in bacterial survival in laboratory cultures devoid of eukaryotic cells and therefore have limited relevance to host-microbial interactions in animal systems. In our studies, we present new evidence that ibpAB also attenuate the bactericidal activity of macrophage ROS, leading to increased survival of certain clinically relevant E. coli strains within macrophages. The mechanisms by which ibpAB protect E. coli from ROS are not entirely clear. The ibpAB gene sequences are not similar to those of known E. coli superoxide dismutases or catalases, and therefore it is unlikely that IbpAB enzymatically neutralize superoxides and peroxides. More likely, IbpAB function as intracellular chaperones that bind and sequester or refold proteins that have been damaged by ROS, similar to the mechanism by which they protect bacterial proteins from heat shock [28]. Indeed, others have shown that recombinant IbpA and IbpB suppress inactivation of E. coli metabolic enzymes by potassium superoxide and hydrogen peroxide in vitro and bind non-native forms of the enzymes [28]. Presumably, similar events occur within the cytoplasm of bacteria exposed to ROS or heat, but this concept remains to be proven. Given that ibpAB protect E. coli proteins from damage by ROS, we hypothesized that E. coli upregulate ibpAB expression in response to ROS. In the present work, we show that ROS induce ibpAB expression in E. coli both in laboratory cultures and in macrophage phagolysosomes. Interestingly, while we detected a transient increase in ibpAB expression in E. coli cultures treated with the superoxide generator paraquat, we did not detect upregulation of ibpAB in E. coli cultures treated with hydrogen peroxide (data not shown). The explanation for this difference is not entirely clear but could be due to the more reactive, and therefore more damaging, nature of superoxides compared with peroxides. We also hypothesized that RNS, like ROS, might induce ibpAB expression. However, contrary to our hypothesis, we observed increased ibpAB expression in E. coli within Inos-/- macrophages, which are deficient in RNS production.
This unexpected result could be due to compensatory upregulation of ROS production in Inos-/- macrophages, a phenomenon that has previously been reported [34]. It is also notable that even in gp91phox-/- macrophages, which have impaired ROS production, E. coli ibpAB expression increases over time. Therefore, other factors within macrophages, besides ROS, likely play a role in ibpAB expression. The mechanisms by which ROS cause transcription of ibpAB are unknown. Others have previously shown that the alternative sigma factors σ32 and σ54 direct transcription of ibpAB and ibpB, respectively [20]. In addition to heat, other factors have been shown to increase σ32 protein levels, including ethanol, hyperosmotic shock, carbon starvation, and alkaline pH. On the other hand, σ54 controls expression of several nitrogen-metabolism genes. However, changes in the abundance or activity of these alternative sigma factors in response to oxidative stress have not been previously reported. In addition to transcriptional control, IbpAB protein levels are also controlled at the levels of RNA processing, translation, and protein stability [35,36]. In the present study, we show evidence suggesting that ibpAB expression is also controlled post-transcriptionally at the mRNA level. For instance, upregulation of ibpAB mRNA in E. coli treated with paraquat or phagocytosed by macrophages is partially dependent on the small regulatory RNA oxyS. Our findings are somewhat surprising, since a screen of mutants with a randomly inserted reporter gene failed to identify ibpAB as targets of regulation by oxyS [32]. In addition, ibpAB were not identified as putative targets of oxyS regulation in an in silico analysis [37]. This discrepancy may be due to differences in assay design (e.g. reporter gene vs. real-time PCR) or to false assumptions in computational prediction algorithms. We have previously determined that colitis is associated with increased ibpAB mRNA levels in intra-colonic E. coli [23]. While our studies do not prove that the ROS present at increased concentrations in inflamed colon tissue mediate the upregulation of E. coli ibpAB, they do demonstrate that ibpAB expression is at least partially induced by ROS in vitro and therefore suggest that ROS may contribute to ibpAB expression during colitis in vivo. Further studies in which colonic ROS are neutralized during colitis will be required to determine whether this is actually the case. Since ROS cause E. coli to increase ibpAB expression, and since ibpAB expression is associated with enhanced survival in BMDMs, one might predict that ibpAB-expressing E. coli are more virulent than ibpAB-deficient E. coli in diseases that are associated with persistence of bacteria within macrophages, such as IBDs and experimental colitis. On the contrary, we have previously shown that ibpAB-deficient E. coli paradoxically cause increased inflammatory responses in colitis-prone Il10-/- mice compared with wt mice, by unknown mechanisms [23]. Therefore, the biological relevance to experimental colitis of the ibpAB-mediated increases in intra-macrophage E. coli survival observed in the present studies is unclear. One possible explanation for the inverse relationship between intra-macrophage E. coli survival in these experiments and colitis severity in prior experiments is that the macrophages used in the present study were obtained from C57BL/6 mice, whereas the colitis model requires mice on the SvEv/129 genetic background.
It is known that SvEv/129 mice, but not C57BL/6 mice, are naturally deficient in Slc11a1 (Nramp1), a macrophage-expressed gene that functions to protect mice from certain intracellular bacterial infections [38,39]. Therefore, our findings in BMDMs from C57BL/6 mice may not be applicable to Slc11a1-deficient SvEv/129 mice, which have a baseline defect in the killing of intracellular microbes. Nonetheless, we believe that our results highlight a potentially important pathway by which E. coli protect themselves from host immune responses. In summary, we have identified a novel mechanism by which some E. coli increase transcription of ibpAB and have shown that upregulation of ibpAB enhances survival of a non-pathogenic E. coli strain in macrophages. Further investigation of these proteins in other non-pathogenic and pathogenic bacterial strains in disease models will help clarify the role they play as virulence factors in infectious and inflammatory disease pathogenesis.