text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Identification and Characterization of the Thrombin Binding Sites on Fibrin*
Thrombin binds to fibrin at two classes of non-sub- strate sites, one of high affinity and the other of low affinity. We investigated the location of these thrombin binding sites by assessing the binding of thrombin to fibrin lacking or containing (cid:103) (cid:42) chains, which are fibrinogen (cid:103) chain variants that contain a highly anionic car-boxyl-terminal sequence. We found the high affinity thrombin binding site to be located exclusively in D domains on (cid:103) (cid:42) chains ( K a , 4.9 (cid:51) 10 6 M (cid:50) 1 ; n , 1.05 per (cid:103) (cid:42) chain), whereas the low affinity thrombin binding site was in the fibrin E domain ( K a , 0.29 (cid:51) 10 6 M (cid:50) 1 ; n , 1.69 per molecule). The amino-terminal (cid:98) 15–42 fibrin sequence is an important constituent of low affinity binding, since thrombin binding at this site is greatly diminished in fibrin molecules lacking this sequence. The tyrosine- sulfated, thrombin exosite-binding hirudin peptide, S-Hir 53–64 (hirugen), inhibited both low and high affinity thrombin binding to fibrin (IC 50 1.4 and 3.0 (cid:109) M , respec- tively). The presence of the high affinity (cid:103) (cid:42) chain site on fibrinogen molecules did not inhibit fibrinogen conver- sion to fibrin as assessed by thrombin time measurements, and thrombin exosite
Thrombin binds to its substrate fibrinogen in the central amino-terminal region and cleaves fibrinopeptides A and B from the A␣ and B chains, respectively, converting fibrinogen to fibrin. The thrombin-fibrinogen binding interaction is mediated through an anion-binding fibrinogen recognition exosite in thrombin (1)(2)(3) that is situated in an extended patch of positively charged residues in the region of the thrombin loop segment centered around Lys 70 -Glu 80 (4). The exosite also binds to heparin cofactor II (5), the platelet or endothelial cell thrombin receptor (6), thrombomodulin (7,8), GPIb␣ 1 (9), as well as to a strongly anionic sequence in the carboxyl-terminal region of the leech thrombin inhibitor, hirudin (10 -15).
In addition to binding to fibrinogen at its substrate site, thrombin binds to fibrin at a "non-substrate" site(s) (1, 2, 16 -18). It is commonly believed that non-substrate binding takes place at the same location as fibrinogen substrate binding, namely the central E domain. As determined from binding experiments with 125 I-thrombin by Liu et al. (19), two classes of non-substrate sites exist in fibrin, one of "high" affinity (K a , ϳ6 ϫ 10 5 M Ϫ1 ) and the other of "low" affinity (K a , ϳ7 ϫ 10 4 M Ϫ1 ). Hogg and Jackson (20) also found two classes of sites in fibrin with affinity constants of 3.3 ϫ 10 6 and 3.0 ϫ 10 4 , respectively. It has been inferred from available information that all non-substrate thrombin binding, especially that of high affinity, is in the E domain (2), although to our knowledge this subject has not been specifically addressed.
Human fibrinogen is chromatographically separable into two major components ("peak 1" and "peak 2"), which differ with respect to the structure of their ␥ chains (21). Dimeric peak 1 fibrinogen molecules each contain two ␥ A chains (␥1-411V), whereas peak 2 fibrinogen molecules, which amount to ϳ15% of the total fibrinogen population (22), have one ␥ A and one ␥Ј chain (␥1-427L) (23,24). Similar ␥ chain variants have been identified in rodent (25,26) and bovine 2 fibrinogens and may exist in other animal species as well (27). In humans, ␥Ј chains arise through alternative processing of the primary mRNA transcript (28) and differ structurally in their COOH-terminal sequences in that ␥ A chain residues 408 -411 are replaced in ␥Ј chains by an anionic 20 amino acid sequence (24,29). In rats (25,30) and cows 2 ␥ A 408 -411 is replaced by a shorter but homologous sequence (Table I). The rat and human ␥Ј chains are tyrosine-sulfated at ␥Ј418 (31,32) and also at ␥Ј422 in humans. 2 ␥ A and ␥Ј chains are functionally equivalent with respect to factor XIIIa-catalyzed cross-linking (23), but unlike the ␥ A chain, ␥Ј chains lack the complete platelet binding sequence, ␥ A 400 -411, and therefore do not support ADP-induced fibrinogen binding or platelet aggregation (33)(34)(35). Our group has recently presented evidence that plasma factor XIII binds specifically to ␥Ј chains (36), but little else is known about its functions. In this report we present compelling evidence that the anionic carboxyl-terminal ␥Ј chain sequence situated in the fibrin D domain constitutes the high affinity thrombin binding site, which is itself separate and distinct from the low affinity thrombin binding sites that reside in the central E domain.
MATERIALS AND METHODS
Human fibrinogen fraction I-2 was isolated from normal citrated plasma by glycine precipitation (37) and separated into peaks 1 and 2 fibrinogen by anion exchange chromatography on DEAE-cellulose (36). Des-B1-42 fibrinogen was produced from peak 1 or peak 2 fibrinogen by digestion with Crotalus atrox protease III (38). Fibrinogen concentrations were determined spectrophotometrically at 280 nm using an absorbance coefficient of 1.51 ml mg Ϫ1 cm Ϫ1 (22). Molecular weights of 340,000 and 325,000 were used for fibrinogen and des B1-42 fibrinogen, respectively (38,39).
Fibrin-Sepharose was prepared by coupling CNBr-activated Sepharose with peak 2 fibrinogen and then converting the resin-bound fibrinogen to fibrin in the presence of thrombin (2 units/ml) for 16 h at 4°C as described by Heene and Mathias (40). The fibrin-Sepharose was washed with 1.0 M NaCl, 50 mM HEPES pH 7.4 buffer, followed by 100 mM NaCl, 50 mM HEPES pH 7.4 buffer containing 50 mM CaCl 2 and 2 mM phenylmethylsulfonyl fluoride.
Human ␣-thrombin (specific activity, 3.04 units/g) was obtained from Enzyme Research Laboratories, Inc., South Bend, IN. A molecular weight of 36,500 and an absorbance coefficient of 1.83 ml mg Ϫ1 cm Ϫ1 were used for calculating thrombin concentrations (41). PPACK-thrombin was prepared by adding a 5-fold molar excess of PPACK (Calbiochem) to ␣-thrombin and after dialysis the mixture was labeled with 125 I (42). The labeled protein was separated from free iodine by affinity chromatography on peak 2 fibrin-Sepharose CL-4B that had been equilibrated with 50 mM HEPES, 100 mM NaCl, pH 7.4, buffer containing 0.01% (w/v) PEG 8000. Elution of thrombin was achieved with HEPES buffer, pH 7.4, containing either 500 mM NaCl or 40 mM CaCl 2 .
Factor XIII (1.95 units/g) was prepared from pooled human plasma (43) and the activity assayed by the method of Loewy et al. (44). Factor XIII (500 units/ml) in 100 mM NaCl, 50 mM HEPES, pH 7.4, was activated to XIIIa in the presence of 500 M dithiothreitol and 10 mM CaCl 2 by incubation with thrombin (10 units/ml, final) for 30 min at 37°C (45).
Thrombin-fibrin binding experiments were performed using a modification of the method reported by Liu et al. (19). Fibrin monomer solutions were prepared from fibrinogen clotted at 1 mg/ml in 60 mM NaH 2 PO 4 buffer, pH 6.4, with thrombin (1 unit/ml, final) for 2 h at room temperature. The clots were synerized and dissolved in 20 mM acetic acid to Ͼ10 mg/ml fibrin and repolymerized in a 10-fold excess of 100 mM NaCl, 50 mM Tris, pH 7.4, buffer containing 40 mM CaCl 2 and 2 mM N-ethylmaleimide. These clots were synerized and dissolved in 20 mM acetic acid to a 10 mg/ml stock solution. Clots containing 0.5 or 1 nmol of fibrin were formed by adding a fibrin monomer solution to a 100 mM NaCl, 50 mM HEPES, 0.01% (w/v) PEG 8000, pH 7.4, buffer containing varying amounts of 125 I-labeled PPACK-thrombin and incubated for 2 h at room temperature. Clot-bound thrombin was separated from free thrombin by syneresis of the clot. The final concentration of reactants in the clotting mixture were fibrin, 2.5 M, 125 I-PPACK-thrombin, 0 -37.5 M, in a final volume of 200 or 400 l. For clotting mixtures containing des-B1-42 fibrin, which polymerizes slowly and incompletely, full clot recovery (Ͼ95%) was assured by cross-linking the fibrin with factor XIIIa (25 units/ml) for 2 h at room temperature. After the incubation period, tubes were centrifuged and thrombin-bound clots separated from free thrombin by syneresis. The distribution of thrombin bound to the clot and free in solution was determined by radioactivity counting in a Packard Multi-prias 4 ␥ counter. The amount of thrombin trapped in the clot was estimated from the radioactive counts that were retained in cross-linked clots of peak 1 or des-B1-42 peak 1 fibrin in the presence of 25 M S-Hir 53-64 , which had been added to block thrombin exosite binding to fibrin.
The binding data were graphed as Scatchard plots (46). Data indicating a two-component system were deconvoluted by the method of Klotz and Hunston (47). It was not technically feasible to reach thrombin concentrations which saturated the low affinity site in samples of peak 2 fibrin that contained high levels of the high affinity component. In these experiments, the low affinity component was defined by peak 1 (␥ A ,␥ A ) fibrin values and was used for correcting high affinity values (47). High affinity thrombin binding to des B1-42 peak 2 fibrin was not significantly affected by a low affinity binding component, and these data were therefore not corrected. The level of thrombin entrapment in the clots (Յ4% of total counts) did not significantly effect binding parameters, and therefore no corrections were applied to the data.
Competitive binding experiments involving thrombin anionic exosite binding were performed with the sulfated hirudin peptide, S-Hir 53-64 , which was a generous gift from Dr. John Maraganore of Biogen Inc., Cambridge, MA. Hirugen at concentrations up to 40 M was added to 125 I-PPACK-thrombin (1 M) and 0.5 nmol of fibrin at a final volume of 200 l as described above for thrombin binding measurements. Peptide concentrations were estimated spectrophotometrically at 215 nm using an absorbance coefficient of 15.0 ml mg Ϫ1 cm Ϫ1 (48).
A Fibrometer Precision Coagulation Timer (BBL Microbiology Systems) was used to determine the thrombin time for the conversion of fibrinogen (1 mg/ml final) to fibrin in 50 mM Tris, 100 mM NaCl, pH 7.4 at 37°C at a thrombin level of 0.6 unit/ml. Hydrolysis of S-2238 (H-D-phenylalanyl-L-pipecolyl-L-arginine-p-nitroanilide dihydrochloride; Chromogenix, Mölndal, Sweden) by thrombin (3.2 nM) in 0.10 M NaCl, 0.05 M Tris, pH 7.5 buffer, was monitored at 405 nm at room temperature. Samples contained S-2238 (50 M), with or without peak 1 fibrin (1 M), or peak 2 fibrin (1 M). The hydrolysis rate was estimated from the increase in absorbance at 405 nm during the first 3 min of the reaction.
RESULTS
Thrombin Binding to Fibrin-In our studies of thrombin binding to fibrin we found it useful as a general condition to covalently cross-link the fibrin polymer in the presence of factor XIIIa during the binding experiment in order to assure complete fibrin recovery (Ͼ95%). This procedure was particularly useful for recovering des B1-42 fibrin clots, which polymerize slowly and incompletely in the absence of cross-linking (49). There were no significant differences in thrombin binding behavior to cross-linked and non-cross-linked fibrin (Fig. 1), confirming the findings of Liu et al. (50). Thrombin entrapment in the clot, as assessed in the presence of 25 M S-Hir 53-64 , was Յ4% of the total counts and did not significantly change any of the calculated binding parameters.
Low and High Affinity Binding Sites-Our previous study with des-B1-42 fibrin had indicated that the 15-42 sequence was a component of the non-substrate thrombin binding site in the fibrin E domain (49). To extend those observations we carried out a systematic study of non-substrate thrombin binding to several fibrin preparations that differed with respect to their ␥ chain composition, their B1-42 content, or both. Fraction I-2 fibrin, which has ϳ15% ␥Ј-containing molecules (22), was studied first (Fig. 2). As assessed from the Scatchard plot, our results correspond to those reported by Liu et al. (19), who studied a similar fibrinogen subfraction. The data indicate two classes of binding sites, one of high affinity (K a , 5.5 ϫ 10 6 M Ϫ1 ) and the other of low affinity (K a , 0.45 ϫ 10 6 M Ϫ1 ) ( Table II). Studies of thrombin binding to peak 1 fibrin, which contains only ␥ A chains, indicated a single class of binding site with a K a of 0.21 ϫ 10 6 M Ϫ1 , corresponding to the low affinity site in fraction I-2 fibrin, and having a binding stoichiometry of 1.80 per molecule of fibrin (Fig. 2). Parallel analysis of thrombin
V R P E H P A E T E Y E S L Y P E D D L
Rat ␥Ј (408-419)
V S V E H E V D V E Y P
Bovine ␥Ј (408-419)
V R V E H H V E I E Y D
Hirudin (53)(54)(55)(56)(57)(58)(59)(60)(61)(62)(63)(64)(65) N G D F E E I P E E Y L Q Thrombin Binding Sites on Fibrin binding to peak 2 fibrin demonstrated that high affinity binding dominated the Scatchard plot and that there were 0.83 high affinity sites per fibrin molecule (Fig. 3), a stoichiometry that corresponds well to the ␥Ј chain content in peak 2 fibrinogen preparations (48% ␥Ј, 52% ␥ A ) (51). Low affinity binding in peak 2 fibrin was too low for accurate quantitation, but was in the same range as was found for peak 1 or fraction I-2 fibrin. There was a marked reduction of low affinity binding to des-B1-42 peak 2 fibrin (Fig. 4), and therefore no corrections to the high affinity values were applied for the presence of a low affinity component. In the case of des-B1-42 peak 1 fibrin, which lacks a high affinity binding site, reduced levels of low affinity thrombin binding were found (Fig. 4) and exceeded the amount that could be attributed to entrapment alone. The estimated K a (0.11 ϫ 10 6 M Ϫ1 ) was 38% of that found for peak 1 or fraction I-2 fibrin, but the stoichiometry was the same (i.e. 1.66 sites per molecule).
Thrombin Exosite-binding Peptide-To provide additional evidence that the ␥Ј sequence contains the high affinity site for thrombin exosite binding, we evaluated thrombin binding in the presence of S-Hir 53-64 , a well characterized thrombin exosite binding peptide, to des-B1-42 peak 2 (high affinity) or peak 1 (low affinity) fibrin. S-Hir 53-64 was an effective competitive inhibitor of thrombin binding to fibrin with an IC 50 of 3.0 M for high affinity thrombin binding and 1.4 M for low affinity binding (Fig. 5), thus indicating that both classes of sites bind thrombin through its exosite.
Fibrinogen to Fibrin Conversion and S-2238 Hydrolysis-The mean thrombin times for peak 1 and peak 2 fibrinogens were 20.5 Ϯ 0.5 and 20.4 Ϯ 0.5 s (n ϭ 5), respectively, indicating that the presence of the ␥Ј sequence had no measurable effect on thrombin substrate cleavage of fibrinogen. Hydrolysis of S-2238 was not inhibited by the presence of peak 1 fibrin or peak 2 fibrin in the hydrolysis mixture (Table III). DISCUSSION These present experiments show that there is a unique high affinity non-substrate binding site for thrombin in the carboxyl-terminal region of the ␥Ј chain and a low affinity class of binding site in the amino-terminal region of fibrin, the latter contained in part within the B1-42 sequence. In studies of fraction I-2 fibrinogen, which contains approximately 8% ␥Ј chains, we detected the same two classes of binding sites that were identified by Liu et al. (19). The binding affinities we determined were about 10-fold higher for high affinity binding and 4-fold higher for low affinity binding (Table II). In peak 1 fibrin (␥ A ,␥ A ) only the low affinity binding component was observed, whereas with peak 2 fibrin (␥Ј,␥ A ), there was increased high affinity thrombin binding corresponding to the increased content of ␥Ј chains. Overall, high affinity binding stoichiometry corresponds well to the content of ␥Ј chains, with one thrombin per ␥Ј chain.
Although the existence and structure of the ␥Ј chain has been known for many years (21,23,51), its role as the high affinity non-substrate thrombin binding site in fibrin has been over- looked for several reasons. First, it has been generally assumed that the entire thrombin binding site in fibrin was a residual of the substrate recognition site in fibrinogen. Thus, knowledge that there were two classes of binding sites in fibrin, coupled with the observation that high affinity thrombin binding was only a minor component of the total binding reaction in fraction I-2 fibrin (19,20), evidently did not raise suspicion of another possible thrombin-binding location. Second, most investigations on this subject have involved only central E domain structures (17,18,52,53) or in addition, plasmic D fragments (17,18) from which the ␥Ј sequence had most likely been cleaved (54) or which had a low content of ␥Ј-containing molecules to begin with (i.e. fraction I-2) (19,55). Studies of thrombin binding to immobilized fibrin (1,16,56) or to a modified fibrin clot (des-B1-42 fibrin) (49) could not have distinguished the specific location of any binding site.
We would therefore revise the current belief that all nonsubstrate thrombin binding takes place in the fibrin E domain, to stipulate that only low affinity thrombin binding takes place in this region. We would concur, however, with the notion that thrombin binding in the E domain is likely to represent a residual aspect of the site that participated in fibrinogen substrate recognition. Scatchard analyses indicated a stoichiometry of 1.69 thrombin molecules per fibrin molecule, suggesting that there are two low affinity sites in each dimeric fibrin molecule, corresponding to a fibrinogen substrate recognition site for each pair of fibrinopeptides (FPA, FPB). Whether recognition site binding is the same for FPA and FPB cleavage has yet to be determined.
Unlike the high affinity binding site in the ␥Ј chain, formation of the low affinity site in the E domain is not restricted to a single peptide sequence. Consistent with a previous report (49), our current data suggest that the 15-42 sequence contributes significantly to non-substrate binding and that ϳ60% of low affinity binding is lost by removal of this sequence. Other evidence suggests that the fibrin A␣27-50 sequence contributes as well to low affinity thrombin binding (18,53). The ␥ chains in the E domain have also been proposed as contributors to the thrombin binding site (17,18), but the evidence for this is not well substantiated.
Fibrinogens New York I (des-B9 -72) and Naples I (B A68 T) are dysfibrinogenemias, which have been characterized as having impaired thrombin binding (57,58), presumably related to a defective amino-terminal substrate or non-substrate binding site. A recent study of recombinant ␥ A -type B A68 T fibrinogen has reaffirmed the importance of B68 alanine in thrombin-mediated cleavage of Naples I fibrinogen (59). In the case of New York I, which is heterozygotic, thrombin binding to fibrin was 50% of normal, but there was no evidence to suggest a high affinity thrombin binding component (57). Similarly, thrombin binding to homozygous Naples I fibrin was reported to be absent (58), and thus there was also no collateral evidence for high affinity thrombin binding to the presumably normal Naples I ␥Ј chain. However, in another report on this same family, thrombin binding to fibrin from a homozygous proband was reduced to only one-third of normal (60). The available data derived from studies on Naples I fibrin do not permit an unambiguous distinction to be made as to the presence or absence of a high affinity binding component, although we would have expected only low affinity binding to have been affected.
Direct measurements of thrombin binding to substrate fibrinogen molecules have not been reported, owing to the fact that thrombin binding to its substrate is accompanied by con- (61)(62)(63)(64)(65)(66). Nevertheless, the ␥Ј site itself in fibrinogen is not an effective competitor for thrombin binding and cleavage at the fibrinogen substrate site, as assessed by our thrombin time measurements in this study and in another (22). It therefore seems likely that the substrate binding site itself will prove to have a higher binding affinity for thrombin than has been estimated previously from K m measurements, by analogy with hirudin, which has a higher binding affinity for thrombin as a bivalent molecule than does its COOH-terminal exosite binding sequence alone. The physiological role that the ␥Ј sequence plays in modulating thrombin function still remains to be determined. It is very likely that the measurable thrombin clotting activity found in fibrin and fibrin degradation products (67)(68)(69)(70) is attributable to non-substrate binding at the ␥Ј site, or the low affinity site, or at both sites. In light of our present findings, it will be important to study the relationship between thrombin binding to ␥Ј-containing fibrin and thrombin activation of coagulation factors such as factors V, VIII, or XIII or cellular receptors such as those on platelets and endothelial cells. | 4,874.8 | 1996-09-20T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Stein's density approach and information inequalities
We provide a new perspective on Stein's so-called density approach by introducing a new operator and characterizing class which are valid for a much wider family of probability distributions on the real line. We prove an elementary factorization property of this operator and propose a new Stein identity which we use to derive information inequalities in terms of what we call the \emph{generalized Fisher information distance}. We provide explicit bounds on the constants appearing in these inequalities for several important cases. We conclude with a comparison between our results and known results in the Gaussian case, hereby improving on several known inequalities from the literature.
Introduction
Charles Stein's crafty exploitation of the characterization X ∼ N (0, 1) ⇐⇒ E f ′ (X) − Xf (X) = 0 for all bounded f ∈ C 1 (R) (1.1) has given birth to a "method" which is now an acclaimed tool both in applied and in theoretical probability.The secret of the "method" lies in the structure of the operator T φ f (x) := f ′ (x) − xf (x) and in the flexibility in the choice of test functions f .For the origins we refer the reader to [39,37,36]; for an overview of the more recent achievements in this field we refer to the monographs [27,3,11] or the review articles [26,30].
Among the many ramifications and extensions that the method has known, so far the connection with information theory has gone relatively unexplored.Indeed while it has long been known that Stein identities such as (1.1) are related to information theoretic tools and concepts (see, e.g., [19,21,13]), to the best of our knowledge the only references to explore this connection upfront are [4] in the context of compound Poisson approximation, and more recently [32,31] for Poisson and Bernoulli approximation.In this paper and the companion paper [22] we extend Stein's characterization of the Gaussian (1.1) to a broad class of univariate distributions and, in doing so, provide an adequate framework in which the connection with information distances becomes transparent.
The structure of the present paper is as follows.In Section 2 we provide the new perspective on the density approach from [38] which allows to extend this construction to virtually any absolutely continuous probability distribution on the real line.In Section 3 we exploit the structure of our new operator to derive a family of Stein identities through which the connection with information distances becomes evident.In Section 4 we compute bounds on the constants appearing in our inequalities; our method of proof is, to the best of our knowledge, original.Finally in Section 5 we discuss specific examples.
The density approach
Let G be the collection of positive real functions x → p(x) such that (i) their support S p := {x ∈ R : With this setup in hand we are ready to provide the two main definitions of this paper (namely, a class of functions and an operator) and to state and prove our first main result (namely, a characterization).
Definition 2.1.To p ∈ G we associate (i) the collection F(p) of functions f : R → R such that the mapping x → f (x)p(x) is differentiable on the interior of S p and f (a + )p(a + ) = f (b − )p(b − ) = 0, and (ii) the operator We call F(p) the class of test functions associated with p, and T p the Stein operator associated with p.
Theorem 2.1.Let p, q ∈ G and let Proof.If Q(b) = 0 the statement holds trivially.We now take Q(b) > 0. To see the sufficiency, note that the hypotheses on f , p and q guarantee that To see the necessity, first note that the condition R T p f (y)q(y)dy = 0 implies that the function y → T p f (y)q(y) be Lebesgue-integrable.Next define for z ∈ R the function Then the function belongs to F(p) for all z and satisfies the equation for all x ∈ S p .For this choice of test function we then obtain with Q(z) := z a q(u)du.Since this integral equals zero by hypothesis, it follows that Q(z) = P (z)Q(b) for all z ∈ S p , hence the claim holds.
The above is, in a sense, nothing more than a peculiar statement of what is often referred to as a "Stein characterization".Within the more conventional framework of real random variables having absolutely continuous densities, Theorem 2.1 reads as follows.
Corollary 2.1 (The density approach).Let X be an absolutely continuous random variable with density p ∈ G. Let Y be another absolutely continuous random variable.Then E [T p f (Y )] = 0 for all f ∈ F(p) if, and only if, either P(Y ∈ S p ) = 0 or P(Y ∈ S p ) > 0 and for all z ∈ S p .
Corollary 2.1 extends the density approach from [38] or [10,11] to a much wider class of distributions; it also contains the Stein characterizations for the Pearson given in [33] and the more recent general characterizations studied in [14,17].There is, however, a significant shift operated between our "derivative of a product" operator (2.1) and the standard way of writing these operators in the literature.Indeed, while one can always distribute the derivative in (2.1) to obtain (at least formally) the expansion the latter requires f be differentiable on S p in order to make sense.We do not require this, neither do we require that each summand in (2.2) be well-defined on S p nor do we need to impose integrability conditions on f for Theorem 2.1 (and thus Corollary 2.1) to hold!Rather, our definition of F(p) allows to identify a collection of minimal conditions on the class of test functions f for the resulting operator T p to be orthogonal to p w.r.t. the Lebesgue measure, and thus characterize p.
Example 2.1.Take p = φ, the standard Gaussian.Then F(φ) is composed of all real-valued functions f such that (i) x → f (x)e −x 2 /2 is differentiable on R and (ii) lim x→±∞ f (x)e −x 2 /2 = 0.In particular F(φ) contains the collection of all differentiable bounded functions and which is Stein's well-known operator for characterizing the Gaussian (see, e.g., [36,3,11]).There are of course many other subclasses that can be of interest.For example the class F(φ) also contains the collection of functions f (x) = −f ′ 0 (x) with f 0 a twice differentiable bounded function; for these we get , the generator of an Ornstein-Uhlenbeck process, see [2,18,27].The class F(φ) as well contains the collection of functions of the form f (x) = H n (x)f 0 (x) for H n the n-th Hermite polynomial and f 0 any differentiable and bounded function.For these f we get an operator already discussed in [16] (equation ( 38)).
Example 2.2.Take p = Exp the standard rate-one exponential distribution. Then In particular F(Exp) contains the collection of all differentiable bounded functions such that f (0) = 0 and the operator usually associated to the exponential, see [24,28,38].The class F(Exp) also contains the collection of functions of the form f (x) = xf 0 (x) for f 0 any differentiable bounded function.For these f we get an operator put to use in [9].
Example 2.3.Finally take p = Beta(α, β) the beta distribution with parameters (α, an operator recently put to use in, e.g., [17,14]. There are obviously many more distributions that can be tackled as in the previous examples (including the Pearson case from [33]), which we leave to the interested reader.
Stein-type identities and the generalized Fisher information distance
It has long been known that, in certain favorable circumstances, the properties of the Fisher information or of the Shannon entropy can be used quite effectively to prove information theoretic central limit theorems; the early references in this vein are [35,6,5,23].Convergence in information CLTs is generally studied in terms of information (pseudo-)distances such as the Kullback-Leibler divergence between two densities p and q, defined as or the Fisher information distance which measures deviation between any density q and the standard Gaussian φ.Though they allow for extremely elegant proofs, convergence in the sense of (3.1) or (3.2) results in very strong statements.Indeed both (3.1) and (3.2) are known to dominate more "traditional" probability metrics.More precisely we have, on the one hand, Pinsker's inequality for d TV (p, q) the total variation distance between the laws p and q (see, e.g., [15, p. 429]), and, on the other hand, for d L 1 (φ, q) the L 1 distance between the laws φ and q (see [20,Lemma 1.6]).These information inequalities show that convergence in the sense of (3.1) or (3.2) implies convergence in total variation or in L 1 , for example.Note that one can further use De Brujn's identity on (3.3) to deduce that convergence in Fisher information is itself stronger than convergence in relative entropy.While Pinsker's inequality (3.3) is valid irrespective of the choice of p and q (and enjoys an extension to discrete random variables), both (3.2) and (3.4) are reserved for Gaussian convergence.Now there exist extensions of the distance (3.2) to non-Gaussian distributions (see [4] for the discrete case) which, as could be expected, have also been shown to dominate the more traditional probability metrics.There is, however, no general counterpart of Pinsker's inequality for the Fisher information distance (3.2); at least there exists, to the best of our knowledge, no inequality in the literature which extends (3.4) to a general couple of densities p and q.
In this section we use the density approach outlined in Section 2 to construct Stein-type identities which provide the required extension of (3.4).More precisely, we will show that a wide family of probability metrics (including the Kolmogorov, the Wasserstein and the L 1 distances) is dominated by the quantity Our bounds, moreover, contain an explicit constant which will be shown in Section 4 to be at worst as good as the best bounds in all known instances.In the spirit of [4] we call (3.5) the generalized Fisher information distance between the densities p and q, although here we slightly abuse of language since (3.5) rather defines a pseudo-distance than a bona fide metric between probability density functions.We start with an elementary statement which relates, for p = q, the Stein operators T p and T q through the difference of their respective score functions p ′ p and q ′ q .
Lemma 3.1.Let p and q be probability density functions in G with respective supports S p and S q .Let S q ⊆ S p and define Suppose that F(p) ∩ F(q) = ∅.Then, for all f ∈ F(p) ∩ F(q), we have and therefore Proof.Splitting S p into S q ∪ {S p \ S q }, we have f (y)p(y) = f (y)q(y)p(y)/q(y)I Sq (y) + f (y)p(y)I Sp\Sq (y) for any real-valued function f .At any x in the interior of S p we thus can write The first claim readily follows by simplification, the second by taking expectations under q which cancels the first term T q f (x) (by definition) as well as the third term T p f (x)I Sp\Sq (x) (since the supports do not coincide).
Remark 3.1.Our proof of Lemma 3.1 may seem circumvoluted; indeed a much easier proof is obtainable by writing T p under the form (2.2).We nevertheless stick to the "derivative of a product" structure of our operator because this dispenses us with superfluous -and, in some cases, unwanteddifferentiability conditions on the test functions.
From identity (3.6) we deduce the following immediate result, which requires no proof.Lemma 3.2.Let p and q be probability density functions in G with respective supports S q ⊆ S p .Let l be a real-valued function such that E p [l(X)] and E q [l(X)] exist; also suppose that there exists f ∈ F(p) ∩ F(q) such that we denote this function f p l .Then The identity (3.8) belongs to the family of so-called "Stein-type identities" discussed for instance in [16,7,1].In order to be of use, such identities need to be valid over a large class of test functions l.Now it is immediate to write out the solution f p l of the so-called "Stein equation" (3.7) explicitly for any given p and l; it is therefore relatively simple to identify under which conditions on l and q the requirement f p l ∈ F(q) is verified (since f p l ∈ F(p) is anyway true).Remark 3.2.For instance, for p = φ the standard Gaussian, one easily sees that lim x→±∞ f φ l (x) = 0, hence, when S q = S φ = R, q only has to be (differentiable and) bounded for f φ l to belong to F(q).However, when S q ⊂ R, then q has to satisfy, moreover, the stronger condition of vanishing at the endpoints of its support S q since f φ l needs not equal zero on any finite points in R.
We shall see in the next section that the required conditions for f p l ∈ F(q) are satisfied in many important cases by wide classes of functions l.The resulting flexibility makes (3.8) a surprisingly powerful identity, as can be seen from our next result.Theorem 3.1.Let p and q be probability density functions in G with respective supports S q ⊆ S p and such that F(p) ∩ F(q) = ∅.Let for some class of functions H. Suppose that for all l ∈ H the function f p l , as defined in (3.7), exists and satisfies f p l ∈ F(p) ∩ F(q).Then where the generalized Fisher information distance between the densities p and q.
This theorem implies that all probability metrics that can be written in the form (3.9) are bounded by the generalized Fisher information distance J (p, q) (which, of course, can be infinite for certain choices of p and q).Equation (3.10) thus represents the announced extension of (3.4) to any couple of densities (p, q) and hence constitutes, in a sense, a counterpart to Pinsker's inequality (3.3) for the Fisher information distance.We will see in Section 5 how this inequality reads for specific choices of H, p and q.
Bounding the constants
The constants κ p H in (3.11) depend on both densities p and q and therefore, to be fair, should be denoted κ p,q H .Our notation is nevertheless justified because we always have where the latter bounds (sometimes referred to as Stein factors or magic factors) do not depend on q and have been computed for many choices of H and p.Consequently, κ p H is finite in many known cases -including, of course, that of a Gaussian target.
Bounds such as (4.1) are sometimes too rough to be satisfactory.We now provide an alternative bound for κ p H which, remarkably, improves upon the best known bounds even in well-trodden cases such as the Gaussian.We focus on target densities of the form with S a scale-invariant subset of R (that is, either R or the open/closed positive/negative real half lines), d > 0 some constant and c the appropriate normalizing constant.The exponential, the Gaussian or the limit distribution for the Ising model on the complete graph from [10] are all of the form (4.2).Of course, for S = R, (4.2) represents power exponential densities.
Theorem 4.1.Take p ∈ G as in (4.2) and q ∈ G such that S q = S. Consider h : R → R some Borel function with p-mean E p [h(X)] = 0. Let f p h be the unique bounded solution of the Stein equation Proof.Under the assumption that E p [h(X)] = 0, the unique bounded solution of (4.3) is given by where I − = 0 (resp., We first tackle I − .Setting p(x) = ce −d|x| α I S (x) and using Jensen's inequality, we get where the last equality follows from a simple change of variables.Applying Hölder's inequality we obtain where γ q = P q (X < 0) := 0 −∞ q(x)dx.Repeating the Jensen's inequalitychange of variables-Hölder's inequality scheme once more yields .
Iterating this procedure m ∈ N times we deduce Since the mapping y → η(y) := e d|y| α y −∞ e −d|u| α du attains its maximal value at 0 for α ≥ 1 (indeed, hence η is monotone increasing), the interior of the parenthesis becomes Note that here we have used, for any support S, 0 −∞ ce −d|u| α du ≤ 1. Elevated to the power 1/(2m), this factor tends to 1 as m → ∞.Since we also have lim m→∞ N (m) = 2 we finally obtain Similar manipulations allow to bound I + by (||h||∞) 2 2 2 α P q (X > 0).Combining both bounds then allows us to conclude that , hence the claim holds.
This result of course holds true without worrying about f p h ∈ F(q).However, in order to make use of these bounds in the present context, the latter condition has to be taken care of.For densities of the form (4.2), one easily sees that f p h ∈ F(q) for all (differentiable and) bounded densities q for α > 1, with the additional assumption, for α = 1, that lim x→±∞ q(x) = 0.
Example 4.2.Take p = φ, the standard Gaussian.Then, from (4.4), Comparing with the bounds from Example 4.1 we see that (4.5) significantly improves on the constants in cases (i) and (iii); it is slightly worse in case (ii).
Applications
A wide variety of probability distances can be written under the form (3.9).For instance the total variation distance is given by 1] the class of Borel functions in [−1, 1], the Wasserstein distance is given by d W (p, q) = sup with H Lip1 the class of Lipschitz-1 functions on R and the Kolmogorov distance is given by with H HL the class of indicators of lower half lines.We refer to [15] for more examples and for an interesting overview of the relationships between these probability metrics.Specifying the class H in Theorem 3.1 allows to bound all such probability metrics in terms of the generalized Fisher information distance (3.12).It remains to compute the constant (3.11), which can be done for all p of the form (4.2) through (4.4).The following result illustrates these computations in several important cases.
Corollary 5.1.Take p ∈ G as in (4.2) and q ∈ G such that S q = S.For α > 1, suppose that q is (differentiable and) bounded over S; for α = 1, assume moreover that q vanishes at the infinite endpoint(s) of S. Then we have the following inequalities: If, for all y ∈ S, q is such that the function f p l (x) = e d|x| α (I [y,b) (x) − P (x)), where P denotes the cumulative distribution function associated with p, belongs to F(q), then Proof.The first three points follow immediately from the definition of the distances and Theorems 3.1 and 4.1.To show the fourth, note that for l y (x) = δ {x=y} the Dirac delta function in y ∈ S. The computation of the constant κ p H in this case requires a different approach from our Theorem 4.1.We defer this to the Appendix.
We conclude this section, and the paper, with explicit computations in the Gaussian case p = φ, hence for the classical Fisher information distance.From here on we adopt the more standard notations and write J (X) instead of J (φ, q), for X a random variable with density q (which has support R).
Immediate applications of the above yield
which is the second inequality in [20, Lemma 1.6] (obtained by entirely different means).Similarly we readily deduce this is a significant improvement on the constant in [20,35].
Next further suppose that X has density q with mean µ and variance σ 2 .Take Z ∼ p with p = φ µ 0 ,σ 2 0 , the Gaussian with mean µ 0 and variance σ 2 0 .Then where I(X) = E q (q ′ (X)/q(X)) 2 is the Fisher information of the random variable X.General bounds are thus also obtainable from (3.10) in terms of and the quantity Γ(X) = I(X) − 1 σ 2 0 , referred to as the Cramér-Rao functional for q in [25].In particular, we deduce from Theorem 4.1 and the definition of the total variation distance that Further specifying q = φ µ 1 ,σ 2 1 we see that to be compared with [27,Proposition 3.6.1].Lastly take Z ∼ φ the standard Gaussian and X d = F (Z) for F some monotone increasing function on R such that f = F ′ is defined everywhere.Then straightforward computations yield with ψ f = (log f ) ′ .In particular, if F is a random function of the form F (x) = Y x for Y > 0 some random variable independent of Z, then simple conditioning shows that the above becomes where q X refers to the density of X d = Y Z.This last inequality is to be compared with [8,Lemma 4.1] and also [34].For all densities q such that f p ly (x) ∈ F(q), Theorem 3.1 applies and yields sup y∈S |p(y)−q(y)| ≤ sup y∈S p(y) E q [(I [y,b) (X) − P (X)) 2 /(p(X)) 2 ] J (p, q), where b is either 0 or +∞.We now prove that sup y∈S p(y) E q [(I [y,b) (X) − P (X)) 2 /(p(X)) 2 ] ≤ 1 for p(x) = c e −d|x| α and any density q satisfying the assumptions of the claim.To this end note that straightforward manipulations lead to (P (y)) 2 + (1 − 2P (y))P q (X ≥ y) .
A Bounds for the supremum norm
This last expression is equal to 1.
1 p
p(x) (exists and) is positive} is an interval with closure Sp = [a, b], for some −∞ ≤ a < b ≤ ∞, (ii) they are differentiable (in the usual sense) at every point in (a, b) with derivative x → p ′ (x) := d dy p(y)| y=x and (iii) Sp p(y)dy = 1.Obviously, each p ∈ G is the density (with respect to the Lebesgue measure) of an absolutely continuous random variable.Throughout we adopt the convention 1 p(x) = (x) if x ∈ S p 0 otherwise; this implies, in particular, that p(x)/p(x) = I Sp (x), the indicator function of the support S p .As final notation, for p ∈ G we write E p [l(X)] := Sp l(x)p(x)dx. | 5,272.4 | 2012-10-15T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Triterpene Constituents of Euphorbia Erythradenia Bioss. and their Anti-HIV Activity.
Phytochemical investigation of the aerial parts of Euphorbia erythradenia Bioss. (Euphorbiaceae), one of Iranian endemic Euphorbias, with particular attention to triterpene constituents, using methanol solvent extraction was carried out. Five known triterpenes, including four cycloartanes and oleanolic acid, were isolated for the first time and identified using NMR and Mass techniques. Anti HIV activity of the isolated triterpenes and ingenoid diterpenes was evaluated using single cycle replicable HIV-1 (SCR HIV-1) virions. Molecular features of the most active compound (IC50 = 0.008 μM, CC50 = 3.264 μM, TI = 380.64), which showed higher therapeutic index than nevirapine, was assessed using molecular docking. Docking studies demonstrated three hydrogen bonds between HIV-1 virion protease active site and this compound with a distance less than 3 A° which can be responsible for the observed anti HIV-1 activity.
Introduction
The genus Euphorbia is the largest among the plant family Euphorbiaceae, comprising about 2000 known species consisting of annuals to trees. They all contain latex and have unique flower structures. About 70 species have been reported in Iran that 17 of them are endemic (1). E. erythradenia Bioss. is one of Iranian endemic spurges (2) which few phytochemical studies have been reported for this plant. Some species of this genus are used in folk medicines to cure skin diseases, gonorrhea, migraines, intestinal parasites, and warts (1) which most of them have been proved by modern phytochemical and pharmacological studies. In Iran, they are also used as a purgative.
The presence of several kinds of secondary metabolites including triterpenes, steroids, flavonoids, phenolics and aromatic compounds in Euphorbia species have been reported up until now (1,3), but what highlights the features of Euphorbia's secondary metabolites, is its specific terpenoids profile. Euphorbia's terpenoids, including diterpenes and triterpenes, have shown interesting various biological activities which are attracting medicinal researchers increasingly as a source of new potential drugs as well as lead compounds.
In this paper, as a part of our efforts to find new sources of bioactive terpenoids, one of Iranian Euphorbia species, E. erythradenia arial parts were phytochemically investigated and five known triterpene compounds extracted and their chemical structure were elucidated ( Figure 1). Considering the huge demand for finding new drugs for viral infections, specially acquired human immunodeficiency virus (HIV), anti HIV activity of the extracted triterpenoids plus the four previously reported ingenoid diterpenes from E. erythradenia (Figure 2), were evaluated, using single cycle replicable HIV virions. In order to find the important structural features of anti HIV-1 active compounds, ligand-protein binding studies were carried out using AutoDock software.
Plant material
Aerial flowering parts of E. erythradenia f. (Euphorbiaceae) were collected in September 2010 from populations growing in Gharbalbiz in the neighborhood of Mehriz city, Yazd province (I.R.Iran). Plant material was identified by Ali Mirhosseini, plant taxonomist (Agricultural and natural Resources Research Center of Yazd Province) and a voucher specimen (nos. 1947) was deposited in the Herbarium Center of Yazd (HCY).
Production of Pseudotyped Single Cycle Replicable HIV Virions
Single Cycle Replicable HIV-1 virions (SCR) were previously constructed and their characterization ascertained the production of infective HIV-1 virions with the capability of one cycle of replication (4, 5). Briefly, these single cycle replicable HIV-1 (SCR HIV-1) virions were produced by deleting a 2-kb fragment within the Pol region of the HIV-1 genome from the pNL4-3 strain. Pseudotyped SCR HIV-1 virions were constructed by co-transfection of HEK 293T cells with pmzNL4-3 (containing deleted genome), psPAX2, and pMD2G plasmids obtained from Addgene (www.addgene.org).
The pmzNL4-3 plasmid encodes the HIV-1 fulllength RNA, with packaging ability containing the aforementioned deletion in the Pol region; the psPAX2 plasmid encodes HIV Gag and Gag-Pro-Pol polyproteins, besides all the HIV-1 accessory proteins; and the pMD2G plasmid encodes the vesicular stomatitis virus surface glycoprotein (VSVG), which is required for assembly and the budding process of virus. These pseudotyped virions have the ability of infecting a broad spectrum of cell, even without the CD4 receptor (including Hela cells).
Inhibition of HIV p24 core antigen production (HIV replication)
Hela cells, which were used as target cells in this experiment, were seeded at a density of 6 × 10 4 cells per well in 96-well plates. Each well was infected with 600 ng of p24 single cycle replicable HIV-1 (SCR HIV-1) virions. After 24 h of virus adsorption, cells were washed 3 times with pre-warmed DMEM to remove free virus particles. Cells were then incubated for 48 hours in a total volume of 200 μL per well of fresh medium containing various concentrations of compounds 1-8. Nevirapine (a HIV-1/2 RT inhibitor) was used as positive control. After 48 h, the p24 antigen (Ag) assay was performed on the supernatants by using a quantitative p24 ELISA method (HIV p24 ELISA, BioMerieux, France), according to the manufacturer's protocol. The IC 50 of compounds 1-8 was calculated according to the method described by Cheng et al.(6). The therapeutic index (TI) was evaluated as the ratio of CC 50 to IC 50 .
XTT-based cytotoxicity assay
The cellular toxicity of compounds 1-8 in Hela cells was assessed using a cell proliferation XTT kit (Roche Diagnostics, Germany), as described previously (7). Briefly, cells were plated in duplicate in 96-well plates in the presence or absence of various concentrations of compounds 1-8. After incubation at 37°C with 5% CO2 for 3 days, 50 μL of prepared XTT mixture was added to each well. The cells were incubated for an additional 4 hours to allow the production of XTT formazan. Absorbance was measured using an ELISA plate reader (BioTek ELx800) at a test wavelength of 450 nm and a reference wavelength of 690 nm. Percent inhibition was calculated using the following formula: Inhibition (%) = [100 -(A t /A s )] × 100, where A s is the absorbance of the solvent and A t , of the test sample, respectively. The cytotoxic concentration that resulted in a reduction of the number of viable cells by 50% (CC 50 ) was calculated from dose response curves.
Docking studies using AutoDock software The high resolution crystal structure of HIV protease (PDB code: 1AJX) complexed with Aha001 was retrieved from PDB Protein Data Bank and its ligand was deleted from the active site. Compound 7 was constructed on HyperChem 8.0.3 version and minimized using AMBER force field molecular mechanics and PolakRiebiere algorithm with RMS gradient = 0.01 Kcal/mol.
The receptor was kept rigid, and ligand was allowed to be flexible. Polar hydrogens and Kollman united atom partial charges were added to the individual protein atoms. A docking grid box was built with 28, 28 and 28 points in 12.79, 23.30 and 5.85 directions in the catalytic site of protein and the number of generations and maximum number of energy evaluations was placed on 100 and 2,700,000, respectively. Docking results were clustered with a root mean square deviation (RMSD) of 0.5 Å and evaluated by Pymol software.
Structures elucidation
Compound 1a & 1b showed a pair of doublets in the up-field area (δ H 0.32, J = 4.0 Hz and 0.55, J = 4.0 Hz) that develops hypothesis of cycloartane backbone of them. It should be mentioned that a pair of doublets in the up-field area in 1 H-NMR spectrum is a wellknown characteristic of cyclopropane ring of cycloartanes between natural products. Based on this hypothesis comparison of 13 C-NMR spectral data of cycloartane structures and Compound 1a & 1b 13 C-NMR was performed which proved our hypothesis.
Without pentacyclic structure, based on DEPT and broadband (BB) decoupled 13 C-NMR, remaining carbons included three methyl groups (δ C 18.3, 17.2 and 17.6), four methylene groups that two of them are involved in olefinic bonds (δ C 31.5, 31.6 &110.9, 111.4), three methane groups which two of them are oxygenated (δ C 35.9, 76.3 and 76.7), and a couple of quaternary carbons which are involved in double bonds (δ C 147.5 and 147.7). The duplicity of most of the signals of the side chain in 13 C-NMR spectra (δ C 17. 2 and 17.6, 31.5 and 31.6, 110.9 and 111.4, 76.3 and 76.7, 147.5 and 147.7) and intensity of them in comparison with the remaining peaks with identical multiplicity and cycloartane structure as a basic skeleton, indicated a mixture of two epimers at one of side chain's carbon. In this way, taking one peak of each two close peaks, plus the remaining carbons of side chain, two methyl groups (2 CH 3 ), two methylene groups (CH 2 and =CH 2 ), two methine groups (CH and CHO), and an olefinic quaternary carbon were involved in side chain which totally were in accordance of
EI-MS m/z 315 [M -127] + and 297 [M -H 2 O -127] as a result of side chain fragmentation.
It is crystal clear that the quaternary olefinic carbon is linked to an olefinic methylene group and therefore, the 1 H-NMR signal for this methylene group has appeared as a doublet in the downfield region (δ H 4.88, J = 1.2 Hz and 4.93, J = 1.2 Hz). A triplet peak at δ H 4.02 (J = 6.4 Hz) with the integration equals to one hydrogen, demonstrated neighboring oxygenated methine to an aliphatic CH 2 in one side and quaternary carbon in another side. A singlet at δ H 1.72 cleared that one of methyl groups is connected to quaternary carbon while another methyl linked to a methine because of its doublet multiplicity (δ H 0.88, J = 5.6). Finally, considering these data and literature (8) compound 1a was determined as (24R)-cycloart-25-ene-3β,24-diol which was mixed with its C-24 epimer compound 1b; (24S)-cycloart-25-ene-3β,24-diol.
Cycloartane skeleton of compound 2 was confirmed by a pair of doublets in the up-field area (δ H 0.26, J = 4.0 Hz and 0.49, J = 4.0 Hz) and comparison of 13 C-NMR spectrum between compound 2 and 1a. In addition to the carbons which are involved in cycloartane skeleton, 13 C-NMR (BB and DEPT) showed three methyl groups, a methylene group and three methine group that two of them are olefinic (δ C 124. Broadband decoupled 13 C-NMR and DEPT spectra, showed the presence of an aliphatic backbone which has a double bond consisting of a methine group and a quaternary carbon (δ C 122.6 and 143.6 respectively). Furthermore, a carbonyl group (δ C 183.2) was identified that along with the loss of m/z = 45 at EI-MS indicated the presence of a carboxylic acid functional group. The presence of a carboxylic group and a double bond, taking together with seven degrees of unsaturation, supported pentacyclic structure of compound 4. Olefinic proton was clear with a triplet (δ H 5.28, J = 3.4 Hz) and seven angular methyl singlets at δ H 0.75, 0.77, 0.90, 0.91, 0.93, 0.99 and 1.13 were also detected. In addition, two signals at δ H 3.22 (1H, dd, J = 11.0, 4.6 Hz) and 2.82 (1H, dd, J = 13.8, 4.2 Hz) represented an oxygenated and allylic protons, respectively. These data suggested oleanene skeleton in agreement with EI-MS m/z 248 and 207, which are diagnostically important peaks of an olean-12-ene structure that has a carboxylic functional group at C-17, as a result of retro Diels-Alder cleavage of ring C (11). According to the previously reports for oleanolic acid (12, 13), the hydroxyl group on position 3 has a beta configuration and therefore the carbinolic hydrogen (H-3) has an alpha configuration. The 1 H-NMR signal for this hydrogen has appeared as a doublet of doublets (1H, J = 11.0, 4.6 Hz) at δ H 3.22. Eventually, the foregoing characteristics as well as published papers (12, 13), determined that compound 4 is oleanolic acid.
Anti HIV activity
Anti-human immunodeficiency virus-1 (HIV-1) activity of compounds 1-4 and previously (4) reported ingenoid diterpenes from E. erythradenia was evaluated ( Table 3). The four ingenoid diterpenes had been identified previously and their pro-apoptotic effect has been established (14). Based on literatures reports, it is known that oleanolic acid (15, 16) and cycloartanes (17) partially and diterpenes especially are cytotoxic which our results confirm that. But compound 7 showed significant anti HIV activity (IC 50 = 0.008 μM, CC 50 = 3.264 μM, TI = 380.64) and higher therapeutic index than nevirapine, a non-nucleoside reverse transcriptase inhibitor (NNRTI) used to treat HIV-1 infection and AIDS. Ligand protein binding studies between compound 7 and HIV-1 protease (pdb 1AJX) demonstrated three hydrogen bond between Asp 25A, Asp 28A and Gly 49A of protease active site and compound 7 with a distance less than 3 A° that can be responsible for the observed anti HIV-1 activity (Figure 3). | 2,928.8 | 2016-03-01T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Effects of transparent online open procurement on prices, volumes, and costs of medicines: an interrupted time series study in Ningxia, China
Objectives To assess the effects of the transparent online open procurement arrangement on the prices, volumes, and costs of medicines in Ningxia, China. Methods Data were extracted from the Ningxia pharmaceutical procurement platform, covering 16 months of purchase orders (December 2019 to March 2021) prior to the implementation of the transparent online open procurement policy and 20 months of purchase orders after the implementation of the policy (April 2021 to November 2022). Interrupted time series (ITS) analysis was performed to evaluate the effects of the transparent online open procurement policy on the prices, volumes, and total costs of the purchase orders. Results After implementation of the transparent online open procurement policy, the average price of purchased medicines showed a declining trend by 0.012 Yuan per month, while the total volume of purchase orders declined at a rate by 1.741 million per month measured by the smallest formulation units and the total costs of the purchase orders decreased at a rate by 5.525 million Yuan per month. Conclusion The transparent online open procurement policy resulted in reduced prices, lowered volumes, and lowered total costs of purchased orders of medicines.
Introduction
According to the most recent statistics provided by the global pharmaceutical information company IQVIA, global expenditure on medicatioxns -defined as the total amount allocated for the procurement of pharmaceuticals from manufacturers, exclusive of off-invoice discounts and rebates -is projected to reach $1.9 trillion by the year 2027 and this expenditure is anticipated to grow steadily at a rate ranging between 3% and 6% annually OPEN ACCESS EDITED BY Bernd Rosenkranz, Fundis African Academy of Medicines Development, South Africa (IQVIA, 2023).Notably, medications constitute a significant portion of healthcare expenditures in low-and middle-income countries, accounting for 20%-60% of their healthcare spending, in contrast to the 18% observed in countries affiliated with the Organization for Economic Co-operation and Development (OECD) (World Health Organization, 2015).It is crucial to acknowledge that in developing nations, up to 90% of the population rely on out-of-pocket payments for the purchase of medications, resulting in pharmaceuticals being the most substantial household expense, second only to food (World Health Organization, 2015).In China, out-of-pocket payments accounted for 27.0% of its total health expenditure in 2022 (National Health Commission of the People's Republic of China, 2023b).
China held the position of the world's second largest consumer in the pharmaceutical sector in 2018, with an impressive expenditure of $137 billion (IQVIA, 2023).The share of China's consumption of medicines in the world increased from 17.6% in 2015 to 20.3% in 2022 (Fei et al., 2023).Projections suggest that the pharmaceutical market in China is poised for substantial growth, with a volume increase of 8% over the upcoming 5 years (IQVIA, 2023).This surge in demand will be mirrored by a noteworthy 19% increase in pharmaceutical spending, ultimately reaching an estimated range of $140 to $170 billion by the year 2023 (IQVIA, 2023).Notably, from 2010 to 2020, a substantial portion of the total healthcare expenditure, ranging from 30% to 40%, were spent in pharmaceuticals (National Health Commission of the People's Republic of China, 2023a; Yan et al., 2022).This figure surpassed not only the corresponding percentages observed in the United States (11.0%),Japan (20.7%), and Korea (18.6%) but also exceeded the average of OECD countries (15.9%) (OECD, 2023).
Extensive research has been conducted to investigate the factors contributing to the escalating pharmaceutical expenditure (Main et al., 2022).Pharmaceutical pricing remains a pivotal and predominant factor that significantly contributes to the escalating pharmaceutical expenditure (de Wolf et al., 2005;Kadkhodamanesh et al., 2021;Mirzoev et al., 2021;Akbarpour et al., 2023), gaining increased prominence with each passing year (Ling-hua and Jieyuan, 2022).Governments worldwide have introduced a range of interventions and policy measures aimed at fostering the rationalisation of drug prices, thereby curtailing pharmaceutical expenses (Lee et al., 2015;Murthy and Okunade, 2016;Hassali and Wong, 2018).These multifaceted strategies encompass diverse pricing mechanisms, as well as practices such as pooled procurement, tendering, and negotiation.These efforts collectively seek to address the challenge of burgeoning pharmaceutical costs (World Health Organization, 2020;Mirzoev et al., 2021).At present, the evidence regarding the effectiveness of these policies in reducing medication prices and enhancing access to pharmaceuticals is notably mixed.These multifaceted considerations underscore the complexity of achieving a balanced and effective pharmaceutical pricing policy (World Health Organization, 2020;Kakkar, 2021).
The fundamental principle of advancing market transparency for pharmaceuticals lies in the public dissemination of information regarding the net prices of healthcare products (World Health Organization, 2019).This concept was initially championed by the 72nd World Health Assembly (WHA), which underscored the critical need for reliable data concerning medicine prices.In this context, the guidelines set forth by the World Health Organization (WHO) on country pharmaceutical pricing policies recommend that nations should actively "Share the net transaction prices of pharmaceutical products with relevant stakeholders, within and external to the country" (World Health Organization, 2020;Riccaboni et al., 2022).The significance of promoting price transparency has been underscored by various initiatives and regulations aimed at enhancing clarity in pricing practices.One notable example is the Medicines Transparency Alliance (MeTA), an initiative led by WHO.This initiative aspired to establish national-level, multi-stakeholder platforms for the sharing of comprehensive data related to the selection, procurement, quality, availability, pricing, promotion, and utilisation of medicines (Paschke et al., 2018).Another pertinent example is the European Union (EU) Transparency Directive, which mandates the public disclosure of list prices for all reimbursable medicines across Europe (European Union, 1988).
Facing the problem of high drug prices and opacity of the drug procurement process in hospitals, in 2000, the Chinese government issued the first policy document on public disclosure of drug price transparency.This retailing price ceilings of drugs set by the government has enabled further regional exploration of drug procurement procedures.In 2005, Sichuan, a western province of China, innovatively piloted centralised drug procurement via the internet and purchasing at the provincial level, with reference to the price ceilings (Jun-feng, 2006).In 2006, Guangdong implemented transparent online open procurement of medicines through pricelimit bidding (Jun-he, 2007).In 2009, to further decrease the inflated drug price, the zero-markup for drugs (ZMD) policy was introduced (Ni et al., 2021).In 2015, the milestone national policy document on transparent drug procurement was announced: All localities were encouraged to actively explore and pilot market-oriented drug pricing mechanism instead of strict and static drug retail price cap restriction by government alone.In 2017, the Chinese government issued several policy documents concerning reforming drug procurement and delivery.Standardizations of the Comprehensive National Information Management Platform as well as the Provincial Centralised Drug Procurement Platforms were put forward as core measures to monitor drug prices and supplies.Drug procurement related data, including drug prices were encouraged to be shared with the public.In 2019, the Chinese government issued the "Pilot Plan for National Centralised Drug Procurement".
The fundamental rationale for promoting price transparency is that it may improves economic efficiency; assists policymakers and researchers with reliable price information; empowers buyers to negotiate more strategically; increases the accountability of manufacturers and governments for prices; and facilitates costeffective decision-making by prescribers and patients (World Health Organization, 2018;Ahmad et al., 2020).Disclosure and control of drug prices in this study was one of the four aspects of transparency that can occur (World Health Organization, 2018).There was inconsistency in the impact of price transparency on drug prices in current literature.Some studies demonstrated that transparency have brought down the price of medicines, such as a pricing system in the private market conducted in South Africa (Moodley and Suleman, 2019a;Moodley and Suleman 2019b).However, some studies have concluded that transparency has no obvious expenditure saving effect.A previous study in the United Kingdom, for example, found that expenditure on inhaled glucocorticosteroids was unchanged after cost information was provided (Langley et al., 2018).Transparency has even been found to drive prices up, such as a South African study of transparent pricing systems (Bangalee and Suleman, 2016).A systematic review in 2023 concluded that the impact of drug price transparency was still of controversial although important, calling for stronger evidence on aspects as prices, quantites and expenditure (Joosse et al., 2023).
This study was conducted in Ningxia, an underdeveloped province located in northwestern China.In December 2020, Ningxia Public Resources Trading Administration, an agency for administration of governmental related products procurement including pharmaceuticals, announced the notice and rules for transparent online open procurement of medicines (Ningxia Public Resources Trading Administration, 2020a;Ningxia Public Resources Trading Administration, 2020b), with an intention to address the issues of supply shortage of essential medicines by engaging more qualified suppliers and hospitals in free bargaining.It was also expected to bring down transaction costs for both suppliers and hospitals.This study aimed to evaluate the effects of the transparent online open procurement arrangement on the prices, volumes, and total costs of the purchase orders in Ningxia, China.This study was expected to provide evidence of the effectiveness of drug price transparency.Although a previous study conducted in Shanghai, China explored these themes, the evidences were qualitative in nature (Cheng and Bo, 2021).This study used a institutional data with a quasi-experimental design.The findings of this current study were expected to not only address the gap in the literature, but also provide practical reference regarding drug price transparency practice for low and middle income countries similar to China.
Study setting
This study was conducted in Ningxia Hui Autonomous Region in northwest China, which covers an area of 66,400 square kilometres with 7.25 million residents (in 2021).It has the largest group of Hui ethnicity.Ningxia is deemed an underdeveloped province in China in terms of both GDP and per capita GDP.In 2021, Ningxia ranked bottom third among the 31 provinces/regions in mainland China in total GDP (452.23 billion Yuan, or $US 63 billion) and 20th in per capita GDP (62,549 Yuan, or $US 8701).Its per capita disposable income reached 38,291 Yuan ($US 5326) for urban and 15,337 Yuan ($US 2133) for rural residents, respectively.In 2021, Ningxia had 4,571 healthcare institutions (including both hospitals and primary care centres), 5.68 inpatient beds per 1,000 population (compared with 6.7 national average), and 8.36 skilled health workers per 1,000 population (compared with 7.97 national average) (Ningxia Hui Autonomous Region Statistics Bureau, 2023; National Health Commission of the People's Republic of China, 2023a).
Intervention measures
The transparent online open procurement arrangements in Ningxia involve several components.
First, the "Ningxia Pharmaceutical Procurement Platform" (the Platform hereafter) was established by integrating and upgrading the "Ningxia Drug Tendering and Purchasing Platform" and the "Ningxia Centralized Drug Purchasing Network".The two platforms used to realize the bidding function and the procurement function, respectively.In January 2021, the "Ningxia Pharmaceutical Procurement Platform" overseen by the Ningxia Public Resources Trading Administration was officially put into use and in April 2021, the first batch of drug purchases officially started.
Second, it mandates all qualified suppliers publicly disclosing necessary product information, including the generic names, specifications, dosage forms, and prices of all drugs, to be listed on the official public website on a timely manner.Hospitals are required to procure all of their medicines in line with the product information disclosed on the Platform.The Platform is deemed the only legal channel for public hospitals to make purchase orders.
Third, the Public Resources Trading Administration also coordinates reviews of the product prices negotiated conducted by a group of medical and pharmaceutical experts drawn from the expert database.Those experts will step in for further negotiations for overpriced products.Public hospitals negotiate with the listed suppliers under the supervision of the Public Resources Trading Administration.The price ceiling and average price relating to each of the procured medicines in Ningxia guided the negotiation.After that, medical institutions, on the basis of the listed price of drugs on the platform, combined with their own purchasing volume, bargain with manufacturers through the bargaining function of the Ningxia Pharmaceutical Procurement Platform.Hospitals were allowed to negotiate with listed enterprises on pharmaceutical prices.
Fourth, dynamic price monitoring and adjustment mechanisms were established.Inner reference pricing was used for price adjustment.For enterprises, they are required to actively report the new products prices to Ningxia Public Resources Trading Administration if the prices of the same product listed on other provincial procurement platform was lower.Ningxia Public Resources Trading Administration were then responsible for adjusting the listed price on the platform with reference to the new prices that the enterprises declared.For enterprises that failed to report such information or those with fraud practices, they may be disqualified from the platform.The thorough monitoring and reviewing of pharmaceutical prices are conducted every 6 months.
Study design
This study employed a pre-post design with segmented time series analysis.An interrupted time series is a strong quasiexperimental design in which data are collected at multiple time points before and after the intervention.The advantage of this design is that it can detect a possible underlying secular trend which occurs after the intervention (Koskinen et al., 2015).
The procurement information from December 2019 to November 2022 recorded in Ningxia Pharmaceutical Procurement Platform were extracted for analysis.We aggregated the original data into monthly indicators for the purposed of the study.The basic idea behind the A.M. index system analysis method (Addis A. & Magrini N.' method) of drug expenditure was that changes in price, volume, and structure were the three main drivers of changes in drug expenditure (Addis and Magrini, 2002).Based on this idea, we chose these variables to explore changes in expenditure, and we did not include structure changes in our study because there was no systematic difference in structure before and after the intervention (Ningxia Public Resources Trading Administration, 2017;Ningxia Public Resources Trading Administration, 2020a).Three indicators were synthesized for analysis: monthly procurement price, volume and cost of medicines.These variables were largely used in current drug cost related studies (Wouters et al., 2017;Dos Santos et al., 2022).Both monthly procurement price and volume were standardized by smallest pack unit (bottle, bag, box, etc.), a commonly used standardization for pharmaceutical price and volume (Wouters et al., 2017).The monthly procurement volume of pharmaceuticals was calculated as sums-up of procurement volumes of each specific product in each month.The monthly procurement price of pharmaceuticals was calculated by dividing total monthly pharmaceutical cost by total monthly procurement volume.
One wave of segmented analysis was conducted in this study.The purchasing information over a 36-month period (from December 2019 to November 2022) were collected.April 2021 (i.e., the 17th month) was chosen as the intervention point for the implementation of pharmaceutical transparent online open procurement policy.This analysis enable us to explore the impact of Ningxia pharmaceutical transparent online open procurement policy on pharmaceutical procurement.
Statistical analysis
The interrupted time series (ITS) was used to analyze the procurement price (CNY), total volume of purchase orders measured by the smallest formulation units (millions of units) and total costs of the purchase orders (CNY in millions), they acted as dependent variables in this study.The linear regression equation is constructed as follows: In this model, Y t is the outcome indicator in month t; T t is a continuous variable indicating the months passed at month t since the start of the observation period; I t represents the two periods before (value = 0) and after (value = 1) the intervention; T is a continuous variable indicating months passed since the intervention (time prior to the intervention is coded 0).β 0 is the initial level estimate, i.e. the study variable at t = 0; β 1 is the trend estimate of the change in time variable t in the pre-intervention study variable, i.e., the baseline slope estimate; β 2 was the level of change in the study variables before and after the intervention; β 3 is the trend change of the study variable after the intervention, that is, the difference between the post-intervention slope and the preintervention slope; β 1 +β 3 is the slope of the trend of study variables after intervention; ε t is random error.
The level of autocorrelation in the model was estimated using the Durbin-Watson test (Savin and White, 1977).If autocorrelation exists, the Prais-Winsten method was used to deal with autocorrelation (Turner et al., 2021).All of the analyses were performed using STATA17.0 software (Stata Corp LP, College Station, TX, United States).
Results
Overall, it can be seen that after the implementation of Ningxia pharmaceutical transparent online open procurement, the average price of purchased medicines has changed from an upward trend to a downward trend (Figure 1).The downward trend of total volume of purchase orders measured by the smallest formulation units and total costs of the purchase orders has accelerated significantly after the implementation of Ningxia pharmaceutical transparent online open procurement (Figures 2, 3).
ITS analysis shows that before the implementation of the pharmaceutical transparent online open procurement policy, the average price of purchased medicines is increasing at a trend of 0.010 yuan per month [p = 0.556, CI = (−0.025,0.046)].In the first month of policy intervention, the average price of purchased medicines increased by 0.147 yuan, and the change was not statistically significant [p = 0.418, CI = (−0.217,0.510)].Compared with the upward trend of 0.010 yuan before policy intervention, the upward trend of the average price of purchased medicines decreased by 0.022 yuan, and the change was not statistically significant [p = 0.366, CI = (−0.072,0.027)], that is, the average price of purchased medicines decreased at 0.012 yuan per month after policy intervention (β 1 +β 3 ) (Table 1).
The total volume of purchase orders was decreasing at a trend of 1.560 million smallest formulation units per month, and the change was not statistically significant [p = 0.352,1.802)].In the first month of policy intervention, total volume of purchase orders increased by 7.643 million smallest formulation units, with no statistically significant change [p = 0.595, CI = (−21.376, 36.662)].Compared with the downward trend of 1.560 million before the policy intervention, the upward trend of total volume of purchase orders increased by 0.181 million smallest formulation units, and the change was not statistically significant [p = 0.920, CI = (−3.830,3.468)], that is, the total volume of purchase orders after the policy intervention decreased at a trend of 1.741 million smallest formulation units per month (β 1 +β 3 ) (Table 1).
The total costs of the purchase orders was decreasing at a trend of 1.596 million yuan per month, and the change was not statistically significant [p = 0.756,8.763)].In the first month of policy intervention, the total costs of the purchase orders increased by 22.980 million yuan, and the change was not statistically significant [p = 0.556,101.651)].Compared with the downward trend of −1.596 million before the policy intervention, the upward trend of the total costs of the purchase orders decreased by 3.929 million yuan, and the change was not statistically significant [p = 0.474, CI = (−14.977,7.119)], that is, the total costs of the purchase orders after the policy intervention decreased at a monthly trend of 5.525 million yuan (β 1 +β 3 ) (Table 1).
Discussion
Our research found that the implementation of Ningxia pharmaceutical transparent online open procurement policy resulted in reduced prices, lowered volumes, and lowered total costs of purchased orders of medicines, and the implementation of this policy can be considered to be quite effective.The average price of purchased medicines changed from an upward trend to a downward trend after the policy intervention, and the downward trend of total volume of purchase orders and total costs of the purchase orders accelerated significantly.
The transparent online open procurement of drugs in Ningxia encompassed some major aspects: First, information on products prices were publicly disclosure under public scrutiny.On the one hand hospitals may benefit with the possibility of expenditure rationalization, and suppliers on the other hand with sales on a larger scale (Sigulem and Zucchi, 2009).Second, a unified platform integration of two separated systems enables unified and simplified government supervision, especially the transaction process by hospitals, manufacturers and deliver enterprises.Trends in total volume of purchase orders measured by the smallest formulation units.
Frontiers in Pharmacology frontiersin.org Comprehensive information may also promote rational procurement decisions by hospitals regarding drug alternatives, etc. Thirdly, public disclosures of prices benefits government regulators for a clearer picture on products prices all over China, leaving more information supporting bargaining with enterprises about their listed prices, largely simplified local price supervision process.Fourth, all qualified drug manufacturers were encouraged to participate in open bidding, competition among suppliers were intensified.Fifth, the openness and transparency of the whole transaction process may also effectively prevent corruptions.From this research, such comprehensive intervention had the potential to bring down drug prices.Cost-savings can also be achieved.The procurement volumes of drugs by hospitals were decreased.
The results of this study are due to a combination of factors, in which transparency may have played a key role in lowering drug prices, but there may have been other elements in the overall policy that played a role, such as upgrading the consolidation of drug Trends in total costs of the purchase orders.purchasing platforms and encouraging companies to bid for medicines, etc., and it is worthwhile to explore further the role played by the specific individual elements.Increasing transparency is one of the solutions proposed by both researcher and practitioners to reduce prices (Franzen et al., 2020).This is especially important for underdeveloped countries where drug corruptions were common.Peru, Zambia, and Jamaica, for example, have promoted the smooth procurement of medicines mainly by strengthening transparent legislation, open procurement process and related information transparency (World Health Organization.Regional Office for the Eastern Mediterranean, 2009).Simply public disclosure of pharmaceutical prices on the internet would make a great positive impact (Paschke et al., 2018).It was also believed that increased manufacturer transparency would improve the overall fairness and efficiency of price negotiation system (Moon, 2018;Colbert et al., 2020), leading to price reduction on both originator and generic medicines.Previous studies in China have shown that transparency of information on the procurement of essential medicines reduces the number of highpriced medicines procured, which is conducive to promoting the accessibility of essential medicines and the rational use of medicines (Xin et al., 2013).Information technology in public contracting helps creating a more competitive, transparent, and accountable procurement system.Ukraine, for example, introduced a centralized e-procurement system that is known as ProZorro (Law of Ukraine, 2015).Public dissemination about the procurement process combined with an electronic bidding system is considered an efficient strategy for price competition and also an anti-corruption tool (Cohen and Montoya, 2001).Example of how transparency improved procurement processes is also found in Chile.It's pharmaceutical procurement system CENABAST had historically been inefficient and non-transparent.Empirical analysis showed that savings from use of the reformed system were estimated to be between 5% and 7%, with demonstrated price reductions (Cohen and Montoya, 2001;Singer et al., 2009).In 2013 their efforts reportedly increased transparency in 180 hospitals (Navarrete, 2014).Merely publishing purchase prices may be insufficient to reduce prices.Full bidding and fulfillment via electronic means are key enabling factors for medication price reduction.This has been explored by several Brazilian studies.A 2009 study of an e-procurement system introduced to nine Brazilian hospitals revealed a decrease in the price over ten percent (Sigulem and Zucchi, 2009).While such decrease can't be attributable to transparency alone, volume purchasing arrangement may also play a crucial part (Sigulem and Zucchi, 2009).Price reductions may be attributable to greater purchasing leverage, in addition to improved transparency.Later on, another study in 2015 verified such hypothesis that an Internet-based strategy to improve pricing transparency did not lead to statistically significant reductions in actual purchase price, suggesting that merely publishing purchase prices for medications may be insufficient to reduce prices (Kohler et al., 2015).
In addition to pharmaceutical transparency, many benefits have been found in other areas of transparent online, some studies have emphasized the economic and social significance of adopting online pharmacies (Al Halbusi, 2024), finding that social media use and technological turbulence in SMEs can have a positive impact on business performance (Al Halbusi, 2022a;Hassani et al., 2022), as well as on e-entrepreneurial intentions (Al Halbusi, 2022a;Al Halbusi, 2022b).In addition, the adoption of knowledge management systems and artificial intelligence can improve business sustainability (Al Halbusi, 2023).
The results of the study can be better understood when considering the unique context of Ningxia.Ningxia, was an autonomous region in China, with a distinctive economic and medical background.Additionally, the technological upgrades of the Ningxia pharmaceutical procurement platform facilitated government supervision and drug transactions, which also played a crucial role in observing the results.As stated in the study setting in the Methods section, Ningxia as an economically and medically underdeveloped province needs efficient spending, which aligns with the goals of the transparent online open procurement policy.The observed reduction in medicine prices and costs is reflective of the region's need to optimize limited resources while maintaining access to essential medicines.
This study suffered from major limitations.First, we only had access to institutional data of Ningxia, the generality of the results may be limited since nationwide data would be more applicable.The data we got was the average price of purchased medicines, the total volume of purchase orders, and the total costs of the purchase orders for each month of transparent online open procurement, and we got synthetic data rather than original data.Second, when dealing with drug prices and volumes, Defined Daily Dose (DDD) standardization would have be more concise, but since our access to the detailed data was restricted, the smallest package unit were used as an alternative.In addition, this study lacked control variables, data to adjust for drug type were not available, and administrative costs associated with the procurement process were not considered.Ningxia, as a less developed provincial level autonomous region, faced with similar problems as those undeveloped countries, including insufficient medical resources and government revenue.The demand for pharmaceuticals was also low with limited population.Thus, even with these flaws, this study makes significant contribution to the international readers especially for developing countries facing with souring pharmaceutical expenditure issues and medical corruption issues.This study evaluated the policy effects, provided new evidence that expands public understanding of the impact of transparent online open procurement, may provide valuable lessons for health policy decision makers elsewhere.
Conclusion
Ningxia pharmaceutical transparent online open procurement policy was characterized by a high transparency on pharmaceutical prices.The whole transaction process between hospitals, manufacturers and delivery enterprises were under strict government scrutiny.Public disclosure of pharmaceutical prices enabled an inner reference price adjustment.From our empirical assessment, this comprehensive procurement strategy resulted in reduced prices, lowered volumes, and lowered total costs of purchased orders of medicines.While the effect of such comprehensive reform on patient outcome or drug access needs further investigation.
FIGURE 1
FIGURE 1Trend of average price of purchased medicines.
TABLE 1
Detailed parameters of ITS analysis. | 6,154.4 | 2024-08-20T00:00:00.000 | [
"Economics",
"Medicine"
] |
Fruquintinib inhibits VEGF/VEGFR2 axis of choroidal endothelial cells and M1-type macrophages to protect against mouse laser-induced choroidal neovascularization
Wet age-related macular degeneration, which is characterized by choroidal neovascularization (CNV) and induces obvious vision loss. Vascular endothelial growth factor (VEGF) family member VEGF-A (also named as VEGF) and its receptor VEGFR2 contribute to the pathogenesis of CNV. Choroidal endothelial cells (CECs) secret C–C motif chemokine ligand 2 (CCL2), which attracts macrophages to CNV lesion and promotes macrophage M1 polarization. Accordingly, infiltrating macrophages secret inflammatory cytokines to promote CNV. In vivo, intravitreal injection of fruquintinib (HMPL-013), an antitumor neovascularization drug, alleviated mouse CNV formation without obvious ocular toxicity. Meanwhile, HMPL-013 inhibited VEGF/VEGFR2 binding in CECs and macrophages, as well as macrophage M1 polarization. In vitro, noncontact coculture of human choroidal vascular endothelial cells (HCVECs) and macrophages under hypoxia conditions was established. HMPL-013 downregulated VEGF/VEGFR2/phosphoinositide-3-kinase/protein kinase B (AKT)/nuclear factor kappa B pathway and CCL2 secretion in HCVECs, as well as VEGF/VEGFR2-induced macrophage M1 polarization under hypoxia condition. In addition, HMPL-013 inhibited HCEVC derived CCL2-induced macrophage migration and M1 polarization, along with macrophage M1 polarization-induced HCVECs proliferation, migration, and tube formation. Altogether, HMPL-013 alleviated CNV formation might via breaking detrimental cross talk between CECs and macrophages.
Introduction
Age-related macular degeneration (AMD), a leading cause of incurable vision loss in the elder people and accounting for 8.7% of all cases of blindness in the developed nations 1 , is categorized into dry and wet types. Dry AMD is featured by multiple drusen deposits and rarely impacts vision, developing not only to geographic atrophy but also to wet AMD, which is characterized by choroidal neovascularization (CNV) and induces obvious vision loss. Vascular endothelial growth factor (VEGF) family members, containing VEGF-A (also named as VEGF), VEGF-B, VEGF, VEGF-D, VEGF-E, and placental growth factor (PGF), promote CNV via binding to their respective receptors vascular endothelial growth factor receptor 1 (VEGFR1), VEGFR2, and VEGFR3. Intravitreal injection of anti-VEGF reagents is deemed to be the optimal treatment for CNV. However, any improvement is accompanied by long-term monthly intravitreal injections and ocular complications, such as endoophthalmitis 2 .
Therefore, the searching of cellular and molecular mechanisms of CNV is warranted. Clinical studies have shown that age-related changes in Bruch's membrane lead to choriocapillaris atrophy, as well as to decreased diffusion of oxygen toward the neuroretina. The resulting outer retina hypoxia may be an important driving force of CNV formation, by stimulating VEGF overexpression via the transcription factor hypoxia-inducible factor 1α (HIF-1α) in the retinal pigment epithelium (RPE) cells, and retinal Muller cells 3 . VEGF binds to its receptor VEGFR2, consequently phosphorylates Tyr1175 inside VEGFR2, finally promotes laser-induced CNV formation in mice 4,5 . Anti-VEGF-A/VEGFR2 or nonspecific small interfering RNA inhibits CNV and attenuates VEGF mRNA expression in a mouse laser-induced CNV model 6 . In addition, resveratrol inhibits HIF-1α accumulation and VEGF secretion induced by cobalt chloride through sirtuin 1 in human RPE cells 7 . In addition, in AMD-relevant models, VEGF/ VEGFR2 blockade does not cause retinal atrophy, which is a side effect caused by intraocular injections of VEGFneutralizing proteins 8 . These studies attract us to investigate the suppression of VEGF/VEGRR2 axis in CNV.
Accumulating study reveals infiltrating macrophages contribute to the progress of CNV. In different environments, macrophages polarize into M1 pro-inflammatory and M2 anti-inflammatory types. M1-type macrophages with pro-inflammatory functions can produce VEGF and promote neovascularization 9 , while it has higher transcript ratio of M1 chemokine C-X-C motif chemokine ligand 11 (CXCL11) to M2 chemokine C-X-C motif chemokine ligand 22 (CXCL22) in advanced AMD maculae compared to the control 10 . Besides, M1-type macrophages secret pro-inflammatory cytokines, such as interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and C-C motif chemokine ligand 5 (RANTES) to promote ocular neovascularization 11 . C-C motif chemokine ligand 2 (CCL2) produced and released by choroidal endothelial cells (CECs) draws macrophages with CCL2 receptor CCR2 on the surfaces of macrophages to the site of CNV injury 12 . In addition, CCL2 facilitates macrophage M1 polarization 13 .
Fruquintinib (HMPL-013) is an antitumor neovascularization drug, which belongs to tyrosine kinase inhibitors, acting as a powerful and highly selective inhibitor for all types of VEGFRs, including VEGFR1, VEGFR2, and VEGFR3. The researchers have completed phase III clinical trials of colorectal cancer (CRC), and the results show that in patients with metastatic CRC who have received at least two chemotherapy regimens, oral HMPL-013 significantly improves the overall survival rate of the patients compared to the placebo group 14 . Following China's priority review of HMPL-013 in September 2017, on September 4, 2018, the National Medical Products Administration granted HMPL-013 for the first global approval for the treatment of progressive CRC 15 . Therefore, the question whether HMPL-013 can alleviate CNV attracts our attention.
Herein, mouse laser-induced CNV and in vitro endothelial cell and macrophage hypoxia models were applied to identify the functions and mechanisms of HMPL-013 on CNV. Our study could supply a potential therapeutic strategy for the treatment of wet AMD.
Mouse laser-induced CNV model and treatment
Nine to 10-week-old male C57BL/6 mice were purchased from the Laboratory Animal Center of Nantong University (Nantong, China). The mouse laser-induced CNV model was constructed, as previous description 16 . At the time of laser photocoagulation, the production of a bubble was regarded as a rupture of the Bruch's membrane, indicating that the model was successfully established. Photocoagulation spots containing hemorrhage or failing to develop a bubble at the laser site were excluded. The mice were randomly assigned into five groups: normal, CNV 7 d, CNV 7 d + 1 μl of 0.1% dimethyl sulfoxide (DMSO), CNV 7 d + 1 μl of HMPL-013 (Elunate ® ; Chi-Med, China; 5 μg/μl in 0.1% DMSO), and CNV 7 d + 1 μl ranibizumab (RBZ; Lucentis; Genentech Inc.; 10 μg/μl; used as the positive control), which is a recombinant humanized monoclonal antibody fragment binding VEGF-A. In the animal experiments, the investigator responsible for all other experiments except for CNV model construction were blind to the group allocation.
Fundus angiography
Fundus fluorescein angiography (FFA) and indocyanine green angiography (ICGA) in mice were done following previous study 18 . For the grading of CNV leakage, two masked researchers not involved in laser photocoagulation or angiography evaluated the fluorescein angiograms at a single sitting. Grade 0 lesions had no hyperfluorescence. Grade 1 lesions exhibited hyperfluorescence without leakage. Grade 2A lesions exhibited hyperfluorescence in the early or midtransit images and late leakage. Grade 2B lesions showed bright hyperfluorescence in the transit images and late leakage beyond treated areas (grade 2B lesions were defined as clinically significant), as previous description 19 . For each ICGA examination, the entire lesion area was quantitatively measured using the Heidelberg software (Spectralis Acquisition and Viewing Modules; version 3.2, Heidelberg Engineering) by three independent observers. Mean observed values were calculated.
Choroidal flat mounts and immunofluorescence
Choroidal flat mounts and immunofluorescence were performed according to previous methods 20
Hematoxylin-eosin stain
On day 7, following euthanasia, the mouse eyes were enucleated and immersion-fixed 10 in 4% PFA for 2 h. After fixation, the eyes were embedded in Tissue-Tek ® optimum cutting temperature compound (#4583, Sakura Finetek, Japan), and cross-sectioned on a cryostat vertically through the center of the cornea and optic nerve. Slide of 5 µm thickness was stained with hematoxylin-eosin (HE).
Terminal deoxynucleotidyl transferase dUTP nick-end labeling
After fixation and permeabilization, the mouse cryosections were incubated with a terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) reaction mixture (#11684795910, Roche, Switzerland) at 37°C for 60 min, and DAPI for 5 min at room temperature (RT), and then washed for 30 min in PBS.
Electroretinography
The mice were dark-adapted over 16 h. Then the mice were anesthetized and their pupils were dilated. Contact lens electrodes were placed on both eyes with a drop of methylcellulose. Full-field electroretinographies (ERGs) were recorded by using the universal testing and electrophysiologic system 2000 (UTAS E-2000, LKC Technologies). The responses were recorded at a gain of 2 k using a notch filter at 60 Hz, and were bandpass filtered between 0.1 and 1500 Hz. In the light-adapted photopic state, with a −1.02 log cds/m 2 background light (flash intensity) to desensitize the rods and isolate cones, photopic cone responses were recorded in response to a single flash of 0 dB. The amplitude of the a-wave was measured from the baseline to the lowest negative-going voltage, whereas peak b-wave amplitudes were measured from the trough of the a-wave to the highest peak of the positive b-wave.
Co-immunoprecipitation
Proteins of mouse retina-RPE-choroid complex tissues from normal, CNV 7 d, CNV 7 d + DMSO, and CNV 7 d + HMPL-013 groups were obtained by the protein extraction kit (Merck) and precleared. The beads coated with p-VEGFR2 or VEGF antibody were incubated with the precleared whole proteins 4°C for overnight. The beads were washed with cell lysis buffer four times. Finally, the beads were boiled for 10 min. The eluents were analyzed by Western blot with VEGF and p-VEGFR2 antibodies.
Cell noncontact coculture and treatment
Human choroidal vascular endothelial cells (HCVECS; #36052-03, Celprogen, USA) were cultured in Dulbecco's Modified Eagle Medium (DMEM) containing 4.5 g/l glucose supplemented with 10% (v/v) FBS, 100 U/ml penicillin, and 100 mg/ml streptomycin. The human macrophages were derived from human peripheral blood mononuclear cells and cultured, as the previous description 22 . The cells were kept at 37°C in a humidified atmosphere containing 5% CO 2 . HCVECs were cultured in the lower well and macrophages were cultured in the upper well of the transwell plate (#CLS3397, Corning, USA) and verified by STR profiling.
Transwell assay
After the coculture with HCVECs or macrophages for 24 h, the macrophages or HCVECs were collected and seeded into the upper chamber (8 µm) at a density of 1 × 10 5 cells/well (Corning) with non-serum DMEM medium. The lower chamber was filled with 500 µl DMEM supplemented with 10% FBS. Twelve hours later, the human macrophages or HCVECs on the upper surface of the membrane were removed with a cotton swab. Then, the lower cells were fixed with formaldehyde and stained by crystal violet for 30 min. The number of migrated cells was counted under a microscope.
5-Ethynyl-2-deoxyuridine assay
The 5-ethynyl-2-deoxyuridine (EdU) incorporation assay was conducted with an EdU kit (#C10310-1, Ribo-Bio, China) according to the manufacturers' instructions. The cell nuclei were counterstained with DAPI. The EdUpositive ratio was calculated as the cells. The number of cells was counted using Image-Pro Plus software (Media Cybernetics, USA).
Tube formation assay
HCVECs tube formation was analyzed using extracellular matrigel vessel-like formation assay. Firstly, precooled growth factor reduced matrigel (#354230, Corning) was coated in the bottom of 48-well plate, 300 µl/well and incubated in a humidified atmosphere of 5% CO 2 at 37°C for 1 h. After coculture with macrophages for 24 h, HCVECs were harvest and 1 × 10 5 cells/ well was seeded to the coated plate. Then, 20 ng/ml human VEGF-A recombinant protein (#ab55566, Abcam) was added to each well after cells were seeded in triplicates. After culture for 24 h, tube formation was visualized by Olympus microscope (Japan), and total tube length were analyzed using Image-Pro Plus software.
Statistical analysis
The data were shown as mean ± SEM. Statistical analysis was performed by the one-way ANOVA followed by Tukey's test. P < 0.05 was considered statistically significant. Analysis was done by statistical software SPSS 15.0.
HMPL-013 mitigates mouse laser-induced CNV formation
To explore the effects of HMPL-013 on mouse CNV formation, HMPL-013 intravitreal injection was done at day 3 after laser coagulation, and analysis was done at day 7 (Fig. 1a). The concentration-time profiles of HMPL-013 in the mouse retina-RPE-choroid complex tissues after [ 14 C] HMPL-013 intravitreal injection showed that HMPL-013 reached its peak concentration at 4 h (13649.21 ng/ml ± 1024.80 ng/ml) and the high concentration sustained until 20 h (Fig. 1b). CNV leakage was alleviated in the HMPL-013 and RBZ groups compared to the CNV 7 d group (Fig. 1c). The leakage score analysis further showed that the percentage of 0 and 1 scores increased, while the percentages of 2a and 2b scores decreased, in the HMPL-013 and RBZ groups compared to the CNV 7 d group (Fig. 1d). CNV area also decreased in HMPL-013 and RBZ groups (Fig. 1e). Accordingly, the mean intensity values showed that CNV leakage decreased in the HMPL-013 and RBZ groups (Fig. 1f). In addition, staining of IB4 (a vascular endothelial cell marker) and phalloidin was performed on choroidal flat mounts, indicating that HMPL-013 alleviated CNV formation (Fig. 1g, h). The results suggested that HMPL-013 intravitreal injection reduced CNV leakage and area.
HMPL-013 causes no obvious intraocular toxicity
Next, on the basis of the efficacy of HMPL-013 on the mouse CNV, we wondered whether HMPL-013 could exert ocular side effects. HE stain on retinal cryosections (Fig. 2a) and quantification of retinal thickness (Fig. 2b) revealed no differences in histologic morphology or retinal thickness between the normal and HMPL-013-injected eyes. In addition, ocular cell apoptosis was unchanged in the HMPL-013 group compared to the normal and CNV 7 d groups (Fig. 2c, d). In CNV 7 d group, a-wave and b-wave amplitudes decreased compared to normal group, while HMPL-013 elevated a-wave and b-wave amplitudes ( Fig. 2e-g), indicating that HMPL-013 improved scotopic response in mice with CNV. These data suggested that HMPL-013 intravitreal injection caused no obvious intraocular toxicity.
HMPL-013 inhibits macrophage M1 polarization-induced HCVECs proliferation, migration, and tube formation
Finally, we investigated the effects of pro-inflammatory cytokines derived from macrophages on the proangiogenic behaviors of HCVECs. HCVECs proliferation was induced by hypoxia, reduced by macrophage polarization modulator geraniin or HMPL-013. The downregulatory role of HMPL-013 on the proliferation of HCVECs was weakened by macrophage M1-type polarization agonist LPS (Fig. 8a, b). Similarly, HMPL-013 played the inhibitory role on hypoxia-induced HCVECs migration (Fig. 8c, d) and tube formation (Fig. 8e, f). The results suggested that HMPL-013 inhibited macrophage M1 polarization, consequently suppressed HCVECs proliferation, migration, and tube formation. To sum up, HMPL-013 mitigated mouse CNV formation via inhibiting VEGF/VEGFR2 binding in CECs and macrophages, consequently blocking the detrimental cross talk between these two kinds of cells (Fig. 8g).
type VEGF family members to their receptors and downstream signaling pathways. We also found HMPL-013 at the dose of 5 μg alleviated mouse CNV leakage and area, even better than RBZ at the dose of 10 μg. To date, HMPL-013 has been progressed in clinical trial for advanced non-small cell lung cancer 28 and advanced gastric cancer 29 . Whether HMPL-013 can be used for the treatment of CNV needs further investigation.
VEGF and VEGFR2 expression, and their binding in CECs and macrophages.
Previous study reveals the M1-associated cytokines increased to a greater extent in the RPE/choroid complexes, whereas the M2-associated cytokines were highly expressed in the retinas 33 . While human advanced AMD macula has a higher M1 to M2 chemokine transcript ratio compared to normal autopsied eyes 10 . Study using transgenic mice advance inflammatory M1 phenotype monocyte (CCR2 + ) infiltrate as the drive for experimental CNV 34 . However, another study illuminates that VEGF + Arg1 + macrophages drive the onset of CNV in mice 35,36 . These contradictory results about the roles of M1 and M2 macrophages in the pathogenesis of CNV potentially due partially to the complex and kinetic microenvironment that governs macrophage polarization and function 37,38 . Hereon, we found HMPL-013-mitigated CNV formation via inhibiting macrophage M1 polarization.
The mRNA expressions of M1-related markers are dramatically upregulated in the early stage, while the M2related markers are slightly upregulated in the middle stage and sustained until the late stage in the aqueous humors of wet AMD patients 39 . In the study, the mouse choroidal flat mounts on day 7 following laser treatment were used for the detection of macrophage polarization, showing that both of M1-and M2-type markers increased. The difference might attribute to the different species and tissues. VEGFR1 knockdown inhibits MCP-1 (CCL2) expression of clear cell renal cell carcinoma cells 40 . It has been reported that MCP-1 (CCL2) via nuclear factor kappa β in bovine retinal endothelial cells 41 . HMPL-013 is a potent inhibitor for all kinds of VEGFRs. Therefore, we speculate that HMPL-013 inhibits the binding of VEGFs to VEGFRs in HCECs to downregulate the expression of CCL2. Thereafter, downregulated CCL2 restrain the infiltration of macrophages, resulting in the decreased expression of CCR2.
In summary, HMPL-013 ameliorated mouse CNV formation via inhibiting VEGF/VEGFR2 binding in CECs and macrophages, thereby blocking the detrimental cross talk between these two kinds of cells. However, the study still left certain questions needed for further exploration, such as the inhibitory roles of HMPL-013 on other kinds of VEGF family members, and dynamic observation for HMPL-013 treatment on the progress of mouse CNV. | 3,900.4 | 2020-11-01T00:00:00.000 | [
"Biology"
] |
Active Volcanism Revealed from a Seismicity Conduit in the Long-resting Tatun Volcano Group of Northern Taiwan
Abundant earthquakes clustered within a particular zone often reflect an active geological feature, such as clustering seismicity along a fault zone and a huge number of volcanic-earthquakes around the erupting conduit. Herein we perform a double-difference tomographic inversion and relocate the seismicity at the long-resting Tatun volcano group (TVG) in northern Taiwan. A dramatic improvement of the earthquake location model surprisingly show that, from 2014 to 2017, two clustered seismic zones are identified in the TVG. One major group of events (>1000) persistently clustered within a ~500 m diameter vertical conduit with a ~2 km height. The clustering seismicity conduit is just located nearby Dayoukeng, one of the strongest fumaroles in the TVG, and is connected to a fracture zone characterized by low Vp/Vs in the shallow crust. The other group of events is clustered within a sphere-like zone beneath Mt. Chihsin around the depths between 0.5 km and 2 km. Both seismic zones are probably triggered by the significantly volcanic gases and fluids ascending from the deep magma reservoir. Combined with a variety of results from literature, the seismicity conduit near the strong fumarole is the evidence for an active volcano and also identifies a likely pathway for ascending magma if the TVG erupts again in the future. But possibility of developing different magma pathways at other clustered seismic zones such as beneath Mt. Chihsin may not be totally excluded.
also social and economic impacts in Taiwan. Therefore, detailed volcanic studies and monitoring works are two important tasks if the TVG is an active volcano group.
A lot of volcanic features such as fumaroles and hot-springs are still significant at the TVG, but it was considered to be dormant or even extinct for a long time. Early stage geological studies showed that the volcanic eruption primarily started around 0.8 Ma and eventually stopped around 0.1 Ma 3-5 . However, a variety of recent observations and analyses suggest that the TVG may be still active. First, Helium isotopes indicate that some mantle signatures have been detected from the volcanic gas and fluid at many fumarole sites 6 . Second, shallow pressure sources, including both inflation and deflation, have been detected from high-accurate leveling surveys across the TVG 7 . Third, clustered micro-earthquakes with some typical volcano-earthquakes, such as tornillos, mono-chromatic tremors, long-period events, and heartbeat earthquakes, have been observed in the shallow crust beneath the TVG [8][9][10][11][12] . Furthermore, a few small-scale phreatic eruptions have been detected by infrasonic sensors and seismic arrays 13,14 . Fourth, the dating results from volcanic ashes and Fe-rocks imply that the last eruption took place within several thousand years 15,16 . Finally, a deep magma reservoir has been detected through the seismic detection of both S-wave shadows and P-wave delays 2 .
All of the late observations and analyses mentioned above suggest that a future volcanic eruption in the TVG cannot be totally excluded; however, the detailed subsurface structures, such as shallow hydrothermal systems, volcanic conduits, and deep magma reservoirs, necessary to delineate the geometry and connections between volcanic plumbing systems are still unknown. Herein, we produce seismic tomographic images based on the P-and S-wave arrival time data recorded at 40 broadband seismic stations in the TVG area between 2014 and 2017. The 3-D velocity structures of P-waves (Vp), S-waves (Vs) and the ratios between both (Vp/Vs) at depths shallower than 5 km can successfully be inverted by the double-difference tomography 17 . These tomographic images allow us to improve the understanding of the volcanic plumbing system in the shallow crust beneath the TVG. Also the relocated seismicity interestingly shows a vertical conduit within a ~500 m diameter area near the Dayoukeng fumarole, one of the strongest degassing processes in the TVG, at a depth between ~0.2 to 2 km below the surface. Such a seismicity conduit is the major pathway for the transport of volcanic gases and fluids from deep reservoirs to the surface; it may be one of the most likely pathways for ascending magma for TVG eruptions in the future. But possibility of developing different magma pathways at other clustered seismicity zones such as beneath Mt. Chihsin might not be totally excluded. These results significantly help in the understanding of the TVG plumbing system and provide some critical information to mitigate future volcanic hazards in the Taipei metropolis.
Results (High-Resolution Seismic tomography)
We perform the double-difference seismic tomographic inversion (tomoDD) of the P-and S-wave arrivals to more than 2,000 micro-earthquakes recorded by 40 broadband seismic stations in the TVG in northern Taiwan between 2014 and 2017. The seismic array was installed and gradually upgraded by the Taiwan Volcano Observatory at Tatun (TVO), and the P-and S-wave arrivals have been routinely picked for locating earthquakes by using the traditional algorithm (Hypo71) based on a simplified layer velocity model since 2003 8,9 . The preliminary results of seismicity show that two groups of earthquakes are roughly clustered at the shallow crust just beneath Mt. Chihsin and the Dayoukeng fumarole 9 . Mt. Chihsin is the highest peak and probably the youngest volcano in the TVG 15 and the Dayoukeng fumarole is one of the strongest degassing processes in the TVG [ 12 ; supplemental video].
The inverted P-wave and S-wave velocity models (Vp and Vs) at 6 different depths from 0 km (sea level) to 5.0 km are shown in Figs. 2 and S1, respectively. As expected, a strong heterogeneity of both Vp and Vs are observed at the shallow crust due to the variations of rock types (lithology), of fractures, and in temperature, and probably also the volcanic fluids or gases involved. In general, low Vp zones, shown in red, are broadly detected at the shallow depths (<2.0 km), while high Vp areas, shown in blue, are largely at deep depths (>3.0 km). For areas with Vp less than 3.0 km/s, they are likely reflected on fractural rocks filled with hydrothermal fluids or volcanic gases because a lot of hot-springs and fumaroles are observed in the Tatun volcanic area 18,19 . Based on the geology survey 20,21 , the andesitic rocks largely observed on the surface is overlay on the sedimentary rocks in the Tatun volcanic area. Thus, areas with Vp greater than 5.0 km/s are associated with andesitic rocks that intruded into sedimentary rocks at the previous volcanism. The general features from the tomographic inversion are reliable because major portions of the study area can be successfully inverted by checkerboard tests due to the high density of ray-paths from the abundant micro-earthquakes at the shallow depths (Figs. 3 and S2). However, both the structural heterogeneity and the inversion reliability gradually decrease with depth because of the limited ray-paths in the deeper layers.
To discuss the detailed geological features beneath the TVG, a representative velocity profile (A-A') across the Dayoukeng area is plotted in Fig. 4. Again, major portions of the velocity structures are successfully inverted according to the checkerboard test (Fig. 4d,e). The assumed checkerboard model with velocity variations of 10% in each layer ( Fig. 4f) are basically obtained in most grids for both the Vp and Vs models after the tomographic inversion. These checkerboard tests suggest that major portions of the inverted velocity structures are reliable for the delineation of the subsurface structures along the A-A' profile due to dense seismic ray-paths. Similar results across major portions of the study area are shown along Profile B-B' (Fig. 5), which is almost perpendicular to Profile A-A' (Fig. 1).
The seismic tomographic images across the A-A' profile show two prominent zones (L1 and L2) where low Vp and Vp/Vs ratios are identified at different depths (Fig. 4a,c). The shallow one (L1) is a broad area from the surface down to a depth of ~2.5 km, and the deep one (L2) is a narrow zone dipping to the northeast just beneath the Dayoukeng area. The areas with both low Vp and low Vp/Vs ratio are probably associated with supercritical fluids 21,22 , because Vp decreases as pore pressure in the saturated rocks increases 23,24 and Vp/Vs decreases if a gas phase is added to the saturated fluid 25,26 .
Furthermore, a broad area just below the vertical seismicity conduit at depths below ~ 2.5 km shows high Vp and Vs (H1 in Fig. 4a,b), which are interpreted as andesitic rocks that intruded into sedimentary rocks at the last volcanism because the andesite and sedimentary are two of major rocks near the surface 20,21 . A lot of hot springs and fumaroles in and around the Dayoukeng area imply the hydrothermal activity is strong at the shallow depths 18,19 . In other words, high fluids and fracturing might be expected in the Dayoukeng area. The presence of both high fluids and fracturing will reduce Vs more than Vp, resulting in a high Vp/Vs ratio. Such high Vp/ Vs ratios with high fluids and fracturing phenomena are likely caused by the upwelling of magma fluids and the heat from the deep magma reservoir in the lower crust 2 . Another small area (H2) with high Vp, Vs, and Vp/Vs is shown at extremely shallow depths (0-800 m) beneath the Dayoukeng area, which can also interpreted as high fluids and fracture features within andesitic rocks.
The relocated seismicity by tomoDD around the Dayoukeng fumarole is surprisingly clustered at an extremely narrow vertical conduit, which is limited to a small area of approximately 0.5 km in diameter from a few of hundred meters below the surface down to ~2 km in depth (Fig. 1). In contrast, there are only very limited earthquakes (less than 10%) away from this seismicity conduit in and around the Dayoukeng area. Similarly, the seismicity beneath Mt. Chihsin is largely clustered within a sphere-like zone around the depths between 0.5 and 2.0 km (Fig. S3). In general, it is very surprising to see that the locations of many micro-earthquakes have been significantly improved through a careful relocation process based on the inverted 3-D velocity model (i.e., Fig. S4). The relocated seismicity beneath the Dayoukeng area can be generally divided into three major groups along the A-A' profile ( Fig. 6). For the seismicity northeast of the Dayoukeng area, the relocated earthquakes are generally shifting southwest and increase to a depth of about 1-2 km. On the other hand, the relocated earthquakes southwest of the Mt. Chihsin area are shifting in the northeast direction. It is more interesting to see that the relocated earthquakes between the Mt. Chihsin and Dayoukeng areas are clearly clustered into the narrow vertical conduit. In particular, some of the focal depths beneath the Dayoukeng area are moved to above the sea level. Obviously, such a dramatic improvement is associated with not only the significant change of the strong heterogeneity from the absolute arrivals of both P-and S-waves, but also the huge number of relative arrivals of both P-and S-waves for reducing systemic errors. Certainly, the relocated seismicity is more reliable because the estimated errors of the earthquake locations in both the horizontal (ERH) and vertical (ERZ) directions are significantly improved after the relocation process (Fig. 7). The statistical comparison before and after the relocation process shows that earthquakes with an error less than 100 meters dramatically increase from ~20% to ~90%. www.nature.com/scientificreports www.nature.com/scientificreports/
Discussion
The highly clustered seismicity within an extremely narrow conduit suggests that the volcanic activity in the TVG is still ongoing. After the tomoDD relocation, the preliminary earthquakes diffused as a cloud of seismicity are dramatically clustered into an extremely narrow vertical conduit (Figs. 6 and S4). Although there are some possible small uncertainties for each individual earthquake location, the general pattern of seismicity clustering at a vertical conduit beneath the Dayoukeng area doesn't change too much due to dense seismic ray-path covering at that area. Such a concentrated seismicity is probably caused by the stress changes within a shallow volcanic www.nature.com/scientificreports www.nature.com/scientificreports/ conduit in the TVG. This conduit might be composed of fractural rocks that are not only allowing volcanic fluids or gases easily went through, but also inducing micro-earthquakes. Since the seismicity was not mainly dominated by seismic swarms but persistent from 2014 to 2017 (Fig. 8), the seismicity conduit couldn't been suddenly generated along a local fault due to stress transfer. Similar cases were reported at many active volcanoes, such as the Kilauea volcano in Hawaii 27,28 , Sakurajima volcano in Japan 29 , Mt. Vesuvius in Italy 30,31 , and Soufriere Hills in Montserrat 32 . Some of the concentrated seismicity can be directly associated with the magma movement, but others may reflect the background seismicity in the volcanic area. No matter the mechanism for triggering these volcano-earthquakes, the concentrated seismicity within a particular area often shows that a volcano is still active. Like in active faults, the concentrated seismicity is detected along their narrow fault zones (i.e. [33][34][35][36] ). Therefore, the www.nature.com/scientificreports www.nature.com/scientificreports/ persistent and stable seismicity within a narrow conduit in the TVG indicates it is still active even though there have been no eruption records in human history.
Furthermore, it is interesting to see that the seismicity conduit (earthquakes clustered within a vertical conduit) is near the Dayoukeng area, which has one of the strongest degassing fumaroles on the surface (i.e., supplemental movie). A variety of interesting anomalies were observed in and around the Dayoukeng area from geochemical, geophysical, and seismic analyses in the past decade. Firstly, the strong degassing process at the Dayoukeng fumarole has created several large sulfur-towers (i.e., Fig. S5), which have undergone a repeating process of collapse and re-building from time to time. The tallest height of these sulfur-towers may be up to ~ 5 meter (Fig. in 19 ). The detailed geochemical analyses of collected fluids and gases at all fumaroles in the TVG show that the Dayoukeng fumarole has a significant signature of sulfur compounds and hydrogen chloride as well as high helium isotope ratios (~6.8). These results strongly indicate that the degassing process is not associated with the hydrothermal but with magmatic activities 6,18,19 . Secondly, extremely high heat-flows have been detected in the Dayoukeng area 19 . The steam temperature measured at the Dayoukeng fumarole is around 105 °C at the surface and increases to 250 °C at a depth of ~200 meter within a borehole at Chintenkeng, which is less than 100 m from the seismicity conduit. Thirdly, the chemical signatures observed from the Helium isotope in the TVG show that the Dayoukeng area has the highest 3 He/ 4 He ratio (Yang et al., 1999), which implies the existence of magma chamber underneath this area. Fourth, some inflation sources have been detected by the crustal deformation as well as the seismic data. One inflation source at a depth of 0.7 km beneath the Dayoukeng area has been detected from repeated precise leveling surveys 7 . The inflation source within the pressurized hydrothermal layers may be created by the ascension of volcanic fluids from the deep magma reservoir. Similarly, another deeper inflation source has been identified around a depth of 2 km beneath the Dayoukeng area from 1016 earthquake focal mechanisms 37 . The stress magnitude ratios suggest this inflation is also associated with the local volcano-hydrothermal activity. Those shallow inflation sources might be caused by migrations of volcanic fluids and/or gases. Finally, www.nature.com/scientificreports www.nature.com/scientificreports/ www.nature.com/scientificreports www.nature.com/scientificreports/ a variety of interesting volcanic earthquakes and tremors have been observed in and around the Dayoukeng area, indicating strong degassing at shallow depths. For instance, a sequence of high-frequency spasmodic bursts within a duration of ~15 minutes was detected 9 . Also, heartbeat-like seismicity with a repeated period of ~18 minutes suggests a volcanic conduit may exist just beneath the Dayoukeng area 12 . Combining all the unusual features from the geochemical, geophysical, and seismic data with the significant degassing process on the surface, we may conclude that the Dayoukeng area is one of the major sites releasing the volcanic material and heat ascending from the deep magma reservoirs of the TVG.
The velocity structures obtained from the double-difference tomography indicate that the upwelling volcanic fluids and gases largely pass through some fracture zones beneath the Dayoukeng area. For instance, a northeast-dipping zone with low Vp/Vs (L2 in Fig. 4c) with depths between 3 km and 5 km may be one of the major pathways for ascending volcanic materials. Thus, major portions of the ascending material and heat may occasionally trigger clustered micro-earthquakes within a narrow vertical conduit at depths between 2 km and the surface. Meanwhile, other ascending materials and heat are migrating into broad hydrothermal zones (L1 in Fig. 4a,c) at depths shallower than ~2.5 km.
In addition to the seismicity clustered beneath the Dayoukeng area, the other group of micro-earthquakes is clearly identified beneath Mt. Chihsin (Fig. 9). Although the seismicity beneath Mt. Chihsin doesn't show a conduit-like shape, most of micro-earthquakes are detected within a small sphere-like zone within a diameter of ~ 1 kilometer. Similar to the earthquakes at the Dayoukeng area, seismicity beneath Mt. Chihsin is steady and www.nature.com/scientificreports www.nature.com/scientificreports/ persistent (Fig. 8). Also their focal depths are roughly ranging from between 0.5 km and 2.0 km. Both groups of earthquakes at the Dayoukeng area and Mt. Chihsin might be taken place within a hydrothermal layer and likely induced by the volcanic gases and fluids ascending from the deep magma reservoir beneath the Tatun volcanic group.
In summary, the seismic conduit beneath the Dayoukeng area as well as the clustered seismicity beneath Mt. Chihsin may be considered as an important seismic evidence for an active volcano in the TVG. In particular, the clustered micro-earthquakes within a narrow ~500 m diameter conduit beneath the Dayoukeng area have been persistently triggered by the ascending volcanic gases or fluids, which were originally released from the deep magma reservoirs at the lower crust 2 and passed through some fracture zones with low Vp/Vs values in the uppermost crust (Fig. 4). The magmatic signature was clearly shown by the significant sulfur compounds and hydrogen chlorides as well as high helium isotopes from the ascending gases and fluids near the Dayoukeng fumarole 6,19 . The pressurized hydrothermal layer at a shallow depth was revealed by an inflation source at a depth of 0.7 km 7 and another deep volcano-hydrothermal inflation was identified at a depth of ~2 km 37 beneath the Dayoukeng area. Also the spasmodic bursts 8 and heartbeat seismicity 12 have been observed beneath the Dayoukeng area. Thus, the seismic conduit may be one of the major pathways for ascending magma if the volcano erupts in the future. Certainly, possibility of developing different magma pathways at other clustered seismicity zones such as beneath Mt. Chihsin might not be totally excluded because there is no particular seismicity pattern for defining magma pathways in the future eruption.
Method
We employed double-difference seismic tomography 17 to produce a more accurate model of the velocity and event locations for the TVG in Taiwan. Among the 8,194 earthquakes recorded at 40 seismic stations in the TVG between 2014 and 2017, we have carefully selected 2,836 events to avoid spacing redundancy within each model grid during tomographic inversion (Figs. S6 and S7). The criteria for selected earthquakes includes (1) the rms (root-mean-square) values of travel-time residuals were less than 1 sec, and (2) both horizontal and vertical errors were smaller than 2 km. Besides, in each subarea (Fig. 1), we only used one earthquake that was located with the minimum residual and distance error for the tomographic inversion later. In total, we used the absolute arrivals of 39,282 P-waves and 31,697 S-waves picked routinely by the Taiwan Volcano Observatory in Taiwan (TVO), and the relative arrivals of 149,009 P-waves and 107,407 S-waves. In the process of the tomographic inversion, different parameters were tried according to the values suggested by the double-difference seismic tomography 17 . The final results are given by the parameters below. The iteration number was 32, and the damping for both inversion of velocity structures and earthquake relocation was 75. The weighting ratio between absolute and different arrival times were reduced from 100 to 0.01. The huge number of relative arrivals of both P-and S-waves, which are about 4 times more than the absolute arrivals, significantly reduce systemic errors, thereby improving the velocity model of the TVG. Based on the inverted 3D velocity model, we successfully relocated 8,194 local earthquakes.
We slightly modified the velocity model (Table 1), which has been used for routinely locating earthquakes and further investigations [8][9][10]37 , as the initial 1D model for the double-difference tomographic inversion. The velocity model was divided into 11 layers with a depth interval of 0.5 km from the surface to the depth of 5 km. The nodes at the X and Y directions of the velocity models are shown Fig. 1. The grid spacing in each horizontal layer is 0.5 km covering an area of 10 km × 10 km in the TVG.
In order to assess the effectiveness of the inversion, a checkerboard velocity model was applied to create a simulated dataset. Velocity perturbations of ±10% were applied to the checkerboard model with a grid spacing of 1.5 km in each layer. The simulated arrivals of both the P-and S-waves constructed from the actual seismicity | 4,954.6 | 2019-12-01T00:00:00.000 | [
"Geology"
] |
Developing an Efficient Deep Learning-Based Trusted Model for Pervasive Computing Using an LSTM-Based Classification Model
Mobile and pervasive computing is one of the recent paradigms available in the area of information technology. The role of pervasive computing is foremost in the field where it provides the ability to distribute computational services to the surroundings where people work and leads to issues such as trust, privacy, and identity. To provide an optimal solution to these generic problems, the proposed research work aims to implement a deep learning-based pervasive computing architecture to address these problems. Long short-term memory architecture is used during the development of the proposed trusted model. The applicability of the proposed model is validated by comparing its performance with the generic back-propagation neural network. This model results with an accuracy rate of 93.87% for the LSTM-based model much better than 85.88% for the back-propagation-based deep model. The obtained results reflect the usefulness and applicability of such an approach and the competitiveness against other existing ones.
Introduction
e recent advances in information technology has made the world to shift from big desktop computers and have a tendency to powerful and smaller devices to facilitate with large computational and heterogeneous wireless communications interfaces. Weiser [1] referred the concept of pervasive/ubiquitous computing which is the most recent paradigm in the world of computers. Ubiquitous/pervasive computing offers a number of advantages such as making life easier with the support of digital infrastructure and mobile devices accomplished by offering the services distribution to the people. Some of the pervasive and ubiquitous computing facing security-related issues are open for researchers to answer. Devices inside the pervasive systems are embedded and invisible which are operating in pervasive surroundings. e pervasive devices are performing mutual interaction without any identity in advance. So, it becomes complicated for the users to know where such devices are present and to exchange personal information. e traditional security of computing is mostly relying on the techniques of access control and authentication which provide access to the only registered users of the system. Ubiquitous and pervasive computing systems are very scalable and flexible due to which it is not suitable to adopt such services. e main characteristic of pervasive computing is the development and design of efficient services to the user who sends request for the services and in the context from which the request of service is sent. e contribution of the proposed research is to develop an efficient deep learning-based model to ensure the generic security issues such as trust, privacy, and authenticity over the Internet. From the selected dataset, only 12% of the data is used for the experimental purposes, and it results in a high accuracy rate of 93.87%. is high accuracy rate for such a small amount of data shows that if the training set increases, then the proposed model will provide more prominent accuracy rates. e paper is organized as follows: Section 2 represents the related work to the proposed research. Section 3 briefly shows the materials and method followed for conducting the proposed research. Section 4 shows the experimental results of the proposed study. e paper is concluded in Section 5.
Related Work
Different approaches and techniques are proposed by researchers for pervasive computing. Chen et al. [2] proposed an infrastructure of data-centric design to support the applications of context-aware. eir proposed middleware treats sources of contextual data as stream publishers. e system is robust to support self-organizing peer-to-peer overlay to support the services of data-driven. Katsiri and Mycroft [3] proposed a system for simulating pervasive systems through estimated knowledge about its situations and entities involved. e research has improved AESL with the function of higher order predicated to denote estimated knowledge about the possibility of predicted instance with value True for a time reference. Padovitz et al. [4] presented a framework of ECORA for the computing of context-aware for reasoning context about uncertainty and tracing the issues of scalability, usability, communication, and heterogeneity.
e system considers an agent-oriented hybrid approach and combining service of centralized reasoning with context-aware, reasoning able mobile software agents. Ahamed et al. [5] presented the S-MARKS design and implementation which consists of resource discovery, device validation, discovery of resource, and privacy of the module.
Boukerche and Ren [6] proposed a system of security for management of trust involving development of a trust model, nodes credential assignment, private key updating, managing the trust value, and taking suitable decision about the rights of access nodes. e research demonstrated that a malicious node can efficiently be excluded from the environment of pervasive and ubiquitous computing. Yu et al. [7] surveyed the literature for the comparison and classification framework for the four dimensions of the design concerned application migration: spatial, temporal, entity, and other concerns. Bello Usman and Gutierrez [8] augmented the basic concept of pervasive and mobile computing from different studies of the literature which uses methods and generic conceptual phases for the management of trust. e study covered a wide range of methods, techniques, models, and applications of trust-based protocol. Carullo et al. [9] presented a new approach for the establishment of trust leveraging the profiles of users. e authors [10] presented an approach which is capable of judging the trustworthiness of a device which interacts and behavior of the device with little interaction experience. e existing research in the field of gesture recognition and facial expression in the perspective of intelligent tutoring is analyzed for facilitating educational societies in building an efficient tutoring system [11].
Kurniawan and Kyas [12] presented a statistical decision approach for trust-based access control through Bayesian decision theory for identity management in Internet of ings. Dangelo et al. [13] presented the generic issues of pervasive computing architecture. e system integrated the techniques of artificial intelligence for achieving similar resemblance with the decision making which is human-like. Apriori algorithm was firstly used to extract the behavioral patterns, and then, Naïve Bayes classifier was used for decision making for trustworthiness of users. Uddin et al. [14] proposed an approach for the detection of terrorist activities. e five models of the deep neural network are used to monitor the behaviors of terrorist activities. e approaches used logistic regression, Naïve Bayes, and Support vector machine algorithms. e authors [15] proposed deep learning and neural networks algorithms for prediction of the behavior of punctuality of employees at the workplace. Khan et al. [16] proposed a variant of SVM, a LinearSVC for answer classification. e chi-square and univariate methods are used for the reduction of the size of the feature space. Deep learning algorithms are used in a variety of problems such as for evolutionary computing models in computer vision [17] and deep ensemble learning for human action recognition in still images [18].
Materials and Methods
e following sections show the materials and methods used.
Dataset. Dataset (Dishonest Internet users dataset.txt)
[19] has been used in this study. e dataset has 322 instances and 5 attributes which is available on the UCI machine learning repository. Figure 1 represents the generic diagram of the intrusion detection system. e system of intrusion detection acts like a guard at the objective node to activate a firewall and to alert host devices when an unauthorized access or illegal traffic is detected. In our case, we have used the deep learning-based model to defend the unauthorized access and malicious network traffic.
Deep Neural Network Backward Propagation Model for the Classification of Trusted and Untrusted Internet Users.
To tackle the problem of perceptron, in 1986, Rumelhart et al. [20] defined a new supervised learning technique which is called the Back-Propagation Deep Neural Network (BPDNN) which is mostly used for classification problems. e BPNN is a supervised deep learning model where error variance between the calculated outputs and the desired outputs is back-propagated.
is process is repetitive through the learning process for minimizing the errors through weights thought the back-propagation of errors. As a consequence of weight regulations, a hidden unit sets their weight to signify significant features of the task domain. e BPDNN contains three layers which are the hidden layer, inputs layer, and outputs layer. Learning in the BPDNN is a two-step procedure [20,21].
Complexity
Step 1. Forward propagation: this step depends on the input and present weights, and the output is computed. For such computations, each hidden unit and output unit calculates net excitation which depends on these conditions: (i) Values of earlier layer units that are linked to the unit in deliberation (ii) Weights between the unit in consideration and the previous layer unit (iii) reshold value on the consideration unit is net excitation is accomplished by activation function returning the calculated output value for that unit. is activation function must be differentiable and continuous. Several activation functions can be used in the BPDNN. Sigmoid is an extensively used activation function.
Step 2. Backward propagation of error: in this step, the error is computed by variance between the actual output of each output unit and targeted output. is error is back-propagated to the former layer that is the hidden layer. Error at that node is calculated for each unit in the hidden layer N. In a similar way, error at each node of the previous hidden layer that is N-1 is calculated. Forward and backward steps are repetitive until the error is reduced up to the predictable level. e parameters of BPDNN are shown in Table 1.
e BPDNN graphically represented in Figure 2 contains three layers, the inputs layer, hidden layer, and outputs layer.
Cross Validation Method.
For data classification using hold-out methods: 70% for training and 73% for testing in this study.
Performance Evaluation
Metrics. Accuracy, model execution time, and ROC-AUC have been used as performance evaluation metrics to evaluate the performance of the model.
Experimental Results
e backward propagation deep learning-based different networks have been trained and tested with essential parameters and reported in Table 1 Figures 3 and 4, respectively. It is concluded from Figure 3 that the back-propagation neural network generates an accuracy rate of 85.88% for the proposed problem, while the LSTM-based classification and recognition model outperforms by generating an accuracy rate of 93.87% as depicted in Figure 5. e back-propagation neural network is good in sequence learning problems but fails in retaining the information used long before [22]. To address the problem of retention in back-propagation neural networks, Hochreiter and Schmidhuber proposed a modified version of the backpropagation neural network in 1997 known as long shortterm memory (LSTM) [23]. is model provided prominent results for many machine learning problems such as text recognition, speech recognition, network attack detection problems, and many others. is high applicability of the Initial weights 3 Number of hidden units 4 Overtraining and initial stopping criteria 5 Number of instances 6 Function of activation 7 Inputs normalization Complexity 3 LSTM represents the outperformance to the vanilla recurrent neural networks (back-propagation, feed-forward propagation, and so on) significantly. e applicability of the proposed algorithm is also tested using the LSTM model. e performance results of the LSTM-based model are depicted in Figure 5, and it generates an accuracy rate of 93.87% for the proposed problem.
is high accuracy rate for the LSTM-based recognition model reflects the application of the proposed model for the said issue.
After testing the LSTM model for varying training and test sets, it is concluded from Figure 5 that the LSTM shows an average highest accuracy rate of 93.87% for a training set of 70% and the remaining is selected as a test set. is high-accuracy value reflects the applicability of the LSTM model for the proposed problem. It is also concluded from Figure 5 that when the training set increases, the calculated time consumption also increases accordingly. e highest accuracy value of the LSTM model in Figure 5 reflects the solution to the nonretaining problem (forgetting/destroying the values used long before) of the back-propagation model.
Conclusions
Pervasive and ubiquitous computing is one of the advance paradigms in the area of information technology. e recent advances in information technology has made the world to shift from big desktop computers and have a tendency to powerful and smaller devices to facilitate with large computational and heterogeneous wireless communications interfaces. e role of pervasive computing is foremost in the field where it provides ability to distribute computational services to the surroundings where people work and leads to make issues such as trust, privacy, and identity. To provide an optimal solution to these generic problems, the proposed research work aims to implement a deep learning-based pervasive computing architecture to address these problems. Long short-term memory architecture is used during the development of the proposed trusted model. e applicability of the proposed model is validated by comparing its performance with the rival back-propagation neural network. is model results with an accuracy rate of 93.87% for the LSTM-based model much better than 85.88% for the back-propagation-based deep model. e obtained results reflect the usefulness and applicability of such approach and the competitiveness against other existing ones. In the future, the proposed research can be expanded for the recognition of unfair recommenders and its implementations on portable devices in order to validate it for real-world scenarios.
Data Availability e research has used the dataset which is available online. | 3,077.2 | 2020-09-09T00:00:00.000 | [
"Computer Science"
] |
Damping Analyses of Structural Vibrations and Shunted Piezoelectric Transducers
Piezoelectric transducers in conjunction with appropriate electric networks can be used as a mechanical energy dissipation device. Alternatively, undesired mechanical energy of a structure could be converted into electrical energy that can be dissipated through a shunt network in the form of Joule heating. This paper presents an experimental method to calculate damping energy in mechanical systems. However, the mathematical description of damping mechanism is much more complicated, and any process responsible for the occurrence of damping is very intricate. Structural and piezoelectric damping are calculated and analysed in the case of pulse switching or SSDI semiactive vibration control technique. This technique which was developed in the field of piezoelectric damping consists in triggering the inverting switch on each extremum of the piezoelectric voltage which induces an increase of the electromechanical energy conversion.
Introduction
Vibration damping is one of the manifestations of mechanical energy dissipation related to motion in mechanical systems.Damping processes have been studied for a long time.Damping forces are small compared to the other interactions in a mechanical system and yet their mathematical description remains much more complicated.Actually, any process responsible for the occurrence of damping is very intricate and the knowledge of it is insufficient.Sometimes just changing the system's stiffness or mass to alter the resonance frequencies can reduce the unwanted vibration as long as the excitation frequencies do not change.But in most cases, the vibrations need to be dissipated using damping materials or devices that are tuneable with vibration.
Several methods have been investigated in case of vibration damping.These methods have the forms of passive, semiactive, and active treatments which can be used for sound/vibration cancellation.Active control involves the use of active elements (actuators) along with sensors and controllers (analogue or digital) to produce an out-ofphase actuation to cancel the disturbance causing the noise/vibration [1].All other methods that do not include a real-time active algorithm can be grouped under the passive control option.Passive damping refers to energy dissipation within the structure by add-on damping devices.Viscous dampers (dashpots), viscoelastic damping, tunedmass dampers, dynamic absorbers, and shunted piezoelectric dampers are the mechanisms of passive vibration control.The most common types of passive damping treatments of viscoelastic materials were described by Rao [2].Tuned dampers are the reactive devices which used to oscillation damping at a particular resonant frequency.It consists of an inertia element, a compliant/resilient element, and an energy dissipating element.The inertial, resilient, and dissipative elements in such devices are: mass, spring and dashpot.Depending on the application, these devices are sized from a few grams to many tons.They attract the vibration energy of the target mode and dissipate it as heat through the action of its dashpot.
Electronic damping using piezoelectric ceramics (Piezo-Shunt) is less temperature sensitive and more tuneable compared with viscoelastic damping treatments.In this damping technique, the mechanical energy of the structure is converted to electrical energy by piezoelectric material.The electrical energy, in turn is dissipated, as heat, in an electrical Smart Materials Research shunt circuit.These methods are interesting because they do not rely on any operative energy as in active control.They consist in the drive of a few solid-state switches (i.e., MOSFET transistor) requiring very few power and, in general, are simple to implement.Pulse switching damping technique [3][4][5][6][7] which is implemented in this paper consists in leaving the piezo elements in open-circuit except during very brief period of time where the electric charge is either suppressed in a short circuit or inverted on an inductance.
Many various terms are used to represent vibration damping.These representations merely indicate the mathematical model used to represent the physical mechanism of damping that is still not clearly understood for many cases [8,9].The purpose of this paper is the practical measurement of damping energy which is the vital problem in vibrating systems.The effects of piezoelectric damping and structural damping are observed experimentally and compared with each other.At the following, the damping behaviors in low and high values of deformation are studied.In order to analyse the vibration damping, a multimodal structure equipped with piezoelectric elements wired on a pulse switching cell is experimented.
Energy Analyses and Energetic Considerations
In vibration analyses, it is concerned with damping in terms of system response.The loss of energy from the oscillatory system results in the decay of free vibration amplitude.In steady-state forced vibration, the loss of energy is balanced by the energy which is supplied by the excitation.Energy dissipation is usually determined under conditions of cyclic oscillations.The energy dissipated per cycle due to a damping force F d is computed from the general equation [8] where u is the displacement.On the other hand, experiments by several investigators [8] indicate that for most structural metals (such as steel), the energy dissipated per cycle (structural damping) proportional to the squared of the vibration amplitude and independent of the frequency over a wide frequency range.Then energy dissipated by structural damping may be written as [10] where α d is a constant independent of the frequency of harmonic oscillation and u max is the amplitude of vibration.Material in cyclic stress exhibits a stress-strain relation characterized by a hysteresis loop.The cyclic dissipated energy is proportional to the hysteresis loop area.Consequently, the energy balance equation based on hysteresis loop can be deduced.The following is an equation of motion of a vibrating system where F i (u, u) and F e (t) correspond to internal and external forces per unit mass, respectively.The external and the internal loops (hysteresis loops) can be then described by We can write down the following relationship between them: Owing to this relationship we, can convert the external loop into an internal one by subtracting the values corresponding to the external loop from acceleration dependent on displacement.Multiplying ( 5) by du = u dt enables one to obtain the infinitesimal work done by exciting forces.
Integrating over the entire period leads to The first integral on the right-hand side of ( 6) equals zero for periodic motion.Hence, we can derive the following wellknown formula: which say that dissipated energy is equal to the work done by external forces [11,12].The experimental sample in this paper is a cantilever beam equipped with piezoelectric patches wired on a pulse switching cell (Figure 1) [13].
The total outgoing current from the piezoelectric patches using their constitutive equations can be calculated as [14,15] where C p is the capacitance of the piezoelectric elements.
In general, the piezoelectric patches are wired together in parallel.Multiplying each term of ( 8) by the voltage and integrating over the time shows that the converted energy is the sum of the electrostatic energy stored on the piezoelectric elements and the energy absorbed or dissipated by the electrical device as In the case of pulse switching technique, an electric circuit is connected to the piezoelectric elements.This circuit can be used for dissipating energy from the system or energy recovery or for both of them.It consists of a switching device in parallel with the piezoelectric elements.The current in the switching device is always zero except during the voltage inversion that takes place at each switch trigger.At each inversion, the energy extracted from the piezoelement is equal to the difference in the electrostatic energy on the piezoelectric elements before and after the voltage inversion Piezo elements jump (Figure 2).The energy dissipated in the switching device is then given by where v k is the piezoelectric voltage just before the kth inversion and γ is the inversion coefficient.
There are two types of damping: material damping (structural) and system damping.Material damping is the damping inherent in the material, while system damping includes the damping at adds damper devices to the system (such as, piezoelectric material), in addition to material damping.By calculating damping energy (7) in the case of forced harmonic vibration during a time period for controlled and uncontrolled cases, respectively, the piezoelectric damping can be calculated.Among to the methods of damping measurement of a vibrating system, attenuation [11] is the other parameter to describe the vibration damping as where (u max ) c and (u max ) unc are the maximum amplitude of displacement in controlled (structure with piezo elements) and uncontrolled cases, respectively.
Experimental Results
Experimental setup considered is a steel beam equipped with piezoelectric inserts.This structure corresponds to the According to the proposed method, the switch trigger is generated by the digital output of the control board, connected on a pulse switching device as described in [5].The displacement sensor used is a simple piezoelectric insert collocated with the main inserts.The sensor thickness and material are similar to the control insert and generates, therefore, the same open-circuit voltage.Control strategy programming and implementation is done using the Matlab/Simulink software environment and dSPACE Real-Time Workshop for real-time computing and input/output control.Excitation of the beam is obtained using an electromagnet driven using an audio amplifier, by a harmonic excitation generated by the dSPACE board.
In practice, to measure the energy dissipated per cycle, using the concept of ( 7), the procedure is rather straightforward.For a cantilever beam with applied f (t) at its free-end, this energy loss is given by the integral over a complete cycle in the case of harmonic excitation as where T is the cycle's period.This integration can be estimated by using Z data points separated by Δt seconds, so that digital summation over Z − 1 data points replaces the continuous integral.Thus (12) leads to where n is the number of cycles.Therefore, by multiplying generated force in the dSPACE board to the free-end measured velocity by the laser sensor and dSPACE step time Δt for the Z captured data points during n cycle, the dissipated energy from (13) can be calculated during a time period.Piezoelectric damping energy is equal to the difference between controlled and uncontrolled cases.
Figure 3 shows the variations of capture (displacement sensor) signal amplitude versus the excitation force amplitude in controlled and uncontrolled cases, respectively.The curves slope indicate the influence coefficient (inverse of stiffness) at the point that in which excitation force is applied (It should be mentioned that the capture signal is an image of the free-end deflection of beam; it has placed at the clamped end of beam that in which the bending moment is maximum and consequently the capture voltage (piezoelement deformation) and free-end deflection are maximum.Indeed, computations have been made using the image of deflection given by the collocated piezoelectric deflection sensor.).Damping effect increases gradually with increase of force amplitude which is more evident in controlled case because of the piezoelectric damping.Piezoelectric effect is too little in small deformations.The vertical distance between these two curves show the effect of piezoelectric damping.Since the structural damping is the same in controlled (piezo elements) and uncontrolled cases, the difference of curves slope is just the one caused by the piezoelectric damping.In high deformations (force amplitude > 1.4), the curves tend to be parallel with each other.It is mentioned that the maximum damping caused by piezoelectric elements was limited and have its ultimate value.After the ultimate value, the curves rise similarly with increase of force amplitude.In this case, the additional injected energy to the system only increases the amplitude of vibration in two cases (controlled and uncontrolled) proportionally.In this figure, the excitation frequency is equal to the first resonant frequency of the beam (Table 1).
Figure 4 shows the variations of capture signal attenuation versus the force amplitude.From the figure, the decreasing of the attenuation was limited.The minimum attenuation (ultimate value) is about −10 dB.It is observed that after the ultimate value the curve rise slowly with increase of force amplitude.In this case, structural damping is more predominant than piezoelement damping (It is proportional to the square of vibration amplitude, (2).).Also, the excitation frequency is the first resonant frequency (Table 1).
Figure 5 indicates the variations of periodical damping energy versus the square of capture signal amplitude.Damping energy in controlled case increases faster than one in uncontrolled case with increase of capture signal amplitude.Therefore, the curve slope in controlled case is more than the one in uncontrolled case up to the value of 60.After that, the structural damping is more predominant, and piezoelectric damping reaches its ultimate value.Then, two curves are approximately parallel (with distance d piezo ).It can be concluded that (2) holds true only for high values of structural deformation (The curves slope are constant in high deformations).Also, the values of α d are the same for the two cases (controlled and uncontrolled) and are just as the one for structural damping.Piezoelectric effect increases damping only more than uncontrolled case equal d piezo .Thus as ( 2) is relevant to structural damping, this equation for controlled case is The slop of these curves gives the value of α d and is approximately equal to 0.026.
Conclusions
This paper showed an experimental method to calculate the damping energy in mechanical system.However, the mathematical description of damping mechanism is much more complicated and any process responsible for the occurrence of damping is very intricate and the knowledge of it is insufficient.Structural damping and piezoelectric damping for pulse switching semiactive nonlinear control technique have been calculated and analysed.Pulse switching control technique is interesting for structural damping applications because it presents simultaneously good damping performances, a good robustness, and very low power requirements.Finally, it is important to consider that this technique is simple enough to be self-powered.Piezoelectric damping will increase with increase of force amplitude but its maximum value was limited and has its ultimate value.
Figure 1 :Figure 2 :
Figure 1: Cantilever beam where u(x, t) is the beam deflection along the transverse direction (y) and f (t) is the excitation force.
Figure 3 :
Figure 3: Variations of the capture signal amplitude versus the force amplitude in controlled and uncontrolled cases.
Figure 4 :Figure 5 :
Figure 4: Variations of the capture signal attenuation in dB versus the force amplitude.
Table 1 :
Characteristics of the experimental structure.
Table 1 .
The proposed control strategy is implemented using a laboratory PC-based real-time DSP controller environment.Either the voltage or the deflection signals are sampled by the board as input data. | 3,264.8 | 2012-01-16T00:00:00.000 | [
"Engineering",
"Physics"
] |
Sentence Similarity Based on Contexts
Existing methods to measure sentence similarity are faced with two challenges: (1) labeled datasets are usually limited in size, making them insufficient to train supervised neural models; and (2) there is a training-test gap for unsupervised language modeling (LM) based models to compute semantic scores between sentences, since sentence-level semantics are not explicitly modeled at training. This results in inferior performances in this task. In this work, we propose a new framework to address these two issues. The proposed framework is based on the core idea that the meaning of a sentence should be defined by its contexts, and that sentence similarity can be measured by comparing the probabilities of generating two sentences given the same context. The proposed framework is able to generate high-quality, large-scale dataset with semantic similarity scores between two sentences in an unsupervised manner, with which the train-test gap can be largely bridged. Extensive experiments show that the proposed framework achieves significant performance boosts over existing baselines under both the supervised and unsupervised settings across different datasets.
Introduction
Measuring sentence similarity is a long-standing task in NLP (Luhn, 1957;Robertson et al., 1995;Blei et al., 2003;Peng et al., 2020).The task aims at quantitatively measuring the semantic relatedness between two sentences, and has wide applications in text search (Farouk et al., 2018), natural language understanding (MacCartney and Manning, 2009) and machine translation (Yang et al., 2019a).
One of the greatest challenges that existing 1 Accepted by TACL.methods face for sentence similarity is the lack of large-scale labeled datasets, which contain sentence pairs with labeled semantic similarity scores.The acquisition of such dataset is both labor-intensive and expensive.
For example, the STS benchmark (Cer et al., 2017) and SICK-Relatedness dataset (Marelli et al., 2014) respectively contain 8.6K and 9.8K labeled sentence pairs, the sizes of which are usually insufficient for training deep neural networks.
Unsupervised learning methods are proposed to address this issue, where word embeddings (Le and Mikolov, 2014) or BERT embeddings (Devlin et al., 2018) are used to to map sentences to fix-length vectors in an unsupervised manner.Then sentence similarity is computed based on the cosine or dot product of these sentence representations.Our work follows this thread where sentence similarity is computed based on fix-length sentence representations, as opposed to comparing sentences directly.The biggest issue with current unsupervised approaches is that there exists a big gap between model training and testing (i.e., computing semantic similarity between two sentences).For example, the BERT-style models are trained at the token level by predicting words given contexts, and there is neither explicit modeling sentence semantics nor producing sentence embeddings at the training stage.But at test time, sentence semantics needs to be explicitly modeled to obtain semantic similarity.The inconsistency results in a distinct discrepancy between the objectives at the two stages and inferior performances on textual semantic similarity tasks.For example, BERT embeddings yield inferior performances on semantic similarity benchmarks (Reimers and Gurevych, 2019), and even underperforming the naive method such as averaging GloVe (Pennington et al., 2014) embeddings.Li et al. (2020) investigated this prob-lem and found that BERT always induces a nonsmooth anisotropic semantic space of sentences, and this property significantly harms the performance of semantic similarity.
Like word meanings are defined by neighboring words (Harris, 1954), the meaning of a sentence is determined by its contexts.Given the same context, it is a high probability to generate two similar sentences.If it is a low probability of generating two sentences given the same context, there is a gap between these two sentences in the semantic space.Based on this idea, we propose a framework that measures semantic similarity through the probability similarity of generating two sentences given the same context in a fully unsupervised manner.As for implementation, the framework consists of the following steps: (1) we train a contextual model by predicting the probability of a sentence fitting into the left and right contexts; (2) we obtain sentence pair similarity by comparing scores assigned by the contextual model across a large number of contexts.To facilitate inference, we train a surrogate model, to act as the role of step 2, based on the outputs from step 1.The surrogate model can be directly used for sentence similarity prediction in an unsupervised setup, or used as initialization to be further finetuned on downstream datasets in the supervised setup.Note that the outcome from step 1 or the surrogate model is a fixed-length vector regarding the input sentence.Each element in the vector indicates how fit the input sentence is to the context corresponding to that element, and the vector itself can be viewed as the overall semantics of the input sentence in the contextual space.Then we use cosine distance between two sentence vectors to compute the semantic similarity.
The proposed framework offers the potential to fully address the two challenges above: (1) the context regularization provides a reliable means to generate a large-scale high-quality dataset with semantic similarity scores based on unlabeled corpus; and (2) the train-test gap can be naturally bridged by training the model on the large-scale similarity dataset, leading to significant performance gains compared to utilize pretrained models directly.
We conduct experiments on different datasets under both supervised and unsupervised setups, and experimental results show that the proposed framework significantly outperforms existing sentence similarity models.
Matrix Based Methods
The first line of work for measuring sentence similarity is to construct a similarity matrix between two sentences, each element of which represents the similarity between the two corresponding units in two sentences.Then the matrix is aggregated in different ways to induce the final similarity score.Pang et al. (2016) applied a twolayer convolutional neural network (CNN) followed by a feed-forward layer to the similarity matrix to derive the similarity score.He and Lin (2016) used a deeper CNN to make the best use of the similarity matrix.Yin and Schütze (2015) built a hierarchical architecture to model text compositions at different granularities, so several similarity matrices can be computed and combined for interactions.Other works proposed to use the attention mechanism as a way of computing the similarity matrix (Rocktäschel et al., 2015;Wang et al., 2016;Parikh et al., 2016;Seo et al., 2016;Shen et al., 2017;Lin et al., 2017;Gong et al., 2017;Tan et al., 2018;Kim et al., 2019;Yang et al., 2019b).
Word Distance Based Methods
The second line of work to measure sentence similarity is to calculate the cost of transforming from one sentence to another, and the smaller the cost is, the more similar two sentences are.This idea is implemented by the Word Mover's Distance (WMD) (Kusner et al., 2015), which measures the dissimilarity between two documents as the minimum amount of distance that the embedded words of one document need to transform to words of another document.Following works improve WMD by incorporating supervision from downstream tasks (Huang et al., 2016), introducing hierarchical optimal transport over topics (Yurochkin et al., 2019), addressing the complexity limitation of requiring to consider each pair (Wu and Li, 2017;Wu et al., 2018;Backurs et al., 2020) and combining graph structures with WMD to perform crossdomain alignment (Chen et al., 2020).More recently, Yokoi et al. (2020) proposes to disentangle word vectors in WRD has shown significantly performance boosts over vanilla WMD.
Sentence Embeddings Based Methods
Sentence embeddings are high-dimensional representations for sentences.They are expected to contain rich sentence semantics so that the similarity between two sentences can be computed by considering their sentence embeddings via certain metrics such as cosine similarity.Le and Mikolov (2014) introduced paragraph vector, which is learned in an unsupervised manner by predicting the words within the paragraph using the paragraph vector.In a followup, a line of sentence embedding methods such as FastText, Skip-Thought vectors (Kiros et al., 2015), Smooth Inverse Frequency (SIF) (Arora et al., 2016), Sequential Denoising Autoencoder (SDAEs) (Hill et al., 2016), InferSent (Conneau et al., 2017), Quick-Thought vectors (Logeswaran and Lee, 2018) and Universal Sentence Encoder (Cer et al., 2018) have been proposed to improve the sentence embedding quality with more efficiency.
The great success achieved by large-scale pretraining models (Devlin et al., 2018;Liu et al., 2019) has recently stimulated a strand of work on producing sentence embeddings based on the pretraining-finetuning paradigm using large-scale unlabeled corpora.The cosine outcome between the representations of two sentences produced by large-scale pretrained models is treated as the semantic similarity (Reimers and Gurevych, 2019;Wang and Kuo, 2020;Li et al., 2020).Su et al. (2021); Huang et al. (2021) proposed to regularize the sentence representations by whitening them, i.e., enforcing the covariance to be an identity matrix to address the non-smooth anisotropic distribution issue (Li et al., 2020).
The BERT-based scores (Zhang et al., 2020;Sellam et al., 2020), though serve as automatic metrics, also capture rich semantic information regarding the sentence and have the potentials for measuring semantic similarity.Cer et al. (2018) proposed a method of encoding sentences into their corresponding embeddings that specifically target transfer learning to other NLP tasks.Karpukhin et al. (2020) adopted two unique BERT encoder models and the model weights are optimized to maximize the dot product.The most recent line of work focuses on leveraging the contrastive learning framework to tackle semantic textual similarity (Wu et al., 2020;Carlsson et al., 2021;Kim et al., 2021;Yan et al., 2021;Gao et al., 2021), where two similar sentences are pulled close and two random sentences are pulled away in the sentence representation space.This learning strategy helps better separate sentences with different semantics.This work is motivated by learning word representations given its contexts (Mikolov et al., 2013;Le and Mikolov, 2014) with the assumption that the meaning of a word is determined by its context.Our work is based on large-scale pretrained model and aims at learning informative sentence representations for measuring sentence similarity.
Overview
The key point of the proposed paradigm is to compute semantic similarity between two sentences by measuring the probabilities of generating the two sentences across a number of context.
We can achieve this goal based on the following steps: (1) we first need to train a contextual model to predict the probability of a sentence fitting into the left and right contexts.This goal can be achieved by either a discriminative model, i.e., predicting the probability that the concatenation of a sentence with context forms a coherent text, or a generative model, i.e., predicting the probability of generating a sentence given contexts; (2) next, given a pair of sentences, we can measure their similarity by comparing their scores assigned by contextual models given different contexts; (3) for step 2, for any pair of sentences at test time, we need to sample different contexts to compute scores assigned by contextual models, which is time-consuming.We thus propose to train a surrogate model that takes a pair of sentences as inputs, and predicts the similarity assigned by the contextual model.This enables faster inference though at a small sacrifice of accuracy; (4) the surrogate model can be directly used for obtaining sentence similarity scores in a unsupervised manner, or used as model initialization, which will be further fine-tuned on downstream datasets in a supervised setting.We will discuss the detail of each module in order below.
Training Contextual Models
We need a contextual model to predict the probability of a sentence fitting into left and right contexts.We combine a generative model and a discriminative model to achieve this goal, allowing us to take the advantage of both to model text coherence (Li et al., 2017).
Notations Let c i denote the i-th sentence, which consists of a sequence of words c i = {c i,1 , ..., c i,n i }, where n i denotes the number of words in c i .Let c i:j denote the i-th to j-th sentences.c <i and c >i respectively denote the preceding and subsequent context of c i .
Discriminative Models
The discriminative model takes a sequence of consecutive sentences [c <i , c i , c >i ] as the input, and maps the input to a probability indicating whether the input is natural and coherent.We treat sentence sequences taken from the original articles written by humans as positive examples and sequences with replacements of the center sentence c i as negative ones.Half of replacements of c i come from the original document, and half of replacements come from random sentences from the corpus.The concatenation of LSTM representations at the last step (right-to-left and left-to-right) is used to represent the sentence.Sentence representations for consecutive sentences are concatenated and output to the sigmoid function to obtain the final probability: (1) where h denotes learnable parameters.We deliberately make the discriminative model simple for two reasons: the discriminative approach for coherence prediction is a relatively easy task and more importantly, it will be further used in the next selection stage for screening, where faster speed is preferred.
Generative Models
Given contexts c <i and c >i , the generative model predicts the probability of generating each token in sentence c i sequentially using SEQ2SEQ structures (Sutskever et al., 2014) as the backbone: Semantic similarity between two sentences can be measured by not only the forward probability of generating the two sentences given the same context p(c i |c <i , c >i ), but also the backward probability of generating contexts given sentences.The context-given-sentence probability can be modeled by predicting preceding contexts given subsequent contexts p(c <i |c i , c >i ) and to predict subsequent contexts given preceding contexts p(c >i |c <i , c i ).
Scoring Sentence Pairs
Given context [c <i , c >i ], the score for s i fitting into the context is the linear combination of scores from discriminative and generative models: where λ 1 , λ 2 , λ 3 , λ 4 control the tradeoff between different modules.For simplification, we use c to denote context c <i , c >i .S(s i , c) is thus equivalent to S(s i , c <i , c >i ).
Let C denote a set of contexts, where N C is the size of C. For a sentence s, its semantic representation v s is an N C dimensional vector, with each individual value being S(s, c) with c ∈ C. The semantic similarity between two sentences s 1 and s 2 can be computed based on v s 1 and v s 2 using different metrics such as cosine similarity.
Constructing C We need to pay special attentions to the construction of C. The optimal situation is to use all contexts, where C is the entire corpus.Unfortunately, this is computationally prohibitive as we need to iterate over the entire corpus for each sentence s.
We propose the following workaround for tractable computation.For a sentence s, rather than using the full corpus as C, we construct its sentence specific context set C s in a way that s can fit into all constituent context in C s .The intuition is as follows: with respect to sentence s 1 , contexts can be divided into two categories: contexts which s 1 fits into, based on which we will measure whether or not s 2 also fits in; contexts which s 1 does not fit into, and we will measure whether or not s 2 also does not fit in.We are mostly concerned about the former, and can neglect the latter.The reason is as follows: the latter can also further be divided into two categories: contexts that fit neither s 1 or s 2 , and contexts that do not fit s 1 but fit s 2 .For contexts that fit neither s 1 and s 2 , we can neglect them since two sentences not fitting into the same context does not signify their semantic relatedness; for contexts that does not fit s 1 but fit s 2 , we can leave them to when we compute C s 2 .
Practically, for a given sentence s, we first use a TF-IDF weighted bag-of-word bi-gram vectors to perform primary screening on the whole corpus to retrieve related text chunks (20K for each sentence).Next, we rank all contexts using the discriminative model based on Eq.1.For discriminative models, we cache sentence representations in advance, and compute model scores in the last neural layer, which is significantly faster than the generative model.This two-step selection strategy is akin to the pipelined selection system (Chen et al., 2017;Karpukhin et al., 2020) in open-domain QA which contains document retrieval using IR systems and fine-grained question answering using neural QA models.
C s is built by selecting top ranked contexts by Eq. 3. We use the incremental construction strategy, adding one context at a time.To promote diversity of C s , each text chunk is allowed to contribute at most one context, and the Jaccard similarity between the i − 1-th sentence in the context to select and those already selected should be lower than 0.5. 2o compute semantic similarity between s 1 and s 2 , we concatenate C s 1 and C s 2 and use the concatenation as the context set C. The semantic similarity score between s 1 and s 2 is given as follows:
Training Surrogate Models
The method described in Section 3.3 provides a direct way to compute scores for semantic relat-edness.But it comes with a severe shortcoming of slow speed at inference time: given an arbitrary pair of sentences, the model still needs to go through the entire corpus, harvest the context set C s , and iterate all instances in C s for context score calculation based on Eq.(3), each of which is time consuming.To address this issue, we propose to train a surrogate model to accelerate inference.Specifically, we first harvest similarity scores for sentence pairs using methods in Section 3.3.We collect scores for 100M pairs in total, which are further split into train/dev/test by 98/1/1.Next, by treating harvested similarity scores as gold labels, we train a neural model that takes a pair of sentence as an input, and predicts its similarity score.The cosine similarity between the two sentence representations is the predicted semantic similarity, and we minimize the L 2 distance between predicted and golden similarities.The Siamese structure makes it possible that fixedsized vectors for input sentences can be derived and stored, allowing for fast semantic similarity search, which we will discuss in detail in the ablation study section.
It is worth noting both the advantages and disadvantages of the surrogate model.For advantages, firstly, it can significantly speed up inference as it avoids the time-consuming process of iterating over the entire corpus to construct C. Secondly, the surrogate shares the same structure with existing widely-used models such as BERT and RoBERTa, and can thus later be easily finetuned on the human-labeled datasets in supervised learning; on the other hand, the origin model in Section 3.3 cannot be readily combined with other humanlabeled datasets.For disadvantages, the surrogate model inevitably comes with a cost of accuracy, as its upper bound is the origin model in Section 3.3.
Experiment Settings
We evaluate the Surrogate model on Semantic Textual Similarity (STS), Argument Facet Similarity (AFS) corpus (Misra et al., 2016), and Wikipedia Sections Distinction (Ein Dor et al., 2018) tasks.We perform both unsupervised and supervised evaluations on these tasks.For unsupervised evaluations, models are directly used for obtaining sentence representations.For supervised evaluations, we use the training set to fine-tune all models and use the L 2 regression as the objective function.Additionally, we also conduct partially supervised evaluation on STS benchmarks.
Implementation Details For discriminative model in 3.2.1,we use a single-layer bi-directional LSTM as the backbone with the size of hidden states set to 300.
For generative model in 3.2.2,We implement the above three models, i.e.
p(c i |c <i , c >i ), p(c <i |c i , c >i ) and p(c >i |c <i , c i ) based on the SEQ2SEQ structure, and use Transformer-large as the backbone (Vaswani et al., 2017).Sentence position embeddings and token position embeddings are added to word embeddings.The model is trained on a corpus extracted from CommonCrawl which contains 100B tokens.
For the surrogate model in 3.4, we use RoBERTa (Liu et al., 2019) as the backbone, and adopts the Siamese structure (Reimers and Gurevych, 2019), where two sentences are first mapped to vector representations using RoBERTa.We use the average pooling on the last RoBERTa layer to obtain the sentence representation.During training, we use Adam (Kingma and Ba, 2014) with learning rate of 1e-4, β 1 = 0.9, β 2 = 0.999.The trained surrogate model obtains an average L 2 distance of 7.4 × 10 −4 on dev set when trained from scratch, and 6.1 × 10 −4 when initialized using the RoBERTa-large model (Liu et al., 2019).We set C s to 500.
Baselines We use the following models as baselines: • Avg.
Glove embeddings is the average of word embeddings produced via the co-occurrence statistics in the corpus (Pennington et al., 2014) • BERTScore computes the similarity of two sentences as a sum of cosine similarities between their tokens' embeddings (Zhang et al., 2020).• BLEURT is baseed on BERT and captures non-trivial semantic similarities by finetuning the model on the WMT Metrics dataset, on a set of ratings provided by the user, or a combination of both (Sellam et al., 2020).• DPR works by using two unique BERT encoder models and the model weights are optimized to maximize the dot product (Karpukhin et al., 2020).• Universal Sent Encoder is a method of encoding sentences into their corresponding embeddings that specifically target transfer learning to other NLP tasks (Cer et al., 2018).• SBERT is a BERT-based method of using the Siamese structure to derive sentence embeddings that can be compared through cosine similarity (Reimers and Gurevych, 2019).
Run-time Efficiency
The run-time efficiency is important for sentence representation models since similarity functions are potentially applied to large corpora.
In this subsection, we compare Surrogate base to InferSent (Conneau et al., 2017), Universal Sent Encoder (Cer et al., 2018) and SBERT base (Reimers and Gurevych, 2019).We adopt a length batching strategy in which sentences are grouped together by length.
The proposed Surrogate model is based on PyTorch.InferSent (Conneau et al., 2017) and SBERT (Reimers and Gurevych, 2019) are based on PyTorch.Universal Sent Encoder (Cer et al., 2018) is based on Tensorflow and the model is from the Tensorflow Hub.Model efficiency is measured on a server with Intel i7-5820K CPU @ 3.30GHz, Nvidia Tesla V100 GPU, CUDA 10.2 and cuDNN.We report both CPU and GPU speed and the results can be found in Table 1.As can be seen, InferSent is around 69% faster than Surrogate model on CPU since its simpler model architecture.The speed of the proposed Surrogate model is comparable to SBERT for both nonbatching and batching setups, which is in accord with our expectations due the same transformer structure adopted by the Surrogate model.
Experiment: Semantic Textual Similarity
We evaluate the proposed method on the Semantic Textual Similarity (STS) tasks.We compute the Spearman's rank correlation ρ between the cosine similarity of the sentence pairs and the gold labels for comparison.
The results are shown in Table 2 and we observe significant performance boosts of the proposed models over baselines.Notably, the proposed models trained in the unsupervised setting (both Origin and Surrogate) are able to achieve competitive results to models trained on additional annotated NLI datasets.Another observation is, as expected, the Surrogate models underperform the Origin model as Origin serves as an upper bound for Surrogate but with a cost of inference speed.
Partially Supervised Evaluation We finetune the model on the combination of the SNLI (Bowman et al., 2015) and the Multi-Genre NLI (Williams et al., 2018) dataset, with the former containing 570K sentence pairs and the latter containing 433K pairs across various genres of sources.Sentence pairs from both datasets are annotated with one of the labels contradiction, entailment, and neutral.The proposed models are trained on the natural language inference task then used for computing sentence representations in an unsupervised manner.
The partially supervised results are shown in Ta-ble 2. As can be seen, results from the proposed model finetuned on NLI datasets are comparable to results from unsupervised models since no labeled similarity dataset is used, and comparable to results from supervised models if further finetuned on similarity datasets such as STS.
Supervised Evaluation For the supervised setting, we use the STS benchmark (STSb) to evaluate supervised STS systems.This dataset contains 8,628 sentence pairs from three categories: captions, news, and forums, and is split into 5,749/1,500/1,379 sentence pairs respectively for training/dev/test.The proposed models are finetuned on the labeled datasets under the setup.
For our proposed framework, we use Origin to represent the original model, where C for each sentence is constructed by searching the entire corpus as in Section 3.3 and we compute similarity scores based on Eq.( 4).We also report performances for Surrogate models with base and large sizes.
The results are shown in Table 3.We can see that for both model sizes (base and large) and both setups (with and without NLI training), the proposed Surrogate model significantly outperforms baseline models, leading to an average of over 2point performance gains on the STSb dataset.
Note that the Origin model can not be readily adapted to the partially supervised or supervised setting because it is hard to finetune the Origin model where the context set C needs to be constructed first.Hence, we finetune the Surrogate model as a compensation for the accuracy loss brought by the replacement of Origin with Surrogate.As we can see from Table 2 and Table 3, finetuning Surrogate on NLI datasets and STSb is an effective remedy for the performance loss.
Experiment: Argument Facet Similarity
We evaluate the proposed model on the Argument Facet Similarity (AFS) dataset (Misra et al., 2016).This dataset contains 6,000 manually annotated argument pairs collected from human conversations on three topics: gun control, gay marriage and death penalty.Each argument pair is labeled on a scale between 0 and 5 with a step of 1. Different from the sentence pairs in STS datasets, the similarity of an argument pair in AFS is measured not only in the claim, but also in the way of reasoning, which makes AFS a more difficult dataset Table 2: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks under the unsupervised setting.We use *-NLI to denote the model additionally trained on NLI datasets.♯ indicates that results are reproduced by ourselves; § indicates results are taken from Reimers and Gurevych (2019); Surrogate are results for our proposed method.
compared to STS datasets.We report the Pearson correlation r and Spearman's rank correlation ρ to compare all models.
Unsupervised Evaluation The results are shown in Table 4, from which we can see for both the unsupervised settings, the proposed models Origin and Surrogate outperform baseline models by a large margin, with over 10 points for the unsupervised setting and over 4 points for the supervised setting.
Supervised Evaluation
We follow Reimers and Gurevych (2019) to use the 10fold cross-validation for supervised learning.Results are shown in Table 4, from which we can see for both the supervised settings, the proposed models Origin and Surrogate outperform baseline models by a large margin, with over 10 points for the unsupervised setting and over 4 points for the supervised setting.
Experiment: Wikipedia Sections Distinction
Ein Dor et al. ( 2018) constructed a large set of weakly labeled sentence triplets from Wikipedia for evaluating sentence embedding methods, each of which is composed of a pivot sentence, one sentence from the same section and one from another section.Test set contains 222K triplets.The construction of this dataset is based on the idea that a sentence is thematically closer to sentences within its section than to sentences from other sections.
We use accuracy as the evaluation metric for both unsupervised and supervised experiments: an example is treated as correctly classified if the positive example is closer to the anchor than the negative example.Results are shown in Table 5.For the supervised setting, the proposed model significantly outperforms SBERT, with a nearly 3-point gain in accuracy for both base and large models.
Ablation Studies
We perform comprehensive ablation studies on the STSb dataset with no additional training on NLI datasets to better understand the behavior of the proposed framework.Studies are performed on both the original model setup (denoted by Origin) and the surrogate model setup (denoted by Surrogate).We adopt the unsupervised setting for comparison.
Size of Training Data for Origin
We would like to understand how the size of data for training Origin affects downstream performances.We vary the training size between [10M, 100M, 1B, 10B, 100B] and present the results in proves as we increase the size of training data when its size is below 1B.With more training data, e.g.1B and 10B, the performance is getting close to the best result achieved with the largest training data.
Size of C s
Changing the size of C s will have an influence on downstream performances.Table 7 shows the results.The overall trend is clear: a larger C leads to better performances.When the size is 20 or 100, the results are substantially worse than the result when the size is 500.Increasing the size from 500 to 1000 only brings marginal performance gains.We thus use 500 for a trade-off between performance and speed.
Number of Pairs to Train Surrogate
Next, we would like to explore the effect of the number of sentence pairs to train Surrogate.The results are shown in achieves an acceptable result of 74.02, which indicates that the collected automatically labeled sentence pairs are of high quality.
How to Construct C
We explore the effect of the way we construct C. We compare three different strategies: (1) the proposed two-step strategy as detailed in Section 3.3; (2) randomly selection; and (3) the proposed twostep strategy but without the diversity promotion constraint that allows each text chunk to contribute at most one context.For all strategies, we fix the size of C to 500.
The results for these strategies are respectively 78.47, 34.45 and 76.32.The random selection strategy significantly underperforms the other two.The explanation is as follows: given the huge se- mantic space for sentences, randomly selected contexts are very likely to be semantic irrelevant to both s 1 and s 2 and can hardly reflect the contextual semantics the sentence resides in.The similarity computed using context scores based on completely irrelevant contexts is thus extremely noisy, leading to inferior performances.Removing the diversity promotion constraint (the third strategy), the Spearman correlation reduces by over 2 points.The explanation is straightforward: without the diversity constraint, very similar contexts will be included in C, making the dimensions in the semantic vector redundant; with more diverse contexts, the sentence similarity can be measured more comprehensively and the result can be more accurate.
Modules in the Scoring Function
We next turn to explore the effect of each term in the scoring function Eq.(3).Table 9 shows the results.We can observe that removing each of these terms leads to performance drops to different degrees.Removing discriminative results in the least performance loss, with a reduction of 0.5; removing left-context and right-context respectively results in a performance loss of 1.11 and 1.46; and removing both left-context and right-context has the largest negative impact on the final results, with a performance loss of 1.97.These observations verify the importance of different terms in the scoring function, especially the context prediction terms.
Model Structures
To train the surrogate model, we originally use the Siamese network structure where two sentences are separately feed into the same model.It would be interesting to see the effect of feeding two sentences together into the model, i.e., {[CLS], s 1 , [SEP], s 2 } and then using the special token [CLS] for computing the similarity, which is the strategy that BERT uses for sentence-pair classification.Here, we call it the BERT-style model for comparison with the Siamese model.
By training the BERT-style model using the same harvested sentence pairs as the Siamese model with the L 2 regression loss, we obtain a Spearman's rank correlation of 77.43, slightly better than the result of 77.32 for the Siamese model.This is because interactions between words/phrases in two sentences are modeled more sufficiently in the BERT structure as interactions start at the input layer through self-attentions.For the Siamese structure, the two sentences do not interact until the output cosine layer.
The merit of sufficient interactions from the BERT structure also comes at a cost: we need to rerun the full model for any new sentence pair.This is not the case with the Siamese structure, which allows for fast semantic similarity search by caching sentence representations in advance.In practice, we prefer the Siamese structure since the speedup in semantic similarity search overweighs the slight performance boost brought by the BERT structure.
Case Analysis
We conduct case analysis on STS benchmark (Cer et al., 2017) test set.Examples can be seen in Table 10.Given two sentences of text s 1 and s 2 , the models need to compute how similar s 1 and s 2 are, returning a similarity score between 0 and 5.As can be seen, scores from the proposed surrogate model are more correlated with to the gold compared to the universal sentence encoder and the SBERT model.
Conclusion
In this work, we propose a new framework for measuring sentence similarity based on the fact (Reimers and Gurevych, 2019) and the Universal Sentence Encoder model (Cer et al., 2018), respectively.Scores from the proposed surrogate model are more correlated with to the gold compared to the universal sentence encoder and the SBERT model.
that the probabilities of generating two similar sentences based on the same context should be similar.We propose a pipelined system by first harvesting massive amounts of sentence pairs along with their similarity scores, and then training a surrogate model using the automatically labeled sentence pairs for the purpose of faster inference.Extensive experiments demonstrate the effectiveness of the proposed framework against existing sentence embedding based methods.
We directly evaluate the trained model on the test set without finetuning.Results are shown in Table5.For the unsupervised
Table 3 :
Reimers and Gurevych (2019)the STSb dataset under the supervised setting.We use *-NLI to denote the model additionally trained on NLI datasets.♯indicatesthatresults are reproduced by ourselves; § indicates results are taken fromReimers and Gurevych (2019); Surrogate are results for our proposed method.setting, the large model Surrogate large outperforms the base model Surrogate base by 2.1 points.
Table 6 .
The model performance drastically im-
Table 4 :
Reimers and Gurevych (2019)ion r and Spearman's rank correlation ρ on the Argument Facet Similarity (AFS) dataset.♯indicates that results are reproduced by ourselves; § indicates results are taken fromReimers and Gurevych (2019); Surrogate are results for our proposed method.
Table 8 .
As expected, more training data leads to better performances.With only 100K training pairs, the Surrogate model still
Table 5 :
Reimers and Gurevych (2019)kipedia sections distinction task.♯indicates that results are reproduced by ourselves; § indicates results are taken fromReimers and Gurevych (2019); Surrogate are results for our proposed method.
Table 6 :
The effect of size of training data for Origin.
Table 7 :
The effect of size of C.
Table 8 :
The effect of training data size for Surrogate.
Table 9 :
The effect of each term in the scoring function Eq.(3).discriminative stands for log p(y = 1|s i , c <i , c >i ), left-context stands for 1 |c<i| log p(c <i |s i , c >i ) and right-context stands for 1 |c>i| log p(c >i |c <i , s i ).both contexts means we remove both left context and right context.
Table 10 :
We use gold, surrogate, sbert and universal to denote scores obtained from the gold label,the proposed Surrogate model, the SBERT model | 8,295 | 2021-05-17T00:00:00.000 | [
"Computer Science"
] |
Iterative Methods for Computing the Resolvent of the Sum of a Maximal Monotone Operator and Composite Operator with Applications
Total variation image denoising models have received considerable attention in the last two decades. To solve constrained total variation image denoising problems, we utilize the computation of a resolvent operator, which consists of a maximal monotone operator and a composite operator.More precisely, the composite operator consists of amaximalmonotone operator and a bounded linear operator. Based on recentwork, in this paperwe propose a fixed-point approach for computing this resolvent operator.Under mild conditions on the iterative parameters, we prove strong convergence of the iterative sequence, which is based on the classical Krasnoselskii–Mann algorithm in general Hilbert spaces. As a direct application, we obtain an effective iterative algorithm for solving the proximity operator of the sum of two convex functions, one of which is the composition of a convex function with a linear transformation. Numerical experiments on image denoising are presented to illustrate the efficiency and effectiveness of the proposed iterative algorithm. In particular, we report the numerical results for the proposed algorithm with different step sizes and relaxation parameters.
Introduction
In the last two decades, the total variation (TV) image denoising model proposed by Rudin, Osher, and Fatemi [1] has received considerable attention. Often referred to as the ROF model, this takes the form where ∈ is the observed noisy image, ‖ ‖ is the total variation, and > 0 is the regularization parameter, which balances the data fidelity term and total variation regularization. In the following, we denote by ∈ the vectorized image ∈ × , where = . Because TV regularization has the advantage of maintaining image edges when removing noise, it has been extended to many other important image processing problems, including image deblurring [2][3][4], image inpainting [5], image superresolution [6], image segmentation [7], and image reconstruction [8,9].
Many efficient iterative algorithms have been proposed to solve the ROF model (1). These include the Chambolle gradient projection algorithm and its variants [10][11][12], the primal-dual hybrid gradient algorithm [13][14][15], and the split Bregman algorithm [16,17]. Isotropic total variation (ITV) and anisotropic total variation (ATV) are the most widely employed methods in the literature. It is worth mentioning that TV includes ITV and ATV, which can both be viewed as compositions of a convex function with a linear transformation . That is, ‖ ‖ = ( ). For specific examples, see [18,19]. Based on this, Micchelli et al. [20] extended the ROF model (1) to the general convex optimization problem 2 Mathematical Problems in Engineering where ∈ , : → (−∞, +∞] is a proper lower semicontinuous convex function and : → is a linear transformation. The convex optimization problem (2) is equivalent to computing the proximity operator of the function ∘ . Recall that the proximity operator of a proper lower semicontinuous convex function : → (−∞, +∞] is defined as Thus, Micchelli et al. [20] proved that the above minimization problem (2) can be solved using a fixed-point equation. The advantage of the fixed-point framework is that it provides a convenient analysis of the convergence of the proposed algorithm and enables the development of efficient numerical algorithms via various fixed-point iterations.
Because the pixel values of grayscale images are generally distributed in [0,255] or [0, 1], it is natural to incorporate this prior information into the ROF model (1). Thus, we obtain the following constrained ROF model: where is a nonempty closed and convex set, which could be chosen as the pixel range of the images. To solve the constrained ROF model (4), it is sufficient to consider the following constrained convex optimization problem: In general, the constrained convex optimization problem (5) can always be reformulated as an unconstrained convex optimization problem. More precisely, where denotes the indicator function of , which is defined by ( ) = 0 if ∈ and ( ) = +∞ otherwise. As the fixed-point algorithm proposed by Micchelli et al. [20] cannot be applied to solve the problem in (6), this motivated us to develop an efficient iterative algorithm for this purpose. Hence, we apply this algorithm to the constrained total variation denoising model (4).
As the indicator function belongs to the class of proper lower semicontinuous convex functions, we are motivated to solve the following general convex optimization problem in general Hilbert spaces: where is a proper lower semicontinuous convex function. To achieve this goal, let us recall the definition of the resolvent operator. Let be a real Hilbert space and let : → 2 be a maximal monotone operator. The resolvent of is the single-valued operator = ( + ) −1 . Next, let : → (−∞, +∞] be a proper lower semicontinuous convex function and let denote the subdifferential of , and let = . Thus, the resolvent of coincides with the proximity operator as follows: Under certain qualification conditions, the problem considered in (7) can be solved via the resolvent operator * ∘ ∘ + . Overall, this leads to the problem of computing the resolvent operator of + * . More precisely, let ∈ and where : → 2 and : → 2 are two maximal monotone operators and : → is a bounded linear operator from the Hilbert space to the Hilbert space . Following this, let = and = . Hence, the problem (9) reduces to the convex optimization problem (7), because 1.1. Existing Work. Next, let us briefly review some existing work concerning the computation of resolvent operators. Bauschke and Combettes [21] extended the Dykstra algorithm [22] for computing projections onto the intersection of two closed convex sets to compute the resolvent of the sum of two maximal monotone operators. Hence, they obtained an algorithm for finding the proximity operator of the sum of two proper lower semicontinuous convex functions. Combettes [23] proposed two inexact parallel splitting algorithms for computing the resolvent of a weighted sum of maximal monotone operators. The key idea was to reformulate the weighted sum of maximal monotone operators as a sum of two maximal monotone operators in a product space. The two iterative algorithms were based on extensions of the Douglas-Rachford splitting and Dykstra-like methods, respectively. Furthermore, Combettes [23] applied these algorithms when computing the proximity operator of a weighted sum of proper lower semicontinuous convex functions. In more recent work, Artacho and Campoy [24] generalized the averaged alternating modified reflection algorithm [25] to compute the resolvent of the sum of two maximal monotone operators.
In contrast, Moudafi [26] proposed a fixed-point algorithm to compute the resolvent of operator * , where : → is a bounded linear operator with the adjoint * , : → 2 is a maximal monotone operator, and and are two Hilbert spaces. When = , this algorithm utilizes the fixed-point approach to computing the proximity operator ∘ proposed by Micchelli et al. [20]. It is clear that the resolvent of the operator * is a special case of (9) by allowing = 0. In addition, Combettes et al. [27] proposed a dual forward-backward splitting algorithm to solve the convex optimization problem in (7) (see also [28]). The main idea in this case was to first derive the dual problem of (7) and then apply the forward-backward splitting algorithm to solve this problem. Finally, the convergence of the primal iterative sequence was proved using the connection between the primal optimal solution and the dual optimal solution. However, the authors did not apply the obtained iterative algorithm to solve the constrained ROF model (4). Beck and Teboulle [3] solved the constrained ROF model (4) as a subproblem of image deblurring. Chan et al. [29] applied the alternating direction method of multipliers to solve the constrained total variation image deblurring problem. Their numerical results confirmed that the quality of the restored image could be improved by incorporating prior information of the images as a constraint. However, neither study considered the general resolvent operator of (9).
In this study, we focus on computing the resolvent of the operator + * (9), where : → 2 and : → 2 are two maximal monotone operators and : → is a bounded linear operator. We assume that the operator + * is maximally monotone. This is true, for example, if 0 ∈ ( (dom ) − dom ) [30], where ( ) represents a relative interior of the set . If = 0, then the problem (9) becomes that of computing the resolvent operator of * . Inspired by the work of Moudafi [26] and Combettes et al. [27], we propose a fixedpoint approach to computing resolvent operators (9). First, we show that the resolvent operator (9) can be characterized using a fixed-point equation. Subsequently, we propose a fixed-point algorithm and prove the strong convergence of its iterative sequence. Next, we employ the proposed iterative algorithm to solve the convex optimization problem (7) arising in the field of image denoising. Finally, we conduct numerical experiments on image denoising to validate the effectiveness of the proposed algorithm. In particular, we show how the performance of the algorithm is influenced by the selection of the step size and relaxation parameters.
The remainder of this paper is organized as follows. In Section 2, we introduce some notation and present useful definitions and lemmas. In Section 3, we present the main fixed-point algorithm and prove its strong convergence. In Section 4, we employ the obtained iterative algorithm to solve a particular convex optimization problem, which is related to the calculation of the resolvent operator (9). In Section 5, we present some numerical experiments on image denoising to illustrate the performance of our proposed algorithm. Finally, we provide some conclusions and ideas for future work in Section 6.
Preliminaries
In this section, we review some basic definitions and lemmas in monotone operator theory and convex analysis, which will be used throughout this paper. First, let be a real Hilbert space with inner product ⟨⋅, ⋅⟩ and the associated norm ‖ ⋅ ‖. Let denote the identity operator on . We use ⇀ to indicate that the sequence { } converges weakly to and → to indicate that the sequence { } converges strongly to . Let ( , ) denote all bounded linear operators from the Hilbert space to the Hilbert space . Let ∈ ( , ). Then, the adjoint of is the unique operator * ∈ ( , ) such that ⟨ , ⟩ = ⟨ , * ⟩.
Definition 1 (see [28], (maximal monotone operator)). Let : → 2 be a set-valued operator. is said to be monotone if Moreover, is maximally monotone if its graph is not strictly contained in the graph of any other monotone operator.
It is easy to check that firm nonexpansiveness implies nonexpansiveness. If is firmly nonexpansive, then is 1−cocoercive.
Thus, if is − averaged then is also nonexpansive.
The following lemma provides some useful characterizations between an operator and − .
Mathematical Problems in Engineering
Lemma 6 (see [28]). Let be a nonempty subset of and let : → . Then The following lemma shows that a composition of two averaged operators is also an average. This result first appeared in the work of Ogura and Yamada [32]. Combettes and Yamada [33] subsequently confirmed it with a different proof.
Lemma 7 (see [32]). Let be a nonempty subset of . Furthermore, let 1 : → be 1 − averaged and let 2 : We also make full use of the following lemma.
Lemma 8 (see [28]). Let ∈ and ∈ . Let ∈ and denote the set of real numbers. Then We end this section by introducing the Krasnoselskii-Mann algorithm. Theorem 9 provides a fundamental tool for studying the convergence of many operator splitting methods.
Computing the Resolvent Operator (9)
Before presenting our main results, we first introduce some notation. For a fixed ∈ , let > 0 define operators : → and : → as follows: and In addition, the following lemma provides a fixed-point characterization of the resolvent operator (9). Lemma 10. Let : → 2 and : → 2 be two maximal monotone operators. Furthermore, let ∈ ( , ) and ∈ . Then, for a given > 0, if and only if is a fixed point of .
Next, we prove the following lemma, which characterizes an important property of the operator .
Lemma 11. Let
: → 2 be a maximal monotone operator, and let ∈ ( , ). For a given ∈ and > 0, define an operator : → : → − ( − * ). Then Proof. (i) Let 1 , 2 ∈ . Then, we have that The first inequality follows from the fact that the resolvent operator is firmly nonexpansive, and the second is trivial.
Lemma 11 shows that, for any On the other hand, − (1/ ) is firmly nonexpansive and is also 1/2−averaged. According to Now, we are ready to present our main results.
Remark 13. We observe that (4 − ‖ ‖ 2 )/2 > 1 for any ∈ (0, 2/‖ ‖ 2 ). Taking = 1 in (20), we obtain a simple iterative algorithm to compute the resolvent operator problem (9). More precisely, for any 0 ∈ the iterative sequences { } and { } are generated as follows: where ∈ (0,2/‖ ‖ 2 ). The iterative algorithm (29) is equivalent to the Picard iteration scheme, which can be easily applied in practice. In fact, the iterative scheme (29) can be rewritten as With the help of the relation + −1 −1 ∘ −1 = , we obtain from (30) that Let = . Then, the iterative scheme (31) is equivalent to Remark 14. We observe that the resolvent operator of + * in (9) is equivalent to the following monotone inclusion problem: find ∈ , where , , , and are the same as in (9). This monotone inclusion (33) can be solved using existing methods for more general monotone inclusion problems (such as [34][35][36][37]). However, these iterative algorithms are not identical to our proposed algorithm. To illustrate this point, we will explain the proposed algorithm from the perspective of duality. In fact, the dual monotone inclusion of (33) is find ∈ , By Lemma 11, we know that − ( − * ) is 1/‖ ‖ 2cocoercive. Thus, is a solution of the dual monotone inclusion (34), and = ( − * ) is a solution of the primal monotone inclusion (33). Next, we apply the relaxed forwardbackward splitting algorithm (see, e.g., Theorem 26.14 of [28]) to solve the dual monotone inclusion (34). More precisely, let 0 ∈ and set which is equivalent to the iterative algorithm introduced in Theorem 12. For convenience, we summarize the differences between the proposed algorithm and existing algorithms in Table 1.
Let = 0 in Theorem 12. Then, we obtain the following corollary.
Corollary 15 extends some of the results from [26] in two aspects. (i) The range of { } is expanded and (ii) we obtain strong convergence of the iterative sequence { } (which is not studied in [26]).
Remark 18. The obtained iterative algorithm (38) for computing the resolvent operator + ( ) is different from those proposed by Bauschke and Combettes [21,23]. In fact, similar to (32), the iterative scheme (38) can be rewritten as where = . Bauschke and Combettes [21] proposed a Dykstra-like algorithm to compute the resolvent operator + ( ). Let 0 = , 0 = 0, and 0 = 0. Then, the Dykstra-like algorithm is defined by = ( + ) , In addition, Bauschke and Combettes proved that { } and { } generated by (40) converge strongly to + ( ). Following some simple calculation, the Dykstra-like algorithm (40) can be rewritten as follows: On the other hand, Combettes [23] proposed an inexact Douglas-Rachford splitting algorithm and an inexact Dykstra-like algorithm for computing the resolvent of the sum of a finite family of maximal monotone operators. For the resolvent of the sum of two maximal monotone operators, the inexact Dykstra-like algorithm without errors coincided with the iterative algorithm (40). For simplicity, we have presented the inexact Douglas-Rachford splitting algorithm without errors for computing the resolvent of the sum of two maximal monotone operators. Let 0 ∈ , and define the iterative sequences as follows: where > 0, and { } ⊆ (0, 2) such that ∑ +∞ =0 (2 − ) = +∞ and inf > 0. Next, the sequences { } and { } converge strongly to + ( ). In fact, this iterative algorithm (42) is equivalent to applying the Douglas-Rachford splitting algorithm to the monotone inclusion of Comparing (39), (40), and (42), we find that the obtained iterative sequences generated by all algorithms converge strongly to the resolvent operator + ( ). The iterative algorithms (39) and (42) do not have any requirements for the initial value, whereas (40) requires the selection of a fixed initial value. Unlike (39) and (42), the Dykstra-like algorithm (40) does not require tuning of the parameters.
Application to Convex Optimization Problem
In this section, we apply the obtained results to solve a particular convex optimization problem that has been studied in the literature. For convenience, we introduce some notation. be a nonzero bounded linear operator such that ∈ core ( (dom ) − dom ), where the core of a subset ⊆ is defined by core = { ∈ | cone ( − ) = } (see, e.g., Definition 6.9 of [28]). Consider the following convex optimization problem: The minimization problem (44) is equivalent to the proximity operator of ( ) + ( − ). As a direct application of the resolvent operator (9), we obtain the following convergence theorem from Theorem 12.
Remark 22.
(1) Comparing (47) with (48), the range of in (47) is clearly larger than that of the iterative sequence (48) introduced by Combettes et al. [27] when the variable step size is constant; i.e., ≡ . In addition, Theorem 20 also recovers the main results of Proposition 28.16 in [28]. However, we require that satisfies ∑ ∞
(2) Although our proposed iterative sequences (45) are error-free, it is not difficult to add error sequences in corresponding locations, as in (48). Because the proof is almost identical to that of Theorem 12, we have omitted it here.
Numerical Experiments
In this section, we present numerical experiments to verify the effectiveness of the proposed iterative algorithms for We run the program with MATLAB R2014a. We select "Barbara," "Lena," "Boat," and "Goldhill" as the test images (see Figure 1). Gaussian noise of mean 0 and standard deviation 30 is added to the clear images.
We use the signal-to-noise (SNR) and peak-signal-tonoise (PSNR) to evaluate the quality of the restored images. These are defined by and = 20 log 10 255 √ −̃2 , where is the original image,̃is the restored image, and and are the row and column size of the image , respectively. The iterative algorithm is stopped when the following criterion is satisfied: where is a given small constant. We tuned the regularization parameter and set it to 15 for optimal denoised image quality.
We aim mainly to solve the constrained total variation (TV) image denoising problem (4). In particular, we choose the anisotropic total variation as the regularization term during testing. By using the indicator function, the constrained (TV) denoising problem (4) can be reformulated as the following unconstrained optimization problem: where is the indicator function. It is clear that the optimization problem (52) is a special case of (44). In fact, if ( ) = ( ), ( ) = ‖ ‖ , and = 0, then the proposed iteration scheme (45) can be employed to solve the constrained TV image denoising problem (4). It is well known that ‖ ‖ 2 ≈ 8, where is the usual gradient operator of the total variation (see [10,20]). Let = . Then, (4) (also (52)) is reduced to the wellknown ROF model (1). We select as both the nonnegative and the bounded sets. Moreover, the nonnegative set is defined as { ∈ | ≥ 0}, and the bounded set as { ∈ | 0 ≤ ≤ 255}. The corresponding minimization problems (52) are referred to as the nonnegative ROF (N-ROF) model and bounded ROF (B-ROF) model, respectively. It is worth mentioning that the proximity operator of the indicator function is the projection operator, which has closed-form solutions based on our choices. Therefore, in (45) the proximity operator of ( ) = ( ) is the orthogonal projection onto the closed convex set . That is, ( ) = ( ).
Numerical Results and Discussion.
First, we describe the impacts of the parameters for the iterative step size and relaxation variable on the proposed iterative algorithm (45). According to Theorem 20, we can choose ∈ (0, 1/4) and ∈ (0, 2 − 4 ). Figures 2, 3, and 4 demonstrate the performance of the proposed iterative algorithm for solving (52) with different choices of and . The test image was "Barbara," and the stopping criterion was set as 10 −2 .
As shown in Figures 2-4, when the iterative step size is fixed, a larger relaxation parameter leads to a faster convergence of the iterative algorithm. Table 2 presents the corresponding numerical results in terms of the SNR, PSNR, and number of iterations ( ). Because the prior pixel information of the image is introduced as a constraint, the performance of the constrained ROF model is superior to that of the unconstrained model. Numerical results confirm the advantages of the constrained ROF model. We can see from Table 2 that the iterative step size has a significant impact on the performance of the algorithm. Furthermore, the experimental results demonstrate that a larger always leads to a faster convergence of the iterative algorithm. Thus, we choose = 1/4 and = 1 in the following comparison.
Next, we focus on investigating the performances of the constrained and unconstrained ROF models for the test images from Figure 1. The numerical results are presented in Table 3. We notice that the SNR and PSNR values slowly decrease with an increasing number of iterations. Because more iterations do not improve the quality of the image, we should stop the iterative algorithm in the early stages. Figures 5-8 present denoised images for = 10 −2 in Table 3. From the visual point of view, the images restored by the constrained model are closer to the original images than those restored by the unconstrained model. This further confirms the benefits of introducing constraints in the image recovery model.
Conclusions
The total variation can be viewed as a composition of a convex function with a linear transformation. Thus, Micchelli et al. [20] introduced a fixed-point algorithm based on proximity operators to produce a total variation model for image denoising (1). Inspired by the work of Moudafi [26], we studied the calculation of the resolvent of the sum of a maximal monotone operator and a composite operator (9) to produce a constrained total variation model (4). Subsequently, we proposed a fixed-point algorithm for this resolvent operator. Based on the fixed-point theory of nonexpansive mappings, we proved the strong convergence of the obtained iterative sequence. The advantage of the fixedpoint approach is that it provides the potential to develop additional fast iterative algorithms. Numerical simulations on image denoising illustrated the performance of the proposed algorithm. In particular, we found that the step size had a significant impact on the convergence speed of the algorithm. In general, when the iterative step size was fixed, larger relaxation parameters resulted in a faster iterative algorithm convergence. Numerical results also confirmed that the constrained ROF model achieved a superior performance compared with the unconstrained ROF model.
Finally, we wish to note that the constrained TV model (4) can also be derived using other iterative algorithms, such as the primal-dual Chambolle-Pock algorithm [15], the alternating direction method of multipliers [29,38,39], and the preconditioned primal-dual algorithm [40,41]. We have not presented the corresponding numerical results here. Thus, we will further examine the convergence rate of our proposed iterative algorithm and include these comparative results in future work.
Data Availability
The image data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest. | 5,670 | 2019-05-05T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Single Particle Detection System for Strong-Field QED Experiments
Measuring signatures of strong-field quantum electrodynamics (SF-QED) processes in an intense laser field is an experimental challenge: it requires detectors to be highly sensitive to single electrons and positrons in the presence of the typically very strong x-ray and $\gamma$-photon background levels. In this paper, we describe a particle detector capable of diagnosing single leptons from SF-QED interactions and discuss the background level simulations for the upcoming Experiment-320 at FACET-II (SLAC National Accelerator Laboratory). The single particle detection system described here combines pixelated scintillation LYSO screens and a Cherenkov calorimeter. We detail the performance of the system using simulations and a calibration of the Cherenkov detector at the ELBE accelerator. Single 3 GeV leptons are expected to produce approximately 537 detectable photons in a single calorimeter channel. This signal is compared to Monte-Carlo simulations of the experiment. A signal-to-noise ratio of 18 in a single Cherenkov calorimeter detector is expected and a spectral resolution of 2% is achieved using the pixelated LYSO screens.
Introduction
The interaction between light and matter is described by the theoretical framework of quantum electrodynamics (QED). In the perturbative limit, it is considered to be the most precise and well-tested theory of modern physics [1]. As the electric field strength approaches the so-called Schwinger critical field E s ≈ 1.3 × 10 18 V/m, novel strongfield quantum effects become important. Consequently, the description of electron-laser interactions must be described by dressed states as a 0 1 and by high-order processes with radiative corrections scaling with αχ 2/3 e included in the theory [2], which is referred to as strong-field QED (SF-QED), where α is the fine-structure constant.
The normalized vector potential a 0 is a Lorentz invariant given, in Heaviside-Lorentz natural units (c = = 0 = 1), by a 0 = eE/(m e ω 0 ). Here, e is the absolute electron charge, E the peak value of the laser electric field and m e is the rest mass of the electron [2]. Another important parameter in the theory of SF-QED is the quantum parameter χ e . It is defined, also in natural units, as χ e = a 0 γ e (ω 0 /m e )(1 − cos θ), where γ e is the colliding electron beam Lorentz-factor, ω 0 is the laser frequency, and θ is the collision angle between the electron with the laser beam (for head-on collision θ = 180 • ). A parameter χ e 1 indicates that high-energy photon emission by the electron is likely, and, therefore, the particle undergoes a significant recoil on its motion.
Fundamental processes involving photon emission and photon decay are modified under strong-fields and mechanisms such as multi-photon Compton scattering, radiation reaction and nonlinear Breit-Wheeler (BW) are examples of predictions resulting from SF-QED. However, the experimental investigation of this regime is very limited to date [3]. Few experiments utilize the high-intensity fields in collisions of ultra-relativistic ions [4] or the interaction of ultra-relativistic particles in aligned crystals [5]. Recently, the investigation of SF-QED has been proposed using the collision of tightly focused ultra-relativistic electron beams [6].
With the advent of ultra-intense laser pulses [7,8], strong electric fields can be achieved in the laboratory by strongly focusing ultra-intense lasers. However, the Schwinger critical field is still far beyond the reach of present laser technology by around 3-4 orders of magnitude [9]. A solution for this challenge is to combine the highest achievable electric field from laser pulses with ultra-relativistic electron beams or γphotons, allowing the Schwinger field to be achieved in the rest frame of the electrons. In addition to pair generation, the vacuum responds nonlinearly and processes such as light-light scattering [10] and vacuum birefringence [11] can occur and be detected [12]. A review of strong-field QED processes is found in Ref. [3].
The first experiments in strong-field QED using intense laser fields and ultrarelativistic electron beams were reported in the Experiment-144 at SLAC in the 1990s [13][14][15]. In this experiment, electron bunches were accelerated by a linear accelerator up to energies of 49.1 GeV and interacted with a laser field with a root mean square (RMS) normalized vector potential of a RM S 0 = 0.4 and χ RM S e ≈ 0.3 producing about 100 positrons in total in the perturbative multi-photon regime (χ e < 1 and a 0 < 1) [13][14][15].
Experiments have been proposed for investigating SF-QED effects based on the interaction of high energy electron or photons beams with high-intensity lasers or nuclear fields in crystals [5,16]. Moreover, experiments investigating pair production, multiphoton Compton scattering and radiation reaction using all-optical setups have been proposed and realised at the Astra-Gemini laser system at the Rutherford Appleton Laboratory (RAL) [17][18][19]. In such experiments, electron beams generated by laserwakefield acceleration (LWFA) up to 2 GeV interacted with an intense laser pulse with a 0 = 10 [18,19] at the nonperturbative moderate quantum regime (χ e < 1 and a 0 > 1), and signatures of radiation reaction process were observed. Upcoming projects aim to investigate SF-QED effects in the nonperturbative full quantum regime, i.e. χ e > 1 and a 0 > 1. Interactions can be between two beams of photons [10] or between electron beams and intense laser pulses as proposed in the Experiment-320 (E-320) at FACET-II [20][21][22] and the LUXE experiment at DESY [23][24][25]. In both experiments, a small number of electron-positron pairs, which are generated by SF-QED processes must be measured with a sensitive detection system. The challenge in the detector development is that the detection system must be able to detect single particles, while also being insensitive to the photon background which is inherent to the experiments with an ultra-relativistic electron beam and a beam dump close to the interaction region.
Here we describe the detection system which is designed for SF-QED experiments such as the E-320 at FACET. We report a detection system which is able to diagnose single particles with MeV to GeV energies. A test and calibration of the detector system is presented. Monte-Carlo simulations of the background noise level and the expected signal-to-noise ratio for the E-320 show that single positron events can be detected with an expected signal-to-noise ratio of 18.
Single Particle Detection System
The detection system comprises two pixelated LYSO:Ce crystal screens to provide high spatial resolution coupled to a segmented Cherenkov calorimeter both placed about 3.6 m after a dipole magnet. The screens are placed in front of a Cherenkov calorimeter, and thus provide the ability to discriminate against low energy background events. Both detectors have fast (nanosecond-scale) response which allows timing-based background suppression as well. Figure 2a) presents the setup of the single particle detection system.
The two pixelated LYSO screens have dimensions of 4 cm × 20 cm × 4 mm with crystal pixels sizes of 2 mm × 2 mm × 4 mm which provide tracking information on the single particles propagating towards the detector. The LYSO:Ce crystal has a scintillation yield of 25 photons/keV emitted at central wavelength of 410 nm [26]. Besides the high light yield, the crystal has a decay time of about 40 ns which allows the background noise from secondary sources of radiation located far from the detectors to be suppressed by imaging the screens using a setup combination of a condenser lens, a gated image intensifier and single-photon sensitive camera Hamamatsu ORCA-Flash4 [27,28], see calibration of the LYSO crystals in Appendix A. A PCX condenser lens (diameter of 25 cm and focal length of 40 cm) is placed about 800 mm away from the screens and the image intensifier placed about 270 mm after the lens giving enough magnification to image the entire active region of the LYSO screens at the image intensifier. The Hamamatsu ORCA camera is placed about 250 mm away from the image intensifier with a f/1.4 macro lens with focal length of 28 mm and 40 mm diameter to maximise the photon collection.
The Cherenkov calorimeter, which is placed behind the pixelated LYSO screens, comprises up to 7 detection channels of 50 mm × 40 mm × 400 mm Schott F2 leadglass wrapped in enhanced specular reflector (ESR) foils to prevent the optical photon cross-talk between the detection channels. The choice of F2 lead-glass for the Cherenkov calorimeter is due to its linear response for energy measurements between 1 -4 GeV and absence of scintillation as discussed in [29,30]. The F2 lead-glass has a radiation length of X 0 = 3.14 cm and a Molière radius of R M = 3.4 cm. Hence, the glass blocks of the Cherenkov calorimeter were designed to contain the particle shower produced by a single 3 GeV positron incident on it as shown in Figure 3. The calorimeter array consists of up to 7 blocks with the signal positrons designed to be incident on the the 3 central channels, which corresponds to a total active area of 3 · (40 mm · 50 mm) = 6000 mm 2 , and positioned to detect positrons initially in the 2.5 -5.6 GeV range for a nominal 87.2 MeV transverse kick, equivalent to an integrated field strength of BL = 0.3 Tm, of the dipole magnet. However, the kick settings of the dipole magnet can be selected for detection of particles at alternative energy ranges. The side channels provide on shot background reference to allow better discrimination of signal events. Detection of the Cherenkov photons is achieved with photomultiplier tubes (PMTs) placed at the rear of each detection channel [31]. The PMTs used on the detector have a rise time of about 3 ns therefore also allowing background noise rejection by temporal gating. A detailed view of the Cherenkov detector is shown in Figure 2b) where the reference background channels are colored in blue, the main three central detection channels are in red, and the dispersion direction of the positrons within the 2.5 -5.6 GeV range is represented by the yellow area. The positron spectrum from the SF-QED interaction is discussed later in Section 3.
Calibration of the Cherenkov Calorimeter
The Cherenkov calorimeter was calibrated at the ELBE radiation source at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) using the dark current of the accelerator, which provides single electrons of 27 MeV energy with a weighted average number of (0.156 ± 0.005) electrons/RF cycle. The calibration of the ELBE accelerator dark current is presented in detail in Appendix B.
The dark current measurement was performed on the central calorimeter channel with the PMT gain set to 4 × 10 6 , corresponding to a voltage across the cathode and anode of the PMT of 10 3 V. The signals of the PMTs were recorded using PicoScopes [32]. A total of 10 4 events were acquired, and the number of Cherenkov photons detected by the PMTs as well as the signal decay time of each event are shown in Figure 4.
From Figure 4a), an average of 4 photons are detected by the photomultiplier tube after a single electron with 27 MeV energy hits the detection channel. From Monte-Carlo simulations using GEANT4 [33][34][35], it is predicted that a single 27 MeV electron produces 1770 photons of which only 17 photons are detected by the PMT. Figure 6a) shows the simulated distribution using GEANT4 of the number of detected photons by the calorimeter channel. The distribution was fitted by a Gaussian curve with mean at N Sim = 17.6 photons and RMS width of σ Sim = 5.5 photons. In the simulated model of the Cherenkov calorimeter detector, the transmittance of the lead-glass and the PMT quantum efficiency were included and the generation of optical photons in the range between 350 -650 nm was simulated, see Figure 5 for the lead-glass transmittance characterised over its 40 cm length and the typical quantum efficiency of the PMT employed. By comparing the simulation obtained for a single 27 MeV hit and the calibration results for an incident particle of 27 MeV, we calculate a photon detection efficiency of η ± σ η = (0.23 ± 0.13), where η = 4/17.6 ≈ 0.23 and σ η is calculated using error propagation method such that Figure 2. a) Proposed design of the single particle detection system (not to scale) for E-320 at FACET-II. The incident single GeV-positron travels through two pixelated LYSO scintillating screens which provides particle tracking and high resolution spectral information before entering the Cherenkov detector at one of its lead-glass detection channels where its energy is fully deposited. b) Detailed view of the Cherenkov detector. The background reference channels, where no signal particle is expected to strike, are shown in blue color. The central detection channel of the detector, which signal leptons are deflected, are illustrated in red color. The Cherenkov photons produced inside the lead-glass channel are detected by photomultiplier tubes (PMTs) at the rear of each channel. The positron direction of dispersion is within the yellow area with limits of 2.5 -5.6 GeV. Based on this calibration and the scaling of Cherenkov photons derived from GEANT4 simulations we calculate the number of detected photons for different energies of incoming single particles, see Figure 6b). Approximately 537 photons are detected for a single 3 GeV particle, providing an easily observed signal for single GeV-particle hits. The spectral resolution of the calorimeter depends on the energy of the incident particle and is limited by statistics of the detected Cherenkov photons. A representative value is approximately 20% with a resolution of better than 10% possible at the highest energy range. Higher spectral resolution and rejection of false positive events is provided by the LYSO tracking screens in combination with the Cherenkov detectors for event rejection as discussed in the following section. detected per channel for a single incident particle of different energies after the detection efficiency η of 23% and its uncertainty were taken into account. The error bars indicate the standard deviation which determines the energy resolution of the calorimeter. A single 3 GeV particle results in 537 detected photons by the calorimeter. Hence, a single GeV-particle incident on the detector is easily detected by the calorimeter.
To guard against any unknown non-linearities, an in-situ calibration of the detection system is planned during the beamtime of the Experiment-E320 using thin foils to produce a few positron-electron pairs with energies in the 2.5 -5.6 GeV range, and a correction on the calibration curve can be applied during the experiment.
Proposed Implementation for the Experiment-320: Expected Performance
In this section, we present the experimental parameters of the Experiment-320 followed by a discussion of the background noise and the signal-to-noise ratio expected on the single particle detection system.
The E-320 uses the FACET-II linear accelerator to generate electron beams up to 13 GeV with charge of 2 nC, a maximum Gaussian transverse profile of σ e = 30 µm and divergence less than 6 µrad [36]. A flat-top laser beam with diameter of 40 mm is focused by an off-axis parabolic mirror (OAP) reaching a peak intensity of 1.3 × 10 20 W/cm 2 which corresponds to an a 0 = 10. The focused laser beam forms a crossing-angle with the electron beam of about 30 • spatially overlapping with only 1% of the electron bunch, and, consequently, a quantum parameter χ e = 1.5 becomes experimentally accessible [20,21]. As a result, E-320 will probe a regime where the interaction with the laser is nonperturbative and electron-positron pair-creation occurs in the tunneling regime [37].
The emission of high-energy photons in the strong laser field widens the initial monoenergetic 13 GeV electron energy to the range of 1-13 GeV after the SF-QED interaction and also increases the electron beam divergence to 25 µrad. The simulated energy spectrum and divergence of the electron beam after the interaction point are shown in Figure 7 (note that the simulation parameters represent the concept design phase of E320 and final experimental parameters will likely differ from those considered here). The emitted high-energy photon beam has a maximum energy limited by the electron bunch energy of 13 GeV before the interaction and most of the photons emitted have energies less than 2 GeV. Figure 8 presents the simulated spectrum and divergence of the photon beam after the electron-laser interaction. Due to the low divergences of the electron and photon beams, neither of them generate substantial background noise at the edge of the OAP or in any other location along the beamline due to their small divergence and short distance to the IP.
Some of the emitted high-energy photons, while still immersed in the laser field, interact with the optical photons of the laser and generate electron-positron pairspredominantly through the nonlinear BW process. The simulation results show that positrons in the range of 1 -9 GeV have a higher probability of being created as presented in Figure 9a), and the particles within the energy range of 2.5 -5.6 GeV are deflected towards the detectors for the selected magnet kick of 87.2 MeV. The created pairs propagate through the FACET-II spectrometer beamline alongside with the remains of the primary 13 GeV electron beam and the high energy photons, up to the dipole magnet where the charged particles are deflected towards the detectors and beam dump. On the other hand, the high-energy photons propagate towards the beam dump without deflection and interaction with any material along its path due to the small divergence. Therefore, no forward background noise contribution on the detectors is expected from the photon beam.
A slice of the FACET beamline with the positioning of devices such as the dipole magnet and the single particle detector is presented in Figure 10.
The positrons are dispersed upwards by the dipole magnet and they propagate through the positron detection chamber (PDC) before exiting through a 5 mm thick aluminum vacuum exit window to reach the LYSO pixelated screens, and, finally, the Cherenkov detector. On the other hand, electrons are dispersed downwards with A beam dump is placed approximately 8.6 m downstream from the Cherenkov detector to stop the main electron beam and the produced high-energy photons. When the high-energy particles interact with the beam dump, substantial radiation in the backward direction that has the potential to reach the detectors and becomes noise is generated. However, the large distance between the detectors and the beam dump corresponds to a delay of 57 ns between a particle signal hit and the backscattered radiation on the detectors which is enough to significantly reduce dump noise at the detectors by time gating the PMTs and LYSO screens. Hence, the remaining background at the LYSO screens and detectors considered in the simulations below originates from the upstream radiation sources and secondary particles sources located very closely to the detectors which arrive within the gating window of the detectors.
Background and Signal-to-Noise Ratio Estimates for the Experiment-320
To efficiently detect single particles with high confidence, the design goal for the Cherenkov detectors was to achieve a signal-to-noise ratio in terms of signal power to background power (SNR P = P S /P BG ) approximately equal or greater than unity and a signal-to-noise ratio in terms of mean to variance (SNR σ = µ/σ 1, where µ and σ are mean value of signal and variance of the background respectively). This is particularly important for data taking at low signal rates, where the number of pairs to be detected per shot is ≈ 1 or less. Under these circumstances the Cherenkov detector allow the rejection of false positive events in the tracking detector and therefore the accumulation of tracking data to record high resolution spectra at low event rates.
Monte-Carlo simulations were performed using the code FLUKA [38,39] for modelling the expected background noise at the positron detectors. In the simulations, only the electrons that actively contribute to the background noise generation were used. Hence, from the theoretical energy spectrum after the electron-laser interaction as shown in Figure 7a), just the 10 7 primary electrons in the energy range between 1 GeV to 12.8 GeV were included, while the unperturbed 13 GeV electrons were presumed to propagate to the dump and therefore be outside the gating window of the detectors. Figure 11 presents the simulated particle fluence at the detection region, corresponding to the layout in Figure 10. The FLUKA simulations include only beamline components within distances where the generated background noise reaches the detectors within the temporal gating window. Consequently, the backscattered noise from the beam dump, which takes longer to reach the detectors, is suppressed on the simulations. Our simulations predict that the majority of background at the detectors arises from the secondary particles generated due to the interaction of low energy scattered electrons with the bottom of the vacuum chamber. Vacuum pipe Figure 11. Expected particle fluence (gamma photons, electrons and positrons) at the FACET-II tunnel where the single particle detection system is installed. A shower of secondary particles from the interaction between the deflected primary electron beam with the chamber and vacuum pipe walls travels directly to the detectors generating background noise which cannot be gated.
The particle spectrum of the background incident on the Cherenkov calorimeter is shown in Figure 12a). As can be seen, the incoming background is mainly composed by photons with energy lower than 25 MeV. The number of Cherenkov photons detected inside a calorimeter channel per incoming particle energy is presented in Figure 12b). Summing up all Cherenkov photons we find a total of N BG = 320 photons per primary electron bunch with a standard variation of σ BG = 12. From Monte-Carlo simulations, a single 3 GeV positron (characteristic of expected positron energies) is expected to produce only N SIG = 537 photons that are detected (see Section 2.1). The expected signal-to-noise ratio (SNR) in terms of detected photons is therefore SNR P = N SIG /N BG = 1.7 i.e. the number of optical photons within the detection channel is approximately tripled when a signal positron is present. The background signal considered in these simulations is produced by the interaction of laser scattered electrons with the vacuum system. It consists of a large number of low energy events generated at some distance from the positron detector and therefore irradiates the Cherenkov detector stack with a smoothly varying flux and low statistical fluctuation.
The number of detected photons by the PMT N det is given by the sum of the signal produced by a single particle N SIG added to the signal produced by the background noise N BG and the expected uncertainty < σ det > 2 = σ 2 SIG + σ 2 BG which accounts for the variance of both signal and background, During the experiment, the background noise N BG is monitored by the calorimeter reference channels at each shot and its value is subtracted from the signal detected N det by the PMTs. Thus, the signal produced by a single particle hit is calculated as N SIG = N det − N BG ± < σ > with uncertainty given by simply propagating the variances such that < σ > 2 = < σ SIG > 2 + 2 < σ BG > 2 . The uncertainty of the signal σ SIG is the main contribution on the expected variance < σ > for a smoothly varying background consisting of many lower energy particles. Using the calibration given in Figure 6, the energy of the particle is evaluated as E = (N SIG /0.13) (1/1.04) and its precision is given by the expected uncertainty < σ >. The more important SNR measure is therefore the alternative definition of signal to variance < σ >. This is predicted to be SNR σ = N SIG /< σ > ≈ 18 for a 3 GeV signal particle, where the number signal photons of N SIG = 537 is obtained from the calorimeter calibration curve for a single 3 GeV particle, and < σ >≈ 29 is calculated using the the simulated background uncertainty of < σ BG >= 12 and signal variance of the number of N SIG given as < σ SIG >= √ N SIG ≈ 23 such that < σ >= (2 · 12 2 + 23 2 ) 0.5 ≈ 29. The high SNR σ >> 1 demonstrates that individual positrons can clearly be separated from the background. 5 10 15 20 25 30 Background particle energy (MeV) Figure 12. Background spectrum at a single Cherenkov calorimeter detection channel at the Experiment-320: a) Background noise particle spectrum, most of the noise have energies below 25 MeV; b) Number of Cherenkov photons produced per energy of incoming background particle, a total of 320 photons are detected at the single Cherenkov calorimeter channel.
The higher spatial resolution of the LYSO crystals enables pair spectra to be measured with a resolution of 60 MeV at 3 GeV for nominal dipole magnet settings. A simulation of a single 3 GeV particle propagating through the LYSO screens without considering the background radiation is shown in Figure 14. The single particle is deflected by the dipole magnet and propagates to the LYSO screen-1 depositing about 5.5 MeV. The shower produces signal in a cluster of pixels on screen-2 with 8.8 MeV deposited. Recording a high resolution spectrum at low count rates of < 1 pair per shot requires integration of many shots and therefore efficient rejection of background events.
To this end, the energy deposited in each LYSO screen by a single particles hit was evaluated using FLUKA as shown in Figure 13. For 5.5 MeV deposited on a single crystal pixel, about 1.4 × 10 5 scintillation photons are produced however only a fraction of them are detected by the camera. The number of detected photons for a GeV-particle passing through the LYSO screens is estimated as 1.4 × 10 5 photons · CE P CX · G int · CE Orca · QE Orca ≥ 546 photons, where CE P CX is the collection efficiency of the PCX condenser lens imaging system calculated as (π · 125 2 )/(800 2 )/(4π) = 6.1 × 10 −3 , CE Orca is the collection efficiency of the camera CE Orca = (π · 20 2 )/(250 2 )/(4π) = 1.6 × 10 −3 , QE Orcas ≈ 0.4 is the quantum efficiency of the ORCA-Flash camera at the scintillation light wavelength of 410 nm [28], and the parameter G int = 10 3 is the gain applied on the dual microchannel plate (MCP) of the image intensifier already taken into account the quantum efficiency of the device. Following the same calculation method, we expect an average of 858 scintillation photons being detected for the LYSO screen-2. The uncertainty on the LYSO measurements is determined by the overall counting efficiency (collection and quantum) which is to be calibrated after FACET-II be commissioned.
To reject the background we require a valid event to follow the calculated particle track to within one pixel and define the threshold to be within 3σ of the expected signal level in each of the three detectors (screen-1, screen-2 and the Cherenkov calorimeter), leading to > 99% of true events being counted and therefore < 1% false negatives.
Simulations to evaluate the background noise level on the screens were also performed. Most of the background hits shown in Figure 15 on screen-1 and screen-2 can be rejected based on the above energy thresholds alone. The highlighted pixels on screen-1 with energy deposited > 6 MeV can be neglected by analysing their tracked pixels on screen-2. The absence of cluster of pixels with integrated energy deposited > 8 MeV as highlighted in Figure 15b) strongly indicates that the hits on screen-1 are originated from background noise. Finally, none of the background events shown meet the detector threshold in the Cherenkov detector.
These simulations only account for background generated by the electrons scattered due to the electron beam-laser interaction. The measurement of false positive events is only possible if the calorimeter measures a sufficiently high signal above the background. However, our FLUKA simulations show that the background noise consists of a bath of many low energy events. Therefore, the reference channels and signal channels prevent false positives arising from the low energy particle background. Thus, based on our simulation inputs, the false positive rate would be zero. To estimate the real false Figure 13. Energy deposited at each LYSO scintillating screen per incoming single particle. As the incoming particle travels through the first crystal screen, secondary particles with lower energy are created increasing the energy deposited at the second screen in comparison with the first LYSO. The typical uncertainty on the data points is on the order of 3-5%.
positive rate, we need to be able to evaluate the probability of giving a false positive on the Cherenkov calorimeter by having a high-energy localized event on the detector (or split event). To quantify this probability, the background radiation level of hard gamma photons in FACET-II needs to be known, which is will be possible as soon as FACET-II becomes operational.
Conclusions
In this paper, a single particle detection system designed to measure positron spectra for strong-field QED experiments is presented. The implementation in the upcoming Experiment-320 at FACET-II (SLAC) is detailed with calibration data from the ELBE accelerator. Based on this calibration a single 3 GeV particle hit at the Cherenkov detector generates about 537 detectable photons by the calorimeter with an energy resolution of ≈ 20 %. Furthermore, Monte-Carlo simulations were performed to demonstrate that a signal-to-noise ratio of SNR σ = 18 is predicted for the Cherenkov calorimeter detector at the Experiment-320. The combination of the LYSO screens with the Cherenkov detector allows efficient rejection of background events and the recording of positron spectra with ∆E/E = 0.02 even for pair production rates of 1 per shot. The decay time of the LYSO screens were calibrated using a radioactive Sodium-22 source to produce scintillation photons in the LYSO screens and a photomuliplier tube to diagnose the scintillation photons. A total of 1500 traces captured by the PMT were analysed and a curve of the form was fitted on each captured trace. In Equation (A.1), A is the signal amplitude, τ is the decay time of the signal, τ P stands for the signal rise time and t 0 is time shift of the signal. The function Θ(t) is the step-function which allows to shift the signal in time. An example of a single captured trace used on the evaluation of the decay times is shown in Figure A1a). Figure A1b) shows a histogram distribution of the evaluated decay time τ of each captured signal. A normal distribution was fitted on the histogram data and an average decay time of the LYSO crystals of 42.2 ns with FWHM of 7.1 ns was calculated. Both results are in agreement with measurements reported in the literature [40].
In Figure A1a), the image intensifier temporal gate window is also shown. The capturing window starts shortly before the single particle arrives and ends few-ns before the arrival of background noise from the beam dump allowing to capture most of the scintillation light from the LYSO screens without overlapping with the background noise.
Appendix B. Dark Current Calibration
To estimate the number of electrons at each accelerator dark current shot, Gafchromic EBT3 Radiochromic films were used [41]. The RCF films were placed in front of the Cherenkov calorimeter detector and the absorbed dose in the films was recorded for two define time periods, 3270 s and 650 s.
The RCFs were digitized using a commercial scanner model Epson Perfection V750 Pro [42]. The chosen flatbed scanner allows to store information of 16-bit RGB color information from the scanned film. By calibrating the scanner transmission light, one can retrieve the optical density of the irradiated RCFs by using the digitized color counts for each individual RGB channel. From the optical density, information retrieved from the RCFs, the absorbed dose is obtained [43]. The total number of electrons that have passed through a selected area of an RCF is given by [43] where D bkg is the average background dose noise on the RCF, e is the electron charge, d = 28 µm and ρ = 1.20 g/cm 3 are the thickness and density of the active layer of the radiochromic film, A p = (25.4 mm/300) 2 is the area for one pixel, and, D(C R ) i,j is the total absorbed dose at the pixel given by the indexes (i, j). The dark current of the accelerator is considered as a constant background source or noise on the measurements and has a frequency of 260 MHz. At the charge calibration measurements, the noise given by the dark current needs to be taken into account. The dark current can be measured as soon as the radio frequency (RF) injector gun was activated by opening the accelerator shutter. The total number of dark current electrons can be calculated using the following equation, where T is the total RCF exposure time to the dark current, N c e is the average number of electrons in one dark current cycle. The RCF used for the dark current measurement is shown in Fig. B1.
By scanning the RCF shown in Fig. B1 with the Epson Perfection V750 Pro flatbed scanner and post processing the individual color channels, the two available measurements of the dark current were analyzed. The first measurement was irradiated for a period of time of approximately 11 min and resulted on an average of 0.164 Figure B1. Radiochromic film used for determining the dark current of the ELBE accelerator.
electrons per 3.846 ns, equivalently to a current of I 11 dc = 6.81 pA. The second dark current measurement spot was irradiated for a period of 54.5 min and yield an average of 0.150 electrons per 3.846 ns, or I 54.5 dc = 6.23 pA.
Appendix B.1. Uncertainty Evaluation of the Dark Current
In calibration experiment using the dark current of the ELBE accelerator, we identify the following possible sources of uncertainties: • Background noise on the measurement • Large energy spread of the electron beam • Uncertainty on the digitalization of the colour channels of the Radiochromic films (RCF) by the flatbed scanner • Uncertainty on the conversion between the measured optical density from the RCFs to absorbed dose The background noise on the measurement can be neglected since there cannot be any other source of background scattered particle inside the experimental cave at ELBE and the signals captured by the PMTs were triggered above the typical background levels of the photomultiplier tubes as well as its dark current. The large energy spread of the electron source can also be neglected since the ELBE accelerator provides single particle energies filtered by a magnetic system and resulting in a energy spread of < 50 keV. Hence, the electron beam has an energy resolution of E = 50 keV/27 MeV < 0.2%.
The uncertainty on the digitalization of the colour channels of the flatbed scanner was also addressed during the evaluation of the accelerator dark current. A calibration of the three colour channels (red, green, and blue) of the Epson Perfection V750 flatbed scanner was performed using commercial Kodak Wratten 2 Neutral Density No. 96 filters [44] which were also calibrated and a difference of < 0.5% from their nominal value was found. In the readout of the radiochromatic films (RCFs), we focused on the red channel of the scanned films and a calibration curve obtained for the red colour channel of the scanner was obtained through a curve fit of the form OD = 2.229 exp(−C R /21957) − 0.427 where OD is the optical density and C R is the digitized intensity in counts of the red colour channel by the flatbed scanner. The fitted curve presented a R 2 = 0.9996 which confirms that the fitted curve agrees well the calibration data points.
The conversion between optical density of the RCF obtained at the red colour channel read by the scanner and the absorbed dose was also evaluated. RCFs films were illuminated by a proton source (Jena University Laboratory for Ion Acceleration tandem accelerator, namely JULIA) with a known fluence which provides a specific absorbed dose on the films. Later, the illuminated RCFs were scanned and a conversion between the optical density at the red colour channel and the absorbed dose was be evaluated [43,45]. As result, a transfer function between the digitized intensity C R and the absorbed dose by the RCF was obtained as, Dose(Gy) = 6.1 C R − 0.221 × 10 6 5.3 × 10 3 − C R . (B. 3) The transfer function above has a fluence error of 5% for the dose applied from the proton source and the other uncertainties are assumed to be negligible. As only the fluence error of 5% is the only substantial uncertainty, the value of 5% is also applied for the number of electrons from the dark current.
Finally, the ELBE dark current measurement of 650 seconds provides a current value of (0.164 ± 0.008) electrons per RF cycle and the longer measurement of 3270 seconds results in a dark current of (0.150 ± 0.007) electrons per RF cycle.
From both measurements, the best estimate for the dark current value is calculated using the weighted average method as described in [46], where x i is the i-th measured value with uncertainty of σ i , and w i is the weight calculated as w i = 1/σ 2 i . The uncertainty σ wav in x wav is calculated using error propagation such that Hence, the best estimate for the dark current is x DC wav = (x wav ± σ wav ) = (0.156 ± 0.005) electrons per RF cycle. | 8,813 | 2021-07-08T00:00:00.000 | [
"Physics"
] |
Nickel-Catalyzed Decarboxylative Coupling of Redox-Active Esters with Aliphatic Aldehydes
The addition of alkyl fragments to aliphatic aldehydes is a highly desirable transformation for fragment couplings, yet existing methods come with operational challenges related to the basicity and instability of the nucleophilic reagents commonly employed. We report herein that nickel catalysis using a readily available bioxazoline (BiOx) ligand can catalyze the reductive coupling of redox-active esters with aliphatic aldehydes using zinc metal as the reducing agent to deliver silyl-protected secondary alcohols. This protocol is operationally simple, proceeds under mild conditions, and tolerates a variety of functional groups. Initial mechanistic studies suggest a radical chain pathway. Additionally, alkyl tosylates and epoxides are suitable alkyl precursors to this transformation providing a versatile suite of catalytic reactions for the functionalization of aliphatic aldehydes. Cross-coupling reactions have revolutionized the landscape of carbon–carbon bond construction, with extensive application in the synthesis of natural products, pharmaceuticals, agrochemicals, and functionalized polymers.1 Coupling of Grignard or organolithium reagents with carbonyl compounds remains among the most frequently used synthetic reactions (Figure 1A),2 although limitations exist due to the instability, basicity, and lack of functional group compatibility of the requisite highly nucleophilic reagents. Barbier-type reactions3 and Nozaki-Hiyama-Kishi (NHK)4 couplings are attractive as they avoid the handling of airand moisture-sensitive organometallic reagents, and have been employed in many complex settings,5 although the reaction scope is often limited in cases where sp3 alkyl fragments are added to aliphatic enolizable aldehydes. An attractive approach to deliver organohalide feedstocks to carbonyl compounds that obviates the need for preformed organometallic reagents are transition-metal-catalyzed reductive coupling reactions.6 To date, the coupling of aldehydes with organohalides using a stoichiometric reducing agent can be catalyzed by Cr,7 Rh,8 Co,9 and Ni,10 but current systems are often restricted to aryl, allylic or propargylic halides and aromatic aldehydes. The catalytic transformation of aliphatic aldehydes with less-activated sp3 counterparts remains a synthetic challenge.7c-e,11 Aliphatic aldehydes often exhibit attenuated reactivity, and competing enolization reactions lead to side product formation.10d Additionally, compared with sp2-hybridized halides, unactivated alkyl halides are less suitable coupling partners due to lower reactivity and undesirable side pathways such as homocoupling or competing b–H eliminations of reactive intermediates.12 The wide availability of alkyl carboxylic acids makes this substrate class an attractive coupling partner for processes of this type.13 In recent studies, Baran, Weix, and others have extensively explored the utility of redox-active esters (RAEs), as a carboxylic acid-derived radical precursors in a variety of carbon–carbon and carbon–heteroatom bond forming reactions.14 While the specific combination of aliphatic, enolizable aldehydes with sp3 alkyl fragments are largely excluded from past work, Reisman, Blackmond, and Baran recently reported an attractive electrochemical Crcatalyzed cross-coupling of aldehydes with redox-active esters including two examples of this combination with primary RAEs (Figure 1B),11b but no general approach to the catalytic union of aliphatic aldehydes with simple sp3 alkyl fragments has been described. In order to address this gap in the field, our lab recently described a catalytic process involving the reductive coupling of aliphatic aldehydes with alkyl bromides in a pathway proposed to proceed through the intermediacy of a-silyloxyalkylnickel intermediates derived from aldehydes, silyl chlorides, and low-valent nickel (Figure 1B).15 In order to address limitations of that protocol, including substrate access, scope, and yield, we have now explored the utility of more broadly available substrate classes in catalytic couplings with aliphatic aldehydes (Figure 1C). The main focus of this study is the coupling of aliphatic aldehydes with redox-active esters, providing access to numerous product types derived from simple carboxylic acid precursors. Additionally, preliminary examples of reductive couplings between aldehydes and alkyl tosylates or epoxides14w,16 are described. This combination of procedures provides strategies where alkyl fragments are derived from carboxylic acids, alcohols, or alkenes, thus greatly expanding the range of precursors available for aldehyde functionalization processes. Figure 1. Background and focus of this work. Our initial investigation geared towards developing the catalytic reductive coupling of aldehydes 1a with the N-hydroxyphthalimide (NHPI) ester 2a (Table 1). Systematic investigation of the reaction parameters showed that the desired product 3a was isolated in good yield (91%) with a combination of Ni(cod)2 and bioxazoline (BiOx). Control experiments indicated that a nickel catalyst was necessary for the reaction to proceed, and other nickel sources only led to moderate yield (entries 2, 11 and 12). The ligand (BiOx), reductant (nanopower Zn), 1,5-hexadiene and LiCl also A. Grignard, Barbier, and Nozaki-Hiyama-Kishi couplings R1 X X= Br, I + H R2 O [Mg, Li] R2 OH R1 M R1 [M] Reductant Unactivated sp3-sp3 Coupling Rare [M] = Ni, Cr, Rh B. Previous work: + Stoichiometric Chromium or Electrochemical Baran primarily with sp2 R1 or R2 Montgomery Ni (cat.), Mn + H O R1 R2 OR R2 Br R1 NHPI O R2 H O R1 C. This work Unactivated sp3-sp3 cross couplings with aliphatic aldehydes H O R1 [Ni0] [Si]-Cl [Ni] O[Si] R1 NHPI O R2 O[Si] R1 R2 OTs R3 O[Si] R1 R3 O[Si] R1 [Si]O R4 R1 = Alkyl; R2 = Alkyl; R3 = Alkyl; R4 = Alkyl Broad scope and substrate variation R1 OR R2 NHPI = N-hydroxyphthalimide limitations in scope and substrate access
Figure 1. Background and focus of this work.
Our initial investigation geared towards developing the catalytic reductive coupling of aldehydes 1a with the N-hydroxyphthalimide (NHPI) ester 2a (Table 1). Systematic investigation of the reaction parameters showed that the desired product 3a was isolated in good yield (91%) with a combination of Ni(cod)2 and bioxazoline (BiOx). Control experiments indicated that a nickel catalyst was necessary for the reaction to proceed, and other nickel sources only led to moderate yield (entries 2, 11 and 12). The ligand (BiOx), reductant (nanopower Zn), 1,5-hexadiene and LiCl also played a crucial role in successful transformation (entries 3-6). A ligand screen revealed that BiOx is uniquely effective when compared with other common ligands (entries 13-15). Of note, olefin additives can dramatically improve the efficiency, with 1,5-hexadiene proving the most effective (entries 5, 8-10). Furthermore, the particle size of Zn is critical, with the use of nanopowder Zn (40-60 nm) enhancing the yield (entry 7). With optimal conditions in hand, we sought to define the reaction scope (Table 2A). Various 1º and 2º carboxylic acids were converted to the corresponding NHPI-esters and coupled efficiently with aldehyde 1a. A range of functional groups were well tolerated including ketones (3h, 3i, 3ab), esters (3j, 3x), N-Boc (3y), N-tosyl (3z), and alkenes (3g, 3v, 3ac, 3ad). A simple methyl group can also be added effectively using the RAE 3c derived from acetic acid. Notably, some potentially reactive functional groups, including alkyl chloride 3k and aryl bromide 3l were left intact under current conditions, offering opportunities for subsequent cross-coupling. Protected alcohols (3f, 3ac) and ethers (3m, 3t, 3u) were also competent coupling partners, allowing for the construction of polyol motifs. Moreover, heterocycles including pyridine (3o), and indole (3p) were also readily accommodated as were a series of secondary redox-active esters (3q-3z). The protocol was scalable to 5 mmol, obtaining the desired product 3a in 81% isolated yield.
After defining the scope of RAEs, attention then turned to the scope of the aliphatic aldehyde (Table 2B). Sterically encumbered aldehydes with b-branching, such as isovaleraldehyde (3af) and citronellal (3ag) were competent coupling partners. a-Branched aldehydes (3as-3au) also delivered the desired products without diminished efficiency. Benzyl ethers (3ah), silyl ethers (3ai), acetals (3aj), alkynes (3ak), and phthalimide groups (3ar) were also tolerated. Substrates with functional groups known to engage in transition-metal-catalyzed transformations such as aryl chlorides (3al), aryl bromides (3am) and aryl boronate esters (3an), delivered the desired product smoothly without competing reactivity. Notably, heterocycle substrates, such as indole (3aq), was likewise suitable for this chemistry. The scope and chemoselectivity of this method in activating aldehydes in the presence of a wide array of reactive functional groups including ketones is thus quite broad, addressing an important limitation of classical methods for carbonyl additions. While this method demonstrates considerable scope with carboxylic acid-derived RAEs, we considered that utilizing alkyl precursors derived from simple alcohols and alkenes would further extend the utility and scope of the strategy (Table 3). To enable the use of alcohol precursors, we explored the use of alkyl tosylates as the coupling partner. 17 With simple modification of the reaction conditions (see SI), our catalytic system can activate the C-O bond of tosylates, delivering the desired product in good yield (Table 3) with attractive functional group compatibility including esters (5b), ethers (5c), and furans (5d).
With an eye towards utilizing alkene feedstocks, we then considered the use of epoxides as the alkyl precursor (Table 3). 18 After extensive investigation of reaction parameters (see SI), an effective method was realized, obtaining the desired silyl-protected 1,3-diols in good yield, tolerating a range of functional groups, such as furans (7c), ethers (7d), aryl bromides (7e), and alkynes (7f). This approach further diversifies the range of product types accessible by this method, with 1,3-diols being obtained in the epoxide-based procedure. 19 Table 3. Catalytic Couplings of Aldehydes with Alkyl Tosylates or Epoxides. a a Reactions run on 0.20 mmol scale unless otherwise noted. Yields are for isolated material. A cyclopropane-containing RAE 8 afforded a 92:8 ratio of ring-opened product 3g and compound 9 with the cyclopropane ring intact (Figure 2A). Additionally, in an experiment involving hexenyl transfer, a direct linear dependence of the ratio of 11/12 on the catalyst loading was observed ( Figure 2B). These experiments are consistent with a mechanism involving free-radical intermediates, in analogy to prior studies on nickel-catalyzed processes with both alkyl halides or
Coupling of Epoxides with Aliphatic Aldehydes
NiCl 2 (dme) (20 mol %) BiOx (30 mol %) TESCl (4.0 equiv) 1,5-hexadiene (1.5 equiv) NaI (1.0 equiv) LiBr (1.5 equiv) nanopowder Zn (3.0 equiv) DMF (0. redox-active esters. 14v,20 Similarly, ring opening was observed in couplings of cyclopropanecarboxaldehyde (13) leading to product 14 exclusively as the Z-isomer ( Figure 2C). In this case, we attribute ring-opening of the cyclopropane unit to a nickel-catalyzed process involving the intermediacy of 15, potentially involving the initial oxidative addition of a low-valent nickel species to the aldehyde, promoted by Et3SiCl. 15,21 An experiment employing stoichiometric Ni(cod)2 but lacking the zinc reductant resulted in the formation of product 3a in high yield, suggesting that key organonickel intermediates involved in product formation do not require reduction at the nickel center, but rather that the zinc reductant is involved in catalyst regeneration ( Figure 2D).
Figure 2. Mechanistic Experiments
Based on these experiments and insights from prior studies, we propose a mechanistic picture consistent with the above findings (Figure 3). Oxidative addition of aldehyde 1 and silyl chloride to Ni(0) generates Ni(II) silyloxyalkyl complex II. Species related to II have been previously described, 22 and our prior studies of aldehyde -alkyl halide couplings illustrated characteristic byproducts that are best explained by the involvement of II. Addition of free radical VI to II affords Ni(III) species III, which undergoes rapid reductive elimination to form product 3 and Ni(I) species IV. Combination of IV with the RAE 2 results in V and the free radical VI that recombines with species II. The above steps are consistent with the observation that Ni (0) formation in the absence of zinc, illustrating that reduction of intermediate II to the corresponding Ni(I) complex is not strictly required for turnover. Additionally, the above evidence (Figure 2A-B) for free radical intermediates derived from the RAE 2 are consistent with this proposed mechanistic pathway.
The conversion of Ni(II) complex V to the Ni(II) silyloxyalkyl nickel intermediate II requires a net two-electron reduction by zinc and oxidative addition of the aldehyde and silyl chloride. The commonly invoked reduction of nickel complex V to Ni(0) complex I completes the catalytic cycle, although this possibility must be viewed within the context of recent work from Diao that illustrates that Ni(II) BiOx complexes are more resistant to reduction compared with the corresponding Ni(II) complexes of other commonly employed pyridyl-based ligands. 23 The presence of the phthalimido substituent in V and the interaction of V with the aldehyde and silyl chloride may affect the facility of this reduction by nanopowder zinc. Given these complexities, the precise nature of the conversion of V to II will require further investigation.
The generation of free radical VI from RAE 2 is depicted (Figure 3) as involving Ni(I) species IV in analogy to studies from Baran in the coupling of anhydrides with redox-active esters. 11a The efficiency of product formation in the absence of zinc (Table 2D) illustrates that the nickel catalyst is competent in mediating the decomposition of redox-active esters. We observed that zinc and Et3SiCl rapidly promotes the decomposition of RAE 2, however, the presence of the nickel catalyst has a protective effect as previously described by Baran, slowing the rate of consumption of 2 compared to control experiments where the nickel catalyst is omitted (see SI). Recent studies from Rousseaux have provided evidence in reductive arylation reactions that TMSCl and Zn promote the formation of free radicals. 24 Our studies, which potentially involve effects of the silyl chloride in several steps including aldehyde activation and/or redox-active ester decomposition, have not clearly elucidated the active agent in mediating radical formation from the redox-active ester. Finally, the role of 1,5-hexadiene is not illustrated in the mechanistic scheme since the 4-and 5coordinate complexes II and III cannot accommodate the bidentate coordination of this additive. Coordination of this additive likely prevents catalyst decomposition and/or inhibits competing side reactions that lie off the productive catalytic pathway.
Figure 3. Proposed Mechanism
In conclusion, a highly effective decarboxylative alkylation of aliphatic aldehydes with redox- active esters has been developed. The procedure is broad in scope, tolerant of a wide array of functional groups, high-yielding, experimentally simply, and scalable. This process was extended to include the reductive cross-coupling of alkyl tosylates or epoxides with aliphatic aldehydes, thus providing a broad range of precursors derived from carboxylic acids, alcohols, or alkenes. Preliminary mechanistic experiments on this aldehyde -redox-active ester coupling are consistent with initial aldehyde activation to produce a-silyloxyalkylnickel species as a key intermediate that is captured by free radicals generated from the redox-active ester. Future work will include efforts to further study the mechanism of these transformation and expand the scope in increasingly complex applications. | 3,308.2 | 2021-10-21T00:00:00.000 | [
"Chemistry"
] |
Use of genomics to design a diagnostic assay to discriminate between Streptococcus pneumoniae and Streptococcus pseudopneumoniae
Distinuishing the species of mitis group streptococci is challenging due to ambiguous phenotypic characteristics and high degree of genetic similarity. This has been particularly true for resolving atypical Streptococcus pneumoniae and Streptococcus pseudopneumoniae. We used phylogenetic clustering to demonstrate specific and separate clades for both S. pneumoniae and S. pseudopneumoniae genomes. The genomes that clustered within these defined clades were used to extract species-specific genes from the pan-genome. The S. pneumoniae marker was detected in 8027 out of 8051 (>99.7 %) S. pneumoniae genomes. The S. pseudopneumoniae marker was specific for all genomes that clustered in the S. pseudopneumoniae clade, including unresolved species of the genus Streptococcus sequenced by the BC Centre for Disease Control Public Health Laboratory that previously could not be distinguished by other methods. Other than the presence of the S. pseudopneumoniae marker in six of 8051 (<0.08 %) S. pneumoniae genomes, both the S. pneumoniae and S. pseudopneumoniae markers showed little to no detectable cross-reactivity to the genomes of any other species of the genus Streptococcus or to a panel of over 46 000 genomes from viral, fungal, bacterial pathogens and microbiota commonly found in the respiratory tract. A real-time PCR assay was designed targeting these two markers. Genomics provides a useful technique for PCR assay design and development.
INTRODUCTION
Streptococcus pseudopneumoniae was first reported in 2004 by Arbique et al., and was described as an acapsular, bileinsoluble and optochin-resistant bacteria when grown in CO 2 . It is also a member of the mitis group streptococci [1]. Members of the mitis group streptococci can be difficult to identify to the species level and often lack genetic markers for reliable discrimination. For example, Arbique et al. showed that common pneumococcal targets, such as pneumolysin (ply) and autolysin (lytA) could be detected in a few Streptococcus mitis and the majority of S. pseudopneumoniae [1]. Studies by Kawamura et al. [2] and Wessels et al. [3] further illustrate the challenges with using 16S rRNA gene sequencing [2], biochemical, MALDI-TOF MS and molecular assays [3] in discriminating between members of the mitis group streptococci.
Given the challenges in resolving mitis group streptococci, the epidemiology and clinical significance of S. pseudopneumoniae is unclear. Pathogenicity of S. pseudopneumoniae has been shown in a murine model [4], while in humans, it has been associated with chronic obstructive pulmonary disease (COPD) [5]; others did not make the same observation [6]. A common feature of S. pseudopneumoniae appears to be the prevalence of erythromycin, tetracycline and penicillin resistance [5][6][7][8]. The paucity of studies on S. pseudopneumoniae have been undoubtedly hampered by challenges in distinguishing S. pseudopneumoniae from atypical Streptococcus pneumoniae. S. pseudopneumoniae is genetically similar to S. pneumoniae according to the results of a genomic comparison study done by Shahinas et al. [9], which documented various shared and unique features between S. pneumoniae, S. pseudopneumoniae and S. mitis. Multilocus sequence analysis (MLSA) has been successful in a number of studies in discriminating mitis group streptococci [8,10,11]. In the same spirit as MLSA discriminates species of the genus Streptococcus, we used phylogenetic inference to look at the population structure of mitis group streptococci, irrespective of taxonomic classification in NCBI. Ultimately, we used this clustering information to inform marker discovery that was used to develop a real-time PCR assay that discriminates between S. pneumoniae and S. pseudopneumoniae.
METHODS
Streptococcus growth conditions, and isolate selection for sequencing Members of the genus Streptococcus referred to the British Columbia Centre for Disease Control Public Health Laboratory (BCCDC PHL) were selected for study. These isolates, though identified as belonging to the mitis group streptococci by partial 16S rRNA gene sequencing, could not be classified definitively as S. pneumoniae, S. mitis or S. pseudopnuemoniae. All isolates of members of the genus Streptococcus were grown on 5 % Columbia Sheep Blood Agar (Oxoid) at 37 C in a CO 2 incubator for 18-24 h. Fifty strains of members of the genus Streptococcus isolated from various sample types were selected; three of these isolates were identified as S. pneumoniae, one as Streptococcus gordonii and one as Streptococcus australis. The remaining 44 (plus one repeated sample) isolates belong to the mitis group streptococci, but after 16S rRNA sequencing had uncertain laboratory identification beyond the viridans grouping. ATCC strains S. mitis (ATCC 49456 T ), Streptococcus oralis (ATCC 9811), S. pneumoniae (ATCC 49619) and S. pseudopneumoniae (ATCC BAA-960 T ) were included as controls for the real-time PCR.
Genome sequencing
Nucleic acids were extracted from the isolates of members of the genus Streptoccocus using a DNeasy Blood and Tissue Kit (QIAgen) or a MaxMAX DNA Multi-Sample Ultra Kit (ThermoFisher). The extracted DNA was made into Illumina-compatible libraries using either a Nextera XT (Illumina), TruSeq Nano DNA Library Prep Kit for NeoPrep (Illumina) or a NxSeq AmpFREE Low DNA Library Kit (Lucigen). Libraries made with the NxSeq AmpFree DNA Library Kit were quantified using the NEBNext Library Quant Kit for Illumina (New England BioLabs). All libraries were sequenced on an Illumina MiSeq using a 500-cycle MiSeq V2 kit (Illumina). Quality of the raw sequencing reads was assessed using FastQC v0.11.5 (www.bioinformatics.babraham.ac.uk/projects/fastqc/) and MultiQC 1.2 [12]. One isolate, BCCDCPHL-Ssp027, failed to sequence and was not further analyzed. All raw sequence data is available from the BCCDC PHL Genomic Data Bank (BioProject: PRJNA379148), specifically this study under BioProject: PRJNA428833.
Genome assembly
Raw Illumina reads were adapter and quality trimmed with Trimmomatic v0.36 [13], using the adapter sequences packaged with the A5-miseq assembly pipeline [14]. The resulting trimmed reads were assembled with the Unicycler 0.4.1 [15] assembly pipeline with the --no_pilon option, using SPAdes v3.11.0 [16] as the assembler for the trimmed Illumina reads.
Public genome download All available genomes of members of the genus Streptococcus from RefSeq release 84 were downloaded using ncbi-genomedownload 0.2.5 (github.com/kblin/ncbi-genome-download) (n=11 455). In addition, we downloaded S. pseudopneumoiae sequence data from BioProjects PRJEB20507, PRJEB4909, PRJEB2340 and PRJNA225866, and assembled the genomes, where appropriate, as described above (n=16). In total, 52 S. pseudopneumoniae genomes (including one labelled S. mitis) were gathered (Table S1). Non-streptococci genomes that were used to assess the analytical specificity (exclusivity) were also downloaded with ncbi-genome-download (n=46 727), and included microbiota found in respiratory samples [17].
Phylogenetic inference of Streptoccocus spp Genomes were used to reconstruct a phylogenetic tree using PhyloSift v1.0.1 [18], which places genomes phylogenetically using 37 reference markers that are found in single copies and are nearly universal. The alignment of these phylogenetic markers (21 327 nucleotide positions) were used
IMPACT STATEMENT
Mitis group streptococci are often difficult to distinguish with traditional biochemical assays. In particular, Streptococcus pseudopneumoniae is often hard to differentiate from atypical Streptococcus pneumoniae. Due to this, our understanding of the epidemiology and clinical significance of S. pseudopneumoniae has been limited. We sought to develop a suitable marker for distinguishing S. pseudopneumoniae from other species of the genus Streptococcus by using publicly available genomes along with phylogenetic support. This work should have broad interest for those studying species of the genus Streptococcus, in addition to being an example of how genomics can support the development of diagnostic assays.
Marker discovery
The pan-genome of the 34 complete RefSeq S. pneumoniae genomes and 27 S. pseudopneumoniae genomes (based on their phylogenetic placement) were generated using largescale blast score ratio (LS-BSR) v1.011 analysis [21], predicting genes with Prodigal v2.6.3 [22] and clustering using VSEARCH v2.5.0 [23]. The LS-BSR accessory script (com-pare_BSR.py) was used on the resulting LS-BSR gene matrix to compare and extract genes that were unique to all 34 S. pneumoniae or 27 S. pseudopneumoniae. Candidate markers for either S. pneumoniae or S. pseudopneumoniae were selected based on having a sequence length longer than 500 nucleotides and over 99 % identity (number of identical nucleotides of query divided by subject length) when aligned back to all originating genomes using blastn v2.6.0+ [24]. Other bioinformatics software, such as bioawk (github.com/ lh3/bioawk), and seqtk (github.com/lh3/seqtk) were used to filter and manage the sequence data. Candidate markers were annotated using prokka v1.12 [25].
In silico pneumococcal capsule typing Assembled genomes were used to simulate Illumina sequencing data at a sequencing depth of 150Â with wgsim 0.3.2 (https://github.com/lh3/wgsim). These data were used with Pneumococcal Capsule Typing (PneumoCaT 1.0) pipeline [26] to predict pneumococcal serotypes from Illumina sequence data.
Taxonomic classification of discrepant isolates Discrepant classifications were assessed by simulating the Illumina sequences using the assembled genome in question with wgsim 0.3.2 at a sequencing depth of 150Â. The simulated reads were classified using Kraken version 1.0 [27] using the minikraken_20171019_8 GB database, and the most likely taxonomy was based on the classification with the largest number of reads assigned to it.
Real-time PCR assay
A TaqMan assay was developed for the S. pneumoniae marker (SPN0001) and S. pseudopneumoniae marker (SPS0002). Primers and probes were designed using Geneious 9.0.4 (www. geneious.com, [28]) (Table 1), IDT OligoAnalyzer 3.1 was used to assess primer interactions, and Thermofisher Primer Express 3.0.1 to predict primer and probe melting temperatures. Real-time PCR reactions were performed on an ABI 7500 with recommended Fast thermal-cycling conditions in a 20 µl final volume using TaqMan Fast Advanced Master Mix (Life Technologies), PCR-grade water and primers and probes in a 20Â mix. The 20Â multiplex mix consists of each primer and probe resuspended in IDTE (pH 8.0) at a final concentration of 200 nM each for SPN-F and SPN-R, 100 nM for SPN-P, 500 nM each for SPS-F and SPS-R and 250 nM for SPS-P oligonucleotides.
Streptococcus lysates were prepared by either re-suspending a half loop (0.01 ml) of bacterial growth from isolated colonies into 1 ml of PCR-grade water in a micro-centrifuge tube and heating in a dry bath at 100 C for 8 min, or using Instagene following the manufacture's protocol. A 2 µl aliquot of sample lysate was used in each real-time PCR reaction.
RESULTS AND DISCUSSION
We set out to look for species-specific markers that would unambiguously distinguish S. pseudopneumoniae from S. pneumoniae and other members of the mitis group streptococci. Given the challenges related to accurately identifying S. pseudopneumoniae, we elected to first generate a phylogenetic tree of species of the genus Streptoccocus downloaded from RefSeq complete, including 34 S. pneumoniae, four S. oralis and three S. mitis, as well as 36 S. pseudopneumoniae and 16 more from various BioProjects (Table S1). Fig. 1 illustrates that 25 S. pseudopneumoniae cluster (blue branches) among S. mitis, and S. oralis. Of the 52 S. pseudopneumoniae isolates 27 clustered together on the tree in a clade (orange branches) near the S. pneumoniae clade (gold branches), including two S. pseudopneumoniae ATCC BAA-960 T genomes that had been sequenced by different groups. The majority (24 out of 25) of the S. pseudopneumoniae that were not part of the large (orange) S. pseudopnuemoniae clade were isolated and sequenced during a study in one intensive care unit population [29]. In that study, the original laboratory identification of these S. pseudopneumoniae were mostly 'Strep Viridans', Neisseria, Enterococcus fecaelis or Staphylococcus aureus, while the authors used the average nucleotide identity (ANI) to find the best match of these 24 genomes to S. pseudopneumoniae IS7493 in the NCBI database. The taxonomy that was applied to these genomes and uploaded to NCBI was based on the ANI, which ranged from 0.82 and 0.93. It has been suggested that an ANI of at least 0.95 is needed for classification of isolates as members of the same species [30][31][32]. Given that the S. pseudopneumoniae classification from the Roach et al. study [29] is not strongly supported, we decided to use the S. pseudopneumoniae genomes that clustered within that defined clade ( Fig. 1; orange clade). S. pseudopneumoniae 2120939-III (BioProject: PRJEB4909), also did not cluster within the defined clade and was excluded from the marker discovery process (Fig. 1, and Table S1).
S. pneumoniae and S. pseudopneumoniae have specific markers We next wanted to look for S. pneumoniae and S. pseudopneumoniae species-specific markers. To accomplish this, we took all genomes complete RefSeq S. pneumoniae (n=34) and publicly available S. pseudopneumoniae genomes that clustered within the S. pseudopneumoniae phylogenetic clade ( Fig. 1; n=27) and identified the pan-genome using LS-BSR [21]. After filtering, 13 candidate S. pneumoniae and four S. pseudopneumoniae genes that were greater than 500 nucleotides in length were identified. These had at least a 99 % identical match across their intended targets (Table S1). We decided to further investigate a single candidate marker from both S. pneumoniae (centroid_2470; 729 nt; GtnR-family transcriptional regulator; SPN0001) and S. pseudopneumoniae (centroid_2440; 735 nt; kdpDE-an osmosensitive potassium channel histidine kinase/response regulator; SPS0002). Analytical specificity (inclusivity) was assessed by looking for blast hits for each marker against all species of the genus Streptococcus (n=11 455) and the non-RefSeq S. pseudopneumoniae (n=16), to look for any potential-cross reactivity within the genus.
The S. pneumoniae marker was found in 8019 out of 8066 (99.41 %) of the S. pneumoniae genomes in RefSeq, it was not found in any non-pneumococcal genomes. The criteria for a genome containing a marker required at least 99 % nucleotide identity across the length of the SPN0001 target. If we looked at the raw blastn output before applying the strict 99 % identity across the entire gene, there were 11 out of 47 S. pneumoniae genomes that had blastn matches with fewer identical nucleotides. When we queried these 11 S. pneumoniae genomes with the 154 base pair (bp) realtime PCR marker sequence (see below), eight of them matched the SPN0001 target with at least 99 % nucleotide coverage (Table S1; spn0001_discrepant). The remaining 3 S. pneumoniae isolates had short blastn matches, which may be reflective of misassemblies or inadequate genome sequencing coverage prior to assembly. On the basis of the results of the the in silico analysis using the SPN0001 PCR target sequence, the adjusted specificity of SPN0001 would improve to 8027 out of 8066 (99.52 %).
The S. pseudopneumoniae marker was found in all 27 S. pseudopneumoniae used to discern the marker, as expected, and did not have any matches in the 25 so-called S. pseudopneumoniae that did not cluster within the major S. pseudopneumoniae clade (Fig. 1). There were, however, 20 non-pseudopneumoniae matches: two Streptococcus canis and four Streptococcus pseudoporcinus that shared approximately 80 % identical nucleotide sequence to SPS0002, and 14 S. pneumoniae genomes (14 out of 8066; 0.173 %). We looked at the MLST of the 14 S. pneumoniae and five of them represent ST5107 (non-typeable according to PneumoCaT [26]), isolated from Thailand during a study by Chewapreecha et al. [33], and one was a ST2971 from China (also non-typeable). The remaining eight genomes belong to various unknown sequence types and were all non-typeable except for one serotype 37 (Table S1; sps0002_discrepant), a serotype that has been described in non-pneumococcal streptococci [34].
We also looked at the exclusivity of these two markers by assessing any blast hits to known viral, fungal and bacterial (microbiota and pathogens) genomes associated with sputum and nasopharyngeal samples (Table S1; exclusivity). Of the 46 727 genomes queried, only the S. pseudopneumoniae marker, SPS0002, matched 38 identical nucleotides (38 out of 735 nt; 5 %) in four species of the genus Enterococcus (Table S1; sps0002_discrepants). On the basis of analytical specificity inclusivity and exclusivity results, both the S. pneumoniae SPN0001 and S. pseudopneumoniae SPS0002 markers have high specificity to their respective species and were considered useful targets for a real-time PCR assay.
Presence/absence of S. pneumoniae (SPN0001) and S. pseudopneumoniae (SPS0002) PCR marker is concordant with phylogenetic placement of clinical isolates of species of the genus Streptococcus The initial impetus for this study was to develop molecular markers that would distinguish S. pneumoniae from S. pseudopneumoniae. Since the SPN0001 and SPS0002 markers were used to look for presence and absence of all RefSeq streptococci genomes, we added discrepant results and more genomes to the reference phylogenetic tree (Fig. 1) to further understand the relationship of these markers to the clustering of the genomes. We included all S. pneumoniae isolates that lacked the SPN0001 PCR target (154 nucleotide sequence; n=39; originally 47, but excluding the eight S. pneumoniae that had truncated SPN0001, but that contained the PCR marker as described above), as well as all S. pneumoniae isolates that contained the SPS0002 PCR target (119 nucleotide sequence; n=14). Two S. mitis, one S. oralis and five S. infantis genomes randomly picked from RefSeq were added to populate the tree such that each species was represented by five members of these mitis group streptococci. Finally, BCCDC PHL sequenced streptococci isolates were also included on the tree, but are described below. The first tree that was generated had two isolates that were distantly related to the other isolates on the tree. We were suspicious of the classification of these organisms, which were labelled S. pneumoniae in the NCBI records. We simulated raw reads from both of these two genomes and used Kraken to classify the simulated reads. One isolate was classified as Streptococcus salivarius, while the other was classified as a Staphylococcus species. Further evidence that these were not S. pneumoniae genomes came from the in silico MLST results; the S. salivarius genome had no matches to any S. pneumoniae MLST markers, while the Staphylococcus sp. genome best matched MLST markers from Stapyhlococcus hominis (Table S1). We pruned these two genomes from the phylogenetic tree and regenerated the tree, decorated with blast results for both the SPN0001 and SPS0002 realtime PCR sequence target (Fig. 2).
The presence and absence of the S. pneumoniae (SPN0001) and S. pseudopneumoniae (SPS0002) PCR marker sequences correlated almost exclusively with the genomes that clustered in the S. pneumoniae ( Fig. 2; gold clade) and S. pseudopneumoniae ( Fig. 2; orange clade) clades. One exception was a S. pneumoniae genome (ST2971) that was both SPN0001-and SPS0002-positive, but clustered with the S. pneumoniae clade.
We noted that many of the discrepant genomes identified above may be due to incorrect classification in the NCBI RefSeq record, as they clustered according to the presence of either the SPN0001 or SPS0001 PCR markers. Many of the taxonomic classification associated with genomes in NCBI RefSeq originate from the submitting laboratory, and the two genomes (S. salivarius and the species of the genus Staphylococcus) that were pruned from the final tree (Fig. 2) made us suspicious of some of the other discrepant genomes. To provide a reference method to classify these genomes, we used the top match from Kraken classification to support or refute the classification provided by NCBI. For S. pneumoniae that were SPN0001-negative, 15 genomes clustered outside of the S. pneumoniae clade ( Fig. 2; gold clade): two were pruned from Fig. 2 (S. salivarius and the member of the genus Staphylococcus), five were classified as S. mitis and eight were classified as S. pseudopneumoniae (Table S1). The eight genomes that were classified as S. pseudopneumoniae clustered with the S. pseudopneumoniae clade ( Fig. 2; orange clade) and were SPS0002positive. The remaining 24 discrepant S. pneumoniae genomes clustered with the S. pneumoniae clade, but were SPN0001negative. Those 24 S. pneumoniae consisted of 11 different MLST sequences types, and one unknown sequence type among these 24 S. pneumoniae genomes (Table S1). Notably, ST425 (n=6), ST5107 (n=5), and ST2705 (n=3) are present multiple times. In terms of serotype, 9 out of 24 genomes were predicted to be non-typeable whereas the rest were assigned a predicted serotype of 19F (n=7), 33F (n=3), 3, 06E, 14, 23F and 32F (Table S1; spn0001_discrepants). The five S. pneumoniae ST5107 isolates were SPN0002-positive, and along with the S. pneumoniae ST2971 (SPN0001-and SPS0002-positive) are the only instances of the SPS0002 PCR marker having a match outside of genomes in the S. pseudopneumoniae clade. This is possibly due to recombination, common in S. pneumoniae and particularly in acapsular lineages [33]. With the support of the Kraken classification and the tree placement for 15 discrepant S. pneumoniae genomes, we readjusted the SPN0001 specificity to 8027 out of 8051 (99.70 %). Likewise, the S. pseudopneumoniae SPS0002 marker specificity to S. pneumoniae was adjusted based on 8 out of 14 S. pneumoniae genomes probably being S. pseudopneumoniae (they cluster within the S. pseudopneumoniae clade and were SPS0002-positive), and SPS0002 marker was detected in 6 out of 8051 (0.074 %) of S. pneumoniae genomes.
Over a three-year period, the BCCDC PHL collected isolates of members of the genus Streptococcus that could not be classified to species by 16S rRNA gene sequencing. We took this collection of unknown isolates of members of the genus Streptococcus and sequenced their genomes to see where they clustered phylogenetically, and if that clustering was supported by the expected matches to the SPN0001 and SPS0002 PCR markers. We grew 44 ambiguous isolates of members of the genus Streptococcus from April 10 2014 to June 1, 2017 for genome sequencing, as well three laboratory-confirmed S. pneumoniae isolates, one S. gordonii isolate and one S. australis isolate. All isolates, with the omission of BCCDCPHL-Ssp027 (failed sequencing), S. gordonii, and S. australis, were added to the reference phylogenetic tree seen in Fig. 1 to generate the final tree (Fig. 2) Fig. 2. This lytA gene was detected in all genomes from both the S. pneumoniae and S. pseudopneumoniae clades at over 98 % and approximately 82 % nucleotide identity, respectively (Table S1). However, lytA was also detected in 20 SPN0001-and SPS0002negative genomes, such as S. mitis B6, at approximately 82 % nucleotide identity. These data further support the usefulness of SPS0002 for distinguishing S. pseudopneumoniae from S. pneumoniae.
Real-time PCR assay results agree with phylogenetic placement of S. pneumoniae and S. pseudopneumoniae isolates Given the in silico specificity of the SPN0001 PCR marker to S. pneumoniae and specificity of the SPS0002 PCR marker to S. pseudopneumoniae, these sequences were designed as a real-time PCR assay (Table 1), which can be run as a singleplex or a duplex. We had three different panels: (1) A well- Fig. 1. Phylogenetic tree of selected species of the genus Streptococcus. Phylosift was used to place 34 S. pneumoniae (gold branches), 52 S. pseudopneumoniae, three S. mitis and four S. oralis based on NCBI taxonomy (RefSeq release 84). Note that one S. mitis strain (1042_SPSE) was included as a S. pseudopneumoniae due to information in supplemental data from [29]. The S. pneumonaie cluster is shown with gold branches, while the major S. pseudopneumoniae cluster (including two S. pseudopneumoniae ATCC BAA-960 T genomes) is shown with orange branches. S. pseudopneumoniae that fall outside of the orange S. pseudopneumoniae clade are denoted by blue branches and were ultimately excluded from the marker discovery process. Bootstrap support values are indicated in black bold type at the node that separates the S. pneumoniae and S. pseudopneumoniae clades. (Table S1).
SPN0001 was detected in all known isolates of S. pneumoniae in all three panels (29 out of 29) with 100 % accuracy and analytical specificity; no cross-reactivity was observed in the remaining 159 isolates of members of the genus Streptococcus (Table S1). SPS0002 could be detected in only S. pseudopneumoniae ATCC BAA-960 T , as expected, from the well-characterized panel. In the clinical panel, ten isolates of the S. mitis group were positive for SPS0002, indicating that they were probably S. pseudopneumoniae. However, because of the lack of a reference assay that could reliably confirm the species classification of mitis group streptococci, the detection of the SPS0002 marker in the genome sequencing panel and where the SPS0002-positive isolates clustered on the tree was important. The real-time PCR results confirmed the clustering of the sequenced isolates of members of the genus Streptococcus on the phylogenetic tree in Fig. 2; all genomes of members of the genus Streptococcus that clustered within the S. pseudopneumoniae clade ( Fig. 2; orange clade) were SPS0002-positive, while any members of the genus Streptococcus not clustering within the S. pseudopneumoniae cluster were PCR negative for SPS0002. These data support the hypothesis that the SPN0001 and SPS0002 markers identified using comparative genomics are suitable markers for distinguishing S. pneumoniae and S. pseudopneumoniae from other mitis group streptococci.
In this study we used the power of genomics to identify molecular specific markers capable of reliably differentiating S. pneumoniae and S. pseudopneumoniae. These markers were used as the basis for development of a real-time PCR assay, providing the clinical, microbiology and epidemiological communities a robust tool for reliable differentiation of S. pneumoniae and S. pseudopneumoniae. Given the abundance of misidentification of mitis group streptococci in NCBI RefSeq, phylogenetic inference was helpful in separating species to give us confidence in the dataset that we used to capture specific markers from the pan-genome. The phylogenetic inference was also helpful for looking at discrepant results as we were testing our markers in silico. For example, during our in silico analysis of the S. pseudopneumoniae marker, 14 S. pneumoniae matches (out of 8051) were found. Clustering of these 14 discrepant S. pneumoniae genomes placed eight of them in the S. pseudopneumoniae clade, and these eight genomes were also negative for the S. pneumoniae marker. This highlights the importance of alternate methods to investigate discrepant genomes from public databases, such as NCBI RefSeq. Database issues aside, this approach has broad applications for other diagnostics, including targeted assay design for outbreaks or surveillance, similar to that described by Bowers et al. [35].
Funding information
This work was supported by in-kind mandate-dedicated funding from the BCCDC Public Health Laboratory. | 5,908 | 2018-04-09T00:00:00.000 | [
"Biology"
] |
INTERNET LITERACY : AN ANALYSIS ON ITS POSSIBILITY
As technology emerges so fast and the demands of curriculum to integrate technology and innovation into classrooms, teachers are pushed to penetrate technology and bring the internet to everyday talks with the students. In accordance, using the euphoria of digital natives, students are also pushed to absorb and adjust themselves to growth of technology. In turn, there is no specific course on the use of the internet like the 90s curriculum had in the university level. However, the fact that not so many students bring laptop to campus everyday might indicate that they did not use the internet for learning outside classroom intensively. This study aims at describing how much internet literacy that students have. After analyzing the questionnaire descriptively, it was found that the participants of this study had poor internet literacy, in terms of the use of email, the use of social media, and the use of English in social media.
Background of Study
To accommodate the demands of the growing of technology and knowledge on education, a study program has to go along with this.Generally people can see whether a certain study program accommodate this challenge or not through the curriculum presented.Thus, English Department of Universitas Kanjuruhan Malang has determined its vision to be a leading study program in teaching material development based on IPTEKS (technology and knowledge) in order to produce competitive graduates in 2025.This vision literally describes that the curriculum offered is based on the growing technology and education.
Selber in Johnson (2007) defines literacy is not only merely the ability to read and write, but it covers the ability to use technology.Internet is now a happening technology that almost todays' learners opt to it.Further, todays' learners are claimed to be millennial generation in which they grow up surrounded by 2.0 technology.This was strengthen by Presky (in Liu:2010) who attributed todays' students as digital natives, because they spend almost their lives surrounded by computer, video games, iPod, smart phones, and other devices.
Several experts might define internet literacy differently.Colwell, Hunt-Barron and Reinking (2013) mention that digital literacy covers the ability to seek for and evaluate any information provided by the internet.Meanwhile, Semas in Johnson (2003) prefer the term internet literacy to specifically refer to the ability to seek for online information.Further, Hofstetter in Johnson (2003) provides detailed definition of internet literacy, such as the ability to connect, secure, communicate, use multimedia, and do web development.Walker and White (2013) also add that it is a need for todays' students to comprehend digital competence.There are four elements that should be comprehended: procedure competence, socio-digital, digital discourse, and strategic competence.
It can be concluded that internet literacy is not solely defined as the ability to send an email or simply use a search engine.However, it should be defined as the ability to seek for, use, evaluate, and develop information provided for self enrichment.
This internet literacy should be ideally taught at school, including university.As most of people assume that todays' students are already acquainted with computer and internet, knowledge on how to use this digital technology for learning effectively is no longer in the curriculum.This study is derived from the fact that most of students in University of Kanjuruhan Malang come from East of Indonesia, such as Ende, Adonara, Manggarai, Sumba, West Kalimantan, West Papua, and a small number comes from East Java.These students have different cultures, customs, and characters with the lecturers who most of them are from Java.There have been no recent studies about the impact of different internet accessibility got by students from Java and East Indonesia.The fact that not all students bring any laptop or have smartphone with internet facility to class everyday indicate that they are rarely use internet to for learning outside classroom.
This study aims at describing how much students' internet literacy, specifically students in English Education Department.It also reveals the most commonly web visited by the students.Most universities in Indonesia and also many public places facilitate its students and society with free connection.This is done to support open access of information for everyone.Todays' millennial generation are exposed to kinds of social media massively.The fact that English is regarded as a foreign language for Indonesia students brings another angle of the use of internet for English learning.Thus, lecturers might know how to adjust the materials if they want to integrate the use of internet in the classroom, also they know how to penetrate internet for learning.Based on the background of the study described, the question of this study is formulated into: "How much internet literacy that students of English Education Department have?" The findings of this study discuss how much internet literacy that students of English Education Department have regardless the sex and age differences.Johnson (2007) mentions four activity categories that users do with internet, communication, information, entertainment, and advertisement and online shopping.Hoffstetter in Johnson (2007) adds another category, which is technical ability, such as security, download, and connection.This discussion in this study covers categories suggested by Johnson, excluding technical ability.
Research Method
This study implements qualitative approach to solve the problem stated.Qualitative approach was taken because it tries to describe the phenomenon of a certain event.In this study, the researcher tries to describe how much internet literacy that students of English department have.The participant involved in this study was the third year students of English Education Department of Universitas Kanjuruhan Malang.There were 36 students voluntarily involved in this study.These students were chosen because they, especially those who came from East Indonesia, have spent enough time in Java and got equal access towards internet facility like those who came from Java.
The main instrument in this study was the researchers themselves, further, the researchers used questionnaire as minor instrument to measure students' internet literacy.The questionnaire was divided into four parts: the use of email for eligible communication, the use of social media, the use of English in their social media, and the use of English in daily communication.After collecting students' questionnaires, the researchers analyzed them descriptively.
Result and Discussion
The questionnaire was used to measure how much internet literacy that the participants have, and specifically how far the use the internet for learning English.The questionnaire was divided into two forms.The first was in scale form, which range from Strongly Agree, Agree, Disagree, and Strongly Disagree.Meanwhile, the second form was a short essay question, asking about students' most visited sites and the most distracting things from the internet.
When the students were asked about their email address, all of them acknowledged that they have an email address.However, how they used the email address might show a surprising fact (Table 1).Table 1 shows that most of the participants did not use email for their correspondences, although they agreed that they used email for academic purpose, it might happen because the lecturers assigned them to send homework through email.They certainly have a low frequency of checking emails.They rather chose another media for communicating, such as using social media, which is assumed to be more practical.One good reason for having an email address for the participants was they needed it for signing up to social media, online shopping accounts, game online, and other applications.Although it was found that the participants of this study had a low frequency of checking their emails, they certainly spend a plenty of time checking their social media when they were connected to the internet.They 100% admitted that they have at least one social media and became an active user of it with the mean score of it was 3.58 (Table 2). 2 pictures that checking social media, either for academic purpose, friendship, or advertisement and online shopping is favorable.The needs to engage and keep in touch with family or friends might be one of the reasons of this phenomenon.As most of the participants are coming from outside Malang, even Java, strengthens this reason.The more sophisticated features offered by some social media, such as phone call, video call, live streaming, which in turn suggested in cost efficiency, becomes another point to the use of social media.
The last section of the questionnaire that the researchers ask was about whether the participant used English in communicating using social media.Interestingly, most of the participants (28 out of 36 students) argued that they joined a group which used English as the medium to communicate (Table 3).However, most of them were silent readers and rarely checked their group.Thus, for those three sections (the use of email, the use of social media, and the use of English in social media), the researchers found that students' internet literacy is poor (Table 4).Also, Table 4 showcases that the participants of this current study have strong internet literacy in term of the use of social media.Unfortunately the participants did not use the internet, especially social media as a way to learn English maximally.The finding is confirmed by the fact that most of the participants confessed that they use the internet to browse online dictionary, such as Webster, Oxford, and others, and some of them used Google Translate, and Wikipedia.Few of them mentioned the use of other sites, such as reputable journals, BBC Online learning, and English learning forum.Further, the participants argued that sometimes they did not like browse the internet because of pop-up advertisement, pornography contents, and viruses.
Conclusion
People may not generalize that todays' students definitely have strong internet literacy.The participants of this study are categorized having poor internet literacy, in term of the use of email, the use of social media, and the use of English in social media.Although they showed strong internet literacy in term the use of social media, they could not take of advantage of this to help them learning English.The next researchers are expected to investigate the other category of internet literacy as suggested by Hoffestter, such as technical ability, covering security, download, and reproduce. | 2,296 | 2019-04-04T00:00:00.000 | [
"Education",
"Computer Science"
] |
Transitions W ithin the n = 4 Com plex of K rV II Obtained from a Theta-Pinch Light Source
T h e spectru m o f six tim es ion ized k ryp ton (K r V II) has been ob served in th e 4 3 0 -1 0 0 0 A w avelen gth range and 23 lines have been identified a s transition s b etw een levels o f the 4 s2, A sA d . A p 2 and A sA p con figu ra tion s. F o r 13 o f the lin es th e c la ssifica tion is new . R ev ised va lu es are prop osed for three leve ls w h ile for th e rest th e u n certain ty in the ex ist in g level v a lu e s has been co n s id era b ly decreased . T h e resu lts are su p p orted by iso elec tro n ic com p arison s a lo n g th e Z n 1 iso e lec tro n ic seq uence. T h e con figu ra tion s are interpreted by fittin g the th eo retica l energy exp ression s to the ob served energy levels u sin g leastsq u a res tech n iq u es. T h e param eter va lu es are com p ared w ith resu lts from H a r tr e e -F o c k ca lcu la tion s.
Introduction
The K r6* ion belongs to the Zn I isoelectronic sequence.The knowledge of the spectra in this sequence with the exception of the Z n l spectrum was for a long time limited [1].A small number of levels were reported for G a ll-B r V I [1], but for K r V II and more highly ionized ions, no information was available.For Zn I the analysis presented in Atomic Energy Levels [1] has been extended [2][3][4], and a few levels of G a ll [5] were added by Denne et al.Recently Isberg and Litzen revised and extended the analysis of Ga II [6].The analyses of SeV and B rV I have also been extended [7,8].For more highly ionized ions the 4 s-4p resonance transition of Rb V III-M o X III and R u X V -D y X X X V II was observed by Reader and Acquista [9,10].Litzen and Ando reported the 4 s-4 p transitions including intercombination lines in Z rX I, N b X II and Mo X III [11].Observations of the K r V II spectrum were reported by Fawcett et al. using a zeta-pinch [12].Several reports on observations of K r V II spectra using the beam-foil technique [13][14][15][16] have appeared.The Zn I isoelec tronic sequence has been studied theoretically in a number of papers [17][18][19][20].
The present work concerns the study of the 4r, 4s4d, 4p 2 and 4s4p configurations in K rV II.
The revival of the interest in data on the Zn I isoelectronic sequence is due to observations of impurity-lines from highly ionized heavy ions with few valence electrons in high tem perature plasmas [21,22]
Experimental arrangements
The light source used in the present work is a theta-pinch discharge built at Lund Institute of Technology [24], The spectra were recorded using a 3 m normal-incidence spectrograph equipped with a 1200 lines/mm grating blazed for 1380 A. The plate factor in the first diffraction order is 2.77 A mm-1.
To distinguish between different stages of ionization, a number of experimental parameters, i.e., gas pressure, dis charge voltage and number of discharges, were varied.A well developed K r V II spectrum was obtained with the following parameters: 4mTorr, 13kV and 800 discharges.
The spectra were exposed on Kodak SW R plates and lines from C III, N III, O III, K r I I and K r I I I were used as internal standards.The plates were measured with a semi-automatic comparator with a photoelectric setting device [25].For sharp lines the settings are reproducible to within ± 0.5 p m .Third order interpolation formulas, together with correction curves, were employed to reduce the comparator settings to wavelength values.The accuracy of the wavelength values is estimated to be ±0.01 A.
Analysis
The K r V II lines observed in the present work are given in Table 1,13 of them being without previous classification.The intensity figures given in the table are based on visual estimates.
The energy levels derived from the observed lines are given in Table II, and the general structure of the term system is shown in Fig. 1
. When establishing the energy levels, we were guided by isoelectronic comparisons taking into account Zn I [1], G a ll [6], G e III [1], A s IV [1], SeV [7] and B rV I [7, 8].
When performing the analysis we also used theoretical predictions of the structures of the configurations.The predictions were obtained by diagonalizing the energy matrices with appropriately scaled Hartree-Fock (HF) values for the energy parameters.For this purpose the computer code developed by Cowan [26] was used.A comparison with the level system given by Pinnington et al. [16] shows that eight of their levels values have been confirmed, although the accuracy has been considerably improved.However, for three levels we propose new identifications as discussed below. For
.
Such lines have been used for diagnostic purposes.The resonance transition 4s* ,S0-4s4p ]P, has been observed for a large range of Z values in the Zn I isoelectronic sequence [9, 23].* P e r m a n e n t ad d ress: U N 1 C A M P , I n stitu to d e F ísica D E Q -P la sm a , 1 3 1 0 0 C a m p iñ a s S P , B rasil.* P e r m a n e n t ad d ress: C e n tr o d e I n v e stig a c io n e s O p tic a s (C IO p ), L a P la ta .A r g e n tin a .
Fig. I .
Fig. I.The gross structure of the lower part o f the KrVIII energy level system.
the level 4s4p yP 0 we propose the new value 117389 cm-1. The level is established from a line at 617.18 A, classified as the 4s4p 3P0-4p2 \P, transition. This position is in reasonable agreement with the value predicted by Curtis [20]. For the level 4j4rf3Z>, we propose a new value at 349973 c m '1 . The level is determined by a line at 558.22 A Table I. Identified lines o f Kr VII Intensity ¿(À) a (cm 1 )
* Asymmetric line.classified as the 4s4p 'P t-4 s4 d 3D { transition.
This identifi cation is confirmed by the lines at 435.01 A and 447.60 A corresponding to the 4s4p
[19]-4 s4 d 3£>, and 4s4p 3P2~4s4d iD ] transitions.This new level value is in agreement with the theoretically predicted value by Ivanova et al.[19].For the level 4p 2 'D 2 we
propose the value 279 714 cm"1 , determined by a line at 918.45 A and classified as the 4s4p 'P, - 4p2 lD 2 transition. The identification is confirmed by a line at 626.48 A classified as 4s4p
3P l-4p2 lD 2 and a line at 652.90A classified as 4s4p 3P 2-4 p 2 'D 2. This
Table III . Energy parameters for the 4s4p configuration o f Kr VII Parameter HF value (cm-1) Fitted value* (cm -1)
* The rms deviation of the fit is 17cm" 1 for 4 observed levels.*The | 1,818.6 | 1986-08-01T00:00:00.000 | [
"Physics"
] |
Long noncoding RNA MEG3 suppresses liver cancer cells growth through inhibiting β-catenin by activating PKM2 and inactivating PTEN
Maternally expressed gene 3 (MEG3) encodes an lncRNA which is suggested to function as a tumor suppressor and has been showed to involve in a variety of cancers. Herein, our findings demonstrate that MEG3 inhibits the malignant progression of liver cancer cells in vitro and in vivo. Mechanistically, MEG3 promotes the expression and maturition of miR122 which targets PKM2. Therefore, MEG3 decreases the expression and nuclear location of PKM2 dependent on miR122. Furthermore, MEG3 also inhibits CyclinD1 and C-Myc via PKM2 in liver cancer cells. On the other hand, MEG3 promotes β-catenin degradation through ubiquitin–proteasome system dependent on PTEN. Strikingly, MEG3 inhibits β-catenin activity through PKM2 reduction and PTEN increase. Significantly, we also found that excessive β-catenin abrogated the effect of MEG3 in liver cancer. In conclusion, our study for the first time demonstrates that MEG3 acts as a tumor suppressor by negatively regulating the activity of the PKM2 and β-catenin signaling pathway in hepatocarcinogenesis and could provide potential therapeutic targets for the treatment of liver cancer.
Introduction
Recent research has found that long noncoding RNAs (lncRNAs) were involved in various human cancers. Maternally expressed gene 3 (MEG3) has been shown to be involved in a variety of cancers and is downregulated in most cancers and affects cell proliferation, progression, and prognosis [1][2][3][4][5] . Notably, genetic variants and imprint change in MEG3 may contribute to the development and risk of cancer 6,7 . Moreover, MEG3 increases autophagy 8 , and epigenetic repression of MEG3 represses the p53 pathway and enhances Wnt/β-catenin signaling 9,10 . In addition, MEG3 produces an antitumor effect in several cancers 11,12 . Furthermore, MEG3 functions as a competing endogenous RNA to regulate cancer progression 13 and TGF-β pathway genes through the formation of RNA-DNA triplex structures 14 . Strikingly, excessive MEG3 promotes osteogenic differentiation of mesenchymal stem cells from multiple myeloma patients by targeting BMP4 transcription 15 .
miR-122 is involved in human cancer proliferation, invasion, and progression [16][17][18][19] . In particular, miR-122 reverses the drug resistance and hepatotoxicity in hepatocellular carcinoma cells through regulating the tumor metabolism 20,21 . Pyruvate kinase muscle isozyme M2 (PKM2) is a limiting glycolytic enzyme that catalyzes the final step in glycolysis, which is key in tumor metabolism and growth 22,23 . Moreover, PKM2 plays a pivotal role in the growth, survival, and metabolic reprogramming of cancer cells 24,25 . Notably, loss of SIRT2 function in cancer cells reprograms their glycolytic metabolism via PKM2 regulation 26 . In addition, our previous study indicates that double mutant P53 (N340Q/L344R) promotes hepatocarcinogenesis mediated by PKM2 27 . Phosphatase and tensin homolog (PTEN) is one of the powerful switches for the conversion between tumor suppressors and oncogenes. A number of studies have suggested that PTEN may alter various functions of certain oncogenic proteins [28][29][30][31][32][33] . Strikingly, PTEN opposes malignant transformation of pre-B cells and breast cells 34,35 . In particular, the PI3K-PTEN-AKT-mTOR pathway is a central controller of cell growth and a key driver for human cancer 36 . β-catenin (encoded by CTNNB1) is a subunit of the cell surface cadherin protein complex that acts as an intracellular signal transducer in the WNT signaling pathway. Many hepatic tumors such as hepatocellular adenomas, hepatocellular cancers, and hepatoblastomas have mutations in β-catenin that result in constitutive activation of β-catenin 37 . Also, Wnt/β-catenin/TCF-4 signaling is crucial for the proliferation and self-renewal maintenance of cancer stem cells [38][39][40][41] . Strikingly, MSK1-mediated βcatenin phosphorylation confers resistance to PI3K/ mTOR inhibitors in glioblastoma 42 .
In the present study, we indicate that MEG3 inhibits the malignant progression of liver cancer cells in vitro and in vivo. Our study for the first time demonstrated that MEG3 acts as a tumor suppressor by negatively regulating the activity of the PKM2 and β-catenin pathway in hepatocarcinogenesis and may provide potential therapeutic targets for the treatment of liver cancer.
Cell transfection and stable cell lines
Cells were transfected with DNA plasmids using transfast transfection reagent lipofectamine R 2000 (Invitrogen) according to manufacturer's instructions. For screening stable cell lines, 48 h after transfection, the cells were plated in the selective medium containing G418 (1000-2000 μg/ml, Invitrogen) or Puromycin (1-2 μg/ml, Calbiochem) for about 4 weeks or so, and the GFPpositive cells were selected and the selective media were replaced every 3 days.
MicroRNA detection
Total RNA was isolated from cultured cells using Trizol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. Real-time RT-PCR-based detection of mature miR-122 and U6 snRNA was achieved with the miRNA Detection kit (including a universal primer, U6 primers, Qiagen) and miR122 specific upstream primers (mature miR122:P1: 5′-TGGAGTGTGA-CAATGGTGTTTG-3′ Origene, USA). qRT-PCR was performed with a StepOne Plus real-time PCR system (Applied Biosystems). The real-time PCR reaction was performed in 40 cycles with each cycle consisting of a denaturation step (95°C for 15 s, and 15 min for the first cycle only) and an annealing step (60°C for 30 s). Each sample was run in triplicate. C t values for miR122 were calculated and normalized to C t values for U6 snRNA.
Co-immunoprecipitation (IP)
Cells were lysed in 1 ml of whole-cell extract buffer A (50 mM pH 7.6 Tris-HCl, 150 mM NaCl, 1% NP40, 0.1 mM EDTA, 1.0 mM DTT, 0.2 mM PMSF, 0.1 mM Pepstatine, 0.1 mM Leupeptine, 0.1 mM Aproine); 500 μl cell lysates were used for IP with antibody. In brief, protein was pre-cleared with 30 μl protein G/A-plus agarose beads (Santa Cruz, Biotechnology, Inc., CA) for 1 h at 4°C and the supernatant was obtained after centrifugation (5000 rpm) at 4°C. Pre-cleared homogenates (supernatant) were incubated with 2 µg of antibody and/or normal mouse/rabbit IgG with rotation for 4 h at 4°C. The immunoprecipitates were incubated with 30 μl protein G/A-plus agarose beads by rotation overnight at 4°C, and then centrifuged at 5000 rpm for 5 min at 4°C. The precipitates were washed five times for 10 min with beads wash solution (50 mM pH 7.6 Tris-HCl, 150 mM NaCl, 0.1% NP-40, 1 mM EDTA), resuspended in 60 µl 2 × SDS-PAGE sample loading buffer, and incubated for 5-10 min at 100°C. Western blotting was performed with related antibodies.
Super-EMSA (gel-shift)
Cells were washed and scraped in ice-cold PBS to prepare nuclei for electrophoretic gel mobility shift assay with the use of the gel shift assay system modified In brief, consensus oligonucleotides for damage or repair DNA were biotin-labeled (hot probe). Each binding reaction was carried out with 1 µg biotinylated dsDNA probe and 200 µg purified nuclear protein in 20 µl of binding buffer containing 0.5 mg/ml poly(dI:dC) (25 mM HEPES at pH 8.0 with 50 mM KCl, 0.1% Triton X100, 2 mM MgCl 2 , 3 mM DTT, and 5% glycerol). Twenty-five pmol unlabeled cold DNA motifs (a 250-fold excess) were added in the competition assays. Reactions were carried out for 30 min incubation at room temperature, followed by overnight incubation at 4°C. Reaction mixtures were loaded onto 6% TBE polyacrylamide gels and separated in 0.5%×TBE at 100 v on ice until the dye front migrated two-thirds of the way to NC membranes and Western blotting was performed for anti-biotin.
Dual luciferase reporter assay
Cells were transfected with luciferase construct plasmids and pRL-tk. After incubation for 48 h, the cells were harvested with Passive Lysis Buffer (Promega), and luciferase activities of cell extracts were measured with the use of the Dual luciferase assay system (Promega) according to manufacturer's instructions. Luciferase activity was measured and normalized for transfection efficiency with Renilla luciferase activity.
Cells proliferation CCK8 assay
Cells were synchronized in G0 phase by serum deprivation and then released from growth arrest by re-exposure to serum, and then cells were grown in complete medium for assay. The cell proliferation reagent CCK8 was purchased from Roch and the operation was carried out according to the manufacturer's instruction. In brief, cells at a concentration 4 × 10 3 were seeded into 96-well culture plates in 100 μl culture medium containing 10% heat-inactivated fetal calf serum (FCS). Before detection, 10 μg/well cell proliferation reagent CCK8 was added and incubated for 4 h at 37°C and 5% CO 2 . Cell growth curve was based on the corresponding normalized values of OD450 and each point represents the mean of three independent samples.
Colony-formation efficiency assay 5 × 10 2 cells were plated on a 10-cm dish, then 10 ml DMEM containing 10% FBS was added into each 10-cm dish of the three replicates. Then these dishes were incubated at 37°C in a humidified incubator for 10 days. The cell colonies on the dishes were stained with 1 ml of 0.5% Crystal Violet for more than 1 h and the colonies were counted.
Xenograft transplantation in vivo
Four-weeks-old male athymic Balb/C mice were purchased from Shi Laike Company (Shanghi, China) and maintained in the Tongji animal facilities approved by the China Association for Accreditation of Laboratory Animal Care. The athymic Balb/C mice were injected in the armpit area subcutaneously with Hep3B suspension of 1 × 10 6 cells in 100 μl of PBS. The mice were observed over 4 weeks, and then sacrificed to recover the tumors. The wet weight of each tumor was determined for each mouse. A portion of each tumor was fixed in 4% paraformaldehyde and embedded in paraffin for histological hematoxylin-eosin (HE) staining.
Ethics statement
All methods were carried out in "accordance" with the approved guidelines. All experimental protocols "were approved by" a Tongji university institutional committee. Informed consent was obtained from all subjects. The use of mice was reviewed and approved by the China national institutional animal care and use committee".
MEG3 inhibits liver cancer cell growth in vitro and in vivo
To investigate whether MEG3 inhibited the malignant growth of human liver cancer cell line Hep3B, we first established two stable Hep3B cell lines transfectd with pCMV6-A-GFP (GFP ctrl), pCMV6-A-GFP-MEG3 (MEG3), respectively. As shown in Fig. 1a, the expression of MEG3 was significantly increased in MEG3 overexpressing Hep3B on the transcriptional level. As shown in Fig. 1b, excessive MEG3 significantly decreased the growth of liver cancer cell Hep3B compared to the control cells (P < 0.01). We further performed plate colony formation assay and observed a significant decrease in colony formation efficiency rate in excessive MEG3 (74.67 ± 4.04 versus 19.33 ± 1.15, P = 1.0957E−05 <0.01) (Fig. 1c, d). To explore the effect of MEG3 on liver cancer cells in vivo, the two stable Hep3B were injected subcutaneously into athymic Balb/C mice. As shown in Fig. 2a-c, when MEG3 was overexpressed, the xenograft tumor weight decreased approximately one-third compared to the corresponding control group (0.22152 ± 0.07382 g versus 0.07042 ± 0.0652 g, P = 0.004061372 <0.01); when MEG3 was overexpressed, the xenograft tumor size decreased approximately one-fifth compared to the corresponding control group (0.15508 ± 0.1035 cm 3 versus 0.03125 ± 0.05229 cm 3 , P = 0.007228 <0.01). Moreover, compared to control, xenograft tumors contained less of poorly differentiated cells in MEG3 overexpression group (Fig. 2d, upper). The proliferation index (calculated as percentage of PCNA-positive cells) and Ki67 were significantly lower in MEG3 overexpressing xenograft tumors compared to the control group (Fig. 2d, middle and lower). Taken together, these findings demonstrate that MEG3 inhibits malignant progression of liver cancer cells in vitro and in vivo. H3K9me3, H3K36me3, RNA polII on the miR122 promoter region (Fig. 3b). Furthermore, MEG3 overexpression enhanced the binding of Dicer, Exportin5 to the pre-miR122 probe (Fig. 3c). Importantly, the pei-miR122, pre-miR122, and mature miR122 were significantly increased in excessive MEG3 group compared to the control group (Fig. 3d, e). Surprisingly, miR122 targets for 3′-untranslational region (UTR) of PKM2 (Fig. 4a), and inhibits PKM2 3-UTR luciferase activity (Fig. 4b) and PKM2 expression (Fig. 4c). Taken together, MEG3 promotes the expression and maturation of miR122 which targets PKM2 and inhibits the expression of PKM2.
MEG3 inhibits localization and function of PKM2
To explore whether MEG3 could influence the function of PKM2, we selected PKM2-upregulated C-Myc and CyclinD1. At first, the results showed that PKM2 3′-UTR luciferase activity was significantly decreased in pCMV6-A-GFP-MEG3 group compared to the control group (P < 0.01) (Fig. 4d). Furthermore, MEG3 could not significantly alter the loading of CTCF and RNA polII on the promoter region of PKM2 (Fig. 4e). Thus, the PKM2 mRNA was significantly unchanged in pCMV6-A-GFP-MEG3 group compared to the control group (Fig. 4f). Significantly, the PKM2 expression was significantly decreased in pCMV6-A-GFP-MEG3 group compared to the control group (Fig. 4g). However, when the miR122 was inhibited, the PKM2 expression was significantly unaltered in pCMV6-A-GFP-MEG3 group compared to the control group (Fig. 4h). Moreover, MEG3 could reduce the ERK1/2 expression (Fig. 5a) and the interplay between ERK1/2 and pPKM2(Ile 429/Leu 431) (Fig. 5b). Therefore, The PKM2(ser37) expression was significantly decreased in pCMV6-A-GFP-MEG3 group compared to the control group (Fig. 5c). Furthermore, the nuclear PKM2 was significantly reduced in pCMV6-A-GFP-MEG3 group compared to the control group (Fig. 5d, e). Strikingly, the interaction between PKM2 and Histone H3 was significantly decreased in pCMV6-A-GFP-MEG3 group (Fig. 6a). Thus, MEG3 decreased pHiatone H3(T11), H3K9Ac, and increased H3K9me3. However, PKM2 knockdown abrogated the MEG3 action (Fig. 6b). Furthermore, MEG3 decreased the loading of H3K9Ac on CyclinD1 and C-Myc promoter region (Fig. 6c). Ultimately MEG3 decreased the expression of CyclinD1 and C-Myc. However, the expression of CyclinD1 and C-Myc did not alter in Hep3B cell line with MEG3 overexpression plus PKM2 knockdown (Fig. 6d). Taken together, these observations suggest that MEG3 decreased the PKM2 expression and nuclear location dependent on miR122, and then inhibited CyclinD1 and C-Myc via PKM2.
MEG3 inhibits β-catenin activity through PKM2 reduction and PTEN increase
To explore whether MEG3 influenced β-catenin activity, we analyzed the activity of β-catenin co-activators LEF and TCF-4 in liver cancer cells. As shown in Fig. 9a, MEG3 overexpression decreased the interaction between pPKM2 and β-catenin. Thereby, MEG3 overexpression decreased the expression and nuclear localization of βcatenin (Fig. 9b). Furthermore, MEG3 overexpression decreased the interaction between β-catenin and LEF, TCF4 in liver cancer cells. (Fig. 9c). Therefore, MEG3 overexpression decreased the binding of β-catenin to LEF/ TCF4 probe (Fig. 9d). In particular, MEG3 overexpression increased the interplay between β-catenin and PTEN, and decreased the interplay between β-catenin and LEF, TCF4. However, the MEG3 action was abrogated when the PTEN was knocked down (Fig. 9e). MEG3 overexpression decreased the activity of LEF/TCF4 (Fig. 10a). Furthermore, MEG3 overexpression decreased the loading of Cmyc promoter region and CyclinD1 promoter region (Fig. 10b). Thereby, MEG3 overexpression decreased the promoter luciferase of C-myc (Fig. 10c) and CyclinD1 (Fig. 10d). However, the MEG3 action was abrogated when the β-catenin was knocked down (Fig. 10c, d). Finally, MEG3 overexpression decreased the expression of C-myc and CyclinD1 on the level of transcription and translation. However, the MEG3 action was abrogated when β-catenin was knocked down (Fig. 10e). Collectively, these observations suggest that MEG3 inhibits β-catenin activity through PKM2 reduction and PTEN increase in liver cancer cells.
Discussion
It has been confirmed that MEG3 encodes an lncRNA which is suggested to function as a tumor suppressor and has been shown to involve in a variety of cancers. Our studies are now indicated to evaluate the effects of MEG3 in liver cancer cells. Our findings demonstrate that MEG3 inhibits the malignant progression of liver cancer cells in vitro and in vivo. Mechanistically, MEG3 promotes the expression and maturation of miR122 which targets PKM2. Therefore, MEG3 decreased the PKM2 expression and nuclear location dependent on miR122. Furthermore, MEG3 inhibited CyclinD1 and C-Myc via PKM2 in liver cancer cells. Strikingly, MEG3 promotes β-catenin degradation through ubiquitin-proteasome system dependent on PTEN. Moreover, MEG3 inhibits β-catenin activity through PKM2 reduction and PTEN increase. Furthermore, we found that excessive β-catenin rescued the effect of MEG3 in liver cancer (Fig. 13). To our knowledge, this is the first report demonstrating that lncRNA MEG3 suppresses liver cancer cells growth through β-catenin by activating PKM2 and PTEN.
To date, accumulating evidence indicates that MEG3 plays a critical role in cancer progression and metastasis. Fig. 12 β-catenin determines MEG3 suppressor function in vivo. A Tumorigenesis test in vivo. The mice were stratified and the tumors were recovered. The wet weight of each tumor was determined for each mouse. Each value was presented as mean ± standard error of the mean (SEM). **, P < 0.01. B The appearance time of each tumor was determined for each mouse. Each value was presented as mean ± standard error of the mean (SEM). **, P < 0.01. C A portion of each tumor was fixed in 4% paraformaldehyde and embedded in paraffin for histological hematoxylin-eosin (HE) staining and PCNA staining (DAB stainning, original magnification ×100).
Previous studies suggested that MEG3 functioned through the activation of p53; however, the functional properties of MEG3 remain obscure and their relevance to human diseases is under continuous investigation 43 . Crosstalk between MEG3 and miR-1297 regulates the growth of testicular germ cell tumor through PTEN/ PI3K/AKT pathway 44 . MEG3 may be an underlying therapeutic target for LUAD functioning as ceRNAs for the regulation of miRNA-mRNA in lung adenocarcinoma 45 . In addition, MEG3 was decreased in primary endometrial stromal cells (ESCs) in response to TGF-β1 stimulation 46 .
Evidently, our results indicate that the involvement of MEG3 inhibition of liver cancer cell growth is supported by results from two parallel sets of experiments: 1 MEG3 is downregulated and is postively associated with miR122, PTEN and negatively associated with PKM2, β-catenin expression in human liver cancer tissue 2 . MEG3 inhibits malignant progression of liver cancer cells in vitro and in vivo. Our observations demonstrated that MEG3 is Fig. 13 The schematic illustrates a model that long noncoding RNA MEG3 suppresses liver cancer cells growth through β-catenin by activating PKM2 and inactiviating PTEN. MEG3 promotes the expression and maturation of miR122 which targets PKM2. Therefore, MEG3 decreased PKM2 expression and nuclear location dependent on miR122. Furthermore, MEG3 inhibited CyclinD1 and C-Myc via PKM2 in liver cancer cells. Strikingly, MEG3 promotes β-catenin degradation through ubiquitin-proteasome system dependent on PTEN. Moreover, MEG3 inhibits βcatenin activity through PKM2 reduction and PTEN increase. Furthermore, we found that excessive β-catenin rescued the effect of MEG3 in liver cancer crucial for the inhibition of cell growth and viability in liver cancer cells. According to the aforementioned findings, MEG3 is a tumor suppressor.
Of significance, our findings clearly showed that MEG3 promotes the expression and maturation of miR122 which targets PKM2 and inhibits the expression of PKM2. Studies indicate that alcoholic hepatitis accelerates early hepatobiliary cancer by increasing stemness and miR122mediated HIF-1α activation 47 . Furthermore, miR122 is implicated as a regulator of physiological and pathophysiological processes in the liver. Gα12 overexpressed in hepatocellular carcinoma reduces microRNA-122 expression via HNF4α inactivation, which causes c-Met induction 48 . A quantitative mathematical model of HCVinduced miR-122 sequestration proposes that such miR122 inhibition by HCV RNA may result in global derepression of host miR-122 targets, providing an environment fertile for the long-term oncogenic potential of HCV 49 .
Accordingly, reduction in PKM2 may partly contribute to MEG3-mediated inhibition of liver cancer cell growth. Our findings in this study provide a novel evidence for an active role of PKM2 in MEG3-mediated inhibition of liver cancer cell growth. This assertion is based on several observations 1 : MEG3 decreased the PKM2 expression dependent on miR122 2 . MEG3 inhibits nuclear localization and function of PKM2 dependent on miR122 3 . MEG3 inhibits the expression of cyclinD1 and C-Myc via PKM2. These findings are noteworthy, given that PKM2 is and functions as a key oncogene to mediate various biological processes including cell proliferation and differentiation. Moreover, PKM2 is associated with cancer differention 50,51 . Pyruvate kinase M2 activates mTORC1 by phosphorylating AKT1S1 52 and PKM2 promotes tumor angiogenesis by regulating HIF-1α through NF-κB activation 53 . In particular, cytosolic PKM2 stabilizes mutant EGFR protein expression through regulating HSP90-EGFR association 54 . PKM2 promotes stemness of breast cancer cell by through Wnt/β-catenin pathway 55 . In addition, miR675 upregulates lncRNA H19 through activation of EGR1 in human liver cancer 56 . However, a study failed to observe PKM2-dependent transfer of phosphate from ATP directly to protein 57 . Furthermore, our findings indicated that MEG3 inhibited the expression of C-myc, whereas C-myc decides the reprogramming metabolism of cancer 58 .
Strikingly, we also demonstrated that MEG3 is closely associated with PTEN and β-catenin in liver cancer cells. This assertion is based on several observations 1 : MEG3 increased the expression and phosphorylation of PTEN 2 . MEG3 promotes β-catenin degradation through ubiquitin-proteasome system dependent on PTEN 3 . MEG3 inhibits β-catenin activity through PKM2 reduction and PTEN increase in liver cancer cells 4 . MEG3 inhibited cell growth, colony formation ability, and cell growth in vivo. However, β-catenin overexpression abrogated the MEG3 action. β-catenin determines MEG3 suppressor function in liver cancer cells.
It is well known that PTEN is a lipid phosphatase that converts phosphatidylinositol 3,4,5-phosphate (PIP3) to phosphatidylinositol 4,5-phosphate (PIP2) and plays a critical role in the regulation of tumor growth 59,60 . Notch promotes tumor metastasis in a prostate-specific PTENnull mouse model 61 . miR-18a promotes cell proliferation of esophageal squamous cell carcinoma cells by increasing cylin D1 via regulating PTEN-PI3K-AKT-mTOR signaling axis 62 . Furthermore, PTEN is a key molecular controller of the PI3K signaling, and PI3K-PTEN dysregulation leads to mTOR-driven upregulation of the core clock gene BMAL1 in normal and malignant epithelial cells 63 . In particular, PTEN negatively regulates mTORC2 formation and signaling in grade IV glioma via Rictor hyperphosphorylation at Thr1135 and directs the mode of action of an mTORC1/2 inhibitor 64 . In addition, SOX7 co-regulates Wnt/β-catenin signaling with Axin-2 65 and FAK promotes osteoblast progenitor cell proliferation and differentiation by enhancing Wnt signaling 66 . Moreover, FOXKs promote Wnt/β-catenin signaling by translocating DVL into the nucleus 67 and ECM1 regulates tumor metastasis and CSC-like property through stabilization of β-catenin 68 . A report shows that microRNA-153 promotes β-catenin activation in hepatocellular carcinoma through the suppression of WWOX 69 . Also, there is a regulation function between Wnt/β-catenin signaling and PI3K/Akt survival pathway 70 .
Conclusion
The present study depicts a novel evidence for MEG3 that plays inhibiting tumorigenesis roles by downregulating PKM2 and β-catenin in liver cancer cells, which may have potential therapeutic significance. Alteration of the expression of lncRNAs MEG3 may also mediate changes at an epigenetic level to affect gene expression and contribute to inhibiting hepatocarcinogenesis. MEG3 overexpression in combination with blocking PKM2 and β-catenin might represent a promising treatment strategy targeting tumors. Our study for the first time demonstrated that MEG3 acts as a tumor suppressor by negatively regulating the activity of PKM2 and β-catenin in hepatocarcinogenesis and might serve as a prognostic biomarker and molecular therapeutic target. | 5,303.2 | 2018-02-15T00:00:00.000 | [
"Biology"
] |
Recent progress on optical rogue waves in fiber lasers: status, challenges, and perspectives
Abstract. Rogue waves (RWs) are rare, extreme amplitude, localized wave packets, which have received much interest recently in different areas of physics. Fiber lasers with their abundant nonlinear dynamics provide an ideal platform to observe optical RW formation. We review recent research progress on rogue waves in fiber lasers. Basic concepts of RWs and the mechanisms of RW generation in fiber lasers are discussed, along with representative experimental and theoretical results. The measurement methods for RW identification in fiber lasers are presented and analyzed. Finally, prospects for future RW research in fiber lasers are summarized.
Introduction
Rogue waves (RWs), also known as "extreme waves," "freak waves," and "abnormal waves," are the waves that are much greater in amplitude than the close-by waves, unpredictable, and usually appearing unexpectedly from directions other than dominant wind and waves. 1,2 The concept of RWs is believed to be first established in the ocean, in reference to the giant waves on the surface of the sea. In oceanography, RWs can be defined as extreme waves with a height more than twice the significant wave height (SWH), which is the mean amplitude of the largest third of waves. According to this definition, RWs are not necessarily the biggest waves found in the ocean, but they are extremely dangerous even to large ships such as ocean liners because of their unexpected and sudden appearance.
The RW concept is also extended to other fields of science, such as matter physics, superfluidity, optics, and even economics. 3 There have been various RWs studied, including oceanic RWs, 1,2,4 optical RWs, 5 acoustic RWs, 6,7 capillary RWs, 8 electromagnetic RWs, 9,10 and even financial RWs. 11,12 Several defining properties of RWs can be summarized in three points. First, a large amplitude is required, typically more than twice that of the average amplitude of the highest third of the waves (called SWH). Second, unpredictability of the pulse should be fulfilled. Third, RWs should be rare, i.e., probability distribution function of the wave amplitude should have an L-shape (or other specific long-tail shape). 13 Currently, it is well known that RWs are generated in the nonlinear systems. 14,15 However, the mechanism driving the emergence of RW is different, depending on the properties of the system. 14 In the field of optics, the description of RW generation is typically described by the nonlinear Schrödinger equation (NLSE), 15,16 which also governs the pulse propagation and soliton formation in the media. [17][18][19][20][21][22][23][24][25][26] Indeed, RW dynamics are closely related to the nonlinear breather and soliton formation induced by modulation instability. 27 Within the framework of the one-dimensional NLSE, Peregrine solitons described by a class of nonlinear Akhmediev breather 28 are considered as a prototype of RW. [27][28][29][30][31] Experimentally, in nonlinear optical systems, RWs, also called optical RWs, were first investigated through the supercontinuum (SC) generation process based on the optical fibers. 5 From then on, there have been many studies directed to generating RWs in a variety of optical systems. Optical fiber oscillating systems are well known for providing convenient platforms to investigate versatile fundamental nonlinear phenomena, such as modulation instability, [32][33][34] soliton formation and dynamics, 21,35 and self-similarity. 36 Study of optical RW in fiber lasers has attracted plenty of attention since its first demonstration in 2011. 13,37 The investigation of the mechanisms of optical RWs in fiber lasers has enabled researchers to deeply understand the generation principle of optical RWs, which can offer a chance to control the operation of optical RWs. There have been several review articles covering the previous study of RWs. 38,39 However, to the best of our knowledge, there is no specific review on the dynamics of RWs in fiber lasers.
In this review, the latest research progress on optical RWs in fiber lasers is highlighted. The scope of the paper is mostly focused on experimental investigation of RWs in fiber lasers. In Sec. 2, a brief introduction to the basic concept of optical RWs is given, along with a comparison between optical RWs and ocean RWs. In Sec. 3, we discuss the experimental methods of generating optical RWs in nonfiber lasers. In Sec. 4, we introduce experimental observation of RWs in fiber lasers. In Sec. 5, various measurement methods of optical RWs are discussed. The challenges and outlook on optical RWs in fiber lasers will be discussed in Sec. 6.
Basic Concept of Optical Rogue Waves
An optical RW corresponds to extreme optical pulses that appear suddenly and rarely. A remarkable characteristic of optical RWs is their exceptionally large amplitudes; the largest ones have an intensity at least 30 to 40 times the average intensity. 5 RWs are closely related to modulation instability and soliton formation, which are all developed in a nonlinear optical system. The role of modulation instability on the RWs is demonstrated in Ref. 14, where it is shown that modulation instability is crucial for RW generation in many optical systems.
A number of theoretical studies have been advanced for optical RW generation. In 2013, Akhmediev et al. 38 previewed the development of optical RWs. In 2016, a roadmap on the optical RWs was summarized by Akhmediev et al.; 40 thanks to their review, research of RWs is developing fast.
Comparison Between Ocean RWs and Optical RWs in Fiber Lasers
Apart from the optical RWs, the ocean RWs are also greatly important. There are various physical processes in ocean systems, such as wave breaking, dissipation, currents, and wind force. 41 The wave breaking is a natural nonlinear process while the dissipation, currents, and wind force are either nonlinear or linear. In a word, the observations of ocean RWs are very complicated. Actually, there are similarities and differences between ocean RWs and optical RWs. In both cases, there is a similar mathematical equation in the form of an NLSE, which can be used for describing the evolutionary process of the envelope in time and space. 41,42 In fiber laser, there is the sinusoidal underlying carrier wave at frequency ω while there is the Stokes wave modulated by the NLSE envelope, which (to the second order) includes contributions at both ω and the second harmonic 2ω. 43 In both cases, the measurement methods in the domain are also different. In the fiber laser experiments, only the time-domain envelope intensity is generally measured, and there is no information about carrier oscillations recorded. However, there are many individual carrier wave amplitudes directly recorded in oceanic systems, which are more complicated. In addition, the statistics in both systems are usually taken into account. However, there are important differences. In fiber laser experiments, the statistics are determined by the peaks of intensity envelopes. However, in water waves, the statistics are generally dominated by the amplitudes (or trough-to-crest heights or crest heights) of individual waves. In addition, in the fiber laser, the criterion of the RW generation is that its amplitude (the envelope peak intensity) is more than twice that of SWH. In the ocean system, there is the same criterion, but it is expressed in terms of the trough-to-crest height. Although there is an analogy between the generation of ocean RWs and the propagation of pulses in fiber lasers, due to the complexity of ocean RWs, more precisely targeted research in their natural environment is urgently required.
Experimental Observation of Optical Rogue Waves in Nonfiber Lasers
Optical RWs have been experimentally verified in plenty of physical systems. Solli et al. 5 demonstrated the first observation of optical RWs, which was based on a platform of SC generation in a photonic crystal fiber. Since then, a variety of nonlinear optical systems have been used for generating RWs. Apart from the SC process, [44][45][46][47][48][49][50][51] there are other nonlinear optical schemes, such as mode-locked pulse fiber laser [52][53][54][55] and Raman amplifier systems, 56,57 which also provided the excellent platforms for investigating the generation of RWs. However, most of these research works in versatile nonlinear optical systems are concerned with the observation of optical RWs, and there is also a strong motive to deeply investigate physical mechanisms of optical RW formation. In this section, RW generation in different platforms apart from fiber lasers is summarized.
In the case of SC generation, an ultrashort pulse generated from a laser was typically inserted into a segment of highly nonlinear optical fiber. The RWs were captured by a real-time measurement system based on time stretching, which will be further discussed in Sec. 5. A typical diagram of experimental setup is shown in Fig. 1. RWs can appear as rare solitons. It has been shown that the optical rogue structures could be efficiently isolated by an adequate spectral filtering based on an offcentered optical band pass filter. 5,45,58 In addition, rogue-wavelike extreme value fluctuation in Raman fiber amplifier systems was first reported by Hammani et al. 57 A typical diagram of experimental setup of Raman RW generation is shown in Fig. 2. In 2012, they experimentally reported the observations of extreme optical fluctuations in lumped Raman fiber amplifiers. 59 In addition, RW statistics during high power femtosecond pulse filamentation in air were reported in 2008. 60 In these reports, the RWs are typically in a conservative system without gain and loss in the system, which is distinct from a fiber laser system. In nonconservative systems, deterministic RWs were found Fig. 1 The optical set-up for RW generation in a supercontinuum system. Reproduced with permission from Ref. 5. in an optically injected semiconductor laser 61 and semiconductor laser with saturable absorber for the two-dimensional (2-D) case. 62
Optical Rogue Waves in Fiber Lasers
Fiber laser, as a dissipative nonlinear optical system, has been intensively employed for the study of optical solitons. 21,[63][64][65] Soliton dynamics including soliton interactions, 66-68 soliton molecules, 69-71 soliton rains, 72-75 noise-like pulses (NLPs), 76,77 and soliton explosions, 78 which could be highly related to the RW generation, has been intensively studied in ultrafast fiber lasers. Therefore, fiber lasers also provide an appropriate platform for the generation of dissipative RWs. 79 In a fiber laser, dynamic of RWs can be measured within each round trip. 40 RWs in fiber lasers were experimentally studied as early as 2011 13 and numerically studied in 2012. 80 Since then, the study of dissipative RWs in fiber lasers has been rapidly developing. [53][54][55]79,[81][82][83][84][85][86][87][88][89][90][91][92][93][94][95][96][97] RWs in fiber lasers can be categorized by pulse duration as three types, 94 namely slow RWs, fast RWs, and ultrafast RWs. These RWs are generated by different mechanisms. Ultrafast RWs are difficult to measure using the traditional method, which will be discussed in Sec. 4. According to the formation mechanism, there are mainly three kinds of dissipative RWs generated in the fiber lasers. 98 The first type of RWs can be achieved via the chaotic structures among the NLPs. The second one is dark three-sister RWs, 99 and the third one is the pulse waves generated from the multiple-pulse interaction, 100,101 which have been identified as the aperiodically generated temporal structures.
Slow Rogue Waves
Slow RWs are typically with pulse duration from seconds to microseconds and are typically generated in fiber lasers by pump modulation 13 or altering the laser gain. 80 An experimental study in 2016 showed that, by altering the birefringence of the laser cavity, vector RWs can be observed at the pump power slightly above laser threshold. 87 The as-observed optical RWs are generated based on the interaction between the polarization modes with duration from 98 to 255 μs, which can be classified as a type of slow RW. Sergeyev et al. claimed that the increased in-cavity birefringence strength could cause the spatial modulation of the polarization state of the in-cavity lasing field. Based on their numerical predication, a precise polarization control of the pump and the intracavity laser field emitted RWs in an erbium-doped fiber laser (EDFL) has been demonstrated. 102 The typical experimental setup of EDFL is shown in Fig. 3.
Rogue Waves Generated by Soliton Interaction
Fast RWs typically have durations of hundreds of nanoseconds to tens of picoseconds. Fast RWs are typically generated by soliton interaction in mode-locked fiber lasers (MLFLs). Dissipative RWs generated by chaotic pulse bunching are reported in the literature, 81 and Peng et al. 89 reported RW generation based on the soliton collision. Peng and Zeng 103 demonstrated the generation of RWs among the soliton molecules by the soliton interactions, which could be related to the cavity dissipative effects and high pulse energy. RWs can appear via soliton collisions, producing events with high redshifted energy. 104,105 The energy exchange between the solitons is promoted by Raman effects and third-order dispersion. 106,107 When the dissipative nonlinear optical systems deviate from equilibrium state, the fiber lasers can produce short and low coherence pulse packets. Such peculiar pulse regime has been first reported in details from the MLFL experiment in 1997, 108 which is then called NLPs. NLPs have been found in the fiber lasers based on the multiple mode-locking mechanisms 96,[109][110][111][112][113][114] and, therefore, are characterized with universality. In other words, NLP generation is quite generic dynamics for partially modelocked lasers that emit pulse packets of optical noise burst with the fundamental frequency or the harmonics. There are, however, some factors, including the long cavity and the high pumping power, that are quite conducive to generating the NLPs. In the early days, due to the lack of real-time detecting techniques adapted to the time scales of the NLP structures with picosecond or subpicosecond time scales, it is difficult to resolve the internal structure of the NLPs, which increases the sense of mystery about their detailed characteristics and physical forming process to some extent. The measurements based on the commercial optical spectrum analyzer in the NLP regime, generally show the characteristics of stable, smooth, and wideband spectra, 76,110,[115][116][117][118] which may be broader than the bandwidth of the gain medium. In addition, the NLPs possess a special autocorrelation trace, with a double-scaled structure with an ultrashort coherent spike located in a wide pedestal, which cannot represent the pulse width of the NLPs. In fact, the narrow peak reflects the typical temporal timescales of the internal noisy pulse packets; the broad baseline suggests that the pulse regime consists of packets with picosecond or subpicosecond range, possessing the fine inner temporal structure with randomly diverse noisy pulse. 119 At this stage, due to the low-level information collection through the traditional measuring scheme, including the averaged spectral measurement and the autocorrelation recording, it has been difficult to figure out the formation of the NLPs. In fact, the majority of chaotic pulses, including NLPs, found in the fiber lasers have not yet been resolved in real time. The temporal duration of these pulses is usually in the range of picosecond or subpicosecond, which is smaller than or equal to the temporal resolution of the photoelectric detection system. In addition to the improvement of the electronic detection bandwidth, there is another way to realize the fast detection in real time, i.e., to record shot-to-shot spectra based on the high-speed real-time oscilloscope. In order to achieve such shot-to-shot spectral measurements, a new detection technique can be applied, which is known as the dispersive Fourier-transform (DFT) technique. 120 In the fiber lasers, the DFT technique is generally implemented by sending the ultrafast output pulses through a long fiber with either positive or negative dispersion, producing the sufficient accumulated dispersion so that the spectral fluctuations of these pulses are mapped into a temporal intensity waveform, which can be captured by the real-time oscilloscope with high electronic bandwidth. In this way, shot-to-shot spectra of the internal pulse dynamics can be analyzed. DFT has been used for observing the generation of RWs in the NLP regime. 82,[121][122][123][124] However, it is important to note that not all the NLPs could be considered as RWs. When the pulse-energy distribution of the NLPs is always Gaussian profile, this pulse state may be not the RW regime. 124 In the literature, 123 even though the pulse-energy distribution of the NLPs in the normal dispersion is nearly Gaussian, the distribution of the peak optical spectral intensity for these pulses displays the obvious non-Gaussian statistics, which implies that this NLP regime could be related to RWs. In the former, the observation of the little deviation from Gaussian statistics is mainly caused by the insufficient temporal resolution of the detection scheme; the DFT technique is implemented in the latter, which can significantly improve the temporal resolution.
Recent Works
Apart from the above-mentioned methods, there are also several observations of RWs in fiber lasers reported in the last 3 years. Stimulated Brillouin scattering (SBS) has been recently considered as a trigger effect for the generation of RWs. Experimentally, Brillouin scattering-induced RWs in self-pulsing fiber lasers, 91 Q-switched random laser, 125 and high power amplifier 126 were reported. Boukhaoui et al. numerically studied the influence of SBS on the occurrence of RWs in self-pulsing fiber lasers. 127 They showed that the RW generation in the SBS process is highly related to high-order Stokes generation while acoustic noise effect is negligible for the occurrence of extreme events. Recently, dissipative RWs generated in a linear cavity normal dispersion ytterbium-doped fiber laser have been reported. 55 The as-mentioned laser is mode locked by SESAM, and a chirped fiber Bragg grating was introduced into the cavity for dispersion compensating. It is claimed that the generation of RWs may be attributed to the filtering effect of the chirped fiber Bragg grating, which induces multipulsing instability to the cavity. In 2018, researchers demonstrated observation of optical RWs in the fiber laser with the generation of random dissipative soliton. 95 It was shown that, with proper adjustment of the cavity parameters, i.e., intracavity polarization state and pump power level, the random dissipative soliton buildup can be obtained in multiple-pulse regime. Along with the process of dissipative soliton buildup, high-amplitude waves were analyzed by studying the real-time spectral dynamics and the temporal pulse trains, which was considered as further confirmation of optical RWs using the method of statistics. The achieved results offer a promising choice for the investigation of the optical RW phenomenon in the pulsed fiber lasers and are valuable for further revealing the physical mechanism for optical RW generation.
Cai et al. 121 reported on the generation of RWs among the NLPs in the mode-locked EDFL with microfiber-based graphene saturable absorber (see Fig. 4). The pulse regime shows the smooth and broad optical spectrum and the temporal trains with a fundamental frequency of 7.35 MHz. This pulsating state has an autocorrelation trace with a narrow coherent peak rooted from the wide shoulder. The statistical distribution for the pulseamplitude fluctuations of the NLP packet is shown in Fig. 5(a). As shown, this distribution curve exhibits an obvious structure of elevated tails, which is non-Gaussian. In addition, the intensity of the maximal amplitude is more than twice the intensity of SWH that is one of the key criteria for generating RWs.
Finally, by utilizing the DFT technique, they provided the evolution of the sectional NLP packet in several roundtrips, as shown in Fig. 5(b). From this figure, one can see that there is a clear chaotic wave with large amplitude appearing in the NLP packet, which is similar to the stroboscopic recording of the RW event in the literature 128 and to reported numerical simulations of dissipative RWs. 37 These experimental results suggest that there are typical RWs appearing in the NLP regime. In addition, Wang et al. demonstrated in 2018 by numerical simulations dissipative RWs among the NLPs, providing in such a way a possibility to investigate their evolution, 96 as shown in Fig. 6. From this figure, it can be seen that, for a saturation energy of E sat ¼ 0.06 and 0.12 nJ, the evolution of the pulse did not clearly lead to an NLP regime but to stable single-pulse and two-pulse operations, respectively. When the value of E sat is set to 0.4 nJ, more pulses are obtained. By further increasing the saturation energy to 8 nJ, RWs appear among the NLP regime. In other words, with the increment of E sat , the pulse number in the laser cavity also increases, which can lead to the formation of the many pulse bunches. And the pulse-to-pulse interaction in these bunches enables the formation of the RWs. 37,79,128 Figure 7 shows the theoretical statistical properties of the pulses for different E sat values. Obviously, the highest amplitude for each E sat value is more than twice the SWH, which confirms the generation of optical RWs.
Optical Rogue Wave Measurement
For the measurement of slow optical RWs, it is convenient to use a high-speed oscilloscope combined with a wide bandwidth photodetector. 53,79,128 Ultrafast RWs cannot be directly measured by real-time oscilloscope. Indeed, there are two challenges for the real-time measurement of ultrafast RWs: the limitation of the data converter and the trade-off between the sensitivity and the speed of the optoelectronic front-end. Currently, there have been mainly two measurement methods developed for ultrafast RWs: time stretching and time lensing.
Time Stretching
Time stretching is a real-time measurement technique based on DFT, 120 which enables fast real-time measurements in optical imaging and spectroscopy. The DFT technique can map the optical spectrum to temporal pulse waveform by a dispersive medium: the intensity envelope in the time domain is equivalent to the optical spectrum as, e.g., measured by optical spectrum analyzer. For this to happen, one should satisfy a certain condition: the pulses are properly stretched by the dispersive element so that the corresponding temporal waveform is equivalent to the analogy of the far-field diffraction condition in the spatial domain. A typical schematic diagram of time-stretching technique is shown in Fig. 8. The waveform of the input pulses can be stretched in time by the dispersive element with large group-velocity dispersion. Then, the output pulse trains are captured by the high-speed photodetector and oscilloscope, realizing the real-time measurement. Herein, the chirped fiber Bragg grating, a normal dispersion fiber or an anomalous dispersion fiber can be used as dispersive element. In general, the normal dispersion fiber is used in the vast majority of the reports with the DFT technique, because the anomalous dispersion fiber may have a lower threshold for nonlinearity and necessitate lower power levels (reducing the signal-to-noise ratio at the [137][138][139] In 2014, the Raman RW generation in the pulse fiber lasers was provided by the research group of Runge et al. 82 By employing the pulse stretching method, the statistical histograms of wave events in more detail were investigated and the spectral evolution of RWs in real time was analyzed. Also in 2014, Lecaplain et al. 123 demonstrated RW emission in a fiber laser operating in the NLP regime. In the experiment, they used time-stretching measurement method to make the statistical distribution histogram of pulse spectral intensity, which could display the strong deviation from the Gaussian shape and the typical long-tail structure. In addition, the maximal amplitude was more than twice the SWH. Clearly, these characteristics indicated the generation of RWs. In 2015, researchers reported RWs in the Yb-doped fiber laser with normal dispersion. 124 The consecutive single-shot spectra of RWs were presented by the time-stretching measurement. Chowdhury et al. 55 presented experimental investigation of RWs in the linear cavity Yb-doped fiber laser. They employed the dispersive Fourier transform method to observe the existence of RWs and to analyze the corresponding spectral evolution. In short, using the time-stretching method, RW generation can be effectively verified. However, the phase information of the RWs is usually missing. 140,141 Therefore, more measuring methods should be considered to further investigate the comprehensive characteristics of RWs.
Time Lensing
Time lensing comes from a temporal imaging system, which is analogous to spatial imaging system. A time lens is capable of compressing or expanding the pulse width of optical waveforms without distortion. 142,143 Time-lensing measurements can support real-time measurement of ultrashort pulses with a subpicosecond resolution. 143,144 The time-lensing method has been applied to the research of incoherent soliton propagation in optical turbulence 145 and stochastic breather emergence in modulation instability. 146 Using the time-lensing method, ultrafast RWs in a vector field have been demonstrated. 147 In the time-lensing measurement of RWs, the imaged signal must be synchronized for a specific timing. 145 The typical experimental observation system of time-lensing measurement is shown in Fig. 9. The statistical distribution with heavy-tailed structure confirmed the generation of RWs. In 2016, the researchers reported the generation of RWs events in the fiber lasers using the real-time measurement based on the time-lensing methods. 146 Li et al. 148 demonstrated the observation of optical RWs in MLFL operating in the NLP state by utilizing the time-lensing technique. In addition, they investigated the round-trip tracking evolution and the detailed temporal patterns of RWs in the time domain at sub-ps resolution. However, compared with the time-stretching measurement, the measuring system of the time-lensing method is more complex, which can increase the experimental cost to some extents.
Hybrid Method
Time stretching and time lensing are powerful tools to observe fast RWs in fiber lasers. In Ref. 149, systematic and dedicated experimental research on wave-packet formation and shot-toshot coherence in quasi-mode-locked operation is carried out. Combining the time-stretching and time-lensing methods, simultaneous measurement of spectral and temporal profiles of the soliton dynamics and RWs can be performed. The combination enabled real-time measurement of both the phase and intensity of RWs and unveiled different temporal patterns. 146,150,151 Ryczkowski et al. 152 demonstrated the real-time full-field characterization of unstable pulses in a fiber laser through a saturable absorber mirror (SAM) by simultaneously employing the timestretching and time-lensing techniques. The simultaneous use of two methods is capable of completely characterizing the realtime evolution of RWs in the spectral and temporal domains, which will be a better way for investigating the generation and dynamics of RWs in fiber lasers in the future (Fig. 10).
Other Measurements
Apart from the above methods, the direct measurement of RWs in fiber lasers can be conducted by the oscilloscope in some conditions. For the pulse fiber lasers, the pulse amplitudes can be recorded to draw the statistical distribution histogram by utilizing the oscilloscope with the high electronic bandwidths. 79,128 When the pulse repetition frequency is low enough and the time interval between the pulses is sufficiently large, the histogram of pulse amplitude can be created using an oscilloscope with a relatively low electronic bandwidth to continuously record the amplitudes of plenty of pulses and analyze the total pulse intensity. Events with pulse amplitude larger than twice the SWH can prove the generation of RWs. This measuring method based on the oscilloscope can be simpler and more convenient than the time stretching and time lensing in pulse fiber lasers. Liu et al. 53 reported the generation of optical RWs in a pulse fiber laser. In their experiment, the repetition frequency was 5.03 MHz and the corresponding pulse interval was 198.8 ns. They utilized 8-GHz oscilloscope to record 10 5 pulse peak intensity, creating the amplitude histogram with log scale. This histogram exhibited the typical statistical distribution with a long-tail structure. In addition, the largest amplitude of pulses was more than twice the SWH. These features showed that the pulses could be regarded as typical RWs. Wang et al. 96 also investigated RW formation in pulse fiber laser. The repetition rate of their fiber laser was 3.47 MHz and the time interval among adjacent pulses was 288.2 ns. The research group spent several hours in recording about 500 thousand temporal samples on the 2-GHz oscilloscope. The corresponding distribution histogram could display an obvious long-tail structure, and the highest amplitude was twice the SWH, which confirmed the generation of RWs. However, it is difficult to investigate the real-time wave events of RWs by the direct measurement of the oscilloscope. Therefore, it is necessary to combine the various measuring methods to conduct the research of RWs.
Outlook
As mentioned above, RWs in fiber lasers are well developed and are still being intensively investigated. In Sec. 4, we discussed the observations of RWs in various fiber lasers, such as the MLFL with different types of saturable absorbers, Q-switched random laser, and the self-pulsing fiber lasers. Compared with other kinds of fiber lasers, the MLFL can offer a more convenient playground for observing the generation of optical RWs because of their many advantages, such as low price, ultrashort pulses, simple structure, and good stability. When the fiber lasers are mode locked by the 2-D materials, these materials can not only provide excellent saturable absorption properties but also enhance the nonlinear effects for the pulse interactions in the fiber lasers, which benefits the formation of RWs. Different from the MLFL, the SBS effects can be formed in the Qswitched random lasers 125 and the self-pulsing fiber lasers. 127 The influence of SBS can introduce a trigger effect for the RW generation. It can be believed that the observations of RWs in various fiber lasers will attract more attention in the future. As the study of fiber lasers advances, RWs in fiber lasers will be further investigated from the following several aspects.
Deterministic Rogue Wave in Fiber Lasers
Based on the various experimental observations of RWs in fiber lasers, it is intriguing to investigate the deterministic prediction of RW generation in fiber lasers. Deterministic optical RW generation typically depends on a theoretical prediction combined with proper experimental conditions. Sergeyev et al. presented slow deterministic vector RWs in an EDFL passively mode locked by carbon nanotube. By controlling the polarization state of intracavity and pump wave, deterministic RWs can be generated. 153 It is also interesting to consider that algorithm-controlled fiber lasers could be a next-generation platform for deterministic RW generation. Algorithm-controlled fiber lasers will be further discussed in Sec. 6.4.
Rogue Waves in Two-Dimensional Material-Based Mode Locked Fiber Laser
In the last decade, the MLFLs based on 2-D materials have been fast developing. 69,[154][155][156][157][158][159][160][161][162] It is worth mentioning that an MLFL with a saturable absorber would be a promising direction for the study of RWs. Earlier works on RWs in fiber lasers were mostly mode locked by nonlinear polarization rotation (NPR). Indeed, recently there have been many results on the RWs in fiber lasers with real saturable absorbers, and it has been demonstrated that saturable absorbers play an essential role in the RW generation. 147 Liu et al. 53 demonstrated dissipative RW generation in pulsed fiber laser with topological insulator saturable absorber on microfiber. The authors ascertained that the topological insulator microfiber device introduces strong nonlinear interactions, which contributed greatly to the generation of RWs. In 2016, their group also reported a dissipative RW induced by soliton explosion in fiber lasers, which are mode-locked by a carbon nanotube. 54 In 2017, RWs in mode ultrafast pulse fiber laser mode locked by graphenedecorated microfiber 90 were reported. In 2018, RWs were reported in MoS 2 MLFL operating at 2000 nm. 96 Klein et al. 94 found ultrafast RWs in a fiber laser with the graphene saturable absorber, which is attributed to the noninstantaneous relaxation of the saturable absorber together with the polarization mode dispersion of the cavity.
Recently, RW generation has been reported in a linearcavity Yb-doped fiber laser mode-locked by semiconductor SAM. 55 It is noted that the authors mentioned that the SESAM plays an important role on the formation of RWs. However, there have been no systematic studies on the dynamic of RWs in a specific SA-MLFL, which would be a direction for the study in the future. In the last decade, 2-D nanomaterials, including grapheme, 73,[163][164][165] topological insulators, 53 and transition metal dichalcogenides, [166][167][168] have been widely applied as optical saturable absorbers for MLFLs and have been studied for RW generation. 53 In the last three years, there have been many 2-D materials reported for application in ultrafast fiber lasers, [169][170][171][172] which has significantly enhanced the development of the ultrafast lasers. Continually searching and employing new materials with good saturable absorptions and highly nonlinear characteristics may sufficiently quicken the above-mentioned process. It can be expected that more 2-D materials-based fiber lasers will provide appropriate platforms for the study of RW generation and dynamics in the future.
Rogue Waves in Mid-Infrared Fiber Lasers
In recent decades, the study of nonlinear fiber optics has been extended to the mid-infrared band, and mid-infrared fiber lasers have attracted intensive interest. It is natural that study of optical RWs has also been extended to the mid-infrared region. In 2011, the formation of mid-infrared RWs was numerically investigated in the soft glass fibers. 173 In 2017, mid-infrared optical RWs generated by SC in chalcogenide fibers were reported by Liu et al. 174 RWs were subsequently found in mid-infrared ultrafast fiber laser. Researchers reported optical RWs in a Tm-doped fiber laser 96 mode locked by MoS 2 . They experimentally observed dissipative RWs in the fiber lasers generated from an NLP state. Another finding of optical RWs in mid-infrared was from Akosman and Sander. 175 They demonstrated the route from a stable mode locking state toward RW formation in a linear cavity Tm/Ho-doped fiber lasers operating at ∼1980 nm. 175 According to the recent works, it is easy to find that the mechanism and nonlinear dynamics of the RWs at 2 μm are comparable to those observed at 1 and 1.5 μm. It indicates that RW generation is a general feature of fiber lasers. So far several works on MIR RWs have been reported with operating wavelength limited to 2 μm; RWs at 3 μm and above have not been discovered. It can be anticipated that study of RWs at mid-infrared band will be another hot topic in the field of nonlinear fiber optics.
Rogue Waves in Algorithm-Controlled Fiber Lasers
A variety of SAs have been extensively applied to the observation of RW generation in the pulse fiber lasers. However, there are some disadvantages in different SAs. For example, the NPR technique, which is one of the artificial SAs, shows a strong polarization-dependent feature, which can hinder corresponding applications in the research of RWs. Recently, a programmable NPR MLFL at 1.5 μm with a human-like algorithm has been presented in the literature. 176 Stable fundamental mode-locked regime has been automatically obtained in the pulsed fiber laser. In addition, this fiber laser showed the initial mode-locking time of 0.22 s and recovery time of 14.8 ms. In addition, this fiber laser can lock onto Q-switched regimes and Q-switch mode locking. The intelligent programmable method greatly improves the reliability of MLFL, which may also be used for the observation of RWs in the fiber lasers with SAs. In fact, the NLPs are realized in the machine-learning-based MLFL. Researchers have also demonstrated complex transition pulse regimes from the MLFL based on an intelligent polarization algorithm control. Furthermore, research groups have employed machine learning to analyze the generation of extreme events in optical fiber modulation instability. 177 So far, the investigation of RWs in algorithm-controlled fiber lasers has not been yet demonstrated. We believe that the generating mechanism of rouge waves will be effectively studied in pulse fiber lasers with different SAs through human-like intelligent methods.
Rogue Waves Based on the Multimode Fiber or Multimode Fiber Lasers
Remarkable research on RWs in single-mode fiber lasers has been widely conducted due to their potential value in the ocean optics. However, the pulse energy of single-mode fiber lasers is approaching limits that may hinder their development and application in scientific research, industrial processing, and other fields. Compared with the single-mode fibers, the multimode fibers (MMFs) can enhance the capacity of communication systems, promoting the potential impact of optical pulses in fiber lasers. The nonlinear propagation in the MMF lasers is closely related to a complex spatiotemporal mixing process caused by the nonlinearity and waveguide imperfections. 178 Recently, the spatiotemporal dynamics of optical pulses have been demonstrated in the MMFs, such as the spatiotemporal mode-locking, 179 the soliton molecules, 180 harmonic mode locking, 181 the spatiotemporal instability, 182 and beam self-cleaning. 183 This research provides new approaches for exploring RW generation in the MMF lasers. In addition, researchers have reported efficient SC generation by employing a 1064-nm laser source to pump a graded index MMF. 184 Indeed, RWs are apt to be observed in the SC generation. Therefore, the MMF is suitable for investigating the generation of RWs. It can be expected that further exploration of RWs in the MMFs or MMF lasers will be a new hot topic in nonlinear fiber optics.
Rogue Waves Induced by the Optical Vortex Beams in the Fiber Lasers
RWs have been obtained in several optical configurations, such as the photonic crystals, 185 the optical fibers, 186 and the SC generation. 45 Recently, the generation of 2-D optical RWs in the presence of turbulence with the interaction of optical vortices was demonstrated by Gibson et al., 187 which indicates the optical vortices can induce the generation of RWs. At present, vortex beams in the fiber lasers have been demonstrated because of the promising applications in the quantum optics, 188 optical micromanipulation, 189 rotation detection, 190 WDM (modedivision multiplexing) systems, 191 and nonlinear fiber optics. 192 In the fiber systems, the vortex beams are generally realized by the modulating elements, including the mode selective couplers, 193,194 long period fiber gratings, 195,196 and microstructured fiber facets. 197 The mode-locked vortex beams through the mode fibers in the all-fiber lasers have been reported. 198 Therefore, the optical RWs based on the vortex beams in the fiber lasers will be one of the research hot topics, promoting the further development of nonlinear optics.
Rogue Waves in Temporal Cavity Soliton Fiber Lasers
Apart from the MLFL, the fiber laser without the mode locker inserted in the cavity can also generate ultrashort pulses, for example, the temporal cavity solitons (TCSs). When the dispersion and nonlinearity are balanced in the fiber lasers, TCSs are formed, which can transmit indefinitely and keep their shape in the fiber cavity. 199 At present, TCSs have been intensely reported in fiber laser cavities 20,[199][200][201] due to their potential applications in the all-optical buffer and coherent frequency combs. 202 Researchers have reported the experimental observation of TCS bound states in universal mechanisms. 201 TCSs in these bound states can interact with each other, which may induce the optical RW generation. At the moment, there is no experimental observation of optical RWs through TCS fiber lasers. We believe that the generation of optical RWs in TCS fiber lasers will be realized in future, a potential hot topic that would further reveal more physical phenomena in nonlinear fiber optics fields.
Conclusion
RWs are extreme events first observed in the ocean, showing great threat to the safety of sea-going personnel and ships. The study of RWs in different systems has remained a hot research topic. Fiber lasers provide an ideal platform to observe the generation of optical RWs as well to as investigate their behaviour. We hope that this review will be helpful for future studies of RWs in different optical systems. | 9,063.6 | 2020-03-01T00:00:00.000 | [
"Physics"
] |
Matriptase-2, a Membrane-bound Mosaic Serine Proteinase Predominantly Expressed in Human Liver and Showing Degrading Activity against Extracellular Matrix Proteins*
We have identified and cloned a fetal liver cDNA encoding a new serine proteinase that has been called matriptase-2. This protein exhibits a domain organization similar to other members of an emerging family of membrane-bound serine proteinases known as type II transmembrane serine proteinases. Matriptase-2 contains a short cytoplasmic domain, a type II transmembrane sequence, a central region with several modular structural domains including two CUB (complement factor C1s/C1r, urchin embryonic growth factor,bone morphogenetic protein) domains and three low density lipoprotein receptor tandem repeats, and finally, a C-terminal catalytic domain with all typical features of serine proteinases. The human matriptase-2 gene maps to 22q12-q13, a location that differs from all type II transmembrane serine proteinase genes mapped to date. Immunofluorescence and Western blot analysis of COS-7 cells transfected with the isolated cDNA confirmed that matriptase-2 is anchored to the cell surface. Matriptase-2 was expressed in Escherichia coli, and the purified recombinant protein hydrolyzed synthetic substrates used for assaying serine proteinases and endogenous proteins such as type I collagen, fibronectin, and fibrinogen. Matriptase-2 could also activate single-chain urokinase plasminogen activator, albeit with low efficiency. These activities were abolished by inhibitors of serine proteinases but not by inhibitors of other classes of proteolytic enzymes. Northern blot analysis demonstrated that matriptase-2 transcripts are only detected at significant levels in both fetal and adult liver, suggesting that this novel serine proteinase may play a specialized role in matrix remodeling processes taking place in this tissue during development or in adult tissues.
We have identified and cloned a fetal liver cDNA encoding a new serine proteinase that has been called matriptase-2. This protein exhibits a domain organization similar to other members of an emerging family of membrane-bound serine proteinases known as type II transmembrane serine proteinases. Matriptase-2 contains a short cytoplasmic domain, a type II transmembrane sequence, a central region with several modular structural domains including two CUB (complement factor C1s/C1r, urchin embryonic growth factor, bone morphogenetic protein) domains and three low density lipoprotein receptor tandem repeats, and finally, a Cterminal catalytic domain with all typical features of serine proteinases. The human matriptase-2 gene maps to 22q12-q13, a location that differs from all type II transmembrane serine proteinase genes mapped to date. Immunofluorescence and Western blot analysis of COS-7 cells transfected with the isolated cDNA confirmed that matriptase-2 is anchored to the cell surface. Matriptase-2 was expressed in Escherichia coli, and the purified recombinant protein hydrolyzed synthetic substrates used for assaying serine proteinases and endogenous proteins such as type I collagen, fibronectin, and fibrinogen. Matriptase-2 could also activate single-chain urokinase plasminogen activator, albeit with low efficiency. These activities were abolished by inhibitors of serine proteinases but not by inhibitors of other classes of proteolytic enzymes. Northern blot analysis demonstrated that matriptase-2 transcripts are only detected at significant levels in both fetal and adult liver, suggesting that this novel serine proteinase may play a specialized role in matrix remodeling processes taking place in this tissue during development or in adult tissues.
Proteolytic enzymes play crucial roles in the development and maintenance of an organism as well as in a number of pathological conditions including the progression of malignant tumors (1). Most studies on cancer-associated proteinases have focused on matrix metalloproteinases (MMPs), 1 a family of zinc-dependent endopeptidases that collectively degrade all major protein components from extracellular matrix and basement membranes (2,3). However, enzymes from other catalytic classes such as cysteine, aspartyl, and serine proteinases have been also implicated in different aspects of tumor progression. Among them, an emerging group of membrane serine proteinases, called TTSPs and containing a complex organization of domains, have raised recent interest because of their potential ability to participate in matrix-degrading processes associated with cancer (reviewed in Ref. 4). To date, 11 distinct human TTSPs have been described and characterized at the amino acid sequence level. They include enteropeptidase, hepsin, human airway trypsin-like protease (HAT), corin, matriptase/MT-SP1, epitheliasin/TMPRSS2, TADG-12/TMPRSS3, TMPRSS4, MSPL (mosaic serine protease large form), spinesin/TMPRSS5, and DESC1 protease (differentially expressed squamous cell carcinoma gene 1) (4 -6). All of them share a number of structural features: a short N-terminal cytoplasmic domain, a type II transmembrane sequence, a central region of variable length containing modular structural domains, and a C-terminal catalytic region with all of the characteristic features of serine proteinases. TTSPs have been found in a wide variety of mammalian tissues as well as in other eukaryotic organisms including Drosophila melanogaster (7) and Xenopus laevis (8).
Although the physiological roles of most TTSPs are still unclear, there are some cases in which their participation in specific functions has been suggested or demonstrated. This is the case of enteropeptidase that is involved in the proteolytic activation of trypsinogen to trypsin, which subsequently activates other digestive enzymes such as chymotrypsinogen or procarboxypeptidases (9,10). Likewise, matriptase/MT-SP1 has been proposed to initiate signaling and proteolytic cascades through their ability to activate cell surface-associated proteins like pro-uPA and protease-activated receptor 2 (11). Matriptase has also been suggested to participate in the control of intestinal epithelial turnover by regulating the cell-substratum adhesion (12). Hepsin has been involved in mammalian cell growth, developmental processes such as blastocyst hatching, and initiation of blood coagulation (13)(14)(15). Corin, a TTSP family member isolated from human heart, has been found to act as an * This work was supported by Grant SAF00-0217 from Comisión Interministerial de Ciencia y Tecnología-Spain and European Union (QLG1-CT-2000-01131). The Instituto Universitario de Oncología was supported by Obra Social Cajastur-Asturias. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank TM in vitro activator of pro-atrial natriuretic peptide, a cardiac hormone essential for the regulation of blood pressure (16,17). HAT, originally isolated from the sputum of patients with chronic airway diseases, may be involved in the host defense system on the mucous membrane (18). Recently, a HAT-related protease isolated from rat tissues has been found to cleave pro-␥-melanotropin at the adrenal cortex, stimulating the mitogenic actions of this peptide (19). Spinesin, predominantly expressed at synapses, may play specific roles in neural functions (20). Finally, insertion of -satellite repeats into the gene encoding TMPRSS3 causes a form of autosomal recessive deafness, suggesting a role for this protease in the development or maintenance of the inner ear or in the turnover of the protein contents of the perilymph and endolymph (21).
The expression of virtually all TTSPs characterized to date is widely deregulated during the development and progression of tumor processes. Thus, matriptase/MT-SP1 was originally identified in breast cancer cells and is highly expressed in breast, prostate and colorectal cancers (22)(23)(24)(25). Inhibition of this protease abolishes both primary tumor growth and metastasis in a murine model of prostate cancer (24,26), whereas stabilization of active matriptase through glycosylation by N-acetylglucosaminyltransferase V is associated with the prometastatic effects of this enzyme (27). Hepsin is overexpressed in ovarian and prostate carcinomas (28 -31), and its expression correlates inversely with measures of patient prognosis (32). Epitheliasin is also overexpressed in prostate carcinomas, and a mutated form of this protease has been found in a case of aggressive disease (33)(34)(35). TMPRSS3/TADG-12 is overexpressed in ovarian cancer (36), and TMPRSS4 is overexpressed in pancreatic cancer (37). Finally, the recently described DESC1 was identified as a consequence of its differential expression in squamous cell carcinoma of the head and neck (6).
These recent findings have stimulated the search for new TTSPs potentially associated with some of the proteolytically mediated processes taking place during normal or pathological conditions and especially during tumor progression. In this work, and as part of our studies on tumor proteinases, we have examined the possibility that additional members of this family of membrane proteinases could be produced by human tissues, with the discovery of a novel family member named matriptase-2. We describe the molecular cloning and complete nucleotide sequence of a cDNA coding for this protein and report an analysis of its expression in human tissues. We also report the production of recombinant matriptase-2 in Escherichia coli and perform an analysis of its enzymatic activity against synthetic and endogenous substrates. Finally, we demonstrate that matriptase-2 is bound to the cell membrane.
EXPERIMENTAL PROCEDURES
Materials-A human fetal liver cDNA library constructed in DR2 and Northern blots containing polyadenylated RNAs from different adult and fetal human tissues were from CLONTECH (Palo Alto, CA). Chemicals, reagents, and synthetic and macromolecular substrates for proteases, including fibronectin, fibrinogen, laminin, type I collagen, and type I gelatin, were obtained from Sigma. Single-chain uPA was purchased from Oncogene Research Products (Boston, MA). Recombinant gelatinases A and B were kindly provided by Dr. G. Murphy (University of East Anglia, Norwich, UK). Recombinant collagenase-3 (MMP-13) was produced as described previously (38). Restriction endonucleases and other reagents used for molecular cloning were from Roche Molecular Biochemicals. Synthetic oligonucleotides were prepared in an Applied Biosystems (Foster City, CA) model 392A DNA synthesizer. Double-stranded DNA probes were radiolabeled with [␣-32 P]dCTP (3000 Ci/mmol) purchased from Amersham Biosciences using a random-priming kit from the same company.
Bioinformatic Screening of the Human Genome and cDNA Cloning-The BLAST program was used to search public (www.ncbi.nlm.nih.gov) and private (www.celera.com) human genome data bases, looking for regions with sequence similarity to previously described TTSPs. After identification of a DNA contig in chromosome 22 encoding a region similar to the catalytic domain of matriptase/MT-SP1, we analyzed in this contig the possible presence of regions encoding the remaining domains characteristic of TTSPs. This approach allowed us to identify DNA regions potentially encoding a full-length sequence for a novel member of this family of serine proteinases. Then we designed specific oligonucleotides to PCR amplify a cDNA for this protein, using a panel of commercially available cDNA libraries (CLONTECH) and the Expand TM High Fidelity PCR system (Roche Molecular Biochemicals). The following oligonucleotides were used: matriptase-2f, 5Ј-AGGATGC-CCGTGGCCGAGGC, and matriptase-2r, 5Ј-AGGTGGGCCC TGCTTT-GCAG. All of the PCRs were performed in a GeneAmp 2400 PCR system from PerkinElmer Life Sciences for 40 cycles of denaturation (94°C, 15 s), annealing (64°C, 15 s), and extension (68°C, 60 s). After cloning of the amplified PCR products in pBSII, their identities were confirmed by nucleotide sequencing.
Nucleotide Sequence Analysis-DNA fragments of interest were sequenced by using the kit DR terminator TaqFS and the automatic DNA sequencer ABI-PRISM 310 (PerkinElmer Life Sciences). All of the nucleotides were identified in both strands. Computer analysis of DNA and protein sequences was performed with the GCG software package of the University of Wisconsin Genetics Computer Group.
Membrane Immunolocalization-Full-length matriptase-2 cDNA was subcloned into pcDNA3 expression vector. In addition, a 24-bp linker coding for the hemagglutinin (HA) epitope of human influenza virus was inserted at the end of the cDNA sequence encoding the C-terminal region of matriptase-2. COS-7 cells were transfected with 1 g of plasmid DNA (pcDNA3-matriptase-2-HA), using FuGENE 6 reagent (Roche Molecular Biochemicals) according to the manufacturer's instructions. About 48 h after transfection, the cells were fixed for 10 min in cold 4% paraformaldehyde in PBS, washed in PBS and incubated for 5 min in 0.2% Triton X-100 in PBS. Fluorescent detection was performed by incubating the slides with monoclonal antibody 12CA5 (Roche Molecular Biochemicals) against HA, followed by another incubation with goat anti-mouse fluoresceinated antibody. After washing in PBS, the slides were mounted and observed under fluorescence with a Zeiss axiophot equipped with a CCD camera (Photometrics).
Preparation of Cell Membrane Fractions and Western Blot
Analysis-COS-7 cells were transiently transfected with the pcDNA3matriptase-2-HA plasmid as described previously. The cells were scraped from the plates, and the membrane fractions were prepared essentially following the procedure described by Strongin et al. (39). The extracts were separated by SDS-PAGE, analyzed by Western blotting with an anti-HA monoclonal antibody, and detected with an enhanced chemiluminescence kit (Amersham Biosciences).
Expression and Purification of the Protease Domain of Human Matriptase-2-A 708-bp fragment of the matriptase-2 cDNA containing the entire catalytic domain was generated by PCR amplification with primers 5Ј-ATTGTTGGTGGAGCTGTGTCC and 5Ј-TCAGGTCACCAC-TTGCTGGAT using the full-length matriptase-2 cDNA in pBSII as template. The PCR amplification was performed for 25 cycles of denaturation (95°C, 10 s), annealing (55°C, 10 s), and extension (68°C, 1 min) using the Expand TM High Fidelity PCR system. The PCR-amplified product was then phosphorylated and ligated in the SmaI site of the PGEX-3X expression vector (Amersham Biosciences). The resulting expression vector was transformed into BL21(DE3)pLysS competent E. coli cells, and expression was induced by the addition of isopropyl-1-thio--D-galactopyranoside (final concentration, 0.5 mM) followed by further incubation for 4 -6 h at 30°C. The cells were collected by centrifugation, washed, and resuspended in 0.05 volumes of PBS. Finally, the cells were lysed by sonication and centrifuged at 20,000 ϫ g for 20 min at 4°C. The soluble extract was treated with glutathione-Sepharose 4B (Amersham Biosciences), and the glutathione S-transferase (GST)-matriptase-2 fusion protein eluted with glutathione elution buffer (10 mM reduced glutathione in 50 mM Tris-HCl, pH 8.0), following the manufacturer's instructions. Finally, the purified GSTmatriptase-2 fusion protein was used for enzymatic assays.
Enzymatic Assays-Enzymatic activity of purified recombinant matriptase-2 was detected using the synthetic fluorescent substrates N-t-Boc-Gln-Ala-Arg-AMC, N-t-Boc-Gln-Gly-Arg-AMC, N-t-Boc-Ala-Pro-Ala-AMC, N-t-Boc-Val-Leu-Lys-AMC, and N-t-Boc-Ala-Phe-Lys-AMC. Routine assays were performed at 37°C at substrate concentrations of 100 M in an assay buffer of 50 mM Tris-HCl, 20 mM NaCl, pH 8.0, with a final concentration of Me 2 SO of 2.5%. The fluorometric measurements were made in a MPF-44A PerkinElmer spectrofluorometer ( ex ϭ 360 nm, em ϭ 460 nm). For the inhibition assays, matriptase-2 and inhibitors were preincubated for 15 min at 37°C, and then the incubations were performed at the same conditions as above with the different inhibitors. Cleavage of type I collagen, type I gelatin, type I laminin, fibronectin, and fibrinogen by recombinant matriptase-2 was followed by SDS-PAGE. All of the assays were performed in the above described assay buffer for 4 -12 h at 37°C. The experiments using type I collagen as a substrate were also performed at 28°C. The enzyme/substrate ratio (w/w) used in these experiments was 1/100. Finally, to examine the activation of other proteinases by matriptase-2, the proenzymes of human uPA, plasmin, gelatinase A, and gelatinase B were incubated with active matriptase-2 at a 1:100 w/w enzyme/substrate ratio. The incubations were performed in assay buffer without Me 2 SO for 4 -12 h at 37°C. The processing of the different precursor proteins was assessed by SDS-PAGE.
Homology Modeling-A three-dimensional model of the catalytic domain of matriptase-2 was calculated using Swiss-Model, a semiautomated modeling server (40), and analyzed with the Swiss-Pdb Viewer. Briefly, the amino acid sequence of the predicted catalytic domain of matriptase-2 was compared with the sequences of the macromolecules deposited in the Protein Data Bank to identify suitable templates. After ranking the possible templates by sequence similarity to matriptase-2, structural quality, and nonredundancy, we chose that of human matriptase. The template accession entry is 1EAX. The target sequence was automatically threaded over the structure of the template, built with ProMod II, and energy-minimized with Gromos96. The quality of the resulting models was verified automatically with WhatCheck and manually with Swiss-Pdb Viewer. Electrostatic analyses of the model were performed with MolMol (41). We added hydrogens and utilized a protein dielectric constant of 3. Partial atomic charges were taken from the Amber94 force field. The figures were modeled with MolMol and rendered with Megapov and POV-Ray (www.povray.org/).
Northern Blot Analysis-Nylon filters containing 2 g of poly(A) ϩ RNA of a wide variety of human tissues were prehybridized at 42°C for 3 h in 50% formamide, 5ϫ SSPE (1ϫ SSPE ϭ 150 mM NaCl, 10 mM NaH 2 PO 4 , 1 mM EDTA, pH 7.4), 10ϫ Denhardt's solution, 2% SDS, and 100 g/ml of denatured herring sperm DNA and then hybridized with a radiolabeled matriptase-2-specific probe 2.2 kb long, containing nucleotides 220 -2420 of the isolated cDNA. Hybridization was performed for 20 h under the same conditions used for prehybridization. The filters were washed with 0.1ϫ SSC, 0.1% SDS for 2 h at 50°C and exposed to autoradiography. RNA integrity and equal loading was assessed by hybridization with an actin probe.
Identification and Characterization of a Human Fetal Liver cDNA Encoding a New Membrane-bound Serine Proteinase-
To identify novel members of the TTSP family of membrane serine proteinases, we used the BLAST algorithm to screen the human genome data bases looking for DNA contigs containing sequences with significant similarity to previously described family members. This search led us to the identification of a contig located in chromosome 22 containing coding information for a putative new serine proteinase with a type II transmembrane domain. To generate a cDNA clone for this gene, we performed PCRs using a panel of human cDNA libraries, and specific oligonucleotides derived from the identified genomic sequence. A fragment of the expected size (ϳ2.5 kb) containing in-frame initiator and stop codons was amplified from a cDNA library prepared from human fetal liver. After cloning and sequencing the PCR-amplified product, we confirmed by conceptual translation that the generated sequence was distinct from that reported for all previously identified human TTSPs (Fig. 1A). Computer analysis of the full-length cDNA sequence revealed that it codes for a protein of 802 amino acids, with a calculated molecular weight of 88,901 and showing significant sequence similarity to all other human membrane serine proteinases belonging to the TTSP family. Further analysis of the predicted sequence indicated that the maximum percentage of identities (35%) was with matriptase/MT-SP1 and spinesin. The percentage of identities with other members of the TTSP family, such as TMPRSS2, TMPRSS3, TMPRSS4, DESC1, corin, MSPL, and HAT, was of about 30%.
An alignment of the deduced amino acid sequence from the isolated liver cDNA confirmed that the identified protein possesses all domains characteristic of TTSPs (Fig. 1B). Thus, close to the initiator methionine residue there is a hydrophobic domain spanning from positions 44 -66 that is not preceded by a recognizable signal sequence. Computer analysis using the TMHMM (transmembrane helices Markov model) program (available at www.cbs.dtu.dk) revealed that this domain is predicted to act as a type II membrane anchor sequence. The transmembrane domain is followed by a stem region containing two CUB domains (the first one substantially degenerate with respect to the consensus CUB) and three LDLR repeats. This stem region is very similar to that present in matriptase, which contains two CUB and four LDLR repeats. Finally, there is a catalytic domain located at the C-terminal region of the identified protein and showing about 45% identities with the equivalent domain of matriptase. This catalytic domain also contains all structural hallmarks of functional serine proteinases (Fig. 2). In fact, an alignment of this sequence with that of other members of this class of proteolytic enzymes allows identification of a prodomain region ending in a conserved Arg-Ile-Val-Gly-Gly motif that is highly conserved in serine proteinases and that contains the Arg-Ile bond that is cleaved for protease activation (Fig. 2). The sequence alignment also allows identification of the active site Ser residue as that present at position 753 within the conserved motif Gly-Asp-Ser-Gly-Gly. The His and Asp residues necessary for catalytic activity should be those present at positions 608 and 659, respectively. The sequence Ser-Trp-Gly proposed to interact with the side chains of serine proteinase substrates for proper orientation of the scissile bond is also present at the C-terminal region of the identified sequence (positions 773-775). The putative catalytic domain of this protein also contains the six conserved cysteine residues present in the catalytic region of TTSP family members and involved in the formation of three disulfide bonds (Cys 593 -Cys 609 , Cys 724 -Cys 738 , and Cys 749 -Cys 778 ). A fourth predicted disulfide bridge should form between Cys 679 of the catalytic domain and Cys 559 located at the prodomain. The formation of this predicted disulfide bond observed in many serine proteinases including matriptase would suggest that the active catalytic domain of the identified protein should still remain located at the cell surface even after cleavage at the activation site. Finally, analysis of consensus motifs present in the identified sequence revealed the presence of a potential phosphorylation site in the cytoplasmic tail (Ser-Lys-Arg at position 34) that may participate in the recruitment of intracellular proteins potentially involved in activation of signal transduction pathways. This analysis also revealed six potential sites of N-glycosylation (Asn-Xaa-Thr or Asn-Xaa-Ser) at positions 127, 175, 329, 424, 444, and 509. All of these structural features are also conserved in the amino acid sequence deduced from a mouse cDNA isolated as part of a large scale cDNA sequencing project (42). The protein encoded by this mouse cDNA (accession numbers AK004939 and BAB23684) likely corresponds to the mouse ortholog of the human serine proteinase identified in this work (Fig. 2). The percentages of identities between the human and mouse enzymes were 85.1% in nucleotides and 84.3% in amino acids.
In summary, and according to this structural analysis, we can conclude that the cloned human cDNA encodes a novel membrane serine proteinase that, according to its significant sequence similarity and overall domain organization with matriptase, we propose to call matriptase-2.
Membrane Localization of Matriptase-2-To provide experimental support to the proposal that matriptase-2 is a membrane-bound protease, we transfected COS-7 cells with pcDNA3-matriptase-2-HA, a construct containing the HA epitope at the C-terminal region of the enzyme. Transfected cells were then analyzed by immunofluorescence with a mouse monoclonal antibody (12CA5) specific for this viral epitope. As shown in Fig. 3A, a fluorescent pattern surrounding the cell membrane was clearly visualized in transfected cells expressing matriptase-2. Furthermore, we performed SDS-PAGE analysis of lysates from COS-7 cells transfected with the matriptase-2-HA construct, followed by Western blotting detection with anti-HA monoclonal antibody. As can be seen in Fig. 3B, matriptase-2 was detected in the membrane-enriched fractions but not in the soluble fraction. Taken together, these results provide strong experimental evidence that matriptase-2 is a membrane-anchored protein.
Production of Recombinant Matriptase-2 in E. coli and Analysis of Its Enzymatic Properties-To analyze the enzymatic activity of matriptase-2, we first expressed the catalytic domain of this mosaic membrane-bound protein in bacterial cells. To this purpose, a cDNA coding for the catalytic region was subcloned into the expression vector pGEX-3X, and the resulting plasmid was transformed into the E. coli bacterial strain BL21(DE3)pLysS. After induction of transformed bacteria with isopropyl-1-thio--D-galactopyranoside, a protein band of the expected size (52 kDa) was detected by SDS-PAGE analysis of bacterial protein extracts (Fig. 4). This recombinant fusion protein was purified using glutathione-Sepharose chromatography as described previously (43) (Fig. 4). The soluble GSTmatriptase-2 fusion protein eluted from the affinity column was directly used for enzymatic analysis. We did not release the catalytic domain of matriptase-2 with Factor Xa, because any putative trace of this factor that could remain after the purification process would interfere with assays to analyze the enzymatic activity of a protein such as matriptase-2, which belongs to the same class of proteolytic enzymes as Factor Xa. However, and similar to the case of a fusion protein containing the catalytic domain of matriptase (26), the GST-matriptase-2 fusion protein was apparently autoactivated after incubation at 37°C in the course of the different activity assays. Thus, SDS-PAGE analysis of the recombinant fusion protein incubated for 12 h at 37°C showed the presence of an additional band of 26 kDa that likely corresponds to the catalytic domain of matriptase-2 after proteolytic release of the GST-moiety (data not shown). To analyze the substrate specificity of this recombinant matriptase-2, a series of synthetic quenched fluorescent peptides commonly used for assaying serine proteinases were used. As can be seen in Fig. 5A, the peptides N-t-Boc-Gln-Ala-Arg-AMC and N-t-Boc-Gln-Gly-Arg-AMC were hydrolyzed by matriptase-2 (12.1 and 18.3 nM AMC/min, respectively). Other peptides, including N-t-Boc-Ala-Phe-Lys-AMC, N-t-Boc-Ala-Pro-Ala-AMC, and N-t-Boc-Val-Leu-Lys-AMC, were not signif- M. The hydrolytic activity of matriptase-2 against this substrate was substantially abolished by a number of inhibitors of serine proteinases (phenylmethylsulfonyl fluoride, 4-(2-aminoethyl)-benzenesulfonyl fluoride, leupeptin, and aprotinin) but not by EDTA or E-64, inhibitors of metallo-and cysteine-proteinases, respectively (Fig. 5B). Tosyl-L-phenylalanine chloromethyl ketone was also a poor inhibitor of matriptase-2 ( Fig. FIG. 2. Amino acid sequence comparisons of the catalytic domains of matriptase-2 and matriptase. The amino acid sequence of human matriptase-2 identified in this work is shown in bold. The sequences for human and mouse matriptase were extracted from the SwissProt data base, whereas the sequence BAB23684 corresponding to the putative mouse matriptase-2 was deduced from a cDNA sequence reported in (42). The multiple alignment was performed with the PILEUP program of the GCG package. Gaps are indicated by dots. Common residues to all sequences are shaded. The numbering corresponds to the sequence of human matriptase-2. 5B). The substrate specificity of the matriptase-2 catalytic domain is consistent with structural features of this proteinase. In fact, it is well established that the S1 specificity of serine proteinases is largely determined by the residue located 6 amino acids N-terminal to the active site Ser residue. This position is occupied by an Asp residue in matriptase-2, pointing to a cleavage specificity for substrates with Arg/Lys at the P1 position. The presence of a polar Gln residue (position 802) very close to the catalytic Ser is also consistent with specificity for basic residues.
After these results demonstrating that the isolated cDNA for matriptase-2 encodes a protein whose catalytic domain exhibits an enzymatic activity against synthetic peptides and an inhibitory profile characteristic of serine proteinases, we evaluated the possibility that matriptase-2 could also degrade a series of extracellular matrix and basement membrane protein components. To do this, a variety of proteins including type I collagen, laminin, fibronectin, fibrinogen, and gelatin were treated with purified recombinant matriptase-2, and the reactions were followed by SDS-PAGE analysis. As can be seen in Fig. 6, this enzyme was able to degrade protein substrates like fibronectin, fibrinogen, and type I collagen. In relation to the degrading activity of matriptase-2 on type I collagen, it is remarkable that this enzyme is not a bona fide fibrillar collagenase because its collagenolytic activity did not generate the typical 3 ⁄4 and 1 ⁄4 fragments produced by fibrillar collagenases, as illustrated for human collagenase-3 in the comparative analysis performed under the same assay conditions (Fig. 6). Also in support of this conclusion, experiments of type I collagen hydrolysis with recombinant matriptase-2 at 28°C revealed the absence of significant degrading activity on this substrate at this temperature (not shown). Therefore, it seems that the matriptase-2 catalytic domain used in this work is only capable of degrading type I collagen at conditions in which the triple helical collagen is partially denatured. The hydrolyzing activity of matriptase-2 on the different macromolecular substrates was blocked in all cases by phenylmethylsulfonyl fluoride (Fig. 6 and data not shown), providing additional support for the proposal that this enzyme is a serine proteinase. Finally, we examined the possibility that matriptase-2 could be implicated in the activation of other proteinases like pro-MMPs, pro-uPA, or plasminogen as part of a coordinate action within a proteolytic cascade. Interestingly, and similar to the case of matriptase, matriptase-2 was able to process, albeit with a low efficiency, the 55-kDa single-chain precursor of uPA generating a 33-kDa protein and other smaller fragments (Fig. 6). By contrast, matriptase-2 was not able to process to a significant extent progelatinase A (MMP-2), progelatinase B (MMP-9), and plasminogen (data not shown).
Homology Modeling of the Catalytic Domain of Matriptase-2-The amino acid sequence similarity between matriptase-2 and matriptase, together with the recent resolution of the three-dimensional structure of the catalytic domain of matriptase (44), opens the possibility to perform a computer modeling of the structure of matriptase-2. Fig. 7A shows an analysis of the molecular surface of the catalytic domain of human matriptase-2 compared with that corresponding to human matriptase. As can be seen, there is a significant degree of similarity between them around their catalytic site, both showing a deep negatively charged S1 pocket and a similar overall charge distribution (Fig. 7A). The occurrence of this deep S1 pocket in both enzymes is consistent with a somewhat relaxed specificity for the catalytic activity of matriptases when compared with other serine proteinases. An additional structural feature shared by matriptase and matriptase-2 may be deduced from the analysis of the loops surrounding the active site cleft and especially of that called "60 insertion loop" (44). Matriptase-2 also shows the presence of this loop, which is similar in length and exhibits a -hairpin conformation similar to that of the corresponding loop present in matriptase and thrombin, although in this latter protein the loop is oriented differently (Fig. 7B) (44,45). The matriptase loop is stabilized through internal hydrogen bonds made by the carboxylate groups of two Asp residues: Asp 60 (A) and Asp 60 (B). Matriptase-2 has Glu and Asp at equivalent positions, suggesting that the stabilization mechanism of its -hairpin loop must be similar to that of matriptase (44). Nevertheless, some interesting structural differences between matriptase and matriptase-2 can be also predicted after analysis of the generated models. First, matriptase has Ser at the position 190 (1EAX numbering), making Lys residues at P1 equally acceptable as Arg. However, matriptase-2 has an Ala at the same FIG. 6. Analysis of enzymatic activity of matriptase-2 on protein substrates. Fibronectin, type I gelatin, type I collagen, laminin, fibrinogen, and pro-uPA were incubated alone (Ϫ) or in the presence of recombinant matriptase-2 (ϩ) in 50 mM Tris-HCl, 20 mM NaCl, pH 8.0, for 12 h at 37°C. Type I collagen was also incubated with human collagenase-3 (MMP-13) in 50 mM Tris-HCl, pH 7.6, 10 mM CaCl 2 , and 100 mM NaCl, as a positive control of collagenolytic activity. In all cases, the enzyme/substrate ratio was of 1/100 (w/w). For inhibition assays, incubations with matriptase-2 were performed in the presence of 2 mM phenylmethylsulfonyl fluoride (PMSF).
position. This substitution suggests that matriptase-2 may preferentially accept substrates with Arg at P1 position. A second noticeable difference is that position 151 is occupied by a Gly residue in matriptase and by an Ile residue in matriptase-2. This allows the accommodation of bulkier P2Ј residues in matriptase than in matriptase-2 (Fig. 7A). Taken together, these structural differences open the possibility of developing selective inhibitors against both matriptases, an aspect that is of future interest because of the potential involvement of these proteases in human diseases, including cancer.
Analysis of Matriptase-2 Distribution in Human Tissues-To investigate the presence of matriptase-2 mRNA transcripts in human tissues, Northern blots containing poly(A) ϩ RNAs prepared from a variety of fetal tissues (brain, lung, liver, and kidney) and adult tissues (leukocytes, colon, small intestine, ovary, testis, prostate, thymus, spleen, pancreas, kidney, skeletal muscle, liver, lung, placenta, brain, and heart) were hybridized with the full-length cDNA isolated for matriptase-2. As shown in Fig. 8, a transcript of about 3.5 kb was exclusively detected in liver. The restricted expression of matriptase-2 is consistent with previous data indicating that most TTSP genes show a highly restricted expression pattern, suggesting that they may have tissue-specific functions (4). Thus, enteropeptidase expression is restricted to enterocytes of the proximal small intestine (9), corin is predominantly produced by cardiac myocytes (16), HAT is mainly expressed in trachea (18), matriptase in the gastrointestinal tract and prostate (26), TM-PRSS2 in prostate and colon (46), hepsin in liver and kidney (15), and DESC1 is an epithelial-specific protease, being fundamentally detected in testes and prostate (6). DISCUSSION Over the last years, there has been an increasing interest in the characterization of proteolytic processes localized at the cell surface (47,48). Most studies on membrane-associated proteolytic systems have focused on metalloproteinases, but very recently, an emerging family of membrane-bound serine proteinases known as TTSPs has received considerable attention because of their potential role in multiple normal and pathological conditions (4). In this work, we describe the finding of a new human serine protease belonging to this family, which has been tentatively called matriptase-2, to emphasize its relationship with matriptase, a matrix-degrading TTSP originally described in human breast carcinoma cells (23), although there are also clear structural and enzymatic differences between both enzymes. The strategy followed to identify matriptase-2 was based on a computer search of the presently available human genome sequences, looking for regions with similarity to previously characterized TTSP family members. After identification of a DNA sequence presumably encoding the catalytic domain of a new TTSP and PCR amplification experiments using fetal liver cDNA as template, a full-length cDNA coding for human matriptase-2 was finally isolated and characterized. Structural analysis of the identified sequence shows the presence of a series of protein domains characteristic of TTSP proteins, including a short cytoplasmic domain, a type II transmembrane sequence, a central region with several modular structural domains, and a C-terminal catalytic domain with all of the typical features of serine proteinases.
Consistent with its structural characteristics, immunofluorescence and Western blot analysis of COS-7 cells transfected with the isolated cDNA confirmed that matriptase-2 is anchored to the cell surface. In addition, functional analysis of the recombinant catalytic domain of matriptase-2 produced in E. coli provided additional evidence that the isolated cDNA codes for a catalytically active serine proteinase. In fact, the purified recombinant protein exhibits a significant proteolytic activity against fluorogenic substrates used for assaying the enzymatic properties of this class of proteinases. In addition, this degrading activity was abolished by inhibitors of serine proteinases but not by inhibitors of any other class of proteolytic enzymes. The substrate specificity of matriptase-2 against synthetic peptides is similar to that of matriptase, with N-t-Boc-Gln-Gly-Arg-AMC and N-t-Boc-Gln-Ala-Arg-AMC being the preferred substrates for matriptase-2 and matriptase, respectively (Ref. 49 and this work). Matriptase-2 also shares with matriptase the ability to degrade extracellular matrix components, suggesting that this novel protease may participate in some of the matrix-degrading processes occurring in both normal and pathological conditions, including cancer pro- gression. Likewise, the finding that matriptase-2, as matriptase, may activate single-chain uPA suggests that it could act as an initiator of the biologically important proteolytic cascades mediated by activated uPA. Nevertheless, it should be emphasized that matriptase-2 shows very limited uPA activating properties when compared with the rapid and potent activity of matriptase in this regard (11), thereby raising doubts about the in vivo relevance of matriptase-2 as a biological activator of uPA. On the other hand, the observation that matriptase-2 is a fibrinolysin opens the possibility that this enzyme may play a role in processes involving fibrin formation and degradation, such as angiogenesis, in a way similar to that proposed for other membrane-bound proteases including MT-MMPs (50). These findings also raise the possibility that members of the TTSP family of membrane-bound proteases could be part of the alternate proteolytic systems that allow cells to infiltrate fibrin matrices via a plasminogen-independent process in diverse physiological and pathological conditions (51). In any case, we would like to remark that these preliminary enzymatic studies performed with the bacterially produced catalytic domain of matriptase-2 do not likely reflect the optimal conditions of in vivo activity of this enzyme. The recombinant protein shows a low specific activity and lacks the ancillary noncatalytic domains that can strongly influence the substrate specificity and catalytic activity of TTSPs. Accordingly, further studies with full-length matriptase-2 produced in eukaryotic expression systems will be required to provide additional information about the nature of the diverse macromolecular substrates presumably targeted by this enzyme.
To further characterize the structure of the catalytic domain of matriptase-2, we performed a homology model for this protease domain. The predicted structure is very similar to that of the catalytic domain of matriptase (44), although the observed structural differences between both proteases could serve to guide the search for specific inhibitors of each enzyme (52). Besides the overall similarities between matriptase and matriptase-2, it is remarkable that this enzyme presents some characteristic features when compared with other TTSP family members. First, the number and organization of the modular repeats present in the stem region are unique to matriptase-2 among TTSPs, although they are similar to those found at the equivalent region of matriptase. Thus, matriptase-2 contains a total of five modular domains, two CUB domains, and three LDLR repeats, whereas matriptase also contains two CUB repeats but possesses one additional LDLR repeat. There are two TTSPs, corin and enteropeptidase, that exhibit a much more complex organization than matriptases in this region. Thus, corin contains 11 modular domains in its stem region, including eight LDLR repeats, two frizzled domains, and one scavenger receptor domain (15). Likewise, enteropeptidase contains two LDLR repeats, two CUB domains, one disulfide knotted domain, one MAM (meprin, A5 antigen, and receptor protein phosphatase ) domain, and one scavenger receptor domain (9). Other than corin and enteropeptidase, all of the remaining human TTSPs characterized to date exhibit a simpler structural organization than matriptase and matriptase-2 and only contain one or two modular repeats in their respective stem regions or even none of them, as is the case for hepsin (15). The functional role of the CUB and LDLR repeats of matriptase-2 is presently unknown, although they can be involved in mediating protein-protein or protein-ligand interactions as proposed for other proteins containing these modules (53,54). Another peculiarity of matriptase-2 is that the gene encoding this proteinase maps to chromosome 22q12-13, a location unique among all TTSP genes identified to date. Interestingly, several TTSP genes lie on the long arm of human chromosome 11; spinesin, TMPRSS4, and MSPL genes are clustered in 11q23, whereas the gene for matriptase is located at 11q25. Similarly, three TTSP genes are located on chromosome 21, TMPRSS2 and TMPRSS3 genes are located at 21q22, and the gene for enteropeptidase is located at 21p11. Likewise, three TTSP genes are located at chromosome 4, HAT and DESC1 are located at 4q13, and corin is located at 4p12. Finally, the hepsin gene maps to 19q13 in a region containing several genes encoding serine proteinases (55). It is worthwhile mentioning that the region containing the matriptase-2 gene is frequently altered in several human tumors, such as insulinomas, ependymomas, and breast and colorectal carcinomas (56 -59). Genetic lesions in the 22q13 region have been also linked to diverse diseases including schizophrenia susceptibility (60). It will be of future interest to examine the possibility that the matriptase-2 gene could be a direct target of some of these genetic abnormalities.
Finally, in this work, as a step to try to define the physiological role of matriptase-2, we have examined the distribution of this protein in human tissues. Similar to the case of most TTSPs, matriptase-2 expression in normal tissues is very restricted, being only detected in significant amounts in fetal and adult liver. This finding suggests a role for this enzyme in some of the matrix-remodeling processes occurring in this tissue during development or in adult life as proposed for other proteolytic enzymes overexpressed in this tissue (61,62). These putative physiological roles for matriptase-2 in liver may also imply the possibility that their potential substrates could be something other than extracellular matrix components. In support of this proposal, several studies have provided evidence of the existence of multiple and distinct substrates for other TTSP family members (4). Furthermore, the above mentioned structural peculiarities of matriptase-2, when compared with other TTSP proteins, could also be consistent with distinct catalytic properties for this novel enzyme. Finally, the identification of the putative murine ortholog of human matriptase-2 raises the possibility of generating mice deficient in this gene, as recently described for matriptase (63). These mutant mice could contribute to clarify the role of matriptase-2 in physiological processes. | 9,223.2 | 2002-10-04T00:00:00.000 | [
"Biology"
] |
Safe guidewire visualization using the modes of a PTx transmit array MR system
Purpose MRI‐guided cardiovascular intervention using standard metal guidewires can produce focal tissue heating caused by induced radiofrequency guidewire currents. It has been shown that safe operation is made possible by using parallel transmit radiofrequency coils driven in the null current mode, which does not induce radiofrequency currents and hence allows safe tissue visualization. We propose that the maximum current modes, usually considered unsafe, be used at very low power levels to visualize conductive wires, and we investigate pulse sequences best suited for this application. Methods Spoiled gradient echo, balanced steady‐state free precession, and turbo spin echo sequences were evaluated for their ability to visualize a conductive guidewire embedded in a gel phantom when run in maximum current modes at very low power level. Temperature at the guidewire tip was monitored for safety assessment. Results Excellent guidewire visualization could be achieved using maximum current modes excitation, with the turbo spin echo sequence giving the best image quality. Although turbo spin echo is usually considered to be a high‐power sequence, our method reduced all pulses to 1% amplitude (0.01% power), and heating was not detected. In addition, visualization of background tissue can be achieved using null current mode, also with no recorded heating at the guidewire tip even when running at 100% (reported) specific absorption rate. Conclusion Parallel transmit is a promising approach for both guidewire and tissue visualization using maximum and null current modes, respectively, for interventional cardiac MRI. Such systems can switch excitation mode instantaneously, allowing for flexible integration into interactive sequences.
| INTRODUCTION
Catheter-based procedures in cardiovascular interventions are typically guided under X-ray fluoroscopy to visualize the guidewire-catheter system within the surrounding anatomical structures. The use of MRI for guidance leads to both improved soft tissue contrast and elimination of radiation dose. 1,2 However, many existing interventional devices used under X-ray guidance are made of metal or contain metal and thus are electrically conductive and may be ferrous. Ferrous devices are contraindicated for use in MRI systems either because of excessive forces or production of unacceptable levels of artifact. However, even nonferrous metallic devices are susceptible to radiofrequency (RF) induced currents, which can lead to dangerous focal heating of tissues. This is a well-known risk for MRI. [3][4][5] It has been shown that the key variables associated with guidewire safety in MRI are the guidewire diameter, total length, insertion length, and volume of the dielectric medium. 6,7 In the clinical scenario during an intervention, the insertion length can be constantly changing, which emphasizes the need to control the RF-induced current on the guidewire dynamically.
Many approaches have been explored to overcome the problem of heating of interventional devices. The simplest strategy is to use unmodified nonferrous devices with conventional scanners operating with lower RF transmission levels. Although effective, this often results in compromised imaging efficiency and unfavorable tradeoffs in visualization of both anatomy and devices. 8 An alternative approach is to modify the devices themselves so they cannot support RF currents. However, this approach requires a new generation of interventional devices that do not jeopardize mechanical performance. 9 In addition to guidewire safety, the inability to robustly visualize standard guidewires presently prevents the ubiquitous practice of MRI-guided interventions. Therefore, guidewire visualization has been a focus of current research. State-of-the-art guidewire visualization methods primarily rely on either susceptibility-related effects, 10,11 auxiliary contrast markers, 12,13 or some form of local signal detector mounted on or integrated into the device. [14][15][16] Moreover, pulse sequences and RF field polarization methods exist that allow device-tracking without the need of modifications. 17,18 For instance, Campbell-Washburn et al 10 harnessed susceptibility effects using a gradient echo sequence in conjunction with through-slice dephasing to make the guidewire appear hyperintense while suppressing background signal. However, sharp changes in susceptibility also exist at air and tissue interfaces, which can also appear hyperintense, thus interfering with the segmentation of the guidewire from the background. To overcome these limitations, hardware solutions capable of detecting the NMR signal adjacent to the device have been introduced.
These solutions utilize active components, such as saddle coils, loopless antennae, and solenoid coils acting as receive coils 9,13,19 ; alternately, they utilize passive components, such as chokes and resonant networks, to change the RF properties of the device at the Larmor frequency. 20 In 1 example, Etezadi-Amoli et al 14 placed a toroidal coil over a guidewire to excite and receive local NMR signals. Solutions for the safety and visualization of catheters have also been attempted on guidewires. 10,[21][22][23] For instance, Sonmez et al 23 attached an active solenoid coil for visualization and a temperature probe for temperature monitoring, modifying the guidewire substantially.
Additional opportunities toward safe device visualization may be found in harnessing and controlling the typicaly dangerous RF-induced currents. These behave according to Ampère's law and enhance the RF magnetic field (B 1 ) around the guidewire mediating the visualization. The control of the RF-induced current may be achieved by manipulating the electromagnetic fields around the guidewire, given that the currents depend on the magnitude and phase of the incident electric field. A degree of control over the emitted electromagnetic fields is possible with parallel transmit (PTx) array coils. It has been shown that RF-induced current "modes" may exist on the guidewire and can be determined by 1 or more current sensors placed over the guidewire. 24 Typically, 1 or more maximum current modes (MM) in which the PTx system produces a strong current on the conductor exist, along with additional null current modes (NM) [24][25][26] in which RF excitation produces 0 measured current at the sensor's location. Despite the NMs producing little or no RF current, they can still generate substantial transmit (B + 1 ) field; hence, they can be harnessed for safe imaging of the anatomical structures with a guidewire in situ. In addition to the cardiac interventional example that is the focus of this work, NMs have been explored for other safe imaging applications, including the imaging of deep brain stimulators. [25][26][27] On the other hand, the MMs can produce dangerously high RF currents that enhance the transmit B + 1 fields near the guidewire. MMs have not been used for imaging because of their potential for heating; but if properly treated, these modes can help in visualizing the guidewire by directly imaging the NMR adjacent to the guidewire. The magnitude of the B + 1 produced is inversely proportional to the radial distance from the guidewire; thus, even small guidewire currents can produce significant B + 1 adjacent to it. With very low RF power sequences, a measurable signal comes mainly from the locality of the guidewire. 27,28 Some have shown that the guidewire to coil coupling can be leveraged for visualization using birdcage coils and that it can be used to assess the safety of RF excitation. 27,29 The combination of PTx control and direct RF current monitoring of induced currents may provide an alternative means to both visualize the device and anatomy in a safe manner without device modification.
The work by Etezadi-Amoli et al 24 has shown that coupling modes exist and has characterized them. The main focus of this paper is to demonstrate that the coupling mode MM (usually omitted and considered hazardous) can be harnessed for safe and robust guidewire visualization, as well as to investigate the performance of common pulse sequence types for optimum device visualization in this regime.
| Transmit field enhancement
The electromagnetic RF field used during MRI transmission consists of both a magnetic field, B + 1 , and an electric field, E. The electric field drives currents, I, on the guidewire, which in turn generate a local magnetic field, B 1,w ; they also produce heating at locations where charge can be displaced in the dielectric medium, for example at the guidewire tip. 20 The transmitted magnetic field is enhanced or suppressed by the local magnetic field, thus generating NMR signal predominantly from protons around the guidewire. This enhancement can be large enough so that with low RF power a detectable NMR signal is produced using standard pulse imaging sequences, which enables safe guidewire visualization. In a PTx system, whereas the transmit magnetic field B + 1,c (r), at location r, is produced by the c th element, the simultaneously produced electric field E c (r) will induce current I c (r′), at location r′, on a conductive guidewire ( Figure 1). This induced current in turn generates a local scattered magnetic field, B 1,w (r), which is linearly polarized and oriented in the circumferential direction. 30,31 This field adds together with the imposed B + 1,c field produced by the RF coil. We define the guidewire enhancement factor k(r) as the ratio of the B + 1 field produced with the guidewire in, compared with the guidewire out, such that sufficiently close to the guidewire, k ≫ 1.
When transmitting on all channels simultaneously, the overall B + 1 field is a linear combination of the individual coil B + 1,c fields weighted by complex weighting factors w c , often referred to as RF shims. This is also true for the induced current on the guidewire because it is linearly related to the fields produced by each coil. The MM mode (the mode with the largest singular value) has the property of maximizing the induced current I and thus will also maximize k(r). The method of mode identification is described in 24 : briefly, modes are computed by the singular value decomposition of the coupling matrix, which is formed from current measurements obtained by different current sensors when the RF array is driven 1 coil at a time with the same amplitude and phase. In the MM mode, the flip angle produced by a given RF pulse is now also a strong spatially dependent function as a result of the spatially variable fields. To avoid confusion, we will define the nominal flip angle θ as the angle that would be achieved in the absence of any wires. If the local B + 1 field has been significantly enhanced, the true flip angle will be much greater. Hence, even when using low amplitude pulses that do not create significant risk of heating, high flip angles can be produced close to a conductor using the MM mode, enabling visualization of the guidewire while generating limited signal from the rest of the object. Optimum guidewire visualization may contain the following properties: no heating risk, low signal in the background at the nominal flip angle, large signal enhancement at the guidewire (a large enhancement factor k), and short time per image frame to allow dynamic imaging of moving guidewires.
| Sequences for guidewire visualization
The following pulse sequences were considered and chosen for their rapid imaging ability: balanced steady state free precession (bSSFP), spoiled gradient echo (SPGR), single shot turbo spin echo (TSE), and single shot echo planar imaging (EPI). The signal response for each of these can be compared by numerical simulation, assuming a reference T 1 time of 1531 ms 32 and T 2 of 100 ms, approximately relevant to blood at 1.5 Tesla. 33 For the sake of comparison, we consider 2D versions of all sequences and a scenario where the repetition time (TR) for bSSFP and SPGR and the echo spacing for the TSE sequence are all the same; this was fixed to 4 ms. Whereas the short TR sequences can be studied in steady state, this is not so for TSE; thus, we adopted a plausible single shot imaging scenario assuming an echo train length of 50 echoes, with partial Fourier encoding such that the signal is determined by the 10th echo. Because dynamic repetition of TSE leads to saturation, we include a 100 ms delay between shots and a tip-back pulse to speed up longitudinal recovery. 34 For TSE, we define θ as the excitation nominal flip angle and use refocusing flip angle 2θ with a flip-back pulse at the end of the echo train of angle −θ. The EPI sequence does not have a simple comparison point in terms of numbers of RF pulses; hence, it was simulated with TR 200 ms, which corresponds to a similar frame rate to the other methods. Simulated signals were computed using known steadystate expressions for the SPGR/EPI (i.e., the Ernst equation) and for bSSFP. TSE was simulated using the extended phase F I G U R E 1 Schematic illustrating the B 1 field enhancement near the guidewire graph 35 method, with the steady state over multiple dynamics computed as described in Ref. 36 . Figure 2 contains the results of these simulations. Figure 2A shows the signal as a function of flip angle. Each curve has a maximum signal, S max , occurring at some flip angle θ max . If these sequences are used with a MM RF shim setting, the flip angle local to the guidewire will be much larger than the surroundings. If we adjust the power level such that the flip angle local to the guidewire is θ max , this will maximize the signal from the guidewire. With these considerations in mind, we can conclude that the best sequence in terms of overall signal to noise ratio (SNR) is likely to be EPI, with bSSFP coming second. We may, however, exclude EPI from the candidates because in reality the high degree of B 0 field inhomogeneity that can be expected over wide field of view (FOV) required for this application would lead to spatial distortion and signal loss in this sequence. Figure 2B plots signal normalized to the maximum for each sequence against flip angle normalized by θ max ; this allows us to deduce how quickly the signal will drop off away from the guidewire. A rapid drop-off in signal will give the sharpest depiction of the guidewire. Figure 2B suggests that the optimal sequence for sharpness in guidewire visualization is the TSE, whereas bSSFP is the optimal sequence to maximize SNR.
| Hardware and phantoms
Experiments were performed by measuring currents on a standard guidewire in a gel phantom in the presence of RF excitation. All measurements were performed on a 3 Tesla MRI system (Achieva, Philips, Netherlands) with an 8-channel transceiver transverse electromagnetic body coil 37 used for transmission and a local 6-channel torso coil for reception. A 14-liter phantom was filled with poly(acrylic) acid gel prepared according to American Society for Testing and Materials standard F2182. 38 An array of horizontally tilted plastic circles and an Eppendorf tube (filled with mineral oil, the narrow end colocalized with the guidewire tip) were placed in the gel to add distinguishable features. The relaxation properties of the gel were measured using a 1 liter sample studied with inversion recovery TSE for T 1 and multi-echo spin echo for T 2 . The measured parameters were T 1 = 2270 ms, T 2 = 250 ms.
Experiments used a standard nitinol core guidewire with polyurethane outer coating of 0.89 mm diameter and cut to 900 mm length (RF+GA35153M, Terumo Corporation, Japan) to increase resonance. The most distal 4 mm of the guidewire's polyurethane coating was removed because an exposed tip is expected to produce a worst-case heating condition 4 and was not done so for visualization gains. The guidewire was placed in the phantom oriented along the static magnetic field direction of the scanner, with 46 cm of the guidewire immersed in the gel. RF current measurements were made using 2 toroidal coil sensors, 14 both placed over the guidewire outside the gel. The sensors used an electro-optical connection to minimize direct coupling to the RF fields 14 ; RF signals were measured directly by the scanner's spectrometer.
| Current modes and RF shimming
As outlined above, the maximum and null current modes were determined, as described by Etezadi-Amoli et al, 24 using the F I G U R E 2 (A) Predicted signals relative to M 0 for each candidate sequence (details in text); for TSE, the excitation flip angle is θ, and the refocusing angle is 2θ. For each signal trace, the peak signal S max occurs at flip angle θ max . (B) Predicted signals normalized to S max versus flip angle normalized to θ max . This plot shows how the signal drops off as the flip angle is reduced from θ max for each sequence. TSE, turbo spin echo measured induced RF current on the guidewire. A 2 × 8 coupling matrix was formed by measuring induced currents from the 2 sensors as each of the 8 transmit channels was energized in turn. After performing the singular value decomposition on the coupling matrix, the columns of the right singular vector's matrix yielded the required mode weights with 2 MMs and 6 NMs. This is because the coupling matrix has a rank of 2, giving 2 singular vectors with non-0 singular values and 6 with 0 singular value forming the null space. 24 The mode with the largest singular value was chosen as the MM RF shim and used to visualize the guidewire only. For tissue visualization, a uniform B + 1 field must be constructed from the remaining null modes because each produces 0 induced currents; any linear combination will produce the same result. By using the actual flip-angle method 39 in a combined (nominal quadrature) mode and using the low flip angle SPGR scans for each individual channel, B + 1 maps for each coil were acquired. The resulting per channel B + 1 maps form the columns of N × 8 matrix P, where N is the number of pixels in each image. The 6 × 8 matrix Ṽ of channel weights for each of the 6 null modes may then be used to compute P = PṼ, which is a matrix of null mode B + 1 maps. These were used within a magnitudeleast-squares RF shimming calculation 40 : where x is a vector of complex RF shim weights to apply to each virtual null mode, whereas w is a vector of complex RF shim weights to apply to each physical coil, and T is the target B + 1 . The goal of the optimization was a uniform field with a magnitude equivalent to generate 100% of the nominal flip angle. Note that, although the optimization computes x, the shims w are actually applied to the coil.
| Determination of guidewire enhancement factor
The guidewire enhancement factor k was measured by acquiring multiple SPGR images with a range of 64 different nominal flip angles θ with the guidewire in and out. In each case, the measured signals S were fitted pixel-wise to Equation 3: where the local B + 1 scaling factor c and overall scaling A were the unknowns. The guidewire enhancement factor is then given by the ratio of c with guidewire-in to guidewire-out. In these experiments, a 5 mm slice orthogonal to the guidewire was used, with 1 mm in plane resolution and TR = 3.3 ms. For the guidewire-in, θ was stepped in the ranges 0° to 5° in steps of 0.08° and then 5° to 90° in steps of 2.75°, and for guidewireout in the ranges 0° to 90° in steps of 2.9°. Uncertainty in the estimated enhancement factors was estimated using a residual resampling bootstrapping method.
| Imaging experiments
Visualization of the guidewire was performed using MM excitation with low power, a mode usually not considered safe at normal power levels. The candidate pulse sequences (SPGR, bSSFP, and TSE) were all set up with a thick slice (50 mm) to produce a projection image over the complete guidewire region. Sequences were individually optimized to maximize frame rate for the same nominal resolution (1 mm in plane) and FOV (252 mm × 398 mm). For the SPGR sequence, the echo time (TE) and the TR were 1.54 ms and 3.19 ms, respectively, whereas for the bSSFP these were 1.55 ms and 3.10 ms, respectively. The bSSFP used SENSE factor 2 and partial Fourier reconstruction to obtain a frame rate of 2.5 frames per second (fps); these acceleration measures were not used for the SPGR because the SNR was insufficient, so this yielded a frame rate of 0.79 fps. The single shot TSE used an echo spacing of 4.8 ms and a combination of SENSE factor 2 and partial Fourier reconstruction to give an TE of 53 ms and frame rate of 1.01 fps. Note that flip-back pulses were not used for the TSE in experiments because they were not found to have a strong influence at the frame rates used here. Although the PTx system can be used to set the maximum current mode, the guidewire enhancement factor is not known in advance; thus, the optimal nominal flip angle for visualization must be determined empirically for each sequence. This was done by sweeping through a range of input power scales and selecting the best power level empirically based on guidewire visualization. The criteria used for this were 1) sharp depiction of the guidewire shaft, 2) as much of the guidewire tip visible as possible, and 3) minimal background signal.
For the real-time visualization test, a 150 cm long guidewire was pulled out of the phantom during imaging with TSE. An approximately 5 cm diameter loop was formed by the guidewire in the FOV (Figure 3). To speed up the frame rate to 2.5 fps, a lower resolution of 2 mm was used. The guidewire slowly was pulled out manually while the imaging protocol dynamically repeated for 100 frames.
| Guidewire heating tests
During imaging protocols, temperature at the bare tip of the guidewire was monitored using a fiber-optic temperature probe (LumaSense Technologies, Inc., USA) attached to the guidewire tip, tied on with nylon string in parallel with the guidewire axis and flush with the guidewire tip. It is shown in the literature that the guidewire tip is the site for worst case heating 20 ; it has recently also been demonstrated from a study in the presence of deep brain stimulator electrodes that RF did not increase the heating or specific absorption rate (SAR) at other locations far from the wire tip. 25 Worst case heating was demonstrated by running a TSE scan at high power (maximum allowable scanner-reported SAR for 6 min). Subsequently, equivalent measurements were made while visualization sequences were run. After each heating acquisition, the temperature was allowed to return back to baseline (18.6°C-19.0°C).
| Determination of guidewire enhancement factor
A map of the guidewire enhancement factor is shown in Figure 4. The enhancement factor decreases very quickly as distance from the guidewire increases, with the pixel at the guidewire having a maximum enhancement factor of 108 (± 6.6). The mean of 8 central pixels around the guidewire was 82 (± 1.79).
| Imaging experiments
The signal as a function of nominal flip angle for each tested sequence is shown in Figure 5, taken from a pixel location close to the guidewire. The shapes of the curves in Figure 5B follow the same profile seen in simulation ( Figure 2B), with the SSFP having the highest signal; however, the TSE showing the fastest descent from the peak signal as flip angle is reduced. The nominal flip angle that gives the best guidewire visualization in MM mode was determined to be 0.36°, 0.06°, and 0.94° for bSSFP, SPGR, and TSE, respectively. It should be noted that these values do not correspond to the signal peaks as shown in Figure 5 because the criteria used for determining the nominal flip angle to use for guidewire visualization (described above) considered other factors in addition to SNR. Figure 6A though C shows coronal view projection images, normalized to maximum value, which were acquired using the visualization sequences. It can be seen that TSE provides the cleanest delineation of the guidewire with very little background contamination. In contrast, the bSSFP shows the guidewire but with contamination from banding artifacts that are bright for low flip angles rather than the more familiar dark bands usually seen when using higher flip angles. The SPGR also shows the guidewire; however, the background signal is greater than that of the TSE by 152%. The SNR ratio for each guidewire visualization technique was 114, 106, and 58 for the TSE, SPGR, and SSFP, respectively. The TSE demonstrates a better image quality and guidewire to background contrast.
Corresponding "tissue" visualization images generated using the shimmed NM excitations are shown in Figure 6-9D through F. The guidewire is still visible in these images. This is hypothesized to be due to the guidewire having a similar (but uncontrolled) effect on the receiver sensitivity of the array coil and residual enhancement from currents outside F I G U R E 3 The guidewire geometry used while the guidewire was withdrawn from the phantom F I G U R E 4 Scale factor, k, for B 1 enhancement at the guidewire in an axial image (axis labels are in mm). The B 1 enhancement factor is the ratio of the peak signal flip angle with guidewire-in to guidewireout. The pixel size is 0.9 mm × 0.9 mm the null point. Nevertheless, the background poly(acrylic) gel can clearly be visualized along with the circles array structure.
In Figure 6F, banding artifacts are observed in the periphery of the image, which are well known to exist in balanced sequences and can be reduced with static field shimming. 41 The B 0 inhomogeneity around the guidewire was handled with localized second-order shimming. In the null images, Figure 6D through F, the guidewire sometimes shows up as a signal void as in Figure 6E; and at other times, Figure 6D and F, the tip is present with signal and some of the shaft is void of signal. The guidewire visualization is not consistent across pulse sequences, leading to the conclusion that the receive enhancements seen in null images alone are not reliable enough to visualize the guidewire. Furthermore, these effects would be less appreciable in a heterogeneous background. Figure 7 shows line profiles through the images from Figure 6A through C; the measured full width at half maximum (FWHM) for the guidewire is plotted. It is revealed that the TSE displays the narrowest guidewire width ( Figure 7B). The FWHM mean of 100 points along the guidewire shaft is 2.8 mm (± 0.19), 3.5 mm (± 0.26), and 2.6 mm (± 0.09), respectively, for the bSSFP, SPGR, and TSE.
| Real-time visualization
Selected frames of the real-time TSE acquisition of the guidewire being manually pulled out from the phantom are shown in Figure 8. The real-time results are shown in Supporting Information Video S1. A frame rate of 2.5 fps was achieved using all available standard parameters in the pulse sequence definition. Although the coupling matrix can be changing during the guidewire pull, it was not updated throughout this experiment. However, with a single coupling measurement and RF shim setting, the guidewire was visible throughout its trajectory. Figure 3 depicts the guidewire geometry used while the guidewire was withdrawn from the phantom.
| Heating tests
Temperature measurements made using a high SAR TSE sequence are shown in Figure 9. A temperature change of 18.0°C was measured for the MM at full amplitude (θ = 90°) and of 0.02°C with the MM at 1.1% (θ = 1°) amplitude used for guidewire visualization. The optimal combination of NMs used for anatomical imaging did not produce any detectable temperature increase when used with the TSE, SPGR, or bSSFP set to a nominal flip angle of 26 degrees. The B + 1 field used in these tests was sufficiently high for anatomical visualization in the entire FOV. It was also noted that no temperature increase was detected at the point of hand contact while handling the guidewire from outside the phantom, which supports what is found in the literature. 20
| DISCUSSION
Parallel transmission offers a method for allowing guidewires to be "decoupled" or maximally coupled from the MRI transmission system. In this work, we propose to use such a system to effectively visualize these devices using the TSE pulse sequence configured in a way that maximally couples to them F I G U R E 5 Signal from a pixel at the guidewire location. (A) Measured signals for each candidate sequence using a straight guidewire; for TSE, the excitation flip angle is θ, and refocusing angle is 2θ. For each signal trace, the peak signal S max occurs at flip angle θ max . (B) Measured signals normalized to S max versus flip angle normalized θ max using a straight guidewire while transmitting at low power. Although this would be unsafe at normal power levels, our results show that the local B 1 + fields are significantly enhanced by a factor of over 100 when maximum coupling mode is used, as shown in Figure 4. The use of standard pulse sequences with significantly reduced nominal flip angles therefore results in high signal from the blood adjacent to the guidewire and very little signal anywhere else. The fact that these sequences use very low amplitudes means the associated RF power is very small, and heating effects are insignificant. An added benefit of this approach is that the mode in which the transmit coil is used may be switched instantly (potentially on a pulse-by-pulse basis). Hence, "decoupled" mode excitations can be used for standard anatomical imaging with high RF power levels but no heating, and strongly coupled modes can be used for guidewire visualization at very low RF power levels, also with no heating, potentially within the same imaging sequence.
Simulations predicted that TSE sequences scaled to very low power levels would offer sharp visualization of wires in the maximum coupling mode, which was confirmed experimentally. Although bSSFP can offer higher SNR, the sequence suffers from severe "bright band" off-resonance artifacts when used with very low flip angles, making these images less suitable for guidewire visualization. Heating tests showed that when run at very low power, MM excitations do not produce any measurable heating. Conversely, any required sequence can be used for visualization of background tissue in conjunction with a NM excitation, with no recorded heating at the guidewire tip, even when running at 100% (reported) SAR. Similar trends were observed in other experiments, including some using the same setup, others with a much larger phantom (~30 liters) and a local 8-channel PTx surface coil (data not shown), and others using a much smaller phantom. 42 The presented heating measurements were focused on the guidewire tip considering that in the literature this is the location where SAR and heating will manifest the highest. 20,25 Furthermore, when conducting test procedures, the interventionist manipulating the guidewire did not detect feeling any increases in temperature for any experiments conducted. This said, it was not possible to systematically monitor temperature along the guidewire's length; therefore, we cannot rule out some unobserved heating, which is a limitation of the current study.
A limitation of using only NM excitation modes for background tissue visualization is that the efficiency of the RF coil is effectively reduced by removing the MM modes; thus, production of a homogeneous excitation within peak power limits is a challenge. In the present work, this limited achievable flip angles when using NM excitation; for example, for TSE the excitation was limited to 26° instead of 90°. Future work to address this will focus on the design of the RF transmitter coil because the generated field patterns are expected to influence this. Another issue in the use of metal wires is that receiver (i.e., B − 1 ) coupling can also lead to image artifacts, as seen on Figure 6D, local to the guidewire. Image processing methods could potentially be used to reduce such effects because they are multiplicative to the image. Differential receiver-related enhancements between receive channels and the presence of receiver-related signal voids can be reduced by reconstructing array coil data using sum of squares, as detailed by Eryaman et al. 43 The effects can be seen in comparing Figures 6D and F with Figure 6E, demonstrating both SENSE and sum-of-squares reconstructions. Hence, the NM imaging results in Figures 6D and F could likely be improved by an altered handling of receiver data in the reconstruction. Additionally, there may remain small currents on the full extent of the guidewire to produce transmit enhancement effects. This is because the null of the current is only guaranteed at the location of the current measurement. It is noted that these small remaining currents do not pose a heating risk, as seen in the results of the heating test in which the same protocol was used. A solution to this might be to place current sensors inside the dielectric medium to achieve a stronger null in the FOV. 44 A potential issue with the proposed guidewire visualization approach is that the tip of the guidewire is less easily visualized than the shaft. Figure 8 and the supporting video file (Supporting Information Video S1) show that it is still possible to see the tip as it moves, but it is fainter than the rest of the guidewire. This is because the induced currents on the guidewire decay to 0 (or a very small value) at the tip. There is evidence that an optimal length of exposed guidewire tip of around 2 to 10 mm exists that increases guidewire tip visualization. 14 In this work, we did strip insulation over the last 4 mm of the guidewire, although this was motivated by a desire to create a worst case heating risk. In practice, we would not advocate for purposely stripping insulating coatings because this would introduce a safety risk if currents are not correctly controlled, and exposure of a sharp metal tip may also risk puncture injuries. Others have investigated methods for visualization only of a catheter or guidewire tip, for example, by using a balloon-tip catheter filled with gadolinium or air/CO 2 45,46 or by using modified interventional devices with integrated tracking coils 47 ; however, a drawback of these methods is that they can only visualize the tip (or the single position where the marker is placed). Hence, a hybrid solution using our proposed method along with a separate tip marker may prove most successful. Additionally, it has been Only the guidewire is visible, and the background signal is dark. Note that only a single static RF shim is used noted elsewhere that visualizing the shaft alone and the close proximity to the tip may be sufficient for successful guidewire placement in vivo. 8 Another implementation challenge is to keep the tip and shaft in plane and know when the tip or shaft has fallen outside the plane. A practical solution is to use a thick slice to encompass the guidewire. A computational alternative is to use multiple real-time projections to reconstruct the three dimensional trajectory of the device in a fraction of a second and estimate the image slice containing the feature of interest. 48 The number and positioning of current sensor(s) is critical both for practical reasons and safety. In this work, 2 sensors, both outside the phantom, were used to estimate currents on the inserted part of the guidewire; based on the temperature results, induced currents are nulled. However, it cannot be certain that the same conditions exist when the guidewire geometry changes. There may be instances when modes exist on the inserted length that are not fully sensed on the external length. It has been shown that a key variable is insertion length, 6 which changes during an intervention, making it important to actively control and monitor induced currents both outside and potentially inside the dielectric (i.e., human body). Optimal sensor placement and RF coil design for this application are the subjects of current work. 44 Alternative image-based methods for measuring the coupling matrix have been proposed. 49,50 A feature of the proposed method is that the optimal nominal flip angle for visualization must be determined empirically because we expect guidewire visibility to be a function of the coupling to the coil, which is likely to vary. The inability to make adjustments continuously is a limitation of the current approach; however, in practice this may not prove to be a serious issue because the user could adjust this parameter while viewing the images in real time. Because guidewire visualization is achieved with very low RF power, the sweep through flip angles may be safely accomplished.
In future work, a faster frame rate will be investigated to visualize rapid guidewire movement with a goal of 7 fps. The methods used in this paper achieved adequate speed for a proof of concept using standard sequences. It is clear from Figures 6 and 8 that the guidewire-only images are truly sparse; hence, there are many opportunities for acceleration using alternative sampling/reconstruction methods.
| CONCLUSION
A method for visualizing a standard guidewire separate from the anatomy has been demonstrated by using a maximum coupling mode at low power, which is usually regarded as hazardous and omitted. TSE sequences at very low power were found to yield sharp delineation of the guidewire at reasonable SNR without severe off-resonance artifacts present in balanced SSFP images. | 8,682.8 | 2019-11-13T00:00:00.000 | [
"Physics"
] |
Deep-Sea Habitats and Megafauna on the Slopes of the São Paulo Ridge, SW Atlantic
The São Paulo Ridge (SPR) is a 350 km-long linear geological feature located in the Continental Margin off Brazil (Latitude 28–29°S, Longitude 40–45°W). In 2013, the region was mapped during the SW Atlantic “Iata-Piuná” expedition and explored by a series of deep-sea dives of the manned submersible Shinkai 6500. A digital bathymetric model analyzed for seafloor morphology, delimited four major bathymetric sectors namely plateau, ridge crest, ridge escarpment and ridge foot. These sectors further enclosed 12 morphological features at smaller spatial scales (structural classes) including plains, valleys, peaks, terraces, and troughs. Video profiles across the depth gradient (4,219–2,644 m depths) revealed that the slopes of the SPR southern flank were gentle and terraced, mostly covered by biogenic sediments and interrupted by rocky cliffs/crests, dispersed outcrops and loose particles. The North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW) overlaid at the escarpment along which they established colder (0.4–1.0°C; 4,200–3,400 m) and warmer (2.0–3.0°C; 3,400–2,600 m) habitats, respectively. Physical components were used to define seven seascape units in the ridge foot (2), escarpment (3), and plateau-ridge crest (2), where a total of 914 organisms of the epibenthic and benthopelagic megafauna were recorded. Over 70% of these records were sessile suspension feeders, including sponges (61.5%) and anthozoans (11.4%). Most taxonomic groups concentrated above 3,800 m, under the influence of NADW, where densities reached maximum values (mean 0.26 organisms.m–2; 0.024–0.027 organisms.m2 95% CI). Also, nearly half of megafauna records concentrated in patches delimited by the 3,800–3,300 m and 2,900–2,700 m isobaths. The deepest patch (3800–3300 m) coincided with the interface zone between AABW and NADW, where mixing processes create a density gradient. Evidences suggested that topography-related deep-water flow dynamics, and not substrate availability, drives benthic megafauna distribution at meso-habitat scale.
INTRODUCTION
In recent decades, marine sciences, industry and conservation initiatives have turned their attention toward the Southwest Atlantic. The region has a critical role in the Atlantic Meridian Overturning Circulation (AMOC), whose dynamics influence poleward heat flux and global climate (Garzoli and Matano, 2011;Frajka-Williams et al., 2019). Deep components of AMOC are strongly affected by distinctive seafloor topographies, whose origin and morphology derive from geological events established throughout the history of South Atlantic Ocean expansion (Bassetto et al., 2000;Ussami et al., 2012). In association with topographic features, valuable mineral deposits have been mapped and explored (Hein et al., 2013) and deep ecosystems and biodiversity have been increasingly described (e.g., Perez et al., 2012;Kitazato et al., 2017;Jovane et al., 2019).
Particularly relevant in this context is the east-west trending system of topographic features that include the Rio Grande Rise, Vema Channel, São Paulo Ridge and São Paulo Plateau (Figure 1). With depths spanning 600-5,000 m, these rises and troughs interpose and provide channels for the flow of the deepest water masses of the Atlantic Ocean: the North Atlantic Deep Water (NADW) and the Antarctic Bottom Water (AABW). The former flows southwards along the West Atlantic, compensating the northward circulation of surface, central and intermediate waters, and maintaining mass balance in the Atlantic (Garzoli and Matano, 2011;Frajka-Williams et al., 2019). Transported by the Deep Western Boundary Current between 1,500 and 3,000 m depths, NADW circulation is constrained westward by the topography of Brazil's continental slope and transversely oriented seamounts and ridges, most notably the Vitoria-Trindade Chain, Rio Grande Rise and São Paulo Ridge (Stramma and England, 1999;McDonagh et al., 2002). Below 3,000 m, the AABW flows northwards, along the Southwest Atlantic basin and into the North Atlantic, often via abyssal conduits, most notably the 5,000 m-deep Vema Channel (McDonagh et al., 2002;Morozov et al., 2010). These deep-water masses, with their distinctive physical and chemical properties, interact with the seafloor contributing to the establishment of variable sedimentation regimes, habitats and biological communities. Descriptions of these relationships in the region are generally scarce but have progressively increased often driven by initiatives addressing the needs for conservation and future sustainable mineral exploitation (e.g., Sumida et al., 2016;Hajdu et al., 2017;Perez et al., 2018;Montserrat et al., 2019).
Bathyal communities tend to change continuously along depth gradients essentially because of depth-correlated conditions such as pressure, temperature and dissolved oxygen (Carney, 2005). Drastic changes, however, may be driven by spatial discontinuities in physical and chemical conditions, food supply, sedimentary regime, bottom currents, and major topographic features (Rex and Etter, 2010). These effects may be conspicuous in the São Paulo Ridge (SPR) whose southern flank forms a steep 2,200 m depth gradient that extends transversely to the predominant flow of the SW Atlantic deep circulation (Alberoni et al., 2019). The region was subject to geological studies in the 1970s and 1980s (e.g., Gamboa and Kumar, 1977;Gamboa and Rabinowitz, 1981) but remained mostly unexplored in terms of habitat configuration and biodiversity. In 2013 the SPR was targeted by a global expedition "Quelle 2013 -Quest for the limits of life" led by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), which searched for extreme deep-sea environments with the RV Yokosuka and the manned submersible Shinkai 6500 1 . An important finding of this exploration in the SPR was a deep whale fall with a well-established chemosynthetic community (Sumida et al., 2016;Cavalett et al., 2017;Shimabukuro et al., 2017;Shimabukuro and Sumida, 2019). In addition, the geological setting of the explored areas and a report of a fossil whale skull was provided by Ichisima et al. (2017). In this study we describe nonchemosynthetic benthic meso-and macro-habitats (sensu Greene et al., 2007) and megafauna along the SPR depth gradient and explore the effect of associated abiotic factors, including substrate, seafloor morphology and deep-water mass stratification. We further estimate large-scale distribution of benthic habitats in the SPR by analyzing bathymetry-derived terrain variables and classification of seafloor features. Benthic megafauna variability across different spatial scales will be discussed as a baseline and hypotheses for more detailed ecological studies in the future.
Study Area
The São Paulo Ridge (SPR) is a 350 km-long linear geological feature located in the Continental Margin off Brazil, between 28-29 • S and 40-45 • W ( Figure 1A). It is a component of the east-west trending Rio Grande Fracture Zone alignment that delineates the southern boundary of the São Paulo Plateau (Bassetto et al., 2000). As other aseismic ridges, the SPR is asymmetric with (a) a flat 2,000 m-deep northern flank, buried by sediments of the São Paulo Plateau, and (b) a steep southern flank (known as the São Paulo Escarpment), diving from 2,500 m-deep crests to a 4,200 m-deep foot (Gamboa and Rabinowitz, 1981;Alberoni et al., 2019). It is believed that SPR current morphology is largely derived from irregular submarine vulcanism that started in the Aptian period (∼120 m.a.), during the early South Atlantic Ocean expansion, when the SPR acted as a barrier obstructing the marine circulation toward the north and promoted the formation of a shallow environment where the deposition of a thick layer of evaporites resulted in the formation of the São Paulo Plateau (Gamboa and Rabinowitz, 1981;Bassetto et al., 2000;Ichisima et al., 2017). The southern flank interposes the northern flow of the Antarctic Bottom Water (AABW) generating countercurrents and eddies that mobilize sediments and form a trough delineating the deep contour of the ridge, also known as the São Paulo Channel (Figure 1B; Gamboa and Kumar, 1977;Alberoni et al., 2019). The NADW flows southward over the São Paulo Plateau, across the SPR crests and down over the escarpment where it overlays the AABW.
Analyzed Data
Data on SPR seafloor benthic habitats and megafauna were acquired during a research cruise conducted by the RV "Yokosuka" in 2013, under the "Iata -Piuná" consortium established between the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), the Oceanographic Institute of University of São Paulo (IOUSP) and the Geological Survey of Brazil (CPRM) . The SPR area was explored between April 23 and 27 and included swath bathymetry and deep-sea dives of the manned submersible Shinkai 6500.
Bathymetric data were acquired during along-ridge transects with a 12 kHz hull-mounted Multibeam Echosounder System (MBES), integrated Differential Global Positioning System (DGPS) and compensated by the Inertial Measurements System (IMU). The data were filtered to remove spurious depth records and submitted to methods of linear interpolation to produce a Digital Bathymetric Model (DBM) with a cell size of 123 m. Video transects were produced during four deep-sea dives along a complete SPR southern flank depth profile, from 4,219 to 2,644 m depths (Figure 1 and Table 1).
Dives 6K1333 and 6K1336 explored the abyssal region at the foot of the ridge. Part of these dives were dedicated at the study of a whale-fall carcass environment, encountered at 4,204 m depth, previously described by Sumida et al. (2016) and excluded from this study. Dives 6K1334 and 6K1335 described the seafloor along the depth gradient of the São Paulo Escarpment. All dives involved approximately 4 h of activities near the seafloor, including photo/video recording, geological and biological sampling. Video data was acquired by two HD-TV color video cameras, both positioned at the bow, 1.7 m above the vehicle's bottom (Nakajima et al., 2014). Camera 1 angled obliquely 40 • toward the seafloor and recorded continuously a fixed area ahead the bow of the submersible (horizontal acceptance = 90 • , vertical acceptance = 57 • ). Camera 2 was mobile (pan -tilt) and was used for detailed observations of habitat features and megafauna species. Continuous information of date/time, depth (and altitude in meters) and the vehicle's heading (in degrees) were overlaid in the videos. Horizontal position (latitude, longitude) was estimated by SSBL (Super Short Base Line) method which required a transponder mounted on the submersible and an where α is the altitude of the camera, θ and ω are the camera's horizontal and vertical acceptance angles, respectively, and δ is the angle of the camera from vertical (Nakajima et al., 2014). Measurable uncertainty in area estimations derived from variability in α along video transects. Bootstrap 95% confidence intervals of MLW were calculated for video transects or segments of it, using the quantile method. Megabenthos specimens were collected by the submersible's manipulators and slurp-gun, photographed on board and stored in ethanol (75%) and formalin for posterior identification by specialists in the National Museum (Federal University of Rio de Janeiro) where they were cataloged.
Videos, sample records and oceanographic data and metadata are available in "DARWIN -Data and Sample Research System for Whole Cruise Information in JAMSTEC 2 ."
Seafloor Morphometric Analysis
The DBM was analyzed for seafloor morphology and segmentation (Brown et al., 2011). This process included the transformation of bathymetry data into secondary-derived layers (terrain variables) using the algorithm package Benthic Terrain Modeler (BTM) (Walbridge et al., 2018) contained in ArcGIS Desktop R 10.2.2. Variables with the greatest potential for the description of benthic habitats at the corresponding spatial scales were chosen, including surface gradients (slope, aspect), relative depth (Bathymetric Position Index -BPI) and surface rugosity (Vector Roughness Measurement -VRM) (Wilson et al., 2007;Walbridge et al., 2018). BPI is a neighborhood analysis function of the mean depth around each cell in the DBM. Positive values correspond to features and regions that are higher than the surrounding area which characterize ridges; negatives values would represent depressions on the seafloor. 2 http://www.godac.jamstec.go.jp/darwin/ Values equal or near zero are either flat or constant slope area (Walbridge et al., 2018).
The procedures for seafloor segmentation (Figure 2) started with overlaying 23 bathymetric profiles on the DBM, transversal to the bathymetric gradient of the São Paulo Escarpment (Supplementary Figure 1) for visual interpretation. Along each profile, the variables BPI-broad (annulus = 300 pixels ∼ 36,900 m), BPI-fine (annulus = 60 pixels ∼ 7,380 m) and slope were extracted and analyzed individually in order to identify Zonal and Structural classes (sensu Erdey-Heydorn, 2008). Zonal classification described broad scale surficial characteristic of the seafloor. Classes representing the main bathymetric components were visually defined, and had their boundaries delimited by depth and values of BPI-broad and slope (i.e., averages across the 23 transversal bathymetric profiles). Within each Zonal class, surficial characteristics of the seafloor were then described to a greater detail (Structural classification) using terrain variables BPI-fine and slope. Structural classes were again visually defined and delimited by maximum and minimum values of depth, BPIfine and slope values (i.e., averages across the 23 bathymetric profiles). A classification table was then built, summarizing lower and upper limits of depth, slope, BPI-broad and BPI-fine for all visually defined Zonal and Structural classes (Supplementary Table 1). The classification table was then used as input data for semi-automated classification of new adjusted classes by BTM's Classify Benthic Terrain tool. Such classification procedure (followed by spatial representation of the resulting classes) was performed several times until the spatial segmentation was considered satisfactory (Figure 2). Each model run was conducted after manual adjustments of the terrain variable values delimiting structural classes, and/or the creation of new intermediate classes.
Seascape Classification
Each video produced by camera 1 was initially observed to record changes in depth, altitude, submersible activities and image visibility. Also, seafloor features relevant for habitat classification (following Greene et al., 1999) were selected, namely: substrate texture, particle sizes, relief and others. These features were organized in a scale of "substrate types, " represented by capital letters, where: R = rocky ridge with rugged surface; F = flat rocky pavement with plain or granulated surface; B = loose boulders (>25.5 cm); C = cobbles (>6.5 cm and <25.5 cm); P = pebbles (>2 cm and <6.5 cm); G = gravel (>4 mm and <2 cm); U = unconsolidated substrate varying from fine mud to coarse sand.
A second analysis of these videos included only segments when the submersible was moving ahead. During these segments, substrate types were attributed to 1-min video intervals using a two capital letters system; the first referring to the substrate type covering more than 50% of the visible seafloor, and the second to the substrate covering between 30 and 50%. For example, a segment where seafloor was mostly unconsolidated (e.g., covered by biogenic sediment) with variable amounts of scattered pebbles was codded as UP. Also, when only one type of substrate was visible during the observed interval the correspondent letter was duplicated (e.g., UU = seafloor completely covered by sediments). These combinations of two substrate types were defined as "bottom types" (Greene et al., 2007;Tissot et al., 2007;Tissot, 2008).
Seascapes units were mainly defined by one or more dominant bottom types regularly combined along a continuous segment of the seafloor. Adjacent seascapes were delimited by an abrupt change in the dominant bottom type (or bottom type combination) observed for more than 10 s along the video track, indicating the end of a seascape unit and the beginning of a new one. Bottom types that contrasted with the dominant ones but did not persist long enough along the video track (e.g., a patch), were not considered a new seascape unit, but as part of the current seascape substrate variability. Descriptions of seascape units were complemented by "modifying" elements (e.g., currents, bioturbation), relief and slope (Greene et al., 1999), as well as by the influence of AABW and NADW on the seafloor. This influence was estimated by temperature and salinity data recorded continuously during the dive track by the submersible's CTD. Mixing percentages of these water masses during the dives were calculated according to Mamayev (1975).
Megafauna Diversity
Videos produced by cameras 1 and 2 were analyzed for visible megafauna. Life forms smaller than approximately 5 cm and/or visible in images taken above 2-m altitude were not included in the analysis. Otherwise, videos were stopped at each sighting and recorded the organism type ("morphotype") along with associated information, including date, time, depth (m), altitude (m), and heading (in degrees). Morphotypes were classified in higher taxa (Phylum, Class, Orders) and their consistency was double checked by repeating the analyses of videos produced by both camera 1 and 2. When no morphotypes could be confidently assigned to a given observed organism after these repeated analyses, it accounted for higher taxa quantification only. Some morphotype identifications to family, genus and species level were possible through collaboration with deep-sea fish and invertebrate taxonomists, and with the aid of deep-sea fauna image databases (e.g., OER's Benthic Deepwater Animal Identification Guide 3 and others).
Megafauna spatial distribution and abundance was analyzed by representing density of total recorded megafauna and/or taxonomic groups as a function of the depth strata and seascape units. Density was calculated by dividing organism numbers by the mean estimated areas covered by video transects (expressed as individuals. m −2 ), and their 95%CI limits.
Seafloor Segmentation
The SPR is depicted as the southern margin of the São Paulo Plateau, which extends to the north as 3,000 m-deep plane areas (Figure 1). Along the border of the plateau, prominent ridge crests rise to 2,200 m depths, over 1,000 m above the plateau level. These crests are separated by plane areas and channels that extend to the edge of the plateau, possibly characterizing paths of deepwater flow. The margin of the plateau is continuously bordered by the São Paulo Escarpment that forms a south -southeast facing wall, which is (a) higher/steeper in sectors adjacent to the plateau crests (Supplementary Figure 2, e.g., profiles 3, 11, and 19) and (b) lower/gentler in sectors adjacent to plane areas in between the ridge crests (Supplementary Figure 1, e.g., profiles 8, 15, and 22). The deep edge of the escarpment is connected to the 4,100-4,300 m-deep São Paulo Channel, a trough extending continuously along the SPR. To the south of this channel, the seafloor rises into a 4,000 m-deep plain area that is part of the Santa Catarina Plateau (Figure 1).
The spatial representation of BPI-broad delineated the areas comprised by broad bathymetric components, including the São Paulo Plateau, plateau crests, São Paulo Escarpment, São Paulo Channel and Santa Catarina Plateau (Figure 3). Secondary seafloor structures were evidenced from the spatial analysis of BPI-fine and included valleys and channels on the São Paulo Plateau, as well as discontinuities of the São Paulo Channel. Slope, aspect (easterness and northerness) and roughness (VRM) provided refined spatial representations of (a) channels and valleys bordering the plateau crests, and crossing the margin of the São Paulo Plateau, and (b) the escarpment, depicted as a steep, southern-faced roughed terrain (Figure 3). These were seafloor topography subject to classification by the morphometric analysis (see below).
The seafloor classification procedure resulted in 12 Structural Classes (Figure 4) enclosed within four Zonal Classes. Descriptors (upper and lower limits of depth, terrain variables and slope) and names attributed to each on of them are presented in Supplementary Table 1. The plateau (Zonal Class II) comprises plains (Class II.5), valleys (Class II.6), and gentle slopes (Class II.7) which were shallower than 3,700 m and with less than 5 • slope. The SPR crest (Class I) is composed of seamount-like summits of high (Class I.2), moderate (Class I.3), and low (Class I.4) altitudes, some of the former topped by 2,240 m-deep peaks (Class I.1). Bordering the edge of the plateau,
Seascapes
The deep-sea dives crossed four zonal classes: ridge foot, ridge escarpment, plateau and ridge crest ( Figure 4C). Seafloor texture was dominated by biogenic sediments usually mixed with scattered rocky particles (UP, UC), outcrops and crests (UF, UR) (Supplementary Table 2). These bottom types were recorded for over 56% of the observation time, followed by bedrock outcrops (RU + RC = 33.5%). Areas entirely covered by rocky substrata (e.g., RC, RR, RP, FP, PC) were recorded in 7.1% of the observed time. Most of the area observed (>90%) corresponded to gentle slopes (5-30 • ). Steep slopes (>30 • ) occurred in 2% of the observed area and only in the ridge escarpment.
Texture classification, slope, water masses and modifying elements allowed the differentiation of seven seascape units ( Table 2). The "Abyssal Mud Field" (AMF), was characterized by an undulated sediment surface covering the São Paulo Channel seafloor, with isolated patches of pebbles and bedrock outcrops (UP, UR). This seascape dominated images produced by dive 6K1333 that explored a linear track along the foot of the ridge, in eastward direction over the 4,000-4,100 m isobaths. The effect of AABW flux was noticeable in this seascape, usually producing regular sand waves ( Table 2). Adjacent do AMF, the "Debris Field" (DF) extended along the deep border of the escarpment generally characterized by a mixture of cobbles, pebbles, and boulders, interspersed with biogenic sediments (UC and UP). These bottom types, recorded in 64.5% of the total observation time of this seascape (Supplementary Table 2), were often associated with landslide signs and coated by a thin sediment cover ( Figure 5A).
Moving into the lower section of the ridge escarpment (above 4,100 m depths) the seascape was dominated by a steep and roughed bedrock surface, the "Bedrock Cliff " (BC) ( Table 2). Rocky ridges sometimes covered by piles of loose cobbles (RU and RC) dominated the seafloor (64.5% of the observation time, Supplementary Table 2) ( Figure 5B). Small amount of sediments accumulated in cracks and crevices, and thinly covered bedrock particles. This seascape unit was interrupted at approximately 4,070-3,998 m depths by a narrow terrace formation, the "Granular Flat" (GF) ( Table 2 and Figure 5C), where flat bedrock pavements were usually covered by pebbles and variable amounts of sediments (FP + FU = 71.4% of the observation time, Supplementary Table 2). A positive (convex) relief characterized the upper section of the escarpment (above 3,366 m depths), where the seascape "Bedrock Crest" (BCR) was defined by a mixed substrate formed by bedrock outcrops and sediment ponds (RU) (Table 2 and Figure 5D). This seascape unit was also observed in the ridge crest, between 2,578 and 2,657 m depths (Figure 5). At the ridge plateau and the adjacent ridge crest, flat sedimented areas (UU), occasionally with sparse patches of pebbles and cobbles (UP, UC), formed a seascape unit called "Mud Terrain" (MT) ( Table 2). Sediment surface was generally smooth, but often modified by animal tracks (Figure 5E). The ridge plateau also included a slightly convex "Granular Crest" (GCR) (Table 2 and Figure 5F) covered by pebbles, cobbles and soft sediments (RU + UP = 82.2% of observation time Supplementary Table 2).
Megafauna
A total of 914 organisms of the epibenthic and benthopelagic megafauna were recorded during 700 min of SPR seafloor observation. Over 70% of the records were sessile suspension feeders, including sponges (62.8%) and anthozoans (11.6%). These were followed by shrimp-like crustaceans (11.3%) and fish (Actinoperygii, 8.7%) ( Table 3). A list of identified taxa is presented in Table 4.
Megafauna records were usually sparse (<0.01 individuals.m −2 ) along the SPR depth gradient, but two patches of moderate concentrations (0.07-0.30 individuals.m −2 ) mostly of suspension feeders, occurred at 3,800-3,300 m, and 2,900-2,700 m depth intervals (Figure 6). The former patch occurred in the ridge escarpment and the BC and BCR seascapes. The latter patch occurred at the ridge crest and within the reach of the GCR, MT and BCR seascapes. Both depth zones were 2 | Classification of seafloor explored during deep-sea dives in the São Paulo Ridge, SW Atlantic into seascape units according with Greene et al. (1999).
Seascape unit
Class ( covered by hard substrata (e.g., outcrops or loose particles), but also included a variable coverage of sediments. Most taxonomic groups were observed above 2,900 m and were associated with mixed substratum in the BCR seascape (Table 3). Nearly 60% of all records of Porifera occurred in this seascape being largely represented by two species: a pedunculated sponge, Family Dendoricellidae (cf Pyloderma sp., Demospongiae), and Poliopogon amadou (Hexactinellida) (Figure 7). Corals (Anthipataria and Alcyonacea) also tended to occur deeper, between 3,300 and 3,800 m, on the ridge escarpment (Seascape BC). 74.3 and 74.0% of crustaceans and echinoderms, respectively, were recorded below 2,900 m ( Table 3). The latter was dominated by Holothuroidea, Order Elasipodida, including Psychropotes semperiana, Benthodites sp., and Enypniastes sp. These were observed in BCR usually occurring on sediment ponds. Nearly 44% of fish records concentrated in the 2,700-2,900 m depth stratum (Table 3). One species, Acanthonus armatus (Ophidiiformes, Ophidiidae) was particularly abundant in association with the GCR seascape ( Figure 7D). Epibenthic and benthopelagic megafauna was recorded within a total area estimated in 35609.4 m 2 (34076.6-37236.8 m 2 95% CI). Mean density was 0.026 individuals.m −2 (0.024-0.027 individuals.m 2 95% CI) decreasing progressively with depth (0.130-0.005 individuals.m −2 , Table 5). The sectors above 3,300 m, including GCR, MT and, BCR seascapes, exhibited more elevated concentrations of megafauna, all of them under a prevailing influence of NADW (usually > 80%, Table 5). Upper sectors of BCR (<2,900 m) exhibited the highest fauna concentrations (0.328 individuals.m −2 -0.300-0.360 95%CI) largely dominated by sponges cf Pyloderma sp. and P. amadou (Table 5 and Figure 7C). Estimated densities of taxonomic groups are presented in Supplementary Table 3.
DISCUSSION
Deep-sea habitats and megafauna biodiversity were described in a limited area across the SPR depth gradient and related to variability of physical factors. Comparable to other ridges in the Atlantic, the slopes of the SPR southern flank were gentle and terraced, mostly covered by biogenic sediments and interrupted by rocky cliffs/crests, dispersed outcrops and loose particles (e.g., the Mid-Atlantic ridge: Priede et al., 2013;Niedzielski et al., 2013;Bell et al., 2016;Alt et al., 2019). Benthic and benthopelagic fauna were generally scarce but spatial variations were noticeable, and potentially driven by depth (and depth covariates), water masses and seascapes zonal distribution. Insufficient sampling precluded attributing causal factors to faunal distribution observed at a meso-and macrohabitat scales. It is possible, however, to assume that observed patterns result from the hierarchical effects of structuring factors operating at different spatial scales (Levin , 2001;Williams et al., 2010). Allied to bathymetry-derived seafloor morphology segmentation (Brown et al., 2011), reported habitat and megafauna distribution data provided elements to identify scales of spatial variation and to draw hypotheses of potential driving factors. Biodiversity patterns in the deep SW Atlantic are partly determined by regional (>1,000 km) biogeochemical processes at surface waters. Most of the region down to 30 • S lays under the influence the South Atlantic Gyral biogeochemical province (Longhurst, 1995) where surface primary production is generally low due to a stable water column structure and rapid vertical remineralization, resulting in minimal POC export fluxes to the seafloor (Mouw et al., 2016). Schlitzer et al. (2003) estimated POC flux values as low as 0.05-0.1 mol C m −2 yr −1 in the central South Atlantic, 10-30 times lower than values estimated along the continental margins of West Africa and South America off Brazil. This reduced POC flux reaching the seafloor would tend to support reduced megafauna densities in the SPR (Sibuet et al., 1989;Smith et al., 2008;Wei et al., 2010), as reported in this study (mean 0.026 individuals.m −2 ; range 0.004-0.350 individuals.m −2 ). When compared to megafauna densities reported from bathyal areas (2,000-3,500 m) of the North Atlantic (compiled in Levin and Gooday, 2003) (Bell et al., 2016;Alt et al., 2019). Densities recorded in comparable depths at the SPR (<2,900 m depths; 0.001-0.33 individuals.m −2 ) were approximately 100 times lower than those recorded to the north of the CGFZ but approximated those recorded to the south.
The regional effect of POC flux on megafauna distribution and abundance is altered by prominent topographic features, such as the SPR and Rio the Grande Rise (RGR), which generate variability in a provincial-scale (∼100-1,000 km). These features disrupt the SW Atlantic basin general morphology (a) producing abrupt discontinuities in depth and slope, (b) modifying the countercurrent flow of the NADW and AABW, and (c) exposing areas of rocky seafloor. The SPR forms a long and linear slope that extends transversely to the South American continental margin north -south orientation and separates a 2,000 m-deep sedimentary plateau (São Paulo), that extends to the north, from a 4,000 m-deep sedimentary-tectonic plateau (Santa Catarina) that extends to the south. This morphology was determined by the tectonic evolution of the South American continental margin and subsequent sedimentary processes, mostly associated with along-slope action of bottom currents and the downslope mass-transport and sediment fluxes (Alberoni et al., 2019). These are processes associated with the flow of NADW and AABW that overlay each other at the slopes of the SPR escarpment (∼3,400 m depth), exposing benthic habitats to downslope changes in physical and chemical conditions. In the explored area, temperature increased by nearly 3.0 • C (∼0.4-3.2 C) over a 1,500 m depth range from the ridge foot (4,200 m) to the ridge crest (2,700 m). Where both water masses mix, at the 3,500-3,300 m depth interval, temperature increased by 1.2 • C delimiting lower colder (0.4-1.0 • C) and upper warmer (2.0-3.0 • C) habitats (Supplementary Figure 2). In addition, habitats below this depth interval, under the influence of AABW, also tend to be less oxygenated and less saturated with CaCO 3 than those in the upper slope where the influence of NADW predominates (Chung et al., 2003;Rijkenberg et al., 2014). These are contrasting conditions that may partly explain the nearly 10-fold difference in the mean megafauna densities estimated in the SPR depth gradient, below and above 3,400 m depths (Table 5). Furthermore, the SPR escarpment interposes the SW Atlantic depth horizons for CaCO 3 Aragonite saturation ( Arg = 1), Aragonite compensation (ACH), and Calcite saturation ( Cal = 1) at ∼2,600, ∼3,400, and ∼4,000 m depths, respectively (Melguen and Thiede, 1974;Thunell, 1982;Chung et al., 2003). These conditions would limit growth of scleractinian cold-water corals in the SPR but not alcyonaceans (Octocorals), which build calcitic skeletons (Roberts et al., 2009). In the explored area such conditions are consistent with the general absence of scleractinian corals and the existing records of bamboo corals (Alcyonacea, Isididae), whose distribution patterns tend to be driven by Calcite saturation levels (Yesson et al., 2012). Finally, topography, depth, temperature, salinity, dissolved O 2 and POC fluxes were all environmental proxies used to define bathyal and abyssal biogeographic provinces (Watling et al., 2013). The SPR is the boundary between a lower bathyal (2,000-3,500 m depths) South Atlantic province and two adjacent abyssal provinces (3,500-6,500 m), the Argentine Basin and the Brazil Basin provinces (Watling et al., 2013). The area explored across the SPR southern flank included the South Atlantic and the Argentine Basin provinces. The drastic changes on megafauna abundance and composition above 3,400 m depths locally support these provinces and the proposed depth boundary.
A variety of megahabitats (∼1-100 km) were inferred by seafloor morphology segmentation in the ridge crest, escarpment, plateau and ridge foot zones. Some seafloor structural classes were ground truthed along the depth profile explored by video cameras, despite the different spatial resolution of both methods. In the ridge escarpment, the structural classes "escarp" (Class III.8) and "terrace" (Class III.9) differentiated a lower steeper zone from an upper gently sloping zone. This differentiation was evident in the video seafloor analysis, as a steep bedrock cliff changed into a gently sloping bedrock crest seascapes (Figure 5), also with a significant increase on the occurrence of sessile suspension feeders (e.g., sponges) and deposit feeders (e.g., holothurians) ( Table 3). Similar correspondences were also noted at the ridge foot, plateau and ridge crest, suggesting that bathymetry-derived seafloor morphological units were biologically relevant and could express habitat heterogeneity along the SPR. Allied to observations on fauna composition and distribution (see below), biotopes could be predicted and mapped (Brown et al., 2011). However, considerable additional sampling effort would be needed for that purpose (e.g., Robert et al., 2015;Anderson et al., 2016), which was generally beyond the scope of this study .
Nearly half of megafauna records (48%) along the explored depth gradient concentrated in 700-900 m-long patches delimited by the 3,800-3,300 m and 2,900-2,700 m isobaths. The lower zone was covered mostly by rough rocky surfaces whereas the upper zone was characterized by mixed substrates with a predominance of sediments. Despite such differences in substrate composition, megafauna observed in both patches were dominated by sessile suspension feeders (cnidarians and sponges, 73-74% of recorded organisms), followed by benthopelagic organisms (swimming shrimps, fish and cephalopods, 19%) and the less frequent soft bottom dwellers (mostly echinoderms, 6-8%). Suspension feeders did not seem to be limited by hard bottom availability and were commonly recorded even when only a few loose particles (cobbles, pebbles, small outcrops) were available interspersed with dominant sediment substrate. Soft bottom dwellers, on the other hand, were generally scarce despite largely available sedimented areas. Whereas these organisms may be limited by low recruitment rates in the area, records of lebensspuren on the sediment surface could indicate that part of this fauna could be buried in the sediments and not visible in the images. These observations suggest that megafauna spatial distribution at a mesohabitat scale (∼10 s of meters to km) in the explored area of the SPR was (a) less affected by substrate availability, as also reported by studies on the flanks of the Mid-Atlantic Ridge (Bell et al., 2016;Alt et al., 2019), and (b) driven by the availability of suspended food particles, as generated by topography-related current flow patterns over the seafloor.
Both patches of megafauna occurred in the vicinity of crests formed at the edge of terraces (Figure 6), where flow dynamics of the NADW may be particularly favorable for sessile suspension feeders (Genin et al., 1986). As a dense water mass flowing over the São Paulo Plateau, the NADW develops a bottom friction transport perpendicular to the depth contour, which, when overlaying a topographic depression such as the SPR escarpment, would drive the NADW flow inside this depression (Wahlin, 2002). This downslope flow, altered by abrupt changes in slope as observed in the edge of the terraces, could generate areas of vorticity and suspended particle concentrations. In the upper depth zone, evidence of such current action included sand waves observed in sediment cover and movements of the highly dense and flexible sponges Pyloderma. Downsloping NADW will encounter AABW opposing flow at 3,400 m depth, within the depth zone of the lower patch of sessile suspension feeders. This water mass interface formed a temperature and salinity gradient as well as a density variation which may characterize a physical boundary condition often associated with POM enrichment (Dullo et al., 2008). Additionally, this density surface (Supplementary Figure 2) may also be associated with turbulent mixing (Zhao and Thrunherr, 2017) and along-ridge currents as derived from changes in the AABW transport vorticity as it collides with the SPR southern flank (topographic steering, White and Dorschel, 2010). If these are physical processes occurring to some extent between 3,800 and 3,300 m depths, resuspension or acceleration of advected food particles could favor local settlement and growth of suspension feeders at this depth range. The NADW and AABW interactions extend along the SPR potentially acting at a provincial scale. Yet, because topography is determinant in such physical processes, its effect on megafauna distribution may vary along the ridge, driving variability at smaller spatial scales. In fact, channels and valleys were features modifying the ridge crest, plateau and escarpment overall morphology, potentially altering water flow and sustaining a variety of mesohabitats and patches of suspension feeding fauna.
An Antarctic minke whale skeleton fall was a major driver of megafauna distribution at macrohabitat scale in the debris field seascape (4,204 m depth, Sumida et al., 2016). A series of dispersed vertebrae and intervertebral disks spread over approximately 2 m over the seafloor, comprised over 40 species some of them occurring at high densities in the bone and sediment area (2-70 individuals.m −2 ). Except from one crustacean (gen. Munidopsis), species recorded at and around the whale fall were adapted to organic fall environments and not recorded anywhere else in the SPR explored area.
Most taxa occurrences were recorded from single or very few observations. The exceptions were the sponges P. amadou and cf. Pyloderma sp., and the benthopelagic fish A. armatus, relatively abundant in particular segments of the SPR depth gradient. Despite its uncertain identification, the sponge cf. Pyloderma formed the densest epifauna concentrations in the explored area of the SPR. P. amadou is the only species of this genus to occur in the Atlantic Ocean, reported in dense patches (up to 5 individuals.m −2 ) in the Great Meteor seamount (29 • 30 N; 28 • 17 W) between 2,675 and 2,765 m depths (Xavier et al., 2015); and in the Tropic Seamount (23 • 55 N; 20 • 45 W) where it occurred from 1,960 to 3,660 m depths (Ramiro-Sánchez et al., 2019). In the SPR the species was observed in much lower concentrations (∼0.03-0.06 individuals.m −2 ) around 3,000 m depth, attached to outcrops and loose rocky particles. A. armatus occurs at bathyal and abyssal depths (1,500-4,415 m) in all tropical and subtropical oceans, being particularly abundant in the western Atlantic (Nielsen et al., 1999). The species has been recorded in Brazil's continental margin off Bahia, from 1,171 to 1,929 m (Mincarone et al., 2008) and in the Caribbean between 1,500 and 4,150 m (Polanco et al., 2019). A comparable exploration in the Rio Grande Rise, revealed nearly 3× more fish morphotypes (30) during half the observation time (Perez et al., 2018), potentially because these dives explored much shallower area (1,233-600 m depth). Bathypterois, Spectrunculus, and Aldrovandia were the only genera found in both the SPR and the Rio Grande Rise. In the latter, they were observed only in deepest sectors (1,200-900 m).
The SPR has been characterized as an important area of environmental transitions in the SW Atlantic basin, mostly driven by the depth gradient of its southern flank, where the two main deep-water masses of the Atlantic dynamically interact. Important along-slope chemical and physical gradients are established along the ridge extension, but seafloor morphology, substrate type and topography-driven current flow processes may generate habitat heterogeneity at varying spatial scales, all relevant do deep fauna distribution. Unprecedented observations along the SPR depth profile have generally indicated that megafauna, although scarce, may respond to such drivers and vary considerably in the mesohabitats established along the ridge, justifying future explorations and studies designed to test the effect of the hypothesized drivers (e.g., Bell et al., 2016;Alt et al., 2019).
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article including videos, sample records and oceanographic data and metadata are available in 'DARWIN -Data and Sample Research System for Whole Cruise Information in JAMSTEC' (http://www.godac. jamstec.go.jp/darwin/).
AUTHOR CONTRIBUTIONS
JP oversaw all aspects of this research, including specimen, sample, data collection, and analysis. All authors conducted the research, analyzed the data, and contributed to the manuscript.
FUNDING
Funding of Brazilian scientists in the "Iata-Piuná" cruise was provided by a grant from CAPES (Program CAPES -JSPS, AUXPE-JSPS-0059-2013, Ministry of Education, Brazil). The senior author was supported by a CPNq productivity fellowship (Process 307992/2019-5). This study was within the umbrella of the National Institute of Science and Technology -Integrated Oceanography Centre (INCT -Mar COI, CNPq).
ACKNOWLEDGMENTS
We thank Brazilian and Japanese governments, and members of JAMSTEC, IOUSP, and CPRM, whose efforts allowed this unprecedented study in the SW Atlantic, as part of JAMSTEC's "Quelle" expedition. We owe the crews of the RV Yokosuka and the submersible Shinkai 6500 the acquisition of all analyzed data and samples. Eduardo Hajdu, Renato Ventura (Museu Nacional -UFRJ), Thayse Fonseca, and Richard Schwarz (UNIVALI) provided invaluable help with the process of identification of fauna from biological samples and video images. Angelica Maffini Mastella, Carine Eccel, and Anna Caroline Silva de Andrade contributed with video analysis procedures.
Katz Fujikura (JAMSTEC) kindly made available high-resolution bathymetry data. | 9,213.4 | 2020-09-18T00:00:00.000 | [
"Environmental Science",
"Biology",
"Geology"
] |
Local phase space and edge modes for diffeomorphism-invariant theories
We discuss an approach to characterizing local degrees of freedom of a subregion in diffeomorphism-invariant theories using the extended phase space of Donnelly and Freidel, [JHEP 2016 (2016) 102]. Such a characterization is important for defining local observables and entanglement entropy in gravitational theories. Traditional phase space constructions for subregions are not invariant with respect to diffeomorphisms that act at the boundary. The extended phase space remedies this problem by introducing edge mode fields at the boundary whose transformations under diffeomorphisms render the extended symplectic structure fully gauge invariant. In this work, we present a general construction for the edge mode symplectic structure. We show that the new fields satisfy a surface symmetry algebra generated by the Noether charges associated with the edge mode fields. For surface-preserving symmetries, the algebra is universal for all diffeomorphism-invariant theories, comprised of diffeomorphisms of the boundary, $SL(2,\mathbb{R})$ transformations of the normal plane, and, in some cases, normal shearing transformations. We also show that if boundary conditions are chosen such that surface translations are symmetries, the algebra acquires a central extension.
Introduction
In gravitational theories, the problem of defining local subregions and observables is complicated by diffeomorphism invariance. Because it is a gauge symmetry, diffeomorphism invariance leads to constraints that must be satisfied by initial data for the field equations. These constraints relate the values of fields in one subregion of a Cauchy slice to their values elsewhere, so that the fields cannot be interpreted as observables localized to a particular region. While this is true in any gauge theory, a further challenge for diffeomorphism-invariant theories is that specifying a particular subregion is nontrivial, since diffeomorphisms can change the subregion's coordinate position.
A related issue in quantum gravitational theories is the problem of defining entanglement entropy for a subregion. The usual definition of entanglement entropy assumes a factorization of the Hilbert space H = H A ⊗ HĀ into tensor factors H A and HĀ associated with a subregion A and its complementĀ. However, all physical states in a gauge theory are required to be annihilated by the constraints, and the nonlocal relations the constraints impose on the physical Hilbert space prevents such a factorization from occurring. One way of handling this nonfactorization is to define the entropy in terms of the algebra of observables for the local subregion [1]. This necessitates a choice of center for the algebra, which roughly corresponds to Wilson lines that are cut by the entangling surface. This procedure is further complicated in gravitational theories, since the local subregion and its algebra of observables must be defined in a diffeomorphism-invariant manner. Thus, the issues of local observables and entanglement in gravitational theories are intertwined.
Despite these challenges, there are indications that a well-defined notion of local observables and entanglement should exist in gravitational theories. Holography provides a compelling example, where the entanglement of bulk regions bounded by an extremal surface may be expressed in terms of entanglement in the CFT via the Ryu-Takayanagi formula and its quantum corrections [2,3]. Such regions are defined relationally relative to a fixed region on the boundary, and hence give a diffeomorphism-invariant characterization of the local subregion. Work regarding bulk reconstruction suggests that the algebra of observables for this subregion is fully expressible in terms of the subregion algebra of the CFT [4][5][6][7][8][9].
In addition, there are various pieces of circumstantial evidence suggesting that entanglement entropy is a well-defined and useful concept in quantum gravity. The gravitational field equations have been shown to follow from applying the first law of entanglement entropy [10,11] to subregions, both in holography [12][13][14][15][16] and for more general gravitational theories [17][18][19][20], all of which is predicated on a well-defined notion for entanglement for the local subregion. In fact, it is conjectured that connectivity of the spacetime manifold arises from entanglement between the microscopic degrees of freedom from which the gravitational theory emerges [21]. Furthermore, entanglement entropy provides a natural explanation for the proportionality between black hole entropy and horizon area [22][23][24][25], while finessing the issue of entanglement divergences through renormalization of the gravitational couplings [26][27][28]. However, in the case of gauge theories, the matching between entanglement entropy divergences and the renormalization of gravitational couplings is subtle. The entropy computed using conical methods [29] contains contact terms [30][31][32], which are related to the presence of edge modes on the entangling surface. These arise as a consequence of the nonfactorization of the Hilbert space due to the gauge constraints. Only when the entanglement from these edge modes is properly handled does the black hole entropy have a statistical interpretation in terms of a von Neumann entropy [33][34][35].
Recently, Donnelly and Freidel presented a continuum description of the edge modes that arise both in Yang-Mills theory and general relativity [36]. Using covariant phase space techniques [37][38][39][40], they construct a symplectic potential and symplectic form associated with a local subregion. These are expressed as local integrals of the fields and their variations over a Cauchy surface Σ. However, one finds that they are not fully gauge-invariant: gauge transformations that are nonvanishing at the boundary ∂Σ change the symplectic form by boundary terms. Invariance is restored by introducing new fields in a neighborhood of the boundary, whose change under gauge transformations cancels the boundary term from the original symplectic form. These new edge modes thus realize the idea that boundaries break gauge invariance, and cause some would-be gauge modes to become degrees of freedom associated with the subregion [41,42].
The analysis of diffeomorphism-invariant theories in [36] was restricted to general relativity with vanishing cosmological constant. However, the construction can be generalized to arbitrary diffeomorphism-invariant theories, and it is the purpose of the present work to show how this is done. The symplectic potential for the edge modes can be expressed in terms of the Noether charge and the on-shell Lagrangian of the theory, and the symplectic form derived from it has contributions from the edge modes only at the boundary. These edge modes come equipped with set of symmetry transformations, and the symmetry algebra is represented on the phase space as a Poisson bracket algebra. The generators of the surface symmetries are given by the Noether charges associated with the transformations. We find that for generic diffeomorphisminvariant theories, the transformations that preserve the entangling surface generate the algebra Diff(∂Σ) ⋉ SL(2, R) ⋉ R 2·(d−2) ∂Σ . In certain cases, including general relativity, the algebra is reduced to Diff(∂Σ) ⋉ SL(2, R) ∂Σ , consistent with the results of [36]. Furthermore, for any other theory, there always exists a modification of the symplectic structure in the form of a Noether charge ambiguity [43] that reduces the algebra down to Diff(∂Σ)⋉SL(2, R) ∂Σ . We also discuss what happens when the algebra is enlarged to include surface translations, the transformations that do not map ∂Σ to itself. In order for these transformations to be Hamiltonian, the dynamical fields generically have to satisfy boundary conditions at ∂Σ. Assuming the appropriate boundary conditions can be found, the full surface symmetry algebra is a central extension of either Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2 ) ∂Σ , or a larger, simple Lie algebra. The appearance of central charges in these algebras is familiar from similar constructions involving edge modes at asymptotic infinity or black hole horizons [42,44,45]. The construction of the extended phase space for arbitrary diffeomorphism-invariant theories is useful for a number of reasons. For one, higher curvature corrections to the Einstein-Hilbert action generically appear due to quantum gravitational effects. It is useful to have a formalism that can compute the corrections to the edge mode entanglement coming from these higher curvature terms. Additionally, there are several diffeomorphism-invariant theories that are simpler than general relativity in four dimensions, such as 2 dimensional dilaton gravity or 3 dimensional gravity in Anti-de-Sitter space. These could be useful testing grounds in which to understand the edge mode entanglement entropy, before trying to tackle the problem in four or higher dimensions. Finally, the general construction clarifies the relation of the extended phase space to the Wald formalism [46,47], a connection that was also noted in [48].
This paper begins with a review of the covariant phase space in section 2. Care is taken to describe vectors and differential forms on this infinite-dimensional space, and also to understand the effect of diffeomorphisms of the spacetime manifold on the covariant phase space. Section 3 discusses the X fields that appear in the extended phase space, which give rise to the edge modes. Following this, the construction of the extended phase space is given in section 4, which describes how the edge mode fields contribute to the extended symplectic form. Ambiguities in the construction are characterized in section 5, and the surface symmetry algebra is identified in section 6. Section 7 gives a summary of results and ideas for future work.
Covariant phase space
The covariant phase space [37][38][39][40] provides a Hamiltonian description of a field theory's degrees of freedom while maintaining spacetime covariance. This is achieved by working with the space S of solutions to the field equations. As long as the field equations admit a well-posed initial value formulation, each solution is in one-to-one correspondence with its initial data on some Cauchy slice. S may therefore be used to construct a phase space that is equivalent to other Hamiltonian formalisms, such as ADM [49], but since it does not require a choice of Cauchy slice and decomposition into spatial and time coordinates, spacetime covariance remains manifest. The specification of a Cauchy surface and time variable can be viewed as a choice of coordinates on S, with each solution being identified by its initial data.
Working directly with S allows coordinate-free techniques to be applied to both the spacetime manifold and the phase space itself. In particular, the exterior calculus on the S gives a powerful language for describing the phase space symplectic geometry. We will follow the treatment of the exterior calculus given in [36], 1 where it was used to provide an extremely efficient way of identifying edge modes for a local subregion in a gauge theory. This section provides a review of the formalism, on which the remainder of this paper heavily relies.
The theories under consideration consist of dynamical fields, including the metric g ab and any matter fields, propagating on a spacetime manifold M. These fields satisfy diffeomorphism-invariant equations of motion, and the phase space is constructed from the infinite-dimensional space of solutions to these equations, S. Despite being infinite-dimensional, many concepts from finite-dimensional differential geometry, such as vector fields, one-forms, and Lie derivatives, extend straightforwardly to S, assuming it satisfies some technical requirements such as being a Banach manifold [51,52]. One begins by understanding the functions on S, a wide class of which is provided by the dynamical fields themselves. Given a spacetime point x ∈ M and a field φ, the function φ x associates to each solution the value of φ(x) in that solution. More generally, functionals of the dynamical fields, such as integrals over regions of spacetime, also define functions on S by simply evaluating the functional in a given solution. We will often denote φ x simply by φ, with the dependence on the spacetime point x implicit.
A vector at a point of S describes an infinitesimal displacement away from a particular solution, and hence corresponds to a solution of the linearized field equations. Specifying a linearized solution about each full solution then defines a vector field V on all of S. The vector field acts on S-functions as a directional derivative, and in particular its action on the functions φ x is to give a new function Φ , which, given a solution, evaluates the linearization Φ of the field φ at the point x. This also allows us to define the exterior derivative of the functions φ x , denoted δφ x . When contracted with the vector field V , the one-form δφ x simply returns the scalar function Φ x V . The one-forms δφ x form an overcomplete basis, so that arbitrary one-forms may be expressed as sums (or integrals over the spacetime point x) of δφ x . This basis is overcomplete because the functions φ x at different points x are related through the equations of motion, so that the forms δφ x are related as well.
Forms of higher degree can be constructed from the δφ x one-forms by taking exterior products. The exterior product of a p-form α and a q-form β is simply written αβ, and satisfies αβ = (−1) pq βα. Since we only ever deal with exterior products of forms defined on S instead of more general tensor products, no ambiguity arises by omitting the ∧ symbol, which we instead reserve for spacetime exterior products. The action of the exterior derivative on arbitrary forms is fixed as usual by its action on scalar functions, along with the requirements of linearity, nilpotency δ 2 = 0, and that it acts as an antiderivation, δ(αβ) = (δα)β + (−1) p αδβ. (2.1) The exterior derivative δ always increases the degree of the form by one. On the other hand, each vector field V defines an antiderivation I V that reduces the degree by one through contraction. I V can be completely characterized by its action on one-forms I V δφ x = Φ x V , along with the antiderivation property, linearity, nilpotency I 2 Φ = 0, and requiring that it annihilate scalars. Just as in finite dimensions, the action of the S Lie derivative, denoted L V , is related to δ and I V via Cartan's magic formula [52] that preserves the degree of the form. We next discuss the consequences of working with diffeomorphism invariant theories. A diffeomorphism Y is a smooth, invertible map, Y : M → M, sending the spacetime manifold M to itself. The diffeomorphism induces a map of tensors at Y (x) to tensors at x through the pullback Y * [53]. Diffeomorphism invariance is simply the statement that if a configuration of tensor fields φ satisfy the equations of motion, then so do the pulled back fields Y * φ. Now consider a one-parameter family of diffeomorphisms Y λ , with Y 0 the identity. This yields a family of fields Y * λ φ that all satisfy the equations of motion. The first order change induced by Y * λ defines the spacetime Lie derivative £ ξ with respect to ξ a , the tangent vector to the flow of Y λ . Consequently, £ ξ φ must be a solution to the linearized field equations, and the infinitesimal diffeomorphism generated by ξ a defines a vector field on S, which we denoteξ, whose action on δφ is 3) The diffeomorphisms we have considered so far have been taken to act the same on all solutions. A useful generalization of this are the solution-dependent diffeomorphisms, defined through a function, Y : S → Diff(M), valued in the diffeomorphism group of the manifold, Diff(M). Letting Y denote the image of this function, we would like to understand how the Lie derivative L V and exterior derivative δ on S combine with the action of the pullback Y * . In the case Y is constant on S, the Lie derivative simply commutes with Y * , and so L V Y * α = Y * L V α, where α is any form constructed from fields and their variations at a single spacetime point. When Y is not constant, V generates one-parameter families of diffeomorphisms Y λ and forms α λ along the flow in S. At a given solution s 0 , define a solution-independent diffeomorphism Y 0 ≡ Y (s 0 ) by the value of Y at s 0 . Then Y * λ α λ and Y * 0 α λ are related to each other at all values of λ by a diffeomorphism, Y * λ (Y −1 0 ) * . The first order change in these quantities at λ = 0 is given by L V , and since the two quantities differ at first order by an infinitesimal diffeomorphism, we find It is argued in appendix A, identity A.3, that the vector χ a (Y ; V ) depends linearly on V , and hence defines a one-form on S, denoted χ a Y . 2 This yields the pullback formula for L V , (2.5) Applying (2.2) to this equation, one can derive the pullback formula for exterior derivatives from [36] (see A.5 for details), A number of properties of the variational vector field χ a Y follow from the formulas above. First, note χ a Y is not an exact form on S; rather, its exterior derivative can be deduced from (2.6), and applying A.7, we conclude Another useful formula relates χ a Y to the vector χ a Y −1 associated with the inverse of Y . Using that Y * and (Y −1 ) * are inverses of each other, we find where the last equality involves the identity A.8. This implies (2.10) Additional identities are derived in appendix A. Finally, as a spacetime vector field, χ a Y also defines a vector-valued one-formχ Y on S, which acts as Iχ Y δφ = £χ Y φ. The contraction Iχ Y defines a derivation that preserves the degree of the form, in contrast to Iξ, which is an antiderivation that reduces the degree. Similarly, δ( χ Y ) a defines a vector-valued two-form on S, and produces an antiderivation I δ( χ Y )ˆt hat increments the degree. 2 In [36], χ a Y was denoted δ a Y . We choose a different notation to emphasize that χ a Y is not an exact form, and to avoid confusion with the exterior derivative δ.
Edge mode fields
Edge modes appear when a gauge symmetry is broken due to the presence of a boundary ∂Σ of a Cauchy surface Σ. The classical phase space or quantum mechanical Hilbert space associated with Σ transforms nontrivially under gauge transformations that act at the boundary. This can be understood from the perspective of Wilson loops that are cut by the boundary. A closed Wilson loop is gauge-invariant, but the cut Wilson loop becomes a Wilson line in Σ, whose endpoints transform in some representation of the gauge group. To account for these cut-Wilson-loop degrees of freedom, one can introduce fictitious charged fields at ∂Σ, which can be attached to the ends of the Wilson lines to produce a gauge-invariant object. These new fields are the edge modes of the local subregion. They account for the possibility of charge density existing outside of Σ, which would affect the fields in Σ due to Gauss law constraints. The contribution of the edge modes to the entanglement can therefore be interpreted as parameterizing ignorance of such localized charge densities away from Σ.
A similar picture arises in the classical phase space of a diffeomorphism-invariant theory. The edge modes appear when attempting to construct a symplectic structure associated with Σ for the solution space S. Starting with the Lagrangian of the theory, one can construct from its variations a symplectic current ω, a spacetime (d − 1)form whose integral over a spatial subregion Σ provides a candidate presymplectic form. However, this form fails to be diffeomorphism invariant for two reasons. First, a diffeomorphism moves points on the mainfold around, and hence changes the shape and coordinate location of the surface. Second, since solutions related to each other by a diffeomorphism represent the same physical configuration, the true phase space P is obtained by projecting all solutions in a gauge orbit in S down to a single representative. In order for the symplectic form to be compatible with this projection, the infinitesimal diffeomorphisms must be degenerate directions of the presymplectic form [51]. This is equivalent to saying that the Hamiltonian generating the diffeomorphism may be chosen to vanish. While the symplectic form obtained by integrating ω over a surface is degenerate for diffeomorphisms that vanish sufficiently quickly at its boundary, those that do not produce boundary terms that spoil degeneracy.
As demonstrated in [36], these problems can be handled by introducing a collection of additional fields X whose contribution to the symplectic form restores diffeomorphism invariance. These fields are the edge modes of the extended phase space. This section is devoted to describing these fields and their transformation properties under diffeomorphisms; the precise way in which they contribute to the symplectic form is discussed in section 4.
The fields X can be defined through a Diff(M)-valued function X : S → Diff(M).
In a given solution s, X is identified with the diffeomorphism in the image of the map, X = X (s). One way to interpret X is as defining a map from (an open subset of) R d into the spacetime manifold M, and hence can be thought of as a choice of coordinate system on covering the local subregion Σ. 3 A full solution to the field equations now consists of specifying the map X as well as the value of the dynamical fields φ(x) at each point in spacetime. The transformation law for X under a diffeomorphism Y : M → M is given by the pullback along Since X defines a diffeomorphism from R d to M, it can be used to pull back tensor fields on M to R d . We can argue as before that the Lie derivative L V and exterior derivative δ satisfy pullback forumlas analogous to equations (2.4) and (2.6), which serve as defining relations for the variational spacetime vector χ a X . The result of contracting χ a X with a vector fieldξ corresponding to a spacetime diffeomorphism can be deduced by first noting that the pulled back fields In particular, the S Lie derivative Lξ must annihilate X * φ for any ξ, so from (3.1), and hence Iξ χ a X = −ξ a .
(3.5)
We can also derive the transformation law for χ a X under a diffeomorphism from the pullback formulas (2.6) and (3.2). On the one hand we have while on the other hand this can also be computed as where the last equality employed identity A.8. Comparing these expressions and applying the formula (2.10) for χ a Y −1 gives the transformation law The X fields lead to an easy prescription for forming diffeomorphism-invariant quantities: simply work with the pulled back fields X * φ. These are diffeomorphisminvariant due to equation (3.3), and consequently the variation δX * φ is as well. We can explicitly confirm that δX * φ are annihilated by infinitesimal diffeomorphismsξ: Another combination of one-forms that appears frequently is α + Iχ X α, and it is easily checked that Iξ annihilates this sum. Finally, we note that when no confusion will arise, we will simply denote χ a X by χ a to avoid excessive clutter. When referring to other diffeomorphisms besides X, we will explicitly include the subscript, as in χ a Y .
Extended phase space
We now turn to the problem of defining a gauge-invariant symplectic form to associate with the local subregion Σ. The standard procedure of [46,47,51] begins with a Lagrangian L[φ], a spacetime d-form constructed covariantly from the dynamical fields φ. Its variation takes the form where E = 0 are the dynamical field equations, and the exact form dθ, where d denotes the spacetime exterior derivative, defines the symplectic potential current (d − 1)-form θ ≡ θ[φ; δφ], which is a one-form on solution space S. The S-exterior derivative of θ defines the symplectic current (d − 1)-form, ω = δθ, whose integral over Σ normally defines the presymplectic form Ω 0 for the phase space. As a consequence of diffeomorphism-invariance, Ω 0 contains degenerate directions: it annihilates any infinitesimal diffeomorphism generated by vector field ξ a that vanishes sufficiently quickly near the boundary. This is succinctly expressed for such a vector field by IξΩ 0 = 0. The true phase space P is obtained by quotienting out these degenerate directions by mapping all diffeomorphism-equivalent solutions to a single point in P. Ω 0 then defines a nondegenerate symplectic form on P through the process of phase space reduction [51]. This procedure is deficient for a local subregion Σ because Ω 0 fails to be degenerate for diffeomorphisms that act near the boundary ∂Σ. If the boundary were at asymptotic infinity, such diffeomorphisms could be disallowed by imposing boundary conditions on the fields, or could otherwise be regarded as true time evolution with respect to the fixed asymptotic structure, in which case degeneracy would not be expected [40]. For a local subregion, however, neither option is acceptable. Imposing a boundary condition on the fields at ∂Σ has a nontrivial effect on the dynamics [54][55][56], whereas we are interested in a phase space that locally reproduces the same dynamics as the theory defined on the full spacetime manifold M. Furthermore, the diffeomorphisms acting at ∂Σ cannot be regarded at true time evolution generated by a nonvanishing Hamiltonian, because these diffeomorphisms are degenerate directions of a presymplectic form for the entire manifold M.
Donnelly and Freidel [36] proposed a resolution to this issue by extending the local phase space to include the X fields described in section 3. The minimal prescription for introducing them into the theory is to simply replace the Lagrangian with its pullback X * L. Since the Lagrangian is a covariant functional of the fields, X * L[φ] = L[X * φ], so that the pulled back Lagrangian depends only on the redefined fields X * φ, and is otherwise independent of X. The variation of this Lagrangian gives Thus the redefined fields satisfy the same equations of motion E[X * φ] = 0 as the original fields, and, due to diffeomorphism invariance, this implies that the original φ fields must satisfy the equations as well. Additionally, the Lagrangian had no further dependence on X, which means the X fields do not satisfy any field equations. If X is understood as defining a coordinate system for the local subregion, the dynamics of the extended (φ, X) system is simply given by the original field equations, expressed in an arbitrary coordinate system determined by X.
The symplectic potential current is read off from (4.2), This object is manifestly invariant with respect to solution-dependent diffeomorphisms, since both X * φ and δX * φ are. In particular, θ ′ annihilates any infinitesimal diffeomorphism Iξ, as a consequence of the fact that IξδX * φ = 0 (see equation 3.4). An equivalent expression for θ ′ can be obtained by introducing the Noether current for a vector field ξ a , where i ξ denotes contraction with the spacetime vector ξ a . Due to diffeomorphism invariance, J ξ is an exact form when the equations of motion hold [46,47], and may be written where Q ξ is the Noether charge and C ξ = 0 are combinations of the field equations that comprise the constraints for the theory [57]. Then θ ′ in (4.3) may be expressed on-shell As an aside, note that we can vary the Lagrangian with respect to (φ, X) instead of the redefined fields (X * φ, X), and equivalent dynamics arise. This variation produces where Cartan's magic formula £χ = iχd + diχ was used, along with the fact that d commutes with pullbacks. Again, φ satisfies the same field equation E[φ] = 0, and X is subjected to no dynamical equations. This variation suggests a potential current θ ′′ = X * (θ+iχL), which differs from (4.6) by the exact form dX * Qχ. This difference is simply an ambiguity in the definition of the potential current, since shifting it by an exact form does not affect equation (4.1) [43,47]. However, θ ′′ does not annihilate infinitesimal diffeomorphisms Iξ, making θ ′ the preferred choice. The degeneracy requirement for the symplectic potential current therefore gives a prescription to partially fix its ambiguities [48], although additional ambiguities remain, and are discussed in section 5.
The symplectic potential Θ is now constructed by integrating θ ′ over the local subregion. Since θ ′ is defined as a pullback by X * , its integral must be over the preimage σ, for which X(σ) = Σ. This gives The second line uses the alternative expression (4.6) for θ ′ , and is written as an integral of fields defined on the original local subregion Σ, without pulling back by X. This makes use of the general formula σ X * α = X(σ) α, and also applies Stoke's theorem Σ dα = ∂Σ α to write the Noether charge as a boundary integral. Equation (4.9) differs from the symplectic potential for the nonextended phase space, Θ 0 = Σ θ, by both a boundary term depending on the Noether charge, as well as a bulk term coming from the on-shell value of the Lagrangian. For vacuum general relativity with no cosmological constant, this extra bulk contribution vanishes, being proportional to the Ricci scalar [36]. However, when matter is present or the cosmological constant is nonzero, this extra bulk contribution to Θ can survive. As we discuss below, this bulk term imbues the symplectic form on the reduced phase space P with nontrivial cohomology.
Taking an exterior derivative of Θ yields the symplectic form, Ω = δΘ. The expression (4.8) leads straightforwardly to where we recall the definition of the symplectic current ω = δθ. This expression for Ω makes it clear that it is invariant with respect to all diffeomorphisms, and that infinitesimal diffeomorphisms are degenerate directions, again because IξδX * φ = 0. The symplectic form can also be expressed as an integral over Σ and its boundary using the original fields φ, by computing the exterior derivative of (4.9). Noting that the integrands implicitly involve a pullback by X * , we find The first term is the symplectic form for the nonextended theory, Ω 0 = Σ ω. The remaining three terms in the bulk Σ integral simplify to an exact form on-shell d(iχθ + 1 2 iχiχL) (see identity A.10), so the final expression is Hence, we arrive at the important result that the symplectic form differs from Ω 0 by terms localized on the boundary ∂Σ involving χ a . This immediately implies that Ω has degenerate directions: any phase space vector field V that vanishes on δφ and whose contraction with χ a vanishes sufficiently quickly near ∂Σ will annihilate Ω. In fact, only the values of χ a and ∇ b χ a at ∂Σ contribute to (4.12); all other freedom in χ a is pure gauge. To see why these are the only relevant pieces of χ a for the symplectic form, we can use the explicit expression for the Noether charge given in [47]. Up to ambiguities which are discussed in section 5, the Noether charge is given by where ǫ ab is the spacetime volume form with all but the first two indices suppressed, E abcd = δL δR abcd is the variational derivative of the Lagrangian scalar L = −( * L) with respect to the Riemann tensor, and inherits the index symmetries of the Riemann tensor, and W c [φ] is a tensor with (d − 2) covariant, antisymmetric indices suppressed, constructed locally from the dynamical fields; its precise form is not needed in this work.
The last two terms in (4.12) depend only on the value of χ a on ∂Σ, while the terms involving Qχ can depend on derivatives of χ a . From (4.13), Qχ involves one derivative of χ a , and (4.12) has terms involving the derivative of Qχ, so that up to two derivatives of χ a could contribute to the symplectic form. To see how these derivatives appear, we decompose δQχ as δQχ = Q δ( χ ) + ϙχ, (4.14) where ϙ ξ = ϙ ξ [φ; δφ] 4 is a variational one-form depending on a vector ξ (which can be a differential form on S), given by and δΓ d ce is the variation of the Christoffel symbol, This decomposition is useful because ϙχ contains only first derivatives of χ a , while involves second derivatives through the derivative of the vector field Lie bracket.
In appendix B, it is argued that the second derivatives of χ a in Q δ( χ ) + £χQχ cancel out, so that the boundary contribution in (4.12) depends on only χ a and ∇ b χ a at ∂Σ. This means that Ω has a large number of degenerate directions, corresponding to all values of χ a on Σ that are not fixed by the values of χ a and ∇ b χ a at the boundary. The true phase space P is then obtained by quotienting out these pure gauge degrees of freedom. In doing so, Ω descends to a nondegenerate, closed two-form on the quotient space [51]. However, the symplectic potential Θ does not survive this projection. It depends nontrivially on the value of χ a everywhere on Σ through the term involving the Lagrangian in (4.9), which causes it to become a multivalued form on the quotient space. One way to see its multivaluedness is to note that iχL is a top rank form on Σ, so, by the Poincaré lemma applied to Σ, it can be expressed as the exterior derivative of a (d − 2)-form, Here, h X is the homotopy operator that inverts the exterior derivative d on closed forms on Σ [58]. As the notation suggests, it depends explicitly on the value of the X fields throughout Σ, which we recall can be thought of as defining a coordinate system for the subregion. Since h X iχL is a spacetime (d − 2)-form and an S one-form, evaluated at ∂Σ it may be expressed in terms of χ a and δφ at ∂Σ, which provide a basis for local variational forms. Hence, and we see that this latter expression depends on χ a at ∂Σ, so therefore will project to the quotient space. However, h X will be a different operator depending on the values of the X fields on Σ, and hence this boundary integral will give a different form on the reduced phase space for different bulk values of X. This shows that the Lagrangian term in Θ projects to a multivalued form on the quotient space. The failure of Θ to be single-valued implies that the reduced phase space P has nontrivial cohomology. In particular, the projected symplectic form Ω is not exact, despite being closed. For a given choice of the value of Θ, the equation Ω = δΘ still holds locally near a given solution in the reduced phase space, but there can be global obstructions since Θ may not return to the same value after tracing out a closed loop in the solution space. It would be interesting to investigate the consequences of this nontrivial topology of the reduced phase space, and in particular whether it has any relation to the appearance of central charges in the surface symmetry algebra.
Finally, note that for vacuum general relativity with no cosmological constant, the Lagrangian vanishes on shell, being proportional to the Ricci scalar. In this special case, Θ is not multivalued and descends to a well-defined one-form on the reduced phase space, suggesting that the phase space topology simplifies. However, the inclusion of a cosmological constant or the presence of matter anywhere in the local subregion leads back to the generic case in which Θ is multivalued.
JKM ambiguities
The constructions of the symplectic potential current θ and Noether charge Q ξ are subject to a number of ambiguities identified by Jacobson, Kang and Myers (JKM) [43,47]. These ambiguities correspond to the ability to add an exact form to the Lagrangian L, the potential current θ, or the Noether charge Q ξ without affecting the dynamics or the defining properties of these forms. Normally it is required that the ambiguous terms be locally constructed from the dynamical fields in a spacetimecovariant manner. In the extended phase space, however, there is additional freedom provided by the X fields as well as the surfaces Σ and ∂Σ to construct forms that would otherwise fail to be covariant. The freedom provided by the X fields is considerable, given that they can be used to construct homotopy operators as in (4.17) and (4.18) that mix the local dynamical fields φ at different spacetime points. For this reason, we refrain from using the X fields in such an explicit manner to construct ambiguity terms. However, we allow for ambiguity terms that are constructed using the structures provided by Σ and ∂Σ, such as their induced metrics and extrinsic curvatures. This allows for a wider class of Noether charges, including those that appear in holographic entropy functionals and the second law of black hole mechanics for higher curvature theories [59][60][61][62].
A simple example of which types of objects are permitted in constructing the ambiguity terms is provided by the unit normal u a to Σ versus the lapse function N. Interpreting X µ as a coordinate system for the local subregion, we can take Σ to lie at X 0 = 0. Then the lapse and unit normal are related by The form ∇ a X 0 depends explicitly on the X field, and hence is not allowed in our constructions. However, the unit normal u a can be constructed using only the surface Σ and the metric, and hence is independent of the X fields. This then implies that N also depends on the X fields, and so the lapse function cannot explicitly be used in constructing ambiguity terms.
L ambiguity
The first ambiguity corresponds to adding an exact form dα to the Lagrangian. This does not affect the equations of motion; however, its variation now contributes to θ.
The following changes occur from adding this term to the Lagrangian: Note that since θ changes by an S-exact form, the symplectic current ω is unaffected. Incorporating these changes into the definition of the symplectic potential (4.9) changes Θ by We point out that the new term annihilates infinitesimal diffeomorphisms Iξ, so that Θ remains fully diffeomorphism-invariant. Since Θ changes by an S-exact form, the symplectic form Ω = δΘ receives no change from this type of ambiguity, which can also be checked by tracking the changes of all quantities in (4.12). Given that only Ω, and not Θ, is needed in the construction of the phase space, this ambiguity in L has no effect on the phase space. However, it has some relevance to the surface symmetry algebra discussed in section 6. The generators of this algebra are given by the Noether charge, and for surface symmetries that move ∂Σ (the "surface translations"), this ambiguity would appear to have an effect. However, as discussed in subsection 6.1, once the appropriate boundary terms are included in the generators, the result is independent of this ambiguity. The form of the generator does motivate a natural prescription for fixing the ambiguity such that the Lagrangian has a well-defined variational principle, so that it is completely stationary on-shell, as opposed to being stationary up to boundary contributions.
θ ambiguity
The second ambiguity comes from the freedom to add an exact form dβ to θ, since doing so does not affect its defining equation (4.1). Here, β ≡ β[φ; δφ] is a spacetime (d − 2)-form and a one-form on S. The changes that arise from this addition are Under these transformations, the symplectic potential (4.9) changes to Hence, the symplectic potential is modified by an arbitrary boundary term β, accompanied by Iχβ that ensures that Θ retains degenerate directions along linearized diffeomorphisms. Unlike the L ambiguity, this modification is not S-exact, and changes the boundary terms in the symplectic form, Ω → Ω + ∂Σ (δβ + δIχβ + £χβ + £χIχβ). (5.6) Because β can in principle involve arbitrarily many derivatives of δφ, its presence can cause Ω to depend on second or higher derivatives of χ a on the boundary. This affects which parts of χ a correspond to degenerate directions, and will lead to different numbers of boundary degrees of freedom in the reduced phase space. As discussed in section 6, this ambiguity can also be used to reduce the surface symmetry algebra to a subalgebra. Give that β contributes to Θ and Ω only at the boundary, it can involve tensors associated with the surface ∂Σ that do not correspond to spacetime-covariant tensors, such as the extrinsic curvature. This allows the Dong entropy [59][60][61], which differs from the Wald entropy [46,47] by extrinsic curvature terms, to be viewed as a Noether charge with a specific choice of ambiguity terms. This is the point of view advocated for in [62], where the ambiguity was resolved by requiring that the entropy functional derived from the resultant Noether charge satisfy a linearized second law. In general, fixing the ambiguity requires some additional input, motivated by the particular application at hand.
Q ξ ambiguity
The final ambiguity is the ability to shift Q ξ by a closed form γ, with dγ = 0. Since Q ξ depends linearly on ξ a and its derivatives, γ should be chosen to also satisfy this requirement. If γ is identically closed for all ξ a , it then follows that it must be exact, γ = dν [63]. Its integral over the closed surface ∂Σ then vanishes, so that it has no effect on Θ or Ω.
Surface symmetry algebra
The extended phase space constructed in section 4 contains new edge mode fields χ a on the boundary of the local subregion, whose presence is required in order to have a gauge-invariant symplectic form. Associated with the edge modes are a new class of transformations that leave the symplectic form and the equations of motion invariant. These new transformations comprise the surface symmetry algebra. This algebra plays an important role in the quantum theory when describing the edge mode contribution to the entanglement entropy, thus it is necessary to identify the algebra and its canonical generators.
As discussed in [36], the surface symmetries coincide with diffeomorphisms in the preimage space, Z : R d → R d , where R d ⊃ X −1 (M). These leave the spacetime fields φ unchanged, but transform the X fields by X → X • Z. This also transforms the pulled back fields X * φ → Z * X * φ, and due to the diffeomorphism invariance of the field equations, the pulled back fields still define solutions. These transformations therefore comprise a set of symmetries for the dynamics in the local subregion. Infinitesimally, these transformations are generated by vector fields w a on R d . Analogous to vector fields defined on M, w a defines a vectorŵ on S, whose action on the pulled back fields X * φ is given by the Lie derivative, while its action on φ is trivial, Lŵφ = 0. On the other hand, we may apply the pullback formula (3.1) to this equation to derive where W a = (X −1 ) * w a . The contractions of the vectorŵ with the basic S one-forms are therefore Iŵ χ a = W a , Iŵδφ = 0. (6.3) We also will assume that w a is independent of the solution, so that δw a = 0. Writing this as 0 = δX * W a , and applying the pullback formula (3.1), one finds δW a = −£χW a . (6.4) In order for the transformation to be a symmetry of the phase space, it must generate a Hamiltonian flow. This means that IŵΩ is exact, and determines the Hamiltonian Hŵ for the flow via δHŵ = −IŵΩ. The contraction with the symplectic form can be computed straightforwardly from (4.12) by first using the decomposition (4.14) for δQχ. Then The first three terms of the first line combine into the first term in the second line, using formula (6.4) for δW a , formula (4.14) for δQ W , and recalling that the integral involves an implicit pullback by X * , so that δ ∂Σ Q W = ∂Σ (δQ W + £χQ W ).
It is immediately apparent that if the second integral in (6.6) vanishes, the flow is Hamiltonian. This occurs if W a is tangent to ∂Σ or vanishing at ∂Σ, and hence defines a mapping of the surface into itself. If W a is tangential, it generates a diffeomorphism ∂Σ, while vector fields that vanish on ∂Σ generate transformations of the normal bundle to the surface while holding all points on the surface fixed. These transformations were respectively called surface diffeomorphisms and surface boosts in [36]. The remaining transformations consist of the surface translations, where W a has components normal to the surface, and the second integral in (6.6) does not vanish. In general, this term does not give a Hamiltonian flow, except when the fields satisfy certain boundary conditions. We will briefly discuss the surface translations in subsection 6.1, where we show that they can give rise to central charges in the surface symmetry algebra.
Returning to the surface-preserving transformations, we find that the Hamiltonian is given by the Noether charge integrated over the boundary, The surface symmetry algebra is generated through the Poisson bracket of the Hamiltonians for all possible surface-preserving vectors. The Poisson bracket is given by where the last equality uses equation (6.4) applied to δV a and that ∂Σ £ W Q V = ∂Σ i W dQ V vanishes when integrated over the surface since W a is parallel to ∂Σ. This shows that the algebra generated by the Poisson bracket is compatible with the Lie algebra of surface preserving vector fields, without the appearance of any central charges, i.e. the map w a → Hŵ is a Lie algebra homomorphism. Note that the algebra of surface-preserving vector fields is much larger than the surface symmetry algebra. This is because the generators of surface symmetries depend only on the values of the vector field and its derivative at ∂Σ. Vector fields that die off sufficiently quickly near ∂Σ correspond to vanishing Hamiltonians. The transformations they induce on S are pure gauge, and they drop out after passing to the reduced phase space.
To identify the surface symmetry algebra, it is useful to first describe the larger algebra of surface-preserving diffeomorphisms, which contains the surface symmetries as a subalgebra. It takes the form of a semidirect product, Diff(∂Σ) ⋉ Dir ∂Σ where Diff(∂Σ) is the diffeomorphism group of ∂Σ, and Dir ∂Σ is the normal subgroup of diffeomorphisms that fix all points on ∂Σ. 5 Dir ∂Σ is generated by vector fields W a that vanish on ∂Σ, and it is a normal subgroup because the vanishing property is preserved under commutation with all surface-preserving vector fields: where the first term vanishes since W b vanishes at ∂Σ, and the second term vanishes because V b is parallel to ∂Σ, and W a is zero everywhere along the surface. A general surface preserving vector field can then be expressed as where W a 0 vanishes on ∂Σ and W a is tangent to ∂Σ. Note that this decomposition is not canonical; away from ∂Σ there is some freedom in specifying which components of the vector field correspond to the tangential direction. However, given any such choice, it is clear that if W a is nonvanishing at ∂Σ, then it will be nonzero in a neighborhood of ∂Σ, and hence the parallel vector fields act nontrivially on the V a 0 component of other vector fields. Finally, the commutator of two purely parallel vector fields [W , V ] will remain purely parallel, since they are tangent to an integral submanifold. The map W a → W a is therefore a homomorphism from the surface-preserving diffeomorphisms onto Diff(∂Σ), with kernel Dir ∂Σ . This establishes that the group of surface-preserving diffeomorphisms is Diff(∂Σ) ⋉ Dir ∂Σ .
The surface symmetry algebra is represented as a subalgebra of Diff(∂Σ) ⋉ Dir ∂Σ . The Hamiltonian for a surface-preserving vector field is determined by the Noether charge Q W , which depends only on the value of W a and its first derivative at ∂Σ. Hamiltonians for vector fields that are nonvanishing at ∂Σ provide a faithful representation of the Diff(∂Σ) algebra; however, the vanishing vector fields only represent a subalgebra of Dir ∂Σ . To determine it, note that only the first derivative of W a contributes to the Noether charge, and its tangential derivative vanishes. Letting x i , i = 0, 1, represent coordinates in the normal directions that vanish on ∂Σ, the components of the vector field may be expressed W µ = x i W µ i + O(x 2 ), µ = 0, . . . , d − 1, and the O(x 2 ) terms are determined by the second derivatives, which do not contribute to the Noether charge. Then the commutator of two vectors is which is seen to be determined by the matrix commutator of W µ i and V ν j , by allowing the i, j indices to run over 0, . . . , d − 1, setting all entries with i, j > 1 to zero. This algebra gives a copy of SL(2, R) ⋉ R 2·(d−2) for each point on ∂Σ. The abelian normal subgroup R 2·(d−2) is generated by vectors for which the µ index in W µ i is tangential, i.e. W j i ≡ W µ i ∇ µ x j = 0. These vectors represent shearing transformations of the normal bundle: they generate flows that vanish on ∂Σ, and are parallel to ∂Σ away from the surface. By specifying a normal direction, one obtains a homomorphism sending W µ i to its purely normal part, W j i . The fact that only the traceless part of ∇ a W b contributes to the Noether charge, which follows from the antisymmetry of E abcd from equation (4.13) in c and d, translates to the requirement that W j i be traceless when W a vanishes on ∂Σ. This means that the 2 × 2 matrices W j i generate an SL(2, R) algebra. The generators V µ i of R 2·(d−2) transform as a collection of (d − 2) vectors under the SL(2, R) algebra, verifying the semidirect product structure SL(2, R) ⋉ R 2·(d−2) for the vector fields vanishing at ∂Σ. Under diffeomorphisms of ∂Σ, V µ i transforms as a pair of vectors; hence, the full surface symmetry algebra is Diff The extra factor of R 2·(d−2) is a novel feature of this analysis, appearing for generic higher curvature theories, but not for general relativity [36]. Its presence or absence is explained by the particular structure of E abcd , the variation of the Lagrangian scalar with respect to R abcd . When E abcd is determined by its trace, i.e., equal to E d(d−1) (g ac g bd − g ad g bc ) with E a scalar, the R 2·(d−2) transformations are pure gauge. The Noether charge for a vector field vanishing at the surface evaluates to 6 where µ is the volume form on ∂Σ and n ab is the binormal; n c d projects out the tangential component in ∇ c W d , leaving only the SL(2, R) transformations as physical symmetries. A particular class of theories in which this occurs are f (R) theories (which include general relativity), where the Lagrangian is a function of the Ricci scalar, and E abcd = 1 2 f ′ (R)(g ac g bd − g ad g bc ). In more general theories, however, n ab E abc d will have a tangential component on the d index, and the algebra enlarges to include the R 2·(d−2) tranformations.
Curiously, there always exists a choice of ambiguity terms, discussed in subsection 5.2, that eliminates the R 2·(d−2) symmetries. Namely, the symplectic potential current θ can be modified as in equation (5.4a), with β chosen to be β = ǫ ab E abed s c e δg cd , (6.14) and s c e = −u e u c + n e n c is the projector onto the normal bundle of ∂Σ. Note that the explicit use of normal vectors to ∂Σ makes this β not spacetime-covariant. This is nevertheless in line with the broader set of allowed ambiguity terms discussed above. From equation (5.4d), this term changes the Noether charge of a vector vanishing at ∂Σ to The additional terms involving s c e drop out when contracted with the normal component on the d index of ∇ c W d ; however, on the tangential component the additional terms cancel against the first term. This choice of ambiguity thus reduces the surface symmetry algebra to coincide with the algebra for general relativity, Diff(∂Σ) ⋉ SL(2, R) ∂Σ .
Whether or not to use this choice of β depends on the application at hand, and it is unclear at the moment how exactly β should be fixed when trying to characterize the edge mode contribution to the entanglement entropy of a subregion. The above choice is natural in the sense that it gives the same surface symmetry algebra for any diffeomorphism-invariant theory. This would mean that the surface symmetry algebra is determined by the gauge group of the theory, while the Hamiltonians for the symmetry generators change depending on the specific dynamical theory under consideration. Note also that there are additional ambiguity terms that could be added, some of which enlarge the symmetry algebra by introducing dependence on higher derivatives of the vector field. Determining how to fix the ambiguity remains an important open problem for the extended phase space program.
Surface translations
While the surface-preserving transformations are present for generic surfaces, in situations where the fields satisfy certain boundary conditions at ∂Σ, the surface-symmetry algebra can enhance to include surface translations. These are generated by vector fields that contain a normal component to ∂Σ on the surface. For such a vector field, the second integral in (6.6) does not vanish, so for this transformation to be Hamiltonian, this integral must be an exact S form. To understand when this can occur, it is useful to first rewrite the integral in terms of pulled back fields on ∂σ, the preimage of ∂Σ under the X map: (6.16) Since δw a = 0, it is clear from this last expression that the flow will be Hamiltonian only if at the boundary, θ is exact when contracted with w a , where B[φ] is some functional of the fields, possibly involving structures defined only at ∂Σ such as the extrinsic curvature. When this condition is satisfied, the second integral in (6.6) simply becomes δ ∂Σ i W B, and so the full Hamiltonian for an arbitrary vector field w a is Hŵ = Next we compute the algebra of the surface symmetry generators under the Poisson bracket. It is worth noting first that by contracting equation (6.17) with Iv, we find that the B functional satisfies With this, the Poisson bracket is given by Hence, the commutator algebra of the vector fields w a is represented by the algebra provided by the Poisson bracket, except when both vector fields have normal components at the surface, in which case the second term in (6.20) gives a modification. In fact, the quantities provide a central extension of the algebra, which is verified by showing that they are locally constant on the phase space, and hence commute with all generators. The exterior derivative is On shell, we have δL = dθ, and from (6.17) we can argue that the replacement i W i V dδB → i W i V dθ is valid at ∂Σ. Hence, the above variation vanishes, and K[ŵ,v] indeed defines a central extension of the algebra. The modification that B makes to the symmetry generators takes the same form as a Noether charge ambiguity arising from changing the Lagrangian L → L + dα, with α = −B. Using the modified Lagrangian L − dB, the potential current changes to θ − δB. The boundary condition (6.17) then implies that the terms involving θ in (6.6) vanish. The symmetry generators are simply given by the integrated Noether charge, which is modified to modified to Q W → Q W − i W B by the ambiguity. Hence, the generators Hŵ are the same as in (6.18), and their Poisson brackets still involve the central charges K[ŵ,v]. Finally, note that the constancy of the central charges requires the variation of the modified Lagrangian L − dB be zero when evaluated on ∂Σ. Requiring that variations of the Lagrangian have no boundary term on shell generally determines the boundary conditions for the theory. The same is true here: a choice of B satisfying (6.17) can generally only be found if the fields obey certain boundary conditions, and different boundary conditions lead to different choices for B.
The surface translations can be parameterized by normal vector fields W i defined on ∂Σ. Assuming ∂ i W j = 0 in some coordinate system, where i, j are normal indices, we can work out their commutation relations with generators of the rest of the algebra: where A denotes a tangential index. The first relation shows that the new generators commute among themselves (although the corresponding Poisson bracket is equal to the central charge K[ŵ,v]), while the second and third show that W i transforms as a vector under SL(2, R) and as a scalar under Diff(∂Σ). If the Noether charge ambiguity is chosen as in equation (6.14) so that the normal shearing generators x j V A j drop out of the algebra, the resulting surface symmetry algebra is Diff(∂Σ) ⋉ (SL(2, R) ⋉ R 2 ) ∂Σ .
However, if the normal shearing transformations are retained, equation (6.26) shows that the surface translations are no longer a normal subgroup, since the commutator gives rise to generators of Diff(∂Σ) and SL(2, R) ∂Σ . In this case, the full surface symmetry algebra is simple. The above analysis was carried out assuming that all normal vectors generate a surface symmetry. In practice, equation (6.17) may only be obeyed for some specifically chosen normal vectors [44]. The resulting algebra will then be a subalgebra of the generic case considered in this section.
Discussion
Building on the results of [36], this paper has described a general procedure for constructing the extended phase space in a diffeomorphism-invariant theory for a local subregion. The integral of the symplectic current for the unextended theory fails to be degenerate for diffeomorphisms that act at the boundary, and this necessitates the introduction of new fields, X, to ensure degeneracy. These fields can be thought of as defining a coordinate system for the local subregion, and the extended solution space consists of fields satisfying the equations of motion in all possible coordinate systems parameterized by X. While the X fields do not satisfy dynamical equations themselves, it was shown in section 4 that their variations contribute to the symplectic form through the boundary integral in equation (4.12).
There are a few novel features of the extended phase space for arbitrary diffeomorphisminvariant theories that do not arise in vacuum general relativity with zero cosmological constant. First, in any theory whose Lagrangian does not vanish on-shell, the symplectic potential Θ is not a single-valued one form on the reduced phase space P. This is due to the bulk integral of the Lagrangian that appears in equation (4.9), along with the fact that variations for which χ a has support only away from the boundary ∂Σ are degenerate directions of the extended symplectic form, (4.12). Because of this, Ω fails to be exact, despite satisfying δΩ = 0. Investigating the consequences of this nontrivial cohomology for P remains an interesting topic for future work.
Another new result comes from the form of the surface symmetry algebra. As in general relativity, any phase space transformation generated byŵ for which W a ≡ Iŵ χ a is tangential at ∂Σ is Hamiltonian. These generate the group Diff(∂Σ) ⋉ Dir ∂Σ of surface-preserving diffeomorphisms, but only a subgroup is represented on the phase space. This subgroup was found in section 6 to be Diff(∂Σ) ⋉ SL(2, R) ⋉ R 2·(d−2) ∂Σ , which is larger than the surface symmetry group Diff(∂Σ) ⋉ SL(2, R) ∂Σ found in [36] for general relativity. The additional abelian factor R 2·(d−2) arises generically; however, it is not present in f (R) theories, in which the tensor E abcd is constructed solely from the metric and scalars. We also noted that for any theory, there exists a choice (6.14) of ambiguity terms that can be added to θ, with the effect of eliminating the R 2·(d−2) factor of the surface symmetry algebra.
The inclusion of surface translations into the surface symmetry algebra was discussed in section 6.1. This requires the existence of a (d − 1)-form B satisfying the relation (6.17) for at least some vector fields that are normal to the boundary. If such a form can be found, the surface translations are generated by the Hamiltonians (6.18). Interestingly, the Poisson brackets of these Hamiltonians acquire central charges given by (6.21), which depend on the on-shell value of the modified Lagrangian L − dB at ∂Σ. Such central charges are a common occurrence in surface symmetry algebras that include surface translations [44,45,50,[64][65][66][67]. In general, the existence of B requires that the fields satisfy boundary conditions at ∂Σ. An important topic for future work would be to classify which boundary conditions the fields must satisfy in order for B to exist. For example, with Dirichlet boundary conditions where the field values are specified at ∂Σ, B is given by the Gibbons-Hawking boundary term, constructed from the trace of the extrinsic curvature in the normal direction [68]. However, such boundary conditions are quite restrictive on the dynamics. For a local subsystem in which ∂Σ simply represents a partition of a spatial slice, one would not expect Dirichlet conditions to be compatible with all solutions of the theory. An alternative approach would be to impose conditions that specify the location of the surface in a diffeomorphisminvariant manner, without placing any restriction on the dynamics. One example is requiring that the surface extremize its area or some other entropy functional, as is common in holographic entropy calculations [2, 59-61, 69, 70]. Since extremal surfaces exist in generic solutions, these boundary conditions put no dynamical restrictions on the theory, but rather restrict where the surface ∂Σ lies.
The effects of JKM ambiguity terms in the extended phase space construction were discussed in section 5. It was noted that the B form that appears when analyzing the surface translations could be interpreted as a Lagrangian ambiguity, L → L − dB. Note that this type of ambiguity does not affect the symplectic form (4.12), and, as a consequence, the generators of the surface symmetries do not depend on this replacement. In fact, the generators (6.18) are invariant with respect to additional changes to the Lagrangian L → L + dα, since such a change shifts the Noether charge Q W → Q W + i W α, but also induces the change B → B + α. An ambiguity that does affect the phase space is the shift freedom in the symplectic potential current, θ → θ + dβ. We noted that certain choices of β can change the number of edge mode degrees of freedom, and also can affect the surface symmetry algebra. In the future, we would like to understand how this ambiguity should be fixed. One idea would be to use the ambiguity to ensure some B can be found satisfying equation (6.17). In this case, the ambiguity is fixed as an integrability condition for θ. Such an approach seems related to the ideas of [62] in which the ambiguity was chosen to give an entropy functional satisfying a linearized second law. Another approach discussed in [61,[70][71][72] fixes the ambiguity through the choice of metric splittings that arise when performing the replica trick in the computation of holographic entanglement entropy.
As discussed in the introduction, one of the main motivations for constructing the extended phase space is to understand entanglement entropy in diffeomorphisminvariant theories [36]. The Hilbert space for such a theory does not factorize across an entangling surface due to the constraints. However, one can instead construct an extended Hilbert space for a local subregion Σ as a quantization of the extended phase space constructed above. This extended Hilbert space will contain edge mode degrees of freedom that transform in representations of the surface symmetry algebra. A similar extended Hilbert space can be constructed for the complementary regionΣ, whose edge modes and surface symmetries will match those associated with Σ. The physical Hilbert space for Σ ∪Σ is given by the so-called entangling product of the two extended Hilbert spaces, which is the tensor product modded out by the action of the surface symmetry algebra. One then finds that the density matrix associated with Σ splits into a sum over superselection sectors, labelled by the representations of the surface symmetry group.
This block diagonal form of the density matrix leads to a von Neumann entropy that is the sum of three types of terms, where the sum is over the representations R i of the surface symmetry group, p i give the probability of being in a given representation, and S i is the von Neumann entropy within each superselection sector. The first term represents the average entropy of the interior degrees of freedom, while the second term is a classical Shannon entropy coming from uncertainty in the surface symmetry representation corresponding to the state. The last term arises from entanglement between the edge modes themselves, and is only present for a nonabelian surface symmetry algebra [73,74]. The dimension of the representation has some expression in terms of the Casimirs of the group, and hence this term will take the form of an expectation value of local operators at the entangling surface. It is conjectured that this term provides a statistical interpretation for the Wald-like contributions in the generalized entropy, S gen = S Wald-like + S out [36]. Put another way, given a UV completion for the quantum gravitational theory, the edge modes keep track of the entanglement between the UV modes that are in a fixed state, corresponding to the low energy "code subspace" [8,75]. On reason for considering the extended phase space in the context of entanglement entropy comes from issues of divergences in entanglement entropy. These divergences arise generically in quantum field theories, and a regulation prescription is needed in order to get a finite result. A common regulator for Yang-Mills theories is a lattice [1,73,74], which preserves the gauge invariance of the theory. Unfortunately, a lattice breaks diffeomorphism invariance, which can be problematic when using it as a regulator for gravitational theories (see [76] for a review of the lattice approach to quantum gravity). The extended phase space provides a continuum description of the edge modes that respects diffeomorphism invariance. As such, it should be amenable to finding a regulation prescription that does not spoil the gauge invariance of the gravitational theory. Finding such a description is an important next step in defining entanglement entropy for a gravitational theory.
There are a number of directions for future work on the extended phase space itself, outside of its application to entanglement entropy. One topic of interest is to clarify the fiber bundle geometry of the solution space S, which arises due to diffeomorphism invariance. A fiber in this space consists of all solutions that are related by diffeomorphism, and the χ a fields define a flat connection on the bundle. Flatness in this case is equivalent to the equation δ( χ a ) + 1 2 [ χ , χ ] a = 0 for the variation of χ a . This fiber bundle description of S will be reported on in a future work [77]. Another technical question that arises is whether S truly carries a smooth manifold structure. One obstruction to smoothness would be if the equations of motion are not well-posed in some coordinate system. In this case, the solutions do not depend smoothly on the initial conditions on the Cauchy slice Σ, calling into question the smooth manifold structure of S. If X is used to define the coordinate system, this would mean that for some values of X the solution space is not smooth. A possible way around this is to always work in a coordinate system in which the field equations are well-posed, and the gauge transformation to this coordinate system would impose dynamical equations on the X fields. Another obstruction to smoothness comes from issues related to ergodicity and chaos in totally constrained systems [78]. It would be interesting to understand if these issues are problematic for the phase space construction given here, and whether the X fields ameliorate any of these problems.
Another interesting application would be to formulate the first law of black hole mechanics and various related ideas in terms of the extended phase space. This could be particularly interesting in clarifying certain gauge dependence that appears when looking at second order perturbative identities, such as described in [79]. The edge modes should characterize all possible gauge choices, and they may inform some of the relations found in [16,80,81] when considering different gauges besides the Gaussian null coordinates used in [79]. They could also be useful in understanding quasilocal gravitational energy, and in particular how to define the gravitational energy inside a small ball. This can generally be determined by integrating a pseudotensor over the ball, but there is no preferred choice for a gravitational pseudotensor, so this procedure is ambiguous. It would be interesting if a preferred choice presented itself by considering second order variations of the first law of causal diamonds [17,20], using the extended phase space. Some ideas in this direction are being considered in [82], but it is difficult to find a quasilocal gravitational energy that satisfies the desirable property of being proportional to the Bel-Robinson energy density in the small ball limit [83,84]. Finally, it would be very useful to recast the extended phase space construction in vielbein variables. Some progress on the vielbein formulation was reported in [48]. Since vielbeins have an additional internal gauge symmetry associated with local Lorentz invariance, care must be taken when applying covariant canonical constructions [85,86]. It would be particularly interesting to analyze the surface symmetry algebra that arises in this case, which could differ from the algebra derived using metric variables because the gauge group is different. Comparing the algebras and edge modes in both cases would weigh on the question of how physically relevant and universal their contribution to entanglement entropy is.
Proof. This follows from standard treatments of the exterior calculus [52].
Proof. This is simply the derivation property of the Lie derivative applied to all tensor fields on S. I U α is a contraction of the vector U with the one-form α, so the Lie derivative first acts on U to give the vector field commutator L V U = [V, U], and then acts on α, with the contraction I U now being applied to L V α. Hence, on an arbitrary form, Proof. The discussion of section 2 derived equation (2.4), so all that remains is to show that χ a (Y ; V ) is linear in the vector V . This can be demonstrated inductively on the degree of α. For scalars, it is enough to show it holds on the functions φ x . Applying A.1, we have on the one hand while on the other hand, since I V commutes with Y * . Equating these expressions, we find Since the right hand side of this expression is linear in V , χ(Y ; V ) must be as well.
Now suppose A.3 holds for all forms of degree n − 1, and take α to be degree n.
Then for an arbitrary vector U, I U Y * α is degree n − 1, so where identity A.2 was applied along with the fact that I U commutes with £ ξ . On the other hand, Since U was arbitrary, equating these expressions shows thatχ a (Y ; V ) = I V χ a Y , showing that the formula holds for forms of degree n.
Proof. This is essentially the antiderivation property applied to £χ Y . The spacetime Lie derivative £χ Y acting on a tensor can be written in terms of χ a Y and its derivatives contracted with the tensor, where all instances of χ a Y appear to the left. It is straightforward to see that when I V contracts with χ a Y in this expression, the terms will combine into £ (I V χ Y ) , and since I V does not change the spacetime tensor structure of the object it contracts, the remaining terms will combine into −£χ Y I V , with the minus coming from the antiderivation property of I V .
Proof. This may also be demonstrated inductively on the degree of α. For scalars, we simply note that equation (A.3) is valid for arbitrary vectors V , and since . Assume now A.5 holds for all (n − 1)-forms, and take α an n-form and V an arbitrary vector. Then The first equality applies A.1, the second uses A.3 and the fact that I V Y * α is an (n − 1)-form, and the last equality follows from A.1 and A.4. Since V is arbitrary, this completes the proof.
Proof. This is a consequence of the formula for the commutator of two vectors, along with the fact that since χ a is an S one-form, it anticommutes with itself. Alternatively, the formula may be checked by contracting with arbitrary vectors V and U. Letting I V χ a Y = −ξ a and I U χ a Y = −ζ a , we have Proof. For ordinary spacetime vectors ξ a and ζ a , the Lie derivative satisfies [58] £ ξ £ ζ = £ [ξ,ζ] + £ ζ £ ξ . (A.8) Since χ a Y are anticommuting, this formula is modified to from which the identity follows. Note that A.6 provides a formula for [ χ Y , χ Y ] a .
Proof. This identity is a standard property of the Lie derivative, see e.g. [87].
Proof. The identity for ordinary spacetime vectors ξ a and ζ b [58] along with the fact that χ a are anticommuting gives 11) and moving −£χiχ to the left hand side proves the identity.
A.10 £χθ + δiχL + £χiχL = d iχθ + 1 2 iχiχL Proof. The first term in this expression is £χθ = diχθ + iχdθ, which gives one of the terms on the right hand side of the identity, along with iχdθ. Next we have where we applied equation (2.8) for δ χ a , and used that δL = dθ on shell. The −iχdθ term cancels against the similar term appearing in £χθ, so that the remaining pieces are which follows from identity A.9 and dL = 0. Hence, the terms on the left of the A.10 combine into the exact form d(iχθ + 1 2 iχiχL).
A.12 Lξ = £ ξ + I δξP roof. This formula is meant to apply to local functionals of the fields defined at a single spacetime point. Since I δξˆa nnihilates scalars, it clearly is true for that case. Then assume the formula has been shown for all (n − 1)-forms, and take α to be an n-form. For an arbitrary vector V , since I V α is an (n − 1)-form, we have 15) and the last two terms in this expression cancel due to identity A.11. Since V was arbitrary, we conclude that the identity holds for all n forms, and by induction for all S differential forms. Applied to φ and defining ν a = −I V χ a , this gives To get to the third line, the expression (2.8) for δ χ a was used. We then conclude [V,χ] = [ν, χ ]ˆ− δνˆ, proving the identity.
A.15 Lχ = £χ − I δ χP roof. The formalism of graded commutators developed in [87] is a useful tool in proving this identity. Given two graded derivations D 1 and D 2 , their graded commutator D 1 D 2 − (−1) k 1 k 2 D 2 D 1 is another graded derivation, where k i are the degrees of the respective derivations, i.e. the amount the derivation increases or decreases the degree of the form on which it acts. Hence, since I V and Lχ are derivations of degrees −1 and 1, they satisfy where equation (2.8) was used in the last equality.
We then prove the identity through induction on the degree of the form on which it acts. It is true for scalars because I δ χφ = 0. Then suppose it is true for all (n − 1)-forms, and take α to be an n-form. For an arbitrary vector V we have (A.20) The first line employs equation (A.18), the second line uses identities A.14 and A.12 as well as the fact that I V α is an (n − 1)-form, and the third line employs equation (A.19). Since V is arbitrary, we conclude the identity holds for all n-forms, which completes the proof.
B Edge mode derivatives in the symplectic form
In this appendix, we derive the result advertised in section 4, that the symplectic form (4.12) does not depend on second or higher derivatives of χ a . Derivatives of χ a appear in Ω through the terms δQχ + £χQχ. The Lie derivative term may be expressed which uses identity A.6. When added to (B.4), the second derivative terms cancel since χ e is an S one form, so (∇ (c ∇ e) χ d ) χ e = − χ e ∇ (c ∇ e) χ d . This shows that (B.3) does not depend on second derivatives of χ d . | 18,377.8 | 2017-06-15T00:00:00.000 | [
"Mathematics"
] |
Development of a Shared Environment Model with Dynamic Trajectory Representation for Multiple Mobile Robots
. The productivity of groups can be increased by enabling group members to share their perceptions of the environment. We adapt this concept for mobile robots by presenting an object-oriented approach to a shared environmental model. The objects are stored in a graph, which saves memory and computing power and allows the representation of hierarchical and topological relationships. Each object can contain geometric and semantic data as well as information about its current, past, and planned or estimated future movements. An example application shows that modeling future motion can prevent collisions.
Motivation
Product customization and shortening of production life cycles increase the demand for flexible manufacturing systems. Mobile robots show great potential due to their high configurability and scalability as well as their freedom of movement. The applications are diverse and range from logistics and cleaning or inspecting to operation at multiple workstations. The large number of mobile resources means that future production facilities will require decentralized intelligence to handle the increasing complexity.
In order to navigate and interact safely and efficiently in such facilities, robots need an understanding of their environment that is technically enabled by environmental models. The combination of the limited perceptions of individual robots into a common knowledge base enables collective intelligence. Those shared models enable improvements firstly in safety, e.g. warning of imminent danger, and secondly in efficiency, e.g. avoiding traffic jams.
Existing models lack the ability to represent object dynamics. At critical points in a production facility, such as intersections and pedestrian crossings, information about the dynamics of surrounding objects can improve overall performance and safety. Dynamic information could either be directly integrated into the map by the performing robot or estimated by observers for agents with no connection to the shared model, e.g. human workers.
This work presents a novel approach for integrating object dynamics into world models. The suggested solution consists of a client-server architecture and a graph-based data model supporting geometry, semantic classification, topology and object dynamics, including past and present as well as predicted and planned future trajectories. First, an overview of environment models with focus on papers related to this work is given in section 2. Then, our approach is presented in section 3. Section 4 describes a proof-of-concept application offering a technical implementation and illustrating the added value of the approach. Finally, the results are discussed in section 5 and perspectives for future research are given.
Related Work
Research on shared environmental models focuses on three main areas: methods for obtaining new data and integrating them into an environmental model, solutions for contributing and distributing data in multi-user scenarios and lastly, the actual data modeling, which uses data structures, itself.
Obtaining New Data This area includes the various methods from the fields of perception, data fusion and pattern recognition and is considered only briefly below. Standard approaches to mapping in mobile robotics are "Simultaneous Localization and Mapping" (SLAM) [1,2] or "HectorSLAM" [3], which generate grid-based maps. Alternatively, the raw sensor data can also be processed by object recognition and integrated into an object-oriented data model [4]. There are several applications in the field of service robotics [5,6]. Furthermore, data can be processed to form topological graphs, e.g. transforming a 2D grid-based map of a building into a graph with nodes representing rooms [5,7]. Applications to path planning for multiple mobile robots are given in [8]. Due to inaccuracy in perception, objects need to be registered to their absolute position in the environmental model, referred to as "anchoring" or "data association" [9,10]. Context-based classification improves anchoring by considering existing knowledge [11].
Data Distribution This area integrates the methods from the fields of network architecture, consistency management and synchronization and, again, is considered only briefly below. In contrast to fully centralized map storage and fully synchronized maps with all participants, intermediate stages are a trade-off between latency and consistency [12]. A client-server structure can be applied to an object-oriented distribution model, preserving consistency by predefined access rights to object changes [13]. Multi-server architectures with environmental subsets enable scalability [14]. Moreover, there are approaches that are tolerant to inconsistency by assuming it to be acceptable at certain points of the data model [15].
Data Structures for Environment Mapping Our approach focuses on data structures, therefore the related work is discussed in detail. Common approaches for geometric relationships can be classified as grid-based or object-oriented. In addition to geometric data, environmental models can represent semantics or uncertainty.
Grid-based Structures A fundamental concept of map generation is the "Occupancy Grid Map" (OGM), discretizing 2D or 3D space into independent cells assigned with the estimated occupancy, e.g. "free", "occupied" or "unknown" [1,3,5,16]. The storage effort can be reduced by structuring the grid cells in a hierarchical manner, the so-called "octree" [17]. Combinations of several octrees enable object hierarchies and multiple resolutions [18].
Object-oriented Structures With object trees, not only cubes, but any geometric object can be described. Gregor et al. [19] use homogeneous transformations to indicate the relative spatial arrangement of parent and child objects. Semantic information can be represented by linking the nodes of the object tree to corresponding elements of a "conceptual hierarchy" [5].
An extension to graph structures holding geometric and transformation nodes is presented by Blumenthal et al. [20]. Here, nodes can contain semantic information via key-value pairs and past object movements are traced by transformationtimestamp pairs. The graph structure allows several transformation nodes to point to the same geometry node, which enables the modeling of competing perceptions in multi-robot scenarios. Multi-user operation is considered unproblematic because geometric objects, as well as transformation-timestamp pairs, are invariant. In addition, there is an interface for distributed storage of a graph, but possible consistency problems are not considered. Other approaches extend geometric models by separately stored ontologies [10,21], which are not in the scope of this work.
Further graph structures, such as in [22], focus on the shared use of the map by several actors. For this purpose, a so-called "generative grammar" is set up, which regulates how parts of the model may change. The emphasis of the research can also be on a systematic class-based representation of geometric information [23].
Modelling of Uncertainty
Predictions or measurements result in uncertain data, which must be correctly modeled. By augmenting coordinates with one standard deviation per dimension, locations can be stored as Gaussian distributions [24]. Likewise, uncertain occupancies in octrees can be expressed by probability values [17] and spatial transformations can be extended by additional covariance matrices, expressing 6D spatial deviation [20].
Different approaches introduce confidence values for graph attributes and links [10]. Object classification algorithms provide a classification probability, which is integrated into the class structure by Papp et al. [23]. Additionally, generic uncertainty models can be linked to physical quantities here.
Scientific Approach
The overview of the scientific approaches to environmental modeling shows that so far there are only solutions that are specialized in one field. If several robots with a shared environmental model are used, their activities and perceptions are very homogeneous. This in turn leads to restrictions in the representation of the perceived world in the model. Comprehensive environmental models, on the other hand, are not prepared for multi-user or distributed systems. It is not possible to enter predicted trajectories of robots or dynamic obstacles into the world model in any of the considered solutions.
Requirements
From the shortcomings of the existing solutions mentioned in the previous section and the generally necessary characteristics of an environmental model, we can derive some requirements for a new, comprehensive system for mapping the environment.
Geometry and Dynamics Real world objects should be depicted in terms of their position and shape. In order to adequately reproduce movements of objects, a distinction must be made between past, present and future movements. The latter must be further subdivided into planned and estimated movements.
Combination of Topological, Geometrical and Hierarchical Relations Three types of relations are defined in the literature: hierarchical, indicating a dependency of the function and location of subordinate objects on their parent, geometrical, indicating a spatial relation, and topological, indicating neighborhoods that are not necessarily directly connected to the spatial location.
For compact storage, a single data structure should represent these three types of relations.
Scientific Concept
Our system is based on a client-server architecture similar to those in [13] and [14]. Each instance contains a copy of the shared environmental data. In the following, we describe how the data is structured and represented, focusing on motion modeling.
Structure of the Data Model
Our approach is based on object-oriented representation. Geometric or abstract objects of the real world are mapped to nodes of a data structure whose edges reflect relationships between the objects. The data structure has two types of object links, vertical and horizontal. The relative position of two objects is stored in vertical links. They form a tree-like structure and therefore also show hierarchies between objects. In contrast to a conventional tree, objects can have more than one parent node in the superordinate level, resulting in a directed acyclic graph (DAG). This means that multiple memberships can be mapped, for example a workpiece that is jointly transported by two robots. If an object has more than one parent node, its absolute position is ambiguous because there are several paths to the root node.
The second type of object connection is the horizontal link. It shows the topological neighborhood of two objects. Horizontal links may interconnect any objects in the DAG of vertical links. The collection of all horizontal links in a data model forms one or more graphs.
Contents of the Data Model
Each node of the data model represents a real physical object or a group of several subordinate objects [18,20]. Thus a node does not necessarily have to contain a geometry. The assignment of a classification and a trajectory is also possible so that all use cases can be covered.
Geometry and Classification In our approach, several types of geometry representation can be stored. Point clouds provide a storage option close to the sensor, while octrees provide a storage-efficient solution. Moreover, triangle meshes can be stored, which are very suitable for the visualization of objects. The technical details are not discussed in detail here; instead, readers are referred to the literature [17,20].
In addition to geometries, the results of classification algorithms can be stored in the nodes [10,23]. Object Dynamics Present and past object dynamics are modeled according to [20]. This is applicable to the linear and angular velocity stored in each object node, as well as to the location of objects stored in the vertical links. For interfering objects, future motions can be predicted based on present and past motions. Our approach includes an uncertainty model for representing predicted trajectories [25]. Figures 1(a) and 1(b) show dispersing trajectories of velocity and orientation angle, expressed as the mean value and standard deviation. Accordingly, discrete trajectories are formed by time-sampled points where the position is assigned an uncertainty in the form of a covariance matrix, see Figure 1(c). This representation of an uncertain trajectory has the advantage that time and location components are separated. In an environmental model which is based on geometric relationships, the trajectory can therefore be seamlessly inserted.
In case of robots, future motions are predictable. Therefore, our approach includes planned trajectories as continuous paths (NURBS: Non-Uniform Rational B-Spline) mapped to time.
Proof of Concept
A typical system has been created using open-source software. Due to the early stage of development, an industry-like application has been simulated in an office environment, e.g. an order picking robot in a small parts warehouse.
Implementation
The described object-oriented data model has been written in C++. Several hierarchical classes represent the different nodes and edges as well as the optional information about classification, geometry and trajectory. They are depicted on the left side of Figure 2. Linux-based client programs were implemented for reading sample geometries from a database, inserting a trajectory and visualizing the model. Another ROS-based client makes the world model accessible to the mobile robot.
The clients communicate via a WLAN with a linux-based server application. If a client makes a change to its instance of the data model, it informs the server. The server forwards the change to the other clients.
Experiment and Observation
The use example consists of a mobile robot driving along a narrow alley. At a hard-to-observe intersection a human worker crosses the robot's pathway.
The robot, which is only equipped with on-board sensors, would need to reduce its speed at such locations to ensure safety. Therefore the robot's performance decreases even if no interference occurs.
In this scenario it is assumed that a second robot beyond the junction can perceive the person and estimate their future movement. The expected trajectory is fed into the environmental model available to the first robot. The setup is illustrated in Figure 1(d).
Experiments prove that the first robot can react to the interference by stopping before being able to perceive the human worker based on onboard sensors alone. Figure 2 shows the crossing situation in reality and in the model.
Results, Discussion and Conclusion
In this work, a novel shared environmental model for mobile robots was presented. The graph-based data model can represent geometry as point clouds or triangle meshes and semantic information as classification results. The proposed solution is suitable to represent future or past object dynamics and to consider the uncertainty in predicted trajectories, as demonstrated in the section above.
Furthermore, the environmental model offers advantages in the operation of mobile robots. By communicating with other robots, a robot can react to a moving obstacle before it has detected it with it's on-board sensors.
Robots must normally drive at an appropriate speed to be able to brake at dangerous points such as hard-to-observe intersections. It is also conceivable to install stationary sensors at danger points so that potentially interfering objects are always reported to the shared environmental model. Thus overall performance can be increased while maintaining safety.
Fulfillment of Requirements
Current movements can be stored in the model as single values with a timestamp. When a value with a newer timestamp is added, the previous one moves to the history. Future movements can be stored continuously or discretely with uncertainty. In this way, the degree of detail and the time horizon can be varied arbitrarily. This covers a large number of applications.
Combining hierarchies and location relationships makes the modeling of location-independent hierarchies more difficult. On the other hand, it simplifies changing the location of object groups as only the position of the parent node must be updated.
Outlook
Further Aspects of Modeling for Multiple Robots In addition to the aspects described above, the presented concept contains mechanisms to ensure consistency and efficient distribution of data. These are of great importance in a shared environmental model. The mechanisms are based on the existence of a central, superordinate server that decides on the approval and distribution of updates.
To limit network traffic, it is possible to subdivide the environmental model into several regions. In this way, each robot receives updates matched to its location instead of receiving updates for the whole map. The horizontal links between the model objects that are described above are decisive for the subdivision.
Future Work Several possibilities for geometry storage were mentioned in sec. 3.2. It should be investigated whether further methods need to be added to this selection. Since most types of information occurring in a virtual environment are uncertain, the modeling of uncertainty should also be further investigated. Finally, tests of the environmental model with large amounts of content must be carried out to assess storage efficiency and speed so that the results can be compared with other solutions.
Acknowledgements
The results presented in this article were developed within the FORobotics (AZ-Open Access This chapter is licensed under the terms of the Creative Commons Attripermits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 3,851.4 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
The cytosolic form of aspartate aminotransferase is required for full activation of TOR complex 1 in fission yeast
The evolutionarily conserved TOR complex 1 (TORC1) activates cell growth and proliferation in response to nutritional signals. In the fission yeast Schizosaccharomyces pombe, TORC1 is essential for vegetative growth, and its activity is regulated in response to nitrogen quantity and quality. Yet, how TORC1 senses nitrogen is poorly understood. Rapamycin, a specific TOR inhibitor, inhibits growth in S. pombe only under conditions in which the activity of TORC1 is compromised. In a genetic screen for rapamycin-sensitive mutations, we isolated caa1-1, a loss-of-function mutation of the cytosolic form of aspartate aminotransferase (Caa1). We demonstrate that loss of caa1+ partially mimics loss of TORC1 activity and that Caa1 is required for full TORC1 activity. Disruption of caa1+ resulted in aspartate auxotrophy, a finding that prompted us to assess the role of aspartate in TORC1 activation. We found that the amino acids glutamine, asparagine, arginine, aspartate, and serine activate TORC1 most efficiently following nitrogen starvation. The glutamine synthetase inhibitor l-methionine sulfoximine abolished the ability of asparagine, arginine, aspartate, or serine, but not that of glutamine, to induce TORC1 activity, consistent with a central role for glutamine in activating TORC1. Neither addition of aspartate nor addition of glutamine restored TORC1 activity in caa1-deleted cells or in cells carrying a Caa1 variant with a catalytic site substitution, suggesting that the catalytic activity of Caa1 is required for TORC1 activation. Taken together, our results reveal the contribution of the key metabolic enzyme Caa1 to TORC1 activity in S. pombe.
The evolutionarily conserved TOR complex 1 (TORC1) activates cell growth and proliferation in response to nutritional signals. In the fission yeast Schizosaccharomyces pombe, TORC1 is essential for vegetative growth, and its activity is regulated in response to nitrogen quantity and quality. Yet, how TORC1 senses nitrogen is poorly understood. Rapamycin, a specific TOR inhibitor, inhibits growth in S. pombe only under conditions in which the activity of TORC1 is compromised. In a genetic screen for rapamycin-sensitive mutations, we isolated caa1-1, a loss-of-function mutation of the cytosolic form of aspartate aminotransferase (Caa1). We demonstrate that loss of caa1 ؉ partially mimics loss of TORC1 activity and that Caa1 is required for full TORC1 activity. Disruption of caa1 ؉ resulted in aspartate auxotrophy, a finding that prompted us to assess the role of aspartate in TORC1 activation. We found that the amino acids glutamine, asparagine, arginine, aspartate, and serine activate TORC1 most efficiently following nitrogen starvation. The glutamine synthetase inhibitor L-methionine sulfoximine abolished the ability of asparagine, arginine, aspartate, or serine, but not that of glutamine, to induce TORC1 activity, consistent with a central role for glutamine in activating TORC1. Neither addition of aspartate nor addition of glutamine restored TORC1 activity in caa1-deleted cells or in cells carrying a Caa1 variant with a catalytic site substitution, suggesting that the catalytic activity of Caa1 is required for TORC1 activation. Taken together, our results reveal the contribution of the key metabolic enzyme Caa1 to TORC1 activity in S. pombe.
TOR is an evolutionarily conserved serine/threonine protein kinase that coordinates cell growth with nutrient availability (1,2). TOR proteins were originally identified in the budding yeast Saccharomyces cerevisiae in a screen for mutations that conferred resistance to the growth inhibitory effect of rapamycin (3)(4)(5). Rapamycin is a macrolide compound that has immunosuppressive and anti-proliferative effects, which are beneficial in the treatment of transplant and cancer patients and possibly also in metabolic and neurological disorders (1,2,6). Rapa-mycin requires an intracellular co-factor for its toxicity, the peptidyl, prolyl cis/trans-isomerase, FKBP12 (3). The FKBP12rapamycin toxic complex binds the FRB (FKBP12-rapamycin binding) domain in TOR, thus inhibiting its activity (4,5) (reviewed in Ref. 8).
TOR proteins are found in two distinct complexes, named TOR complex 1 (TORC1) 3 and TOR complex 2 (TORC2) (9). TORC1 is typically inhibited by rapamycin, whereas TORC2 is less sensitive to inhibition by the drug (9 -11). In human cells, a single TOR gene exists, mTOR, which is found in association with the Raptor protein to form mTORC1, or with the Rictor and mSin1 proteins to form mTORC2 (reviewed in Ref. 2). Two TOR kinases are present in the fission yeast Schizosaccharomyces pombe, Tor1 and Tor2. These kinases act as the catalytic subunits of the two TOR complexes: S. pombe TORC1 contains mainly the Tor2 kinase, whereas S. pombe TORC2 contains mainly the Tor1 kinase (12)(13)(14). As in other eukaryotes, S. pombe TORC1 positively regulates many different growthrelated processes, whereas inhibiting starvation responses (15). TORC1 activity is sensitive to nutritional starvation. In particular, withdrawal of the glucose or nitrogen from the growth medium leads to de-activation of TORC1 in fission yeast (16). The direct signal mechanisms for the activation of TORC1 are as yet unknown; however, a large body of evidence suggests that the nitrogen source and amino acids (which may also serve as nitrogen source) play a key and conserved role in TORC1 activation (17,18). Nitrogen is an essential element required for synthesis of amino acids, nucleotides, and other cellular components. Yeast cells can sense, take up, and assimilate several sources of nitrogen. Studies in S. cerevisiae defined high quality nitrogen sources, such as glutamine, by their ability to promote rapid growth and suppress nitrogen-catabolite-repression (NCR) genes (19). In fission yeast, withdrawal of the nitrogen source from the growth medium, or its replacement with a low quality source of nitrogen (for example, proline instead of ammonia), results in down-regulation of TORC1 activity. This is best manifested by a dramatic reduction in the phosphorylation of Psk1, the kinase that lies directly downstream to S. pombe TORC1 (orthologue to human p70 S6 kinase, or in short p70S6K), or reduction in the phosphorylation of the sub-strate of Psk1, the ribosomal proteins Rps601 and Rps602 (S6 in human) (16,20). In return, S. pombe TORC1 regulates many nitrogen starvation-induced responses, including transcriptional programs (14), via the regulation of the transcription factors Gaf1 (21), Mei2 (22), or the SAGA (Spt-Ada-Gcn5 acetyltransferase) co-activator complex (23), as well as regulation of mitotic commitment and cell-size regulation (24,25). Nitrogen starvation is a strong cue for S. pombe cells to exit the cell cycle and enter the sexual development pathway. Consistent with the idea that nitrogen sufficiency is mediated via TORC1, cells mutated for TORC1 act as nitrogenstarved cells and enter the sexual development pathway under nutrient-rich conditions (12-14, 23, 26).
S. pombe TORC1 is activated by two distinct, highly conserved guanosine triphosphate GTPases: Gtr (Rag in higher eukaryotes) and Rhb1 (Rheb in higher eukaryotes). Similar to their mammalian counterparts, these GTPases and their direct regulators are localized to the vacuolar membrane (the yeast equivalent to the lysosome), where they are found in association with TORC1 (27, 28). Rhb1 is regulated by the Tsc1-Tsc2 complex that acts as a GTPase-activating protein (GAP) (29,30). Disruption of rhb1 ϩ results in a nitrogen starvation-like phenotype, similar to disruption of TORC1, whereas disruption of tsc1 ϩ or tsc2 ϩ results in phenotypes associated with a nitrogen sufficiency signal (29). In human cells, the TSC-Rheb module is a major hub for regulating TORC1 in response to cellular energy and growth factor signaling (reviewed in Ref. 18). The AMPK kinase that plays a central role in cellular energy homeostasis activates TSC1-TSC2 to inhibit mTORC1 via the control of Rheb (31). In fission yeast, the AMPK kinase, Ssp2, was shown to transmit a nitrogen stress signal (the shift from good to poor nitrogen source) to TORC1 via TSC-Rhb1, leading to regulation of mitotic commitment and cellular size (24). The Rag/Gtr proteins have been implicated in activation of TORC1 in response to specific amino acids by modulating their nucleotide binding status (reviewed in 17). The S. pombe Gtr1-Gtr2 complex appears to play both positive (27) and negative (28,32) roles in regulating TORC1 activity. However, Rag/Gtr1 proteins are not essential for regulating TORC1 in response to amino acids or nitrogen (27, 33, 34) (reviewed in Ref. 18).
Interestingly, rapamycin does not inhibit growth of WT S. pombe cells, indicating that under normal growth conditions, the essential function of TORC1 is resistant to rapamycin (35). However, rapamycin inhibits growth and induces nitrogen starvation responses when the general activity of TORC1 is reduced. Thus, for example, the growth of S. pombe cells becomes sensitive to rapamycin when the activity of tor2 ϩ is reduced (13) or in the presence of caffeine, which reduces the activity of TORC1 (36). Here we report the isolation of a loss of function mutation of caa1 ϩ , encoding the cytosolic isoform of aspartate aminotransferase, as a rapamycin-sensitive mutation. We demonstrate that Caa1 is required for full activation of TORC1. We discuss our findings in view of the roles of specific amino acids in TORC1 activation following nitrogen starvation.
Isolation of caa1-1 as a rapamycin-sensitive mutation
S. pombe TORC1 is essential for cell proliferation, yet rapamycin does not inhibit vegetative cell growth in this organism (35). Previous studies demonstrated that rapamycin can lead to growth arrest when TORC1 activity is compromised (20,36,37). To further explore TORC1 signaling and its response to rapamycin, we conducted a genetic screen to isolate mutant cells that are sensitive to rapamycin. Cells of the TA16 strain (leu1-32 ura4-D18 ade6-M216 h 90 ) were mutagenized using UV irradiation. Following their growth on nonselective plates, cells were replica plated to 0.1 g/ml of rapamycin plates. A screen of 50,000 colonies led to the isolation of six rapamycinsensitive (RS) mutants. Rapamycin sensitivity was confirmed by re-streaking each candidate on nutrient-rich (YE) or minimal (Edinburgh minimal medium) medium in the presence or absence of rapamycin. Here we describe the characterization of one of these mutants, RS42.
As demonstrated in Fig. 1A, the RS42 mutant strain was sensitive to rapamycin on nutrient-rich plates (YE), compared with the parental strain used for mutagenesis, TA16. When examined on minimal plates (EMM, ammonia used as a nitrogen source), RS42 failed to grow either in the presence or absence of rapamycin. The parental strain shows rapamycin sensitivity on EMM plates due to leucine auxotrophy and the inhibitory effect of rapamycin on leucine uptake, as was previously shown (38).
The RS42 strain was crossed with the auxotrophic, but the otherwise, WT strain TA2. Tetrad analysis of diploid cells resulting from the cross of TA2 and RS42 revealed a 2:2 segregation of small versus normal-sized colonies (Fig. 1B). The small colonies were rapamycin-sensitive, whereas the large colonies were rapamycin-resistant (data not shown), indicating that the rapamycin-sensitive phenotype depends on a single mutation and co-segregates with the small-sized colony phenotype.
To identify the mutation that confers rapamycin sensitivity in RS42, RS42 cells were transformed with the S. pombe Norbury pREP3X-cDNA library and plated onto minimal medium supplemented with 50 ng/ml of rapamycin. Following incubation of 5 days at 30°C, six of 28,000 transformants formed colonies on the selective conditions. DNA sequencing of plasmids recovered from these cells showed that they all contained the SPAC10F6.13c gene, a predicted aspartate aminotransferase that shows high similarity with AAT2, the S. cerevisiae cytosolic aspartate aminotransferase gene (YLR027C). We named the SPAC10F6.13c gene caa1 ϩ , for cytosolic aspartate aminotransferase. The aspartate aminotransferase enzyme catalyzes the reversible transfer of the amino group from L-aspartate to 2-oxoglutarate (␣-ketoglutarate) to form oxaloacetate and L-glutamate (Fig. 1C). It plays a critical role in the metabolism of both carbon and nitrogen in all organisms (39,40). DNA sequencing of the caa1 ϩ gene in the RS42 mutant revealed the presence of a mutation in nucleotide number 76 that results in a stop codon close to the N-terminal region of the gene; hereafter this mutation is referred to as caa1-1.
Activation of TORC1 by amino acids Mutant caa1 cells display auxotrophy to aspartate
S. cerevisiae strains lacking AAT2 are viable in rich medium, but require aspartate in minimal medium (41). Therefore, we examined if the inability of the caa1-1 mutant cells to grow on minimal medium (Fig. 1A) is rescued by addition of aspartate. To avoid any possible complications in using auxotrophic mutant cells, we constructed a prototrophic caa1-1 mutant strain (TA637). We found that addition of aspartate, but not glutamate, rescued the inability of prototrophic caa1-1 mutant cells to grow on minimal plates (Fig. 1D). Deletion of the entire ORF of caa1 ϩ from the genome by replacing it with KanMX (⌬caa1) resulted in cells that showed the expected slow growth, aspartate auxotrophy, and rapamycin sensitivity (Fig. 1E). Addition of aspartate alone (Fig. 1E), or aspartate and glutamate together (Fig. 1G), resulted in a similar phenotype and did not fully suppress the slow growth of caa1-1 or ⌬caa1 mutant cells. This may indicate that the caa1 ϩ gene contributes to cellular growth irrespective of its role in aspartate biosynthesis. In Figure 1. Isolation of caa1-1 as a rapamycin-sensitive mutation. A, RS42 (TA267) and the otherwise isogenic WT strain (TA16) were plated on rich (YE) or minimal (EMM) media in the presence or absence of 0.1 g/ml of rapamycin. RS42 cells are rapamycin-sensitive on YE plates and do not form isolated colonies on minimal medium. B, tetrad analysis of a genetic cross between RS42 and WT. RS42 carries a single mutation that leads to rapamycin sensitivity and small-sized colonies. C, the enzymatic reaction catalyzed by amino acid aspartate. D, the prototrophic caa1-1 (TA637) and a prototrophic WT (TA1) strains were plated on YE, EMM, and EMM supplemented with glutamate or aspartate. Only aspartate supports growth of caa1-1 on EMM. E, addition of aspartate to minimal medium does not suppress the rapamycin sensitivity of caa1 mutant cells. Prototrophic caa1-1 (TA267), ⌬caa1 (TA2867), and WT cells (TA2) were streaked on EMM or EMM containing aspartate with or without rapamycin. F, addition of aspartate to medium with a lower concentration of ammonium chloride or proline as a sole nitrogen source does not rescue the rapamycin sensitivity of the caa1 mutant cells. caa1-1 (TA637) and WT cells (TA1) were plated on EMM plates containing aspartate with standard or low concentrations of ammonium chloride (EMM, 5 g/liter and EMM, 3 g/liter of ammonia, respectively) and EMM containing proline as the sole nitrogen source. G, addition of aspartate and glutamate results in a similar effect to addition of aspartate alone. Prototrophic caa1-1 (TA267), ⌬caa1 (TA2867), and WT cells (TA2) were spotted onto the indicated plates following serial dilutions. All plates were incubated at 30°C for 4 days.
Activation of TORC1 by amino acids
addition, supplementation of the medium with aspartate did not rescue the rapamycin-sensitive phenotype of cells lacking caa1 ϩ (Fig. 1F).
We next considered the possibility that rapamycin inhibited the uptake of aspartate, thus preventing the suppression of the rapamycin sensitivity by external aspartate. This option was particularly attractive since we previously showed that rapamycin inhibited leucine uptake (38). The sensitivity of leucine auxotrophs to rapamycin is suppressed by reducing the amount or quality of the nitrogen source, which leads to up-regulation of general amino acid permeases and to an increase in amino acid uptake (38). Here, we found that addition of aspartate to a medium with lower concentration of ammonium chloride or to a medium in which ammonia was replaced with proline, a poor nitrogen source, did not rescue rapamycin sensitivity in prototrophic caa1-1 mutant cells (Fig. 1F). Thus, the inability of aspartate in the medium to suppress rapamycin sensitivity in caa1 mutant cells is unlikely to be due to a defect in aspartate uptake. Rather, loss of caa1 likely affects rapamycin sensitivity, at least partially, irrespective of aspartate biosynthesis.
Deletion of the mitochondrial aspartate aminotransferase, maa1 ؉ , does not result in rapamycin sensitivity
In addition to the cytosolic form of the enzyme, mitochondrial isoenzymes exist both in S. pombe and S. cerevisiae. The S. cerevisiae mitochondrial isoenzyme is called AAT1 (YKL106W). We named the S. pombe homologue, SPBC725.01, maa1 ϩ for mitochondrial aspartate aminotransferase. In both yeasts the cytosolic and mitochondrial forms share about 48% identity (39, 42). We examined whether maa1 ϩ is required for rapamycin resistance by disrupting the entire ORF of the gene (⌬maa1). Cells deleted for maa1 ϩ were plated on nutrient-rich plates with or without rapamycin, together with WT and caa1-1 cells as controls. In contrast to caa1-1, cells disrupted for maa1 ϩ are resistant to rapamycin ( Fig. 2A), indicating that maa1 ϩ is not required for rapamycin resistance. ⌬maa1 cells were unable to grow on EMM, however, addition of aspartate or glutamate did not rescue this lethality ( Fig. 2A). Thus, unlike caa1 ϩ , maa1 ϩ plays a different role in auxotrophy and rapamycin sensitivity.
The catalytic site of Caa1 is required for rapamycin resistance
Aspartate aminotransferases form a large protein subfamily with considerable diversity among its members. Yet, within the active catalytic site, 13 absolutely conserved residues have been identified (40,43). One of these residues is a conserved lysine that resides at the bottom of the active site that forms a Schiff base with the pyridoxal phosphate coenzyme. To test whether the catalytic activity of Caa1 is required for rapamycin resistance, we mutated the conserved lysine residue within the pyridoxal phosphate-binding site of Caa1, lysine 255, into arginine (Caa1 K255R ). Cells carrying the caa1-K255R mutation exhibited a slow growth phenotype on nutrient-rich medium, failed to form isolated colonies on minimal medium, and were sensitive to rapamycin (Fig. 2B). The protein level of Caa1 K255R is comparable with that of Caa1 (Fig. 2C). Thus, the enzymatic activity of Caa1 is required for rapid growth, aspartate biosynthesis, and rapamycin resistance.
Rapamycin sensitivity in caa1 mutant cells is FKBP12-dependent and results from inhibition of TORC1 and not TORC2
Because rapamycin inhibits TOR proteins only when it is found in a complex with the FKBP12 homolog (3), deletion of fkh1 ϩ , the S. pombe homolog of FKBP12, is expected to render cells resistant to rapamycin. Thus, for example, the rapamycin sensitivity of cells auxotrophic to leucine is suppressed by deletion of fkh1 ϩ (38). To examine whether the rapamycin sensitivity of caa1-1 mutant cells depends on the formation of the rapamycin-FKBP12 complex, we deleted the fkh1 ϩ gene in caa1-1 strain. Cells were plated on nutrient-rich medium with or without rapamycin, together with WT and caa1-1 strains, as controls. Deletion of fkh1 ϩ completely rescued the rapamycin sensitivity of caa1-1 mutant cells (Fig. 3A), arguing that sensitivity to rapamycin in caa1 mutant cells is TOR-dependent and the result of inhibition of Tor1, Tor2, or both.
A conserved serine residue within the FRB domain of either Tor1 or Tor2 is highly critical for the binding of the FKBP12rapamycin complex (44). This serine is found at positions 1834 or 1837 in Tor1 or Tor2, respectively. Mutations at the conserved serine residue confer rapamycin resistance by abolishing the binding of the FKBP12-rapamycin complex to the FRB domain (44, 45). We used the rapamycin-binding defective alleles tor1S1834E (tor1SE) or tor2S1837E (tor2SE) to uncover the target of rapamycin in caa1 mutant cells. We constructed ⌬caa1 tor1SE and ⌬caa1 tor2SE double mutant strains and plated the cells on nutrient-rich medium with or without rapa-
Activation of TORC1 by amino acids
mycin. The mutation at the FRB domain in Tor2, but not the equivalent mutation in Tor1, rescued the inability of ⌬caa1 mutant cells to grow in the presence of rapamycin (Fig. 3B). These findings demonstrate that the target for rapamycin inhibition in cells lacking the caa1 ϩ gene is Tor2 (TORC1) and not Tor1 (TORC2).
Loss of function of Caa1 results in a hyper-mating phenotype
Microscopic examination of caa1-1 or ⌬caa1 mutant cells revealed a hyper-mating phenotype, as cells underwent mating and sporulation in nutrient-rich medium, conditions that normally suppress sexual development. Thus, whereas WT cells of opposite mating types do not enter the sexual development pathway on nutrient-rich medium, ⌬caa1 mutant cells exhibited about 6 -7% mating efficiency under such conditions (see Fig. 3C).
The activity of two central signaling pathways, protein kinase A (PKA) and TORC1, is known to suppress sexual development on rich medium. PKA is a signaling pathway that couples carbon availability to growth and proliferation and inhibits sexual differentiation (46). Accordingly, deletion of pka1 ϩ , the catalytic subunit of PKA, results in initiation of sexual development under nutrient-rich conditions (47,48). TORC1 couples nitrogen sufficiency to growth control and inactivation of TORC1 using conditional mutations of the catalytic subunit Tor2 or the auxiliary subunit Mip1 (Raptor in human cells) result in hypermating under rich medium conditions (12,14,26).
We used the hyper-mating defect to examine genetic interactions between ⌬caa1 and mutations in the TORC1 or PKA pathways. We created the double mutant strain ⌬caa1 ⌬pka1 and microscopically examined the mating efficiency under rich medium conditions. We observed that the double mutant ⌬caa1 ⌬pka1 exhibited mating efficiency of about 30%, which is the sum of the mating efficiencies of each of the single mutations (ϳ7 and 22%; Fig. 3C). This finding indicates an additive effect between ⌬caa1 and ⌬pka1 and suggests that caa1 ϩ and pka1 ϩ act on two parallel signaling pathways that influence sexual development.
Next, we combined the ⌬caa1 with the tor2 temperaturesensitive allele (tor2-51). The tor2-51 mutation results in a partial activity of tor2 ϩ at the permissive temperature that leads to about 19% mating efficiency on rich medium (Fig. 3C). We observed that the mating efficiency of the double mutant cell tor2-51 ⌬caa1 was similar to that of single ⌬caa1 mutant cells (about 6%), indicating that the ⌬caa1 mutation is epistatic to tor2-ts and suggesting that tor2 ϩ and caa1 ϩ act in the same signaling pathway.
Gtr1 and Gtr2, the S. pombe homologues of the mammalian Rag GTPases, form a heterodimer that functions upstream of TORC1 and mediates amino acid signaling. Cells disrupted for either gtr1 ϩ or gtr2 ϩ also show a hyper-mating phenotype on rich medium (27), presumably due to lower activity of Tor2 (TORC1). We detected about 3-4% mating efficiency of ⌬gtr1 cells on nutrient-rich medium (Fig. 3C). The mating efficiencies of the double mutant cells ⌬caa1 ⌬gtr1 were similar to the mating efficiency of single ⌬gtr1 mutant cells, supporting the possibility that Caa1 and TORC1 reside on that same signaling pathway. Taken together, our genetic interaction analyses support a model in which Caa1 acts in parallel with the PKA pathway, but resides on the same pathway as TORC1.
Activation of TORC1 by amino acids
ylation in ⌬caa1 mutant cells was dramatically reduced in rich medium (Fig. 4A, ⌬caa1), indicating that Caa1 is required for full activation of TORC1. Addition of aspartate to no-nitrogen medium did not significantly improve TORC1 activity in ⌬caa1 cells (Fig. 4A, ⌬caa1). Thus, Caa1 is required for TORC1 activity at least partially irrespective of its role in aspartate biosyn- The extracted proteins were analyzed by immunoblotting with anti-phospho S6 kinase (Thr-389) antibody (Psk1-P) or with anti-Psk1 antibody (Psk1). When anti-Psk1 antibodies were used, the upper band corresponds to the phosphorylated form of Psk1. Anti-actin antibodies were used for loading control. The intensities of the Psk1-P and Psk1 (phosphorylated and nonphosphorylated forms) bands were measured for three independent biological repeats. Only one representative blot is shown for each experiment. The numbers represent the ratio of Psk1-P to total Psk1, normalized to the same ratio in WT cells in rich (YE) medium, Ϯ indicates S.D. B and C, prototrophic ⌬caa1 mutant cells (B) or caa1-K255R cells were grown in YE to logarithmic phase and then shifted for 60 min into EMM-N (-N) or EMM-N containing the indicated amino acids (aspartate, glutamine, asparagine, arginine, or serine) at the final concentration of 5 mM. The ratio between Psk1-P and Psk1 was calculated as described above. D, Western blot analysis of Ssp2-T189 phosphorylation in WT and ⌬caa1 mutant cells. Cells were grown to mid-log phase and then shifted to minimal media containing glucose (2%) or with no glucose for 1 h.
Activation of TORC1 by amino acids
thesis, consistent with the inability of aspartate to suppress rapamycin sensitivity in caa1 mutant cells (Fig. 1E).
In parallel to using the anti-Psk1-P antibody, we also raised anti-Psk1 antibodies to detect the total level of Psk1 (see "Experimental procedures"). The anti-Psk1 antibody detected two major bands, an upper band corresponding to the phosphorylated form of the protein, which was predominant in rich medium and a lower band, corresponding to the unphosphorylated form, which was predominant in no-nitrogen medium (see Psk1, Fig. 4A). In general, the results obtained using the anti-Psk1 antibody correlated well with the results obtained with the anti-Psk1-P antibody and supported a reduced level of TORC1 activity in caa1 mutant cells. However, repeated experiments showed the anti-Psk1-P antibody produced more reliable and consistent results for assessing the state of phosphorylation of Psk1. This is likely due to limitations of the band shift technique that may reflect multiple modifications. The anti-Psk1 blots were used to quantify the amount of phosphorylated Psk1 compared with total Psk1 (Fig. 4A).
Shifting cells from rich medium to media containing aspartate, glutamine, arginine, or serine showed a moderate reduction in TORC1 activity in WT cells, as judged by the level of Psk1-P (Fig. 4B). In contrast, ⌬caa1 cells showed relatively low activity of TORC1 in rich medium, which remained low upon shift to media containing the above amino acids as nitrogen source (Fig. 4B). Similar results were obtained for Psk1-P in catalytic subunit caa1-K225 mutant cells (Fig. 4C). Taken together, our results demonstrate that without Caa1, or in catalytically compromised caa1 mutant cells, TORC1 activity is compromised.
Loss of Caa1 does not induce AMPK (Ssp2) activation
Branched-chain aminotransferases (BCATs) have recently been demonstrated to be required for full activation of TORC1 in S. cerevisiae (49). BCATs affect TORC1 activity by a mechanism involving the TCA cycle flux, glutamine levels, and the AMPK kinase (49). The S. pombe AMPK homologue, Ssp2, has been shown to negatively regulate TORC1 in response to nitrogen stress (24). To determine whether loss of caa1 ϩ leads to activation of AMPK, we followed the level of phosphorylation of Ssp1 at Thr-189, a well-known readout for activation of AMPK (24). However, we did not observe activation of Ssp2 in caa1 mutant cells (Fig. 4D). Thus, Caa1 does not appear to down-regulate TORC1 via Ssp2.
A partial correlation between the ability of specific amino acids to re-activate TORC1 and their ability to support high growth rate
It was previously shown that glutamine was efficient in inducing re-phosphorylation of Psk1 following nitrogen starvation, whereas glutamate, proline, or leucine failed to reactivate TORC1 (16). Because caa1 mutant cells are rapamycin-sensitive, as well as auxotrophic to aspartate, we were interested in examining the role of aspartate, relative to other amino acids, in the activation of TORC1.
WT cells were grown to mid-log phase before being subjected to nitrogen starvation for 1 h, followed by addition of each amino acid at the final concentrations of either 1 or 5 mM for 20 min (Fig. 5A). We found that exposure of nitrogenstarved cultures to 5 mM glutamine, asparagine, arginine, aspartate, or serine induced re-phosphorylation of Psk1. No, or very minor activation of TORC1 was observed at the final concentration of 1 mM of any of the above amino acids tested, or when the nitrogen-starved cultures were exposed to glutamate, leucine, methionine, valine, isoleucine, or threonine at 1 or 5 mM. Our findings showing a positive role for glutamine in the activation of TORC1, whereas no such effect is observed for glutamate or leucine, are consistent with previous reports (16).
Time course experiments, which examined the phosphorylation of Psk1 following addition of amino acids for 5 or 20 min (at the concentration of 5 mM) demonstrated that glutamine, asparagine,arginine,andaspartatestronglyreactivatedthephosphorylation of Psk1 as early as 5 min after exposure, whereas serine was able to reactivate TORC1 only following 20 min of exposure (Fig. 5B). Again, glutamate, methionine, leucine, valine, isoleucine, or threonine did not support re-activation of TORC1 (Fig. 5B). Based on these findings, we divided the different amino acids into strong (glutamine, asparagine, arginine, or aspartate), intermediate (serine), and poor (glutamate, methionine, leucine, valine, isoleucine, and threonine) activators of TORC1 (Fig. 5C).
Interestingly, in S. cerevisiae glutamine, asparagine, and arginine were also classified as the most efficient amino acids for stimulating rapid and sustained TORC1 activity (34). It was suggested that there is a correlation between the ability of these amino acids to act as preferable nitrogen sources and their ability to act as good activators of TORC1 (34). In S. pombe, glutamic acid is often used as a proxy for a "good" nitrogen source, whereas proline is used as a poor nitrogen source (50). Yet a systematic examination of the different amino acids for their ability to support growth is lacking. We determined the growth rates of S. pombe cells on media containing each of the above amino acids as a sole nitrogen source (Figs. 6 and 7). We found that glutamine best supported S. pombe growth, whereas ammonia, arginine, asparagine, and aspartate closely followed. Glutamate supported an intermediate-to-fast growth rate, whereas the remaining amino acids could poorly support growth. Our data thus demonstrate a partial correlation between the quality of the nitrogen source and its ability to reactivate TORC1: glutamine, asparagine, arginine, and aspartate support fast growth and are good activators of TORC1; glutamate is a relatively good nitrogen source, but a poor activator of TORC1; serine on the other hand, poorly supports growth, but is an intermediate activator of TORC1. Notably, aspartate is a good nitrogen source and a good activator of TORC1 in S. pombe (Fig. 6), but not in S. cerevisiae (34). This may reflect metabolic differences between the two yeasts, which are yet to be determined.
Re-activation of TORC1 by asparagine, arginine, aspartate, or serine requires the activity of glutamine synthetase
It was demonstrated in S. cerevisiae and later in S. pombe that methionine sulfoximine (MSX), a specific inhibitor of glutamine synthetase, diminishes TORC1 activity in ammoniabased medium (16, 34, 51). Inhibition of TORC1 activity by MSX was efficiently suppressed by addition of glutamine to the Activation of TORC1 by amino acids medium, leading to the suggestion that TORC1 activity in yeast responds to glutamine levels (16,34). As previously shown (16), MSX did not affect reactivation of TORC1 when glutamine was added to the medium (Fig. 7). In contrast, MSX inhibited the reactivation of TORC1 by asparagine, arginine, aspartate, or serine, suggesting that these amino acids must be converted to glutamine to induce TORC1 activity.
Discussion
The growth of S. pombe cells is resistant to inhibition by rapamycin, yet reduction of TORC1 signaling has been shown to lead to rapamycin sensitivity (13,36,52). Thus, genetic screens to identify rapamycin-sensitive mutant cells can help identify genes that are involved in TORC1 signaling. In this study, we describe the isolation and characterization of a rapamycin-sensitive mutant carrying a loss of function mutation in caa1 ϩ , the cytosolic isoform of the aspartate aminotransferase gene. Disruption of caa1 ϩ leads to slow growth, hyper-mating, and auxotrophy to aspartate. The rapamycin sensitivity and hyper-mating phenotypes observed in caa1 mutant cells are reminiscent of the phenotype of loss of function of S. pombe TORC1 (12,13,26). Consistently, our results indicate that under rich nitrogen conditions, there is a decrease in TORC1 activity in caa1 mutant cells compared with WT cells.
The mechanism by which aspartate aminotransferase regulates TORC1 in S. pombe cells is yet to be determined. However, cells carrying a catalytic site mutant, caa1-K255R, show reduced levels of TORC1 activity and are sensitive to rapamycin, similar to complete loss of caa1 ϩ . Thus, it appears that the loss of TORC1 activity depends on Caa1 catalytic activity.
BCATs are required for full activation of TORC1 in S. cerevisiae, via multiple modes of activation, including activation of the EGO-Gtr complex (equivalent to the mammalian Ragula-tor-RAG complex) (49). The equivalents in S. pombe, the Gtr1-Gtr2 GTPases, Ragulator, and GATOR1-like complexes have been implicated in TORC1 regulation in response to amino acids (27,28,32,45,53). We have found a epistatic relationship between ⌬gtr1 and ⌬caa1 with respect to induction of sexual development under rich conditions (Fig. 3C), suggesting that Caa1 and Gtr1 may affect sexual development via the same mechanism. In S. pombe, the Ragulator-Gtr1-Gtr2 pathway has a dual role in regulating TORC1, as it positively regulates TORC1, but is also required to attenuate its activity (27,28,32,53). The relationship between Caa1 activity and the Gtr1-Gtr2 GTPases is thus expected to be complex and is beyond the scope of this work.
We classified 11 amino acids based on their ability to re-activate TORC1-dependent Psk1 phosphorylation. We identified glutamine, asparagine, arginine, and aspartate as amino acids that strongly re-activate TORC1 following nitrogen starvation. Serine exhibited an intermediate capacity to re-activate TORC1, whereas glutamate, methionine, leucine, valine, isoleucine, and threonine showed very poor or no activation of TORC1. Interestingly, glutamine, asparagine, and arginine have also been identified as the amino acids that best supported induction of S. cerevisiae TORC1 activity (34). Stracka et al. (34) suggested that the ability of an amino acid to act as a good activator of TORC1 correlated with its property as a preferable nitrogen source. The identification of glutamine, asparagine, and arginine as good inducers of TORC1 and good nitrogen sources is in agreement with this suggestion. Also in agreement with studies in S. cerevisiae (34), we found that the inhibitor of glutamine synthetase (MSX) inhibits the ability of arginine, asparagine, aspartate, or serine to induce TORC1 activity, suggesting that for ammonia to be sensed, it needs first to be assimilated into glutamine. Still, our findings indicate several differences in the two yeasts: glutamate is a relatively good nitrogen source (Fig. 6), yet it does not induce TORC1 activity. On the other hand, serine is a good activator of S. pombe TORC1, but acts as a poor nitrogen source for growth. In addition, aspartate acts as a good activator of TORC1, but not in S. cerevisae. To resolve the above inconsistencies and differences in the two yeasts, the mechanism by which glutamine induces TORC1 activity needs to be further explored. Importantly, addition of glutamine did not induce TORC1 activity in caa1 mutant cells. Therefore, for glutamine to be sensed by TORC1, cytosolic aspartate aminotransferase activity should be intact.
Yeast strains and growth conditions
S. pombe strains are described in Table 1. All the media in this study are derived from the S. pombe media as described in Ref. 50. Complex medium (YE) contains 3% glucose, 0.5% yeast Figure 5. Amino acids stimulate TORC1 activity. TORC1 activity was assessed in WT cells using anti-Psk1-P and anti-Psk1 antibodies, as described in the legend to Fig. 4. A, reactivation of TORC1 in response to addition of low or high concentrations of various amino acids. Cells were grown in minimal medium (EMM) to logarithmic phase, shifted to medium without nitrogen source (-N) for 1 h, followed by addition of different amino acids at the indicated concentrations (1 or 5 mM for each amino acid) for 20 min. The intensities of the Psk1-P and Psk1 bands were measured. The numbers represent the ratio of Psk1-P to total Psk1, normalized to the same ratio in WT cells in EMM. B, reactivation of TORC1 in response to various amino acids as a function of time. Cells were grown in minimal medium (EMM) to logarithmic phase, shifted to medium without nitrogen source (-N) for 1 h, followed by addition of different amino acids at 5 mM for 5 or 20 min. The ratio between Psk1-P and Psk1 was calculated as described above. C, a table summarizing the efficiency by which single amino acids can re-activate TORC1.
Activation of TORC1 by amino acids
extract, 150 mg/liter of adenine, and 75 mg/liters of uracil. The minimal medium used (EMM) contains 2% glucose and 5 g/liter of NH 4 Cl; salts, minerals, and vitamins as detailed in Ref. 54. Minimal media with either no nitrogen EMM-N (-N) or limited nitrogen source (low EMM) are minimal medium as described above with either no ammonia added or 3 g/liter of ammonia. For proline plates (Pro), ammonium chloride of the EMM was replaced with 10 mM proline. For the growth of auxotrophic strains, minimal media plates were supplemented to a final concentration of 75 mg/liter of histidine, 75 mg/liter of adenine, 80 mg/liter of uracil, and 225 mg/liter of leucine. Aspartate or glutamate was added at a final concentration of 100 mg/liter. Rapamycin (Sigma) was used at a final concentration of 100 ng/ml.
Construction of gene disruptions and site-directed mutagenesis
Gene deletions were carried out by homologous recombination. PCR was performed using the pFA6a-kanMX6 plasmid as a template (7) with primers 1201 and 1202 for caa1 deletion, and 1205 and 1206 for maa1 deletion. The PCR fragment was purified and transformed into WT strain. Correct integration at the indicated genes was validated by PCR. Site-directed mutagenesis of the conserved lysine residue in Caa1 into arginine (K255R) was carried out by PCR amplification with Phire DNA polymerase (Thermo Fisher Scientific), using the caa1myc-KanMX cassette as template and complementary primers that contain the desired mutation. The caa1-K225R-myc-KanMX fragment was amplified and integrated into WT 972 h Ϫ strain and G418-resistant colonies were selected. The presence of the desired mutations was confirmed by DNA sequencing.
Mating efficiency assays
Mating efficiency was determined as follows: cells from a fresh patch of homothallic strains were grown to logarithmic phase to a density of ϳ5 ϫ 10 6 -1 ϫ 10 7 cells/ml in rich medium. Then, the cultures were spotted on solid rich media at a density Prototrophic WT cells (TA0001) were grown in minimal medium (EMM) to logarithmic phase, shifted to medium without nitrogen source (-N) for 1 h, in the presence or absence of the glutamine synthase inhibitor MSX, followed by addition of different amino acids (5 mM) for 20 min. The numbers represent the ratio of Psk1-P to total Psk1, normalized to the same ratio in WT cells in EMM. Activation of TORC1 by amino acids of 5 ϫ 10 5 cells/ml and incubated at 30°C for three overnights. After the incubation, a toothpick was used to pick some of the cells from the center of each patch, and the cells were examined microscopically. The percentage of mating was calculated by dividing the number of zygotes, asci, and free spores by the number of total cells. One zygote or one ascus was counted as two cells, and one spore was counted as a half-cell.
Protein extraction
25 ml of logarithmic growing cells were harvested and re-suspended in 200 l of 20% trichloroacetic acid (TCA). After addition of the same volume of glass beads, cells were broken by vortexing for 15 min in a cold room (4°C). Glass beads were washed twice with 200 l of 5% TCA and the resulting extract was centrifuged for 10 min in 13,000 rpm at 4°C. The pellet was re-suspended in 200 l of 2ϫ Sample Buffer containing 4% SDS, 20% glycerol, 0.12% EDTA, 2.4% Tris-HCl, pH 6.8, 34 mg/ml of dithiothreitol (DTT), and a small drop of bromphenol blue. 100 l of 1 M Tris base was added to each sample. The samples were boiled for 3 min and centrifuged before loading.
Western blotting
Protein extracts were resolved on SDS-PAGE using 10 -12% acrylamide gels. Gels were run using TG-SDS running buffer (Bio-lab) at 50 volts for 15 min and then at 150 volts for an additional hour. The proteins were transferred in TG buffer (Bio-lab) to nitrocellulose membranes (Amersham Biosciences) using 400 mAmp for 90 min. Membranes were treated with blocking solution containing 5% milk in TBST buffer (Biolab) for 1 h. Psk1 phosphorylation was detected using antiphospho-p70S6 kinase Thr-389 (Cell Signaling, catalogue number 9206), total Psk1 protein was detected by antibodies raised against the Psk1 phosphopeptide NCEFLSNNAVSNH (Bio Basic Canada Inc.) and actin was detected using anti-actin antibody (MP-Biomedicals, catalogue number 691001). Phosphorylation of Ssp2 was detected using anti-phospho-AMPK-␣-Thr-172 (Cell Signaling, catalogue number 2535). Detection was carried out using the ECL SuperSignal detection system (Thermo Scientific).
Author contributions-S. R., A. C., and R. W. investigation; S. R. and A. C. methodology; S. R. and R. W. writing-original draft; A. C. and R. W. conceptualization; M. K. and R. W. writing-review and editing; R. W. supervision; R. W. funding acquisition. | 9,314.2 | 2019-10-22T00:00:00.000 | [
"Biology"
] |
Optimal Signal Timing Method of Intersections Based on Bus Priority
According to the traffic problems such as traffic congestion and buses moving slowly, the paper based on the idea of bus priority to introduced the per capita minimum delay of signal control method. In this paper, the intersection of Huaguang Road and Liuquan Road in Zibo Zhangdian was used as an example to propose a bus signal timing plan. By considering the impact of traffic delays and designing the phase of the intersection. It can reduce the per capita delay and improve service levels. Using Trans Modeler to simulate the timing scheme, the results showed that the optimized scheme can effectively reduce the per capita delay and ensure priority access for buses.
Introduction
Along with the rapid development of urbanization, the structure of road network is more and more complex. As the bottleneck of traffic development, intersection delay becomes the key factor that affects the efficiency of urban road traffic. The survey found that a reasonable signal control scheme could help ease congestion on the roads. At present, the traditional intersection timing method takes the minimum of vehicle delay as the target to determine the signal period length of the intersection, and the green time ratio of the intersection is divided according to the phase traffic flow ratio. This kind of timing method treats the bus as the same as other social vehicles, does not consider the size of the passenger volume of all kinds of vehicles, and loses the social fairness [1]. If the right of bus priority is not guaranteed, the traffic delay at the intersection will increase, which will hardly reflect the efficient, safe and fast performance of public transport [2].
The signal control based on bus priority gives the bus the right of priority at the intersection. In general, bus priority control can be divided into two categories [3]: reducing bus delay and reducing per capita delay according to the different methods and strategies. This article focuses on how to reduce the per capita delay at intersections. Considering the large number of buses carrying passengers, the green time ratio of each phase of intersections is reasonably allocated according to the proportions of traffic volume, and priority is given to bus passing. This method is used to ensure the distribution of the phase flow of the large proportion of the bus to the larger green time ratio. This signal timing method can ensure bus priority, reduce vehicle delays at intersections and improve the operational efficiency of road traffic. By constructing the Trans Modeler micro simulation platform to simulate the operation of vehicles, comparing the traffic delays before and after signal timing optimization at intersections, providing the virtual environment and technical support for verifying the feasibility of the scheme, and providing decision-making basis for traffic managers [4][5].
Traditional Signal Timing Scheme
The main design parameters of intersection signal timing are signal period and green time ratio. The traditional signal timing scheme in the phase design more use of British scholar Webster's method to determine the parameters and calculate the vehicle delay, that is, under the specified conditions, the calculated total vehicle delay is the smallest to get the optimal solution [6][7]. The timing scheme does not distinguish between the bus and the social vehicles. The length of the signal period and the time of the green light depend on the traffic of the key flow direction. If the traffic flow and passenger flow are the largest in a certain phase, then this method is feasible. But in many cases, the maximum flow of traffic does not necessarily have the largest passenger flow. If there are more buses in the phase flow direction with a small traffic volume, it is not reasonable to use this method for traffic signal timing. It does not consider the factors of per capita delay, so it will be difficult to play the efficient and fast characteristics of public transportation.
In order to calculate the vehicle delay of the intersection, it is necessary to determine the parameters of the signal period and the green time ratio. Based on research and experimentation, the optimal signal period that minimizes the total delay of vehicles passing an intersection is given by: Where: c=Intersection signal period; L = The total loss time of the signal period; Y=The total traffic flow ratio across the intersection; =Intersection of each phase flow; v=The actual traffic flow of the signal phase; S=Saturation flow rate of the phase; n=The number of key phases; The green time ratio of each phase is proportional to the ratio of the flow rate of the phase, and the formula for the green time ratio is as follows: Where: =Phase green time ratio at intersection; Obtain parking delays based on Webster formula: Where: =Average delay per car; q=Arrival rate of the vehicle; Q=Traffic in each phase
Intersection Phase Design Based on the Smallest Per Capita Delay
Bus signal priority means that without affecting the operation of the entire intersection or the main line of vehicles on the premise, using reasonable changes in phase order, phase time, insert special phase and other methods to ensure bus rapid transit through the intersection. According to the different research goals, the bus signal priority is divided into two kinds, one is to reduce bus delay as the target, and one to reduce the per capita delay [8][9].
The bus priority timing method based on the minimum per capita delay is aimed at reducing the overall delay of the traffic in all directions. In the process of optimization of signal timing by adjusting the signal phase and the phase of the time to give the bus priority [10]. In this paper, the traffic flow and the volume of passenger flow at the intersection are compared. On this basis, the maximum volume of passenger flow direction is given priority. By sorting the passenger volume, we can distinguish buses with higher carrying capacity than other vehicles. In order to avoid the queuing of vehicles at intersection, we should also consider the direction of traffic with larger traffic volume. In addition, the right turn vehicle does not conflict with other vehicles, and the effect of the right turn vehicle can not be considered when the phase is designed [11].
In this paper, the intersection of Huaguang Road and Liuquan Road in Zhangdian District of Zibo is taken as the research object. Based on the bus priority concept, the phase of intersection is adjusted according to the size of passenger flow to explore the appropriate signal timing scheme. In order to explain the phase design process, take Huaguang Road and Liuquan Road intersection as an example to make a detailed analysis, as shown in Figure 1
General Phase Design Scheme
Considering the basic principle of phase design and the condition of the traffic flow ratio, the phase design scheme 1 can be obtained, as shown in Figure 2.
Phase Design Scheme Based on Bus Priority
Step 1: First of all, select the flow direction than the maximum passenger flow 8, without considering the impact of non-motor vehicles and pedestrians, the left-turn and straight-ahead flows that do not conflict with this flow direction are 2, 4, 7. So the combination that can be placed in the same phase as 8 is: 8 and 2, 8 and 4, 8 and 7. Taking into account the phase setting of the intersection should minimize the delay of social vehicles, select the combination of the largest traffic flow: 8 and 2.
Step 2: Among the remaining flows, the traffic flowing to 5 is the largest, and the flows that do not conflict with this flow are 1, 4 and 11. The flow of traffic to flow 11 is the largest, choosing combinations 5 and 11 as the second phase.
Step 3: The flow direction that does not conflict with the flow direction 1 is 2, 5, 7. Where 2, 5 already belongs to the other phases, so that phase 3 is 1 and 7. Finally get the phase 4: 4 and 10. Phase design optimization scheme 2 shown in Figure. 3.
Basic Data
Taking the intersection of Huaguang Road and Liuquan Road in Zhangdian District of Zibo City as the research object, the flow direction at the intersection is shown in Figure 1. In order to simplify the calculation, the effects of pedestrians and non-motor vehicles are not considered. Statistical data on the number of social vehicles and the number of buses at the intersection are shown in Table 1. Among them, the conversion coefficient of the bus is 2, the average carrying coefficient of the car is 2.5 people / vehicle, the carrying coefficient of the bus is 30 people / vehicle, the single lane saturated traffic is 1800pcu/h.
Optimization Results Analysis
According to the traffic survey and Webster's formula, the parameters of vehicle intersection delay and average delay at intersection are calculated, and the calculation results of delay are shown in Table 2. It is found through comparison that the timing scheme after phase change can not only reduce the average delay of vehicles at intersections, but also reduce the Per capita delay, which can ensure the transport benefits of buses and help to improve the traffic capacity and service level of intersections.
Trans Modeler Simulation Verification
This article takes the intersection of Huaguang Road and Liuquan Road in Zhangdian District of Zibo City as the object of simulation and carries out two simulations. The first simulated vehicle simulation did not use the bus priority signal timing scheme. The second simulation of the vehicle simulation, by changing the phase, added at the intersection of bus priority signal timing scheme. By comparing the results of two simulation outputs, such as the average delay and the average parking number to evaluate the feasibility of the optimization scheme [12][13].
The intersection model built with Trans Modeler simulation software is shown in Figure 4. The simulation results before and after the implementation of the bus signal optimization scheme are compared as follows: By comparing and analyzing the simulation data, it is concluded that after the implementation of the bus priority control scheme, the average delay at the intersection is significantly reduced, and the service level at the intersection is upgraded from the original E level to D level. The simulation results further verify the feasibility of the bus priority signal timing scheme. From the aspect of reducing per capita delay, the operation efficiency of the bus is improved, and the priority of public traffic is ensured.
Conclusion
The traditional signal timing method only considers the traffic flow ratio when designing the timing scheme and does not consider the passenger flow ratio. Although the vehicle delay can be minimized, the average per capita delay can not be guaranteed. In this paper, the traffic benefit indexes before and after phase change of bus priority signal timing scheme are analyzed through examples. The simulation of vehicle running status with TransModeler micro-simulation software is carried out to verify the feasibility of phase optimization scheme. The simulation results show that the scheme with the minimum delay per capita is not only able to guarantee the minimum per capita delay, but also reduce the vehicle delay and improve the service level at the intersection. | 2,623.6 | 2018-04-04T00:00:00.000 | [
"Computer Science"
] |
Approximating the cumulant generating function of triangles in the Erd\"os-R\'enyi random graph
We study the pressure of the"edge-triangle model", which is equivalent to the cumulant generating function of triangles in the Erd\"os-R\'enyi random graph. By analyzing finite graphs of increasing volume, as well as the graphon variational problem in the infinite volume limit, we locate a curve in the parameter space where a one-step replica symmetry breaking transition occurs. Sampling a large graph in the broken symmetry phase is well described by a graphon with a structure very close to the one of an equi-bipartite graph.
Introduction
Sampling random graphs with prescribed macroscopic properties (such as a given density of certain subgraphs) received considerable attention in recent years. From a statistical physics perspective, one can think of two procedures: • the micro-canonical ensemble, where the sampling is performed with a uniform distribution over the set of all graphs that satisfy the macroscopic constraint exactly; • the canonical ensemble, where the sampling is done with respect to a larger set of graphs that satisfies the macroscopic constraint only on average.
We shall discuss here the simplest non trivial case, i.e. the constraint is on the number of edges and the number of triangles in the graph. In the micro-canonical ensemble these numbers are prescribed exactly. The canonical ensemble is instead provided by the so-called edge-triangle model, which is defined by the Boltzamnn-Gibbs distribution in which one tunes the average density of edges and the average density of triangles by varying the corresponding conjugate parameters. The edge-triangle model is in turn the simplest example of the more general exponential random graph class of models, in which one introduces several parameters to control the density of an arbitrary set of subgraphs. Whereas the equivalence in the thermodynamic limit of micro-canonical and canonical ensemble is true for several physical systems of interest (often the system is then studied in the canonical ensemble, that is usually more analytically tractable than the micro-canonical one), for random graphs it has been shown that such equivalence can not be taken for granted. In particular [18] identified a region of values for the densities of edges and triangles where there is ensemble inequivalence, as measured by a positive relative entropy between the micro-canonical and canonical measures. In this paper we address the following problem: How does a large graph look like when it is sampled from the edge-triangle model (i.e. imposing some given average values for the densities of edges and triangle )? Does the sampling from the edge-triangle model give the same result of the sampling with respect to the microcanonical ensemble (i.e. imposing some given exact values of edge and triangle densities)?
The edge-triangle model and the Erdös-Rényi random graph
To define the setting, let us consider a graph with n vertices, that we identify with the elements of the set [n] = {1, 2, 3, . . . , n}. We shall describe the graph using its adjacency matrix x = (x i,j ) i,j∈ [n] , defined as follows: the entry x i,j = 1 if the edge connecting vertex i with vertex j is present, and x i,j = 0 otherwise. Since the graphs considered in this paper are undirected and without loops, the adjacency matrices will be always symmetric, with 0 or 1 entries and zeros on the diagonal. We will denote by X n the set of adjacency matrices of size n, and we also define for later use X = ∪ n≥2 X n the set of adjacency matrices of all sizes. The number of edges and triangles in a graph represented by a matrix x ∈ X n is given by The previous quantities result in random variables if the graph is a random graph, i.e. if it is sampled from the set of 2 ( n 2 ) undirected simple graphs with n vertices according to some probability distribution. The simplest possible distribution is the so-called Erdös-Rényi model, where each pair of vertices is connected with probability p > 0, independently of the other pairs. Thus, in the Erdös-Rényi case the entries of the adjacency matrix, x = (x i,j ) i,j∈ [n] , form a set of independent identically distributed (i.i.d.) Bernoulli variables with P(x i,j = 1) = p. In spite of the simple probabilistic set-up the large deviation principle for the Erdös-Rényi graph is far from simple and has been developed only in recent years [14,16,13,25,12,32,17]. In particular, it has been found that the large deviation function may be non-convex.
The exponential random graph model is devised to enhance or decrease the probability of specific geometric structures in the random graph. Here we define the edge-triangle model that involves only triangles and edges [26]. Let β 1 , β 2 ∈ R, then the probability of a graph with adjacency matrix x ∈ X n in the edge-triangle model is given by: where Z n (β 1 , β 2 ) is the partition function, i.e. the normalizing factor The factors 6 and 2 are conventional in the definition and account for the permutations of 3 vertices of a triangle and the 2 vertices of an edge. The Erdös-Rényi model with paramemeter 0 < p < 1 is embedded in the edge-triangle model, since its distribution: can be obtained from (2) by setting β 1 = h p /2 and β 2 = 0. If β 2 > 0 the probability of finding triangles is enhanced with respect to the Erdös-Rényi case, while it is decreased if β 2 < 0. In the limiting case β 2 → −∞ the edge-triangle model (2) gives zero probability to graphs containing triangles.
A key quantity in the study of the thermodynamic properties is the pressure, that at finite volume is defined as ψ n (β 1 , β 2 ) = 1 n 2 ln Z n (β 1 , β 2 ).
By taking derivatives with respect to the model parameters one computes the averages of the edge density e = 2E n /n 2 and of the triangle density t = 6 T n /n 3 : where · n denotes expectation w.r.t. the measure ν n defined in (2). We shall be interested in the behavior of very large graphs which mathematically is described by the (thermodynamic) limit n → ∞. General convexity arguments imply that the thermodynamic limit is well defined, so that the infinite volume pressure exists and by Lebesgue dominated convergence limits and derivatives can be interchanged so that the relation (6) gives in the thermodynamic limit where e = lim n→∞ e n . We shall work with the parametrization (β 1 , β 2 ) = (h p /2, α/6) where we recall h p = ln p 1−p and 0 < p < 1. In this way the pressure of the edge-triangle model can be read as the cumulant generating function of the number of triangles in the Erdös-Rényi random graph. In other words, defining where · ER n denotes the expectation w.r.t. the measure ν ER n , by a simple computation one can show that Thus, studying the cumulant generating function of the number of triangles in the Erdös-Rényi random graph is equivalent to studying the pressure of the edge-triangle model.
Ensemble inequivalence
As the pressure ψ n (β 1 , β 2 ) is the crucial quantity in the canonical ensemble, the entropy s n (e, t) defined as is the important quantity in the microcanical ensemble. An heuristic application of the Laplace method would imply ψ(β 1 , β 2 ) = lim n→∞ 1 n 2 ln e n 2 (β1e+β2t−sn(e,t)) de dt = sup i.e. in the thermodynamic limit the canonical pressure can be obtained from the microcanonical entropy by a Legendre transform. One then says that the two ensembles are thermodynamic equivalent if such correspondence also holds in the reversed direction. This is the same as requiring that the microcanonical entropy is strictly convex, i.e. the involution property of the Legendre transform for strictly convex functions. The problem of ensemble inequivalence is, in fact, more general than just thermodynamic inequivalence (see [31] for a recent account). When the correspondence via Legendre transform between pressure and entropy does not hold, the difference between canonical and microcanonical ensemble can then be probed in several ways. In this paper we shall focus on macrostate inequivalence. Namely, we ask if sampling a very large graph uniformly at random from the set of all graphs with given dentities of edges and triangles (e * , t * ) is statistically equivalent to sampling a very large graph from the edge-triangle model with parameter values β 1 (e * , t * ), β 2 (e * , t * ) that are obtained by inverting the relations (8) with e = e * and t = t * . In the thermodynamic limit graphs are described by the notion of graphon, which is discussed below. Thus the equivalence that we consider amounts to ask if the two sampling procedures produce the same graphon.
The sampling from the micro-canonical ensemble has been investigated for instance in [28,29,30,22,3]. It has been found that the structure of graphs drawn from the microcanonical ensemble is very rich and may vary a lot as a function of the number of prescribed edges and triangles. For instance, for a choice (e, t) such that e = 1 2 + with ∈ ( l−2 2l , l−1 2l+2 ) with l ∈ N \ {1} and t on the scallopy curve, the vertex set of a graph drawn from the microcanonical ensemble can be partitioned into subsets ( − 1 of them of the same size and the last of different size). The graph has the form of a complete -partite graph on these pieces, plus some additional edges in the last piece that create no additional triangles [18].
To the best of our knowledge, sampling from the canonical ensemble has been investigated only in a limited region of the parameters (β 1 , β 2 ), see the review of know results in Section 2.1. It is the aim of this paper to conduct a systematic exploration of the full parameter space by means of numerical simulations.
Main results and paper organization
We investigate the sampling from the canonical ensemble by studying the pressure of the edge-triangle model, or equivalently the cumulant generating function µ p (α) of triangles in the Erdös-Rényi model with parameter p. We perform numerical simulations for finite graphs and compare them to the variational formulation describing the infinite volume. We shall collect multiple evidences that the structure of graphs in the canonical ensemble has only two possibilities: it is either the constant graphon describing the Erdös-Rényi graph (i.e. independent edges, yet with a modified parameter for the probability of edges accounting for the imposed number of triangles) or it is the graphon describing the 1-step replica symmetric breaking solution (generalizing the bipartite random graphs that is know to be the exact solution for α → −∞). By means of different numerical analysis ("cloning" method for a direct measurement and "gradient" method for the solution of the pressure variational problem) we shall identify a curve α c (p) in the plane (p, α) separating these two regimes called, respectively, the replica symmetric phase and the replica symmetry broken phase. We do not have a proof that replica symmetry broken phase α < α c (p) is entirely described by the 1-step replica symmetric breaking solution. However, in contrast to the microcanonical sampling, our numerical analysis suggests that in the description of the canonical sampling no higher level of replica symmetry breaking is required. As a consequence of the numerical analysis the value α c (p) may be identified as a bona-fide critical value for a 1-step replica symmetry breaking transition.
The paper is structured as follows: • in section 2 we review the variational formulation of the pressure. We recall the results that are known in the literature for the solution of the variational problem and describe the 1-step replica symmetry breaking solution.
• In section 3 we present the numerical analysis of the finite volume pressure based on the 'cloning method', which is a population dynamics algorithm.
• In section 4 we solve a discretized version of the variational problem in the infinite volume by a gradient projection method. Here we do not fix a-priori a specific structure for the optimal graphon.
• In section 5 we solve the variational problem restricted to a specific class of graphons, those corresponding to the 1-step replica symmetry breaking solution.
We will argue that the results of sections 4 and 5 coincide (within numeral accuracy) and they are well approximated by the finite volume direct measurements of section 3.
Variational formulation 2.1 Review of known results
The theory of graph limits [24,23,8,9,10] relies on the notion of graphon, which describes a random graph in the limit n → ∞. A graphon is defined as a bounded Borel measurable function f : The idea behind this definition is a mapping of a graph to the unitary square: intuitively the interval [0, 1] represents a continuum of vertices and f (x, y) is associated to the probability of connecting with an edge two vertices x and y. For example, the graphon describing the Erdös-Rényi random graph with parameter p is the constant function f identically equal to p. The set of graphons is denoted by W. On this set an equivalence relation is introduced according to which f, g ∈ W are equivalent if there exists a bijection σ : [0, 1] → [0, 1] such that σ and σ −1 are Borel measurable and preserve the Lebesgue measure, such that g(x, y) = f (σ(x), σ(y)). The set of equivalence classes is denoted by W, while f is the equivalence class containing f ∈ W.
In reviewing the known results in the thermodynamic limit, we follow here [15] (for a more general overview of large deviations for random graph see [12]). The thermodynamic limit of the pressure of the edge-triangle model is given by [15,Theorem 3 where t(f ), the density of triangles in f , is and the entropic term is Here f is a representative element of the equivalence classf and I p (u) for u ∈ [0, 1] denotes the Bernoulli relative entropy Then, from (10) and (13) we get with When the set of maximizers only consists of constant functions we say that we are in the replica symmetric phase. Conversely, if the elements of the maximizing set are non-constant functions, then we say that we are in the replica symmetry breaking phase. The infinite-dimensional variational problem (17), involving the non-linear functional H, has been analytically solved only in a region of the parameter values. In particular [15,Theorem 6.2] proves that for all 0 < p < 1 the system is in the replica symmetric phase for α > −2. The variational problem is then reduced to a scalar one, see [15,Theorem 4 where u * (α) is the optimizer that solves the fixed-point equation: From this, one infers that for α > −2 a graph sampled from the edge-triangle model in the limit n → ∞ will look like an Erdös-Rényi random graph with parameter u * (α), i.e. edges are independent from each others and present with a probability u * (α) (we refer to [5,15] for the precise statement). As for the region −∞ < α ≤ −2 the solution of the variational problem (18) is unknown. The Euler-Lagrange equation giving the stationarity condition are given by the following equation, which is the generalization of (20) to the case of non constant functions [15, Theorem 6.1]: It has been proved that for α small enough, a graph sampled from the edge-triangle model no longer resembles an Erdös-Rényi graph. In particular [15,Theorem 6.3] show that for α small enough (and for any value of p) the functional H(f ) is not maximized at any constant function. This result is based on the fact [15, Theorem 7.1] that one actually proves that in the limit α → −∞, the solution of the variational problem (17) is provided by the so-called equi-bipartite graphon defined as and in this limit one has lim For later use, we remark that using the constant graphon Thus, in the limit α → −∞ the replica symmetric solution is wrong by a factor 2!
A conjecture
The graphon (22) is the infinite volume correspondent of the equi-bipartite graph. In the latter the n vertices are partitioned into two disjoint sets of equal size with no edges connecting two vertices belonging to the same set. Clearly, in such a graph triangles do not exist (more generally bipartite graphs do not contain odd cycles [7,Theorem 4]). Analogously, the density of triangles in the equi-bipartite graphon (22) vanishes since t(g) = 0. Thus, as expected, the edge-triangle model in the limit α → −∞ is free of triangles.
Guided by the results of our numerical analysis that we illustrate below, we conjecture that for any fixed 0 < p < 1, there exists a unique finite and negative value α c (p) such that, crossing α c (p) from above, the optimizer of H defined in (18) switches from the constant function f α , identically equal to the solution of the fixed-point equation (20), to the graphon: where p 1 (α) and p 1 (α) are functions taking value in (0, 1) that satisfy the following conditions: We observe that lim α→−∞ g α = g, with g the graphon describing the equi-bipartite graph defined in (22). The rationale behind our conjecture is that the structure of (25) represents the simplest geometry that may emerge from the breaking of the homogeneous graphon. For the resemblance of the structure of the overlap matrix in the 1RSB solution of spin-glasses, we call this graphon the 1-step replica symmetry breaking solution.
Direct measurements for finite graphs
In this section we compute numerically the cumulant generating function of the number of triangles of a finite graph of size n. This expectation if substantially affected by events that, although rare, give a large contribution to the average defining the generating function. A standard tool for estimating the probability of rare events in the Erdös-Rényi random graph is the importance sampling technique, see for instance [6]. Here we follow an approach based on population dynamics, called 'cloning' [20,19,21,27,11,1,2]. In this section we adapt the method to a purely geometric problem, by introducing a dynamics for the graph construction.
Implementing cloning
The cloning algorithm is obtained by tilting a Monte Carlo dynamics that samples from a target distribution. In our case we would like to compute expectations w.r.t. the law of the Erdös-Rényi graph (4). We first describe the dynamical process generating the Erdös-Rényi graph and then we recall how the tilted dynamic arises in the cloning algorithm.
In the Erdös-Rényi graph each each edge is present, independently, with probability p ∈ (0, 1). To generate a graph of size n, we consider the Markov chain {X t , t ∈ N}, taking values on the set X of adjacency matrices, defined as follows. We label the n vertices in an arbitrary order. We start by selecting the first two vertices and we connect them with probability p, thereby obtaining a graph of size two. Then we select the third vertex and try to connect it to the first two vertices independently with probability p, thus obtaining a graph of size three. This procedure is repeated until the graph of size n is formed: each time a new vertex is selected, it is connected independently with probability p to each of those already visited. We stipulate that the discrete time step of this process corresponds to the attempt of adding a single new edge. Since the evolution from the graph of size i to the graph of size i + 1 requires i attempts, then the evolution starting form size two and leading to a graph of size n will require N n = n i=2 i = n 2 − 1 steps. We now introduce the tilted dynamics. Denoting by P the transition matrix of the Markov chain described above, i.e. P (x, y) = P(X t+1 = y|X t = x) for x, y ∈ X , the cumulant generating function of triangles reads where ν 0 is the initial distribution (i.e. Bern(p)). We rewrite the number of triangles for the graph of size n as where ∆T (x t , x t+1 ) = T (x t+1 ) − T (x t ) is the increment of triangles between two consecutive steps. In this way we obtain: The average in the previous equation can be also computed according to a different dynamics. Indeed, introducing the quantity and the stochastic matrix p α with elements P α (x, y) := P (x, y)e α n ∆T (x,y) we can write: As observed in [19] this representation of the average suggests a population dynamics scheme that, starting from a bunch of M initial individuals (clones) with distribution ν 0 (·), make them evolve according to the transition kernel P α (·, ·) and reproduce according to the rate k α (·). Denoting by M Nn the size of the corresponding population of clones at the final time N n , we get: The procedure is further illustrated with the example of size n = 3 in the Appendix.
Numerical results
We present here the results of the cloning algorithm for the cumulant generating function of the triangles. We fix a population of M = 7000 clones and a value of p = 0.4, and we vary the variable α and the graph size n. We denote by µ Cl n (α) the cumulant generating function returned by the cloning algorithm for the graph of size n with p = 0.4. Similarly, we shorthand µ RS (α) := µ RS 0.4 (α) and we denote the asymptotic valuesμ :=μ 0.4 andμ RS :=μ RS 0.4 . The outcomes of our simulations show that: • for large values of α, the algorithm reproduces the replica symmetric solution (i.e. µ Cl n → µ RS as n increases); • for small values of α, the algorithm provides a cumulant generating function which is independent of α within statistical fluctuations. This "constant" function approximates better and better the value of the exact solution at α = −∞ (i.e. µ Cl n →μ as n increases); • for all values of α and n, the curves furnished by the cloning algorithm are always above the curve that is obtained by taking the maximum between the replica symmetric solution and the exact solution at α = −∞ (i.e. µ Cl n ≥ max{µ RS ,μ}). The discrepancy reduces and n increases.
We observe that the intersection between the replica symmetric solution and the exact solution at α = −∞ occurs at a valueα −110, thus a substantially small value where the rate of reproduction of triangles in the cloning algorithm is very small. Despite this "extreme" situation, we see strong evidence that the replica symmetry solution does not hold for small α's, and for values α ≤ −110 the solution furnished by the equi-bipartite graphon is a very good approximation for the curve returned by the algorithm.
We discuss our results below. Figure 1 explores the region around α = 0. Figure 1(a) shows with dots the results of the cloning algorithm for n = 3, n = 4, n = 20, n = 110 and α ∈ [−3, 2]. The picture also reports, with orange dashed-dotted and maroon dotted lines, the exact curves µ n,0.4 for n = 3 and n = 4, that can be computed explicitly. The curve µ RS p (α), obtained by solving numerically the scalar variational problem (19) characterizing the replica symmetric regime for p = 0.4, is also displayed (pink continuous line). We observe that for n = 3 and n = 4 the cloning algorithm perfectly reproduces the exact result and, as long as n grows, the curves for increasing n's settle on the curve µ RS (α).
A further check on the behavior of the algorithm is provided by Fig.1(b). It shows, as a function of α, the average density of edges e n (α) Cl in the population of clones, that is e n (α) . This quantity is an estimator of the probability p of finding an edge connecting any two vertices of the graphs produced by the cloning. By effect of the tilting, p is different from the original value p = 0.4 and is a function of α, as expected. Since in replica symmetric regime the edge-triangle model converges (in the limit n → ∞) to an Erdös-Rényi model with parameter u * (α), see Section 2, we have reported in Fig.1(b) the optimizer u * (α). The fair agreement of the values e n (α) Cl on the curve of u * (α), gives a further evidence that the cloning algorithm is well working in this regime of α and reproduces the expected behavior of the edge-triangle model. The same conclusion can be drawn form Fig.1(c) that shows the estimate of the average density of triangles provided by cloning, i.e. t n (α) and the density of triangles in the symmetric replica regime, i.e. (u * (α)) 3 .
To check if the algorithm is able to reproduce the change occurring around α −110, we have run simulations in a strongly negative region of α. We plot in Fig.2 the results of the cloning method µ Cl n (α) (dots) together with the replica symmetric cumulant generating function µ RS (α). In the same picture we have also reported the asymptotic valuesμ = −0.255 andμ RS −0.510. The left panel of Fig.2 displays the cloning data for several n values, showing the converge towards a limiting profile as n is increased. Figure 2(b) exhibits the results for the largest graph size n = 110.
Numerical solution of the variational problem
The cloning algorithm implicitly solves the variational problem (17) by producing a population of graphs that approximate the optimizing graphon. Unfortunately, it is hard to scrutinize the structure of the graphon from the adjacency matrix of the graphs, when their size is large. Thus, in order to study the graphon in the replica symmetry broken phase, we solve (17) via a numerical discretization.
Discretization
The spatial discretization of graphon can be obtained considering the set of m × m symmetric matrices {f i,j } i,j with elements in [0, 1]. However, since the graphon that solves (17) does not take the values 0 and 1 [15, Theorem 6.3], we restrict our set of matrices to the following set: where ε > 0 is a small parameter that bounds the functions away from the singularity of the logarithm in the discretization of H(f ), that we define as follows: We remark that, as it happens in the continuous setting, H m (f ) enjoys a symmetry property. Indeed, given f, g ∈ Γ m,ε we have H m (g) = H m (f ) if there exists a permutation σ over {1, . . . , m} such that g i,j = f σ(i),σ(j) . In this case we call g and f equivalent. We solve numerically the discretization of (17) by applying the Gradient Projection (GP) method with both a constant and variable steplength, the latter chosen according to the Barzilai-Borwein rules [4].
Numerical results
Being interested in the structure of the optimizer in the replica symmetry breaking region, we have solved (38) for a set of p values ranging from 0.2 to 0.6 and α below −2, varying it with unitary step. For each value of p and α we have started the iterations of the GP method from a set of 12 initial conditions, using a grid of size m = 40 and setting ε = 10 −4 . Our main findings are: a) the presence of only two different geometrical structures of maximizers: the constant ones and chessboard-like one (see Fig.3). The values of the constant maximizers turn out to be equal, within the numerical approximation, to the solution of the fixed-point equation (20); b) there exists a critical value α c (p) such that when α > α c (p) the optimizer of H m (f ) is constant whereas when α < α c (p) it assumes a chessboard-like structure.
We observe that there are different chessboard-like structures that can be reached starting from different initial conditions, as shown in Fig.3. All of them are equivalent to the discretization of the 1-RSB graphon, that by abuse of notation we also denote by g α : With p fixed, and varying α, we sought the critical value α c (p) by comparing the values of H m computed at the homogeneous solutions f α and at the chessboard-like ones g α . We found that the values of f α coincide, within our approximation and for all α, with u * (α) (the solution of the replica symmetric fixed-point equation (20)), while the values p 1 (α) and p 2 (α) occurring in g α are close to 0 and p, see Tab.1 for large negative α values. This implies that g α is close to the equi-bipartite graphon g defined in (22). The evidence that the phase transition occurs at the critical value α c (p) is given by observing . An example is given in Table 1 where, for the case p = 0.4, the change of the optimizer shows that the position of the critical value α c (p) is between −109 and −110 (we recall that α varies with unitary step). Table 1: Numerical optimizers of (38) for p = 0.4 and α = −109, −110. We report the solution u * (α) to equation (20), the constant solution f α and the two values p 1 (α) and p 2 (α) taken by the chessboard solution g α . The last two columns show the values of H 40 (f ) that give evidence of the phase transition between −109 and −110.
Denoting by h α the global optimizer found by the GP method, in Fig.4 we represent H 40 (h α ) as a function of α for p ∈ {0.2, 0.4, 0.6}. In the replica symmetric region, i.e for α > α c (p), H 40 (h α ) is expected to approximate the solution µ RS (α) = α (u * (α)) 3 3 − I p (u * (α)) . The overlapping of the two curves above α = −455, α = −110, α = −69 and α = −47 is shown in Fig.4. Below such thresholds, that we identify as approximations of critical value α c (p), the optimal solution h α of H 40 switches from the constant one f α (approximating the fixed-point equation (20)) to a chessboard function g α . The function H 40 (h α ) is nearly flat for α < α c (p) and very close to the asymptotic value H(g) = 1 2 ln(1 − p) of µ p (α), see (23). of Section 4, here we assume that the breaking of the homogeneous graphon gives rise to a solution with the chessboard structure, see Fig.3, p1,p2 (x, y) = that we call the generalized 1-step replica symmetry breaking solution. We introduce the set R = {g (a) p1,p2 (x, y)| a, p 1 , p 2 ∈ [0, 1]} ⊂ W. This set contains the constant graphon (obtained by setting p 1 = p 2 for any a or, equivalently, by setting either a = 0 or a = 1) and, for a = 1 2 , the graphon (25), that we claim to be the solution of (17) for α < α c (p). Also, the limiting graphon (22) is contained in R.
We thus restrict the infinite dimensional problem (17) to the finite dimensional one obtained by restricting to the set R: Being interested in the phase transition, we consider this problem for α ≤ −2. Using this approach, we aim at locating the critical value α c (p) denoting the transition that we claim to occur in R between the homogeneous and the 1-step replica breaking solution. Obviously we can state that α c (p) ≤ α c (p), but we do not have any argument to assert that the transition in R is the same occurring in the whole space W and, as a consequence, that α c (p) coincides with α c (p). However, some evidence for the equality of the two critical points will be obtained in this section. The function to be maximized in (41) can be written as follows: since the density of triangles (14) in the generalized 1-step replica symmetry breaking graphons (40) is This function, that satisfies the symmetry F α (p 1 , p 2 , a) = F α (p 1 , p 2 , 1−a), can be defined by continuity up to the boundaries of [0, 1] 3 by setting I p (0) = I p (1) = 0. Therefore F α (p 1 , p 2 , a) attains its maximum on [0, 1] 3 . Due to the fact that the optimizer of (17) is bounded away from 0 and 1 [15], we assume that the coordinates (p 1 , p 2 ) of the maximum point lie in the interior of [0, 1] 2 and thus satisfies stationarity condition ∇F α (p 1 , p 2 , a) = 0, given by the following system of equuations: with p 1 , p 2 , a ∈ [0, 1] and α ≤ −2.
It is simple to check that the point (p * 1 (α), p * 2 (α), a * ) with p * 1 (α) = p * 2 (α) = u * (α) and any a * ∈ [0, 1] is a solution to equation (43) for any α. Indeed, when p 1 = p 2 equations (a) and (b) are equal and coincide with the fixed-point equation (20). From the numerical solution of system (43) as α is varied, we got evidence of the existence of a thresholdα c (p) above which (u * (α), u * (α), a * ) is the maximum (this holds for any a , being such parameter meaningless in the homogeneous case). Crossingα c (p) from above, a non-homogeneous solution with a * = 1/2 and p * 1 (α) ≈ 0, p * 2 (α) ≈ p arises; it turns out to be the maximum of the problem for α <α c (p).
We give more details on the procedure we followed in order to analyze the solutions of (43). First we observe that there are three special values of a, namely a = 0, 1 2 , 1. As we said above, a = 0, 1 correspond to the constant graphon that can be obtained also as a special case ( with p 1 = p 2 ) of a = 1 2 . Thus since the cases a = 0, 1 can be absorbed in a We solved numerically the previous equation for several values of p and α. The function G α (p 2 ) turns out to be a concave function whose maximun, that is also the solution to eq.(44), coincides with the solution that has the solution p 1 = u * (α). Indeed, the equation αx 2 = ln x(1−p) p(1−x) is equivalent to the fixed-point equation (20) that has an unique solution for α < 0.
• Case a = 1 2 . In this case the function to be maximized takes the much simpler form F α (p 1 , p 2 , 1 2 ) = α 12 (p 3 1 + 3p 1 p 2 2 ) − 1 2 (I p (p 1 ) + I p (p 2 )) with the stationarity condition: We have computed the numerical solution (p * 1 (α), p * 2 (α)) for several values of p, as in Fig.6. The left panels show that crossing a critical value α c (p) from above the transition between the constant graphon (p * 1 (α) = p * 2 (α))) and a 1-step replica symmetry breaking regime takes place. In particular, at α c (p) the solution jumps towards the point (0, p). The central and right columns of Fig.6 represent the solution below the critical value showing that, in the limit α → −∞, the values p * 1 (α) and p * 2 (α) converge to the limits 0 and p, respectively, as conjectured in (26) and (27). A further evidence of this transition is given in Fig.7 in which the curves F α p * 1 (α), p * 2 (α), 1 2 and µ RS (α) are displayed. Above the critical value α c (p) the two curves overlap thus revealing the replica symmetric phase whereas they separate below (replica breaking regime). For α <α c (p), the right panel Fig.7 shows that F α p * 1 (α), p * 2 (α), 1 2 is very close to the the asymptotic value (23) (the discrepancy vanishes as α → −∞). Figure 8 represents the density of triangles and edges corresponding to the maximizer g The former quantity is given in (42) while the latter is The jump discontinuity shown in Fig.8, located atα c (p), makes evident the existence of the transition that separates the replica symmetric phase, in which both quantities decrease for decreasing α, from the replica breaking phase. In this phase the density of edges gets close to the value p 2 , i.e. the density of the limiting equipartite graphon (22), while the density of triangle jumps towards a value close to zero, being zero the density of triangles of the same graphon. Figure 9 shows the dependence of the critical valueα c (p) from p. The data are in good agreement with a fit |α c (p)| ∼ p −2 .
We conclude our analysis by Fig.10, which displays the replica symmetric pressure (19), the 1-step replica symmety breaking pressure (41) and the pressure obtained from the discretized variational problem. We can observe that the three curves perfectly match above the critical thresholdα c (p), whereas in the subcritical phase the pressure obtained from the solution of the discretized problem agrees with the 1RSB pressure and is strictly larger than the replica symmetric pressure. Figure 10 shows that α c (p) is a singularity of α → F α (p * i (α), p * b (α), 1 2 ), at which the left and right derivatives are different.
We compute again (46) by applying the dynamics described in Sec.3 to a family of M clones. The three steps required to construct the edges of the graph G 3 are represented in Fig.11. The leafs of the tree represent the occupation variables of the three possible edges, x 1,2 , x 1,3 , x 2,3 . In the first step the edge connecting two vertices, say 1 and 2, of each clone is added with probability p. Thus, in the clone population about pM graphs have the edge (1, 2) and (1 − p)M graphs have not this edge, see the first level in Fig.11 . In the second step the edge (1, 3) is added, still with probability p, leading to four possible values for the pair (x 1,2 , x 1,3 ). Thus, the expected numbers of types are M p 2 , M p(1 − p), M p(1 − p), M (1 − p) 2 , see level 2 in Fig.11. Let us observe that in the first two steps, since ∆T (x, y) = 0, the original and tilted transition probabilities coincide: P α (x, y) = P (x, y), see (32). The situation changes in the last step. Indeed, the configuration of edges x = 11 (which means that x 1,2 = 1 and x 1,3 = 1) may evolve to y = 111, with ∆T (x, y) = 1, or to y = 110, in which case ∆T (x, y) = 0. For all the other configurations x we have ∆T (x, y) = 0. Thus, see the definition (32) of the tilted probability: being P (11, 111) = p, P (11, 110) = 1 − p and k α (11) = pe α 3 + 1 − p. Then, the probability of the paths connecting the root φ to the leafs 111, respectively 110, will be, : (48) The average in (33) can be computed as the sum over the paths from the root to the leafs (equivalently, as the sum over the leafs). Recalling that k α (11) = pe α 3 + 1 − p and observing also that k α (x) = 1 if x = 11, from (33) we have: µ 3,p (α) = 1 3 ln P(φ, 111)k α (11) + P(φ, 110)k α (11) + x / ∈{111,110} | 9,653 | 2020-07-25T00:00:00.000 | [
"Mathematics"
] |
Measurement and Evaluation Method of Distributed Optical Fiber Acoustic Sensing Performance
: Distributed acoustic sensing incorporates multiple indicators, and there exists a mutually constraining relationship among these indicators. Di ff erent application fi elds have varying requirements for indicators. Therefore, indicator testing and comprehensive evaluations are crucial for engineering applications. In this paper, we conducted a theoretical analysis of key indicators, including frequency response, sensitivity, spatial resolution, sensing distance, multi-point perturbation, and temperature in fl uence. The indicator test scheme was developed, and a test system was constructed. The test data were analyzed and compared in the time-frequency domain. A performance evaluation method for distributed acoustic sensing, based on the analytic hierarchy process, is proposed, and a comprehensive evaluation example focused on high-frequency applications is presented. The results show that the test scheme and method presented in this paper can accurately measure the upper limits of each indicator of distributed acoustic sensing. The proposed comprehensive evaluation method enables the assessment of sensor performance and applicability based on engineering practices. It addresses the challenge of evaluating distributed acoustic sensing with multiple indicators and o ff ers an e ffi cient approach for equipment development and engineering applications.
Introduction
Distributed fiber optic sensing technology utilizes light as the carrier of information and fiber optics as the medium for transmitting this information [1].It enables the longdistance and distributed measurement of temperature, strain, vibration, and other parameters [2].In the field of vibration measurement, phase-sensitive optical time-domain reflection (φ-OTDR) technology is highly regarded due to its advantages of high sensitivity, fast response speed, and high signal-to-noise ratio [3][4][5].Distributed acoustic sensing (DAS) is one of the commercially available φ-OTDR devices that are already being extensively employed for diverse engineering applications.The DAS can perform the real-time demodulation of phase information related to the position of a disturbance [6].This enables it to accurately restore external disturbance signals [7].As a result, the DAS exhibits great promise for various applications.It is necessary to master the operation steps of DAS devices and the basic indicators of DAS in depth.DAS has significant potential for applications in areas such as seismic observation and perimeter security.
In recent years, scholars have applied DAS in various condition-monitoring fields and obtained significant results.Table 1 shows the test/evaluation requirements for DAS in different application areas.These application areas require the evaluation of indicators, including the sensitivity and spatial resolution of DAS.DAS systems encompass a variety of indicators, each with interdependencies and constraints.In different application domains, indicator requirements vary.Therefore, it is crucial to assess the overall performance of DAS devices.However, there is currently a lack of research on the comprehensive evaluation of DAS performance.
Application Area Research Content Testing/Evaluation Requirements Reference
Oil and gas exploration and development Measure the dynamic strain on Vertical Seismic Profile (1) Coupling degree between optical cable and the wall; (2) Sensitivity [8,9] Rail transportation Monitor long-distance train position and speed Spatial resolution [10] Submarine cable state monitoring Monitor anchor drag and anchor drop faults Frequency response [11] Perimeter security Monitor and locate invasion points in real-time in the desert Sensitivity [12] In 2015, E. T. Nesterov et al. concluded that modulation instability limits the maximum optical pulse power, which, in turn, affects the sensing distance in DAS [13].In 2019, Hugo F. Martins et al. conducted a review on the substitution of coherent light pulses with chirped light pulses in φ-OTDR systems, resulting in the development of chirped φ-OTDR systems [14].This modification led to a significant enhancement in both the sensing distance and sensitivity of the system.In 2023, A.T. Turov et al. investigated the impacts of detection pulse parameters, as well as generator and amplifier parameters, on the spatial and spectral characteristics of DAS.This study also introduced methods aimed at reducing monitoring costs [15].In the same year, Boris G. Gorshkov et al. increased the sensitivity of DAS by 1.3 times, eliminating spontaneous Brillouin scattering noise [16].The current research mainly focuses on the influence of a certain parameter on a specific performance indicator in the DAS.However, there is a lack of research on the measurement and evaluation of the overall performance of DAS.In 2018, the Fiber Optic Monitoring Group disclosed the testing methods for some indicators of DAS, including the dynamic range test, frequency response test, fidelity test, self-noise test, spatial resolution test, and crosstalk test [17].Building on this foundation, the paper introduces a method suitable for the on-site testing of DAS equipment.This method places minimal demand on the equipment and offers advantages such as low cost, simple operation, convenience, and feasibility.
In this work, we present the sensing principle of φ-OTDR systems.The fundamental indicators of DAS in commercial φ-OTDR devices can provide a basis for proposing methods and conducting experiments to systematically assess DAS indicators.The DAS underwent a series of tests to evaluate its performance in various aspects.These tests included the frequency response test, amplitude test, spatial resolution test, sensing distance test, multi-point perturbation test, and temperature influence test.A comprehensive performance evaluation method for DAS based on the hierarchical analysis process (AHP) is designed to systematically and comprehensively assess the performance of DAS devices.
In the application of DAS systems, two main types of optical fibers are commonly chosen as sensing fibers: the ordinary single-mode fiber (SMF) and fiber Bragg grating (wFBG) array or Rayleigh-enhanced points.In 2019, Peng Zhu et al. integrated a wFBG array into the φ-OTDR system to detect, in a quasi-distributed way, the partial discharge ultrasonic signal of the cable joint [18].The detection signal covered a frequency range of 9-15 kHz.In 2022, Qirui Wang et al. successfully fabricated Rayleigh-enhanced points in the fiber under test (FUT) using the φ-OTDR system to detect guided wave ultrasounds [19].This approach led to improvements in the performance and effectiveness of ultrasonic nondestructive monitoring.At the same time, this system, combined with a convolutional neural network, successfully monitored and identified internal pipeline defects with different damage degrees [20].By employing a wFBG array or Rayleigh-enhanced points as the sensing fiber, significant improvements were observed in sensitivity, the signal-to-noise ratio, and other performance indicators.However, this shift transformed the system from distributed to quasi-distributed.The fabrication process of a wFBG array or Rayleigh-enhanced points is complex, and the associated costs are high, making it more suitable for short-distance condition monitoring.As this study involves the testing of the sensing distance indicator and is more inclined to analyze the performance indicators of distributed fiber sensing technology, ordinary SMF is selected as the sensing fiber.
The Working Principle of φ-OTDR Coherent Detection System
The narrow linewidth laser (PL-NLWM-1550-C-2-SA-A-M) from LD-PD INC SIN-GAPORE generates continuous light with a narrow linewidth of <3 kHz, a wavelength of 1550 nm and a power of 60 mW.The continuous light is divided into two parts by the 90/10 coupler.The 90% part of it enters the acoustic-optic modulator (AOM, SGTF100-1550-1P) to modulate into a pulse signal with a frequency shift of Δf = 100 MHz [21,22].The AOM is driven by an arbitrary waveform generator (AWG) to generate repetitive laser pulses.After being amplified by an erbium-doped fiber amplifier (EDFA), it enters the sensing fiber through an optical circulator (CIR) and undergoes Rayleigh scattering.The other 10% of the light passes through a polarizer scrambler (PS) and is used as the principal oscillator light with the backscatter Rayleigh (RBS) returning from the CIR to produce a beat-frequency signal in a 3 dB coupler.It is detected using a photodetector (PD, MBD-200M-AM) with a bandwidth of 200 MHz and converted into a high-gain electrical signal.The system schematic diagram is illustrated in Figure 1a.The voltage V(t) of the PD output beat signal can be simply expressed as follows: ( ) Here, AL(t) represents the amplitude of the local oscillator.φt(t) represents the phase information that contains the vibration signal.In this process, several factors can influence the spatial resolution, sensitivity, and frequency response of the system, including the line width of the laser, the stability of the light source, the optical pulse width, and the bandwidth range of the PD.There are several demodulation methods available for φ-OTDR systems.Among them, the adoption of a digital in-phase and quadrature (IQ) demodulation scheme [23] can truly achieve phase demodulation, enabling the precise restoration of vibration signals and improving the signal-to-noise ratio (SNR) of the system.The demodulation schematic diagram is depicted in Figure 1b.The PD output signal performs IQ phase demodulation.After passing through a low-pass filter (LPF), the demodulated components, I(t) and Q(t) can be represented as follows: The amplitude A(t) and phase φt(t) of the vibration signal can be represented as ( ) (t) ( ) According to the above formula, the phase φt(t) of RBS can be obtained.Given that the range of the arctangent function is (−π/2, π/2), the result needs to be transformed into a range (−π, π) based on the quadrants where I and Q are located.Through phase unwrapping, the final phase result can be achieved.The DAS tested in this paper was provided by Wuxi BuLiYuan Electronic Technology Co., Ltd., located in Wuxi, China, with the model identified as BLY pDAS-100P.The corresponding detection parameters are outlined in Table 2.The frequency response of a system is defined as the maximum signal frequency that the system can accurately detect or reproduce [24].Following Nyquist's sampling theorem, the frequency response can be expressed as Here, fre denotes the frequency response of the system, and fm denotes the repetition frequency of the injected pulsed light.The frequency response of the system is determined by the repetition frequency of the injected pulsed light.At any given moment, only a single pulse of light can be transmitted within the fiber to generate effective optical interference.Therefore, the repetition frequency of the pulsed light is limited by the length of the fiber.This constraint can be expressed as follows: Here, the length of the fiber is represented by L, the speed of light in a vacuum is denoted by c, and the refractive index of the fiber's core is denoted by n.When the length of the fiber is 2 km, the theoretical maximum limit for the pulse repetition frequency is 50 kHz, and the theoretical maximum limit for the frequency response is 25 kHz.
Sensitivity
Sensitivity is a measure used to indicate the ability of a system to effectively respond to weak signals [25].A higher sensitivity corresponds to a system that is capable of detecting smaller signal amplitudes, which indicates the better performance of the system.This paper evaluates the sensitivity of the DAS by conducting tests on the amplitude response.
Spatial Resolution
Spatial resolution refers to the minimum distance at which a system can distinguish and resolve adjacent perturbations.It can be expressed as follows: Here, the spatial resolution is denoted by z, and the optical pulse width is represented by Tw.When the optical pulse width is 100 ns, the system achieves a spatial resolution of 10 m.In this spatial resolution test, the optical pulse width of the DAS device remains constant, while the length of the optical fiber in the sensing section varies.
Sensing Distance
The sensing distance denotes the maximum length of fiber that the system can accurately detect and monitor effectively.The dynamic range [26], denoted as R, represents the primary limiting factor for the sensing distance.It can be mathematically expressed as follows: 5lg( ) Here, Po denotes the Rayleigh-scattered light power at the first end, while Pn represents the Rayleigh-scattered light power at the tail end.The length of the sensing distance of a system tends to increase as the dynamic range becomes larger.The sensing distance can be determined by analyzing whether a stable vibration signal is measured at the end of the fiber.The characteristics of vibration signals primarily include frequency and amplitude.The frequency response test of Equation ( 2) can also be utilized to evaluate the sensing distance indicator.When the same fiber length is used, a more effective sensing distance test can be achieved when the measured frequency response closely matches the theoretical limit value.
In addition to testing the key indicators mentioned above, it is important to consider the impact of multi-point simultaneous perturbations and temperature variations on the sensing performance of DAS devices.This requires testing specific indicators and conducting a thorough analysis of the results.
The Performance Evaluation of Indicators
DAS devices involve numerous indicators, and these indicators are interrelated, which can lead to confusion for users in different application fields when selecting and using DAS.Conducting a comprehensive evaluation of DAS performance is crucial to provide users with a reference that meets their specific requirements.
The evaluation of DAS performance, indeed, requires both quantitative and qualitative analysis.This evaluation approach aligns with the concept of AHP [27], which is commonly used to evaluate comprehensive and complex problems.The AHP is a comprehensive evaluation approach that incorporates both quantitative and qualitative analysis.It is employed to tackle complex evaluation problems by breaking them down into multiple levels and elements.Within each level, the elements are compared, assessed, and computed against one another.By utilizing analytical methods, the weight parameters of each level can be determined [28].This methodology provides a solution to the systemic problems encountered in scenarios involving multiple objectives, multiple levels, and challenges that cannot be entirely evaluated through quantitative methods.
In this paper, a set of AHP steps is designed specifically based on the characteristics of DAS indicators and engineering requirements.These steps are outlined as follows: First, construct a hierarchical model.The comprehensive evaluation model of DAS performance is divided into the following three layers: the top layer (H), the middle layer (M), and the bottom layer (L).In the context of evaluating DAS indicators, the top layer is defined as the target layer.The target layer sets only one factor, which is typically referred to as the "DAS performance evaluation".The middle layer consists of six factors that are crucial to consider, including M1 frequency response, M2 sensitivity, M3 spatial resolution, M4 sensing distance, M5 multi-point perturbation, and M6 temperature influence.The bottom layer, also referred to as the criterion layer, establishes the following three factors: K1 (excellent), K2 (fair), and K3 (poor).The AHP model, as illustrated in Figure 2, is designed to illustrate the evaluation process.
Next, the judgment matrix is constructed for a factor concerning the subsequent layer of associated factors.Let us take the construction of the judgment matrix for the target layer and the middle layer as an example.In this process, we compare two factors, Mi and Mj, from the middle layer at a time.The resulting judgment matrix is as follows: Here, the relative importance of the factors Mi and Mj in the middle layer concerning the target layer is represented by the comparison scale aij.The quantization method in reference [20] was used to determine the value of aij.The value aij was determined simultaneously according to the given conditions: aij > 0, aji = 1/aij (i ≠ j), aii = 1(i, j = 1, 2, …, n).
We constructed judgment matrices B, C, D, E, F, G, and H to evaluate M1 frequency response, M2 sensitivity, M3 spatial resolution, M4 sensing distance, M5 multi-point perturbation, and M6 temperature influence concerning the criterion layer.The order of this judgment matrix was 3, indicating that it is a square matrix with dimensions of 3 rows and 3 columns.After constructing the judgment matrices, we proceed with hierarchical single ranking and conduct a consistency test.To solve for the eigenvalues λ and eigenvectors WA of the judgment matrix A according to Equation ( 6), we needed to perform the following calculations.The weight value WA represents the weight value of all factors in the middle layer concerning the target layer.
Similarly, the eigenvalues and eigenvectors of the judgment matrices B, C, D, E, F, G, and H can be determined to complete the hierarchical single-sorting process.
A consistency check of the judgment matrix is necessary after the hierarchical singlesorting process.To calculate the stochastic consistency ratio CR, we can use Equation (7).
Here, the variable CR represents the consistency indicator, n denotes the order of the judgment matrix, and RI represents the average random consistency indicator.When the order of the judgment matrix is three, the consistency indicator is represented by RI as a value of 0.58.However, when the order is seven, the consistency indicator changes to a value of 1.32.
If the consistency indicator CI is less than 0.10, the consistency test of the judgment matrix is considered successful.However, if the CI exceeds 0.10, the judgment matrix needs to be readjusted until the CI is less than 0.10.
Finally, a hierarchical total ranking was performed.The weight values of all factors in the middle level and the criterion level, relative to the top level, were calculated separately to complete the hierarchical total ranking process.This process considers the importance of each factor at different levels and combines them to determine the DAS performance evaluation results.
Frequency Response Test and Analysis
To assess the frequency response of DAS, this study utilizes the nominal maximum pulse repetition rate and minimum pulse width specified by the device, which are 26.64 kHz and 100 ns, respectively.A frequency response test optical path is constructed, as illustrated in Figure 3a, while the corresponding test setup is depicted in Figure 3b.A sheathed fiber with a diameter of 900 µm, measuring 2090 m in length, is connected to the DAS unit using an FC/APC connector.The L1 segment serves as a jumper connecting to the test optical fiber.To minimize the impact of environmental disturbances during the system testing, the L2 and L3 segments of the fiber were enclosed in sealed cartons to isolate air vibrations.Additionally, the foam was used to isolate the fiber segments from vibrations on the tabletop.The dimensions of the carton used were 470 mm in length, 390 mm in width, and 315 mm in height.The L2 and L3 optical fibers were coiled into optical fiber discs with a diameter of 180 mm and a height of 20 mm.Subsequently, they were affixed to the surface of the carton.The sound source was positioned in the center between the two optical fiber discs, maintaining a distance of 50 mm from the surface of the optical fibers.The sound had a frequency range spanning from 130 Hz to 18 kHz (±3 dB).The system signal-to-noise ratio (SNR) of ≥80 dB ensured a high-quality signal.Additionally, the distortion degree was maintained at less than 0.1%, ensuring signal fidelity.The system had an output power of 3 W.By controlling the COOL Edit Pro software 2.1, the sound generated sinusoidal signals with an amplitude of 0 dB and frequency of 8, 10, 12, and 13 kHz, respectively.These signals propagate through the air in the form of sound waves.The L2 and L3 fibers are positioned along the signal propagation path, resulting in a change in the phase of the backscattered light from L2 and L3.The device shown in Figure 3c The phase data at a distance of 1831 m along the sensing fiber were extracted, and its time-domain waveform was plotted, as depicted in Figure 4.When the applied signal frequency is 8 kHz, it is difficult to observe the waveform of the sine wave signal in Figure 4b.This is primarily because only 3.33125 points can be sampled within one cycle.Moreover, failures occur in the phase unwrapping process for certain points.At a signal frequency of 13 kHz, merely 2.05 points can be captured within one cycle.Moreover, failures occur in the phase unwrapping process for certain points.This results in the severe distortion of the demodulation waveform.The spectrum was obtained through an FFT, as illustrated in Figure 5.As observed in Figure 5, the FFT measurements accurately capture the frequency values of the signal being measured, particularly when the signal frequency does not exceed 12 kHz.When the frequency of the signal being measured surpasses 12 kHz, the FFT may not effectively identify or capture the signal within this frequency range.Therefore, the determined frequency response of the device is 12 kHz when the length of the sensing fiber is 2090 m.This value approaches the theoretical limit but does not reach the theoretical limit of 26.64/2 kHz.
Magnitude Response Test and Analysis
In practical applications of the φ-OTDR system, the amplitude response plays a crucial role in determining the sensing performance of the sensor.A system that can measure smaller amplitudes indicates higher sensitivity and better sensing performance.The optical path for the amplitude response test of the DAS is designed according to the configuration illustrated in Figure 3a.The parameters of the DAS device are set to match those used in the frequency response test.By controlling the COOL Edit Pro software, the sound produces sinusoidal signals with a frequency of 1 kHz and amplitude of 0, −10, −20, −30, −40, and −50 dB, respectively.These signals propagate through the air in the form of sound waves.The L2 and L3 fibers are positioned along the signal propagation path, resulting in a change in the phase of the backscattered light from L2 and L3.The device shown in Figure 3c remains unaltered.
The phase data at a distance of 1831 m along the sensing fiber were extracted.To minimize the impact of external disturbances on the amplitude response test results, the acquired phase data were divided into six groups.An FFT was applied to determine the amplitude of each 1 kHz signal.The curve shown in Figure 6 was generated by averaging the results from the six sets of data.The obtained curve was fitted using exponential functions, and the fitted expression is as follows: In formula ( 8), the variable x represents the input signal amplitude in decibels (dB), while the variable y represents the average value obtained from the six sets of 1 kHz signals.The goodness-of-fit value was 0.9949, which is close to 1, indicating a good fit between the fitted curve and the actual data.When the applied signal amplitude was greater than or equal to −40 dB, a clear 1 kHz signal could be observed in the spectrogram.However, when the applied signal amplitude was −50 dB, the signal collected by the fiber was overwhelmed by noise, making it impossible to identify the 1 kHz signal.Therefore, based on this analysis, the determined amplitude response of the device was −40 dB when the length of the sensing fiber was 2090 m.
Spatial Resolution Test and Analysis
To assess the spatial resolution indicator of DAS, a test optical path was designed according to the configuration depicted in Figure 7a.The light pulse repetition frequency was configured to be 26.64 kHz, while the pulse width was adjusted to 100 ns.Using Equation (3), the spatial resolution was calculated to be 10 m.Starting at 1088 m, the fiber was wound inward for 5 m, 10 m, and 20 m to ensure that the endpoint of the applied vibration signal was the same.These wound fibers were placed separately on a vibration platform, while the rest were sealed in a carton.The vibration platform comprised an aluminum plate, sound, and square tube, as shown in Figure 7b.The physical photograph of the vibration platform is shown in Figure 7c.The computer-controlled sound system produced a sinusoidal signal with a frequency of 1 kHz and an amplitude of 0 dB.The vibration signal was sensed using an optical fiber mounted on the insulated rigid plate, which acted as a vibration sensor.This fiber optic sensor captured the vibration signal.The resulting data were then visualized in the form of a waterfall diagram, also known as a position-time-energy image.This diagram, labeled as Figure 8, displays the distribution of the position, time, and energy of the measured vibrations over a specific period.As the length of the sensing fiber increased, the width of the signal on the waterfall map became broader, denser, and more intense.When the length of the sensing fiber was 5 m, the signal width on the waterfall map became narrower, and the signal became intermittent, making it impossible to collect data continuously.When the sensing fiber's length was 20 m, it became evident that the signal acquisition effect was improved when observed through the waterfall diagram.
Sensing Distance Test and Analysis
The sensing distance test optical path of the DAS sensor is shown in Figure 9.The fibers used in this test were all G652D single-mode bare fibers.The length of L2 was set to 4988.5 m and 9988.5 m, respectively.To eliminate the effect of Fresnel reflections, the end of the test fiber L3 was placed 10 m away from the end of the fiber, and the end of the fiber was knotted.The length of L2 was set to 4988.5 m.The pulse repetition frequency of the DAS system was 13.32 kHz, and the pulse width was 100 ns.The L1, L2, and L4 segments of optical fibers were placed in sealed cartons.The computer-controlled sound system produced sinusoidal signals with an amplitude of 0 dB and frequencies of 1 kHz, 3 kHz, 5 kHz, and 6 kHz, respectively.The fiber optic on the insulated rigid plate sensed the vibration signal and performed the measurement.The measurement results are shown in Figure 10.
The length of L2 was set to 9988.5 m.The pulse repetition frequency of the DAS system was 9.99 kHz, and the pulse width was 100 ns.The L1, L2, and L4 segments of optical fibers were placed in sealed cartons.The computer-controlled sound system produces sinusoidal signals with an amplitude of 0 dB and frequencies of 1 kHz and 2 kHz, respectively.The fiber optic on the insulated rigid plate senses the vibration signal and performs the measurement.The measurement results are shown in Figure 11.
The DAS device can effectively measure the signal at a distance of 10 m from the end of the fiber when the length of the sensing fiber is 10 km.Therefore, the sensing distance of the system is no less than 10 km.However, according to Equation (2), when the length of the sensing fiber is 5 km, the theoretical upper limit of the pulse repetition frequency is 20 kHz, and the frequency response limit that the system can measure is 10 kHz.
According to the pulse repetition frequency and Nyquist's sampling theorem, the system can measure a frequency response limit of 13.32/2 kHz.However, in actual measurements, the frequency response is only 5 kHz.When the length of the sensing fiber is 10 km, according to Equation ( 2), the theoretical upper limit of the pulse repetition frequency is 7.992 kHz, and the frequency response limit that the system can measure is 5 kHz.However, according to Equation ( 1), the limit of frequency response that can be measured by the system is 7.992/2 kHz, and the actual measured frequency response is only 1 kHz.The ability of the device has room for improvement in terms of the signal-to-noise ratio and pulse repetition frequency.At the same time, the sensor distance measured to this DAS is only 10 km, and the sensor distance indicator needs to be improved.
Multi-Point Simultaneous Disturbance Test and Analysis
During vibration condition monitoring, simultaneous disturbances at multiple points often occur.Therefore, for practical condition monitoring applications, it is highly significant to carry out multi-point simultaneous perturbation tests.The design of an optical path for multi-point simultaneous perturbation testing is depicted in Figure 12.The light pulse repetition frequency was configured to be 10 kHz, while the pulse width was adjusted to 100 ns.The fiber segments, L2 and L3, were enclosed within two sealed containers.The box positioned at the L3 location was lightly tapped to observe the changes in the waterfall graph within the DAS software V0.6.10.The test results are depicted in Figure 13.In Figure 13a of the waterfall diagram, when no signal is applied, a signal emerges at the position marked by the red box.This signal is a result of the influence of the flange at 60 m and the Fresnel reflection at 2750 m.When only the rear disc fiber is clapped, a signal is observed, as depicted in Figure 13b.Apart from the signal at the position indicated by the red box in Figure 13a, a significant vibration signal was detected in the waterfall diagram precisely 1100 m backward from the tapping location (i.e., the starting position of the second tray of fibers).This observation perfectly matches the actual tapping position.These findings demonstrate that the device successfully meets the requirements for the multi-point simultaneous perturbation measurement.
Temperature Influence Test and Analysis
In principle, the measurement results of the φ-OTDR system are relatively immune to temperature variations.To explore the impact of temperature variations on the sensing performance of the DAS device, a test optical path was designed, as depicted in Figure 14a.The corresponding test setup is illustrated in Figure 14b.The light pulse repetition frequency was configured to be 26.64 kHz, while the pulse width was adjusted to 100 ns.The fiber section L2 was placed in a thermostatic water bath, while L1 and L3 were enclosed in the sealed carton.
The initial water temperature of the thermostatic water bath was set to 50 °C, and the temperature was increased in 10 °C intervals using internal circulation until it reached 100 °C.Once the water temperature stabilized at 100 °C, the internal circulation was utilized to decrease the temperature in 10 °C intervals until reaching 70 °C.The DAS device has a data storage capacity of only 1.25 s.The phase data for 1 s were intercepted, resulting in a total of 333,000 data points being captured.The wavelet packet algorithm was employed to decompose the data into eight layers using the dB3 wavelet as the basis function.This process generated a total of 256 sub-bands, with a frequency interval of 650.4 Hz between each sub-band.The energy of the first 10 sub-bands, specifically nodes (8, 0), (8, 1), (8,2), (8,3), (8,4), (8,5), (8,6), (8,7), (8,8), and (8, 9), were extracted.The frequency range covered by the first 10 sub-bands was from 0 to 6504 Hz.The energy variation curves with temperature for these ten nodes are plotted and illustrated in Figure 15.During the process of warming and cooling operations, the internal circulation mode generated a low-frequency random vibration signal.Moreover, the remaining nodes located in the high-frequency range (greater than 1 kHz) were not impacted by this vibration signal.Their energy levels were low and remained constant regardless of temperature variations.This indicates that the measurement results of this DAS device are not affected by temperature, demonstrating excellent performance.
Example of Performance Evaluation
This paper aims to utilize DAS for high-frequency small signal measurements.The application under the test exhibits a vibration frequency greater than 10 kHz and a small vibration amplitude.This paper utilizes a comprehensive evaluation approach for the DAS system by incorporating the AHP method.
First, the judgment matrix is constructed for each factor in the middle layer concerning the target layer.The DAS is designed to measure high-frequency vibration signals, which typically have small amplitudes.The measurement should prioritize the frequency response and sensitivity indicators.Therefore, when constructing the judgment matrix, the importance of frequency response and sensitivity is given more weight.The judgment matrix is constructed, as shown in Table 3.Using Equations ( 6) and ( 7), the calculation yields the following results: λmax = 6.00001,CI = 0.000003, RI = 1.24,CR = 0.0000023 < 0.1.Therefore, this judgment matrix satisfies the consistency test requirements.The relative weight values for frequency response, sensitivity, spatial resolution, sensing distance, multi-point perturbation, and temperature influence are determined to be 0.2963, 0.2963, 0.1481, 0.1487, 0.0741, and 0.0370, respectively.
Similarly, based on the test results of each indicator and the principle of the AHP, the comparative judgment matrix of the criterion layer (excellent, fair, poor) relative to the intermediate layer (frequency response, sensitivity) is presented in Table 4.Likewise, the assessments for spatial resolution, multi-point perturbation, and temperature influence are detailed in Table 5, while the sensing distance evaluations are provided in Table 6.The weight calculation results of each judgment matrix were obtained.The judgment matrix for evaluating the performance of the DAS was derived.The hierarchical ranking process was completed, and the resulting judgment matrix is presented in Table 7.
The final weights obtained for DAS were 0.5757 for the "excellent" factor, 0.2628 for the "fair" factor, and 0.1621 for the "poor" factor.The weight for the "excellent" factor was the highest among the three.
Conclusions
Based on the φ-OTDR vibration sensing principle, this paper conducts both theoretical analysis and experimental studies to investigate the indicator testing method for distributed acoustic sensors.A performance evaluation method based on the AHP method is proposed for assessing the sensing performance of DAS.The following conclusions can be drawn: (1) Evaluating the performance of DAS involves the consideration of various factors such as frequency response, sensitivity, spatial resolution, sensing distance, multipoint perturbation, and temperature influence.These indicators are interconnected and mutually affect each other, leading to constraints and interdependencies in the evaluation process.As the amplitude of the applied vibration signal increases, the detected signal amplitude on the fiber tends to increase exponentially.φ-OTDR enables multi-point simultaneous positioning, and its measurement results remain unaffected by temperature variations.However, as the sensing distance increases, the actual measured frequency response of the fiber deviates from the theoretical limit value.(2) DAS indicators are analyzed qualitatively and quantitatively.The analytic hierarchy process (AHP) is adopted to evaluate complex problems.The above indicators are categorized into the following three levels: excellent, fair, and poor, for comprehensive evaluation.The weight parameters of each level are determined using the AHP method.The specific implementation process of this evaluation method is illustrated through a comprehensive evaluation example focused on high frequency.This example demonstrates the convenience and effectiveness of the evaluation method.(3) The proposed method for testing the indicators of DAS effectively captures the actual limit values of each indicator.The comprehensive evaluation method can derive performance and applicability evaluation results based on practical engineering applications.It addresses the challenge of evaluating multiple indicators of DAS concerning each other and provides an effective approach for device development and engineering applications.
Figure 3 .
Figure 3. Frequency response testing.(a) Frequency response testing optical path; (b) frequency response testing device; and (c) geometric diagram of optical fiber and sound source.
Figure 4 .
Figure 4. Time domain waveform obtained from frequency response testing.(a) Global time−domain waveform; (b) local time−domain waveform of 8 kHz and 13 kHz.
Figure 7 .Figure 8 .
Figure 7. Spatial resolution testing.(a) Optical path; (b) vibration platform diagram; and (c) the physical photograph of the vibration platform.
Figure 10 .
Figure 10.Frequency response testing results when the sensing distance is 5 km.(a) Spectrogram at 1, 3, and 5 kHz, respectively; (b) spectrogram at the applied frequency of 6 kHz.
Figure 11 .
Figure 11.Frequency response testing results when the sensing distance is 10 km.(a) Spectrogram at the applied frequency of 1 kHz; (b) spectrogram at the applied frequency of 2 kHz.
Figure 13 .
Figure 13.Multi-point simultaneous perturbation testing results.(a) Waterfall plot chart when no signal is applied; (b) waterfall plot when clapping the second box.
Table 1 .
Test/evaluation requirements for DAS in different application areas.
Table 2 .
Judgment matrix of each factor in the middle layer relative to the target layer.
Table 3 .
Judgment matrix of each factor in the middle layer relative to the target layer.
Table 4 .
Judgment matrix of the criterion layer relative to the middle layer (frequency response and sensitivity).
Table 5 .
Judgment matrix of the criterion layer relative to the middle layer (spatial resolution, multipoint perturbation and temperature influence).
Table 6 .
Judgment matrix of the criterion layer relative to the middle layer (sensing distance).
Table 7 .
Judgment matrix of the criterion layer relative to the middle layer (sensing distance). | 8,131.2 | 2024-02-08T00:00:00.000 | [
"Engineering",
"Physics"
] |
Biobased carbon content of resin extracted from polyethylene composite by carbon-14 concentration measurements using accelerator mass spectrometry
An estimation procedure for biobased carbon content of polyethylene composite was studied using carbon-14 (14C) concentration ratios as measured by accelerated mass spectrometry (AMS). Prior to the measurement, additives and fillers in composites should be removed because they often contain a large amount of biobased carbon and may shift the estimation. Samples of resin with purity suitable for measurement were isolated from composites with a Soxhlet extractor using heated cyclohexanone. After cooling of extraction solutions, the resin was recovered as a fine semi-crystalline precipitate, which was easily filtered. Recovery rates were almost identical (99%), even for low-density polyethylene and linear low-density polyethylene, which may have lower crystallinity. This procedure could provide a suitable approach for estimation of biobased carbon content by AMS on the basis of the standard ASTM D 6866. The biobased carbon content for resin extracted from polyethylene composites allow for the calculation of biosynthetic polymer content, which is an indicator of mass percentage of the biobased plastic resin in the composite.
Background
Biobased plastics such as poly(lactic acid) and poly(hydroxyl alkanoic acid) are already produced commercially and are steadily gaining in popularity with public awareness of the environment. Furthermore production of polyethylene and polypropylene, which are major thermoplastic resins, is now achieved from biomass resources (Morschbacker 2009;Peters et al. 2010;Takahashi et al. 2012). To be certain of purchasing biobased plastics, it should be confirmed and certified that they are actually produced from biomass, and, if they must be, how much biobased plastic is contained in the plastics. Products of biomass origin and products of petroleum origin are indistinguishable because they have the same physical and chemical properties if they have same molecular structure. Therefore, in an attempt to increase general consumer knowledge and promote biobased plastics, the Japan Bioplastics Association (JBPA) (Japan Bioplastics Association 2013) is managing the "BiomassPla" mark certification system as an identification system for products of biomass origin. Under this system, products that meet the stipulated standards are certified as Biomass-Pla and are permitted to use the "BiomassPla" logo shown in Figure 1. The degree of biobased synthetic polymer in a product shall be a plastics product of 25.0 wt% or more in one of the authentication conditions of the aforementioned system. The degree of biobased synthetic polymer is the ratio of the biomass origin resin to the plastic product (Table 1).
The biobased carbon ratios of plastics can be estimated by the ratio of 14 C to 12 C measured by accelerator mass spectrometry (AMS) conforming to the standard ASTM D 6866 "Standard Test Methods for Determining the Biobased Content of Natural Range Materials Using Radiocarbon and Isotope Ratio Mass Spectrometry Analysis." The principle of this method using 14 C is based on a dating measurement for historical materials in archeology (Jull and Burr 2006;Currie 2004). 14 C is a radioisotope of carbon atoms with a half-life of 5730 years. 14 C atoms are continuously generated from 14 N atoms due to their interaction with cosmic radiation in the modern atmosphere. The ratio of 14 C to 12 C in modern air is constant at approximately 1 × 10 -12 in spite of the period. Plants absorb carbon dioxide in the atmosphere and incorporate it in their structure by photosynthesis. The ratio of 14 C to 12 C in a plant is 1 × 10 -12 immediately after photosynthesis. The 14 C in plant materials gradually decays into 14 N. The number of 14 C atoms continuously decreases and becomes half after 5730 years. Therefore, the age of materials including carbon atoms can be estimated using the ratio of the number of 14 C atoms to that of 12 C atoms and the half-life of 14 C. The ratio of 14 C to 12 C can be measured by AMS, although this ratio is as low as 1 × 10 -12 . The standard year is defined as 1950 according to the formulas for radiometric dating in ASTM D 6866-08. In ASTM D 6866-08, formulas for radiometric dating are applied to the determination of the biobased carbon content. A percent modern carbon (pMC) value can be estimated by comparing the measured ratio of 14 C to 12 C, and the standard ratio of 14 C to 12 C determined from the appropriate primary reference (oxalic acid) of SRM 4990c supplied by the National Institute of Standards and Technology (NIST), USA (SRM 2013). Theoretically, biobased carbon ratios for petroleum-based materials are estimated at 0%, and for biobased materials at 100%. Our previous reports (Funabashi et al. 2009;Onishi et al. 2010;Tachibana et al. 2010) described estimation of biobased carbon ratios for various polymeric composites with additives and fillers, and discussed repeatability and accuracy of this evaluation method. For reliable estimation of the ratios, we devised pretreatments for AMS samples such as lower-temperature oxidation and reaction by phosphoric acid.
This report describes a pretreatment for an AMS sample of polyethylene products with additives and fillers. Before evaluation of biobased carbon content in bioplastic products by AMS, isolation of resin is necessary to confirm the presence of biobased polyethylene. For isolation of resins we used a Soxhlet extractor, which is a general apparatus for the separation of solvent soluble components from solid materials and has also been used in polymer science. For example, it has been used for the separation of additives from polyolefins using methylene dichloride (El Mansouri et al. 1998), separation of oligomers from polypropylene using n-heptane (El Mansouri et al. 1999), and removal of insoluble parts of cross-linked polyethylene using xylene (Elzubair et al. 2003). We employed a commercial instrument that allows quick extraction at an elevated temperature by heating an extraction chamber of the instrument (Figure 2). In addition to resin isolation as a pretreatment, we considered evaluating the resin content of polyethylene composites. There is no useful method for measuring the content of resin components in polyethylene products because of the variety of chemical and physical properties of plastic products. Linear low-density polyethylene (LLDPE) and lowdensity polyethylene (LDPE) have side chains and branches coming off the main chains. LLDPE and LDPE are different in crystallinity as compared to high-density polyethylene (HDPE), which is composed of a fundamental structure of methylene chains, resulting in substantially different thermal properties (melting point) and X-ray diffraction patterns (Zhu et al. 1999;Mirabella and Bafna 2002;Perez et al. 2000). Therefore, these methods are not useful. Even FTIR measurement, which is the most effective and easiest-to-use analytical method for polymeric materials, can provide unclear results. This is because the presence of branches or side chains on polymer main chains and the lack of uniformity of samples may deform the intensity of absorption bands of the fundamental structures (methylene chains) of polyethylene (Koenig 1992;Hagemann et al. 1989). Furthermore, additives and fillers may interfere with the measurement due to overlapping spectra. As a result, we concluded that the amount of resin recovered from an extraction solution should be viewed as the most sensible value for resin content of plastic products, regardless of incomplete precipitation from the extraction solution and handling loss of recovered resins. The procedure for polyethylene isolation by a Soxhlet extractor in this study was based on a simple principle: difference in solubility behavior for each component in the composites. That is, polyethylene is soluble in a hydrophobic solvent at an elevated temperature and near insoluble at room temperature (Brandrup and Immergut 1966;Barton 1975). After cooling the extraction solution, resin components should form a dense precipitate and be successfully recovered by filtration of the entire extraction solution. Fillers such as graphite, calcium carbonate, starch, and cellulose are insoluble in hydrophobic solvents and remain in an extraction thimble throughout the operation. Organic additives such as antioxidants, UV-absorbents, and flame-retardants, which are mostly low molecular weight substances, have good solubility in organic solvents even at room temperature (Bolgar et al. 2008;Tolinski 2009). Various hydrophobic chemicals are known as good solvents for polyethylene at elevated temperature: hydrocarbons such as xylene and dodecane, and chlorinated hydrocarbons such as 1,2-dichlorobenzene and 1,2,4-trichlorobenzene. However, in this study we used cyclohexanone, which is typically not a good solvent for polyolefins due to a polar carbonyl function on the molecule, to improve resin recovery from the extraction solution. By cooling the solution to room temperature, polyethylene can form a precipitate, whereas hydrophobic organic additives remain in the extraction solution. Filtering the floating polyethylene precipitate and rinsing with a volatile solvent can yield a good sample with purity suitable for AMS measurement, and at the same time, can provide an estimate of the amount of resin content of composites.
Various kinds of additives, including antioxidants, UVabsorbents, nucleation agents, flame-retardants, and fillers, are usually applied to polyethylene products in combination. Extensive testing of composites covering a variety of commercial additives is impractical. Therefore, we chose some fillers (graphite and calcium carbonate), which contain petroleum-based carbon, and others such as pulverized oyster shell (Gofun, where calcium carbonate is a major ingredient), starch, and cellulose, which may often be used in high quantities in polyethylene composites, and may drastically increase estimations of biobased carbon content owing to the bio-origin of carbon. Soluble additives used in commercial polyethylene products are too numerous to count. However, a limited number of compounds may be sufficient for testing because almost all of them have common characteristic properties due to the hydrophobic functional groups needed to retain compatibility with the hydrophobic properties of the resin (see Scheme 1). In spite of the differences in their chemical skeletons and functions, they could be classified together based on their solubility in hydrophobic solvents. Therefore, we tested a flame retardant (decabromodiphenyl ether, DBDPE) because large amounts of flame retardants are usually added (for instance 40 wt%), compared to small amounts of other additives (less than 1%). Another reason the flame-retardant was chosen was due to the fact that this compound has UV absorption properties and could be readily detected.
In this study we planned to confirm whether the isolation procedure of polyethylene proceeds successfully enough to be adapted as part of a standard method for estimating biobased carbon content of industrial products. An important issue is whether a hot solution of composite provides a dense precipitate that could be readily separated from the extraction solution. Physical properties of the precipitates were studied by scanning-electron microscope observation, differential scanning calorimetry (DSC), X-ray diffraction, and UV-visible measurements to confirm crystallinity of the precipitates and adequate removal of the additive from the composites.
Materials
Materials in this study were purchased from the following companies: biobased high-density polyethylene
Soxhlet extraction instrument
A Soxhlet extractor (Büchi Co. B-811) was used as supplied by the manufacturer. However, to shorten dissolution time of polymers, a stirrer bar (15 mm long) was inserted into an extraction thimble. The bar was linked with a motor using a stainless steal wire through a small hole in the cooling tower of the instrument. A glass tube that supplied hot vapor from the solvent reservoir at the bottom to an extraction chamber was wrapped with a film heater to facilitate hot vapor supply. Nitrogen gas was continuously added into the instrument chamber to avoid oxidation of resins.
Preparation of polyethylene composites
Before preparation of polyethylene composites, the fillers (graphite, calcium carbonate, Gofun, cornstarch, and cellulose) were dried at 120°C under reduced pressure until constant weights were obtained. Composite sheets with a thickness of 0.50 mm were prepared according to the previous report (Onishi et al. 2010). Polyethylene fine powder with a particle size less than 125 μm and an additive or fillers were mixed using a mortar and pestle. The resulting mixture was heated to 200°C at 20 MPa for 5 min in a stainless steel mold, and was gradually cooled by standing at room temperature.
Isolation of polyethylene resin from composites
The procedure of extraction and isolation of polyethylene resins from the composites was carried out as follows. A sample of 500 mg of a composite sheet (in small pieces of approximately 3 mm square) was placed in an extraction thimble, and 125 mL of a solvent was placed in a solvent reservoir. The instrument was operated under Soxhlet Warm mode, which is the same as the standard operation of the traditional Soxhlet extractor except for heating the extraction chamber to facilitate extraction. Extraction was carried out for 3 h under a nitrogen atmosphere to avoid oxidation of components, and stirring was used at a rotation rate of 100 rpm in an extraction thimble to facilitate solubilization of the resins. Polyethylene precipitates formed in 30 min as the extractions cooled. Formation of precipitate from solution was rapid (in one min if the solution was cooled with running water). As a precaution solutions were allowed to stand for 3 hours, and the precipitates were subsequently recovered by filtration using a polytetrafluoroethylene (PTFE) membrane filter (Millipore Co., pore size: 0.45 μm). The precipitates were rinsed with the solvent for extraction, then with a volatile solvent (ethanol). The samples for scanning electron microscope observation, X-ray diffraction, and thermal analysis were dried under vacuum for 24 hr without heating to avoid altering the surface morphology and the crystallinity of the samples. For other purposes (weighing of resin recovery, UV-visible, AMS measurements, and ICP analysis) drying was conducted at 60°C to shorten the drying time.
Recovery of fillers from composite
After the extraction operation the wet extraction thimbles were dried under vacuum at 80°C for 1 hr. Based on the weights of an extraction thimble before and after extraction, complete extraction was confirmed and the recovery rate of fillers from composites was calculated.
Observation of polyethylene precipitate on scanning electron microscope (SEM) Polyethylene precipitates were carefully transferred on an adhesive tape on a platform for SEM observation to avoid deformation of the specimens. Platinum deposition (4 nm thick) on the surface of the precipitates was conducted prior to observation using a field emission-scanning electron microscope (Hitachi High-Technologies Co., S-4300).
Preparation of films for measurement of UV-visible spectra
Polyethylene films for measuring transmission of UVvisible spectra were prepared from polyethylene composites and their recovered precipitates by extraction. The samples were placed between a pair of glass plates, and heated under a nitrogen atmosphere at 160°C for 15 min using a pair of stainless steel thickness gauges to prepare films with a thickness of 0.1 mm. UV-visible spectra of the films and a chloroform solution (49 mg/L) of decabromodiphenyl ether as a reference were recorded at a scanning rate of 120 nm/min and a slit width of 2 nm on a spectrophotometer (Shimadzu Co. U-3000).
Wide-angle X-ray scattering (WAXS) X-ray scattering of polyethylene samples was recorded on a Rigaku Miniflex II diffractometer with CuKα-Ni-filter radiation. Plates of pure polyethylene with high crystallinity were used as a reference (24 mm in diameter and 2 mm in thickness); these were prepared on aluminum pans by heating at 150°C for 15 min and pressing the resin surfaces with a glass plate. After cooling at room temperature the resin plate was annealed at 100°C for 24 hr. The scan rate was 2°θ/min. WAXS patterns of polyethylene precipitates were obtained in the same way, by preparing the samples in the same aluminum pans in a way that was as compact as possible.
Differential scanning calorimetry (DSC)
Melting properties of polyethylene samples were measured on a conventional DSC instrument (DSC 7020, Hitachi High-Tech Science Co, the former Seiko Instrument Inc.). Two scanning cycles of heating and cooling were carried out under a nitrogen gas atmosphere between −30°C and 150°C at a scan rate of 10°C/min.
Analysis of calcium by radiofrequency inductively coupled plasma (ICP)
The precipitates obtained from the Soxhlet extractor using 500 mg of composite containing calcium carbonate or Gofun were moistened in 25 mL of acetone followed by 25 mL of 1% nitric acid. The mixture was agitated in a glass vessel using an ultrasonic bath for 3 hr. After the resin powder was filtered out, acetone in the filtrate was evaporated under reduced pressure. The resulting solution was diluted to 100 mL with 1% nitric acid and analyzed using an analytical instrument for radiofrequency inductively coupled plasma (ULTIMA2, HORIBA Ltd., the former Jobin Yvon S.A.S.). An amount of 500 mg of calcium carbonate or oyster shell (Gofun) powder was placed in an extraction thimble and treated with the common extraction operation. The extraction solvent in the reservoir was removed under reduced pressure until dry. The residue was added to 25 mL of 1% nitric acid and 25 mL of chloroform, and was shaken using a separable funnel to remove hydrophobic impurities derived from the extraction solvent. After the remaining chloroform, which was dissolved in the aqueous layer, was removed under reduced pressure, the aqueous solution was diluted with 1% nitric acid to 100 mL volume and analyzed using the ICP instrument.
Results and discussion
The Soxhlet extractor is a sophisticated instrument for lab work, and the extraction of resin proceeded smoothly once a heating program was properly set with reference to an instruction provided by Büchi Co. In a preliminary experiment using resin pellets, completion of resin extraction could be confirmed via weight change of the extraction thimble before and after the operation. This was due to the low hygroscopicity of the extraction thimble; a small amount of remaining resin in the extraction thimble could be confidently detected (the weight increase of the extraction thimble was 0.21 wt%, or approximately 14 mg, when the extraction thimble was dried under vacuum at 60°C and exposed to air with 53% relative humidity). Solvent selection was generally the most important factor in extraction experiments. We tested several commercially available solvents that had proper solvency and boiling points (bp) for the resins. The dissolution rate of resins and the heating capacity of the instrument restricted the boiling point of solvents to around 150°C. The candidates included good solvents for polyolefins, such as o-xylene (bp 144°C) and 2-chlorotoluene (bp 159°C), and also unsuitable solvents including ketones and esters, such as cyclohexanone (bp 155°C), 2-heptanone (bp 151°C), 5-methyl-3-heptanone (bp 157°C), and amyl acetate (bp 148°C). In preliminary experiments for dissolution of polyolefins in test tubes, unsuitable solvents showed longer dissolution times for resins in comparison with good solvents. The extraction time for resins using the Soxhlet extractor was as long as expected. Extraction was completed using o-xylene for 1 h and cyclohexanone for 3 hr. Though a good solvent, o-xylene, was better in terms of extraction time, we ultimately used cyclohexanone because of the recovery rate of resins from extraction solvents and its wider commercial availability among several unsuitable solvents. Good solvents, such as o-xylene, sometimes produced swelling precipitates that were inconvenient for filtration of the extraction solution. This situation was not a serious issue for polyethylene resins, but it did pose a problem for other polyolefins, such as polypropylene and poly(1-butene). We preferred a common set of extraction conditions that would be applicable to a wide range of polyolefins. We plan to report elsewhere on the extraction experiments of polypropylene resins, including copolymers of propylene and ethylene, using good and unsuitable solvents. When heated extraction solutions were cooled to room temperature, the solubility of polyethylene decreased resulting in white precipitates of resin in 30 min. The precipitates were recovered on a porous PTFE membrane by filtration, followed by an ethanol rinse to shorten the drying time. bLLDPE and pLDPE, which have side chains or branching on main chains (making them problematic to crystallize), also formed the same bulky-looking but dense precipitates as bHDPE. As shown in Figure 3, scanning electron microscope observation showed that these precipitates were coarsesurfaced particles.
X-ray diffraction of the precipitates obtained from polyethylene resins showed an overlap between the crystalline and amorphous patterns indicating that the precipitate was a semi-crystalline polyethylene, as shown in Figure 4a. The peaks were broader than that of a resin plate prepared by annealing at 100°C (Figure 4d). Even when the test tube was cooled rapidly in an ice-water bath, the hot solution produced dense precipitates instead of a swelling gel. Therefore, it is not necessary to control the cooling condition of the hot extraction solution to recover polyethylene precipitates. The hot solutions of composites with additives also formed dense precipitates, which could be just as readily separated as pure propylene resins. Melting properties observed by DSC were an indicator of crystallinity of the precipitates. An endothermic peak for bHDPE precipitates appeared at slightly lower temperature (127.9°C) during the first heating compared to the second heating (130.0°C) (Figure 5a). A substantial difference in enthalpy change between the first and second scans (196 J g -1 , 195 J g -1 , respectively) was not observed. The precipitates could be considered to have a moderate level of crystallinity when melting points and enthalpy changes are compared with those of resin annealed at 100°C (136.0°C, 223 J g -1 ) (Figure 5b). However, the thermogram and the XRD results indicated that the precipitates have good crystallinity.
Precipitates from bLLDPE showed a melting peak at a lower temperature and with lower enthalpy change (123.3°C, 126 J g -1 ) than those of bHDPE, as expected from chemical structures with side chains derived from a co-monomer (1-butene) (Figure 5c). Precipitates from pLDPE showed a melting peak at a lower temperature and lower enthalpy change (108.4°C, 137 J g -1 ), as expected from branching of main chains (Figure 5d). These thermal properties may be generally observed in polymeric materials where crystallization is hindered by branching or side chains. It is worth noting that the thermogram from the first scan was almost the same as the second scan. This indicates that the precipitates from extraction solutions hold almost the same physical properties as the bulk materials.
We concluded that the recovery rate of resins from composites accounts for the content of resin within composites. This assumption was based on the quantitative recovery of resins from the extraction solutions (Table 2). Excellent recovery rates were observed in spite of losses encountered during handling of precipitates for bHDPE (Entry 1, 98.8%), and further, for bLLDPE (Entry 5, 99.6%) and pLDPE (Entry 11, 98.9%). Low recovery rates for bLLDPE and pLDPE were worrisome because of the intrinsic low regularity of polymer chains. Crystalline properties of the polymers and prompt crystallization may explain the high recovery rate of resins.
To quantitatively understand the removal of fillers from composites, polyethylene composites containing graphite, calcium carbonate, starch, and cellulose, were used for an extraction experiment, and the separation of the components was confirmed by the amount of recovered resins from extraction solutions and remaining fillers on the extraction thimbles. Recovery rates for resins from six composites (Entry 2, 3, 6, 7, 9, and 10) a b c d ranged from 73.6% to 75.2%, and the recovery rates for the remaining fillers ranged from 23.9% to 25.5% (Recovery rates used in this paper were calculated based on the total sample weight of the composites. The theoretical maximum recovery rates of resin and filler in this paper are 75% and 25%, respectively). These results were almost equivalent to the component percentage of the composites, and could satisfy our protocol when handling loss of fine precipitates and uneven quality of the composites are taken into account. In the case of the composites containing solvent-soluble additives (DBDPE, Entry 4 and Entry 10), the recovery rate of resins (Entry 4, 73.7% and Entry 8, 75.2%, respectively) were the same as the composites containing graphite or calcium carbonate, although the recovery rates of the additives were apparently null. This indicates that the additives that dissolved at elevated temperatures still remained in the extraction solution at room temperature and were not incorporated into the resin precipitates. From the composites containing starch or cellulose, additives, which may intentionally be added in order to raise the apparent biobased carbon content of plastic products, polyethylene resins were successfully isolated at the theoretical recovery rates (Entry 9, 74.9% and Entry 10, 74.6%). The isolation of resins from the composites containing graphite or calcium carbonate, which do not contain biobased carbon, was confirmed on the basis of biobased carbon content measured by AMS. The biobased carbon content of the recovered precipitates from composites of bHDPE (Table 3, Entry 2, 97.7%, Entry 3, 98.1%, and Entry 4, 100.2%) were comparable to that of the original resin (97.7%) indicating effective elimination of the fillers from the composites. In case of composites of bLLDPE, clear isolation of resins was also confirmed: biobased carbon content for precipitates were measured at 87.4% (Entry 6) and 88.1% (Entry 7), whereas the original composite contained 88.4% (Entry 5). AMS results also affirmed that extraction of the composites containing the solvent-soluble additive DBDPE (Entry 4, 100.2% and Entry 8, 89.0%) yielded resin precipitates that were not contaminated with additive. Figure 6 shows the UV-visible spectra of thin composite films of bHDPE and bLLDPE containing DBDPE (75/ 25) and the films prepared from the corresponding precipitates (Entries 4 and 8 in Table 2). The composite films of bHDPE ( Figure 6a) and bLLDPE (d) showed a strong absorption (and scattering) in the ultraviolet region due to the aromatic skeleton of the additive molecule. On the other hand, the films from precipitates showed weak absorption bands and backgrounds with light scattering (b and e). A decrease in the absorption bands for the composites of the organic additive indicated the efficiency of the isolation process. Ratios of the additive to resins, calculated from a chloroform reference solution of the additive (g), were small (0.02 wt% for bHDPE (Entry 4) and 0.04 wt% for bLLDPE (Entry 8)). These findings suggest that the formation of pure precipitates lacking any accompanying additive may depend on prompt crystallization of polymers from the solutions and a high degree of crystallinity of the polymer.
Our results indicate that the extraction solvent should have enough solvency for additives at room temperature. Solubility tests were conducted for typical additives: decabromodiphenyl ether (flame retarder, DBDPE) (1), 2,2-Bis[3,5-dibromo-4-(2,3-dibromopropoxy)phenyl]propane (flame retarder) (2), 3,5-di-tert-butyl-4-hydroxytoluene (antioxidant) (3), tris(2,4-di-tert-butylphenyl)phosphite (antioxidant) (4) 3,9-Bis(octadecyloxy)-2,4,8,10-tetraoxa-3,9-diphosphaspiro[5.5]undecane (antioxidant) (5), 2-(3,5-di-tert-amyl-2-hydroxyphenyl)benzotriazole (UV-absorbent) (6), 2-hydroxy-4-n-octyloxybenzophenone (UV- absorbent) (7), and bis(2,2,6,6-tetramethyl-4-piperidyl)sebacate (photo-stabilizer) (8). At room temperature 10 mg of each organic additive (1-8 in Scheme 1) readily resolved in 1 mL of cyclohexanone in a few minutes, and at 150ºC the process was more rapid. After cooling the solutions to room temperature almost all the organic additives were still soluble except for DBDPE, which dissolved at a diluted concentration (10 mg/10 mL). The solvency must satisfy the requirements of the isolation process of polyethylene because the amount of additives applied to plastic products are generally less than one percent. As far as biobased carbon content of precipitates from composites containing petroleum-based fillers indicate (Table 3), collection efficiency of the extraction thimble was adequate to remove fillers. Despite this result, efficiency of the extraction thimble was confirmed via quantitative analysis of calcium derived from calcium carbonate or Gofun (oyster shell) that escaped through the extraction thimble. In the case of the extraction experiment using resin/Gofun composite, calcium was readily detectable (Table 4, Entry 2). The details were not clear. However, a calcium carbonate level of 0.39% in the precipitates was considered a negligible amount for estimation of the biobased carbon content using AMS. The amount induced a slight shift of biobased carbon content (0.05%) because the carbon content of calcium carbonate is lower than that of polyethylene resin (12.0% and 85.6%, respectively). Generally the carbon content of fillers is less than polyethylene resins, except for graphite due to hetero elements in the material. The amount of filler or additive that may shift biobased carbon content by 0.3% (detectable limit of AMS) was calculated at 0.6% for cellulose or starch, and 1.7% for DBDPE. These considerations indicate that this isolation procedure can satisfy the standard pretreatment.
Conclusion
Considering the solubility of polyethylene, fillers, and additives applied in polyethylene products, an isolation procedure of resin samples for AMS analysis was examined using a Soxhlet extractor. Solvent-soluble additives and solvent-insoluble fillers in the various model composites were effectively removed to isolate pure resin samples suitable for AMS analysis. Recovery rates of polyethylene from heated solutions of composites (typically 99%), and rejection rates for additives and fillers (>99%) indicate that this procedure is an effective pretreatment for polyethylene products prior to AMS measurements. In addition, results indicated that the biobased synthetic polymer content could be confirmed from the biobased carbon content of resins. | 6,612.2 | 2014-01-03T00:00:00.000 | [
"Materials Science"
] |
Investigation of Hydrothermal Performance in Micro-Channel Heat Sink with Periodic Rectangular Fins
The micro-channel heat sink (MCHS) is an excellent choice due to its exceptional cooling capabilities, surpassing those of its competitors. In this research paper, a computational fluid dynamics analysis was performed to investigate the laminar flow and heat transfer characteristics of five different configurations of a variable geometry rectangular fin. The study utilized a water-cooled smooth MCHS as the basis. The results indicate that the micro-channel heat sink with a variable geometry rectangular fin has better heat dissipation capacity than a straight-type micro-channel heat sink, but at the same time, it has larger pressure loss. Based on the analysis of various rectangular fin shapes and Reynolds numbers in this study, the micro-channel heat sink with rectangular fins exhibits Nusselt numbers and friction factors that are 1.40–2.02 and 2.64–4.33 times higher, respectively, compared to the smooth heat sink. This significant improvement in performance results in performance evaluation criteria ranging from 1.23–1.95. Further, it is found that at a relatively small Reynolds number, the micro-channel heat sink with a variable geometry rectangular fin has obvious advantages in terms of overall cooling performance. Meanwhile, this advantage will decrease when the Reynolds number is relatively large.
Introduction
With the development of microfabrication technology, more and more electronic gadgets and microelectronics represent an irreversible change in high power, high heat dissipation, and miniaturization, especially in the fields of computing, automobile, telecommunication, and aerospace industries [1][2][3].The significant challenge in the miniaturization of semiconductor products arises from the substantial heat generated within a limited space.Advanced electronic devices and microelectronics of the new generation are expected to produce heat dissipation in the range of multiple kilowatts, potentially reaching up to 1000 W/cm 2 [4].The operational temperature of microelectronic devices is influenced by their physical properties.For every 1 • C increase within the working temperature range of 70-80 • C, the reliability of these devices decreases by 5%.Additionally, a 10 • C rise in the junction temperature of electronic components leads to a 50% increase in the failure rate [5].Many conventional heat removal technologies cannot effectively enhance the performance of heat transfer under the condition of heat flux of more than 100 W/cm 2 [6].Hence, the efficient thermal management of microelectronic devices is essential, considering overheating is harmful to the efficiency and reliability of microelectronic components.As a result, developing an efficient heat dissipation solution becomes a top priority.In recent years, the MCHS has emerged as the predominant heat dissipation method for semiconductor devices, particularly in the field of thermal solutions.The MCHS design consists of multiple parallel coolant micro-channels with varying widths, effectively reducing the thickness of the thermal boundary and significantly increasing the heat exchange surface area.
The single-layered MCHS was initially developed by Tuckerman in 1981.This heat sink design has the capability to dissipate heat at a rate of 790 W/cm 2 under a temperature difference of 71 K between the inlet and outlet [7].The primary purpose of the MCHS is to enhance the natural and forced convection's ability to transfer heat.A study conducted by Adham focused on investigating the pressure drop and heat transfer characteristics of MCHS.The methodologies were also evaluated by the researchers.These approaches were employed to assess the overall performance of micro-channel heat sinks under various conditions of physical property parameters [8].
In light of the growing emphasis on size reduction and stringent temperature limitations in integrated micro-cooling systems, micro-channel heat sinks with passive microstructures are considered an efficient solution to meet these demands.This is because they do not require any external energy source, making them highly favorable in terms of energy efficiency.Xu [9,10] conducted a series of experiments and simulations to investigate the heat transfer characteristics of a micro-channel heat sink.This particular heat sink configuration consisted of parallel longitudinal micro-channels and multiple transverse microchambers.The findings revealed that the heat sink design was able to effectively reduce the pressure drop while enhancing heat transfer performance.This enhancement can be attributed to the reduced effective flow distance within the micro-channels.In their study, Cai et al. [11] investigated the impact of micro-channel geometry, rectangular ribs, and rib height on the overall performance of a heat sink.The researchers conducted a comparison between interrupted micro-channel heat sinks that incorporated rectangular ribs in transverse chambers and conventional heat sinks to analyze the obtained results.By evaluating the performance based on specific criteria, they aimed to determine whether the interrupted micro-channel heat sinks with rectangular ribs offered a superior cooling solution.Cheng [12] numerically simulated the effects of varying the microstructures on the thermal performance of the heat sink.They found that increasing the number of microstructures in each layer of the heat sink improved the thermal performance, resulting in a decrease in the overall temperature of the heat sink.In Xia's study [13], the objective was to investigate the influence of structural parameters on the heat transfer rate and fluid flow within a system.The results revealed that changes in these parameters, such as the presence of reentrant cavities and the occurrence of jet and throttling effects, led to a notable improvement in the system's performance.Specifically, the slipping of the working fluid over the reentrant cavities and the resulting jet and throttling effects played a crucial role in enhancing heat transfer and fluid flow within the system.This was achieved by allowing the fluid to flow more efficiently, creating a smoother and more efficient flow.Sui and Mohammed [14][15][16] conducted comprehensive studies involving both experimental and numerical approaches to investigate the laminar flow and heat transfer characteristics in wavy micro-channel heat sinks.Their research aimed to understand the behavior of fluid flow and heat transfer performance in these specific heat sink configurations.Their findings suggest that wavy micro-channel heat sinks have the potential to be more effective at dissipating heat than smooth micro-channel heat sinks.This is because the wavy channels create more turbulence, which increases the number of heat transfer points and increases the overall heat transfer rate.Additionally, the wavy channels also create more surface area, which increases the rate of heat transfer from the channel walls to the surrounding environment.Qu [17] analyzed the micro-pin-fin heat sink and the straight channel micro-channel heat sink and found that the straight channel micro-channel heat sink has a higher thermal resistance but a lower pressure loss.The two types of heat sinks are suitable for different applications.For applications where heat dissipation is the priority, the micro-pin-fin heat sink is a better choice.Rahbarshahlan et al. [18] improved heat transfer by adding hydrophobic surfaces to parts of the micro-channel, and research shows that the hydrophobic surfaces can reduce the amount of friction between the liquid and the surface, which further increases the efficiency of the heat transfer process.Zhang [19] enhanced heat transfer by the nanofluid-cooled heat sink.They obtained that reducing the diameter of the nanoparticles can increase the surface area, which can increase the number of contact points between the nanoparticles, resulting in better heat transfer performance.Increasing the volume fraction of the nanoparticles can also increase the number of contact points, which can improve the thermal conductivity of the heat sink.
Ghani et al. [20] explored the integration of sinusoidal cavities and rectangular ribs in the design of micro-channel heat sinks (MCHS).The results of their investigation demonstrated that the inclusion of sinusoidal cavities effectively increased the flow area, thereby minimizing pressure drop.Additionally, the presence of rectangular ribs enhanced the Nusselt number by reducing flow obstruction and promoting better heat transfer within the heat sink.Wang et al. [21] performed an experimental study to investigate the impact of microscale ribs and grooves on the performance of MCHS, with the Nusselt number enhanced up to 1.55 times that of a smooth channel.Zhai et al. [22] performed numerical investigations to analyze the performance of micro-channel heat sinks (MCHS) with various geometric structures of cavities and ribs.Their research aimed to understand how these variations in cavity and rib designs influenced the overall performance of the heat sink.The triangular cavities and ribs were found to be more effective than the other shapes.The triangular cavities and ribs provided higher thermal conductivity and better heat transfer characteristics compared to the other shapes.This is because of the increased surface area of the triangular cavities and ribs which allowed for more efficient heat transfer from the fluid to the walls of the micro-channel.Alfellag et al. [23] conducted numerical simulations to explore the fluid flow and heat transfer characteristics in a micro-channel heat sink featuring trapezoidal chambers and oval fins, both with and without slots.They found that the suggested design exhibited a pin aspect ratio of 1.25, a pin distance from the cavity center of 0.03 mm, and a slot thickness of 0.008 mm.Consequently, this design fulfilled a higher performance assessment requirement of 1.37.Based on the studies mentioned above, it is evident that the inclusion of ribs and cavities in micro-channel heat sinks improves heat transfer performance but also increases pumping power requirements.Extensive research has been conducted on conventional rib designs, but limited work has been found regarding the use of unique rib shapes.
In this investigation, a numerical simulation is performed to analyze the laminar flow and heat transfer within a micro-channel heat sink with variable geometry rectangular fins.This study presents the first reported performance analysis of such a structural arrangement, which has the potential to greatly improve thermal dissipation.The objective of this research is to compare the Nusselt number, performance evaluation criteria, fluid flow characteristics, pressure distribution, and temperature distribution with those of a conventional straight micro-channel heat sink (MCHS).The addition of a rectangular fin is expected to significantly improve MCHS's overall performance.
Numerical Approach 2.1. Conservation Equations
Using a laminar flow, incompressible, steady-state, and three-dimensional model, the fluid flow within the micro-channel heat sink (MCHS) is simulated.The model was proposed by Zhang [24].The model considers the effect of wall conduction on the velocity profile and the effect of fluid axial conduction on the temperature profile.
In order to simplify this numerical model, several assumptions are made as follows: 1.
The flow is the Newtonian incompressible laminar flow that is steady and continuous.
2.
Volume force, surface tension, and radiation heat transfer are not considered.
3.
Thermophysical properties are constant for the solid domain.
According to the assumptions made in this study, the numerical model incorporates the following equations for energy, continuity, and momentum, which are applicable to different micro-channel configurations [11].
Continuity equation:
Momentum equation: Energy equation: For the solid: Here, χ 1 , χ 2 , and χ 3 represent the x, y, and z coordinates, respectively.ρ, µ and c pf is density, dynamic viscosity, and specific heat capacity, respectively.k is thermal conductivity.Subscripts f and s refer to fluid and solid, respectively, as shown in Figure 1.
For the solid: Here, χ1, χ2, and χ3 represent the x, y, and z coord density, dynamic viscosity, and specific heat capa conductivity.Subscripts f and s refer to fluid and solid, 1.
Boundary Conditions
In the system modeling, a uniform heat source with applied to the bottom surface of the substrate, specifical is positioned.The top wall surface of the channels and are considered to have thermal insulation.In contrast contact with the fluid are thermally linked through so uniform velocity condition is assumed at the inlet, and the outlet (assumed as gauge pressure).To simplify th inlet temperature and apply no-slip boundary condition
Boundary Conditions
In the system modeling, a uniform heat source with a power density of 400 KW/m 2 is applied to the bottom surface of the substrate, specifically where the heat-generating chip is positioned.The top wall surface of the channels and the outer surfaces of the heat sink are considered to have thermal insulation.In contrast, the remaining walls that are in contact with the fluid are thermally linked through solid-fluid thermal conduction.A uniform velocity condition is assumed at the inlet, and a pressure condition is applied at the outlet (assumed as gauge pressure).To simplify the simulations, assume a uniform inlet temperature and apply no-slip boundary conditions to all walls [11].
This paper utilizes FLUENT 19.2, a software based on the finite volume method (FVM), to investigate the flow and heat transfer characteristics of the micro-channel heat sink (MCHS).To solve the coupled fluid-solid heat transfer problem, the same SIMPLEC algorithm employed in the previous literature is employed to address the heat transfer between the fluid and solid surfaces.The momentum and energy equations were discretized using a second-order upwind scheme.The chosen methodology is based on its ability to achieve rapid convergence to the numerical model.The solutions are deemed to have converged when the residuals for continuity, velocity, and energy equations are below 10 −6 , 10 −6 , and 10 −7 , respectively.Figure 1 illustrates the schematic diagram of the conventional rectangular micro-channel heat sink geometry.The parameters considered are within the following ranges: T in = 293 K, P out = 0 (gauge pressure), and q w = 400 KW/m 2 .Water and silicon are employed as the working fluid and solid material, respectively.The techniques for enhancing convective heat transfer can be categorized into passive techniques, active techniques, and combined enhancement methods [11].Commonly used passive methods to enhance convective heat transfer include extended surfaces, flow disturbance devices, and flow channel structures [11].The simulated model presented in Figure 2 of this paper is based on the aforementioned passive design principles.The structural design of cases 1 to 4 not only increases the surface area for heat transfer between the cold and hot fluids within the flow channel but also incorporates interrupted structures that induce bending of the fluid, promoting secondary flow and generating strong vortices even in laminar flow conditions.Figure 2 presents the detailed dimensions for case 0-case 4, while Table 1 provides the specific parameters for the external dimensions.Case 1 in Figure 2 involves adding a rectangular fin with a length of L B inside the channel.Case 2 builds upon case 1 by incorporating periodic grooves into the rectangular fin with a length of L B2 and a depth of 1/2W B .Case 3 optimizes the rectangular fin by introducing a break with a length of L B4 .Case 4 further optimizes case 3 by applying periodic breaks with a length of L B4 .The design of these grooves and breaks in case 1-case 4 generates vortices in the opposite direction to the mainstream flow within the channel.These vortices effectively mix the hot and cold fluids, continuously disrupting the thermal boundary layer and enhancing heat transfer capability.The thermophysical properties of the working fluid are obtained from the corresponding reference [11].
are within the following ranges: Tin = 293 K, Pout = 0 (gauge pressure), a Water and silicon are employed as the working fluid and solid materi techniques for enhancing convective heat transfer can be catego techniques, active techniques, and combined enhancement method used passive methods to enhance convective heat transfer include exte disturbance devices, and flow channel structures [11].The simulated Figure 2 of this paper is based on the aforementioned passive des structural design of cases 1 to 4 not only increases the surface are between the cold and hot fluids within the flow channel but also incor structures that induce bending of the fluid, promoting secondary fl strong vortices even in laminar flow conditions.Figure 2 pre dimensions for case 0-case 4, while Table 1 provides the specific external dimensions.Case 1 in Figure 2 involves adding a rectangular LB inside the channel.Case 2 builds upon case 1 by incorporating pe the rectangular fin with a length of LB2 and a depth of 1/2WB.Ca rectangular fin by introducing a break with a length of LB4.Case 4 fur 3 by applying periodic breaks with a length of LB4.The design of breaks in case 1-case 4 generates vortices in the opposite direction flow within the channel.These vortices effectively mix the ho continuously disrupting the thermal boundary layer and enhan capability.The thermophysical properties of the working fluid are corresponding reference [11].
Data Acquisition
In the current work, the MCHS is characterized by the following govern fluid flow and heat transfer within the channels.
Data Acquisition
In the current work, the MCHS is characterized by the following parameters, which govern fluid flow and heat transfer within the channels.
The Reynolds number is a dimensionless quantity that is defined as: where ρ f represents the volume average fluid density, u m is the average flow velocity in the smooth channel section, D h is the hydraulic diameter, and µ f denotes the dynamic viscosity of the fluid.
The Reynolds number, at which laminar flow changes to turbulent flow is known as the critical Reynolds number, and previous studies have shown that laminar flow is common in micro-channels and that the critical Reynolds number is between 1000 and 1500 [25].Therefore, in the numerical simulation of this paper, the laminar flow model was chosen for the numerical calculation, and the Reynolds number of fluid flow was kept below 1000.
The pressure drop (∆P) is defined as the difference in pressure across the length of the micro-channel.∆p = ∆p in − ∆p out (7) ∆p in and ∆p out are the mass-weighted average inlet and outlet pressure.The average friction factor can be defined as: where L denotes the whole length of the channel.The average heat transfer coefficient is determined by the following formula: q w is heat transfer through the base, A b is an area of the base, and ∆T is the temperature difference between the wall and fluid.
Nusselt number Nu is a dimensionless number that represents the intensity of convective heat transfer and is also a standard for judging the performance of fluid heat transfer.Its expression is as follows: The PEC is given by: Within this framework, the average heat transfer coefficient is determined by utilizing the Nusselt number (Nu 0 ) and friction factor (f 0 ) of the smooth micro-channel heat sink for calculation purposes.
Meshing and Grid Independent Test
To ensure the reliability of the simulations conducted in this study, the mesh independence assessment is performed.The rectangular straight micro-channel (referred to as case 0) is employed to assess the mesh independence.Different mesh sizes are employed and described in the provided table.Figure 3 illustrates the unstructured grid employed in this numerical simulation, which is refined locally using the meshing method.
To ensure the reliability of the simulations conducted in this study, th independence assessment is performed.The rectangular straight micro-channel ( to as case 0) is employed to assess the mesh independence.Different mesh s employed and described in the provided table.Figure 3 illustrates the unstructu employed in this numerical simulation, which is refined locally using the m method.As a result, four sets of unstructured grids generated by the pre-pr software ANSYS MESHING 2022 are evaluated, namely mesh 1, mesh 2, mesh mesh 4. The accuracy difference between the finest mesh and any other mesh determined using the following formula: In the provided formula, M1 represents the finest mesh while M2 represe other mesh.The inlet velocity is specified as 0.5 m/s, and the corresponding re presented in Table 2.The results show that the inlet and outlet pressure drop obtained from the nu calculation of the model with a grid number of 1.479 million is the base, when number of grids is 0.379 million, the relative error of the results is 1.04%; w number of grids is 1.079 million, the relative error of the results is 0.23%.The re more accurate when the dense, denser, and very dense grid models are u numerical calculations, but they are more time-consuming.Therefore, on the bas accuracy and economy of the numerical calculation, the model with a grid nu 0.652 million was chosen.For the other models, the same method as case 0 was em As a result, four sets of unstructured grids generated by the pre-processing software ANSYS MESHING 2022 are evaluated, namely mesh 1, mesh 2, mesh 3, and mesh 4. The accuracy difference between the finest mesh and any other mesh can be determined using the following formula: In the provided formula, M 1 represents the finest mesh while M 2 represents any other mesh.The inlet velocity is specified as 0.5 m/s, and the corresponding results are presented in Table 2.The results show that the inlet and outlet pressure drop obtained from the numerical calculation of the model with a grid number of 1.479 million is the base, when the grid number of grids is 0.379 million, the relative error of the results is 1.04%; when the number of grids is 1.079 million, the relative error of the results is 0.23%.The results are more accurate when the dense, denser, and very dense grid models are used for numerical calculations, but they are more time-consuming.Therefore, on the basis of the accuracy and economy of the numerical calculation, the model with a grid number of 0.652 million was chosen.For the other models, the same method as case 0 was employed for grid independence verification in this simulation.Four different grids were selected and evaluated based on the parameter E. The final grid numbers used were 0.763 million, 0.754 million, 0.784 million, and 0.772 million.
Validation for Numerical Model
To validate the accuracy of the numerical calculation method employed in this paper, the computed results of the inlet and outlet pressure drop, as well as the fluid temperature difference, in the smooth channel (referred to as case 0) are compared with the corresponding theoretical calculations [26,27].
The theoretical formula for the pressure drop across the inlet and outlet of a rectangular micro-channel under laminar flow is given by: where P 0 is Poiseuille's number and K is the correction factor.P 0 and K are calculated as: In this paper, the aspect ratio (α) of the rectangular micro-channel is assumed to be 0.5.The theoretical equation for the temperature difference between the fluid inlet and outlet of the micro-channel proposed by Garimella and Singhal [28] is as follows: A and A in are the areas of the bottom and inlet cross-sections of the micro-channel, respectively.The results of the numerical calculation of the pressure drop and fluid temperature difference between the inlet and outlet of the rectangular micro-channel at different Reynolds numbers are compared with the theoretical calculation.The comparison of our simulation results with the theoretical results from Steinke and Kandlikar [27] and Garimella and Singhal [28] is illustrated in Figure 4.It is evident that the discrepancy between our simulation results and the theoretical results is less than 10% for both pressure drop and fluid temperature difference, indicating that our code can be utilized with increased confidence.
Flow Distribution
In this research, 3D numerical simulations for various geometric parameters are carried out to obtain the thermal performance of the micro-channels with the variable geometry rectangular fin.There are five input velocities taken into account: 0.6, 0.8, 1.0, 1.2, and 1.4 m/s.An in-depth analysis is completed on the corresponding pumping power and Reynolds number ranges.The subsequent paragraphs investigate the effects of these parameters on temperature distribution, pressure drop, and thermal resistance, revealing their respective impacts.
Understanding the flow structure within the micro-channel is crucial for assessing the performance of the MCHS.Thus, it is imperative to investigate the flow structure within the channel with a rectangular fin. Figure 5 illustrates the flow profile in the micro-channel with the variable geometry rectangular fin.Streamlines for each case are depicted in the x-z planes at a Reynolds number of 508.
Micromachines 2023, 14, x FOR PEER REVIEW 10 of 17 When the main flows traverse the rectangular fin, they deflect upwards in the rib upstream and create a recirculation zone downstream of the fins.The presence of a thin thermal boundary layer in the MCHS configuration can improve the mixing capability of both cold and hot fluids [28].
Comparing cases 0, 1, 2, 3, and 4 reveals that the depth of recirculation correlated with the separation of the main channel at the fin tip. Figure 6 illustrates the velocity distribution in the x-y plane at a height of z = 0.25 mm.In the case of case 0, it is evident that the velocity distribution exhibits maximum values near the center of the channel, gradually decreasing towards the walls.This is because the velocity of the fluid in the channel is greatest at the center due to the Bernoulli equation (i.e., the pressure is the lowest at the center).Therefore, the fluid is flowing faster at the center of the channel, resulting in a higher maximum velocity than at the edges.In the micro-channel with offset fins, the fluid experiences higher maximum velocities as it moves towards the sidewall without ribs.The fins create friction between the fluid and the wall, and that friction causes the velocity of the fluid to decrease.Without the fins, there is less friction, so the fluid will be able to move faster.Figure 6 demonstrates that the velocity within the recirculation zone is comparatively lower.It can be seen from the figures that the obstructions cause significant effects on the flow of the MCHS fluid.When encountering obstructions, the boundary layer of the fluid inside the flow channel is continuously disrupted, enhancing the mixing effect between the cold and hot fluids.The rectangular fin, located in the micro-channels, provides a different velocity distribution for the laminar flow profile than those of straight, smooth MCHS.Thus, the boundary layer cannot expand fully, and it is much less thin than smooth MCHS.
The presence of obstacles in MCHS leads to a significant increase in the Nusselt number.This is primarily attributed to the obstacles causing an acceleration of the coolant velocity in the regions between them, resulting in higher velocities compared to a straight and smooth MCHS configuration.Further, it can also be seen that the recirculation zone shows up in the backward part of the obstruction as a result of the effect of the block.
When the main flows traverse the rectangular fin, they deflect upwards in the rib upstream and create a recirculation zone downstream of the fins.The presence of a thin thermal boundary layer in the MCHS configuration can improve the mixing capability of both cold and hot fluids [28].
Comparing cases 0, 1, 2, 3, and 4 reveals that the depth of recirculation correlated with the separation of the main channel at the fin tip. Figure 6 illustrates the velocity distribution in the x-y plane at a height of z = 0.25 mm.In the case of case 0, it is evident that the velocity distribution exhibits maximum values near the center of the channel, gradually decreasing towards the walls.This is because the velocity of the fluid in the channel is greatest at the center due to the Bernoulli equation (i.e., the pressure is the lowest at the center).Therefore, the fluid is flowing faster at the center of the channel, resulting in a higher maximum velocity than at the edges.In the micro-channel with offset fins, the fluid experiences higher maximum velocities as it moves towards the sidewall without ribs.The fins create friction between the fluid and the wall, and that friction causes the velocity of the fluid to decrease.Without the fins, there is less friction, so the fluid will be able to move faster.Figure 6 demonstrates that the velocity within the recirculation zone is comparatively lower.The velocity in a recirculation zone is usually very low due to the presence of vortices, which cause a high level of fluid separation.This low velocity is a result of the fact that the vortices in the zone act as a barrier, slowing down the flow and preventing it from exiting the zone.As a result, the velocity of the flow within the recirculation zone is generally much lower than the velocity of the flow outside of the zone [29,30].Moreover, the presence of flow obstructions in the MCHS disrupts and re-establishes the coolant boundary layer, leading to a non-uniform velocity distribution.Even though a recirculating stream is apparent from behind the obstruction, the total heat transfer and thermal properties are enhanced.
Thermal Performance
In all cases, the Nusselt number is a non-dimensional parameter, and it is used to rate the heat transfer in the convective mode.Figure 7 shows the variation of the Nusselt number for various configurations of fins with the Reynolds number.As we increase the Reynolds number, the Nusselt number increases, and so does heat transfer.With the increase in Reynolds, the heat exchange efficiency is also improved, and more heat can be taken away by the fluid working medium in the unit time of the heat source.From Figure 7, it can also be found that for the micro-channel heat sinks with rectangular fins, when the Reynolds number is relatively small, the Nusselt number increases quickly.In contrast, when the Reynolds number is rather large, the Nusselt number increases The velocity in a recirculation zone is usually very low due to the presence of vortices, which cause a high level of fluid separation.This low velocity is a result of the fact that the vortices in the zone act as a barrier, slowing down the flow and preventing it from exiting the zone.As a result, the velocity of the flow within the recirculation zone is generally much lower than the velocity of the flow outside of the zone [29,30].Moreover, the presence of flow obstructions in the MCHS disrupts and re-establishes the coolant boundary layer, leading to a non-uniform velocity distribution.Even though a recirculating stream is apparent from behind the obstruction, the total heat transfer and thermal properties are enhanced.
Thermal Performance
In all cases, the Nusselt number is a non-dimensional parameter, and it is used to rate the heat transfer in the convective mode.Figure 7 shows the variation of the Nusselt number for various configurations of fins with the Reynolds number.As we increase the Reynolds number, the Nusselt number increases, and so does heat transfer.With the increase in Reynolds, the heat exchange efficiency is also improved, and more heat can be taken away by the fluid working medium in the unit time of the heat source.From Figure 7, it can also be found that for the micro-channel heat sinks with rectangular fins, when the Reynolds number is relatively small, the Nusselt number increases quickly.In contrast, when the Reynolds number is rather large, the Nusselt number increases relatively slowly.Furthermore, different models present different heat transfer capabilities.The heat exchange capacity of MCHS is very stability and safety of the heat source.Hence, for a = 1.2 m/s, the temperature distribution in the flo plot clearly demonstrates that the temperatu considerably higher compared to the newly propo enhanced heat transfer achieved by the rectangu rectangular fins results in a considerable decrease the central region and the side wall of the The heat exchange capacity of MCHS is very important, which will directly affect the stability and safety of the heat source.Hence, for all micro-channel heat sink models at u in = 1.2 m/s, the temperature distribution in the flow passage is depicted in Figure 8.The plot clearly demonstrates that the temperature in the smooth micro-channel is considerably higher compared to the newly proposed model, providing evidence for the enhanced heat transfer achieved by the rectangular fins.Significantly, the presence of rectangular fins results in a considerable decrease in the temperature differential between the central region and the side wall of the channel.The observed temperature distribution can be attributed to the effective mixing of hot water near the channel's side walls and cold water near its center.Case 4 has the lowest temperature of all configurations, which means it can take away more heat from the heat source.In addition, with a smooth channel, adding a rectangular fin greatly reduces the occurrence of local hotspots.
2023, 14, x FOR PEER REVIEW
rectangular fins results in a considerable decrease in the temperature differential between the central region and the side wall of the channel.The observed temperature distribution can be attributed to the effective mixing of hot water near the channel's side walls and cold water near its center.Case 4 has the lowest temperature of all configurations, which means it can take away more heat from the heat source.In addition, with a smooth channel, adding a rectangular fin greatly reduces the occurrence of local hotspots.
Pressure Drop
Figure 9 shows the pressure distribution of different channel models, including the smooth micro-channel.The observations from Figure 9 clearly indicate that the smooth channels exhibit lower pressure drop compared to the other channel models.This confirms that pressure losses in smooth channels primarily occur due to wall friction.However, in the case of the rectangular fin channel, additional pressure loss is introduced due to the generation of eddies in the liquid flow as it traverses over the ridges.The presence of the ridges in a fin channel leads to an increase in the surface roughness.As
Pressure Drop
Figure 9 shows the pressure distribution of different channel models, including the smooth micro-channel.The observations from Figure 9 clearly indicate that the smooth channels exhibit lower pressure drop compared to the other channel models.This confirms that pressure losses in smooth channels primarily occur due to wall friction.However, in the case of the rectangular fin channel, additional pressure loss is introduced due to the generation of eddies in the liquid flow as it traverses over the ridges.The presence of the ridges in a fin channel leads to an increase in the surface roughness.As the fluid flows over the ridges, the turbulent eddies created by the features cause an increase in the frictional losses in the channel.This loss of energy increases the pressure drop significantly, leading to a higher required pressure to maintain the same flow rate.The pressure drop in case 4 is the highest among all configurations, primarily due to the presence of rectangular fins in this channel.These fins introduce additional flow restrictions, leading to greater resistance to fluid flow.As a result, the rectangular fins significantly contribute to the increased pressure drop in case 4 compared to the other configurations.
Additionally, Figure 10 displays the correlation between the Darcy friction factor and Reynolds number for cases 1-4 and the smooth micro-channel.The Darcy friction factor is an important dimensionless quantity used to determine the friction inside a channel flowing with a liquid.It is the ratio of the pressure drop over the fluid per unit length of the channel to the square of the average velocity of the fluid.From Figure 10, it is evident that the friction factor for cases 1-4 is more significant than that of smooth micro-channels because of obstruction to flow caused by fins.At the same time, it can also be seen from the figure that when the Reynolds number is less than 450, the friction factor decreases significantly, and after the Reynolds number is greater than 450, the declining trend of the friction factor becomes gentle.It is likely due to a threshold of the friction factor relative to the Reynolds number.The characteristic of this threshold makes the Reynolds number larger than the less obvious effect of the friction factor beyond this threshold.Within the entire range of Reynolds number variation, the friction factor value of case 3 is the maximum.This is mainly due to the structure of this channel, with more obstacles to impede the fluid flow.
decreases significantly, and after the Reynolds number is greater than 450, the declining trend of the friction factor becomes gentle.It is likely due to a threshold of the friction factor relative to the Reynolds number.The characteristic of this threshold makes the Reynolds number larger than the less obvious effect of the friction factor beyond this threshold.Within the entire range of Reynolds number variation, the friction factor value of case 3 is the maximum.This is mainly due to the structure of this channel, with more obstacles to impede the fluid flow.
Overall Thermal Performance
Figures 11-13 depict the variations of Nu/Nu 0 , f/f 0 , and PEC as the Reynolds numbers range from 190 to 838.These figures serve to further evaluate the heat transfer performance of different micro-channel flow structures and compare them with the smooth micro-channel heat sink.As evident from Figure 11, the micro-channel with obstacles exhibits superior heat transfer performance compared to the smooth channel.Notably, within the range of Reynolds numbers less than 450, the micro-channel structure with obstacles demonstrates a more pronounced advantage in terms of thermal efficiency.As depicted in Figure 12, the newly proposed MCHS demonstrates a noticeable increase in the friction factor compared to the smooth micro-channel.Specifically, the friction factor of the proposed micro-channel heat sinks is 1.93-4.57times higher than that of the smooth heat sink design.From the graph, it can also be seen that as the Reynolds number increases, the growth rate of the friction factor first increases and then decreases.Hence, It is evident from Figures 11 and 12 that in the laminar flow stage, the heat transfer advantage of micro-channel structures with obstacles is obvious.The presence of obstacles can reduce the energy input to the heat transfer surface, increase the temperature difference, and thus increase the heat exchange efficiency.In addition, the flow from the upper layer is also restricted by the presence of obstacles, which makes the flow in the micro-channel more uniform, which is conducive to improving heat exchange efficiency.In order to assess the overall combined performance of the model designed in this article, the PEC parameter, which considers the heat transfer enhancement (Nu/Nu 0 ) and the friction factor (f/f 0 ), is evaluated for all cases.From Figure 13, as the inlet Reynolds number rises, the overall thermal performance of the obstruction-filled channels increases.Case 4 has the highest PEC, while case 1 has the lowest.This means that increasing the number of obstacles in the flow passage can improve the overall heat transfer performance of the heat sink.It is also apparent that as Re rises, PEC initially rises significantly, reaches a maximum, and then falls off quickly.The significant pressure drop experienced by the newly proposed micro-channel heat sink has led to a diminishing of its ability to effectively enhance heat transfer at higher Reynolds number conditions.
the heat transfer advantage of micro-chann presence of obstacles can reduce the energy i temperature difference, and thus increase th flow from the upper layer is also restricted b flow in the micro-channel more uniform, wh efficiency.In order to assess the overall comb this article, the PEC parameter, which consid and the friction factor (f/f0), is evaluated f Reynolds number rises, the overall therm channels increases.Case 4 has the highest P that increasing the number of obstacles in th transfer performance of the heat sink.It is rises significantly, reaches a maximum, a pressure drop experienced by the newly pro diminishing of its ability to effectively enhan conditions.
Conclusions
1.In this study, an analysis is conducted Reynolds numbers.The findings reve rectangular fins demonstrated significa factors compared to the smooth heat si 1.40-2.02times higher, while the friction remarkable improvements in performa
In this study, an analysis is conducted on different shapes of rectangular fins and Reynolds numbers.The findings reveal that the micro-channel heat sink with rectangular fins demonstrated significantly higher Nusselt numbers and friction factors compared to the smooth heat sink.Specifically, the Nusselt numbers were 1.40-2.02times higher, while the friction factors were 2.64-4.33 times higher.These remarkable improvements in performance led to performance evaluation criteria ranging from 1.23-1.95. 2.
Open interrupted MHCS has higher heat transfer performance than rectangular MCHS and is an effective way to improve the thermal performance of this type of MCHS.
3.
The periodic truncation of the fins makes the velocity boundary layer of the fluid always in an alternating state of destruction and reconstruction, and a transverse recirculation zone is generated at the tail of the rectangular fins.Under the interac-tion of this effect, the degree of fluid disturbance in the open intermittent MCHS is intensified, and the heat transfer performance is greatly improved.4.
All rectangular fin configurations had a higher friction factor since adding fins made the flow more restricted.Due to the biggest pressure drop, case 3 has the highest friction factor across the whole Re range.An increase in pressure drop results in a higher demand for pumping power.5.
Among the various case configurations, case 4 exhibits superior thermal performance.However, it is accompanied by a notable disadvantage of high-pressure drop, resulting in an increased requirement for pumping power.To quantitatively evaluate the overall performance of the MCHS, we introduced a thermal enhancement factor.It is observed that case 3, due to its significant pressure penalty, has the lowest thermal enhancement factor across all Reynolds numbers compared to the other cases.6.
For single-phase media, the use of rough surfaces (such as rectangular fins, grooves, interruptions, etc.), can enhance disturbances and mixing in the fluid.Secondly, the use of flow disturbance units can create secondary flow and enhance the mixing between the mainstream fluid and the boundary layer fluid, thus achieving the purpose of enhanced heat transfer.This is currently a very effective method for enhancing convective heat transfer.
Figure 5 .
Figure 5.The distribution of streamlines in the x-z planes for all cases at a channel height of 0.25 mm.@u in = 1.2 m/s.
Figure 5 .
Figure 5.The distribution of streamlines in the x-z planes for all cases at a channel height of 0.25 mm.@uin = 1.2 m/s.
Figure 6 .
Figure 6.The velocity distribution in the x-z middle cross-section.@uin = 1.2 m/s.
Figure 6 .
Figure 6.The velocity distribution in the x-z middle cross-section.@u in = 1.2 m/s.
Figure 7 .
Figure 7.The relationship between the average Nusselt Reynolds number for all cases.
Figure 7 .
Figure 7.The relationship between the average Nusselt number for heat transfer in MCHS and the Reynolds number for all cases.
Figure 8 .
Figure 8.The temperature distribution of the x-z middle cross-section.@uin = 1.2 m/s.
Figure 8 .
Figure 8.The temperature distribution of the x-z middle cross-section.@u in = 1.2 m/s.
Figure 9 .
Figure 9.The pressure distribution of the x-z middle cross-section.@uin = 1.2 m/s.Figure 9.The pressure distribution of the x-z middle cross-section.@u in = 1.2 m/s.
Figure 9 .Figure 10 .
Figure 9.The pressure distribution of the x-z middle cross-section.@uin = 1.2 m/s.Figure 9.The pressure distribution of the x-z middle cross-section.@u in = 1.2 m/s.
Figure 10 .
Figure 10.The relationship between the Darcy friction factors of micro-channel heat sinks and the Reynolds number for all cases.
Figure 11 .
Figure 11.Heat transfer characteristics for MCHS with different design types, cases 1-4, based on case 0. The subscript 0 denotes case 0.
Figure 13 .
Figure 13.The PEC for MCHS with different desig 0 denotes case 0.
1 .
In this study, an analysis is conducted o Reynolds numbers.The findings revea rectangular fins demonstrated significa factors compared to the smooth heat sin
Figure 13 .
Figure 13.The PEC for MCHS with different desig 0 denotes case 0.
Figure 13 .
Figure 13.The PEC for MCHS with different design types, cases 1-4, based on case 0. The subscript 0 denotes case 0. | 9,878.4 | 2023-09-23T00:00:00.000 | [
"Physics",
"Engineering"
] |
Novel members of the AGAMOUS LIKE 6 subfamily of MIKCC-type MADS-box genes in soybean
Background The classical (C) MIKC-type MADS-box transcription factors comprise one gene family that plays diverse roles in the flowering process ranging from floral initiation to the development of floral organs. Despite their importance in regulating developmental processes that impact crop yield, they remain largely unexplored in the major legume oilseed crop, soybean. Results We identified 57 MIKCc-type transcription factors from soybean and determined the in silico gene expression profiles of the soybean MIKCc-type genes across different tissues. Our study implicates three MIKCc-type transcription factors as novel members of the AGAMOUS LIKE 6 (AGL6) subfamily of the MIKCC-type MADS-box genes, and we named this sister clade PsMADS3. While similar genes were identified in other legume species, poplar and grape, no such gene is represented in Arabidopsis thaliana or rice. RT-PCR analysis on these three soybean PsMADS3 genes during early floral initiation processes revealed their temporal expression similar to that of APETALA1, a gene known to function as a floral meristem identity gene. However, RNA in situ hybridisation showed that their spatial expression patterns are markedly different from those of APETALA1. Conclusion Legume flower development system differs from that in the model plant, Arabidopsis. There is an overlap in the initiation of different floral whorls in soybean, and inflorescent meristems can revert to leaf production depending on the environmental conditions. MIKCC-type MADS-box genes have been shown to play key regulatory roles in different stages of flower development. We identified members of the PsMADS3 sub-clade in legumes that show differential spatial expression during floral initiation, indicating their potential novel roles in the floral initiation process. The results from this study will contribute to a better understanding of legume-specific floral developmental processes.
Background
Flower development in plants involves tightly regulated processes starting from floral initiation to flower formation. The underlying processes have been extensively investigated, as flower development is an important agronomic trait that determines crop yield. Various transcription factors are essential in regulating these developmental processes, including the family of MADS-box transcription factors.
The MADS-box transcription factors, especially the plant-specific classical ( C ) MIKC-type MADS-box genes, are known to play key regulatory roles in different stages of flower development. Their roles in coordinating floral developmental processes have been revealed by functional studies largely carried out in the model plant, Arabidopsis thaliana. The MIKC C -type genes are characterised by a conserved structural organisation of the MADS-box, Intervening-, Keratin-likeand C-domains. The highly conserved MADS-domain and the weakly conserved I-domain are required for DNA binding, while the strongly conserved K-domain and the variable Cdomain regulate protein interactions [1].
Genome-wide analyses of the MIKC C -type genes have been carried out in Arabidopsis [2], rice [3] and poplar [4]. While Arabidopsis and rice genomes have similar numbers of MIKC C -type genes (39 vs. 38), poplar has 55 of these genes, suggesting a higher birth rate compared to Arabidopsis or rice. These MIKC c -type genes can be divided into 15 distinct gene clades with each clade named after the first member identified [3,5]. All but two (TM8 and OsMADS32) are found in Arabidopsis [3,5], and the FLC clade may be absent from the rice genome [3]. It remains unclear whether all clades are present in the poplar genome, as no TM8 genes were used in the phylogenetic analysis [4].
The SQUA subfamily clade includes four Arabidopsis members, APETALA1 (AP1), CAULIFLOWER (CAL), FRUITFULL (FUL) and AGAMOUS-LIKE 79 (AGL79) [2]. The functions of AP1, CAL and FUL have been characterised, indicating that they play partially redundant roles in determining floral meristem identity [6]. The SEPALLATA (SEP) family belongs to the AGL2 clade, and there are four members documented in Arabidopsis [7]. All four members (SEP1, SEP2, SEP3 and SEP4) play redundant functions in determining floral organ identity and floral meristem determinacy. AP1 has been shown to bind directly to the SEP3 promoter, hence increasing the expression of SEP3 rapidly [8]. The AGL6 subfamily has a relatively small representation (only two genes, AGL6 and its paralog AGL13) in Arabidopsis. While no knockout phenotype has been described for either of these genes in Arabidopsis, studies in rice, maize and Petunia hybrida have largely demonstrated the roles of the AGL6 subfamily in regulating floral organ identity and floral meristem determinacy, indicating redundant roles with closely related genes including SEP [9][10][11]. A phylogenetic study showed that subfamilies of SQUA, SEP and AGL6 are always rooted together in one superclade, which may be correlated with their overlapping roles in regulating flower development.
A total of 212 MADS-box genes were predicted in the recent genome sequence of soybean [12]. Earlier we reported the diversification of some gene expression and microRNAs in legume SAM [13][14][15][16]. However, much remains to be learned about these genes, especially given their potential impact on crop production. Soybean is the largest legume crop in the world and accounts for greater than 50% of the global oilseed production. In this study, we identified all the soybean MIKC c -type MADSbox genes using the current Glyma1.0 gene set and identified potential phylogenetic relationships to their Arabidopsis, rice and poplar counterparts. We examined the expression patterns across different soybean tissues for the entire family. Intriguingly, the results revealed a novel AGL6 sister clade of MIKC c -type genes in soybean, and we focused our subsequent analysis on members of this novel sub-clade.
Results and discussion
Molecular evolutionary analysis of soybean MIKC c -type MADS-box transcription factors When we searched the soybean predicted gene set available at Phytozome [12] for sequences containing both a MADS-box and K-domain, we identified a total of 57 sequences. Subsequent inspection revealed three of the sequences were incomplete. We attempted to obtain a full-length sequence for these genes using gene prediction software on the genome sequence surrounding these partial sequences but did not yield any results. Therefore, we omitted these sequences from further analysis. To investigate their phylogenetic relationships with MIKC c -type genes from Arabidopsis, rice and poplar, reported MIKC c group protein sequences [3][4][5] were retrieved from their respective databases. A total of 159 conceptually translated protein sequences were used in the phylogenetic analysis.
Fifteen existing clades were identified from the generated phylogenetic tree: AGL2, AGL6, SQUA, AGL12, FLC, TM3, AGL17, AG, OsMADS32, TM8, STMADS11, AGL15, GGM13, DEF, and GLO ( Figure 1). The relationships among the different clades are similar to other reports [2][3][4][5]. With the exception of OsMADS32 and TM8, soybean genes are found in all clades, and the number of soybean sequences within each clade varies from one (AGL15 and AGL17) to nine (SQUA1), with most genes occurring in duplicate. Consistent with a previous report [5], the genes in AGL2, SQUA and AGL6 form a superclade together ( Figure 1). Within the AGL6 clade, a strongly supported internal branch seems to be separate from the existing AGL6 members, suggesting it is a novel sister clade not represented in Arabidopsis or rice. This novel sub-clade consists of three soybean genes, Glyma16g32540 and a homeolog pair, Glyma10g38580 (GmMADS3a) and Glyma20g29250 (GmMADS3b). The top BLASTX match of these genes is a MADS-box transcription factor from garden pea, which is annotated as PsMADS3 [17] (data not shown). Therefore, we named it the PsMADS3 sub-clade.
We next examined whether similar orthologous members of PsMADS3 exist in species other than soybean, garden pea and poplar. A BLASTP search at the NCBI public database identified potential PsMADS3 sequences from other species (data not shown). Using the top matching 40 orthologs including sequences from the sister clade, AGL6, for phylogenetic analysis, three distinct groups were identified in the tree rooted with gymnosperm AGL6 as the outgroup ( Figure 2). One clade groups all of the monocot sequences, whereas the other two clades contain AGL6 and PsMADS3 sequences ( Figure 2). As we were only using sequences available in the public database, it is likely that similar PsMADS3 sequences exist in species other than those examined. Future studies with increased taxon sampling will help to clarify the node. Members of the PsMADS3 sub-clade are represented in not only legume species but also poplar and grape ( Figure 2). As the genome sequences for Arabidopsis and rice are available and well annotated, we are confident that members of this PsMADS3 subclade are absent from these two species. Furthermore, the observation that no orthologous PsMADS3 genes is found among the top matching sequences from other monocots including wheat and maize ( Figure 2) implies that such genes may be absent from monocot. The PsMADS3 clade may have evolved after monocot-dicot divergence and following the emergence of the Arabidopsis species. These genes may have been overlooked previously due to their absence in Arabidopsis and rice. In fact, none of these legume genes were analysed in a recent study on eudicot AGL6 [18].
In silico analysis of soybean MIKC c gene expression patterns
We previously performed high-throughput RNA-sequencing on micro-dissected shoot apical meristems (SAMs) undergoing the early floral initiation process [29]. The samples were derived from soybean plants (10day-old) subjected to different lengths of short-day (SD) treatments (0SD, 1SD, 2SD, 3SD and 4SD), and the induction of the floral meristem identity genes including GmAP1 occurred on 4SD. Because genes that are involved in later processes of flowering such as floral organ development or only expressed in other tissues such as roots may not be captured by our dataset, we also used the transcriptome data reported by Libault and co-workers [19] to include a diverse range of tissues including flower, seed pod, root, nodule and root tip in our in silico analysis ( Figure 3).
All identified soybean MIKC c -genes are expressed in at least one of the three reproductive tissues represented (reproductive SAM, flower and pod), except for members of the AGL12 clade, which are only expressed in the root (Figure 3). Arabidopsis AGL12 is preferentially expressed in the root, and recent loss-of-function Figure 1 Relationships among representative members of MIKC c -type genes. a. The phylogenetic tree was based on MUSCLE alignments of conceptual protein sequences spanning the MADS-, I-and K-domains of sequences from Arabidopsis, rice, poplar and soybean. The unrooted bootstrap consensus tree was constructed using the Maximum Likelihood method implemented in Mega 5. The number for each node is the bootstrap percentages (200 replications), and nodes with less than 50% bootstrap values were collapsed. Fifteen clades of MIKC c -type genes are indicated, with PsMADS3 potentially representing a novel sister clade of AGL6.
analyses have revealed its roles in not only regulating root meristem cell proliferation but also flowering transition [20,21]. Based on the soybean AGL12-LIKE expression profile, it is tempting to speculate that their functions in floral regulation may have been lost. A similar expression pattern was observed for most MIKC cgenes clustered within a clade. All duplicated genes are transcribed and have comparable expression profiles, especially in the reproductive SAM (Figure 3), suggesting their functional significance.
As for the superclade consisting of AGL2, SQUA, and AGL6, there are some notable differences in the gene expression profiles among the three clades ( Figure 3). For example, all AGL2-LIKE genes except one (Glyma01g08130) are absent from the SAM during the early floral transition process but are expressed later in the floral developmental process in the flower and pod. This pattern is expected as these genes are known to be activated following AP1 induction in Arabidopsis [22]. The phylogenetic tree indicates that Glyma01g08130 is the counterpart for Arabidopsis SEP4. In addition to being found in flower and pod like the rest of the AGL2-LIKE genes, it is also expressed in the SAMs during the floral initiation process and very highly in nodules. This pattern implies a likely diverged function of GmSEP4 with additional roles in the early floral initiation process as well as in nodule formation. Glyma01g08150, one of the four soybean counterparts of Arabidopsis AP1, also likely plays a role in nodule formation. Although the expression of Glyma01g08150 is drastically induced on Figure 2 Phylogeny of AGL6 genes. The tree was produced as in Figure 1 but from complete protein sequences. The predicted peptide sequence of GmMADS3a (Glyma10g38560) was used for a BLASTP search at NCBI. The top 40 matches (E value < 1e-50) were downloaded from NCBI and filtered for duplicated or short sequences prior to phylogenetic analysis. The tree was rooted by using the gymnosperm AGL6 as the outgroup. Plants used for analysis: Agapanthus praecox, Arabidopsis thaliana, Asparagus officinalis, Bambusa oldhamii, Chrysanthemum morifolium, Crocus sativus, Dendrocalamus latiflorus, Elaeis guineensis, Epimedium sagittatum, Gerbera jamesonii, Glycine max, Gnetum parvifolium, Gossypium hirsutum, Hordeum vulgare, Hyacinthus orientalis, Lolium perenne, Lotus japonicus, Medicago truncatula, Oryza sativa, Petunia hybrida, Picea abies, Pisum sativum, Poa annua, Populus trichocarpa, Sorghum bicolor, Triticum aestivum, Vitis vinifera, Zea mays.
4SD in the SAM (20 RPKM), the level of expression is 6-fold less than that in the nodule (134 RPKM; Figure 3). Intriguingly, its homeolog Glyma02g13420 is not expressed in the nodule but rather has the highest expression in the reproductive SAM (105 RPKM; Figure 1 & 3), suggesting a functional divergence between this homeolog pair.
Although members of soybean AGL6 genes are expressed in the reproductive SAMs, changes in their transcript levels are not comparable with those of PsMADS3-LIKE and SQUA-LIKE genes during the early floral transition process, suggesting that the latter two clades are likely to play more prominent roles in the developmental transition process. Because there is no information available for PsMADS3-LIKE genes, we focused our study on members of this novel sister clade.
Expression of GmMADS3 during the floral initiation process
We carried out RT-PCR analysis to verify the expression of this novel family in the soybean SAM during the early floral initiation process (Figure 4). RNA was extracted from dissected SAMs of plants undergoing 0, 2, 4, 6 and Figure 3 Expression profiles of soybean MIKC c -type transcription factors. The highest expression level for each gene across two different sets of samples is given as an RPKM value (see Methods). The level of expression for a gene is represented as the percentage of the maximum expression level and colour coded from 0% (white) to 100% (black). S0-S4: samples derived from SAM after 10-day-old soybean plants were shifted to short-day growth conditions as described in the Methods at intervals of 0 short-day (S0), 1 short-day (S1), 2 short-day (S2), 3 short-day (S3) or 4 short-day (S4). 10 SD treatments. RT-PCR was carried out with primers specific to each of the soybean members in this PsMADS3 branch (Glyma16g32540, Glyma10g38580, and Glyma20g29250) as well as to GmAP1 (Glyma16g13070) as the control for the floral induction process. Consistent with our previous study [23], the induction of GmAP1 occurs after the 4 short-day treatment. All three soybean genes in the PsMADS3 clade displayed a similar expression pattern to that of GmAP1, consistent with the in silico analysis.
To examine the spatial expression pattern of these genes during the floral initiation process, in situ hybridisation analysis was performed. On 4SD, GmAP1 expression was detected in the incipient floral primordia of the inflorescence meristem (Figure 5a). On 6SD, newly established floral meristems became more prominent and GmAP1 expression was detected throughout these meristems (Figure 5b). GmMADS3a (Glyma10g38580) exhibited a rather similar expression pattern to GmAP1 on 4SD (Figure 5d), but its expression subsequently spread to the entire inflorescence meristem as well as to the newly established floral meristems on 6SD (Figure 5e). As the GmMADS3 homeolog pair is almost identical in nucleotide sequence including the UTR regions (data not shown), no gene-specific probe could be made, and thus the signals observed may correspond to the expression of both genes. Nevertheless, the expression of GmMADS3 throughout the entire inflorescence and floral meristem suggests that it can serve as both an inflorescence and floral meristem identity gene. Because the expression of GmMADS3 initially overlaps with that of GmAP1, it likely performs similar functions as GmAP1. Its subsequent widespread expression in the inflorescence meristem may ensure all vegetative activities at the SAM are replaced with the initiation of the floral meristem at the meristem flanks.
The expression of Glyma16g32540 is distinct from that of GmAP1 and GmMADS3. A weak signal associated with its expression was detected in the centre of the inflorescence meristem ( Figure 5g); on 6SD, its expression was also observed in the centre of the newly emerged floral meristem (Figure 5h). The expression of Glyma16g32540 in the centre of the meristem indicates its potential regulatory roles in orchestrating events in the inflorescence meristem. The spatial expression pattern of the soybean PsMADS3-LIKE genes supports that these genes are novel, as their expression differs markedly from the spatial expression of closely related family members such as GmAP1 (this study) or Arabidopsis AGL6 [24].
Conclusions
In contrast to Arabidopsis where the initiation timing of floral whorls does not overlap, the legume soybean has a flower development system with overlapping whorls [25]. Furthermore, unlike Arabidopsis that usually cannot undergo flowering reversion [26], the soybean inflorescent meristem can revert to leaf production when the environmental growth conditions are switched from SD to LD [27]. Because the MIKC C -type MADS-box genes play key regulatory roles in different stages of flower development, it is conceivable that members of the PsMADS3 sub-clade identified in this study could contribute to developmental plasticity in cooperation with key floral regulators such as GmAP1 or GmFLC. Future studies aimed at defining the interacting partners of these genes will aid in our understanding of the floral transition process.
Sequence and phylogenetic analysis
Conceptually translated protein sequences were retrieved from public databases (Phytozome, Rice Genome Annotation Project, TAIR and LjGDB). For the initial identification of the soybean MIKC C -type MADS-box transcription factors, all annotated genes were screened for both the MADS-box domain (PFAM00319) and Kdomain (PFAM01486). The results were then manually inspected and filtered for truncated protein sequences, resulting in a total of 54 sequences (Additional file 1: Table S1). Sequences were imported into MEGA version 5 software for subsequent phylogenetic and molecular evolutionary analyses [28]. MUSCLE alignments of protein sequences spanning the MADS-, I-and K-domains were carried out using the default settings in MEGA. After alignment, the evolutionary history was estimated using the Maximum Likelihood method based on the JTT matrix-based model as implemented in Mega 5 with bootstrap analysis set at 200 replicates.
For the expression profile analysis, two separate transcriptome sequencing data were used [19,29]. The abundance for each gene was normalised within each dataset and expressed in reads per kb per million (RPKM) and are provided in Additional file 1.
Plant growth and RNA extraction
Soybean plants [Glycine max. (L) Merr. Cv. Bragg] were grown in a greenhouse located at the University of Melbourne, Victoria, Australia. To induce flowering, 10-dayold plants were shifted to a growth chamber maintained at a constant temperature of 25°C with a 10-hr day (150 μmol m -2 s -1 ) and 14-hr night (short-day). Shoot apical meristems (SAMs) were micro-dissected, as previously described [24]. Total RNA was extracted from dissected SAM (approximately 80 SAMs per extraction) using the Qiagen RNeasy Mini Kit (Qiagen, Victoria, Australia) with on-column DNAse digestion.
The soybean actin gene was used as an internal control. The PCR reactions were separated on 1% agarose gels containing 0.1 μg/μl ethidium bromide and visualised under UV light. | 4,477.2 | 2013-07-20T00:00:00.000 | [
"Biology"
] |
Noise-tolerant Modular Neural Network System for Classifying ECG Signal
,
Introduction
The electrocardiogram (ECG) is a non-invasive clinical test that measures and records electrical changes that take place in the heart when it beats [1].ECG is vastly used for screening, diagnosis, and monitoring of several heart conditions.Most ECGs are recorded and interpreted by health professionals, few of which have received formal training and proper assessment of competency in recording and interpreting ECGs [2,3], and many selfreported their ECG reading skills as inadequate [4].Therefore, several automated approaches have been developed to increase efficiency and enhance accuracy in interpreting ECG waveforms [5].This is the case of classification systems based on artificial neural networks (NN), which have become very popular and most widely employed for successful classification of ECG signals [5] because of their natural ability to deal with incomplete or ambiguous input in pattern recognition tasks [6].
An ECG signal consists mainly of five continuous electromagnetic waves namely, P, Q, R, S, and T (Fig. 1).The amplitude, direction, and duration of the waves, and their morphological aspects are analyzed for specific abnormalities.Other important information includes the peak area, called the QRS complex, the duration of the PR and QT intervals, and the deviation of the PR and ST segments.These characteristics can be contaminated by the physical parameters of electronic and mechanical devices, electrical activity of muscles, degradation of the electrode-skin contact, and other causes [7][8][9].Noise corruption can generate similar morphologies to the ECG waveform, reducing the discriminating power of heartbeat patterns, and increasing the rate of false alarms for cardiac monitors [9].Therefore, a large number of NN approaches for ECG classification have included signal preprocessing for noise reduction, using a wavelet transformer (WT) [6,[11][12][13], nonlinear cubic spline interpolation (CSI) [14], fast Fourier transformation (FFT) [15] or band-pass filters [16][17][18][19].
Nevertheless, current strategies for this critical step, the preprocessing for noise removal, are still unsatisfactory because clinical interpretation often requires even higher signal quality to detect cardiac disorders [20,21].For that reason, NN systems for ECG classification that are robust and efficient, and have greater noise tolerance, are needed.In this paper, we develop and test a noise-tolerant ECG signal classifier based on an NN approach.The method uses a modular NN architecture to perform initial training and testing on a simulated dataset.Ultimately we discriminated normal and abnormal heartbeat patterns for single lead of real ECG signals.
Literature review
Most NN systems were tested using ECG data from the MIT-BIH arrhythmia database [5], which contains 48 ECG recordings with signals that were band-pass filtered in the frequency range of 0.1 to 100 Hz and sampled at 360 Hz [22].In this sense, Rohan and Patil also used a low pass filter with a cut-off frequency of 30-100 Hz to pre-process 16 records from the MIT-BIH database [16].They then employed an NN approach composed by two hidden layers with eight neurons and classified four types of cardiac arrhythmias with an overall accuracy of 99.9%.Asl et al. used a 5-15 Hz band-pass filter to remove contamination from 2009 ECG segments, each with 32 RR intervals [17].The authors developed a three-layered NN with one hidden layer of 20 neurons that classified RR interval signals into four arrhythmia categories with an average accuracy of 99.3%.Das and Ari employed an NN approach with a pre-processing band-pass filter (3-20Hz) to reduce noise in 44 records of the database [18].They classified five types of ECG beats with an S-transform NN approach and achieved 97.9% of average classification accuracy.In contrast, Tang and Shu eliminated noise from an ECG waveform using high-pass filter with 0.7Hz and low-pass filter with 100Hz [19].Their quantum NN model recognized ECG signals with an overall accuracy of 91.7%.
In two studies the authors used WT technique to remove noise from MIT-BIH records.Javadi et al. proposed a modular NN based on a mixture of experts and negatively correlated learning using stationary WT as a tool for noise reduction [6].The NN ensemble produced a recognition rate of 96% for classifying normal heartbeats, premature ventricular contraction arrhythmias and other cardiac abnormalities.Vijayavanan et al. preprocessed ECG signal to remove different kinds of artifacts using discrete WT [12].They proposed a probabilistic NN approach to discriminate the difference between a normal ECG signal and an arrhythmia affected signal with an accuracy of 96.5% classification rate.Similarly, Naima and Timemy used discrete WT denoising procedure on ECG data collected from two hospitals in Bagdad [13].Their discrete WT-NN classifier with six neurons in the hidden layer detected acute MI with 95% accuracy.On the other hand, Güler and Übeyli decomposed ECG signals from the Physiobank database [23] into time-frequency representations also using discrete WT.They classified four types of ECG beats with a total accuracy of 96.9% through a combined NN model composed by 30 hidden neurons [11] In its place, Setizwan et al. employed the nonlinear CSI method to estimate and eliminate noise from the baseline ECG of MIT-BIH registers.The implemented fuzzy-neuro learning vector quantization algorithm produced 95.5% of the overall accuracy rate to classify normal beat and 11 types of arrhythmias [14].Meanwhile, Vishwa et al. applied direct FFT to remove low frequencies and restore an ECG signal from the MIT-BIH arrhythmia database with the help of inverse FFT [15].The NN model composed by three and five neurons in first and second hidden layer respectively obtained a detection accuracy of 96.7%.
In contrast, Garg and Sharma used an NN model with two hidden layers to analyze ECG records from the MIT-BIH database with no additional filter and correctly detected normal vs. arrhythmic ECGs with a general accuracy of 96.6% [24].Finally, Jadhav et al. used records from the Cardiac Arrhythmia Database of the UCI Machine Learning Repository [25] with no prior filtering for classification of normal and abnormal ECG signals.Their NN approach with two hidden layers resulted in 82.4% correct classifications [26].
Materials and methods
The NN classification comprises five stages: (i) simulation of ECG signal, (ii) extraction of features that indicate cardiac abnormalities, (iii) computer generation of normal and abnormal heartbeat patterns, (iv) artificial noise injection, and (v) cardiac rhythm classification on simulated and real ECG signals.
ECG signal simulation
We used a standard electrocardiographic 12 lead representation of the heart electrical activity [27] divided in the P, PQ, QRS, ST, T and TP sections (Fig. 1), to simulate an ECG signal with specific parameters (Table 1).The resulting signal was composed of different waveforms and frequencies.The P and T sections of the simulated ECG signal were similar to waveforms generated by the movement of a piston, which allowed generation of a mathematical model of their behavior [28].The piston describes an oscillatory motion that can be approximated by a simple harmonic.The position equation of the piston is a function of the angular velocity: x(t)=r•cos(ωt), where x(t) is the piston position versus time, r is the radius of the crank and ωt is the angular velocity of rotation in radians.
The P wave modeled by the piston motion equation had maximum amplitude of 0.125 mV.The T wave was split into two sections: the first section had maximum amplitude of 0.16 mV, a positive slope, and a period of T1; the second section had the same maximum amplitude, a negative slope, and a period, T2, which was less than T1.The QRS segment used corresponding voltages at the Q, R, and S points and intermediate voltages for the PQ-Q, Q-R, R-S, and S-ST sections.A positive off-set voltage was then added to each value.Each section was further characterized by amplitude and corresponding slope (Table 1).The final output ECG signal, simulated using MATLAB software [29,30], had a duration of 700 ms for each cycle and was mounted on a signal base of 512 mV amplitude (Fig. 2).
Cardiac abnormalities
Feature extraction is a key issue in recognition and classification tasks.We used a combination of morphological and timing features to distinguish between a normal heartbeat (NH) and disorders of heart rate and cardiac rhythm.The shape, position, and time duration of P, Q, T waves and the ST segment, were used to identify specific abnormalities (Fig. 3).The P wave is the first positive deflection on ECG signal, with a normal duration <120 ms and amplitude rarely exceeding 0.25 mV.Greater amplitude suggests right atrial enlargement (RAE) [31], and an inverted P wave can indicate junctional rhythm (JR) [32].The Q wave is the first downward deflection on ECG signal.Pathological Q waves, with a duration >40 ms or depth >0.1 mV, can be a sign of current or previous myocardial infarction (MI) [33].Greater than 0.2 mV depression of the ST segment, which connects the QRS complex and the T wave, is attributable to cardiac ischemia (CI) [34].Widespread inversion of the T wave, the first deflection following the QRS complex, is also associated with CI [35].
Dataset
After extracting the morphological and timing characteristics of the simulated ECG signal, we generated 10000 heartbeat feature vectors (normal and abnormal) for each ECG segment.The dimensions of the feature vectors for the P, Q, ST and T waves were 102, 115, 120, and 200-dimensional respectively.We then built matrices , , and ) from randomly selected pattern vectors, using 900 for each matrix.These were organized as interspersed normal ( ) and abnormal patterns ( ).In the case of a P wave with two associated pathologies ( ), both abnormal patterns were inserted after each NH pattern: (1) Later, we randomly combined the pattern vectors to generate a total training dataset, composed of 5400 samples from five classes.The first class was a NH; the other four classes were specific cardiac pathologies for each ECG wave.This takes into account that CI could be attributable to depression of the ST segment or widespread inversion of the T wave [34].The total training data was partitioned into two datasets: training and testing set.The testing set was not seen by NN classifier during the training phase.It is only used for testing the generalization of NN approach after it was trained.We randomly selected the 80% examples for training, and the rest 20% examples as testing data.
To assess the robustness of the learned patterns within noisy conditions, and to improve the generalization capability of the resulting NN system, we created artificial corruption in all ECG segments of testing set (Fig. 4) using a Gaussian white-noise model [40].We injected 1 to 12% of randomly generated Gaussian white noise [41,42].We defined quality categories to describe the noise level: minor (1-2%), moderate (4-6%), severe (8-10%), and extreme (12%).The corrupted testing set was used during the training phase to improve the behavior of NN ensembles while they were trained within noise conditions.
Later we build a corrupted dataset for testing of trained NN with a total of 21600 contaminated ECG segments (17100 NH, 1800 CI, 900 RAE, 900 JR, and 900 MI segments).In addition, the NN approach was tested on real ECG records of the Physikalisch-Technische Bundesanstalt (PTB) Diagnostic ECG Database [36].The PTB database contains digitized ECG signals provided by the National Metrology Institute of Germany.This ECG collection included 15 simultaneously measured signals: the conventional 12 leads together with the 3 Frank lead ECGs.Each signal is digitized at 1000 samples per second, with 16 bit resolution over a range of ± 16 mV and sampling frequency equal to 1 KHz [36].We selected data from 221 subjects with a clinical summary available, which included ECG records classified as NH (n=52) or cardiac abnormalities (148 MI, 14 dysrhythmia and 7 myocardial hypertrophy).For the testing purpose, we considered an unbalanced dataset in favor of arrhythmia data (76.5%) to improve the testing generalization capabilities of the NN classifier to recognizing cardiac abnormalities.Lead V1 was chosen for the whole analysis; because it has the largest ratio of atrial to ventricular signal amplitude and therefore can offer more representative characteristics for identifying the common heart diseases [37,38].The final test set consisted of 884 ECG traces built from 4 heartbeats per individual.
Neural network classifier
The modular NN classifier consisted of four, threelayered, feedforward micro NNs built through Matlab NN toolbox, one for each ECG interval analyzed.A back-propagation algorithm in batch gradient descent with momentum mode [39] and random weights/bias initialization were used for training.Transfer function group was of the hyperbolic tangent-logarithmic sigmoid-linear type for input-hidden-output layers.The learning rate and momentum coefficient were selected as 0.05 and 0.9 respectively.Performance was tested using the mean squared error parameter, computed for differences between the actual outputs and the outputs obtained in each trained micro NN.The training ended, if the total sum of the squared errors was <0.001, or when 3000 epochs were reached.The target outputs for NH, RAE, JR, MI, and CI were given by (0,0,0,0), (0,0,0,1), (0,0,1,0), (0,1,0,0), and (1,0,0,0), respectively.
Simulated ECG dataset
When the NN system was trained using pattern vectors of clean and noised simulated ECG signals, the MSE convergence goal (0.00096) was reached in 109 epochs.The best performance was obtained using 10 (P and T) or 5 (Q and ST) neurons in the hidden layer of the micro NNs.In the first scenario of testing with an artificial corrupted dataset, correct classifications over 10 runs averaged 99.2%, 95.1%, 91.4%, and 85.2% for minor, moderate, severe, and extreme noise (Table 2) respectively.Total confusion matrix of each micro NN model for all the levels of noise is shown in Table 3
Real ECG dataset
For the last stage of the study the trained NN approach was tested directly on raw ECG traces from PTB database, exclusively for discriminating between NH and cardiac abnormality.Overall classification accuracy was 95.7%.The results are shown by a confusion matrix, where each cell contains the number of ECG traces classified for the corresponding combination of estimated and true outputs (Table 4).
Estimated output
True In Table 6 the overall performance of our proposed NN classifier is compared with the recognition rate of previous NN approaches for EGC classification found in the literature.
Discussion
Overall, results of the previous studies make it clear that suppression of noise corruption embedded in analysed signals improves the accuracy of NN classifiers.However, filtering parameters, particularly cut-off frequencies and phase response characteristics, should be chosen such that clinical information in ECG signals remains undistorted, while as much noise as possible is removed [43].This is difficult because the signal and noise often share the same frequencies.Furthermore, adaptive filters do not normally have a sharp delineation between the pass-bands and cut-bands, but rather a slow transition in the filter response.If the clinical signals and noise are close, it may not be possible to remove the noise without removing some of the clinical signal [44].
Converse to previous NN approaches for classification of ECG signals, the system proposed trains with clean and noisy data [55].By using inputs corrupted with randomly sampled noises and various signal-tonoise ratios, we were able to build a robust classifier without an adaptive filter, because the injected noise improves the generalization capability of the NN model [56].The rationale of this approach is that the perturbation introduced in training by the injected noise can be learned by the NN structure and recognized in the test phase.More exactly, noise injection during the training favors an optimal solution at which the objective function is less sensitive to the change of the input [57].On the other hand, this approach is based on the premise that an NN method, which can provide accurate classification with noise, is preferable to methods that modify the original signal.Furthermore the modular design based in micro NNs provides a more specific classification for each considered kind of cardiac abnormality.In this sense, the analysis and experiments suggest that by injecting a minor to extreme level of noise in training of NN, the noise patterns can be effectively learned, and the generalization capability of the micro NNs can be improved.Both of these advantages result in substantial performance improvement of NN for ECG classification in noise conditions, without the inclusion of adaptive filters.
However, although the average classification accuracy and precision of our NN system is competitive,
NN approach
Filtering technique Accuracy (%) *Rohan et al. [16] 30-100 Hz 99.9 the system was tested for the detection of only four types of cardiac pathologies.On the other hand, the first results of the trained NN approach were achieved with artificially generated random Gaussian white noise, without any specific assumption on the origin of the noise.Moreover, the additive noise model can differ to some extent from real ECG records that are corrupted by physiological noise and exhibiting spatial correlation across the individual ECG signals [16].These limitations require additional research for situations where the nature of the contaminating noises are better known, and the additive artificial noise may be selected according to the particular situation.Therefore, our system requires further verification including information about the noise sources using actual ECG data and classifying other specific types of cardiac disorders.
Conclusion
We developed a robust and fairly accurate, noise-tolerant NN approach for detecting and diagnosing specific cardiac abnormalities.
Figure 3 :
Figure 3: Morphological and timing features of (a) the P wave; (b) the Q wave; (c) the ST segment; and (d) the T wave, used to detect specific cardiac abnormalities
Table 1 :
Parameters of typical ECG lead. .
Table 2 :
Classification performance for contaminated ECG segments.
Table 3 :
Confusion matrix for classification of artificially corrupted ECG segments.Overall classification accuracy was 93.9%.The best performance was achieved for the micro NNs, Q and ST, correctly classifying 98.8% of NH and 100% of abnormal segments.
Table 4 :
Confusion matrix for classification of real ECG signals.
Table 5 :
Total test performance of the NN classifier.
Table 6 :
* ECG signals that were previously band-pass filtered in the frequency range of 0.1 to 100 Hz and sampled at 360 Hz.Comparison of the overall classification accuracy of the proposed method and previous NN approaches in literature.
The modular NN system discriminates between simulated normal and abnormal cardiac rhythms with high accuracy for ECG signals with minor to moderate noise and good accuracy for signals with severe to extreme noise.The previous artificial noise injection stage enables the trained NN classifier to handle noise and recognize cardiac abnormalities on raw ECG signals with high Precision.With further verification, this system could facilitate the use of NN approaches to support clinical decisions. | 4,207 | 2019-03-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
The Biopolitical Turn of the Post-Covid World . Leftist and Neoliberal Insights of Puzzling Biopolitics
As the 21st century became shaped by the matters of public health, the Covid-19 pandemic revealed that it is a trap to believe that we have to choose between the medicalisation of politics and the politicisation of medicine . My thesis is that models of good governance in the post-pandemic world must be shaped by leftist principles, values and practices, in order to ensure not the reopening, but the reconstruction of public life, which needs more than ever overcoming social inequalities and political polarisations, whereas liberal principles should be implemented in order to fix standards of economic performance and efficiency after applying mechanism of recovery . Governments as well as electoral spheres are reticent to biopolitical incursions, historically associated with panoptic systems . I claim that it is time to plead for positivising biopolitics as political humanism . My research will expose twelve themes for disseminating biopolitics as political humanism, focused on sensitive key-domains such as labour, social cohesion, security, infodemia, domestic life and good governance .
Immunising communities: A biopolitical framework inspired by Robert Esposito
The pandemic raised by the spread of Covid-19 has rapidly developed what politically is known as a state of exception: human rights have been narrowed or even suspended for a determined period of time and the management of the sanitary crises has been doubled by the management of population . Such coordinates depict what Foucault, Agamben and Esposito claim as a biopolitical scenario: governments face the exclusive responsibility of securing biological life (zoe), revealing the non-biological life (bios), reduced to economy, politics, culture, as a secondary priority . It is well known that biopolitics is one of the worst nightmares of political philosophy: it activates only in cases of natural emergencies, such as pandemics, or contractual break-ups that lead to political general conflicts or wars . At a first glimpse, the biopolitical power draws on the rationality invested by governments in shaping and controlling populations by procedures of constraint and coercion that tend progressively to barrow, counterweights of civil freedom . Traditionally, biopolitics becomes the 'politics of life' (Siisiäinen, 2018, p . 18), that tracks a particular raison d'État that quantifies freedoms and liberties as variables influenced by states of exception (Agamben, 2005) that 'do not foreclose all possibilities of historical specificity' (Walsh, 2014, p . 9) . Nevertheless, biopolitics is an equation of power that frames sovereignty and biopolitics as two mutually productive forces: the more a situation is exceptional, the more the administration of life by disciplinary practices is needed . At the threshold of modernity, biopolitics has been grasped as a form of governance that articulates sovereignty by maximising mechanisms of control and surveillance; thus 'the production of a biopolitical body is the original activity of a sovereign power' (Agamben, 1998, p . 6) . According to Agamben, albeit such states of exception reflect a suspension of normality and consequently impose certain power practices, societies rather assume that as long as a juridical content is rational, they can abandon themselves to it . Therefore, in a biopolitical frame, individuals do not expect to be banned by the law; they are abandoned to the law (Agamben, 1998, p . 31) . This phenomenon happens because of two major causes: on the one hand, each legal content is determined by the jus divinum and therefore, we tend to respect the law because we acknowledge our respect towards a messianic form of rationality, that is subsequent to any juridical imperative; on the other hand, because we abandon ourselves to the law by the law, or, to be more specific, 'abandonment respects the law, it cannot do otherwise' (Nancy, 1983, 149) . But how far can we address nowadays such abandonment? In the traditional paradigm of biopolitics, it was conceived as a proof of trust and obedience in front of a superior ontological authority that inspires power; nonetheless, today, in a secular world, such aspects are hardly conceived as parts of a reliable, valid argument . After the 21st century, political philosophy turned biopolitics towards a form of biopower that has nothing to do with a divine rationality of state . The dark decades of this historical time have been shaped by a radical biopolitics that advanced not the politicising of life, but the politicising of death . Political bodies have been nationalised, and historical subjects have been regarded as exceptions, meaning as conditio inhumana, lacking the dignity and the right to live as long as they were declared undesirable subjects within a state border . We do not know if God lived at Auschwitz (see Agamben, 2002), but it is clear that biopolitics feed the mentalities behind panoptic systems that renounced to subtlety and shift to dominant, dictatorial control mechanisms, specific to death camps . Therefore, biopolitics operates in a double sense: from the oppressor to the oppressed and vice versa, meaning that it measures the biopower that annihilates life and the resistance of the victims against invalidating their bios and retracting their zoe. The dominant exegetic part of biopolitics is mainly concentrated on the negative project of biopower: authors such Foucault and Agamben insist on the destruction phenomenon behind the architectonic of disciplinary and punitive societies . Nevertheless, authors such as Esposito bring to the spotlight a more balanced, equilibrated perspective, according to which disadvantaged communities have strengthened their capacity to survive and overcome obstacles dictated by a discretionary biopower and that a biopolitical project is equally represented by the attempt to face, resist and recover from such dictatorial regimes . This latter acceptance of biopolitics should rather be reinforced nowadays when the Covid-19 pandemic activated a biopolitical undertaking of good governance and resilience . The main aim of the current analysis is to prescribe a biopolitical positive project that could depict solutions for raising durable resilient societies governed in the name of life without sacrificing bios, namely culture, religion, society or economics . Such endeavour reflects a great opportunity to address biopolitics as a new humanism, that we have expected after Camus's and Heidegger's claims (inspired by different reasons 1 ) that humanism is no longer possible in our contemporary history . This would be biopolitical humanism that targets resilience as a product of cultural mentalities invested in immunising civilisations against the abnormal life caused by pandemics . First, we should briefly overview the advantages of a biopolitical critique of this pandemic .
A biopolitical governance will help individuals to secure life and maximise its quality . This biopolitical background reshapes, of course, the major priority of governance models . We are used to governing by administering regenerable resources -forms of capital, highly criticised by leftist approaches and rather preferred by liberal convictions . Nonetheless, we have never been challenged by now to govern life as an unregenerable resource, for which a biopower is needed to conserve the life and safety of populations and to aim at developing nations in due course .
The correct question is not 'how compromised is the sanitary and economic normality of the state?' , but 'how far are we from understanding that any crisis is an opportunity?' Krisis is not possible without krinein: the crisis imposes a judgment, a choice, thus, an opportunity . We have the possibility to choose biopolitics as a solution to the project of good governance to combat the effects of the health crisis on the well-being of society and to increase resilience, orienting it towards progress . A biopolitical judgment frames the management of the Covid-19 pandemic out of the classical, divisionary perspective that aims to separate the right and the left; it rather pleads to bring them together, jointly, around a core objective that inspires securing life and developing culture by advancing first, leftist measures to combat inequalities and disparities raised by the pandemic and only afterwards right-wing practices to stimulate recovery and growth .
At the Western level, the spirit of the European pan-community has rather been considered a matter of axiological consensus . States have arbitrated differently, through a rational calculation, the gradual closure of borders and the maximisation of social isolation protocols to total quarantine ("lockdown") . Citizens looked at maximising surveillance and control of the population under the sign of a test of autonomy and freedom, which reactivated the biopolitical appetite for interrogating what should and could, in these conditions, reflect good governance?
Gradually, the political and civil spheres were crossed by common moral dilemmas that suggest that the only way to reconcile them is a biopolitical platform . Is it normal to 1 According to Heidegger, the age of technical rationality lead to violence and oppression, therefore, our human reason that trusted wholeheartedly technique has failed and humanism has more than ever disappointed us. Thus, philosophy will move frontwards but only following the anti-humanist path, whereas Camus considers humanism still possible as long as the metaphysics of sufferance framed by the 21st century can be integrated into a political project that will defy and defeat colonialism, war and oppression. In the end, we must imagine Sisif being happy, and this is exactly the task that this new humanism should fulfil by all its means (see Heidegger, 2013and Camus, 1942, 1951. give political priority to securing public health at the expense of privacy? How far can we sacrifice individual freedoms in the name of the principle of prudence? Why does the state of emergency become the source of normative contents that perpetuate the militarisation of society in order to protect the population? Is civil cooperation possible to develop biosecurity through collective health discipline, as long as the authorities are considered with scepticism and even distrust by citizens? To do politics starting from life means to put humanism at the heart of political action . Not just any humanism, but a biopolitical humanism . This is not an ideological fad, despite the fact that today we live in an age crossed by the continuous dynamics of ideologies . Some ideologies updated, such as nationalism, others performed as 'autoimmune' , such as liberalism, which is becoming more and more pronounced illiberalism . The principle of this humanism is that political power must be biopolitical . But it must be put at the service of the community, not at the basis of constructing a certain immunity in the face of social solidarity . These two words that are quite abused along public speeches framing the management of the pandemic, community and immunity, are bridged by a common radical, the Latin munus. For this argument, I engage the biopolitical theory of Robert Esposito . 2 Translated both as 'obligations' and as habits naturally developed by a community, munus is suppressed, in times of Covid-19, by social distancing . In the attempt to raise immunisation in the face of disease, communities distance their members more and more, alienating them from traditional obligations and habits, from their natural quality of being social beings . What makes us authentic, sociability, makes us sick (Harari, 2020) . But let us keep in mind for now that munus is shared by community and immunity, in order to track the multiple implications of such consubstantiality in biopolitical terms .
How can we ensure that 'immunity' does not destroy our 'community'? This is a biopolitical issue . 'Munus' must be rewritten today so that we can preserve all our obligations to the community and our habits together, becoming immune to the disease . But can current politics make this image a reality? Can politics choose first and foremost life and only after power? In times of pandemics, power must come to us from life . The fact that everyone's life depends on power is the greatest disease we suffer from . Therefore, biopolitics, which none of us knows how to do yet, must be the new obligation and habit of the political class . Politicians must simultaneously operate the priorities of biological life (zoe) and the superstructures of non-organic life (bios). Thus, the biopolitical challenge for a post-pandemic world is to draw principles of good governance that pursue equally and responsibly the guarantee, by the state, of all the necessary 2 Esposito embraces the traditional position of biopolitics defined as 'a science by the conduct of states and human collectivities, determined by laws, the natural environment, and ontological givens that support life and determine man's activities', a statement that lacks, however, 'a categorical generalness' (Esposito, 2008, p. 21). In his perspective, physics and power conceive sovereignty in different regimes that turn us back to the Kantian question surrounding the rationality of governance, that progressively goes, along the 19th century, towards the Foucauldian challenge of understanding the power's hold over life (Esposito, 2008, 32). Esposito rather prefers the Foucauldian acceptance that life is no longer 'a scientific concept', but 'an epistemological indicator' (Esposito, 2008, 40) of classifying and discerning scientific discourses that do not exclude those oriented towards the analysis of power. Thus, modernity grasps the age of bio-history that transforms human life throughout bio-power. In these terms, preserving life becomes a priority, coined as conservation vitae.
resources for the preservation of organic life, integrity and body health . Only in this manner will be possible the biopolitical tone of development of all non-organic fields such as politics, society, economy and culture, starting from their potential to preserve and improve the quality of life .
Who is afraid of biopolitics?
In its own way, biopolitics is a civilisational barometer . It shows us how life has been valued and protected in various historical contexts noted as states of emergency, such as pandemics, wars, civil conflicts, in the context in which the security of life is both a legal and disciplinary issue . The pandemic forces us to rethink the social contract, a process in which the main challenge is that of population management. In this regard, a clarification is needed: in the career of the term, the positive meanings of biopolitics, as a government strategy for managing the population in order to preserve safety and the phenomenon of life, corresponded to negative meanings, such as biopower or bioterror. It usually depicted political situations in which a category of individuals was considered undesirable to a totalitarian society and this led to their exclusion, marginalisation or closure in devices of supervision, control and discipline . 3 In migration issues, such as the management of refugee groups, biopolitics has often indicated abusive, repressive governing bodies . But these negative meanings are not the subject of this discussion . Through its scale, the pandemic has activated biopolitical discourse . We may have prejudices against this term, but they come only from ignorance or from an impermissible error of extending the concept to a project of power with an iron fist . It is not necessary to do so . We live, as I have said on other occasions, in an age in which citizens feel governed by fear. It is the fear of disease, which sometimes arouses distrust in the authorities responsible for managing the epidemiological risk, to the point that the idea of a sanitary dictatorship dangerously seduces not corona-sceptics, but bona fide citizens, who respect all prudential rules . But who no longer resist the anxieties caused by the unpredictable extension of restrictions . Fighting the government by fear is a biopolitical project . Authors such as Lorenzini argue that biopolitics is back since the pandemic activated new 'genres of quarantine' (Lorenzini, 2020), leading to control, discipline and even surveillance, all conceived as Covid-19 responses . If Foucault used to address the nationalisation of the biological, nowadays, in times of pandemics, we face its internationalisation . Lorenzini closely observes that each biopolitical regime advances 'a blackmail': usually, individuals must be for or against a regime of governance, but biopolitics forces us to conceive each political measure as the best option -in a utilitarianist perspective -within a crisis, thus reflecting a reasonable compromise . Therefore, biopolitics expects from us not acceptance or refusal, nor conformity or anarchism, but a justificatory thought . Traditionally, the Foucauldian argument on biopolitics states that biopower is not exclusively explicit: it can act implicitly, in subtle manners, multiplying its effects and diversifying in order to perform global obedience from populations reflected as masses at risk . Therefore, if we look to the 'dark' side of biopolitics, resistance is not the key for a proper and ingenious philosophical analysis of such phenomena, but the power of biopolitics to mirror our resilience, conformity and reasonability, thus becoming an expression of what Foucault would call 'the critical ontology of ourselves' (Foucault, 1984, 47) . Lorenzini defines biopolitics as 'a politics of differential vulnerability': social inequalities occasioned by this pandemic should be solutioned by an efficient governance method . Therefore, biopolitics is the correct political framework not only in times of pandemic but especially in depicting a post-pandemic world .
At the same time, resilience is a biopolitical expression, before being a psychological, affective, social one . The global response to the coronary crisis is to secure humanity both biologically and morally . Authors such as Giorgio Agamben, Michel Foucault, Robert Esposito, Yuval Harari point out that the policy of a pandemic tends to develop authoritarian implications on the part of democracies . On the other hand, the state of emergency in which the golden principle is 'follow the rules to recover' , challenges us to understand how citizens follow the rules imposed in a state of emergency and why deviating from them means not recognising the rationality of these rules . Who pays for disobedience, for making others sick, for freedom? This remains an open question that highlights the fact that each of us is responsible for the other's biological life, not only for his/her own and that convinces us, once more, that biopolitical discourses are adequate for depicting a post-pandemic world .
Last but not least, the medical drama, as we have all seen in Italian hospitals, reengaged the biopolitical protocol . At a first glimpse, this is a bioethical problem: doctors forced to choose, in conditions of insufficient material resources in the fight against the pandemic, the life of which patient will rather be saved: that of a young man who, mathematically, has more chances of cure, or that belonging to an old man who has comorbidities and thus, less chances of going through the disease? Such a choice leads to medical bioethics under the sign of biopolitics .
To put all in a nutshell, we see how all these problems indicate a very simple phenomenon: either we will seek to understand bio-politically this pandemic, looking for solutions to combat it, or we will turn a global disease into a pretext for internal power games, with incredible and unfortunate costs for everyone's life . Running away from the term -biopolitics -just because it reminds us of the most difficult political regimes from our humanity, is not a solution . The return of biopolitics to our situation, in order to understand how humanity has reacted, over the years, to similar pandemic contexts, what mistakes have cost lives and what misunderstandings have affected rights and freedoms, is a gesture of responsibility these days .
Biopolitical governance is not a simple exercise of population management: De-politicising biopolitics
A biopolitical platform for governance is not a left or right project . This is an absolutely necessary construction of a political humanism starting from the revision of the following 12 principles and measures, puzzling Leftist and Neoliberal Insights . My research will expose twelve themes for disseminating biopolitics as political humanism, as it follows: 3.1. The new model of social cohesion is based on the principle of solidarity in solitude.
Social distancing and (self ) isolation test our civility and social responsibility . The boundaries of empathy, trust and mutual cooperation in the absence of direct interaction are equally reshaped . Living exclusively at home involves a certain routine, in the ergonomics of which work, loneliness or cohabitation find, as the case may be, new forms of manifestation . Solidarity in solitude is a challenge both at the level of individual life and at the level of states, which, although they react as 'closed societies' , seek to maintain a sense of the European pan-community through an open morality . Compassion without cooperation cannot be a space for the administration of life through solidarity . According to de Mata (2020), in the attempt to align health and other perspectives, such as relaunching the economy and reopening public sectors of cultural, social and educational activity, the principle of solidarity in solitude must precede the priority of tracking recovery and resilience . The biggest threat for any community becomes, from a certain point, 'the isolation fatigue' (de Mata, 2020, p . 20), that lead not only to a vulnerable sense of mutual commitment and unity, but also to the fragmentation of unitarian projects, such as the European Union . Nevertheless, it is not like solidarity has never been a core value for our communities by now -it reflects the central belief at the heart of the European union; what has radically changed in understanding its social role in increasing bonding and cooperation is represented by the effects of this pandemic that 'brought solidarity and appreciation to the front lines that, although have always been there working for the population, were previously 'invisible' to the public eye' (Cuschieri, 2020, p . 6) . This pandemic has the power to develop a more emphatic sense of solidarity, by engaging compassion: 'once we understand ourselves as interconnected, we can collectively construct a disaster imaginary of solidarity . In this way, pandemics can be ethically innovative disasters' (Pascoe & Stripling, 2020, p . 443) . Therefore, whoever expects resilient society to overlap to solidary communities has fallen into a trap: solidarity should be the primary value determining not a pandemic world, but a post-pandemic one .
Social inequality is a topic for public debates devoted to the improvement of life standards and quality in remote work paradigms for protecting citizens and increasing safety.
The home quarantine protocol takes up the main concerns of the leftist political agenda on social inequalities and class privileges, on the basis of which comfort, security and quality of life are assessed . Carrying out professional activities at home is far from the romantic rhetoric of quarantine . The distinction between working time and free time is doubled by that between living space and space for professional activity . The (non) material costs involved in these new contexts make their mark on the quality and privacy, in most cases exceeding the financial strength of individuals . In different occasions, the physical distancing protocol must be maintained between family members belonging to different risk groups and cohabiting in a space where it is impossible to minimise the interaction . In addition, in the name of securing the lives of citizens through isolation at home, the state has often failed to ensure their physical and moral integrity . Statistics show that crime in the public space is declining, but cases of domestic violence and abuse are dramatically increasing . There is also a cynicism of the isolation protocol: there are many individuals who do not have a home . For these vulnerable categories of citizens, protecting life means taking life from the beginning, with the support of the state . This is why leftist measures are prior to right-wing practices in governing this pandemic and constructing a post-Covid world . The public spheres has been crossed by different and intriguing opinions, such as Jane Fonda's statement, that the coronavirus has been 'God's gift to the Left': elections from this pandemic revealed that the political spectre has been radically inclined in favour of left-wing parties that chose to solve social disparities before accelerating economic growth in terms of a post-pandemic world scenario . Therefore, ideologically, left-wing politics is more equipped to face the social challenges occurred by this pandemic, whereas in what concerns the fate of liberalism, many authors insist that 'the spread of the virus complicates the implementation of policies consistent with liberal international order, potentially destroying the order in which liberal democracies participate' (Norrlöf, 2020, p . 799) . Consequently, I defend the idea that biopolitics ensures a cyclical, natural and progressivist alternance of left-wing and right-wing principles, such ideological nuances regaining their doctrinaire nuances only in a post-pandemic world: to reboot this pandemic society, we need to depoliticise biopolitics, thus, to govern not for the sake of the left or the right, but for the good of a society that has no need of political competition, but of political cooperation .
3.3. The compatibility of public health measures to protect the lives of citizens with human rights should be coherent and attainable, so that the temporary suspension of universal rights will not lead to censorship, discrimination, xenophobia.
Governance must provide conditions for biopolitics, not thanatopolitics (Foucault, 2003) . It is not the arbitration of death, but the protection and disposition of life in the social space that is the main concern of political action . The public sphere pointed out some of the important themes of this direction . For example, limiting access to medical services regardless of the severity of the medical case, invoking caution in social distancing and avoiding overloading the health system; access to key medicines in a treatment regimen as a form of respecting the right to health; 4 non-compliance with the principles of the right to privacy by limiting travel in order to reconcile professional and family life or forms of civil partnership; limiting religious freedom by imposing robust and essential restrictions in combating the spread of coronavirus on public cult activity, etc . It is not the effects of the medical crisis that will be ungovernable at the end of this pandemic, but the social reactions to the medical crisis .
3.4. Democratisation of biopolitical security. We need the transparency of any form of protecting the life and health of citizens in public spaces through biometric surveillance.
There are gaps in communication between the state and citizens in the administration of protocols to prevent the spread of coronavirus . The progressive increase of state intervention in the administration of civil life has generated panic among the population . People thus knew the invisible and subtle force of the 'invisible hand' . The fact that there is a virtual biometric surveillance only ensures the effectiveness of this protocol and the production of disciplinary effects on subjects or patients . This does not mean that the measure cannot be felt as invasive . We live within digital societies, whose advantages can be valued not only along the informational or cognitive sphere, but also within medical or social environments . However, the transparency of biocommunicability is a crucial measure to make known to citizens that the surveillance of the disease does not coincide with the surveillance of individuals; governing the Covid-19 crisis overlaps with the limits of prudence and biosecurity . Authors such as Albert et al . argue that 'COVID-19 is a threat to global security by the ontological crisis posed to individuals through human security theory and through high politics, as evidenced by biosecurity' (Albert et al ., 2020, p . 1) . Although such arguments are quite plausible and embraced by experts in the fields of biopolitics, a problem still remains: biosecurity will be reshaped from now on, as the biggest danger is not the virus itself, as Harari would put it, but the behavioural effects, in terms of control and surveillance, of this pandemic . According to Harari, Covid-19 taught us that contemporary history struggles between 'the choice between totalitarian surveillance and citizen empowerment' . It is not surveillance for the sake of combating the virus the worst danger threatening our democracies, but surveillance for other reasons than sanitary ones . It is one thing to have your phone ringing after passing by a Covid-19 infected person -as tracking applications monitor the circulation of non/diagnosed patients, and it is another thing to use this pandemic as a precedent for perfecting systems inside-or over-the-skin-surveillance that could easily emerge in dystopian, newer totalitarian regimes (Harari, 2021) .
3.5. Controlling the effects of automatising labour in different economic sectors in order to reconsider and preserve the value of manual labour, individual effort, working time, within both essential and non-essential industries.
As Covid-19 induced automation and labour disparities, leftist agendas began to seduce public spheres as they have been focused on reducing job losses and increasing the role of the human intelligence and force work within different industries . Recent 'findings suggest that COVID-19-induced automation may exacerbate labour market disparities, as females with mid to low levels of wages and education appear to be at the highest risk of being negatively affected' (Chernoff & Warman, 2021) . In fact, this pandemic reduced physical interaction as much and, as incidences of Covid-19 became lower, the economic scenario that this crisis added a 'shadow cost' (Korinek & Stiglitz, 2021) on labour has increased . For example, the costs of adapting a business to Covid-19 conditions have accelerated the appetite for remote or automatic work, by case . However, by the time we will see if this pandemic caused a new Industrial Revolution, we must understand that in different non-essential domains, many jobs have been conceived as redundant and, consequently, attracted a modest financial support from the state . On the one hand, many non-essential domains should be redefined as essential domains: for example culture, in order to save the production of culture and arts and the employees of creative cultural sectors from collapse . On the other hand, labour markets still have to implement technology in order to ensure a so-called material progress of automatising labour . The greatest impact of this pandemic will be, in terms of revaluing human work and effort, a new wealth distribution supported by the degree of automatising labour in each society .
3.6. Increasing human empathy by assuming solidarity with all life forms. From this point of view, species life is a political strategy.
Preserving and improving life does not reflect a simple task of the biopolitical agenda, but also an ecological turn of state policies . However, this is not a matter of reflection on natural policy . The climate crisis caused by technological exuberance and its improvement with the social isolation of individuals forces us to reconsider the relationship between nature and individuals through the prism of nonhuman species . This pandemic was an opportunity to restart ecosystems: nature took back Venice, as cruise ships disappeared and its biosphere began to manifest freely, from ducks to dolphins . However, the moral is that lockdowns have been a benefit for certain species but 'nature will not heal' (Owens, 2021) in two months of emergency-state that supressed any human public activity .
3.7. Combating forms of national isolation in the name of state biosecurity.
The delayed reaction of solidarity promoted by the European Union towards certain countries radically affected by the pandemic -such as Italy -may set a risky precedent for increased hostility, not transnational hospitality . 5 An external biopolitical platform is one of the few projects that can optimise common biosecurity standards for the future of the European Union . Governing the coronavirus crisis means, in a biopolitical framework, governing the mobility of the population in all its aspects: professional migration, economic cooperation, free or cultural tourism, as correlated phenomena in terms of inclusion and social emergency . Fragility must not become a lesson in humiliation between states, but a morality of common sense . As we see, the pandemic occasioned a particular context in which racist discourses began to flourish: we have seen the European waves of Asianophobia, after the Wuhan case, and the riposte of Asian French citizens who rise the campaign #JeNeSuisPasUnVirus, and nowadays we see that this trend begins to reactivate older forms of racism, such as antisemitism . Moreover, during this pandemic, resilience became the core value of our contemporary societies . The need for social distance called, in turn, for the principle of solidarity in solitude . Isolation forms raised beyond anxiety, hate, racism and lack of empathy . A study recently published by INSHR-EW (The "Elie Wiesel" National Institute for Studying the Holocaust in Romania) revealed that the pandemic reactivated anti-Semitic attitudes that began to manifest progressively in online spheres . Therefore, costs of isolation can be, at least from a political standpoint, devastating for cultivating civilised and emphatic public spheres .
3.8. Legislative innovations in the field of increasing security and protecting the lives of citizens should not take advantage of the anomy or vulnerabilities of democratic models. This is possible in the direction of maximising state intervention at the level of individual life . History has shown us that politics tends to turn any crisis into a field of totalitarian experiences . Therefore, to any biopolitical action of the state, the citizens react, naturally, by suspicion . Individuals question whether prevention measures are far too restrictive . Citizens wonder if the measures imposed for population surveillance and control do not develop, symptomatically, disciplinary effects that can turn into authoritarian reflexes . People need to understand organically that the state of emergency is not a pretext for turning these interim measures into long-term surveillance protocols, which increase state intrusion into civilian, community and individual life . But in order to assume that the state of emergency does not develop rules that will last even after dropping such crisis, nor does it offer a way to compensate the vulnerabilities of democracy only for a class of privileged subjects, citizens must confirm their trust in the state, as a long-lasting process grounded on culture, which nowadays is mostly considered a non-essential domain .
3.9. No pandemic should be doubled by infodemia. Fake news feeds civil disobedience, anxiety and panic of the population.
Regaining the pragmatism of public communication in terms of biocommunication could bring an advantage to the media . Providing accurate information on public safety would reduce the state's information monopoly in strategic communication to combat the pandemic . At the same time, the immediate effect would be to democratise access to culture and truth . Otherwise, the state will remain, in its biopolitical vocation, a pure agent of information management on disease dynamics .
Consolidation of public space as an extension of domestic space.
Cohabitation between individuals is possible through social distancing without affecting cultural values, free time, social liberties, social segregation and respect between individuals . By maintaining protocols of social isolation, people oppose, in a sense, to their own nature: to socialise, to be together . No power is credible if it reduces its governable to an amorphic biomass . It should be a democratic, reflective power, the one that governs critical masses . Biopolitics involves thematising the cultural values associated with life through which we understand the predispositions of a people or a society to empathy, tolerance, cooperation, sacrifice . The model of open or closed societies arising from the management of the pandemic is nothing but the effect of cultures that adopt different mentalities and beliefs in the management of life . Returning to normal means returning to the community . But only culture has the capacity to gradually increase the participation of citizens in the dynamics of society and the state in which they live .
3.11. Reconsidering the relations between the Church and the state in the management of life could lead to more emphatic and efficient social spheres.
It is not just about recognising the church's ability to expand the cultural, symbolic and material capital of a form of spirituality in adapting people to the experience of a sanitary crisis . The Church has developed an eschatology of pandemics (Cunningham, 2008, p . 29) as narratives of the dynamics of this world and solidarity between individuals . But its role remains to complete the social agenda of a state through a space of intervention in which philanthropy, missionary work and spirituality maintain the ideals of solidarity and community cohesion . The state must not miss the opportunity to turn the Church into a partner for its interventions and social responsibilities . The dialogue between the state and the Church is not an element of anti-modernity . Public power is divided between a political and a social sphere . The state must arbitrate not the freedom of the two, up to mutual immunisation, but their potential to provide citizens with security and trust . The pandemic is not a test of faith . But biopolitics can be a test of secularisation .
3.12. Designing public policies in biopolitical terms.
The epidemic generates risk areas and groups, isolation and immunisation areas, domestic outbreak, militarisation . These things show that the profile of the politician capable of governing such a crisis resists through two political virtues: pragmatism and resilience . Unaccompanied by a historical sensibility, these are not virtues, but only skills . Voters are less and less used to looking at politicians as authors of a country project . This biopolitical crisis recovers the author's function as a competent and virtuous legislator . In recognising the legitimacy of measures to combat the pandemic but also in recognising their reasonableness, people link the authority of the law to the authority of the author . Who develops, in other words, public policies? What credibility and competence do politicians have in proposing laws that are both just and moral for the preservation of the lives and safety of citizens? What is the trust capital and expertise that public policy makers must have for the law not to produce immoral effects when it concerns sensitive topics such as freedom, privacy and human rights? This time, the elaboration of public policies must be done situating as a source, but also as a goal, the life of individuals .
Instead of conclusions
One can reject a biopolitical platform for the sake of maintaining governance on either side of the political spectrum . Both the leftist and the right-wing oriented political measures could build their own biopolitical ideological agenda based on these foundations . However, in times of a post-pandemic world, it would be reasonable and lucid to drop political rivalries in order to advance a biopolitical regime that makes use of both wings of the political spectre by securing biological life in front of non-biological undertakings of life, from cultural and economic insights to social ones . The biopolitical left and biopolitical liberalism cross at the heart of biopolitics: these twelve topics could map a post-Covid political agenda for any reasonable governance that would value and cherish the pandemic experience as an opportunity to strength, not to fault contemporary, imperfect democracies . Along this article, a positive sketch of biopolitics as a moderated regime that has nothing to do with its negative governmental tradition has been engaged in order to offer a new perspective on the multiple manners in which not only our lives, but equally this domain experienced radical changes that redesign the priorities of political interventions across public spheres . In the end, it is not a post-Covid world defying the pandemic the one we would like to live in, but one that turns such despicable historical crises into an opportunity for progress . Resilience is not incompatible with progress: as long as this statement will be supported by empirical facts revealed by governmental decisions that chose to defend life not to use it for electoral advantages, biopolitics will earn a positive place in our future . | 9,248.4 | 2021-10-29T00:00:00.000 | [
"Political Science",
"Philosophy"
] |
Mechanical and Thermomechanical Behavior Of Sic/Si Compounds Subjected To Controlled Atmospheric Conditions
Biomorphic SiC/Si compounds were fabricated from copaiba wood (Copaifera officinalis, natural wood native to Peru), by reactive infiltration of molten silicon in a porous carbon preform obtained by a controlled pyrolysis process of wood. Structural and microstructural characterization tests by X-ray diffraction and scanning electron microscopy, respectively, revealed, on the one hand, the presence of crystalline phases of SiC, Si and C, and on the other, the typical morphology of this type of material, which it consists of a continuous SiC scaffold with elongated channels in the direction of tree growth and the presence of residual Si and C located mainly in the porosities of the material. The mechanical behavior in uniaxial compression was also studied at a constant compression rate of 0.05 mm/min and as a function of temperature (from ambient to 1400 oC) and test atmosphere (ambient air, humid air, dry air, Ar, N2 and reducing mixture (95% Ar + 5% H2). The mechanical results were evaluated based on values of maximum stress and modulus of elasticity (stiffness), finding a clear reduction in the values of maximum stress and stiffness of the material when the samples passed of ambient test temperatures at 1400 oC. On the other hand, mechanical tests in a controlled atmosphere were carried out at a constant temperature of 1100 oC and the results showed that the mechanical behavior of the studied compounds is slightly influenced by the working atmosphere. Mechanical data found in the various test conditions will be an important support for the definition of the maximum allowable stress (considering the safety factor applied for a particular case) in the industrial application of the materials studied in this
Introduction
The development of new engineering materials, mainly those that work in high temperature conditions, are currently in great demand. In this sense, the modern industry has seen in carbides an enormous potential to meet its highly demanding requirements regarding mechanical properties [1]. On the other hand, wood is a renewable resource with a unique cellular microstructure and whose architecture has been ranked by nature through the years. Wood has now become a very important precursor material for the development of new bio-inspired (biomorphic) materials, which take advantage of the good combination of mechanical resistance, toughness, rigidity and low density that most of the available timber species present. [2][3][4][5][6][7][8][9].
Most research work on carbides is currently focused on the fabrication and mechanical and functional characterization of biomorphic SiC and SiC/Si obtained from cellulosic precursors [3] [12]. Today there are several methods for obtaining biomorphic carbides, but the most studied method is that of reactive infiltration of metallic silicon in a porous carbon preform, which stands out among other methods for proposing a feasible, scalable, environmentally friendly methodology and energy saving [3]. The process of obtaining carbides from wood involves a first stage of 53 pyrolysis the wood to obtain a carbon preform that is then infiltrated with metallic elements such as Si, Ti, Ta, etc. The carbon preforms are obtained by a controlled thermal pyrolization process of the wood at temperatures above 800 ºC in an inert atmosphere (Ar, N2) [10] [11], later the carbon preform is infiltrated with metallic elements considering a excess of these up to over 30% of the stoichiometric quantity necessary for the mass of carbon to infiltrate [12][13][14][15][16] Biomorphic SiC and SiC/Si applications are wide and depend on their mechanical properties and pore structure, the most outstanding applications include: high temperature filters for gas or liquid, catalyst support, energy storage materials, barriers thermal, anti-wear surfaces, etc. [17][18][19][20][21] The mechanical properties of SiC/Si compounds have been studied extensively [22][23][24], but mostly under ambient temperature and atmospheric conditions, however, only few studies have been directed at considering the atmosphere of work as a factor of great importance for the design of structures fabricated with SiC/Si. Therefore, the present study tries to show the effect of the change of working atmospheres in the maximum mechanical resistance and stiffness of SiC/Si composites.
Materials and Methods
SiC/Si composite materials were fabricated by reactive infiltration of metallic silicon in porous carbon preforms. Several authors have previously reported the detailed fabrication methodology for these materials [13]. In this work copaiba wood was selected as the main raw material, this wood species is classified as high density basic wood and is native to the Peruvian jungle. Cubes of approximately 10 mm on each side were cut from the selected wood and then dried at 80 ºC, then they were conducted to a pyrolysis process in an inert atmosphere up to the maximum temperature of 900 ºC with an isotherm time of 30 min. (Fig. 1 (a)).
The carbon preforms obtained after the pyrolysis were subjected to a reactive infiltration process under vacuum and at a maximum temperature of 1550 ºC for 30 minutes with an excess of silicon of 50% with respect to the stoichiometric amount for the amount of carbon to be infiltrated ( Fig . 1 (b)). The structural characterization of the carbon preforms and of the SiC/Si compounds was carried out in an X-ray diffraction machine (Bruker, model D8 Endeavor, Germany) that has a Cu tube and Kα radiation (wavelength of 0.15405 nm). All tests were performed in the 2θ scanning range of 10º to 75º, with a step width of 0.02º and tube conditions of 35 kV and 40 mA.
Microstructural characterization was performed on polished carbon and SiC/Si surfaces using an AM SCOPE light microscope (50X -500X ME320B-PZ, USA).
Microstructural observations were made in reflection mode and on previously polished surfaces with grade 6, 3 and 1 micron diamond paste (in that order).
The mechanical characterization was carried out in a universal testing machine (MICROTEST, model EM1/50/FR, Spain) that has three interchangeable chambers that cover the range of test temperatures between -30 and 1500 ºC and has a hermetic system of control of atmospheres with which it is possible to carry out tests in environmental and non-ambient (inert, oxidizing or reducing) conditions. The mechanical tests were carried out in uniaxial compression at a constant compression rate of 0.05 mm/min and at variable temperatures between ambient and 1400 ºC. In addition, with the aim of evaluating the effect of non-environmental conditions on the mechanical response of the studied materials, tests were carried out at a constant temperature of (a) (b) 54 1100 ºC and in atmospheres of humid air, dry air, Ar, N2 and reducing mixture (95% Ar + 5% H2).
The samples used in the mechanical tests consisted of 5x5x10 mm parallelepipeds cut from larger samples with the help of a low speed precision cutter that has diamond edge discs. The force and displacement data obtained in the mechanical tests were converted to stress vs. strain graphs. The stressstrain curves allowed us to analyze and compare the values of maximum stress and moduli of elasticity of the materials studied under the various test conditions. Fig. 2 shows the X-ray diffraction spectra for samples of carbon ( Fig. 2 (a)) and of SiC/Si compounds ( Fig. 2 (b)). The diffraction peaks identified in the carbon sample reveal the presence of graphite, however, considering the shape of the baseline of the diffraction spectrum, the carbon obtained could be considered as a mainly amorphous material. On the other hand, the diffraction spectrum of the SiC/Si compound confirms the presence of up to three crystalline phases: SiC, carbon (graphite) and Si. The shape of the baseline indicates that the SiC/Si compound after the infiltration process is completely crystalline and contains Si and C (graphite) residual. SiC/Si. Fig. 3 presents the morphology found on the surfaces of carbon ( Fig. 3 (a)) and SiC/Si compound ( Fig. 3 (b)). Both microstructures are of the cross section (with respect to the growth direction of the tree) and are clearly defined. The carbon sample has a single phase of C, with the presence of some rough or less rounded regions in darker contrast that would correspond to pores distributed throughout the area of the micrograph, and are also elongated in the growth direction of the tree. On the other hand, on the polished surface of SiC/Si it was possible to observe up to four distinct phases: SiC (dark gray), Si (light gray), porosity (rounded black regions) and unreacted carbon (black regions no definite shape). The phases found microstructurally are in good agreement with the X-ray diffraction results presented in Fig. 2 of this work. Fig. 4 presents stress vs. strain curves where a clear ratio of increase of the average maximum resistance is appreciated when increasing the amount of silicon in excess of 20 to 50% with respect to the stoichiometric molar ratio of the SiC molecule ( Fig. 4 (ac)). On the other hand, a systematic reduction of the average maximum resistance can be seen when the test temperature increases from room temperature to 1400 °C in the compound with 50% excess silicon (Fig. 4 (cf)). Fig. 5 shows stress vs. strain curves for SiC/Si compounds with 50% silicon in excess with respect to the stoichiometric composition of the SiC molecule. The tests were carried out in various types of atmospheres to verify the mechanical behavior (maximum resistance to compression and modulus of elasticity) of the materials studied under conditions in which they could work as part of industrial mechanical components. A greater sensitivity of the average maximum resistance in test atmospheres of N2 and H2 (5%) + Ar (95%) could be appreciated. Table 1 presents a summary of the data obtained for the mechanical resistance and moduli of elasticity maximum of SiC/Si compounds evaluated in uniaxial compression and under variable conditions of temperature and atmosphere. It has been possible to verify a systematic reduction of the values of maximum stress and moduli of elasticity when the test temperature increases from room temperature to 1400 ºC. This result could be attributed to the presence of residual metallic silicon in the porosities of the SiC ceramic matrix. The metallic Si present in the SiC/Si compound tends to soften with increasing test temperature, this effect being more evident at temperatures close to the melting point of Si. The effect of this softening translates into a systematic reduction in the overall stiffness of the SiC/Si compound.
Mechanical Characterization
On the other hand, it has been possible to verify a slight relationship between the mechanical behavior of the SiC/Si compound and the test atmosphere, which suggests that the change of working atmosphere should be considered in the engineering design of components or products that Include the material studied in this work. 6. shows the upper and lower limits found for the data set of maximum stresses and moduli of elasticity of SiC/Si compounds under conditions of variable temperature and test atmosphere. Fig. 6 (a) and 6 (b) show the influence of the test temperature on the average maximum stress and modulus of elasticity, respectively. A lower dispersion of values can be seen when the test is carried out at 1400 ºC, this observation could suggest that at temperatures of 1400 ºC the Si present in the compound contributes minimally to the overall mechanical response of the compound (being close to its melting point), which can be seen more clearly in Fig. 6 (a) In this scenario, only the mechanical response mechanisms of the SiC ceramic matrix take on importance for data analysis. Another important aspect that deserves to be analyzed correctly can be seen in the tests at room temperature (RT) of Figs. 6 (a) and 6 (b), where the values found for the maximum stress are widely dispersed between 1427 and 2064 MPa, while the values for the modulus of elasticity only have a slight dispersion between 142 and 148 MPa. In this regard, it could be suggested that the dispersion of the maximum stress values is due to the fact that the appearance and propagation of cracks in the ceramic matrix (responsible for the maximum stress of the material) is variable and occurs at stresses above the proportional limit. . For its part, the modulus of elasticity has little dispersion at room temperature because this property is measured only in the elastic range before the limit of proportionality (before cracks appear).
Figs. 6 (c) and 6 (d) show the upper and lower limits for all the data found for maximum stress and moduli of elasticity of SiC/Si compounds tested in variable atmospheres. In this regard, previously we must mention that this group of tests were carried out at a constant temperature of 1100 ºC, since it was previously possible to confirm that at that temperature the material exhibited a completely fragile behavior and in this way only the test atmosphere could influence the mechanical behavior of the studied materials. The results found showed that the SiC/Si compounds were very mechanically stable in humid air and dry air atmospheres, with respect to their maximum stress, but under nitrogen and reducing atmospheric (95% Ar + 5% H2) conditions, their maximum stress was slightly reduced. The results of maximum stress in an inert atmosphere of Ar were on average the highest, this result could suggest that when the SiC/Si compound is in an inert environment it does not undergo any chemical change leading to the mechanical instability of the material. In the other test atmospheres it is possible that chemical changes occur that produce an effect on the final mechanical response of the materials, hence the importance of evaluating the dispersion of data to be able to suggest permissible design effort from the application that can be used give the studied material.
The comparative graph of the modulus of elasticity (E) as a function of the test atmosphere shown in Fig. 6 (d)) reveals that there is no clear or direct relationship between the test atmosphere and the stiffness of the materials studied in this work. Therefore, it could be suggested that the stiffness of the material is stable in various working environments.
Conclusions
SiC/Si compounds have been successfully fabricated from copaiba wood, following procedures established in the literature for wood pyrolysis in an inert atmosphere followed by reactive infiltration of metallic silicon in carbon preforms.
X-ray diffraction studies confirmed the presence of a mostly amorphous phase in samples of pyrolyzed wood (carbon preform) with low energy graphite peaks. On the other hand, it was found that the SiC/Si compounds obtained after the reactive infiltration process were completely crystalline with the presence of diffraction peaks of SiC, Si and C (graphite).
The microstructural studies were in good agreement with the data found in the DRX studies, showing a single homogeneous phase of carbon in the samples of pyrolyzed wood (with morphology similar to that of the cellulosic precursor) and four phases in the SiC/Si compounds. : SiC (dark gray), Si (light gray), residual carbon (black without defined shape) and pores (black with rounded shape).
Uniaxial compression studies showed a clear influence of the maximum stress and modulus of elasticity with the test temperature. It was found that as the test temperature increased, the studied materials became less rigid and presented less maximum stress. It is suggested that these results are due to the progressive softening of the remaining Si as the test temperature near the silicon melting point increases.
The mechanical data found in the controlled atmosphere tests showed that the studied materials are sensitive (respect to their maximum stress) to the change in working atmosphere, mainly in nitrogen and reducing mixture atmospheres (95% Ar + 5% H2). However, changes in atmosphere do not have a definitive effect on the stiffness of the materials studied. Both, the maximum stress and stiffness of the materials studied under conditions of variable temperatures and atmospheres must be considered to define the allowable stress of these materials in their probable industrial application. | 3,676.4 | 2020-10-19T00:00:00.000 | [
"Materials Science",
"Environmental Science"
] |
Design of Intelligent Neuro-Supervised Networks for Brain Electrical Activity Rhythms of Parkinson’s Disease Model
The objective of this paper is to present a novel design of intelligent neuro-supervised networks (INSNs) in order to study the dynamics of a mathematical model for Parkinson’s disease illness (PDI), governed with three differential classes to represent the rhythms of brain electrical activity measurements at different locations in the cerebral cortex. The proposed INSNs are constructed by exploiting the knacks of multilayer structure neural networks back-propagated with the Levenberg–Marquardt (LM) and Bayesian regularization (BR) optimization approaches. The reference data for the grids of input and the target samples of INSNs were formulated with a reliable numerical solver via the Adams method for sundry scenarios of PDI models by way of variation of sensor locations in order to measure the impact of the rhythms of brain electrical activity. The designed INSNs for both backpropagation procedures were implemented on created datasets segmented arbitrarily into training, testing, and validation samples by optimization of mean squared error based fitness function. Comparison of outcomes on the basis of exhaustive simulations of proposed INSNs via both LM and BR methodologies was conducted with reference solutions of PDI models by means of learning curves on MSE, adaptive control parameters of algorithms, absolute error, histogram error plots, and regression index. The outcomes endorse the efficacy of both INSNs solvers for different scenarios in PDI models, but the accuracy of the BR-based method is relatively superior, albeit at the cost of slightly more computations.
Introduction
Parkinson's disease illness (PDI) is a neurological disorder normally caused by an early significant death of dopaminergic neurons, and the resulting deficiency of dopamine within the basal ganglia results in movement disorders [1].PDI patients may suffer from tremors/shaking, kinetic problems, postural instability, and rigidity and anxiety, as highlighted in Part 1 of the graphical abstract provided in Figure 1.In 2016, around 6.1 million people were affected by the PDI [2], and a rapid increase in PDI patients has been observed in the past two decades [3,4].The current therapeutic treatments for PDI are based on restoring dopamine levels.These remedies are helpful in providing symptomatic relief to PDI patients, but they are not disease modifying, and, therefore, PDI has remained incurable [5].Mathematical modeling of PDI may help in better understanding the dynamics of the disease and, thus, improved treatments for its recovery.Different mathematical models of PDI have been proposed [6].For instance, in [7], Anninou et al. developed a mathematical model for PDI by exploiting the concept of fuzzy cognitive maps, and a generic algorithm The objectives of the current investigation are as follows:
y t a y t a y t a y t b y t b y t b y t b y t y t b y t y t b y t y t y t a y t a y t a y t b y t b y t b y t b y t y t b y t y t b y t y t y t a y t a y t a y
• Study the dynamics of a mathematical model for PDI, governed with three differential classes to represent the rhythms of brain electrical activity that are measured at different locations of the cerebral cortex.The objectives of the current investigation are as follows: • Study the dynamics of a mathematical model for PDI, governed with three differential classes to represent the rhythms of brain electrical activity that are measured at different locations of the cerebral cortex.
•
Construct intelligent neuro-supervised networks (INSNs) by exploiting the knacks of multilayer structure neural networks backpropagated with the Levenberg-Marquardt (LM) and the Bayesian regularization (BR) approaches.
• Optimize the mean squared error based fitness function for sundry scenarios of PDI models by the variation of sensor locations to measure the impact of the rhythms of brain electrical activity.
•
Compare the outcomes on the basis of exhaustive simulations of the proposed INSNs via both LM and BR methodologies with reference solutions of PDI models by means of learning curves on MSE, adaptive control parameters of algorithms, absolute error, histogram error plots, and regression index.
The structure of the remaining paper is as follows: Section 2 presents the details of the related works; Section 3 provides the mathematical model of PDI along with a description of the proposed INSNs; Section 4 discusses the simulation results for different scenarios of PDI; and Section 5 concludes the study by noting some potential future research directions.
Related Works
In PDI, the freezing of gait of patients is a common problem, referring to sudden/temporary inability to initiate or continue walking, and the PDI patient feels as if his or her feet are glued to ground when this occurs [11].The freezing episodes often occur in PDI patients during the gait initiation or when turning, and they may considerably affect mobility/independence.Although the exact reason for freezing in PDI is not yet fully understood, it is widely believed that it results from a combination of motor/cognitive factors, such as (i) motor fluctuations, where the person experiences periods of good mobility-on periods and periods of poor mobility i.e., only off periods; (ii) dual-task interference, i.e., performing dual tasks, such as talking when walking or carrying an object, can enhance the risk of freezing; and (iii) emotional factors, such as anxiety and stress, can trigger or may worsen the freezing of gait.Parakkal et al. investigated the freezing of gait in PDI patients and suggested an ankle push off model [11], where a simplified neuromechanical model of gait is used for observing the variability and freezing in PDI.The mathematical model presented in [11], composed of the stance-leg, demonstrated an inverted pendulum (IP) operated by the ankle, and pushes off forces through the trailingleg and pathological forces from the plantar flexors of the stance-leg.Further, the effect on walking of the swing-leg is modeled in a biped model (BM), while freezing and irregular walking is studied in both the BM and IP model.The plantar flexors (PF) correspond to the swing-leg pushing the center-of-mass forward, and the PF correspond to the stance-leg producing opposing torque.The study conducted in [8] demonstrated that the opposing forces produced by PF can persuade freezing, and it also explained the gait irregularities that are closer to freezing, such as step length reduction and irregular walking patterns.
Heyete et al. [12] presented the Bayesian mathematical model (BMM) to identify predictors of long-term motor/cognitive results and the progression rate in PDI.A BMM of motor/cognitive outcomes in PDI may aid understanding of the complex interactions among various factors and predict the progression of the disease.BMM exploits the concepts of probability and integrates data from multiple sources, including clinical assessments, demographic details, and neuroimaging results, to predict the different motor and cognitive outcomes.Belozyotov et al. [13] presented a mathematical model to study the behavior of PDI through the EEG signal of the patient taken at three different locations of the cerebral cortex (CC), considered as a three-time series describing the behavior of the disease curve in a three-dimensional phase-space, and then the 3D system of quadratic differential equations was constructed, whose solution provides the disease curve.Further, in [13], chaotic dynamics have been observed in the PDI mathematical model that can capture the complex interactions among the neurons and how their activity evolves over time.The chaotic attractors formed by the CC signals give information about the normal process or disease progression, depending on the nature of the chaotic graph.
Borah et al. [14] presented the fractional order model of PDI by exploiting the strong mathematical foundations of fractional calculus that allow real order differentiation or integration to be taken through generalizing the conventional integer order calculus.The design of appropriate controllers to control the chaos in the bio-mathematical models, including the PDI presented, enables stable performance to be attained.Further, the design of anti-controllers is also demonstrated in [14] for generating chaos when turbulence is required.Detailed analyses of the fractional order PDI model involving chaos are conducted in [14], where absence of chaos reflects the onset of the disease, and where anti-control schemes through linear state feedback, sliding mode, and single-state sinusoidal feedback are developed.
The intelligent computing approaches are introduced to effectively model or optimize different engineering, mathematical, and applied sciences problems [15][16][17][18].In [19], a convolutional neural network (CNN) framework is introduced for emotion recognition that takes input from the extracts of mel scale spectrogram, chromagram, Tonnetz representation, mel frequency cepstral coefficients, and spectral contrast features through speech files.
In [20], the estimated yield of soya bean crops under drought conditions is studied through imagery from an unmanned aerial vehicle and CNN.Further, it is demonstrated that the fusion of 1-D and 2-D inputs in a CNN-based deep learning model enhances the estimated accuracy.In [21], an improved denoising autoencoder (DAE) is developed that integrates the concept of confidence level in conventional DAE for fast and accurate recommendations in recommender systems, which are required in the E-commerce industry to provide reliable recommendations to the users.The DAE structure proposed in [21] is effectively applied to Movie Lens 100 K and 1 M datasets with outstanding performance compared to the standard DAE structures in terms of precision, recall, and MAP metrics.In [22], swarm intelligence is exploited for the identification of fractional order nonlinear autoregressive exogenous systems through established strength of particle swarm optimization (PSO).In [23], a hybrid bi-directional gated recurrent unit (BiGRU) and bi-directional long-term short-term memory (BiLSTM) are presented for electricity theft detection in smart grids with preprocessing through feature engineering.Before using the BiGRU and BiLSTM for classification, the data imbalance issue is solved using a K-means minority oversampling scheme such that the balanced data are given as an input to the BiGRU and BiLSTM models for better classification accuracy.In [24], the fractional calculus concepts are incorporated in the optimization mechanism of PSO to enhance its optimization strength for effective parameter estimation of nonlinear Hammerstein autoregressive exogenous systems.Further, the key term separation principle is introduced in the PSO to accurately estimate the actual parameters of the Hammerstein nonlinear system by avoiding the redundant parameters.[25] present marine predators based optimization heuristics for a parameter estimation of Hammerstein output error systems.The marine predator is a recently introduced swarm intelligence optimization approach that mimics the behavior of predators for catching prey through Brownian and Levy distributions for estimating the optimum communication between predator and prey.In [26], the weather classification model through the hybrid CNN and generative adversarial network is developed for photovoltaic power forecasting.In [27], knacks of feedforward artificial neural networks (FANN) optimized with the Levenberg-Marquardt algorithm are presented for analysis of the power law fluidic problem of moving wedge and flat plate model.In [28], FANN optimized with the Bayesian regularization algorithm are exploited for peristaltic motion of a third grade fluid in the planar channel.In [29], FANN optimized with the Levenberg-Marquardt algorithm are proposed to study the dynamics of multi-walled carbon nanotubes coated with gold nanoparticles with a different velocity slip in curved channel peristaltic motion.In [30], the efficacy of ANN optimized with the Levenberg-Marquardt and the Bayesian regularization algorithms is analyzed for the Cattaneo-Cristov heat flux model with biconvection nanofluid flow.In [31], intelligent algorithms and control schemes are presented for battery management in electric vehicles with details of current advancement, major challenges, and future prospects in the domain of battery management systems.In [32], a comprehensive survey of the applications of the intelligent transportation systems is presented in the context of big data, with identification of the research gaps and potential future research directions in the domain of intelligent transportation systems.In [33], a variety of issues related to interoperability in the Internet of Things (IoT) are discussed, such as searching/processing IoT, implementing, modeling event, and workflow processes.In [34,35], automatic detection of motor imagery EEG signals is obtained for robust brain computer interface systems.In [36], the recognition of alcoholic EEG signals is performed using CNN and the concept of geometrical features.The graphical features are one of the newest approaches for identifying underlying patterns of EEG signals, and they are used for effective depression detection [37] as well as seizure recognition [38].
There are a few intelligent computation algorithms.These include a chimp-inspired optimization scheme [39], i.e., an intelligent optimization algorithm effectively exploited to solve different problems with reasonably accuracy through providing a good balance in the exploration and exploitation phases; a Kohonen neural network [40], i.e., an unsupervised self-organizing (SO) competitive neural network that performs automatic clustering and that updates the weights of the network through SO feature mapping with effective application to intrusion detection of the network virus; and a Mayfly algorithm [41], i.e., a swarm intelligence-based heuristic approach, applied to successfully solve different engineering optimization problems, including the asymmetric traveling salesman problem, due to the features of population diversity and enhanced local search capability.Others include a simplified slime mould algorithm [42], i.e., a modified version of the slime mould heuristic, with an introduction of enhanced adaptive oscillation for better exploration capability during the early search phase, with application to wireless sensor network optimization problems; a code pathfinder algorithm [43], i.e., a discrete complex code pathfinder heuristic for an efficient solution to the optimization problem of wind farm layout through an improved exploration capability; and a firefighting strategy based marine predators approach [44], i.e., an improved variant of marine predator heuristic through an introduction of opposition-based learning for more uniform initial population and adaptive weight factor for creating balance between exploration/exploitation capabilities to effectively handle the forest fire rescue issues.More of these intelligent computer algorithms include a chaotic grey wolf optimizer [45], i.e., a modified grey wolf optimizer by incorporating the concepts of chaotic maps and adaptive convergence factor for robust and accurate parameter estimation of control autoregressive systems; a subtraction average based optimizer [46], i.e., an optimization approach inspired by the subtraction average of searchers agents for the position updates of the particles in the search space; and an enhanced dragonfly heuristic [47], i.e., enhanced version of dragonfly algorithm with an improved mechanism of global search and a local search for the four color map problem.There are also two others: a non-dominated sorting genetic algorithm [48], i.e., a modified variant of genetic algorithm with special congestion approach and adaptive crossover scheme to effectively solve multi objective and multi modal optimization problems, and, lastly, a green anaconda optimizer [49], i.e., an optimization heuristic that mimics the natural behavior of the green anacondas to solve various benchmark optimization challenges.
The intelligent computing-based methodologies have been proposed for bioinformatics and biotechnology applications as well.These include a combination of a graph neural network and CNN for efficient breast cancer classification [50]; deep learning and transfer learning through regional CNN for white blood cell detection [51]; and a fine-tuned neural network and long-term short-term memory-based neural network for skin disease [52].They also include a convolutional autoencoder and transfer learning-based scheme for Alzheimer's disease visualization [53]; a perceptron neural network for bacterial behavior programming [54]; and a deep neural architecture with generative adversarial network for brain tumor classification [55].In addition, they include a deep neural network for epidemic prediction of COVID disease [56]; deep learning for sequential analysis of biomolecules [57]; elastic net and neural networks for the identification of plant genomics [58]; data mining and machine learning algorithms based on spectral clustering, random forest, and neural networks for cancer diagnosis through gene data [8]; and a stacking ensemble model based on an auto-regressive integrated moving average, exponential smoothing, a neural network autoregressive, a gradient-boosting regression tree, and extreme gradient boost models for infectious diseases [9].Finally, there are supervised machine learning algorithms for lung disease detection, respiratory sound analyses, and so on [10].Motivated by the widespread applications of the artificial intelligence methodologies, this study investigates exploiting the artificial intelligence techniques to study the dynamics of PDI.
Proposed Methodology
Before developing the INSNs, first the mathematical model of the PDI is introduced in this section.Let the rhythms x 1 (t i ), x 2 (t i ), . . . ,x k (t i ), i = 1, 2, . . ., N of cerebral activity at k point of cerebral cortex be measured by EEG and defined as [13]: where shows the discrete approximation of y(t) = [y 1 (t), y 2 (t), . . . ,y k (t)] T and ε 1 (t i ), ε 2 (t i ), . . . ,ε k (t i ) represent the white Gaussian noise.The accurate acquisition of the EEG signals is of great significance, and denoising of the signal is required before further processing.The multiscale principle component analysis (MSPCA) plays vital role in the denoising of a signal, which is a combination of principle component analyses and wavelet [59], and which is used for robust motor imagery brain computer interface classification [60,61].The system of differential equations are constructed as [13]: .
For practical application, let the EEG signal of the subject be taken at three different points of his cerebral cortex and considered as a three-time series describing the behavior in the three-dimensional space.In standard medical procedure, measuring sensors are placed on some defined points of the cerebral cortex.For this study, we considered the magnitude of electrical impulses at points P3, P4, and O1, as well as C3, C4, and T5, designated by coordinate y 1 (t), y 2 (t), and y 3 (t).A three-dimensional system based on these time series was constructed as: . y 1 (t) = −20.93+ 1.55y 1 (t) + 6.20y 2 (t) − 7.05y 3 (t) + 0.016y The system (7) simulated the impulses at C3, C4, and T5, while the system of the differential equation presented in (8) and ( 9) simulated the impulse at the P3, P4, and O1 points: . y 1 (t) = −3.11+ 0.19y 1 (t) + 0.72y 2 (t) − 1.19y 3 (t) + 0.022y . Now, the details regarding the implementation of the proposed intelligent neuro supervised networks are presented.The proposed scheme is implemented in two steps:
•
Reference dataset generation: First, the reference dataset for the INSNs is generated through determining the numerical results of the PDI models presented in (7) to (9).The state of the art Adams procedure is used to determine the numerical results of the PDI models of ( 7) to ( 9) through the 'NDSolve' routine of Mathematica software for finding the solution of the systems represented by the differential equations for t ∈ [0, 5], with a step size 0.2, i.e., total 251 input (time instances), and, accordingly, a 753 output (number of measurements) with 251 discrete instances for each y 1 , y 2 , y 3 .The value of the parameters of the quantities of interest and initial population representing the location of sensors for electrical rhythms of the brain are taken from the reported study [13].Further information regarding the justification of the parameter on the basis of theoretical analyses, i.e., global and local stability and population dynamics, can be seen in the reported study [13].
•
Developing neuro-supervised networks: The INSNs are constructed through a neural networks structure with logistic activation function to solve the PDI models of ( 7) to (9).For backpropagation, two different optimization algorithms are used, i.e., Levenberg-Marquardt (LM) and Bayesian regularization (BR).In LM, the number of hidden neurons is taken as 20 for all three PDI models of ( 7) to ( 9), while in the case of BR, the number of hidden neurons for the PDI model of ( 7) are 50, and for remaining two PDI models of ( 8) and ( 9), the neurons are 100.
The optimizers based on LM and BR adjust the weights of the neural networks through minimizing the deviation from the reference numerical solution in the mean square error (MSE) sense.The MSE, absolute error (AE), to assess the performance of the proposed INSNs is defined as: MSE y 1 = mean(y 1 − y 1 ) 2 ; MSE y 2 = mean(y 2 − y 2 ) 2 ; MSE y 3 = mean(y 3 − y 3 ) 2 (10) The proposed INSNs may play a significant role in solving the PDI mathematical models.As PDI is a complex neurological disorder, its accurate mathematical modeling can help understand the underlying mechanisms, its progression prediction, and developing effective treatment strategies.The proposed INSNs are capable of analyzing the complex data, identifying patterns, and making predictions, and they thus may contribute in advancing the knowledge of PDI and improving patient care.Therefore, in this study, the authors proposed a neural networks-based intelligent framework for solving the PDI mathematical model.However, this framework can be extended for clinical contributions in terms of early and efficient diagnosis as well as the prediction of PDI.Moreover, the proposed INSNs can assist in optimizing PDI treatment strategies by considering various factors, such as age, symptoms, and medication history.The proposed INSNs can help predict the most effective treatment options and dosages for individual patients.This can help in enhancing personalized medicine approaches and improved patient outcomes.
The INSNs normally demand more computational requirements, especially when dealing with large datasets.Training and optimizing INSNs for PDI models may require significant computational resources and time.This may affect the practical implementation of the INSNs, particularly for researchers with limited computing resources.
Performance Analyses
The simulation results of the proposed INSNs for PDI models 1, 2, and 3 presented in (7), (8), and (9), respectively, are provided in this section by considering both the BR and LM optimization algorithms.The best validation performance of the proposed INSN-LM is 5.0059 × 10 −3 at 1000 epochs, 7.8394 × 10 −5 at 1000 epochs, and 4.0022 × 10 −3 at 466 epochs for PDI models 1, 2, and 3, respectively.The corresponding gradient and the learning rates are [0.0010,0.0011, with the reference numerical solutions.The results presented in Figure 9 endorse the efficacy of the proposed INSN-LM.
The comparison of the INSN-BR and ISNS-LM is also conducted with respect to the MSE-based fitness values, the number of epochs, the time consumed in the computation, and the BR/LM parameters, such as gradient and learning rate, for all three PDI models, and the results are presented in Figure 10.In order to further analyze the behavior of the PDI models presented in (7) to (9), the parametric plots are also drawn and presented in Figures 11-13 for PDI model 1, 2, and 3, respectively.meanwhile, attains the performance of 3.12 × 10 −4 , 6.54 × 10 −5 , and 1.10 × 10 −4 in times of 0:00:08, 0:00:08, and 0:00:01 with 1000, 1000, and 472 epochs.The results clearly indicate that the INSN-BR provides more accurate results than the ISNS-LM but at the cost of bit more computation.
In order to further analyze the behavior of the PDI models presented in ( 7) to ( 9), the parametric plots are also drawn and presented in Figures 11a, 12a and 13a show the parametric plot of y1 and y2 for PDI model 1, 2, and 3, respectively.Similarly, Figures 11b, 12b, and 13b provide the parametric plots of y1 and y2 for PDI model 1, 2, and 3, respectively, and Figures 11c, 12c, and 13c provide the parametric plots of y2 and y3 for PDI models 1, 2, and 3. To further deepen the analyses, the 3D parametric plots are also constructed and presented in Figures 11d, 12d, and 13d for PDI models 1, 2, and 3, respectively.The parametric plots of Figures 11-13 further establish the stability of the PDI models.• In future, it looks promising to incorporate the fractional gradient-based algorithms [62,63] for backpropagation in INSNs for analyzing PDI models, and to investigate early and efficient diagnosis as well as prediction of PDI through the proposed INSNs.
Figure 1 .
Figure 1.Graphical abstract of the PDI study using INSNs.
Figure 1 .
Figure 1.Graphical abstract of the PDI study using INSNs.
Figures 11a ,
Figures 11a, 12a and 13a show the parametric plot of y1 and y2 for PDI model 1, 2, and 3, respectively.Similarly, Figures11b, 12band 13b provide the parametric plots of y1 and y2 for PDI model 1, 2, and 3, respectively, and Figures11c, 12c and 13cprovide the parametric plots of y2 and y3 for PDI models 1, 2, and 3. To further deepen the analyses, the 3D parametric plots are also constructed and presented in Figures11d, 12d and 13dfor PDI models 1, 2, and 3, respectively.The parametric plots of Figures11-13further establish the stability of the PDI models.
•
This study presented intelligent neuro-supervised networks, INSNs, in order to study the dynamics of Parkinson's disease illness (PDI) through the rhythms of brain electrical activity measured at different locations on the cerebral cortex, represented with three differential classes.Two types of INSNs are constructed by neural networks multilayer architecture backpropagated with the Levenberg-Marquardt and the Bayesian regularization algorithms, i.e., INSN-LM and INSN-BR.The Adams solver is used to generate the reference data for grids of input and target samples of INSNs for different PDI models obtained by varying the sensor locations in order to measure the impact of rhythms of brain electrical activity.The dataset for all three PDI models is arbitrarily segmented into training, testing, and validation, with a proportion of 80, 10, and 10, respectively, by optimizing the fitness function based on the mean squared error criterion.The values of mean square error and absolute error endorse the accuracy and the correctness of the proposed INSN-LM and INSN-BR for all three of the PDI models.Further, the analyses by means of histogram error plots, learning curves, control parameters, and regression index all confirm the efficacy of the proposed INSNs for the PDI models, although the accuracy of INSN-RB is relatively superior to the INSN-LM, albeit at the cost of slightly more computational budget requirements. | 5,962.6 | 2023-07-01T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science"
] |
Hypoxia‐Inducible Factor 1 Alpha–Mediated RelB/APOBEC3B Down‐regulation Allows Hepatitis B Virus Persistence
Background and Aims Therapeutic strategies against HBV focus, among others, on the activation of the immune system to enable the infected host to eliminate HBV. Hypoxia‐inducible factor 1 alpha (HIF1α) stabilization has been associated with impaired immune responses. HBV pathogenesis triggers chronic hepatitis‐related scaring, leading inter alia to modulation of liver oxygenation and transient immune activation, both factors playing a role in HIF1α stabilization. Approach and Results We addressed whether HIF1α interferes with immune‐mediated induction of the cytidine deaminase, apolipoprotein B mRNA editing enzyme catalytic subunit 3B (APOBEC3B; A3B), and subsequent covalently closed circular DNA (cccDNA) decay. Liver biopsies of chronic HBV (CHB) patients were analyzed by immunohistochemistry and in situ hybridization. The effect of HIF1α induction/stabilization on differentiated HepaRG or mice ± HBV ± LTβR‐agonist (BS1) was assessed in vitro and in vivo. Induction of A3B and subsequent effects were analyzed by RT‐qPCR, immunoblotting, chromatin immunoprecipitation, immunocytochemistry, and mass spectrometry. Analyzing CHB highlighted that areas with high HIF1α levels and low A3B expression correlated with high HBcAg, potentially representing a reservoir for HBV survival in immune‐active patients. In vitro, HIF1α stabilization strongly impaired A3B expression and anti‐HBV effect. Interestingly, HIF1α knockdown was sufficient to rescue the inhibition of A3B up‐regulation and ‐mediated antiviral effects, whereas HIF2α knockdown had no effect. HIF1α stabilization decreased the level of v‐rel reticuloendotheliosis viral oncogene homolog B protein, but not its mRNA, which was confirmed in vivo. Noteworthy, this function of HIF1α was independent of its partner, aryl hydrocarbon receptor nuclear translocator. Conclusions In conclusion, inhibiting HIF1α expression or stabilization represents an anti‐HBV strategy in the context of immune‐mediated A3B induction. High HIF1α, mediated by hypoxia or inflammation, offers a reservoir for HBV survival in vivo and should be considered as a restricting factor in the development of immune therapies.
not its complete eradication because of the persistence of the viral DNA matrix, called covalently closed circular DNA (cccDNA). (2) Upon treatment arrest, the infection can relapse. (2) Therefore, treatments are urgently needed to progress toward a cure for chronic HBV infection.
Therapeutics developed for the treatment of HBV focus on activation of the adaptive and innate immune system. Several Toll-like receptor agonists have offered promising results both in vitro and in vivo. (3)(4)(5) Among these treatments, we and others have shown that induction of the cytidine deaminase, apolipoprotein B mRNA editing enzyme catalytic subunit 3B (APOBEC3B; A3B), upon immune-mediated lymphotoxinβ receptor (LTβR) agonization (e.g., by T cells) leads to cccDNA decay. (6,7) Most immune receptors such as LTβR are described to signal through the nuclear factor-kappa B (NF-κB) pathways. (8,9) NF-κB signaling is divided into two arms: the classical/canonical and the alternative/noncanonical pathway. (10) The canonical pathway signals through the IκB kinase (IKK) complex (inhibitor of nuclear factor κB kinase complex, consisting of NF-κB essential modulator/IKKα/IKKβ), triggering the phosphorylation and ubiquitination of nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor alpha and the release of p50/RelA (NF-κB p65 subunit) heterodimer. (10) The noncanonical pathway signals through NF-κBinducing kinase (NIK), leading to the phosphorylation of IKKα and p100, which is subjected to processing into p52 forming p52/RelB (v-rel reticuloendotheliosis viral oncogene homolog B) heterodimers that activate target genes such as immune mediators. (11) To reduce the extent of chronic inflammation and its deleterious effects, NF-κB signaling has to be tightly regulated. (12) Among the factors involved in this regulation, hypoxia-inducible factor 1 alpha (HIF1α) has been shown to (1) be stabilized or induced by and (2) regulate NF-κB signaling, (13) in addition to its canonical induction by low oxygen levels. (14) HIF1α is constantly produced and is targeted to the proteasome in the absence of stabilizing conditions. (14) Here, we identify HIF1α stabilization and the concomitant decrease of RelB protein level as a restricting factor for immune-mediated antiviral strategies against HBV.
HBV pRepaRatIoN aND INoCUla
HBV was purified and concentrated from the culture medium of HepAD38 cells by heparin columns and sucrose gradient ultracentrifugation as described. (17) dHepaRG cells were infected with 200 viral genome equivalents per cell, in medium supplemented with 4% PEG-8000 (Sigma-Aldrich). Twenty-four hours after infection, cells were washed three times with PBS.
HUMaN lIVeR SpeCIMeN
Sections of formalin-fixed, paraffin-embedded liver resections of 15 patients chronically infected with HBV were obtained from the DZIF partner site in Heidelberg/Institute of Pathology of the Medical University Heidelberg. Chronic hepatitis B (CHB) patients were all in the immune-active phase of the disease and presented F3/F4 fibrosis grading and A3 activity (METAVIR scoring). Sections were cut to be 2 or 5 µM thick. Work with patient material was approved by the Heidelberg ethics committee under the following number: S206/2005. We confirmed that informed consent was collected from all co authors for the manuscript.
Additional materials and methods information can be found in the Supporting Information.
HIF1α StaBIlIZatIoN oFFeRS a ReSeRVoIR FoR HBV IN IMMUNe-aCtIVe patIeNtS
Hypoxia has been shown to strongly modulate immune responses, both positively and negatively, depending on the cells and the immune mechanisms involved. (14) Inflammatory cytokines and/or ligands have been shown to efficiently inhibit HBV infection. (3,18,19) Thus, we wanted to decipher whether HIF1α might be involved in HBV persistence in chronically infected patients by preventing immune activation. Consecutive cuts of livers from CHB patients with end-stage CHB, also considered as an immune-active phase, were stained for HIF1α and HBcAg. Highly oxygenated/low inflammation zones, highlighted by an absence of HIF1α staining, were also low for HBcAg staining in these CHB patients (Fig. 1A,B). In contrast, zones with low oxygen level or with inflammation (i.e., strong HIF1α staining) presented an increased number of HBcAg-positive nuclei. A correlation was found between the numbers of HIF1α-and HBcAg-positive cells (Fig. 1C).
We have previously shown that, on the one hand, LTβR agonization by an agonistic antibody (BS1) leads to cccDNA decay and HBV clearance, whereas, on the other hand, LTα/β are up-regulated in CHB patients. (6,20) Therefore, induction of LTβ in CHB patients should clear the infection given its antiviral effect. To assess whether the correlation of HIF1α and HBc observed in vivo (Fig. 1C) could be attributable to lower immune response in this area, liver of CHB patients were either stained for HIF1α and A3B by in situ mRNA hybridization on consecutive slides, or by costaining of mRNA and protein. High HIF1α staining was found in areas with low A3B expression, whereas low HIF1α staining was found in areas with strong A3B expression (Fig. 1D,E and Supporting Fig. S1).
Altogether, these data highlight that in areas with high HIF1α stabilization, A3B expression is impaired, allowing viral persistence even during liver inflammation. Therefore, high HIF1α areas provide a reservoir for HBV persistence in vivo.
HIF1α, BUt Not HIF2α, IS INVolVeD IN HypoXIa-MeDIateD apoBeC3B RepReSSIoN
Hypoxia can induce the stabilization of both HIF1α and HIF2α. Although we show that HIF1α knockdown can rescue A3B expression and antiviral effects of BS1 under HIF-stabilizing conditions (Fig. 2), we aimed to investigate a potential additional role of HIF2α. Therefore, cell lines doxycycline inducible for the overexpression of wild-type HIF1α, degradationresistant HIF1α, or wild-type HIF2α were generated. Of note, only a degradation-resistant HIF1α (carrying a P402A and a P564A mutation, eliminating the sites that, when hydroxylated, target HIF1α for degradation) was detected in the overexpressing cell line (Supporting Fig. S3A). Consequently, subsequent experiments were only performed with the degradation-resistant HIF1α. Transcriptional activity and expression of mutated HIF1α and HIF2α were confirmed by RT-qPCR and immunoblotting, respectively (Supporting Fig. S3A-D). Overexpression of HIF1α or HIF2α alone inhibited A3B up-regulation induced by BS1 (Fig. 3A). However, under hypoxia, only siRNAs against HIF1α, but not HIF2α, rescued A3B up-regulation, and no cumulative effect was observed when knocking down both HIF1α and HIF2α, highlighting that HIF2α only plays a minor role in A3B inhibition under hypoxic conditions (Fig. 3B). HIF1α and HIF2α knock-down efficiencies were confirmed by RT-qPCR (Supporting Fig. S3E). Moreover, inhibition of A3B by HIF1α and rescue by HIF1α knockdown were confirmed using different HIF1α stabilizers (DMOG, CoCl 2 , and VH298; Fig. 3C,D and Supporting S3F). Of note, LTβR surface expression remained unchanged under hypoxia, with a mild increase after HIF1α knockdown, highlighting that the effect of HIF1α stabilization was not attributable to a decreased receptor expression (Supporting Fig. S4G,H). Moreover, A3B repression was not attributable to cell death under hypoxia (Supporting Fig. S3I).
Altogether, these data show that under hypoxic conditions, HIF1α-but not HIF2α-impairs the induction of A3B.
HIF1α StaBIlIZatIoN INHIBItS NF-κB-INDUCeD a3B tRaNSCRIptIoN By DeCReaSINg RelB pRoteIN eXpReSSIoN leVel
The main signaling pathways activated upon LTβR agonization are related to NF-κB, suggesting that A3B is an NF-κB target gene. To confirm this hypothesis, we used two kinase inhibitors ([N-(6-chloro-7-methoxy-9Hβ-carbolin-8-yl)-2-methylnicotinamide] dHepaRG cells were infected with HBV. Six d.p.i., cells were transfected with either 10 nM of HIF1α-targeting or control siRNAs. On the next day, cells were subjected to 1% or 20% oxygen for 3 days and treated with ±0.5 µg/mL of BS1. Transfection and treatments were repeated once. (D,E) dHepaRG cells were infected with HBV. At 10 and 13 d.p.i., cells were transfected with either 10 nM of HIF1αtargeting or control siRNAs. Cells were then treated with ±0.5 µg/mL of BS1 and with ±100 µM of DMOG. (F,G) dHepaRG cells were infected with HBV. At 10 and 13 d.p.i., cells were transfected with either 10 nM of HIF1α-targeting or control siRNAs. One day after the second transfection, cells were treated or not with 0.5 µg/mL of BS1, either under the presence of 30 µM of FG-4592 or DMSO. Six days later, (B,D,F) mRNAs and (C,E,G) DNA were extracted and analyzed by RT-qPCR and qPCR. Bars represent the mean ± SD of (B,C) one or (D-G) three independent experiments performed in quadruplicates. Data were submitted to (C,E,G) an unpaired Student t test or (B,D,F) one-way ANOVA. *P < 0.05; **P < 0.01; ***P < 0.005; ****P < 0.0001. (H) dHepaRG cells were infected with HBV. At 10 d.p.i., cells were treated with ±0.5 µg/mL of BS1 and with ±100 µM of DMOG for 12 days. Episomal DNA was extracted and analyzed by Southern blotting. Abbreviations: DIG, digoxigenin; d.p.i., days postinfection; mitoDNA, mitochondrial DNA; MW, molecular weight; NT, nontreated; PF, protein-free; rcDNA, relaxed circular DNA. and [5-(p-fluorophenyl)-2-ureido] thiophene-3carboxamide) that target the IKK complex (IKKα/β). We observed that inhibition of IKKα/β reduces BS1-induced A3B in dHepaRG cells (Supporting Fig. S4A). Given that we showed that HIF1α stabilization prevents BS1-induced A3B, we anticipated that HIF1α would inhibit NF-κB target genes. Indeed, the induction of the well-known NF-κB target genes, nfkb2 and nik, upon BS1 treatment in normoxia is highly reduced in hypoxic conditions, and this effect was confirmed for A3B (Supporting Fig. S4B-D). We also extended our analysis with other activators of NF-κB (TNFα, IL-17, and LPS) and observed the same trend on the tested NF-κB target genes. Therefore, our results indicate a hypoxia-related impairment of the NF-κB signaling pathways. Interestingly, RelB is at the crossroad of both NF-κB pathways; relb transcription is dependent on the canonical, whereas RelB protein is part of the noncanonical, NF-κB dimer, p52/RelB. (10) We confirmed that, whereas BS1 increased RelB protein expression and A3B transcription, depletion of RelB drastically reduces BS1-induced A3B expression (Supporting Fig. S5B,C). Therefore, we addressed whether the inhibitory effect of HIF1α stabilization on BS1-induced A3B upregulation was a consequence of RelB inactivation.
Cell fractionation highlighted that DMOG strongly reduces BS1-induced RelB protein in both the cytosolic and the nuclear compartments, whereas RelA expression and nuclear translocation were not strongly affected (Fig. 4A). More important, the decrease of RelB protein levels in the DMOG/BS1 condition was completely rescued in HIF1α-depleted cells (Fig. 4B). HIF1α stabilization did not repress BS1-induced RelB mRNA up-regulation (Fig. 4C). These results were confirmed using longer DMOG treatment, a different level of hypoxia, and other HIF1α stabilizers (Supporting Fig. S5D-G). By immunostaining, we also confirmed that RelA nuclear translocation remained unchanged under hypoxia (Supporting Fig. S5H,I), whereas hypoxia impaired RelB induction (Fig. 4D). Interestingly, hypoxia also prevented BS1-induced p52 (the main binding partner of RelB) recruitment to the A3B promoter (Fig. 4E).
To investigate whether our in vitro findings would also be of relevance in vivo, C57BL6/J mice were injected either with DMSO or DMOG and euthanized 6 hours postinjection. In vivo, DMOG triggered HIF1α stabilization and a strong reduction of RelB protein expression in the liver, without affecting RelB mRNA. No change was observed for RelA or p50 (Fig. 4F).
Altogether, our in vitro and in vivo results identified a strong reduction of RelB protein, but not mRNA expression, as the main driver of HIF1αinduced impairment of A3B expression.
HIF1α-MeDIateD INHIBItIoN oF RelB/a3B eXpReSSIoN IS INDepeNDeNt oF ItS tRaNSCRIptIoNal aCtIVIty
HIF1α belongs to a large family of proteins, including ARNT and AhR. (21) It has been reported that RelB can dimerize with AhR or ARNT (RelB/ AhR or RelB/ARNT), either controlling RelB protein stability and/or RelB transcriptional activity. (22,23) Moreover, crosstalks between these proteins can occur through competition for common partners (e.g., HIF1α/ARNT vs. AhR/ARNT). (24) Thus, we investigated whether such processes could control RelB activity in our model. A schematic timeline of the experiments is depicted in Fig. 5A.
In dHepaRG cells, AhR knockdown did not interfere with BS1-induced RelB expression, highlighting that AhR was dispensable for RelB stability (Fig. 5B). Interestingly, contrary to HIF1α knockdown, RelB protein levels were not rescued in ARNT-depleted cells treated with DMOG/BS1 (Fig. 5C). It was reported that ARNT represses the transcription of particular NF-κB target genes, (23) as confirmed by the elevated expression of C-X-C motif chemokine ligand 10 in ARNT-depleted cells (Supporting Fig. S6A). However, ARNT knockdown had no impact on RelB mRNA expression, whereas vascular endothelial growth factor alpha expression (a target gene of the HIF1α/ARNT heterodimer) was reduced (Supporting Fig. S6B,C). In addition, neither AhR nor ARNT knockdown rescued A3B levels in DMOG-treated cells (Fig. 5D,E). These results indicate that HIF1α/ARNT dimerization, which is necessary for the canonical function of HIF1α as a transcription factor, is not the cause of decreased RelB protein and A3B mRNA expression.
In summary, our results demonstrate that HIF1α/ RelB crosstalk prevents BS1-mediated A3B expression through an unconventional HIF1α-dependent mechanism.
HypoXIa pReVeNtS IMMUNe INDUCtIoN By DySRegUlatINg eXeCUtINg patHWayS
To investigate the global effect of hypoxia, mass spectrometry was performed on control or HIF1αtargeting siRNA-transfected dHepaRG cells treated with or without BS1 under normoxia (NO) or hypoxia (HO). A schematic timeline of the experiment is depicted in Fig. 6A. Interestingly, whereas 418 proteins were significantly dysregulated in BS1-treated versus nontreated cells under normoxia (NO/NT vs. NO/ BS1), only two proteins were found to be dysregulated when comparing the same treatments under hypoxia (HO/NT vs. HO/BS1), indicating a global inhibition of responses to BS1 treatment (Fig. 6B). Pathways were grouped into four different clusters: I, transcription and translation; II, signal transduction and immune response; III, metabolism; and IV, DNA replication and repair. Results highlighted that BS1 treatment impaired the metabolism (e.g., drug and fatty acid metabolism) of dHepaRG cells and cellular transcriptional and translational machinery were among the most up-regulated pathways, leading to production of immune response pathway effectors (Fig. 6C).
Altogether, these data showed that hypoxia globally impaired immune responses by inhibiting cellular pathways implicated in RNA processing and surveillance, as well as protein production, independently of the target gene or the stimulus. Interestingly, HIF1α knockdown rescued A3B induction, most probably by rescuing RNA processing and ribosome pathways, although it was not sufficient to completely revert the hypoxic state of the cells.
Discussion
Development of new therapeutics against HBV have largely focused on the use of immune mediators, given that they have shown promising results both in vitro and in vivo. (3)(4)(5) We and others have previously shown that immune-mediated induction of A3B by LTβR agonization (i.e., with the LTβR agonist, BS1, or LTα 1 β 2 -expressing T cells) leads to noncytolytic degradation of nuclear HBV cccDNA, enabling longterm inhibition of HBV replication without rebound, even after treatment arrest. (6,7) HIF1α has been shown to impair immune responses. (13,25) Inflammatory signaling has been shown to induce HIF1α, which we confirmed in our current study. Moreover, HBV pathogenesis and resulting fibrotic scaring processes will influence liver oxygenation, therefore modulation of HIF1α induction and stabilization. In the liver of CHB patients in immune-active (i.e., patients who potentially could clear the infection given that they likely express high levels of cytokines), we found a positive correlation between HIF1α expression and HBcAg-positive areas. Given that A3B mRNA was low in areas with high HIF1α, it can be expected that, in vivo, HBV might escape the immune responses in areas with elevated HIF1α staining.
We hypothesized that the correlation observed between HIF1α, HBcAg, and A3B mRNA highlights that low immune responses in HIF1α-high areas allow viral persistence, creating a viral reservoir. Therefore, we can hypothesize that blocking HIF1α stabilization during the immune-active phase of CHB patients could indeed be sufficient to allow morepotent immune responses, among which is induction of A3B, and viral elimination.
In vitro, we confirmed, using 1% oxygen, DMOG, and a number of other molecules inducing HIF1α stabilization, as well as HIF1α-overexpressing cell On the next day, cells were subjected to 1% (Hypoxia) or 20% (Normoxia) oxygen for 3 days ± 0.5 µg/mL of BS1. Proteins were submitted to unbiased mass spectrometry analysis. (B) Data are presented as volcano plot of normoxia nontreated (NO/NT) versus normoxia BS1-treated (NO/BS1) comparison. Dotted line represents the limit of significance (adjusted P value, <0.05). Red dots represent the only two proteins that are still significantly dysregulated (i.e., adjusted P value, <0.05) in similar comparison under hypoxia (HO/NT vs. HO/BS1). (C-F) Pathway analysis of significantly changed proteins was conducted with preselected KEGG pathways using the ROAST algorithm. The pathways are represented for (C) NO/NT versus NO/BS1, (D) NO/NT/siCtrl versus NO/BS1/siCtrl, (E) NO/ BS1/siCtrl versus HO/BS1/siCtrl, and (F) HO/BS1/siCtrl versus HO/BS1/siHIF1α. The significantly (respectively, nonsignificant) upregulated (dark red bar; respectively, light red bar) or down-regulated (dark blue bar; respectively, light blue bar) pathways are presented as the percentage of proteins analyzed in the pathways. Of note, black bars represent the number of significantly dysregulated proteins in the pathway. Data were submitted to a LIMMA algorithm for selection of significantly changed proteins. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Abbreviations: Akt, protein kinase B; CYP450, cytochrome P450; FDR, false discovery rate; JAK, Janus kinase; KEGG, Kyoto Encyclopedia of Genes and Genomes; MAPK, mitogen-activated protein kinase; NT, nontreated; PI3K, phosphoinositide 3-kinase; PPAR, peroxisome proliferator-activated receptor; RIG-I, retinoic-acid-inducible gene I; ROAST, rotation gene set testing; STAT, signal transducer and activator of transcription.
lines, that HIF1α stabilization mediates a strong impairment of LTβR-dependent A3B induction. However, impairment of immune responses was not limited to A3B as an NF-κB target gene, neither to BS1 as an NF-κB inducer, highlighting that HIF1α modulated NF-κB and other immunesignaling pathways (e.g., IFNα/γ-induced cccDNA degradation) to prevent the induction of immune mediators. Indeed, we identified that HIF1α impairs RelB protein, but not RelB mRNA level, in vitro and in vivo. This suggests that either RelB mRNA is not properly exported from the nucleus and/or is not efficiently translated, as confirmed by our proteomic data, which showed an impairment of RNA-processing and ribosome pathways under hypoxia. Alternatively, RelB stability is subjected to posttranslational modifications associated with proteasomal/lysosomal protein degradation. (26) We also found that the inhibitory activity of HIF1α toward RelB was independent of its partner, ARNT. An ARNT-independent function of HIF1α starts to emerge, (27) and the HIF1α/RelB crosstalk we discovered could bring more insights into the immune metabolism of the liver.
The global inhibition of immune responses observed under HIF1α stabilization, with different ligands and on several targets, suggests the need to modulate HIF1α to obtain optimal immune activation and thus an antiviral response during immune therapies administration. However, it will be important to confirm the effect of HIF1α on other immune therapies and antiviral targets, as well as in vivo, in a therapeutic setup. Mass spectrometry revealed that even though HIF1α knockdown partially rescued pathways implicated in RNA and protein production and processing, it could not fully reactivate the immune response in cells. Interestingly, although the rescue of the "hypoxic state" of the proteome was only partial, it was sufficient to rescue A3B induction and thereby restore the anti-cccDNA effects of BS1 Unknown mechanism treatment. From a clinical perspective, this could have severe consequences for the outcome of immunestimulatory approaches for the treatment of CHB patients. The oxygen status of the liver microenvironment is not only important for parenchymal cells to be able to integrate external stimuli, but also for immune cells to exert their function properly. (14,25) Moreover, given that inflammation can trigger HIF1α stabilization, it will be mandatory to inhibit HIF1α to insure potent immune responses. Recently investigated HIF inhibitors have shown encouraging results in cancer therapies. (28) These molecules should be tested in the treatment of CHB, especially in patients with fibrosis, and thus with compromised liver oxygenation. In the context of immune-mediated A3B activation, a focus should be made on HIF1α inhibitors. Additionally, HIF1α inhibitors could be combined with immune therapies (3,5) to insure potent immune activation in the whole liver.
In summary, we have shown that HIF1α stabilization impairs NF-κB-mediated A3B induction, which is important for HBV cccDNA purging (Fig. 7). We believe that preventing the inhibitory activity of HIF1α toward RelB might represent a therapeutic window that should be considered as a support of combinatory immune therapies, to ensure a better efficacy of the treatment. | 4,999.4 | 2021-05-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Material Science & Metallurgical Engineering Education in India-Past, Present & Future
Metallurgy as a science discipline has been established from the beginning of stone-age. It is imperative to note as we have progressed we have added facets of material science to it and the field has evolved into a discipline of engineering. From science fiction to reality the inherent inter-disciplinary nature of this branch sets up the need of its education and training. There have been several instances in when the pivot of development has completely changed due to advancement in material science and metallurgy whether it is semiconductor industry or space industry. In this paper, we have reviewed the scope and development of this discipline in India detailing through the course of history to the present status.
Introduction:
Material science, also known as material science and engineering is the science of metals and metallurgy is the study of extraction of metals. Its study is important as we deal with metals every day.
To sustain life and energy, we are surrounded with metals and materials in various forms. The use of metals was started 5th and 6th millennium BC, even the periods and ages are being named after the major advancement in use of material. First came the Stone Age as during that era equipment and 2 1234567890''"" ICCRME-2018 IOP Publishing IOP Conf. Series: Materials Science and Engineering 404 (2018) 012001 doi: 10.1088/1757-899X/404/1/012001 utensils were made of stones, then came the Bronze Age and Iron Age. Earlier, metals found in native state were being used then metals which can be extracted, reformed in simple way by heating were used by masses. These all evidence shows that the metallurgy engineering or material science was needed and used from much earlier period. In India, we can get concepts related to metallurgy in various Vedas as Yajurveda & Atharvaveda. Metals were exchanged in trades between India and nearby regions. Earlier people used metals which can be obtained easily and materials like gold, silver and so on were being extensively used by people. In ancient India, metals were used majority for making weapons for wars and for making utensils. The Iron pillar made in New Delhi, India, is a perfect example of showing that metallurgy or material science was being practiced in earlier India too. To let the pillar stand for generation it was made corrosion resistance and the concept of alloy was used in making of the marvel [1].
In medieval India, with arrival of foreigners, there was more improvement to the use of material in different product making activities. Material science or Metallurgy is the science of material in which every aspect which affects the physical and chemical process of material like nature, structure, bonding, environmental factors are studied. Material science is an interdisciplinary field included in almost every branches of engineering as every field is incomplete without use of materials, so it is important to have proper knowledge to have better understanding of the field. Material science or metallurgy is included as a subject or optional subject in curriculum of mechanical engineering, electrical engineering, chemical engineering, electrical and electronics engineering, aeronautics engineering, and various other branches of technology. Due to rapid development after the world war, the manufacturing field has grown rapidly. These manufacturing units of any country provide a great boon to economy of that country, and developing countries such as India which has biggest population of youth [2]. This paper is divided three parts, first we have discussed about the metallurgy and material science education in India at undergraduate, postgraduate, doctoral and post-doctoral level, then future scope of metallurgy in different industry with the employment of material science engineer and at last conclusion of material science education in India. Table 1.
Basic Structure of Engineering Education in India:
Material science as a subject is included in various branches of engineering at different period at undergraduate and post graduate level. It was first included with physics, and then was taught with chemistry but later due to its inevitability in various fields it was taken separately as a subject [4]. It also keeps in track the research development and projects related to this field. Then its branches were Both UG and PG degree courses were being offered and doctoral or PhD was done on the specific field one want to pursue. Although it is being taught as a separate degree course but still many branches have one or more of material science course as subject in UG level.
At Diploma Level:
Diploma courses associated to material science or metallurgy engineering has been offered by only few polytechnic. In first year subjects are common with other linked branches. In second year, specific subjects (Metal Forming, Fuel Furnaces, Metallurgical Analysis etc.) related to the discipline has been offered. Subjects like testing of metals, physical metallurgy etc. are offered as a compulsory subjects in mechanical, civil, industrial, production and manufacturing diploma students. In final semester specific subjects (Steel Making, Corrosion of Metals, Alloy Steel etc.) related to the specialization are offered.
At Under Graduate Level:
There are about 6-10 core courses which is associated to the Material Science & Metallurgical
At Post Graduate, Doctoral & Postdoctoral Level:
Material Science & Metallurgical Engineering is divided into two main domains ferrous metallurgy and non-ferrous metallurgy at post graduate level material science and metallurgical engineering is offered under the specialization in Material Science, steel technology, corrosion science, industrial metallurgy, process metallurgy, foundry technology etc. [6,8]. Major topic covered under postgraduate course includes advance material characterization techniques, ceramics, steel technology, corrosion science, process energy, industrial metallurgy, material engineering, nano material, biomaterials in the end of last semester student have to justify his/her dissertation before the audience, which are the examiner.
Doctoral Research is done in the field of metal forming and mechanical behavior, material joining, iron and steel technology, integrated computational material engineering, material characterization, surface engineering, composite metal (bio-metal, ceramics, nano-metal) are the major fields in which major Indian researcher works. Post doctoral research has been performed only in few technical premium institutes in the emerging areas of Metallurgical Engineering. At present level post doctoral research has been done in the areas of physical metallurgy and multi component metallic alloys.
About the Professional Bodies:
The
About Simulation Tools:
There
About Literature:
There are many fields of engineering (aerospace engineering, ceramic engineering, chemical engineering, civil engineering, electrical engineering, electronics engineering, industrial engineering, manufacturing engineering, mechatronics engineering, mechanical engineering, production engineering, renewable engineering and so on) in which Material Science & Metallurgical engineering subjects are offered as core subjects as well as elective subjects in India. Weekly quiz (objective) or assignment related to that week has to be submitted by the candidates.
Online Web Resources by Indian Government:
Marks are allocated for each quiz and a final examination is also conducted which covers the whole course. After completing any course successfully, a certificate is issued by the NPTEL with the logo of concerned institute and MHRD [7].
Job Prospect:
The metallurgical profession is a wide field which offers a variety of job prospects for the students who took its associated courses in technology and engineering. Metallurgical engineers in India take the employment with the companies involved in ore production and metal extraction companies. They can take the profile of plant engineer, metallurgist, welding engineer, quality check engineer, and Metallurgical R&D lab technician and so on. They are also eligible in railways, armed force and government sectors [10].
Conclusion
With push coming through programs such as 'Make in India' and other initiatives from government focused on marking India on world as a manufacturing hub, it is quintessential to start from the primal need of machines that is raw material [9]. Through our research we have found that Indian institutes have a good platform through years of educational background. The inherent need is to allocate more budget to research activities in the department to allow them evolve. Industry-academia partnerships are also important as initial set up cost is a big factor in this area. To realize the ambitious goals set up my government agencies material science engineering could be the missing link. | 1,891.4 | 2018-09-28T00:00:00.000 | [
"Materials Science"
] |
Laser-Plasma and Self-Absorption Measurements with Applications to Analysis of Atomic and Molecular Stellar Astrophysics Spectra
: This work discusses laboratory measurements of atomic and diatomic molecular species in laser-plasma generated in gases. Noticeable self-absorption of the Balmer series hydrogen alpha line occurs for electron densities of the order of one tenth of standard ambient temperature and pressure density. Emission spectra of selected diatomic molecules in air or specific gaseous mixtures at or near atmospheric pressure reveal minimal plasma re-absorption. Abel inversion of the plasma in selected gases and gas mixtures confirm expansion dynamics that unravel regions of atomic and molecular species of different electron temperature and density. Time resolved spectroscopy diagnoses self-absorption of hydrogen alpha and hydrogen beta lines in ultra-high pure hydrogen gas. Radiation from a Nd:YAG laser device induces micro-plasma for pulse widths in the range of 6–14 ns, energies in the range of 100–800 mJ, and peak irradiances of the order 1–10 TW/cm 2 . Atomic line profiles yield electron density and temperature from fitting of line profiles to wavelength and sensitivity corrected spectral radiance data. Analysis of measured diatomic emission data yields excitation temperature of primarily molecular recombination spectra. Applications of the laboratory experiments extend to investigations of stellar astrophysics white dwarf spectra.
Introduction
Measurements of optical spectra from stellar astrophysical objects usually provide essential data for the determination of star-atmospheres and astrophysical phenomena [1]. White dwarf stars Sirius B and Procyon B, companions to Sirius and Procyon, reveal radiation temperatures of 26 kK and 8 kK, respectively [2]. The optical spectra of Sirius B show exclusively atomic, hydrogen Balmer series absorption spectra in the range of 400-700 nm. In turn, Procyon B reveals molecular, carbon Swan absorption spectra. This work also presents analysis of astrophysical absorption spectra from the Montreal data-base [2] and selected hydrogen absorption spectra from the Keck observatory archives (KOA) [3]. Application of well-established plasma spectroscopy [4] is instrumental for laboratory laser micro-plasma experiments for simulation of astrophysical conditions, in particular, those of white dwarfs.
This work presents characterization of gaseous hydrogen micro-plasma from the first four Balmer series members, H α , H β , H γ , and H δ lines. Previously communicated work primarily
Experimental Details
The experimental arrangement for laboratory measurements of laser-induced plasma [6] includes a Q-switched Nd:YAG laser device typically generating 6 ns laser pulses with an energy of 150 mJ per pulse at the fundamental wavelength of 1064 nm. Radiation is tightly focused to a spot size of the order of 10 µm to achieve a peak irradiance above 1 TW/cm 2 , or well above optical breakdown of SATP laboratory air and gaseous hydrogen of the order of one atmosphere. The emanating light from the micro-plasma is recorded with a Czerny-Turner type spectrometer that shows a resolution of 0.02 nm when using a 3600 groves/mm holographic grating and an intensified diode array or two-dimensional detector for time-resolved spectroscopy. Figure 1 illustrates the experimental schematic. The astrophysical datasets are preferably recorded with a high resolution spectrometer, for instance the so-called HIRES instrument at the Keck observatory that utilizes an echelle grating. The spectral resolving power, R = λ/∆λ, i.e., the ratio of wavelength, λ, and of the spectral resolution, ∆λ, typically is of the order of 40,000.
At H β , the spectral resolution equals ∆λ = 0.012 nm. In terms of the average gravitational redshift, v g = c ∆λ/λ, with c the speed of light, the velocity resolution amounts to 7.5 km/s. For comparison, the Sirius B gravitational redshift [14,15] is 89 ± 16 km/s. In other words, the Sirius B redshift equals ∆λ Sirius B = 0.14 nm.
From the gravitational shift [19], v g = 0.64 M/R, with M and R denoting the white dwarf (WD) mass and radius in solar units, respectively, and using thermodynamic cooling models, the WD parameters are inferred, namely WD gravitational constant, g, temperature, T, mass, M, and radius, R. Typical values for WDs with a hydrogen atmosphere of g, T, M, and R amount to 10 6 m/s 2 , 30 kK, mass of the sun, and size of the Earth, respectively.
Laboratory Emission Spectroscopy Results
Figures 2 and 3 illustrate recent measurements of H δ and H γ lines of laser plasmas in hydrogen gas. The figures display the average of 100 individually accumulated ICCD images. In these experiments, the laser beam propagates parallel to the slit, from top to bottom as indicated in the experimental schematic (see Figure 1). The recorded data are corrected for detector background and wavelength sensitivity. Wavelength calibration is accomplished with standard pen-ray light sources. The displayed image values represent spectral radiance in arbitrary units. Figure 2a displays H γ (at 434.07 nm) at the low-wavelength side of the image, background contributions from H δ (at 410.17 nm) and contributions from free-electron radiation. Figure 2d indicates diminished background contributions due to H δ . Analogously, Figure 3a shows H δ with contributions from H γ at the high wavelength side of the image. On the low wavelength side but outside the recorded spectral window, the indicated contributions are likely from H (at 397.01 nm) and are noticeable for all time delays in Figure 3. The full-width at half-maxima (FWHM) are extracted from the data for determination of electron densities as a function of time delay.
Results for the H β and H α and analysis were presented previously [5,6]. Electron densities from H β agree with those from H α [5,6]. Averages of the spatially resolved data along the slit yield spectra that appear similar to those recorded with a linear diode array [8]. Previously recorded H α and H β experiments using a photomultiplier while scanning the spectrometer [8] debate comparisons of the advanced Stark-broadening theory [20] with the standard theory [21]. The recorded H γ and H δ maps reveal expected variations along the slit dimension. This work reports analysis of the spectra displayed in Figures 2 and 3. Published line shapes [21] are utilized for determination of electron density. The H γ and H δ tables [21] are only available up to electron densities of 10 17 cm −3 with temperatures up to 20 kK, but the tables usually show a weak temperature dependence. Both H β and H δ show the typical dip at the center wavelength due to absence of the Stark component there.
Analysis of the recorded spectra encompasses determination of temperature from the integrated line-to-continuum ratios. Electron density measurements utilize full-width at half-maximum and for H β and H γ peak-separation. However, the H γ peak separation method is applied in analysis of experiments in 0.136-atm:0.136-atm H 2 :N 2 gas mixtures [22] that show electron densities in the range of 0.1-1 × 10 17 cm −3 for time delays longer than those for the 0.75-atm H 2 experiments addressed in this work. Figure 4 illustrates the line-to-continuum ratios and corresponding temperatures. The theoretical ratios of integrated line to 10-nm continuum ratios are recalculated [23], and the experimental data are added with error bars. The results for H γ and H δ agree with previously communicated H α and H β results [5]. Figure 5a,b shows Boltzmann plots [24,25] and inferred temperature for time delays of 275 ns and 150 ns, respectively. The error bars reflect uncertainties in determination of the baseline. Temperatures from line-to-continuum ratios and Boltzmann plots agree. The Boltzmann plot utilizes the natural logarithm of the Boltzmann distribution for determination of the temperature, The constant, ln C, is not used-only the slope determines the temperature. Figure 5 displays There are expected density variations across the slit height, but the analysis evaluates averages of the displayed spectra. For practical reasons, Lorentzian fitting is applied to find the full-width at half-maximum (FWHM) of the experimental spectra. From log-log fitting of H γ and H δ tables in the range of 0.1-1 × 10 17 cm −3 , one obtains for H γ FWHM, ∆λ γ , and for H δ FWHM, ∆λ δ , Similar to H β , the H δ peak-separation, ∆λ δ−ps , in the range of 0.1 × 10 17 cm −3 to 1 × 10 17 cm −3 amounts to In this work, density determinations are exclusively from FWHM of the first four Balmer series lines. Future work will address density determination from H δ peak separations that become distinguishable for electron densities in the range of 0.1-1 × 10 17 cm −3 , but for longer time delays than 275 ns.
For time delay of 275 ns, the FWHM for H γ and H δ are 9.1 ± 2 nm and 14.8 ± 2 nm, respectively. From Equations (2) and (3) ,one finds electron densities of 1.78 ± 0.4 × 10 17 cm −3 from H γ and 1.8 ± 0.3 × 10 17 cm −3 from H δ . The estimated error bars are primarily due to errors in determining the baseline for the measured line profiles in the 24-nm spectral window selected for each Balmer series line. However, the H γ and H δ values would extrapolate the formulas that are listed to be valid up to 1 × 10 17 cm −3 . However, the results would be consistent with electron densities obtained from H α and H β lines [5,6], namely, 1.9 × 10 17 cm −3 .
In the time-resolved laboratory data, there are pressure shifts that are larger than the gravitationalor Einstein-shift. Figure 6 exhibits the hydrogen beta line with the expected two peaks, one blue-shifted and one red-shifted due to pressure broadening [26]. The displayed data represent a re-examination of previously recorded spectra [8] in view of self-absorption and comparison with WD spectra.
The appearance of the H β line shape is illustrated for electron densities of the order of up to 60 × 10 17 cm −3 , i.e., η 0.24, or for electron density inferences from H β near the estimated Inglis-Teller limit [9]. For τ = 2 ns, it is difficult to speak of a traditional line shape because only the central portion can be demarcated. The 2-ns line-of-sight data are recorded near the tail of the laser pulse; consequently, experimental variations are expected for the accumulated averages. Figure 6 shows the H β line close to the Inglis-Teller limit [9] for τ = 2 ns. Figure 7 also illustrates that the peak-separation is significantly modified or masked by contributions from lower density lines; moreover, a single Lorentzian fit (Figure 7b) is not adequate due to the discrepancies near the peak. With impact broadening [27], the dominant process in the wings, it appears reasonable to consider Lorentzians for fitting of the spectra. Strictly speaking, hydrogen Stark profiles show asymmetries [28] that adversely affect the spectral line wings. The variations in the wings due to asymmetries are not explicitly considered in this work but are expected to be covered with the estimated error margins included in the Boltzmann plots. The double Lorentzian fit, Figure 7a, simulates the summed profile as a superposition from two regions that show different electron density. As the plasma expands spatially and temporally, and if one were to record continuously, one would expect several regions of different density contributing to the recorded signal.
Adding the original spectra [8], recorded in the first 2.5 µs temporal window with 6-ns gate-widths, yields a result similar to using a longer gate-width in the measurement. Strictly speaking, use of gate-widths and gate-delays that entirely cover the first 2.5 µs, and then adding the results would be equivalent to using a 2.5 µs gate. Nevertheless, Figure 7b is an acceptable representation of continuous recording as was the case for the collection of WD spectra.
Expanding laser-induced plasma is usually accompanied by a hypersonic shock wave, including rarefaction waves and associated temperature and density gradient regions that contribute in line-of-sight experiments. One can utilize integral inversion techniques to explore the spatial plasma distribution, or one can investigate line shape details in the data reduction of the spectra. In analogy with laser-ablation, occurrence of density variations can be attributed to formation of particle clusters [29,30] that can be studied with flowing gas leading to percolation effects that may cause line shapes that require double-Lorentzian line shape analysis.
Self-absorption of the H α line [31] and the H β line [32,33] is investigated with the so-called doubling-mirror method [34] for further evaluation of the H β line electron-density diagnosis [35] . Time-resolved spectra of H α are systematically collected with 10-ns gate widths in 100-ns time-delay steps [10,36]. Laser-plasma spectra are collected in standard ambient pressure and temperature (SATP) air. Figures 8 and 9 display measured spectra versus slit height for time delays of 300 ns, 400 ns, 700 ns, and 800-ns after initiation of optical breakdown in air. Comparisons of data recorded with and without the doubling mirror elucidate the level of self-absorption. With respect to the indicated slit height in Figures 8 and 9, Figure 10 illustrates Savitzky-Golay filtered spectra for 300 ns and 800 ns time delays. The electron density can be determined from H α Stark broadening and shift, but it can equally be determined from ionized nitrogen N + lines for early time delays. The H α line is red-shifted from 656.28 nm (see Figure 10a) but the N + lines are only slightly shifted from 648.21 nm and 661.06 nm. The electron density of 14 to 20 × 10 17 cm −3 determined from the H α line is higher than 12 to 13 × 10 17 cm −3 obtained from N + for the 300 ns time delays. Therefore, H α shows self-absorption for delays of 300 ns. Obviously, inferences from the FWHM of self-absorbed lines will lead to larger than true values for the electron density. However, the level of self-absorption is insignificant for determination of electron density for time delays of 800 ns after optical breakdown, Figure 10b.
Details of the 661.06-nm N + and 656.28-nm H α lines reveal asymmetries that can be corrected using the doubling mirror approach [34]. The correction factor, K λ or K corr , Analysis of the experimental data yields the ratio of the continuum radiation, R C , and the signal ratio, R λ , as function of wavelength, λ. Figure 11 illustrates the results. Additional experiments [32,33] focus on the H β line and for electron densities of the order of 10 17 cm −3 using a gate width of 0.5 µs and steps of 1 µs. Figure 12 a,b shows data recorded at time delay of 5 µs and 7 µs, respectively, from initiation of optical breakdown plasma. A plane mirror of 90 percent reflectance and a lens of 94 percent transmittance are employed to image the plasma onto itself, doubling in an ideal case the measured signal for optically thin laser-plasma. Figure 12. Measured, Savitzky-Golay filtered H β air-plasma data with and without using a doubling mirror [33]. The correction factor, K corr , is almost equal to 1. The error bars indicate the estimated errors in determination of the correction factor. Time delay: (a) 5 µs; and (b) 7 µs.
From fitting data to the H β line [32], electron densities of 0.73 × 10 17 cm −3 and 0.69 × 10 17 cm −3 are determined without and with the doubling mirror, respectively. For comparison, the N II 491.92-nm nitrogen line (measured simultaneously with H β ) indicates electron densities of 0.72 × 10 17 cm −3 and 0. 77 × 10 17 cm −3 without and with the doubling mirror, respectively. In addition, electron densities are determined from the H β peak separation. The electron density result suggests hardly any or an insignificant amount of self-absorption.
Recent work focuses on atomic and diatomic molecular emission spectroscopy [31]. Figure 13a,b illustrates line-of-sight and Abel-inverted spatial distribution of CN and the atomic carbon C I 193.09-nm line measured in second order. In principle, local electron density and temperature distribution can be evaluated without resorting to Abel inversion [37]. For laser-plasma, the outgoing shockwave exhibits electron density and temperature maxima near the edges of the slit-dimension in Figure 13a that can be directly evaluated from recorded C I line-of-sight spectra in these regions-plasma core contributions are negligible near the edges. Consequently, the sensitivity of an analysis without Abel inversion is expected to be close to that of line-of-sight diagnosis. For electron density and temperature distributions that show maxima near the center and monotonically decrease outwards, usually realized in laser-induced plasma for delays of several microseconds, the suggested method [37] for determination of plasma parameters should work well. However, adaptations of the method would be required due to occurrence of electron density and temperature maxima associated with the expanding laser-plasma shockwave.
Plasma expansion dynamics following laser-induced optical breakdown in gases show formation of cyanide, CN, within the first few 100 ns after optical breakdown. Time-resolved, integral-inverted, line-of-sight measurements within the first microsecond reveal spatial distributions [38]. Expansion dynamics clearly affect the distribution of electrons and molecules for time delays of 700 ns (see Figure 13b). Abel inverted CN spectra appear nearly uniform for a time delay of 1950 ns [38,39].
Application to Analysis of White Dwarf Spectra
Collection of spectra from white dwarf stars preferably occurs with a resolving power sufficient for determination of the gravitational shift. Recent data for the white dwarf star GD 394 B, recorded with an echelle spectrometer on 15 November 2015, with KOA-ID: HI.20151115.19364 [3], show a resolving power of R = 38,000, or a resolution of ∆λ = 0.013 nm. Figure 14a illustrates five overlapped regions of the recorded echelle spectra together with broad and narrow Lorentzian fits. Figure 14b shows the expanded region of the central absorption. Both broad and narrow features of a white dwarf (WD), astrophysical plasma are usually understood as absorptions from an outer region of the WD photosphere. However, it can also be labelled "self-absorption" of the emitted WD radiation.
Previously communicated analysis of the GD 394 B, H β spectra [40] shows narrow and broad photosphere absorption-widths of 0.39 nm and 7.3 nm that would indicate electron densities of 0.032 and 2.0 × 10 17 cm −3 , respectively. Re-analysis of the GD 394 B white dwarf data with two Lorentzian fits [41] and using all data points from overlapped regions of echelle orders leads to widths of 0.32 nm and 8.4 nm with corresponding electron densities of 0.022 and 2.2 × 10 17 cm −3 , respectively. Considering the error margins for the electron density measurements from the H β line [6], both results agree. Moreover, the analysis with two symmetric Lorentzians reveals 0.12-nm (average overlapped data) and 0. 13 The gravitational redshift of 0.043 nm (26.77 km/s) and the photospheric component of 0.047 nm (29.3 km/s) [42] account for a 0.09-nm redshift. Further improvements in fitting with possibly asymmetric line-shapes or adjustments to the background slope could very well confirm an overall redshift of 0.09 nm within error bars. Analysis of the white dwarf HG 7-85 from the Hyades cluster yields consistent results [6,43] for broad Lorentzian center wavelength and gravitational shifts of 0.072 nm and 0.08 nm, respectively.
The comparison of laboratory and astrophysical spectra implies that different electron density regions in the atmosphere of the investigated hot (∼25 kK) white dwarf stars cause absorption contributions that mask H β peak separation and central dip-shifts. Molecular spectra display variations across vibrational bands, possibly indicating varying C 2 concentrations of cool (∼7 kK) white dwarf atmospheres. Fitting of hydrogen spectra reveals broad and narrow profiles that may be caused by local thermodynamic non-equilibrium as WD stars cool [7,44]. The laser-induced plasma is well-reproduced in consecutive optical breakdown events allowing accumulations of several tens of laser-plasma events for each time delay. Recent laboratory measurements [45] explore white dwarf photospheric spectral lines to elucidate details in WD atmosphere modeling. Figure 16a displays a WD molecular C 2 Swan band absorption spectrum from GJ 841 B [46]. Analogous to analysis of C 2 emission spectra in laser-induced plasma, the absorption spectrum in Figure 16a is inverted and subsequently analyzed with the well-established diatomic molecular fitting program [47]. Figure 16b illustrates fitting results for the strongest band ∆v = 0 of the C 2 Swan spectra. Initial analysis indicates a temperature of 5.9 kK and a spectral resolution, δλ, of 2.5 nm for the entire range. The recorded data are shifted by an overall 0.5 nm to correct for wavelength accuracy. The GJ 841 B spectra are captured with a resolving power of 833, so the 0.5 nm adjustment is of the order of the spectral resolution. The fitting of ∆v = +2 and + 1 indicate best-fit spectral resolutions of 1.5 nm that are about a factor of 2 larger than those for ∆ v = −1 and − 2 fitting. The effective temperature of the GJ 841 B white dwarf is documented [2] to be ∼7.2 kK.
The analysis of the C 2 absorption spectra suggests deviation from equilibrium due to the significant differences in computed and recorded spectra for the investigated bands. Figure 17 displays further results for the ∆v = −1 and − 2 bands. The extracted temperature from fitting these selected bands is lower than that from fitting all bands ∆v = +2, +1, 0, −1, −2. The temperature inferred from ∆v = 0 bands is higher than that from the other bands.
Conclusions
Atomic and molecular emission spectra of the type encountered in laser-induced breakdown spectroscopy occur in astrophysical spectra from white dwarf stars. Gas-dynamic expansion affects distribution of the laser plasma, including distributions in the plasma core and just inside the expanding shock wave. Measured hydrogen electron densities and temperatures are in the range of 1-100 × 10 17 cm −3 and 10 kK to 120 kK (1-10 eV), respectively. Self-absorption does not affect electron density determinations in the range of 1-10 × 10 17 cm −3 from H α , but for electron densities in the range of 1-3 × 10 17 cm −3 the H β line is preferred. H γ and H δ appear well-suited for electron densities in the range of 0.1-1 × 10 17 cm −3 . However, laser ablation of solids reveals electron densities that are significantly higher close to the target than those encountered in gaseous breakdown plasma. Abel-inverted hydrogen and cyanide spectra indicate expansion dynamics, e.g., outgoing electron density and temperature waves. Plasma diagnosis is best accomplished with spatial and temporal resolution.
Author Contributions: C.G.P. conceived and performed the experiments with G.G. C.G.P. analyzed the result together with G.G. and C.M.H., and all authors contributed to the writing of the article.
Funding:
The authors appreciate the support in part by the Center for Laser Application, a State of Tennessee funded Accomplished Center of Excellence at the University of Tennessee Space Institute.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,339.6 | 2019-05-30T00:00:00.000 | [
"Physics"
] |
Histamine H3 Receptor Integrates Peripheral Inflammatory Signals in the Neurogenic Control of Immune Responses and Autoimmune Disease Susceptibility
Histamine H3 receptor (Hrh3/H3R) is primarily expressed by neurons in the central nervous system (CNS) where it functions as a presynaptic inhibitory autoreceptor and heteroreceptor. Previously, we identified an H3R-mediated central component in susceptibility to experimental allergic encephalomyelitis (EAE), the principal autoimmune model of multiple sclerosis (MS), related to neurogenic control of blood brain barrier permeability and peripheral T cell effector responses. Furthermore, we identified Hrh3 as a positional candidate for the EAE susceptibility locus Eae8. Here, we characterize Hrh3 polymorphisms between EAE-susceptible and resistant SJL and B10.S mice, respectively, and show that Hrh3 isoform expression in the CNS is differentially regulated by acute peripheral inflammatory stimuli in an allele-specific fashion. Next, we show that Hrh3 is not expressed in any subpopulations of the immune compartment, and that secondary lymphoid tissue is anatomically poised to be regulated by central H3R signaling. Accordingly, using transcriptome analysis, we show that, inflammatory stimuli elicit unique transcriptional profiles in the lymph nodes of H3RKO mice compared to WT mice, which is indicative of negative regulation of peripheral immune responses by central H3R signaling. These results further support a functional link between the neurogenic control of T cell responses and susceptibility to CNS autoimmune disease coincident with acute and/or chronic peripheral inflammation. Pharmacological targeting of H3R may therefore be useful in preventing the development and formation of new lesions in MS, thereby limiting disease progression.
Introduction
Multiple sclerosis (MS), a chronic inflammatory disease of the central nervous system (CNS), is the most common disabling neurologic disease of young adults and adolescents affecting ,350,000 individuals in the United States and more than 1 million individuals worldwide [1]. The etiopathogenesis of MS is largely unknown; however, it involves both genetic and environmental factors [2,3,4]. The spectrum of clinical courses in MS is diverse and includes relapsing/remitting (R/R), primary progressive, secondary progressive, and progressive relapsing MS [5]. Additional subtypes based on severity include benign [6] and malignant MS [7,8]. The pathologic lesions that best correlate with acute clinical exacerbations of disease feature foci of inflammation associated with active myelin degradation and phagocytosis and partial axonal preservation. Axonal injury and loss, however, occur to varying degrees in all lesions and in normal-appearing white and grey matter, and axon loss is a major correlate for permanent clinical deficits. Structurally, MS lesions show characteristic features which include demyelination, loss of oligodendrocytes, preferential destruction of thin caliber axons, impaired remyelination and astrocytic gliosis [9].
Research into the mechanisms underlying neuroinflammatory reactions in MS is largely driven by two hypotheses [10]. The immune-initiated hypothesis contends that autoreactive T cells generated in the periphery gain entry to the CNS where they elicit an inflammatory cascade that results in injury to previously normal neural tissues. In contrast, the neural-initiated disease hypothesis posits that events within the CNS initiate the process and that autoimmune responses are secondary. Previously, in the course of our studies examining the role of histamine and histamine receptors in experimental autoimmune encephalomyelitis (EAE) -often used to model aspects of these essentially conflicting hypotheses -we identified histamine H 3 receptor (Hrh3/H 3 R) as a gene that potentially unites these opposing theories functionally [11].
H 3 R is expressed presynaptically where it is an inhibitory autoreceptor (inhibits release of HA from histaminergic neurons) and heteroreceptor (inhibits release of other neurotransmitters such as acetylcholine, noradrenaline, dopamine, 5-HT, GABA, and glutamate from non-histaminergic neurons) [12]. Absence of presynaptic inhibition results in failure to limit neurotransmitter release, increased postsynaptic activity, and neurotransmitter spillover [13]. Our studies revealed the existence of an H 3 Rmediated central component in susceptibility to EAE related to neurogenic control of blood brain barrier (BBB) permeability and expression of cytokines and chemokines, and their receptors, by peripheral T cells. Subsequently, H 3 R was shown to similarly regulate neuroinflammation in cerebral malaria, with H 3 Rdeficiency correlating with increased BBB permeability and altered T cell phenotypes [14]. Moreover, we identified Hrh3, which under normal physiologic conditions plays a role in regulating body weight [15], as a positional candidate gene for Eae8, a quantitative trait locus (QTL) controlling EAE susceptibility and associated weight loss [16,17,18].
In this study, we provide functional characterization of a G293D polymorphism in the third intracellular domain of H 3 R associated with G i/o and beta-arrestin coupling to second messenger signaling pathways [19,20,21] that distinguishes EAEsusceptible SJL mice and EAE-resistant B10.S mice. We also demonstrate allele-specific differential expression of Hrh3 isoforms in the CNS in response to peripheral inflammatory stimuli, i.e., adjuvants used to elicit disease. Using a transcriptomics approach, we further show that the absence of H 3 R signaling in the CNS Figure 1. Murine Hrh3 demonstrates a single nucleotide polymorphism. (A) Sequence alignment of Hrh3 alleles from B10.S and SJL mice, and rat. Hrh3 cDNAs from the B10.S and SJL mice were amplified, subcloned, and sequenced, as described in the Materials and Methods. (B) The strain distribution of the SNP at position 293 in Hrh3. The presence of the SNP in the indicated strains of mice was determined using restriction typing, as described in the Materials and Methods. doi:10.1371/journal.pone.0062743.g001 significantly alters early responses to such stimuli at the level of the lymph node (LN). Taken together, our results provide additional support for Hrh3 as a gene central to a neural reflex [22,23,24] controlling peripheral immune responses and EAE susceptibility, and as a positional candidate gene for Eae8. Importantly, our findings provide a functional framework uniting the immune-and neural-initiated models of MS pathogenesis, and provide insight into the mechanisms whereby gene-by-environmental stimuli may influence the long term progression and spectrum of clinical disease courses seen in MS [25].
Results and Discussion
Characterization of Hrh3 Polymorphism Distinguishing EAE-susceptible SJL and EAE-resistant B10.S Mice Previously, using B10.S.SJL-Eae8 congenic mice, we identified Hrh3 as a positional candidate gene for Eae8, a QTL controlling EAE susceptibility and associated weight loss [16,17,18]. Given that EAE-susceptible mouse strains experience weight loss with the onset of EAE [26] and that H 3 RKO mice manifest changes in weight and energy expenditure [27], we hypothesized that an Hrh3 polymorphism distinguishing EAE-resistant B10.S and EAEsusceptible SJL mice may underlie Eae8. As a first test of this hypothesis, we undertook cDNA sequencing of the two alleles to identify coding region variants. A single missense mutation at position 878 (G to A) leading to a change from glycine to aspartic acid at residue 293 (G293D in SJL) was identified (Fig. 1). An examination of 18 different inbred strains of mice using restriction fragment length polymorphism (RFLP) analysis confirmed the existence of two alleles segregating among the various inbred strains (Fig. 1).
The G293D substitution resides within the third intracellular (IC) loop which couples H 3 R to G i/o and beta-arrestin second messenger signaling pathways [19,20,21]. The G293D substitution in H 3 R is analogous to the amino acid substitutions recently identified within the third IC domain of H 1 R underlying Bphs, a shared immunopathology disease gene controlling Bordetella pertussis-induced sensitivity to histamine, EAE, and autoimmune orchitis [28,29,30]. Importantly, a A280V missense mutation within the third IC loop of the human H 3 R gene has been reported in multiple system atrophy (Shy-Drager Syndrome), a rare neurodegenerative disease [31] and as a risk factor for migraine [32].
To assess the functionality of the G293D polymorphism, we expressed the two alleles and compared the pharmacologic properties of the H 3 R ligands R-a-methylhistamine (RAM-HA) and Imetit in a radioligand binding assay, a GTPcS-binding assay, and a ligand-induced Ca 2+ mobilization assay ( Fig. 2A-C). Overall, no significant difference in either receptor affinity or activity was detected, suggesting that the G293D polymorphism does not alter the function of the protein per se. However, these results do not exclude the possibility that an Hrh3 isoform expression polymorphism may underlie Eae8, since the assays described above are limited to utilizing full length Hrh3 cDNA expressed under a heterologous promoter.
Differential Hrh3 Isoform Expression in SJL and B10.S Mice in Response to Peripheral Inflammatory Stimuli
Multiple H 3 R isoforms have been described for humans, rats, and mice. In the human, H 3 R isoforms demonstrate differences in pharmacologic activity [33,34,35]. Many of these isoforms differ from the full length transcript by a variable-length deletion in the third IC loop. Importantly, isoform variation in this region results in differences in H 3 R functional activity. For example, an 80 293T cells co-expressing Gqi5 and H 3 R or H 3 R 293D were stimulated with different concentrations of RAM-HA and Imetit. The ligand-stimulated Ca 2+ mobilization was monitored using a FLIPR instrument (Molecular amino acid deletion at the third IC loop of human H 3 R confers increased constitutive activity [21]. In the rat, 32 and 48 amino acid deletions, which in the mouse encompasses the G293D polymorphism, result in changing H 3 R's efficiency in G protein coupling to second messenger signaling pathways [19,20]. Consequently, these deletions result in increased constitutive activity, similar to the 80 amino acid deletion in the human H 3 R. In our previous EAE study, H 3 RKO mice immunized for the induction of EAE exhibited increased BBB permeability on D4 post-immunization, significantly earlier than the appearance of inflammatory cells in the CNS [11]. This finding supports a role for events predicted by the neural-initiated disease hypothesis, which posits that events within the CNS initiate the disease process and influence subsequent autoimmune responses. We reasoned therefore that differential expression of H 3 R isoforms in response to inflammatory stimuli i.e., adjuvants/pain/danger signals etc., may be important in neurogenic control of disease susceptibility in SJL and B10.S mice. In this regard, both pertussis toxin (PTX) [36] and complete Freund's adjuvant (CFA) [37] lead to increased BBB permeability and systemic exposure to lipopolysaccharide (LPS) directly disrupts endothelial cell barrier functions [38] including BBB transport of amyloid proteins [39,40], which have recently been shown to suppress EAE severity [41]. Moreover, advances in neuroscience and immunology have established the anatomical and cellular basis for bidirectional communication between the nervous system and immune systems [22]. To explore the possibility that inflammatory stimuli can impact Hrh3 isoform expression, Hrh3 isoform expression was studied by RT-PCR using forebrain tissue from untreated SJL and B10.S mice, or at D1 and D10 after immunization with proteolipid protein peptide 139-151 (PLP 139-151 )+ complete Freund's adjuvant (CFA)+pertussis toxin (PTX), or with each of the respective components of the adjuvants used to induce disease, CFA, PTX, or CFA+PTX. Isoform-specific RT-PCR primers were designed based on the 3 published rat isoform sequences; thus detecting murine orthologs of the rat Hrh3a (full length transcript), Hrh3b (missing 32 amino acids from the third IC loop), and Hrh3c (missing 48 amino acids from the third IC loop) isoforms [19,20].
Hrh3b in whole forebrain was consistently below the level of detection in both B10.S and SJL mice irrespective of treatment; in contrast, Hrh3a and Hrh3c transcripts were readily quantifiable. For both detectable isoforms, there was a response to treatment, but no difference between the four inflammatory stimuli, and no strain-by-treatment interaction, indicating that strain is the major source of variation underlying differential expression of the two isoforms (Fig. 3). Consequently, the treatment groups were pooled by strain and reanalyzed (Fig. 4).
Basal expression of Hrh3a was higher in SJL forebrain compared to B10.S, whereas Hrh3c was lower (Fig. 4). Treatment resulted in an increase in Hrh3a expression, and a decrease in Hrh3c expression in both strains. Expression of Hrh3a remained elevated between D1 and D10 in both strains; however, it remained higher in SJL mice compared to B10.S (Fig. 4A). In contrast, Hrh3c expression increased from D1 to D10 in B10.S mice, whereas it remained low in SJL mice (Fig. 4B). Overall, these results support the existence of both a basal and a treatment-specific Hrh3 isoform expression polymorphism.
Our previous findings with H 3 RKO mice suggested that signaling through H 3 R may be protective in EAE. Although EAE-susceptible SJL mice tend to express higher levels of the full length Hrh3a isoform, they express lower levels of the short Hrh3c isoform relative to B10.S mice. Given the finding that the rat ortholog of Hrh3c is constitutively active [19,20], and assuming that the mouse and rat orthologs are functionally homologous, we suggest that constitutive signaling through Hrh3c is protective in EAE. Consistent with this, expression of the Hrh3c isoform was not upregulated in SJL mice by D10, whereas it was upregulated in B10.S mice (Fig. 4B). Taken together, our findings suggest that differential Hrh3 isoform expression in response to peripheral inflammatory stimuli regulates neurogenic control of EAE in SJL and B10.S mice, and is a potential functional candidate polymorphism underlying Eae8. We also acknowledge the possibility that other candidate genes in addition to Hrh3 may reside within the Eae8 locus, particularly given the somewhat modest difference in Hrh3 isoform expression between SJL and B10.S mice (Fig. 4). However, since the Hrh3 alternative splicing and expression is regulated very rapidly after inflammatory insult (Fig. 4), we believe that modest changes in expression early can significantly alter the course of the subsequent immune response, and can lead more profound modulation of disease course later. For example, 2 fold overexpression of Tlr7 due to a gene translocation event is sufficient to significantly accelerate the course of autoimmune lupus [42,43].
The candidacy of Hrh3 for Eae8 is further supported by comparing the functional roles of H 3 R and their relationship to the clinical presentations of MS and EAE. H 3 R is expressed presynaptically where it is an inhibitory autoreceptor and heteroreceptor [44]. Consequently, the neurophysiologic roles of H 3 R are complex and impact a variety of phenotypes including weight, metabolism, cognition/memory, arousal, sensory-motor activity, thermoregulation, and inflammatory and non-inflammatory pain [44]. Many, if not all of these are dysregulated in MS [45,46,47,48]. For example, ,50% of MS patients experience one or more types of pain simultaneously, occurring at any point during the disease course [45]. In SJL/J mice with either EAE or Theiler's murine encephalomyelitis virus (TMEV) induced demyelination, a viral model of MS, dysregulated pain sensation, including allodynia and hyperalgesia, is observed [49,50,51]. In EAE, these effects involve dysregulation of the glutamatergic system [50] in which H 3 R, as a presynaptic heteroreceptor, negatively modulates glutamate release [44]. Similarly, cognitive impairment which is common in MS [46,52] is also seen in EAE in association with dysregulated glutamatergic and GABAergic transmission [53].
Hrh3 Expression in Secondary Lymphoid Tissues and by Hematopoietic Cells
In our earlier EAE study [11], and in a study on cerebral malaria in mice [14], it was proposed that H 3 R plays a role in neurogenic control of T cell effector responses. As such, Hrh3 would serve as a key gene in the elaborate interactions between the brain and the immune systems [54] comprising a neural inflammatory reflex [22,23,24]. To exclude the possibility that H 3 R is influencing immune responses directly through its expression in either secondary lymphoid tissues and/or hematopoietic cells including T cells, we examined its expression in the spleen and LN of naive animals, and by macrophages, mast cells, neutrophils, bone marrow-derived dendritic cells, B cells (B220+), effector CD8+ (TCRb+CD8+CD42), naive CD4+ (TCRb+ CD4+CD82CD25 low CD45R high CD44 low CD1d-tetramer-), memory CD4+ (TCRb+CD4+CD25 low CD45RB low CD44 high CD1d-tetramer-), NKT (CD1d tetramer+), and Treg cells (TCRb+CD4+Foxp3+) by RT-PCR. mRNA for any of the known Hrh3 isoforms was undetectable in both whole spleen and LN, as well as in all cells of Device) (see Materials and Methods). 293T cells expressing Gqi5 alone were used as controls (Cntl). doi:10.1371/journal.pone.0062743.g002 the innate and adaptive immune systems studied. These data are also predicted by the neural-initiated disease hypothesis and identify H 3 R as a key CNS intermediate in a neural inflammatory reflex influencing T cell effector responses and susceptibility to EAE, presumably through innervation of secondary lymphoid tissues.
To confirm the innervation of mouse spleen and LN, we employed immunohistochemistry using neuronal-specific enolase (NSE), a pan-neuronal marker [55]. In the spleen, dense innervation was observed, particularly around the vasculature, as previously reported [56] (Fig. 5A and B). However, in contrast to what has been reported for the LNs of other species, where nerve fibers have been shown to branch into the parenchyma in paracortical and cortical regions [57], only sparse innervation was detected in the mouse LN. This was primarily located in the capsular and subcapsular sinus (SCS) (Fig. 5C and D) and absent from the parenchyma. This pattern of innervation was similar regardless of the age of the mouse or LN location (data not shown). Although in our hands the mouse LN lacked innervation in traditional T and B cell zones, the finding of innervation in the capsular and SCS is nevertheless consistent with the finding that particulate antigens and pathogens arriving via the lymphatics are retained in the SCS of mouse LN [58,59,60,61,62], and that naive T cells can relocalize to the SCS in response to infection [63]. Moreover, SCS macrophages can present antigen to B cells [59,61,62], in line with their ability to retain antigen on their surface, rather than internalize and degrade it [64,65]. Additionally, SCS macrophages have been shown to be specialized APCs, which in conjunction with non-cognate B cells deliver opsonized antigens to germinal centers thereby promoting affinity maturation [66].
Taken together, our results support a role for innervation of secondary lymphoid tissues in H 3 R-mediated neurogenic control
H 3 R Signaling Regulates Gene Expression in LN
Given the findings that H 3 R negatively regulates the development of EAE, a T cell-mediated autoimmune disease, that antigen-specific CD4 + T cells from H 3 RKO animals demonstrate a unique effector profile [11,14], and that the microenvironments of secondary lymphoid tissues are anatomically poised to be subjected to H 3 R mediated neurogenic control, we evaluated the possibility that central H 3 R signaling modulates gene expression in the LN under basal conditions and in early responses to peripheral stimuli. To this end, we utilized microarrays to examine baseline gene expression in the LN of untreated WT and H 3 RKO mice. Because changes in Hrh3 isoform expression occur within 24 hours of exposure to peripheral inflammatory stimuli (Fig. 4), we also examined gene expression at 24 hours following the administration of the two adjuvants used to induce EAE, CFA alone or in combination with PTX.
Relatively few differences in gene expression between WT and H 3 RKO mice were observed in the LN at baseline (9 genes, Table 1). However, most of these genes were of immunological relevance, as identified by IngenuityH Pathway Analysis (IPA). Several immunoglobulin (Ig) genes were downregulated in H 3 RKO LN, whereas S100 calcium-binding proteins A8 and A9 (S100a8 and S100a9) were upregulated. The latter play an important role in inflammatory processes, and are upregulated during MS and EAE [70].
CFA treatment resulted in 13 differentially expressed genes between WT and H 3 RKO LN ( Table 2), many of which were also associated with inflammatory responses. Similar to baseline, several Ig genes remained lower in H 3 RKO LN after CFA treatment.
The greatest differences in gene expression between WT and H 3 RKO LN were observed after treatment with CFA+PTX, the EAE immunization protocol that elicits increased disease severity in H 3 RKO mice compared to WT. There were 29 genes differentially expressed between the strains ( Table 3). Consistent with increased EAE severity, the results of IPA revealed that many of the differentially expressed genes were involved in inflammation, with a Z-score clearly indicative of an exaggerated inflammatory/immune response in the LNs of H 3 RKO animals following exposure to CFA+PTX (Tables 4 and 5). Most genes associated with inflammatory functions were overexpressed in H 3 RKO animals, including Ig genes, which were expressed at lower levels at baseline, and S100a8 and S100a9, which were expressed at higher levels at baseline. We selected three proinflammatory genes, S100a8, pro-platelet basic protein/chemokine (C-X-C motif) ligand 7 (Ppbp), and myeloperoxidase (Mpo), all of which were expressed at higher levels in H 3 RKO LN after CFA+PTX treatment (Ppbp was also overexpressed in H 3 RKO LN after CFA treatment), for validation by qRT-PCR in a series of independent experiments. We confirmed that S100a8 and Mpo were expressed at significantly higher levels in H 3 RKO LN, while expression of Ppbp showed an increase that did not reach statistical significance (Fig. 6A). Furthermore, we also confirmed downregulation of Mup1, a gene that was under-expressed in H 3 RKO LN (Fig. 6B). Interestingly, S100a8, Mpo, and Ppbp are differentially expressed in the liver of CBA and BALB/c mice in response to Schistosoma infection, with higher expression correlating with more severe pathological outcome [71]. Taken together, given the exaggerated expression of inflammation-related genes in H 3 RKO LN, these results are consistent with negative regulation of peripheral immune responses by central H 3 R signaling.
In summary, our studies suggest that an Hrh3 isoform expression polymorphism regulates neurogenic control of EAE and T cell effector responses in mice, and is a potential functional candidate polymorphism underlying Eae8. Moreover, our data, predicted by the neural-initiated disease hypothesis, identify H 3 R as a key intermediate in a neural immune reflex [22,23,24] integrating peripheral inflammatory signals in the neurogenic control of disease susceptibility and T cell effector responses. Importantly, our findings which functionally unite the opposing neural-initiated and immune-initiated theories underlying neuroinflammatory reactions in MS provide potential insight into the mechanisms whereby gene-by-environmental stimuli may determine the long term progression and spectrum of clinical disease courses seen in MS [25] associated with subtle changes in BBB integrity preceding inflammatory lesions [72,73]. Pharmacologic targeting of H 3 R may therefore be useful in preventing the development and formation of new lesions in MS, thereby significantly limiting the progress of the disease.
Materials and Methods
Animals C57BL/6J, B10.S/SgMcdJ and SJL/J mice were purchased from the Jackson Laboratory (Bar Harbor, ME). B6.129P2-Hrh3 tm1Twl mice [74] (H3RKO), originally held at Johnson and Johnson Pharmaceutical Research and Development (San Diego, CA) were maintained at the University of Vermont (Burlington, VT). B10.S.eae8 SJL mice were generated by marker assisted selection using informative microsatellite markers spanning the eae8 interval [11]. Animals were backcrossed for ten generations at which point they were intercrossed and subsequently fixed as a homozygous interval-specific congenic line. All animals were maintained under specific pathogen free conditions on a 12:12 light dark cycle and were fed Purina mouse pellets (Ralston-Purina, St. Louis, MO) and water ad libitum. The experimental procedures performed in this study were approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Vermont; IACUC protocol number 08-034, approved November 9, 2007. Guinea pig samples were obtained in a previously published study [75].
Identification of Polymorphisms in Hrh3
Total RNA from SJL/J and B10.S/SgMcdJ was isolated from adult spleen. cDNAs were PCR amplified using Taq polymerase and specific primer pairs flanking the mRNA coding region of Hrh3. The amplified fragments were cloned and at least three clones for each PCR fragment were sequenced from both insert termini. A single nucleotide polymorphism at position 878 (GRA) leading to a single amino acid change from glycine to asparagine at residue 293 (G293D) in the predicted sequence of H3R was identified between B10.S/SgMcdJ and SJL/J allele. Multiple sequence alignment was done using the MultAlign website [63]. Rat Hrh3 sequence was obtained from NCBI, accession number NP_445958.
Functional Characterization of Hrh3 Alleles
Radioligand competition binding assays were performed essentially as described [76]. Briefly, Hrh1 and Hrh1 G293D expression plasmids were transfected into COS-7 cells using Lipofectamine (Invitrogen, Carlsbad, CA). Membranes were prepared in binding buffer (50 mM Tris-HCl, pH 7.5, 5 mM EDTA). 3 H-R-a-methyl-histamine (RAM-HA) was used as the tracer at a final concentration of 1 nM. Unlabeled RAM-HA and Imetit at various concentrations were added as the competitors. The binding assays were carried out at room temperature for 1 hour. The binding mix was then filtered through 96-well GFC (Packard Instrument, Meriden, CT) filter plates and the plates were washed with ice cold binding buffer and dried in a 50 o C oven. After adding 50 ml Microscint-0 (Packard Instrument, Meriden, CT) to each well of the 96-well filters, the filters were counted in TopCount/NTX (Packard Instrument, Meriden, CT) to measure the bound 3 H-RAM-HA.
GTPcS binding assays were essentially performed as described [77]. The mouse Hrh3 and Hrh3 G293D expression plasmids were transiently transfected into CHO cells using Lipofectamine. Two days after transfection, the membranes were prepared from the transfected cells and GTPcS binding was performed using either RAM or Imetit as the stimulator.
Ca 2+ mobilization assays were carried out using the Hrh3 and Hrh3 G293D variants co-transfected into 293T cells with Gq i5 , a chimeric G-protein that shifts cAMP inhibition to Ca 2+ mobilization, respectively. 293T cells transfected with Gq i5 alone were used as the control. Two days after transfection, the transfected cells were seeded into 96-well black poly-D-lysine coated tissue culture plates and loaded with Fluo-3. Ligand-stimulated Ca 2+ mobilization was monitored using FLIPR (Molecular Devices, Sunnyvale, CA). For tissue collection, mice were anesthetized using Ketaset (Fort Dodge, IA) and perfused with 20 ml of phosphate buffer saline (PBS). Brain samples were snap frozen in liquid nitrogen and stored at 280uC until further processed for RNA isolation. Total RNA was extracted using RNeasy kit followed by a DNase treatment (Qiagen, Valencia, CA) according to the manufacturer's guidelines. The reverse transcription of RNA was performed using the Superscript III RT kit (Invitrogen, Carlsbad, CA).
Quantification of Hrh3 Isoform Expression
Probes for the three Hrh3 isoforms were designed using the Primer Extension software (Applied Biosystems, Foster City, CA). All real-time PCR reactions were performed using an ABI prism 7900 HT sequence Detection system with the sequence detection software system SDS 2.2 in accordance with the manufacturer's instructions, using Taqman chemistry. Standard curve assay, using serially diluted Hrh3 isoform-specific plasmid clones as standards, was used to determine copy number of Hrh3 isoforms in the samples. The copy number was then normalized to the internal control gene mHPRT. qPCR amplification efficiencies for Hrh3a, Hrh3b, Hrh3c isoform-specific primer sets were 92.5, 96.2, and 108.1%, respectively, with r 2 values = 0.99.
Hrh3 Expression by Cells of the Innate and Adaptive Immune Systems
Alveolar macrophages were collected by lavaging the lungs through a tracheal cannula with 1 ml DPBS from which cells were collected, counted by hemocytometer, and differential analysis was performed by cytospin and H&E stain. Nearly 100% of the cells were identified as alveolar macrophages in these preparations.
For the generation of bone marrow-derived dendritic cells (BMDC), bone marrow was flushed from the femurs and tibiae and cultured on 24-well plates at 1610 6 cells/well (1 ml/well) in RPMI-1640 containing 10% serum and 10% conditioned media from X63-GMCSF myeloma cells transfected with murine GM-CSF cDNA (kindly provided by Dr. Brent Berwin, Dartmouth College). Media was replaced on days 2 and 4 and the adherent and lightly-adherent BMDCs, predominantly CD11b + CD11c + by FACS, were collected on day 6.
For the preparation of neutrophils, the marrow was flushed from femurs and tibias with HBSS, layered atop a three-step Percoll gradient (72, 64, and 52%), and centrifuged at 1,0606g for 30 minutes. Samples of the 72:64% interface revealed greater than 95% morphologically mature-appearing neutrophils.
Microarray Analysis
Microarray analysis was conducted on female and male C57BL/6 WT or H 3 RKO mice at 8 weeks of age. Mice in the treatment groups were injected with either with CFA or CFA+PTX and euthanized at 24h. LN were removed from C57Bl/6 WT and H 3 RKO mice at 8 weeks, snap frozen in liquid nitrogen. Isolation and purification of RNA was completed using the RNeasy RNA extraction kit (Qiagen).
RNA amplification and microarray analysis was performed at UVM microarray core facility using manufacturer's described protocols [78]. Briefly, 2 ug of total RNA from each sample were reverse transcribed to the single stranded cDNA using T7oligo(dT) primer. T4 DNA polymerase was used to synthesize double-stranded cDNA, which served as a template for in vitro transcription using T7 RNA polymerase to produce biotinylated cRNA. The biotinylated cRNAs were fragmented into 50-to 200base fragments and then hybridized to GeneChip Mouse Genome 430A 2.0 Arrays for 16 h at 45uC in a rotating Affymetrix GeneChip Hybridization Oven 320. After hybridization, arrays were washed and stained with streptavidin-phycoerythrin on an automated Affymetrix GeneChip Fluidic Station F450. The arrays were scanned with an Affymetrix GeneChip Scanner 2700 and the images quantified using Affymetrix GeneChip Operating Software.
The signal intensity for each probe on each chip was calculated from scanned images using GeneChip Operating Software (Affymetrix), and signal intensities were analyzed using BioConductor (http://www.bioconductor.org). Probe intensities were background corrected, normalized, and summarized using the Robust Multichip Average algorithm described by Speed and coworkers [79,80], including background-correction, normalization, and summarization for each probe set and sample, using Partek Genomic SuitesH version 6.6 (Copyright ß 2009, Partek Inc., St. Louis, MO, USA). Microarray datasets were uploaded to the Gene Expression Omnibus repository, accession number GSE44873.
Sample quality was assessed based on the 39:59 ratio, relative log expression, and normalized unscaled standard error. Principal Component Analysis was used to screen for outlier samples that could potentially introduce latent variation into the analysis of differential expression across sample groups (none were detected).
To identify differentially expressed genes, linear modeling of sample groups was performed using ANOVA within Partek Genomic Suites. The magnitude of the response (fold change calculated using the least square mean) and the p-value associated with each probe set and binary comparison was calculated, as well as step-up, adjusted p-value for the purpose of controlling the false discovery rate [81,82]. Genes were considered to be differentially expressed when the signed fold change was greater than 2 and P,0.05.
Immunohistochemistry
After removing pancreatic, axillary, mesenteric, renal, cervical, and brachial LN as well as spleens, tissue was immediately fixed in 4% formaldehyde, 0.4% picric acid in 16PBS o/n at 4uC. Tissue was then rinsed 3x for 15 m in PBS and cryoprotected o/n at 4uC in 30% sucrose in 16PBS. Tissue was stored at 280uC in OCT prior to cryosectioning. Floating sections of colonic tissue from guinea pig gut (obtained in a previous study [75]), were used as positive tissue controls. Tissue was cryosectioned and collected slides and stored at 280uC prior to staining. Tissue was stained with rabbit anti-neuron specific enolase antiserum (Polysciences) diluted at 1:10,000 followed Cy3-conjugated goat anti-rabbit antibody (Jackson Immunoresearch) at 2.5 mg/ml.
Slides were analyzed with an Olympus AX70 fluorescence photomicroscope. Filter sets for Cy3 were 510 nm-550 nm excitation and 590nm emission. Images were captured with an Optronics Magnafire CCD camera, attached to the Olympus AX70 microscope. Images were cropped in Microsoft PowerPoint with minimal alteration (minor adjustments to brightness and contrast).
qRT-PCR Validation of Differentially Expressed Genes
WT and H 3 RKO mice were immunized subcutaneously with CFA, followed by i.v. injection of PTX. 24 hours later, draining LN were removed and snap frozen in liquid nitrogen. RNA was extracted using the RNEasy kit (Invitrogen) according to manufacturer's instructions. cDNA was reverse transcribed using the Taqman Gold RT-PCR kit. qRT-PCR was performed using the DyNAmo Colorflash SYBR green qPCR kit (Thermofisher) and previously described primer sets [71,83]. Ywhaz and Actb were used as reference genes and relative mRNA levels were calculated using the comparative C T method, normalizing to the expression level in WT LN. | 7,331.8 | 2013-07-22T00:00:00.000 | [
"Biology"
] |
Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor
Subspace learning based pattern recognition methods have attracted considerable interests in recent years, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and some extensions for 2D analysis. However, a disadvantage of all these approaches is that they perform subspace analysis directly on the reshaped vector or matrix of pixel-level intensity, which is usually unstable under appearance variance. In this chapter, we propose to represent an image as a local descriptor tensor, which is a combination of the descriptor of local regions (K*K-pixel patch) in the image, and is more efficient than the popular Bag-Of-Feature (BOF) model for local descriptor combination. As we know that the idea of BOF is to quantize local invariant descriptors, e.g., obtained using some interest-point detector techniques by Harris & Stephens (1998), and a description with SIFT by Lowe (2004) into a set of visual words by Lazebnik et al. (2006). The frequency vector of the visual words then represents the image, and an inverted file system is used for efficient comparison of such BOFs. However. the BOF model approximately represents each local descriptor feature as a predefined visual word, and vectorizes the local descriptors of an image into a orderless histogram, which may lose some important (discriminant) information of local features and spatial information hold in the local regions of the image. Therefore, this paper proposes to combine the local features of an image as a descriptor tensor. Because the local descriptor tensor retains all information of local features, it will be more efficient for image representation than the BOF model and then can use a moderate amount of local regions to extract the descriptor for image representation, which will be more effective in computational time than the BOF model. For feature representation of image regions, SIFT proposed by Lowe (2004) is improved to be a powerful local descriptor by Lazebnik et al. (2006) for object or scene recognition, which is somewhat invariant to small illumination change. However, in some benchmark database such as YALE and PIE face data sets by Belhumeur et al. (1997), the illumination variance is very large. Then, in order to extract robust features invariant to large illumination, we explore an improved gradient (intensity-normalized gradient) of the image and use histogram of orientation weighed with the improved gradient for local region representation.
Introduction
Subspace learning based pattern recognition methods have attracted considerable interests in recent years, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and some extensions for 2D analysis.However, a disadvantage of all these approaches is that they perform subspace analysis directly on the reshaped vector or matrix of pixel-level intensity, which is usually unstable under appearance variance.In this chapter, we propose to represent an image as a local descriptor tensor, which is a combination of the descriptor of local regions (K*K-pixel patch) in the image, and is more efficient than the popular Bag-Of-Feature (BOF) model for local descriptor combination.As we know that the idea of BOF is to quantize local invariant descriptors, e.g., obtained using some interest-point detector techniques by Harris & Stephens (1998), and a description with SIFT by Lowe (2004) into a set of visual words by Lazebnik et al. (2006).The frequency vector of the visual words then represents the image, and an inverted file system is used for efficient comparison of such BOFs.However. the BOF model approximately represents each local descriptor feature as a predefined visual word, and vectorizes the local descriptors of an image into a orderless histogram, which may lose some important (discriminant) information of local features and spatial information hold in the local regions of the image.Therefore, this paper proposes to combine the local features of an image as a descriptor tensor.Because the local descriptor tensor retains all information of local features, it will be more efficient for image representation than the BOF model and then can use a moderate amount of local regions to extract the descriptor for image representation, which will be more effective in computational time than the BOF model.For feature representation of image regions, SIFT proposed by Lowe (2004) is improved to be a powerful local descriptor by Lazebnik et al. (2006) for object or scene recognition, which is somewhat invariant to small illumination change.However, in some benchmark database such as YALE and PIE face data sets by Belhumeur et al. (1997), the illumination variance is very large.Then, in order to extract robust features invariant to large illumination, we explore an improved gradient (intensity-normalized gradient) of the image and use histogram of orientation weighed with the improved gradient for local region representation.
With the local descriptor tensor of image representation, we propose to use a tensor subspace analysis algorithm, which is called as multilinear Supervised Neighborhood Preserving Embedding (MSNPE), for discriminant feature extraction, and then use it for object or scene recognition.As we know, subspace learning approaches, such as PCA and LDA by Belhumeur et al. (1997), have widely used in computer vision research filed for feature extraction or selection and have been proven to be efficient for modeling or classification.
Xian-Hua Han and Yen-Wei Chen
Recently there are considerable interests in geometrically motivated approaches to visual analysis.Therein, the most popular ones include locality preserving projection by He et al. (2005), neighborhood preserving embedding, and so on, which cannot only preserve the local structure between samples but also obtain acceptable recognition rates for face recognition.In real applications, all these subspace learning methods need to firstly reshape the multilinear data into a 1D vector for analysis, which usually suffers an overfitting problem.Therefore, some researchers proposed to solve the curse-of-dimension problem with 2D subspace learning such as 2-D PCA and 2-D LDA by ming Wang et al. (2009) for analyzing directly on a 2D image matrix, which was proven to be suitable in some extend.However, all of the conventional methods usually perform subspace analysis directly on the reshaped vector or matrix of pixel-level intensity, which would be unstable under illumination and background variance.In this paper, we propose MSNPE for discriminant feature extraction on the local descriptor tensor.Unlike tensor discriminant analysis by Wang (2006), which equally deals with the samples in the same category, the proposed MSNPE uses neighbor similarity in the same category as a weight of minimizing the cost function for N th order tensor analysis, which is able to estimate geometrical and topological properties of the sub-manifold tensor from random points ("scattered data") lying on this unknown sub-manifold.In addition, compared with TensorFaces by Casilescu & D.Terzopoulos (2002) method, which also directly analyzes multi-dimensional data, the proposed multilinear supervised neighborhood preserving embedding uses supervised strategy and thus can extract more discriminant features for distinguishing different objects and, at the same time, can preserve samples' relationship of inner object instead of only dimension reduction in TensorFaces.We validate our proposed algorithm on different benchmark databases such as view-based object data sets and Facial image data sets (YALE and CMU PIE) by Belhumeur et al. (1997) and Sim et al. (2001).
Related work
In this section, we firstly briefly introduce the tensor algebra and then review subspace-based feature extraction approaches such as PCA, LPP.
Tensors are arrays of numbers which transform in certain ways under coordinate transformations.
The order of a tensor X∈R N 1 ×N 2 ו••×N M , represented by a multi-dimensional array of real numbers, is M.A ne l e m e n t o fX is denoted as X i 1 ,i 2 ,••• ,i M , where 1 ≤ i j ≤ N j and 1 ≤ j ≤ M. In the tensor terminology, the mode-j vectors of the nth-order tensor X are the vectors in R N j obtained from X by varying the index i j while keeping the other indices fixed.For example, the column vectors in a matrix are the mode-1 vectors and the row vectors in a matrix are the mode-2 vectors.
for all index values.X ×d U means the mode d's product of the tensor X with the matrix U.
The mode product is a special case of a contraction, which is defined for any two tensors not just for a tensor and a matrix.In this paper, we follow the definitions in Lathauwer (1997) and avoid the use of the term "contraction".
In tensor analysis, Principal Component Analysis (PCA) is used to extract the basis for each mode.The proposed MSNPE approach is based on the basis idea of Locality Preserving Projection (LPP).Therefore, we simply introduce PCA, LPP and a 2D extension of LPP as the following.
(1) Principal component analysis extracts the principal eigen-space associated with a set (matrix denotes the samples feature in transformed subspace.Then, the linear transformation P can be obtained by solving the following minimization problem with some constraints, which will be given later: where W ij evaluate the local structure of the image space.It can be simply defined as follows: By simple algebra formulation, the objective function can be reduced to: where each column P i of the LPP linear transformation matrix P can not be zero vector, and a constraint is imposed as follows: where I in constraint term P T XDX T P = I or Y T DY = I is an identity matrix.D is a diagonal matrix; its entries are column (or row, since W is symmetric) sums of W, . Matrix D provides a natural measure on the data samples.
The bigger the value D ii (corresponding to y i ) is, the more importance is y i .The constraint for the sample y i in Y T DY = I is D ii * y T i y i = 1, which means that the more importance (D ii is larger) the sample y i is, the smaller the value of y T i y i is.Therefor, the constraint Y T DY = I will try to make the important point (has density distribution around the important point) near the origin of the projected subspace.Then, the density region near the origin of the 93 Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor www.intechopen.comprojected subspace includes most of the samples, which can make the objecrive function in Eq. ( 2) as small as possible, and at same time, can avoid the trivial solution ||P i || 2 = 0forthe transformation matrix P.
Then, The linear transformation P can be obtained by minimizing the objective function under constraint P T XDX T P = I: Finally, the minimization problem can be converted to solve a generalized eigenvalue problem as follows: In Face recognition application, He et al [8] extended LPP method into 2D dimension analysis, named as Tensor Subspace Analysis (TSA).TSA can directly deal with 2D gray images, and achieved better recognition results than the conventional 1D subspace learning methods such as PCA, LDA and LPP.However, for object recognition, color information also plays an important role for distinguishing different objects.Then, in this paper, we extend LPP to ND tensor analysis, which can directly deal with not only 3D Data but also ND data structure.At the same time, in order to obtain stable transformation tensor basis, we regularize a term in the proposed MSNPE objective function for abject recognition, which is introduced in Sec. 3 in detail.
Local descriptor tensor for image representation
In computer vision, local descriptors (i.e., features computed over limited spatial support) have been proven to be well-adapted for matching and recognition tasks as they are robust to partial visibility and clutter.The current popular one for a local descriptor is the SIFT feature, which is proposed by Lowe (2004).With the local SIFT descriptor, usually there are two types of algorithms for object recognition.One is to match the local points with SIFT features in two images, and the other one is to use the popular BOF model, which forms a frequency histogram of a predefined visual-words for all sampled region features by Belhumeur et al. (1997).For a matching algorithm, it is usually not enough to recognize the unknown image even if there are several points that are well matched.The popular BOF model usually can achieve good recognition performance in most applications such as scene and object recognition.However, in BOF model, in order to achieve an acceptable recognition rate, it is necessary to sample a lot of points for extracting SIFT features (usually more than 1000 in an image) and to compare the extracted local SIFT feature with the predefined visual words (usually more than 1000) to obtain the visual-word occurrence histogram.Therefore, BOF model needs a lot of computing time to extract visual-words occurrence histogram.In addition, BOF model just approximately represents each local region feature as a predefined visual-word; then, it may lose a lot of information and will be not efficient for image representation.Therefore, in this paper, we propose to represent a color or gray image as a combined local descriptor tensor, which can use different features (such as SIFT or other descriptors) for local region representation.
In order to extract the local descriptor tensor for image representation, we firstly grid-segment an image into K regions with some overlapping, and in each region, we extract some descriptors (can be consider tensor) for local region representation.For a gray image, a M-dimensional feature vector, which can be considered as a 1D tensor, is extracted from
94
Principal Component Analysis www.intechopen.comthe local gray region.For a color image, a M-dimensional feature vector can be extracted from each color channel such as R, G and B color channels.With the feature vectors of the three color channels, a combined 2D M × 3 tensor can represent the local color region.Furthermore we combine the K 1D or 2D local tensor (M-dimensional vector or M × 32 D tensor ) into a 2D or 3D tensor with of size M × K × L (L: 1 or 3).The tensor feature extraction procedure of a color image is shown in Fig. 1(a).For feature representation of the local regions such as the red, orange and green rectangles in Fig. 1 (a), the popular SIFT proposed by Lowe (2004) is proved to be a powerful one for object recognition, which is somewhat invariant to small illumination change.However, in some benchmark database such as YALE and CMU PIE face datasets, the illumination variance is very large.Then, in order to extract robust feature invariant to large illumination, we explore an normalized gradient (intensity-normalized gradient) of the image, and use Histogram of Orientation weighed with Normalized Gradient (NHOG) for local region representation.Therefore, for the benchmark databases without large illumination variance such as COIL-100 dataset or where the illumination information is also useful for recognition such as scene dataset, we use the popular SIFT for local region representation.However, for the benchmark database with large illumination variation, which will be harmful for subject recognition such as YALE and CMU PIE facial datasets, we use Histogram of Orientation weighed with Normalized Gradient (NHOG) for local region representation.
(1) SIFT: The SIFT descriptor computes a gradient orientation histogram within the support region.For each of 8 orientation planes, the gradient image is sampled over a 4 by 4 grid of locations, thus resulting in a 128-dimensional feature vector for each region.A Gaussian window function is used to assign a weight to the magnitude of each sample point.This makes the descriptor less sensitive to small changes in the position of the support region and puts more emphasis on the gradients that are near the center of the region.To obtain robustness to illumination changes, the descriptors are made invariant to illumination transformations of the form aI(x)+b by scaling the norm of each descriptor to unity [8].For representing the local region of a color image, we extract SIFT feature in each color component (R, G and B color components), and then can achieve a 128 * 3 2D tensor for each local region.
(2) Histogram of Orientation weighed with the Normalized Gradient (NHOG): Given an image I, we calculate the improved gradient (Intensity-normalized gradient) using the following Eq.: where I x (i, j) and I y (i, j) mean the horizontal and vertical gradient in pixel position i, j, respectively, I xy (i, j) means the global gradient in pixel position i, j.The idea of the normalized gradient is from χ 2 distance: a normalized Euclidean distance.For x-direction, the gradient is normalized by summation of the upper one and the bottom one pixel centered by the focused pixel; for y-direction, the gradient is normalized by that of the right and left one.
Multilinear supervised neighborhood preserving embedding
In order to model N-Dimensional data without rasterization, tensor representation is proposed and analyzed for feature extraction or modeling.In this section, we propose a multilinear supervised neighborhood preserving embedding by Han et al. (2011) ) be the i th object in the c th class.For color object image tensor, L is 3, N 1 is the row number, N 2 is the column number, and N 3 is the color space components (N 3 =3).We can build a nearest neighbor graph G to model the local geometrical structure and label information of X .L e tW be the weight matrix of G.A possible definition of W is as follows: where X i −X j 2 means Euclidean distance of two tensor, which is the summation square root of all corresponding elements between X i and X j ,and • means l 2 norm in our paper.
Let U d be the d-mode transformation matrices (Dimension: A reasonable transformation respecting the graph structure can be obtained by solving the following objective functions: min • Solve the minimizing problem: min with eigenspace analysis end for end for output: the MSNPE tensor Table 1.The flowchart of multilinear supervised neighborhood preserving embedding (MSNPE).
98
Principal Component Analysis www.intechopen.comwhere X i is the tensor representation of the i th sample; X i×1 U 1 means the mode 1's product of the tensor X i with the matrix U 1 ,andX i×1 U 1×2 U 2 means the mode 2's product of the tensor X i×1 U 1 with the matrix U 2 , and so on.The above objective function incurs a heavy penalty if neighboring points of same class X i and X j are mapped far apart.Therefore, minimizing it is an attempt to ensure that if X i and X j are where In optimization procedure of each mode, we also impose a constraint to achieve the transformation matrix (such as U d in mode d) as the following: For the optimization problem of all modes, we adopt an alternative least square (ALS) approach.In ALS, we can obtain the optimal base vectors on one mode by fixing the base vectors on the other modes and cycle for the remaining variables.The d-mode transformation matrix U d can be achieved by minimizing the following cost function:
99
Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor www.intechopen.com In order to achieve the stable solution, we firstly regularize the symmetric matrix D d as D d = D d + αI (α is a small value, I is an identity matrix of same size with the matrix D d ).Then, the minimization problem for obtaining d-mode matrix can be converted to solve a generalized eigenvalue problem as follows: We can select the corresponding generalized eigenvectors with the first N ′ d smaller eigenvalues in Eq.( 14), which can minimize the objective function in Eq.( 13).However, the eigenvectors with the smallest eigenvalues are usually unstable.Therefore, we convert Eq. ( 14) into: The corresponding generalized eigenvectors with the first N ′ d smaller eigenvalues λ in Eq. ( 14) means those with the first N ′ d larger eigenvalues β(1 − λ) in Eq. ( 15).Therefore, the corresponding generalized eigenvectors with the first N ′ d larger eigenvalues can be selected for minimizing the objective function in Eq.( 13).The details algorithm of MSNPE are listed in Algorithm 1.In MSNPE algorithm, we need to decide the retained number of the generalized eigenvectors (mode dimension) for each mode.Usually, the dimension numbers in most discriminant tensor analysis methods are decided empirically or according to applications.
In our experiments, we retain different dimension numbers for different modes, and do recognition for objects or scene categories.The recognition accuracy with varied dimensions in different modes are also given in the experiment part.The dimension numbers is decided empirically in the compared results with the state-of-art algorithms.
After obtaining the MSNPE basis of each mode, we can project each tensor object into these MSNPE tensors.For classification, the projection coefficients can represent the extracted feature vectors and can be inputted into any other classification algorithm.In our work, beside Euclidean distance as KNN (k=1) classifier, we also use Random Forest (RF) for recognition.
Database
We evaluated our proposed framework on two different types of datasets.
(i) View-based object datasets, which includes two datasets: The first one is the Columbia COIL-100 image library by Nene et al. (1996).It consists of color images of 72 different views of 100 objects.The images were obtained by placing the objects on a turntable and taking a view every 5 • .The objects have a wide variety of complex geometric and reflectance characteristics.Fig. 3(a) shows some sample images from COIL-100.The second one is the ETH Zurich CogVis ETH-80 dataset by Leibe & Schiele (2003a).This dataset was setup by Leibe and Schiele to explore the capabilities of different features for object class recognition.In this dataset, eight object categories including apple, pear, tomato, cow, dog, horse, cup and car have been collected.There are 10 different objects spanned large intra-class variance in each category.Each object has 41 images from viewpoints spaced equally over the upper viewing hemisphere.On the whole we have 3280 images, 41 images for each object and 10 object for each category.
Methodology
The recognition task is to assign each test image to one of a number of categories or objects.The performance is measured using recognition rates.
For view-based object databases, we take different experimental setup in COIL-100 and ETH80 datasets.For COIL-100, the objective is to discriminate between the 100 individual 101 Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor www.intechopen.comobjects.In most previous experiments on object recognition using COIL-100, the number of views used as training set for each object varied from 36 to 4. When 36 views are used for training, the recognition rate using SVM was reported approaching 100% by Pontil & Verri (1998).In practice, however, only very few views of an object are available.In our experiment, in order to compare experimental results with those by Wang (2006), we follows the experiment setup, which used only 4 views of each object for training and the rest 68 views for testing.In total it is equivalent to 400 images for training and 6800 images for testing.The error rate is the overall error rate over 100 objects.The 4 training viewpoints are sampled evenly from the 72 viewpoints, which can capture enough variance on the change of viewpoints for tensor learning.For ETH-80, it aims to discriminate between the 8 object categories.Most previous experiments using ETH-80 dataset all adopted leave-one-object-out cross-validation.The training set consists of all views from 9 objects from each category.The testing set consists of all views from the remaining object from each category.In this setting, objects in the testing set have not appeared in the training set, but those belonging to the same category have.Classification of a test image is a process of labeling the image by one of the categories.Reported results are based on average error rate over all 80 possible test objects by Leibe & Schiele (2003b).Similar to the above, instead of taking all possible views of each object in the training set, we take only 5 views of each object as training data.By doing so we have decreased the number of the training data to 1/8 of that used by Leibe & Schiele (2003b), Marrr et al. (2005).The testing set consists of all the views of an object.The recognition rate with the proposed scheme is compared to those of different conventional approaches by Wang (2006) and those with MSNPE analysis directly on pixel-level intensity tensor.
For facial dataset, which has large illumination variance in images, we validate that the tensor representation with the proposed NHOG for image representation will be much more efficient for face recognition than that with the popular SIFT descriptor, which only is somewhat robust to small illumination variance.In experiments Yale dataset, we randomly select 2, 3, 4 and 5 facial images from each individual for training, and the remainders for test.For CMU PIE dataset, we randomly select 5 and 10 facial images from each individual for training, and the remainder for test.We do 20 runs for different training number and average recognition rate in all experiments.The recognitions with our proposed approach are compared to those by the state-of-art algorithm by Cai et al. (2007a), Cai et al. (2007b).
Experimental results
(1) View-based object data sets We investigate the performance of the proposed MSNPE tensor learning compared with conventional tensor analysis such as tensor LDA by Wang (2006), which is also used in view-base object recognition, and the efficiency of the proposed tensor representation compared to the pixel-level intensity tensor, which directly consider a whole image as a tensor, on COIL-100 and ETH80 datasets.In these experiments, all samples are also color images, and SIFT descriptor for local region representation is used.Therefore, the pixel-level intensity tensor is 3rd tensor with dimension R1 × C1 × 3, where R1andC1 is row and column number of the image, and the local descriptor tensor is with 128 × K × 3, where K is the segmented region number of an image (here K=128).In order to compare with the state-of-art works by Wang (2006), simple KNN method (k=1 in our experiments) is also used for recognition.Experimental setup was given in Sec. 5, and we did 18 runs so that all samples can be as test.Figure 6(a) shows the compared results of MSNPE using pixel-level tensor and local descriptor tensor (denoted MSNPE-PL and MSNPE with KNN classifier, respectively, MSNPE-RF-PL and MSNPE-RF with random forest) and traditional methods by Wang (2006) (n.d.).From Table 3 and 4, it is obvious that our proposed algorithm can achieve the best recognition performances for all most cases, and the recognition rate improvements become greater when the training sample number is small compared to those by the conventional subspace learning methods by Cai et al. (2007a), Cai et al. (2007b), Cai (2009) and Cai (n.d.).In addition, as we have shown in the previous section, our proposed strategy can be applied not only for recognition of face with small variance (such as mainly frontal face database), but also for recognition of generic object with large variance.With generic object dataset with large variance, the recognition rates are also improved greatly compared with using pixel-level tensor.
Conclusion
In this paper, we proposed to represent an image as a local descriptor tensor, which is a combination of the descriptor of local regions (K * K-pixel patch) in the image, and more efficient than the popular Bag-Of-Feature (BOF) model for local descriptor combination, and at the same time, we explored a local descriptor for region representation for databases with large illumination variance, Which is improved to be more efficient than the popular SIFT descriptor.Furthermore, we proposed to use Multilinear Supervised Neighborhood Preserving Embedding (MSNPE) for discriminant feature extraction from the local descriptor tensor of different images, which can preserve local sample structure in feature space.We validate our proposed algorithm on different Benchmark databases such as view-based and facial datasets, and experimental results show recognition rate with our method can be greatly improved compared conventional subspace analysis methods. 105 Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor www.intechopen.com Fig. 1.(a) Extraction of local descriptor tensor for color image representation; (b)NHOG feature extraction from a gray region.
Tensor objects X c i from C classes, X c i denots the i th tensor object in the c th class Graph-based weights: Building nearest neighbor graph in same class and calculate the graph weight W according to Eq. 9 and D from W Initialize: Randomly initialize U d r ∈ R N d for d =1,2,•••, L for t=1:T (Iteration steps) or until converge do for d=1:L (Iteration steps) do • Calculate D d and S d assuming Fig. 3. Sample images from view-based object data sets.
Fig. 4. (a) The compared recognition rates on COIL-100 between the proposed framework and the state-of-art approaches Wang (2006).(b) Average recognition rate with different mode dimension using random forest classifier.
T be the covariance matrix of the x i .One solves the eigenvalue equation λu i = Cu i for eigenvalues λ i ≥ 0. The principal eigenspace U is spanned by the first K eigenvectors with the largest eigenvalues, U =[u i | K i=1 ].I fx t is a new feature vector, then it is projected to eigenspace U: y t = U T (x t − m).T h ev e c t o ry t is used in place of x t for representation and classification.(2)LocalityPreservingProjection:LPP seeks a linear transformation P to project high-dimensional data into a low-dimensional sub-manifold that preserves the local Structure of the data.LetX =[x 1 , x 2 , ••• , x N ] denotesthe set representing features of N training image samples, and Y Han et al.97 Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor(2011)to not only extract discriminant feature but also preserve the local geometrical and topological properties in same category for recognition.The proposed approach decompose each mode of tensor with objective function, which consider neighborhood relation and class label of training samples.Suppose we have ND tensor objects X from C classes.The c th class has n c tensor objects and the total number of tensor objects is n.Let
Table 4 .
Average recognition error rates (%) on PIE dataset with different training number. | 6,944 | 2012-03-02T00:00:00.000 | [
"Computer Science"
] |
Using a Model of Germ-Free Animals to Study the Impact of Gut Microbiome in Research: A Step by Step Sterility Setting and Management
The particularly unique composition of the gut microbiota has the potential to influence the health or disease status of animal and human hosts. Altering the homeostasis of the host-bacteria could lead to changes in gut flora that result in disease or activation of a specific immunological response, which could explain the variations observed in patient responses to current therapies. A standardized model is crucial for studying the influence of the gut microbiota on therapeutic modalities. A step by step mouse model and sterility management system that compares a control strain of C57BL/6 mice to the established C57BL/6 germ-free (GF) strain has been developed. The GF BL/6 mouse phenotype is well established, and the anatomical differences between the GF and control mice were evident in this model. This method could be applied to research studies investigating the microbiome impact, the response to various therapies, or disease transfer via fecal transplants. A standardized sterility maintenance method is crucial in this context.
Introduction
The gut microbiome continues to be a focus of intense research. A recent publication catalogs the mouse gut metagenome, which is similar to that in humans, and revealed that approximately 99% of cataloged genes are bacterial. Furthermore, 95.2% of cataloged bacterial genes have been identified for humans and mice [1] The remaining 1% consists of viruses, fungi, worms, and other resident microorganisms [2]. Clinical data provide strong evidence that a stable and diverse gut microbial composition is important for the maintenance of optimal health [3,4]. Altered microbial profiles, resulting in altered homeostasis of the host-bacteria, have been associated with numerous human illnesses [5][6][7].
To assess the critical question of whether commensal bacteria play a role in a given disease and/or contribute to therapy failure, the ideal model would use germ-free (GF) animals, totally devoid of bacteria, in parallel with wild-type animals having a defined microbiota. Although use of an animal model to address this question is not ideal, designing such a model in humans is not ethically possible. Currently, germ-free studies most commonly use mice [8]. However, rats (Sprague Dawley), chickens and guinea pigs have been used, with piglets and calves used less extensively [8,9].GF mouse strains have been maintained continuously since 1954. Methods of obtaining GF animals were simplified by the development and improvement of isolation equipment [10]. Today, two distinct methods for obtaining GF mice exist: aseptic cesarean or embryo transfer, thereby reducing the risk of vertical contamination. The challenge, however, is to maintain a sterility barrier to avoid contamination, which is common in laboratories, and therefore introduce study bias. To our knowledge, no detailed method describing a protocol have been published in the literature.
The purpose of this preliminary work was to establish a protocol for comparing the response to a therapeutic agent under controlled (germ-free) conditions to settings in which the characteristic microbiota (wild-type) is maintained. A detailed method, easy to follow step by step, is proposed to maintain a continuous sterility flow thereby avoiding sterility breaches and highlighting the importance of not disrupting the microbiome of mice included in therapeutic cohorts (germ-free, gnotobiotic, or conventional controls) by the addition of antibiotics or using a method of sterilization impacting food nutrients, thereby introducing bias into the study. The impact of the microbiome on the variability of responses to a chosen therapeutic agent could then be investigated. Maintenance of a sterile environment without breach is crucial. A similar approach could be used for fecal transplants in GF animals, rendering them gnotobiotic for disease transmission studies.
Experimental Design
The first challenge faced when setting up a model using a germ-free colony was to ensure that the mice were housed in an environment totally free of pathogens, including viruses, fungi, and bacteria, and that non-germ-free controls were housed under the same conditions with respect to bedding, food and water supplies.
Materials and Equipment
• A devoted room and technical team are basic requirements. GF mice can be housed in sterile semirigid isolators (SRI) or in flexible isolators (Park Bioservices, LLC, Groveland, MA, USA) ( Figure 1C). We chose to work with SRI. Each isolator, depending on its size, can hold up to 12 cages, each housing four mice. Biological indicators (BI, Apex Biological Indicator for gaseous hydrogen peroxide) are placed prior to the fogging, and subsequently analyzed to verify adequate killing and determine bacterial log reduction. Specific personal protective equipment (PPE), which includes disposable coveralls, cap, mask, nitrile gloves, and shoe covers, is required to enter the room. Jewelry and perfume must be removed prior to putting on PPE. Use of a second set of shoe covers, cap and nitrile gloves are recommended prior to entering the GF room, as per specific protocol and Standard Operating Procedures (SOPs) ( Figure 1A) • C57 BL/6 GF mice, as well as their control C57BL/6 littermates, can be purchased from several authorized vendors, as well as other germ-free strains such as BALB/c, Swiss Webster, or strains bred in-house. However, the latter poses a challenge due to the poor reproductive performance of the mice, attributable to restricted abdominal space from an enlarged cecum. In our studies, mice were purchased from Taconic (Taconic Bioscience, Rensselaer, NY, USA).
•
All animal procedures and Animal Study Proposals (ASP) were reviewed and approved by an Institutional Animal Care and Use Committee (FDA White Oak IACUC for our studies).
Set up and Preparation
The key to conducting a germ-free study is to ensure total sterility of the germ-free mouse environment. To maintain such an environment, a strict protocol must be established before initiating the study and it must be maintained for the duration of this study.
Mice should be housed in a restricted-access room, accessible only to authorized personnel having proper training in germ-free handling ( Figure 1B). This room should be prepared prior to housing the animals. A thorough decontamination of the room containing the isolators is first performed using a hydrogen peroxide (H 2 O 2 ) vapor solution delivered by a fogger (HaloFogger Extended Nozzle, Quip Laboratories, Wilmington, DE, USA; Figure 1D).
Biological indicators (BI, Apex Biological Indicator for gaseous hydrogen peroxide) are placed prior to the fogging, and subsequently analyzed to verify adequate killing and determine bacterial log reduction. Specific personal protective equipment (PPE), which includes disposable coveralls, cap, mask, nitrile gloves, and shoe covers, is required to enter the room. Jewelry and perfume must be removed prior to putting on PPE. Use of a second set of shoe covers, cap and nitrile gloves are recommended prior to entering the GF room, as per specific protocol and Standard Operating Procedures (SOPs) ( Figure 1A)
Experimental Design
Before animals are ordered, ensure that all the supplies needed to conduct the study from inception to completion, including backups, are on hand. Isolators should be set up with all necessary sterile supplies needed to carry out the approved experiment, (i.e.; autoclaved cloth gloves, bags, forceps, paper towels, ear punch, labeled tubes, plastic rack, syringes and needles, scalpels, and disposable sharp mini-trash), prior to receipt and occupation by GF animals. This also includes preparation and autoclaving of the cages with fresh bedding (BioFresh Comfort White Auto, Lab Diet, Richmond, IN, USA). Bedding and environmental enrichment (Enviro-dri, Certified LabDiet, Richmond, IN, USA) are placed in the cage prior to autoclaving. Autoclaved, irradiated rodent chow and non-chlorinated water should also be prepared (Certified LabDiet, Richmond, IN, USA). Dechlorination of water is achieved by leaving water in an open container overnight so that chlorine in the water will dissipate prior to transfer into the water bottles for autoclaving. No antibiotics were added in the water, or in any other process to avoid bias in the microbiome studies. All materials entering the isolators were individually sterile wrapped, autoclaved, or appropriately sprayed with an approved sterilant (200 ppm MB-10, prepared with MB-10 tablets: 20.8% sodium chlorite, 7% sodium dichloroisocyanurate dihydrate solution; Quip Laboratories). A specific procedure used for transferring supplies into the isolator via a sterile port with 45 minutes' wait time was implemented to insure sterilization. Careful planning helped minimize the number of material transfers into the isolator. (Figure 2) 3.2.1. Procedure: Introduction of the Mice The first step consists of appropriately housing the sterile animals immediately upon arrival. Transport boxes arrive from the vendor in cylinders sealed with tape, with separate sets of cages inside (Figure 2A). Two technicians are needed to successfully house the GF animals. The transport container is first sprayed with sterilant and placed directly in front of the isolator port entrance. A direct fit must be ensured between the port cylinder and the transport cylinder and the two parts are liberally sprayed with sterilant to complete saturation. The soft plastic portion of the cylinder is attached to the SRI port with multiple layers of nonporous packing tape and allowed to saturate for one hour. At this time, a health assessment of the GF animals is conducted by an animal health technician dedicated to the GF room, and a facility veterinarian is contacted immediately if a health issue is detected [11].
Working in the Sterile Environment
Methods Protoc. 2020, 3, x FOR PEER REVIEW 4 of 8 to insure sterilization. Careful planning helped minimize the number of material transfers into the isolator. Figure 2).
Procedure: Introduction of the Mice
The first step consists of appropriately housing the sterile animals immediately upon arrival. Transport boxes arrive from the vendor in cylinders sealed with tape, with separate sets of cages inside (Figure 2A). Two technicians are needed to successfully house the GF animals. The transport container is first sprayed with sterilant and placed directly in front of the isolator port entrance. A direct fit must be ensured between the port cylinder and the transport cylinder and the two parts are liberally sprayed with sterilant to complete saturation. The soft plastic portion of the cylinder is attached to the SRI port with multiple layers of nonporous packing tape and allowed to saturate for one hour. At this time, a health assessment of the GF animals is conducted by an animal health technician dedicated to the GF room, and a facility veterinarian is contacted immediately if a health issue is detected [11]. Spare cages are also kept in the isolator for cage changing. Co-housed animals remain as cage mates from birth to minimize aggressive behavior. Homogeneity of the GF microbiome in these mice is assured by random pairing of the co-housed mice. Animals are accessible using the port gloves, which are covered by a supplemental pair of sterile surgical gloves or cloth gloves for manipulations ( Figure 2C). Colonies are age-and gender-matched, with control C57BL/6 mice housed under the same conditions in an adjacent room. All animals require one week of acclimatization before any experiments commence ( Figure 2D).
Identification of the Mice
Ear punching was used in our model to identify the animals (for injections, fecal transplant or collection purposes). The ear punch is aseptically introduced before animals enter the isolator, with a specific code implemented to identify each animal. The control C57BL/6 mice housed in the adjacent room might have the same codes to identify the different experiments. Spare cages are also kept in the isolator for cage changing. Co-housed animals remain as cage mates from birth to minimize aggressive behavior. Homogeneity of the GF microbiome in these mice is assured by random pairing of the co-housed mice. Animals are accessible using the port gloves, which are covered by a supplemental pair of sterile surgical gloves or cloth gloves for manipulations ( Figure 2C). Colonies are age-and gender-matched, with control C57BL/6 mice housed under the same conditions in an adjacent room. All animals require one week of acclimatization before any experiments commence ( Figure 2D).
Identification of the Mice
Ear punching was used in our model to identify the animals (for injections, fecal transplant or collection purposes). The ear punch is aseptically introduced before animals enter the isolator, with a specific code implemented to identify each animal. The control C57BL/6 mice housed in the adjacent room might have the same codes to identify the different experiments.
Animals Observation
Animal observation is carried out by trained technicians or investigators once daily and recorded. Cage change is performed every two weeks. Forceps are used to handle the mice to minimize a potential breach in the barrier. Isolator entries are documented on a log sheet. Each week, swabs are collected for bacterial and fungal culture testing. This includes swabs taken weekly from the port entrance walls, isolator walls, right and left gloves, mold trap, water and food stock, and cage environment. Gram stains of feces, mold trap, and feed are performed weekly, examined and compared to the previous week's Gram stain slide to document that there are no changes in sterility.
Fecal Sample Collection
Prior to beginning a study, fecal samples can be collected for microbiome analysis as baseline screening, then on regular intervals to monitor any changes. Prior to autoclaving, ensure that each tube is properly labeled and/or numbered. All tubes can be placed in the isolator at the beginning of the study. (Colored tubes could reduce mistakes due to label loss/readability.) Fecal samples are collected and placed into tubes with sterile tweezers within the isolator. Following the protocol, the tubes are placed inside the SRI port and sprayed with sterilant before being removed from the isolator. Tubes are then placed in sterile pouches, sealed, and kept on dry ice to be stored at -80 • C in appropriately labeled boxes.
Mice Injections
Biologic or Small Molecule Therapeutics Sterilized 1.5 mL tubes containing therapeutics to be injected intra-peritoneally or subcutaneously can be transferred through the port entrance after being liberally sprayed with sterilant prior to injections ( Figure 2B). An adequate number of individually-wrapped sterile syringes are sprayed and can be transferred en masse to cover the length of a study. Mice are removed from their cage individually for injections, then replaced and further monitored by a technician over the next few days to monitor potential adverse event occurrence.
Mice Can be Fecal Transplanted (Fecal Matter Transplant, FMT) Using a Similar Protocol Sterilized 1.5 mL tubes containing FMT preparations, (one selected strain of bacteria for example) prepared in sterile anaerobic conditions and to be given by oral gavage technique, can be transferred through the port entrance after being liberally sprayed with sterilant prior to injections. These tubes are transported in sterile sealed pouches from the sterile anaerobic chambers to the dedicated GF room and SRI sterile port entrance. An adequate number of individually-wrapped sterile syringes are sprayed and can be transferred en masse to cover the length of a study ( Figure 2B). An equal number of sterile individually wrapped needles for oral gavage are transferred to the isolator following the same sterilizing protocol (Cadence Science malleable stainless-steel animal feeding needles, Fisher, Hampton, NH, USA). Mice are removed from their cage individually for oral gavage, then replaced and further monitored to ensure proper transplantation. After FMT, mice and isolators are considered gnotobiotic, but continue to be handled using sterile conditions. 3.2.6. Mice Bleeding, Tissue Collection GF mice can be bled inside the isolator by a trained investigator. Blood collection tube coding/numbering/coloring should parallel that used for fecal collection. The tubes are placed on a plastic sterile rack previously introduced and each mouse is bled. A submandibular technique, using a sterile 4 mm lancet, has proven to be an effective choice. Alternative techniques include tail bleeding or terminal bleeding. Do not perform retro-orbital blood collection in these fragile mice. From each adult mouse, blood is collected in tubes and placed into the port to be transferred outside, Methods Protoc. 2020, 3, 18 6 of 8 following the sterile protocol. It is then immediately transferred to serum tubes for sterile processing and to avoid coagulation. Similarly, collection of other tissues can be performed inside the isolator following sterile protocols.
End of the Study
Removal of the mice from the isolator occurs under only two conditions: at the endpoint of a study, or if one of the animals is ill and needs to be euthanized per the facility Standard Operating Procedures (SOP) or as described in the animal study protocol. Control C57BL/6 mice are euthanized using similar established procedures, generally with CO 2 or by cervical dislocation.
Expected Results and Discussion
Characterization of phenotype (Figure 3) Control C57BL/6 animals are housed, fed, and watered with the same autoclaved supplies as the GF colonies, including the same biofresh bedding. This step is important to avoid any bias in therapeutic studies.
The GF mouse phenotype and description have been well documented for many years and the gastrointestinal structure and function of these animals was extensively characterized by Thompson and Trexler from London in 1971 [10]. The distended cecum, weighing about ten times more than normal, is intriguing. It has been suggested that this excess weight is due to enhanced water transport. So far, the best way to return the enlarged cecum to normal has been to colonize the gastrointestinal tract with anaerobic bacteria. The absorption, linked to the speed of transit in the small intestine, is also disturbed in GF animals; Partially digested and unabsorbed peptides accumulate in the cecum. The authors also reported that other organs, such as liver, lungs, heart, and spleen were smaller [10]. The GF mice in this protocol presented a phenotype consistent with previous publications [12][13][14]. They were lean with no discernable fat, presented a five-fold enlarged cecum relative to controls and exhibited slightly smaller organs; including smaller spleens, lungs, livers, and hearts ( Figure 3A-D). Additionally, they exhibited increased heat sensitivity and an increased need for food and water. We found the animals looked very healthy and were more active than conventional mice. To our knowledge, this hyperactive phenotype had not been previously reported.
Germ free mice have been extensively used for deciphering some mechanisms linked to diseases, such as Type 2 diabetes mellitus, T2DM [15] behavioral functions at the gut-brain axis and autism [16], cardiovascular diseases [17] and cancer [18]. The use of therapeutics to treat these diseases is also investigated with the use of germ-free colonies and use of a standardized protocol and method would be of interest for future studies. We used this model for investigating biologic therapeutics and microbiome interactions [19], and made use of the same setup to perform fecal matter transplant (FMT) in GF mice: one isolator was dedicated to gnotobiotic mice, while the other contained GF mice. Fecal transplants were manipulated in the same way as injections and sterilely introduced into the gnotobiotic isolator before being transferred immediately to mice. They were previously prepared under sterile anaerobic conditions and introduced into the isolator in prepared syringes individually wrapped in sterile pouches. Iso cages (Tecniplast, West Chester, PA, USA) can also be used as gnotobiotic isolators, with four to six mice co-housed in each cage.
In preclinical studies, GF animals do not mimic human conditions, but serve as models to better understand the influence of bacteria on host development and function. Moreover, colonizing the animals with one strain or a cocktail of known strains of bacteria enabled us to determine the impact of a specific bacteria on multiple health-related issues. Some alternatives to the germ-free murine model exist. The use of antibiotic treatment, probiotic feeding, fecal transplants, and mouse humanization have been explored 8 . However, using GF animals has proven to be the best alternative so far, even though mouse models have limitations with respect to extrapolation of findings to humans. using similar established procedures, generally with CO2 or by cervical dislocation.
Characterization of phenotype (Figure 3)
Control C57BL/6 animals are housed, fed, and watered with the same autoclaved supplies as the GF colonies, including the same biofresh bedding. This step is important to avoid any bias in therapeutic studies. The GF mouse phenotype and description have been well documented for many years and the gastrointestinal structure and function of these animals was extensively characterized by Thompson and Trexler from London in 1971 [11]. The distended cecum, weighing about ten times more than normal, is intriguing. It has been suggested that this excess weight is due to enhanced water transport. So far, the best way to return the enlarged cecum to normal has been to colonize the gastrointestinal tract with anaerobic bacteria. The absorption, linked to the speed of transit in the small intestine, is also disturbed in GF animals; Partially digested and unabsorbed peptides accumulate in the cecum. The authors also reported that other organs, such as liver, lungs, heart, and spleen were smaller [11]. The GF mice in this protocol presented a phenotype consistent with previous publications [13][14][15]. They were lean with no discernable fat, presented a five-fold enlarged cecum relative to controls and exhibited slightly smaller organs; including smaller spleens, lungs, livers, and hearts ( Figure 3A-D). Additionally, they exhibited increased heat sensitivity and an increased need for food and water. We found the animals looked very healthy and were more active than conventional mice. To our knowledge, this hyperactive phenotype had not been previously reported.
Conclusions
We are still in the discovery phase with respect to determining the role of the gut microbiota in human health and disease. This model introduces new avenues of research facilitating extrapolation of murine findings to humans. Ascertaining the function of the large number of microorganisms colonizing human bodies, particularly in the gut, might help elucidate the onset of, and gain insight into, the progression of diseases and selection of appropriate therapies. Furthermore, our enhanced understanding of the system might enable its modulation to improve therapeutic success. The complexity of these host-bacteria interactions, the fine regulation of symbiosis in the gut, and the immune mechanisms linked to these modifications clearly present a challenge today. Deciphering the interactions between the microbiome and host cells should help address numerous questions, particularly with respect to autoimmune diseases and their therapies. Deciphering the interactions between the microbiome and host cells will help in the design of a precision medicine approach to patient treatments, in which therapeutic optimization and successful responses would be more attainable. It could also initiate the development of new therapeutics targeting the microbiome to help optimize the efficacy of biologics or small molecule drugs. | 5,181.4 | 2020-02-22T00:00:00.000 | [
"Medicine",
"Biology",
"Environmental Science"
] |
Regulation of carbohydrate degradation pathways in Pseudomonas involves a versatile set of transcriptional regulators
Summary Bacteria of the genus Pseudomonas are widespread in nature. In the last decades, members of this genus, especially Pseudomonas aeruginosa and Pseudomonas putida, have acquired great interest because of their interactions with higher organisms. Pseudomonas aeruginosa is an opportunistic pathogen that colonizes the lung of cystic fibrosis patients, while P. putida is a soil bacterium able to establish a positive interaction with the plant rhizosphere. Members of Pseudomonas genus have a robust metabolism for amino acids and organic acids as well as aromatic compounds; however, these microbes metabolize a very limited number of sugars. Interestingly, they have three‐pronged metabolic system to generate 6‐phosphogluconate from glucose suggesting an adaptation to efficiently consume this sugar. This review focuses on the description of the regulatory network of glucose utilization in Pseudomonas, highlighting the differences between P. putida and P. aeruginosa. Most interestingly, It is highlighted a functional link between glucose assimilation and exotoxin A production in P. aeruginosa. The physiological relevance of this connection remains unclear, and it needs to be established whether a similar relationship is also found in other bacteria.
Introduction
Bacteria of the genus Pseudomonas are ubiquitous inhabitants of soil, water, plant surfaces, animal and human tissue and have a robust metabolism for amino acids and organic acids as well as aromatic compounds (Jim enez et al., 2002;Puchalka et al., 2008;Valerie et al., 2013;Daniels et al., 2010), and the deciphering of the complete genomes of a number of Pseudomonas strains from different species has revealed that these microbes metabolize a very limited number of sugars (Buell et al., 2003;Feil et al., 2005;Joardar et al., 2005), which are mainly glucose, glucuronic acid and fructose (Daniels et al., 2010). This metabolic pattern has been associated with their lifestyle as they inhabit environmental niches characterized by a limited presence of sugars (Silby et al., 2011). Studies in Pseudomonas putida have shown that there is a three-pronged metabolic system to generate 6-phosphogluconate from glucose (Del Castillo et al., 2007;Fig. 1). Glucose enters through the OprB porin into the periplasmic space. Once in the periplasm, glucose can be either transported to the cytosol or converted into gluconate and 2-ketogluconate (KG) to be subsequently transferred to the cytosol via different transporters (GntP and KguT respectively). Gluconate can be phosphorylated to 6-phosphogluconate by gluconokinase (GnuK), whereas the conversion of 2-ketogluconate into 6-phosphogluconate requires two enzymatic reactions mediated by KguK and KguD (Fig. 1); whereby, it starts the Entner-Doudoroff pathway (Nikel et al., 2015).
In addition, the growing number of complete genome sequences of Pseudomonas strains in public databases (Jayal et al., 2017;Nesme et al., 2017;Wilson et al., 2017) along with the increasing sophistication of the techniques used in metabolomics and transcriptomics (La Rosa et al., 2015;Nikel et al., 2015) has provided us with key information for a further understanding of the complex regulation processes and functionality of the enzymes that participate in carbohydrate metabolism in Pseudomonas. The genes encoding the carbohydrate catabolic pathways are organized in operons (Fig. 2), which are under the control of different regulators that respond differentially to distinct pathway intermediates, suggesting a hierarchy in the control of glucose metabolism related to a tight gene expression (Rojo, 2010;Daddaoua et al., 2014;Figs 2 and 3). Transcriptional regulation is the primary mechanism to control gene expression in prokaryotic cells (Ishihama, 2000). Typically, transcriptional regulators sense certain environmental cues and the resulting molecular stimulus modulates their interactions with RNA polymerases or DNA. In one-component regulatory system (OCS), the input (i.e. sensing) and output functions are united in a single protein. In two-component system (TCS), a membrane-bound histidine kinase is dedicated to signal sensing, whereas the response regulator protein mediates a transcriptional response (Mitrophanov and Groisman, 2008). TCS represents the major regulatory mechanism in bacteria and archaea and is responsible for the Fig. 2. Genetic organization of genes encoding enzymes of carbohydrate degradation pathways and exotoxin A in different Pseudomonas strains. The genes that were found to be regulated are boxed, and the corresponding regulator y system is provided over each block of genes. transformation of external and internal stimuli into adaptive responses, including regulation of gene expression and methylation of target proteins (Skerker et al., 2005;Mitrophanov and Groisman, 2008). In a typical ligandinduced TCS, changes in the autokinase activity of the sensor kinase modulate the rate of transphosphorylation to its cognate response regulator, which in turn defines the system output. In addition, there is evidence for TCS that contains additional signalling proteins (Mitchell et al., 2015;Popella et al., 2016). Data so far available indicate that the regulation of the glucose catabolic pathways in Pseudomonas is controlled by the concerted action of the one-component systems (OCSs) HexR, PtxS, PtxR and GntR as well as the two-component system (TCS) GltR/GtrS (Daddaoua et al., 2009(Daddaoua et al., , 2012(Daddaoua et al., , 2014(Daddaoua et al., , 2017. This review focuses on the description of this regulatory network and highlights a number of differences in regulation between strains of the soil bacterium P. putida and the opportunistic human pathogen Pseudomonas aeruginosa. Glucose metabolism in Pseudomonads is fundamentally different to that in Escherichia coli (Lendenmann et al., 1996;Fuhrer et al., 2005; Fig. 1), and the current knowledge on its regulation is reviewed here.
Carbohydrate catabolic pathways in Pseudomonas
Metabolic flux analysis of wild-type Pseudomonas and different mutant strains has permitted to estimate the carbon flow through the individual three peripheral uptake routes. These data revealed that the flow through the gluconate kinase (GnuK) route is minor, whereas the remaining flux appears to be similarly distributed to the routes that involve the phosphorylation of either glucose or 2-ketogluconate (Del Castillo et al., 2007). Carbohydrates enter the periplasmic space through porins located in the outer membrane (OprB1/OprB2) (Figs 1 and 3A). Once in the periplasm, glucose can be oxidized to gluconate through the action of a glucose dehydrogenase (Gcd), and gluconate is transported to the cytoplasm and phosphorylated to 6-phosphogluconate (6PG) by gluconate kinase (GnuK). Alternatively, gluconate, still in the periplasm, can be further oxidized to 2-ketogluconate (2KG) by the action of gluconate dehydrogenase (Gad); then, 2KG enters the cytoplasm and is converted into 6PG via 2-keto-6-phosphogluconate by the action of the 2-ketogluconate kinase (KguK) and 2-ketogluconate-6-phosphate reductase (KguD). Glucose can also be transported directly to the cytoplasm through an ABC uptake system, and the first acting enzyme is glucokinase (Glk) that phosphorylates glucose to give glucose 6-phosphate (G6P). Next, the combined action of glucose 6-phosphate dehydrogenase (Zwf) and 6-phosphogluconolactonase (Pgl) convert G6P into 6-phosphogluconate (6PG) (Del Castillo et al., 2007). The produced 6PG enters to the Entner-Doudoroff route and is converted into 2-keto-3-deoxy-6-phosphogluconate (KDPG) by the action of the 6-phosphogluconate dehydratase (Edd) and then hydrolysed to produce glyceraldehyde-3-phosphate and pyruvate by the action of the 2-keto-3-deoxy-6-phosphogluconate aldolase (Eda) (Figs 1 and 3A; Braga et al., 2004). Glyceraldehyde-3phosphate is further metabolized by the glyceraldehyde-3-phosphate dehydrogenase (Gap-1) to D-glycerate 1,3-bisphosphate, while pyruvate is decarboxylated to acetyl-coenzyme A (Acetyl-CoA) and enters the Krebs cycle ( Fig. 1).
Genomic distribution of genes involved in Pseudomonas carbohydrate catabolism
The analysis of the genomic localization of genes involved in glucose catabolism ( Fig. 2) revealed that genes are arranged in operons that encode for different sets of catabolic enzymes, some transcriptional regulators as well as specific porins. Interestingly, the two genes that encode the key enzymes of the Entner-Doudoroff pathway, edd and eda, are located in different operons. The edd and glk genes (phosphorylative branch) form part of the same operon together with the genes encoding the regulatory proteins GltR, GtrS and the gap-1 gene (Fig. 2). The eda gene forms an operon with the zwf and pgl genes (phosphorylative branch) as well as the gene encoding the regulatory protein HexR (Fig. 2). This genomic organization (edd/glk and zwf/pgl/ eda containing operons) suggests a complex regulation, because the main route of glucose metabolism in Pseudomonads occurs through the 2KG degradation pathway, which, however, are encoded by kguT, kguK and kguD that form another operon with ptxS, encoding a transcriptional regulator (Fig. 2). The fact that the genes that control the expression of Edd and Eda proteins are located in the same operons as the genes involved in the glucose phosphorylative pathway, which is not the main glucose degradative pathway in Pseudomonas, can be a reminiscence of an ancestral organism and explains why this pathway is still active in Pseudomonas; both edd and eda genes are necessary for the 6PG conversion into tricarboxylic acid (TCA) intermediates, regardless of the peripheral pathway by which glucose has been converted into 6PG.
Usually, promoters that control the expression of genes that encode proteins implicated in the synthesis of cell structures, such as flagella, pili and fimbriae, or those involved in complex cellular processes, that is, virulence or biofilm formation, are often controlled by multiple environmental signals that are recognized by multiple transcription factors (Dalebroux et al., 2010;Rasamiravaka et al., 2015;).
Data currently available indicate that the control of glucose metabolism in Pseudomonas is controlled by the concerted action of the HexR, PtxS, PtxR and GntR, which are OCSs, whereas and the GltR/GtrS is TCS. The transcriptional regulation of glucose degradation has been studied in Pseudomonas putida KT2440 and P. aeruginosa PAO1, and the current knowledge is summarized in Figure 1.
OCS involved in carbohydrate catabolism pathways
HexR. The HexR regulator belongs to the RpiR family of transcriptional regulators whose members typically act as transcriptional regulators in sugar catabolism and have been identified as both repressors and activators in Gram-negative and Gram-positive bacteria (Yamamoto et al., 2001). In E. coli, RpiR negatively regulates the expression of the rpiB gene that encodes a ribose 5phosphate isomerase, an enzyme that catalyses the reversible reaction of ribose 5-phosphate to ribulose 5phosphate and forms part of the pentose phosphate pathway ( Fig. 1; Sorensen and Hove-Jensen, 1996). Sugar-responsive RpiR proteins form dimers in solution and have an N-terminal helix-turn-helix (HTH) DNAbinding motif and a sugar isomerase-like binding (SIS) domain at their C-terminal extension (Bateman, 1999).
In Pseudomonas, HexR regulator is divergently transcribed from the zwf/pgl/eda operon, and the physical organization of these genes is highly conserved within Pseudomonas, which suggests the conservation of the identified regulatory mechanism (Fig. 2). HexR regulator controls the zwf/pgl/eda and edd/glk/gltR-2 operons as well as the gap-1 gene (Daddaoua et al., 2009;Fig. 3 and Table 2) and is also required for the metabolism of other sugars such as fructose and gluconate (Fig. 1). The HexR regulator exerts its regulatory action by binding to specific sequences in the target promoters (Figs 3 and 4 and Table 1), which in turn prevents the progress of RNA polymerase. HexR recognizes 2-keto-3-deoxy-6phosphogluconate (KDPG), an intermediate of the Entner-Doudoroff pathway (Table 2), and is required not only for glucose catabolism but also for the metabolism of other sugars such as fructose and gluconate (Fig. 1). HexR acts as a transcriptional repressor in the absence of a specific effector; but binding of KDPG to DNA-bound HexR causes protein dissociation and transcriptional activation (Fig. 4). Furthermore, it has been speculated that this is not a 'random genetic organization' as KDPG plays a relevant role as a signalling molecule in catabolite repression and in the response to oxidative stress in Pseudomonas (Daddaoua et al., 2009;Rojo, 2010).
Homology modelling of HexR using the structure of the YbbH transcriptional regulator from Bacillus subtilis (PDB: 2O3F) as a template (36% sequence identity) revealed that HexR is composed of two distinct domains, namely an N-terminal HTH containing DNA-binding domain (residues 20-126) and a C-terminal domain (residues 117-256) that is homologous to the phosphosugar isomerase domain of the RpiR family. Furthermore, the inspection of this model suggested that the amino acids involved in DNA recognition are Gln-43, Lys-46, Glu-49, Arg-54 and Arg-57, and the amino acids Arg-54 and Arg-57 may be directly interacting with DNA ( Fig. 5; Daddaoua et al., 2009).
In P. putida, KT2440 PtxS controls its own expression as well as that of the operons gad and kgu (Daddaoua et al., 2012). However, in P. aeruginosa PAO1, PtxS regulates, in addition (Figs 1 and 3), the expression of the toxA gene that encodes the exotoxin A, a primary virulence factor (Daddaoua et al., 2012;Fig. 3). This protein is an ADP-ribosyl transferase that irreversibly inhibits protein synthesis in eukaryotic cells causing cell death.
In P. aeruginosa, but not in other Pseudomonas, an additional transcriptional regulator ptxR is located within the kgu cluster and transcribed divergently from the ptxS gene (Fig. 2). PtxR belongs to the LysR-type of transcriptional regulators and does not share any significant sequence similarities with PtxS (13% sequence identity in an alignment with seven gaps). PtxR is also involved in the regulation of toxA expression (Hamood et al., 1996) which encodes an ADP-ribosyl transferase that irreversibly inhibits protein synthesis in eukaryotic cells causing cell death. It has been shown that endotoxin A production is triggered by certain environmental conditions (such as cation concentration, iron and oxygen levels or temperature) and is controlled by different regulators, and the involvement of RegA, PtxR and the iron-starvation alternative sigma factor PvdS in this complex regulatory process has been documented (Wick et al., 1990;Hamood et al., 2004).
PtxR is also involved in the regulation of several genes of the glucose degradation pathway (Daddaoua et al., 2012) and recognizes a pseudopalindrome with a consensus sequence of 5 0 -GGC-N 4-6 -GCC -3 0 (Fig. 3 and Table 2) which overlapped with the RNA polymerase binding site of P toxA , P kgu and P gad promoters. PtxR has a DNA-binding HTH domain (in the N-terminal region) and has a signal receptor domain at its C-terminal extension that is composed of two subdomains: one is responsible for inducer recognition, whereas the other is involved in the response (Maddocks and Oyston, 2008). A three-dimensional homology model of PtxR was built using, the LysR family protein member, CrgA of Neisseria meningitidis (PDB: 3 hhg) (Sainsbury et al., 2009) as a template. The in silico analysis of the model together with isothermal titration calorimetry (ITC) assays using PtxR mutants revealed that residues Ser-40, Glu-44 and Asp-52 in the HTH are involved in interaction with the target DNA (Fig. 5).
In P. aeruginosa, PtxS regulates the expression of ptxR; therefore, it is involved in the indirect control of the production of exotoxin A (Fig. 3). PtxS does not directly bind to the toxA promoter; instead, ITC analyses demonstrated that PtxS forms a tight complex with PtxR, either in its free form or when bound to the P toxA DNA (Colmer and Hamood, 1998;Daddaoua et al., 2012;). The binding of PtxS to DNA-bound PtxR prevents PtxR from activating transcription (Fig. 4). The binding of 2-ketogluconate to PtxS causes its dissociation from PtxR allowing the activation of transcription (Daddaoua et al., 2012). It could be speculated that PtxR might be responsible for the recruitment of RNA polymerase allowing transcription.
In contrast to the mechanism by which PtxR and PtxS control the expression of P toxA , both regulators bind to P kgu and P gad . As both proteins interact with each other and as their operator sites are separated by around 50 bp, it has been shown that the interaction between these two DNA-bound provokes the formation of a DNA loop (Daddaoua et al., 2013). It has been suggested that this loop structure prevents the RNA polymerase to access the promoter (Huo et al., 2009).
The binding of 2-ketogluconate to PtxS breaks the loop permitting RNA polymerase recruitment for the transcription of the genes involved in 2-ketogluconate catabolism ( Fig. 4; Daddaoua et al., 2012Daddaoua et al., , 2013. Therefore, it seems that there are two different control mechanisms exerted by the same regulator, in the P toxA promoter via PtxS/PtxR/DNA complex formation and in the case of P kgu and P gad promoters by forming a DNA/PtxS/PtxR/ DNA loop structure (Fig. 4). GntR. As has been mentioned by Daddaoua et al. (2017), the inspection of the genetic context of genes involved in glucose metabolism in P. aeruginosa resulted in the detection of a GntR-like transcriptional regulator (Jain, 2015), that is predicted to possess an Nterminal HTH DNA-binding motif and a periplasmic binding protein-like domain for effector binding (Daddaoua et al., 2017). In P. aeruginosa, it was found that the gntR gene is transcribed divergently to the gnuK gene (gluconokinase), which is located adjacent to those of the gluconate transporter (GntP) and a glyceraldehyde-3-phosphate dehydrogenase (GapN) (Fig. 2) It has been proposed that GntR regulates its own expression, as well as that of a gluconokinase (GnuK), gluconate permease (GntP) and gluconate 6phosphate dehydrogenase (GntZ) gene (Daddaoua et al., 2017;Del Castillo et al., 2007).
However, the GntR homologue of P. aeruginosa shared only modest sequence identities (11-37%) with characterized paralogues in Corynebacterium glutamicum (Frunzke et al., 2008), Sinorhizobium meliloti (Steele et al., 2009) and Vibrio cholerae (Roy et al., 2016). Recent data confirm that GntR represses its own expression as well as that of the GntP gluconate permease (Table 1). In contrast to PtxS and GtrS/GltR, GntR did not modulate expression of the toxA gene encoding the P. aeruginosa exotoxin A virulence factor. GntR bound to promoters P gntR and P gntP , and the consensus sequence of its operator was defined as 5 0 -AC-N 1 -AAG-N 1 -TAGCGCT-3 0 ( Table 2). Both operator sites overlapped with the RNA polymerase binding site. GntR employs an effector-mediated derepression mechanism (Fig. 4) because the release of promoter-bound GntR is induced by gluconate and 6-phosphogluconate that bind with similar apparent affinities to the GntR/DNA complex (Table 2). Surprisingly, GntR and PtxS are paralogous which may have evolved from a common ancestor (Daddaoua et al., 2017).
The three-dimensional model of the GntR N-terminal domain was generated with the I-TASSER server (Cscore: 0.48) based on the transcriptional regulator from Bacillus subtilis (PDB: 1ZVV) which had 25% of identity with GntR (Schumacher et al., 2006). The analysis of the model suggested that the amino acids Val-27, Tyr-31, Ser-41, Ala-43, Leu-68, Ala-69 and Ala-71 were important for the recognition of its target DNA (Fig. 5).
Role of the GltR/GtrS (TCS) in the regulation of the carbohydrate catabolism
The TCS GtrS/GltR was also found to participate in the transcriptional regulation of glucose catabolism. Different research groups have provided initial information on the role GtrS and GltR, but they did not release that they do form a TCS. Whereas GltR was identified to be essential for efficient glucose transport (Sage et al., 1996), GtrS was found to be important for optimal host colonization and dissemination in a mouse infection model by modulating type III secretion in response to host cells (O'Callaghan et al., 2012). However, Daddaoua et al. (2014) demonstrated that the GltR and GtrS form indeed a TCS. GtrS is a transmembrane sensor kinase that contains a periplasmic ligand binding domain. Efficient GtrS autophosphorylation and transphosphorylation to the GltR response regulator have been observed. GtrS recognizes specifically 2-ketogluconate and 6-phosphogluconate (Table 1), causing a modulation of its autokinase activity, leading in turn to changes in GltR transphosphorylation activity (Fig. 4).
GltR interacts with different promoters regulating the expression of the oprB, glk, edd and gap-1 genes (Fig. 3). Most interestingly, GltR also binds to the P toxA promoter regulating toxA expression, underlining the interconnectivity of regulatory mechanisms for glucose metabolism and exotoxin A expression in P. aeruginosa (Daddaoua et al., 2014). GltR acts as a transcriptional repressor that is released from DNA upon phosphorylation and the consensus sequence for GltR was determined to be 5 0 -tgGTTTTTc-3 0 (Table 1 and 2; Daddaoua et al., 2014).
This review summarizes the current knowledge on the specific regulatory circuits that govern glucose metabolism in Pseudomonads. However, there is also evidence for global regulation and interconnection with other processes, although the precise mechanisms have not been elucidated yet. Available information suggests the ketogluconate branch plays a role in the control virulence; and An and Moe (2016) showed that Gcd levels varied significantly with the carbon source used by Pseudomonas, showing that expression in glucose was higher than in glycerol, LB and citrate, which was consistent with the requirements in glucose dehydrogenase activity. These authors also showed that gcd expression was downregulated by inorganic phosphate, a demonstration of interconnection among metabolism of different nutrients.
Concluding remarks
Electrophoresis mobility shift assay, footprinting, isothermal titration calorimetry, classical expression assays and computer modelling analysis allowed to gain insight into the molecular mechanisms that govern carbohydrate degradation pathways in Pseudomonas. These analyses have answered a number of questions that emerged from the early biochemical studies. This review has focussed in the latest advances in the regulatory mechanism rather than in the carbohydrate metabolism by itself. The complexity of the glucose degradation in Pseudomonas is given by the fine regulation of glucose fluxes among three different convergent pathways and how these pathways are coordinately expressed. Pseudomonas aeruginosa is among the most feared human pathogens. Importantly, the transcriptional regulation of glucose metabolism in P. aeruginosa is intimately linked to bacterial virulence. So far, five specific regulatory systems have been shown to modulate glucose metabolism and transport (HexR, PtxS, PtxR, GtrS/GltR and GntR), of which two, PtxS and GtrS/GltR, were found to regulate, directly or through PtxR, the expression of toxA, encoding the primary virulence factor; exotoxin A. There is thus a functional link between glucose assimilation and exotoxin A production in P. aeruginosa. The physiological relevance of this connection remains unclear, and it needs to be established whether a similar relationship is also found in other bacteria.
In general, the effector molecules of most signal transduction systems are unknown. However, the effectors for four of the five systems have been established. In all cases, these effectors were intermediates of the glucose metabolism, whereas glucose itself is not recognized by any of the sensor proteins. 2KG and 6-phosphogluconate play central roles as they are recognized by two different sensor proteins. In the case of 6-phosphogluconate, this role may be related to the fact that all three glucose metabolism pathways converge in this metabolite. The importance of 2KG as an effector molecule may suggest that the corresponding metabolic route is of particular relevance. The structural basis for their recognition is different as, for example, 2KG is recognized by a periplasmic binding protein type of sensor domain (Pfam00532) at PtxS, whereas the GtrS sensor domain remains unannotated.
Another interesting feature is the cellular compartment at which the signals are of sensed. Whereas 2KG and 6-phosphogluconate are sensed in the cytosol by PtxS and GntR, respectively, both ligands are sensed by GtrS which contains a periplasmic sensor domain. TCS represents a genetic and metabolic burden as compared to OCS, but its capacity to sense ligands in the extracytosolic space is considered as a major advantage over OCSs. In the present case, a system based on the dual sensing of the same effector in different cellular compartments has evolved, which may suggest that sampling information on intermediate concentrations in both cell compartments is advantageous.
Taken together, the molecular mechanism of transcriptional regulation of carbohydrate metabolism in P. aeruginosa and P. putida can be considered a model system to understand complex regulatory processes in bacteria. | 5,383 | 2018-04-02T00:00:00.000 | [
"Biology"
] |
Li/graphene oxide primary battery system and mechanism
A novel type of Li/graphene oxide (Li/GO) battery based on a spontaneous redox reaction between Li metal and GO cathode is introduced as an alternative viable primary battery system. Here, we present an efficient synthesis of GO by the modified Hummers method and focus on a comprehensive study of the reduction mechanism. The Li/GO battery was thoroughly analyzed by various physical and electrochemical methods. GO rich in oxygen‐bearing functional groups on graphene layers provided lithium storage sites and delivered a high discharge capacity of around 720 mAh/g at 12 mA/g. Products formed on the surface during reduction were analyzed, and a mechanism was proposed. The results uncovered the reasons underlying the improved electrochemical properties and the contribution of the irreversible capacity of reduced GO in graphene‐based composite electrode materials for metal‐ion batteries. The Li/GO concept is expected to shed light on the design of similar M/GO batteries based on other active metal anodes (e.g., M = Na, Mg, Al, Zn).
| INTRODUCTION
Primary batteries are widely known in electronic devices with low power requirements, as they are easy to construct with simple components. 1 In general, ideal primary batteries must possess high specific energy and power density, good shelf-life, resistance to interference, and low cost. 1 The quality and price of primary batteries depend on constituent materials: cathode, anode and electrolyte solution. Therefore, there is a need for new alternative materials with low cost and good performance to produce primary batteries in bulk quantities. Carbon-based graphene materials received high scientific interest in recent years, especially in energy and environmental applications due to their remarkable physicochemical properties. [2][3][4][5][6] These include a high specific surface area (theoretically 2630 m 2 /g for single-layer graphene), 2,6,7 extraordinary electronic properties and electron transport capabilities, [8][9][10] high mechanical strength, 11 and excellent thermal and electrical conductivity. 12,13 According to the ISO/TS80004-13 standard of the International Organization for Standardization, graphene-based materials are monolayers of carbon atoms, consisting of one-atom-thick planar sheets comprising sp 2 -bonded carbon structures. Double-layer, three-layer, and multilayer graphene are materials consisting of two, three, and three to ten layers of carbon atoms, respectively.
Graphene oxide (GO) is a graphene derivative with rich oxygen functional groups (hydroxyl, epoxy, and carboxyl groups) attached to the basal planes and the sheet edges of graphene layers. 14,15 These functional groups are attached to help in dispersion owing to high colloidal stability in the water while imparting a unique set of mechanical, colloidal, and/or optical properties. 14,15 GO can be produced in desirable quantities and at a low cost. Based on its electrochemical reduction of GO, several primary and secondary battery systems were developed, for example, Li, Mg, Zn, Fe, and Cu, upon insertion of metal into GO at ambient conditions. [16][17][18][19][20] Graphene-based materials were used in batteries as electrode materials [21][22][23][24][25][26][27] and composite electrode materials, 28,29 current collectors and their protective layer, 30,31 surface coatings, 32,33 and conductive additives. 34 In most studies, the use of graphene-based materials demonstrates a significant improvement in the electrochemical performance of electrode materials in terms of increased discharge capacity and stabilization during cycling. [21][22][23][24][25][26][27][28][29][30][31][32][33][34] There are very few reports on GO alone as a cathode material for Li batteries, where the oxygen functional groups provide lithium storage sites. The exact mechanism of lithium storage in GO is unclear. Mechanistic studies revealed that the driving force for reduction comes from releasing electrons from the metal to the GO sheets while the metal is oxidized. [16][17][18][19]21,25,[35][36][37] Concerning the mechanism of GO electrochemical reduction, there is a debate. Most researchers believe that the C-O double bonds are the lithium storage sites, whereas some argue that the epoxy groups are the main sites. In any case, the oxygen content of GO is supposed to be the key factor affecting the electrochemical performance of GO as a cathode material. 25,[35][36][37] In the present work, nanostructured GO was used as one of the components in preparing cathode materials for primary Li batteries. This material based on graphene-like structure reveals high discharge-specific capacity due to the presence of various oxygen-containing functional groups capable of forming irreversible bonds with ions of the active material of the anode during the currentforming process (discharge). The products formed on the surface of GO during electrochemical reduction were analyzed, and a plausible mechanism is proposed herein. Besides the mechanistic studies described here, this paper opens the door for developing high specific energy, cost-effective, practical primary Li batteries.
| Synthesis methods
An aqueous dispersion of GO flakes (1 mg/ml) with lateral size of 0.1-4 μm and thickness of up to 1.5 nm was obtained by modified Hummer's method as shown in Figure 1. 38 In a typical synthesis, in the first stage (graphite intercalation), concentrated sulfuric acid (10 ml) was placed in a glass beaker equipped with a magnetic stirrer, followed by addition of ammonium persulfate (0.9 g) and phosphorus pentoxide (0.9 g). For complete dissolution, the resulting reaction mixture was heated to 80-85°C. To the above mixture, natural graphite powder (99.9% C, fraction 200-300 μm, 1 g) was added and stirred at a temperature of 80°C for 5 h, and then the mixture was cooled to room temperature naturally. Distilled water (250 ml) was slowly added, and the mixture was left for 7 h to allow the precipitate to settle down. The precipitate was washed several times until its pH was 7, filtered, and dried. In the second stage (oxidation of graphite), the powder obtained in the first stage (1 g) was transferred into a beaker containing concentrated sulfuric acid (40 ml), which was placed in a cooled ice bath. Potassium permanganate (5 g) was added slowly over about 2 h to the above mixture, and after 30 min, distilled water (300 ml) was slowly added while monitoring the temperature at 40°C. This stage was carried out with utmost care, over about 30 min, as temperatures above 55°C can lead to an explosion resulting from Mn 2 O 7 formation. Hydrogen peroxide (30%, 10 ml) was added dropwise; during the process, bubbles were observed, and the color of the suspension changed to yellow-brown. The resulting solid precipitate was washed with deionized water several times and filtered, and the resultant powder was dried. In the final stage (dispersion of graphite oxide), the powder obtained in the second stage was added to the distilled water and subjected to ultrasonic treatment (frequency of 20.4 kHz, specific power of 0.1-1 W/cm 3 ) for 15 min. The resulting dispersion was centrifuged for 10 min at 2000 rpm to remove large particles. Highly concentrated aqueous dispersions (hydrogels) of GO (2% mass) were obtained by centrifuging a dispersion of GO (1 mg/ml) with a Hermle ultracentrifuge at 14,000 rpm for 5 min.
| Characterization methods
The product was characterized and analyzed employing crystallographic, morphological, and spectroscopic studies. X-ray diffraction (XRD) spectra were measured using an Ultima IV X-ray diffractometer (Rigaku) with Cu Kα radiation (λ = 0.154 nm). The data were analyzed and processed using the Jade 9 software package. Scanning electron microscope (SEM) images were acquired on a SUPRA 40 scanning electron Carl Zeiss microscope operating at 1-10 kV working voltage. Energy dispersive Xray spectroscopy analysis was carried out on a JEOL JSM-7600F scanning microscope with an accelerating voltage of 15 kV and a resolution of up to 1 nm. IR spectra were measured with a VERTEX 70 v BRUKER FT-IR spectrometer by the attenuated total reflectance method using a PIKE GladyATR attachment with a spectral resolution of 4 cm. Raman spectra were acquired using a Renishaw In Via spectrometer (Great Britain) using a 514 nm laser for excitation. The spectrometer was calibrated on a standard monocrystalline silicon sample with a fundamental vibrational mode at 520.5 cm -1 . X-ray photoelectron spectroscopy (XPS) spectra were obtained on an Axis Ultra DLD spectrometer (Kratos) using monochromatic Al Kα radiation at an X-ray gun power of 150 W. Survey and high-resolution spectra were recorded at transmission energy of 160 and 40 eV and step size of 1 and 0.1 eV, respectively. To eliminate the effect of sample charging, spectra were recorded using a neutralizer. Elemental analysis (C, H, N, S(O)) was performed on an automatic analyzer Vario Micro cube. The combustion temperature of the sample was 950°C. C, H, N, S(O) content was calculated automatically by the instrument software. Thermogravimetric analysis (TGA) was carried out on a TGA/ DSC thermal analyzer SDT Q600.
All electrochemical studies of primary Li/GO cells were performed using pouch cells under similar experimental conditions. GO foams were glued to aluminum (Al) foils current collectors using a primer (CB/CMC = 5/95 by weight) as shown in Figure 2. Al foils (40-60 mm) were used as substrate (current collectors), and the loading level of the GO electrode was 2.7-3.0 mg/cm 2 . The electrodes were dried at 70°C under ambient conditions and transferred into a high-purity argon atmosphere using a Pure-Lab HE glovebox for the fabrication of Li/GO pouch cells. The SelectiLyte LP40 electrolyte solutions comprising ethylene carbonate, diethyl carbonate (EC/DMC 1:1 v/v) and 1 M LiPF 6, and a Dreamweaver Silver ARTM40 separator were used for the fabrication of electrochemical cells and battery prototypes. To examine the changes in GO structure during the electrochemical reduction in the discharge process, postmortem analysis was carried out by removing the electrodes from the discharged cell for further analysis. The discharge of cells was carried out to voltages of 2.5, 2.0, 1.5, and 1.0 V versus Li/Li + by electrochemical battery tester "Neware." Upon completion and during of the discharge process, a Biologic potentiostat/ galvanostat (model VMP3) was used for electrochemical impedance spectroscopy (EIS) analysis with an amplitude of 5 mV around equilibrium in the frequency range of 0.01 Hz to 100 kHz.
| RESULTS AND DISCUSSION
GO is a chemically modified graphene, which may be obtained by oxidation and exfoliation of graphite by the modified Hummer's method. 38 As shown in Figure S1, it is a monolayer of carbon atoms having both (largely) sp 2hybridized carbon atoms and (partially) sp 3 -hybridized carbon atoms containing oxygen functional groups located both on the basal (hydroxyl and epoxy) and on the edge plane (carboxyl, carbonyl, phenolic, lactone, and quinone). GO is a dielectric, but it has hydrophilicity, proton conductivity, high reactivity, and the ability to change the stoichiometric composition. The formation of GO was examined by various spectroscopic and microscopic techniques such as powder XRD, SEM, XPS, and atomic force microscopy (AFM) studies, as discussed below.
The XRD pattern of the synthesized GO is shown in Figure 3A, where a broad peak of (002) reflection at 2θ = 10.3 o indicates the expansion of the interlayer distance. The calculated interlayer separation is 9 Å, in good agreement with literature prepared by the modified Hummers method. 4,15,22,24,26 The SEM of pristine GO and GO electrode samples ( Figure 3B,C, respectively) display a flaky morphology. The XPS C1s deconvoluted spectra of the GO sample ( Figure 3D) shows four characteristic peaks at 284.5, 285.3, 286.6, and 288.8 eV for sp 2 hybridized C, sp 3 hybridized C, epoxy, carboxyl, and carbonyl group moieties, respectively. 16,19,[24][25][26][27]36,37 They were fitted to Gaussian-Lorentzian peak shape after performing a Shirley background correction. AFM measurements were carried out to investigate the surface morphology. For AFM imaging, GO was dispersed in ethanol and the film was coated on the microscope glass. The thickness measured from the AFM image~1.5 nm ( Figure 3E), corresponds to a single layer of GO. TGA of GO is shown in Figure 3F, where the GO sample was heated in air at a rate of 10°C/min up to 800°C. The reduced GO was initially obtained during the heating process, which is oxidized to CO 2 upon further heating. The GO sample showed ∼10% weight loss at up to 100°C due to the removal of water molecules trapped between the GO sheets, followed by ∼25% weight loss at 200°C, possibly due to pyrolysis of the functional groups, released as CO, CO 2 , and steam. Finally, there is a ∼30% weight loss at 600°C due to the combustion of the carbon skeleton of GO ( Figure 3E). 27 The elemental analysis shows С (58.0 ± 1.0%), H (1.5 ± 0.5%), O (39.0 ± 1.0%), and trace amounts of sulfur (less than 0.05%), which probably come from the preparation procedure. These results are in agreement with reported values. Electrical conductivity is one of the key requirements for electrode materials. To achieve the desired values, different conductive additives (carbon black, graphite) are integrated into the electrode active material. However, GO is known as a dielectric material, and in this study, it was used without any conductive additives. As described previously, 39-41 depending on the humidity or type of solvent, GO can show features of ionic conductivity. Thus, when immersed in an electrolyte solution, GO can transfer charge, which is a sufficient condition to start the process of electrochemical reduction, during where the material acquires an electronic type of conductivity. 19,20 To better understand the electrochemical reduction process, GO samples were subjected as cathode materials for Li primary batteries. Figure 4A displays the voltammogram of GO at scan rates of 0.5 and 1 mV/s at the potential window between open-circuit voltage (OCV~3.5 V) and 1.0 V versus Li/ Li + . A characteristic reduction peak appeared at 1.7 V versus Li/Li + during discharge regardless of scan rate. The GO electrodes were subjected to discharge at different C rates, namely C/50, C/25, and C/10 under the same conditions. Voltage profiles are shown in Figure 4B. The OCV of the cell before discharge was around 3.4 V F I G U R E 3 (A) X-ray diffraction patterns of graphene oxide (GO), high magnification scanning electron microscope (SEM) images of (B) pristine GO powder and (C) GO electrode foam, (D) X-ray photoelectron spectroscopy C1s deconvoluted spectra, (E) atomic force microscopy image with height profile of a GO single sheet and (F) thermogravimetric analysis of GO flakes versus Li/Li + . Upon initiation of galvanostatic discharge, there was a sudden decrease in the cell voltage followed by a plateau around 2.5-2.3 V, and a gradual decrease to the cut-off voltage of 1.0 V for C/50 to C/10 rates. The Li/ GO cells delivered a specific capacity of 721, 687, and 626 mAh/g for C/50, C/25, and C/10, respectively, quite close to the theoretical value (910 mAh/g), which was calculated on the basis of GO. In the current density calculations, the "C" rate was set as 600 mAh/g. As shown in Figure 4C the Li/GO cells can be subjected to high specific current densities from 10 to 200 mA/g, during the discharge process, the results concluded that Li/GO cells withstand high specific currents and are expected to be useful for a range of power densities. Furthermore, Li/GO cells were evaluated for pulse power studies as shown in Figure 4D, where the cells were discharged at 20 mA/g with pulse-specific current of 120 mA/g for 1 min at regular intervals of 0.5 h of discharge. The voltage decreases at pulse current of 120 mA/g, but increases back to the normal discharge plateau once the current is decreased to 20 mA/g. Furthermore, the discharge capacity of GO depends strongly on the content of oxygen functional groups and on the surface area (Tables 1 and 2, respectively). The surface area was measured by N 2 adsorption isotherms as shown in Table 1. The presence of functional groups and the content of oxygen can be controlled and thereby, the properties of the GO materials can be tuned as reflected in the data presented by Table 2.
GO foam shows more intensive electrochemical reduction than GO film or graphite oxide electrodes. As electrochemical reduction is a surface chemical reaction, the extent of reduction depends mainly on the interlayer distances. In the case of GO film or graphite oxide electrodes, the interlayer distance is 9 Å, and the size of the lithium ions in the solvation shell is around 10 Å ( Figure 5A), which cannot reduce the functional groups in between the graphene layers. In GO foam, on the other hand, the layers are randomly aligned ( Figure 5B). Therefore more functional groups have access to reduction with the insertion of Li ions which compensate the negative charge that the electrode receives upon reduction.
Li/GO nonaqueous primary cells were assembled, subjected to discharge at 50 mA/g current density, and F I G U R E 4 (A) Voltammograms (arrow indicates the direction of the reduction process), (B) discharge curves (voltage profiles in constant current discharge processes), (C) various voltage profiles of a typical Li-GO cell, measured during discharge processes at different specific currents from 10 to 200 mA/g, and (D) discharge curves at a specific current of 20 mA/g with pulse-specific currents of 120 mA/g for 1 min at regular intervals (0.5 h) for Li/GO cells. GO, graphene oxide studied by EIS at initial and several discharged potentials as shown in Figure 6. The depth-of-discharge (DOD) is defined as the ratio between the charge removed from the cell by discharge and the total capacity of the cell. The variation of impedance parameters with DOD was studied to identify the cell's state of charge. Nyquist plots of fresh and fully discharged Li/GO cells shown in Figure 6 consist of a high frequency intercept on the real axis, a semicircular region between 100 kHz to 0.01 Hz, and a linear region is parallel to the real axis. Two overlapping semicircles are observed at the initial stages, while well-separated semicircles are seen at higher DOD (discharged below 2.3 V, down to 1.0 V). From the Nyquist plots, the cell resistance decreases from 1000 to 239 Ω during the initial discharge to 2.3 V, and further decreases to 217 Ω at ensuing lower discharge potentials. The substantial decrease in R ct indicates that the progressing lithiation of these electrodes facilitates better accessibility of the nonaqueous electrolyte solution to active surface sites (oxygen-based functional groups). The better separation is more facile, as expected after dielectric rupture of the surface films, which frees a large fraction of the Li surface from the film, decreasing its resistance. The increase in the resolution of the spectra at higher DOD (≤2.3 V versus Li/Li + ), namely, the appearance of two semicircles instead of one, reflects also the change in accessibility of reduction sites, the formation of surface films comprising LiF and AlF 3 on the GO surface, confirmed analytically and discussed in the following sections.
| MECHANISTIC STUDIES
Several characterizations are compared before and after the electrochemical process to understand the mechanism of electrochemical reduction. Figure 7 shows the surface structure before and after discharge. As evident from Figure 7B, after discharging to 2.5 V, individual nanoparticles with an average size of 20 nm are formed on the surface of GO as shown in Figure 7A. The number of observed nanoparticles increases when the discharge process proceeds up to 2.0 V ( Figure 7C). A significantly dense layer of particles is observed when a voltage of 1.5 or 1 V is reached ( Figure 7D,E). In the process of electrochemical reduction, the reaction products are formed Figure 7B-E). To study the composition and structure of the products, an analysis of GO samples was carried out before and after electrochemical reduction (discharge was performed to a voltage of 1 V) by SEM with an energy-dispersive X-ray spectroscopy (EDX) extension ( Figure 7F1-3,G1-3). According to the EDX results, the initial samples of GO contain carbon, oxygen, and trace amounts of manganese and sulfur originating from potassium permanganate and sulfuric acid used in the synthesis of GO ( Figure 7F1-3). After electrochemical reduction, a significant decrease in the carbon content was measured due to the formation of a continuous coating of reaction products on the GO surface ( Figure 7G1-G3), such as LiF, AlF 3 , and so forth. In addition, significant amounts of oxygen were detected, as well as phosphorus and fluorine, from the LiPF 6 salt, which was used as an electrolyte. Please note that EDX analysis does not allow the detection of elements with low atomic weight such as Li and hydrogen.
In addition to energy-dispersive analysis, samples were also studied by Fourier-transform IR spectroscopy (FTIR) (Figure 8). Spectra obtained before and after electrochemical reduction substantially differ from one other. There are no peaks characteristic of adsorbed water, carboxyl, and hydroxyl groups at 3000-3800 cm −1 . After reduction, carbonyl (1726 cm −1 ) and hydroxyl (1409 cm −1 ) groups were obtained. Low-intensity peaks for C═C (1652 cm −1 ) and epoxy groups (1085, 1160 cm −1 ) were preserved, but distinct high-intensity peaks at 467 and 500 cm −1 characteristics of LiF were also obtained at 710 cm −1 for LiPF 6 and 978 cm −1 for POF 3 . 42-44 Figure 9A,B presents XRD data of samples before and after electrochemical reduction, respectively. The broadened reflection at the 11-13°2θ range, corresponding to GO, can be identified for the initial sample. After electrochemical reduction, a peak in the 11-13°2θ range is not observed in the samples. However, a distinct peak at 23°is formed, which is characteristic of reduced GO. Two weak peaks at 55°and 68°appear, corresponding to both Li 2 O and LiOH. 45,46 As cathodes based on GO were fabricated on the surface of aluminum foil, the XRD spectra also contain reflections of Al phases, which correspond to 38°, 42°, 44°, 65°, 78°, 81°, 99°, 112°, and 116°. An increase of intensity at 42°and reduction in intensity with F I G U R E 8 IR spectra of graphene oxide samples before (A) and after (B) electrochemical reduction KORNILOV ET AL. | 9 of 15 broadening of the peak at 38°after GO recovery can probably be explained by AlF 3 and LiF formation on the Al current collector due to the reduction of electrolyte solution species as part of the overall discharge processes. 47,48 XPS analysis of the samples ( Figure 9C,D) revealed the presence of F1s, C1s, O1s, P2p, and Li1s peaks in the photoelectron spectra of reduced cathodes. According to published data, 49 the deconvolution of the F1s spectrum ( Figure 9E) demonstrates the presence of LiF and Li x PF y . The deconvoluted spectra of C1s ( Figure 9F) clearly show GO reduction when compared to those of C1s before electrochemical reduction ( Figure 3D). F I G U R E 9 X-ray diffraction spectra of graphene oxide (A) before and (B) after electrochemical reduction. X-ray photoelectron spectroscopy survey spectra of graphene oxide samples (C) before and (D) after electrochemical reduction and deconvoluted spectra of (E) F1s (F) C1s after reduction Thus, according to the results of in-situ structural studies and based on published data, it is clear, that the process of electrochemical reduction of GO consists of the interaction of oxygen-containing functional groups with Li + ions. Therefore, it is first necessary to analyze the bonding energy of oxygen-containing functional groups to understand the sequence of their recovery (Table 3). Kim and coworkers calculated 50 the binding energies of epoxy and hydroxyl groups, indicating that the epoxy is much more stable than the hydroxyl group (62 vs. 15.4 kcal/mol).
Elemental analyses, IR, XRD, and XPS confirm the presence of lithium fluoride. It is well known that EC, EMC, and PF 6 − anions are reduced irreversibly at potentials below 1.6 V versus Li in the presence of Li ions. ROCO 2 Li, ROLi species, Li 2 CO 3 , LiF, Li x PF y , and Li x POF y moieties are relevant reduction products. 51 The irreversible reduction process of the solution in our cells is not massive because the cathodic polarization of the GO cathodes is limited to 1 V versus Li to fit the potential window of practical Li/GO batteries. However, part of the capacity of the cathodes should be assigned to these side reactions, which in the present case are not problematic, since anyway, we deal with primary Li batteries. The spectroscopic data reflect the above side reduction of the electrolyte solutions, showing the presence of LiF and phosphorous containing species (Figure 9). It should be noted that the presence of a functional group on the GO surface facilitates the formation of hydrogen bonds with intercalated water molecules ( Figure S2). [52][53][54][55] The water content can reach 8-12 wt%, as evidenced by the TGA results ( Figure 3F). Hence, their presence enables further interactions with the LiPF 6 electrolyte as suggested in the scheme of Figure S3. Hereby, the immersion of GO in an electrolyte solution leads to the formation of HF. Furthermore, when the electrochemical potential is shifted to the negative range ( Figure 4A), lithium ions and hydroxyl functional groups interact with the formation of LiOH, which in turn reacts with HF to form LiF and H 2 O ( Figure 10A). With a further shift of the electrochemical potential to the negative range, epoxy groups react with lithium ions to form Li 2 O, which, in turn, further reacts with HF to form LiF and H 2 O ( Figure 10B). Therefore, the process of electrochemical reduction of GO is accompanied by an increase of the water content in the electrolyte solutions. To verify this mechanism, an analysis of the electrolyte humidity in the electrochemical cells before and after discharge (electrochemical reduction) was performed (according to the Fisher's method). The results (Table 4) confirm the alleged judgment.
Accordingly, the current-forming process during the discharge of the primary chemical element system Li| LiPF 6 , EC:DMC|GO can be described by the reactions shown in Figure 10. According to the proposed mechanism, reducing the hydroxyl group used one electron, while reducing the epoxy group requires two electrons. The theoretical capacity of GO can be calculated. The total hydrogen content for the GO sample (Table 5) is 2.40 wt%, while the sample contains 11.41 wt% of water ( Figure 4F). According to the calculations, the content of hydrogen in GO that comes from the water is 1.25%. Therefore, the hydrogen content, related to functional groups, will be 1.15%. For easy calculation, it can be assumed that 1.15% of the hydrogen coming from hydroxyl groups. According to the proposed mechanism ( Figure 10A), one electron is responsible for the reduction of -OH groups, so the amount of electric charge can be calculated from the amount of substance (hydrogen). Thus, restoring the -OH groups in 1 g of GO corresponds to around 300 mAh/g GO. Let us suppose that the total oxygen content according to the CHNS(O) analysis is 46.65% ( Table 5). The content of oxygen corresponding to the epoxy group can be calculated as total oxygen content (46.65%) minus the oxygen coming from water (10.15%) and the hydroxyl groups (18.2%). According to the proposed mechanism ( Figure 10B), two electrons participate in the reduction of epoxy groups. Respectively, according to the amount of substance (oxygen), the amount of electric charge can be calculated, corresponding to around 610 mAh/g GO. Thus, according to the proposed mechanism, a complete electrochemical reduction of 1 g of GO will require around 910 mAh. As a final discussion, it is interesting to compare the Li/ GO battery system described herein to other commercial primary Li battery systems to evaluate its practical importance. Table 6 provides such a comparison. There are several different types of primary Li battery technologies that are available, commonly used today for a variety of applications. The comparison reflected by Table 6 seems to show that the Li/GO battery system described herein has a chance to compete well and outperform with already known and used commercial primary Li battery systems. We can conclude that further optimization work that will include cost effects can lead to the successful development of new effective high energy density and revolutionize the use of greener primary Li/GO battery systems.
| CONCLUSIONS
Based on the spontaneous redox reactions of a novel type of Li/graphene oxide (Li/GO) system, a primary battery was demonstrated along with a comprehensive study of the electrochemical reduction of the GO cathodes. The GO cathode material was prepared by the modified Hummers method and thoroughly characterized by various methods. The oxygen-containing functional groups on the graphene layers serve as lithium storage sites and deliver a high discharge capacity of about 720 mAh/g at a current density of 12 mA/g. The mechanism of the electrochemical reduction was proposed by in situ analysis where two electrons participate in the reduction of the epoxy groups, which are the main lithium storage sites. The results disseminated the reasons for the electrochemical improvement and mechanism of redox reactions in GO electrode materials of Li/GO batteries and are expected to provide a strong starting point for the development of new practical high energy density primary Li/GO batteries.
ACKNOWLEDGMENTS
A partial support for this study was obtained from the Israeli Prime Minister Office (alternative fuels for transportation program) and the Israeli High Education Committee in the framework of the INREP consortium and project. | 6,756.4 | 2022-01-27T00:00:00.000 | [
"Engineering"
] |
A Systematic Literature Review of Quantum Computing for Routing Problems
Quantum Computing is drawing a significant attention from the current scientific community. The potential advantages offered by this revolutionary paradigm has led to an upsurge of scientific production in different fields such as economics, industry, or logistics. The main purpose of this paper is to collect, organize and systematically examine the literature published so far on the application of Quantum Computing to routing problems. To do this, we embrace the well-established procedure named as Systematic Literature Review. Specifically, we provide a unified, self-contained, and end-to-end review of 18 years of research (from 2004 to 2021) in the intersection of Quantum Computing and routing problems through the analysis of 53 different papers. Several interesting conclusions have been drawn from this analysis, which has been formulated to give a comprehensive summary of the current state of the art by providing answers related to the most recurrent type of study (practical or theoretical), preferred solving approaches (dedicated or hybrid), detected open challenges or most used Quantum Computing device, among others.
I. INTRODUCTION
In the optimization and transportation communities, routing problems are intensively studied [1]- [3]. Two are the main reasons behind the interest in this specific topic: i) their high computational complexity, making them difficult to tackle even in medium-sized instances [4]; and ii) their proven applicability on business and logistics real-world scenarios, as well as on leisure and tourism-based problems [5]. In other words, advances in the design of efficient routing problem algorithms derive into business and social benefits. This situation is the reason for the upsurge of intelligent methods designed for dealing with routing problems.
Regarding the complexity of routing problems, it is important to highlight that most solving algorithms demand notable computational resources. In fact, some modern computers are completely unable to embrace brute force techniques for even relatively small instances. In consequence, a plethora of time-efficient solving schemes have been developed along the The associate editor coordinating the review of this manuscript and approving it for publication was Derek Abbott . last decades, being heuristic and metaheuristic techniques the most resorted approaches.
Up to now, the vast majority of intelligent solvers have been conceived for their development and execution in classical computation resources. However, as an alternative to these classical devices, Quantum Computing (QC, [6]) has emerged as a promising paradigm for tackling optimization and routing problems. Deemed as the next frontier in computation, QC is a hot topic in the current scientific community. This is so because of its potential power in terms of performance, bringing a remarkable advantage in complex optimization problems [7].
In a nutshell, a quantum computer consists of a device which performs computation by leveraging quantum mechanical phenomena. Unlike classical computers, quantum devices work with an information unit coined as qubit [8], which can hold much more information than a classical bit. In particular, a qubit can be both 1 and 0 at the same time, overcoming limitations imposed by the classical binary representation.
Today, two QC paradigms coexist: the annealing-based and the gate-model quantum computers [9], [10]. The former ones are characterized by performing a process known as quantum annealing, which returns the state of minimum energy of a given quantum Hamiltonian (i.e., energy function). In optimization problems, the quantum Hamiltonian is defined according to the fitness function of the problem, so that the lower energy state represents the solution to the problem at hand [11]. Quantum-gate devices in contrast are characterized by employing quantum circuits (i.e. operations) named as quantum gates, which can be compared to the classical logic gates in traditional circuits. Thus, quantumgates are sequentially applied to qubit states manipulating them up to reaching the final solution of the problem [12].
Having said this, a closer examination in the most renowned databases unveils that scientific studies revolving around QC applied to routing problems are continuously growing. The activity is significant and steady, hence perfectly illustrating the current condition of the community. This accumulation of material establishes an ideal breeding ground for the conduction of a reference manuscript that systematically summarizes all the works already done. This situation has fueled the performance of this paper, motivating its main goal: to provide a unified, self-contained and endto-end review of the research done in QC focused on routing problems.
An efficient way to properly synthesize the research that has recently been done is through a Systematic Literature Review (SLR, [13], [14]). Embracing the definition provided in [13], a SLR is a procedure for evaluating and interpreting relevant works related to a specific topic, phenomenon, or research question. For this reason, a SLR can be deemed as a mean for aggregating the experience obtained from a significant collection of studies [15]. Thus, a SLR constitutes a rigorous and fair procedure for selecting and analyzing all the scientific material that answers a group of well-defined and specific research questions. In this kind of studies, an special analysis is carried out on the meta-data of each selected paper, in order to obtain findings such as the most cited works, the geographical situation of the research community, or the type of papers preferred by practitioners. As can be read in works such as [16], one of the advantages of SLRs is its replicability, as well as its adequacy for depicting the past and current status of a certain research field. Another advantage of SLRs is their inherent facility for being extended with additional studies.
With all this, we adopt the SLR mechanism for analyzing the work already done in the confluence of QC and routing problems fields, leading to a comprehensive survey carried out through a meta-analytic approach. Thus, we systematically organize all the studies conducted up to date on this specific topic, with the main intention of i) summarizing their features, ii) understanding their contributions and iii) highlighting principal authors and relevant institutions. This kind of paper is also valuable for identifying diverse domains that are likely to receive further attention in the near-term future.
In general, the present review covers a lapse of 18 years. To perform this work, we have considered four of the most reputed scientific databases (Google Scholar, Scopus, IEEE and Web Of Science), resulting in a collection of 53 documents published in form of journal or conference papers, book chapters, PhD thesis, Master thesis and reports.
The remainder of this manuscript is structured as follows: in Section II we deeply describe the research protocol followed for the conduction of this SLR, contributing in this way to its replicability and extension. Furthermore, we highlight the main research question of our SLR, as well as the secondary questions that will be answered along the whole paper. Section III is the central part of this work, outlining the published works in this research area under six different criteria. After this section, a critical analysis of published milestones and influential papers is provided in Section IV. Finally, Section V concludes our survey with a summary of the main conclusions with views to the future of this exciting field.
II. RESEARCH PROTOCOL
In order to build a valuable SLR, some formal steps must be necessarily followed, conceiving what is known as research protocol. Specifically, this procedure is composed of the following phases: i) establish the main and secondary research questions, ii) formulate the search key, iii) select the most appropriate scientific databases, iv) set the inclusion/ exclusion rules, and v) report the phases of the survey.
Thus, sticking to the aforementioned protocol, the first step is to clearly define the scope of this review: to thoroughly examine the work conducted on QC applied to routing problems in the last two decades. Additionally, we are interested on characterizing the types of problems that are most frequently addressed, as well as the algorithms and quantum processors employed in its resolution. Furthermore, we also intend to discern those most recurring types of study in the literature (in both theoretical or experimental approaches), and which are the institutions and countries paying more attention to this novel paradigm. Therefore, our main research question (MQ) is formulated as follows: MQ: ''In the whole twenty-first century, what are the principal research works conducted around quantum computing applied to routing problems?'' Starting from MQ, several secondary questions arise (SQ). These SQ are essential to nourish the review with valuable information. Furthermore, SQs are also necessary to guide the whole research procedure, offering the guiding light for finding and selecting relevant documents. In the SLR presented in this manuscript, we have defined the following 8 secondary questions: For building a rigorous and comprehensive SLR, appropriate search keys (SK) must be constructed for being used in the engines of the digital scientific databases.
Inclusive SKs are required to write a comprehensive SLRs. For this purpose, an empirical experimentation has been implemented by testing different configurations, far beyond the simple combination of the logical operators along with the topic keywords. The resulting SK is the following SK: ("Quantum Computation" OR "Quantum Computing" OR "Quantum Annealing" OR "DWAVE" OR "D-WAVE" OR "Quantum Gate" OR "Qiskit") AND ("Routing Problem" OR "Traveling Salesman Problem" OR "Vehicle Routing Problem" OR "TSP" OR "VRP").
Thanks to this SK, we can reach studies exploring both quantum paradigms (annealing-based and gate-model quantum computers), and also a wide variety of routing problems, with the Traveling Salesman Problem (TSP, [17], [18]) and Vehicle Routing Problem (VRP, [19], [20]) as the most representative ones.
The next step of the SLR after the definition of the MQ, SQs and the SK is the selection of the scientific databases. These databases must meet two crucial requisites: i) they need to be exhaustive enough to incorporate studies coming from different sources, such as conferences, books, journals or reports; and ii) they must provide open and configurable search engines that admit the input of the established SK. We summarize the four databases considered in this SLR in Table 1. Furthermore, it is important to highlight here that ACM Digital Library 1 has not been deemed because of the same reasons depicted in previous studies such as [16]. In a few words, this search engine does not efficiently deal with the principal components of the search key, providing less satisfactory results in comparison to the other four scientific databases.
In any case, despite the efforts made to define an accurate and restrictive SK, the amount of works returned are usually immense. To fix possible inaccuracies, once the search has been performed over the scientific databases, additional exclusion/inclusion rules are applied in order to clearly define which works fall under the specific umbrella of the SLR. In this review, we have used the following criteria: 1) Papers must be clearly fall within the field of quantum computing, excluding works framed into the knowledge branch known as quantum-inspired evolutionary algorithms [21]. 2) Selected works must be completely focused on routing problems, either from a theoretical or an applied perspective. Articles that deal with these types of problems in a tangential way are discarded. 3) Chosen manuscripts must be written in English (at least, the title and abstract). 4) All the papers must be available for its consultation on any of the selected databases. Furthermore, at least, the title, authors information and abstract must be openly accessible.
It is interesting to pause briefly on the first adopted criteria. In the last two decades, a research stream known as quantum-inspired evolutionary algorithms has been made its way into the optimization community. In a nutshell, quantum-inspired techniques are a specific class of evolutionary algorithms which base their operation on concepts and principles of quantum computing such as interference, coherence, and the qubit unit of information. Anyway, these methods are conceived ''for a classical computer rather than for quantum mechanical hardware'', as explained in [22]. That is, despite quantum-inspired evolutionary algorithms are based on quantum computation, they cannot be run on any quantum machine. In fact, a recent paper published by Gammanpila and Fernando [23] demonstrates that these algorithms adopt certain assumptions that make impossible to being directly handled by any quantum computer. Consequently, papers modelling quantum-inspired evolutionary algorithms are excluded from this SLR.
Finally, and based on the protocol described in this section, we have performed the following steps for fairly conduct our SLR: i) search in selected scientific databases, in which we have applied our defined SK and gathered all the resulting papers for further analysis; ii) filtering process, which objective is to discard all papers that are not compliant with our inclusion/exclusion criteria; and iii) manual search, in which new works coming from the bibliographic references of the already selected papers are either included or discarded.
III. ANALYSIS OF THE CONTRIBUTIONS GATHERED
This section is devoted to the presentation and analysis of the collection of papers contributing to this SLR. This section is, in turn, divided into different subsections and sorted to answer independently the SQs specified above. Thus, Section III-A is associated with SQ1. Additionally, Section III-B is related to SQ2, while SQ3, and SQ4, and SQ5 are tackled in Section III-C. Furthermore, Section III-D addresses SQ6, and VOLUME 10, 2022 the SQ7 is answered in III-E. The last secondary question SQ8 is tackled in III-F. Finally, we conclude this section focusing our attention on our main research question MQ in Section III-G.
A. COLLECTION AND PUBLICATION TIMELINE
Adopting the research protocol above described, first step was the collection of papers. Here are the results retrieved in November 2021, which total these quantities of papers: • Google Scholar: 975 papers. • Scopus: 764 papers. • IEEE Xplore Digital Library: 20 papers. • Clarivate Analytics -Web of Science: 50 papers. From this initial collection, we proceeded to eliminate duplicates and works not meeting the 3rd and 4th exclusion/ inclusion clauses. After that, we discarded works unrelated to routing problems and quantum computing (2nd exclusion/inclusion clause). From this process, a collection of 95 papers was selected. From this filtering process, we strictly applied our defined 1st inclusion/exclusion criterion. The application of this step led us to the elimination of 42 papers, yet working on routing problems, implement quantum-inspired evolutionary algorithms. For this reason, the total amount of works contributing to this SLR was reduced to 53. Table 2 shows the whole collection of works remaining after the complete procedure. These works are listed chronologically, using the publishing date (in case of journal papers, pre-prints, PhD Thesis and Master Thesis) and event celebration date (in case of conferences) as reference. For each work, identification, reference, year and type of paper are added. In this last category, we consider journals (Jour), conferences (Conf), PhD Thesis (PhD), Master Thesis (MsT) and pre-prints (Prepr) published in reputed repositories, such as arXiv. 2 Additional aspects of these papers will be further analyzed in the following sections.
By analyzing Table 2 one can detect the date of the very first published study (in this case, a journal paper from 2003) which originated the subsequent flow of studies. Nevertheless, the impact was low during the next years, supported by the absence of works published in 2010, 2014, 2015 and 2016. After the three-year hiatus among 2014 and 2016 (both included), the field experienced a remarkable upsurge with a growth maintained up to the present days. The main reasons of this resurgence are the substantial technological advances implemented on hardware by companies such as IBM or DWAVE in the building of quantum computers, and the democratization of these resources to any researcher and developer (under certain conditions). In fact, the most used developing platforms, Qiskit of IBM and Leap from DWAVE, were launched in March 2017 and October 2018, respectively, coinciding with the increase of works focused on QC and routing problems (see Fig. 1). Additionally, note that 25 papers (47,16%) have been published in scientific 2 https://arxiv.org/ journals, 17 (32,07%) in international conferences, and 9 (16,98%) in reputed scientific repositories, something understandable considering the immaturity of the field of study. Finally, and aiming at answering the first of the secondary research questions (SQ1), we depict in Fig. 1 the timeline of the works under study. This figure shows that the most prominent interval is that between the 2019 and 2021, comprising the 64,15% of the complete collection.
B. TYPES OF PROBLEMS STUDIED
Historically, two of the most studied routing problems are the aforementioned TSP and VRP. These two problems, and all their variants, are the focus of a significant number of studies in the literature. Their solving complexity, along with their easy formulation, flexibility to add mathematical constraints, and wide applicability to real-world problems have led both TPS and VRP to receive a constant and moderate growth for the last 5 decades.
This situation is also evident in the work proposed in this survey, in which the vast majority of the papers hybridizing QC and routing problems deal with the TSP, VRP or any of their many variants. Table 3 summarized all gathered papers sorted according to the problem they solve.
As can be deduced from Table 3, 32 (60,37%) of the identified papers are built for the TSP problem. Most of these works make use of the canonical version of the problem, whereas some papers tackle any variant such as the Asymmetric TSP [70] or the TSP with Time Windows [75]. Regarding the topic of these papers, 29 are used with benchmaking purposes, while the rest are conceived to deal with real-world applications. We can see examples in industrial robotics [65], [69], Smart Cities [72] and UAV planning [33].
In turn, VRP-related works make up 24,52% of the total compilation (13 out of 53), being the canonical formulation the most recurrent one. Other interesting problem versions are also posed, such as the formulation known as Social Worker in [41] and [46], or the VRP with Balanced Pick-up in [60]. Further studied variants are the large scale VRP in [57] and the VRP with time restrictions, state, and capacity in [71].
Furthermore, we have found 11 other studies revolving around routing problems that fall outside the umbrella of TSP and VRP. Here, we can distinguish optimization problems in graphs, such as the Hamiltonian Cycle 3 in [30], [61] or Graph Traversal problems (such as eulerian tours or optimal postman tours) in [34], [36]. Other works encompass studies dealing with the well-known shortest path problem in [39], [54] or the less studied longest path problem [58].
Finally, it is interesting to highlight those works which combine several techniques or even present the problem from a mixed technical perspective. In [34] and [36], for example, in addition to the graph traversal problems above mentioned, the TSP is also considered. Another example can be found in [73], in which the Capacitated VRP is solved by decomposing it into a TSP and a clustering problem.
It is interesting to point here that some of the works above mentioned represent the first efforts made by the community to apply the advances of quantum computing in real environments. In [69], for example, authors propose a preliminary method for the optimization of robotic assembly lines in automotive industrial plants. Authors in [65] present an approach for optimizing non-colliding multi-robot routes, building use cases composed by two robots performing straight lines between two points. Another example can be found in [72], in which the author contextualizes its work on the routing of garbage trucks on smart cities. In any case, due to the embryonic state of the field, the adoption of quantum technology in realistic environments is still an open challenge, as it is discussed in the upcoming Section III-F. It is for this reason that the works focused on the resolution of real-world routing problems employ reduced and controlled use cases.
Lastly, and intending to answer the secondary question SQ2 as thoroughly as possible, a timeline distributing the amount of works per problem paradigm is depicted in Fig. 2. This figure demonstrates the predominance of the TSP line of research over the years, despite VRP and other approaches are gaining attention due to the evolution of the QC resources and their open access.
C. TYPES OF STUDIES, ALGORITHMS AND QUANTUM PROCESSOR EMPLOYED
In this section, we analyze the collection of papers from three different perspectives. First, aiming to answer the above scheduled SQ3, we conduct a brief study in Table 4 in order to determine which kind of research work (theoretical or practical) is the most prolific in the current related literature. A deeper examination is performed in this aspect, taking advantage of the data laid out in this Table 4 along with a characterization of the publications formats employed by researchers (journal, conference, report. . . ). Centering ourselves on the practical approaches, we later give the reader insights into some figures of interest related to the type of algorithms (Pure Quantum and Hybrid solvers) and the quantum processors employed (IBM quantum-gate model, DWAVE quantum annealer and Fujitsu Digital Annealer).
Several conclusions can be drawn from Table 4. At a first glance, it is worth-mentioning the equity in the number of theoretical and practical papers. As for the publication source, conferences are preferred for practical papers whereas journals have emerged as more appropriate forum for theoretical papers.
Another interesting aspects can be inferred from Fig. 3, where practical and theoretical papers are counted on a time axis. On the one hand, and coherently with the conclusions mentioned in III-A, an extraordinary upsurge of practical papers has been witnessed during the last 3 years on account of the unleashing of open platforms such as Qiskit of IBM and Leap from DWAVE. Until the unveiling of these kind of platforms, the access to QC resources was very limited, hindering the conduction of studies in this field. On the other hand, theoretical papers have settled into a regular pace along the years.
Having answered the secondary question SQ3, we proceed now with a thorough analysis of the 27 practical papers, giving answer to SQ4 and SQ5. For this purpose, we firstly introduce a few brief remarks about two important concepts.
On the one hand, it should be highlighted that, despite it is deemed as the next frontier in computation, QC is still in an incipient stage of development and current available computers suffer from remarkable limitations in terms of capability and performance. In this context, two types of solvers prevail in the current literature: i) purely quantum approaches, which aim to solve a problem using only QC resources, and quantum-classical hybrid algorithms, which are conceived to overcome current QC limitations.
On the other hand, as mentioned in the introduction, two QC architectures coexist in the current literature: the annealing-based and the gate-model quantum computers. The leading provider of annealers is D-Wave Systems. This company launched the D-Wave Advantage_system1.1 device in 2020, which is accessible via D-Wave's cloud interface, coined as Leap. This device counts with a working graph of 5436 qubits and its Hamiltonian is expressed as an Ising model. Another option is the Fujitsu Digital Annealer. 4 This device, which is presented as an alternative to quantum computing technology, uses a digital circuit design inspired by quantum phenomena. Regarding gate-model quantum computers, current commercial devices have from 10 to 50 qubits in place, being IBM their most renowned provider.
Having said this, Table 5 shows the results of the second study belonging to this subsection. In this table, we list the complete collection of 27 practical papers, along with the type of solver (Pure Quantum and Hybrid solvers) and QC device picked out (IBM quantum-gate model, DWAVE quantum annealer and Fujitsu Digital Annealer).
Answering SQ4, an equilibrium exists between the type of solvers used by researchers, with 13 papers presenting purely quantum approaches, and 14 proposing an hybrid formulation. By taking a closer look, one can observe that researchers using the IBM machine tend to come up with purely quantum approaches (9 out of 11), while practitioners on annealers develop hybrid methods in a more frequent way (12 out of 16).
With reference to SQ5 secondary question, DWAVE quantum annealer has been the most exploited device up to now by the community, appearing in 16 out of 27 practical papers. In addition, researchers have picked the IBM computer in 11 papers, while the Fujitsu Digital Annealer has been resorted to one single paper.
D. NUMBER OF CITATIONS
In this section, we evaluate the impact, in the form of amount of citations, of the papers to reply to SQ6. Fig. 4 arranges the ten most cited works given by Google Scholar digital library as reference. We find [28] as the most cited paper with 181 citations, as a result of being the first quantum annealing approach for the TSP. The legacy of this paper has become as inspiration for many subsequent studies.
Also groundbreaking is the work presented by Srinivasan et al. [59], proposing the first solver for the TSP based on an IBM approach. This paper has accumulated 55 citations in approximately 3 years.
The total number of citations for the whole collection of papers is 1042, which makes an average of 19,66 cites per paper. Given the immaturity of the field (proved in Fig. 4) and taking into account that almost the half of the papers (26 out of 53) have been published in the last two years, this value is an indication of the interest that this field is bringing to the scientific community. More precisely, the most cited paper [28] was published in 2004, although recent papers are also ranked in the top, such as [59], published in 2018, or [71], [73] and [75], issued in 2019.
E. RESEARCHERS AND AFFILIATIONS
Regarding the nationality of the collected works and aiming to answer SQ7, we have analyzed the information in two different ways in order to gain a deeper understanding about the real mass of research groups and institutions: • With focus on researchers' country of affiliation, extracting the number of researchers working on this field ( Fig. 5.a).
• With focus on institutions themselves, calculating the number of distinct-geographically-located centers involved in the research and/or development of this technology ( Fig. 5.b).
As a result, we have identified 21 distinct nationalities, which have been summarized in Table 6. In this table, and also in the depicted Fig. 5, we can clearly distinguish that the most representative countries are United States, China and India with 30, 18 and 16 researchers distributed in 15, 10 and 8 institutions, respectively.
Disposing the information in these columns, let us gain specific insights about the level of collaboration among the institutions and how the research community is integrating knowledge or expertise from various groups and even disciplines. Therefore, by comparing the count of papers and institutions, it is remarkable how countries like China or India are creating strong networks to conduct their works.
F. OPEN CHALLENGES POSED BY THE COMMUNITY
Considering the review of the research activity discussed in preceding sections, there is no doubt that QC has brought a fresh breeze to the community. Advances made in the joint area of QC applied to routing problems is growing at a remarkable pace, exposing the potential benefits of using quantum technologies in this logistics-related optimization problems. In any case, the incipient stage of maturity of QC has brought to the fore several challenges and research niches that shall mark the path of future studies.
It should be noted that the QC research community holds complementary interests, presumably as a result of individual areas of knowledge/interest: • Practitioners coming from industrial and applied research groups, mostly concerned about QC-based formulations and experiments over more realistic scenarios [69], [73].
• Researchers coming from quantum physics, usually interested in analyzing the hardware performance and reliability [42], [53].
• Researchers with background in traditional Artificial
Intelligence, involved in testing the limits of QC, comparing results and leveraging fundamentals, heuristics or shareable-across-platforms knowledge [29], [71].
The research activity produced by this heterogeneous community has led to a series of open research questions, which can be grouped into the following three blocks to give a clear-cut answer to SQ8: • Up to now, several canonical problems have been formulated and addressed by researchers, such as the TSP [24], VRP [49] or the Hamiltonian Cycle Problem [36]. More sophisticated variants of these problems have also been formulated, considering features such as timewindows [75] or capacity constraints [71]. However, researchers are forced to devise problems tailored to hardware capacity, which often fall short from advanced or realistic formulations [62], [64], [71]. Furthermore, the necessity of taking a step forward and advance in the formulation of more realistic problems is a widely accepted open challenge [31], [50], [66], [68]. Within this frame of reference, the following research lines have been identified by the community as future work: i) the efficient modeling of advanced routing problems, focusing on building mathematical formulations which optimize the problem embedding [61], [66], ii) the solving of real-world problems oriented to realistic applications (such as traffic optimization or detecting routing anomalies), even if the size of these problems is restricted [50], [69], [70] and iii) the development of VOLUME 10, 2022 hybrid solvers using classical optimization mechanisms to tackle complex problems [33], [72].
• Current commercial quantum devices impose some inherent limitations, such as noise or decoherence [7]. To overcome this latent problem, practitioners have elaborated on the formulation of error correction and mitigation strategies, as can be seen in [27]. In any case, the amount of studies working on this specific aspect is still low. Some researcher have actively shown and proved the effect of this phenomena in the performance of their algorithmic schemes [23], [66], [72], [73], whereas others just add some remarks in this respect in the future work section [41]. On this grounds, some authors have spotlighted the importance of conducting studies analyzing the effect of the quantum hardware noise [23], [29], since some error mitigation or correction mechanisms can be addressed in the problem formulation itself, giving rise to more resilient algorithms • Quantum devices have demonstrated to be very sensitive to parameterization [69]. On this regard, these parameterizations apply to multiple stages and elements on QC: i) parameters in the problem formulation (such as penalties), ii) parameters for algorithm tuning (commonly referred as hyperparameters), and iii) parameters targeted to quantum hardware (specifications for configurable behaviors of the quantum process). Despite this, up to now, the great majority of practical papers are focused on solving a specific problem or testing the adequacy of a formulation given an intuitively suitable parameterization. In fact, the literature lack profound research on the fine-tuning of these routing problems, algorithms and hardware, as stated by [66]. In this concrete paper, authors express that one of the most difficult challenge is the proper choice of penalty values in problem formulation, and they theorize some alternatives to assist in the proper selection of these parameters. Other works also underline the importance of parameterization in algorithms and quantum hardware, calling for the development of more sophisticated solving models and suggesting the adoption of classical fine-tuning heuristics [69]. In short, practiotioners are nowadays recognizing the necessity of better parameterization procedures in order to fairly analyze the impact of these configurations on the quality of results, robustness or scalability of the approach. [31], [73].
G. SUMMARY OF THE SLR
As a summary of this Section III, we should first say that the meta-analysis carried out on the gathered collection of papers have helped us to answer eight different secondary questions (listed in Section II). After addressing these SQs, we are now in position to give a response to the key question of the SLR (MQ), which is the main objective of this study. We outline the ten most cited papers in Table 7, highlighting some important information that has been discussed along this whole Section III: year, problem solved, type of study, source of publication and quantum resource used. These ten papers are quite representative of the current state of the field, in which the immaturity of quantum resources makes theoretical papers more present in the literature. Furthermore, the TSP has been the most studied routing problem during the period, with the VRP in the second position. Lastly, it is worth mentioning that 7 of these papers have been published in scientific journals, while 2 were presented in international conferences and 1 was uploaded as a pre-print in the reputed arXiv repository.
IV. CRITICAL ANALYSIS OF MILESTONE AND INFLUENTIAL PAPERS
Lastly, we complement our SLR providing a critical analysis of some selected milestone and influential papers. Considering the immature stage of this knowledge field, and taking into account that this manuscript represents the first review paper conducted on the conjunction of routing problems and quantum computation, we have found interesting to give some technical specifications aiming at helping readers to find a guide for their future works on this field. To do so, we include Table 8 gathering both the most cited papers (deemed as influential works) and those milestone papers that developed routing problems algorithms for the first time or introduced advanced technological strategies. For each selected work, we incorporate a brief description of its main contribution as well as the problem complexity (for applied studies).
As can be noted in this table, first influential studies were focused on problem formulations at a time when quantum computers were not accessible. Examples of this statement include formulations for the TSP [28], [32], VRP [49] and other routing related problems [36]. These works encouraged the community to evolve new strategies and complement these theoretical studies with experimentation.
In this context, Moser introduced the first theoretical paper aiming to combine both routing problems and QC [24]. Furthermore, Martoňák et. al proposed the first ever formulation of the TSP for a quantum annealer in [28]. That work has gained a lot of attention in the last two decades becoming the most cited one in this specific field. More specifically, authors endorsed their formulation through an experimentation carried out with classical techniques using TSP instances of a size up to 1002 nodes. Two years later, authors in [32] continued on the road taken by [28], presenting an alternative TSP formulation for a quantum annealer to improve efficiency in escaping from local optima. Also significant is the research conducted in [42], which introduces one of the fist experiments on a simulated quantum hardware. More concretely, authors developed a classically simulated nuclearmagnetic-resonance, which was applied to a single small instance of the TSP consisting of 4 nodes.
Furthermore, a remarkable milestone was the research conducted in [47] in 2013, which gave birth to a guide for beginners offering very understandable explanations of TSP formulations translated to a real adiabatic computer. Another influential work (ranked among the most cited ones in the field) is [53], which introduces a quantum backtracking algorithm for dealing with TSP instances with a maximum degree of 3. Finally, it is worth-mentioning the recent theoretical work carried out by Papalitsas et al. in [75], in which the first QUBO representation of a complex TSP is given. That TSP variant deals with time-windows, which is arguably the most frequently imposed restriction on routing problems.
Turning our attention to the VRP, the first theoretical groundbreaking paper can be found in [49]. This paper, which is also among the most cited ones in the field, introduces the first ever formulation of the VRP for being solved in a quantum annealer. Lastly, also significant was the theoretical work developed by Dörn in [36]. In that paper, unveiled in 2007, the author introduced the first quantum annealing formulation of some relevant routing problems, such as the eulerian graph problem, hamiltonian cycle problem, or the project scheduling problem. In that work, also the TSP is considered, strengthening the conclusions drawn in previous related studies.
Focusing on papers elaborating on practical experiments on real QC devices, the first crucial milestone was achieved in 2017 through the work proposed by Srinivasan et al. in [59]. Authors presented the first solving scheme for a routing problem using a real quantum computer. Specifically, that revolutionary paper tackled the TSP using the IBM quantum hardware, addressing a single TSP instance of 4 nodes. On another vein, authors in [69] achieved a further remarkable milestone, proposing the first DWAVE-technology-based solver for the TSP, facing instances with a maximum of 5 nodes. Despite the publication of these pioneering works, arguably, the most influential research on this context is the one carried out by Feld et al. in [73]. In that study, authors faced the well-known Capacitated VRP decomposing it into a clustering problem and a TSP. While the first of these problems is dealt using classical techniques, the TSP is tackled using the DWAVE quantum annealer. The technical and informative nature of this paper, giving clear details about the QUBO formulation, has made it an influential work in the community. Using a hybrid quantum-classical approach, TSP instances composed of 14 to 38 nodes are faced, while the overall algorithm addresses CVRP instances from 22 to 101 clients. Lastly, it is interesting to highlight the study proposed in [60], in which the first approach based on the Fujitsu quantum inspired device is presented, able to solve small instances of the TSP.
Regarding pure VRP formulations, the first milestone belongs to [71], whose authors employed the DWAVE quantum annealer for instances up to 6 nodes, being the first VRP solver run on a real quantum hardware. Additionally, Behera et. al introduced the first approach based on IBM universal quantum device in [29]. That work, published in 2020, tackles small VRP instances composed of 4 and 5 nodes. Finally, it is also interesting to mention the research conducted by Mahasinghe et al. in [61] where authors solved the Hamiltonian Cycle problem using the DWAVE quantum hardware, analyzing results on graphs of size 4.
V. CONCLUSION AND FURTHER WORK
The main purpose of this manuscript has been the conduction of a systematic literature review (SLR) to analyze the research carried out in the confluence of Quantum Computing and routing problems topics. The time span analyzed in this paper covers 18 years of research, which have witnessed 53 research papers published in form of journal or conference contributions, book chapters, PhD thesis, Master thesis and reports.
From this analysis, it is noticeable that the TSP engages most of the researchers (60,37% -32 out of 53 papers), while the VRP amounts to the 25,52% of the contributions (13 out of 53). The rest of the papers deal with other routing problems, such as the Shortest Path Problem or the Hamiltonian Cycle.
Regarding the type of papers, an equality has been found in the literature, with 26 theoretical works and 27 practical contributions. In any case, and thanks to the advance of the technology, a clear inclination towards practical articles come into sight in the last three years. Further equality can be also detected in the kind of solvers preferred by practitioners, with 13 papers presenting purely quantum approaches and 14 studies proposing hybrid techniques. Regarding QC devices, DWAVE beats IBM by 16 to 11. Furthermore, we have complemented our analysis by sharing the envisioned status of this area, pointing out some challenges detected by researchers and practitioners working on this field. These open opportunities should stimulate research efforts in years to come. We have identified three different main challenges: i) the need of dealing with more sophisticated and real-world oriented problems, ii) the value that will provide the proper understanding of the quantum hardware limitations, and iii) the necessity of properly parameterize problem formulations, algorithms and quantum devices.
As a final note, we can arguably affirm that the application of quantum computing to vehicle routing problems counts with an exciting and prolific future. Continuous advances made on quantum technology will encourage the community to implement more realistic routing problems (increasing complexity and adding new constraints), taking future studies to unprecedented challenges not even imagined today. | 9,282 | 2022-01-01T00:00:00.000 | [
"Computer Science",
"Physics",
"Engineering"
] |
Haploinsufficiency of Activation-Induced Deaminase for Antibody Diversification and Chromosome Translocations both In Vitro and In Vivo
The humoral immune response critically relies on the secondary diversification of antibodies. This diversification takes places through somatic remodelling of the antibody genes by two molecular mechanisms, Class Switch Recombination (CSR) and Somatic Hypermutation (SHM). The enzyme Activation Induced Cytidine Deaminase (AID) initiates both SHM and CSR by deaminating cytosine residues on the DNA of immunoglobulin genes. While crucial for immunity, AID-catalysed deamination is also the triggering event for the generation of lymphomagenic chromosome translocations. To address whether restricting the levels of AID expression in vivo contributes to the regulation of its function, we analysed mice harbouring a single copy of the AID gene (AID+/−). AID+/− mice express roughly 50% of normal AID levels, and display a mild hyperplasia, reminiscent of AID deficient mice and humans. Moreover, we found that AID+/− cells have an impaired competence for CSR and SHM, which indicates that AID gene dose is limiting for its physiologic function. We next evaluated the impact of AID reduction in AID+/− mice on the generation of chromosome translocations. Our results show that the frequency of AID-promoted c-myc/IgH translocations is reduced in AID+/− mice, both in vivo and in vitro. Therefore, AID is haploinsufficient for antibody diversification and chromosome translocations. These findings suggest that limiting the physiologic levels of AID expression can be a regulatory mechanism that ensures an optimal balance between immune proficiency and genome integrity.
Introduction
B cells are responsible for generating a repertoire of antibodies of virtually unlimited diversity in order to confront the antigenic universe. Antibody diversification is achieved through somatic remodelling of immunoglobulin (Ig) genes at two different stages of B cell differentiation. The first one is antigen-independent and takes place during B cell generation in the bone marrow through a site-specific recombination named V(D)J recombination, which gives rise to B cells expressing a primary repertoire of Iow affinity IgM antibodies (reviewed in [1]). Upon antigen encounter B cells have as yet another chance to further diversify their antibody repertoire in germinal centers by two independent molecular mechanisms called somatic hypermutation (SHM) and class switch recombination (CSR). SHM reshapes the antigen binding site of Igs by introducing nucleotide changes in their variable genes. B cells where SHM gives rise to antibodies with higher affinity for their cognate antigen are positively selected, a process referred to as affinity maturation (reviewed in [2] and [3]). CSR is a regionspecific recombination reaction that replaces the primary m constant (Cm) region by a downstream constant region (Cc, Ce or Ca), thereby generating antibodies endowed with new functions for pathogen neutralization while retaining the same antigen specificity. CSR takes place between highly repetitive sequences that precede the Cm, Cc, Ce and Ca genes, called switch regions, through the generation of double strand breaks (DSBs), ligation, and concomitant excision of the intervening sequence from the locus (reviewed in [4,5]).
Both SHM and CSR are initiated by the very same enzyme, Activation Induced Cytidine Deaminase (AID) [6]. In humans, mutations in the AID gene are associated with a rare (1/2000000) immunodeficiency called Hyper IgM Syndrome type 2 (HIGM2) [7]. HIGM2 patients display impaired CSR and SHM and are prone to bacterial infections of the respiratory and digestive tracts [7]. AID initiates SHM and CSR by deaminating cytosine residues of the variable and switch regions of the Ig genes, respectively [8,9,10,11,12,13]. Cytosine deamination on DNA converts a normal C:G pair into a U:G mismatch. AID-generated U:G mismatches are processed through uracil removal by Uracil-N-Glycosylase (UNG) or through recognition by the mismatch repair (MMR) machinery, which results in the generation of either a mutation (SHM) or a DSB (CSR) [11,14,15,16].
Most of the lymphomas diagnosed in the western world arise from mature B cells and are characterized by the presence of chromosomal translocations that involve one of the Ig loci and a proto-oncogene [17,18]. These translocations are known to play a role in the etiology of these B cell neoplasias [17,18]. In vivo and in vitro studies have shown that AID can promote the generation of pro-lymphomagenic translocations [19,20,21,22], and that CSR and the translocation reaction are initiated by a common pathway that involves DNA deamination and UNG [20]. The impact of AID function in B cell neoplasia development has been addressed in a number of in vivo models, including IL6 [19,21] and pristane [23] promoted plasmacytomas, BCL-6-induced diffuse large B cell lymphoma [22], Em-Myc model of B cell lymphoma [24] and a myc-induced multiple myeloma model [25]. In all the cases absence of AID either delayed the onset or shifted the nature of the neoplasia towards a more immature origin, hence reinforcing the idea that AID expression plays a role in the generation of mature B cell lymphomas by promoting DNA lesions.
Therefore, AID function, while crucial to the development of an efficient immune response, can pose a risk to DNA stability in B cells. Different regulatory mechanisms may be responsible to minimize unwanted DNA damage by AID.
First, AID mutagenic activity is mostly limited to the Ig loci (reviewed in [26]), and although AID-induced lesions in other genes have been reported [27,28], these events are rare. Second, AID accessibility to DNA is restrained by fine control of subcellular localization [29,30]. Third, the presence of AID mRNA is mainly restricted to activated mature B cells [31], thus limiting its function to the cell type and time window where it is required. Transcriptional regulation exerted by B cell specific transcription factors and cis elements (reviewed in [32], as well as microRNA-mediated post-transcriptional regulation [33,34,35] contribute to this expression pattern.
We hypothesized that limiting physiological AID expression levels could provide an additional mechanism to restrict its deleterious activity. To address this question we analysed the impact of AID reduction in AID +/2 mice on CSR, SHM and the generation of chromosome translocations.
AID expression is limiting for its function in CSR
In order to assess whether physiologic AID expression level is limiting for its function, we evaluated AID gene dose effect in mice harbouring one or two functional alleles of the AID gene, AID +/2 and AID +/+ mice, respectively. B cells from congenic Balb/c ByJ animals [21] were used to minimize strain-to-strain variations. We first wanted to ascertain that deletion of one AID allele in AID +/2 mice resulted in a reduction of AID levels as compared to AID +/+ mice. Mature B cells were isolated from spleens from AID +/+ and AID +/2 mice, stimulated in vitro in the presence of LPS and IL4 to promote AID transcriptional activation, and AID expression was quantified after 3 days of culture by real-time RT-PCR. We found that indeed AID mRNA levels are reduced roughly to 50% in AID +/2 as compared to wild type AID +/+ B cells (Figure 1a). AID deficient mice display a mild B cell hyperplasia and enlarged germinal centers [6]. We found that AID +/2 mice have slightly increased numbers of B cells in spleen as compared to AID +/+ (Figure 1b) and contain a higher proportion of germinal center B cells in Peyer's patches, as measured by the expression of the GL7 + Fas + activation markers (Figure 1c).
The above results indicate that AID +/2 cells express reduced levels of AID mRNA and show hyperactivation features that are intermediate between AID proficient (AID +/+ ) and AID deficient (AID 2/2 ) cells.
To address if reduced AID expression in AID +/2 B cells results in a decrease of serum Ig levels, we immunized AID +/+ , AID +/2 and AID 2/2 mice with NP-CGG and analysed Ig concentration by ELISA. In agreement with previously reported data [6], we found no significant difference in serum Ig concentration between heterozygous and fully deficient AID mice (Figure 1d), presumably due to the high variablity in Ig titers. This result is consistent with the finding by Takizawa et al that decreased Ig titers in immunized AID +/2 mice is only detected for antigen-specific high affinity antibodies [36]. To directly measure the impact of AID reduction on the induction of CSR, B cells isolated from AID +/+ , AID +/2 and AID 2/2 spleens were stimulated in the presence of LPS and IL4 to promote CSR to IgG1. FACS analysis after 2 days of stimulation showed that AID +/2 B cells show a reduction in the efficiency of CSR to IgG1 (Figure 1e) which parallels the observed reduction of AID mRNA levels ( Figure 1a). CSR reduction in AID +/2 compared to AID +/+ B cells can be observed throughout the duration of the culture (Figure 1f) both for IgG1 isotype (LPS+IL4 stimulation, graph on the left) and for IgG3 isotype (LPS stimulation, graph on the right), and was not associated with proliferation defects (Figure 1e and not shown).
We next asked if surpassing the level of physiologic AID expression would conversely result in an increase of CSR. To approach this issue we transduced spleen B cells from wild type mice with retroviruses encoding AID or AID E58Q catalytically inactive mutant, together with GFP for tracking purposes. We found that AID, but not AID E58Q overexpression, promotes an increase in the efficiency of CSR, as measured by the expression of IgG1 after stimulation in the presence of LPS and IL4 (Figure 1g and h).
From these results we conclude that AID gene dose affects the efficiency of CSR in primary B cells, which implies that the AID gene is haploinsufficient for CSR.
AID expression is limiting for SHM
To assess whether AID haploinsufficiency was also evident on its activity in SHM we first analysed the accumulation of mutations in the 59 end of the m switch region (Sm). AID activity during CSR leads to the introduction of mutations in Sm. We stimulated CFSE-labelled spleen B cells from AID +/+ and AID +/2 mice in the presence of LPS and IL4 for 96h and quantified the accumulation of mutations in cells that had undergone 5 or more divisions after sorting, DNA extraction, PCR amplification, cloning and sequencing. We found a lower mutation frequency in the Sm region of AID +/2 when compared to AID +/+ cells (1.2610 24 vs 1.9610 24 ) although this difference was not statistically significant (t test p = 0.171) (Figure 2a). We next examined the SHM frequency in vivo by isolating Fas + GL7 + germinal centre cells from AID +/2 and AID +/+ peyer's patches and sequencing of the intronic region immediately downstream of the JH4 gene. Our analysis showed that B cells from AID +/2 mice contain fewer mutations than AID +/+ cells (0.9610 23 vs 3.1610 23 t test p = 0.011) (Figure 2b). From these results we conclude that AID is also haploinsufficient for SHM and therefore that the physiologic level of AID expression is limiting for the diversification of antibodies.
B cells from IL6tgAID +/2 hyperplastic lymph nodes harbour fewer translocations than IL6tgAID +/+ B cells IL6 transgenic (IL6tg) mice develop lymphoid hyperplasia, presumably resulting from IL6-induced proliferation protection of mature B cells from apoptosis. Hyperplastic lymphoid tissues from IL6tg mice are enriched in B cells that harbour chromosomal translocations involving the IgH locus and the c-myc protooncogene (c-myc/IgH), analogous to those found in human Burkitt lymphoma. In the absence of AID, IL6tg B cells are devoid of c-myc/IgH translocations and the onset of lymphoid hyperpla-sia is delayed [19,21]. Breakpoints of c-myc/IgH translocations found in IL6tg mice cluster in a narrow region of the c-myc gene, which encompasses part of its first exon and first intron. In contrast, translocation breakpoints have been found spreading over a large region of the IgH locus, from the V-J H region to the alpha switch (Sa), which precedes the most distal alpha constant (Ca) segment [19,21,37]. This distribution likely reflects the sites of AID-initiated double strand breaks during CSR and on occasion, during SHM.
To determine if AID gene dose has an influence on the frequency or distribution of c-myc/IgH translocations in vivo we analysed B cells from IL6tgAID +/+ and IL6tgAID +/2 hyperplastic lymph nodes by long-range PCR, cloning and sequencing. Oligonucleotides priming at the mu switch (Sm) region or at the Sa region of the IgH were combined with c-myc oligonucleotides to detect proximal, c-myc/IgHm or distal, c-myc/IgHa, translocations, respectively [38] (Figure 3a). We found that the total frequency of c-myc/IgH translocations was reduced in IL6t-gAID +/2 (0,77610 24 ) B cells when compared to IL6tgAID +/+ (1,95610 24 ) (Figure 3c, left).
We readily detected proximal c-myc/IgHm translocations in both IL6tgAID +/+ and IL6tgAID +/2 B cells, in agreement with previous reports [21,37] (Figure 3b). When the frequency of cmyc/IgHm translocations was calculated by performing serial dilutions of B cell samples from hyperplasic lymph nodes, we found that it was slightly reduced in IL6tgAID +/2 when compared with IL6tgAID +/+ cells (Figure 3c, middle; 1.2610 24 vs 0.8610 24 ). Breakpoints from IL6tgAID +/+ and IL6tgAID +/2 translocations were mapped and characterized by cloning and sequencing ( Figure 3d and Table 1). We found that translocation breakpoints at the c-myc gene mostly clustered at the end of the first exon and the beginning of the first intron, regardless of the genotype being analysed (Figure 3d). Breakpoints at the IgH locus were slightly more proximal to Em in the case of IL6tgAID +/+ B cells (Figure 3d), although this difference was not statistically significant. Both mutation frequency nearby translocation junctions and number of microhomology nucleotides at junctions were similar in IL6tgAID +/+ and IL6tgAID +/2 B cells.
In contrast to c-myc/IgHm proximal translocations, we found that distal c-myc/IgHa translocations, while present in 20% of the IL6tgAID +/+ samples, were not detected in any of the IL6tgAID +/2 mice analysed (Figure 3b-c, right). We conclude that in vivo, the reduction of AID expression in IL6tgAID +/2 mice results in a shifted pattern and reduced frequency of c-myc-IgH translocations.
The frequency of c-myc/IgH translocations generated in vitro is reduced in AID +/2 B cells Lymphoid hyperplasia generated in IL6tg mice is a long latency disease presumably involving complex selective events which can bias the number and nature of the translocations found in sick animals. In order to assess more faithfully the effect of limiting AID gene dose on the generation of chromosomal translocations, we analysed the frequency of these events as generated after short in vitro cultures of primary B cells. C-myc/IgH translocations are initiated in B cells through the same molecular pathway as CSR, which involves cytosine deamination by AID, and UNG. This process can be recapitulated in vitro by stimulating spleen B cells in the presence of LPS and IL4. The frequency of these events in wild type B cells is extremely low (below one translocation every ten million cells), but it is increased in the absence of p53, p19 ARF or ATM, reflecting that DNA damage and oncogenic stress pathways are activated downstream of AID function to prevent aberrant joinings or spreading of cells that harbour lymphomagenic translocations. In particular, p53-mediated protection against AID-triggered c-myc/IgH translocations seems to require both alleles of the tumor suppressor, since p53 +/2 B cells display a dramatic increase in translocation frequency when compared to wild type littermates. Therefore we decided to exploit the higher frequency of c-myc/IgH translocations found in p53 +/2 B cells to determine whether restricting AID expression levels has an impact on the occurrence of these events.
To verify that the p53 +/2 genotype or the mixed strain of these mice do not interfere with AID haploinsufficiency (described above), we generated p53 +/2 AID +/+ and p53 +/2 AID +/2 animals and analysed the efficiency of CSR in LPS+IL4 B cell cultures. Expectedly, we found that p53 +/2 AID +/2 B cells show a decreased level of CSR when compared to p53 +/2 AID +/+ cells, as measured by the expression of IgG1 at different culture timepoints ( Figure 4a). This reduction is comparable to that observed in Balb/c p53 +/+ mice (see above).
From this result we conclude that the reduced AID levels expressed by mice that harbour a single copy of the gene (AID +/2 ) results in a decreased frequency of c-myc/IgH translocations. Therefore, AID is haploinsufficient for the generation of these lymphomagenic lesions.
Discussion
The progress of the humoral immune response relies on the reshaping of the antibody repertoire upon infection by the introduction of somatic changes into the DNA of Ig genes. Higher affinity antibodies are produced by SHM and new effector capabilities are generated by CSR. SHM and CSR are initiated by AID through the deamination of cytosine residues on antibody genes [6]. Accordingly, AID is a critical enzymatic activity for the development of the adaptive immune response [6,7]. Recent reports have stated that AID-mediated deamination on DNA can also lead to the generation of unwanted lesions, namely, mutations outside the Ig genes [27,28], or DNA breaks and chromosomal translocations [19,20,21,22], whose contribution to lymphoid neoplasia has been demonstrated in several in vivo models [19,21,22,23,24,25]. Therefore AID function is expected to be tightly regulated to prevent the generation of DNA lesions. Here, we have addressed the question of whether restriction of AID expression levels in vivo could play a regulatory role in its activity. We performed our analyses in mice that harbour a single functional allele of the AID gene, which results in a reduction of roughly 50% of AID mRNA levels. This reduction is likely due to a per-cell decrease of AID levels, as it has been shown that AID expression is biallelic [36]. Both in mice and humans, the absence of AID results in a hyper IgM immunodeficiency syndrome that is characterized by the absence of somatic mutations and of Ig isotypes other than IgM. This phenotype is accompanied by lymphoid hyperplasia and enlarged germinal centers. Although the significance of these latter features to the immunodeficiency is unclear, we found that reduction of AID expression in AID +/2 mice brings about a mild increase of B cell numbers in the spleen and of germinal center cells in Peyer's patches. More importantly, AID gene dose is indeed limiting for the generation of switched isotypes, as B cells from AID +/2 mice have an impaired ability to perform CSR in vitro, in agreement with very recently published results [36,39] This observation is reinforced by the finding that exogenous AID expression results in an increase of the CSR rate. In addition, our results show that AID expression is also limiting for SHM. Altogether these data indicate that AID is haploinsufficient for antibody diversification. In humans, AID deficiency (HIGM2) is considered an autosomal recessive disease [7,40,41,42]. This apparent discrepancy with our data can be explained by the fact that HIGM2 patients have relatively mild and variable clinical phenotypes, possibly due to the contribution of genetic and environmental factors [7,41,43]. Among those, it is important to note that numerous different mutations in the AID gene have been identified in HIGM2 patients, whose impact on the efficiency of the CSR and SHM reactions is highly variable [7,41,43,44]. In addition, the number of HIGM2 patients characterised so far is small [45], which makes it very difficult to establish genotype-phenotype relationships. Therefore, it is expected that heterozygous individuals display mild to absence of clinical manifestations.
Our analysis of AID-induced chromosomal translocations shows that elimination of one AID allele reduces the frequency of these aberrations. Of note, c-myc/IgH translocations involving the distal Sa switch region were not detected in B cells from IL6tgAID +/2 hyperplasic lymph nodes. Although the sample size does not allow concluding that this absence is absolute, our data indicate that Sa translocations are very infrequent when the amount of AID is limiting. This observation is in agreement with the finding that the frequency of AID mutations in Sa is much lower than in Sm [46]. In turn, this would imply that, in the presence of a single AID allele, the lesions that trigger a DSB in this region are very rare and fall below our detection limit. Analysis of AID-promoted c-myc/IgH translocations in in vitro stimulated B cells has allowed an accurate measurement of the frequency of these events in AID +/2 B cells, as opposed to the IL6tg (this study) or pristane induced plasmacytomas in Bcl-xL mice [36], where the contribution of in vivo selective mechanisms can not be discriminated. Our results show unequivocally that reduction of AID levels results in a significant decrease in the frequency of these events. This reduction in c-myc/IgH translocations found in AID +/2 mice is not accompanied by a shift in the location or nature of translocation breakpoints, which indicates that diminishing AID levels affects only the frequency of targeting, rather than its specificity. Conversely, we had previously observed that AID overexpression in primary B cells produces a major increase in the frequency of c-myc/IgH translocations. In particular, a ten fold protein increase gave rise to a thousand fold increase in translocation frequency [20]. This observation suggests that surpassing a certain level of AID expression is likely to overwhelm the cellular surveillance pathways that protect against these lesions and to result in their accumulation in a non-linear fashion. Together, these results show that AID haploinsufficiency is also revealed in its deleterious activity as promoter of illegitimate chromosome fusions. Interestingly, previous results have shown a longer median survival for IL6tgAID +/2 than for IL6tgAID +/+ mice (7.9 vs 5.5 months) in two independent studies which showed similar survival curves for IL6tgAID 2/2 mice [19,21]. Very recently it has been reported that in a Bcl-xL transgenic background, AID +/2 mice have a delayed plasmacytoma development in response to pristane injection [36]. Together, these observations indicate that AID haploinsufficiency can be relevant for tumor progression in vivo.
In summary, we have found that a half reduction of AID expression in mice harbouring a single AID allele, results in a reduction of AID activity, both in the efficiency of antibody diversification and in the frequency of generation of chromosome translocations. This implies that there is a gene dose effect of AID expression, and therefore, that AID is haploinsufficient. Our findings suggest that restraining the physiologic levels of AID expression can be a mechanism that allows the achievement of an optimal balance between immune proficiency and genome integrity.
Mice, B cell cultures, flow cytometry and ELISA assays
Congenic Balb/c AID +/2 mice were obtained by breeding Balb/c AID 2/2 mice with wild type Balb/c mice. p53 +/2 AID +/2 mice were obtained by breeding Balb/c AID 2/2 mice with C57BL/6J p53 2/2 mice (Jackson laboratories). Lymph node samples were obtained from IL6tgAID +/+ , IL6t-gAID +/2 and IL6tgAID 2/2 mice , Dorsett 2007. All experiments with mice were performed following the Animal Bioethics and Comfort Committee protocols approved by the Instituto de Salud Carlos III. B cells were purified from spleens by anti-CD43 immunomagnetic depletion (Miltenyi Biotech), labelled with 5 mm CFSE (Molecular Probes) (when indicated) and cultured in RPMI in the presence of 25 mg/ml LPS (Sigma), . Specificity of amplification products was determined by southern blot and hybridization with IgH (middle) and myc (lower) probes. Results for p53 +/2 AID +/+ and p53 +/2 AID +/2 cells are shown on the left and right gels, respectively. Translocation identifications are indicated above lanes. (C) Frequency of c-myc/IgH translocations in p53 +/2 AID +/+ and p53 +/2 AID +/2 B cells. Translocation frequency was determined by serial dilution of DNA samples, followed by PCR amplification, cloning and sequencing. (D) Representation of translocation breakpoints at the c-myc and IgHm genes found in p53 +/2 AID +/+ and p53 +/2 AID +/2 B cells. Amplification products of c-myc/IgH translocations were cloned and sequenced. Translocation breakpoints at the c-myc (upper diagram) and IgH (lower diagram) genes are shown as closed (p53 +/2 AID +/+ ) and open (p53 +/2 AID +/2 ) circles. C-myc exon 1 and IgH Em enhancer are represented as grey boxes and distance to these elements is shown underneath (bps). Arrows on the right indicate the position of the PCR oligonucleotides used for amplification. doi:10.1371/journal.pone.0003927.g004 10 ng/ml IL-4 (Peprotech), 10 mM Hepes (Gibco), 50 mM bmercaptoethanol (Gibco) and 10% fetal bovine serum (Gibco). For class switch recombination and differentiation assays BAFF (20 ng/ml R&D Systems) was also included to the previously described medium. Retroviral transductions were performed in LPS+IL4 stimulated B cells in the presence of 8 mg/ml polybrene. Flow cytometry analysis was performed after staining with anti-GL7-FITC, anti-CD95-PE, anti-CD19-APC, anti-IgG1 biotin and anti-IgG3 antibodies, APC-streptavidin and 7-AAD (all from BD Biosciences) in a FACSCanto flow cytometer (BD Biosciences). P-values were calculated using unpaired Student's t test (Graph-Pad software). For determination of serum Ig titers, age-matched 7-9 weeks old mice were immunized by footpad injection with 50 mg/ml of NP-CGG (Biosearch Technologies) in complete Freund's adjuvant. Serum was collected from blood samples 15 days after the immunization and IgG titers in AID +/+ , AID +/2 and AID 2/2 mice were determined using a mouse IgG specific ELISA system (Roche Applied Science).
Mutation analysis
For analysis of mutations at the Sm region, CD43 2 spleen B cells were CFSE labelled, stimulated for 96h in the presence of LPS and IL4 and B cells that had undergone 5 or more cell divisions were isolated by sorting (FACSAria). DNA was extracted and amplified using the oligonucleotides 59-AATGGATACCTCAGTGGTT-TTTAATGGTGGGTTTA-39 and 59-GCGGCCCGGCTCA-TTCCAGTTCATTACAG-39y for 26 cycles (Pfu Ultra, Stratagene). For JH4 intron mutations, Fas + GL7 + cells were purified by cell sorting (FACSAria), DNA was isolated and amplified with oligonucleotides 59ACTATGCTATGGAC-39 and 59-CTGGA-CTTTCGGTTTGGTG-39 for 9 cycles and then 59-GGTCAAG-GAACCTCAGTCA-39 and 59-TCTCTAGACAGCAACTAC-39 for 21 cycles using Pfu Ultra (Stratagene). Amplification products were purified, cloned and sequenced. Sequence analysis was performed using Lasergene software. P-values were calculated using unpaired Student's t test (GraphPad software). | 5,980.8 | 2008-12-12T00:00:00.000 | [
"Biology"
] |
A Fast Calibration Method for Phased Arrays by Using the Graph Coloring Theory
Phased array radars are able to provide highly accurate airplane surveillance and tracking performance if they are properly calibrated. However, the ambient temperature variation and device aging could greatly deteriorate their performance. Currently, performing a calibration over a large-scale phased array with thousands of antennas is time-consuming. To facilitate the process, we propose a fast calibration method for phased arrays with omnidirectional radiation patterns based on the graph coloring theory. This method transforms the calibration problem into a coloring problem that aims at minimizing the number of used colors. By reusing the calibration time slots spatially, more than one omnidirectional antenna can perform calibration simultaneously. The simulation proves this method can prominently reduce total calibration time and recover the radiation pattern from amplitude and phase errors and noise. It is worth noting that the total calibration time consumed by the proposed method remains constant and is negligible compared with other calibration methods.
Introduction
To ensure a radiation pattern meeting the required performance, an active phased array must guarantee amplitude and phase matching among elements [1][2][3][4]. When these amplitude and phase errors are small and cannot be satisfied by the manufacturing process, the phased array antennas must be calibrated, especially for a digital beam-forming phased array with very sharp beams. Many calibration techniques have been proposed to obtain uniform amplitude and phase values for each antenna [3,[5][6][7][8][9][10][11][12][13][14][15]. These methods can be divided into five types: Near-field calibration, auxiliary probe calibration, internal calibration, far-field calibration, and mutual coupling calibration. Near-field calibration uses a test probe scanning across the array surface to directly measure each antenna's amplitude and phase errors. This method can obtain high-accuracy calibration results. However, it is very time-consuming since it requires a complex equipment to accurately control the probe's movement. Near-field calibration is always performed in the factory and is not suitable for in-the-field calibration [6,9]. The auxiliary probe calibration method places auxiliary antenna probes around the array. These probes are used to couple radiant signal from each antenna. The received signal is compared to a stored reference obtained during factory test to obtain each antenna's amplitude and phase errors. This method is much faster than the near-field calibration. However, the coupling between each probe and each antenna is different and always results in prominent calibration errors [10,11]. Internal calibration method places a built-in calibration circuitry beside each antenna to measure RF output power, receive gain and so on. These measured values are transmitted to a central calibration module where the amplitude and phase errors of each antenna are calculated. This method allows precise calibration by direct sampling of phases and amplitudes. A major disadvantage of this method is the requirement of significant additional hardware (high cost for large arrays) [3,12]. Far-field calibration method is a very popular method, which places a transmitter/receiver in the far field to transmit calibration signal to/receive signals from the array. This method can perform background calibration without interrupting radar's normal operation. However, the calibration performance of this method is highly susceptible to environmental influence. Furthermore, it requires the radar keeping absolutely still during calibration. Sometimes the far-field transmitter/receiver is difficult to deploy. This method may not be optimum for airborne radars and ship-borne radars [2,13]. Mutual coupling calibration method uses the mutual coupling effect among antennas to measure the amplitude and phase differences by transmitting from one antenna and receiving from another. The mutual coupling among adjacent antennas should be identical and the array should be able to transmit with one antenna and receive with another simultaneously. Plenty of works have been down in this method. However, in these works, only one antenna transmits at a time, and this procedure is repeated until all antennas are measured [14,16].
Reference [16] combines the mutual coupling method and the rotating element electric field vector (REV) method [17] to achieve amplitude-only measurement for phased array calibration. By shifting the phase of one antenna from 0 • to 360 • , the REV method measures the composite electronic field of the entire array and obtains the amplitude and phase for the antenna. However, the REV method's total number of measurement is in proportion to the number of antennas and the phase shifting resolution. Assuming the bit number of the phase shifters is p for an array with N antennas, the total number of measurements using the REV method equals to 2 p N, which is the measuring number of each antenna multiplied by the total antenna number [18]. If we divide the total calibration time into non-overlapping equal-length time slots and each time slot is allocated to a dedicated antenna, the number of measurements can also be expressed as the measuring number of each antenna multiplied by the number of time slots. This method consumes a large number of measurements to calibrate phased arrays with a large number of antennas, hence drastically reducing the effective radar working time. Some advanced algorithms have been proposed to reduce the number of measurements, such as the extended REV method [19] and the REV-H method [20]. The extended REV method measures signals of multiple antennas simultaneously. The phases of these antennas are successively shifted and then the relative amplitude and phase errors are obtained by expanding the measured power variation through Fourier transformation. However, the total number of measurements of the extended REV method is at least 7.5N + 1. The REV-H method divides antennas into different groups according to the normalized Hadamard matrix. The phases of all the elements in the same group are rotated at the same time, and the composite electric field vector of this group can be obtained by the simplified REV method, through which we can derive the relative electric fields of all elements. This method requires 30N measurements. All these methods mentioned above aim to reduce the measuring number of every antenna. However, the total number of measurements is still proportional to the antenna number, which is still unacceptable for a large array.
In this paper, we propose a fast calibration method based on graph coloring theory. Other than the previously mentioned methods, which try to minimize the measurement number of each antenna, the proposed method focuses on sharing calibration time slots to calibrate multiple antennas simultaneously. Though limited literature has been published in the area of calibration time allocation for phased arrays, plenty of scheduling/resource allocation strategies in other areas can be found [21][22][23][24][25]. Reference [21] converts the intersensor interference avoidance problem into the scheduling problem of multiple access in a shared channel. It adopts the graph coloring theory to transform this scheduling problem into a coloring problem aiming at minimizing the number of used colors. Reference [22] proposes a pilot allocation scheme based on the graph coloring theory to mitigate pilot contamination for multi-cell massive MIMO systems. The reference firstly constructs an interference graph to describe the potential inter-cell interference relationship (ICI) among users.
Then the graph coloring-based scheme is used to eliminate ICI in the interference graph. Reference [23] models a wireless sensor network (WSN) as an Interference-Communication (IC) graph and the receiver nodes or transmission links are assigned with different colors. Thus, it transforms the receiver-based and link-based interference-free channel allocation problems into graph coloring problems and proposes a fair channel allocation protocol that minimizes the maximum interference experienced by any transmission. The issue of phase array calibration is similar to the TDMA scheduling problem in three aspects. First, the goal of array calibration is to achieve the minimum calibration time under the condition that every antenna receives a calibration signal with minimal interference. The goal of a TDMA system is to achieve a minimal number of channels shared by nodes under the condition that every node can swap information without collision. Second, the calibration time slot available for each phased array antenna is analogous to the shared channel accessible for each node in the TDMA system. Third, multiple antennas occupying the same calibration time slot suffer from mutual interferences. Analogically, in the TDMA system, every node sharing the same channel will collide with each other. Graph colouring theory has already been widely used in the TDMA domain to solve this collision problem. Though no research concerning the use of graph theory in the phased array calibration is found, due to the similarity between phased array calibration and TDMA channel allocation, the graph colouring theory is theoretically applicable for the phased array calibration problem.
Compared to the REV method, the extended REV method, and the REV-H method, our proposed method achieves the minimum number of measurements, especially for large-scale phased arrays with omnidirectional radiation pattern. Extensive simulations demonstrate that the proposed method requires at most 8, 9 and 16 calibration time slots for a hexagonal-tiling planar array, square-tilting planar array, and triangular-tiling planar array, respectively.
The paper is organized as follows. In Sections 2 and 3, the theory of the antenna scheduling method for phased array calibration is proposed, which adopts the graph coloring theory to transform the calibration scheduling problem into a coloring problem to minimize the number of colors. In Section 4, numeric simulations are provided to validate performance. Finally, Section 5 concludes the paper.
Problem
Phased array radars, especially digital beam forming based phased array radars, have demonstrated excellent performance. However, excessive amplitude and phase errors could deteriorate its performance. To minimize these errors, radar needs to be calibrated. There are several types of calibration. Factory calibration uses near-field scanning to identify errors, which is time-consuming and can only calibrate static errors initially. On-field calibration is often mandated after deployment to calibrate varied errors due to environment and aging. Our proposed calibration method can be applied to both situations. First, we leverage mutual coupling among omnidirectional antennas to perform local calibration based on the REV method. Second, we expand local calibration to as many antennas as possible with negligible interference, which can drastically reduce the overall calibration time.
This paper targets to calibrate a large-scale planar phased array with omnidirectional antennas distributed regularly, which is demonstrated in Figure 1. The receiving path and transmitting path share an identical antenna. The switch controls the transition between transmitting and receiving modes. Below gives some definition for detailed discussion.
One-hop neighbor antenna: An antenna j is a one-hop neighbor of antenna i if the distance between antenna i and j is the shortest; Antenna j is also called the adjacent antenna of antenna i; Two-hops neighbor antenna: An antenna j is a two-hop neighbor of antenna i if antenna j is the one-hop neighbor of antenna i's one-hop neighbor.
n-hops neighbor antenna: An antenna j is an n-hops neighbor of antenna i if antenna j is the one-hop neighbor of antenna i's (n-1)-hops neighbor.
We take a regular planar array in Figure 2 as an example. Omnidirectional Antennas are represented by circles. The one-hop neighbor antennas of antenna 6 are antenna 2, 5, 7, and 10, and the two-hops neighbor antennas of antenna 6 are antenna 1, 3, 8, 9, 11, and 14.
Mutual Coupling
Given omnidirectional antennas in a regular array with identical spacing between adjacent antennas, every antenna couples identical power to all its n-hops neighbor antennas and presents same mutual coupling coefficients. The coupled power decays with increasing distance. Therefore, any two groups of antennas must keep enough distance to reduce mutual interference when performing local calibration simultaneously, as shown in Figure 3, where antennas in group 1 receive interferences from antennas in group 2. In general, we can suppose that the mutual interference between two groups can be ignored when they are separated more than m-hops away from each other. The value of m is determined by the mutual coupling effect between antennas as well as the signal-to-interference ratio (SIR) required by each group to perform local calibration.
Local Calibration
Take Figure 3 as a simple explanation of the local calibration. First, the radar calibration controller turns an antenna, e.g., antenna 9, into transmission mode and radiating a calibration signal. The controller also turns all adjacent antennas (e.g., antenna 2, 8, 10, and 16) into receiving mode and receiving the radiant calibration signal. By using the mutual coupling-based REV method, the amplitude and phase errors among antenna 2, 8, 10, and 16 can be measured. The REV method measures amplitude of the composte signal of antennas in one group when phase of each antenna is shifted from 0 • to 360 • . The measured amplitude is sinusoidal about the phase shift. The ratio of maximum amplitude value and minimum amplitude value is used to calculate the amplitude error of each antenna. The phase shifter corresponding to the maximum amplitude is used to calculate the phase error of each antenna. Detailed description of the REV method can be found in [15,16].
However, the interference from the other nearby transmitting antennas could deteriorate the mutual coupling-based calibration. For instance, when antenna 9 and its adjacent antennas are performing mutual coupling-based REV method, antenna 11 happens to be in transmitting mode. Antenna 10 receives signals from both antenna 9 and 11. The signal from antenna 11 is interference, which deteriorates the received calibration signal from antenna 9. Hence the conventional mutual coupling-based calibration method activates only one single transmitting antenna at a time, and all the other antennas switch to receiving mode, which is the root cause for the enormous calibration time. In order to cut calibration time, we propose an anti-interference calibration method by allocating antennas with significant mutual interference into other calibration time slots. Consequently, we can disregard the interferences from other nearby transmitting antennas, and receive the desired calibration signal with enough SIR. Consequently, we achieve flexibility to arrange many calibrations concurrently and reduce the total calibration time.
Calibration Time Slot Scheduling vs. Graph Coloring
A phased array contains thousands of antennas. Antenna failures may occur sporadically after a long-term operation, and lead to topology variation. In addition, antennas may also suffer from performance degradation. This must be considered in the calibration scheme.
Taking antennas as dots and the mutual couplings between one-hop neighbor antennas as lines, the calibration time slots scheduling issue of a phased array is transformed into a coloring problem. The objective is to minimize the total calibration time, which is equivalent to minimize the number of used colors for a map.
An example of a square phased array is shown in Figure 4a. When there are defect antennas, it is shown in Figure 4b, in which the defect antenna loses mutual couplings with its adjacent antennas. Hence there are no connection lines. Similar to the TDMA scheme [23,25,26], we divide the total calibration time into non-overlapping equal periodic time slots. These time slots are assigned to different antennas for local amplitude and phase calibration simultaneously. Based upon this, an interference-negligible antenna calibration scheduling scheme can be obtained by arranging as many antenna calibrations in one time slot as possible to achieve efficiency. This is similar to color a graph with minimum colors.
The graph coloring scheme has already been applied for channel assignment problems in the time, frequency and code domains [22,27,28]. Here, we use it to optimize phased array calibration schemes. Given a simple graph, G = (V, E), where V is the set of vertices (i.e., antennas) and E is the set of edges (i.e., mutual couplings between adjacent antennas). The one-hop neighbor vertices of each vertex v are defined as To arrange the calibration time slots of antennas is equivalent to color each vertex, which is essential to determine a color assignment strategy for G, that is f : V(G) → F, where f is the assignment function and F is a set of colors. The values in F are positive integers representing different colors so that any two vertices interfering with each other are given different colors. To improve the calibration efficiency is to minimize the number of colors used so as to achieve minimum time slots for calibration. Since each antenna in Figure 1 is constituted of both transmitter and receiver, the calibration procedure for a single antenna can be divided into two parts, i.e., receiver path calibration and transmitter path calibration.
Receiver path calibration
Given any one vertex v in V, we turn v into transmitting mode and its one-hop neighbor vertices N(v) into receiving mode. Because of the existing of E, every vertex in N(v) can receive the calibration signal transmitted by v. Therefore, the local calibration techniques can be performed to obtain the amplitude and phase mismatches among vertices in N(v). There is one constraint that v's neighbor vertices within m-hops are prohibited from transmission to minimize interference. Amplitude and phase errors of other vertices can be measured similarly.
Transmitter path calibration
Similarly, for the transmitter path calibration, the vertex v is turned into receiving mode, and antennas in N(v) are in transmitting mode so that the signal received by v is a composition of signals transmitted by N(v). Afterward, the local calibration can be applied to solve the amplitude and phase mismatches among antennas in N(v). Similar to the analysis in the receiver path calibration, here v's neighbors within m-hops are prohibited from being in receiving mode.
Thus, the problem considered in this paper is to identify the optimal coloring solution for the following problem: According to [29], finding such an optimal solution for this problem is NP-complete. Therefore, it is critical to identify an effective algorithm.
Graph Coloring Theory-Based Array Calibration
In this section, we propose a fast array calibration method based on graph coloring theory, which does not require array's global topology information. This scheme can also handle dynamic topology variation due to antenna failure or array upscaling. Therefore, the array equipped with the proposed algorithm can ensure large-scale phased array performance under harsh environments, like outer space, desert, and ocean surfaces. To be specific, every vertex v in V obtains the connection information of its neighbors within m-hops in phase one of the calibration. In phase two, they exchange time slots allocation information among neighbor vertices within m-hops to facilitate calibration. A detailed explanation is as follows.
Phase one
In this phase, we build up a neighbor map for every vertex to record its neighbors within m-hops. Each vertex does not have knowledge of its neighbors within m-hops initially, and the corresponding neighbor map is vacant for every vertex. Take the topology of the nine vertices array in Figure 5 as an example. For convenience and illustrative purposes, we assume that m is 3 in this example. Situations with different m can be deduced by analogy. The procedure to build a neighbor map is expressed as follows: 1. Label antennas (vertices) from 1 to 9; 2. Turn antenna 1 into transmitting mode and others into receiving mode; and 3. The antenna 1 broadcasts its identity information. Due to the existence of mutual coupling, all antennas receive this signal with different power level. The signal received by antenna 1's one-hop neighbor antenna (i.e., antenna 2) is much higher than those from other antennas'. We then set up a signal detection threshold to ensure that the received signals from one-hop neighbor antennas exceed the threshold and are identified, while the received signals from other antennas are below the threshold and disregarded. By doing this, an antenna can only propagate its identity to one-hop neighbor antennas; 4. When antenna 2 receives this signal, antenna 2 records antenna 1 into its neighbor map as its one-hop neighbor. However, antenna 2 does not know its two-hop neighbors and three-hop neighbors; 5. Then antenna 2 turns into transmitting mode and broadcasts its identity while the others turn into receiving mode. This signal is received by antenna 1, 3, and 4, so that antenna 1, 3, and 4 add antenna 2 into their neighbor maps accordingly; 6. After all antennas have broadcasted their identities to their one-hop neighbors, all antennas have built up their neighbor maps with knowledge of their one-hop neighbors, as shown in Table 1. Subsequently, we kick off the second round broadcasting to obtain knowledge of their two-hop neighbors. Similarly, each antenna broadcasts its identity together with the identities of their one-hop neighbors to enable the acquisition of antennas' two-hop neighbors, as shown in Table 2. For instance, antenna 2 broadcasts a radio-frequency signal to the air containing its own identity as well as its one-hop neighbors', i.e., antenna 1, 3, and 4. Antenna 1 receives this signal and knows that antenna 3 and 4 are antenna 2's one-hop neighbors and hence antenna 1's two-hops neighbors. Table 2. The neighbor map of 9 antennas after the second round broadcasting. In the third round broadcasting, every antenna broadcasts its identity with the identities of their neighbors within two-hops. Therefore, every antenna obtains the information of its neighbors within three-hops, as shown in Table 3. Table 3. The neighbor map of 9 antennas after the third round broadcasting. For situations with a different m, the aforementioned broadcasting procedure is repeated m times, until each vertex obtains the information of its neighbors within m-hops.
Phase two
In phase two, we use an allocation matrix to derive the number of time slots that each calibration needs, where the matrix's row represents each vertex and the matrix's column represents the allocated calibration time slots (colors). The maximum number of allocated calibration time slots equals the number of vertices when every vertex occupies a time slot. Each grid in the allocation matrix can have three values: 1, 0, and blank, which is defined as follows:
1:
The vertex reserves the time slot for calibration; Blank: The vertex can use this time slot for calibration; and 0: The vertex cannot use this time slot because it is reserved for the vertex's neighbors within three-hops.
Obviously, the maximum number of calibration time slots is 9 for our example. This scheme has been adopted by the methods mentioned in [17,19,20] and is inefficient.
The allocation matrix starts with the initial state that the i-th time slot is reserved for the i-th vertex, as shown in Figure 6. Given the knowledge of its neighbors from phase one, the i-th vertex is able to infer the prohibited time slots that must be reserved for its neighbors within m-hops (m = 3 in this example). For instance, vertex 2 can deduce that time slots 1, 3, 4, 5, 6, and 8 are reserved for its neighbors within three-hops. It can also deduce that the remaining time slots are available to use. Moreover, vertex 2 also has partial knowledge of vertices 1, 3, 4, 5, 6, and 8's neighbors. In particular, vertex 2 knows that time slots 1 and 4 are unavailable for vertex 3 since vertices 1 and 4 are vertex 2's one-hop neighbors and vertex 3's two-hops neighbors. However, vertex 2 does not know whether time slots 7 and 9 are available for vertex 3, because vertex 2 does not know whether vertices 7 and 9 are in vertex 3's neighbor map or not. The deduced allocation matrix of every vertex after building up the neighbor map is shown in Figure 7a. This suggests that the total time slots are proportional to the number of antennas without any calibration time slot scheduling algorithm, and the calibration time is long. The next step is to determine the time slot allocation strategy with negligible interference. For example, it is intuitive that vertices 1 and 6 can execute calibration concurrently in time slot 1, or vertices 1 and 7, vertices 1 and 8, vertices 1 and 9. Since there are many options, we use the following steps to identify the suitable combination.
First, every vertex broadcasts its own row information shown in Figure 7a. Hence, every vertex obtains the time slots allocation information of its own and its one-hop neighbors'. Second, every vertex broadcasts its own row information and its one-hop neighbors'. At last, every vertex broadcasts its own row information and its neighbors' within two-hops. As a result, every vertex can acquire time slot allocation information of its neighbors within three-hops. For instance, vertex 8 knows that time slot 1 is unavailable for vertices 2, 4 and 5 while it is available for vertices 6, and 7. However, vertex 8 does not know whether time slot 1 is available for vertex 9 since vertex 9 is not in its neighbor map. When vertices 6, 7, 8, and 9 execute calibration in time slot 1, they may experience interference. To resolve this issue, an extra mechanism has to be applied to re-allocate time slot 1. We simply assign each vertex with a priority according to its vertex number. For example, vertex 6 has the priority over vertices 7, 8, and 9 to use time slot 1, and vertices 7, 8, and 9 clearly know this. When vertex 6 occupies time slot 1, time slot 6 is released and becomes available to other vertices. Vertex 6 broadcasts its new time slot occupation status to its neighbors within three-hops. As a result, vertex 2, 4, 5, 7, 8, and 9 can infer that time slot 1 is unavailable and time slot 6 becomes available. All these updates are illustrated in Figure 7b, where the time slot updates are highlighted in grey. Figure 7c to f show the assignments of time slots 2, 3, 4, and 5. It is worth noting that only five time slots are used to perform array calibration by using the proposed color assignment strategy.
Simulation
We implement the proposed color assignment based calibration on several typical antenna topologies belonging to uniform phased array. Arrays with antennas scattered randomly are not covered in this paper. As shown in Figure 9, we discuss three antenna array topologies, i.e., hexagonal tiling, square tilting, and triangular tiling, according to the number of one-hop neighbors.
The Mutual Coupling vs. Antenna Separation
First, we evaluate mutual coupling between two circular patch antennas for different separations (S) in HFSS 15, which validates the assumption in Section 2.1 that the interferences are negligible compared to the calibration signal, given that, it meets the condition of the optimal coloring problem shown in Equation (1). Figure 10 shows the dimension of circular patch antennas in the first simulation. Various types of patch antennas with omnidirectional radiation patterns are available [30][31][32][33][34]. We chose a circular patch antenna [30] as the implementation architecture for validation. Figure 11 shows the simulated return losses (S11 and S22) and mutual couplings (S21) with different S. The resonant frequency is 2.33 GHz. The numerical mutual coupling results at 2.33 GHz are listed in Table 4. The mutual coupling decays about 18.56 dB when the separation S is increased from d to 2d (d = 105 mm). When S ≥ 2d, the mutual coupling decays about 13 dB/d. The simulation results indicate that the received signal strength from a one-hop transmission antenna (S = d) is 46.16 dB stronger than that from a four-hop transmission antenna (S = 4d). To achieve a better than 10 dB SIR for an acceptable accuracy of calibrated phase and amplitude mismatch, we mandate that two calibration groups are four-hops or more hops away from each other, as mentioned in Section 2.1. This essentially ensures the received calibration signal SIR, and we will discuss this in the following paragraph. It is worth noting that the mutual coupling between antennas depends on many factors, including antenna geometry, the separation between adjacent antennas, the substrate used, and so on. Arrays with stronger mutual coupling effect require larger m to obtain the desired SIR for each group.
Received Calibration Signal's SIR
By applying the proposed color assignment method, multiple antennas are able to perform calibration simultaneously. Hence, every receiving-mode antenna receives calibration signals from its one-hop neighbor at transmitting-mode as the desired calibration signal, and from other multiple-hop neighbors at transmitting-mode as interferences. In the second simulation, we evaluate the received calibration signal's SIR.
There are two extreme cases during calibration. One is when an antenna locates at the center of the array whose neighbors are symmetrical around it with the strongest interferences, and the other one is when an antenna located at the corner of the array whose neighbors are asymmetrical with longer distances and the least interferences. The other cases have asymmetrical surrounding antennas whose distances are between the above two cases. Here we define the distance between adjacent antennas as d. To simplify the analysis, we only need to evaluate the performance of these two extreme cases.
First, we evaluate the case where the antenna located at the center of the array being activated as a transmitter to calibrate its one-hop neighbors. This corresponds to the receiver path calibration as described in Section 2.3. The calibration signal and interferences are generated at the feeding points of each transmitter antenna with an identical frequency and phase. After propagating through different distances, all interferences are superimposed at a one-hop neighbor antenna of the central antenna, which is in receiving mode and the received calibration signal's SIR can be calculated. We implement this simulation over arrays with different topologies and different number of antennas, whose simulation results are represented by three solid lines in Figure 12. This is the worst condition because the antennas under calibration receive the strongest interference. Second, we evaluate the case where the antenna located at the corner (e.g., top left) of the array serves as the calibration signal transmitter and its one-hop neighbors are the antennas under calibration. We calculate the calibration signal's SIR received by these antennas, whose results are shown as the dotted lines in Figure 12. The legend "Hex & Central & Tx" represents the simulation results when the central antenna is configured as a calibration signal transmitter for arrays with hexagonal topologies. Other legends can be interpreted in a similar fashion.
Furthermore, we turn the central antenna and interference transmitters into receiving mode. Their one-hop antennas are activated in transmitting mode. This situation corresponds to the transmitter path calibration in Section 2.3. The received calibration signal's SIR of the central antenna is shown as the solid lines in Figure 13. The received calibration signal's SIR of the receiver located at the top left corner of the array is shown as the dotted lines in Figure 13. Figures 12 and 13 reveal that the calibration signal's SIR stays stable for arrays with at least 12 × 12 antennas, regardless of the array topology. The lowest SIR is about 14 dB, which is sufficient to perform the REV calibration method. The simulation results also suggest that our proposed method can perform well for arrays with at least 12 × 12 antennas, while it may not be able to achieve good results when the array is small.
Calibration Time Slots vs. Total Number of Antennas
In the third simulation, we simulate the required time slots versus the antenna number on arrays with different scales and topologies in Microsoft Visual Studio 2010. The simulation results in Figure 14 show that the total time slots increase with antenna number increasing when array scale is small. However, when the antenna scale continues to grow, the required time slots quickly saturate. The maximal required time slots used by arrays with hexagonal topology, square topology, and triangular topology are only 8, 9, and 16, respectively. As a comparison, we list time slots used by other methods and the proposed method in Table 5. The proposed method requires minimal time slots among all mentioned methods.
Method Total Time Slots
The REV method N The extended REV method N The REV-H method N Proposed method(hexagonal topology) ≤8 * Proposed method(square topology) ≤9 * Proposed method(triangular topology) ≤16 * * Refer to Figure 14 for specific total time slots used by arrays with various N. Figure 15 presents the simulated total calibration time slots used for 12 × 12 arrays with different antenna failure rates. It can be seen that the total time slots reduce as the antenna failure rate increases. This is because the calibration time slot available for an antenna is confined by its neighbors within three-hops. However, when some of its neighbor antennas fail, the antenna loses mutual-coupling-based connections, which means more calibration time slots belonging to its neighbors become available. Figure 16 compares the total number of measurements consumed by different calibration methods. The REV-H method consumes fewer measurements compared to the REV method, which requires only 30 phase shifting operations per calibration time slot. In addition, its total calibration time slots equal to the number of antennas. Hence, the total number of measurements is 30N. The total number of measurements with the extended REV method is at least 7.5N + 1.
Exploiting the graph coloring algorithm, the proposed method reduces the calibration time slots from N to no more than 16 for the three topologies mentioned above. Therefore, the total number of measurements is 256 KP, where P represents total calibration time slots used and is shown in Figure 14. K represents the repetitions of phase shift in each calibration time slot, which depends on array topology. Take a large planar array with square topology as an example. When an antenna is activated to perform mutual coupling-based REV calibration, its 4 one-hop neighbors sequentially repeat the 256-steps phase shifting, where K = 4. Similarly K = 3 and 6 for hexagonal topology and triangular topology respectively. Simulations suggest that the proposed method demonstrates a great advantage when N is large, as shown in Figure 16. The total number of measurements required by other methods is also summarized in Table 6. Figure 16. Comparison of the total number of measurements between the REV method, the REV-H method, the extended REV method and the proposed method with different topologies. Table 6. Comparison of total number of measurements of a large array with N antennas by different methods.
Method Total Number of Measurements
The REV method 256N The REV-H method 30N The extended REV method 7.5N + 1 Proposed method(hexagonal topology) ≤6144 * Proposed method(square topology) ≤9216 * Proposed method(triangular topology) ≤24,576 * * Refer to Figure 16 for specific total number of measurements used by arrays with various N. Besides, it can be calculated by the expression mentioned above, 256 KP.
The radiation pattern comparison before and after our proposed calibration is shown in Figure 17. A 12 × 12 square-tilted planar array is used in this comparison. Initially, every antenna is identical without any amplitude and phase errors. The radiation pattern of this ideal phased array is shown in Figure 17a. The maximal side lobe power is 13.05 dB lower than the main lobe beam. The plot is central symmetric perfectly. Then every antenna is assigned with random amplitude and phase errors. The standard deviations of amplitude and phase errors are 3 dB and 120 degrees, respectively. The radiation pattern is shown in Figure 17b. Without any calibration, the main lobe power deteriorates by 1.24 dB compared to Figure 17a, and the maximal side lobe power is 1.15 dB higher than the side lobe in Figure 17a. Furthermore, the average power of the side lobes is much higher than Figure 17a (the green and dark blue areas in Figure 17a turn into red and yellow in Figure 17b). To calibrate this phased array, the proposed calibration time allocation method is implemented on this phased array. Hence, each antenna is assigned to a certain time slot. As a result, multiple antennas perform local calibration processes at the same time slot to obtain the amplitude and phase errors among their one-hop neighbors. The SIR of each calibration signal is shown in Figures 12 and 13. After all antennas have been calibrated, the system obtains amplitude and phase errors among all antennas. These known amplitude and phase errors are further trimmed out by electronic circuits or digital signal processing. The radiation pattern after calibration is shown in Figure 17c. The recovered main lobe is 0.92 dB higher than that in Figure 17b, and the maximal side lobe power is improved by about 0.69 dB. Also, the average sidelobe power is much lower than that in Figure 17b.
Conclusions
We have described a graph coloring theory-based method to calibrate omnidirectional phased array antennas with a high speed. We adopted the graph coloring theory to transform the calibration scheduling problem into a coloring problem. To validate, we performed several simulations to compare our proposed method with several other calibration methods. Simulation results show that our method can prominently reduce total calibration time. Further simulations indicate that the proposed method can recover the radiation pattern from amplitude and phase errors. | 8,509.2 | 2018-12-01T00:00:00.000 | [
"Computer Science"
] |
On the Strong Convergence of an Algorithm about Firmly Pseudo-Demicontractive Mappings for the Split Common Fixed-Point Problem
Based on the recent work by Censor and Segal 2009 J. Convex Anal.16 , and inspired by Moudafi 2010 Inverse Problems 26 , we modify the algorithm of demicontractive operators proposed by Moudafi and study the modified algorithm for the class of firmly pseudodemicontractive operators to solve the split common fixed-point problem in a Hilbert space. We also give the strong convergence theorem under some appropriate conditions. Our work improves and/or develops the work of Moudafi, Censor and Segal, and other results.
Introduction
Throughout, let H 1 and H 2 be real Hilbert spaces, and let A : H 1 → H 2 be a bounded linear operator.The split feasibility problem SFP 1-4 is to find a point where C is a closed convex subset of a Hilbert space H 1 and Q is a closed convex subset of a Hilbert space H 2 .If the two closed convex subsets C and Q are fixed point sets of U and T , respectively, where U : H 1 → H 1 and T : H 2 → H 2 are nonlinear operators, we obtain the two-set split common fixed-point problem SCFP .The split common fixed-point problem 5-8 requires to find a common fixed point of a family of operators in one space such that its image under a linear transformation is a common fixed point of another family of operators in the image space.This generalizes the split feasibility problem SFP and the convex feasibility problem CFP .
In 2008, Censor and Segal proposed the split common fixed-point problem SCFP in 5 for directed operators in finite-dimensional Hilbert spaces.They invented an algorithm for the two-set SCFP which generated a sequence {x n } according to the iterative procedure where γ ∈ 0, 2/λ with λ being the spectral radius of the operator A * A. Let x 0 ∈ R n be arbitrary.
They proved the convergence of the algorithm in finite-dimension spaces.Inspired by the work of Censor and Segal, Moudafi 6 introduced the following algorithm for μ-demicontractive operators in Hilbert spaces: where γ ∈ 0, 1 − μ /λ with λ being the spectral radius of the operator A * A, t k ∈ 0, 1 .Let x 0 ∈ H 1 be arbitrary.Using Féjer-monotone and the demiclosed properties of U − I and T − I at the origin, Moudafi proved the convergence theorem.Based on the work of Censor, Segal, and Moudafi, Sheng and Chen gave their results of pseudo-demicontractive operators for the split common fixed-point problem recently.Furthermore, we modify the algorithm 1.4 proposed by Moudafi and extend the operators to the class of firmly pseudo-demicontractive operators 9 in this paper.The firmly pseudo-demicontractive operators are more general class, which properly includes the class of demicontractive operators, pseudo-demicontractive operators, and quasi-nonexpansive mappings and is more desirable, for example, in fixed-point methods in image recovery where, in many cases, it is possible to map the set of images possessing a certain property to the fixed-point set of a nonlinear quasi-nonexpansive operator.Also for the hybrid steepest descent method, see 10 , which is an algorithmic solution to the variational inequality problem over the fixed-point set of certain quasi-nonexpansive mappings and applicable to a broad range of convexly constrained nonlinear inverse problems in real Hilbert spaces.Our work is related to significant real-world applications, see, for instance, 2-4, 11 , where such methods were applied to the inverse problem of intensity-modulated radiation therapy IMRT and to the dynamic emission tomographic image reconstruction.Based on the very recent work in this field, we give an extension of their unified framework to firmly pseudodemicontractive operators and obtain convergence results of a modified algorithm in the context of general Hilbert spaces.
Our paper is organized as follows.Section 2 reviews some preliminaries.Section 3 gives a modified algorithm and shows its strong convergence under some appropriate conditions.Section 4 gives some conclusions briefly.
Preliminaries
To begin with, let us recall that the split common fixed point problem 5 proposed by Censor and Segal in finite spaces.
Given operators U i : R n → R n , i 1, 2, . . ., p, and T j : R m → R m , j 1, 2, . . ., r, with nonempty fixed points sets C i , i 1, 2, . . ., p and Q j , j 1, 2, . . ., r, respectively.The split common fixed point problem SCFP is to find a vector 2.1 In the sequel, we concentrate on the study of the two-set split common fixed-point problem, which is to find that where Definition 2.1.We say that T is demicontractive 6 means that there exists constant β < 1 such that T is pseudo-demicontractive 9 means that there exists constant α > 1 such that Definition 2.2.We say that T is firmly pseudo-demicontractive means that there exist con- The inequality 2.5 is equivalent to An operator satisfying 2.5 will be referred to as a α, β firmly pseudo-demicontractive mapping.It is worth noting that the class of firmly pseudo-demicontractive maps contains important operators such as the demicontractive maps, quasi-nonexpansive maps, and the strictly pseudo-contractive maps with fixed points.
Next, let us recall several concepts: 3 a mapping T : H → H is said to be strictly pseudo-contractive if
2.9
Obviously, the nonexpansive operators are both quasi-nonexpansive and strictly pseudocontractive maps and are well known for being demiclosed.
Lemma 2.3 see 5 .An operator T is said to be closed at a point y ∈ R n if for every x ∈ R n and every sequence x k in R n , such that, In what follows, only the particular case of closed at zero will be used, which is the particular case when y 0. Lemma 2.4 see 12 .Let {a n },{b n } and {δ n } be sequences of nonnegative real numbers satisfying the inequality where Motivated by the former works in 5-9 , we modify the algorithm proposed by Moudafi in 6 for solving SCFP in the more general case when the operators are firmly pseudo-demicontractive, defined on a general Hilbert space and also change several conditions.Then, we prove a strong convergence theorem of the modified algorithm about firmly pseudo-demicontractive operators, which improves and/or develops several corresponding results in this field.We present in this paper only theoretical results of algorithmic developments and convergence theorems.Experimental computational work in other literatures 4, 10 shows the practical viability of this class of algorithms.
Main Results
Let us consider now the two operators split common fixed-point problem SCFP where α, β and μ, θ are two firmly pseudo-demicontractive coefficients of U, T , respectively.
In what follows we always assume that the solution set of the two-operator SCFP is not empty, which denotes by Based on the algorithm of 5, 6 , we develop the following modified algorithm to solve 3.1 .Algorithm 3.1.Initialization: let x 0 ∈ H 1 be arbitrary.
Iterative step: for k ∈ N set u k x k γA * T − I Ax k and let where 1 − β /λ < γ < 0 with λ being the spectral radius of the operator A * A, t k > 0. Proof.Taking y ∈ Γ, that is, y ∈ Fix U , Ay ∈ Fix T , and using 2.6 , we obtain that
3.5
Using the expression of u k in Algorithm 3.1, we also have
3.6
From the definition of λ, it follows that Now, by setting θ 2γ x k −y, A * T −I Ax k and using the fact 2.5 and its equivalent form 2.6 , we infer that
3.8
Substituting 3.8 , 3.7 , and 3.6 into 3.5 , we get the following inequality: and δ k t k μ − 1 , 3.10 can be formulated as We also can denote that a k 1 x k 1 − y 2 , a k x k − y 2 , thus 3.11 can be rewritten as Obviously, {a k }, {b k }, and {δ k } are sequences of nonnegative real numbers.Since Since λ being the spectral radius of the operator A * A, 3.9 also can be reformulated as the following:
3.14
If we take limits from both sides of 3.9 , we can get the following:
3.16
Because T − I is closed at the origin, from 3.15 and 3.16 , using Lemma 2.3, we have T − I Ay 0, that is, T Ay Ay.
3.17
The sequence generated by modified algorithm 3.1 converges strongly to the solution of SCFP.The proof is completed.
Under the same conditions as in Thereom 3.2, if we take β θ 1 i.e., U, T are pseudodemicontractive operators , the strong convergence also holds, so we get the following corollary.
closed at the origin, λ the spectral radius of the operator A * A, then the sequence generated by the modified algorithm 3.4 converges strongly to the solution of 3.1 .
H 1 → H 2 is a bounded linear operator, U : H 1 → H 1 , T : H 2 → H 2 are two firmly pseudo-demicontractive operators with Fix U C and Fix T 5, 6 find x * ∈ C such that Ax * ∈ Q, 3.1where A : | 2,184 | 2012-05-07T00:00:00.000 | [
"Mathematics"
] |
A Non-Intrusive Pressure Sensor by Detecting Multiple Longitudinal Waves
Pressure vessels are widely used in industrial fields, and some of them are safety-critical components in the system—for example, those which contain flammable or explosive material. Therefore, the pressure of these vessels becomes one of the critical measurements for operational management. In the paper, we introduce a new approach to the design of non-intrusive pressure sensors, based on ultrasonic waves. The model of this sensor is built based upon the travel-time change of the critically refracted longitudinal wave (LCR wave) and the reflected longitudinal waves with the pressure. To evaluate the model, experiments are carried out to compare the proposed model with other existing models. The results show that the proposed model can improve the accuracy compared to models based on a single wave.
Introduction
Pressure vessels are widely used in many fields, such as chemical plants, power stations, etc. In many cases, pressure vessels are safety critical because they contain flammable, explosive, virulent, or corrosive materials. Accidents due to explosion and leakage of the contents will lead to serious consequences, and high pressure is one of the most identified causes leading to these kinds of accidents. Therefore, it becomes critically important to measure the pressure of these vessels in an accurate and convenient manner.
It is very common that the pressure is measured by pressure gauges where it is able to directly touch the materials contained. However, it is not always applicable or cost-effective to mount such pressure gauges. For example, sometimes it will require the upgrade of a lot of old equipment, or the installation of gauges will change the integrity of the vessel, which may lead to other safety issues. There is another approach to measure the pressure-non-invasive approaches [1]. There are several proposed methods of this approach, such as the strain gauge method [2], the capacitor method [3], and the ultrasonic method [4][5][6][7]. These methods can solve some problems, but there is still room for improvement in terms of the accuracy. In [2], Hoffmann discusses that the accuracy of the strain gauge method is heavily affected by the environment-particularly the temperature and humidity. In [3], it was stated that the capacitor method is only applicable to small-diameter pressure vessels, and its accuracy is sensitive to the type of medium inside the pressure vessel and the environment.
The ultrasonic method is more promising and has attracted more interest since it was proposed, because it has been identified that the ultrasound wave can carry much richer pressure-related information. Guers et al. [1,5] established the relationship between the amplitude of an ultrasonic wave propagated inside the vessel and the vessel pressure, and used the reflected ultrasonic signal from the fluid-vessel interface to measure the pressure. However, it is greatly influenced by the type of medium inside the vessel. Zhang et al. [4] found that the travel-time changes of surface waves changed linearly with the pressure, and applied the surface waves to the pressure measurement of thin-walled vessels. However, the propagation of the surface wave is severely affected by the roughness condition of the vessel wall. Ling et al. [6] applied L CR (the Critically Refracted Longitudinal) wave and Rayleigh wave simultaneously to reduce the temperature effect, but the system is complicated because it needs at least four ultrasonic probes. Bi et al. [7] achieved higher sensitivity than the L CR wave and Rayleigh wave by employing the reflected longitudinal waves and temperature compensation. In all these methods above, the difference of waves' travel-time under different pressure is considered to be small in both the reflected longitudinal waves and the L CR wave, and this limits the accuracy of pressure measurements.
Meanwhile, temperature is another major factor which will affect the ultrasonic properties [8][9][10]. Thus, in order to increase the capability of interference mitigation and to improve the accuracy of measurement, we proposed a non-invasive method of measuring pressure by taking account of L CR and the multiple reflected longitudinal waves. The rest of the paper is structured as follows: in Section 2, we explain the acoustoelastic effect of the ultrasonic wave, its application in pressure measurement, and propose a multi-waves fusion algorithm used in our proposed measurement method; in Section 3, we describe the design of a new pressure sensor based on our method and the prototype measurement system; in Section 4, we discuss our experiment and analyze its results; finally, in Section 5, we summarize the conclusions that can be drawn from this paper.
Generation of Multiple Waves in the Vessel Wall
When a longitudinal wave is generated from a polymethyl methacrylate (PMMA) wedge and penetrates the outer wall of the pressure vessel with the first critical angle (α I ) (as shown in Figure 1), there will be waveform conversions at the interface. According to Snell's Law, the origin wave will split into two waves: a L CR wave and a refracted shear wave. The L CR wave propagates along the outer wall and will be received by the receiving probe. The refracted shear wave will reach the inner wall of the pressure vessel with the refracted angle (β), and then the first inner reflected longitudinal wave (Lre-I1st) and the first reflected shear wave (Sre-1st) will be generated. The first reflected shear wave (Sre-1st) will reach the outer wall and generate the first reflected longitudinal wave (Lre-1st) and the second reflected shear wave (Sre-2nd), and so on. As shown in Figure 2, the receiving probe at the other end of vessel will receive multiple waves, including the L CR wave and reflected longitudinal waves such as the Lre-1st wave and the Lre-2nd wave, etc. Among the waves received by the receiving probe, the L CR wave will always arrive first because it travels the shortest distance and it travels with the velocity of a longitudinal wave, which is about twice the velocity of a shear wave. The Lre-1st wave reaches the receiving probe with a time delay ∆t (described by Equation 1) after the L CR wave. In this analogy, other adjacent waves will have the same time delay ∆t between them. By utilizing this pattern, we can identify these waves at the receiving probe, and we analyze these waves to compute the pressure inside of the vessel.
where δ is the thickness of the pressure vessel wall, V S and V L are the velocity of the shear wave and the longitudinal wave, respectively.
The Acoustoelastic Effect and the Relationship between Pressure and Travel-Time Change
Hughes and Kelly [11] developed the relationship between the wave speeds and the strain in the pressure vessel, which can be expressed as: = + 2 + (2 + )( + + ) + (4 + 4 + 10 ) (2a) where and are the longitudinal wave velocity and shear wave velocity along the axial direction of the vessel wall respectively. , , and are the strains along the axial, radial, and circumferential directions of the vessel wall, respectively.
is the initial density of the pressure vessel. and are the second-order elastic constants, while l, m, and n are the third-order elastic constants.
By using Hooke's Law [12], we can model the relationships between the strain components in three orthogonal directions and stress as:
The Acoustoelastic Effect and the Relationship between Pressure and Travel-Time Change
Hughes and Kelly [11] developed the relationship between the wave speeds and the strain in the pressure vessel, which can be expressed as: = + 2 + (2 + )( + + ) + (4 + 4 + 10 ) (2a) where and are the longitudinal wave velocity and shear wave velocity along the axial direction of the vessel wall respectively. , , and are the strains along the axial, radial, and circumferential directions of the vessel wall, respectively.
is the initial density of the pressure vessel. and are the second-order elastic constants, while l, m, and n are the third-order elastic constants.
By using Hooke's Law [12], we can model the relationships between the strain components in three orthogonal directions and stress as:
The Acoustoelastic Effect and the Relationship between Pressure and Travel-Time Change
Hughes and Kelly [11] developed the relationship between the wave speeds and the strain in the pressure vessel, which can be expressed as: where V AA and V AR are the longitudinal wave velocity and shear wave velocity along the axial direction of the vessel wall respectively. ε A , ε R , and ε C are the strains along the axial, radial, and circumferential directions of the vessel wall, respectively. ρ 0 is the initial density of the pressure vessel.
λ and µ are the second-order elastic constants, while l, m, and n are the third-order elastic constants. By using Hooke's Law [12], we can model the relationships between the strain components in three orthogonal directions and stress as: where E is the elasticity modulus of vessel material, υ is Poisson's ratio, and σ A and σ C are the axial stress and the circumferential stress, respectively. In the thin-shell theory [13], the stress field in the vessel wall is two-dimensional, including the stress in the axial direction and in the circumferential direction, as described by: where p is the internal pressure of the vessel, R is the average radius of the vessel, and δ is the thickness of the wall. According to the above analysis, we are aware that the velocity of the longitudinal wave and shear wave along the axial direction are affected by the pressure in the vessel. In reality, the velocity of the ultrasonic wave is high if the vessel is made of steel. The speed of the longitudinal wave is about 5800 m/s, and that of the shear wave is about 3100 m/s [14]. While the velocity change is relatively small, it is reasonable to assume that the velocity change is linear with the change of pressure to some extent.
It is worth mentioning that elastic constants λ, µ, l, m, n and elasticity modulus E are all affected by the temperature of the vessel wall. So, the relationship between the wave velocity and the pressure is also affected by the temperature. In practical application, the wave velocity can be obtained by measuring the propagation time between fixed transducers.
The Multi-Waves Fusion Algorithm
The relationships between the pressure and propagation time of different ultrasonic waves have already been studied in our previous works. For example, Rayleigh wave has been discussed in [6], the L CR wave in [6], and the reflected longitudinal wave in [7]. However, the accuracy of pressure measurement depends heavily on the accuracy of the propagation time measurement, which can be affected by multiple factors, including noise, temperature, etc. Furthermore, the change in travel-time induced by pressure is very small in value. So, it is difficult to achieve precise measurement of pressure using a "single" ultrasonic wave method.
Data fusion techniques combine data from multiple sensors and related information from associated data. This can help in improving accuracy and analyzing more specific inferences than in the case of a single sensor alone [15]. As discussed in the previous section, a single receiving probe will at least be able to detect the L CR wave and several reflected longitudinal waves-such as the Lre-1st wave and the Lre-2nd wave. By using data fusion techniques, it is reasonable to believe that pressure measurement accuracy can be improved.
The Pressure Sensor Based on Ultrasonic Wave
The fundamental architecture of the ultrasonic sensor for pressure measurement is shown in Figure 3. The sensor consists of a central processing unit (CPU), a Time-to-Digital Converter (TDC) chip, an exciting module and a receiving module, an ultrasonic transducer, and a switch. The TDC chip is used to generate the exciting signal for the transmitter. The exciting module can amplify the exciting signal. The ultrasonic wave received by the receiver is amplified by the receiving module, and then input into the TDC chip to measure the propagation time. The switch can be turned on or off by the CPU. Variable time delay can be set to capture the propagation time of the L CR wave and the ith (i = 1, 2, . . . ) reflected longitudinal wave. According to the sequential arrangement on the time line, the L CR wave and the reflected longitudinal wave can be separated by the programmable time delay and switch. A TDC chip (TDC-GP21) produced by ACAM™ is used for precise time measurement, which has a measurement range of 3.5 ns (0 ns) to 2.5 µs with the typical resolution of 45 ps (in measurement mode 1).
The Experimental System
We developed a prototype of the proposed ultrasonic sensor and tested it in our experimental system, as shown in Figure 4. The system consists of a pressure pump, a pressure vessel, a digital pressure gauge, and an ultrasonic sensor, which includes two ultrasonic probes: a transmitting probe (T) and a receiving probe (R), ultrasonic exciting and receiving modules, and the control and processing module. exciting signal. The ultrasonic wave received by the receiver is amplified by the receiving module, and then input into the TDC chip to measure the propagation time. The switch can be turned on or off by the CPU. Variable time delay can be set to capture the propagation time of the LCR wave and the ith (i = 1, 2, …) reflected longitudinal wave. According to the sequential arrangement on the time line, the LCR wave and the reflected longitudinal wave can be separated by the programmable time delay and switch. A TDC chip (TDC-GP21) produced by ACAM™ is used for precise time measurement, which has a measurement range of 3.5 ns (0 ns) to 2.5 μs with the typical resolution of 45 ps (in measurement mode 1).
The Experimental System
We developed a prototype of the proposed ultrasonic sensor and tested it in our experimental system, as shown in Figure 4. The system consists of a pressure pump, a pressure vessel, a digital pressure gauge, and an ultrasonic sensor, which includes two ultrasonic probes: a transmitting probe (T) and a receiving probe (R), ultrasonic exciting and receiving modules, and the control and processing module.
The Experimental System
We developed a prototype of the proposed ultrasonic sensor and tested it in our experimental system, as shown in Figure 4. The system consists of a pressure pump, a pressure vessel, a digital pressure gauge, and an ultrasonic sensor, which includes two ultrasonic probes: a transmitting probe (T) and a receiving probe (R), ultrasonic exciting and receiving modules, and the control and processing module.
The pressure pump (model number: SB-10, Shanghai Liyu Metal Co., Ltd., Shanghai, China) is employed to change the pressure in the pressure vessel. A digital pressure gauge (model number: Table 1 shows the properties of the pressure vessel. Of the ultrasonic sensor, the ultrasonic probes have a frequency of 5 MHz, and their separation is 110 mm.
Results Analysis
In the experiments, the receiving probe receives the L CR Wave and a series of reflected longitudinal waves. Waves which are detected with high SNR are considered in the construction of the measurement models. In our experiments (shown in Figure 2), the L CR wave, Lre-1st wave, Lre-2nd wave, Lre-3rd wave, Lre-4th wave, Lre-5th wave, Lre-6th wave, and Lre-7th waves are qualified and therefore selected.
Change in Travel-Time with Temperature and Pressure
Considering the influence of temperature on the travel-time of waves, we controlled the temperature of the pressure vessel ranging from 20.2˝C to 30.2˝C with an interval of 1˝C in the experiments. The first experiment is to establish the relationship between travel-time change and temperature at zero pressure. The data collected from the experiments are shown in Figure 5. MPa and an error of no more than 0.02 MPa is utilized to meter the actual pressure in the pressure vessel. Table 1 shows the properties of the pressure vessel. Of the ultrasonic sensor, the ultrasonic probes have a frequency of 5 MHz, and their separation is 110 mm.
Results Analysis
In the experiments, the receiving probe receives the LCR Wave and a series of reflected longitudinal waves. Waves which are detected with high SNR are considered in the construction of the measurement models. In our experiments (shown in Figure 2), the LCR wave, Lre-1st wave, Lre-2nd wave, Lre-3rd wave, Lre-4th wave, Lre-5th wave, Lre-6th wave, and Lre-7th waves are qualified and therefore selected.
Change in Travel-Time with Temperature and Pressure
Considering the influence of temperature on the travel-time of waves, we controlled the temperature of the pressure vessel ranging from 20.2 °C to 30.2 °C with an interval of 1 °C in the experiments. The first experiment is to establish the relationship between travel-time change and temperature at zero pressure. The data collected from the experiments are shown in Figure 5. The lines in different colors are the fitting results using linear regression corresponding to different waves. Most of the data points are close to the corresponding line. Additionally, the R 2 of all the regression results are above 0.98. It can be concluded that the travel-time change was linearly proportional to the temperature for the LCR wave and the reflected longitudinal waves.
Once we can determine the relationship between travel-time change ( ∆ ( ,∆ ) , and ∆ are pressure and temperature change respectively ) and temperature ( ) , we also need to understand the relationship between ∆ ( ,∆ ) and . Figure 6 shows the data collected from our The lines in different colors are the fitting results using linear regression corresponding to different waves. Most of the data points are close to the corresponding line. Additionally, the R 2 of all the regression results are above 0.98. It can be concluded that the travel-time change was linearly proportional to the temperature for the L CR wave and the reflected longitudinal waves.
Once we can determine the relationship between travel-time change (∆t pp,∆Tq , p and ∆T are pressure and temperature change respectively) and temperature pTq, we also need to understand the relationship between ∆t pp,∆Tq and p. Figure 6 shows the data collected from our experiments. In the past research, linear regression analysis was applied to develop the relationship between travel-time change and pressure [6]. However, the relationship between travel-time change and pressure is not perfectly linear-especially in the low-pressure zone, as shown in Figure 7. The nonlinearity might be caused by the existence of residual stress. experiments. In the past research, linear regression analysis was applied to develop the relationship between travel-time change and pressure [6]. However, the relationship between travel-time change and pressure is not perfectly linear-especially in the low-pressure zone, as shown in Figure 7. The nonlinearity might be caused by the existence of residual stress.
Measurement Models Based on Different Waves
Based on experimental data and the relationships we have identified, the pressure measurement models can be established [16].
The pressure measurement model based on the LCR wave with temperature compensation (Model_LCR_T) can be described as Equation (5a); From Figure 6a, we can see that Lre-4th has the highest sensitivity of travel-time change with pressure. The pressure measurement model based on the Lre-4th wave with temperature compensation (Model_LRE4_T) can be described as Equation (5b); the pressure measurement model based on multiple waves (Model_Linear) can be described as Equation (5c), where coefficients , , and are listed in Table 2; the pressure measurement model based on multiple waves with temperature compensation (Model_Linear_T) can be described as Equation (5d), where coefficients , , , and are listed in Table 3. Considering the nonlinearity between travel-time change and pressure in the low-pressure zone, the nonlinear model (Model_Quadratic) is proposed (which can be described as Equation (5e)), where coefficients , , , and are listed in Table 4. The nonlinear model with temperature compensation (Model_Quadratic_T) can be described as Equation (5f), where coefficients , , , , and E4 are listed in Table 5.
Measurement Models Based on Different Waves
Based on experimental data and the relationships we have identified, the pressure measurement models can be established [16].
The pressure measurement model based on the L CR wave with temperature compensation (Model_LCR_T) can be described as Equation (5a); From Figure 6a, we can see that Lre-4th has the highest sensitivity of travel-time change with pressure. The pressure measurement model based on the Lre-4th wave with temperature compensation (Model_LRE4_T) can be described as Equation (5b); the pressure measurement model based on multiple waves (Model_Linear) can be described as Equation (5c), where coefficients A 1 , B 1i , and C 1 are listed in Table 2; the pressure measurement model based on multiple waves with temperature compensation (Model_Linear_T) can be described as Equation (5d), where coefficients A 2 , B 2i , C 2 , and E 2 are listed in Table 3. Considering the nonlinearity between travel-time change and pressure in the low-pressure zone, the nonlinear model (Model_Quadratic) is proposed (which can be described as Equation (5e)), where coefficients A 3i , B 3i , C 3 , and D 3i are listed in Table 4. The nonlinear model with temperature compensation (Model_Quadratic_T) can be described as Equation (5f), where coefficients A 4i , B 4i , C 4 , D 4i , and E4 are listed in Table 5.
Experimental Results for Pressure Measurement
In Table 6, we compared the coefficient of determination (R 2 ), the adjusted R 2 , and the root-mean-square error (RMSE) of different models. In order to evaluate the accuracy of pressure measurement models, we analyzed the test data set from the experiment in which the temperature ranges from 20 to 30.2˝C and the pressure ranges from 0 to 6.6 MPa. Figure 8 shows the predicted pressure and the reference pressure (measured by the pressure gauge). The area between the dashed lines tagged with +5% and´5% has relative error less than 5%. The middle line indicates where the predicted pressure equals the reference pressure. From the analysis of these experiments, it is reasonable to believe that the first two models (Model_LCR_T and Model_LRE4_T) have a lower accuracy than the last four models (Model_Linear, Model_Linear_T, Model_Quadratic, and Model_Quadratic_T). And the mean relative error (MRE) (excluding data whose pressure equals zero) of the last four models is 4.3188%, 4.5328%, 3.7793%, 3.6925%, respectively. from 0 to 6.6 MPa. Figure 8 shows the predicted pressure and the reference pressure (measured by the pressure gauge). The area between the dashed lines tagged with +5% and −5% has relative error less than 5%. The middle line indicates where the predicted pressure equals the reference pressure. From the analysis of these experiments, it is reasonable to believe that the first two models (Model_LCR_T and Model_LRE4_T) have a lower accuracy than the last four models (Model_Linear, Model_Linear_T, Model_Quadratic, and Model_Quadratic_T). And the mean relative error (MRE) (excluding data whose pressure equals zero) of the last four models is 4.3188%, 4.5328%, 3.7793%, 3.6925%, respectively.
The results show that models based on multiple waves (Model_Linear, Model_Linear_T, Model_Quadratic, and Model_Quadratic_T) are more accurate than models based on single wave (Model_LCR_T and Model_LRE4_T). The nonlinear models with quadratic terms (Model_Quadratic and Model_Quadratic_T) work better than linear models based on multiple waves (Model_Linear, Model_Linear_T). Models without temperature compensation (Model_Linear and Model_Quadratic) can achieve same-level accuracy as models with temperature compensation (Model_Linear_T and Model_Quadratic_T).
Conclusions
In this paper, a new mechanism of pressure measurement based on ultrasonic waves is proposed. A prototype of the ultrasonic sensor is developed and tested in a series of experiments; we can conclude that it is suitable to measure the pressure inside cylindrical pressure vessels by measuring the travel time of various longitudinal waves. In the experiments, we identified that the change in travel time of the critically refracted longitudinal wave (LCR wave) and the reflected longitudinal waves vary linearly with the pressure. By applying a data fusion algorithm, the measurement models of selected waves-including LCR wave and several reflected longitudinal waves-are established. Through experiments at several temperatures, we can conclude that the The results show that models based on multiple waves (Model_Linear, Model_Linear_T, Model_Quadratic, and Model_Quadratic_T) are more accurate than models based on single wave (Model_LCR_T and Model_LRE4_T). The nonlinear models with quadratic terms (Model_Quadratic and Model_Quadratic_T) work better than linear models based on multiple waves (Model_Linear, Model_Linear_T). Models without temperature compensation (Model_Linear and Model_Quadratic) can achieve same-level accuracy as models with temperature compensation (Model_Linear_T and Model_Quadratic_T).
Conclusions
In this paper, a new mechanism of pressure measurement based on ultrasonic waves is proposed. A prototype of the ultrasonic sensor is developed and tested in a series of experiments; we can conclude that it is suitable to measure the pressure inside cylindrical pressure vessels by measuring the travel time of various longitudinal waves. In the experiments, we identified that the change in travel time of the critically refracted longitudinal wave (L CR wave) and the reflected longitudinal waves vary linearly with the pressure. By applying a data fusion algorithm, the measurement models of selected waves-including L CR wave and several reflected longitudinal waves-are established.
Through experiments at several temperatures, we can conclude that the measurement models which take multiple waves into account will achieve higher accuracy than models using a single wave because the models of multiple waves can significantly mitigate the interference of temperature. In addition, we also found in our experiments that the model with quadratic terms would be more accurate.
To acquire the accurate travel-time change of various waves, not only a new mechanism of measurement but also a set of adequate devices are essential; for example, an analog circuit based on TDC is important in the pressure sensor. | 5,958.4 | 2016-08-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Nonlinear transverse vibrations of clamped beams carrying two or three concentrated masses at various locations
In a recent work, a discrete model for geometrically nonlinear transverse free constrained vibrations of beams with various end conditions has been developed and validated via comparison with known results corresponding to nonlinear vibration of clamped beams carrying a concentrated mass. It is extended here to continuous beams carrying two or three concentrated masses at various locations and subjected to large vibration amplitudes. The discrete model used is an N-dof (N-Degrees of Freedom) system made of N masses placed at the ends of solid bars connected by springs, presenting the beam flexural rigidity. The large transverse displacements of the bar ends induce a variation in their lengths giving rise to axial forces modelled by longitudinal springs causing nonlinearity. The calculations made allowed application of the semi-analytical model developed previously for nonlinear structural vibration involving three tensors, namely the mass tensor mij , the linear rigidity tensor kij and the nonlinearity tensor bijkl presenting the effect of the change in the bar lengths. The addition of three concentrated masses studied here induces a change in the mass matrix. By application of Hamilton’s principle and spectral analysis in the modal basis, the nonlinear vibration problem is reduced to a nonlinear algebraic system, using an explicit method, developed previously for non-linear structural vibration. This study shows that concentrated masses may be used for practical purposes to shift the resonant frequency; if the three masses locations are appropriately chosen.
Introduction
In addition, this discrete model may be very easily
Introduction
Application of the discrete model developed in the work [1,2,3] is made here to a Bernoulli beam carrying two or three concentrated masses at various locations and subject to geometrical nonlinear vibration due to large transverse displacements. This model focuses on the known physical phenomenon of the dynamic behavior: the stretching of the beam induces nonlinearity. To escape the occurrence of modal coupling, internal resonance, bifurcation points and chaos, the amplitudes considered here do not exceed the radius of gyration r of the cross section of the beam with respect to the neutral axis given by / r I S = , where S is the beam section area and I is the quadratic moment of the beam with respect to the neutral fiber (in the case of a rectangular
√12
In addition, this discrete model may be very easily adapted to the study of beams with variable cross sections, with concentrated masses or stiffness, or with discontinuities in the section, the stiffness or the material properties.
Presentation and nomenclature
The studied model of a beam with three concentrate dmasses M s , M t and M q is shown in Figure 1: respect to the neutral fiber (in the case of a rectangular section, r is equal to the thickness of the beam divided by √12). This study shows that the developed model may be used to study successfully non-linear vibrations of beams carrying many concentrated masses simply by changing the mass matrix but without any change in the linear and nonlinear stiffness tensors, corresponding to a uniform beam defined in [1]. (1 ( 1) ) ... ... (1) determines the location coordinates of the three masses: We consider the case where the masses do not change the bending stiffness. The general form of the linear stiffness matrix ij k ⎡ ⎤ ⎣ ⎦ to take into account (of order N-2) is unchanged, compared to that of a uniform beam. In the case of a beam carrying masses it can be written as N must be a common multiple of s, t and q.
Dimensionless formulations
The following equations link the dimensional values to the dimensionless ones (with an asterisk): * y y r = (2) 2 * L EI 6 Results in the linear case for a beam concentrated mass ou total mass of the beam where ρ is the density of the beam density, E the Young's modulus, α � le ratio of the concentrated mass i to the total mass of the beam, and i η the non-dimensional location of the concentrated mass i.
Results in the linear case for a beam with two concentrated masses
The results obtained for N = 49 are given in Table 1 It is noted that when the masses are placed in the middle of the beam, the first natural frequency decreases; this is due to the shape of the first vibration mode which has a maximum amplitude in the middle of the beam, while the second eigenmode remains unchanged at this location since the form of this mode has a node in the middle of the beam. Note also that when the masses are placed in 1/4 of the beam span, the second natural frequency decreases until the value of the first natural frequency (for s η = 0,3 ; t η = 0,7 and q η = 0; this is due to the shape of the second vibration mode having three antinodes at these locations of the beam.
Using the known formula , the nonlinear potential energy is deduced as 4 2 8 nl k V y l ≅ .
By the same approach, we calculate the nonlinear energy for all of the bars (see Figure 4), which we compare to the tonsorial expression to determine the coefficients of
Cases of non-linear vibration of a beam carrying two or three concentrated masses
It should be noted that the coefficients of linear rigidity and non-linear tensors k ij and b ijkl calculated in the modal basis are different from those of a continuous beam, because the transition matrix [ ] Φ has undergone a change, due to addiction of concentrated masses, as illustrated in the following equations. (14) to (17)).
Our method consists on applying Hamilton's principle in the modal basis: We obtain a system of nonlinear algebraic equations in We used the explicit method presented in [4] in the Modal basis to solve this equation: The explicit formulation is based on an approximation which consists
Conclusions
The discrete model developed and validated in the case of a continuous beam presented in [1,2,3], was applied to beams with two or three concentrated masses. a a a b i j k r i j k summation for the repeated indices i, j, k the first, second and third-order terms so that the only remaining term applied to beams with two or three concentrated masses. Linear and nonlinear vibrations were examined. This shows the effectiveness of this discrete model, its formulation and the associated program for the study of linear and nonlinear vibrations of a beam with discontinuities in the distribution of masses. The concentrated masses change significantly the beam dynamic response. | 1,570.8 | 2016-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Singularities in FLRW Spacetimes
We point out that past-incompleteness of geodesics in FLRW spacetimes does not necessarily imply that these spacetimes start from a singularity. Namely, if a test particle that follows such a trajectory has a non-vanishing velocity, its energy was super-Planckian at some time in the past if it kept following that geodesic. That indicates a breakdown of the particle's description, which is why we should not consider those trajectories for the definition of an initial singularity. When one only considers test particles that do not have this breakdown of their trajectory, it turns out that the only singular FLRW spacetimes are the ones that have a scale parameter that vanishes at some initial time.
I. INTRODUCTION
Hubble's law, the observed abundance of elements, the cosmic background radiation and the large scale structure formation in the universe are strong evidence that the universe expanded from an initial very high dense state to how we observe it now. However, what happened exactly during this hot density state is still an open problem. One of the questions that needs to be answered is whether there was a singularity at the beginning of spacetime. Such a singularity is in accordance with the very general theorems of Hawking and Penrose [1], [2] defined as a non-spacelike geodesic that is incomplete in the past. One uses this definition because test particles move on these trajectories and thus have only traveled for a finite proper time.
The flatness, horizon and magnetic monopole problem can be solved with a period of exponential expansion in the very early universe [3], [4]. To avoid a singularity before that period, it was suggested that one can have past-eternal inflation in which the universe starts from an almost static universe and flows towards a period of exponential expansion. This way the universe would not have a beginning. One of the characteristics of inflationary models is that the Hubble parameter H is positive. In [5] it was shown that when the average Hubble parameter along a geodesic H av is positive, the geodesic is past-incomplete such that we would have a singularity. This is also applicable to models of eternal inflation in which the average Hubble parameter along geodesics does not go to zero sufficiently fast (i.e. such that we do not have that H av is zero). In [6], a model of eternal inflation was given with all non-spacelike geodesics complete, but in [7] these kind of models were shown to be quantum mechanically unstable. Hence, this would imply that also models of eternal inflation start from a singularity.
In [8] it was pointed out that in De Sitter space the test particles that follow those past-incomplete trajectories and have a non-vanishing velocity, will have an energy that becomes arbitrarily large when going back in the past. This can be generalized to general Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime and means that the energy of such a test particle can become super-Planckian at some initial time such that their description breaks down. This is the reason one should not consider those trajectories when defining a singularity. When one only considers the trajectories of test particles that do not have a breakdown of the description of their trajectory, one finds that the only FLRW spacetimes that start from a singularity are the ones with a scale factor that vanishes at some initial time. This implies that models of eternal inflation or bouncing models are singularity free provided one requires sub-Planckian test particles at all times.
In this paper we first consider the past-(in)completeness of geodesics in spacetimes with an FLRW metric. We review the general singularity theorems of [1], [2] applied to these models and we review the more general (in the context of cosmology) argument of [5]. After that we consider how the energies of test particles change in time. We adopt units in which the velocity of light c = 1.
II. PAST-(IN)COMPLETENESS OF GEODESICS IN FLRW SPACETIMES
Consider a universe with an FLRW metric which describes a spatially homogeneous, isotropic spacetime: (1) where κ is the curvature of spacelike three-surfaces and the scale factor a(t) is normalized such that a(t 1 ) = 1 for some time t 1 . This metric is a good description of our universe, since from experiments as WMAP and Planck, it follows that our universe is spatially homogeneous and isotropic when averaged over large scales. Geodesics γ(τ ), where τ is an affine parameter, satisfy where | V | 2 = g ijγ iγj and is the normalization of the geodesic: = 0 for null geodesics and = −1 for timelike geodesics. We thus have a pastincomplete geodesic when for an initial velocity Notice that when a(t 0 ) = 0 for some time t 0 , all non-spacelike geodesics are pastincomplete. When t 0 = −∞ and the integral (3) is converging, we cannot immediately conclude that geodesics are past-incomplete. It is possible that we only consider a part of the actual spacetime. An example is given by κ = 0, and the Hubble parameter H =ȧ/a satisfyingḢ/H 2 = 0, in which case a(t) = e Ht with H constant. If the whole manifold would be covered by these coordinates, it would result in past-incomplete geodesics. However, this model only describes one half, known as the Poincaré patch, of the larger De Sitter space; the whole space is described by choosing κ = 1, a(t) = cosh(Ht)/H which yields complete geodesics. See also [9] and [10]. When the integral (3) is diverging one can conclude that geodesics in that specific coordinate patch are past-complete. Of course, one can also assume that a certain model with t 0 = −∞ covers the whole spacetime. Then the past-(in)completeness of a geodesic is determined by the integral (3). From (3) we see that in spacetimes with a(t) > A ∈ R >0 all non-spacelike geodesics are pastcomplete. Hence for a spacetime to have a nonspacelike geodesic that is past-incomplete, a(t) needs to become arbitrarily small.
There are a few theorems that prove that a spacetime contains a (past-)incomplete geodesic. Hawking and Penrose, [1], [2], proved theorems that state that when for all geodesics γ and the spacetime obeys a few other conditions such as containing a trapped surface, there is a non-spacelike geodesic that is incomplete. Condition (4) for the metric (1) yields Using Eq. (2) one finds that condition (5) becomes κ ≥ 0 : .. a ≤ 0; ..
In particular for all κ we need that ..
a ≤ 0 at all time, or that the spacetime is non-accelerating. Notice that when .. a ≤ 0, a will always be zero at some time t 0 (this might be in the future), unless a is a positive constant (H = 0) in which case we do not have past-incomplete geodesics. Hence, when we want to use these theorems to say something about an initial singularity in an FLRW spacetime, we need a metric that has a scale parameter a that becomes zero at some time in the past. Describing the matter content of the universe by a perfect fluid where p is the pressure, ρ the energy density and U µ = (1, 0, 0, 0), the condition (6) translates via the Friedmann equations to Although it seems that we have less restrictions when κ ≥ 0, it is impossible that ρ + p < 0 and ρ + 3p ≥ 0 for non-negative spatial curvature. In Fig. 1 one finds an illustration of condition (8). For κ ≥ 0, we have less restrictions, the red shaded area below the dashed line is also included, but it is impossible for an FLRW spacetime with nonnegative spatial curvature to be in that area.
Another theorem that proves that a geodesic is past-incomplete was published in [5] and is also applicable to spacetimes that have a(t) > 0 for all t. It says that when the average Hubble parameter H =ȧ/a along a non-spacelike geodesic, H av , satisfies H av > 0, the geodesic must be pastincomplete. For the metric (1), the argument is as follows. Consider a non-spacelike geodesic γ(τ ) between an initial point γ(τ i ) and a final point γ(τ f ). We can integrate H along the geodesic, using Eq. (2): Notice that for the second equality sign, one should break up the integration domain into parts where a = a(t) is injective, but that one will end up with the same result. Hence, this integral as function of the initial affine parameter τ i is restricted by some fixed final τ f . This means that when τ i has to be some finite value such that the geodesic is past-incomplete. Notice that it is still possible to construct an FLRW spacetime that has H > 0 at all times and complete geodesics. For this we need that H av must become zero when τ i → −∞.
Examples are for instance given by spacetimes with H > 0 and a → a 0 > 0 for t → −∞ (in this case we will have that H → 0 as t → −∞).
III. ENERGY OF TEST PARTICLES
As stated before, the definition of a singularity is based on the trajectories of massive test particles and massless particles. For cosmological spacetimes with an FLRW metric, we would like to study the energies of test particles over time. We will generalize the argument given in [8] for De Sitter space to a general FLRW spacetime.
Using Eq. (2) we find that for massive test particles We already saw that in order for a spacetime to have a past-incomplete non-spacelike geodesic, the scale parameter a needs to become arbitrarily small. With Eq. (11) this then implies that when the particle has a velocity | V (t 1 )| at time t 1 , the velocity and hence the energy of a test particle with mass m become arbitrarily large when moving back to the past.
The statement above for massive test particles carries over to photons. In this case the angular frequency as observed by a comoving observer is Thus also the energy of photons E = ω will become arbitrarily large when moving back to the past.
In [8] it was noted that one cannot have particles with arbitrarily high energies because if such a particle has a nonvanishing interaction cross section with any particle with a non-zero physical number density, then the particle will interact with an infinite number of them, breaking the Cosmological principle. However, the particle's energy cannot become arbitrarily high because it will reach the Planck energy E P = G ≈ 1.22 · 10 19 GeV at some time t. With this energy, the particle's Compton wavelength is approximately equal to its Schwarzschild radius such that it will form a black hole. Therefore, the description of the particle's trajectory will break down. Scattering processes involving vacuum fluctuations may cause the test particle's energy to never reach the Planck energy. If these processes are significant the particle's trajectory is not a geodesic anymore. Near the Planck energy scattering processes are dominated by processes that involve the exchange of a graviton [11]. To estimate this effect we consider photon-photon scattering with the exchange of a graviton. We model the loss of energy of the photon when going back in time as where n is the number density of virtual photons and σ is the cross section of the scattering process. The particle gains energy from the expansion of the universe because −H is positive (when going back in time) and it looses energy from the scattering with virtual photons. We estimate the density of virtual photons as one per Hubble volume: The differential cross section for photon-photon scattering with the exchange of a graviton for un-polarized photons is [12] dσ dΩ = κ 4 8π 2 k 2 sin 2 (θ) 1 + cos 16 1 2 θ + sin 16 1 2 θ where κ = √ 16πG, k is the momentum of the photon and θ is the scattering angle. Since we are primarily interested in large momentum exchange, we neglect small angle scatterings when calculating the total cross section of this process: where we have the relation sin(θ/2)= ξ/2. Taking only angles .26π < θ < .74π into account for the scattering, we have that 2 log 1 ξ − 363 140 +log(4) ≈ 1. With Eqs. (13), (14) and (16) we find that the energy of the test photon does not increase when where E = k is the photon energy. Using the Hubble parameter of cosmic inflation which typically is about − H ≈ 10 13 GeV, we find from (17) that the scattering process becomes significant when Hence, processes involving gravitons will not cause the particle's energy to stay smaller than the Planck energy and a black hole will form. This implies that the description of the particle's trajectory (as a geodesic) breaks down, either because of interaction processes or by the formation of a black hole. The latter definitely happens when the initial energy is near the Planck energy.
Up to now, the maximum energy of a single particle that has been measured is of the order of 10 20 eV [13] which is eight orders of magnitude smaller than the Planck scale. These particles were all cosmic ray particles, so their probable origin is a supernova, an active galactic nucleus, a quasar or a gamma ray-burst. Even when using this energy as an upper bound for the energy of test particles, we have that the description of the trajectories of non-commoving test particles breaks down at times that are certainly later than the Planck era, the period where we have to take quantum gravitational effects into account. In [8] the arbitrarily high energies of test particles were used to argue that these particles should be forbidden in De Sitter space. This can be done by using a different time arrow in the two patches of De Sitter space that one has in the flat slicing. That way the two coordinate patches become non-communicating and describe eternally inflating spacetimes. We will not look into these kind of constructions for general FLRW spacetimes but we want to use the arbitrarily high energies of test particles to give a consistent definition of a singularity. When the particle's description breaks down before it reaches the beginning of its trajectory, it is not very useful to use that particle as an indication for an initial singularity. That is the reason why we suggest to define a singularity in spacetimes with an FLRW metric that has a parameter a that becomes arbitrarily small, as a timelike geodesic with | V (t 1 )| = 0 that is pastincomplete. For such trajectories, we have that dt = dτ which means that a spacetime has no initial singularity when a(t) > 0 for all t ∈ R. Hence, an FLRW spacetime starts from a singularity precisely when a(t 0 ) = 0 at some initial finite time t 0 .
IV. CONCLUSION
We pointed out that spacetimes with an FLRW metric such that a(t) > 0 for all t ∈ R have no initial singularity. This was done by first observing that in models that have a(t) > A ∈ R >0 all non-spacelike geodesics are past-complete. When a becomes arbitrarily small, it is possible that the spacetime contains a past-incomplete geodesic. With the usual definition of a singularity, this means that the spacetime has an initial singularity. However, that definition is based on a test particle that has that geodesic as trajectory. We pointed out that when this particle has an initial velocity, its energy will become super-Planckian at some time in the past if it kept following that geodesic. This means that the particle stops being a test particle and it does not matter that its trajectory is past-incomplete. For a model in which the scale factor becomes arbitrarily small, we should define an initial singularity as a trajectory of a comoving particle that is past-incomplete. This implies that the only FLRW spacetimes with an initial singularity are the ones such that a(t 0 ) = 0 at some initial time t 0 . Hence, bouncing spacetimes and past-eternal inflationary models do not start from a singularity. One can use similar arguments to show that the only FLRW spacetimes that have a singularity in the future are the ones that have a scale factor such that a(t) vanishes at some time in the future. It would be interesting to examine if similar results hold for universes that are obtained by perturbating an FLRW spacetime. | 3,894.8 | 2016-06-03T00:00:00.000 | [
"Physics"
] |
The role of vitamin D receptor gene polymorphisms in gestational diabetes mellitus susceptibility: a meta-analysis
Background Gestational diabetes mellitus (GDM) is a common disease during pregnancy. The association of vitamin D receptor (VDR) polymorphisms with GDM is still controversial. This study aimed to assess the associations between VDR polymorphisms and GDM risk. Methods We searched Cochrane Library, PubMed, and Embase electronic database for all eligible studies published from Jan 1, 1980 to December 31, 2020 to conduct a Meta-analysis. We analyzed four VDR polymorphisms: BsmI (rs1544410), ApaI (rs7975232), TaqI (rs731236), and FokI (rs2228570). Inclusion Criteria: (1) The data can be evaluated; (2) case–control study; and (3) meeting the Hardy–Weinberg’s law. Exclusion criteria: (1) Insufficient or extractable data; (2) Severe publication bias in the data; and (3) duplicate publications. We eventually included 15 studies in seven articles, including 2207 cases and 2706 controls. Results We eventually included 15 studies in seven articles, including 2207 cases and 2706 controls. The data showed that ApaI (rs7975232) VDR gene polymorphism was related with the risk of GDM for the comparison of CC vs AA and recessive model in overall population and FokI (rs2228570) VDR gene polymorphism was associated with the risk of GDM for recessive model in overall population. BsmI (rs1544410) polymorphism was not related with the risk of GDM in overall population. However, in the analysis of subgroups grouped by race, BsmI (rs1544410) has certain correlations. And, the data suggested the TaqI (rs731236) polymorphism was not associated with GDM. Conclusion Based on the meta-analysis, VDR ApaI (rs7975232) and FokI (rs2228570) polymorphisms increase susceptibility to GDM. In the future, it can be used to diagnose and screen molecular biomarkers for GDM patients.
Background
Gestational diabetes mellitus (GDM) is defined as glucose intolerance diagnosed during pregnancy [1]. GDM is characterized by increased insulin resistance, hyperglycemia, and obesity [2][3][4]. The prevalence of GDM is increasing in decades and floating from 1.7 to 11.6% among populations [5]. Although considerable research effort has been focused on GDM, the pathophysiology of the disease remains incompletely understood. Genetic and environmental factors play an important role in the etiology of GDM [2].
Vitamin D deficiency is associated with diabetes mellitus [6][7][8]. Vitamin D receptor (VDR) gene polymorphisms may contribute to development of diabetes mellitus through calcium metabolism alteration and modulation of insulin secretion [9][10][11]. Three single nucleotide polymorphisms BsmI, ApaI and TaqI of the VDR gene were found in the major untranslated regions that regulate gene expression. FokI is a T > C substitution that results in exon 2 [12,13]. The above four VDR gene polymorphisms all have a certain effect on insulin production, and secretion plays a role in the pathogenesis of GDM. Therefore, VDR gene polymorphisms may plays a role in the pathogenesis of GDM.
Many studies have researched the role of VDR gene polymorphisms in GDM. It is reported that VDR has four well-characterized di-allelic polymorphisms: BsmI (A > G, rs1544410), ApaI (A > C, rs7975232), TaqI (T > C, rs731236), and FokI (C > T, rs2228570). However, the results of these studies are still uncertain [13][14][15][16][17][18][19]. Different research teams and research designs might lead to differences in results. The objective is to clarify the effect of VDR gene polymorphisms on GDM risk, we conducted a meta-analysis of all eligible case-control studies.
Search strategy
We identified the keywords "VDR" OR "vitamin D receptor" AND "polymorphism" OR "variant" OR "allele" OR "genotype" OR "gestational diabetes" OR "gestational diabetes mellitus" OR "GDM" to search the articles in Cochrane Library, PubMed, and Embase electronic database. All articles published until December 31, 2020. In addition, manually search the article's reference list for more literature. This article does not collect unpublished data. When multiple articles contain studies of the same population, complete studies were chosen in this study. The language of the publication is limited to English or at least an English abstract.
Data extraction
The data was independently evaluated by two reviewers according to include and exclude criteria for these documents, discuss whether can be included in the meta-analysis. The difference was not resolved until the consensus of each item was reached. The following information was recorded for each study: author's name, year of publication, country of origin, racial descent, source of the control population, genotyping methods, matched factors as well as adjusted factors, number of cases and controls.
Statistical analysis
ORs (odds ratios) and 95% CIs were used to estimate the relationships between VDR gene polymorphism and GDM. For heterogeneity detection, we chose the P value to measure. If P < 0.05, we chose the random effect model, otherwise chose the fixed effect model. For publication bias we calculated Egger and Begg' test, respectively (P < 0.05 was considered representative of statistically significant publication bias). If P < 0.05, it was considered biased. Hardy-Weinberg's law was detected in all control groups. This meta-analysis was performed using STATA (version 14.0; US).
Study selection
We found 186 records through a full search of the database. After several rounds of screening, 36 articles met our requirements. After two individuals independently evaluated the inclusion and exclusion criteria, 15 casecontrol studies in a total of seven articles were included in the study [13][14][15][16][17][18][19]. We identified 186 articles from the database, and after excluding irrelevant and duplicate research, 36 articles entered the next step of analysis. According to the inclusion and exclusion criteria, seven articles were included in our study. The specific retrieval process was shown in Fig. 1. [13][14][15]18].
Publication bias
Funnel plot for comparison of allele models for ApaI ( Fig. 2A), FokI (Fig. 2B) and BsmI ( Fig. 2C) gene polymorphisms was evaluated to intuitively show the situation of publication bias. We used Begg's test and Egger's test to assess publication bias ( Table 2). The results of the Egger's test are P = 0.03 for the contrast of CT vs TT + CC of FokI (rs2228570), while the Begg's test are P = 0.296. Publication bias was not observed in any other analysis under various other comparative models.
BsmI (rs1544410)
The results showed that BsmI (rs1544410) was not related to GDM risk in the general population. In the subgroup, a relationship with higher GDM risk was found in the Asian population allele model, Fig. 8). Other related results of BsmI (rs1544410) were shown in Table 2.
TaqI (rs731236)
The data showed that the TaqI (rs731236) polymorphism of the VDR gene was not related to susceptibility to GDM ( Table 2). TaqI (rs731236) was heterogeneous in CT and TT contrast, overt dominant models, and overdominant models in overall population. In the subgroup, CT versus TT showed heterogeneity between the dominant model and the over-dominant model ( Table 2).
Sensitivity analyses
One-way sensitivity analysis was performed on the data involved in this meta-analysis. Each study of the metaanalysis was deleted to reflect the overall impact of each data set, and the corresponding combined results did not change substantially.
Discussion
GDM has become major health concern worldwide. Studies suggested that VDR gene polymorphisms might have an impact on GDM risk [14,16,18]. However, it is difficult to obtain more accurate results through a single study to determine the relationship between genes and diseases. Meta-analysis can solve the problem of insufficient statistics in a single study, so as to draw more precise conclusions. The association of VDR gene polymorphisms with the incidence of cancer, osteoporosis, and autoimmune thyroid disease has been confirmed in a meta-analysis [20][21][22]. In our study the PICO was shown as follow: P: Gestational diabetes mellitus; I: vitamin D receptor (VDR) polymorphisms; C: control people; O: susceptibility. This study showed ApaI (rs7975232) VDR gene polymorphism was related with GDM for the comparison of CC vs AA and recessive model in overall population and FokI (rs2228570) VDR gene polymorphism was associated with the risk of GDM for recessive model in overall population. The BsmI (rs1544410) and TaqI (rs731236) polymorphisms of the VDR were not related with GDM in overall population. Due to differences between races, evidence that could cause disease is sometimes not very reliable. This suggests that different races influence genetic background differently [23]. Therefore, based on subgroup analysis of different races, it can be found that the same polymorphisms in disease susceptibility in different populations play different roles. In our study, subgroup analysis suggested that the VDR gene ApaI (rs7975232) polymorphism was significantly associated with GDM for the comparison of CC vs AA and recessive model in Asian population and under the comparison of CC vs AA in Caucasian population. For VDR gene FokI (rs2228570) polymorphism, it was significantly associated with GDM under the comparison of CC vs AA and the recessive model in Asian and under allelic model and the recessive model in Caucasian. However, for VDR gene BsmI (rs1544410) polymorphism, it was significantly associated with GDM under allelic model, the comparison of GA vs AA, dominant model, and over-dominant model in Asian and under allelic model, the comparison of GG vs AA, the comparison of GA vs AA, dominant model, recessive model and over-dominant model in African population. Interestingly, the subgroup analysis in Asia and Africa for BsmI (rs1544410) is the opposite, perhaps because of ethnic differences. Of course, it also may be the difference in results caused by the insufficient number of studies included. We certainly need more and better research to get more reliable results. However, this meta-analysis has some limitations. Firstly, heterogeneity may influence the results of this meta-analysis. Nonetheless, we use specific research standards to strictly perform data extraction and analysis to minimize this possibility. Secondly, the study only includes published studies, and the existence of results indicating no meaning or negative results may not be published, and this will increase the likelihood of publication bias. Finally, our results have not been adjusted. If you can get more research data, you should be able to analyze it more accurately. We can obtain more accurate results by adjusting other variables, including age and family history, etc. [24][25][26][27]. In addition, an in-depth analysis of these factors provides a more complete understanding of the linkages between these factors and the risks of GDM.
Conclusions
In summary, VDR ApaI (rs7975232) and FokI (rs2228570) polymorphisms increase susceptibility to GDM. In the future, it can be used to diagnose and screen molecular biomarkers for GDM patients. VDR BsmI (rs1544410) polymorphism was associated with GDM in Asian and African population. VDR TaqI (rs731236) polymorphism was not associated with GDM. | 2,413.6 | 2021-12-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets
The accuracy of most of the existing semisupervised clustering algorithms based on small size of labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering algorithm in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multithreshold to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has a higher accuracy and a more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.
Introduction
Semisupervised clustering algorithm has been studied recently as a method for improving the performance of clustering algorithm, and it allows the human expert to incorporate domain knowledge into the process of clustering and thus guides it to get better results.The use of domain knowledge in clustering task is motivated by the fact that the priori knowledge for some data objects can be obtained in many applications, the priori knowledge can be the labels of the data objects or the relationships between data objects.The "must-link" and "cannot-link" constraints capture relationships among data objects.Labeled objects could be used in clustering algorithms to help determine the groups and get more meaningful results.Most of the existing semisupervised clustering algorithms can be divided into three categories: method based on labeled data [1][2][3][4][5][6][7][8][9], pairwise constraints method [10][11][12][13][14][15][16], and fuzzy semisupervised method [17][18][19][20][21][22].
Semisupervised clustering algorithms based on labeled data utilized the label information to improve the performance of clustering.Semisupervised k-means clustering algorithm is a popular semisupervised clustering method [1][2][3][4].Basu et al. exploited labeled data to generate initial seed clusters [1].Bilenko et al. proposed a principled probabilistic framework based on hidden markov random fields for semisupervised clustering and presented HMRF-KMEANS based on EM and hidden markov random fields framework [2].Leng et al. used labeled data to initialize the process of k-means clustering and obtained the similarity threshold of clusters based on the label information; they also utilized similarity threshold to guide k-means clustering algorithm [3].Dang et al. presented a novel initialization method by propagating the labels of labeled data to more unlabeled data [4].Zhong used deterministic annealing to expand three semisupervised clustering methods seeded clustering, constrained clustering, and feedback clustering, and their performances were compared with real text datasets [5].Semisupervised density-based clustering is another kind of popular semisupervised clustering method [6,7].Lelis and Sander exploited labeled data to find values for .They gave a fixed value of MinPts and used the minimum spanning tree (MST) to partition dataset [6].Böhm and Plant expanded the clusters starting at all labeled objects simultaneously and proposed a semisupervised hierarchical clustering algorithm [7].Guan et al. proposed an asymmetric similarity measure for two different documents and a new semisupervised clustering algorithm by expanding affinity propagation [8].Shiga and Mamitsuka combined soft spectral clustering and label propagation and proposed a semisupervised clustering algorithm by learning locally informative data from multiple graphs [9].
The concepts of two basic pairwise constraints were defined by Wagstaff et al. [10]; they made the insertion of domain knowledge into the clustering (-means in this case) process, and the pairwise constraints were given as the mustlink and cannot-link.Reference [11] divided the pairwise constraints method into instance-level semisupervised clustering [10,12,13] and space-level semisupervised clustering [11,[14][15][16].Wagstaff et al. viewed the pairwise constraints as instance-level constraints in the process of clustering and proposed the semisupervised clustering algorithm COP-KMeans [10].Ruiz et al. proposed a semisupervised clustering algorithm called C-DBSCAN [12], which built a set of initial local clusters by partitioning data space into denser subspaces and cannot-link constraints, then merged density-connected local clusters and enforced the must-link constraints, finally, C-DBSCAN merged adjacent neighborhoods in a bottom-up fashion and enforced the remaining cannot-link constraints.Wang and Davidson combined spectral clustering and pairwise constraints in a principled and flexible manner [13].They used a user-specified threshold to lower-bound how well the given constraints were satisfied, instead of trying to satisfy every given constraint, and they proposed a flexible and generalized framework for constrained spectral clustering.Instance-level semisupervised clustering method introduces pairwise constraints into clustering only and does not utilize the priori knowledge with the highest degree.Space-level semisupervised clustering not only makes use of constraints but also employs the space information provided by the constraints to adjust the process of clustering.
Fuzzy clustering model adopts membership to show the results of clustering, and membership grades are used as probabilities that each data object belongs to every class.In order to improve the performance of fuzzy clustering, the priori knowledge has been applied into it, and most of them used the priori knowledge to modify the objective function.Labeled data [17][18][19] and pairwise constraints [20][21][22] are two principal forms of priori knowledge in the fuzzy semisupervised clustering.Pedrycz and Waletzky improved the performance of clustering algorithm by using the information provided by labeled patterns to aid the process of clustering [17].Bouchachia and Pedrycz utilized the information provided by labeled data to modify the objective function of fuzzy c-means [18].Gao et al. proposed a fuzzy semisupervised clustering algorithm based on distance, which guided the process of clustering by using background information provided by labeled data and optimized the objective function by adding the label information into it [19].Grira which guided the process of solving membership matrix [20].Pedrycz et al. used pairwise constraints information to optimize fuzzy c-means by adding an optimization step into the iteration process [21].Yan et al. proposed fuzzy semisupervised coclustering algorithm for document by using the pairwise constraints to guide the process of constructing it [22].Most of the semisupervised clustering algorithms assume that the labeled dataset or pairwise constraints are given.In practice, getting the priori knowledge is very expensive and time consuming.In addition, if the size of labeled dataset is too small in the process of constructing semisupervised clustering based on labeled data, some clusters may have no labeled data in imbalanced dataset, and then the data in those clusters will be assigned to other clusters forcibly.For example, the dataset shown in Figure 1 contains four clusters (these clusters are labeled with shapes "⋅", "◼", "", and " * ", resp.).The size of cluster "◼" is much less than that of the rest of the clusters.If the labeled data are randomly selected from the whole dataset, the data objects in cluster "◼" are very difficult to be selected.If cluster "◼" has no labeled data, then most of the semisupervised clustering algorithms will miss the cluster "◼".How to select data from imbalanced dataset to guarantee that each cluster has more than one data that can be selected is one work of this paper.One of the solutions to this problem is to adopt active learning method to guide the process of selecting data points, which aims to cover as many clusters as possible.
The active learning method, which aims to achieve high accuracy using labeled data as few as possible, selects informative data actively and labels them by oracle.The active learning method can minimize the cost of obtaining labeled data points greatly without compromising the performance of clustering algorithm, and this is very attractive and valuable in real-world applications.
Perhaps the simplest and most commonly used active learning technique is uncertainty sampling [23], and least confident strategy, margin sampling, and entropy are the most popular uncertainty sampling strategies.Since the most likely label sequence can be efficiently computed using dynamic programming, least confident strategy has been popular with statistical sequence models in information extraction tasks [24,25].However, the least confident strategy only considers information about the most probable label, and it discards information about the remaining label distribution, whereas margin sampling was proposed to correct for a shortcoming in least confident strategy by incorporating the second most likely label [26].Entropy may be the most popular uncertainty sampling strategy, and it is easily applied to more complex structured instances, such as sequences [25] and trees [27].Scheffer and Wrobel presented an active learning algorithm to reduce the required data labeling effort and increase the quality of the learning model by selecting "difficult" unlabeled samples [28].
Although most of the active learning strategies are applied into classification tasks, in the recent years, active learning is also introduced into clustering [29][30][31][32][33][34][35].Mallapragada et al. selected constraints through using a minmax criterion to improve the performance of semisupervised clustering algorithms by selecting most uncertain data [29].The uncertainty sampling technique selects the data objects which lie in the boundaries of clusters, and they are not "representative" of other data in the same cluster.Since knowing their labels is unlikely to improve the performance of the clustering algorithm as a whole, the "representative" method was proposed to solve this problem [30,31].Nguyen and Smeulders selected the most representative samples to avoid repeatedly labeling samples in the same cluster [30].Vu et al. selected useful examples according to a min-max approach to determine the set of labeled data [31].Active learning technique was also introduced into semisupervised clustering based on pairwise constraints [32][33][34][35].Zhao et al. selected informative document pairs for obtaining user feedback by using active learning approach and incorporated instance-level constraints to guide the clustering process in DBSCAN [32].Grira et al. defined an active mechanism for the selection of candidate constraints to minimize the amount of constraints required [33].Wang and Davidson presented an active query strategy based on maximum expected error reduction and a constrained spectral clustering algorithm that can handle both hard and soft constraints [34].Huang et al. conducted a preliminary clustering process to estimate the true clustering assignments and chose informative document pairs by means of learning the intermediate cluster structure [35].
Most of the existing active learning algorithms are poolbased or stream-based, and they are mainly applied in supervised learning.Although active learning is introduced into semisupervised clustering, the performances of these clustering algorithms are unsatisfiying when dealing with the imbalanced and multidensity datasets.The most uncertain data lies on the boundaries of clusters, and it is not "representative" of other data in the same cluster.So knowing its label is unlikely to improve the performance of the clustering algorithm as a whole.This paper selects the data with max density from each cluster which is the result of MST clustering.
Since the dataset is imbalanced, the distribution of labeled data in a given dataset is not the same as the whole data space, and a data point and its -nearest labeled data may not be in the same cluster, which leads to the result that most of the existing semisupervised learning algorithms cannot work well, especially when the size of labeled dataset is very small.However, in the whole data space, the label of a data point should be the same as that of most of its k-nearest neighbors.The proposed semisupervised clustering algorithm with label propagation is based on this idea.It expands the labeled dataset by labeling k-nearest neighbors of labeled dataset based on a threshold.Once an unlabeled data is labeled, it should be added into labeled dataset.If the difference of density between clusters is large in multidensity datasets, the expanding process cannot use the same threshold, and the threshold should be generated automatically according to the density of each cluster to which the labeled data point belongs.A new active semisupervised clustering algorithm, called active semisupervised clustering algorithm for imbalanced and multidensity datasets, is proposed based on the facts previously described.The presented algorithm tries to ensure that the selected data can cover as many clusters as possible in a given imbalanced and multidensity dataset.Those selected data are labeled by oracle, and they are viewed as the initial set of labeled data in the process of semisupervised clustering.The proposed algorithm expands the labeled dataset by propagating labels according to expanded threshold obtained automatically based on the character of each cluster which is obtained by running MST clustering algorithm.The proposed clustering algorithm mainly has the following two advantages in comparison with other semisupervised clustering methods.
(1) The proposed semisupervised clustering method utilizes MST clustering to select data points actively so as to avoid labeling data in the same cluster repeatedly.If we need labeled data objects, we partition the given dataset into clusters by using MST clustering and select actively only one data from each cluster.This method can reduce the number of labeled data points greatly without compromising the performance of clustering, and the selected data can cover as many clusters as possible.
(2) The proposed clustering algorithm achieves label propagation by using labeled data to expand their knearest neighbors according to the criterion that is automatically obtained based on the density of the cluster to which the labeled data point belongs, and the expanding model only requires one parameter.
The rest of this paper is organized as follows.Section 2 gives the proposed semisupervised clustering algorithm.In Section 3, three datasets from UCI Machine Learning Repository and one synthetic dataset are used to demonstrate the proposed algorithm.We summarize our work in Section 4.
Active Semisupervised Clustering for Imbalanced and Multidensity Datasets
The k-nearest neighbors algorithm is most often used for classification, and it gives the label of an unlabeled data by comparing it to the first most similar objects in the training set.Given a dataset = ∪ , where is the labeled dataset and is the unlabeled dataset, the nearest neighbors algorithm labels an unlabeled data with the most frequent label among its -nearest labeled neighbors.
The label of an unlabeled data is given as follows: where and are the labels of the data objects and , respectively, and the meaning of KNN(, ) is defined as given in Definition 1.
Definition 1. KNN(, ).Given one cluster and one data object ∈ , KNN(, ) is the set of k-nearest neighbors of in .
Each classification algorithm requires enough labeled data to achieve high classification accuracy.However, labeling data is quite expensive and time consuming in many realworld applications, and we can get a very small size of labeled dataset.For instance, there are 3 classes in Figure 2, and contains 5 data objects (3 data objects in 1 , 2 data objects in 2 , and no data objects in 3 ).The size of labeled dataset is very small compared with the whole dataset; suppose that we let = 1 for k-nearest neighbors algorithm and use it to label the unlabeled data.All unlabeled data objects in 3 and four unlabeled data objects in 2 are assigned to 1 .
There are two problems for most of classifications and semisupervised clustering algorithms like k-nearest neighbors that lead those unlabeled data to wrong class when the size of labeled dataset is too small.
(1) The first one is that the whole dataset is imbalanced and the size of labeled dataset is too small, and using random method to select labeled data cannot guarantee that each class has more than one data object to be selected.
(2) The second is that the class label of some unlabeled data and that of its k-nearest labeled neighbors are not the same.
An active semisupervised clustering algorithm with label propagation for imbalanced and multidensity datasets is proposed to solve the previously mentioned problems.It uses MST clustering to partition the given dataset into clusters and selects one data object from each cluster as labeled data.This method for data selection can guarantee that the selected data can cover as many clusters as possible.
where || is the number of data in cluster .
Definition 4. (, ).Given one cluster and a data object ∈ , (, ) is defined as follows: The proposed active semisupervised clustering process can be divided into two algorithms: active data selection algorithm (Algorithm 1) and semisupervised clustering algorithm with label propagation (Algorithm 2).Algorithm 1 selects important data which do not lie in the boundaries of clusters and outputs those selected data after labeling them.Algorithm 2 expands the labeled datasets by propagating themselves labels to their neighbors.
If the dataset is imbalanced and we select small number of data points from this kind of datasets randomly, then there exist some clusters which have no data to be selected.Using these selected data as the labeled data to guide the process of clustering, the data objects in clusters which have no data being selected are assigned to other clusters forcibly.Thus, decreases the accuracy of semisupervised clustering algorithm, and the clustering results are unsatisfying.In order to make the selected data cover as many clusters as possible, an active mechanism of selecting data points is presented.It partitions a given dataset into clusters by using MST clustering algorithm; here, is the number of the data objects which will be selected, and only one data (1) Let = || × , is the number of data points to be selected, || is the size of dataset .(2) Use Prime method to construct MST of .
(5) End Foreach (6) Sort all edges in descending order according to . (7) Insert the sorted edges into a list: edgesLst.(8) Foreach edge in edgesLst do (9) Delete edge from MST (10) Check the number of partitions in MST, num (11) point is chosen in each cluster.Since only one data point in each cluster is selected, each of selected data should be the better representations of corresponding cluster, and the centers of clusters and the data with maximum density are two better representation of each cluster.This paper utilizes the method of label propagation to achieve a high accuracy of semisupervised clustering algorithm, and the data objects with maximum densities are chosen by us and are labeled by oracle.The details of selecting data points are shown in Algorithm 1.
Algorithm 1 has two parameters and . is the dataset which will be clustered, and is the percent of the selected data in .Algorithm 1 uses the MST clustering to partition into clusters, and the value of is larger than or equal to the real number of clusters in the dataset .MST clustering algorithm used in Algorithm 1 is proposed by Zahn [36].In the process of labeling the data, we should select the certain data objects which do not lie in the boundaries of clusters.Since the selected data are "representative" of other data in the same cluster, their labels are easy to be labeled, and this can reduce the required data labeling effort and increase the quality of the labeled data.The proposed semisupervised clustering algorithm requires very small number of labeled data, and even some cluster has only one data to be selected as labeled data.The data with max density in one cluster is easier to be labeled compared with the rest of data, so Algorithm 1 selects the data with max density in each cluster and labels them by querying the oracle about labels of the selected data.
How to use small number of labeled data to achieve a higher accuracy of clustering algorithm is a challenging work, especially when the dataset is imbalanced and multidensity.The semisupervised clustering algorithms should use the character of labeled dataset to guide their clustering process.In this paper, firstly, the clustering results of MST are merged according to the label of its labeled data (each cluster has and only has one labeled data).Since the density of each cluster is not unique and the densities of clusters may be different, we should not use the same expanding threshold when utilizing the method of label propagation to expand the labeled dataset.Secondly, the expanding threshold of each cluster should be obtained based on its density automatically, and it is used to expand the labeled dataset in one cluster.Finally, the rest of unlabeled data are assigned with the most frequent label among its k-nearest labeled neighbors.More detailed information is given in Algorithm 2.
The in step 1 of Algorithm 2 is the parameter of knearest neighbors.
Step 2 uses Algorithm 1 to select data points.Since the value of is not less than that of and if is larger than , then some clusters in 1 , 2 , . . ., are in the same cluster.Algorithm 2 can be divided into three stages.Firstly, Step 4 merges the clusters which should be in the same cluster into one. and are two data points in , and and are the labels of them, respectively.If Step 4 merges and into one.Secondly, different clusters may have different densities in multidensity datasets, which leads to the result that the process of label expanding cannot adopt the same expanding threshold on the whole data space when the difference of density between clusters is very large.It should adopt different expanding threshold according to its density of the cluster to which it belongs.Step 9 computes the expanding threshold for each cluster.In each cluster (1 ≤ ≤ ), the labeled data which are in expand their labels to their k-nearest neighbors based on the threshold which is obtained in automatically, and function (, V(), ) uses the expanding threshold V() to expand the labeled dataset by propagating the labels of labeled data in cluster , and the expanding process is given as Algorithm 3. Steps 5 to 13 complete the process of label propagating.Thirdly, since we use the expanding threshold V() in the process of label propagation, then part of unlabeled data in cluster is not be labeled.We should label these unlabeled data after the ending of label propagation and use the k-nearest neighbors rule to deal with the rest of unlabeled data.
Algorithm 3 expands the labeled data in cluster by using the mechanism of label propagation.In cluster , we find out the k-nearest neighbors in for each data in .In cluster , V() is used as the expanding threshold, which is necessary in multidensity dataset.Steps 10 to 15 utilize V() as the threshold to expand the labeled data in cluster .Firstly, we take out one labeled data which has not been used to expand its label to KNN(, ) in .For any data point in KNN(, ), if and only if KNN(, , ) is less than V(), the label of is assigned to .After dealing with KNN(, ), it takes another labeled data which has not been used to expand its k-nearest neighbors in and uses the same method to label its -nearest neighbors.If all of the labeled data in have been used to label their k-nearest neighbors, Algorithm 3 returns the as the result.
Experimental Results and Discussion
We use three standard datasets from UCI Machine Learning Repository [37]-IRIS, Wine, and Ecoli-and one synthetic dataset which is imbalanced and multidensity to demonstrate the performance of the proposed algorithm.The Euclidean metric is employed to compute the distances between data objects.In order to prove that the proposed method has the ability of dealing with the imbalanced and multidensity datasets, we construct three imbalanced datasets by deleting data objects from IRIS, Wine, and Ecoli.Since the priori knowledge is given as the labeled data, we compare the proposed algorithm with SSDBSCAN and Constrained-Kmeans.We use the clustering accuracy to evaluate the clustering results.The notion of clustering accuracy (CA) of a dataset is defined as follows: where || is the size of the unlabeled dataset and | | is the number of labeled data which are labeled correctly by clustering algorithms in . 3 and 4.
Figure 3 shows the experimental results of the 3 algorithms which run on the IRIS dataset.Figure 3 shows that the proposed algorithm has a higher accuracy compared with the SSDBSCAN and Constrained-Kmeans.In addition, the proposed algorithm is more stable than SSDBSCAN and Constrained-Kmeans, especially when the size of labeled dataset is very small.The proposed algorithm can reach stable state when selecting more than 3% of all data (there are only 4 labeled data).The accuracy of Constrained-Kmeans is very low when selecting 3% and 4% of all data, just because there is one cluster which has no data being selected, Constrained-Kmeans partitions IRIS dataset into 2 clusters forcibly, and SSDBSCAN has the same problem.The method of labeled data selection is based on MST clustering, and the experimental results show that the accuracy of clustering can be improved highly when using 4 labeled data to guide the process of clustering.
Figure 4 displays the experimental results of algorithms running on the modified IRIS dataset.The proposed algorithm has a much higher accuracy and more stable state
Constrained-Kmeans
The ratio of labeled data to the whole data Clustering accuracy (%)
Constrained-Kmeans
The ratio of labeled data to the whole data Clustering accuracy (%) than SSDBSCAN and Constrained-Kmeans.When IRIS is modified to be imbalanced and multidensity, the labeled data which is selected by using random method cannot cover all clusters, which makes some clusters assigned to other clusters in force, and this is reflected in SSDBSCAN and Constrained-Kmeans, especially in Constrained-Kmeans.But the accuracy of the proposed algorithm is little influenced.The accuracy of the proposed algorithm reaches 93.8% when selecting 3% of all data, and the presented algorithm can reach stable state when selecting more than 7 labeled data.
Wine Dataset.
Wine dataset contains 178 data objects, and these data can be assigned to 3 clusters whose sizes are 59, 71, and 48, respectively.We adapt the same method to turn Wine dataset into an imbalanced and multidensity dataset by removing 25 data objects from the first cluster randomly, and let modified Wine denote this dataset.We select 2, 3, 4, 5, 6, 7, 8, 9, and 10 percents of the dataset from Wine and the modified Wine to be labeled datasets, respectively, and view the rest of the data as unlabeled datasets.Figures 5 and 6 show the changes of accuracy of the three algorithms.
Figure 5 shows that the proposed algorithm has a more stable state than SSDBSCAN and Constrained-Kmeans, and the accuracy of the proposed algorithm is much higher than that of Constrained-Kmeans.The proposed algorithm can reach stable state when selecting more than 5% of all data (there are only 9 labeled data).Since SSDBSCAN and Constrained-Kmeans use random method to select labeled datasets, there exists some cluster that has no data that can be selected as labeled data, and their accuracy fluctuates along with the change of percent of labeled data and this is also shown in Figure 5.
Figure 6 shows that the accuracy of Constrained-Kmeans and SSDBSCAN fluctuates much larger than that of the The ratio of labeled data to the whole data Clustering accuracy (%) Our method SSDBSCAN proposed method, and the proposed algorithm reaches a stable state when selecting only 5% of all data.When we select more than 4% of all data as the labeled data actively, the accuracy of the proposed method is 94.1%, and when the percent is more than 5, the accuracy is 96.1%.When the labeled data cover all clusters, Constrained-Kmeans has a high clustering accuracy which is close to that of the proposed method.But, in the 9 labeled datasets, only two labeled datasets cover all clusters, and the rest 7 labeled datasets miss some cluster.The accuracy of Constrained-Kmeans is less than 80% on the 7 labeled datasets.The accuracy of SSDBSCAN is less than 80% on all the labeled datasets.Since the data objects of the last three clusters are less than 6 and they can be viewed as noises, in the experiment, we delete these data.We select 2, 3, 4, 5, 6, 7, 8, 9, and 10 percents of the dataset from Ecoli dataset, respectively.The experimental results are shown in Figure 7.The results are similar to those in Figures 4 and 6. Figure 7 shows that the proposed algorithm has a much higher accuracy and more stable state than SSDBSCAN and Constrained-Kmeans.The accuracy of Constrained-Kmeans and SSDBSCAN fluctuates along with the difference of labeled data.
Synthetic Dataset.
In this subsection, we generate 2500 data objects which have two attributes and are viewed as imbalanced and multidensity datasets, and these data can be partitioned into 4 clusters whose sizes are 1000, 100, 800, and 600, respectively.These data are shown in Figure 1.Ten subsets were selected from this synthetic dataset to demonstrate the three algorithms, and the ratios of them to the whole dataset are 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 percents, respectively.The experimental results are shown in Figure 8.
The accuracy of Constrained-Kmeans and SSDBSCAN depends on the labeled data seriously.Although we select The ratio of labeled data to the whole data Clustering accuracy (%) Our method SSDBSCAN 10% of all data, the second cluster has no one data that can be selected as the labeled data, and, in the clustering results, the data objects in the second cluster have to be assigned to other clusters, and this phenomenon manifests in the clustering results of SSDBSCAN.In addition, even if Constrained-Kmeans selects labeled data from all of the clusters, it assigns some data objects from the rest of the three clusters to the second cluster, and this is the reason why the accuracy of Constrained-Kmeans is not improved as the percent of labeled data increases.Figure 8 also shows that the proposed algorithm has a much higher accuracy compared with SSDBSCAN and Constrained-Kmeans.The accuracy of the proposed algorithm exceeds 98% on the 10 subsets.
Conclusion
A new active semisupervised clustering algorithm is proposed which actively selects informative data by dealing with the clustering results of MST.Labeling these data and using them to label their k-nearest neighbors are based on an adaptive threshold.The experimental results show that the proposed semisupervised clustering can reach a stable state which only requires very small size of labeled dataset.However, the accuracy of the proposed semisupervised clustering is much lower in the dataset in which clusters overlap each other than that in the dataset in which the boundaries between clusters are not very vague.In the future, we plan to extend this work to the dataset in which clusters overlap each other.We will work on the data selection strategy in an active manner and the method of label propagation in the imbalanced and multidensity datasets in which clusters overlap each other.
3 Figure 2 :
Figure 2: Low accuracy of KNN on an imbalanced dataset.
Figure 3 :
Figure 3: Clustering accuracy (%) obtained with the proposed algorithm and other algorithms on IRIS.
Figure 4 :
Figure 4: Clustering accuracy (%) obtained with the proposed algorithm and other algorithms on modified IRIS.
Figure 6 :
Figure 6: Clustering accuracy (%) obtained with the proposed algorithm and other algorithms on modified Wine.
Figure 7 :
Figure 7: Clustering accuracy (%) obtained with the proposed algorithm and other algorithms on Ecoli.
Figure 8 :
Figure 8: Clustering accuracy (%) obtained with the proposed algorithm and other algorithms on Synthetic.
Although the k-nearest labeled neighbors of each data in 3 are not in 3 , the k-nearest neighbors are in 3 (if ≤ 4).Since k-nearest neighbors of each data in 3 are unlabeled, knearest neighbors algorithm has to find the nearest labeled neighbor from 1 and 2 .The proposed algorithm selects more important data objects as labeled data and expands its label to its neighbors.Some definitions are given as follows in order to describe the proposed active semisupervised clustering algorithm.Definition 2. KNN (, , ).Given one cluster , one data object ∈ , and ∈ KNN (, ), KNN (, , ) is the distance between and .Definition 3. V ().Given one cluster , V () is defined as follows: Foreach cluster T in 1 , 2 , . . ., do (17) Compute density of each point in T (18) Select one data with max density and add it to (19) End Foreach (20) Query oracle about labels of data in .(21) Return 1 , 2 , . . ., and .Algorithm 1: Selecting data by using MST clustering algorithm ( (, )).Suppose that the number of different labels in is p. (4) Merge 1 , 2 , . . ., into 1 , 2 , . . ., according to labels of data in .(5) Foreach cluster in 1 , 2 , . . ., do Foreach cluster in 1 , 2 , . . ., do
( 1 )
Get all the labeled data which belong to C from .(, V () , ). | 7,477.4 | 2013-11-26T00:00:00.000 | [
"Computer Science"
] |
Characteristics of Microcellular Foamed Ceramic Urethane
Ceramics are non-metallic inorganic materials fabricated from natural or high-purity raw materials through heating and cooling processes. Urethane is a three-dimensional plastic with both elasticity and chemical resistance; moreover, it is used as a rubber substitute. The use of both materials in various applications is gradually increasing. However, as ceramics and urethane have distinctly different properties, this prompted questions regarding the properties of a material that is fabricated using both materials. Therefore, we studied the characteristics of a composite material fabricated through physical foaming using a batch process. The process was conducted with gas saturation, foaming, cooling, and curing. When a specimen of 2 mm thickness was saturated in 5 MPa of CO2 for 2 h, the solubility was 6.43%; when foaming was carried out at a temperature of 150 °C in boiled glycerin, the foaming ratio, cell size, cell density, and void fraction were found to be 43.62%, 24.40 µm, 9.1 × 10⁷ cells/cm2, and 22.11%, respectively. Furthermore, the volume increased by 102.96%, color changed from dark to light gray, hardness decreased by 24%, thermal diffusivity increased by 0.046 mm2/s at 175 °C, and friction coefficient decreased to 0.203. Thus, the microcellular foamed ceramic urethane exhibits a larger volume, lighter weight, and improved thermal conductivity and friction coefficient.
Introduction
The material properties of solids are divided into six categories: mechanical, thermal, magnetic, optical, electrical properties, and corrosion resistance. In engineering materials, processing and performance factors are also indispensable. Since the structure changes depending on how the material is processed, it greatly influences the performance of the final product. Many researchers characterize according to the material properties so that one of the thousands of available materials can be selected for the suitable material. However, the material is rarely perfectly ideal for use. Like the relationship between stiffness and ductility, a moderate compromise must be made [1].
Industrial materials include metals, alloys, polymers, ceramics, glass, composite materials, and natural materials, with over 50,000 types of materials in total. Metals and metal alloys have been predominantly used as industrial materials; however, polymeric and composite materials have gradually garnered attention globally. The choice of materials is a part of the design process. Most mechanics, such as dynamics and statics, have already established a theory or principle, and making them change is difficult. However, it is possible to achieve a new theory or principle that develops new properties of the material. Therefore, the designer's focus should be on the characteristics of the material, not the material itself. The inherent mechanical properties of the material influence the product design and account for the most significant proportion of the cost of manufacturing the final product. The introduction of new or multiple process methods is essential for solving the shortage of industrial materials that may soon occur, particularly considering environmental factors for waste generated by using these materials (Table 1) [2,3]. Ceramic and urethane are used in various practical applications; research and development on them are being actively conducted. The proposed microcellular foamed polymer improves insulating properties owing to the decrease in conductivity and is suitable for various acoustic fields because of the attenuation of the cell/matrix interface. In addition, the polymer has improved thermal and mechanical properties and reduced cost because of an expansion in volume [4]. Ceramics are used in various applications such as biomedical materials, metal-ceramic composites, high specific strength materials, and products using absorbents and filtration catalysts. Several types of foaming methods are being developed, and ceramics that aid in the fabrication of micro-sized cells through foaming are garnering technical attention. Thus, the market has seen rapid development in this regard; moreover, the versatility of ceramic lies in its endless potential.
Therefore, it was necessary to study the properties of a new composite by mixing urethane, which is typically low in density and has good flexibility, and ceramic, with excellent heat and electrical performance and high strength. Although these two materials are very different in material properties, they are readily available around us and have in common that they are already used in many places.
To maximize the advantages of ceramic and urethane, the types of frequently used functional materials have been studied extensively. In particular, because it is a material that can achieve various combinations, repeated attempts to study topics such as blending ratio and method, foaming conditions, and foaming methods are being conducted. With continued research, ceramics, and urethanes have become invaluable in our lives [5][6][7][8].
However, the needs of customers are gradually diversifying because of the rapidly changing environment, lifestyles, and social culture development. Discussions on materials capable of satisfying customers are actively taking place. Furthermore, meeting the requirements of both the necessary specifications and the diversity of product types are emphasized. Additionally, environmental damage and human health play important roles. Owing to rapid industrialization and the use of plastics and freon gas, the planet is experiencing abnormal changes in climate, and various species of animals and plants are under threat. To combat this, the government of each country is preparing various systems and tools that can be legally managed; it has become an essential issue for the consideration of developers and users. Furthermore, the characteristics of a material synthesized using ceramic and urethane has not yet been studied in detail. Therefore, research and development of such a composite material must proceed with the observation of the changed characteristics after its synthesis.
Efficient design of a relatively inexpensive material that incorporates the beneficial characteristics of extant materials is therefore necessary. We conducted a study on how ceramic and urethane cross-linked sheets with completely different physical properties are Polymers 2021, 13, 1817 3 of 13 changed through the batch process, an eco-friendly microcellular foaming method, and the physical changes and improvements in properties gained after the process.
Materials and Methods
The batch process (Figure 1), a foaming method developed by Martini in 1979, enables microcellular foaming in a simple manner. This process can achieve improved results at identical strength, fracture toughness, and insulation properties with the use of less material. The fabrication of microcellular foam varies drastically in terms of cell generation, depending on the saturating gas type, gas pressure, solubility, foaming temperature, and foaming time [9].
Materials and Methods
The batch process (Figure 1), a foaming method developed by Martini in 1979, enables microcellular foaming in a simple manner. This process can achieve improved results at identical strength, fracture toughness, and insulation properties with the use of less material. The fabrication of microcellular foam varies drastically in terms of cell generation, depending on the saturating gas type, gas pressure, solubility, foaming temperature, and foaming time [9].
Materials
In this study, a ceramic urethane sheet (MISUMI Group Inc., Seoul, Korea, Product code No. UTSCM) was purchased. The specimens were cross-linked after mixing the ceramic powder with urethane comprising polyester polyol. The specimens were 25 mm wide, 25 mm long, and 2 mm thick. The mechanical properties of the specimens were as follows: a specific gravity of 1.25~1.28, shore hardness of A 70, heat resistance of 70 °C, cold resistance of −20 °C, tensile strength of 53 MPa, and elongation of 680%. Based on the elemental analysis results (PerkinElmer Inc., Seoul, Korea, Product no. 2400 Series II CHNS/O), the fabricated composite consisted of 55.86% carbon, 7.72% hydrogen, 5.06% nitrogen, 0.60% sulfur, and other components.
This ceramic urethane sheet is manufactured by mixing urethane and granulated ceramic powder to maximize abrasion resistance. The average size of the ceramic particles was 2.04 µm, and maximum size was 13.07 µm, and the minimum size was 0.42 µm. The particle distribution included about 14.22% of large-sized particles (more than 5 µm), 71.56% of medium-sized particles (less than 5 µm to more than 1 µm), and about 14.22% of small-sized particles (less than 1 µm). It was checked that the ceramic powder exhibiting the above distribution occupied 2% of the total specimen.
Equipment
A batch process was used to fabricate the microcellular foam. The specimens were saturated with carbon dioxide (Samheung, Seoul, Korea; product grade no. CO₂) in a vessel with an inner diameter of 52 mm and a height of 200 mm, which was equipped with an electric heater. In the foaming phase, an oil bath (Chang Shin Science Co., Seoul, Korea, Product No. C-WHT) and 99.50% glycerin were used to produce microcellular-sized cells. An electronic densimeter (Alfa Mirage, Model No. MD-300S) and electronic scale (OHAUS, Model no. AR2130) was used to measure the densities and weights of the specimens.
Materials
In this study, a ceramic urethane sheet (MISUMI Group Inc., Seoul, Korea, Product code No. UTSCM) was purchased. The specimens were cross-linked after mixing the ceramic powder with urethane comprising polyester polyol. The specimens were 25 mm wide, 25 mm long, and 2 mm thick. The mechanical properties of the specimens were as follows: a specific gravity of 1.25~1.28, shore hardness of A 70, heat resistance of 70 • C, cold resistance of −20 • C, tensile strength of 53 MPa, and elongation of 680%. Based on the elemental analysis results (PerkinElmer Inc., Seoul, Korea, Product no. 2400 Series II CHNS/O), the fabricated composite consisted of 55.86% carbon, 7.72% hydrogen, 5.06% nitrogen, 0.60% sulfur, and other components.
This ceramic urethane sheet is manufactured by mixing urethane and granulated ceramic powder to maximize abrasion resistance. The average size of the ceramic particles was 2.04 µm, and maximum size was 13.07 µm, and the minimum size was 0.42 µm.
The particle distribution included about 14.22% of large-sized particles (more than 5 µm), 71.56% of medium-sized particles (less than 5 µm to more than 1 µm), and about 14.22% of small-sized particles (less than 1 µm). It was checked that the ceramic powder exhibiting the above distribution occupied 2% of the total specimen.
Equipment
A batch process was used to fabricate the microcellular foam. The specimens were saturated with carbon dioxide (Samheung, Seoul, Korea; product grade no. CO 2 ) in a vessel with an inner diameter of 52 mm and a height of 200 mm, which was equipped with an electric heater. In the foaming phase, an oil bath (Chang Shin Science Co., Seoul, Korea, Product No. C-WHT) and 99.50% glycerin were used to produce microcellular-sized cells. An electronic densimeter (Alfa Mirage, Model No. MD-300S) and electronic scale (OHAUS, Model no. AR2130) was used to measure the densities and weights of the specimens.
Microcellular Foam Processing Method of Ceramic Urethane
Satisfactory solubility is an important condition for microcellular foaming. To predict this condition, information on the thickness of the material used and the diffusion coefficient of the saturation gas are required.
Carbon dioxide, a supercritical fluid, was selected as the saturated gas on behalf of the ethylene, ethane, dinitrogen monoxide, isopropanol, water, toluene, propane, ammonia, and more, since it is non-flammable, non-toxic, inexpensive, and has a near-ambient critical temperature. Carbon dioxide can be described as a hydrophobic solvent with polarity comparable to that of n-hexane. Hence, nonpolar or light molecules easily dissolve in supercritical carbon dioxide, whereas the polar or heavy molecules have very poor solubilities. Some light organic compounds, either polar or nonpolar, are used as co-solvents to enhance carbon dioxide's solvating power and polarity. It is the most representative supercritical fluid, and it dissolves various substances very well. It has been industrially used in various processes, including polymerization, polymer fractionation, particle formation for pharmaceutical and military use, textile dyeing, and cleaning of machines and electronic parts. Moreover, it is one of the most commonly used gases for microcellular foaming. Therefore, we used carbon dioxide for saturation. [10] The diffusion coefficients of carbon dioxide are 1.0 × 10 −5 cm 2 /s and 1.0 × 10 −6 cm 2 /s (at 250 bar and 353 K) [11]. The saturation time was calculated using the following equation: where L is the thickness of the specimen and D is the diffusion coefficient of the gas. Since a specimen of 2 mm thickness and carbon dioxide were used in this experiment, the saturation time, t, was calculated to be a minimum of 2200 s to a maximum of 22,000 s. Based on these data, saturation was checked every 1 h from a minimum of 30 min to a maximum of 7 h at 60 • C of vessel temperature in absolutely the same way as the batch process ( Figure 2) [12].
Polymers 2021, 13, x sity measured using ASTM D792-20 was calculated as the foaming ratio using lowing equation [15,16]: Foaming ratio (%) = × 100 After the foaming process, curing was performed for approximately one we no change in the foaming ratio and shrinkage was observed in all foamed specim The prepared specimens were placed in a gas injection vessel. Four specimens were placed in one vessel, where a space between the specimen and another specimen must be made for optimized saturation of the gas. It was directly related to the foaming ratio. Therefore, we wrapped the specimens with a paper towel and made a gap between them so that the specimens did not touch each other. We confirmed that the lid replaced with the new rubber packing was tightly sealed. The specimen was removed from the vessel after saturating it in 5 MPa carbon dioxide for 2 h. The weights of the specimens were immediately measured, and the initial and post-saturation weights of the specimens were determined. A total of five methods were available to measure the solubility; among them, we employed the gravimetric method [13] (pp. 5-10). The solubility of the specimens was determined using the following equation: Based on the above results, a test was conducted to determine the most suitable temperature and pressure for the fabrication of the microcellular foam ( Table 2). The pressure was increased by 1 MPa (from 1 to 5 MPa) (Figure 3), and the temperature of the boiling glycerin was increased by 10 • C (from 90 to 180 • C) ( Figure 4). Polymers 2021, 13, x sity measured using ASTM D792-20 was calculated as the foaming ratio using lowing equation [15,16]: Foaming ratio (%) = × 100 After the foaming process, curing was performed for approximately one w no change in the foaming ratio and shrinkage was observed in all foamed specim The expansion of the appearance can recognize the cell growth of the microcellula foam and can be accurately confirmed using scanning electron microscopy (SEM; JEO Ltd., Massachusetts, USA, FE-SEM Model no. IT-500). To conduct an SEM analysis, w froze the foamed specimens using liquid nitrogen (supplied by Samheung, Seoul, Korea and subsequently broke and pretreated them to aid clear visibility of the cross-section The processed specimens were photographed after plasma processing (Cressington Sc entific Instruments Ltd., Watford, UK, Sputter coaters Model no. Cressington 108 auto for approximately 120 s [17]. As shown in Figure 5, cells (micro-sized cells) that did no exist before foaming were created during the cellular foaming process. Cell size wa measured using ImageJ, and the following equation expresses the relationship betwee the cell size, density, and void fraction: where is the cell density of the foamed specimen, d is the cell size, and is the voi fraction [12]. The specimens were fully saturated by carbon dioxide foamed by imbalanced heat conditions. Subsequently, as pores form in the material, the density of the specimen decreases. The foaming ratio is a value consisting of the cell size and number of cells of a microcellular foamed plastic, which is demonstrated by measuring densities before and after foaming [14]. The specimens were subsequently placed in an oil bath containing boiled glycerin, and foam processing was performed. The time required was approximately 60 s; when the gas saturation was low, the process required a longer duration, and when the saturation was high, the process required a relatively short duration. The density measured using ASTM D792-20 was calculated as the foaming ratio using the following equation [15,16]: Foaming ratio (%) = Density Before − Density After Density Before × 100 After the foaming process, curing was performed for approximately one week until no change in the foaming ratio and shrinkage was observed in all foamed specimens.
The expansion of the appearance can recognize the cell growth of the microcellular foam and can be accurately confirmed using scanning electron microscopy (SEM; JEOL Ltd., Massachusetts, USA, FE-SEM Model no. IT-500). To conduct an SEM analysis, we froze the foamed specimens using liquid nitrogen (supplied by Samheung, Seoul, Korea) and subsequently broke and pretreated them to aid clear visibility of the cross-section. The processed specimens were photographed after plasma processing (Cressington Scientific Instruments Ltd., Watford, UK, Sputter coaters Model no. Cressington 108 auto) for approximately 120 s [17]. As shown in Figure 5, cells (micro-sized cells) that did not exist before foaming were created during the cellular foaming process. Cell size was measured using ImageJ, and the following equation expresses the relationship between the cell size, density, and void fraction: and, where N 0 is the cell density of the foamed specimen, d is the cell size, and V f is the void fraction [12].
Results
The types of saturation gas, pressure, solubility, foaming time, and foaming temperature directly influence the fabrication of the microcellular foam [4]. We observed that 2 h was the optimal saturation time for a 2 mm specimen, the solubility was approximately 6.43%, the saturation gas pressure was 5 MPa, and the ideal foaming temperature was 150 °C for the base conditions.
Cell Growth and Changes in External Shape
The foaming ratio achieved was 43.62%, the average cell size was found to be 24.40 µm, the cell density was 9.1 × 10⁷ cells/cm 2 , and the void fraction was 22.11% (Table 3). Table 3. Results of the fabrication of microcellular foamed ceramic urethane.
Shape and Hardness
An alteration in the shape and color of the microcellular foamed ceramic urethane was clearly visible (Figure 6). These changes can be determined by measuring the weight, density, and thickness of the specimens.
The volume of the specimen increased by 102.97% and the color changed from dark to light gray; however, the specimen that had a higher foaming ratio turned brighter. Shore A hardness was measured using a digital hardness tester (HANDO-Midyo, Korea, Shore A hardness tester, Model No. HD-KR10A). The hardness of the non-foamed ceramic urethane was 70, and that of the microcellular foamed ceramic urethane was 56 (Figure 7), that is, the hardness decreased by approximately 24.5% [18].
Results
The types of saturation gas, pressure, solubility, foaming time, and foaming temperature directly influence the fabrication of the microcellular foam [4]. We observed that 2 h was the optimal saturation time for a 2 mm specimen, the solubility was approximately 6.43%, the saturation gas pressure was 5 MPa, and the ideal foaming temperature was 150 • C for the base conditions.
Cell Growth and Changes in External Shape
The foaming ratio achieved was 43.62%, the average cell size was found to be 24.40 µm, the cell density was 9.1 × 10 7 cells/cm 2 , and the void fraction was 22.11% (Table 3).
Shape and Hardness
An alteration in the shape and color of the microcellular foamed ceramic urethane was clearly visible (Figure 6). These changes can be determined by measuring the weight, density, and thickness of the specimens.
Thermal Diffusivity
The changes in the thermal diffusivity of the ceramic urethane specimens were analyzed using LFA 457 (MicroFlash ® , NETZSCH Korea Co., Ltd., Paju, Korea, Laser flash apparatus Model no. LFA 457). The measurements were performed starting from a reference temperature of 25 °C to a maximum temperature of 200 °C. Additionally, measurements were made at intervals of 25 °C, and nitrogen was used as the gas.
When the two materials of different thermal properties were mixed, the thermal characteristics of the fabricated material were similar to those of urethane. We observed that the thermal diffusivity of the original specimen gradually decreased as the measurement temperature increased, whereas the thermal diffusivity of the foamed specimen increased as the measurement temperature increased (Figure 8). Thus, we confirmed that ceramic urethane exhibits improved thermal diffusivity through microcellular foam processing [19,20]. The volume of the specimen increased by 102.97% and the color changed from dark to light gray; however, the specimen that had a higher foaming ratio turned brighter. Shore A hardness was measured using a digital hardness tester (HANDO-Midyo, Korea, Shore A hardness tester, Model No. HD-KR10A). The hardness of the non-foamed ceramic urethane was 70, and that of the microcellular foamed ceramic urethane was 56 (Figure 7), that is, the hardness decreased by approximately 24.5% [18].
Thermal Diffusivity
The changes in the thermal diffusivity of the ceramic urethane specimens were analyzed using LFA 457 (MicroFlash ® , NETZSCH Korea Co., Ltd., Paju, Korea, Laser flash apparatus Model no. LFA 457). The measurements were performed starting from a reference temperature of 25 °C to a maximum temperature of 200 °C. Additionally, measurements were made at intervals of 25 °C, and nitrogen was used as the gas.
When the two materials of different thermal properties were mixed, the thermal characteristics of the fabricated material were similar to those of urethane. We observed that the thermal diffusivity of the original specimen gradually decreased as the measurement temperature increased, whereas the thermal diffusivity of the foamed specimen increased as the measurement temperature increased (Figure 8). Thus, we confirmed that ceramic urethane exhibits improved thermal diffusivity through microcellular foam processing [19,20].
Thermal Diffusivity
The changes in the thermal diffusivity of the ceramic urethane specimens were analyzed using LFA 457 (MicroFlash ® , NETZSCH Korea Co., Ltd., Paju, Korea, Laser flash apparatus Model no. LFA 457). The measurements were performed starting from a reference temperature of 25 • C to a maximum temperature of 200 • C. Additionally, measurements were made at intervals of 25 • C, and nitrogen was used as the gas.
When the two materials of different thermal properties were mixed, the thermal characteristics of the fabricated material were similar to those of urethane. We observed that the thermal diffusivity of the original specimen gradually decreased as the measurement temperature increased, whereas the thermal diffusivity of the foamed specimen increased as the measurement temperature increased (Figure 8). Thus, we confirmed that ceramic urethane exhibits improved thermal diffusivity through microcellular foam processing [19,20].
Coefficient of Friction
The change in the friction coefficient of the ceramic urethane sheets was measured using a tribometer (Anton Paar Korea Ltd., Seoul, Korea, Pin-on-Disk tribometer Model no. TRB 3 ). The test conditions were as follows: 1 mm alumina milling media balls, rotation radius was 5 mm, vertical load was 5 N, and RPM was 5.6, and the results were confirmed after 180 cycles (Figure 9) [21].
The coefficient of friction of the original sheet was 0.583, which gradually decreased when the foaming temperature was increased. After foaming at 150 °C, a total decrease from 0.38 to 0.203 was observed, that is, the coefficient of friction approximately halved after foaming ( Figure 10).
When the surfaces in contact move relative to each other, the friction between them causes the kinetic energy to be converted into thermal energy. In addition, this property manifests as another critical result, leading to poor performance or component damage. This can be prevented in two ways: by coating the surface to strengthen or reduce friction [2,22,23].
Coefficient of Friction
The change in the friction coefficient of the ceramic urethane sheets was measured using a tribometer (Anton Paar Korea Ltd., Seoul, Korea, Pin-on-Disk tribometer Model no. TRB 3 ). The test conditions were as follows: 1 mm alumina milling media balls, rotation radius was 5 mm, vertical load was 5 N, and RPM was 5.6, and the results were confirmed after 180 cycles (Figure 9) [21].
Coefficient of Friction
The change in the friction coefficient of the ceramic urethane sheets was measure using a tribometer (Anton Paar Korea Ltd., Seoul, Korea, Pin-on-Disk tribometer Mode no. TRB 3 ). The test conditions were as follows: 1 mm alumina milling media balls, rota tion radius was 5 mm, vertical load was 5 N, and RPM was 5.6, and the results wer confirmed after 180 cycles ( Figure 9) [21].
The coefficient of friction of the original sheet was 0.583, which gradually decrease when the foaming temperature was increased. After foaming at 150 °C, a total decreas from 0.38 to 0.203 was observed, that is, the coefficient of friction approximately halve after foaming ( Figure 10).
When the surfaces in contact move relative to each other, the friction between them causes the kinetic energy to be converted into thermal energy. In addition, this propert manifests as another critical result, leading to poor performance or component damag This can be prevented in two ways: by coating the surface to strengthen or reduce fric tion [2,22,23]. The coefficient of friction of the original sheet was 0.583, which gradually decreased when the foaming temperature was increased. After foaming at 150 • C, a total decrease from 0.38 to 0.203 was observed, that is, the coefficient of friction approximately halved after foaming ( Figure 10). Polymers 2021, 13, x 11 of 13 Figure 10. Changes in the friction coefficient of ceramic urethane.
Discussion
Ceramic and urethane are materials with different properties. By treating these two materials as a single material, we fabricated a composite material comprising micro-cells using a batch process. Parameters such as the types of saturation gas, saturation pressure, saturation temperature, foaming time, and temperature directly influence the microcellular foam. Therefore, we studied and derived the ideal value for foaming ceramic urethane through experiments and finally obtained a microcellular foamed ceramic urethane material.
The importance of optimization and systematization of microcellular foaming conditions based on the construction method was established in this study. Even if identical ceramic urethane sheets were to be used, the optimal usage would depend on the size, shape, and density of the cell obtained through the microcellular foaming process. It was possible to obtain the mechanical and thermal properties that could be compared with rubber or metal by using the existing standard; however, it was difficult to confirm the inherent properties of ceramic urethane. Lastly, we attempted to discover new characteristics of the fabricated material by employing various techniques during the microcellular foaming process.
A composite material fabricated by mixing two or more materials is obtained by integrating the best characteristics of each constituent material and by achieving a combination of properties that are not exhibited by any single material. Ceramic and urethane have advantages and disadvantages on opposite sides; however, it is worth developing ceramic urethane characteristics by controlling methods and various conditions. Ceramic and urethane are currently very easily found around us and are useful materials in many places. In terms of price and safety, they are also excellent. However, since this microcellular foamed ceramic urethane material that we have developed has never been used before and has different characteristics than those found in a single product, it is necessary to find a place where the microcellular foamed ceramic urethane sheet can be applied in terms of engineering. Currently, applications using ceramic urethane are deficient. These include construction materials [24], dendrite-free solid-state batteries [25], insulation paints, and waterproof paints, and more. Therefore, we have a plan to research microcellular foamed ceramic urethane of soundproofing and absorption sound. [26] The physical properties of the material, which are further expanded after foaming, can be beneficial and can also reduce manufacturing costs by increasing the material's properties with fewer processing steps and time. In addition, once a suitable application for this material is found, and we are confident of its remarkable perfor- When the surfaces in contact move relative to each other, the friction between them causes the kinetic energy to be converted into thermal energy. In addition, this property manifests as another critical result, leading to poor performance or component damage. This can be prevented in two ways: by coating the surface to strengthen or reduce friction [2,22,23].
Discussion
Ceramic and urethane are materials with different properties. By treating these two materials as a single material, we fabricated a composite material comprising micro-cells using a batch process. Parameters such as the types of saturation gas, saturation pressure, saturation temperature, foaming time, and temperature directly influence the microcellular foam. Therefore, we studied and derived the ideal value for foaming ceramic urethane through experiments and finally obtained a microcellular foamed ceramic urethane material.
The importance of optimization and systematization of microcellular foaming conditions based on the construction method was established in this study. Even if identical ceramic urethane sheets were to be used, the optimal usage would depend on the size, shape, and density of the cell obtained through the microcellular foaming process. It was possible to obtain the mechanical and thermal properties that could be compared with rubber or metal by using the existing standard; however, it was difficult to confirm the inherent properties of ceramic urethane. Lastly, we attempted to discover new characteristics of the fabricated material by employing various techniques during the microcellular foaming process.
A composite material fabricated by mixing two or more materials is obtained by integrating the best characteristics of each constituent material and by achieving a combination of properties that are not exhibited by any single material. Ceramic and urethane have advantages and disadvantages on opposite sides; however, it is worth developing ceramic urethane characteristics by controlling methods and various conditions. Ceramic and urethane are currently very easily found around us and are useful materials in many places. In terms of price and safety, they are also excellent. However, since this microcellular foamed ceramic urethane material that we have developed has never been used before and has different characteristics than those found in a single product, it is necessary to find a place where the microcellular foamed ceramic urethane sheet can be applied in terms of engineering. Currently, applications using ceramic urethane are deficient. These include construction materials [24], dendrite-free solid-state batteries [25], insulation paints, and waterproof paints, and more. Therefore, we have a plan to research microcellular foamed ceramic urethane of soundproofing and absorption sound. [26] The physical properties of the material, which are further expanded after foaming, can be beneficial and can also reduce manufacturing costs by increasing the material's properties with fewer processing steps and time. In addition, once a suitable application for this material is found, and we are confident of its remarkable performance. Moreover, its infinite capacity will continue to be identified and confirmed through future research.
Conclusions
We succeeded in fabricating microcellular foam using a batch process; the fabricated material, which comprises ceramic and urethane, is harmless to humans and the environment. Various studies and experiments have been conducted. The specimens were saturated with carbon dioxide at a pressure of 5 MPa for 2 h at 60 • C and then physically foamed in a glycerin bath at 150 • C.
We achieved the following results from the experiment: the solubility, foaming ratio, cell size, cell density, and void fraction were found to be 6.43%, 43.62%, 24.40 µm, 9.1 × 10 7 cells/cm 2 , and 22.11%, respectively. Furthermore, the volume increased by 102.97%, the color changed from dark to light gray, the hardness decreased by 24%, the thermal diffusivity increased by 0.046 mm 2 /s at 175 • C, and the friction coefficient decreased by approximately half to 0.203.
According to the results, synthesis using the microcellular foaming process results in synthesized material that has a larger volume, lighter weight, improved thermal conductivity, and lower friction coefficient. In other words, it exhibits the positive characteristics of ceramic and urethane. However, it is necessary to study other characteristics, such as vibration and soundproofing, in the further study of this material.
Data Availability Statement:
The data presented in this study are available upon request from the first author. | 7,392 | 2021-05-31T00:00:00.000 | [
"Materials Science"
] |
Improvement of AlGaN/GaN HEMTs Linearity Using Etched-Fin Gate Structure for Ka Band Applications
In this paper, AlGaN/GaN high electron mobility transistors (HEMTs) with etched-fin gate structures fabricated to improve device linearity for Ka-band application are reported. Within the proposed study of planar, one-etched-fin, four-etched-fin, and nine-etched-fin devices, which have 50-μm, 25-μm, 10-μm, and 5-μm partial gate widths, respectively, the four-etched-fin gate AlGaN/GaN HEMT devices have demonstrated optimized device linearity with respect to the extrinsic transconductance (Gm) value, the output third order intercept point (OIP3), and the third-order intermodulation output power (IMD3) level. The IMD3 is improved by 7 dB at 30 GHz for the 4 × 50 μm HEMT device. The OIP3 is found to reach a maximum value of 36.43 dBm with the four-etched-fin device, which exhibits high potential for the advancement of wireless power amplifier components for Ka band applications.
Introduction
Over the past decade, the world has seen the rapid spread of transmitting electronic devices in favor of networking and communication systems, such as artificial intelligence (AI), Internet of Things (IoT), and big data. Researchers and industrial engineers have been designing high-speed and high-stability wireless components with III-V semiconductor materials [1][2][3][4]. Therefore, high electron mobility transistors (HEMTs) have now been widely used in high-frequency electronics, such as antennas and broadband satellites [2]. In addition to Gallium Arsenide (GaAs) HEMTs [5][6][7], Gallium Nitride (GaN) HEMTs [8][9][10][11][12] have been used in high radio frequency (RF) power components, such as an mm-wave power amplifier, due to its high breakdown voltage, high critical field, wide bandgap, and high electron peak velocity [13][14][15][16].
At an ideal linear region, an RF HEMT device works as an active device in a Monolithic Microwave Integrated Circuit (MMIC) and could amplify the RF signals with a constant power gain. Nevertheless, the nonlinear characteristics of a realistic solid state HEMT device cause the power gain to decrease after a certain input power, thereby decreasing the device's output power. Moreover, in a two harmonic wave tone load-pull test, the intermodulation distortion signals (IMD) of the device under test (DUT) increased rapidly with the input power level, which ultimately distorted the fundamental power signal. This is because the input signal of one of the harmonic waveforms intermodulates with the other, and generates third-order intermodulation products, which have now been widely used to quantify the linearity performance of HEMT devices [17,18].
When it comes to improving the linearity of a HEMT device at very high frequency, i.e., the Ka band, the gate controllability performance, such as the G m and G m flatness of the device, becomes the critical factor for the improvement of the device's linearity characteristics, the device, becomes the critical factor for the improvement of the device's linearity characteristics, such as the third-order output intercept point (OIP3) and third-order intermodulation output power (IMD3) [19]. A low second derivative value of the curve, meaning a flat curve, is favorable for the RF HEMT device to show that the device can withstand the gate voltage swing under high RF input power, keeping the device's high switching capability as stable as possible [20]. Researchers have shown that better gate controllability could be achieved by etching AlGaN/GaN device gates along the gate width to form fin-shaped gates, overcoming the deficiencies of small gate length GaN devices, which have exhibited poor gate control over the 2DEG channel [21]. The fin-shaped gate structure provides the GaN devices with a high value, as well as a flatter curve [22,23], which is suitable for high-frequency device operation and serves to mitigate the poor gate control caused by short-channel effects for short-channel AlGaN HEMTs with wide bandgaps [24,25].
However, due to large amounts of etched-away AlGaN barrier layers, AlGaN/GaN FinFETs often suffer from a low saturation current, which makes them unable to provide enough output power for high-frequency data transmission. To increase the RF linearity performance of the AlGaN/GaN HEMT device, as well as maintaining the 2DEG current, the number of etched fins should be limited and optimized.
In this study, AlGaN/GaN HEMTs with different etched-fin gate structures are investigated to improve device linearity for Ka band device applications. The direct current (DC) and RF performance are investigated, and the IMD3 and the OIP3 values are measured to study the linearity improvement of the GaN HEMT device with optimized etchedfin gate structures. The gate controllability as well as linearity performance of the HEMT devices with respect to the different drain biases have also been measured and discussed.
Materials and Methods
The AlGaN/GaN HEMTs on a SiC substrate wafer was grown by metal organic vapor deposition (MOCVD) on a 4-inch SiC substrate. From the bottom to the top, the structure of the AlGaN/GaN HEMT consists of a 900 nm i-GaN buffer layer, a 500 nm GaN channel layer, a 1 nm AlN spacer layer, and a 22 nm Al0.22Ga0.78N barrier layer; the device's 3D structure is depicted in Figure 1. The room-temperature electron mobility of 1700 cm 2 /V·s and a sheet carrier density of 8.5 × 10 12 /cm 2 were measured for the structure after material growth. There are four major steps in the fabrication of AlGaN/GaN HEMTs on a SiC substrate, which include Ohmic contact formation, active region definition, gate formation, and thick metal interconnect fabrication. The Ohmic metal of Ti/Al/Ni/Au was deposited by an e-gun evaporator, and then annealed by a rapid thermal anneal system (RTA) at 850 • C for 30 s in an N 2 atmosphere. Then, the B 11+ ion implantation isolation process was used to define the active region. The etched-fin gate region was formed by first depositing the SiN X layer with the plasma-enhanced chemical vapor deposition (PECVD) system and further covering it with an e-beam photoresist.
After this, the etched-fin area was defined using the JEOL e-beam lithography system. The defined fin regions were further etched away using the inductively coupled plasma (ICP) system. In this study, a planar structure (no etched fins) and three different etched-fin gate structures were designed. Trench numbers of 1, 4, and 9 were etched away, with a trench width of 500 nm and a trench depth of around 550 nm to the buffer layer.
Careful removal of the e-beam photoresist after etched fin definition is critical to prevent e-beam photoresist residuals, which may affect the gate length definition during the next e-beam lithography process and may degrade the gate controllability due to poor Schottky contact after gate metal deposition. A larger fin width of up to 500 nm also ensures that the fin etch process is stable and uniform over the whole gate width, which is due to the low aspect ratio of the fin depth and the fin width.
The gate length was defined using the stepper lithography system after the etch fin process, and the wafer was uniformly dipped in a diluted HCl solution (HCl:H 2 O = 1:10) for 1 min before metal deposition to remove any AlGaN barrier layer native oxides. A Ni/Au (50 nm/500 nm) gate Schottky metal was deposited on the defined gate region and was deposited down the etched-fin regions, forming direct contact with the AlGaN barrier layers. Finally, a 150 nm SiN X was passivated on the wafer using the PECVD and a 2 µm thick Au metallization was deposited on the source and drain pads. The schematic cross-section and the top-view micrographs of the device gate structure are shown in Figure 1a,b, respectively, showing the epitaxial material layers and the gate structure with (1) a planar format, (2) 1 etched fin, (3) 4 etched fins, and (4) 9 etched fins.
Results and Discussion
Here, 4 × 50 µm AlGaN/GaN HEMTs with different etched-fin gate structures have been fabricated and measured to compare their linearity performance. Figure 2 shows the I DS -V GS and G m -V GS comparison curves of the fabricated devices with no etched fins (planar), one etched fin (one trench), four etched fins (four trenches), and nine etched fins (nine trenches). The device with four etched fins exhibits the highest G m value and the device with nine etched fins has a highest threshold voltage (V th ) of −4.05 V. The V th in this study is defined as the V GS when I DS reaches 1 mA/mm. The G m value of the four-etched-fin device increased up to 14% compared to that of the planar device and started to degrade when the etched fin number was increased to nine, which may have been due to the lowering of the gate controllability caused by the increased fin-gate field effect [26], which is also discussed with the Technology Computer-Aided Design (TCAD) simulation results in this study. These transfer characteristics demonstrate the effectiveness of the etched-fin structure in increasing the gate controllability, the device's gate switching capability, and the potential to withstand voltage and current swinging under high input power RF tests.
Next, S-parameter results were measured on-wafer using the E8361C PNA network analyzer and the 4142B DC supplier. The system was calibrated with a short open loadthrough calibration standard. The calibration accuracy was verified by ensuring that both S21 and S12 of the through standard were less than ±0.01 dB and that both S11 and S22 were less than −45 dB within the measured frequency range after calibration [27]. The current gain (H 21 ) and maximum stable power gain (MSG) were derived using Microwave Office XL, and the f T and f Max of the devices were obtained by extrapolating the gain curves with a slope of −20 dB/decade. Since the current gain versus frequency curves began to deviate from the slope of −20 dB/decade, the gain values above 25 GHz were hidden for clear visualization. The small signal results show obvious improvements with the etched-fin gate structure, and the four-etched-fin device exhibits the highest f T and f Max values of 38.7 GHz and 91.9 GHz among the four device structures at a drain bias of 20 V and a gate bias of −3.05 V, as shown in Figure 3. The f T and f Max of the nine-etched-fin device did not increase with the increased etched fins, which is due to the lowered transconductance and increased gate-to-source capacitance (C gs ) resulting from the doubled etched fins compared to the four-etched-fin device. The C gs increases with the fin number due to the increased contact area between the gate metal and the semiconductor sidewall, causing the change in f T and f Max , as shown in Figure 3. Next, S-parameter results were measured on-wafer using the E8361C PNA network analyzer and the 4142B DC supplier. The system was calibrated with a short open loadthrough calibration standard. The calibration accuracy was verified by ensuring that both S21 and S12 of the through standard were less than ±0.01 dB and that both S11 and S22 were less than −45 dB within the measured frequency range after calibration [27]. The current gain (H21) and maximum stable power gain (MSG) were derived using Microwave Office XL, and the fT and fMax of the devices were obtained by extrapolating the gain curves with a slope of −20 dB/decade. Since the current gain versus frequency curves began to deviate from the slope of −20 dB/decade, the gain values above 25 GHz were hidden for clear visualization. The small signal results show obvious improvements with the etchedfin gate structure, and the four-etched-fin device exhibits the highest fT and fMax values of 38.7 GHz and 91.9 GHz among the four device structures at a drain bias of 20 V and a gate bias of −3.05 V, as shown in Figure 3. The fT and fMax of the nine-etched-fin device did not increase with the increased etched fins, which is due to the lowered transconductance and increased gate-to-source capacitance (Cgs) resulting from the doubled etched fins compared to the four-etched-fin device. The Cgs increases with the fin number due to the increased contact area between the gate metal and the semiconductor sidewall, causing the change in fT and fMax, as shown in Figure 3. The polynomial curve fitting technique, using (1), was applied to investigate the IDS-VGS curves of the etched-fin devices [18].
Therefore, if we analyze the linearity of the IDS-VGS curves, we can see that the IDS increases linearly with VGS, giving a lower a3 and a5, with a larger a1 [28]. The IDS-VGS polynomial first-, third-, and fifth-order coefficients of VDS = 20 V are listed in Table 1. Decreased / and / values of the devices with four and nine etched fins have been The polynomial curve fitting technique, using (1), was applied to investigate the I DS -V GS curves of the etched-fin devices [18].
Therefore, if we analyze the linearity of the I DS -V GS curves, we can see that the I DS increases linearly with V GS , giving a lower a 3 and a 5 , with a larger a 1 [28]. The I DS -V GS polynomial first-, third-, and fifth-order coefficients of V DS = 20 V are listed in Table 1. Decreased a 3 /a 1 and a 5 /a 1 values of the devices with four and nine etched fins have been observed, indicating the relatively lower a 3 and a 5 values with relatively higher a 1 values. For the device's RF linearity assessment, the OIP3 and IMD3 values could be evaluated using Equations (2) and (3) [18], where G ds is the output conductance, and R L is the load resistance. Since the transconductance characteristics determine the voltage gain of a HEMT device, the influence of the G m , which is the flatness of the G m curve, on the IMD3 value and the influence of the G m on the OIP3 will be the two main concerns in the following discussion [29].
Research has shown that the IMD3 levels of the devices could also be derived as in (4) [18], indicating that the lower a 3 and a 5 values could represent lower IMD3 levels.
To evaluate the device's RF linearity, two-tone load-pull results were measured with a calibrated 30 GHz frequency signal using the Focus Load-Pull system with a frequency span of 10 MHz. A block diagram of the two-tone load-pull measurement setup with the signal generators, the spectrum analyzer, and the power supply is shown in Figure 4.
First-order intermodulation output power (IMD1) and IMD3 values were measured and OIP3 values were extrapolated using the fundamental power (F1) and third-order intermodulation power (2F1-F2) data curve with a slope of 1 and 3, respectively, at the linear region. The large signal results of different gate biases (0.5, 0.375, 0.25, and 0.125 I DSS ) were measured and are shown in Figures 5-8.
First, the 30 GHz large signal load-pull measurement results with the IMD3 value comparison results of the designed 4 × 50 µm AlGaN/GaN HEMT devices, biased at I DS = 0.5 I DSS and V DS = 20 V, were analyzed and are shown in Figure 5. The linear gain improved from 7.38 dB to 8.12 dB, the IMD3 level at 16 dB back-off from P1dB (dBm) decreased from −54.82 dBm to −56.72 dBm, the ∆(OIP3-P1dB) value increased from 9.24 dB to 11.26 dB, and the OIP3 value increased from 33.97 dBm to 35.72 dBm. Furthermore, the 4 × 50 µm devices exhibited a maximum power density of more than 2.1 W/mm. The performance of the four different devices is listed in Table 2. .
To evaluate the device's RF linearity, two-tone load-pull results were measured with a calibrated 30 GHz frequency signal using the Focus Load-Pull system with a frequency span of 10 MHz. A block diagram of the two-tone load-pull measurement setup with the signal generators, the spectrum analyzer, and the power supply is shown in Figure 4. Table 2.
Third, the 30 GHz large signal load-pull measurement results with the IMD3 value comparison results of the 4 × 50 µm AlGaN/GaN HEMT devices, biased at I DS = 0.25 I DSS and V DS = 20 V, were also analyzed and are shown in Figure 7. The linear gain improved from 7.54 dB to 8.28 dB, the IMD3 level at 13 dB back-off from P1dB (dBm) decreased from −52.36 dBm to −59.54 dBm, the ∆(OIP3-P1dB) value increased from 6.73 dB to 11.22 dB, and the OIP3 value increased from 28.29 dBm to 32.67 dBm. The performance of the four different devices is listed in Table 2. Fourth, the 30 GHz large signal load-pull measurement results with the IMD3 value comparison results of the 4 × 50 µm AlGaN/GaN HEMT devices, biased at I DS = 0.125 I DSS and V DS = 20 V, were also analyzed and are shown in Figure 8. However, although the linear gain improved from 7.39 dB to 7.84 dB, the IMD3 level at 14 dB back-off from P1dB (dBm) increased from −54.73 dBm to −50.83 dBm, the ∆(OIP3-P1dB) value decreased from 11.09 dB to 7.96 dB, and the OIP3 value decreased from 29.45 dBm to 26.42 dBm. The contrasting trends of the results compared to the previous ones show the gate operation voltage limits of these devices. With the G m curves shown in Figure 2, the G m values of the four-and nine-etched-fin devices with I DS = 0.125 I DSS were too low for high-frequency operation, causing the relatively poor OIP3 and IMD3 performance. At I DS = 0.125 I DSS , the G m curves of the four-and nine-etched-fin devices showed a larger slope, indicating a large increase in the G m , and resulting in a larger IMD3 value. The performance of the four different devices is listed in Table 2. The RF results correlate with the trends of the transfer characteristics, and from the measured results, the best operation gate biases were found to lie between 0.5 I DSS and 0.25 I DSS for these etched-fin HEMT devices.
The performance of the four different devices is listed in Table 2, showing the comparison of the RF characteristics with I DS = 0.5, 0.375, 0.25, and 0.125 I DSS at 30 GHz.
From the observations above, the etched-fin device has been concluded to offer obvious improvements regarding the linearity performance compared to that of the planar device, owing to the enhanced gate controllability, represented by the transfer characteristics and right-shifted threshold voltage, which provide the etched-fin devices with a higher power gain under 30 GHz load-pull measurements and improved OIP3 and IMD3 values. However, the nine-etched-fin device has been observed to exhibit lower linearity compared to the four-etched-fin device and one-etched-fin device at specific gate biases. This may be due to the gate electric field effect between adjacent gate fins, and the increase in the C gs . With limited numbers of etched fins, the gate controllability could be increased, but when the fin number continues to increase, the fields coming from the gate fins seem to interfere with one another, and this causes the gate controllability to degrade, lowering the G m value and increasing the |G m | value. Furthermore, the C gs for nine etched fins is higher than in the four-etched-fin device, causing parasitic capacitance effects to deteriorate the device performance, such as the power gain and first and third output power at high-frequency Ka band operation.
To further investigate the buffer deep-etched-fin-gate electric field effect, the Al-GaN/GaN HEMT linearity performance with different etched-fin gate structures has been analyzed by changing the drain bias to conduct different drain currents to the channel. The transfer characteristics of the four different devices with different etched-fin gate structures have been measured and two-tone load-pull measurement has been performed at 28 GHz with a frequency span of 10 MHz.
The transfer characteristics of the AlGaN/GaN HEMT devices with different etched-fin numbers, and measured at different drain voltages are shown in Figure 9a-c. At V DS = 10 V, the one-etched-fin device has the highest G m value, while the four-and nine-etched-fin devices have flatter G m curves. The IMD3 shows an improvement with the etched-fin gate design, which is consistent with the transfer characteristic curves at the set operation gate bias for the load-pull measurement, as shown in Figures 9a and 10.
On the other hand, as the drain voltage rises to 15 V and 25 V, as shown in Figure 9b,c, the G m value rises in the one-and four-etched-fin cases, but drops at nine etched fins, which shows that although the existing field effects coming from the gate fins act as a supporter to contribute to the control of the 2DEG channel, there may be a limitation to the number of etched fins, due to the shortened distances between the etched-fin gates, and the decrease in G m may be due to the repelling of charges in the fins [26]. The observations from the transfer characteristics are consistent with those of the RF measurement results. Figure 10 shows the measured and analyzed (OIP3-P1dB), power gain, and IMD3 level at 7 dB back-off from P1dB at different drain biases of the AlGaN/GaN HEMTs on a SiC substrate with the planar, one-etched-fin, four-etched-fin, and nineetched-fin gate structures. The Δ(OIP3-P1dB) value, power gain value, and IMD3 value versus different drain voltages show that when the devices were operated under lower drain voltages-in this case, 10 V-the devices with four and nine etched fins do not demonstrate significant improvements in power gain and output power, which may be due to the increased gate voltage swing at smaller drain biases [30]. However, at higher drain voltages, the devices show improved performance with increased etched fin numbers, but they still show a limit, which is consistent with the trends shown in Figures 5-8. The observations from the transfer characteristics are consistent with those of the RF measurement results. Figure 10 shows the measured and analyzed (OIP3-P1dB), power gain, and IMD3 level at 7 dB back-off from P1dB at different drain biases of the AlGaN/GaN HEMTs on a SiC substrate with the planar, one-etched-fin, four-etched-fin, and nineetched-fin gate structures. The Δ(OIP3-P1dB) value, power gain value, and IMD3 value versus different drain voltages show that when the devices were operated under lower drain voltages-in this case, 10 V-the devices with four and nine etched fins do not demonstrate significant improvements in power gain and output power, which may be due to the increased gate voltage swing at smaller drain biases [30]. However, at higher drain voltages, the devices show improved performance with increased etched fin numbers, but they still show a limit, which is consistent with the trends shown in Figures 5-8. The observations from the transfer characteristics are consistent with those of the RF measurement results. Figure 10 shows the measured and analyzed (OIP3-P1dB), power gain, and IMD3 level at 7 dB back-off from P 1dB at different drain biases of the AlGaN/GaN HEMTs on a SiC substrate with the planar, one-etched-fin, four-etched-fin, and nineetched-fin gate structures. The ∆(OIP3-P1dB) value, power gain value, and IMD3 value versus different drain voltages show that when the devices were operated under lower drain voltages-in this case, 10 V-the devices with four and nine etched fins do not demonstrate significant improvements in power gain and output power, which may be due to the increased gate voltage swing at smaller drain biases [30]. However, at higher drain voltages, the devices show improved performance with increased etched fin numbers, but they still show a limit, which is consistent with the trends shown in Figures 5-8.
The phenomenon concerning the effects of increased numbers of buffer deep fins has also been analyzed and discussed with the simulation results. The repelling of the electrostatic potential between closely packed gate fins has been modeled and visualized using the Sentaurus TCAD simulation tool. The four-etched-fin three-dimensional (3D) AlGaN/GaN HEMT model was built with a single-etched-fin gate design, as shown in Figure 11a. Figure 11b also shows the schematic diagram of a four-etched-fin device, with the height of the fin (h) and the width of the etched fin (W Fin ), and the partial width of the gate (W n ), with n equal to the number of etched fins. W 1 represents a 25 µm partial gate width, W 4 is 10 µm, and W 9 is 5 µm. The cross-sections of the fin gate with the equilibrium electrostatic potential distribution are shown in Figure 12a,b. The distances between the two gate fins in Figure 12a,b are 10 µm (W 4 ) and 5 µm (W 9 ), respectively. The X-axis represents the depth of the etched fin and the Z-axis moves along the gate width. The gate voltage is set to −3 V and the drain voltage is set to 20 V. The ranges for the equilibrium electrostatic potential are both set to 0 to 2.19752. static potential between closely packed gate fins has been modeled and visualized using the Sentaurus TCAD simulation tool. The four-etched-fin three-dimensional (3D) Al-GaN/GaN HEMT model was built with a single-etched-fin gate design, as shown in Figure 11a. Figure 11b also shows the schematic diagram of a four-etched-fin device, with the height of the fin (h) and the width of the etched fin (WFin), and the partial width of the gate (Wn), with n equal to the number of etched fins. W1 represents a 25 µm partial gate width, W4 is 10 µm, and W9 is 5 µm. The cross-sections of the fin gate with the equilibrium electrostatic potential distribution are shown in Figure 12a,b. The distances between the two gate fins in Figure 12a,b are 10 µm (W4) and 5 µm (W9), respectively. The X-axis represents the depth of the etched fin and the Z-axis moves along the gate width. The gate voltage is set to −3 V and the drain voltage is set to 20 V. The ranges for the equilibrium electrostatic potential are both set to 0 to 2.19752.
With the schematic diagram in Figure 11b, we can explain that the increase in arises from the increase in the effective gate length (LGate, eff). The one-etched-fin device has a LGate, eff of 4 × h + W1, the four-etched-fin device has a LGate, eff of 10 × h + W4, and the nineetched-fin device has a LGate, eff of 20 × h + W9. However, although the LGate, eff of the etchedfin gate devices increases with increased etched fin numbers, the results from Figure 12 indicate that the electrostatic potentials between the fin gates of the nine-etched-fin device interact with one another more significantly than in the four-etched-fin device due to the short distances between the sidewall gate. It can be concluded that there exists a repelling of the electric fields from the gate sidewalls, which increases when the etched fins are set to be closer to each other-in this case, W4 to W9-thus degrading .
Conclusions
AlGaN/GaN HEMTs on a SiC substrate with etched-fin gate structures were successfully fabricated and demonstrated good linearity improvements for Ka band applications. The device's DC and RF performance were improved due to the enhanced gate controllability over the gate width using an optimized etched-fin design. High power gains of more With the schematic diagram in Figure 11b, we can explain that the increase in G m arises from the increase in the effective gate length (L Gate, eff ). The one-etched-fin device has a L Gate, eff of 4 × h + W 1 , the four-etched-fin device has a L Gate, eff of 10 × h + W 4 , and the nine-etched-fin device has a L Gate, eff of 20 × h + W 9 . However, although the L Gate, eff of the etched-fin gate devices increases with increased etched fin numbers, the results from Figure 12 indicate that the electrostatic potentials between the fin gates of the nine-etchedfin device interact with one another more significantly than in the four-etched-fin device due to the short distances between the sidewall gate. It can be concluded that there exists a repelling of the electric fields from the gate sidewalls, which increases when the etched fins are set to be closer to each other-in this case, W 4 to W 9 -thus degrading G m .
Conclusions
AlGaN/GaN HEMTs on a SiC substrate with etched-fin gate structures were successfully fabricated and demonstrated good linearity improvements for Ka band applications. The device's DC and RF performance were improved due to the enhanced gate controllability over the gate width using an optimized etched-fin design. High power gains of more than 8 dB were obtained for the device when operated in a 30 GHz measurement environment. Etched-fin devices show better linearity performance at high frequencies than the planar device due to increased G m values and lowered values of the second derivative of G m . The four-etched-fin device, which had an optimized 10-µm separation between the etched fins, exhibited optimized linearity performance under a gate bias point of 0.5 I DSS , 0.375 I DSS , and 0.25 I DSS among the planar, one-etched-fin, and nine-etched-fin devices at V DS = 20 V. TCAD 3D device simulation results have also been provided to discuss the effect of increased etched fin numbers, which may degrade the gate controllability. Overall, the etched-fin devices demonstrate improved device linearity performance at the Ka band and show high potential for the advancement of wireless power amplifier systems. | 6,833.4 | 2023-04-25T00:00:00.000 | [
"Engineering",
"Physics"
] |
Relationships among the β3-adrenargic receptor gene Trp64Arg polymorphism, hypertension, and insulin resistance in a Japanese population
A polymorphism in the ADRB3 gene (Trp64Arg) has been associated with obesity, insulin resistance, and hypertension. This cross-sectional study investigated the relationships among this polymorphism, hypertension, and insulin resistance values (HOMA-IR) in 719 Japanese subjects aged 40 years and older. The genotype frequencies of Trp64Trp (homozygous, wild), Trp64Arg (heterozygous, variant), and Arg64Arg (homozygous, variant) were 466 (65%), 233 (32%), and 20 (3%), respectively. Insulin resistance was associated with an increased risk of hypertension in a Japanese population. This relationship was dependent on the presence or absence of the Trp64Arg polymorphism (odds ratio, 2.054; confidence interval, 1.191 to 3.541; P value, 0.010). Therefore, the Trp64Arg polymorphism of ADRB3 was associated with hypertension and insulin resistance in a healthy Japanese population. This relationship, which was dependent on the polymorphism, may predict the development of hypertension and diabetes.
Introduction
Hypertension is a major risk factor for global disease burden [1]. Many patients with hypertension have diabetes mellitus, which is strongly related to coronary heart disease, major stroke subtypes, and deaths attributed to other vascular causes [2]. The pathophysiology of these two diseases are similar and related to obesity and insulin resistance. Insulin resistance is a pathological condition that impairs insulin sensitivity. Previous studies reported a close relationship between hypertension and insulin resistance [3]. The etiology of hypertension involves genetic disorders. The present study on hypertension was based on Genome-wide Association Studies (GWAS), which search a vast number of single nucleotide polymorphisms (SNP) in a large cohort [4]. Prior to the initiation of GWAS, the main strategy employed to identify hypertension-susceptibility genes was the candidate gene approach. Several candidate genes have been reported using conventional approaches [5].
The β3-adrenergic receptor (ADRB3), one of the candidate genes, is a class of G proteincoupled receptors that primarily mediates lipolysis and thermogenesis. A polymorphism in the ADRB3 gene (Trp64Arg) impairs the function of ADRB3. A decline in function causes the pathogenesis of multiple conditions, including hypertension, insulin resistance, and obesity [6][7][8].
The effects of the Trp64Arg polymorphism need to be considered in investigations on the relationship between hypertension and insulin resistance. However, few studies have examined this triangular relationship. Widén et al. reported that the ratio of hypertension and insulin resistance was higher in Finns with the Trp64Arg polymorphism [7]. However, Fujisawa et al. suggested that the Trp64Arg polymorphism did not markedly affect the development of hypertension or insulin resistance in Japanese individuals [9]. Therefore, the present study aimed to examine the relationships among hypertension, insulin resistance, and the Trp64Arg polymorphism.
Study design and participants
Comprehensive medical check-up data obtained from the residents of Shika town, a rural area in Japan, were used for the analysis. Baseline data were derived from the SHIKA study, an overview of which was previously reported [10]. In brief, the SHIKA study is a populationbased observational study conducted to investigate approaches that prevent lifestyle-related diseases. It was conducted with the approval of the Ethics Committee of Kanazawa University and informed consent was obtained from all participants. The target subjects of the SHIKA study were all middle-aged residents who were delivered a self-administrated questionnaire and requested to undergo a comprehensive health examination. In the present study, data on 1191 voluntary participants from 40 years of age who underwent the comprehensive health examination between March 2014 and January 2017 were available. The design of the present study was cross-sectional. This study was conducted with the approval of the Ethics Committee of Kanazawa University. Written informed consent was obtained from all participants.
Subjects with incomplete data on SNP (n = 326), blood pressure (n = 5), or fasting blood sugar (n = 83) and those whose HOMA-IR (homeostasis model assessment for insulin resistance) was less 0.3 (n = 18) were excluded from the analysis. Therefore, this study ultimately included 719 subjects (Fig 1).
Genotyping
Genomic DNA was extracted from blood samples using the QIAamp DNA Blood Maxi Kit (QIAGEN Inc., Venlo, Netherlands) according to the manufacturer's instructions or consigned to a company specialized in clinical laboratory testing (SRL, Inc., Tokyo, Japan). SNP genotyping was performed using the Japonica Array v2 [11] (TOSHIBA Inc., Tokyo, Japan).
The genotypes of ADRB3 Trp64Arg (rs4994) in 825 unrelated subjects (based on genomewidep values) were extracted from array data. The call rate for SNP was 100% and a departure from the Hardy-Weinberg equilibrium was not observed.
Blood pressure measurement
Well-trained nurses and clinical technologists measured blood pressure (BP) using a fixed protocol. Two automated digital sphygmomanometers, HEM-907 (OMRON Inc, Kyoto, Japan) and UM-15P (Parama-tech Inc., Fukuoka, Japan), were used to check BP and their measurement of principle, the oscillometric method, was the same. This medical check-up was conducted in the morning and BP was measured in a fasted state.
BP was measured twice consecutively in a sitting position with an appropriate cuff and averages were adopted as BP data.
Subjects were divided into two groups according to the following definition of hypertension: subjects diagnosed with hypertension and being treated with antihypertensive drugs or those with BP of higher than 140/90 mmHg in medical check-ups.
Assessment of insulin resistance
HOMA-IR is regarded as a robust index for the assessment of insulin resistance and the homeostasis model assessment of beta cell function (HOMA-β) has been proven as a reliable tool for the assessment of insulin secretion. These indexes are widely used in large population studies [12]. HOMA-IR and HOMA-β are assessed using the following equations: HOMA-IR = (FPI × FPG) / 405, HOMA-β = (360 × FPI) / (FPG − 63). FPI is an aberration of fasting plasma insulin concentration (μIU/mL) and FPG is that of fasting plasma glucose (mg/dL).
Other variables
Daily habits were assessed using self-administered questionnaires. Subjects who had a habit of exercising for more than 30 minutes at least twice a week for 1 year or habitually
PLOS ONE
Relationships among the β3-adrenargic receptor gene polymorphism, hypertension, and insulin resistance performed tasks such as carrying baggage, walking, and cleaning for more than 1 hour a day were regarded as subjects with an exercise habit. The frequency of drinking was classified into two groups according to answers to the following questions: "Do you drink more than one glass of sake (22 g ethanol) per day three times a week?" or "Do you drink at least four times a week?". A drinking habit was confirmed by replying in the affirmative to either of these questions.
Statistical analysis
The Student's t-test was used to compare the average of continuous variables and the chisquared test to compare the proportions of categorical variables. All subjects were stratified into two groups: the BP groups (Hypertension group and Normal BP group) and ADRB3 polymorphism groups (Trp64Trp group and Trp64Arg or Arg64Arg group). A two-way analysis of variance (two-way ANOVA) was used to examine differences in HOMA-IR between the BP groups and ADRB3 polymorphism groups. A multiple logistic regression analysis after adjustments for independent factors was performed to assess the relationship between BP and HOMA-IR.
In all analyses, the threshold for significance was P<0.05. All statistical analysis were performed using IBM SPSS Statistics version 24.0 for Mac (SPSS Inc., Armonk, NY, USA).
Results
The demographic characteristics of subjects stratified by the genotype of the ADRB3 polymorphism were shown in Table 1.
Among all subjects, 327 were men and 392 were women with a mean age of 61.8 years. Mean SBP and DBP in 368 subjects treated with antihypertensive drugs were 138 and 80.1 mmHg, respectively. Fifty-four subjects were being treated for diabetes and mean HbA1c (NGSP) and HOMA-IR were 5.93% and 1.36, respectively. Table 2 shows the demographic characteristics of the two ADRB3 polymorphism groups (Trp64Trp group and Trp64Arg or Arg64Arg group), which were similar. No significant differences were observed in age, the prevalence of hypertensive subjects, the use of antihypertensive drugs, SBP, or DBP in each stratified group. Characteristics regarding sugar metabolism (FBS, HbA1c, fasting insulin, HOMA-β, and HOMA-IR) did not significantly differ between the two groups.
When subjects were classified into two groups according to the definition of hypertension, the Hypertension group was significantly older (P < 0.001) and had a higher BMI (P < 0.001) and lower eGFR (P = 0.011) than the Normal BP group ( Table 3). The Hypertension group also showed significantly higher FBS (P < 0.001), HbA1c (P = 0.008), fasting insulin (P < 0.001), and HOMA-IR (P < 0.001) than the Normal BP group.
We also assessed differences in variables related to diabetes between the BP groups and ADRB3 polymorphism groups using a two-way ANOVA ( Table 4). The results obtained revealed a significant interaction between BP groups and ADRB3 polymorphism groups for HOMA-IR (P = 0.046).
To adjust for the effects of confounding factors, a multiple logistic regression analysis was used to evaluate the relationship between HOMA-IR and hypertension. Based on the significant interaction between the BP groups and ADRB3 polymorphism groups for HOMA-IR (Table 4), we performed separate multiple logistic regression analyses according to the presence or absence of the ADRB3 polymorphism (Table 5). In subjects with the ARDB3 polymorphism (Trp64Arg or Arg64Arg), HOMA-IR was inversely associated with hypertension after adjustments for the following confounding factors: sex, age, BMI (Model 1), eGFR, exercise habit, smoking status, drinking habit, treatment of diabetes (Model 2), and HOMA-β (Model 3). On the other hand, no correlation was observed between HOMA-IR and hypertension in those homozygous for the wild-type allele (Trp64Trp) group. These relationships were the same in other multiple regression analysis models.
PLOS ONE
Relationships among the β3-adrenargic receptor gene polymorphism, hypertension, and insulin resistance
Discussion
The present study was conducted in an attempt to examine the relationships among hypertension, insulin resistance, and the Trp64Arg polymorphism. The results obtained suggested that the Trp64Arg polymorphism of ADRB3 was associated with hypertension and insulin resistance.
In the present study, insulin resistance assessed by HOMA-IR was associated with an increased risk of hypertension in a Japanese population. This result is consistent with previous findings showing the important role of insulin resistance in predicting the future incidence of hypertension in middle-aged Japanese men [13]. In this 7-year follow-up study, subjects with the highest baseline insulin resistance values were more likely to become hypertensive after 7 years. This study adds further evidence to the finding that insulin resistance is a risk of hypertension in Japanese.
The present results also revealed that this relationship was dependent on the presence or absence of the Trp64Arg polymorphism. The relationship between high HOMA-IR and a low risk of hypertension appeared to be stronger in subjects who were heterozygous for the variant allele (Trp64Arg) or homozygous for the variant allele (Arg64Arg) than in those who were homozygous for the wild-type allele (Trp64Trp). This correlation was still observed after adjustments for confounding factors.
PLOS ONE
Relationships among the β3-adrenargic receptor gene polymorphism, hypertension, and insulin resistance diabetes mellitus and had a slightly lower resting metabolic rate [6]. Widén et al. suggested that the ADRB3 polymorphism was associated with insulin resistance syndrome, which includes obesity and hypertension, in Finns [7]. Clément et al. showed that individuals with the ADRB3 polymorphism may have an increased capacity to gain weight in France [8]. In contrast to these findings, no significant relationship was observed between the ADRB3 polymorphism and these phenotypes in the present study ( Table 2).
The reason for this difference currently remains unclear; however, two possibilities need to be considered. Due to the relatively weak contribution of the ADRB3 polymorphism to insulin resistance and hypertension, the sample sizes of previous studies may have been insufficient. Furthermore, differences in the genetic background may have led to insulin resistance and hypertension. Another candidate gene to these pathologies may have affected the findings obtained in these studies. In addition to the ADRB3 gene, genes related to insulin resistance and obesity have been reported in Caucasians. Among all, polymorphisms in the adiponectin gene have been reported to reduce adiponectin levels in overweight and obese children in Italy and increase insulin resistance [14]. Moreover, two Japanese studies reported that polymorphism in the adiponectin gene are associated with insulin resistance [15,16]. Although we could not obtain information on the polymorphisms of the adiponectin gene in this study, it is worth investigating in future studies. The present study had some limitations. Causality was not examined because the study design was cross-sectional. Therefore, further studies are needed to confirm the present results. Furthermore, a selection bias needs to be considered; subjects were voluntary collaborators for the comprehensive health examination, the sample size of which was too small to clarify the effects of the ADRB3 polymorphism. In addition, we did not obtain information on other variables, such as other candidate genes related to insulin resistance and hypertension.
In conclusion, the Trp64Arg polymorphism of ADRB3 was associated with hypertension and insulin resistance in a Japanese population. This relationship, which was dependent on the polymorphism, may predict the development of hypertension and diabetes. The present results need to be interpreted with caution due to the limitations described above. | 3,026.8 | 2021-08-04T00:00:00.000 | [
"Biology"
] |
19-(Benzyloxy)-19-oxojolkinolide B (19-BJB), an ent-abietane diterpene diepoxide, inhibits the growth of bladder cancer T24 cells through DNA damage
Diterpenoids jolkinolide A and B, were first isolated from Euphorbia fischeriana. In our previous research, 19-(Benzyloxy)-19-oxojolkinolide B (19-BJB), a derivative of jolkinolides, was synthesized as a novel ent -abietane diterpene diepoxide. In this study, 19-BJB showed strong in vitro activity against bladder cancer cell lines. DNA damage which was observed through the interaction of 19-BJB with nucleotide chains and affected DNA repair resulted in the activation of checkpoint kinase 1 (Chk1) and checkpoint kinase 2 (Chk2) in bladder cancer cell lines. In vivo testing in nude mice also proved that 19-BJB revealed a potential inhibitory effect on tumor growth. Additionally, the 3D-QSAR models of jolkinolides were established. Briefly, we proved that 19-BJB could potentially be used as a drug to inhibit the growth of bladder tumor.
Introduction
Urothelial carcinoma is a major global health problem. Bladder cancer, a subtype of urothelial carcinoma, is among the top ten most common cancer types in the world. Each year, about 3.0% of the newly cancer diagnoses and 2.1% of cancer deaths are caused by urinary bladder cancer [1,2]. The increasing number of bladder cancer patients in recent years due in part to the rise in the levels of environmental carcinogens [3]. It is estimated that the annual incidence of urothelial carcinoma in western countries is approximately two new cases per 100,000 inhabitants [4]. The therapeutic regimen for urothelial carcinoma relies on transurethral resection in combination with gemcitabine for local treatment [5]. Nevertheless, cisplatin is the standard chemotherapeutic agent for the treatment of advanced bladder cancer. However, renal toxicity and chemoresistance can compromise the efficacy of cisplatin, indicating the need to find new agents that can improve the outcomes for patients with a poor prognosis. Natural products have played a major role in new drug discovery for centuries, with over 47% of approved anticancer agents being of natural origin [6]. Due to the special structures and diverse biological properties, diterpenoids have attracted much attention of researchers [7]. Plants belonging to Euphorbiaceae families are widely applied in traditional Chinese medicine because these families have rich sources of diterpenoids [8]. Interestingly, the most recent reports have highlighted their potential as potent remedies against several cancers [9]. The abietane diterpenes, many of which have a α,β-unsaturated lactone functional group, are the main reason why Euphorbiaceae families have antitumor activities [10]. Jolkinolides A and B, a kind of diterpenoid, were first separated from Euphorbia fischeriana [11]. Jolkinolide B has been found to exhibit significant inhibitory effects, such as the activities against human prostate adenocarcinoma LNCaP cells [12], human leukemia K562 cells [13], human esophageal carcinoma Eca-109 cells, and human hepatoma carcinoma HepG2 cells [14]. Several anticancer mechanisms of jolkinolide B have been proposed. In human leukemic U937 cells, jolkinolide B was found to reduce cell viability and induce apoptosis in a dose-and time-dependent manner through the activation of caspase-3 and caspase-9, and the down-regulation of PI3K/ Akt [15]. Ma et al. found that jolkinolide B inhibits RANKL-induced osteoclastogenesis by suppressing the activation of NF-κB and MAPK signaling pathways [16]. Later, Yang et al. found that jolkinolide B markedly attenuates LPS-induced histological alterations, lung edema, inflammatory cell infiltration, and myeloperoxidase activity as well as the production of TNF-α, IL-6, and IL-1β. Furthermore, jolkinolide B also significantly inhibits the LPSinduced the degradation of IκBα and phosphorylation of NF-κB p65 and MAPK [17]. The anticancer mechanisms of some jolkinolide B derivatives have also been studied. It was reported that 17-acetoxyjolkinolide B inhibits cytokine-induced NF-κB signal transduction [18]. Another derivative, 17-hydroxyjolkinolide B, was found to capable of inhibiting JAK/ STAT3 signal transduction in HepG2 cells [19], while was found to exhibit LPS-induced production of inflammatory mediators such as prostaglandin E2, nitric oxide, and pro-inflammatory cytokines through the suppression of MAPK phosphorylation and NF-κB activation [20].
Recently, our research group reported a protocol for synthesizing jolkinolide A and jolkinolide B [21]. We also reported the first synthesis of jolkinolide derivatives [22,23]. 19-(Benzyloxy)-19-oxojolkinolide B (19-BJB), one of derivatives we synthesized, showed potential inhibitory effect in different kinds of cancer cell lines. In this study, we have elucidated that 19-BJB showed strong in vitro activity against bladder cancer cell lines. This compound inhibited cell proliferation and induced DNA damage. The in vivo testing in nude mice also proved that 19-BJB had a potential inhibitory effect on tumor growth. Additionally, the 3D-QSAR models of jolkinolides were established and the bulky groups on C-19 of ring A maybe the key moiet of DNA damage. All of the compounds described above are shown in Fig 1 [11,[18][19][20][21][22][23].
carcinoma of urinary bladder tissue. J82 is a non-muscle-invasive, fibroblast-like, adherent bladder cancer cell line. NTUB1 is a muscle-invasive, epithelial-like, adherent cell line derived from a 70-year-old female patient. The cisplatin resistant type and the taxol resistant type are NP14 and NTaxol, which have the same cellular phenotypes as NTUB1 [24]. HaCat is an immortal keratinocyte cell line from adult human skin. All cells were cultured at 37˚C in a humidified incubator with 5% CO 2 .
MTT assay
The cell lines were seeded in an atmosphere of 5% CO 2 at 37˚C. Cells were plated into 96-well plates at a density of 5×10 3 cells in 100 μL of growth medium 24 h prior to treatment. Following treatment with 19-BJB at different dosages (1.56, 3.13, 6.25, 12.50, 25, 50, and 100 μM) for 48 h, 100 μL of MTT solution (0.5 mg/mL) was added to each well and the plates were incubated at 37˚C for 1 h. Then the MTT solution was replaced by DMSO (100 μL) to dissolve the reduced MTT crystals. Cell viability was assessed by measuring absorbance at 550 nm in an ELISA Reader [25]. Dose response curves were then created as a percentage of vehicle treated control cells and IC 50 was calculated using GraphPad Prism 5. The procedures of the MTT tests for other drugs, including jolkinolide A, jolkinolide B, cisplatin, and paclitaxel, were the same as those for 19-BJB.
Colony assay
The cells were maintained in 6-well plates at a concentration of 100 cells per well. Each well was treated with different concentrations of 19-BJB. The cultures were maintained under standard culture conditions. After two weeks, the medium was replaced with crystal violet solution and was stained for 5 min. The number of colonies was determined by an inverted phase-contrast microscope at ×40 magnification. A group of > 10 cells was considered as a colony. The colonies were counted 3 times, and the average was calculated.
Live and dead
T24 cells were incubated in 6-well plate at 1 × 10 5 cells / well. 19-BJB was added at the concentrations of 0 μM, 0.5 μM, 1 μM, 2 μM, 4 μM, and 8 μM. After 48 h incubation, trypsin was added, and the cells were washed twice by PBS, then centrifuged at 1000 rpm for 5 min. 2 μM calcein AM and 4 μM ethidium homodimer were added in 1 mL PBS as dye to stain the cells, which were then observed using a TailTM Image-Based Cytometer 30 min later. In the further experiment, cells were treated with 10 μM ATMI for 3 h before 19-BJB (0, 4, and 8 μM) was added in different dosage. The cells were maintained in the medium with ATMI and 19-BJB for 48 h, and were harvested for the live and dead test.
FACS analysis of cell cycle
Once T24 cells achieved 70% to 80% confluency, they were treated with 0.1% DMSO or 19-BJB at concentrations of 0 μM, 2 μM or 4 μM for 24 h and 48 h. After treatment, the cells were fixed in ice-cold 70% ethanol overnight. After fixation, the cells were washed three times with cold PBS and then stained in 500 μL of propidium iodide solution. Samples were analyzed on a BD FACScan flow cytometer, and the percentages of cells in the G0-G1, S, and G2-M phases of the cycle were determined using WinMDI 2.9.
Protein isolation and western blot analysis
Samples (normalized according to cell number) were treated with 19-BJB at varying concentrations over 24 h or 48 h. Cell extracts were then prepared in 1000 μL RIPA lysis buffer containing protease inhibitors, 5 μL Na 3 NO 4 (200 mM), 5 μL PMSF (200 mM) and 5 μL NaF (200 mM). Cell lysates were centrifuged at 4˚C and 15000 rpm for 30 min, and the supernatant was collected. BCA assay was used to test protein concentration in cells. Clarified protein lysates containing equal amounts of protein (20 μg) were separated on 8-15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gels and electrophoretically (2 h at 90 V) transferred to a PVDF membrane. Blots were then blocked for 1 h in TBST containing 5% blocking grade non-fat dry milk, and then incubated overnight with primary antibody at 4˚C. Blots were then washed in TBST three times and incubated for 1 h at room temperature with secondary antibody. Electrochemiluminescence was used to enhance the detection of immunoreactive bands.
Genotoxicity testing
The information about DNA damage given by the comet assay reflects the number of single or double stand breaks formed in the cellular DNA before or during the process of electrophoresis [26]. For this test, a suspension of isolated cells is embedded into an agarose gel onto a microscope slide and subsequently lysed by detergents in lysis buffer. Then the liberated DNA is exposed to alkali to unwind it from the strandbreakage sites and electrophoresed under alkaline conditions. In the presence of DNA strand breaks, staining with propidium iodide solution results in structures resembling comets with the tail length or tail fluorescence content reflecting the frequency of DNA strand breaks and hence DNA damage [27]. The standard alkaline procedure allows the detection of both single-and double-strand DNA breaks as well as apurinic/apirimidinic sites that are expressed as frank strand breaks in the DNA under the alkaline conditions of the assay. Immediately after exposure, cells were processed in the comet assay under alkaline conditions, basically following the original procedure [28] with minor modifications [29]. The cells were seeded in 6-well plate and treated with 19-BJB for 48 h. Then the cells were washed twice with PBS and detached with 300 μL of 0.05% trypsin solution. About 2 min later, trypsinization was terminated by adding culture medium (1 mL). The cells in each treatment well were taken and collected by centrifugation at 1000 rpm for 5 min. 10 μL of PBS with pellets suspended was added into 100 μL LMA at 37˚C, and 60 μL was immediately spread onto microscope slides. Agarose microgels were set for 30 min at 4˚C. Then the slides were immersed in ice-cold lysis solution (10 mM Tris-HCl, 2.5 M NaCl, 100 mM Na2EDTA, and 1% Triton X100; pH 10) and left to stand for 1 h at 4˚C, protected from light. The slides were then washed twice with dd water and placed in an electrophoresis tank after lysis. Electrophoresisi was then performed for 10 min at 20 V. Then the slides were washed twice with dd water and fixed in 75% ethanol for 5 min. The slides were then dried and stained with propidium iodide for 5 min. The samples were then observed using a fluorescence microscope. The extent of induced DNA damage was measured as the percent of fluorescence migrated to the comet tail by a computer-based image analysis system (CometScore 15).
Spectral pattern of 19-BJB synthesized on DNA template
The interaction of 19-BJB with DNA was studied using UV-visible absorption spectroscopy [30]. The stock DNA (1.0 mg/mL) was dissolved in Tris-HCl buffer (50 mM, pH = 7.5). 19-BJB was prepared at different concentrations (0, 5, 10, and 40 μM). 19-BJB and DNA were mixed and incubated at 37˚C for 2 h and 4 h. Then the mixture were dropped into a 1-cm quartz cell and scanned in the wavelength range of 220-350 nm at 25˚C using a Thermo-Scientific spectrophotometer (model Evolution 300, USA).
8-Oxoguanine detection
Cells were cultured in 6-well plates at 1× 10 5 cells / well. Each well was treated with 19-BJB at varying concentrations (2 μM, 4 μM, and 8 μM) for 24 h. Then the cells were washed twice with PBS and fixed with ice-cold 70% ethanol overnight. After fixation, the cells were washed thrice with cold PBS and then centrifuged at 1000 rpm for 5 min. The pellets were collected and 1% Triton and RNase A (10 mg/mL) were added at room temperature for 1 h. Anti-oxoguanine 8 fluorescent antibody was added and the suspension were then stored at 4˚C overnight. Then the samples were centrifuged at 1800 rpm for 5 min. After washing the samples twice with PBS, goat anti-mouse IgG FITC was added for 1 h. After washing the samples again with PBS, the fluorescence of the samples was analyzed using BD FACScan flow cytometer.
The molecular docking study
The molecular docking calculation was performed by Autodock version 4.2 with the Lamarckian Genetic Algorithm to estimate the binding ability and simulate the binding model between 19-BJB and DNA (CGATCG) [31]. The structure of the DNA fragment (PDB ID: 1Z3F) was obtained from the Protein Data Bank (https://www.rcsb.org/) [32]. In the DNA structure, the co-crystalized substrates, including ligands, water and metal molecules were removed, and the polar hydrogens and the Kallman united atom charges were added to the atoms of the DNA by AutoDock Tool version 1.5.4 interfaces (ADT) [33]. The structural optimization of 19-BJB was performed by energy minimization with the MMFF94 force field using ChemBio3D software (version 11.0; Cambridge Soft Corp.) [34]. In addition, the polar hydrogens and Gasteiger charges were also added to the structure of 19-BJB by ADT [35]. The grid box calculated by the AutoGrid program was set at the location of the co-crystal ligand-binding site and its size was set bigger enough to contain the size of the ligand. Therefore, the coordinates of the central grid box were set at x = 0.395, y = 17.235, and z = 46.179, and the sizes of dimensions of x, y, and z-axis were respectively set as 56 x 38 x 38 Å grid at the spacing of 0.375Å [36]. All of the docking parameters were set to the default values except for the maximum number of energy evaluations, which was increased to 25,000,000 per run [37]. The docking results were analyzed with cluster analysis by ADT. Besides, all pictures were generated by the Accelrys Discovery Studio Visualizer (Version 4.0, Accelrys Software Inc.) [34].
Toxicity test of 19-BJB
19-BJB was dissolved in DMSO and mixed with 0.9% physiological saline to the desired concentration. Two male nude mice were weighed and treated with 19-BJB at the dosage of 10 mg/kg three times per week. One mouse was treated by intraperitoneal injection while the other was treated by intravenous injection. After 4 days, the dosage was changed to 20 mg/kg. After 14 days, it was changed to 40 mg/kg. Both mice exhibited no signs of acute toxicity.
In vivo antitumor efficacy study
Compound 19-BJB was tested for in vivo efficacy in 5-week BALB/c male nude mice xenograft models. T24 cells in the logarithmic growth phase were cultured and the injection volume was calculated at 1×10 7 cells / 0.2 mL / mouse. With the medium aspirated, the cells were washed with PBS. After being treated with trypsin, the cells were collected and centrifuged at 1000 rpm for 5 min, and then mixed with matrix gel with medium [38]. The mixed gel was injected into 14 mice averagely. The injected mice were randomized into control (saline) and treatment (19-BJB) groups (n = 7) when the volumes of xenografts were increased almost 50 mm 3 . The mice were treated with normal saline (control group) or 19-BJB (20 mg/kg, treatment group) every two days. The tumor growth was then monitored daily beginning several days after the first injection. The tumor volume were measured as V (mm 3 ) = ab 2 /2 (a = largest diameter of tumor, b = smallest diameter of tumor). All the mice were sacrificed with carbon dioxide after 28 days. The tumor tissues were washed in PBS after dissection, and were soaked in 10% paraformaldehyde for immunohistochemistry (IHC).
Immunohistochemistry (IHC)
Tumors resected from mice were fixed in 10% paraformaldehyde for 48 hours and embedded in paraffin. 4-μm thick sections were cut and processed for immunohistochemistry and were incubated with anti-p-Chk1, anti-p-Chk2, anti-cleaved caspase-3 and anti-cleaved PARP-1, anti-TUNEL, and anti-Ki-67 (used at 1:100) at 4˚C overnight [39]. Following incubation, immunoperoxidase staining was carried out using a streptavidin-peroxidase kit obtained from Abcam. The slides were examined under a light microscope, and representative pictograms were taken from a minimum of five or six different slides for each group.
CoMFA analyses of jolkinolide derivatives
To further understand the relationship between the structure and activity in jolkinolide derivatives, the comparative molecular field analysis (CoMFA) was performed by Sybyl-X version 1.2 software from Tripos Inc. (St. Louis, MO). Firstly, the structural optimization of 33 jolkinolide derivatives was performed by energy minimization with MMFF94 force field using Sybyl-X software. Subsequently, 19-BJB was chosen as the template and then all the synthesized derivatives were aligned to the A and B rings of jolkinolides, set as core structure, by database alignment method using Sybyl-X software. To establish the CoMFA model, the anticancer activities of NTUB1 cell line of 33 jolkinolide derivatives were conducted and then transferred the pIC 50 (-log IC 50 ) for calculation. Steric and electrostatic field in three-dimensional grid with a spacing of 2.0 Å were calculated for all derivative compounds at each lattice intersection. A sp3 carbon atom was used as a probe atom with a charge of +1.00 charge, and a cutoff energy value of +30 kcal/mol was used to minimize electrostatic energies. A partial least-squares (PLS) method was used to linearly correlate the CoMFA fields to the values of pIC 50 . In addition, the leave-one-out (LOO) validation method and cross-validate (CV) analysis were performed to validate the quality of the CoMFA model and the regression of model was performed by no validation analysis.
Statistical analysis
The data are presented as mean ± SD for triplicate samples. GraphPad Prism 6 (GraphPad Software Inc., San Diego, CA, USA) was used to analyze the significance of the differences among the groups. Unpaired two-tailed t test was performed to analyze the differences between two independent groups. � p < 0.05, �� p < 0.01, ��� p < 0.001, ���� p < 0.0001 were considered statistically significant.
19-BJB inhibits proliferation of human bladder cancer cells
To investigate the effects of 19-BJB on cell growth, T24, J82, NTUB1, NP14, Ntaxol and HaCat cells were exposed to eight different concentrations of 19-BJB for 48 h (Fig 2). Table 1 shows the IC 50 values of 19-BJB in the different cell lines. The results of MTT testing showed that 19-BJB had a better inhibition than the positive drug cisplatin, but a poorer inhibition than paclitaxel.
The colony formation assay showed that T24 cells formed significantly fewer colonies after treatment (Fig 3). Relatedly, it could clearly be seen that the treatment of T24 cells with 19-BJB resulted in a significant different speed of growth in a dose-dependent manner, with the number of surviving colonies in the medium decreasing with higher concentration of 19-BJB. The cells could not survive in 19-BJB at the concentration of 8 μM.
We then used an image-based cytometer to analyze the percentage of live cells stained ( Fig 4A and 4B) with calcein AM and dead cells stained with ethidium homodimer-1, respectively. The cell live and dead test showed that the results of cell staining were affected by 19-BJB in a dose-dependent manner. The results showed that the number of cells with green fluorescence were decreased and the number of cells with red fluorescence were obviously increased in the context of over 4 μM of 19-BJB ( Fig 4C). Notably, the treatment of T24 cells with 19-BJB significantly induced cell death.
The results of MTT testing, colony assay, live and dead test indicated that 19-BJB had an inhibitory effect on the growth of T24 cells. In MTT test, the inhibitory effects of 19-BJB against cisplatin-resistant bladder cancer NP14 and paclitaxel-resistant bladder cancer NTaxol were much better than the effect of 19-BJB against its parental bladder cancer NTUB1. This finding prompt us the mechanism that 19-BJB induced in cancer cell growth may be totally different from that of cisplatin or paclitaxel. In previous reports, jolkinolide B was found to inhibit DNA synthesis, induce G1 phase arrest and lead to apoptosis in LNCaP cells [12]. Jolkinolide B was also demonstrated to arrest the cell cycle in G1 phase and induce apoptosis in K562 cells [13]. As 19-BJB was a derivative of jolkinolide B, DNA damage study was considered in our research.
To examine whether 19-BJB treatment could affect cell cycle progression in bladder cancer cells, T24 cells were treated with different concentrations of 19-BJB. To further examine the effects of 19-BJB on cell cycle progression, an analysis of the collected cells by flow cytometry was performed (Fig 5A). The results for the cells treated for 24 h shown in Fig 5C indicated that the fractions of G0/G1 phase were increased as the treatment concentration of 19-BJB was increased. The fraction of G2/M in 2 μM group did not change obviously. While in 4 μM group, the fraction of G2/M decreased. In 48 h group (Fig 5D), the fractions of G0/G1 phase decreased. At the same time, the fractions of G2/M phase increased correlated with concentration. A large amount of cell debris formed as the treatment time was increased, which indicated that cell death may occur when the treatment time was 24 h. In 48 h group, the apoptosis peak became more obvious as the concentration of 19-BJB was increased (Fig 5B).
DNA damage effect of 19-BJB
The comet assay was conducted to test the genotoxicity of 19-BJB. The information about DNA damage yielded by the comet assay indicates the number of single-or double-strand CometScore was used to calculate the tail length, tail moment, and olive moment. It was obvious that the tail length, tail moment, and olive moment were increased at high concentrations of 19-BJB, which proved that nuclear damage occurred in T24 cells.
The checkpoint effector kinases Chk1 and Chk2 played an important role in the DNA damage response. Fig 7 illustrates that 19-BJB treatment of T24 cells resulted in increased expression of phospho-Chk1 (p-Chk1) and phospho-Chk2 (p-Chk2), while down-regulating the expression of Chk1 and Chk2. Also, 19-BJB treatment increased, in a dose-dependent manner, the expression of cleaved caspase-3 and cleaved poly ADP-ribose polymerase (cleaved PARP-1), as well as the phosphorylation of H2A histone family member X (p-H2AX) in T24 cells, compared to vehicle-treated controls. Taken together, these results imply that 19-BJB activates the phosphorylation of DNA damage response-related proteins, such as Chk1, Chk2, and Histone H2AX, influencing the cleavage of caspase-3 and indicating apoptotic effects against bladder cancer cells.
We also examined the influence of ATMI to further elucidate the role of 19-BJB in DNA damage. Fig 8 showed the protein expression of p-Chk1, p-Chk2 and p-H2AX increased when 19-BJB (0, 2, 4 μM) exist, just similar to Fig 7. But when 10 μM ATMI was added before T24 cells was treated with 19-BJB, the expression of DNA damage related proteins were down-regulated. Taken together, these finding suggested that the increasing expression of p-Chk1, p-Chk2 and p-H2AX caused by 19-BJB could be reversed in the presence of ATMI.
The molecular docking study of 19-BJB and DNA
The maximum absorption wavelength of DNA is 260 nm in the ultraviolet region [40]. The presence of 19-BJB (Fig 11) could significantly decrease the UV absorption upon DNA The expression of DNA damage related proteins proved the possible reason of cell death caused by 19-BJB. In Fig 11, 19-BJB was observed to have an interaction with DNA, but the reason that elicited a DNA damage response was still unknown. 8-oxoguanine is one of the most common DNA lesions, which can results in a mismatched pairing with adenine resulting in G to T and C to A substitutions in the genome [41,42]. In Fig 12, the gray shaded part represents the control group. The peak was shifted to the right as the concentration of 19-BJB was increased, indicating that the FL1-H fluorescence expression of 8-oxoguanine became more significant. The binding between 19-BJB and DNA caused the synthesis of 8-oxoguanine in T24 cells after 24 h. The results in T24 cells showed an obvious change over 4 μM of 19-BJB.
The molecular docking simulation was performed to further understand the binding situation between 19-BJB and DNA structure (CGATCG) by AutoDock software. The results showed that the value of binding energy was -5.93 kcal/mol, and the value of inhibition constant (Ki) was 44.66 μM. Based on the position at which the 19-BJB molecule docked with the DNA fragment, it was clear that 19-BJB was inserted into the CG fragment of the DNA sequence. As shown in Fig 13, the 19-BJB molecule was aligned parallel to the nucleotide chain. Besides, the carbonyl residue on the ring D (16-carbon) formed a hydrogen bond with the nucleotide chain. Furthermore, it was observed that the benzyl ester group at 19-C was inserted into the CG fragment of the DNA, and the benzene ring remained parallel to the purine ring and the pyrimidine ring of the DNA, forming a strong π-π bond, which played an important role in the docking with the DNA. In addition, the adjacent carbonyl residue (19-carbon) of the benzyl ester group also formed a hydrogen bond with the nucleotides, further stabilizing the docking effect.
In vivo antitumor efficacy study
Since 19-BJB possessed a strong anti-proliferation effect, we proceeded to evaluate its antitumor efficacy in vivo. A T24 xenograft model [43,44] was used because this cell line was more sensitive to 19-BJB. Preliminary studies indicated that mice were able to tolerate the acute (single-dose) administration of 19-BJB at doses of 40 mg/kg body weight. Because treatment of tumor-bearing animals was anticipated to require repeated administration, we conducted a Fig 14. The weights of tested mice were showed in S3 Fig. 19-BJB was well tolerated and no body-weight loss was observed at a dosage of 20 mg/kg. Compared with the vehicle treatment, 19-BJB induced inhibition (tumor growth inhibition rate = 51.7% at 20 mg/kg). The immunohistochemistry results for p-Chk1, p-Chk2, cleaved caspase-3, cleaved PARP-1, TUNEL and Ki-67 are shown in Fig 14C, and the results for the drug group obviously differed from the results for the control group.
3D-QSAR study of jolkinolide derivatives
After jolkinolide derivatives synthesized, comparative molecular field analysis (CoMFA) was employed to establish the 3D-QSAR model to further explain the structure-activity relationships (SAR) of jolkinolide derivatives. For the evaluation of functional groups introduced into the ring A of jolkinolide, we evaluated the 33 derivatives we had synthesized to analyze the impact on the design of jolkinolide derivatives [22,23]. The structures of 33 jolkinolide derivatives were displayed in the S1 Fig. Ring A and ring B, the common core structure of jolkinolide derivatives, were chosen to set as the core structure for the alignment of 33 derivatives ( Fig 15A). After that, the alignment of 33 derivatives was shown in Fig 15B. Besides, the steric field and electrostatic field of CoMFA models were respectively established and shown in Fig 15C and 15D. Additionally, the result of cross-validation statistical parameters of CoMFA model in partial least squares (PLS) analysis was summarized in the S1 Table. The predicted pIC 50 of the jolkinolide derivatives were shown in S2 Table. All the residuals between predicted and actual pIC 50 were less than a logarithmic unit, which indicated the CoMFA model had a good predictive performance. The correlation plot of actual activity against predicted activity by the CoMFA model was illustrated in the S4 Fig. All points uniformly distributed around the regression line and R2 > 0.95, which showed a good predictive ability and accuracy of the model.
As illustrated in Fig 15C, CoMFA models suggested that the bulky groups are not suitable for presentation in the areas surrounding the ring C and ring D of jolkinolides. In contrast, a bulky substituent, such as the benzyl ester group, can be introduced onto the C-19 of jolkinolides. It is expected that a substituted group modified on ring A is a strategy for increasing the anticancer activity of jolkinolides. Most important, we also found that CoMFA contour maps presented an unfavorable region on the lactone ring (ring D) (Fig 15C). For the electrostatic contour map, the contour map showed that the 11,12-epoxide ring and the 8,14-epoxide ring were located in the electrostatically unfavorable region (red area), indicating that this region favors negatively charged atoms, such as an epoxide ( Fig 15D). As shown in Fig 15D, we observed that the ring C of jolkinolides favors a hydrogen bond acceptor, indicated by the magenta area located around both epoxide rings. This finding supports the electrostatic results that the existence of an epoxide ring may contribute to the anticancer activity of the jolkinolides' skeleton. Additionally, the substituent at C-19 is an unfavorable hydrogen bond acceptor (Fig 15D). Taking together, we propose that the substituent at C-19 is suitable for modification with a benzyl group.
Discussion
The five-years overall survival rate of patients who suffer from bladder cancer disease has remained in the range of 60-70% over the last two decades, even the patients receive the standard of care high dose chemotherapy combined with surgical resection [5]. Although the maximal dose of conventional chemotherapy drugs has been used, the clinical outcomes of bladder cancer have not improved significantly. Some drugs, such as cisplatin and paclitaxel, are lack of specificity so that they have to be used in an excessive dose and sometimes caused drugresistant problem [45]. In this study, we have first shown that 19-BJB exhibits the high potential of inhibitory activity against bladder cancer cells, including cisplatin-and paclitaxel-resistant cells. The in vivo antitumor efficacy study also indicated that 19-BJB had a good activity in inhibiting tumor growth, which encouraged that 19-BJB could be used as a potential candidate for bladder cancer therapy. In addition, the inhibitory effects of 19-BJB against cisplatin-resistant bladder cancer NP14 and paclitaxel-resistant bladder cancer NTaxol were much better than the effect of 19-BJB against its parental bladder cancer NTUB1. These findings may provide an important clue for further clinical drugs combination approaches.
For the investigation of mode of actions, we evaluated the effects of DNA damage response (DDR) upon 19-BJB treatment in the bladder cancer cells. DDR recognizes DNA damage, and then initiates a cascade reaction to repair the broken chain. ATM plays an important role in DNA damage, especially after DSBs (double-strand breaks). ATM is recruited to the sites of DSBs by sensor protein. Then the signals are transduced to downstream effectors such as Chk2 and p53, leading to cell cycle arrest [46]. ATR (ATM and Rad3-related) is another checkpoint related kinase which is related to ATM activated in response to persistent singlestranded DNA and involve in various kinds of DNA damage, especially those related to DNA replication [47]. Once ATR is activated, Chk1 is phosphorylated, initiating a signal transduction cascade that culminates in cell cycle arrest [48]. Chk1 and Chk2 coordinate the DNA damage response and cell cycle checkpoint response [49]. In early period of DSBs, H2AX is involved in the steps recruiting a series of DDR related proteins to the injury site of DNA strands after phosphorylated by ATM and ATR [50,51]. Then p-H2AX binds to MDC1 (mediator of DNA damage checkpoint protein 1) to form a complex for the further interactions in DBS repair [52]. The presence of p-H2AX is the direct evidence of DBS in DNA [53]. PARP-1 participates in several DNA repair processes, especially plays an important role in the repair of single-stranded DNA breaks [54].
In our research, the results indicated that 19-BJB treatment could inhibit the proliferation of human bladder cancer cells via induction of DNA damage. In Figs 7 and 9, the expression of p-Chk1 and p-Chk2 increased, which indicated that cell cycle arrest. p-H2AX is also increased both in 24 h and 48 h groups, which showed a DSBs may occur. The presence of cleaved-PARP-1 proved that DNA repair was activated after the damage caused by 19-BJB. In order to clarify the mechanism of DNA damage caused by 19-BJB, KU-55933, an ATM inhibitor (ATMI), was added into medium before the cells was treated with 19-BJB. In Fig 8, the increasing expression of p-H2AX, p-Chk1 and p-Chk2 could be reversed by ATMI. These findings suggested that 19-BJB does have DNA damage and may activate ATM-related pathway. The live and dead test in Fig 10 also proved that cell viability decreased slightly when ATMI was added. The FACS analysis of cell cycle (Fig 5) showed obvious G1/S arrest in 24 h group, and not obvious G2/M arrest in 48 h group. Considering the differences above, we tried to make a reasonable assumption. The expression of p-H2AX and cleaved-PARP-1 were checked after treating with 19-BJB in 24 h (Fig 9). At the concentration of 8 μM, an obvious increasing expression of protein was found. In Fig 12, the appearance of 8-oxoguanine in T24 cells after treating with 19-BJB in 24 h indicated that 19-BJB may cause the change in the basic structure of DNA. In early period after 19-BJB was treated, ATM/Chk2 checkpoint pathway and phosphorylation of H2AX were both activated by DSBs. Then the downstream protein cdc25A was activated, causing G1/S cell cycle arrest. Over time, ATR/Chk1 checkpoint pathway was activated by ATM, the downstream protein cdc25C was activated, causing G2/M cell cycle arrest [55,56]. In our western blot analysis, the protein expression of p-Chk1 was significantly less than that of p-Chk2 (Fig 7A), which may indicate ATM/Chk2/cdc25A checkpoint pathway was more important in our results. The apoptosis peak became more obvious as the concentration of 19-BJB was increased, especially in the cells after 48 h treated. Additionally, the observation of apoptosis after 19-BJB treatment was monitored by the TUNEL-positive staining, and the cleaved caspase-3 (Figs 7B and 14C). In summary, we considered that the T24 cell cycle arrest caused by DNA damage would lead to cell apoptosis after 24-48 h.
19-BJB had a better inhibitory activity compared with the parent compound jolkinolide B, which may due to the modification of ring A. In previous reports, jolkinolide B was found to inhibit the growth of multiple cancer cells through different mechanisms [15][16][17]. Both derivatives, 17-acetoxyjolkinolide B and 17-hydroxyjolkinolide B which had a substituent on C-17 of ring D, were found to have a better inhibitory activity than jolkinolide B [18][19][20]. But the DNA damage effects and SAR of jolkinolide derivatives were never reported. In our study, the bulky groups such as benzyl ester may make the contribution of DNA damage. Results of DNA damage responses suggest that the mechanisms may act through the interaction of 19-BJB and DNA, which also provoked canonical DNA damage responses and checkpoint activation. The UV analysis data suggested the interaction of 19-BJB and DNA (Fig 11). In summary, the mode of action of 19-BJB may insert into DNA helix structure and further induce the DNA damage responses.
We provided an important insight into the mode underlying jolkinolides derivative-mediated anticancer effects, and these findings support the hypothesis that diterpenoids may act as attractive lead structures for developing antitumor agents. To understand the SAR of related derivatives, the 3D-QSAR model was conducted by using CoMFA analysis and the result showed that bulky groups at ring A of jolkinolides. This finding was consistent with the results of the docking of 19-BJB and DNA. The benzyl ester group of 19-BJB inserted the DNA structure and induced DNA double-strand breaks. Moreover, the ring C with the epoxy group could make the structure more rigid, so that the π-π bond formed by the benzyl ester group and hydrogen bond formed by ring D would be more stable. We therefore hypothesize that the new introduced benzyl ester group at jolkinolides play a critical role on the effects of DNA damage. The new moiety of benzyl ester contributes to the interaction of jolkinolides and DNA nucleotides (Fig 13). Collectively, the steric substituents at C-19 are able to substantially enhance the anticancer activity of jolkinolides. Together with both the epoxy on ring C and the α,β-unsaturated lactone on ring D, these functional groups are prerequisites for the inhibitory effects.
Taken together, our findings first reveal the new mode of anticancer activity induced by jolkinolides structure. The results prove that structural modification of jolkinolides could be focused on C-19 of ring A. The new modified ring A may enhance the DNA binding activity and contribute to the anticancer effects of jolkinolides. | 8,561 | 2021-03-16T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Increasing Interest in Learning Mathematics with the Make-A-Match Learning Model for Class II Students at Al-Azhar Gampengrejo Islamic Elementary School
Abstract
Mathematics has an important role in life, especially in the development of technology and science. Given the importance of mathematics, this subject is taught in schools, starting from early childhood education to high school, including in tertiary institutions. For some students, mathematics at school is a subject that is considered arduous and frightening. The teacher's role is important and necessary to increase students' interest in learning, especially in mathematics. Teachers are required to be precise in using effective models and being able to influence students to actively participate in the learning process to the fullest and increase their learning outcomes. This study aims to increase students' interest in learning with the Make a Match learning model which focuses on students interacting and collaborating in developing their knowledge through the concept of playing while learning. The method used in this study uses classroom action research. Starting from planning, implementation, observation, and reflection. The use of make-a-match in mathematics with the mathematics of division, and multiplication with numbers, as well as the introduction of flat shapes and their results can increase student learning interest by 88.8% from previously only 58.8% by using lectures.
For some students, mathematics at school is a subject that is considered arduous and frightening, and a few students even avoid this lesson. This is caused by many things, one of which is that the learning activities carried out by the teacher at school may be boring, and many formulas or calculations make students no longer interested and even hate them (Aeni et al., 2022).
Even though the purpose of giving mathematics lessons to students, especially at the elementary school education level, is to equip and foster students' thinking skills logically, systematically, critically, analytically and creatively, and actively (Susanto, 2013). Mathematics was taught from an early age to the top level and even college mathematics was used to solve existing problems. Such as raising an attitude of respect in life by having curiosity, paying attention, and being tenacious and confident in solving the problems faced (Karso, 2014).
The role of a teacher is important in increasing student learning interest, especially in mathematics. Teachers are expected to be able to use learning models appropriately so that they can influence students to be more active in learning and the results achieved can be maximized and continue to increase (Siregar & Sentosa, 2015). One of the learning models used to increase students' interest in learning mathematics is by using make a match which is a cooperative learning model that places more emphasis on students being able to work together and develop knowledge through learning activities while playing (Wulandari et al., 2018). Learning that is done in pairs (make a match) was introduced by Lorna Curran. One of the advantages of this learning model is that students can find partners while practicing concepts and topics in pleasant conditions.
Making a match requires students to find a partner according to the problem cards that are obtained freely through a lottery. The cards obtained are prepared by the teacher and then distributed to each student. The class students are grouped into two pairs, one group is responsible for answering problems and the other group brings question cards. Make a match is a form of learning model that invites students to find answers to questions from their partner through the concept of paired card games with a predetermined duration limit, students are invited to think, work together and provide motivation to each other in the learning process (Aeni et al., 2022).
The make-a-match learning model encourages students to be active and help each other in mastering learning to achieve maximum achievement (Isjoni, 2012). With the application of the make-a-match learning model, it is hoped that students will not only listen to lectures from the teacher, but students will be more active, motivated, and happy in participating in mathematics lessons. The advantages of Make a Match include being able to increase student learning activities cognitively or physically. Because there is an element of play in it, this game model is fun to increase students' understanding of the material being studied and increase student learning motivation to train students to appreciate study time. Meanwhile, the weakness of making a match is that when this strategy is not prepared properly, a lot of time is wasted and students are prone to noise when looking for partners (Huda, 2011).
Previous research on the effectiveness of using the make-a-match learning model as applied to portfolio-based science lessons in fifth-grade elementary school students whose results influenced student learning outcomes. The results of this study suggest that students are always active and creative so that they can be maximized, besides that it is suggested for other lessons in elementary school by preparing supporting tools (Wulandari et al., 2018). In other research on the effectiveness of the make-type cooperative learning model a match is very effective in improving mathematics learning for students in class I at SD Muhammadiyah and the results can be seen through the teacher and student activity sheets (Aeni et al., 2022).
For students, interest in learning is a part that encourages students to take part in the learning process carried out by the teacher which does not just appear. Learning conditions also take into account and pay attention to student interests by providing freedom and opportunities independently to participate actively in the process of teaching and learning activities ongoing. The indicators used to measure students' learning interests are their interest in learning, attention, and motivation in learning (Nurhasanah & Sobandi, 2016).
The interest in learning here is like an example of a student having an interest in mathematics, it will appear in his feelings to always have and be interested in the lesson. Then be diligent and continue to understand the knowledge related to it, and then will follow the lesson with great enthusiasm without any burden on him. Then what is meant by student attention is concentration or mental activity on the observations made and putting aside other things, students will pay attention in learning and focus on what is learned. Then motivation is an effort made to carry out learning actions in realizing behavior in achieving the expected goals through the learning interaction process (Nurhasanah & Sobandi, 2016).
Al-Azhar Gampengrejo Islamic Elementary School is in the Kediri district. Based on information obtained from teachers who teach, students often experience problems related to decreased interest in learning, especially in participating in mathematics lessons which have an impact on decreasing learning outcomes and learning that is not optimal. This is the same as what is experienced by students in general in elementary schools, where mathematics is scary, so this lesson is difficult to accept. There are many real problems and one of the indications is that the teacher when teaching is still monotonous and does not attract attention, especially in mathematics lessons. Then students experience difficulties in understanding their lessons, and the teacher's low mastery of the use of teaching aids and existing learning media is also a problem.
From the description, this Classroom Action research was carried out to find out more about how to use the make-a-match learning model in mathematics to increase students' interest in learning at Class II Al Azhar Gampengrejo Islamic Elementary School.
METHOD
This research approach uses Classroom Action Research, an action in examining activities that are intentionally carried out and occur in class (Suharsimi Arikunto, 2006). Classroom action research by involving researchers directly in the research process starting from monitoring, recording, and data collection process starting from the beginning to reporting the results (Trianto, 2011). The research design used includes planning, acting, observing, and reflecting. In this research, what will be improved is students' learning interest in the process of learning mathematics regarding material about flat shapes using the make-a-match method (Trianto, 2011).
The research location is Al Azhar Islamic Elementary School which is located at Jalan Diponegoro No. 112 RT/RW 01/03 Ngebrak Village, Gampengrejo District, Kediri Regency. The data collected in this study was through scores of students' work on practice questions. Verbal statements from students and teachers that were obtained through the results of interviews related to the learning process carried out and recording of field data from the results of a series of student activities that took part in the learning process.
The subject from this study were grade II students at Al-Azhar Gampengrejo Islamic Elementary School with a total of 18 people with details of 6 females and 12 males. This will be used as a basis for consideration to find out more deeply how the level of success of students in the learning process is given by taking action through applying the make-a-matching method in the learning process in mathematics as the primary data source.
Secondary data sources come obtained indirectly from data collectors. The data source is obtained from learning outcomes collected from other people as supporting data. In this study, secondary data sources were the head of the madrasa and the administration of Al-Azhar Gampengrejo Islamic Elementary School. The types of secondary data used in this study are activities, places or locations, documentation, and archives.
RESULTS AND DISCUSSION
Planning is the initial stage or step that is carried out by the researcher before going directly into the classroom, the researcher previously made initial preparations such as consulting and discussing with the Head of the Madrasa at Al-Azhar Ngebrak Gampengrejo Islamic Elementary School. This was done by researchers so that researchers could find out more about the characteristics of students in class II and get directions from the Head of the Madrasah which were taken into consideration in the planning process.
After the researchers conducted discussions, for the next stage the researchers conducted a study related to learning devices with the goals achieved. In this classroom action research, the position of the researcher doubles as a math teacher with twodimensional figure material. Then after that, the researcher makes a lesson plan which the researcher will use as a foundation for implementing learning in class. Researchers also prepare learning media that will be used when in class, then researchers also prepare pretest and posttest questions which totaled 5 multiple choice questions for each test.
The implementation of the action took place with an allocation of 2 × 35 minutes. The following description of the implementation of the actions in cycles I and II begins with initial activities of the teacher greeting, then inviting students together to pray and take attendance for student attendance. At this stage, the teacher conducts questions and answers regarding material about the two-dimensional figure that is known by students. The teacher also gave a pre-test which consisted of 5 multiple-choice questions about the material that have taught.
In the core activity, the teacher explains material related to two-dimensional figures, their characteristics, and various kinds of plane shapes which include squares, rectangles, triangles, and circles. Using the lecture method, the teacher also writes on the blackboard so that students can easier to repeat the material that has been delivered by the teacher. After delivering the material, the teacher invites students to ask questions about material that can't be understood. At the end of the activity, the teacher gives a post-test question which contains 5 questions related to flat shape material, then the teacher invites students to read prayers together.
Observing is carried out by researchers to observe the ongoing process of learning. Some students caught sleepy, and some were not very focused on paying attention to the material conveyed by the teacher. Judging from the results of the post-test questions given at the end of the lesson, many students had not completed the answers to the questions given, including the quiet class atmosphere which made students less enthusiastic about participating in mathematics lessons.
Reflecting is done by reflecting on the results of the test before using make a match. Learning is not suitable for attracting students' interest in participating in the mathematics learning process which seems monotonous, so they cannot understand properly and maximally the material presented. This can be seen from the lack of enthusiasm of students to pay attention to the lesson when the teacher is explaining the material and from the results of the tests that have been carried out.
In response to the results of the tests that have been carried out, then the following considerations are made to improve: 1). Need to increase interest in learning by considering the characteristics of students who are in class II by using the make-a-match learning model. 2). Conduct reflection at each meeting to measure the extent to which the level of success of learning that has been implemented in the class. 3). Then the researchers decided to make improvements in learning in mathematics about flat shapes by using the make-a-matching model to increase student learning interest. For mathematics material, the researcher applied Minimum Criteria Completeness > 70 to find out the differences before using the make-a-match learning model and after applying the make-a-match model.
The implementation of learning mathematics by applying the make-a-matching model begins with observation. It is known from the initial data that student learning completeness in mathematics is still at 50%. With 9 students who have completed and 9 others who have not completed. So it is hoped that the application of the make-a-match learning model in mathematics lessons can attract and increase students' interest in learning to follow the learning process. Cycle I was carried out with a time allocation of 2 × 35 minutes. The material presented is an about two-dimensional figure which includes the characteristics of various flat shapes such as squares, rectangles, triangles, and circles.
Based on the results of observations, the learning process that has been carried out is by the scenarios listed in the Learning Implementation Plan. The teacher's steps to carry out the learning process are 1) the teacher greets when entering the class and invites students to pray together and checks the student attendance list; 2) Then the teacher explains the material and conducts a question and answer process; 3) the next stage the teacher provides direction and motivation and concludes learning today.
In the implementation of this first cycle, the teacher has not maximized to be able to foster interest in learning mathematics, this can be seen by making observations when the teacher is explaining the material, many students look daydreaming, not paying attention, and some are even sleepy. From a learning process like this when given a posttest, many students still have not finished learning. In cycle I, it was known that 9 students completed the learning, and 9 students did not complete learning and it can be seen with a class average value of 58,8.
Then in the next stage in cycle II, it is carried out with a time allocation of 2 x 35 minutes. The material discussed is about two-dimensional figure, their characteristics, and various kinds of flat shapes. In cycle II, students are not only given material and then answer post-test questions, but students are also invited to learn using the make-a-match learning model to increase student interest in learning with the concept of learning while playing. Before the material begins, the teacher explains in advance that after the material is delivered, students will be divided into pairs and each student will be given a question card or answer card.
Then students are asked to find a partner from each question card or answer card by giving a predetermined time limit. When hearing this explanation, students were very enthusiastic about listening to the material and enthusiastic about participating in learning mathematics.
The learning steps in cycle II carried out by the teacher are 1) the teacher greets when entering class and invites students to pray together and check student attendance; 2) the teacher repeats the math material that was presented at the previous meeting; 3) the teacher conveys material and explains to students regarding learning using make a match; 4) the teacher carries out mathematics learning using make a match; 5) the teacher invites students to conclude today's material.
The results of this classroom action research, learning that has been done in mathematics using make a match is proven to increase student learning interest by increasing learning outcomes by 88.8% from the previous result of 58.8%.
Based on the results of the implementation of cycle I and cycle II, the make-amatch learning model is proven to be able to increase student learning interest in mathematics at Al-Azhar Gampengrejo Islamic Elementary School in class II students and this does not only have an impact on learning interest but can also have an impact on improving student achievement results. Students do not only listen to monotonous lectures from the teacher but students are invited to learn together while having fun playing.
The results of this classroom action research are in line with the advantages of using the make-a-matching method which can increase student activity in learning both cognitively and physically in which there are elements of fun games and can increase students' understanding and interest in learning (Huda, 2013). In addition, interest in learning increases if the learning process is interesting and children play an active role in the teaching and learning process (Slameto, 2015).
Mathematics learning uses the make-a-match learning model at Al-Azhar Gampengrejo Islamic Elementary School in class II students in mathematics which include division, multiplication with numbers, and the introduction of two-dimensional figure and their characteristics can increase students' interest in learning with a percentage of 88.8% from previously only 58.8%. In this learning model students are invited to learn while playing to find a partner from a question card and an answer card. | 4,144 | 2023-03-18T00:00:00.000 | [
"Mathematics",
"Education"
] |
Structural Identification of the Pacemaker Cells and Expression of Hyperpolarization-Activated Cyclic Nucleotide-Gated (HCN) Channels in the Heart of the Wild Atlantic Cod, Gadus morhua (Linnaeus, 1758)
Hyperpolarization-activated cyclic nucleotide-gated (HCN) channels are proteins that contain highly conserved functional domains and sequence motifs that are correlated with their unique biophysical activities, to regulate cardiac pacemaker activity and synaptic transmission. These pacemaker proteins have been studied in mammalian species, but little is known now about their heart distribution in lower vertebrates and c-AMP modulation. Here, we characterized the pacemaker system in the heart of the wild Atlantic cod (Gadus morhua), with respect to primary pacemaker molecular markers. Special focus is given to the structural, ultrastructural and molecular characterization of the pacemaker domain, through the expression of HCN channel genes and the immunohistochemistry of HCN isoforms, including the location of intracardiac neurons that are adjacent to the sinoatrial region of the heart. Similarly to zebrafish and mammals, these neurons are immunoreactive to ChAT, VAChT and nNOS. It has been shown that cardiac pacemaking can be modulated by sympathetic and parasympathetic pathways, and the existence of intracardiac neurons projecting back to the central nervous system provide a plausible link between them.
Introduction
In contrast to the contractile organs in protostomes, the morphology of the heart in deuterostomes became more complex due to the higher demands on oxygen distribution. With the exception of the Cyclostomata, the adult fish heart is formed by six chambers, or segments, arranged in the following series: sinus venosus, atrium, atrioventricular canal, ventricle, conus arteriosus, and bulbus arteriosus. Several of these chambers have acquired different morphological and functional significance across the phyletic scale (see [1]). In teleosts, the two main cardiac chambers, ventricle and atrium, are distinguished by the expression of chamber-specific genes, regulating their establishment and maintenance [2]. Amniotes are characterized by the transition to a fully separated heart with four chambers, as found in crocodilians, avian, and mammals. Contractile myocytes have replaced mesothelial cells, and endocardial cells line the inner surface of the heart. Thus, in vertebrates, a variety of specialized cell types have replaced the cardiac mesothelium in the pumping organs in invertebrates. Excitation of the heart occurs in a specialized region known as sinoatrial node (SAN), which is essential for the maintenance of a normal heart rhythm. SAN is regulated by extrinsic (central nervous system) and intrinsic factors, such as the intracardiac neurons, natriuretic peptides, and mechanical forces [3]. In zebrafish, the primary pacemaker is found in a ring-like structure at the location of the sinoatrial valve [4][5][6]. However, in some fish species, no ultrastructural differences between the pacemaker tissue and myocardium were observed, except for a particular arrangement of the myocardium in some teleosts [7]. The criterion allowing a distinction of the nodal cells from other cardiac muscle cells in higher vertebrates, is their relative poorness in myofibrils within the cytoplasm. This characteristic is present in some parts of the sinoatrial myocardium of the loach [8], catfish [9], and trout [10], but more investigations are needed in order to draw a definite conclusion. Localization of myofibril-poor cells, as in the case of the Keith and Flack node, is not obvious in fish, but in the catfish, a scattering of nodal-like fibers is observed in the part that is closest to the sinus venosus, by anatomical and electrophysiological studies [11].
Hyperpolarization-activated cyclic nucleotide-gated (HCN) protein channels are a multigene family that play a role in establishing the pacemaker rate of the vertebrate hearts; this is unlike the hagfish, where the pacemaker activity and HCN expression reside in multiple areas of the atrium and ventricle of the aneural heart [12]. Recent electrophysiological studies, by [13], pointed to the pacemaker protein HCN4, and its correlation with pacemaker currents in the hearts of zebrafish. Also, Hassinen et al. (2017) have reported the expression of six HCN transcripts in brown trout (Salmo trutta fario) sinoatrial (SA) pacemaker cells, despite a small functional I f current. This is in agreement with the anatomical identification of the pacemaker cells in the SAR (sinoatrial region) and AVR (atrioventricular region) of the zebrafish heart [6], where the population of Isl1positive cells in the SAR completely overlapped the population of HCN4-IR cells, and also the primary initiation site of electrical activity from the SAR to AVR. In hagfish, its heart rate is set via pacemaker cells [14,15]. These cells are characterized by HCN protein channels in the cell membrane, which are responsible for slow depolarization of the membrane. In the heart and nervous system of mammals, HCN channels contribute to neuronal and cardiac firing rates. Further, [12] discovered six isoforms of HCN, equivalent to HCN 2, 3 and 4 of mammalian muscle HCN in hagfish. Their expression is reported in all cardiac chambers, but HCN3a is higher in atrial and ventricular muscle, suggesting that HCN4 dominance in adult mammalian hearts appeared after hagfish divergence.
The available data concerning intracardiac innervation in some fish species are those reported by [16] and [5,6], which emphasize the occurrence of the main localization of the nervous tissue in the sinoatrial plexus, which is regarded as the automatism (pacemaker) center of the heart. This plexus is a well-developed network of nerve fibers and nerve cell bodies (intracardiac neurons), concentrated in the SAR, but also distributed along the entire atrial canal up to the atrio-ventricular junction [5]. The sinoatrial region, known as sinoatrial node (SAN), is regulated by extrinsic sympathetic and parasympathetic innervation, and an intracardiac nervous system comprising the intracardiac neurons that are embedded in the myocardium. These neurons contain parasympathetic, and non-adrenergic and non-cholinergic (NANC) transmitters in fish hearts [16,17], and form intracardiac circuits that are important for the internal processing of extrinsic inputs and for intracardiac reflex control of cardiac function [3]. In fact, recent immunohistochemical evidence has demonstrated the expression of adrenergic beta 2 and cholinergic muscarinic type 2 receptors in the pacemakers of both the sinoatrial and atrioventricular regions in the zebrafish heart [6].
It is now well established that the teleost lineage experienced a specific whole-genome duplication that is thought to have occurred 320-350 million years ago, following their split from the holosteans [18]. Whole-genome duplication is a major force of adaptive genome evolution, since it generates duplicate genes that can be lost, undergo sub-functionalization (functional divergence), or acquire new functions (neofunctionalization) [19]. These additional paralogous genes add an extra layer of complexity to the molecular regulation of all biological processes, including cardiogenesis and pacemaker function [20]. To the best of our knowledge, the molecular characterization of HCN isoforms in teleost hearts is very limited. Phylogenetic analyses have indicated the existence of four HCN genes that are homologs to urochordate isoforms. Additional lineage-specific duplications appear to have evolved in urochordate and fish genomes [21].
In the present study, we examined the structural and immunohistochemical features of pacemaker cells, and their innervation in the heart of Atlantic cod (Gadus morhua), which is one of the best studied paracanthopterygians, due to their commercial importance. Moreover, we have integrated these data with the expression of four hcn paralogues in different areas of the Atlantic cod heart.
Morphological Characterization of the Pacemaker in Atlantic Cod
The sinus venosus wall contained abundant collagen, fibroblasts, and discrete bundles of smooth muscle cells. The latter component became more apparent, and was better organized, near the atrium. In addition, the wall of the sinus venosus contained accumulations of neurons and numerous nerve bundles (Figure 1a). The neurons and nerves were surrounded by connective tissue, but were not encapsulated in any way (Figure 1a). The neurons showed a nucleolus, intranuclear rodlet, lysosomes in the vicinity of endoplasmic reticulum, and supporting glia (Figure 1b). The nerves in the sinus wall mostly contained myelinated fibers, but non-myelinated fibers were also present. Nerve bundles were distributed throughout the sinus venosus wall, but neuronal groups occurred more frequently in the sinoatrial region, close to the atrial myocardium and the pacemaker area ( Figure 1a).
Differential Expression of Hcn Paralogues in Heart
All of the hcn paralogues that were examined (hcn1, hcn2a, hcn2b and hcn4) were ubiquitously expressed in the Atlantic cod heart, albeit at varying levels in different regions ( Figure 2). In the sinus venosus, hcn2a, hcn2b and hcn4 were the predominant paralogues, and were expressed at similar levels. There was a significant difference between the hcn2a and hcn1 transcript levels, with hcn2a being 2.7-fold more abundant than hcn1. The most highly expressed paralogue in atrium was hcn2a, which was 2.8-and 3.1-fold higher than hcn2b and hcn4, respectively. It was also 2.5 times more abundant than hcn1, but this difference was not statistically significant. Hcn2a was also the predominantly expressed paralogue in the ventricle, and its transcript levels were three-fold higher than hcn1 transcripts. In the bulbus arteriosus, hcn1 and hcn2a were expressed at similar levels, and their transcripts were significantly more abundant than hcn2b and hcn4. Hcn1 mRNA levels were 20.7-and 4.4-fold higher than hcn2b and hcn4, respectively. Similarly, hcn2a transcripts were 19.6-and 4.2-fold more abundant than hcn2b and hcn4, respectively.
Overview of the Innervation Pattern
A schematic of the cod heart showing disposition of various chambers is reported in the Figure 3.
The intracardiac nervous system was demonstrated with the pan-neuronal markers AcT and Hu, showing the innervation of the sinoatrial region (SAR) of the heart of this species. A nerve plexus, the sinoatrial plexus (SAP), is located at the venous pole of the heart. AcT and Hu labeling revealed that axons coursed within the sinoatrial plexus and the somata of the intracardiac neurons (ICNs). Immunolabeling revealed the pattern of the SAR innervation, but did not indicate a differentiation between the axons originating from the extracardiac neurons and those arising from neurons, with their somata located intracardially (ICNs). Double immunolabeling with antibodies against ChAT and nNOS revealed a complete co-localization of the two neuronal markers (Figure 4a-d). Varicose nerve fibers that were double labelled with these antibodies were seen in close association at the surface of ChAT-nNOS-immunopositive neuronal somata (Figure 4d). densed myofibrils and regular mitochondria (Mt). Moreover, n is the myocardial nucleus; arrow is the myelinated nerve fiber. (e) TEM. A large cytoplasmic area contains condensed myofibrils surrounded by tightly packed rounded mitochondria (Mt). Note the presence of one myelinated (thick arrow) and several unmyelinated (thin arrows) nerve fibers. (f) TEM.
Differential Expression of Hcn Paralogues in Heart
All of the hcn paralogues that were examined (hcn1, hcn2a, hcn2b and hcn4) were ubiquitously expressed in the Atlantic cod heart, albeit at varying levels in different regions ( Figure 2). In the sinus venosus, hcn2a, hcn2b and hcn4 were the predominant paralogues, and were expressed at similar levels. There was a significant difference between the hcn2a and hcn1 transcript levels, with hcn2a being 2.7-fold more abundant than hcn1. The most highly expressed paralogue in atrium was hcn2a, which was 2.8-and 3.1-fold higher than hcn2b and hcn4, respectively. It was also 2.5 times more abundant than hcn1, but this difference was not statistically significant. Hcn2a was also the predominantly expressed paralogue in the ventricle, and its transcript levels were three-fold higher than hcn1 transcripts. In the bulbus arteriosus, hcn1 and hcn2a were expressed at similar levels, and their transcripts were significantly more abundant than hcn2b and hcn4. Hcn1 mRNA levels were 20.7-and 4.4-fold higher than hcn2b and hcn4, respectively. Similarly, hcn2a transcripts were 19.6-and 4.2-fold more abundant than hcn2b and hcn4, respectively. Relative expression levels of hcn1, hcn2a, hcn2b and hcn4 in sinus, atrium, ventricle and bulbus arteriosus in Atlantic cod. Transcripts were quantified by qPCR, normalized using the geometric average of ubi and eef1 expression and shown as relative values compared to hcn1 transcript levels in each sample. Data are expressed in arbitrary units (A.U.) as mean ± S.E. (n = 5). Different superscript letters ( a, b ) indicate significant differences in transcript levels between hcn paralogues in each heart region. Differences in hcn transcript levels within each heart area were determined by one-way ANOVA with a Holm-Sidak post hoc test (p < 0.05). Kruskal-Wallis one-way ANOVA on ranks with Dunn's post hoc test was performed when the data did not meet the normality and equal variance requirements. Relative expression levels of hcn1, hcn2a, hcn2b and hcn4 in sinus, atrium, ventricle and bulbus arteriosus in Atlantic cod. Transcripts were quantified by qPCR, normalized using the geometric average of ubi and eef1 expression and shown as relative values compared to hcn1 transcript levels in each sample. Data are expressed in arbitrary units (A.U.) as mean ± S.E. (n = 5). Different superscript letters ( a, b ) indicate significant differences in transcript levels between hcn paralogues in each heart region. Differences in hcn transcript levels within each heart area were determined by one-way ANOVA with a Holm-Sidak post hoc test (p < 0.05). Kruskal-Wallis one-way ANOVA on ranks with Dunn's post hoc test was performed when the data did not meet the normality and equal variance requirements.
Intense ChAT immunostaining was observed in axonal varicosities interconnecting the neuronal somata. The simultaneous detection of AcT and VAChT demonstrated two distinct populations of neuronal somata (neuron subtypes). A higher number of axon profiles and nerve terminals is noticed in the SAR (Figure 5a,c,d,f). Sometimes AcT-positive large axons, at single pole of soma originating from hillock, were observed (Figure 5f). Most of the clustered neuronal somata are positive to VAChT, and AcT-positive axons appeared to surround these somata (Figure 5f).
Sometimes AcT-positive terminals were apposed to VAChT-positive neuronal somata. In both cases, the axon profiles form varicosities "en passant" [22]. The axons terminals within the SAR were not double labeled with AcT and VAChT antibodies, thus indicating separate nerve fiber populations. Few axons exhibited co-labelling of the neuronal markers. Varicose AcT, VAChT and ChAT-nNOS axons were observed in close association with cardiomyocytes in the SAR and the atrium.
Overview of the Innervation Pattern
A schematic of the cod heart showing disposition of various chambers is reported in the Figure 3.
Identification of Pacemaker Cells
In the SAR region, antibodies to HCN isoforms (HCN1, HCN2, HCN4) were used to distinguish pacemaker cells from the surrounding cardiomyocytes. Cells expressing HCN1, HCN2 and HCN4 formed clusters that were intermingled with myocardial cells. HCN immunoreactivity is localized to either the cytoplasm or the cell membrane. Colocalization studies showed that most of the isoforms were often co-expressed with Islet-1 (Figure 6a,d). However, the combination of the antibodies to Islet-1, HCN1 and HCN2 revealed the presence of distinct cell populations, containing immunoreactivities to single markers (Figure 6e,f), probably due to cell size, ionic current densities, or a response to autonomic modulation [23,24]. Intense ChAT immunostaining was observed in axonal varicosities interconnecting the neuronal somata. The simultaneous detection of AcT and VAChT demonstrated two distinct populations of neuronal somata (neuron subtypes). A higher number of axon profiles and nerve terminals is noticed in the SAR (Figure 5a,c,d,f). Sometimes AcT-positive large axons, at single pole of soma originating from hillock, were observed (Figure 5f). Most of the clustered neuronal somata are positive to VAChT, and AcT-positive axons appeared to surround these somata (Figure 5f). The simultaneous detection of HCN4 and Hu has been used to identify the presence of the pacemaker cells among Hu-immunoreactive nerve fibers and axon bundles (Figure 7a-c).
HCN immunoreactivity is localized to either the cytoplasm or the cell membrane. Co calization studies showed that most of the isoforms were often co-expressed with Isle (Figure 6a,d). However, the combination of the antibodies to Islet-1, HCN1 and HCN revealed the presence of distinct cell populations, containing immunoreactivities to sing markers (Figure 6e,f), probably due to cell size, ionic current densities, or a response autonomic modulation [23,24]. Double immunolabeling with ChAT and nNOS antibodies revealed that numerous and sparse thick axon bundles are seen adjacent to the clustering of neuronal somata and the presumptive pacemaker region (Figure 4a-c). signal). (b-f) Pacemaker cells revealed colocalization of Islet-1 and HCN2 (d; merge). Two populations expressing Islet-1 (green signal and arrows) and HCN2 (red signal and arrows), respectively, were observed in e and f. Scale bars: 20 µm.
The simultaneous detection of HCN4 and Hu has been used to identify the presence of the pacemaker cells among Hu-immunoreactive nerve fibers and axon bundles ( Figure 7a-c).
Discussion
We have shown that the wall of the sinus venosus in G. morhua contains a complex pattern of nerves and neurons. Most of the cardiac innervation in teleosts is conveyed by vagosympathetic trunks that enter the heart following the sinus venosus wall (for a recent review, see Icardo, 2017). The cardiac output is regulated by the parasympathetic limb that causes cardio inhibition, and sympathetic limb that causes cardioexcitation [5,16,25]. Early and late studies have indicated that these nerves interact with resident intracardiac neurons, establishing a rich nervous plexus at the sinoatrial junction [5,22,26,27]. More than 90% of these neurons have been characterized as postganglionic cholinergic [5], although the final rate may vary between species [27,28]. In our study, we have demonstrated that ICNs in the SAR showed expression ChAT, VAChT and nNOS, and thus were cholinergic and nitrergic neurons that may innervate the pacemaker tissue. Also, preganglionic axons, presumably originating from vagosympathetic trunks, were positive for ChAT and nNOS, and were seen in close contact with ICNs, thus suggesting that Ach is released at the preganglionic synaptic junctions, acting on nicotinic receptors [5]. The presence of nNOS/NO in these terminals should also be correlated with its influence on afferent neurons and circuitry neurons in the heart.
In zebrafish, the sinoatrial plexus also contains efferent adrenergic neurons, and even afferent neurons projecting centripetally [5]. In the same sinoatrial junctional area, structural, immunological and electrophysiological studies have localized the heart pacemaker [4,5,26,27,29,30]. Colocalization studies showed that HU-positive nerve terminals and axon bundles approached HCN4 pacemaker cells, thus indicating the innervation of the pacemaker region by extrinsic and intrinsic nerves [22]. Our results in Atlantic cod are in agreement with previous findings, indicating a common pattern in the teleost heart.
From a structural point of view, the pacemaker area is located in the immediate vicinity of the atrial musculature, being surrounded by connective tissue. This area is not bound by any connective capsule and differs from the atrial myocardium, by having wide intercellular spaces, vascular profiles, and abundant axon terminals that establish numerous contacts with the surface of the myocardial cells. Previous structural studies have tentatively identified the pacemaker area on the basis of the abundance of neuromuscular junctions, and have focused on the richness and structure of the nerve terminals [22,26,29]. Curiously, the structure of the myocardium appears to have been mostly overlooked, except for a report indicating the absence of atrial specific granules (i.e., those containing the atrial natriuretic peptides) [29], and for several comments on the paucity of the myofibrillar content. This is in line with the poor myofibrillar content of the atrial myocytes (see [31]). The present results indicate that many myocardial cells in the pacemaker area show total or partial myofibrillar distortions, due to myofilament condensation, close apposition of the Z bandlike material and disappearance of the other myofibrillar bands. Of note, single myofibrils may alternate condensed areas with areas showing the normal cross-striated pattern. The presence of "normal" myocardial cells and of cells with myofibrillar condensation could indicate the existence of cell subpopulations within the pacemaker area. This is in line with our colocalization studies, which showed that Islet-1 was not co-expressed with the HCN1 and HCN2 isoforms, and at least two cell subpopulations are present. The abundance of mitochondria, the lateral aggregation of myofilaments, and the accumulation of Z bands may be interpreted as images of nascent myofibrils, suggesting that these cells are not fully differentiated. Thus, they may retain embryological characteristics, such as the capability for spontaneous depolarization at a high rate. This suggestion is compatible with the expression of the transcription factor Isl-1 that is up-regulated in the embryonic myocardium, being later restricted to the sinus node in mammals [32], and to the sinoatrial junction in adult zebrafish [4] and goldfish [27]. Of note, HCN4-and Isl-1-positive myocardial cells in the sinoatrial valve of the goldfish coexist with myocardial cells that are negative to both HCN4 and Isl-1 [27].
In the nervous system and the heart of mammals, HCN channels contribute to the regulation of neuronal and cardiac firing rates. The currents produced by HCN channels are classified into Ih (hyperpolarization), Iq (queer), or If (funny) currents. Especially, Ih control the rhythmic activity of cardiac pacemaker cells, and spontaneous firing of neurons. Highly conserved sequences among the HCN channel family indicate that the vertebrate HCN1-4 genes arose from duplication of a single ancestral gene, prior to the lineage divergence. The functional properties of HCN1-4 channels are similar, but not identical. Functional differences among these paralogues may be predominantly due to motifs in their intracellular N-and C-termini [14,33].
In Atlantic cod cardiac myocytes, paralogues of hcn1, hcn2a, hcn2b and hcn4 are expressed at varying levels in different cardiac regions, but paralogues to hcn2a, hcn2b and hcn4 appeared to be expressed at the same levels in the sinus venosus, with a maximal expression of hcn2a in the sinus, atrium and ventricle. On the other hand, hcn1 and hcn2a were the major channel proteins that were expressed in the bulbus arteriosus. Orthologs of all four mammalian hcn genes (hcn1-4) were expressed in brown trout heart [14,34]. The two paralogues of hcn2 in Atlantic cod and the six in trout arose from duplications of hcn genes in the fish lineages [14]. There is a difference in cardiac hcn channel composition among Atlantic cod, brown trout, and mammalian heart. Hcn4 is the predominant isoform of the hcn transcripts in rabbit and murine SA nodes [35]. The major expression of hcn3 orthologs in brown trout is strikingly different to the mammalian heart [14]. HCN4 proteins were reported in the cardiac pacemaker tissue of the adult zebrafish and goldfish [27]. Notably, the pacemaker tissue of the trout contains both c-AMP (cyclic adenosine monophosphate)insensitive (hcn3 and hcn1) and c-AMP-sensitive (hcn4 and hcn2a) isoforms, while in the atrium and ventricle, there are c-AMP-insensitive isoforms [14]. The analyses of molecular data revealed the presence, in the cod pacemaker tissue, of c-AMP-sensitive hcn-2 isoforms that are also mostly expressed in the atrium and ventricle. Unlike HCN3, HCN4 and HCN2 are strongly dependent on c-AMP levels [35]. In conclusion, it seems that the HCN channels in Atlantic cod are not similar to those in the trout and hagfish [12,14]. The physiological significance of these differences awaits investigation, as well as the specific contribution of c-AMP modulation for the distinct and ubiquitous member of HCN2 in the sinus, atrium, ventricle, and bulbus arteriosus of the cod, since c-AMP is a crucial factor regulating Ih/HCN channel function in the heart and brain [36].
In the present study, we have not focused on the anatomical localization of a presumptive pacemaker tissue in other parts of the heart, such as the atrioventricular region. In the bulbus arteriosus, our molecular investigations proved the presence of a higher expression of hcn2. HCNs were commonly associated with both central and peripheral nervous systems [37,38]. The bulbus arteriosus of fish hearts showed a coronary circulation that is regulated by autonomic nerves [39].
In conclusion, despite the vast morphological differences between the simple fish heart and the structurally more complex mammalian heart, there is a striking degree of evolutionary conservation of the fundamental and molecular pathways [20].
Collection of Samples
Wild Atlantic cod (Gadus morhua) (n = 12) were caught by line fishing in Saltfjorden (latitude: 67 • 16 32" N and longitude: 14 • 33 26" E) near Mørkvedbukta research station (Nord University, Bodø, Norway). The sex, length and weight of each fish were recorded (Tables S1 and S2). Fish were humanely sacrificed by a blow to the head with a priest, followed by transection of the spinal cord. Their heart was excised and carefully dissected in different regions (sinus venosus, atrium, ventricle and bulbus arteriosus), which were collected separately in cryotubes, immediately frozen using dry ice and stored at −80 • C until RNA extraction. For transmission electron microscopy (TEM), whole hearts from 5 individuals were fixed in 25 mL fixative consisting of 3% (v/v) glutaraldehyde (Sigma-Aldrich, Steinheim, Germany) and 0.5% (w/v) CaCl 2 (Sigma-Aldrich) in phosphate-buffered saline (Sigma-Aldrich).
General and Neurotransmitter-Specific Labeling
The immunohistochemical procedures in the present study were similar to those described previously regarding the neuroanatomical studies on the visceral organs in fishes [16,[40][41][42]. Tissue sections (6 µm) were rinsed in PBS and transferred to a PBS solution containing 2% Triton X-100 (X-100), Sigma-Aldrich), 1% (w/v) bovine serum albumin (BSA, A9576, Sigma-Aldrich), and 1% (v/v) normal goat serum (NGS, G9023, Sigma-Aldrich) for 48 h at 4 • C with agitation; they were then incubated with primary antibodies (Table 1). Primary antibodies were diluted in a solution containing 0.25% Triton X-100, 1% BSA, and 1% NGS in PBS and incubated overnight, then transferred to a solution of PBS containing the appropriate secondary antibody conjugated to AlexaFluor 488 or 555 fluorophores (Life Technologies, Burlington, Canada). The incubation time with secondary antibodies was 1 h. Final rinsing of tissue sections was performed in PBS before mounting with Vectashield (Vector Laboratories Inc., Burlingame, USA) to reduce fluorophore photobleaching. The primary antibodies used in this study have been previously validated in zebrafish, bichir and gar hearts and mudskipper gill [5,6,17,40,43]. In the present study to determine the general innervation of the heart, we have used antibodies against acetylated tubulin (AcT), the human neuronal protein C/D (Hu). The anti-AcT and anti-Hu antibodies were considered the most appropriate antibodies in mammals with which to obtain reliable estimates of total number of enteric neurons [44]. Antibodies to AcT-Hu were also used for the description of zebrafish intracardiac neurons [6] and enteric neurons [45]. Hu proteins are involved in many posttranscriptional mechanisms for the development and maintenance of the nervous system.
Cholinergic axons and somata were detected by immunoreactivity for choline acetyltransferase (ChAT), an enzyme involved in Ach synthesis, combined with the antibody to neuronal nitric synthase (nNOS) to label the intracardiac innervation. In more specimens antibody against vesicular acetylcholine transporter (VAChT) was used together with ChAT to double label cholinergic elements, and with AcT to differentiate between cholinergic elements, neuronal fibers and neuronal cell bodies [44]. VAChT antibodies have been used to reveal the expression of acetylcholine in the neuroepithelial cells and the neurons in the fish gill [40,41]. Neurons capable of producing nitric oxide (NO) were detected by the presence of nNOS (monoclonal anti-nNOS and polyclonal anti-nNOS). The characterization, specificity and reliability of the antibodies directed against choline acetyltransferase (ChAT) and nNOS, and their application in morphological studies of the intracardiac ganglia containing a heterogeneous population of neurons [46], the zebrafish enteric nervous system [44] and the neuroepithelial cell system of the fish gill [41,47], have been previously reported.
Pacemaker Cell Immunohistochemistry
Paraffin sections containing the sinoatrial region were treated with antibodies against HCN1, HCN2, HCN4, and the transcription factor Islet-1, to detect the pacemaker cells. Some specimen subsets were labeled using antibodies against all three HCN isoforms and the Hu neuronal protein to detect the innervation of the pacemaker cells. Pre-absorption of the primary antisera to HCN1, HCN2 and HCN4 with the respective blocking peptide (HCN1 blocking peptide, BLP-PC056, HCN2 blocking peptide, BLP-P030 and HCN4 blocking peptide, BLP-PCO52 from Alomone Labs) according to the guidelines of the supplier led to the complete elimination of the immunostaining of the pacemaker tissue. HCN4 antibodies as reliable markers of the pacemaker cells as well as the controls of HCN4 and Islet-1 antibodies in the heart of zebrafish were reported by [4,5].
Analysis and Imaging
Processed specimens were viewed using a LSM 700 Zeiss confocal microscope. The sections were analyzed, and images acquired using a Zeiss LSMDUO confocal laser scanning microscope with META module (Carl Zeiss, Micro Imaging, GmbH, Jena, Germany). Zen 2011 (LSM 700, Zeiss software) built in colocalization view was used to highlight the expression of both antibody signals in order to produce a co-labeling signal. Digital images were cropped, and the figure montage was prepared by the use of Adobe Photoshop 7.0 (Adobe System, San Jose, CA, USA).
Semi-Thin Sections and Transmission Electron Microscopy (TEM)
Selected samples from the tissues fixed in 3% (v/v) glutaraldehyde were postfixed in 1% (w/v) osmium tetroxide, dehydrated in graded acetone and propylene oxide, and embedded in Araldite (Fluka, Buchs, Switzerland), following routine procedures. Semithin sections were cut with an LKB III ultratome, stained with 1% (w/v) toluidine blue, and observed with a Zeiss Axioskop 2 plus microscope equipped with an AxioCam HRc digital camera. For TEM, ultrathin sections cut with a Leica ultracut UCT were stained with uranyl acetate and lead citrate, and examined with a Jeol-JEM-1011 working at 80 KV and equipped with a Gatan ORISUS SC 1000 CCD camera.
RNA Extraction and cDNA Synthesis
Total RNA from the different heart segments was extracted following the QiAzol protocol (Qiagen, Germany). Purity and quantity of RNA were determined using the Nan-oDrop 1000 (Thermo Fisher Scientific, Waltham, MA, USA). RNA integrity was assessed by gel electrophoresis on 1.2% (w/v) agarose gels (Sigma-Aldrich) stained with SYBR safe DNA gel stain (Thermo Fisher Scientific). One microgram of total RNA from each sample was reverse transcribed using the reverse QuantiTect transcription kit (Qiagen), following the manufacturer's protocol.
Real-Time PCR (qPCR) and Quantification of Gene Expression
For each heart region, the 5 samples with the best RNA quality were selected for qPCR analysis, as detailed in Supplementary Materials Table S1. Real-time PCR amplification was performed on a LightCycler ® 96 thermocycler (Roche Diagnostics, Basel, Switzerland). All PCR reactions were carried out in duplicate in 10 µL reactions consisting of 5 µL of LightCycler 480 SYBR green I master mix (Roche), 1 µL of gene-specific primer pair (5 µM each) and 4 µL of 25× diluted cDNA sample. Non-reverse transcription control and no template controls were included for each primer pair. Details of the primers used for target and reference genes are provided on Table 2. Thermocycling parameters were as follows: initial denaturation at 95 • C for 10 min, followed by 45 cycles at 95 • C for 10 s, optimized annealing temperature (58-65 • C) for each gene (Table 2) for 30 s and 72 • C for 20 s. The specificity of amplification was determined by melting curve analysis. The standard curve was obtained by running a 5-point series of 2-fold dilution (1:2, 1:4, 1:8, 1:16 and 1:32) pooled cDNA. Relative expression of the target genes was determined using GeNorm [48], as reported in [49]. The geometric means of the two most stable reference genes (eef1a and ubi) were used as calibrators. Differences in hcn transcript levels in each heart area were determined by one-way ANOVA with a Holm-Sidak post hoc test (p < 0.05). Kruskal-Wallis one-way ANOVA on ranks with Dunn's post hoc test was performed when the data did not meet the normality and equal variance requirements. Statistical analyses were performed with the SigmaStat statistical package (Systat software, UK). Table 2. Details of the primers used for relative quantification of mRNA levels by qPCR. In addition to the primer sequences used to quantify each transcript level, amplicon sizes, annealing temperatures (Ta) and amplification efficiencies (E) are indicated. | 7,254.8 | 2021-07-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Magnetic excitations in assemblies of dipolar coupled nanoparticles
The equilibrium magnetization configurations and the associated microwave susceptibility spectra of dipolar coupled nanoplatelets are explored using three-dimensional (3D) micromagnetic simulations. First, the case of periodic arrangements of nanoplatelets on square arrays is considered. As a result, a macro-vortex state defined as a flux closure pattern spreading over the whole array with or without a vortex core can be stabilized starting from an initial orthoradial magnetization configuration. For macro-vortex states with a vortex core, the linear excitation spectrum exhibits a sub-GHz resonance line ascribed to the vortex core dynamics at the array center. The features of this line (spectral position and amplitude) depend on the array size and the strength of the dipolar coupling through the interplatelet spacing. This resonance is also observed for macro-vortex states without a vortex core but only in the regime of a strong dipolar coupling. The effect of disorder is then investigated by numerically generating assemblies of nanoplatelets with a position disorder and, shape and size distributions. The micromagnetic simulations reveal flux closure magnetization configurations as well but without a vortex core. A low-frequency resonance appears in the susceptibility spectra for quite high surface contents of nanoplatelets but its amplitude is weaker compared to the case of periodic arrays. This line arises from a collective mode extended over a few nanoplatelets. A large variety of static and dynamical behaviors is thus evidenced resulting in a great complexity even in such
Introduction
Magnetic nanoobjects are the building blocks shared by a wide range of promizing technologies like the future ultrahigh density storage media [1], the nanomagnet-based logic functions for new data processing systems [2], the energy-harvesting devices [3], the biomedical applications [4] and the miniaturized microwave components [5]. Design of new devices integrating such nanoobjects requires a deep understanding of their static and dynamical magnetic properties both as isolated and interacting objects. Schematically, the assemblies of nanoobjects can be classified in two categories, namely, the geometrically ordered ones and the disordered ones. The ordered systems correspond to periodic arrangements of nanoobjects [6]. The artificial spin ices defined as a two-dimensional arrays of closed spaced single-domain nanoobjects coupled by dipolar interaction belong to this family [7]. One remarkable feature of such systems is the appearence of a magnetic frustration due to the impossibility of satisfying simultaneously all competing dipolar interactions between the nanoobjects. The static and dynamical properties of artificial spin ices are controlled by the shape, the size and the composition of the nanoobjects, and the geometrical properties of arrays (symmetry and lattice constants). A large variety of equilibrium magnetization configurations has been experimentally and numerically reported [7,8].
In addition, the excitation spectrum at microwave frequencies experimentally and numerically investigated reveals the presence of collective spin-wave modes [9] and reflects the existence of topological defects within the array [10]. On the other hand, disordered assemblies of magnetic nanoobjects are exemplified by nanocomposites where ferromagnetic nanoparticles are embedded in a nonmagnetic matrix [11]. In contrast to ordered systems, the numerical simulations based on the micromagnetism theory for describing the dynamic response of such systems are rather rare. The effective susceptibility spectra of the nanocomposites are usually computed using phenomenological approaches based on mixing rules [12]. Even if such models can incorporante some information regarding the composite morphology like the existence of nanoparticles clusters [11], this approach suffers from severe approximations. In particular, the particles are assumed to be in the single-domain state and the dipolar interaction between the particules is treated by means of an effective demagnetizing factor [11]. The aim of this paper is to investigate the equilibrium magnetization configurations and the associated microwave excitation spectra in the linear regime of dipolar coupled nanoplatelets by means of three-dimensional (3D) micromagnetic simulations. The characteristics of nanoplatelets have been chosen to allow the occurence of nonuniform magnetization states within the individual elements. First, the effect of the dipolar coupling strength on the equi-librium micromagnetic configurations and the susceptiblity spectra is studied in the case of periodic arrays of nanoplatelets. Then, aggregates of randomly distributed nanoplatelets are considered and the impact of geometrical disorder on the microwave response is analyzed. Emphasis is placed on the existence of a sub-GHz resonance resulting from the dynamics of flux closure magnetization configurations. The nominal nanoelements of interest are thin squareshaped platelets defined by the platelet side L and and the platelet thickness L z , the z-axis being along the platelet thickness. In the conducted numerical simulations, L = 90 nm and two values of L z were considered: L z = 4 nm and L z = 12 nm. For the ordered assemblies, the nanoplatelets were arranged in two-dimensional square arrays. The interplatelet spacing δ (edge-to-edge distance) was varied between 3 nm and 30 nm. δ is related to the lattice constant a by: a = L + δ. Various array sizes were investigated: N = 2×2, 3×3, 4×4, 5×5, and 6×6, where N is the number of periodic cells equal to the number of platelets. Figure 1(a) shows an example of a 5×5 array of nanoplatelets with δ = 15 nm. For the disordered assemblies, the degree of disorder was introduced using the following two-step generation rules. First, the nominal square-shaped nanoplatelets can be transformed into rectangular ones with the in-plane sizes L x and L y inside the size range 80 nm < L x , L y < 100 nm. Second, the position of the nanoplatelets can be changed from the initial one in the square array by applying spatial deviations δ x and δ y along the x and y axes, respectively. These deviations satisfy 4 nm < δ x , δ y < 12 nm and the excluded volume rule (no overlap between the platelets). In both cases, L x , L y , δ x and δ y result from a random draw with a uniform distribution. Each configuration is characterized by the surface content of nanoplatelets defined as ϕ s = NL x L y /S , where S is the surface of the resized arrays. Figures 1(b) and 1(c) display two examples of disordered configurations with ϕ s = 56% and ϕ s = 81%, respectively.
Micromagnetic simulations
The high-frequency response of ordered and disordered nanoplatelets assemblies was computed by means of micromagnetic simulations using two 3D home made codes described elsewhere [13]. The first code computes an equilibrium magnetization configuration by integrating the Landau-Lifshitz (LL) equation in the time domain using a second-order Taylor scheme and an optimized time step. The second one solves the LL equation linearized around the equilibrium magnetization configuration (small-amplitude motion regime) in the frequency domain. It computes the local dynamic susceptibility tensor χ i j (r, ω) i, j = x, y, z which depends both on space and angular frequency, and then the dynamic susceptibility tensor averaged over the element's volume V, χ i j (ω) = χ i j (r, ω) V i, j = x, y, z, which connects the high-frequency response of a magnetic configuration δm to a weak exciting RF magnetic field δh such as δm = χ δh. These codes were previously used to investigate the microwave response of vortex-state [14] and bubblestate [15] single elements. The micromagnetic simulations were performed using magnetic parameters representative of isotropic Permalloy (Ni 80 Fe 20 ) for each nanoplatelet, namely : the saturation magnetization M S = 8. 10 5 A/m, the exchange constant A = 1.3 10 −11 J/m, the gyromagnetic ratio γ = 1.76 10 11 s −1 T −1 and the damping parameter α = 0.02. The nanoplatelets are spatially discretized using a regular cubic mesh. The mesh sizes were chosen to be equal to ∆ x = ∆ y = ∆ z = 3 nm lower than the exchange length, Λ = 2A/(µ 0 M 2 s ) 5.7nm.
Isolated nanoplatelets
Various stable magnetization configurations exist in soft planar magnetic nanostructures depending on the element size and thickness [16]. Let us consider the case of isolated square-shaped platelets with L = 90 nm and L z = 12 nm. Three equilibrium magnetization configurations can be stabilized in such platelets. The first configuration is the vortex state described as an in-plane curling magnetization with a central region, the vortex core, where the magnetization points out of plane ( Fig. 2(a)). The magnetic energy of this vortex state is equal to 26.6 ev. In what follows, the magnetic energy of various magnetic configurations will be normalized by the vortex one. The second configuration is the onion state (also termed leaf state) where the magnetization has a privileged in-plane orientation along one diagonal of the square platelet ( Fig. 2(b)). The reduced magnetic energy is o = 0.945. The third magnetic configuration is the so-called buckle or C state where the in-plane magnetization adopts a C-like shape ( Fig. 2(c)). For this case, c = 0.942. These three stable states have magnetic energies close to each other with the C state being the lowest-energy state. Figure 2(h) displays the zerofield dynamic susceptibility spectra (imaginary part of the χ xx element termed Im(χ xx )) for the three stable configurations within the [0.5 GHz-20 GHz] frequency range. (h) Dynamic susceptibility spectra (imaginary part of the χ xx element, Im(χ xx ( f ))). Maps of the local susceptibility for the vortex state (Im(χ xx (x, y)) (d) and Im(χ + (x, y)), (e)), the onion state (Im(χ xx (x, y)), (f)) and the C state (Im(χ xx (x, y)), (g)) computed at the resonance frequency of the low-frequency resonances in the Im(χ xx ) spectra. The high levels of susceptibility are in red.
A common feature between the three spectra is the appearance of a single low-frequency resonance and multiple resonance lines above 7 GHz. For the vortex state in disk-shaped nanostructures, it is well-established that the excitation spectrum consists of a low-frequency vortex translation mode corresponding to a gyrotropic motion of the vortex core as a whole around the element center and high-frequency resonances ascribed to quantized spin-wave modes due to the finite lateral size of the element and essentially spreading outside the vortex core [17]. In what follows, the attention will be focused on the lowest-frequency resonances emerging in the three micromagnetic situations. For our case of a vortex state in a square platelet, the low-frequency resonance located at f=0.91 GHz is due to the vortex translation mode as confirmed by the color maps of the imaginary part of the local susceptibility (Im(χ xx (x, y)), Fig. 2(d) and, more explicitly, Im(χ + (x, y)) with Fig. 2(e)) computed at the resonance frequency and showing a high-level of dynamic susceptibility within the vor-tex core. For the onion state, the broad low-frequency resonance at f=1.65 GHz is associated with a high level of magnetic response along the opposite diagonal with respect to the privileged one in the static configuration ( Fig. 2(f)). For the C state, the low-frequency resonance appears at f=4.9 GHz. This resonance is due to a high level of magnetic response inside a Y-shape area with the x-axis as the symmetry axis ( Fig. 2(g)). Let us move on the assemblies of ordered nanoplatelets.
Initialization of the magnetization configuration
The research of an equilibrium micromagnetic configuration requires to start from an initial configuration. The legitim questions from there are how to choose this initial state and what is the impact of this state on the final result. This issue is of prime importance when the micromagnetic simulations are conducted on multiple interacting magnetic objects as in-depth discussed for the search of magnetic ground state in artificial spin ices [8].
To illustrate the effect of this starting state, various initial magnetization configurations have been considered for a square array of nanoplatelets; a random configuration, a uniform configuration with the magnetization vector oriented along a nanoplatelet side (y-axis), centered and off-centered orthoradial magnetization configurations. The orthoradial direction for a given nanoplatelet is defined as the cross product between the position vector at the center of the nanoplatelet and the normal of the array. The nanoplatelets have the following geometrical parameters : L = 90 nm and L z = 12 nm with an interplatelet spacing δ =15 nm. The array size is fixed at 5×5. The associated equilibrium magnetization configurations resulting from the micromagnetic simulations are reported in Fig. 3. For the initial random configuration (Fig. 3(a)), a mixing of vortex, onion and C states is observed within the platelets. In addition, some platelets support an S state (in-plane magnetization along the S-like shape) which is not a stable state for isolated platelets of the same size. The presence of S state reflects the dipolar coupling between the platelets and locally favors the flux closure.
Such magnetic flux arrangements have been experimentally observed using electron holography in FeNi nanoparticle chains [18] and Fe nanocubes [19]. For the uniform intial state (Fig. 3(b)), onion, C and S states are stabilized within the platelets. No vortex state is revealed. For the orthoradial initial configurations, the equilibrium magnetization configurations correspond to flux closure patterns with a spiraling in-plane magnetization spreading over the whole platelet array. These configurations are termed macro-vortex states. For a centered orthoradial initial configuration (Fig. 3(c)), a vortex core appears at the central nanoplatelet of the array whereas no vortex core exists for an off-centered initial configuration (Fig. 3(d)). In the last case, the macro-vortex configuration bears some similarities with the local vortex states evidenced in artificial spin ice systems with elongated single-domain elements [20]. To compare the energy between the different configurations, the reduced magnetic energy per platelet is defined as p = N /(N v ), where N is the magnetic energy of the array with N platelets. The above-described equilibrium states have the following energies: p =0.759, 0.697, 0.635 and 0.618 obtained from the random, uniform, centered orthoradial and off-centered orthoradial initial configurations, respectively. It should be remarked that p is always lower than unity meaning that the dipolar coupling allows to reduce the magnetic energy with respect of a collection of noninteracting platelets. In addition, the offcentered macro-vortex state has the lowest energy. From the experimental point of view, the orthoradial magnetization configurations can be carried out by applying an in-plane rotating magnetic field and then reducing its amplitude down to zero. In the present work, the purpose is to investigate the mi-crowave susceptibility spectra associated with flux closure patterns and to evidence the conditions for appearing a low-frequency resonance (below 1 GHz). and 4(f). The centered macro-vortex state with a vortex core (resp. without a vortex core) appears in arrays with an odd (resp. even) number of cells. (ii) For a given array size, the lowest-energy state is always the magnetization configuration without a vortex core but the energy differences between the configurations with and without a vortex core are weak (see Fig. 4). Figure 5. Ordered arrays of nanoplatelets with a centered macrovortex state and a vortex core. Array size dependence of the Im(χ xx ) spectrum. The interplatelet spacing is δ=3 nm. The inset maps the local susceptibility (Im(χ + (x, y)) of the 3×3 array. Figure 5 and 6 displays the Im(χ xx ) spectra of magnetization configurations with a centered macro-vortex state . Ordered arrays of nanoplatelets with a macro-vortex state but no vortex core. (d) Array size dependence of the Im(χ xx ) spectrum. The interplatelet spacing is δ=3 nm. Maps of the local susceptibility (imaginary part) associated with the low-frequency resonance for arrays of size 2×2 (a), 4×4 (b) and 6×6 (c). The color code is the same as in Fig.2. with a vortex core for various array sizes ( Figure 5) and for various interplatelet spacings ( Figure 6). These spectra are mainly characterized by an high-amplitude and lowfrequency resonance (below 1 GHz). The local susceptibility map reveals that this resonance is localized within the vortex core at the array center and is associated with the macro-vortex translation mode by analogy to the single platelet case (inset in Fig.5). Increasing the size array for a given interplatelet spacing, δ =3 nm, (Fig. 5) leads to a lowering of the resonance frequency of the macro-vortex translation mode. This behavior is consistent with the one of a unique platelet with the increasing platelet size. Ris-ing the interplatelet spacing for an array size fixed at 5×5 (Fig. 6), results in a frequency shift of the resonance frequency of the macro-vortex translation mode towards the high frequencies as a consequence of reducing the dipolar interaction between the platelets. As a result, the resonance frequency of this low-frequency mode can be typically tuned between 200 MHz and 1 GHz.
Flux closure magnetic configurations
The Im(χ xx ) spectra of magnetization configurations with a centered macro-vortex state without a vortex core for various array sizes are reported in Figure 7 (δ =3 nm). In each case, a unique high-amplitude resonance exists as well but the resonance frequency is located above 2 GHz. The local susceptibility maps point out that this resonance is due to a mode extended over the whole array with a fourfold symmetry. Increasing the array size leads to a decrease of the resonance frequency. Figure 8. Ordered arrays of nanoplatelets with an off-centered macro-vortex state without a vortex core. Imχ xx spectra for two array sizes 3×3 and 5×5. Regime of strong dipolar coupling (δ = 3 nm). The insets correspond to the local susceptibility maps (imaginary part) of the low-frequency resonance for the two array sizes. The color code is the same as in Fig. 2. Figure 9. Ordered arrays of nanoplatelets with an off-centered macro-vortex state without a vortex core. Imχ xx spectra for two array sizes 3×3 and 5×5. Regime of weak dipolar coupling (δ = 15 nm). Figure 8 shows the Im(χ xx ) spectra for magnetization configurations with an off-centered macro-vortex state without a vortex core. Interestingly, the presence of a lowfrequency resonance (below 1 GHz) depends on the dipolar coupling strength. For a strong dipolar coupling (δ =3 nm), a low-frequency response exists with a marked resonant shape for a large enough array size (Fig. 8). It should be noted that the amplitude of this resonance is high for this array size. The local susceptibility map reveals that this resonance arises from a cooperative mode extending over several platelets. For the regime of a weak dipolar coupling (δ =15 nm), no low-frequency response is observed whatever the array size (Fig. 9). It should be remarked that resonance lines assigned to nonuniform collective excitations within arrays have been reported for stadium shaped nanoelements in a square array [9]. However, such nanoelements are uniformly magnetized by an in-plane bias magnetic field in contrast to the nanoplatelets considered in this work. Figure 10. Disordered assemblies of nanoplatelets numerically generated with a surface content ϕ s = 58% (a) and ϕ s = 56% (b). (e) Im(χ xx ) spectra for the two above described configurations. Maps of the local susceptibility (imaginary part) associated with the low-frequency resonance for the ϕ s = 58% (c) and ϕ s = 56% (d) configurations. The color code is the same as in Fig.2.
Disordered assemblies of nanoplatelets
As an example, the case of disordered aggregates formed by 25 rectangular platelets are considered. The platelet thickness is fixed at L z =4 nm. The lateral size distribution of nanopatelets and their relative positions were generated according to the process described in Sec.2.1. The surface content of nanoplatelets, ϕ s is changed by compacting them along the x and y-axes.
The results coming from two independent draws corre- Figure 11. Disordered assemblies of nanoplatelets. Im(χ xx ) spectra for two high surface contents ϕ s = 67% and ϕ s = 81%.
sponding to ϕ s = 58% and ϕ s = 56% are presented in Fig. 10. In each case, the starting state is an orthoradial magnetization configuration. The two equilibrium magnetization configurations (case ϕ s = 58%, Fig. 10(a) and case ϕ s = 56%, Fig. 10(b)) are flux closure states essentially mixing S states, C states and onion states at the level of individual nanoplatelets. Figure 10(e) shows the associated Im(χ xx ) spectra. These ones reveal the existence of a low-frequency resonance at 0.65 GHz for ϕ s = 58% and at 1.16 GHz for ϕ s = 56% but with a weaker amplitude with respect to the case of the periodic arrays. The resonance lines arise from a mode essentially localized inside one nanoplatelet (case ϕ s = 58%, Fig. 10(c)) or extended over three nanoplatelets (case ϕ s = 56%, Fig. 10(d)). Despite a quite similar surface content, these two assemblies of nanoplatelets lead to distinct feartures of the low-frequency resonance in terms of resonance position, resonance linewidth and line amplitude. Let us now consider the case of higher surface contents. The Im(χ xx ) spectra for two disordered aggegates with ϕ s = 67% and for ϕ s = 81% are displayed in Fig. 11. A striking feature is the absence of a low-frequency resonance for these very dense composite media. These results highlight the impact of disorder on the dynamical behavior of dipolar coupled nanoplatelets and the difficulty to control the lowfrequency part of the susceptibility spectra.
Summary and perspectives
These numerical results illustrate the large variety of existing microwave susceptibility spectra even for the idealistic case of assemblies of thin soft platelets with a regu-lar shape coupled by dipolar interactions. Restricting to magnetization configurations with a flux closure pattern at the array scale, a high-amplitude and low-frequency resonance (below 1 GHz) exists in ordered arrays supporting a macro-vortex state with a vortex core but also without a vortex core for the regime of strong dipolar coupling. For the disordered aggregates, a low-frequency absorption line (between 0.5 GHz and 2 GHz) emerges as well but with a weaker amplitude. This resonance appears in flux closure states and are associated with a cooperative mode spreading over a few platelets. Such resonances disappear for higher surface contents. These preliminary results evidence the strong impact of disorder on the susceptibility spectra. These micromagnetic simulations of interacting magnetic nanostructures can be viewed as a first step to get a better understanding of the microwave magnetic response of real systems. To go further, several points will have to be taken into account. First, the magnetic history of the sample plays a critical role and must be closely reproduced in the demagnetization protocols used in the micromagnetic simulations. Second, the presence of defects, as well as the shape and size distributions of nanoparticles can affect the equilibrium magnetizations configurations rending difficult to control and predict the microwave magnetic behavior. Introduction of information on the nanocomposite morphology in the micromagnetic simulations is needed. Third, clustering effect occurs in magnetic nanocomposites with a significative volume content of particles. The micromagnetic simulations will have to address this issue. This overall challenge is ambitious but in phase with the perspective of using arrangements of 2D or 3D nanoparticles in future devices. | 5,364.8 | 2020-01-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Depression Prediction by Using Ecological Momentary Assessment, Actiwatch Data, and Machine Learning: Observational Study on Older Adults Living Alone
Background Although geriatric depression is prevalent, diagnosis using self-reporting instruments has limitations when measuring the depressed mood of older adults in a community setting. Ecological momentary assessment (EMA) by using wearable devices could be used to collect data to classify older adults into depression groups. Objective The objective of this study was to develop a machine learning algorithm to predict the classification of depression groups among older adults living alone. We focused on utilizing diverse data collected through a survey, an Actiwatch, and an EMA report related to depression. Methods The prediction model using machine learning was developed in 4 steps: (1) data collection, (2) data processing and representation, (3) data modeling (feature engineering and selection), and (4) training and validation to test the prediction model. Older adults (N=47), living alone in community settings, completed an EMA to report depressed moods 4 times a day for 2 weeks between May 2017 and January 2018. Participants wore an Actiwatch that measured their activity and ambient light exposure every 30 seconds for 2 weeks. At baseline and the end of the 2-week observation, depressive symptoms were assessed using the Korean versions of the Short Geriatric Depression Scale (SGDS-K) and the Hamilton Depression Rating Scale (K-HDRS). Conventional classification based on binary logistic regression was built and compared with 4 machine learning models (the logit, decision tree, boosted trees, and random forest models). Results On the basis of the SGDS-K and K-HDRS, 38% (18/47) of the participants were classified into the probable depression group. They reported significantly lower scores of normal mood and physical activity and higher levels of white and red, green, and blue (RGB) light exposures at different degrees of various 4-hour time frames (all P<.05). Sleep efficiency was chosen for modeling through feature selection. Comparing diverse combinations of the selected variables, daily mean EMA score, daily mean activity level, white and RGB light at 4:00 pm to 8:00 pm exposure, and daily sleep efficiency were selected for modeling. Conventional classification based on binary logistic regression had a good model fit (accuracy: 0.705; precision: 0.770; specificity: 0.859; and area under receiver operating characteristic curve or AUC: 0.754). Among the 4 machine learning models, the logit model had the best fit compared with the others (accuracy: 0.910; precision: 0.929; specificity: 0.940; and AUC: 0.960). Conclusions This study provides preliminary evidence for developing a machine learning program to predict the classification of depression groups in older adults living alone. Clinicians should consider using this method to identify underdiagnosed subgroups and monitor daily progression regarding treatment or therapeutic intervention in the community setting. Furthermore, more efforts are needed for researchers and clinicians to diversify data collection methods by using a survey, EMA, and a sensor.
Challenges of Geriatric Depression
Depression is one of the most prevalent mental health problems in older adults. Globally, it was the third leading cause of years lived with disability in 2015, increasing by 18.4% between 2005 and 2015 [1]. In particular, older adults living alone are most vulnerable to depression compared with those living with others [2,3] because 30.2% of Koreans living alone are reported to have significant symptoms of depression [4]. In Korea, the economic burden of individuals with depression increased by about 27.1% from 2009 to 2013, with an estimated total cost of Korean $27.1 billion [5]. Thus, early and accurate diagnosis is critical to initiate timely treatment and reduce further disease burden; however, it is difficult to diagnose geriatric depression [1,2].
Underdiagnosis or inaccurate diagnosis of depression in older adults living alone remains a challenge [2]. Self-report and clinical interview are the primary methods to diagnose depression [6]; however, there are several challenges to overcome when diagnosing older adults living alone. First, no proxy currently exists that uses objective observations for reporting older adults' depressive symptoms in their daily lives. Second, existing self-reporting instruments have a limitation to assess mood variability in response to daily stress in the natural environment, even the validity and reliability [6]. Third, atypical symptoms of depression are more common in older adults compared with younger adults [2,6,7]. Finally, older Asian adults are often hesitant to report their depressed mood and depressive symptoms because of social stigma or misconceptions [8]. Thus, it is vital to use diverse methods to collect additional data to fill the gaps contributing to the underdiagnosis of geriatric depression [2,6].
Potential Methods to Identify Geriatric Depression
Ecological momentary assessment (EMA) has been suggested as a promising instrument because it can detect an individual's real-time experiences and mood in real-world settings over time and in different situations [9]. Currently, clinicians rely on the retrospective reports of patients' depressed mood in unfamiliar examination rooms using standardized instruments [6,10]. However, EMA allows individuals to report their momentary mood in the right now and here, multiple times in their real and familiar environment [9,11], rather than in artificial settings or laboratories. Thus, health care professionals can collect high ecologically valid data that are more specific to the older adult's contextual situation and readily applicable to lifestyle modification [9,11,12].
As some older adults have difficulty using high-tech devices, one of the challenges with using EMA by older adults is the limited choices available when choosing a device [9]. Although a diverse number of smart devices are available, a wrist-worn Actiwatch is one method for collecting ecological momentary data [11] and other types of data. It has been primarily used to measure objective sleep and motor activity [13,14]. The usefulness of actigraphy has been well recognized and validated in individuals with depression [15] because of its familiarity, portability, and simple operation that involves pressing the device button to input the data [13]. Although actigraph has been suggested as being an advantageous device for use with older adults [16], few studies have used actigraphy to measure momentary mood or related problems in older adults with depression [17,18].
EMA and actigraphy data require a new analytic approach rather than conventional analysis. Numerous studies have used machine learning with wearable sensor data to improve symptom detection and for monitoring in diverse populations of neuropsychiatric patients living in real-time and in diverse life contexts [19][20][21]. Specifically, the assessment of suicide risk and emotional distress was identified as a possibly applicable area for using social media text mining [22], predicting negative emotions using mobile phone usage pattern [23], and detecting everyday behavior related to clinically meaningful levels of depression using sensor data [24]. Machine learning involves an objective, data-driven, and situation-independent analysis, and it has also been applied in the analysis of sensor data for automated assessment of mental health [24][25][26]. Machine learning analytic techniques that computationally mine meaning from data and classify, detect, and segment meaningful patterns, associations, relationships, and trends between variables and simulation are used to build predictive and optimization models [25,27]. These data analytics can equally be applied to small data to extract and model insights [25,27]. Therefore, depression is an ideal construct to establish, validate, and clinically apply machine learning approaches using sensor data. Thus, this study aimed to develop a supervised machine learning algorithm to predict the classification of depression groups among older adults living alone using EMA data from older adults living with a depressed mood.
Overview of Study Design
On the basis of expert opinion [28] and a methodological guide [27], this study was conducted in 4 steps: (1) collecting data associated with depression, (2) data processing and representation, (3) data modeling (ie, feature engineering and selection), and (4) training and validation of the prediction model ( Figure 1).
Sample
We recruited a convenience sample of 56 older adults living alone via a community health center in Korea between May 2017 and January 2018. Inclusion criteria were as follows: the participants (1) were aged 65 to 94 years, (2) understood Korean, (3) lived as a single household in the community, and (4) had at least mild levels of depressive symptoms (score ≥5 on the Short Geriatric Depression Scale). We excluded 3 individuals with significant cognitive impairment and a high risk of suicide according to the Korean versions of the Mini-Mental Status Examination [29] and Crisis Triage Ration Scale [30]. In addition, the data from 6 individuals were excluded from the analysis because of refusal to wear or incomplete wearing of an Actiwatch (n=5) and data loss because of device error (n=1). Thus, the final sample comprised 47 participants who constantly wore an Actiwatch and reported sufficient numbers of EMA (ie, at least once a day) for 2 weeks [31]. All participants provided written informed consent, and the institutional review board of the affiliated university (IRB 2017-0007-1) approved the study.
Depression Measures
To classify the depression and nondepression groups, the Korean versions of the Short Geriatric Depression Scale (SGDS-K) and the Hamilton Depression Rating Scale (K-HDRS) were used to assess self-reported and clinician-observed symptoms of depression at baseline and 2 weeks. The SGDS-K is a 15-item instrument to assess subjective depressive symptoms (0=symptom absent and 1=symptom present). The total score ranges from 0 to 15; a higher score indicates more severe levels of depression [32]. The K-HDRS is a 17-item instrument that is the most widely used observer rating scale for observing depressive symptoms and severity. The total score ranges from 0 to 52, and a higher score indicates more severe depression [33]. The diagnostic validity of SGDS-K and K-HDRS for screening has been reported in previous studies [33,34].
Actigraphy Data and Ecological Momentary Assessment
Physical activity and ambient light exposure were measured using a wrist-worn Actiwatch (Actiwatch Spectrum PRO, Philips Respironics). It is considered to be a valid and reliable accelerometer that continually detects wrist movements that reflect activities; sleep-wake patterns; light exposure to the red, green, and blue (RGB) spectral regions; and broad-spectrum white light [35,36]. Data were collected continuously in 30-second epochs for 14 consecutive days. Participants wore the Actiwatch on the nondominant wrist all the time and were instructed to take it off only when taking a bath or for a few minutes as needed. Furthermore, participants were instructed that long sleeves should not cover the light sensor in the Actiwatch. We used Actiware software (Philips Respironics) to export the data.
Participants were instructed to rate their momentary mood using a button on the Actiwatch for 2 weeks. When the participants pressed the button, an increasing number was shown in the front window of the Actiwatch. The mood level was measured on a 10-point Likert scale (ranging from 1=very depressed to 10=not depressed) on the individualized preset time, as EMA reporting time in a day should be set in consideration of each individual's lifestyle and convenience [37]. The Actiwatch included options for audible and vibrating alarms that were set to appear 4 times a day as a means to remind participants to complete the EMA regarding their current depressed mood. When a participant missed the alarm to complete the EMA, they received additional alarms over 5-min intervals until they responded. Research staff also provided a follow-up phone call to all participants after 1 week, and additional contact was provided for troubleshooting on an as-needed basis. Participants received gifts that were worth US $10 for completing the 2-week data collection.
Defining Depression
The diagnosis of depression in older adults living alone was made using a decision-making model that is primarily used in the machine learning method [38]. To assess depression, SGDS-K and K-HDRS were administered at baseline and 2 weeks later. The data for a probable diagnosis of depression were based on the second measure at 2 weeks, during which the correlation between SGDS-K and K-HDRS was the highest (r=0.753; P=.01). To develop a diagnostic model, participants were classified into 2 groups: depression (SGDS-K≥7 and K-HDRS≥8) and nondepression (SGDS-K≤6 and K-HDRS≤7) groups [39][40][41]. Finally, 18 older adults were classified into the depression group and 29 were classified into the nondepression group ( Figure 2).
Data Processing
Data were collected through the Actiwatch for 2 weeks. Data preprocessing began with the downloading of raw data and conversion into usable data. In a participant's activity time series, continuous data were collected by the Actiwatch every 30 seconds. Triaxial data were calculated as an activity count. On the basis of the characteristics of signal data collected in the time series at 30-second intervals, null or abnormal values because of user or device error were checked and excluded from the analysis.
If a participant removed the device, the triaxial accelerometer would record values of 0 to indicate the duration of time for which it was not worn. Natural human behavior involves micromovements that are sensed by the accelerometer even during sleep, and therefore, periods with a continuous absence of movement indicate device removal. The raw accelerometer data were aggregated into minute-by-minute epochs using a script written in R (version 3.3.1). Data were considered as missing or being excluded from the analysis when (1) there were 0 measures in 5 min despite the indication of the wearing status or (2) more than 60% of a participant's data were missing in a day.
The processing of EMA scores was performed as follows: (1) exploring the pattern of each participant's EMA scores of depressed mood at 4 different time points each day, (2) comparing the 4 EMA scores and the grand mean scores between 2 groups, (3) examining the overall patterns between
Feature Engineering
Feature engineering is a step of parameterizing the collected raw data. It is a data mining process that quantitatively and qualitatively characterizes the statistics of the measured values for each participant through data aggregation or pattern analysis [38]. Moreover, it is important to determine additional variables to consider for the prediction model during feature engineering [38]. The variables to be included in the prediction model are the original variables included in the existing data and the new types of variables created using these primitive variables. Through the feature engineering process, the existing and new variables are identified to specify the characteristics of the participants and improve the accuracy of the prediction models [38].
After examining the daily fluctuations in the EMA, activity, and ambient light exposure, we compared both groups' 4-hour mean differences in the selected variables using a Mann-Whitney U test and time series plot. To find additional variables to consider for the prediction model, we tested diverse types of sleep parameters, such as total time in bed, total sleep time, sleep efficiency, and wake-after-sleep-onset (WASO).
Feature Selection
This process selects and filters significant variables used in the prediction model. Owing to the small sample size, unnecessary variables in the process of predicting depression may decrease the model's degree of freedom, which can negatively affect the explanatory power or the normal operation of the model [38]. Therefore, it is necessary to evaluate the significance of the processed variables through hypothesis testing and logistic regression analysis to select the most efficient combination of variables [38].
The actual selection of the variables was performed by a mean difference test of the statistics of the depression and nondepression groups and by logistic regression analysis based on various combinations of variables to examine the fitness of the models and determine the final combination of explanatory variables to be used in the prediction [38]. We selected EMA score, activity, and ambient light exposure and sleep efficiency to perform a binary logistic regression.
Training and Validation Test
We evaluated training and validation data of the predictive model to classify depression groups based on the previously selected explanatory variables through the cross-validation process and assessed the validity of the final model [38]. Data from 47 participants were divided into training data and test data, and the machine learning method was applied to calculate the predictive power of the model. Specifically, the parameter value of the prediction model was calculated from the learning data and was applied to the test data, and the prediction level was evaluated using rxLogit function in Microsoft R package.
Data Partitioning
To train our models without overfitting and test their subsequent performance, we randomly partitioned the dataset. Each time series was randomly assigned to a partition while maintaining an even class distribution of the target variable, depression group. Data were split at a 0.65:0.35 ratio for training and testing, respectively. Thus, the data were divided as follows: 30 as training data (n=20 for the nondepression group and n=10 for the depression group) and 17 as test data (n=9 for the nondepression group and n=8 for the depression group) in consideration of the sample size. Ideally, to ensure the cross-validation of the model and the validity of the evaluation, the data resplitting process should be repeated 100 times or more at random (Figure 3).
Training the Models
Machine learning techniques that were used for training and validation included the logit, decision tree, boosted trees, and random forest models. To test the validity of the prediction model, several indices were used such as accuracy, precision, recall (or sensitivity), specificity, F score, and area under receiver operating characteristic curve (AUC) in comparison between the machine learning model and a logistic regression, which are mainly used in the classification model [38]. Each indicator was calculated in the Confusion matrix. Our model showed that depression was positive and nondepression was negative.
Descriptive Statistics of the Sample
The average age of the 47 participants was 78 (SD 5.24) years; they were mainly women (44/47, 94%), had less than high school education (42/47, 89%), and had a moderate socioeconomic status (35/47, 74%). The sample characteristics of the 6 participants who were excluded from the analyses did not significantly differ from those of the 47 participants included in the final analyses.
On the basis of the traditional depression assessment tools (SGDS-K and K-HDRS), 38% (18/47) of the participants were classified into the depression group. We performed the Mann-Whitney U test to compare the 2-week grand means of the depression and nondepression groups, and significant differences were identified. The depression group reported a significantly lower score of nondepressed mood and physical activity and higher levels of exposure to white and RGB light (all P values <.01). Data were used to identify the pattern of time series of signal data and for reference for interpretation (Table 1).
Mean Differences in Different Sections of Time
To obtain a descriptive picture of the daily fluctuations in EMA, activity, and ambient light exposure and compare the 4-hour mean differences of the 2 groups, we performed the Mann-Whitney U test and time series plot. The depression group reported lower levels of EMA scores and daily activity throughout the day. However, there were higher levels of white and RGB light exposures in the depression group compared with the nondepression group.
On further examination of different time frames, both groups showed the highest levels of activity in the morning between 8:00 am and 12:00 pm. However, the highest levels of white and RGB light exposures were observed between 12:00 pm and 4:00 pm in the nondepression group and 8:00 am and 12:00 pm in the depression group. More activities were observed for the depression group after 9:00 pm compared with the nondepression group, although the difference was not significant (Figure 4). This finding suggests that 4-hour data could be considered for modeling (Table 2).
Additional Variable: Sleep Efficiency
As there were no differences in activity levels at night (see Figure 4) and no significant difference in other sleep components, sleep efficiency was chosen for modeling. Sleep efficiency is defined as the ratio of the total sleep time at night compared with the total amount of time spent in bed, which reflects night-time activity and the quality of sleep. We calculated sleep efficiency based on sleep time and WASO based on sleep component equations (Multimedia Appendix 1) [42].
In this study, activity for more than 5 consecutive minutes during the sleep time was considered as wakefulness. The total sum of all moments of wakefulness is referred to as WASO, when smoothing intermittent movements of night-time activity. A cutoff of 85% or higher was used to determine good sleep efficiency [42]. On the basis the mean differences in sleep efficiency of the 2 groups, the nondepression group had a sleep efficiency of 86% (25/29), which was higher than the 85% cutoff, whereas the depression group showed 83% (15/18), lower than the cutoff. The between group difference was different at the P=.08 level regarding sleep efficiency.
A binary logistic regression was performed to explore the best model fit using various combinations of variables (EMA score, activity, white and RGB light exposure, and sleep efficiency) depending on depression groups (nondepression group=0 and depression group=1). We compared diverse models using different means based on EMA and activity data. For example, we compared daily, morning-evening means, and all 4 scores as well as 4 different times of light exposure and other sleep components. Finally, we reached the conclusion that the variables of the final model were the best when using daily mean EMA score, daily mean activity level, white and RGB light at 4:00 pm to 8:00 pm exposure, and daily sleep efficiency. To consider collinearity between white and RGB light change and the small sample size, a computing score was calculated integrating means of white and RGB light between 4:00 pm and 8:00 pm. Table 3 shows goodness of fit depending on each variable. Our findings suggest that lower activity, higher EMA depressed mood, and exposure to white and RGB were associated with a higher likelihood of being classified as a depression group. Furthermore, this model showed a good model fit (accuracy: 0.705; precision: 0.770; specificity: 0.859; and AUC: 0.754) except 1 false-positive case. Table 4 shows the validation and test results for the prediction model based on machine learning. All variables included in the logistic regression were utilized (EMA score, activity, white and RGB light exposure, and sleep efficiency) to classify depression groups (nondepression group=0 and depression group=1). For cross-validation, bootstrapping with data partitioning and training models was performed 100 times.
Comparison of Four Machine Learning Methods
Among the 4 methods, the logistic regression model (accuracy: 0.910; precision: 0.929; and specificity: 0.940) had the best fit compared with the boosted trees and random forest models. Decision tree suffers from possible overestimated model fit because of sample size. boosted trees model showed relatively better fit compared with decision tree because of the modification process to correct errors through pruning. Random forest seemed less suitable for this study because of the small sample size and the limited number of tested variables [38].
Principal Findings
The primary purpose of this study was to identify factors associated with geriatric depression in older adults living alone. We focused on developing a prediction model to classify the depression groups (probable depression vs nondepression) among older adults living alone. Along with the conventional instruments for depression screening, EMA of daily mood, actigraphy data of activity, and light exposure were utilized. Comparing diverse combinations of the selected variables, daily mean EMA score, daily mean activity, daily sleep efficiency, and exposure to white and RGB light between 4:00 pm and 8:00 pm for 2 weeks were selected for modeling. The cross-validation process was used to build a prediction model. Logit model showed compatible evaluation metrics compared with the traditional binary logistic regression. The use of both EMA and sensor data seems promising to develop a machine learning model for better identification of a probable depression among older adults [25].
Comparison With Previous Studies
The depression group reported higher levels of daily depressed mood than did the nondepression group. In a small-scale study [17], the major depressive disorder (MDD) group reported higher levels of diurnal symptom patterns of negative affect with great variability than did the control group. Our study found that even the daily average of depressed mood was significantly related to the classification of depression groups determined by conventional screening [2,43]. This could help overcome a clinical challenge in diagnosing a psychiatric disorder during the first medical examination or interview by unfamiliar clinicians [2]. It can also be difficult for a person with a serious mental illness to complete the self-reporting questionnaire and for clinicians to make a quick diagnosis with limited time [6,44]. The findings of this study suggest that individuals with these difficulties may complete self-reports of depressed mood daily in their home environment. A simple question, although repetitive, might be more helpful to classify the probable depression groups as opposed to the application of complex and multiple-item questionnaires.
A significantly low level of daytime activity was observed in the depression group, similar to the findings of previous research [45]. A study [15] using actigraphy reported that patients with depression displayed less daytime motor activity than did individuals without depression. Related to the EMA of depression, high levels of momentary depressed mood had a lagged effect of prolonging the current status of being at home. Those who experience an increase in the average depressed mood the previous day tend to stay at home the following day [46]. Previous studies have reported that depressed people have decreased levels of daytime activity compared with healthy controls [2,15]. In adults aged over 60 years, depression is associated with lower levels of physical activity in global measure, compared with nondepressed individuals [47], similar to our study findings. In a meta-analysis of adults with MDD [48], more time was spent in sedentary behavior and less time in vigorous physical activities. Considering the relationship between physical activity and depression, promoting daytime physical activity could prevent the risk of depression in older adults [49].
Furthermore, sleep efficiency may be a potential factor to aid in classifying depression groups. Existing studies have confirmed the association between disrupted sleep and depression in adults [50][51][52] because sleep disturbance is one of the common misinterpreted symptoms of geriatric depression [6]. A study of older adults living alone found that depressed participants reported sleep-related issues [53]. However, our study did not clearly indicate that sleep efficiency was associated with the depression group because the sample size had low power to detect the significance (P<.08). Comparing with the findings of previous studies [12,54], this discrepancy should be interpreted with caution because the previous studies measured sleep components once through self-report, whereas our study assessed daily sleep efficiency for 2 weeks through repeated measures using actigraphy. Older adults living alone may have more difficulty in the early detection of sleep disturbances than do those living with others because of the absence of a bedroom partner [55]. Thus, further research using both subjective and objective measures is needed to confirm the predictive value of sleep efficiency with the larger sample.
Contrary to our expectation, higher levels of ambient light exposure were observed in the depression group, compared with the nondepression group. We expected that lower levels of ambient light exposure would be observed in the depression group compared with the nondepression group that had a less sedentary lifestyle and engaged in more outdoor activities [48]. There are limited explanations for this finding contrary to our hypothesis. A small-scale observational study [56] explored the subjective perception of lighting in depressed patients' environment. When they were asked whether the lights in the surroundings seemed dimmer than usual, depressed patients answered that they perceived the environment to be dimmer than usual. Moreover, there was a significant association between depression severity and perception of dimness. Patients with severe (65%), moderate (21%), and mild (14%) depression responded that their ambient environment appeared dimmer than usual. To compensate for the perceived dim environment, they may turn on the lights at home most of the time or stay in places with more light. In addition, a depressed individual may turn on the lights to read books or do other things to compensate their awakening time at night. This behavior may affect heart rate variability, which is associated with mood regulation through ambient biofeedback [57]. Many relaxation methods or interventions use dimmed ambient lighting to soothe the patient [58], but individuals with depression may not prefer a dimmer environment that may increase the depressed mood.
Implications for Geriatric Depression Research
Our study findings support 3 major inferences: (1) diversifying data collection methods to build an accurate model of depression prediction, (2) emphasizing the need to assess depressed mood at different time points in a day, and (3) monitoring multiple symptoms continuously. First, inactivity and poor sleep have been known to be symptoms of geriatric depression; however, the assessment of these symptoms is limited to using subjective reports of inactivity, sedentary lifestyle, or night-time sleep. Our study emphasizes the need for collecting both subjective and objective data related to mood disorders and examining intraday change patterns. For example, it is difficult to apply the Pittsburgh Sleep Quality Index global score to depressed older adults. Thus, clinicians should evaluate sleep efficiency through actigraphy as an objective measure, which may prevent the misinterpretation of a suspected depression [6]. Second, time-specific assessment is required to capture the significant features of specific data. Our study findings confirmed group differences in daytime activity, light exposure in the late afternoon, and night-time sleep. Thus, we suggest the need to collect data throughout the day and identify features closely related to depression [12,45]. Third, we suggest using sensor data, such as Actiwatch or activity tracking, for monitoring purpose. For example, strategies to improve physical activity should be included in the treatment for geriatric depression [48]. Monitoring daily activity of people with mood disorders using Actiwatch may help diagnose depression depending on the activity-based interventions [15]. In addition, our study shows some potential to use environmental data, such as ambient light, in identifying depression in the community setting. Sensor devices have a great potential to continuously capture diverse mental health-related information from the environment or context [24,25] in mental health.
Implications of Electronic Health for Older Adults
EMA could be used as a diagnostic tool for depression in older adults. It has been used to assess multiple mental health indicators related to depressed mood [9]. In particular, older adults have less accurate retrospective memory, and therefore, the use of EMA for older adults can increase the accuracy of the assessment [10,59]. EMA has also been shown to have acceptability and feasibility for older adults, even those with cognitive and emotional difficulties [60]. In particular, EMA has the advantage of continuously reporting subjective symptoms according to the individual's life patterns as digital phenotyping [44,61]. Thus, it is possible to collect personalized data, monitor the individual pattern, and examine fluctuations in the depressed mood within and between a day while reducing retrospective bias [10,11].
However, a limitation of this method is that multiple and repeated reports in a day are associated with a high dropout rate [11]. When applying EMA to older adults, sufficient device training and the practice of technical details could maximize the usability and appropriateness of EMA for older adults [60]. In our study, a dropout rate of 9% (5/53) was observed; thus, preventing dropout is important to ensure that older adults with depression complete multiple reports during the day after careful training and educating about the significance of EMA [11,16,62]. Selection of valid questions to measure momentary depressed mood and standardizing the EMA protocols are also important. Therefore, geropsychiatric clinicians should prepare to select the proper device, train older adults, and prevent dropout [16].
The paper-based diary approach has been traditionally used for older adults [6,63]; however, sometimes they do not accurately report their depressed mood because of response or retrospective bias or the social stigma associated with mental health illnesses [6,10,44,59]. Actigraphy is a noninvasive technique that provides simple and scientifically accurate data on a daily basis and can be used to improve health-related behaviors [64], thus suggesting its suitability for older adults. The 9% (5/53) dropout rate in our study indicates the acceptability and applicability of the Actiwatch for older adults, similar to a previous finding [11], especially as we identified a variety of subjective and objective factors that reflected an individual's real-world environment, such as depressed mood, activity, sleep, and different light using a ubiquitous device.
Actigraphy also measures fluctuating subjective mood on a daily basis in real time and a natural environment through continuous recording [9] along with physical and environmental data [11]. An integrated approach to passive, objective, and continuous measurement has been used in psychiatry practice and research [44]. Moreover, passive actigraphy data and active EMA responses can be useful for understanding the characteristics of depression, offering insight into biological and environmental mechanisms underlying the depression, and developing individualized interventions for older adults [65]. Considering the rapid development of technologies, various types of sensor data may be used in geropsychiatric care for screening, monitoring, and diagnosing mental health [66][67][68]. Therefore, we expect that new measures, such as using sensor data, will help diagnose depression quickly and improve mental health problems.
Limitations
Our study has several limitations. First, we used 2 screening instruments that relied on subjective reports to classify individuals in the 2 depression groups because these instruments had been widely used in community settings. However, a medical diagnosis based on Diagnostic and Statistical Manual of Mental Disorders-5 may be needed to define the clinically diagnosed depression group for data processing and representation [6]. In addition, further study may replicate our model including nondepressed older adults to establish screening models in general population. Although the EMA provides mood reports of older adults at the 4 time points every day for 2 weeks, our predictive model used daily grand mean that was relatively less frequent for EMA data collection. Replication studies that utilize data variability within a day should be tested over a longer time frame and use a more convenient mode to complete EMA to reduce subjective burden. Generalization of the EMA data and levels of activity is limited because every individual has different levels of normal with reference to a depressed mood or inactivity [6,11]. Thus, EMA with a real-life tagging system will be helpful to examine to what extent the data fluctuate from the average level for each individual. In addition, studies with larger samples are needed to ensure the generalizability of our findings considering the possible use of machine learning, which is an innovative method of feature extraction from data [21]. Specifically, the limited generalizability is expected for depressed men in older populations because the majority of older adults were women in Korea. Thus, it is necessary to oversample the groups of men for the next study on this topic [4].
Conclusions
This study provides evidences to support the feasibility of machine learning in classifying depression groups based on EMA and actigraphy data. Specifically, our machine learning approach is applicable for females and elderlies with depressive mood when living alone in the community setting. The study findings provide 2 major inferences: (1) diversifying data collection methods to build an accurate model of prediction and (2) emphasizing the need for assessments at different time points in a day to obtain diverse data. Future researchers and clinicians should consider EMA and actigraphy to obtain data on daily mood, levels of activity and ambient light exposure, and sleep components in depressed older adults living alone. | 8,174.6 | 2019-10-01T00:00:00.000 | [
"Medicine",
"Computer Science",
"Psychology"
] |
2D quantum gravity at three loops: a counterterm investigation
We analyse the divergences of the three-loop partition function at fixed area in 2D quantum gravity. Considering the Liouville action in the Kahler formalism, we extract the coefficient of the leading divergence in $\sim A\Lambda^2 (\ln A\Lambda^2)^2$. This coefficient is non-vanishing. We discuss the counterterms one can and must add and compute their precise contribution to the partition function. This allows us to conclude that every local and non-local divergence in the partition function can be balanced by local counterterms, with the only exception of the maximally non-local divergence $(\ln A\Lambda^2)^3$. Yet, this latter is computed and does cancel between the different three-loop diagrams. Thus, requiring locality of the counterterms is enough to renormalize the partition function. Finally, the structure of the new counterterms strongly suggests that they can be understood as a renormalization of the measure action.
Introduction
The coupling of conformal matter and two-dimensional quantum gravity is a subject which has been deeply studied, with a broad variety of approaches, from the discrete -triangulations [1] and matrix models [2] [3] [4] [5] -to the continuum approaches [6] [7] [8]. In the continuum approach, most of the computations have been done within the conformal gauge. When the conformally coupled matter is integrated out, one ends up with the Liouville action as an effective gravity action. An interesting object to characterize is the partition function at fixed area Z[A], where A is the area of a Riemann surface of genus h. In the conformal gauge, g = e 2σ g 0 , where σ is the conformal factor and g 0 a background metric, Z[A] may be formally written as where κ 2 = 26−c 3 . One of the main difficulties when computing the partition function lies in the complicated non-flat measure Dσ for the conformal factor.
KPZ were the first to characterize this partition function at fixed area [6] for a two-dimensional quantum gravity. They derived the scaling law and a formula for the string susceptibility γ str , in the light-cone gauge for genus zero. Working in the conformal gauge, [7] and [8] managed to generalize the KPZ formula for surfaces of arbitrary genus, making several simplifying assumptions and using consistency conditions: While alternative derivations such as [9] and [10] for c ≤ 1 and h = 0 have more recently been obtained, no obvious way to circumvent the so-called "c = 1 barrier", stating that this formula turns complex for 1 < c < 25, has yet been found. The recent development of efficient multi-loop regularization methods on curved space-times [11] opened the way for a precise and well-defined perturbative computation of this fixed-area partition function in the Kähler formalism where the conformal factor is traded for the (Laplacian of the) Kähler potential as the basic quantum field. In [12] the string susceptibility was computed in this framework up to one loop for surfaces of arbitrary genus using a somewhat more general quantum gravity action including the Liouville and Mabuchi actions; the latter corresponds to possible couplings to non-conformal matter. Of course, for conformal matter only, the one-loop KPZ result was reproduced. This was to be expected since the non-trivial nature of the quantum gravity integration measure only shows up at two and higher loops.
In [13] this computation was then extended to two loops with the Liouville action only. The regularized fixed-area partition function depends on the cut-off Λ and the area A through divergent terms of the form AΛ 2 , ln AΛ 2 , ln AΛ 2 2 and AΛ 2 ln AΛ 2 . While the first term only contributes to the divergent cosmological constant (which can be adjusted by a corresponding local counterterm), and the coefficient of the second term determines γ str , the third and fourth terms are unwanted, non-local divergences. Quite non-trivially, all contributions to the third term added up to zero ! However, this was not the case for the AΛ 2 ln AΛ 2 divergences which remained. As carefully argued in [13] one can and must introduce local counterterms other than just the cosmological constant. Such local counterterms then also contribute, via one-loop diagrams, to the two-loop partition function. In particular, they can cancel the AΛ 2 ln AΛ 2 divergences, but they could not cancel any ln AΛ 2 2 divergences. Happily, the latter cancelled among themselves without any need of counterterms. The precise coefficients of the counterterms were determined up to regulator-independent finite constants by requiring that the two-point function of the Kähler fields, or equivalently of e 2σ e 2σ , be finite and regulator-independent. Their contributions to the partition function was exactly the one required to make it finite and regulator-independent. Yet, two finite "renormalization" constants -on which the two-loop contribution to γ str depends -remained undetermined. By a locality argument, one of these renormalization constants was fixed, precisely to the value consistent with the KPZ value of γ str . However, the other renormalization constant had no particular reason to be fixed to the KPZ value, thus allowing a one-parameter family of quantization schemes which could eventually open the possibility to go beyond the c = 1 barrier.
The presence of this free parameter is intriguing and a natural question is whether the structure of the counterterm action introduced at two-loops is enough to cancel also the divergences at three (and higher) loops or whether new counterterms, with additional undetermined finite renormalization constants are required. This is the motivation for the present paper. In section 2, the Liouville action, measure and (two-loop) counterterm actions are expanded to the order relevant for the computation of the partition function at three loops. In particular, this leads to new vertices. Then the three-loop vacuum diagrams are enumerated. As could be expected, there is quite a large number of these diagrams. In section 3, the allowed divergences are investigated in some detail and the leading divergence ∼ AΛ 2 ln AΛ 2 2 is fully computed with the result leading div where the R i [ϕ] are four different regulator dependent constants. 1 Since this divergence does not cancel, new counterterms are required. Section 4 is dedicated to the discussion of such new counterterms that contribute via two-loop diagrams to the three-loop partition function, and in particular to the freedom to adjust them to cancel the divergences in the partition function. Of course, to really determine the coefficients of these counterterms one needs to compute the three (and four)-point functions at one loop and the two-point function at two loops and to require them to be finite and regulator independent. (Actually, just as in [13], this would fix the diverging, as well as the finite regulator-dependent parts of the counterterm coefficients, but not certain finite "renormalization constants".) While this computation is beyond the scope of the present paper, it is already interesting to check if every divergence can be cancelled through the introduction of local counterterms. We call a counterterm local if it is a local expression in the Kähler field and if its coefficient is local. In particular, a counterterm coefficient involving ln AΛ 2 is not local. However, coefficients proportional to 1 A are allowed in the first place, since such terms already naturally appear through the measure action. (It is interesting though to require their absence from the combined counterterm and measure action, a condition that we will refer to as the "strong locality condition".) At two loops, such local counterterms could cancel all the two-loop divergences but ln AΛ 2 2 . Thus, for consistency, this divergence had to cancel by itself, which was the case, as already mentioned above. The same situation repeats itself at three loops where the counterterms generate exactly the necessary terms to cancel all divergences of the three-loop partition function, except for a ln AΛ 2 3 divergence which, if present, cannot be cancelled by a local counterterm. This divergence is present in individual three-loop diagrams but we show that the different contributions cancel among themselves. This is an encouraging result meaning that all the non-local divergences appearing through the computation of the partition function may be offset by local counterterms. We end with a discussion of how many counterterm parameters one expects to be fixed and how many free finite "renormalization constants" remain after imposing cancellation of the divergences, of any regulator dependence and requiring the strong locality condition. 1 In the general spectral cut-off regularization scheme used, one introduces quite arbitrary regularization functions ϕ and then Ri[ϕ] = ∞ 0 dα1 . . . dαi ϕ(α1) . . . ϕ(αi) 1 α 1 +...+α i .
The Kähler formalism
In two dimensions, any metric g on a compact Riemann surface may be written in the conformal gauge in terms of a reference metric g 0 and the conformal factor σ. Moreover, in two dimensions all the metrics are Kähler's, so that one can rewrite the metric in terms of the Kähler potential φ (and the background metric g 0 ): where A and A 0 are the areas of the metrics g and g 0 respectively and ∆ 0 denotes the Laplacian for the reference metric. Throughout this paper, we will consider the Liouville action, The classical saddle points of this action are the constant curvature metrics g * of arbitrary area A and genus h. Thus, choosing the background metric g 0 to be a constant curvature metric of given area A 0 , the Liouville action may be trivially rewritten in terms of σ and the rescaled g * , ∆ * , R * as The field considered in the following will not be exactly the Kähler potential but rather which appears naturally when writing both the Liouville and the measure actions in the Kähler formalism. The explicit introduction of the factor containing κ, where allows the loop-counting parameter 1 κ 2 to appear clearly in the expansion of the action performed lateron. Note that the relation (2.1) defines A andφ uniquely for given σ.
In quantum gravity one needs to integrate over the space of metrics modulo diffeomorphisms. As emphasized in [12,13], the integration measure Dσ over the conformal factor σ is not the measure of a free field. Using the parametrization (2.1) induces a non-trivial measure [12,13]: where D * φ is the standard free field integration measure in the background metric g * deduced from the metric δφ 2 * = d 2 x √ g * δφ 2 . The notation Det means that the zero-modes are not taken into account when computing the determinant, which is consistent with the fact thatφ has no zero-mode. The measure D * φ can be expressed in the traditional way by expandingφ in eigenmodes of the Laplace operator ∆ * . Choosing 0 = d * 0 < d * 1 ≤ d * 2 ≤ · · · to be the eigenvalues of ∆ * and ψ r its eigenfunctions, that are chosen to be real, theñ 8) and the measure is defined as The study made in [13] showed that the insertion of a counterterm action was required for the finiteness of the two-point function. Therefore, the quantum gravity partition function at fixed area one considers as a starting point for the present work is The measure action is thus defined as (2.11)
Three-loop expansions of the actions
To compute the partition function at three loops, one has to expand the Liouville, the measure and the two-loop counterterm actions around the classical saddle points up to order κ −4 . The classical solutions σ cl are simply the constants e 2σ cl = A A 0 . Hence, from (2.1), σ cl being a constant, it disappears from the Laplacian term in (2.3). Moreover, the curvature term being linear, one has (2.13) Expanding the logarithm straightforwardly leads to the expansion of the Liouville action as relevant for the 3-loop computation: (2.14) The first term in the first line gives the classical contribution while the second term of the first line yields the one-loop determinant studied in [12]. This latter term also provides a standard propagator for the present three-loop investigation, namelyG(x, y) = x| (∆ * − R * ) −1 |y , where the tilde on G and the prime indicate that the zero-mode is excluded. The second and third lines provide the vertices relevant for the two-loop vacuum diagrams computed in [13]. The last two lines yield the quintic and sextic vertices which appears only at three (or higher)-loop computations. Note that the propagator does not carry any factor of κ, while the vertices involve various powers of 1 κ in such a way that an L-loop diagram is acompanied by a factor 1 κ 2(L−1) . In particular, in (2.14) we have displayed all the Liouville vertices contributing to vacuum diagrams with up to three loops. They can be grouped as follows. Two quintic vertices and three sextic vertices for the "pure three-loop" contribution. The bold parts of the vertices encode the ∆ * acting on one or several propagators. For example, for the two quintic vertices, the ∆ * − 4 5 R * in the first vertex acts on the single propagator connected to the bold line, while in the second one ∆ * may act either on the product of the two propagators connected to the bold part of the vertex on the right or on the three other ones. The vertices already used to compute the two-loop vacuum diagrams in [13] are one cubic and two quartic vertices: As it was already the case at two loops, the non-trivial measure action also contributes to the vacuum diagrams. To determine the expansion of the measure action (2.11) up to three loops, one needs to evaluate the trace of an operator O, which was done in [13]: is a formal writing which has to be regularized in a consistent way. After regularization, and since the considered metrics are of constant curvature, this quantity becomes independent of x. Sinceφ has no zero-mode, the first term in the action drops out. This action provides a quadratic, a cubic and a quartic vertex: As already mentioned in section 1, counterterms are required for the two-point function to be finite at one loop, as well as for the partition function to be finite at two loops [13]. This two-loop counterterm action is thus to be considered also for the three-loop computation: where [13] c φ (Λ, α i ) = 1 2π The c φ , c R and c m are regulator independent constants, while the other parts of these counterterms are to be understood as c . Note that they are local, as suitable for counterterms, except for c m because of the term 1 A . Such a non-local term, however, naturally appears in the measure action (2.31), making this counterterm measure-like and hence acceptable. Moreover, imposing the "strong locality condition", i.e. locality on the joint measure and counterterm action up to two loops fixed c m = −1. This value of the counterterm will be used in the following. The counterterm action provides a quadratic vertex: Note that all these vertices are normalized without including any symmetry factors so that one has to count all possible contractions when evaluating the diagrams.
Diagrams
We now enumerate all "three-loop" vacuum diagrams. More precisely, we give all diagrams contributing at order 1 κ 4 . This involves genuine three-loop diagrams made from the Liouville vertices only, as well as two-loop and one-loop diagrams involving also the vertices from the measure or counterterm action. Combining all these vertices gives twenty-nine types of vacuum diagrams, each of them receiving contributions from subdiagrams. Fifteen of these diagrams come from pure Liouville contributions, nine involve the measure and six the two-loop counterterms . The decomposition of the diagrams is detailed hereafter. The sextic vertices give one diagram, the "flower diagram", which may be written as the sum of five subdiagrams: The weight factors in front of the different subdiagrams take into account the multiplicity of the diagram, including the symmetry factors and the contractions. Combining the quintic and cubic vertices yields two types of diagram: composed of respectively eight and ten subdiagrams. Using two quartic vertices gives two diagrams: made of respectively five and eleven subdiagrams. Five types of diagrams are built by a quartic vertex and two cubic vertices: These diagrams consist of thirteen subdiagrams each for the diagrams of the upper line, and of seventeen, ten and eighteen subdiagrams for the bottom line, from left to right. Finally, the last five pure Liouville diagrams come from using four cubic vertices: composed of eleven, thirteen, six, six and eighteen subdiagrams, from left to right and from top to bottom. The measure and counterterm vertices contribute to fourteen diagrams. They may be classified according to the corresponding "two-loop" terminology. In [13] there were four types of diagram: the "figure-eight", the "setting sun", the "glasses" and the "measure" diagrams. At the three-loop order, there are three "figure-eight-like" diagrams, three "setting sun-like" diagrams, five "glasses-like" diagrams and finally three "measure-like" diagrams.
From left to right, both the "figure-eight" and "setting sun" diagrams have respectively one, four and five contributions. Concerning the "glasses" diagrams: the upper diagram has two contributions, and, from left to right, the diagrams in the second line have respectively four and three contributions, and the diagrams involving the counterterms six and four respectively. Finally, the "measure" diagram on the right gets two contributions whereas both diagrams involving the measure vertex have no other subdiagram.
Regularization
The sums appearing in the diagrams, such as , are formal writings of expressions which need to be regularized. The regularization scheme used in the present paper is the spectral cut-off approach developed in [11]. This regularization scheme was used in the two-loop study of the partition function and details can be found in [13]. The sums are regularized by inserting a rather arbitrary 2 regulator function ϕ and a cut-off λ r being the eigenvalues of the operator D * = ∆ * − R * appearing in the propagator. The tilde indicates that the zero-mode is excluded. The regularized quantities, and in particular the regularized Green's 2 The function ϕ must obey the obvious normalization condition ∞ 0 dαϕ(α) = 1, as well as certain regularity requirements at 0 and ∞, but is otherwise arbitrary. function, are related to the heat kernel or "hatted heat kernel" defined in [11]: These quantities satisfy the following relations: Furthermore, these sums are convergent for t > 0, even for x → y. For large Λ, t = α Λ 2 is small and the well-known small t-expansion of the heat kernel can be used, see [11]: Due to the exponential term e −l 2 /4t , where l is the geodesic distance between x and y, this small t-expansion is also a short distance expansion and normal coordinates around x or y can be used.
These expansions lead to the following expressions for the heat kernel K and "hatted heat kernel" K at coinciding points with the zero-modes excluded: Note that R * is a constant curvature and, hence, K(t, x, x) does not depend on x. Furthermore, µ is an arbitrary scale andG ζ (x) is the "Green's function at coinciding points", obtained through a specific ζ-function regularization scheme. It coincides, up to an additive constant, with the result obtained by subtracting the logarithmic short-distance singularity ofG(x, y) and by taking y → x. Its area dependence is given byG such that K may be rewritten as Despite the appearance and as explained in [13], the K do not depend on the arbitrary µ and A 0 but only on AΛ 2 , as well as on α and on various dimensionless moduli characterizing the geometry of the Riemann surface and coded inG ζ . Furthermore, since the zero-modes are excluded from the sums, the following integrals vanish: As an example, using K(t, x, x) to regularize r>0 ψ r (x) 2 , the measure action becomes This structure is very similar to those of the counterterm action and in particular to those of the c m term (2.21).
In the sequel of this paper, when computing the regularized diagrams, all propagators are replaced as To simplify the notation, we will not write the 3 On the divergences
Expected divergence structure
All vacuum diagrams are dimensionless and can depend on A and Λ only through the dimensionless combination AΛ 2 . They contribute various divergences to the partition function. Standard power counting shows that any loop-diagram has a superficial degree of divergence equal to 2. This means that divergences such as AΛ 2 ln AΛ 2 # are allowed. To have a more precise idea of the leading divergence, consider a diagram with I internal lines and V vertices. Each internal line, that is to say each regularized propagator K, gives a logarithmic divergence, according to (2.29). Besides, each vertex, carrying a Laplacian, transforms such a propagator into the corresponding heat kernel K thanks to (2.25), leading to a quadratic divergence (2.27). Each vertex also implies an integration over the manifold. Due to the term e −l 2 /4t in the heat kernel (2.26), every integration contributes a factor t i ∼ 1 Λ 2 at most. (The subtraction of the zero-mode terms ∼ e R * t A does not change the final conclusion.) For the last integration, however, all quantities to be integrated only depend on one point, hence no Gaussian integration can be performed and one just gets a factor of A. Putting everything together, the leading singularity of this L-loop vacuum diagram is since I −V = L−1 for every diagram. Therefore, the leading divergence at three loops is AΛ 2 ln AΛ 2 2 . Note that the vertices not only contain a Laplacian but also terms ∼ R * ∼ 1 A . Picking the contribution coming from V − V Laplacians and V terms ∼ R * leads to the divergence For V > 1 this is vanishing. This means that the subleading divergence with the largest power of logarithms is ln AΛ 2 L . Consequently, the expected divergences in ln Note that the term ∼ ln AΛ 2 , although divergent, has a physical meaning. Indeed, once all other divergences cancelled by appropriate counterterms, one has ln showing that d 6 is the three-loop plus counterterm, order 1 κ 4 , contribution to γ str .
Cancellation of the Λ 4 divergence
Moreover, contrary to the preceeding, somewhat naive power counting argument, one observes "unexpected" Λ 4 divergences appearing in the diagrams indicated in Tab. 1. They appear through the following integrals: where i, j, k, l, m and n are different. From (2.27) one gets the leading divergences with J = d 2 x g * (x)d 2 y g * (y) K(t m , x, y) K(t n , x, y). Thus these Λ 4 divergences come with three different structures in the α i . We display all these unwanted divergences in Tab
A simple computation: the flower diagram
The true leading divergence contributing to the partition function at three loops is in AΛ 2 ln AΛ 2 2 . As already emphasised, the main goal of this work is to investigate this leading divergence, check that it does not "miraculously" cancel between the diagrams and determine the structure of the required counterterms.
Out of the twenty-nine vacuum diagrams displayed in section 2.2, only the fourteen diagrams shown in Fig. 1 contribute to the leading divergence in AΛ 2 ln AΛ 2 2 . Note that all the diagrams with a single propagator between two vertices (i.e. one-particle reducible) do not contribute, as it was already the case in [13]. This is because there is no zero-mode and a single propagator connecting two parts of a vacuum diagram should carry only the zero-mode. 3 Consider again the flower diagram made from the sextic vertices, whose decomposition in subdiagrams was given in the previous section. Since only one vertex is involved, no integration has to be done to extract the divergences and it is the second simplest diagram to compute. (The simplest is the figure-eight diagram coming from the quartic measure vertex.) The first subdiagram may be written in our regularization as: where (2.25) was used. The second subdiagram is slightly more complicated, because of the Laplacian acting on several propagators: The Laplacian term gives Inserting (3.9) into (3.8), and integrating the last term by parts leads to: Similarly, the third and fifth subdiagrams give x, x) , while one reads directly the fourth subdiagram The overall contribution from the flower diagram is thus I = 15 I + 9 I + 6 I + 3 I + 12 I (3.13) (Note that the K K∆ K terms have cancelled.) The leading divergence of the second term is in ln AΛ 2 3 . These divergences will be discussed in a seperate subsection where we show that all ln AΛ 2 3 divergences cancel between the different diagrams. The first term on the right-hand-side of (3.13) contributes to the leading divergence, giving − 30
Leading divergence of the partition function per diagram
We have just seen that the contribution of the flower diagram to the leading divergence of the partition function is 14) The only other diagram involving only one vertex is one of the measure "figure-eight" diagrams. It contributes There are six diagrams built from two vertices that contribute to the leading singularity: , , , , and . The integrals to perform are similar to those done in [13] to compute the two-loop vacuum diagrams. It is rather straightforward to obtain: being a number once the regularization function ϕ(α) is chosen.
Note that that the results for the diagrams involving the counterterm vertex, I in (3.16) and I in (3.17) below, does not depend on the free (two-loop) renormalization constants c φ and c R since the latter do not contribute to the leading divergence. In the next section we will carefully study the full contributions of the counterterms to all divergences and then, of course, the result will depend on c φ and c R .
When considering three vertices or more, computations become more technical. While for and it is easy to get: with and already, one stumbles over the same kind of technical difficulties as those faced when computing the one-loop two-point Green's function at coinciding points in [13]. One of the integrals encountered in is for instance Trouble comes from the fact that the three Ks in the integral force the three variables x, y and z to be all close to each other. For instance, integrating over y through the term K(t 4 , y, z) requires to Taylor expand When x, y and z are close, such that l 2 (x, y) ∼ l 2 (x, z) ∼ 1 Λ 2 , all terms in the expansion give contributions of the same order. One gets: d 2 xd 2 yd 2 z g * (x)g * (y)g * (z) K(t 1 , x, z) K(t 2 , x, z) K(t 3 , y, z) K(t 4 , y, z) K(t 5 , x, y) As just explained, the terms + . . . contribute at the same order and cannot be dropped. Keeping only the terms that contribute to the leading singularity AΛ 2 ln AΛ 2 2 , we have so that one can easily resum all the terms. Therefore, the previous integral contributes to the leading divergence by: Of course, this is valid for α 4 α 2 +α 5 < 1. However, the initial expression was symmetric under exchange of α 2 and α 4 (upon also exchanging α 1 and α 3 ). Hence, if α 4 α 2 +α 5 > 1 one simply exchanges the roles of α 2 and α 4 in the derivation (since now α 2 α 4 +α 5 < 1) and one gets the same result. Considering carefully each integral, finally one gets 4 for and : One encounters similar problems for the diagrams with four vertices and . Taylor expanding leads to series of divergent contributions. In addition to the series (3.22), one obtains More details on the integrals generating such series are given in the appendix. Thus, one gets: We see that the leading divergence in AΛ 2 ln AΛ 2 2 is not vanishing and new counterterms will be required. They should be determined by ensuring that the one-loop three-point and four-point functions, as well as the two-loop two-point function be all finite. The computation of these one-loop n-point functions is beyond the scope of this paper, but it is nevertheless already interesting to look at the possible counterterms one could consider and to calculate their contributions to the various divergences of the partition function. This will be done in the next section.
Cancellation of the (ln AΛ 2 ) 3 divergence
Below, when we compute the counterterm contributions to the three-loop partition function, we will see that local counterterms with local coefficients (i.e. not involving explicitly ln AΛ 2 ) cannot give contributions to the ln AΛ 2 3 divergence. Now, it is easy to see that such ln AΛ 2 3 divergences are present in individual three-loop diagrams. In particular, this was the case for the flower diagram, see (3.13) and the remarks that followed. The only way to ensure finiteness of the partition function then is that these individual divergences cancel between the three-loop vacuum diagrams. Among the twentynine diagrams, eight contribute to the ln AΛ 2 3 divergence. Their contributions are not too difficult to compute. We display the result in Tab. 2. Indeed, when summed, they vanish! This is similar to what happened for the ln AΛ 2 2 divergence in the two-loop partition function, and one expects the ln AΛ 2 L divergence to cancel in the L-loop partition function.
Counterterms
There are several types of counterterms one may add in the three-loop computation. Cubic or quartic counterterms lead to diagrams similar to the ones generated by the cubic and quartic measure vertices. One may also expand the coefficients of the quadratic counterterms already present in the two-loop computation of [13] and consider their κ −4 contributions. Of course, only local counterterms will be introduced. This means, on the one hand, that the counterterms are polynomial in the Kähler field φ with only finitely many derivatives acting on them, and, on the other hand, that the coefficients of these counterterms are local expressions. In particular, a counterterm coefficient involving the area e.g. through ln AΛ 2 is non-local. However, following [13], we do allow for counterterm coefficients ∼ 1 A since they are already present in the measure action due to the absence of the zero-mode. Remarkably, imposing a "strong locality condition", i.e. absence of these 1 A terms, on the joint measure and counterterm action of the two-loop computation [13] fixed one of the two finite renormalization constants (namely c m ) precisely to the KPZ value. In this section, we will write out the counterterms contributing to the partition function at the same order as the three-loop diagrams, i.e. at order 1 κ 4 and give their diverging contributions to ln Z[A]. Since the divergences in AΛ 2 can always be absorbed in the cosmological constant they will be ignored in the following. Similarly, we will not spell out the finite contributions of the counterterms.
Cubic conterterms
The new counterterms one may introduce are cubic and quartic ones. The allowed cubic counterterm action is where By dimensional analysis, the coefficients f (1) i and f (2) i are dimensionless "numbers". As already emphasized in the two-loop analysis of [13] they may depend on the regularization through the α i and are then to be integrated with the given ϕ(α i ), resulting in a number. But they do not depend on the cut-off Λ 2 . The action (4.1) contributes via the two two-loop diagrams and at the same order in κ −4 as the three-loop diagrams studied above.
We first show that the glasses diagram gives no relevant contribution. It may be written as a sum of four subdiagrams. One gets: +3R * (f m + f R R * ) K(t 3 , x, y) . Integrating and taking into account the absence of zero-modes leads to: Using the scaling relation (2.4) and (2.28), one may rewrite this as The first term is obviously independent of the area A and thus of no interest here. The only A dependence in the second line comes from the However, the parenthesis being A independent, this term can be included in the cosmological constant and is not significant.
The last term is slightly more subtle to handle because of the remaining K 0 ( A 0 A t 3 , x, y) term. For the non divergent counterterms f (1) R and f (2) m A , the short-distance logarithmic singularity in K 0 ( A 0 A t 3 , x, y) being integrable, one may take the limit t 3 → ∞. Doing so leads to an A independent quantity. Finally, doing a finite expansion in x − y in the integral yields either A-independent or 1 Λ 2 -terms or terms that vanish exponentially as Λ → ∞. Thus, the remaining quadratically divergent counterterm m only leads to terms finite or to be included in the cosmological constant. None of these terms is of any interest here. This glasses diagram thus gives no contribution to the pertinent divergences of the partition function (3.3). Note that diagrams with a single propagator joining two or three loops were already discarded from the diagrams contributing to the leading divergence in the previous section.
The setting sun diagram gets two contributions according to which line of the cubic counterterm vertex is connected to the bold part of the cubic Liouville vertex. Thus one obtains This leads to the following divergences: is a constant independent of A. The expression (4.7) is the full contribution from the cubic counterterms to the diverging part of the partition function.
Quartic counterterms
The quartic counterterm action is (4.10) Again, the coefficients q (j) i may depend on the α k but not on the cutoff Λ. This action gives a "figureeight" diagram : which contributes as with b 1 given in (4.8).
Quadratic two-loop counterterms
The quadratic counterterms (4.13) did contribute via one-loop diagrams to the two-loop partition function, but also via two-loop diagrams to the three-loop partition function as shown in the above computation. However, as always, the counterterm coefficients get contributions at different orders in perturbation theory. If we call c φ , c R and c m the coefficients in (2.21), we may add to them an additional piece 1 Overall, the c are accompanied by a factor 1 κ 4 and they contribute via one-loop diagrams to the three-loop partition function. Thus we also add the following counterterm action where, again, (4.14) The counterterm action (4.13) then provides a new one-loop diagram of order κ −4 : leading to the following divergences: Moreover, two parameters of the two-loop counterterms (2.21) are still unconstrained: c φ and c R . Although only c R appears in the two-loop partition function, both may contribute to the divergent part of the partition function at three loops, through the diagrams , , and . Their diverging contributions are displayed below: and None of these contains a AΛ 2 (ln AΛ 2 ) 2 divergence and this is why these finite counterterm coefficients c φ and c R did not contribute to our computation in section 3.
Total counterterm contribution to the partition function
Since the glasses diagram has no divergence other than in AΛ 2 , the total contribution one could get from the counterterms to the three-loop partition function is given by summing (4.7), (4.12), (4.16), (4.17) and (4.18). Recalling AR * = 8π(1 − h), cf. (2.4), one gets: with where b 1 was defined in (4.8). This is the total contribution to the three-loop partition function of the counterterms that have not been previously fixed by the order 1 κ 2 ("two-loop") computation of [13]. Requiring the one-loop two-point function to be finite and regulator independent fixed c m and parts of c φ and c R . Thus, only their so-far undetermined regularization-independent parts c φ and c R have been included in (4.20). One way to determine some of these counterterms is to compute the two-loop two-point function (order 1 κ 4 ) and the one-loop three-point function (order 1 κ 3 ) and one-loop four-point function (order 1 κ 4 ) and to require them to be finite and regularization independent. Imposing finiteness will completely determine certain combinations of the counterterm coefficients, while imposing regularization independence of the finite terms will fix certain other combinations up to constants. The computations of the two-loop two-point function and of the one-loop three-point and four-point functions clearly are beyond the scope of this work. However, without actually doing this computation, there are still interesting remarks we can make. We can rather easily determine the contributions of the counterterms to these n-point functions. This will tell us which combinations of the counterterm coefficient would be fixed by such a computation. We will find that the relevant combinations are indeed the same as those appearing in the Ω i of the three-loop partition function. Although "expected", this is by no means obvious and constitutes a nice consistency check.
It is straightforward to see that the cubic and quartic counterterms contribute to the diverging parts of the three-and four-point functions as Thus finiteness of these functions fixes both f (1) m and q (1) m and hence, Ω 1 . Finiteness of the two-point function at one loop (order 1 κ 2 ) was already imposed in [13] and resulted in the determination of c m to this order. Here we will only consider its two-loop 1 κ 4 part. We find that the contributions of the counterterms to the diverging part of the two-loop two-point function is with The expression F α i , f Thus, all the coefficients Ω 1 , Ω 2 and Ω 3 of the diverging parts of the counterterm contributions to the partition function (4.19) are exactly determined by the requirement of the finiteness of the two-loop two-point function and of the one-loop three-point and four-point functions ! Obviously, we expect this determination to be such that (4.19) precisely cancels the divergences of the genuine three-loop part of this partition function, as was indeed the case for the two-loop computation of [13].
Let us next discuss Ω 4 which is the counterterm contribution to the order 1 κ 4 part of γ str . With the f (1) m , q (1) m and the ρ i been fixed, also Ω 1 , Ω 2 and Ω 3 are fixed and we consider Ω m and c (1) R . Furthermore, one may require the "strong locality condition" that the non-local terms in the measure (2.31) and counterterm actions (4.13), (4.1), (4.9) cancel out. This fixes q (2) m , f (2) m and c (2) m as Thus, among the six undetermined constants in Ω 4 , only c (2) m is fixed, and we end up with five free finite renormalization constants on which Ω 4 depends: Ω R . We conclude that in addition to the undetermined c R which already entered as a free parameter in the two-loop expression of γ str , at three loops γ str depends on four additional undetermined parameters.
Finally, as anticipated in section 3, none of the counterterms contributes to the ln AΛ 2 3 divergence. The only way to generate such divergences would be by introducing non-local counterterm coefficients that already involve a factor of ln AΛ 2 . However, as repeatedly argued, such counterterms should be forbidden. Then, since there is no possible counterterm for a ln AΛ 2 3 divergence, such a divergence is required to cancel in the first place between the three-loop vacuum diagrams. As shown above, this is indeed the case.
Discussion
The purpose of our work was to check if and which new counterterms are required at three loops. We have therefore computed the leading divergence of the three-loop partition function at fixed area, cf (3.26). It does not vanish and thus genuine three-loop counterterms are required. It is interesting to note that the two-loop computation already pointed to the insertion of new counterterms at three loops. Indeed, the counterterms inserted at two loops have a strong similarity with the measure terms at two loops. Yet, at three loops, the measure action gives rise to cubic and quartic vertices unlike the two-loop counterterm vertices. Therefore, one could have expected additional counterterms to be needed. This argument can be generalized to all orders, as the measure action gets additional structures at every order in the loop expansion. If the counterterms are to be understood as a renormalization of the measure action, the latter itself coming from the regularization of the measure for the metrics, then new counterterms have to be introduced at every order in the perturbation series. On the other hand, what is really surprising and encouraging is that if one requires the counterterms to be local, in particular that no counterterm coefficient with a ln AΛ 2 divergence is allowed, then all the divergences may be offset but the ln AΛ 2 3 divergence. However, as we showed, this divergence cancels out between the threeloop diagrams, meaning that local counterterms are enough to balance all the non-local divergences. Moreover, the required counterterm action has a structure similar to those of the measure action, supporting the understanding of counterterms as a renormalization of the measure.
Nevertheless, with no other way to discriminate the counterterms than to forbid (ln AΛ 2 )-like nonlocal terms, many new free parameters appear. At three loops, doing so gives rise to twelve new parameters. Imposing the divergences to vanish in the one-loop three-and four-point functions and in the two-loop two-point function fixes two parameters and three combinations of the parameters. We found that with these parameters and combinations of parameters fixed, the diverging part of the threeloop partition function is also completely fixed with no additional adjustable parameter remaining. Obviously, as was the case at two loops, we expect this to happen in precisely such a way that all divergences in the three-loop partition function cancel, except for the (ln AΛ 2 )-piece that yields the three-loop contribution to the string susceptibility. Indeed, this is the only coefficient of the three-loop partition function which contains undetermined finite renormalization constants. More precisely, it depends on six unconstrained renormalization constants. We argued that there are two different notions of locality of the counterterm coefficients: while coefficients involving ln AΛ 2 were excluded, we did allow coefficients proportional to 1 A since such non-local terms already appeared through the measure action. Introducing such 1 A counterterms in precisely such a way as to cancel the corresponding 1 A terms in the measure action was referred to as "strong locality condition". Imposing this condition fixes one of the six free parameters in the contribution to the string susceptibility, leaving us with five free renormalization constants on which the three-loop contribution to γ str depends. One of these free renormalization constants was already present in the two-loop string susceptibility, so that at three-loops, four new constants play a role.
Several additional requirements should be considered, such as the condition that neither the npoint functions nor the partition function should depend on the choice of regularization. In particular, the regularization function ϕ(α i ) satisfies ∞ 0 dα i ϕ(α i ) = 1 and certain regularity conditions at 0 and infinity, but is otherwise arbitrary. Its choice should not impact any final, physical result. This means that all the dependence in the α i must disappear in the end. Although important, this argument is not enough to fully determine the counterterms, in particular it cannot fix any α-independent pieces. Another criterion is the background independence. Physical results must not depend on the background metric g 0 arbitrarily chosen to write the Liouville action, define the conformal factor σ and thus the Kähler fieldφ. The eigenmodes of ∆ * are also defined through this choice of the reference metric. One way to check for background independence is to derive the cocycle identities for the various actions. It is easy to check that the Liouville action satisfies this condition. Formally, the same is true for the measure action. However, as usual, the need to introduce an explicit regularization, making reference to a background metric, obscures the background independence and makes it difficult to be verified. It could well be that some indirect criterion for background independence fixes some or all of our free renomalization constants.
two Ks after doing the expansions are thus discarded in the following. Remembering that t = α Λ 2 , the previous integrals may then be rewritten as: where all the terms in O(AΛ 2 ln AΛ 2 ) are discarded. The fact that the α i (and thus t i ) are dummy variables that can be renamed and are symmetrized, has been used to simplify the writings of J (2) and J (3) . Finally, the term J 1,2 4,5 in J (2) is the term proportional to Λ 4 defined in (3.5). All contributions ∼ Λ 4 have been discussed in section 3.2 and are summarized in Tab. 1. At present we are only interested in the other types of divergences and thus we will simply drop the term J 1,2 4,5 in the following. We conjecture: α a and α b being dummy variables, this may be rewritten as if n = 0 .
(A. 6) and From (2.25), one observes that which is verified by the above expressions, before considering the symmetries between the α i . Putting everything together and remembering once more that the α i are dummy variables, one gets: up to subleading divergences.
One encounters also integrals such as L = dν(x)dν(y)dν(z) K 1 (z, z) K 2 (x, y) K 3 (x, y) K 4 (x, z) − d dt 5 K 5 (y, z) (A.10) whose explicit computation requires to Taylor expand a product of two K or K. We denote such integrals by L. Integrating over z around x through the exponential term in K 4 (x, z), see (2.26), leads us to Taylor expand K 1 (z, z) d dt 5 K 5 (y, z) in (z − x) around x. After integration, one gets terms such as: dν(x)dν(y) K 2 (x, y) K 3 (x, y)∂ x i 1 . . . ∂ x ir K 1 (x, x)∂ x j 1 . . . ∂ x js − d dt 5 K 5 (x, y) (A.11) with r + s even. If s is odd, then the function K 2 (x, y) K 3 (x, y)∂ x j 1 . . . ∂ x js − d dt 5 K 5 (x, y) is odd and performing the integral over y kills the contribution: r and s have to be even. Since K(t, x, x) depends on x only throughG A 0 ζ (x), see (2.29), for r ≥ 2, ∂ x i 1 . . . ∂ x ir K 1 (x, x) does not contribute to the leading divergence by neither a factor ln AΛ 2 nor Λ 2 . The diverging contributions may thus only come from the integral over y. However, applying s derivatives on d dt 5 K 5 (x, y) for any even s leads to terms similar to (−1) s 2 d dt 5 1+ s 2 K 5 (x, y). Integrating over y one gets B 1+ s 2 (t 2 , t 3 , t 5 ; x) which only produces one of the two ln AΛ 2 of the leading divergence. The only terms contributing to AΛ 2 ln AΛ 2 2 are thus the terms with r = 0 and s even. | 11,302.2 | 2015-04-07T00:00:00.000 | [
"Physics"
] |
Spatiotemporal pattern of periodic rhythms in delayed Van der Pol oscillators for the CPG-based locomotion of snake-like robot
This is the further research on the delayed half-center oscillator (DHCO) neural system presented in our previous paper (Song and Xu in Nonlinear Dyn 108:2595–2609, 2022. https://doi.org/10.1007/s11071-022-07222-y). The DHCO is used to construct a CPG (central pattern generator) neural system to control locomotion of a snake-like robot with pitch-yaw connecting configuration. To this end, we firstly give an improved model of the VDP (Van der Pol) oscillator. Employing mutually coupled delay, a pair of VDP oscillator is connected to produce an half-center oscillator (HCO) module with time delay that is called as a DHCO (delayed HCO) model. Based on the analysis of the Hopf bifurcation, periodic rhythm and their spatiotemporal patterns of the DHCO are illustrated in the different regions of parameters. The DHCO presents periodic rhythms with synchronous and anti-synchronous patterns, which is to control joint actuators combined in snake-like robot with pitch-yaw connecting configuration. To realize a backward propulsive wave to promote snake-like robotic locomotion, based on the DHCOs, we construct a chain type of the CPG neural system combined with a new unidirectional delay in which phase difference can be regulated. Numerical simulations are illustrated that the CPG neural system can control snake-like robot to move with serpentine, rectilinear, and side-winding patterns in the forward and backward directions. The results show that the snake-like robot can be controlled in expected locomotion patterns for a region but not a fixed value of the controlling parameters. Further, the corresponding regions of the parameters are obtained by using theoretical dynamical analysis but not a trial-and-error method. The snake-like robot gets smooth and stable gait transition with parameter changing.
Introduction
In recent decades, with the development of biological neurosciences, some effective control strategies inspired by biological mechanism have been attracting great attentions for engineers and scientists to solve locomotion control of biomimetic robot in unstructured environment [1,2]. In fact, natural animals including humans have high stability movement. They can walk, craw, swim, and even worm in any occasion and at any time, which is produced and controlled by a special neural network-central pattern generator (CPG) neural system located in spinal cord [3,4]. Generally, the biological CPG model is a neural network system including some special types of neurons [5,6]. Rhythmic activity and pattern transition are simulated by employing neural models and their connections [7,8]. In the application of the robot engineering, a variety of CPGs are designed to control the locomotion of bio-inspired robots. The CPG controllers can produce continuous and smooth output signals due to self-adjusting ability when the turning parameters are changed. As a result, the bio-inspired robots can smoothly and rapidly switch their states based on controlling parameters or external environment [9,10].
Snake-like robot is an interesting and exciting masterpiece inspired by nature snake. It has a redundant and flexible body. The limbless structure with number of joints gives snake the superiority to move in wide range of environments. Snakes can travel through narrow space, slither on dry/muddy ground, and swim insider water by employing many different types of motion patterns [11]. To achieve locomotion patterns of snake-like robots, some CPG-based biomimetic strategies have been proposed [12][13][14]. Matsuoka model is a typical motif oscillator to construct controller of robotic locomotion. It has definite biological meanings, such as sensory feedback, external input, and even control signals from brainstem. For the snake-like robotic locomotion, Lu et al. [15] used a bidirectional cyclic inhibitory (BCI) CPG model and achieved three types of rhythmic patterns called as the serpentine, concertina (rectilinear), and side-winding locomotion. Wu and Ma [16,17] presented that the snake-like robot with BCI-CPG controller has adaptive creeping locomotion in a complex environment. However, rhythmic signals of Matsuoka model are obtained through a dual-or trineuron interconnection consisting with extensor and flexor neurons. This structure involves so many parameters to cause difficulties in selecting proper parameters [18].
On the other hand, to simplify dynamic relationship of system parameters and output signals, phase oscillator, i.e., Kuramoto model is introduced to construct CPG controller in the robotic engineering applications because of the specified parameters of frequency and amplitude. Ijspeert [19] achieved gaits transition of swimming and walking locomotion for a snake-like salamander robot. The serpentine and sidewinding motions of snake-like robot were generated to fit with an unstructured environment [20]. Based on the Kuramoto oscillator, Nor and Ma [21] presented a CPG controller for a planar snake-like robot. Bing et al. [22] designed a lightweight CPG model and realized the snake-like robot to obtain smooth slithering gait transition. However, the Kuramoto-based CPG controller is just the coupled phase oscillators through sine function, which lacks of well-defined biological meanings. The phase difference between adjacent oscillators must be cooperatively adjusted by multiple parameters. Further, to obtain sidewinding locomotion in complex terrains, Qiao et al. [23] suggested a 3D motion control method based on triplelayered CPG by using Kuramoto oscillators. Recently, Manzoor et al. [24,25] constructed a CPG model to generate rhythmic patterns and obtain smooth transition of snake-like robot locomotion. The proposed Kuramoto-based CPG system proposed a triple-layer structure and became more complicated.
In fact, the basic function of motif oscillator in the CPG system is to generate rhythm activity [26,27]. The Hodgkin-Huxley (H-H) types of neural models have many parameters and intractable transcendental functions of channel conductance [28][29][30]. To obtain a biological-like CPG system with fewer parameters, the nonlinear oscillator is a suitable mathematical model to construct the CPG controller of robotic locomotion. The Van der Pol (VDP) oscillator was originally proposed as an electronic circuit model [31,32] and extensively studied as a well-known selfsustained prototype in a great number of applications including heartbeats, circadian rhythms, and biological rhythms [33,34]. Dutra, et al. [35] used coupled VDP oscillators to obtain a bipedal locomotor and control hips and knees of human. The VDP-based CPG controllers for quadruped and hexapod robots were proposed [36,37]. For the hexapod robot, Yu et al. [38] verified that the VDP-based CPG controller can generate walking gaits and smooth transition. To the best of our knowledge, there is no published information on constructing CPG neural controller by using the delayed oscillators. In this paper, we will give a special network structure of CPG neural system based on coupling delays. The mapping function from modulating parameters to output signals is presented by theoretical dynamical analysis, but not a trial-anderror simulation method. The snake-like robot presents some locomotion gaits with smooth switching based on the controlling parameters. This motivates our present research.
In fact, the HCO (half-center oscillator) is a basic and pivotal motif in biological CPG neural system [39,40]. In our previous paper, we have proposed the DHCO (delayed HCO) neural model and analyzed symmetric patterns of rhythm activity [41,42]. The DHCO system presents many types of rhythm activities with self-and mutual-symmetric patterns by different bifurcation mechanisms [43]. In this paper, to imitate biological CPG system, we firstly exhibit a DHCO neural module based on the mutually delayed VDP oscillators. Employing the Hopf bifurcation analysis, we present a series of parameter regions, in which the DHCO system exhibits periodic rhythms with synchronous and anti-synchronous spatiotemporal patterns. For the pitch-yaw connecting snake-like robot, we regulate joint actuators by periodic rhythms. At last, we construct a chain type of CPG system by using unidirectional coupling delay, where the delay is to regulate phase difference between the adjacent DHCO modules. The snake-like robot can propose the serpentine, rectilinear, and side-winding movement gaits in the forward and backward directions, respectively.
Generally, based on the above-mentioned CPG controlling strategy to control the snake-like robotic locomotion, there are some following advantages. Firstly, the VDP oscillator used in the paper presents well-defined corresponding parameters to regulate its frequency, amplitude, and phase difference. Secondly, periodic rhythm activities of the DHCO module present synchronous and anti-synchronous spatiotemporal patterns. Time delay varied in different parameter regions can rapidly switch the spatiotemporal patterns without any intermediate states. The snakelike robot presents shorter stage of the locomotion transition process. Thirdly, there exists just one state of rhythm activity in the fixed delay regions. The time delay varied in the fixed region cannot destroy the spatiotemporal patterns. The CPG system has a good robustness to control the snake-like robotic locomotion. Further, to regulate phase difference of the DHCO modules and get a suitable traveling wave, a unidirectional delay is introduced in the chain type of CPG system. So, all activity joints can be regulated by the unidirectional delay, which reduces many redundant controlling parameters. In a word, the proposed CPG neural system based on the delayed VDP oscillator can be used to control snake-like robot with fewer controlling parameters and richer locomotion patters. A group of parameter values is chosen for an expected motion based on the theoretical dynamical analysis. In addition, there is a very interesting phenomenon in the delayed CPG neural system. The unidirectional delay can multiply switch the spatiotemporal patterns of periodic rhythms from synchrony to anti-synchrony and then return to synchrony. Snake-like robots controlled by the CPG system will multiple switch locomotion in the forward and backward direction when the unidirectional delay increases.
The paper is organized as follows. In Sect. 2, an improved VDP oscillator is proposed to regulate frequency and amplitude of the periodic rhythm. In Sect. 3, the DHCO module is designed to present controlling signals by using a pair of VDP oscillator with mutually coupling delay. The unit actuators in the pitch-yaw connecting snake-like robot can be controlled by the DHCO module system. In Sect. 4, employing the DHCO modules, we construct a chain type of CPG neural system with a unidirectional delay. Numerical simulations show that the CPG system can control the snake-like robotic locomotion with the serpentine, rectilinear, and side-winding patterns in the forward and backward directions, respectively. Finally, Sect. 5 gives some conclusions.
An improved VDP oscillator model
Mathematic model of the VDP oscillator is a secondorder differential equation described by: where x denotes time-dependent position of the oscillator, and e is a real number of nonlinear damping ratio. The parameter a [ 0 is introduced to determine oscillation amplitude, and b [ 0 is to natural frequency of the oscillation.
As is known to all, system (1) has a trivial equilibrium (0, 0). The characteristic equation corresponding to the trivial equilibrium is kðk À eaÞ þ b ¼ 0. We obtain the linearized system's eigenvalues k 1;2 ¼ ðea AE ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi e 2 a 2 À 4b p Þ=2. It implies the dynamic behavior of system (1) is stable when e \ 0 since the eigenvalues have Reðk 1;2 Þ\0, while unstable when e [ 0. The time history and phase diagram are shown in Fig. 1. It follows from Fig. 1a, b that the trajectory of system (1) is closed to the trivial equilibrium for e = -0.1. However, the trivial equilibrium is unstable for e = 0.1. The system exhibits a stable periodic orbit surrounding the unstable trivial equilibrium, as shown in Fig. 1c, d. It is a self-excited oscillation.
The parameters a [ 0 and b [ 0 are introduced to regulate the frequency and amplitude of the periodic rhythm in system (1). In fact, the snake-like robot controlled by the CPG neural system can move with different velocity and patterns by adjusting the frequency and amplitude. The improved VDP oscillator can regulate the frequency of rhythm by adjusting a and amplitude by b independently, as shown in Fig. 2 for e = 0.1. It follows that we can freely change the frequency and amplitude of periodic rhythm activity in the improved VDP oscillator.
Delayed half-center oscillator
In this section, we propose a delayed half-center oscillator (DHCO) module employing the abovementioned VDP oscillator, where two VDP oscillators connect each other by mutually coupled delays. Different periodic rhythms with synchronous and anti-synchronous spatiotemporal patterns are exhibited using theory analysis and numerical simulation. The DHCO model is given by the following differential equation.
where k is a coupling strength, and s [ 0 is time delay. It follows that the dynamic behavior of system (2) is determined by time delay s and coupling strength k.
Here, we present periodic rhythms and their regions of system's parameters by using the Hopf bifurcation. The time delay can induce periodic rhythm having synchrony and anti-synchrony patterns. To this end, we rewrite system (2) by transformation
System
(3) has a trivial equilibrium ðu 1 ; u 1 ; u 3 ; u 4 Þ ¼ ð0; 0; 0; 0Þ. The linearized system of (3) at the trivial equilibrium leads to The characteristic equation of system (4) is given by Let s = 0 in Eq. (5), we have the following four roots, which is ; : ð6Þ When e [ 0, the trivial equilibrium is always unstable for the case s = 0. For s [ 0, we assume characteristic Eq. (5) has a pure imaginary root. Letting k = ix, (x [ 0) and separating the real and imaginary parts, one has It follows that x is satisfied with Equation (8) at most has four positive roots x i (i ¼ 1; Á Á Á ; 4). It follows from Eq. (7) we obtain the following critical delays where / i 2 ð0; 2p and is satisfied with To determine the transversality condition of the Hopf bifurcation, we differentiate Eq. (5) with s, which yields Employing the Hopf bifurcation theory, we can draw the following conclusions. When Eq. (8) presents no positive root, the dynamic behavior of system (3) near the trivial equilibrium is locally stable. When (1) Eq. (8) presents one positive root, there exists a critical delay s c satisfied with Eq. (9). The trivial equilibrium is locally stable for s [ [0, s c ). System (3) exhibits a Hopf bifurcation s = s c with transversality condition Reðdk=dsÞ 6 ¼ 0: A stable periodic orbit will be presented when time delay increases and crosses through s c . It implies that the DHCO module exhibits a periodic rhythm. Further, when Eq. (8) has at least two positive roots, there are a series of critical delay s j i ; i ¼ 1; 2; j ¼ 0; 1; 2; Á Á Á satisfied with Eq. (9), which divides the parameter plane into a series of islands of amplitude death (AD). In AD islands, the system has a stable trivial equilibrium, as shown in Fig. 3. Moreover, the DHCO system will exhibit periodic rhythm activities with different spatiotemporal patterns when time delay varies and switches from these AD islands.
In fact, the spatiotemporal patterns of periodic rhythm produced by time delay in system (3) can be analyzed through the equivariant Hopf bifurcation. However, the method is very intractable and complicated because of the normal form and center manifold in the Banach function space. So, for the convenience of reader's understanding, we just exhibit some numerical simulations to exhibit the spatiotemporal patterns. To this end, we firstly give the critical values of time delay defined by Eq. (9) when HðxÞ ¼ 0 has two positive roots. The critical delays will determine a series of parameter regions. Then, for these regions, we illustrate time histories of the DHCO system to show the spatiotemporal patterns of periodic rhythms. It should be noticed that there is just one type of spatiotemporal pattern (synchronous or anti-synchronous) for the chosen delayed regions. The initial functions are fixed as u 1 (t) = 0.1, u 2 (t) = 0.2, u 3 (t) = 0.3, u 4 (t) = 0.4 for t 2 ðÀs; 0. Taking the system parameters as a = 1, b = 1, e = 0.03 and k = 0.5, we obtain two oscillating frequencies such as x 1 = 0.843343 and x 2 = 1.18576 employing HðxÞ ¼ 0 in Eq. (8). It follows from Eq. (9) We rearrange the critical delays as The periodic rhythms and their spatiotemporal patterns are shown in Fig. 4. It follows the DHCO system presents a stable synchronous rhythm activity when time delay is less than s 0 1 , as shown in Fig. 4a for s = 0.2. If time delay s passes through s 0 1 and enters into the first AD island, system (3) presents a stable trivial equilibrium by the reverse Hopf bifurcation. Further, the stable trivial equilibrium loses its stability and enters into a periodic rhythm employing the Hopf bifurcation when time delay crosses through the critical delay s 0 2 . At this time, the rhythm activity exhibits an anti-synchronous pattern, as shown in Fig. 4b for s = 3.0. Moreover, when s crosses through the critical delay s 1 1 and enters into the second AD island, the anti-synchronous rhythm activity will evolve into the stable trivial equilibrium. Then the trivial equilibrium loses its stability and evolves into the synchronous rhythm activity as time delay s is being away from the second AD island and passes through the critical delay s 1 2 , as shown in Fig. 4c for s = 6.0. Following this way, the periodic rhythm activity evolves into anti-synchronous pattern from the synchronous state undergoing the stable trivial equilibrium, as shown in Fig. 4d for s = 10. In a word, the DHCO model presents the periodic rhythms with synchronous and anti-synchronous patterns when time delay increases to cross through the series of critical delays employing the reverse and forward Hopf bifurcation, respectively. Figure The DHCO model is a unit and motif of the CPG neural system. To control the snake-like robotic locomotion with different patterns, we use the above-mentioned DHCO model to construct a CPG neural system. The controlling relation of the DHCO and the robotic module is shown in Fig. 6a. The snakelike robot is designed as a symmetrical structure with pitch-yaw connecting modules rotating around the pitch and yaw axis, respectively. In fact, the pitch-yaw configuration presents much more kinds of locomotion patterns, such as serpentine, rectilinear and sidewinding locomotion compared with the pitch-connecting or yaw-connecting configuration. Further, based on the biological mechanism of excitatory and inhibitory, the output signals of the DHCO are designed as Pout 1 = (x 1 ? x 2 ) / 2 and Yout 1 = (x 1x 2 ) / 2, where Pout 1 and Yout 1 are the controlling signals to drive the joint actuators rotating around the pitch and yaw axis. The periodic rhythms of the VDP oscillators, i.e., x 1 and x 2 , present synchronous and anti-synchronous patterns in different parameter regions.
The output signals of the DHCO model with different delayed values are shown in Fig. 6b The snake-like robotic module rotates around the yaw axis and achieves serpentine locomotion pattern. In the next section, we will explain in detail the different locomotion patterns. By using the DHCO module, the CPG neural system is constructed to control the pitch-yaw connecting snake-like robot. It is the periodic rhythm pattern for the combined actuators rotating around the pitch and yaw axis, respectively.
CPG-based Locomotion of the Snake-like Robot
In this section, based on the above-mentioned DHCO modules, we propose a CPG neural system to control snake-like robot with many types of locomotion patterns including serpentine, rectilinear and sidewinding locomotion in the forward and backward directions, respectively. The schematic diagram of the snake-like robot is designed in Fig. 7, where the pitchyaw connecting modules rotate h i (i = 1, …, 2n) around pitch and yaw axis. Considering the configuration of the snake-like robot, we choose equal-length distance between pitch and yaw modules to avoid the effect of the movement stability. The DHCO modules are connected in series from the head to the tail by excitatory synapse connections with time delay l. The new delay l is introduced to regulate phase difference of the adjacent DHCO modules. The chained-type CPG neural system and their control principle are illustrated in Fig. 7. The control parameters can be used to steer the movement patter and their speed of the snake-like robot. The CPG neural system has n DHCOs. Each DHCO consists of two VDP oscillators corresponding to the combined joints with pitch-yaw connecting configuration. The corresponding Numerical simulations are illustrated in Fig. 8 to show their phase difference. The output signal of the VDP oscillator x i (i = 1, …, 5) is regulated by time delay l, where the DHCO number is n = 5 in the CPG neural system. The parameters of the VDP oscillator are fixed as e = 0.03, a = 1, b = 1, k = 0.1 and s = 0.1. It follows that the VDP 1 oscillator (x 1 in black) has a preemptive phase when l = 5.5. The output signals of the CPG system have a phase lag, which achieves backward locomotion of the snakelike robot. It follows from Fig. 8 that the phase difference of the DHCO module decreases with time delay increasing. When time delay increases to l = 6.2, all signals of the DHCO modules almost share a same phase. The phase difference is almost zero. However, the phase difference switches from phase lag to phase lead when time delay changes to l = 7. At this time, the first oscillator (VDP 1 ) has a lagging phase, and the end one, i.e., VDP 9 , has a € x 1 þ e x 2 1 À a 1 Fig. 7 The CPG neural system and control principle for the snake-like robot with pitch-yaw connecting configuration preemptive phase. The output signals of the CPG system realize forward locomotion. It implies that the new time delay l regulates the snake-like robot locomotion in forward and backward direction. Further, the internal parameters of the DHCO module, such as a i and b i , adjust the amplitude and frequency of periodic rhythm. The time delay s determines the spatiotemporal pattern of the periodic rhythms. In a word, by adopting suitable parameters of the CPG neural system, we can obtain different locomotion patterns including serpentine, rectilinear and sidewinding patterns in the forward and backward directions. The corresponding simulation experiments will be illustrated to demonstrate the snake-like robotic locomotion.
Serpentine locomotion
Serpentine, also called as lateral undulation locomotion, is a most common and realest pattern of the snake movement, which is described as a sinusoidal waveform from the top view of the body. In this pattern, the joint actuators combined in the horizontal direction will rotate around the yaw axis. The vertical units will be to remain stationary. The pitch-yaw snake-like robot moves like a real snake propelled by the lateral wave thrusting from the tail to the head. To achieve the serpentine locomotion of snake-like robot, the output signals of the CPG neural system should be satisfied with Pout i = 0. The periodic rhythms Yout i loaded the horizontal units form backward or forward wave propulsion. The snake-like robot moves along a given curve path and maintains its longitudinal axis with the same orientation. Fig. 9 The output signals of the CPG neural system with s = 3 and l = 7 generated the forward serpentine locomotion The CPG output signals are shown in Fig. 9 for the fixed time delays s = 3 and l = 7. It follows that the CPG outputs Pout i = 0. The DHCO systems are chosen as a = 1, b = 1, e = 0.03 and k = 0.5. The joint actuators combined in vertical direction are all in silent states. The rhythm activities Yout i configure a backward moving wave with the identical frequency and increasing amplitude. The pitch-yaw connecting snake-like robot presents a forward serpentine locomotion. Fixed time delay as s = 3 and decreased the delay to l = 5.5, the CPG outputs are illustrated in Fig. 10, where Pout i = 0. The periodic rhythms Yout i generate a forward-moving propulsive wave. It follows that the first HCO unit of the CPG neural system has a lagging phase, while the last one presents an early phase. The snake-like robot will adopt the backward serpentine movement. The MATLAB-ADAMS co-simulation is used to describe the detailed patterns of the serpentine locomotion, as shown in Fig. 11, where the swing angles h i of the joints are represented as a linear mapping function of the CPG outputs signals, i.e., h i = m i . Yout i (i = 1, 3, …, 2n-1) Fig. 10 The output signals of the CPG neural system with s = 3 and l = 5.5 generated the backward serpentine locomotion Fig. 11 The screenshots of the ADAMS simulation for the serpentine locomotion and h i = n i . Pout i (i = 2, 4, …, 2n). It follows that the snake-like robot controlled by the CPG neural system performs the serpentine locomotion.
Rectilinear locomotion
Rectilinear, also called as straight-line locomotion, is a special type of movement pattern, which is presented by a sinusoidal waveform from the side view of the snake body. From the top view, the whole body of the snake is to maintain a straight-line state when it moves. For the pitch-yaw connecting snake-like robot, the locomotion depends on the vertical actuators rotating around the pitch axis. The rhythm activities Pout i of the CPG neural system form backward or forward wave propulsion. The horizontal actuators remain stationary, that is Yout i = 0. The snake-like robot moves like a worm along with a straight-line path propelled by the touching friction between the snake body and ground.
To obtain the rectilinear locomotion, we choose time delays as s = 0.2 and l = 7 for the fixed parameters a = 1, b = 1, e = 0.03 and k = 0.5. It follows that the VDP oscillators in the DHCO module present a synchronous pattern for s = 0.2. The output signals of the CPG neural system propose Yout i = 0, as shown in Figs. 12 and 13. When time delay is fixed as l = 7, the periodic rhythms Pout i configure a backward moving wave with the identical frequency and increasing amplitude. The head actuators illustrate small amplitude and early phase. At this time, the forward rectilinear locomotion is accompanied by means of the backward-moving wave from head to tail. On the other hand, to generate a backward rectilinear locomotion, the CPG units in the vertical direction should produce a forward-moving propulsive wave. The horizontal units should be silent. Thanks to freely adjust the phase difference of the DHCO modules by time delay l, we can illustrate the backward control signals, as shown in Fig. 13 for l = 5.5. It follows that the output signals of CPG illustrate a reversed phase difference. The first unit at the head has a lagging phase, while the last one at the tail presents an early phase. The snake-like robot moves by adopting the backward rectilinear locomotion. To make it easier to understand for the reader, we present the ADAMS simulation as shown in Fig. 14, where the snake-like robot exhibits the rectilinear locomotion under the control of the CPG neural system.
Side-winding locomotion
Side-winding locomotion is characterized by two sinusoidal waveforms from the top and the side view of the snake body, which is a typical 2D movement. It has a more high efficiency for the snake-like robot. In the pitch-yaw connecting snake-like robot, the joint actuators combined in the horizontal and vertical directions rotate around the pitch and yaw axis, respectively. The snake-like robot can move in the side-winding locomotion by using the combination of the horizontal waves and vertical waves. To simulate the side-winding locomotion, we choose the different parameter values of the VDP oscillators in DHCO system. In fact, a i [ 0 is to control the amplitude of the Fig. 12 The output signals of the CPG neural system with s = 0.2 and l = 7 generated the forward rectilinear locomotion periodic rhythm in the DHCO modules. So in this section, we fix a i = 1 for the odd number of the VDP oscillators and a i = 4 for the even ones. The output signals of the CPG neural system are shown in Fig. 15 for time delays s = 0.2 and l = 7. It follows that the periodic rhythm activities Pout i and Yout i configure two different sinusoidal waveforms, which drives the horizontal and vertical actuators rotating around the pitch and yaw axis, respectively. The ADAMS simulation is shown in Fig. 16. The snake-like robot moves with the side-winding locomotion pattern.
Turning maneuver
Turning maneuver is a necessary ability to change movement direction for the snake-like robot. To obtain turning maneuver, we should construct the asymmetrical undulation outputs of the CPG neural system. Replacing x i with x i À b i ;ði ¼ 1; Á Á Á ; 2nÞ in Eq. (15), we rewrite the CPG dynamical system having the bias parameters b i , which is added to modulate oscillating center. Whether in forward and backward movement, we can adjust the bias parameter to turn locomotion direction, as shown in Fig. 17 Fig. 13 The output signals of the CPG neural system with s = 0.2 and l = 5.5 generated the backward rectilinear locomotion Fig. 14 Fig. 15 The output signals of the CPG neural system generate the side-winding locomotion pattern with the different VDP oscillators a i = 1 (i = 1, 3, 5, 7, 9) and a i = 4 (i = 2, 4, 6, 8, 10) Pout 5 Fig. 17 The output signals of the CPG neural system to obtain turning maneuver by changing the bias from b i = 0 before t = 150 to b i = 1 b i = 0 to b i = 1 at t = 150. The backward locomotion changes its direction to achieve the turning maneuver.
In the end of this section, we present multiple switching of movement direction. In fact, with time delay l increasing, the locomotion direction of snakelike robot can multiply switch from forward to backward and then back to forward, as shown in Fig. 18 for e = 0.1, a = 1, b = 1, k = 0.1 and s = 1. It follows that the CPG system generates a backward movement pattern when l = 6, where the head joint has small amplitude and early phase, as shown in Fig. 18a. The backward locomotion will switch its direction and present a forward movement pattern when l = 7. At this time, the head joint exhibits lagging phase, as shown in Fig. 18b. Further, the backward and forward movement will be presented in succession when time delay continuously increases to l = 12.5 and l = 13.5, as shown in Fig. 18c, d. The CPG neural system offers an effective and interesting method to realize the locomotion transitions of the snake-like robot.
Conclusions
In this paper, based on the DHCO modules, we presented a VDP-based CPG neural system to control a snake-like robot with pitch-yaw connecting configuration, which moves with different locomotion patterns by changing coupled delays. The improved VDP oscillator model was adopted to adjust the amplitude and frequency of periodic rhythms. Employing mutually coupled delay, we firstly constructed a DHCO module consisting with a pair of VDP oscillator. By analyzing the Hopf bifurcation of the DHCO system, we obtained different periodic rhythms with synchronous and anti-synchronous spatiotemporal patterns. The corresponding parameter regions were exhibited by Hopf bifurcation curves. Based on the DHCO modules, we constructed a CPG network system with a chain-type configuration to control a snake-like robot. The different locomotion patterns were obtained by adjusting coupling delays. The pitch-yaw connecting snake-like robot achieved the serpentine, rectilinear and side-winding patterns locomotion in the forward and backward directions. Simulation results show that the presented CPG neural system can be used to control the snake-like robot with different locomotion patterns. The DHCO modules coupling with time delay is a useful method to achieve smooth transition when the speed or direction of the snake-like robot locomotion is changed.
As a matter of fact, in this paper, we just presented researching results of theoretical analysis and numerical simulations. The MATLAB-ADAMS co-simulation was performed to prove the validity and feasibility of locomotion control by using the proposed CPG neural system. Further work will focus on experiment researches of the snake-like robot. Since the output of the CPG system is dimensionless, it cannot be used directly as a joint control signal. A simple and common method is to give a mapping function to transform the output of the model from the dimensionless outputs to the swinging angle of the module joints. In the experiment, host computer will calculate the CPG model presented by Eq. (15) and obtain the corresponding locomotion gaits to control the snakelike robot. Meanwhile, actuator unit receives the instructions from the host computer and completes the whole movement action according to the CPG neural system. The module runs a PD controller to control the servo so that the joint can reach the desired position. | 7,693 | 2022-08-22T00:00:00.000 | [
"Engineering"
] |
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora
A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (non-neutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.
Introduction
The sentiment of the word soft varies drastically between an online community dedicated to sports and one dedicated to toy animals ( Figure 1). Terrific once had a highly negative connotation; now it is essentially synonomous with good ( Figure 2).
Inducing domain-specific sentiment lexicons is crucial to computational social science (CSS) research. Sentiment lexicons allow researchers to an- alyze key subjective properties of texts, such as user opinions and emotional attitudes (Taboada et al., 2011). However, without domain-specific lexicons, analyses can be misled by sentiment assignments that are biased towards domain-general contexts and that fail to take into account community-specific vernacular or demographic variations in language use (Hovy, 2015;Yang and Eisenstein, 2015). Experts or crowdsourced annotators can be used to construct sentiment lexicons for a specific domain, but these efforts are expensive and timeconsuming (Mohammad and Turney, 2010; Fast et al., 2016). Crowdsourcing is especially problematic when the domain involves very non-standard language (e.g., historical documents or obscure social media forums), since in these cases annotators must understand the sociolinguistic context of the data.
Recent work has shown that web-scale sentiment lexicons can be automatically induced for large socially-diffuse domains, such as the internet-atlarge (Velikovich et al., 2010) or all of Twitter (Tang et al., 2014). However, in cases where researchers want to analyze the sentiment of domain-specific language-such as in financial documents, historical texts, or tight-knit social media forums-it is not enough to simply use generic crowdsourced or webscale lexicons. Generic lexicons will not only be inaccurate in specific domains, they may mislead research by introducing harmful biases (Loughran and McDonald, 2011) 1 . Researchers need a principled and accurate framework for inducing lexicons that are specific to their domain of study.
To meet this need, we introduce SENTPROP, a framework to learn accurate sentiment lexicons from small sets of seed words and domain-specific corpora. Unlike previous approaches, SENTPROP is designed to maintain accurate performance when using modestly-sized domain-specific corpora (∼10 7 tokens), and it provides confidence scores along with the learned lexicons, which allows researchers to quantify uncertainty in a principled manner.
The key contributions of this work are: 1. A state-of-the-art sentiment induction algorithm, combining high-quality word vector embeddings with an intuitive label propagation approach. 2. A novel bootstrap-sampling framework for inferring confidence scores with the sentiment values. 3. Two large-scale studies that reveal how sentiment depends on both social and historical context. (a) We induce community-specific sentiment lexicons for the largest 250 "subreddit" communities on the social-media forum Reddit, revealing substantial variation in word sentiment between communities.
(b) We induce historical sentiment lexicons for 150 years of English, revealing that >5% of words switched polarity during this time.
To the best of our knowledge, this is the first work to systematically analyze the domain-dependency of sentiment at a large-scale, across hundreds of years and hundreds of user-defined online communities. All of the inferred lexicons along with code for SENTPROP and all methods evaluated are made available in the SOCIALSENT package released with this paper. 2 1 http://brandsavant.com/brandsavant/ the-hidden-bias-of-social-media-sentiment-analysis 2 http://nlp.stanford.edu/projects/socialsent Figure 2: Terrific becomes more positive over the last 150 years. Sentiment values and bootstrapped confidences were computed using SENTPROP on historical data (Section 6).
Related work
Our work builds upon a wealth of previous research on inducing sentiment lexicons, along two threads: Corpus-based approaches use seed words and patterns in unlabeled corpora to induce domainspecific lexicons. These patterns may rely on syntactic structures (Hatzivassiloglou and McKeown, 1997;Thelen and Riloff, 2002;Widdows and Dorow, 2002;Jijkoun et al., 2010;Rooth et al., 1999), which can be domain-specific and brittle (e.g., in social media lacking usual grammatical structures). Other models rely on general cooccurrence (Turney and Littman, 2003;Riloff and Shepherd, 1997;Igo and Riloff, 2009). Often corpus-based methods exploit distant-supervision signals (e.g., review scores, emoticons) specific to certain domains (Asghar et al., 2015;Blair-Goldensohn et al., 2008;Bravo-Marquez et al., 2015;Choi and Cardie, 2009;Severyn and Moschitti, 2015;Speriosu et al., 2011;Tang et al., 2014). An effective corpus-based approach that does not require distant-supervision-which we adapt here-is to construct lexical graphs using word cooccurrences and then to perform some form of label propagation over these graphs (Huang et al., 2014;Velikovich et al., 2010). Recent work has also learned transformations of word-vector representations in order to induce sentiment lexicons (Rothe et al., 2016). Fast et al. (2016) combine word vectors with crowdsourcing to produce domain-general topic lexicons.
Dictionary-based approaches use hand-curated lexical resources-usually WordNet (Fellbaum, 1998)-in order to propagate sentiment from seed labels (Esuli and Sebastiani, 2006;Hu and Liu, 2004;Kamps et al., 2004;Rao and Takamura et al., 2005;Tai and Kao, 2013). There is an implicit consensus that dictionary-based approaches will generate higher-quality lexicons, due to their use of these clean, hand-curated resources; however, they are not applicable in domains lacking such a resource (e.g., most historical texts).
Most previous work seeks to enrich or enlarge existing lexicons (San Vicente et al., 2014;Velikovich et al., 2010;Qiu et al., 2009), emphasizing recall over precision. This recall-oriented approach is motivated by the need for massive polarity lexicons in tasks like web-advertising (Velikovich et al., 2010). In contrast to these previous efforts, the goal of this work is to induce high-quality lexicons that are accurate to a specific social context.
Algorithmically, our approach is inspired by Velikovich et al. (2010). We extend this work by incorporating high-quality word vector embeddings, a new graph construction approach, an alternative label propagation algorithm, and a bootstrapping method to obtain confidences. Together these improvements, especially the high-quality word vectors, allow our corpus-based method to even outperform the state-of-the-art dictionary-based approach.
Framework
Our framework, SENTPROP, is designed to meet four key desiderata: 1. Resource-light: Accurate performance without massive corpora or hand-curated resources.
Interpretable:
Uses small seed sets of "paradigm" words to maintain interpretability and avoid ambiguity in sentiment values. 3. Robust: Bootstrap-sampled standard deviations provide a measure of confidence. 4. Out-of-the-box: Does not rely on signals that are specific to only certain domains.
SENTPROP involves two steps: constructing a lexical graph from unlabeled corpora and propagating sentiment labels over this graph.
Constructing a lexical graph
Lexical graphs are constructed from distributional word embeddings learned on unlabeled corpora.
Distributional word embeddings
The first step in our approach is building highquality semantic representations for words using a vector space model (VSM). We embed each word w i ∈ V as a vector w i that captures information about its co-occurrence statistics with other words (Landauer and Dumais, 1997;Turney and Pantel, 2010). This VSM approach has a long history in NLP and has been highly successful in recent applications (see Levy et al., 2015 for a survey).
When recreating known lexicons, we used a number of publicly available embeddings (Section 4).
In the cases where we learned embeddings ourselves, we employed an SVD-based method to construct the word-vectors. First, we construct a matrix M P P M I ∈ R |V|×|V| with entries given by wherep denotes smoothed empirical probabilities of word (co-)occurrences within fixed-size sliding windows of text. 3 M P P M I i,j is equal to a smoothed variant of the positive pointwise mutual information between words w i and w j (Levy et al., 2015). Next, we compute M P P M I = UΣV , the truncated singular value decomposition of M P P M I . The vector embedding for word w i is then given by Excluding the singular value weights, Σ, has been shown known to dramatically improve embedding qualities (Turney and Pantel, 2010; Bullinaria and Levy, 2012). Following standard practices, we learn embeddings of dimension 300. We found that this SVD method significantly outperformed word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) on the domainspecific datasets we examined. Our results echo the findings of Levy et al. (2015) that the SVD approach performs best on rare word similarity tasks.
Defining the graph edges
Given a set of word embeddings, a weighted lexical graph is constructed by connecting each word with its nearest k neighbors within the semantic space (according to cosine-similarity). The weights of the edges are set as
Propagating polarities from a seed set
Once a weighted lexical graph is constructed, we propagate sentiment labels over this graph using a random walk method (Zhou et al., 2004). A word's polarity score for a seed set is proportional to the probability of a random walk from the seed set hitting that word. Let p ∈ R |V| be a vector of word-sentiment scores constructed using seed set S (e.g., ten negative words); p is initialized to have 1 |V| in all entries. And let E be the matrix of edge weights given by equation (3). First, we construct a symmetric transition matrix from E by computing T = D 1 2 ED 1 2 , where D is a matrix with the column sums of E on the diagonal. Next, using T we iteratively update p until numerical convergence: where s is a vector with values set to 1 |S| in the entries corresponding to the seed set S and zeros elsewhere. The β term controls the extent to which the algorithm favors local consistency (similar labels for neighbors) vs. global consistency (correct labels on seed words), with lower βs emphasizing the latter.
To obtain a final polarity score for a word w i , we run the walk using both positive and negative seed sets, obtaining positive (p P (w i )) and negative (p N (w i )) label scores. We then combine these values into a positive-polarity score asp P (w i ) = p P (w i ) p P (w i )+p N (w i ) and standardize the final scores to have zero mean and unit variance (within a corpus).
Many variants of this random walk approach and related label propagation techniques exist in the literature (Zhou et al., 2004;Zhu and Ghahramani, 2002;Zhu et al., 2003;Velikovich et al., 2010;San Vicente et al., 2014). We experimented with a number of these approaches and found little difference between their performance, so we present only this random walk approach here. The SOCIALSENT package contains a full suite of these methods.
Bootstrap-sampling for robustness
Propagated sentiment scores are inevitably influenced by the seed set, and it is important for researchers to know the extent to which polarity values are simply the result of corpus artifacts that are correlated with these seeds words. We address this issue by using a bootstrap-sampling approach to obtain confidence regions over our sentiment scores. We bootstrap by running our propagation over B random equally-sized subsets of the positive and negative seed sets. Computing the standard deviation of the bootstrap-sampled polarity scores provides a measure of confidence and allows the researcher to evaluate the robustness of the assigned polarities. We set B = 50 and used 7 words per random subset (full seed sets are size 10; see Table 1).
Recreating known lexicons
We validate our approach by recreating known sentiment lexicons in the three domains: Standard English, Twitter, and Finance. Table 1 lists the seed words used in each domain.
Standard English: To facilitate comparison with previous work, we focus on the well-known General Inquirer lexicon (Stone et al., 1966). We also use the continuous valence (i.e., polarity) scores collected by Warriner et al. (2013) in order to evaluate the fine-grained performance of our framework. We test our framework's performance using two different embeddings: off-the-shelf Google news embeddings constructed from 10 11 tokens 4 and embeddings we constructed from the 2000s decade of the Corpus of Historical American English (COHA), which contains ∼2 × 10 7 words in each decade, from 1850 to 2000 (Davies, 2010). The COHA corpus allows us to test how the algorithms deal with this smaller historical corpus, which is important since we will use the COHA corpus to infer historical sentiment lexicons (Section 6).
Finance: Previous work found that general purpose sentiment lexicons performed very poorly on financial text (Loughran and McDonald, 2011), so a finance-specific sentiment lexicon (containing binary labels) was hand-constructed for this domain (ibid.). To test against this lexicon, we constructed embeddings using a dataset of ∼2×10 7 tokens from financial 8K documents (Lee et al., 2014).
Twitter: Numerous works attempt to induce Twitter-specific sentiment lexicons using supervised approaches and features unique to that domain (e.g., follower graphs; Speriosu et al., 2011). Here, we emphasize that we can induce an accurate lexicon using a simple domain-independent and resourcelight approach, with the implication that lexicons can easily be induced for related social media domains without resorting to complex supervised frameworks. We evaluate our approach using the test set from the 2015 SemEval task 10E competition (Rosenthal et al., 2015), and we use the embeddings constructed by Rothe et al. (2016). 5
Baselines and state-of-the-art comparisons
We compared SENTPROP against standard baselines and state-of-the-art approaches. The PMI baseline of Turney and Littman (2003) computes the pointwise mutual information between the seeds and the targets without using propagation. The CountVec baseline, corresponding to the method in Velikovich et al. (2010), is similar to our method but uses an alternative propagation approach and raw cooccurrence vectors instead of learned embeddings. Both these methods require raw corpora, so they function as baselines in cases where we do not use off-the-shelf embeddings. We also compare against DENSIFIER, a state-of-the-art method which learns orthogonal transformations of word vectors instead of propagating labels (Rothe et al., 2016). Lastly, on standard English we compare against a state-of-theart WordNet-based method, which performs label propagation over a WordNet-derived graph (San Vicente et al., 2014). Several variant baselines, all of which SENTPROP outperforms, are omitted for brevity (e.g., using word-vector cosines in place of PMI in Turney and Littman (2003)'s framework). Code for replicating all these variants is available in the SOCIALSENT package.
Evaluation setup
We evaluate the approaches according to (i) their binary classification accuracy (ignoring the neutral class, as is common in previous work), (ii) ternary classification performance (positive vs. neutral vs. negative) 6 , and (iii) Kendall τ rank-correlation with continuous human-annotated polarity scores.
For all methods in the ternary-classification condition, we use the class-mass normalization method (Zhu et al., 2003) to label words as positive, neutral, or negative. This method assumes knowledge of the label distribution-i.e., how many positive/negative vs. neutral words there are-and simply assigns labels to best match this distribution.
Evaluation results
Tables 2a-2d summarize the performance of our framework along with baselines and other state-ofthe-art approaches. Our framework significantly outperforms the baselines on all tasks, outperforms a state-of-the-art approach that uses WordNet on standard English (Table 2a), and is competitive with Sentiment140 on Twitter (Table 2b), a distantlysupervised approach that uses signals from emoticons (Mohammad and Turney, 2010). DENSIFIER also performs extremely well, outperforming SENT-PROP when off-the-shelf embeddings are used (Tables 2a and 2b). However, SENTPROP significantly outperforms all other approaches when using the domain-specific embeddings (Tables 2c and 2d).
Overall, SENTPROP is competitive with the stateof-the-art across all conditions and, unlike previous approaches, it is able to maintain high accuracy even when modestly-sized domain-specific cor- pora are used. We found that the baseline method of Velikovich et al. (2010), which our method is closely related to, performed very poorly with these domain-specific corpora. This indicates that using high-quality word-vector embeddings can have a drastic impact on performance. However, it is worth noting that Velikovich et al. (2010)'s method was designed for high recall with massive corpora, so its poor performance in our regime is not surprising.
Inducing community-specific lexicons
As a first large-scale study, we investigate how sentiment depends on the social context in which a word is used. It is well known that there is substantial sociolinguistic variation between different communities, whether these communities are defined geographically (Trudgill, 1974) or via underlying sociocultural differences (Labov, 2006). However, no previous work has systematically investigated community-specific variation in word sentiment at a large scale. Yang and Eisenstein (2015) exploit social network structures in Twitter to infer a small number (1-10) of communities and analyzed sentiment variation via a supervised framework. Our analysis extends this line of work by analyzing the sentiment across hundreds of user-defined commu-nities using only unlabeled corpora and a small set of "paradigm" seed words (the Twitter seed words outlined in Table 1). In our study, we induced sentiment lexicons for the top-250 (by comment-count) subreddits from the social media forum Reddit. 7 We used all the 2014 comment data to induce the lexicons, with words lower cased and comments from bots and deleted users removed. 8 Sentiment was induced for the top-5000 non-stop words in each subreddit (again, by comment-frequency).
Examining the lexicons
Analysis of the learned lexicons reveals the extent to which sentiment can differ across communities. Figure 3 highlights some words with opposing sentiment in two communities: in r/TwoXChromosomes (r/TwoX), a community dedicated to female perspectives and gender issues, the words crazy and insane have negative polarity, which is not true in the r/sports community, and, vice-versa, words like soft are positive in r/TwoX but negative in r/sports.
To get a sense of how much sentiment differs across communities in general, we selected a random subset of 1000 community pairs and examined the correlation in their sentiment values for highly sentiment-bearing words (Figure 4). We see that the distribution is noticeably skewed, with many community pairs having highly uncorrelated sentiment values. The 1000 random pairs were selected such that each member of the pair overlapped in at least half of their top-5000 word vocabulary. We then computed the correlation between the sentiments in these community-pairs. Since sentiment is noisy and relatively uninteresting for neutral words, we compute τ 25% , the Kendall-τ correlation over the top-25% most sentiment bearing words shared between the two communities.
Analysis of individual pairs reveals some interesting insights about sentiment and inter-community dynamics. For example, we found that the sentiment correlation between r/TwoX and r/TheRedPill (τ 25% = 0.58), two communities that hold conflicting views and often attack each other 9 , was actually higher than the sentiment correlation between r/TwoX and r/sports (τ 25% = 0.41), two communities that are entirely unrelated. This result suggests that conflicting communities may have more similar sentiment in their language compared to communities that are entirely unrelated.
Inducing diachronic sentiment lexicons
Sentiment also depends on the historical time-period in which a word is used. To investigate this dependency, we use our framework to analyze how word polarities have shifted over the last 150 years. The phenomena of amelioration (words becoming more positive) and pejoration (words becoming more negative) are well-discussed in the linguistic literature (Traugott and Dasher, 2001); however, no comprehensive polarity lexicons exist for historical data (Cook and Stevenson, 2010). Such lexicons are crucial to the growing body of work on NLP analyses of historical text (Piotrowski, 2012) which are informing diachronic linguistics (Hamilton et al., 2016), the digital humanities (Muralidharan and Hearst, 2012), and history (Hendrickx et al., 2011).
The only previous work on automatically inducing historical sentiment lexicons is Cook and Stevenson (2010); they use the PMI method and (a) Lean becomes more positive. Lean underwent amelioration, becoming more similar to muscular and less similar to weak.
(b) Pathetic becomes more negative. Pathetic underwent pejoration, becoming similar to weak and less similar to passionate. a full modern sentiment lexicon as their seed set, which problematically assumes that all these words have not changed in sentiment. In contrast, we use a small seed set of words that were manually selected based upon having strong and stable sentiment over the last 150 years (Table 1; confirmed via historical entries in the Oxford English Dictionary).
Examining the lexicons
We constructed lexicons from COHA, since it was carefully constructed to be genre balanced (e.g., compared to the Google N-Grams; Pechenick et al., 2015). We built lexicons for all adjectives with counts above 100 in a given decade and also for the top-5000 non-stop words within each year. In both these cases we found that >5% of sentiment-bearing (positive/negative) words completely switched polarity during this 150-year time-period and >25% of all words changed their sentiment label (including switches to/from neutral). 10 The prevalence of full polarity switches highlights the importance of historical sentiment lexicons for work on diachronic linguistics and cultural change. Figure 5a shows an example amelioration detected by this method: the word lean lost its negative connotations associated with "weakness" and instead became positively associated with concepts like "muscularity" and "fitness". Figure 5b shows an example pejoration, where pathetic, which used to be more synonymous with passionate, gained stronger negative associations with the concepts of "weakness" and "inadequacy" (Simpson et al., 1989). In both these cases, semantic similarities 10 We defined the thresholds for polar vs. neutral using the class-mass normalization method and compared scores averaged over 1850-1880 to those averaged over 1970-2000. computed using our learned historical word vectors were used to contextualize the shifts.
Some other well-known examples of sentiment changes captured by our framework include the semantic bleaching of sorry, which shifted from negative and serious ("he was in a sorry state") to uses as a neutral discourse marker ("sorry about that") and worldly, which used to have negative connotations related to materialism and religious impurity ("sinful worldly pursuits") but now is frequently used to indicate sophistication ("a cultured, worldly woman") (Simpson et al., 1989). Our hope is that the full lexicons released with this work will spur further examinations of such historical shifts in sentiment, while also facilitating CSS applications that require sentiment ratings for historical text.
Conclusion
SENTPROP allows researchers to easily induce robust and accurate sentiment lexicons that are relevant to their particular domain of study. Such lexicons are crucial to CSS research, as evidenced by our two studies showing that sentiment depends strongly on both social and historical context. The sentiment lexicons induced by SENTPROP are not perfect, which is reflected in the uncertainty associated with our bootstrap-sampled estimates. However, we believe that these user-constructed, domain-specific lexicons, which quantify uncertainty, provide a more principled foundation for CSS research compared to domain-general sentiment lexicons that contain unknown biases. In the future our method could also be integrated with supervised domain-adaption (e.g., Yang and Eisenstein, 2015) to further improve these domain-specific results. | 5,364.8 | 2016-06-09T00:00:00.000 | [
"Computer Science"
] |
The pincer movement of The Idea of a Social Science: Winch, Collingwood, and philosophy as a human science
This article argues that, in order to understand Peter Winch's view of philosophy, it is profitable to read him together with R. G. Collingwood's philosophy of history. Collingwood was both an important source for Winch and a thinker engaged in a closely parallel philosophical pursuit. Collingwood and Winch shared the view that philosophy is an effort to understand the various ways in which human beings make reality intelligible. For both, this called for rapprochement between philosophy and the humanities. Like Collingwood, Winch wanted to reformulate philosophy as a form of human science. Both thinkers advanced a conception of logic where the validity of judgements, propositions, and thought are dependent on their function as instruments in human dialogue. In their treatments of logic, Winch and Collingwood were fleshing out their idea that questions concerning human meaningful behaviour also tie back to the question of what philosophical analysis is about. There is a deep connection between two main issues in both Collingwood's and Winch's writings: on the one hand, the need for ‘internal’ understanding of how human beings relate to reality, and on the other hand, their critique of the idea of logic as a self-sufficient system, external to historically embedded forms of life. At the core of their shared vision there was a comprehensive critique of metaphysical realism.
Introduction
In the beginning of The Idea of a Social Science (ISS), Peter Winch announced a 'pincer movement', aiming to show that 'any worthwhile study of society must be philosophical in character and any worthwhile philosophy must be concerned with the nature of human society' (Winch, 1990(Winch, [1958]]: 3).The first part of Winch's 'movement' was one that his readers immediately noted and debated.Winch argued that social studies were more intimately related to conceptual inquiry than was usually acknowledged.But the argument cuts both ways.In this article we wish to focus on his idea that philosophy, too, needs to reconsider its relation with the human sciences; that a philosophy that ignores the socially and culturally embedded character of thinking is deficient.After ISS, Winch mostly worked on traditional core areas of philosophy, not on the philosophy of the human or social sciences.However, we would maintain that, in much of that later work, he was spelling out the implications of his pincer movement to the practice of philosophy.Conversely, attending to his general take on philosophy helps us to understand the aims and methods of ISS.
Secondly, we hope to highlight some crucial similarities between Winch and R. G. Collingwood.Collingwood was both an important source for Winch and a thinker engaged in what in many ways was a closely parallel philosophical pursuit.These points of connection were not noted in the initial reception of ISSthe reception that, until today, has defined the place of ISS in philosophical and methodological debate.Collingwood was seen as an outsider, not an obvious point of reference in late 20th-century analytic philosophy.Our current situation is different.On the one hand, thanks to new scholarship partly based on manuscript sources published only from the 1990s onwards, a more nuanced understanding of Collingwood is available.On the other hand, we now have access to Winch's later work, both published and unpublished.The overarching similarity between Collingwood and Winch was their view on philosophy as conceptual analysis.Philosophy makes explicit what lies implicit in thinking as its conceptual conditions.As Winch put it, philosophy is the study of 'man's [sic] relation to reality', of 'what difference this will make to his life' in various realms of inquiry (Winch, 1990(Winch, [1958]] : 9).It is neither an investigation of realityof what kinds of entities exist in the worldnor a mere empirical description of how people think of reality.Winch also formulates the task of philosophy in the 'Kantian' question, 'How is such an understanding (or indeed any understanding) possible?' (ibid.: 22).For Winch as well as Collingwood, to describe human thinking already faces us with questions about how to distinguish between good and bad thinking, because that distinction in itself is part of the activity of thinking.But thinking belongs to a context of action or inquiry, and its implications can been seen only in light of that.
In the philosophy of the human sciences, their shared approach meant outlining the kind of epistemic interest that characterised explanations of action and distinguished them from explanations of natural events.In the philosophy of logic, it involved attention to linguistic meaning and its connections with practices of argumentation.Both thinkers engaged in criticism of a formalist take on logic (dubbed 'Aristotelian Logic') in favour of a context-sensitive approach (dubbed 'Socratic Logic').An important implication of their view on philosophy as conceptual analysis was their rejection of ontology as a meaningful pursuitontology being understood as the idea of a context-free 'description of the world and what it's like and what is in it' (Winch, 1995: 212). 1 More specifically, it implied criticism of metaphysical realism and the context-free epistemology associated with it.
For these reasons, it will be fruitful to read Winch and Collingwood together.The connection between the critique of formalist logic and a critique of realism is especially explicit in Collingwood.
Winch and the (un)recognised influence of Collingwood
Winch was very explicit about his indebtedness to Wittgenstein, to the extent that, in the ensuing debates on ISS, readers often took him simply to be presenting what would have been Wittgenstein's views on the themes he was addressing (see Pleasants, 1999: 33-4).In contrast, while Collingwood was a frequent reference in Winch's work, systematic attempts to place that work in dialogue with Collingwood's philosophy are almost completely lacking. 2 The title of Winch's first book is an unmistakable allusion to R. G. Collingwood and The Idea of History (1994History ( [1946]]).The title was a suggestion by R. F. Holland, the series editor. 3 Winch cites Collingwood, 'that under-estimated philosopher' (Winch, 1990(Winch, [1958]]: 90), for support of some of his key arguments (ibid.: 90-1, 103, 113-14, 126, 129, 131-3), leading to 'a new appreciation of Collingwood's conception of all human history as the history of thought ' (ibid.: 131).In addition to The Idea of History, The Principles of Art (1938) is among the Collingwood references in ISS, presenting a critique of the idea of magic as 'pseudo-science' (Collingwood, 1938: 58-61).In his other early work, Winch cites Principles of Art in 'Understanding a Primitive Society' in 1964 (Winch, 1972: 8-49).Collingwood probably read Evans-Pritchard's book on the Azande already at the manuscript stage (James, 2005: lxx).In 'Human Nature' (published in 1971), Winch cites Collingwood's Autobiography, a central presentation of his philosophical ideas (Winch, 1972: 85-6).
Manuscript material in the Winch Nachlaß at King's College London contains early notes by Winch on Collingwood, including a detailed resume of Part V ('Epilegomena') of The Idea of History. 4 The notes are undated, but appear to be written before or right after the completion of ISS.In his late years, Winch re-engaged with questions about the status of logic.That work remains partially unpublished.It does not include references to Collingwood, but his theme, the critique of 'Aristotelian Logic', is similar to Collingwood's critique of 'Aristotelian Logic'.
On some points, however, it appears that Winch was not quite aware of how close he was to Collingwood.In ISS, Winch repeats some of the then current criticism levelled at Collingwood's idea of re-enactment (Winch, 1990(Winch, [1958]]: 132).He points out that complete re-enactment of past states of mind is impossible.On this score, Winch buys into the then current misconception of re-enactment as psychological Einfühlung and not, as Collingwood would have it, a critical reconstruction of past reasoning along with the historically contingent conditions of its meaningfulness (D'Oro and Connelly, 2015).Our present reconstruction (or re-enactment) of Winch's reasoning goes beyond his self-understanding, in the spirit of Collingwood's idea of re-enactment as critical engagement with the historical material.
Winch's dominant take on Collingwood's philosophy of history, however, was that Collingwood was highlighting the form of inquiry characteristic of explanations of action.Later Collingwood scholarship, partly based on previously unpublished material such as the additions included in the new edition of The Idea of Historyhas laid to rest some moot questions of interpretation.Today it is obvious that Collingwood's idea of re-enactment was a description of the epistemic form of historical explanation, not a methodology based on the psychology of Einfühlung (van der Dussen, 1994: xxviii).This means that Winch was correct in his general perception of Collingwood, and that the parallel between Winch and Collingwood is closer than was understood when ISS was originally published.
The epistemic interest in explaining action
The main task for Winch in ISS was similar to the one Collingwood undertook to solve in The Idea of History: to describe the epistemic interest involved in explaining action.The idea was not so much to propose a new methodology (for instance, qualitative as opposed to quantitative) as to explicate a form of understanding already implicit in the human sciences (see Ahlskog, 2022).A difference was that, for Collingwood, historians were already more or less practising the forms of inquiry he believed to be essential in the research of action; the task was simply to make them explicit and to defend them.Winch believed, in contrast, that the social sciences were going seriously astray because they had misunderstood key assumptions implicit in their research questions.
The most obvious aim for both writers was to emphasise the difference between explaining action in terms of reasons and explaining natural occurrences in terms of causes.Both authors agreed that the word 'cause' can also be applied in explanations of action, but as Winch puts it, the 'form' of the explanation was different (Winch, 1990(Winch, [1958]]: xii; cf.Collingwood, 1998Collingwood, [1940]]: 285).The specific point about the difference between reasons and causes became a focal point of the subsequent debate on ISS.
A central question in that debate was whether reasons are a form of causal influence on behaviour, as Donald Davidson maintained in his classic article 'Actions, Reasons, and Causes' (Davidson, 1963).Davidson's question concerned the ontological status of reasons, i.e. whether reasons were ontologically distinct from causes or, instead, ultimately a species of causes.In analytical philosophy of mind and action, the position was subsequently entrenched that actions allow of causal explanation (D'Oro, 2012).The opposing position, sometimes described as rationalist, was represented by writers such as G. H. von Wright, G. E. M. Anscombe, and William Dray.For them, to ask for the reasons of an action was not to ask for its causally efficacious mental antecedents.Rather, it involved, roughly, looking for a justification of some kind.We must assume an internal (or 'logical') relation between action and justification.They maintained that the connection between actions and reasons was best captured in the 'practical syllogism' formulated by Anscombe (1957: §33), where a combination of beliefs and desires logically entails action as a practical conclusion.
It is true that Winch and Collingwood were closer to the rationalist position, as they emphasised the role of reasons in explaining action.However, in an important respect, their chief concern was quite elsewhere.The reasons versus causes debate focused on the question what brings about the behaviour we understand as 'action'.Is it legitimate to say that reasons bring about action, the same way as causes bring about events in the natural world?The question that interested Winch and Collingwood was, instead, what it means to understand human behaviour as action.To say that reasons 'lie behind' action is to say that the right way to understand action is to understand its reasons.The thing to look for is intelligibility, not causal antecedents, whatever they are.The epistemic interest involved in the human sciences was to understand behaviour as the expression of meaningor as Collingwood put it, as the expression of 'thought' (Collingwood, 1994(Collingwood, [1946]]: 217; cf.Winch, 1990Winch, [1958]: 131)]: 131).This strand of their thinking was not completely appreciated in the debate.
Winch's position came under scrutiny in subsequent debates on social ontology.Both critics and sympathisers have tended to understand the question he addressed as one of ontologyi.e. a question of the ways in which 'meanings' figure as building blocks of social reality.Importantly, this is also an idea that underpins relativist interpretations of Winch.In relativism of that kind, social worlds are seen as self-enclosed entities, with languages of their own, with set meanings, unavailable to scrutiny from 'outside' (see Ahlskog and Lagerspetz, 2015;Gunnell, 2016;Hutchinson, Read, and Sharrock, 2008).
It is common, especially among naturalists, to construe Winch as relying on a hypothesis about meanings as hidden mental entities.For example, Paul Roth derides, with obvious reference to Winch, philosophers of social science 'who believe that there exist conceptual models lurking in mental space awaiting discovery' (Roth, 1987: 138).He believes the assumption of definite meaning entities is a hypothesis completely undermined by Quine's argument for the indeterminacy of translation (see Ahlskog, 2022).
The emphasis on ontology, or on what kinds of entity exist in the social world, is quite alien to the concerns Winch was addressing.In the opening chapter of ISS, he indeed singles out the concept of 'reality' as a central concern for philosophy.However, philosophy is not about reality in itself, but about 'the force of the concept of reality': the difference that the concept of reality makes in human life (Winch, 1990(Winch, [1958]]: 9; emphasis in original).Nigel Pleasants (1999: 39) misrepresents the relevant passage as stating that Winch 'seeks an account of being', that is, an 'ontology'.He further maintains that Winch, later in the book, supports a form of realist ontology of institutions, which according to Pleasants stands in contrast with Winch's 'idealist' take on epistemology.As proof, Pleasants cites Winch's opposition to Popper's methodological individualism (Ahlskog, 2022: 161-2;Pleasants, 1999: 45-6;Winch, 1990Winch, [1958]]: 127-8).For Winch, institutions have a real presence in shaping the actions and thinking of individual agents.That is the view that Pleasants identifies as Winch's 'realist' ontology of institutions.
More recently, Alice Crary construes Winch as proposing 'a distinctive social ontology … on which objective features of the social world are irreducibly ethical' (Crary, 2018: 31;emphasis in original;cf. Tsilipakos, 2018: 82).Winch, on Crary's reading, tells us that values are just (or almost?) as real as are physical objects (Crary, 2018: 36).However, it seems to us rather that Winch was not making an attempt to determine what kinds of thing generally exist in the social world (or elsewhere).Instead, he asks: given specific needs of social explanation, what is the reality that needs to be taken into account?In other words, he defines reasons and meanings not as social or mental entities giving rise to behaviour, but as that which sociologists and historians must look for in order to render behaviour intelligible as action.
Against rationalism
The causalist take on social explanation meant that to explain a particular case of human behaviour (or indeed any event, natural, mental, or social) was to show that it was an instance of a general regularity.This view was described by Hempel as 'the Covering Law Model'.As G. H. von Wright (1971) points out, explanation in terms of the Covering Law Model meant that explanation and prediction logically have the same structure.To explain is to show that the explanandum could have been predicted, because both explanation and prediction rely on one's knowledge of general laws together with prevailing initial conditions.This general view, drawing on Hume and exemplified by J. S. Mill, was of course one of the targets of the criticism of ISS.However, this does not imply that Winch chose instead to subscribe to the rationalist view.As von Wright also argued, the rationalist viewentailing the reconstruction of behaviour in terms of a practical syllogismhas the same structure as the Covering Law Model (von Wright, 1971: Chapter 1, Section 9).In both cases, to explain is to bring the explanandum under a general law or rulewhich once more means that the resulting behaviour could have been predicted.On the rationalist reconstruction model, provided that the agent acts consistently, his or her action necessarily follows from the considerations included in the practical syllogism.
However, to think that Winch adhered to the rationalist reconstruction model is to ignore one of the central arguments of his book.Winch pointed out that any account of a relationcausal or conceptualholding between two individual phenomena presupposes criteria of classification (judgements of identity) where individual phenomena are classified as either the same or not the same.To claim that a logical entailment relation holds between specific cases of, say, desire, belief, and action is to presuppose that one has understood the relevant judgements of identity.The further point here is that those judgements are embedded in the specific intellectual culture and cannot be assumed a priori as a rationalist armchair exercise.As Lars Hertzberg puts it, 'Rationalism to the contrary, then, a person's practical reasoning cannot be used as a clue to understanding his way of life, for we must already have an understanding of his life if we are to understand his reasons' (Hertzberg, 1980: 154). 5 The question of what, for a given person, counts as an 'appropriate' response to a situation can be solved only by attending to how this specific agent makes sense of the situation (see Winch, 1987: 30-1; see also Collingwood, 1994Collingwood, [1946]]: 475, 494-5).In many cases, several responses may count as appropriate; but to recognise their appropriateness requires, in any case, engaging with culturally specific ideas of what counts as what (Winch, 1990(Winch, [1958]]: 86-8).
Somewhat ironically, Ernest Gellner, Winch's would-be sociological nemesis, notes the implication that any analysis of conceptual connections in social life must take account of the facts of their role in that life.However, Gellner takes that to be an argument against Winch's views, while it should rather count as one of the central points Winch was pressing.Gellner writes: The Wittgensteinian idea of the correlativeness of activities (or institutions) and conceptseven assuming that Wittgenstein had solved the problems involvedcuts both ways: it implies not merely that in order to understand outer facts we have to know the ideas which give them life, but also that, in order to understand those ideas, we have to look at the outer goings-on which give them substance.And this is precisely what very many social scientists are doing anyway.(Gellner, 2003(Gellner, [1973]]: 49) The culturally or historically embedded character of logic motivated both Collingwood and Winch to consider the status of logic as part of thinking.That question was not fully explored in ISS or in The Idea of History, but it is addressed in their other work.
On the one hand, logic is applicable to all thinking, just because 'thinking', by definition, implies requirements of consistency.Collingwood put this by saying that the study of thought is a 'criteriological' activity (1999: 84-5, 108).When Winch describes meaningful behaviour as rule governed, his point is, similarly, that meaningful behaviour implies the application of a 'criterion' for right and wrong ways of doing things (Winch, 1990(Winch, [1958]]: 58).On the other hand, what 'consistency' amounts to in a given case can be seen only in the actual contextseeing in what ways specific connections obtain between different concepts.
Both Collingwood and Winch focused on questions about the status of logic as a representation of thinking, not on specific questions about formalisation.Their inquiries are therefore largely independent of any subsequent developments of formal logic.They asked what is implied when formalisations are presented as models to which thinking must adhere.Winch put the central idea this way: The notion of logic is the notion of what is and what is not intelligible in human behaviour and it can be applied to anything men do.If it is abstracted from the ways in which men live it loses its significance as logic even as applied to relations between statements, for a statement is essentially something which men may make in the course of their lives.(Winch, 1972: 56) This implied a criticism of attempts to present context-free applications of logical rules, which Winch and Collingwood dubbed 'Aristotelian Logic'.
Critique of 'Aristotelian logic': Collingwood
In his Autobiography, Collingwood (2013Collingwood ( [1939]]) gives pride of place to a conception of a 'logic of question and answer', today often seen as one of his most central and original ideas.His suggestion is that truth, properly speaking, does not pertain to single statements but rather to 'complexes' of questions and answers.A statement, seen in isolation, might mean anything or nothing.What you might think of as two instances of the same statement would mean quite different things (and hence could have different truth-values) depending on how you identify the questions to which they are offered as answers.
Collingwood says that he got his idea of a logic of question and answer from his work in archaeology.Even today, history is (at least popularly) often described as collective memorya repository of (preferably sacred) testimonies, handed down from a succession of authoritative sources.However, archaeology cannot raise such pretensions, since it is dependent on non-verbal material.Potsherds are normally not placed on a site in order to bequeath a testimony to future generations.Archaeologists must state their own questions and find their answers through targeted inquiries, directed at objects not originally intended to convey any kind of narrative.According to Collingwood, the special nature of its material made archaeology the launching pad of a methodological revolution in the historical disciplines.
In Collingwood's The Idea of History, the idea of questions and answers enters in two ways.On the one hand, historical scholarship makes progress because historians pose new questions to the material, rather than just repeating testimonies.On the other hand, understanding the actions, testimonies, and statements of historical agents is itself based on viewing those actions, testimonies, and statements as answers to questions, that is, as responses to challenges the agents encountered at the time.
The opposite of this conception was 'Aristotelian logic'.Collingwood's polemic against it is included in The Principles of Art (1938: 259-69), An Autobiography (2013[1939]: 30-43), and The Idea of History (1994History ( [1946]]: 253-5).Collingwood's description of 'Aristotelian Logic' was his reconstruction of what he took to be the underlying, unifying assumption behind a number of superficially unrelated philosophical projects.It was the assumption that we can assess the validity of any actual case of reasoning by merely attending to its abstract form (Collingwood, 1938: 259;1994[1946]: 253-4).Collingwood was not specifically targeting Aristotle, but also contemporary propositional logic, which of course rather saw itself as making a significant departure from Aristotle.Nevertheless, he thought the overarching aim had not changed.It was 'to make language into a perfect vehicle for the expression of thought' (Collingwood, 1938: 259).A 'frightful offspring' of this 'propositional logic out of illiteracy' was the project of 'reducing' propositions 'to logical form', Collingwood said, 'ending, for the present, in the typographical jargon of Principia Mathematica ' (2013[1939]: 35-6, n. 1).The basic unit of the analysis was the 'thought' or 'proposition', the best linguistic expression of which would be the indicative sentence in the present tense.In the ideal case, according to 'Aristotelian' logic, 'there is, or ought to be' a 'one-one correspondence between propositions and indicative sentences, every indicative sentence expressing a proposition, and a proposition being defined as the unit of thought, or that which is true or false' (Collingwood, 2013(Collingwood, [1939]: 35-6;]: 35-6;cf. Russell, 1905).
In his alternative view, Collingwood emphasised that a sentence that stands alone is not a carrier of meaning or truth.In a statement that Winch also included in ISS, Collingwood compares a grammarian with a butcher, but preferably a butcher that, according to reported African custom, can 'cut out a steak of a living animal, and cook it for dinner, the animal not being much the worse' (Collingwood, 1938: 259;Winch, 1990Winch, [1958]]: 126).The grammarian should inspect the sentence without killing the context.
A proposition has a recognisable meaning only as an answer to some possible (but not necessarily explicit) question.Conversely, to inquire into the meaning of a proposition is to try to see the question to which it is meant to be an answer (Collingwood, 2013(Collingwood, [1939]]: 29-43).This is an 'historical' exercise in the sense that the investigator must get an idea of what questions had 'arisen' when the proposition was put forward.To judge the truth of a statement is also to judge the previous questions and answers that have led up to it.The same thing, of course, is true of questions as well: in order to understand the meaning of a question you can ask why anyone would raise itand you may question the meaningfulness of an interrogatory sentence if no context is provided.In other words, assertions and questions are equally context-dependent.One might think that this gives rise to an infinite regress: an assertion is dependent on a question, which is dependent on other questions and so on without end.But in practice that is not the case, for a question can be legitimately dismissed when it does not 'arise' (ibid.: 38).
Collingwood argued for what he called a 'Socratic' conception (1998[1940]: 156-8; 2005[1933] : 10-11; 2013[1939]: 35), in which the validity of judgements, propositions, and thought are dependent on their function as instruments in reasoning.Our assessment of the formal validity of a given instance of reasoning is parasitic upon our grasp of the sense of the relevant propositions and thoughts, as they unfold in lived experience: A poet will say at one time that his lady is the paragon of all the virtues; at another time that she has a heart as black as hell.At one time he will say that the world is a fine place; at another, that it is a dust-heap and a dunghill and a pestilent conglomeration of vapours.To the intellect, these are inconsistencies.A lady, we are told, cannot be a paragon of virtue at one time and as black as hell at another; therefore a person who says it must be making it up.…On the poet's behalf it may be replied, to some one who argues that a lady cannot be both adorably virtuous and repellently vicious, or that the world cannot be both a paradise and a dust-heap, that the arguer seems to know more about logic than he does about ladies, or about the world.(Collingwood, 1938: 287-8) One might believe Collingwood is just making the trivial point that you can mean words and sentences in different ways.The woman is virtuous in one sense and vicious in another.The relevant statements could, then, be disambiguated and formalised after all.Collingwood would no doubt have been happy to admit that such formalisations are available.The deeper point is that the very process that leads to the disambiguationwhere we assess the different ways 'virtuous' and 'vicious' might be understood in the contextis where the actual work is done.Once that assessment is complete, there is nothing left for the formalisation to achieve.There may be an additional point: it may be important, especially in the context of a poem, to keep the sense of paradox.The analysis, 'x is virtuous in one sense (V 1 ) and non-virtuous in another (∼V 2 )', will solve the contradiction but it will also dissolve the sense of bewilderment that the poet wanted to express.
Collingwood's 'revolution' in logic signals the detection of what Bernard Williams would call an 'impurity' at the heart of philosophical analysis.As Williams (1995: 148) writes, if philosophy is to have anything important to say, it must 'address a lot more than philosophy'.Meaningful philosophy cannot consist of the mere analysis of pure logical form.It will forever be contaminated by the fact that philosophical analysis depends on our pre-existent, independent understanding of our forms of knowledge and experience.It relies on an understanding external to any a priori self-definition of the proper aims and methods of philosophy (see Moran, 2016: 317).The dependence of philosophy on history does not simply mean that history provides raw material for logical analysis.Collingwood argues that the authority and sense of compulsion associated with logical argument is itself is a function of the historically specific reactions and responses of human beings in actual arguments.To make sense of logical relations, one must look at their instantiations in vivo, not in vitro.
Critique of 'Aristotelian logic': Winch
Winch was mainly interested in the question of the status of logic as a formal science of thinking.A central point of reference for him was Wittgenstein's Philosophical Investigations (1953: I, §81), where Wittgenstein raised the question in what way logic was 'a normative science'.On Winch's view (which he takes also to be Wittgenstein's), logic is normative because it presents an abstract object of comparison, applicable as a corrective to any reasoning.However, its application is context-dependent and often contestedwhich, in a sense, gives the formalisations of an actual argument a tentative and descriptive aspect.
Logic, as a science of thinking, is different from psychology because its subject matter is the distinction between valid and invalid arguments.In his Lectures on Logic, Winch says that a psychological survey might concern 'e.g., the extent to which tiredness, noise, emotional disturbances etc. may interfere with people's reasoning capacities'. 6 As far as the psychologist's empirical survey is concerned, the fact that people reason incorrectly is just as important as the fact that they reason correctly (in some contexts it may be more so).The distinction between valid and invalid arguments is something that a psychologist making the sort of investigation I have given as an example would have to understand, but it would be presupposed by his investigation; it would not itself be the subject-matter of his investigation. 7 Without the distinction between sound and unsound reasoning, the notions of reasoning (or reason, rationality, thinking) 'would have no sense'. 8 To say that someone was thinking or reasoning, and then reject as inadmissible the question whether he was reasoning soundly or not would be to be inconsistent.(Contrast, in this very important respect, the statement that somebody is, e.g., day-dreaming: here the question of the soundness or otherwise of his mental processes just does not arise). 9 A special trait in logic is that pointing out logical implication can be a corrective to someone's reasoning.From certain premises, a conclusion must follow.That in turn invites the question 'What is the authority of logic?', or 'What is the authority of reason?'.Winch repeatedly quotes Lewis Carroll's short story 'Achilles and the Tortoise' (Winch, 1990(Winch, [1958]]: 55-7). 10The story is a reductio ad absurdum, the point of which is to demonstrate that logic by itself cannot 'take' anybody 'by the throat' and force them to accept a conclusion.The 'authority' of logic derives from the fact that it depicts an instance of reasoning going on between people.You cannot reject the conclusion of an argument by pain of having to withdraw from further discussion. 11 In the 1990s, Winch used the expression 'Aristotelian logic' to describe the sort of formalist approach that he opposed (Lagerspetz, 2019).This is apparently a term he got not from Collingwood, but from Wittgenstein.Winch pointed out that 'Wittgenstein, so [Rush] Rhees says, used the term "Aristotelian logic" (with which he contrasted his own way of treating logic) to include Frege and Russell!'. 12 Winch wanted to apply and vindicate (later) Wittgenstein's idea of logic.
For the 'Aristotelian', logic is a formal system implemented on actual cases of reasoning, to be used as a standard for judging their correctness.Logic is a kind of machine, in one end of which you feed premises and out of the other come conclusions.The machine never makes mistakes, thus whenever your premises are true, the conclusions will also be true.In Wittgenstein's words, the machine is 'ideally rigid', for the possibilities of its future movements 'are already there in it in some mysterious way' (Wittgenstein, 1953: I, §194).This is not an empirical discovery but a conclusion from the abstract structure, which knows no wear and tear and no distortion of the parts.Moreover, you may feed the machine profitably with premises even if you have no idea of what they mean.The system itself guarantees the correct solutions.According to Winch's summary, the 'Aristotelian' conception of logic has 'two aspects': 1.A conversion of what we say into 'canonical form'which is supposed to express the real form of our thought, a form that is supposed to take its authority from its mirroring the structure of reality.2. The idea that this structure exercises a special sort of constraint on our thinking. 13 For instance, the proposition 'It is raining and it is not raining' can be formalised as 'p & ∼p'.The logical form (the typographical form) itself supposedly indicates that it is self-contradictory.Winch's response to this is that no two propositions per se contradict each other.Contradiction arises when people use propositions to say contradictory things.We find out whether two sentences contradict when we see how people react to themin other words, what they mean by them.For instance, 'It is raining and it is not raining' may involve logical contradiction, but it may also be a good description of Swansea weather.Of course, if someone says, 'It's raining and not raining' and obviously means (something by) it, our first reaction is typically not to dismiss it but to give it a reading that makes sense.The typographic image of 'p & ∼p', on the other hand, is of course neither self-contradictory nor coherent in itself, for it might just be a design for a wallpaper pattern. 14The expression 'form' papers over a certain ambiguity.On the one hand, 'form' is 'the geometry of the written pattern' (Carnap, 1963: 28), and logic may be seen as the art of converting one pattern to another according to rules of syntax.On the other hand, what makes the written pattern a logical form is its connection with thinkinga connection that, in any given case, must be demonstrated.
A consequence of this view is that you can never know whether two statements contradict each other unless you see how they are used.This may mean that you must see how you would use them.In his paper 'Darwin, Genesis and Contradiction', Winch takes issue with the idea that two statements 'as such' may be contradictory.Discussing the contrast between Genesis and evolutionary theory, he argues that you can settle the question of whether the two ideas about the origins of humankind contradict each other only by considering what kind of use you can think of giving to them -'whether, and if so how, [you] can live with both of them' (Winch, 1987: 137).Thus it is a mistake to think that the work of defining their relation 'has, as it were, already been done in a hidden realm of logic and simply needs to be revealed to view ' (ibid.: 139).
Questions about logical relations must be addressed from inside the various ways in which the world is intelligible to us.It is not as if we have, on the one hand, our ways of making sense of the world and then, separately, a cut-and-dried system called Logic.It is not as if the latter is a mind-independent ontological structure, somehow running through and regulating the former (cf.Diamond, 2003: 52).The project of weeding out incoherent or logically contradictory ideas must wait until we understand what, in the given case, would count as contradiction.
If logic is a formalisation of how human beings think it must, in other words, be logic-in-use.On this view, logic is an aspect of a social life where human beings agree or disagree.In ISS, Winch writes, 'Criteria of logic are not a direct gift of God, but arise out of, and are intelligible only in the context of, ways of living or modes of social life' (Winch, 1990(Winch, [1958]]: 100).
Winch contrasted 'Aristotelian' logic with what he called Socratic logic; in other words, the kind of dialectic that Socrates both exercises and describes in Plato's dialogues.The central feature of this dialectic is that it consists of interaction between specific people.The 'binding' character of an argument is in part dependent on the attitude of the person who puts it forwardwhether she believes it, for what reason she presents it and what kinds of consequences she is prepared to accept.This dialectic has a moral aspect, which Socrates often stressed.In the Gorgias, as Winch pointed out, at crucial junctures Socrates makes his interlocutors feel ashamed for the positions to which they have committed themselves. 15They are involved in self-contradiction, not because it is a contradiction per se to say disgraceful things, but because it is a disgrace for the speaker to do so.For instance, if Callicles took seriously his own insistence that there is no distinction between shameful and acceptable pleasures, he would come across as shamelessas a person of the kind that he does not want to be.Socrates was trying to call his interlocutors back to sobriety and the love of truth 'through the candor on which he insisted'. 16
Critique of metaphysical realism
By focusing on the context-dependence of linguistic meaning, Collingwood and Winch also produced an argument against ontology as a philosophical pursuit, especially targeting metaphysical realism.Metaphysical realism is the doctrine that the existence of objects (at least some of them) is independent of any process of our coming to know them.Hence, for the realist, it must be meaningful to state that an object exists without assuming anything about our possible relations with that object.The problem here is that the nature of our possible relation to the object, in part, specifies the kind of object it is.For Collingwood and Winch, talk of the 'reality' of any kind of 'thing'external objects, mathematical objects, God, the mind, the past, good and evil, and so onmust in each case be understood in relation to possible ways of relating to that 'thing'.Lacking such connections, ontology is left without a defined subject matter; in other words, it is not a meaningful inquiry at all.
For Collingwood, resistance to realism was a life-long project.He gives it a central place in his Autobiography, connecting it with his proposal of a logic of question and answer.His criticism included Cook Wilson at Oxford as well as G. E. Moore and Bertrand Russell at Cambridge (Collingwood, 2005: 245).His main objection in the Autobiography was that realism treats questions of truth simply as a matter of matching claims about reality with reality in itself.The idea of simple comparison ignores the crucial function of thinking as problem solving.
The Oxford 'realists' talked as if knowing were a simple 'intuiting' or a simple 'apprehending' of some 'reality'.At Cambridge, Moore expressed, as I thought, the same conception when he spoke of the 'transparency' of the act of knowing.…This doctrine, which was rendered plausible by choosing examples of knowledge statements like 'this is a red rose', 'my hand is resting on the table', where familiarity with the mental operations involved has bred not so much contempt as oblivion, was quite incompatible with what I had learned in my 'laboratory' of historical thought.(Collingwood, 2013(Collingwood, [1939]: 25-6) ]: 25-6) Realists thought of empirical knowledge in terms of a kind of presence; the best example being the experience of immediate eye contact or tactile contact, such as seeing a coloured patch or feeling one's own hand.In such 'immediate knowledge' (Moore, 1957(Moore, [1910-11]-11]: 123), no question about evidence or research method muddies the waters between the epistemic subject and the object of knowledge.
'My hand is resting on the table' was a kind of standard example, typical of philosophy seminar situations.In 1939, Moore used the example, slightly modified, in his (in)famous 'Proof of an External World' (Moore, 1959(Moore, [1939]]).Moore supposedly demonstrated the existence of objects external to the mind by producing his two hands.Collingwood's implicit objection in the Autobiography was that Moore had not specified a question to which his example was an answer.No one had doubted that Moore had two hands.
Winch was more lenient in his comment on Moore in ISS.Presumably drawing on interpretations by Norman Malcolm and Alice Ambrose (Ambrose, 1952(Ambrose, [1942]]; Malcolm, 1952Malcolm, [1942]]), Winch presented a rather benevolent gloss on what Moore had been doing: Moore was not making an experiment; he was reminding his audience of something, reminding them of the way in which the expression 'external object' is in fact used.And his reminder indicated that the issue in philosophy is not to prove or disprove the existence of a world of external objects but rather to elucidate the concept of externality.(Winch, 1990(Winch, [1958]]: 10) you would not be able to match the utterance confidently with anything at all.His utterance does not (at least yet, with the information that was given) amount to anything one would recognise as a claim and so we would not know what to make of it.
Winch argues that the realist is 'obsessed with a certain picture', 'a mythological account of p's truth-conditions'.This mythology, roughly, presupposes an 'ontological realm of facts', with which the proposition p is compared (Winch, 1987: 39).Winch's point is that the real question here should be: What sense can we make of such comparisons in different circumstances?In other words, how do we actually go about it when we make comparisons between a proposition and the facts?Answering the question requires us to discuss criteria of verification, not as an afterthought once the truth claim itself has been made and understood, but as a way of spelling out what the truth claim amounts to. 17 In other words, the idea of 'claims about reality' is context-sensitive.Not only is our thinking influenced by context, but the very intelligibility of raising questions about what we 'know', 'believe', 'assert', or 'must conclude' presupposes an idea of the ways in which the propositions are 'about' reality.Philosophical realism assumes that that question has already been solved, whereas Winch saw it as one of the chief questions facing the philosopher (Winch, 1987: 195).
Conclusion: Reading Winch and Collingwood together
In making his 'pincer movement', Winch argued that understanding meaning in human life is a concern equally crucial to philosophy and to the interpretive human sciences.In their treatments of logic and metaphysical realism, Winch and Collingwood were fleshing out their idea that questions concerning meaningful behaviour also tie back to the question what philosophy is about.
By connecting Winch's work with Collingwood's 'logic of question and answer', we also see more clearly that Winch's concern did not lie with ontology.When Winch wrote, 'What is real and unreal shows itself in the sense that language has' (1972: 12; emphasis in original), he was not advancing the idea that 'language' comes before 'experience', which has a been major issue in structuralist and post-structuralist philosophy of language.He was voicing the point he also made in ISS about the 'force' of reality claims: the meaning of a given reality claim is something we must grasp, so to speak, from the inside of the relevant question-and-answer complexes.Conversely, if we were to read Winch as proposing an ontology, he would naturally come across as a relativist and a linguistic idealist.He would indeed have been a relativist if he had proposed that a specific set of logical rules, embodied in language, exists for each individual culturerules that regulate all possible thinking, enclosing us in set views on what is real and unreal. 18What he did say, instead, was that not only the force of a given assertion, but also the rules that hold for its relations with other assertions, need to be considered in the concrete give-and-take of human communication.
In sum, there is a deep connection between two main issues in both Collingwood's and Winch's writings: on the one hand, the need for 'internal' understanding of how human beings relate to reality, and on the other hand, their critique of the idea of logic as a selfsufficient system, external to historically embedded forms of life.This clarifies the general perspective that informed the approach to social explanation proposed in ISS.
12. Winch, 'The Authority of Reason',p. 3;'Persuasion and Reason',p. 20,n. 10; unpublished MS on file with authors), p. 14 (9 October 1990).Wittgenstein makes comparisons between wallpaper patterns and rules of calculus several times in his 1939 lectures (Wittgenstein, 1989: 34, 46-7, 59-60, 70, 120, 171).However, the best formulation of the point above is from Rhees: 'Wittgenstein was constantly thinking about logical necessity, about the "force" of a logical conclusion, and about the difference between a calculation and any other transformation of signs according to rules, which might be a wallpaper pattern.…The conflict [e.g. of a proposition and its negative] comes from the usual uses which we make of them, and it is in this connexion that we rule out contradictions.The "impossibility" comes not from any characteristics of the symbols themselves' (Rhees, 1998: 80).15.Winch, 'Lectures on Moral Philosophy', p. 11 (20 September 1990). 16. Winch, 'Persuasion and Reason', p. 19; emphasis in original.17.In her essay 'Unfolding Truth', Cora Diamond describes Winch's position in the essay we have discussed as one of 'reject[ing] … a gap between the truth-conditions of a proposition and the conditions in which we are entitled to assert it', a position that 'commits him to a kind of verificationism' (Diamond, 2003: 44, 36).Diamond does not take verificationism lightly.In another paper, adopting a phrase from Putnam, she describes the form of verificationism described above as 'tired pseudo-Wittgensteinianism' (Diamond, 1999: 102).18.This is the view that Winch criticises, for instance, in 'Understanding a Primitive Society ' (1972: 22-3).
emphasis in original.13.Winch, 'The Authority of Reason', p. 4. 14.Winch made this point in his lectures: Peter Winch, 'Lectures on Moral Philosophy', University of Illinois at Urbana-Champaign, fall term 1990 (notes taken by Olli Lagerspetz; | 10,476.2 | 2023-05-04T00:00:00.000 | [
"Philosophy"
] |
Efficient neural codes can lead to spurious synchronization
Experimental and computational evidence shows that cognitive function requires an optimal balance between global integrative and local functionally specialized processes (Tononi et al., 1998). This balance can be described in terms of transient short-lived episodes of synchronized activity between different parts of the brain (Friston, 2000; Breakspear, 2002). Synchronization over multiple frequency bands is thought to subserve fundamental operations of cortical computation (Varela et al., 2001; Fries, 2009), and to be one of the mechanisms mediating the large-scale coordination of scattered functionally specialized brain regions. For instance, transient synchronization of neuronal oscillatory activity in the 30–80 Hz range has been proposed to act as an integrative mechanism, binding together spatially distributed neural populations in parallel networks during sensory perception and information processing (Singer, 1995; Miltner et al., 1999; Rodriguez et al., 1999). More generally, synchrony may subserve an integrative function in cognitive functions as diverse as motor planning, working or associative memory, or emotional regulation (Varela, 1995).
Over the past 15 years, cognitive neuroscientists have tried to capture and quantify neural synchronies across distant brain regions both during spontaneous brain activity and in association with the execution of a wide range of cognitive tasks, using neuroimaging techniques such as functional resonance imaging, electro- or magneto-encephalography. Theoretical advances in various fields including non-linear dynamical systems theory have allowed the study of various types of synchronization from time series (Pereda et al., 2005), and to address important issues such as determining whether observed couplings do not reflect a mere correlation between activities recorded at two different brain regions but rather a causal relationship (Granger, 1969) whereby a brain region would cause the activity of the other one.
However, not all measured synchrony may in fact represent neurophysiologically and cognitively relevant computations: various confounding effects may mislead into identifying functional connectivity, defined as the temporal correlations between spatially remote neurophysiological events, with effective connectivity, i.e., the influence one neuronal system exerts over another (Friston, 1994). For instance, measured synchrony may stem from common thalamo-cortical afferents or neuromodulatory input from ascending neurotransmitter systems, or may be the visible part of indirect effective connectivity. Other technique-specific artifactual sources of synchrony, for instance induced by volume conduction, are also well-known to cognitive neuroscientists (Stam et al., 2007).
Here, we address a further (extra-cranial) confounding source: the appearance of simultaneous, yet uncorrelated stimuli. We show how the activity of two groups of binary neurons, whose output code is optimized to represent rare events with short codes, can exhibit a synchronization when such rare events appear, even in the absence of shared information or common computational activities.
Experimental and computational evidence shows that cognitive function requires an optimal balance between global integrative and local functionally specialized processes (Tononi et al., 1998). This balance can be described in terms of transient short-lived episodes of synchronized activity between different parts of the brain (Friston, 2000;Breakspear, 2002). Synchronization over multiple frequency bands is thought to subserve fundamental operations of cortical computation (Varela et al., 2001;Fries, 2009), and to be one of the mechanisms mediating the large-scale coordination of scattered functionally specialized brain regions. For instance, transient synchronization of neuronal oscillatory activity in the 30-80 Hz range has been proposed to act as an integrative mechanism, binding together spatially distributed neural populations in parallel networks during sensory perception and information processing (Singer, 1995;Miltner et al., 1999;Rodriguez et al., 1999). More generally, synchrony may subserve an integrative function in cognitive functions as diverse as motor planning, working or associative memory, or emotional regulation (Varela, 1995).
Over the past 15 years, cognitive neuroscientists have tried to capture and quantify neural synchronies across distant brain regions both during spontaneous brain activity and in association with the execution of a wide range of cognitive tasks, using neuroimaging techniques such as functional resonance imaging, electro-or magneto-encephalography. Theoretical advances in various fields including non-linear dynamical systems theory have allowed the study of various types of synchronization from time series (Pereda et al., 2005), and to address important issues such as determining whether observed couplings do not reflect a mere correlation between activities recorded at two different brain regions but rather a causal relationship (Granger, 1969) whereby a brain region would cause the activity of the other one.
However, not all measured synchrony may in fact represent neurophysiologically and cognitively relevant computations: various confounding effects may mislead into identifying functional connectivity, defined as the temporal correlations between spatially remote neurophysiological events, with effective connectivity, i.e., the influence one neuronal system exerts over another (Friston, 1994). For instance, measured synchrony may stem from common thalamo-cortical afferents or neuromodulatory input from ascending neurotransmitter systems, or may be the visible part of indirect effective connectivity. Other technique-specific artifactual sources of synchrony, for instance induced by volume conduction, are also well-known to cognitive neuroscientists (Stam et al., 2007).
Here, we address a further (extracranial) confounding source: the appearance of simultaneous, yet uncorrelated stimuli. We show how the activity of two groups of binary neurons, whose output code is optimized to represent rare events with short codes, can exhibit a synchronization when such rare events appear, even in the absence of shared information or common computational activities.
THE MODEL
We suppose that a neuron codifies an external stimulus with a set of spikes, to transmit information about the event to other regions of the neural system. For the sake of simplicity, let's also suppose that all stimuli are drawn from a finite set of events E = {e 1 , . . . , e N }, N being the total number of events. Each event i is characterized by two strongly related features: the frequency of appearance f i and the importance factor m i . Clearly, rare events are also the most important ones. For instance, the image of a group of trees is quite common for an animal, and should not attract his attention. On the other hand, a predator appearing behind such trees is far less frequent, and the importance of a fast response to the event, high. Therefore, for each event i, the relation m i = 1/f i is defined.
Each neuron optimizes its code to represent such an environment, i.e., it assigns a symbol s i drawn from an alphabet S to each input event i. As the neuron natural language is composed of spikes, each symbol s i is defined as a sequence of spikes and silences; this is represented by a sequence of 0's and 1's, of arbitrary length, forming a Boolean code. In other words, and from an information science perspective, each symbol s i is a number in its Boolean representation.
In the creation of the code, the neurons use all their available knowledge concerning their environment, given by f i and m i , trying to fulfilling two conditions. First, the cost associated with the transmission of information should be minimized, thus as few spikes as possible should be generated; this favors large symbols with few 1's and a large proportion of 0's. This condition is energy saving, but increases the neuron's response time. Therefore, a second condition ensures that the neuron minimizes symbol length, particularly those associated with events or
COMPUTATIONAL NEUROSCIENCE
items of great importance, i.e., with low f i and high m i . A cost given by: accounts for the trade-off between these conditions is associated to each code, and minimized by the neuron in a training phase representing a natural selection process. The contribution of each symbol i to the total is given by two termssee Equation 1. The first, involving the number of spikes in the symbol (b i ), its expected frequency of appearance (f i ) and its length (l i ), expresses the probability of having the neuron spiking , at a given time, and thus the expected energetic cost of the code. The second term penalizes the appearance of long symbols codifying important messages. Finally, the parameter α defines the balance between both contributions to the total cost: for α ≈ 0 (α ≈ 1) the total cost is dominated by the length of important symbols (by the energetic cost). Two additional requirements are added. First, for different events no to be confused, all symbols should be different, i.e., s i = s j . Second, all symbols should start with a spike (a 1) and have at least one zero, in order to be recognizable and to avoid codes composed only of silences or spikes.
Due to the computational cost of optimizing such codes when multiple events are considered, the process is performed by means of a greedy algorithm Cormen et al. (2001), that is, by starting with an empty set, and adding one symbol at the time, making the locally optimal choice at each iteration.
RESULTS
We now explore how a spurious synchronization between different neurons (or groups of them) can be achieved even in the absence of any information transfer.
Neurons are supposed to work independently, that is, they receive independent inputs from the environment and create their optimal code to process and transmit such information. For instance, two groups of neurons may receive two different and uncorrelated stimuli, corresponding to the image of a predator and the sound of a thunder.
Following this idea, a large number of neurons are modeled and their codes created. Each neuron has its independent set of stimuli, half of them highly probable (and therefore, less important), and half of them with low probability of appearance.
Using this information, all codes are generated, and a time series for each neuron is created, by presenting sequences of stimuli at random, and recording the neuron's corresponding activity. Time series are divided into two parts of equal length. During the first half, neurons are stimulated by high-probability events; the opposite occurs during the second half. Following the previous example, we suppose that the organism is resting quietly at the beginning, and then spots a predator and hears a thunder. Furthermore, we suppose that neurons do not respond with the same velocity to the external stimuli: each neuron receives its inputs with a delay drawn from a uniform distribution defined between 0 and 400 time steps. Figure 1 Left depicts the evolution of the time series generated by two groups of neurons, each one composed of 500 neurons, for α = 0.1, 40 stimuli, and a transition interval of 400. Each series is clearly divided in two epochs, the first one corresponding to the time window [0, 5000], in which no relevant event appears, and a second window [5000,10000] in which neurons respond to rare external stimuli. As previously described, an efficient code requires important stimuli to be codified with short symbols, which, in turns, are associated with high spike densities. This effect is clearly shown in Figure 1 Left, where the proportion of spiking neurons after time 5000 is roughly increased by 0.05.
As neural codes are independently generated for the 1000 neurons considered, with different probability distributions, and external stimuli are also triggered in an independent way, no synchronization is expected between both time series. Indeed, if one computes the Pearson's correlation coefficient between both series within the time window [0, 5000], the result is in the order of 10 −4 . Nonetheless, an interesting result is obtained when the correlation is calculated by means of a sliding window; in other words, a time-varying correlation is obtained, whose value at time t represents the dynamics of both neural groups in the interval [t − 200, t + 200]. Intuitively, when analyzing the series near time 5000, both series share the same trend, i.e., an upward dynamics, thus leading to a positive synchronization. Such FIGURE 1 | (A,B) Time series of the proportion of spiking neurons generated by two groups of 500 neurons (gray and red lines). In panel A (panel B), the probability of finding rare events is changed at time 5000 (is continuously changed). Time series of group 2 is represented with an offset of 0.25.
The black solid line represents the evolution of the Pearson's correlation coefficient between both groups, calculated with a sliding window of size 400. (C) Average values for the four synchronization metrics, using the same event sets of panel (A). All neural codes are optimized for α = 0.1.
Frontiers in Computational Neuroscience
www.frontiersin.org September 2013 | Volume 7 | Article 125 | 2 effect is shown in Figure 1 Left, black line and right scale: around time 5000 the Pearson's correlation coefficient jumps to 0.6. To confirm this result, Figure 1 Right reports the average synchronization level obtained in 100 realizations of the previously described process, as obtained by 4 commonly used metrics for the assessment of synchronization in brain activity: • Correlation: Pearson's linear correlation between the two time series. • Granger causality: following the original definition in Wiener (1956), a time series is said to cause a second one if one can improve the prediction of the evolution of the latter by incorporating information about the past dynamics of the former. Such relationship is tested by means of bivariate autoregressive models (AR). The value here reported is the value of 1 − α * , α * being the critical level of significance for which the first time series can be considered causal to the second one. • Mutual information: assesses the quantity of information, measured in bits, that two time series share. In other words, it measures how much knowing one of these time series reduces uncertainty about the other. • Synchronization Likelihood: arguably one of the most popular index for assessing the presence of generalized synchronization, returns a normalized estimate of the dynamical interdependencies between two or more time series (Stam and Van Dijk, 2002). It relies on the detection of simultaneously occurring patterns, even when they are different in the two signals.
As can be seen in Figure 1 Right, all four metrics present a peak around time 5000, indicating that they all detect this spurious synchronization between the two groups of neurons. This spurious synchronization is caused by the optimization of the neural code, in which the length of important events is minimized, thus increasing the proportion of spiking neurons when rare events are presented to the system. The example proposed in Figure 1 Left is not very ecological as the set of events presented in the two halfs of the considered period only included frequent ([0, 5000]) and infrequent ([5000, 10000]) events. Figure 1 Center presents a more realistic example, in which the probability of finding rare events is continuously varied between two intermediate values.
The resulting time series (gray and light red lines) are highly noisy, while it is still possible to detect some trends. The black solid line represents the evolution of the Pearson's correlation coefficient calculated over a sliding window of size 400. Even in this noisy configuration, it is possible to detect regions in which the correlation between the two time series is strongly increased -similar results were obtained with the three other considered metrics.
DISCUSSION
In conclusion, we showed that synchronization can appear when the response of two groups of binary neurons is modulated by the simultaneous appearance of uncommon stimuli, even if both groups do not share information and are not performing a common computation. This is due to the way neural codes are constructed, i.e., to the preference of short symbols, with high spiking rates, representing uncommon events. The present toy model is not intended to mirror actual neural functioning, but rather to draw attention to a possible source of spurious synchronization occurring at the system level of description of neural activity typical of standard neuroimaging techniques. In particular, our results show that even a measure such as the Granger causality can be fooled into signaling causal relationships in the presence of mere coincidences corresponding to no underlying computation. This confirms that claims of causality from (multiple) bivariate time series should always be taken with caution (Pereda et al., 2005), as true causality can only be assessed if the set of two time series contains all possible relevant information and sources of activities for the problem (Granger, 1980), a condition that a neurophysiological experiment can only rarely comply with. Finally, it is impor-holds true even for resting brain activity, which is operationally defined by the absence of exogenous stimulation. This is explained by the fact that resting brain activity is characterized by unobservable, endogenous activity stemming from numerous simultaneous sources rendering spurious coincidences a plausible occurrence.
ACKNOWLEDGMENTS
Authors acknowledge the usage of the resources, technical expertise and assistance provided by supercomputing facility CRESCO of ENEA in Portici (Italy). | 3,880.8 | 2013-09-10T00:00:00.000 | [
"Computer Science"
] |
Smart Consumer Wearables as Digital Diagnostic Tools: A Review
The increasing usage of smart wearable devices has made an impact not only on the lifestyle of the users, but also on biological research and personalized healthcare services. These devices, which carry different types of sensors, have emerged as personalized digital diagnostic tools. Data from such devices have enabled the prediction and detection of various physiological as well as psychological conditions and diseases. In this review, we have focused on the diagnostic applications of wrist-worn wearables to detect multiple diseases such as cardiovascular diseases, neurological disorders, fatty liver diseases, and metabolic disorders, including diabetes, sleep quality, and psychological illnesses. The fruitful usage of wearables requires fast and insightful data analysis, which is feasible through machine learning. In this review, we have also discussed various machine-learning applications and outcomes for wearable data analyses. Finally, we have discussed the current challenges with wearable usage and data, and the future perspectives of wearable devices as diagnostic tools for research and personalized healthcare domains.
Introduction
Wearables, which refer to smart consumer devices that record digital health data, are becoming an integral part of our daily lives. This reflects the growing health consciousness among people. Wearable biosensors are low-price, non-invasive, and non-irritating devices that function by continuously measuring a person's physiological parameters in real time [1,2], which can be used for the early as well as in-depth diagnosis of several conditions. They also facilitate personalized patient health monitoring outside the clinical setting, which is an advantage considering the restricted movements during the COVID-19 pandemic [3,4]. More than 500 health-related sensors are available in the market [1][2][3][4][5], and the sale of such devices has experienced more than a 20% annual growth rate, with an estimated market size of more than EUR 150 billion by 2028 [6]. Wearable devices are available in different forms that are in contact with different body parts, and are also available as devices attached with fabrics. Based on their point of contact, they can be categorized into head, limb (which includes arms), leg, eye, and torso wearable devices [7]. Based on their probing method, they can also be categorized as skin-based or biofluid-based [8]. Apart from consumer devices, wearable devices are available for specialized monitoring, such as wearable smart insoles for diabetic foot monitoring, devices for real-time heart attack detection, and smart-digital stethoscope systems [9], the use of which is often suggested by clinicians. In this review, we have primarily focused on skin-based wrist-wearable consumer devices that provide continuous data, which are used for the diagnosis of several disease conditions. Wearable devices contain different types of sensors that collect data on step counts, heart rate, sleep duration, calories burnt, stress, and oxygen levels [10]. The parameters
Wearables as Digital Diagnostics
Wearable devices are revolutionizing the healthcare system by monitoring health even outside the clinics. This has enabled medical practitioners to adapt to wearables for monitoring as well as diagnosing their patients. Here, we discuss the major outcomes obtained from wearable data that are used for digital diagnostics only. Table 1 summarizes different wearable devices and their applications as digital diagnostic tools, as reported in different studies. Step count and heart rate Monitors frailty in cardiovascular patients when 6MWTs are conducted in both clinical settings and at home.
Assesses frailty with 90% sensitivity in a clinical setting and with 83% sensitivity at home. [22] Detects AF by training using a deep-learning network.
Assesses heart beat rhythm by using a trained deep-learning algorithm with a sensitivity of 98%. [23] Wrist, finger, chest, and abdomen Heart rate Could be useful in the detection of several cardiovascular diseases such as myocardial ischemia or cardiac arrhythmias.
Recording from this smartwatch shows feasibility, with a good signal quality of ECG (QT interval) and a correlation of 0.994. [24,25] Kick LL Wrist Respiration and heart rate Measures respiration and heart rate using a PPG sensor.
Allows real-time and remote measurements. [26] Honor Monitors sleep patterns and quality to understand the cardiovascular risk and premature telomere shortening of an individual. [29] Step count and sleep Tracks physical activity in diabetic patients.
The physical activity record could have an impact on glucose control. [30,31] E4 Empatica Wristband Wrist EDA and temperature Uses EDA recordings to monitor the activity of the sympathetic nervous system during epileptic seizures.
Allows continuous and long-term measurements of EDA. [32] Huawei Watch 2 Wrist
Sleep
Detects PD at an early state using the sleep patterns of an individual.
Smartwatch-based detection shows a significant correlation of 0.46 to the clinical setting. [33] The 3D acceleration and orientation of velocity signals Measures movement with inertial sensors in PD patients.
Assesses the eating difficulties in PD patients.
[34] StepWatch Wrist Step count Step activity monitor (SAM) to count strides; shows a correlation of 0.99 and 1.0 with the gold standard (GaitMait) in PD and MS patients, respectively.
Reliable, easy-to-use, and valid step monitoring tool for PD and MS patients. The tracking of physical activities using the smartwatch results in a reduction in waist circumference, blood pressure, and blood sugar by 40%. [41] Samsung Gear Sport Watch Wrist Sleep Assesses sleep quality by evaluating sleep parameters; shows a significant correlation of 0.59 with an actigraphy report.
Enables long-term home-based sleep monitoring.
[42] Accurately measures both dream and slow-wave sleep. [44] FitBit Charge 2 Wrist Steps, heart rate, energy expenditure, and sleep Tracks physical activity and sleep to understand the behavior and physiology to detect mental disorders such as depression.
A supervised machine-learning algorithm with these data was able to detect the risk of depression with an accuracy of 80%. [45]
Cardiovascular Diseases
As the primary data generated by wearable devices include the heartbeat rate, step count, and energy consumed, researchers have concentrated on associating cardiovascular disorders with these data. Cardiovascular diseases cause millions of deaths globally every year [46]. Continuous monitoring and the diagnosis of abnormalities are important for reducing fatalities. Wearable technology has made this more feasible [47]. A clinical trial with over 60 adults showed that wearing smartwatches with blood-pressure-monitoring features lowered the patients' blood pressure and resting heart rate, elucidating the effect of self-monitoring [48]. Self-monitoring can also lead to early diagnosis [49]. In a study by Rens et al., cardiovascular disease patients were made to take a 6-minute walk test (6MWT) and their activity data were collected with an iPhone and an Apple Watch using the Vasc-Trac app. The home-based 6MWT assessed frailty with 83% sensitivity and 60% specificity. Hence, functional capacity and frailty could be monitored in cardiovascular patients safely and with a higher resolution by using wearable devices [22]. Another study by Teo et al. tracked sleep and collected multi-modal phenotypic data and questionnaire responses from normal volunteers. The sleep data derived from the wearables and by self-reporting were compared on the basis of total sleep time (TST) and sleep efficiency (SE). From a data analysis of a multi-modal phenotype, it was found that the TST and SE derived from wearables showed an association with the markers of cardiovascular disease, such as waist circumference and body mass index. However, the self-reported data did not show such associations. A lack of sleep could lead to telomere shortening, which is a tumor suppressor mechanism (premature telomere attrition) (confidence interval [CI] = 74.573-636.538, p = 0.016); hence, the sleep data from wearables were useful for providing insights into the cardiovascular disease risk (β = 1.275, CI = 0.187-2.363, p = 0.023) [29]. The usage of wearables has allowed people to track their own heart rhythms for a very long period [50]. By using heart rate and step count data from wearable smartwatches, machine-learning algorithms have been developed by different research groups for detecting atrial fibrillation (AF), which is a leading cause of stroke worldwide. A study by Tison et al. presented a deep-learning algorithm for the detection of AF. The neural network showed a 95% CI of 0.94-1.00 (p < 0.001) for the detection of AF compared to the AF diagnosis based on ECG results, which was used as reference. The sensitivity was observed to be 98%, with 90.2% specificity [23]. Similarly, another study by Inui et al. used wearables such as an Apple Watch and a FitBit and compared them with ECG data for the detection of paroxysmal AF. The correlation between the Apple Watch pulse rate data and the ECG heart rate data was found to be better than that between the FitBit data and the ECG data. The coefficient of determination for the Apple Watch was R 2 = 0.685, whereas that for the FitBit was R 2 = 0.057. Hence, the Apple Watch was proven to have better AF detection precision than the FitBit [51]. When the PPG screening app was used for AF detection, a positive predictive value of 91.6% was observed in patients who were confirmed to have AF (CI: 91.5-91.8%) [27]. Bashar et al. also proposed a method of AF detection that detects noise artifacts and motion by performing a time-frequency PPG signal analysis. Further, their algorithm to detect premature atrial contraction was used for AF detection with a higher accuracy. The proposed method showed a specificity, sensitivity, and accuracy of 97.43%, 98.18%, and 97.54%, respectively [28]. In a study by Koshy et al., the researchers monitored sinus rhythm using two different wearables (FitBit and Apple Watch) that collected heart rate data. For the detection of atrial arrhythmias, both the devices showed good results. However, the Apple Watch (r = 0.83) showed a better correlation than the FitBit (r = 0.56) [52]. Photoplethysmogram (PPG) signals derived from wearables or smartphones could be useful for monitoring cardiac health after signal corruptions and noise are removed. It was found that these denoised PPG signals could effectively predict coronary artery disease (CAD) [53]. Apart from the commercial smartwatches, smartwatches such as Kick LL are being developed for the purpose of monitoring respiration and heart rate [26].
Smartwatches have emerged as a new-age diagnostic tool for recording multichannel ECGs [24]. For this purpose, smartwatches can be attached to different body parts such as the chest or abdomen. Samol et al. have shown the possibility of an early ECG differential diagnosis of cardiac diseases [54]. The QT interval was also measured using a smartwatch, and the result showed a correlation of up to 0.994 with standard ECG data [25].
Neurological Disorders and Stress
Wearable devices have allowed for the continuous monitoring of our physiology, which has made the detection and treatment of chronic diseases, such as neurological disorders and mental health problems, possible. Electrodermal activity (EDA) shows the activity of the sympathetic nervous system, and thus is a potential tool for tracking arousal and autonomic regulation. EDA data are usually collected from the fingertips, wrists, or ankles. It is known that measuring EDA consumes less power than other monitoring methods and is a simple process. There are EDA-measuring wristbands on the market with embedded EDA sensors where the wristbands are made up of electrically conductive fabric [55]. However, EDA values can be affected by various other factors, including the environmental, skin, and room temperatures [56]. These limitations become especially important when an EDA sensor is employed in a wearable device controlled by temperatures [57]. The EDA sensor indicates the activity of eccrine sweat glands, which varies with the psychological state [58]. There is a positive correlation between EDA values and skin temperature (r = 0.13, p < 0.001). A study was performed to understand the performance of a student in real-time during an exam [59]. It has also been found that EDA measurements from wearable sensors are useful for detecting epileptic seizures. A surge in EDA was detected during an epileptic seizure, which implies a great sympathetic discharge [15,32,60]. Another study showed that wearable sensors could also be used to detect social anxiety in people, and thereby improve the monitoring and treatment of social anxiety. The data used for this purpose were heart rate, EDA, and skin temperature (ST). This study also demonstrated that these sensors could distinguish among different levels of anxiety in an individual [61].
Wearable devices appear to be a useful tool for characterizing different parameters in different dementia-type diseases such as Parkinson's disease (PD) and Alzheimer's disease (AD) [62]. Sensors carried by the wrist-worn device StepWatch are used for the quantitative diagnosis of Parkinson's disease and multiple sclerosis by counting the strides of the users [35]. Researchers are also using inertial sensors in wearables for the continuous detection of rest tremors and dyskinesia in patients suffering from PD [63,64]. The accelerometers in these watches can differentiate between postural tremors and essential tremors in PD patients by calculating the peak harmonic power and frequency. They accurately provide diagnostic information in terms of postural tremors [65,66]. Sigcha et al. also showed a high correlation (0.969) between measurements of resting tremors using smartwatch data and clinical measurements [37]. Based on tremor measurements using wearable devices, the classification between differential diagnoses and healthy patients reached 86.5% precision [67]. EchoWear, a smartwatch-based speech and voice exercise monitoring system, was implemented to detect voice and speech disorders in PD patients [36]. A framework called SPARK, employing wearable devices and smartphones, was developed for the detection of multiple symptoms associated with PD [68]. The early diagnosis of PD is also possible from activity data during sleep and sleep quality data [33]. Apart from the tremor detection, smartwatches are used to measure 'plate-to-mouth' time during eating, which reflects the intensity of the disease [34].
For AD patients, wearables are used as digital biomarkers [69]. They are used for the inference-based diagnosis of behavioral events using inertial motion data [70]. The early diagnosis of mild cognitive impairments (MCIs) is also possible by using wristworn wearables [71]. Apart from the diagnosis, consumer-wearable devices have a great usefulness for patient care and the monitoring of elderly AD patients [72]. By implementing specific sensors into wearable devices, Al-Naami et al. developed a smart wearable device for alerting AD patients to fall-down conditions [73].
Fatty Liver Diseases
Nonalcoholic fatty liver diseases (NAFLDs) are rapidly increasing in number and becoming the primary cause of most liver-associated deaths globally. The major cause of all liver diseases is physical inactivity. Wearable devices help individuals to track their physical activity at a minute level. Hence, data from wearable devices act as a wellness indicator for patients suffering with liver diseases. An improvement in physical activity leads to an improvement in cardiorespiratory fitness, and this can be measured with cardiopulmonary exercise testing (CPET). CPET is found to be useful in identifying risks in transplant hepatology [74]. These wearables are not only useful for detecting and identifying liver diseases, but are also useful for keeping track of physical activities that have shown to be helpful for NAFLD and hepatocellular carcinoma (HCC) patients. In a study by Kim et al., patients were monitored using Neofit (Partron Co), which recorded the calories burnt, step count, exercise duration, and heart rate. After 12 weeks of following the exercise program, the body composition and physical fitness significantly improved in the HCC patients who completed their therapy [39]. Similarly, a study by Schneider et al. recorded the physical activity of participants using a wrist accelerometer and detected that an increase in physical activity resulted in a dose-dependent reduction in liver disease, which appeared to be independent of adiposity [38].
Corona Virus Diseases
In the context of the pandemic caused by the 2019 coronavirus disease (COVID-19), researchers used data on heart rate, step count, and calories burnt, recorded by wearable devices, to detect COVID-19 infections in pre-symptomatic and asymptomatic conditions [75]. Lonini et al. have demonstrated how these consumer-grade wearables collecting data for a very long period could be useful for detecting the symptoms of such viral infections in an individual. A wearable designed to be worn on the suprasternal notch can track physical activity, cough sounds, and cardio-respiratory function [76]. Snyder et al. used the resting heart rate difference (RHR-diff) method and the heart-rate-over-steps anomaly detection (HROS-AD) method for the early detection of anomalies in the recorded data of COVID-19 patients, even 3 days (median value) before the onset of symptoms [16,17]. In another study, a gradient-boosting algorithm was used to detect an infection and the important symptoms [77]. Quer et al. provided a wearable device data model that complemented conventional virus-testing methods to detect COVID-19 infections [78]. In another study, Bogu and Snyder showed that using wearable data 7 days prior to COVID-19 detection and 21 days after the detection could recognize COVID-19 infections using a deep-learning-based method of a long short-term memory network-based autoencoder (LAAD). LAAD detects COVID-19 based on an abnormal resting heart rate during the period of infection. It was able to detect COVID-19 in the pre-symptomatic period as well as the symptomatic phase of the patients, with a precision score of 0.91 (CI: 0.854-0.967) [10]. Cho et al. proposed a one-class SVM method that can detect COVID-19 23.5-40% earlier compared to the method of Mishra et al. .
Metabolic Disorders
Metabolic diseases, such as diabetes, affect millions of people around the world every year. They increase the chance of multiple organ failure and result in a decreased quality of life [80]. Consumer wearables such as fitness trackers are also useful in diabetes patients. It has been found that physical activity (PA) has a major effect on glucose concentration. The effect of PA depends on the intensity, mode, and duration of the exercise [81]. Wearable smart devices are useful tools for the self-monitoring of activity by the patient and for remote monitoring by the caregivers. A clinical trial is ongoing to explore the efficacy of integrated do-it-yourself smartwatch glucose monitoring compared to scanned continuous glucose-monitoring systems [82]. In another study, Fitbit ® data from diabetic patients were used to correlate the association of physical activity with glycemic exposure. Further, assessing PA quantitatively may show to be useful in making mealtime treatment decisions. It was also observed that participating in PA every day demonstrated an immediate or later impact on glucose control [30]. Akyol et al. reported a novel consumer-wearable device called Diafit that works as a customizable glucose monitor for diabetes patients [40]. In a study by Weatherall et al., the researchers demonstrated an association between sleep and PA data from these wearable devices (Fitbit Charge HR) and the information reported by the type 2 diabetes mellitus (T2DM) patients themselves. It was observed that the self-reported data were positively associated with both the PA data (r = 0.35, p = 0.001) and the sleep data (r = 0.24, p = 0.04) [31]. Hence, it is believed that monitoring patients extensively could allow them to make decisions on disease treatments. In addition, data from these wearables have the potential to improve patient-reported outcomes and their care. There is also a noninvasive method of monitoring glucose that is performed by pressing the wrist or fingertip on the thin glass behind any smart wristwatch, which consist of a chemochromic mixture that has the same function as a PPG sensor. These chemochromic components facilitate the measurement of various metabolites from sweat, which are further used to obtain the glucose concentration using neural network algorithms built into the PPG sensor. The values obtained from this showed a high correlation with invasive methods of monitoring glucose. Hence, wearables provide a non-invasive, miniaturized, easy-to-operate, and novel method for glucometry, which could be used as an alternative to invasive tools in clinical settings [83]. In another study reported by Lee et al., smartwatch data along with other digital data were used to enable the better prevention of metabolic syndromes by the continuous detection of several health factors [41].
Sleep Quality
Sleep is important for normal bodily functions and for good health. A lack of sleep can have physical, emotional, and mental effects and can lead to serious health conditions, especially among diseased individuals. Both PA and sleep are related to each other. Wearable technology is currently being used to track PA and sleep, which could help researchers study sleep science in-depth, resulting in the better diagnosis of sleep-related disorders [84]. Sathyanarayana et al. demonstrated that deep learning can be used to predict sleep quality (whether it was good or poor) by making use of an actigraph obtained from the waking hours of an individual [85]. In another study by Berryhill et al., it was reported that a wearable sleep tracker could improve sleep quality in healthy people and track the quality as well as quantity of sleep. They also compared the sleep quality measured by wearables and polysomnography. The wearables showed a low precision error (17.8 min) when measuring sleep duration [44]. Currently, there are so many sleep trackers available on the market that it is difficult to discern which one is the best. Lees et al. performed a comparison among various wearables that track sleep and time in bed by using a sleep diary (SD). The Jawbone UP3 and Fitbit Charge Heart Rate devices showed the greatest equivalence to the SD in terms of sleeping time. The SenseWear Armband, Garmin Vivosmart, and Jawbone UP3 devices showed the greatest equivalence to the SD in terms of time in bed [86]. Meharabadi et al. used a wearable ring and watch to measure sleep quality, and observed that for total sleep time, the correlation of the actigraphy data with the ring data was 0.86 (p < 0.001); with the watch data, the correlation was much lower, at 0.59 (p < 0.001) [42]. Topalidis et al. also observed that wrist-worn device data and actigraph reports that derived the wake-up time and sleep time had high correlations (0.96 and 0.84, respectively; p < 0.001) with subjective reports [87]. In a study conducted by Chen at al., a PPG smartwatch outperformed the polysomnography method for detecting obstructive sleep apnea. An accuracy of 81.1% was achieved [43]. Papini et al. also observed that a wrist-worn PPG-integrated smartwatch could complement the standard apnea diagnostic techniques with a relatively lower correlation of 0.61 [88]. Ko et al. conducted a study on sleep quantification in PD patients using smartwatches, and detected abnormal rapid eye movements. They also observed that the percentage of the deep-sleep stage differs between healthy (38.1) and PD (22.0) patients [89].
Psychological Illness
Apart from detecting physiological illnesses, wearable devices play an important role in addressing psychological characteristics that are often neglected due to a lack of symptomatic evidence. Wearable device data equipped with ML algorithms are helpful for extracting the highly personalized nature of psychological conditions such as depression and mood swings. A recent study on 14 young people using EEG data, neurocognitive assessments, and lifestyle data from wearable devices revealed that each person had distinct depression determinants [90]. Hence, highly personalized diagnoses and treatments are required. In another interesting study, pictures were shown to the participants of the study. A machine-learning analysis further identified important features, and classifiers were used to predict the valence and arousal. Although the accuracy was not significantly high (69.9%), it showed the possibility of identifying emotional states using wearable devices [91][92][93]. Apart from the emotional state, the supervised machine-learning and gradient-boosting algorithm DART (dropouts meet multiple additive regression trees) [94] has been used for the detection of depression in a group of working young people wearing a Fitbit wristband. This was further evaluated by performing a k-fold cross-validation on the test sets. The study showed that the severity of depression symptoms was associated with nighttime heart rate variation [45]. Anxiety and depression have also been diagnosed in children with the help of wearable data devices and a machine-learning method such as k-nearest neighbor (kNN). A diagnosis accuracy of 75% was achieved by using the kNN method [95]. Stress is another mental health issue that has become very prevalent among adults. A study by Nath and Thapliyal proposed a new prototype for detecting stress among people using a wristband embedded with EDA, PPG, and ST sensors, which provided EDA, blood volume pulse (BVP), IBI, and ST signals that could distinguish between the stressed and non-stressed state of a person [96].
Role of Machine Learning in Diagnostics
Very often, the data from wearable devices are used as an additive to other medical data. These wearable devices generate a huge amount of multivariate time-series data. Extracting deeper insight from the first level of data requires data preprocessing and extensive analyses, for which machine-learning (ML) algorithms have become an indispensable tool for researchers [97]. Machine learning belongs to the field of artificial intelligence. In ML, programs perform some tasks, learn from the performance, and perform new tasks based on their prior learning. Figure 1 shows how machine-learning algorithms are used for the analysis of data extracted from wearable devices. In the context of big data, ML-based algorithms have outperformed conventional algorithms. Moreover, wearable devices are particularly useful for revealing the underlying personalized characteristics of several physiological and psychological diseases. Providing personalized diagnoses and treatments has become feasible due to ML algorithms. In this section, we discuss different ML algorithms used for the analysis of data from wearable devices and their outcomes in the context of different diseases.
An analysis of wearable device data is primarily focused on identifying anomalous behaviors in the recorded data, and also predicting future events [98]. This is achieved by training the machine-learning model with the recorded data of known anomalous events and testing the model's performance with previously unseen data. Apart from statistical methods based on the resting heart rate difference coupled with step counts [16,17], ML algorithms such as the support vector machine (SVM) method [99], the random forest (RF) method [100], gradient boosting decision trees [101], and the k-nearest neighbors (kNN) method [102] have been used. Among them, SVM performs best [103]. These conventional algorithms were used to build models for analyzing ECG data, and these models were used for stress classification based on smartwatch data. Feature engineering played an important role in improving the performance of the built models [104]. Feature engineering includes converting time-series data to the frequency space and extracting seasonality, frequency spectra, and power spectral density [105].
Machine-learning algorithms have been applied to data from implanted electroencephalography (EEG) electrodes and wearable devices for the detection of epileptic seizures, as well as for the prediction of seizure events. The wearable devices have a reported sensitivity of more than 90% for detecting seizures [106]. Weiting et al. used several algorithms, including SVM, RF, and naive Bayes, to build an ML algorithm ensemble for the purpose of predicting the cardiovascular risk from wearable healthcare data-collection devices [107]. In a study involving 407 participants using smartwatches, a gradient-boosting algorithm identified and predicted SARS-CoV2 infections [108]. Researchers have used multiple-instance learning via an embedded instance selection (MILES) method for feature transformation to detect obstructive cardiomyopathy [109]. However, conventional algorithms such as kNN perform better than deep-learning approaches at detecting out-of-distribution events for human activity recognition [110]. Neural networks are the building blocks of deep-learning methods. The autoregressive integrated moving average (ARIMA) model [111], which evolved from neural networks, is popularly used for time-series data analyses. It has also been used to analyze data from wearables. The ARIMA model is a type of auto-regression model. It predicts future observations based on past observations, while considering seasonal effects. The current value of time-series data is considered as the linear combination of past records. Applying random forest and ARIMA on blood pressure data has identified a personalized dependence of blood pressure on other lifestyle factors [112]. DeepBeat, a deep-learning method based on a convolution neural network (CNN), has been developed to assess data quality as well as abnormal heart rhythms and atrial fibrillation [14]. In a recurrent neural network (RNN), the features are connected by temporal sequences and the past inputs are stored for a certain amount of time, which often leads to vanishing gradient problems that can be overcome by long short-term memory (LSTM) algorithms [113,114]. LSTM algorithms are also used for predictive analyses. Matthew et al. used ARIMA and LSTM along with other ML models to predict future heart rate irregularities, and they observed that ARIMA performed better compared to the other algorithms [115]. The LSTMbased method has successfully been used to detect COVID-19 using wearable data [10]. LSTM has also been used for detecting congestive heart failure [116]. Cho et al. claimed that one-class SVM provided a 23.5-40% earlier detection of COVID-19 compared to the LSTM method [79]. LSTM has also been applied for estimating sleep stages from wearable data [117].
In addition, several ML-based algorithms and platforms have been developed for analyzing wearable data. PRISM uses Fourier-transform-based engineered features coupled with text data, analyzed by text mining, to provide a data-driven platform for monitoring mental health [118]. A correlation-based emotion recognition algorithm (CorrNet) recognizes emotions when a person watches videos. It also employs feature engineering based on correlations [119]. Kong et al. developed an algorithm that can remove non-stationary motion artefacts in heart rate data by converting the data into a frequency domain [120]. The ROAMM framework was developed to detect the real-time activity of a user. It is coupled with a server for remote analysis [121]. The deep-learning-based android app 'SmartFall' uses smartwatch data to detect falls [122]. Kwon et al. implemented a neuralnetwork-based smartwatch interface for the recognition of gesture patterns [123]. The Roche PD Mobile Application was developed for the remote quantification of motor sign severity in early-stage PD patients [124]. Zylstra et al. developed a mobile health platform for the daily collection of clinically relevant measurements for patients with neurological disorders [125]. The iSenseSleep app works to detect sleep duration based on wearable data and smartphone usage data [126].
Future Perspectives and Challenges
Advancements in technology have allowed for the generation of wearables that can track data, such as heart rate, steps, and calories, in a humongous amount. Researchers have now started to branch out from physical activity tracking to focusing more on major healthcare challenges, including diabetes management and the remote monitoring of older individuals. Hence, to achieve this goal, researchers have been working on the development of biosensors that are incorporated with bioreceptors such as antibodies, enzymes, or cell receptors [127].
The rapid progress in the development of wearables is very evident from the increase in the reporting rate of proof-of-concept studies. However, there are many challenges associated with wearables in healthcare. One of the major challenges of using wearables as smart diagnostic tools is associated with their precision and accuracy. A recent study by Filippo et al. highlighted the deviation in the results obtained by smartphone applications and wearable devices [128]. In addition, the results from different devices vary. Gloria et al. performed a comparative study involving different smartwatches [129]. Scarlet et al. compared the diagnostic accuracy of smartwatches for detecting cardiac arrhythmia [130]. Hahnen et al. conducted a study with over 127 individuals and observed that the accuracy and precision of heart rate data met the accuracy guidelines, but the blood pressure and oxygen saturation data failed the guidelines [131]. In a study conducted by Nelson et al., the data from an Apple Watch and a Fitbit device were compared with ambulatory echocardiogram (ECG) data collected from the same subject. The Apple Watch and Fitbit data showed agreement with the ECG data up to 95% and 91%, respectively [132]. However, the Fitbit data did not outperform the ECG data in the detection of epileptic seizures [133]. The accuracy of wearable devices must at least be at a comparative level with the conventional diagnostic methods.
Another challenge is the energy consumption, especially in MEMS-based inertial measurement units and wearable sensors. There is a need to reduce the order of magnitude of the energy in sensing and wireless communication by utilizing technologies that can help with energy reduction and overcome this challenge. Most of these technologies are under clinical evaluation and require regulatory approvals before commercialization [134]. To manage the energy consumption, the size of sensors has been reduced. Wearable devices also require internet access. A limited internet connectivity limits the use of wearables in the rural areas of under-developed countries. For poverty-stricken countries, the current cost of wearables and internet service has made wearables out-of-reach for many people. The wearability of these devices is also an issue. Users prefer them to be comfortable and light enough to wear and carry around, without hindering their daily activities. Hence, the tradeoff between the complexities associated with the computations and the weight and size of the wearable is one of the major challenges. Another challenge is the safety of the user, which could come into the picture when using wearable devices that use wireless technology for transferring data and that involve radiation, which could have a negative effect on the user's health [135].
Data security and privacy are other major challenges when it comes to data from wearables. Implementing security policies while maintaining the size and computational complexities of the wearables is quite a challenge. Wearables have poor data encryption and protection. Patients also have concerns over data security and may refuse to use wearable devices [136]. The health literacy of patients is an associated issue.
The application of wearables comes with many regulations and legal frameworks, as it involves individual data collection, processing, storing, sharing, and further analysis for research purposes. Hence, the privacy and security of an individual's sensitive information comes into question [137]. Vast applications of wearable technology could be possible, especially through the development of regulatory modifications in the data privacy aspect. It has become important to make the data exchange among health app providers, wearable manufacturers, and health insurers more transparent [138]. These aspects may also create a barrier in the market. Every country has their own requirement or certification policy for market access, and these need to be considered during the early stages of product development. There are different acts that protect and secure the data of every individual. These acts include the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the USA and the General Data Protection Regulation ((EU) 2016/679, GDPR) in Europe [21].
There is an immense scope and necessity for advancements in wearable technology and data processing. These can be achieved by incorporating the Internet of things (IoT), which has numerous applications [135,139,140]. The successful application of wearables as diagnostic tools also involves identifying and addressing the concerns of clinicians, healthcare providers, researchers, industry, and users [141].
Conclusions
The advancements in wearable technology have expanded the horizon of medical research in many directions. Wearable technology has manifested in divergent forms that can be carried on the wrist, head, foot, and other body parts. Among them, wrist-worn devices are most used and do not need any intervention from clinicians [142]. This drove us to set an objective to investigate the current landscape of wrist-worn wearables. We focused on wrist-worn devices working as digital diagnostic tools because of their potential to leverage the healthcare system in different circumstances, such as caring for elderly people, providing remote healthcare, and providing healthcare in ailing socio-economic conditions. Wearables have proven their usefulness in the diagnosis and monitoring of diseases such as cardiovascular diseases, neurological diseases, liver diseases, and even coronavirus diseases. It has also been shown that there are various machine-learning models and algorithms that can be applied for the analysis of wearable data. The growth in this field has led to the early detection of diseases, a faster response to drugs, and higher health literacy, resulting in better patient outcomes. Further enhancements in wearable technology are required to overcome the current challenges as discussed in this paper, including data security and privacy through improved regulation mechanisms. | 8,274 | 2022-08-31T00:00:00.000 | [
"Computer Science"
] |
Silver Nanoparticles Coated Poly(L-Lactide) Electrospun Membrane for Implant Associated Infections Prevention.
Bacterial infection has been a critic problem for implant infections. Poly(L-lactide) (PLLA) membrane has great potential for Guided bone regeneration (GBR), however, PLLA lack antibacterial property and thus may face bacterial infections. In this work, a mussel inspired method was used to treat PLLA membrane with dopamine and formed polydopamine (PDA) coated PLLA (PLLA@PDA), and then silver Nanoparticles (AgNPs) was immobilized on the surface of PLLA via the reduction effect of PDA. The XPS results showed that the silver element contents may be tuned from 1.6% to 15.4%. The AgNPs coated PLLA (PLLA@Ag) showed good antibacterial property (98.3% bactericidal efficiency may be obtained) and good biocompatibility, implying that the PLLA@Ag membrane have potential application as antibacterial GBR membrane, which may enhance the application of PLLA.
INTRODUCTION
Guided bone regeneration (GBR), is a common and promising augmentation technique to regain sufficient width and height of the jawbone at oral implant sites, and has been an important surgical procedure to heal peri-implantitis defects (Lundgren et al., 1994;Wang et al., 2016;Spinell et al., 2019). A critical factor that determines the success of GBR technique is the GBR membrane, which can prevent epithelial or undesirable tissues migration into the defective area (Wang et al., 2016). Up to now, various materials, such as collagen, polytetrafluoraethylene, have been widely used as GBR membrane. Polylactic acid, an important biodegradable polymer, due to its good biocompatibility, suitable mechanical strength, and controllable degradation, has been approved by Food and Drug Administration (FDA) and used on clinic (Santoro et al., 2016;Tyler et al., 2016;Hamad et al., 2018), showing great potential as GBR membrane (Wang et al., 2016).
Bacterial infection has been a common headache in our daily life or clinics. Bacterial adhere on the surface of medical materials, resulting in infections or even failure of materials or surgery operation (Chen et al., 2016;Behzadi et al., 2017;Jensen et al., 2017;Shen et al., 2019), as for GBR technique, bacterial infection has been a major reason for failure of GBR in vivo (Slutzkey et al., 2015). And thus, it is definitely needed to develop an effectively antibacterial GBR membrane, which can regain sufficient bone, and reduce bacterial infections (Shi et al., 2019;Wang et al., 2019). Recently, GBR membranes with antibacterial activity have been prepared by the addition of antimicrobial agents, such as doxycycline (Kutan et al., 2016), metronidazole (Xue et al., 2014a;Xue et al., 2014b), chlorhexidine (Qian et al., 2018), and so on, showing great potential for bone regeneration. However, antimicrobial resistance has limited the applications of these organic antimicrobial agents, as Rams et al. Reported,71.7% of 120 peri-implantitis patients appeared resistance to one or more antibiotics (Rams et al., 2014). Therefore, it is essential to develop GBR materials with alternative antibacterial agents.
Owing to the broad-spectrum antibacterial and few drug resistance properties, silver nanoparticles (AgNPs) have been widely used as antibacterial agent or surface coating in dental instrument, contact lenses, cardiovascular stent, implant, et al. (Le Ouay and Stellacci, 2015;Qiao et al., 2019;Shakya et al., 2019). However, during the synthesis process, toxic reducing agent are always used, which may increase the cytotoxicity or potential risk of environment pollution (Duan et al., 2015). Hence, it is essential to design a novel, ecosystem friendly way to prepare AgNPs loaded antibacterial GBR membranes.
Inspired by the adhesion property of mussel, polydopamine (PDA) has been coated onto various surfaces via the selfpolymerization of dopamine (Lee et al., 2007;Xin et al., 2018). PDA contains many catechol, amino, and quinone groups, which can be used to connect other molecules or functional particles on PDA treated surfaces (Huang et al., 2017;Lin et al., 2017). Furthermore, due to the reduction effect of catechol groups, AgNPs can be synthesized via the in-situ reduce reaction of silver ion, and thus this mussel inspired method have been widely used to immobilize AgNPs on various matrix, such as cellulose paper (Islam et al., 2018), mesoporous silica (Song et al., 2018), and polymer electrospun fibers .
Although PLLA has showed many potential applications as GBR membrane, it is much important to prepare antibacterial PLLA, which can prevent the implant associated infections. In this work, poly(L-lactide) (PLLA) electrospun nanofibers membrane was prepared as a GBR membrane model. Mussel inspired method was used to prepare AgNPs coated PLLA membranes (PLLA@Ag), which can combine the antibacterial property of AgNPs and bioproperties of PLLA. The modified PLLA membranes showed superior antibacterial property and good biocompatibility, showing great potential for clinic application as GBR membrane.
Preparation of PLLA Nanofiber Membrane
PLLA membranes were prepared via the electrospinning method. Briefly, PLLA was dissolved in volume ratio of 4:1 chloroform/N, N-dimethyl formamide (DMF) mixture under stirring for 24 h at room temperature to obtain a homogeneous electrospinning solution. The electrospinning using the following parameters: cylinder collector speed of 300 rpm, fixed spinning distance of 20 cm, flow rate of 1.2 ml/h, and spinning voltage for 18 KV. The obtained membranes were collected at room temperature and then dried under vacuum for at least 48 h to remove the remained solvent completely.
Characterization
The morphology of electrospun membranes were observed with a field-emission scanning electron microscope (FE-SEM, JSM -6701F, JEOL, Japan). X-ray photoelectron spectrometer (XPS, ESCALAB 250, Thermo-fisher, USA) was carried out to analyse surface elements of membranes. X-ray diffractometer (XRD, D8 Focus, Bruker, Germany) was used to investigate the crystalline phase of membranes. The surface contact angle of the membranes was measured using a static contact angle meter (JC2000A, Powereach, China)
Antibacterial Activity Test
The antibacterial activity against Staphylococcus aureus (S. aureus) (ATCC 25586) of PLLA, PLLA@PDA, and PLLA@Ag samples was tested using agar diffusion assays as well as modified colony counting method. In agar diffusion assays, disk-shaped samples (5 mm in diameter) were prepared and sterilized through ultraviolet irradiation for 1 h (30 min each side), before placing on the cultured blood agar plates. 100 ml aliquot of approximately 3*10 8 cfu/ml bacterial suspension was spread onto an agar plate. Disk-shaped samples were placed on agar plates (repeat three times for every sample). Then the plates were incubated for different times at 37°C. The bacterial growth on the plate was visualized directly and the diameter of the inhibition zone was measured on days 1 day, 3 days, 7 days, and 14 days.
In modified colony counting method, sections (1 cm × 1 cm) of nanofiber membrane were placed in a sterilized flask containing 100 ml of test organisms and incubated at 37°C for 4 h in a laboratory shaker at 200 rpm. After 4 h incubation, serial dilutions of the liquid were made in PBS. Dilution of 10 4 , 10 5 , and 10 6 were used for colony counting method. 50 ml of every dilution was spread on to the agar plate and then incubated at 37°C for 24 h. After incubation, the number of viable colonies were counted manually. The percent reduction in number of colonies in PLLA (in their different) treated sample as compared to the untreated samples gives the antibacterial activity of the treated samples.
The bactericidal efficiency of the membranes was calculated by the following formula: Bacteridical efficiency ( % ) = A − B A * 100 % where A is the average number of the colonies of untreated group and B is the average number of the colonies of the treated samples.
Cytotoxicity Test
Cell proliferation was determined using a cell counting kit (CCK-8) assay. All the nanofiber membranes were cut into squares (1 cm × 1 cm), sterilized under UV light (30 min each side), immersed in 70% ethanol for 30 min, and then washed twice with sterile PBS. The sterile membranes were incubated in 1mL Dulbecco's modified eagles medium (DMEM) for 48 h at 37°C under a 5% CO2 humidified atmosphere, and then the supernatant was filtered by Millipore ® membrane to obtain the extract liquid. The extract liquid was serially diluted into 50%, 25%, 12.5%, and 6.25%, respectively.
Next, osteoblast MC3T3-E1 cells (1×10 4 cells/ml) were seeded into each well (100 μl/well) and incubated at 37°C under 5% CO 2 atmosphere for 24 h to make them proliferate and adhere on the wall. And then, 100 ml extract liquid and the corresponding dilutions were added into each well to replace the media, and cultured for 48 h. 20 μl of CCK-8 buffer was added to each well, and cells were incubated at 37°C for an additional 2 h. The absorbance was measured at l= 450 nm on a plate reader.
Another cell line (fibroblast L929) was also used to do the same experiments according to the above the procedure.
Statistical Analysis
The obtained data were presented as mean ± SD. Statistics differences were checked by One-Way Analysis of Variance, and the significance was set at P < 0.05.
Characterizations of Nanofiber Membranes
Electrospinning has been a versatile technique to prepare nonwoven membranes used for bone tissue engineering or bone regenerative Feng et al., 2019). In this work, we firstly prepared PLLA membrane via electrospinning method, and then mussel inspired method was used to prepare silver coated PLLA membrane. To observe the change of electrospun fibers, SEM were used to observe the morphology of different samples. The PLLA membranes show porous structure, and the fibers surface are smooth ( Figure 1A). When the PLLA membranes were immersed in basic dopamine solution, dopamine will self-polymerize and form polydopamine (PDA) on the surface of PLLA fibers. As shown in Figure 1B, the morphology of PLLA@PDA membrane are almost the same with that of PLLA ( Figure 1A). The PDA layers have strong interaction with metal ions and can also reduce silver ion (Ag + ) to AgNPs (Wu et al., 2015;Wang et al., 2017). When PLLA@PDA was immersed in AgNO 3 solution, the Ag + will bind to PDA and be reduced to AgNPs, with the increase of reaction time, the AgNPs particle size or amount will increase. As shown in Figure 1C, only a few particles were anchored on the surface of PLLA fibers, when the reaction time was 3 h, much more AgNPs appeared on the surface of fibers. If the reaction time was 6 hours, it can be seen that, the surface of PLLA fibers were nearly coated with AgNPs. After 24 h, the AgNPs will aggregate heavily on the surface of PLLA fibers ( Figures 1H, I).
To further analyze the change of surface composition of PLLA membrane, XPS measurements were carried out. The PLLA surface consist of C and O elements. When PDA were coated on the surface of PLLA fibers, a new element, N will appear, as shown in Figure 2, in the spectrum of PLLA@PDA, a weak peak centered at 399.0 eV was found, which was the characteristic signal of N element, strongly confirming the formation of PDA coating on PLLA fibers. When AgNPs were formed on the surface of PLLA@PDA, XPS will show signals of AgNPs, as shown in Figure 2, all the PLLA@Ag samples showed two individual peaks at 368.7 eV and 374.7 eV with a spin-orbit separation of 6 eV, which proves the existence of the AgNPs. According to the quantitative results of XPS, the Ag Content (At. %) of PLLA@Ag1, PLLA@Ag3, PLLA@Ag6, PLLA@Ag9, and PLLA@Ag24 membranes were 1.6%, 4.9%, 7.9%, 8.2%, and 15.4%, respectively, implying that the AgNPs' content can be easily tuned by tuning the reaction time.
The XRD patterns of various samples are shown in Figure 3. As for neat PLLA electrospun nanofibers membrane, a weak peak appeared at 16.6°, corresponding to (200)/(100) planes, however, PLLA is a semi-crystalline polymer, most of the fibers are in amorphous state (Figure 3). The XRD pattern of PLLA@ PDA nanofibers has no difference with that of PLLA, suggesting that dopamine self-polymerization does not affect the structure of PLLA nanofibers. When PLLA@PDA was immersed into the solution of AgNO 3 , Ag + will be quickly chelated, then the Ag + will be reduced to Ag 0 and formed AgNPs on the surface of PLLA fibers. As shown in the XRD patterns (Figure 3 With the increase of reaction time (from 1hour to 24 h), the intensity of peaks increased, demonstrating that the amount of AgNPs is time-dependent. Furthermore, these results can clearly verify that it is an effective method to prepare silver coated PLLA membranes.
Antibacterial Activity
Antibacterial property is very critical for GBR membranes. Concerning the perspective of PLLA membrane used as GBR membrane, it is an valuable thing to endow PLLA membrane with antibacterial property. Due to the excellent antibacterial property of AgNPs, the prepared PLLA@Ag must have antibacterial property. Up to now, many researches have proved that AgNPs or AgNPs coated materials showed excellent antibacterials to both the Gram-positive S. aureus and Gram-negative E. coli (Wu et al., 2015;Wang et al., 2017). On considering that S. aureus is an important bacterial related with stomatologic problem, such as maxillofacial infections and periimplantitis (Persson and Renvert, 2014), and thus in this work, S.aureus was selected as the bacterial representative to assess the antibacterial activity via observation the growth of S. aureus on the agar plate. As shown in Figure 4A, Bacterial inhibition zones are clearly observed around PLLA@Ag membrane (PLLA@Ag 24 was used in Figure 4A), whereas, PLLA and PLLA@PDA did not show any inhibition zone ( Figure 4A a, b). Figures 4B-F showed the bacterial inhibition effect of PLLA@Ag1, PLLA@Ag3, PLLA@Ag6, PLLA@Ag9, and PLLA@Ag24, respectively, and the results clearly showed that the prepared PLLA@Ag have effective antibacterial effect. PLLA@Ag1 membrane presented only 7.19 ± 0.18 mm diameter and kept less than 7 days ( Table 1). By contrast, all the other PLLA@Ag membranes showed much better antibacterial activity. It is worth mentioning that the inhibition zones are still obvious even after 14 days except the results for PLLA@Ag1. However, from the results of PLLA@Ag3, PLLA@Ag6, PLLA@Ag9, and PLLA@Ag24, the increase of AgNPs amount did not result in higher inhibition zones ( Figures 4C-F), while this was consistent with bactericidal efficiency results calculated from the modified colonies counting tests ( Table 1). All the bactericidal efficiency of PLLA@Ag membranes were more than 95% against the targeted bacteria, except PLLA@Ag1 membranes (only 32%). Hence, it can be concluded that the incorporation of AgNPs endowed PLLA membrane antibacterial property. Although the mechanism for AgNPs antimicrobial activity is still controversial, it is no doubt that the presence of AgNPs could act as Ag + reservoir, and provide continuously a high enough concentration of silver species in their surroundings to maintain its antibacterial activity for several days (Le Ouay and Stellacci, 2015). Herein, because of the strong interactions between AgNPs and PDA coating, the AgNPs would not easily detached from the PLLA matrix, and thus the prepared PLLA@ Ag can have a long-time antibacterial activity.
Up to now, commercial polylactide membrane (including PLLA and PDLLA) and their blend with other polymers or functional agents have been used as GBR membrane in clinic (Wang et al., 2016), however, the bacterial infections have been a challenge for commercial GBR membrane. And thus, the prepared PLLA@Ag membrane showed some advantages over the pure PLLA membrane. Furthermore, the surface of materials may have great affection on their properties, many methods have been designed to modify polymer membranes used as GBR membrane, such as antibiotic coating, polymer coatings (Florjanski et al., 2019). In this work, we prepared AgNPs coated PLLA membrane via a mild mussel inspired method, the most important results maybe endowing the antibacterial properties to PLLA. Besides, the surface wettability were also improved, as shown in Figure 4, the surface of PLLA membrane was hydrophobic ( Figure 4G), while PLLA@PDA ( Figure 4H) and PLLA@Ag24 ( Figure 4I) are hydrophilic. In addition, the PDA may also promote the biomineralization of hydroxyapatite. And thus, compared with the PLLA membrane, the PLLA@Ag membrane have many advantages. However, the potential in vivo toxicity of PLLA@Ag should be further investigated.
Cytotoxicity Test
PLLA is a biocompatible polymer, while PDA is also biocompatible, and thus the PDA coated PLLA fiber membrane also showed low toxicity. As shown in Figures 5A, B, the cell viability of both MC3T3 and L929 cell line treated with different amount of PLLA@PDA are much better, with a cell viability more than 100%. Generally, AgNPs showed low toxicity and have been widely used as antibacterial agent. When AgNPs were immobilized on the surface of PLLA@PDA fibers, it can combine the biocompatibility of PLLA@PDA and antibacterial property of AgNPs, and thus PLLA@Ag can work as antibacterial GBR membrane. Besides, the biocompatibility of PLLA@Ag is an important factor that determines the application of PLLA@Ag. As shown in Figure 5, the cell viability of MC3T3 on PLLA@Ag1 and PLLA@Ag3 are about 120%, respectively, however, with the increase of AgNPs amount, the samples will show toxicity. For example, the MC3T3 cell viability of PLLA@Ag6, PLLA@Ag9, and PLLA@Ag24 membranes were lower than that of PLLA@Ag1 and PLLA@Ag2 ( Figure 5A). To further test the biocompatibility of PLLA@Ag membrane, L929 cells were used, it is interesting to find that almost all the nanofiber membranes presented good cytocompatibility on L929 ( Figure 5B). However, when the sample concentration was too high, for example, when 100% extract were used, the cell viability of PLLA@Ag24 was only 53.32% ( Figure 5B), much lower than that of other samples. From results in Figure 5, it can be concluded that the biocompatibility of PLLA@AgNPs membrane are very good, the samples prepared in this work may have potential application as GBR membrane.
CONCLUSIONS
In this work, AgNPs was immobilized on the surface of PLLA nanofibers via the simple mussel inspired method to obtain antibacterial PLLA nanocomposites (PLLA@Ag). The in vitro test showed that the PLLA@Ag have excellent antibacterial property, more interestingly, the PLLA@Ag can have long term antibacterial effect, even after 2 weeks, the prepared sample can still prevent the growth of bacterial. The cytotoxcity test showed that the PLLA@Ag sample showed little toxicity, when MC3T3 cell was used, all the samples showed a little toxicity when compared with PLLA. However, when L929 cells were used, the samples showed comparable biocompatibility with PLLA. The method used in this work was simple and versatile, and may be used to immobilize AgNPs on other materials, besides, due to the various in vivo application of PLLA, such as GBR membrane, the PLLA@Ag may also have broad applications as implants that can prevent infections.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding authors.
AUTHOR CONTRIBUTIONS
JWe and LL proposed this project. JWa prepared all the materials. LZ carried out the antibacterial test and cytotoxicity test. XZ and RW help to analyze the results. JWa and JWe wrote the manuscript. JWa and LZ have the same contribution to this work. | 4,356.8 | 2020-04-08T00:00:00.000 | [
"Materials Science",
"Medicine",
"Engineering"
] |
Application and optimization of drag reduction characteristics on the flow around a partial grooved cylinder by using the response surface method
ABSTRACT It is well known that drag reduction occurs when the flow is passing by a grooved circular cylinder at certain Reynolds numbers, which has been used as a powerful energy saving method in a broad range of circumstances. However, a challenge here is how to evaluate the combined effects of depth, width and location of a given triangular groove set covering half of the cylindrical surface area. A useful approach to quantitatively analyze the influence of these different factors on drag reduction using the response surface methodology is described here. The flow characteristics, including drag coefficient, flow velocity, turbulent kinetic energy and vorticity, were calculated by numerical simulation. The results showed a great drag reduction effect under the legitimate set of groove structure parameters at a super-critical Reynolds number providing a base for optimization process in various engineering applications. The drag reduction mechanism found from this research could extend to other cases and should provide insights into engineering applications like car grilles.
Introduction
With the increase in energy consumption, there is a growing pressure to improve aerodynamic performance in order to achieve the energy saving for different applications, especially through reducing the drag force, a major component in energy consumption. For example, the overall drag costs a significant portion of energy consumption in transportation systems, theoretical as well as experimental researches have been conducted to reduce the vehicle aerodynamic drag. Since the radiator grille caused drag force affects the aerodynamic outcome of a vehicle including the cooling system, it has been considered as a major target for drag reduction as well as efficient fuel utilization. In some cases, the drag reduction can be achieved up to 7% by changing the model of radiator grille (Jama, Watkins, & Dixon, 2006). Therefore, it is attractive to modify the structure of grille leading to the optimal gas flow around the grid that will give rise to the reduction in aerodynamic force.
For some vehicles, the grille is designed in a cylinderlike shape, and therefore, the principles of the flow around the cylinder can be applied to grille design for the goal of reducing the aerodynamic resistance coefficient, Moreover, it can also be used in many other applications such as the design of the natural gas pipeline surface (Luo & Zhang, 2012). Previous studies revealed CONTACT Xiaowen Song<EMAIL_ADDRESS>that cylindrical surface covered with a non-smooth structure made significant changes in the flow characteristics at the high Reynolds number (Achenbach, 1971). When the boundary layers separate from the sides of a cylinder and form the free shear layers, the Karman vortex street generated behind the cylinder is unstable as the Reynold number of the flow over cylinders correspond to the sub-critical Reynolds number (Doolan, 2010). However, the non-smooth cover on the surface of the cylinder will reduce the coefficient of drag in great efficiency as it causes the critical regime at a lower Reynolds number (Rodriguez et al., 2017). This phenomenon has been verified by different experiments, which tested different non-smooth structures on the cylindrical surface of certain applications such as dimples (Bearman & Harvey, 1993;Tan, Koh, & Ng, 2016), grooves (Yamagishi, Kimura, & Oki, 2013;Yokoi, Igarashi, & Hirao, 2011). Numerous researches have drawn the conclusion, that non-smooth structure can delay the full separation of the boundary layer and optimize the flow condition behind the cylinder as the drag force is directly related to the Karman vortex street and the full separation point of the airflow (Yamagishi & Oki, 2007). In addition, the same non-smooth surface with different designed parameters have also been examined as an important aspect in drag reduction, for example, the influence of single rectangular groove on the flow characteristics around circular cylinder have been reported (Canpolat, 2015;Canpolat, 2017). U-grooved (Alonzo-García et al., 2013), Vgrooved (Alonzo-García et al., 2014) or cactus-inspired grooved cylinder (El-Makdah & Oweis, 2013;Karaki, Abboud, Daher, Osman, & Oweis, 2008) have been designed and tested in research, and the effects of groove number (Yamagishi & Oki, 2005), and sharpness (Yamagishi & Oki, 2007) were also investigated with respect to their influences on the aerodynamic performance. Therefore, the drag reduction characteristics of flow around the cylinder covered with non-smooth surface mimic the condition of a moving automobile at highspeed. And the theoretical base of the engineering application in this situation can be established. Besides, previous studies focused on exploring the drag reduction impacts by individual parameters of non-smooth structure (Aoki, Lee, & Oki, 1998;Yamagishi & Oki, 2005, 2007. In order to achieve a better design method for applications, the effects of various structural parameters on drag reduction were studied in considering these parameters collectively rather than individually. The interactive influences were also explored. By using the response surface methodology, studying the interactive relationship among various factors in a given situation was achieved in this research. Meanwhile, through the establishment of the hierarchy of the effects among these factors, a standard procedure concerning the nonsmooth design for engineering application has been set up. To validate the accuracy of the flow field obtained by numerical simulation, a particle image velocimetry (PIV) test was conducted.
Problem description
Increasing flow characteristics based researches demonstrated that the drag reduction is caused by the delay of full separation point due to the turbulence transition in the boundary layers, which suggests that the groove placements in front of the separation point play a major role in the occurrence of drag reduction. However, in most of the studies, the non-smooth structures were arranged as a whole unit covering the entire surface of the cylinder without considering the regional influence (Yamagishi & Oki, 2004, 2005, 2007. Considering the previous analyses were based on full coverage of the nonsmooth structures on the cylinder surface, alternative experiments should be designed to explore whether the grooves on the back half of cylinder will help to reduce the drag force, and to analyze the differences in flow characteristic between the semi-grooved cylinder and the fully grooved one.
Model description
To reduce the challenges in the manufacturing process while still meeting the requirements for the engineering application of interest, we choose the drag reduction model based on triangular groove structure covering the surface of the cylinder. The analysis cylinder was 0.025 m in diameter and 0.15 m in length resembling the size of most car grilles. The size ratio between groove and cylindrical was setup based on the analysis model in Yamagishi's experiment which gave better drag reduction, and two models with different non-smooth surface coverages were established: a fully grooved model, Model A, in which a 0.025 m diameter cylinder was attached with the grooves that were 1.875 millimeter in width and 0.26 millimeter in depth following the ratio between groove and cylinder in the model described by Yamagishi' s experiment (Yamagishi & Oki, 2005). A total number of 32 grooves were arranged with an angular interval of 11.25°. A semi-grooved model, Model B, was also established, in which half of its exterior surface area was covered by the grooves with the same size as model A (Figure 1).
Calculation domain
The analysis was made in the unsteady three-dimensional turbulent flow with following calculation domain set: the distance from the center of the circle to the upstream boundary surface was 6d (Note: d represented the diameter of the cylinder), whereas the distance to the downstream boundary was 16d. The wall boundary surface on both sides was 6d each away from the center of the circle. The size of the calculation domain is slightly larger than what is reported by Yamagishi and Oki (2005) to ensure the solution is independent of domain extents. The three-dimensional calculation domain is shown in Figure 2.
The boundary conditions for numerical simulation
When a fluid flows pass a cylinder, the increase in Reynolds number leads to the sudden drop of drag coefficient after sub-critical region, followed by a slight rebound in the super-critical region, resulting a constant value in the trans-critical region (Zdravkovich, 1994). The value of C d will be small in the super-critical region, where the Reynolds number for the flow passing the cylinder with 32 grooves is ranging from about 4 × 10 4 to 6 × 10 4 (Yamagishi & Oki, 2005). The Reynolds number of the model was designed within this range to reduce the drag force. Meanwhile, for studying the effect of grooved surface on drag reduction in high-speed wind, the inlet velocity was set at 35 m/s (78 mph or 126 km/h) modeling the car speed on highway giving the Reynolds number Re d = 5.83 × 10 4 (Re d = (ρUd/μ), ρ: air density, ρ = 1.2 kg/m 3 , μ : dynamic viscosity of air, μ = 1.8 × 10 −5 Pa s), which is the exact conditions of the supercritical region. The outlet boundary was defined under the assumption that the flow is fully developed. One side boundary was set up as the symmetry wall, and others were established as stationary walls (slip wall).
Inlet boundary conditions: a uniform incoming fluid velocity is set at the entrance of the computation region. The speed components in three vector directions were set as follows: Outlet boundary conditions: The outlet boundary was set based on the complete development of the flow field at the back of the calculation domain. The pressure and velocity were set as follows: The establishment of physical model The flow is unsteady and incompressible. The Realizable k-model (Shih, Liou, Shabbir, Yang, & Zhu, 1995) was used in combination with the standard wall function near the cylinder surface as this turbulent model gives a superior performance in predicting the turbulent boundary layer separation at high Reynolds number (Mulvany, Tu, Chen, & Anderson, 2004). And the numerical results calculated by the realizable k-model are close to the experimental results under the same Reynolds number (Chen, 2013). The conservation equations of mass and momentum were also used to control the main flow parameters.
The mass equation is shown below: Momentum equation is as follows: ρ: air density; τ ij : shear stress, in which the i is the normal direction of the surface, whereas the j is the direction of the force; F x : the body force in the direction of x; F y : the body force in the direction of y; and F z : the body force in the direction of z. (The body forces in three directions were set as zeros since the gravity is insignificant.)
The establishment of grid models
The grid model for the rectangular cuboid was generated using unstructured grids by STAR CCM + 12.02, which simulates wake of cylinder accurately (Vermeire, Witherden, & Vincent, 2017). Also, it is reliable in predicting the boundary layer transition (Lin, Raghunathan, Raghunathan, & Mcilwain, 2012). To verify the independence of the computational grid, the drag coefficients of the fully grooved model (Model A) were calculated in STAR CCM + 12.02 under five grids with multiple densities. The calculation results are shown in Table 1. (The drag coefficient C d is defined as (2F d /ρu 2 A), in which the F d represents the drag force that is by definition the force component in the direction of the flow velocity, the ρ stand for the mass density of the fluid, u means the flow speed of the object relative to the fluid, and A is the reference area.) The calculation results proved that when the number of elements in mesh reached 989867, the calculated values of the two parameters remain approximately the same: the difference between C d values calculated by models with 989867 and 1211330 elements is 0.362%. Therefore, the flow solution showed almost no change as the computational mesh was refined in this case. Thus the mesh with 989867 elements was chosen for this study. The total thickness of the boundary layer was 0.8 mm, and the average first-cell y+ value was 33.724, close to the 30 used in a turbulent wall-model. The whole meshes of the calculation domain are shown in Figure 3, and the shapes of grooves and meshes adjacent to the circular cylinder surface are shown in Figure 4.
The verification of the hypothesis
The grid models were incorporated into the versatile fluid analysis software package STAR CCM + 12.02, where the flow characteristics were calculated by finite volume method. Before that, to make sure that the solution does not change within refined time interval with a tolerance less than 5%, the independence of time interval was verified by calculating the drag coefficient of Model A under five different time interval, the computational result is shown in Table 2, the time interval was presented in a non-dimensional form T. (T = (u × t/d), u: the velocity of fluid, t: the time step, d: the diameter of cylinder).
As the value of the drag coefficient did not change beyond the tolerance when the non-dimensional time interval refined to 0.112, this size was chosen to define the time interval. The measurement of the smooth cylinder was also conducted in parallel for comparison, the simulation result of drag coefficient is 1.21 when the value was averaged over 0.1s, which agrees with the measurement of Wieselsberger (1921) at Reynolds number Re d = 5.83 × 10 4 and suggests the reliability of this drag measurement.
The time series of the instantaneous drag (C d ) and lift (C l ) coefficients for models A and B are presented in Figure 5, the numerical simulation confirms that cylinder surface with semi-grooved attachments oriented toward the incident flow showed more significant effect on drag reduction compared with the cylinder fully covered with grooves, as the drag coefficient for model B is about 0.785, smaller than 0.83 obtained from Model A at Re d = 5.83 × 10 4 . And the amplitude value of the lift coefficient for model B is 0.33, also smaller than the 0.37 for Model A when the amplitude value of both cases tend to be stable, showing the positional effects of the groove on drag reduction.
which is by definition the force component in the direction of the flow velocity; ρ is the mass density of the fluid; u is the flow speed of the object relative to the fluid; A is the reference area. C l = (2L/ρu 2 A), where: L is the lift force; ρ is the mass density of the fluid; u is the flow speed of the object relative to the fluid; A is the reference area.)
Optimization procedure
In our efforts to optimize the model, it is obviously time consuming to perform a great deal of CFD simulations, one way to overcome that is to simplify the grid model and use some advanced turbulence model like modified Spalart-Allmaras model (Pasha, Juhany, & Khalid, 2018) to predict the boundary-layer interactions. Also, it can be solved by an alternative better performed method than the calculation of the CFD tools Fotovatikhah et al., 2018). Some of the advanced algorithms like PSO (IPSO) algorithm and neural network are used for optimization (Zhang, Zhang, Tezdogan, Xu, & Lai, 2018).
Response surface method
The response surface is a method that uses a series of definitive experiments to fit the response surface for simulating the real limit state surface. The basic idea is to set an analytic expression between the limit state function and the basic variable. The advantage of the response surface method is that every level of the experimental parameters can be continuously analyzed during the optimization for the test conditions. The effects of different factors can be studied simultaneously and the research results can be demonstrated by graphics intuitively. The response surface method can obtain an optimal result within a certain range more efficiently by establishing the multivariate quadratic regression model.
Selection of optimal parameters
The comparison experiment between the full-and semigrooved cylinder shows a negative effect on drag reduction by the grooves oriented in the direction against the incident flow. In order to conduct a comprehensive study to evaluate how the groove set near the separation point affects the drag reduction, the layout of the grooves on the cylinder surface was changed by adjusting the angle α between the rearmost groove and the forward stagnation point from 90°to 100°when the cylinder surface was partially covered by grooves ( Figure 6).
And the size of groove, including the width and the depth, is also thought to give a great influence on the characteristics of the flow near the cylinder surface. Thus the three levels' response surface method (RSM) test, by which the influences of different factors can be considered simultaneously for engineering application, was designed to find the influence of multiple factors on the drag reduction. At the same time, the combined effects of different factors on drag reduction can also be studied in this process for future application. According to the aforementioned predication based on the simulation results, the depth, the width, and the angle of the rearmost groove on both side of the cylinder surface were chosen as the independent variables. For an affordable manufacture process, the width and the depth of grooves were changed on the basis of model A and met the requirements of the cylinder size as used in engineering application, since it was proved to give a better effect on drag reduction (Yamagishi & Oki, 2007). The depth and width are fixed for all grooves in a model. The angle α of the rearmost groove was changed from 90°to 100°i n order to discuss the influence of groove in this position, which was located near the separation point with its orientation against the incident flow.
The level-factors and the test plan are presented in Table 3:
Results of the optimization
In order to intuitively establish the corresponding relationship between the studied factors and the responses, as well as to improve test efficiency, the Box-Behnken (Box & Behnken, 1960) method was used to establish the response surface model, in which the final optimization results are evaluated by the reduction of the drag coefficient. Also, the CFD simulations were performed and used as inputs for the optimization procedure for the relative accuracy of the simulation results.
Due to the design requirements of the Box-Behnken method, the experiments with a center value of the three parameters were repeated five times. A series of tests were conducted according to the test plan set up by STAR CCM + 12.02. The test plan and the simulation results of the drag coefficient for every test are shown in Table 4: Compared with the fully grooved model A, the drag coefficient value in test 6 under the partially grooved model is reduced to 0.719. The drag reduction ratio was calculated using the following equation: where the C d1 is drag coefficient of model A and the C d2 is drag coefficient of an optimized model. For the model used in test 6, the value of its R D reaches to 13.37%, which positively correlates with the observed drag reduction.
The test results demonstrated that, by adjusting the sizes and placements of the rearmost grooves, the capability of grooves on cylinder surface drag reduction will be further improved.
Analysis of the simulation results
In order to study the effects of individual factors on the drag reduction performance and explore the crosseffects among them, the analysis software Design-Expert (Montgomery, 2004) was used to establish the response surfaces to draw a limit-state surface and to deal with the conversion between each parameter (input) and the C d value (output). The significance of linear and quadric terms consist of different parameters, including the A, B, C, AB, AC, BC, A 2 , B 2 and C 2 (A = depth, B = width, C = angle α of the rearmost groove) were analyzed by the variance module. To ensure the significant characteristic of each independent variable in the regression equation, the model was optimized based on the initial test result of each term to guarantee the statistical significance of the regression equation.
The results of the variance analysis calculated by ANOVA for the response surface quadratic model (Montgomery, 2004) are shown in Table 5 without insignificant terms.
Based on the results of variance analysis, the model is sure of statistical significance as the p-value is less than 0.05. It means that the model has a high reference value on research. Meanwhile, the significant analysis of independent variables shows the significance of linear terms A, B, C, and quadratic terms AB, A 2 , B 2 , C 2 (p-value < 0.05).
The significance of the quadratic term AB verifies the reciprocal effects between these two factors. It explains why the effect of one independent variable results inconsistent changes of another one at a different level, which should be considered in the process of design optimization for engineering application and cost saving.
The p-value of fit-lack is defined as an x 2 -test to measure how well the model fits the data and whether the model error is equal to the assumed value. The pvalue of fit-lack reaches 0.1479 and proves the reliability of the model containing all the necessary items. The null hypothesis states that the model error mean square is equal to the hypothesized value (pure error), versus the alternative that it is greater than the hypothesized value. When the test p-value is greater than 0.05, the null hypothesis can be accepted in the form without fit. Therefore, the regression equation can be used to replace the real value of reality for analyzing. The equation of quadratic response surface regression was built as follows: (A = depth, B = width and C = angle α of the rearmost groove) The diagnosis of the distribution relationship between the residuals, the predicted value and the computational value are shown in Figure 7: The normal plot of residuals shows a linear correlation, which means there is a normal distribution difference between the actual observations and the regression estimated by regression analysis. The corresponding relationship shows a non-regular distribution between the predicted value and the residual value, but a linear distribution between the computational value and the predicted value. This regularity in distribution represents the high confidence of the predicted value calculated by the regression model. The contour maps of the influence of the factors on the drag coefficient were used to demonstrate the degree of influence, as well as to express the interactive effect among the factors. The results are presented in Figure 8.
The density of the contour line is a standard used here to evaluate the influential levels of different factors on drag coefficient, namely, when the contours in one certain direction show a tendency to be more denser than the other during the change of independent variables, it means that the alternation of the parameter represented by this direction can lead to a larger change of the drag coefficient translating to a higher impact ability of this factor. Through the analysis of the three contour plots, it can be concluded that depth has been shown to have the greatest impact on the solution, followed by the effect of angle, then width afterwards, which provide a practical reference in the application.
From the three-dimensional response surfaces shown in Figure 9, it can be concluded that the influence of the width and the angle α of the rearmost groove on the drag coefficient are quadratic, which means there is a minimum drag coefficient within the interested range of these two parameters. And the relationship between factor A (depth) and the drag coefficient shows a linear correlation, indicating that the drag force is reduced when the value of depth decreases within the range allowed by grilles size. Therefore, a better groove structure can be designed to reduce the drag coefficient under this condition.
Selection of the optimal model
Through the analysis of the regression equation, an optimal model can be built up to have all these three factors with significant effects on drag reduction. The optimal solutions obtained by the numerical optimization module in the software Design Expert is shown below ( Table 6): Five different optimized solutions were obtained by the regression equation, and an optimal solution that shows the highest perspective was selected for numerical simulation and experimental validation. To explore the mechanism of the drag reduction by this optimized groove structure, the simulation model was built up with a groove structure of 0.13 mm in depth and 1.886 mm in width. The angle α between the rearmost groove and the forwarding stagnation point was 92.904°, the diameter of the cylinder, the size of the calculation domain, and the settings of the boundary conditions were the same as the simulation of Model A to ensure the reflection of numerical simulation. Figure 10 presents the numerical results of the drag coefficients C d of the three models.
Based on the figure, the value of the drag coefficient finally averages around 0.695, with an error of 0.967%. Compared with the predicted result in the optimal solution designed by regression equation, the numerical simulation is considered to be reliable as the error is less than 5%, which is also verified for the reliability of the response surface. The optimal model shows 12% larger drag reduction compared to model B, and this value is larger than any test in the response surface method. Therefore, it can be concluded that the response surface method can be reliably used for designing the parameters of the non-smooth structure within the established parameter range to set up the optimal model for engineering application. Meanwhile, the drag coefficient value of the optimal model based on the aforementioned solution is smaller than other models with different curved sectional grooves reported in the previously study (Yamagishi & Oki, 2007).
Results and discussion
For achieving a better result in drag reduction when flow passes a cylinder, and expanding the useful field of non-smooth surface in engineering applications, the mechanism of drag reduction was studied through the measurable results of numerical simulation.
While moving around a cylinder, the flow will complete the transition from the laminar flow to the turbulent flow on the cylinder surface, and turns irregular. In some real cases of applications, positive pressure on object surface generally varied greatly, among which a narrow windward tends to suffer higher wind pressures, while a wide one is associated with severer negative wind effects (Mou, He, Zhao, & Chau, 2017). In this case, a large region of inverse pressure will be formed within the Karman vortices behind the cylinder. With the pressure difference between the front and rear area of the cylinder, the total drag force increases significantly, and the pressure drag becomes the main component of it, contributing more than 98% proportion wisely, whereas the skin friction (or viscous) component is responsible for the remaining 1-2% (Achenbach, 1971). Based on this mechanism, the roughness induced drag reduction on the cylinder surface could be interpreted by the convergence of the difference in pressure. This forms a specific aspect in the current research.
It is thought that the roughness on cylinder surface will cause partial separation or reattachment of the flow nearby the edge of the grooves. The flow situation will be changed by these repeated cycle of partial separationattachment, and a closed circulating flow will be formed within the cavity of the groove, leading to the turbulence of the flow in the boundary layer, especially near the cylinder surface. As the energy containing parts of the flow plays an important role in controlling vortices formation in the wake region (Apacoglu, Paksoy, & Aradag, 2011), the disturbance of flow accompanied by the momentum transfer causes higher kinetic energy of the flow at the bottom of the boundary layer. Thus the area of the rear adverse pressure can be changed, and the flow is able to overcome more wall shear force on the cylinder surface and to shift the full separation point towards the downstream side. In this way, the drag force reduction is achieved when the pressure drag becomes smaller.
However, the above function of drag reduction is only applicable by the grooves arranged at the position upstream of the full separation point. When the nonsmooth structure located near the full separation point, the turbulence of flow at this position may speed up the vortex shedding as the flow separating off at the edge of the groove would not be able to reattach to the cylinder surface anymore. Also, the grooves set at the rear surface of the cylinder may not help to reduce the drag force, but increase the friction force. Furthermore, the larger turbulence as well as the higher Reynolds number of flow caused by grooves at this area may lead to the lower pressure, all of which have negative effects on drag reduction.
According to the aforementioned flow characteristics influenced by grooves at different regions of the cylinder surface which is responsible for the main causes of the drag reduction, the parameters of flow near the cylinder surface, the turbulence behind the cylinder, and the vorticity distribution were all discussed. The fully grooved cylinder model A, the semi-grooved cylinder model B and the optimal model C designed by the response surface method were used for analysis when the Reynolds number Re d = 5.83 × 10 4 . By contrasting the different flow parameters of these three models obtained from the numerical simulation, the mechanism of drag reduction when the cylinder was partially grooved with optimal structure was studied deeply. The flow is orientated from left to right in all figures.
Turbulent distribution
The distributions of the turbulent kinetic energy near Model A, Model B and optimal model surfaces with Re d = 5.810 4 are shown in Figure 11. By comparing the distributions of turbulent kinetic energy in model A and model B, it is clear that the groove structure located near the full separation point causes larger flow turbulence at the corner, which can be seen in the figure, and the partial separation at this place may trigger the final separation of the flow early without reattachment anymore. The region of high turbulent kinetic energy is more concentrated and closer to the cylinder at the downstream side in model B. That demonstrates that the grooves have a negative effect on the delay of boundary layer separation if they are too close to the full separation point.
The different distributions of turbulence between model B and optimal model show the advantage of the optimized grooves in drag reduction: the optimal model has a larger turbulent kinetic energy near the cylinder surface compared with Model B. The region with large turbulent kinetic energy adheres to the surface more broadly in the optimal model, leading to the exchanges of the momentum within boundary layer closer to the cylinder surface. Thus the flow at this place has higher kinetic energy to overcome more viscous force as well as adverse pressure, resulting the full separation point shifting towards the downstream side. Figure 12 presents the distributions of turbulent kinetic energy near the surface of cylinders at angle θ = 100°from the stagnation point, revealing the characteristic of this flow parameter near the cylinder surfaces at downstream side. The distribution of turbulent kinetic energy near the cylinder surface at rearward position shows a regular pattern that, the position of maximum turbulent energy in the boundary layer of the three models is located at different heights above the cylinder surface. As the aerodynamic drag decreases, it corresponds to the phenomenon that the position of maximum turbulent kinetic energy is closer to the cylinder surface and the full separation point is much delayed. This simulation results can well explain the aforementioned theory: The large turbulent flow of model A is farther away from the cylinder surface at rearward location compared with model B. Meanwhile, less exchange of flow kinetic energy happens near the cylinder surface in model A. These are adverse effects on the delay of the full separation point. Meanwhile, in model B, the value of turbulent kinetic energy is larger at this position, which corresponds to the visual result of turbulent distribution in Figure 11.
Similarly, because of a smaller geometric structure in the optimal model, the reattachment of flow in optimized groove is more inside compared with that in model B. The position of maximum turbulent kinetic energy for optimal model is about 0.5z/d when it is about 0.5025z/d in model B (z is the height from center of cylinder in radial direction, d is the diameter of cylinder). Thus the optimal model seems to keep the transfer of the momentum closer to the cylinder surface.
Velocity distribution
The computational results of the velocity distributions near the surfaces of model A, model B and optimal model with Re d = 5.83 × 10 4 are shown in Figure 13.
Because of the gas viscosity, velocity gradient of flow exists in the boundary layer. The disturbance of the flow causes the exchange of energy between the high-speed flow and low-speed flow, which increases the velocity of flow at the bottom of the boundary layer. The simulation results clearly verify that: as the flow at the bottom of boundary layer in model B contains more turbulent kinetic energy, the region with high velocity becomes larger and gets closer to the cylinder surface compared with that in model A, which confers the flow near the cylinder surface more kinetic energy to adhere to the surface and to delay the full separation of the boundary layer in the area of reverse pressure. Meanwhile, this characteristic also leads to a consequence that the large velocity of flow is more close to the cylinder compared with model A, and decreases the width of the wake. The same velocity distribution characteristics also appear in the comparison of the visual results between model B and optimal model. To verify the above regularity, the velocity distributions in the boundary layer at angle θ = 100°from the stagnation point near the cylinder surface of models A, B and the optimal model were shown in Figure 14.
From the numerical results of model A and model B, it can be found that at θ = 100°, the value of velocity at the rear positions of the semi-grooved cylinder becomes larger compared with the surface of the full-grooved cylinder. The position where the velocity reaches the maximum value shifts to a low z/d, which is much closer to the cylinder surface.
Meanwhile, at the same distance from the cylinder surface, the optimal model shows the large flow velocity, which is consistent with the turbulence distribution at this location. Thus, the maximum flow velocity increase in turn from model A to model B and optimal model.
The pressure coefficient C p
To confirm the influence of the studied parameters on the backward delay of the full separation point, the full separation points of these different models at Re d = 5.83 × 10 4 need to be precisely determined through numerical simulation. Therefore, the pressure coefficient C p was introduced to determine the location of the full separation point, which was calculated using the following equation: where P is the static pressure on the cylinder surface, P 0 is the static pressure on the inflow boundary surface, ρ is the air density and U is the uniform flow velocity. Yamagishi drew a conclusion that when the partial separation and reattachment of turbulent flow repeated in the area nearby the grooves, the value of the relative pressure coefficient C p changes periodically on the cylinder surface in the upstream side, the pressure becomes large in the valley of the groove and small in the peak of the groove. In additional, the backpressure coefficient in the rear area becomes almost invariable after a certain position point as the flow would not reattach to the surface any more, and this position point can be considered as the point of full separation (Yamagishi & Oki, 2007). Thus the pressure coefficient C p can be used to predict the position where flow separating. The computational results of the pressure distributions on models A, B and optimal model surfaces at Re d = 5.83 × 10 4 are shown in Figure 15. The ordinate shows the pressure coefficients C p and the abscissa shows the angle from the forward stagnation point. Figure 15 demonstrates that the point where the pressure coefficient C p of model B begins to stabilize is at about 105°from the forward stagnation point, larger than the approximal 103°for model A, which is corresponding to the position of the boundary layer separation. And this result is similar to the ones reported in Yamagishi's experiment (Yamagishi & Oki, 2007). Also, the backpressure coefficient of model B becomes larger compared with that of model A near the back surface of the cylinder that is against to incident flow, verifying the smaller pressure drag in model B. Meanwhile, the full separation point of the optimal model is about 110°accompanied by the largest backpressure coefficient, which indicates a better effect of optimized grooves on drag reduction.
The velocity gradient
The velocity gradient near the cylinder surface is affected by the combined effect of the boundary layer as mentioned above, and the value of velocity gradient (du/dz) = 0 corresponds to the partial separation or reattachment of the flow at this location where the reverse flow and the small vortex happened. More precisely, the velocity gradient below 0 corresponds to the Figure 16. The full separation points determined by this method are also marked in the figure, which is consistent with the distributions of the pressure coefficients.
The vortex shedding
The delay of vortex shedding and the smaller width of the Karman vortex street caused by the retardation of the separation point are the key reasons for drag reduction. For verifying the size and the shape of the shedding vortices, visual simulation analysis based study of flow characteristic behind the cylinder is required. The computational results of the vorticity distributions behind models A, B and optimal model with Re d = 5.83 × 10 4 are shown in Figure 17(a) (b) and (c), which clearly demonstrates that the value of vorticity and the region of wake in model B become smaller compared with that in the models A. The same performance also occurs in the comparison analysis between model B and optimal model. And this characteristic of vortex shedding behind the cylinder is mainly responsible for the reduction of pressure drag.
The computational results of the vorticity at x/d = 1.1 behind the cylinder of the three models at the same time point of T = 0.2 s are shown in Figure 18. The trends of the numerical distribution are in close agreement with the simulation results: the vorticity behind the cylinder of model B becomes smaller at approximately 2.6% compared with that of model A by quantitative analysis, and the position where the value of vorticity reaches the maximum shifts to the low y/d, which means a small region of shedding vortices. The percentage difference in the position of maximum vorticity between model A and model B is about 5.8% in the z-direction. In the case where there exist the optimal grooves on the cylinder surface, the distribution of vorticity tends to be more in the middle of flow field compared with model A and model B, which shows successful optimization on drag reduction as the percentage difference between model B and optimal model is about 4.1%.
In addition, it is also feasible to characterize the width of the wake qualitatively using the distribution of turbulent kinetic energy at a certain position behind the cylinder. The computational results of the distribution of turbulent kinetic energy k/U 2 at x/d = 1.1 behind the cylinder are shown in Figure 19, which are also in agreement with the aforementioned results. The value of turbulent kinetic energy behind the cylinder of model B becomes smaller compared with that behind the cylinder of model A in quantitative analysis, the difference is about 6%, which proved the grooves set at the rear surface of the cylinder may cause larger turbulence. The percentage difference in wake size between model A and model B is about 6.4%, and the difference is about 6.8% between model B and the optimal model.
The backpressure
The computational results of the gauge pressure behind the cylinder of the three models at the same time point of t = 0.2 s are shown in Figure 20. Comparing the backpressure of model A and model B, it is evident that the pressure at the rear of the cylinder in model A is significantly smaller than that in model B, showing a larger area of adverse pressure, and the region of adverse pressure tends to be closer to the surface of cylinder. The different value of the backpressures causes the difference of the pressure drag between models can be explained as follows: the grooves arranged on the back surface of the cylinder also trigger the turbulence of flow and generate the vortices in the near region of the cylinder surface. Therefore, in model A, the velocity of the flow behind the cylinder is larger, and also the backpressure is lower compared with that in model B. That is the main reason why the grooves oriented in the direction against the incident flow have a negative effect on drag reduction.
The larger backpressure of the cylinder in the optimal model is mainly because of the retardation of full separation point caused by the optimized grooves, resulting in the delay of vortex shedding and the reduction of high vorticity area behind the cylinder. Thus the backpressure of the cylinder in an optimal model is larger and the drag force is smaller than that of the cylinder in model B.
The power spectra
As the frequency in generating the vortex is calculated from the fluctuation of the lift coefficient, Figure 21 shows the power spectra of C l for models A, B and optimal model. It is clear that the frequency of the generating vortex of model A becomes smaller compared with that of model B.
Based on the simulation results it can be found that the Strouhal number St (St = (f × d/u), f : frequency, d: diameter of the cylinder, and u: uniform velocity) of model B becomes larger compared with that of Model A, which also exists in the comparison between model B and optimal Model.
PIV tests in the wind tunnel
To validate the reliability of simulation experiment, the PIV technology was used in a wind tunnel to measure the flow characteristics, since the PIV technology can record the information of velocity distribution from a large number of space points at the same transient state.
Considering that the drag reduction of flow around the cylinder is mainly related to the position of the full separation point, the verification was implemented though the comparison of this parameter which is obtained by simulations as discussed earlier. A low-speed, return-flow wind tunnel with the test section of 0.3 m × 0.3 m was adopted in the PIV test. In order to accurately capture the distribution of velocity vector near the cylinder surface, the trace particles made of atomized oil produced by a particle generator were ejected into the wind tunnel and displayed under the laser irradiation. The PIV setup and post-processing related insight package is consist of two resonator laser for light emission, one high resolution CCD camera for image acquisition, one synchronizer for control coordination, and one flow display system. This insight package then combines with the external interface to give rise to the whole test device. The structure of the wind tunnel and the testing devices are shown in Figure 22. When the velocity of airflow near the cylinder surface decreases to a certain level under the action of the adverse pressure, the flow reverses its direction (compared to the main flow) and thus recirculation occurs. As the two kinds of flow from different directions meet, the flow in the boundary layer separates off the surface, and generates a large number of vortices. Different from the partial separation caused by grooves, this flow separation is more ferocious accompanied with larger vortices at the downstream side, and is not reattaching to the surface anymore. To present this phenomenon clearly, the streamlines visualization is shown in Figure 23.
For verifying the accuracy of this simulation result, an original image of the velocity vector field distribution in the x-z plane of an optimal model by the PIV test is shown in Figure 24.
The position of the final separation point in this experiment is about 110 degrees from the forward stagnation point, which agrees with the simulation result, proving that the high credibility of numerical simulation.
Achievements
The main conclusions and achievements are the following: (1) The mechanism of drag reduction in the flow passing the surface of a grooved cylinder was discussed. As the full separation point is decayed by the nonsmooth surface, the region of adverse pressure will be reduced. This is different from the mechanism of drag reduction when flow passes a plain surface with grooves, as the low-speed roller-bearing-like swirl is formed within the non-smooth unit, which translates sliding friction between the object surface and fluid into rolling friction, leading to skin friction reductions (Song, Lin, Liu, & Zhou, 2017;Song, Zhang, & Lin, 2017).
(2) The semi-grooved surface has more significant effects on drag reduction than the fully grooved pattern, with a maximum ratio of drag reduction up to 5.42% when the fluid speed is 35 m/s. The grooves oriented in the direction against the incident flow show a negative effect on the drag reduction as they generate the turbulence of flow and form the vortices near the rear area of the cylinder, which leads to lower backpressure. (3) An equation of quadratic response surface regression was established to demonstrate the relationship between the drag coefficient (C d ) and the parameters of the grooves, which can be used for describing the different effects of parameters on drag reduction more accurately providing a reference for engineering design. (4) The Depth (factor A) has the greatest impact on the solution, followed by the effect of the Angle α between the rearmost groove and forward stagnation point (factor C), and then the Width (factor B).
The experiments of response surface verify the interactive relationship between the Depth (A) and the Width (B), which explains that the effect of one factor on drag reduction shows inconsistencies at each level of the other one and offers a high reference value for engineering application on drag reduction when a fluid flow around a grooved cylinder. (5) An optimized groove structure was proposed with a better geometric structure to make the circle vortex smaller in the groove, which makes the disturbance of flow and the momentum transfer adhere to the surface of the cylinder more broadly and leads to the higher kinetic energy of flow at this place. The position of the rearmost groove was also discussed with respect to its delay the full separation point at most. (6) The position of full separation point in the optimal model was captured though the velocity vector field distribution in a low-speed and return-flow wind tunnel, and the simulation results were validated by the PIV experiment with respect to its accuracy in predicting the location of separated flow.
Limitations and improvements
The influence of different factors is explored and discussed for model optimization only focus on the effect of groove structure on drag reduction. In a practical application, there are some other factors that may influence the effect of drag reduction such as the interaction between multiple cylinders and the spacing of neighbouringgrilles, the starting angle of grooves, which was not considered. The effect of these factors on drag reduction will be investigated in future research. Besides, the study is limited to the engineering application of highway speed air flow, and it remains to be investigated whether it can be extended to the low or medium speed aeronautical applications. At the same time, the flow characteristics obtained by the study can be further explored in the water flow to extend the scope of potential applications. | 11,256 | 2019-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Profitability and Constraints Analysis of Women Entrepreneurs in Lagos State , Nigeria
This research assessed the constraints limiting the success of women entrepreneurs in selected local government areas of Lagos State, Nigeria. A sample of 120 women entrepreneurs were selected from three Local Government Areas of Lagos State in a two-stage sampling procedure. The data collected were analyzed using descriptive statistics, constraint analysis, budgetary analysis and multiple regression analysis. The predominant primary occupation was found to be trading 45 with 92.5% of the women in their productive years. The finding also revealed that a vast majority (92.5%) of the women entrepreneurs had formal education above primary level with 43.3% of them spent not less than 10 years in their business, which was presumed to benefit their enterprises. The women entrepreneurs faced certain constraints which affect their businesses, the highest ranked ones include; poor shop location (ranked 1), lack of long term finance (ranked 2) and competition from rivals (ranked 3) among others. The women entrepreneurs earned 40 kobo on every 1 naira sale revenue. The multiple regression results revealed that main occupation, business membership strength, initial capital outlay and total variable cost had significant effect on the net income of the women. Policy options from the findings include: Increment in funds invested in the business enterprises of these women entrepreneurs alongside reduction in cost could boost the possible expansion of their enterprises; provision of psychological, moral and financial support from members of the family is needed for entrepreneurial development and Government should provide cheaper sources of credit to the women with little or no collateral to encourage their enterprises growth, self-reliance which are necessary ingredients for nation’s development.
INTRODUCTION
Academics and government appear to be focused on entrepreneurship, because it symbolizes innovation and a dynamic economy.Female entrepreneurs have been identified as a major force for innovation, job creation and economic growth (OECD, 1997).This finding has spurned a lot of researches into women's ownership.Many women are entrepreneurs however; the global impact of female entrepreneurs is just beginning to gain intensity.The number of female business owners continues to increase steadily worldwide and it is estimated that sums owned by women account for between 25 and 33% of all businesses (Carter, 2000;Carter and Rosa, 1998).
In some regions of the world, transformation to a market economy threatens to sharpen gender inequality.Some of these changes are simply the legacy of a gender imbalance that exists prior to political and economic returns.Other changes reflect a return to traditional norms and value that relegated women to a secondary position.As countries become more democratic and gender inequalities lessen, more productive atmosphere for both sexes is provided (Allens and Truman, 1992;Anna et al., 2000).Women's productive activities, particularly in industries that empower them economically and enable them to contribute more to overall development; whether they are involved in small or medium scale production activities, or in the informal or formal sectors, are not only a means for economic survival but also have positive social repercussions for the women themselves and their social environment (UNIDO, 2001).
In many societies women do not enjoy same opportunities as men.In many transitional economies, progress has been achieved in opening doors to education and health protection for women but political and economic opportunities for female entrepreneurs have remained limited.Concerted efforts are needed to enable female entrepreneurs to make better economic choices and to transform their businesses into competitive enterprises and high-generating income economic activities (OECD, 1997).Entrepreneurship represents an appropriate opportunity for women all over the world, as entrepreneurship responds flexibly to entry, change and innovation.This potential has not yet been realized in an optimal fashion in most developing countries.A large number of women work in the informal sector but their contribution is not included in rational accounts (UNIDO, 1995).
There are a variety of constraints on women and the ability of women to upgrade their production continuously.These include poor access to market information, technology and finance, as well as poor linkages with support services and an unfavorably policy and regulatory environment (UNIDO, 2001).Although many of the constraints are shared by both female and male entrepreneurs, women entrepreneurs face additional obstacles; this is due to deeply rooted discriminatory socio-cultural values and traditions embedded, particularly in the policy and legal environment as well as in institutional support mechanism.In many instances, women are unable to benefits from services and must struggle to overcome or circumvent discriminations in business circles (UNIDO, 2001).Concern for the welfare of women prompted the UNDP to provide intervention fund for business start up.This scheme however, along with many NGO programmes provide funding for women; but concentrates on micro financing of business start-up by a group of women.The question of access to mainstream financial resources for female entrepreneurs is largely ignored by male dominated government and financial policy makers.
Finance is the most important aspect of any business.Non-availability of long-term finance and long procedures to access financial help, where available, have been identified as major constraints faced by women entrepreneurs (Otunaiya and Idowu, 2009).During the process of marketing of products, women entrepreneurs are faced with problem of poor location of shop, lack of transport facility and tough competition from larger and established units.Other challenges of women entrepreneurs include nonavailability of raw materials, high cost of required machine or equipment, lack of training facilities and non availability of labor.
When necessary resources are available to women entrepreneurs, women still hesitate to set up units or do not succeed in their ventures due to constraint imposed on them by their immediate environment such as family commitment (Aculai et al., 2006;Aidis, 2006).There is the need to consider the constraints limiting their success so as to determine the possible ways of solving these constraints.This study therefore, identify the constraints faced by women entrepreneurs, examine the cost and returns of women entrepreneurs and examine the effect of socioeconomic factors on women entrepreneurs' net income in some selected local government areas of Lagos State, Nigeria.
METHODOLOGY
The study area: The study was conducted in Lagos state, Nigeria in 2011.The state is the most populous conurbation in Nigeria.It is currently the second most populous city in Africa, behind Cairo and is currently estimated to be the second fastest growing city in Africa and the 7 th fastest in the world (UNDP, 2008).It has a landed area of 999.6 km 2 (385.9 m 2 ) and a population density of 7,941/km 2 (20,569.9/m 2 ).
A two-stage random sampling technique was used to obtain data for the study.The first stage involved the purposive sampling of three (3) Local Government Areas from five Local Government Areas under Lagos division.The three selected Local government areas included were Apapa, Surulere and Eti-Osa due to high presence of women entrepreneurs.The second stage involved the selection of 40 women entrepreneurs within each local government area, thus making a sample size of 120 respondents used for the study.
Analytical procedure: The data collected were analyzed using descriptive statistics, constraints analysis, budgetary analysis and multiple regression analysis.
Importance indices were constructed to identify the relative importance of constraints in women entrepreneurship.Women entrepreneurs were asked to rank the identified constraints on an ordinal scale (1 = not a problem, 2 = very serious problem).In the final analysis, the constraint with the highest value rank first.The use of important index in constraint analysis is replete in the literature (Alimi, 2001;Alimi et al., 2004).
Budgetary technique was used to determine the cost and returns of women entrepreneurs in the study area: The fixed inputs identified were vehicle, generator, furniture, electronic gadget, tool, drier among others.The components of the variable cost include, labor cost, transportation cost, energy cost, rent and other cost.
The effect of economic and social factors on the income of women entrepreneurs was captured by ordinary least square method using multiple regression model.The model in the explicit form is specified as: The explicit form of the model is:
RESULTS AND DISCUSSION
Socio-economic characteristics of women entrepreneurs: The distribution of women entrepreneurs is presented in Table 1.Large proportions (49.2%) of the women were aged between 31 and 40 years.Age of women entrepreneur is an important factor that affects their level of productivity.It shows that female entrepreneur posses an advantage of age to exploit the vast opportunities abounds them.Majority (75.8%) of the women were married.The status of women martially is believed to have significant effect on her attitude to work and in most cases it is used to measure the level of responsibility.Thus a vast majority of them have support from the home to run their business activities.Also, majority (60.8%) of the women had post-secondary education.Education is of great importance in the success of any business venture.It can indirectly determine the level of profitability of business as well as enhance the adoption of new innovation and technologies.Majority (76.7%) of the women had their family size ranging between 4 and 6.The size of the household has been a major determinant of involvement in entrepreneurship in the area of startup, credit procurement and repayment and utilization of microfinance facilities as reported by Aculai et al. (2006).
Majority (45%) of the women have trading as main occupation.Also, majority (55.8%) of the women sampled were traders when viewed from the form of business enterprise.The trading ranges from petty trading to large supermarkets; 35.8% were service providers such as event planning, event decoration and Master of Ceremony for social events while 8.3% were artisans (hairdressers, fashion designers).This result implies that majority of the women entrepreneurs depend largely on their enterprises for sustenance.Women entrepreneurial activities are not only a means for economic survival but also have positive social repercussions for the women themselves and their social environment as reported by UNIDO (2001).Hence, the women entrepreneurs participate in other activities and this implies that entrepreneurs are diversifying their source of income.The success or otherwise of women entrepreneurs is determined by entrepreneurs years of experience.This presupposes that a large percentage of the entrepreneurs have spent longer years on their enterprise implying sustainability.-------------------------------------------Not a problem -------------------------------------------Final ----------------------- Funding in business goes a long way in determining the amount of profit to be realized.Majority (46.7%) of the women sourced for fund through personal saving followed by 31.7% from spouse.This presumes that the women entrepreneurs are supported by their spouses after they have shown interest through their own personal savings.
Constraints faced by women entrepreneurs in their various enterprises:
The distribution of constraints facing the women in entrepreneurship is presented in Table 2.There are a variety of constraints facing women in their entrepreneurship development.These include poor access to market information, technology and finance, poor linkages with support service and an unfavorable policy and regulatory environments.The constraint analysis displays the challenged in order of importance or predominance amongst women entrepreneurs, the highest rank (first) being poor shop location, the secondly ranked is the absence of long term finance, the thirdly ranked is the presence of tough competition from surrounding homogenous enterprises and the least being the lack of transport and storage facilities.Other problems were ranked in order of importance.As observed by UNIDO (2001) these constraints are compounded by the need to compete in an aggressive business environment with rapid technological changes and the globalization of production, trade and financial flows.
Budgetary analysis:
The result of the budgetary analysis on the profitability of business enterprise of women entrepreneurs is presented in Table 3.The total variable cost, total fixed cost and total revenue were estimated as 85,200.83, 307,302.66 and 654,750 N, respectively.The gross margin and net income of the women were estimated as 569,549.17and 262,246.51N, respectively.This implies that the women entrepreneurs were making huge return in from their business activities.The profitability index of 0.40 implies that the women get 40 kobo on every N1 sale revenue.
Effect of socio-economic factors on the income of the women entrepreneurs:
The result of the multiple regressions on the effect of socio-economic factors on the income of the women entrepreneurs is presented in Table 4.The F-value of 7.8 was significant at 1%.This attests to the overall good fit of the model.The adjusted R-square of 0.76 implies that about 76% of the variation in the net income is jointly explained by the explanatory variables.The type of occupation the women entrepreneurs focused on mainly (Business/artisanship) had a positively significant effect (1%) on the net income of the women.This implies that the artisans predominantly made more income than other groups.The amount of labor available to these women entrepreneurs had a positively significant effect (at 5%) on the net income raked in by them.It can thus be presumed that as they have more family labor which is relatively free, they make more profit.This may however not be the case if they were to employ the services of hired labor.The initial capital investment had also a positively significant effect on the net income of the women entrepreneurs.The coefficient was positive with a significance level of 1%.We can thus presume that as the amount of capital invested increased, the net income of the women entrepreneur also increased.The cost expended on variable inputs (total variable cost) or cost of running the business enterprises by the women entrepreneur also had a positively significant effect on the level of net income accruable to them.In other words, as more is invested in the daily running of the business, the profit increases.This might however be confusing because in economic theory, we expect profit to increase when variable cost decreases.But the meaning of the earlier relationship is that, the more the additional capital invested in the business by the women entrepreneurs, the more the profit they make i.e., as the business expands, it enjoys economies of scale.
CONCLUSION
This research assessed the constraints limiting the success of women entrepreneurs in selected local government areas of Lagos Division.The predominant primary occupation was found to be trading with majority of the women in their productive years.The study also showed that a vast majority of the women entrepreneurs had formal education above primary level with 43.3% of them spent not <10 years in their business, which was presumed to benefit their enterprises.The women entrepreneurs faced certain challenges which affect their business, some of which include; poor shop location (ranked 1 st ), lack of long term finance (ranked 2 nd ) and competition from rivals (ranked 3 rd ) among others.The women entrepreneurs earned 40 kobo on every one naira sale revenue.The multiple regression results revealed that main occupation, business membership strength, initial capital outlay and total variable cost had significant effect on the net income of the women.
RECOMMENDATIONS
From the findings of the study, these following recommendations are inferable: • Increment in funds invested in the business enterprises of these women entrepreneurs alongside reduced cost of expenses could boost the possible expansion of their enterprises.
Table 1 :
Distribution of socio-economic characteristics of women
Table 2 :
Distribution of respondents based on constraints faced
Table 3 :
Cost and return of women entrepreneurs
Table 4 :
Result of multiple regression on the effect of socioeconomic factors on the income of women entrepreneurs • Provision of psychological, moral and financial support from members of the family is needed for entrepreneurial development.• Government is advised to provide cheaper sources of credit to the women with little or no collateral to encourage their enterprises growth, self-reliance, which are necessary ingredients for nation's development.OECD, 1997.Entrepreneurship and SMEs in transitional economics.The Visegrad Conference, OECD Proceedings Plans.Otunaiya, A.O. and A.O. Idowu, 2009.Credit and factors affecting farm incomes in food crop production in yewa north local government area of ogun state.Nigeria Int.Multidiscip.J. Sci.Res., 2 (2): 86-91.UNDP, 2008.United Nation Development Programme.UNIDO, 1995.Integration of women industrial development unit, participation of women in manufacturer patterns.Determinants and Future Trend, Regional Analysis, ECA Regions, Final Report, Vienna.UNIDO, 2001.Women Entrepreneurship Development in Selected African Countries, Working Papers, No. 7. | 3,658.2 | 2013-01-15T00:00:00.000 | [
"Economics"
] |
An Approach of Path Optimization Algorithm for 3D Concrete Printing Based on Graph Theory
: In this paper, a method of 3D concrete printing is used to find the optimal path of the nozzle running path. We propose a path optimization algorithm based on graph theory to solve two key problems in 3D concrete printing. The partitioning algorithm based on graph theory was adopted to improve the forming quality of concrete components, and ant colony algorithm was used to reduce printing time. The method was evaluated with 3D concrete printing experiments after introducing the process of implementing the partition algorithm and ant colony algorithm. The experiment results show a significant reduction in the idle strokes and the nozzle head-up times of the running path planned by the method in this paper. This has a direct impact on shortening the printing time and improving the forming quality. Compared with the other three conventional algorithms, the idle strokes of the nozzle planned by the method in this paper are reduced by 18.94%, 37.88%, and 66.67%, and the nozzle head-up times are reduced by 1.59%, 2.15%, and 8.69%. It provides a practical reference for the path optimization of 3D concrete printing.
Introduction
Three-dimensional concrete printing is a kind of additive manufacturing based on the digital model, with special "ink" made of cementitious materials, admixtures, additives, special fibers, and aggregates [1]. The printing process converts the architectural model into a three-dimensional design drawing with computer graphics, moves according to the set printing path, continuously extrudes the concrete slurry, and adds material layer by layer through layered processing and superposition molding to build the building [2]. The molding process is like fused deposition modelling (FDM), which is a linear lamination molding of the slurry extruded through the print head without the aid of a template [3].
In recent research, 3D concrete printing technology has shown great application potential in the construction field because it has higher construction efficiency, lower production cost, and less human resource investment than conventional construction methods [4]. Designers can also design the internal structure of concrete components according to mechanical principles to complete more complex and high-quality architectural designs because of the characteristics of the layered printing mode of 3D concrete printing [5]. However, the current 3D concrete printing technology is not yet mature, and there are still problems, such as the rough surface of concrete components, low forming quality, and long printing time [6]. These problems have seriously affected the development of 3D concrete printing. At present, scholars are solving the problem of long printing time by optimizing the printing path [7]. In the work of Kai-Yin Fok et al. [8], the nozzle path planning problem is formulated as an undirected rural postman problem (URPP), and a computationally efficient heuristic search algorithm is proposed to find fast routes and mitigate overheads in printing processes. Both simulation and experimental results concur that the proposed algorithm can significantly speed up printing processes. Gregory Dreifus et al. [9] presented a graphical model of the three-dimensional (3D) printing process and used the solution to the Chinese postman problem (CPP) to optimize the motion of an extruder on a given mesh. It enables the printer to print arbitrary lattice structures at the optimal time. This method significantly saves printing time, which is essential for better AM. Transitions are movements of the nozzle from a path endpoint to a path start point. These transitions diminish printing quality by causing strings. Liu et al. [10] proposed a method that minimizes the number of transitions based on directional parallel line segments. A genetic algorithm solver was developed by designing a TSP-oriented mutation method. Compared with other algorithms, their algorithm generates paths with fewer transitions. Falliano Devid et al. presents a novel type of foamed concrete [11]. This novel material can keep its shape in the fresh state due to enhanced consistency and viscosity. This peculiarity lends itself to being implemented in automated extrusion production processes and 3D printing applications without the use of formwork. Kenneth Castelino et al. [1] described an algorithm for minimizing the nonproductive time or 'airtime' for milling by optimally connecting different nozzle path segments. This problem is formulated as a generalized traveling salesman problem with precedence constraints and is solved using a heuristic method. This method gives a significant improvement over random solutions and local search methods. The time savings depend on the problem size, and as the size of the problem increases, the benefits of the algorithm become apparent.
In addition, scholars have proposed a variety of path optimization algorithms to solve various problems encountered in 3D concrete printing. Among them, Jiang et al. [12] proposed a support generation method through printing path planning, with the aim of solving 3D concrete printing still suffering from redundant support material usage when printing parts with overhanging features. The new support generation method can significantly save more material. Bashir Hosseini Jafari and Nicholas Gans [13] presented a novel approach to design and carry out trajectories over regular curved surfaces. Their course provides a unified methodology for surface fitting from 3D surface measurements and mapping a curve from 2D onto a 3D surface with minimal distortion. Wu et al. [14] reconstructed CAD models from an original (GATech Buzz) sample with 2D image information. A CAD model for optimization and validation is adopted to sustain manufacturing reproduction based on system simulation feedback. To avoid collision with the produced path from the retraction path, their pick adaptive ring path generation and prediction in each planning iteration may also minimize material removal. Alejandro Vargas-Uscategui et al. [15] focused on understanding how the nozzle path planning strategy and robot kinematics affect the geometry and porosity distribution in a 3D object and found that the nozzle path has an essential impact on the formation of under-built and over-built structures. The knowledge generated from this research can significantly influence the development of nozzle path planning strategies in AM, providing the means to produce improved near-net shapes with controlled porosity formation. Wang et al. [16] proposed a load-dependent 3D printing path planning method for continuous fiber-reinforced plastics, which has been submitted. By using the topological optimization methods, the load transmission paths were reordered and extracted. Then the developed stress vector tracing algorithm was used to generate the continuous load-dependent printing path of CFRPs from the extracted features, which contain geometry and stress vectors. The printing path planned by this method precisely follows the load transfer path of the part, which can provide higher mechanical properties. Jin et al. [17] proposed a level-set-based optimization method to generate contour-parallel deposition paths for material extrusion-based additive manufacturing. The resulting contour path is smoothed using the level set from the input footprint; local optimization is then applied to smooth the points on the path by adaptively adjusting the position of the path. This deposition using the generated paths can effectively improve the manufacturing quality. DAI et al. [18] introduced a methodology to compute advancing fields for material accumulation by always performing material deposition along the surfaces of convex hulls; therefore, the printing process is guaranteed to remain collision-free. The 3D printing path is generated by their algorithm to fully fabricate models with large overhangs and high-genus topologies without any support structures. Asprone Domenico et al.'s method consists in the partition of a RC member into different concrete segments printed separately and assembling them into a unique element along with the steel reinforcement system [19]. The approach is expected to facilitate the production of free-form structurally optimized RC elements with the final aim of saving concrete material and, at the same time, fabricating lighter structures. Prashant Gupta, Bala Krishnamoorthy, and Gregory Dreifus [20] developed a framework that creates a new polygonal mesh representation of the sparse infill domain of a layer-by-layer 3D printing job. It guarantees the existence of a single, continuous nozzle path covering each connected piece of the domain in every layer in this graphical model. Their algorithm produces a nozzle path that avoids material collisions and crossovers and can be printed in a continuous fashion, irrespective of the complex geometry or topology of the domain.
Through multiple experiments, we found two crucial influencing factors of the nozzle running path on 3D concrete printing. They are: (1) When the nozzle runs a longer distance (idle stroke) in the non-printing area, the print time will be longer. (2) The nozzle head-up will produce a local accumulation effect, and when the print size is relatively small, the local cumulative effect is more obvious. This will lead to: (1) The printing efficiency will decrease when the nozzle runs with a longer idle strokes. (2) The forming quality of concrete components will be worse when the nozzle head-up times increase. Therefore, a path planning method is proposed to shorten the idle strokes and the head-up times of the nozzle running path to improve the printing efficiency and the forming quality of the concrete components. First, model the concrete components, and then partition the model based on the principle of graph theory. The algorithm in this paper divides the running path of the nozzle into partitions, and each partition can be printed once without the need for the nozzle head-up. After the partition is completed, the ant colony algorithm is used to find the shortest path between each partition, minimizing the idle strokes of the nozzle running path.
The format of this paper is as follows: We introduce the current problems of 3D concrete printing technology and give the solutions, then use simulation printing experiments and the actual printing experiments to verify the feasibility of the method in this paper. In the experiments, the advantages and feasibility of the algorithm in this paper are compared with three conventional 3D concrete printing path planning algorithms [21,22].
Modeling
To build the 3D concrete components model, it is necessary to collect various geometric data (length, width, height, etc.) of the concrete components. These data must be as accurate as possible because they affect the size of the nozzle and the number of slices when the concrete components are finally printed. After collecting the data, they are imported into Sketch Up to build the corresponding model ( Figure 1). Sketch Up is software that can create, share, and display 3D models. Unlike 3Dmax, it uses plane modeling. Through an easy-to-use and precise color, line, and text prompt guidance system, people do not have to type in coordinates to help them track their positions and complete related modeling operations.
Export the model as an STL file after modeling. The STL file is imported into Cura for slicing operations, and the slicing model ( Figure 2) is obtained. Cura is 3D printing software designed by Ultimaker. It was developed using Python and integrates the Cura Engine developed by C++ as the slicing engine. It is characterized by fast slicing speed, good user experience, and allows users to adjust the print quality and materials used. However, because Cura can only recognize model files in STL format, it is necessary to convert the built model into STL file format when using it. Export the model as an STL file after modeling. The STL file is imported into Cura for slicing operations, and the slicing model ( Figure 2) is obtained. Cura is 3D printing software designed by Ultimaker. It was developed using Python and integrates the Cura Engine developed by C++ as the slicing engine. It is characterized by fast slicing speed, good user experience, and allows users to adjust the print quality and materials used. However, because Cura can only recognize model files in STL format, it is necessary to convert the built model into STL file format when using it.
Model Partition
Convert the slicing model created in the previous section into computer-processed geometry ( Figure 3) to use the partitioning algorithm in this section. The intersection of multiple line segments in the geometric figure is marked as a geometric point ( Figure 4) and assigned numbers as , , …… . Then, each geometric point is assigned a Cartesian coordinate value as , , , , , …… , according to the geometric data measured in the 3D concrete printing component model. For each line segment in the geometric figure, a matrix describes their length and the number of intersections, and the elements in the matrix as weights. In this paper, the number of elements in represents the intersection point of the nozzle running path, and the number of line segments in the geometric figure represents the connection path between these intersection points. Therefore, there is a linear relationship between the number of line Export the model as an STL file after modeling. The STL file is imported into Cura for slicing operations, and the slicing model ( Figure 2) is obtained. Cura is 3D printing software designed by Ultimaker. It was developed using Python and integrates the Cura Engine developed by C++ as the slicing engine. It is characterized by fast slicing speed, good user experience, and allows users to adjust the print quality and materials used. However, because Cura can only recognize model files in STL format, it is necessary to convert the built model into STL file format when using it.
Model Partition
Convert the slicing model created in the previous section into computer-processed geometry ( Figure 3
Model Partition
Convert the slicing model created in the previous section into computer-processed geometry ( Figure 3) to use the partitioning algorithm in this section. The intersection of multiple line segments in the geometric figure is marked as a geometric point ( Figure 4) and assigned numbers as S 1 , S 2 , S 3 . . . . . . S n . Then, each geometric point is assigned a Cartesian coordinate value as (x 1 , y 1 ), (x 2 , y 2 ), (x 3 , y 3 ) . . . . . . (x n , y n ) according to the geometric data measured in the 3D concrete printing component model. For each line segment in the geometric figure, a matrix T describes their length and the number of intersections, and the elements in the matrix as weights. In this paper, the number of elements in T represents the intersection point of the nozzle running path, and the number of line segments in the geometric figure represents the connection path between these intersection points. Therefore, there is a linear relationship between the number of line segments in the geometric figure and the number of elements in T, and the length of the line segments and the size of elements in T are also linear.
Create a matrix T 1 to describe whether each geometric point in the geometric figure is connected. Among them, the elements in T 1 of the row vector are the numbers of the marked geometric points, and the elements in T 1 of the column vector represent their connection relationship. To obtain the elements in T 1 , it is needed to traverse the elements in T and find the geometric points that meet the following four rules: These rules do not need to be satisfied for all elements in T. When rule (1) is satisfied, the corresponding geometric point is put into T 1 . When rule (1) and rule (2) are satisfied simultaneously, the priority of putting the geometric point is lower than the first one, and so on. Create a matrix to describe whether each geometric point in the geometric figure is connected. Among them, the elements in of the row vector are the numbers of the marked geometric points, and the elements in of the column vector represent their connection relationship. To obtain the elements in , it is needed to traverse the elements in and find the geometric points that meet the following four rules: The first-row element in B is the number of the marked geometric point; the second-row element in B is the number of connections between each geometric point and the rest of the geometric point; and the initial value is 0. Select a starting point S i in the first row of an element in T 1 to connect to the point S j in the next row, set the element in T to inf, and add 1 to the element in B 2i . After finding column S j in T 1 , repeat the above operation to find the partition that can traverse at once. The following are the connection rules: When S i is reconnected, it indicates that a partition has been searched. Next, use S i as the center of the circle and increase the radius gradually to find the unconnected point. Repeat the above operation to find a new partition and filter out the partition with the longest connection path. Repeat the above operation until the entire graph is traversed ( Figure 5). Because the geometry divided by the partitioning algorithm is closed, any geometric point in the partition can be traversed at once.
Start the first partition search phrase as creation is completed. At this time, matrix is created to record the connection times of each geometric point in the geometric figure.
The first-row element in is the number of the marked geometric point; the second-row element in is the number of connections between each geometric point and the rest of the geometric point; and the initial value is 0. Select a starting point in the first row of an element in to connect to the point in the next row, set the element in to inf, and add 1 to the element in . After finding column in , repeat the above operation to find the partition that can traverse at once. The following are the connection rules: (1) Connect , , make , 1; (2) Find , , if && , connect , ; When is reconnected, it indicates that a partition has been searched. Next, use as the center of the circle and increase the radius gradually to find the unconnected point. Repeat the above operation to find a new partition and filter out the partition with the longest connection path. Repeat the above operation until the entire graph is traversed ( Figure 5). Because the geometry divided by the partitioning algorithm is closed, any geometric point in the partition can be traversed at once.
Optimal Path
The geometry is divided into n areas after the partition is completed. Create n arrays to store the geometric points in each partition geometry; each array is denoted as R n . The geometric point in each array is denoted as S ti , and mark its Cartesian seat as x S ti , y S ti . Extract a geometric point from each array and apply the ant colony algorithm to get the shortest path between them. This process needs to be repeated several times until the geometric points in each array are traversed to find the final optimal path. However, such a process is complicated and time-consuming. Therefore, constraints are introduced in this paper (the constraints are as follows: Equations (1)-(3)). Use the ant colony algorithm to find the shortest path, as the judgment result is F = 1. On the contrary, discard the extracted geometry point combination if the judgment result is F = 0.
of 12
The following are the critical steps of the ant colony algorithm used in this section: Step (1): Initialize the various parameters, such as the number of ants µ, the influence of pheromone concentration on the direction of ant transfer α, the relative importance of visibility β, the number of volatiles ρ, and the maximum number of iterations.
Step (2): Construct a solution space, place each ant randomly at a different starting point, and calculate the next printing point to be visited for each ant k (k = 1, 2, 3 . . . n) until all ants have visited all the printing points.
Step (3): Update the pheromone, calculate the path length d k of each ant (k = 1, 2, 3 . . . n), and record the optimal solution (shortest path) in the current iteration times. At the same time, the pheromone concentration on the connecting path of each printing point is updated.
Step (4): Determine whether to terminate; if iterations < max(iterations), set iterations = iterations + 1, clear the record table of the path that the ant passes, and return to step (2); otherwise, terminate the calculation and output the optimal solution ( Figure 6).
The following are the critical steps of the ant colony algorithm used in this section: Step (1): Initialize the various parameters, such as the number of ants , the influence of pheromone concentration on the direction of ant transfer , the relative importance of visibility , the number of volatiles , and the maximum number of iterations.
Step (2): Construct a solution space, place each ant randomly at a different starting point, and calculate the next printing point to be visited for each ant k (k = 1, 2, 3 … n) until all ants have visited all the printing points.
Step (3): Update the pheromone, calculate the path length of each ant (k = 1, 2, 3 … n), and record the optimal solution (shortest path) in the current iteration times. At the same time, the pheromone concentration on the connecting path of each printing point is updated.
Step (4): Determine whether to terminate; if iterations < max(iterations), set iterations = iterations + 1, clear the record table of the path that the ant passes, and return to step (2); otherwise, terminate the calculation and output the optimal solution ( Figure 6).
Results
This section divides the experimental part into simulation printing experiments and actual printing experiments. Simulation printing experiments, including the algorithm of this paper and three other conventional path planning algorithms, will be performed under the same conditions. These conventional algorithms are the parallel straight-line scanning algorithm, zigzag scanning algorithm, and offset contour scanning algorithm. Due to the complexity of the concrete components in this paper, the parallel straight-line scanning algorithm and the zigzag scanning algorithm require the nozzle to rise frequently, which cannot be applied to the actual printing experiment. The actual printing experiments will use the proposed and offset contour scanning algorithms because their printing paths have fewer inflection points.
In the experiments, concrete components of the same size (1300 mm, 1300 mm) and the same number of layers (4 layers) were selected for printing under the same conditions. Record the running distance and the nozzle head-up times of the nozzle running path in the experiments, as they will significantly affect the forming quality and printing time of concrete components. The forming quality of the concrete components will be poor because every time the nozzle is raised, the cohesion of the material will lead to excessive concrete accumulation. The printing time of the concrete components will be longer because it has a linearly increasing relationship with the travel distance of the nozzle.
Simulation Printing Experiment
The algorithm is run on the MATLAB R2018 platform. However, it is unintuitive and complicated to record the running distance and the head-up times of the nozzle running path in the simulated printing experiments on MATLAB R2018. Therefore, it is necessary to perform the simulation printing experiments on CIMCO Edit 8.
CIMCO Edit 8 is a famous CNC program editing and simulation tool. It can store and retrieve NC programs, NC program optimization, post-processing, and fast NC program simulation. The following figure (Figure 7) shows the simulation interface running in CIMCO Edit 8.
concrete components. The forming quality of the concrete components will be poor because every time the nozzle is raised, the cohesion of the material will lead to excessive concrete accumulation. The printing time of the concrete components will be longer because it has a linearly increasing relationship with the travel distance of the nozzle.
Simulation Printing Experiment
The algorithm is run on the MATLAB R2018 platform. However, it is unintuitive and complicated to record the running distance and the head-up times of the nozzle running path in the simulated printing experiments on MATLAB R2018. Therefore, it is necessary to perform the simulation printing experiments on CIMCO Edit 8.
CIMCO Edit 8 is a famous CNC program editing and simulation tool. It can store and retrieve NC programs, NC program optimization, post-processing, and fast NC program simulation. The following figure (Figure 7) shows the simulation interface running in CIMCO Edit 8. In the simulation experiments, it is necessary to import the concrete components model generated in Section 2.1 Sketch Up into CIMCO Edit 8 to determine the size of the concrete components and the starting point of printing. Then the nozzle running path is planned by several conventional algorithms. The algorithm in this paper is compiled into G code. Finally, the G code is imported into CIMCO Edit 8 to start the experiments. During In the simulation experiments, it is necessary to import the concrete components model generated in Section 2.1 Sketch Up into CIMCO Edit 8 to determine the size of the concrete components and the starting point of printing. Then the nozzle running path is planned by several conventional algorithms. The algorithm in this paper is compiled into G code. Finally, the G code is imported into CIMCO Edit 8 to start the experiments. During the experiments, the experimental distance and the head-up times of the nozzle (Table 1) running path are recorded.
Print Experiment
The feasibility and effectiveness of the algorithm in this paper can be verified more intuitively and better by the actual experiments carried out on the nozzle running path planned by the algorithm in this paper and the offset contour scanning algorithm.
It is necessary to import G-code into a 3D concrete printer in the actual experiments. The printer used for the test has a print range of 3 m × 3 m × 1.2 m (Figure 8). The nozzle size was 40 mm, the preset height of the single layer was 18 mm, the moving speed of the nozzle was 120 mm/s, while the extrusion speed was 200 r/min [23]. In the experiments, the running distance, printing time, and head-up times of the nozzle (Table 2) were recorded.
intuitively and better by the actual experiments carried out on the nozzle running path planned by the algorithm in this paper and the offset contour scanning algorithm.
It is necessary to import G-code into a 3D concrete printer in the actual experiments. The printer used for the test has a print range of 3 m × 3 m × 1.2 m (Figure 8). The nozzle size was 40 mm, the preset height of the single layer was 18 mm, the moving speed of the nozzle was 120 mm/s, while the extrusion speed was 200 r/min [23]. In the experiments, the running distance, printing time, and head-up times of the nozzle (Table 2) were recorded.
Experimental Validation
Idle strokes: During the experiment, we cannot directly measure the idle strokes of the nozzle. Therefore, the effect of this method on the idle stroke is judged by the total distance in which the nozzle operates, because for the same concrete member, the effective running distance of the nozzle (the distance required for the nozzle to extrude the material) is the same. It can be seen from Table 1 that the running path of the nozzle
Experimental Validation
Idle strokes: During the experiment, we cannot directly measure the idle strokes of the nozzle. Therefore, the effect of this method on the idle stroke is judged by the total distance in which the nozzle operates, because for the same concrete member, the effective running distance of the nozzle (the distance required for the nozzle to extrude the material) is the same. It can be seen from Table 1 that the running path of the nozzle planned by the parallel line scanning algorithm is the longest, which leads to its algorithm difficulty being the simplest compared to other algorithms, but it is rarely used. Although the nozzle running distance of the zigzag scanning algorithm is about 50% of the previous method, it still does not meet our expectations. The offset profile scanning algorithm has the shortest running distance of the nozzle among the three conventional algorithms, so it is currently the most widely used. The running distances of the nozzle planned by the method in this paper are 18.94%, 37.88%, and 66.67% of the other three algorithms.
Printing time: By combining Tables 1 and 2, it can be concluded that the running distance of the nozzle will greatly affect the printing time. The printing time of the method in this paper is 49.95% of the offset contour scanning algorithm, which is of great significance for 3D concrete printing.
The head-up times: It can be found from Table 1 that the method in this paper can greatly reduce the head-up times of the nozzle, which is 1.59%, 2.15%, and 8.69% of the previous algorithms. The influence of nozzle head-up on the forming quality of concrete components is huge because it will produce a local accumulation effect. At the same time, too many head-up times of the nozzle will cause the printing time to be too long.
Forming quality of concrete components: It can be seen from Figures 9 and 10 that the concrete components printed by the method in this paper are significantly better than the conventional methods in terms of forming quality. The head-up times of the nozzle does have a huge impact on the forming quality of the concrete components. We can see that there are many fractures in the concrete components printed by conventional methods. This is caused by the phenomenon of uneven extrusion material caused by the head-up of the nozzle in a short time.
Forming quality of concrete components: It can be seen from Figures 9 and 10 that the concrete components printed by the method in this paper are significantly better than the conventional methods in terms of forming quality. The head-up times of the nozzle does have a huge impact on the forming quality of the concrete components. We can see that there are many fractures in the concrete components printed by conventional methods. This is caused by the phenomenon of uneven extrusion material caused by the head-up of the nozzle in a short time. too many head-up times of the nozzle will cause the printing time to be too long.
Forming quality of concrete components: It can be seen from Figures 9 and 10 that the concrete components printed by the method in this paper are significantly better than the conventional methods in terms of forming quality. The head-up times of the nozzle does have a huge impact on the forming quality of the concrete components. We can see that there are many fractures in the concrete components printed by conventional methods. This is caused by the phenomenon of uneven extrusion material caused by the head-up of the nozzle in a short time.
Discussion
Changing the running speed of the nozzle can also affect the printing time, but we did not do that. The reason is that the running speed of the nozzle can only be changed in a small range during the actual printing process; otherwise, a fracture surface will be generated inside the 3D concrete components. The consequence of this result is extremely serious and will change the interfacial force transmission path of the components. At the same time, the volume and shape of the final 3D concrete component are fixed no matter what method is used to print. This means that the required material and the path that the nozzle must run are the same during the printing process. Therefore, we focused our attention on shortening the unnecessary path of the nozzle run, also known as reducing the idle strokes; this will not change the shape and volume of the concrete components, but it can shorten the printing time.
The flowability of the material is an important factor that affects the forming quality of the concrete components. The means of influence is that when the nozzle needs to stop feeding, the concrete component will cause local accumulation or local surface fracture due to the fluidity of the material. For this problem, we choose to reduce the number of the nozzle needs to stop feeding (reduce the head-up times of the nozzle) to solve it. The benefit is obvious because no matter how strong or weak the material flowability is, the impact on the forming quality of concrete components during the continuous printing process is always relatively small. Reducing the number of nozzle head-ups ensures the continuity of the printing process. Of course, in actual printing, there are many factors that can affect the forming quality of concrete components; what we do is maximize the forming quality by changing the running path of the nozzle.
In the 3D concrete printing process, the formation of a complete concrete component requires the accumulation of materials layer by layer. This means that the time reduction and the local accumulation reduction of each layer will eventually accumulate when printing, and these reductions can be considered when printing large-scale components. In fact, many large concrete components cannot be printed due to the long printing time and large local accumulation. The method in this paper can provide a feasible printing path.
Conclusions
A path planning algorithm is proposed in this paper to optimize the problems of excessive printing time and poor forming quality in 3D concrete printing. In the algorithm, the commonly used concrete components in 3D concrete printing are taken as the research object. Shortening the printing time and improving the forming quality are the research goals; combined with the partitioning algorithm based on graph theory and the ant colony algorithm, the 3D concrete printing path is optimized, and the following conclusions are drawn: (1) Compared with the conventional path planning algorithm, the algorithm in this paper has more advantages. The idle strokes and the head-up times of the nozzle running path are significantly reduced when using the algorithm in this paper to plan the printing path. This effectively saves the printing time and significantly improves the forming quality of concrete components. (2) The algorithm in this paper is feasible and has specific practical significance. The final output results will be an array of coordinates when planning the nozzle running path according to the algorithm in this paper. Therefore, a corresponding printing program can be quickly written to perform actual printing.
However, the algorithm in this paper still has many limitations. Compared with the reinforcement learning algorithm, the algorithm in this paper has no significant advantages in planning the nozzle running path for the irregular concrete components. Therefore, how to use the principle of graph theory to split irregular concrete components into regular components will be our next research direction.
Patents
The work of this paper has published an invention patent. | 8,156.4 | 2022-11-08T00:00:00.000 | [
"Computer Science"
] |
Intestinal Inflammation and Altered Gut Microbiota Associated with Inflammatory Bowel Disease Render Mice Susceptible to Clostridioides difficile Colonization and Infection
ABSTRACT Clostridioides difficile is a noteworthy pathogen in patients with inflammatory bowel disease (IBD). Patients with IBD who develop concurrent C. difficile infection (CDI) experience increased morbidity and mortality. IBD is associated with intestinal inflammation and alterations of the gut microbiota, both of which can diminish colonization resistance to C. difficile. Here, we describe the development of a mouse model to explore the role that IBD-induced changes of the gut microbiome play in susceptibility to C. difficile. Helicobacter hepaticus, a normal member of the mouse gut microbiota, triggers pathological inflammation in the distal intestine akin to human IBD in mice that lack intact interleukin 10 (IL-10) signaling. We demonstrate that mice with H. hepaticus-induced IBD were susceptible to C. difficile colonization in the absence of other perturbations, such as antibiotic treatment. Concomitant IBD and CDI were associated with significantly worse disease than observed in animals with colitis alone. Development of IBD resulted in a distinct intestinal microbiota community compared to that of non-IBD controls. Inflammation played a critical role in the susceptibility of animals with IBD to C. difficile colonization, as mice colonized with an isogenic mutant of H. hepaticus that triggers an attenuated intestinal inflammation maintained full colonization resistance. These studies with a novel mouse model of IBD and CDI emphasize the importance of host responses and alterations of the gut microbiota in susceptibility to C. difficile colonization and infection in the setting of IBD.
I nflammatory bowel diseases (IBDs), including Crohn's disease and ulcerative colitis, are chronic and progressive conditions characterized by inflammation of the digestive tract. The incidence of Clostridioides difficile infection (CDI) has significantly increased among hospitalized patients with IBD over the past 2 decades (1-3). C. difficile is a spore-forming bacterium that produces enterotoxins that damage the intestinal epithelium (4). C. difficile was initially described as a cause of antibiotic-associated diarrhea (5,6). Normally, an intact intestinal microbiota provides resistance to C. difficile (7). However, antibiotic exposure can render otherwise healthy individuals susceptible to CDI due to disruption of the microbiota. Although antibiotic use is a well-known risk factor for CDI, other risk factors have been recognized, including immunosuppression and preexisting IBD (8,9). The gut microbiota plays a critical role in the pathogenesis of IBD (10,11), and CDI is associated with more severe intestinal microbiota disturbances among patients with IBD (12). In addition, underlying IBD lowers the long-term efficacy of fecal microbiota transplantation (FMT) to treat recurrent CDI, and this is associated with less robust engraftment of donor microbes (13). These results suggest that the pathophysiology of IBD influences gut microbiota composition and CDI outcomes.
The pathogenesis of IBD and CDI have long been studied in animal models (14,15); however, a robust mouse model of comorbid IBD and CDI in the absence of antibioticinduced perturbation of the microbiota has yet to be described. Colonization with enteric Helicobacter species, including Helicobacter hepaticus, has been shown to trigger colitis in genetically predisposed mice, such as those lacking the regulatory cytokine interleukin 10 (IL-10) (16)(17)(18). IL-10 2/2 mice reared under Helicobacter-free, specific-pathogen-free (SPF) conditions develop colitis that resembles human IBD when colonized with H. hepaticus (16,17,19), and this intestinal inflammation is associated with alterations in gut microbiota community structures (20). Interestingly, the ability of H. hepaticus to trigger IBD in IL-10 2/2 mice depends on the presence of an indigenous microbiota, as ex-germfree mice mono-colonized with H. hepaticus do not develop severe colitis (21). Most previously described mouse models of CDI require antibiotic administration to disrupt the intestinal microbiota and render animals susceptible to C. difficile colonization and disease (15,22) and have revealed that colonization resistance and protection from CDI are mediated by the microbiota and (7,23) host immune responses (24,25).
In the present study, we sought to replicate the relationship between IBD and CDI in a mouse model. We wished to develop a system where we could evaluate the specific role of intestinal inflammation and gut microbiota in inducing susceptibility to C. difficile colonization and infection. We utilized a genetic model of murine IBD where intestinal inflammation is triggered by a normal member of the gut microbiota and show that the development of colitis is associated with a loss of colonization resistance to C. difficile.
RESULTS
Intestinal inflammation in IL-10 2/2 mice colonized with H. hepaticus is associated with altered gut microbiota. Wild-type (WT) and IL-10 2/2 C57BL/6 mice reared under specific-pathogen-free (SPF) conditions received H. hepaticus or sterile broth via oral gavage. Animals were monitored for the development of intestinal inflammation by measurement of the inflammatory marker lipocalin-2 in feces. We confirmed that wildtype mice did not develop signs of intestinal inflammation, regardless of H. hepaticus colonization status, whereas IL-10 2/2 mice colonized with H. hepaticus did develop colitis. The level of lipocalin-2 was significantly increased in the feces of IL-10 2/2 mice 7 days after H. hepaticus colonization, and this increase was sustained at 14 days postcolonization (Fig. 1A). Histological examination of colon sections harvested from IL-10 2/2 mice colonized with H. hepaticus revealed pathology consistent with inflammatory bowel disease, including loss of goblet cells, inflammatory cell infiltration, and crypt elongation 14 days after H. hepaticus colonization, compared to that of WT mice of either genotype mock challenged with sterile tryptic soy broth (Fig. 1B).
We determined whether intestinal inflammation was associated with changes in the gut microbiota. H. hepaticus colonization in WT mice did not significantly impact the diversity of the colonic microbial community ( Fig. 1C and D). However, intestinal inflammation in SPF IL-10-deficient animals induced by H. hepaticus colonization was associated with an altered colonic microbial community structure (Fig. 1D). Differences in microbial community structure were driven by treatment (r 2 = 0.193, P = 0.0008) and mouse genotype (r 2 = 0.245, P = 0.004) (Fig. 1D). Similar changes in the microbiota after the development of colitis were observed in both the cecum and proximal colon (Fig. S1). Taken together, these data demonstrate that the development of IBD in IL-10 2/2 mice colonized with H. hepaticus is accompanied by alterations in the distal gut microbiota.
The development of IBD in IL-10 2/2 animals results in susceptibility to C. difficile colonization. We explored whether mice with intestinal inflammation due to IBD are susceptible to CDI. SPF IL-10 2/2 mice were colonized with H. hepaticus to trigger IBD prior to challenge with spores of C. difficile strain VPI 10463 ( Fig. 2A). As a control for the development of CDI in IL-10 2/2 animals, we utilized our standard CDI model with antibiotic pretreatment, administering the broad-spectrum antibiotic cefoperazone for 10 days, followed by C. difficile spore challenge (22). Both groups of animals were monitored for C. difficile colonization and signs of clinical disease. As noted above, intestinal inflammation triggered by H. hepaticus colonization was associated with a fecal microbiota structurally distinct from that of noncolonized controls. Furthermore, animals treated with cefoperazone had a fecal microbiota at the time of C. difficile challenge that was different from those of the other two experimental groups (Fig. 2B). Animals were either treated with the antibiotic cefoperazone to alter their gut microbiota or infected with H. hepaticus to trigger colitis. (B) Principal-coordinate plots of Bray-Curtis distances of bacterial communities in feces collected at baseline (day 214, prior to any experimental treatment) and on the day of C. difficile spore challenge (day 0). Animals that received sterile broth had no change in bacterial community structure at day 0 compared to baseline, while cefoperazone treatment and the development of colitis after H. hepaticus colonization caused marked, but differing, alterations in the microbiota. (C) Colonization dynamics in IL-10 2/2 mice following C. difficile spore challenge. IL-10 2/2 mice that had developed colitis after H. hepaticus colonization shed variable amounts of C. difficile in their feces after challenge. Lines show the colonization trajectory of individual mice. All colitic IL-10 2/2 mice had C. difficile detectable in their feces at some point in the 7 days after challenge, and 8 of 9 mice had C. difficile isolated from cecal contests at the time of necropsy 9 days after spore challenge. No C. difficile was ever recovered from the feces or cecal contents of noncolitic (i.e., noncolonized with H. hepaticus) mice at any point after challenge. The dotted line indicates the limit of detection for C. difficile quantification (10 2 CFU). Data represent results from 2 independent experiments. A two-tailed unpaired t test was performed (P , 0.01). IL-10 2/2 mice challenged with C. difficile spores in the absence of H. hepaticus-triggered colitis or antibiotic pretreatment were resistant to C. difficile colonization (Fig. 2C). As we have observed previously in wild-type animals, cefoperazone-treated IL-10 2/2 mice had high levels of C. difficile colonization 1 day after the spore challenge (Fig. S2). Interestingly, 68% of IL-10 2/2 animals with H. hepaticus-triggered colitis shed C. difficile in their feces 1 day after challenge (Fig. 2C). At 7 days after spore challenge, 89% of mice with IBD shed C. difficile (Fig. 2C). As opposed to what we had observed in antibiotic-treated mice, there was temporal variability in the shedding of C. difficile in the feces of mice with IBD. All mice with IBD had a detectable level of C. difficile colonization at some point during the course of monitoring post-spore challenge, but variation in the levels was seen over the 9 days of the experiment (Fig. 2C).
The clinical course of IL-10 2/2 animals that were challenged with C. difficile after pretreatment with cefoperazone followed what we had reported for wild-type animals (22,26). At the time of necropsy, animals had gross colitis. Histopathologic analysis revealed severe colitis with edema, epithelial damage, and a marked neutrophilic infiltrate (Fig. S2). Animals with concurrent IBD and CDI lost significantly more weight than mice with IBD alone at day 7 and day 9 after C. difficile spore challenge (Fig. 3A). Overall, IL-10 2/2 animals with H. hepaticus-triggered colitis and infected with C. difficile had higher clinical disease scores than mice with IBD alone (Fig. 3B). These results suggest that IBD superimposed with CDI results in more severe clinical disease than IBD alone.
Nine days after challenge with C. difficile spores, the IL-10 2/2 animals were euthanized and tissue was collected for histopathologic analysis. The ceca and colons of mice were examined by a veterinary pathologist and scored for edema, inflammatory infiltrate, and epithelial damage (22). While clinical disease was greater in animals with comorbid CDI and IBD, the histopathology scores were not different between mice with IBD alone and mice with comorbid CDI (Fig. 3C). As the pathogenesis of C. difficile infection is due to the production of the toxins TcdA and TcdB (27), we measured the production of toxin in the ceca of animals at the time of necropsy and quantified the C. difficile burden in the cecal contents of animals with IBD and superimposed CDI. As a group, the level of C. difficile colonization in animals with CDI and IBD was not significantly different from that of animals that were rendered susceptible by antibiotic administration (Fig. 3D). Despite these similar levels of colonization, mice with CDI following cefoperazone treatment had significantly more active C. difficile toxin in their cecal contents than mice with comorbid IBD and CDI (Fig. 3E). Several mice with IBD did not have detectable C. difficile toxin activity, despite being colonized with C. difficile.
Inflammation following H. hepaticus colonization is necessary to render IL-10deficient mice susceptible to C. difficile colonization. Our data demonstrate an association between H. hepaticus colonization, the development of gut inflammation, and microbiota changes with susceptibility to C. difficile infection. In an effort to disentangle these potential factors leading to the loss of colonization resistance against C. difficile, we utilized an isogenic mutant of H. hepaticus that is unable to produce cytolethal distending toxin (CDT), a bacterial genotoxin known to modulate immune responses (28). We previously demonstrated that IL-10 2/2 mice colonized with an H. hepaticus strain deficient in CDT production (HhCDT 2 ) develop significantly attenuated colitis despite colonization at the same levels as for wild-type H. hepaticus (HhCDT 1 ) (29, 30). We used this isogenic mutant to further explore the contribution of H. hepaticus colonization and the development of colitis to C. difficile susceptibility in IL-10 2/2 animals. IL-10-deficient mice were colonized with either the wild-type strain (HhCDT 1 ) or the HhCDT 2 mutant and compared to controls that received sterile broth via oral gavage. Fourteen days later, all three groups of mice were challenged with C. difficile strain VPI 10463 spores and monitored for C. difficile colonization and disease (Fig. 4A). Mice colonized with wild-type H. hepaticus had significantly higher levels of fecal lipocalin-2 ( Fig. 4B) than those colonized with the CDT-deficient mutant and a significantly higher degree of histopathologic intestinal inflammation (Fig. S3).
Colonization of IL-10 2/2 mice with either HhCDT 1 or HhCDT 2 was associated with lower microbial diversity in the cecum than in mice that received sterile broth (Fig. 4C). Similarly, examination of the cecal microbial community structure demonstrated that animals colonized with either strain of H. hepaticus were distinct from animals that received sterile broth ( Fig. 4D; Fig. S4). The colonization treatment conditions explained 40.7% of the variation in microbial community structure (P = 0.0002) (Fig. 4D). Colonization with either HhCDT 1 or HhCDT 2 was associated with an expansion of Enterobacteriaceae and Lactobacillaceae and a loss in Lachnospiraceae (Fig. S1). Despite similar changes to the gut microbiota in animals colonized with HhCDT 1 and HhCDT 2 , challenge with C. difficile spores demonstrated differential effects of these two H. hepaticus strains on colonization resistance. C. difficile was not detectable in the feces of IL-10 2/2 mice colonized with HhCDT 2 at any time point following C. difficile spore challenge (Fig. 4E). At the time of necropsy, 31 days after challenge with C. difficile spores, we confirmed this finding in cecal contents and found no detectable C. difficile colonization in mice colonized with HhCDT 2 , while animals initially colonized with HhCDT 1 had high levels of C. difficile in their cecal contents (Fig. 4F). These data suggest that changes in the microbiota following colonization of IL-10 2/2 animals with H. hepaticus is not sufficient to lower colonization resistance against C. difficile in the absence of a threshold degree of inflammation.
DISCUSSION
The clinical intersection between inflammatory bowel disease and C. difficile infection has long been recognized (2). Patients with underlying IBD have an increased incidence of CDI, and this is associated with a worse clinical course (31). Despite our recognition of this relationship between the two conditions, the mechanisms that underlie this confluence between IBD and CDI are not well defined. The indigenous microbiota is one obvious link between these diseases, as alterations in the gut microbiota are thought to play a key role in the pathogenesis of IBD (10,32). Additionally, an altered microbiota structure and function underly the loss of colonization resistance to C. difficile (33). However, in patients with IBD, the presence of an altered microbiota can be due to a number of factors. It may be related to changes in the microbiota that predispose to IBD, the treatment of IBD with antibiotics or biologics, or the effect of chronic inflammation on the host and the indigenous bacteria. Determining which of these lead to the susceptibility to CDI seen in patients with IBD would be important to help guide rational prevention and treatment strategies.
Clinically, the poor outcomes seen in patients with IBD who have CDI may reflect the direct effect that infection with toxin-producing C. difficile has on the altered epithelium/immune system present in patients with IBD. While this is a straightforward hypothesis, it is interesting to note that a recent retrospective study suggests that patients with more severe IBD are more prone to CDI, and thus, the observed relationship may simply reflect the baseline severity of patients with IBD rather than the C. difficile infection increasing the severity of disease (34). Disentangling the complex relationship between IBD and CDI is challenging due to the lack of appropriate animal models to model this interaction. Recently, reports have investigated how gut inflammation induced by dextran sodium sulfate (DSS) administration in mice influenced subsequent C. difficile infection. Zhou et al. demonstrated that mice with concurrent DSS-induced colitis had a worse clinical outcome when infected with C. difficile (35). Saleh et al. subsequently demonstrated more severe clinical disease due to CDI even when the animals were challenged with C. difficile 3 weeks after cessation of DSS treatment, at a time when the acute colitis due to DSS administration had resolved (36). In both of these studies, susceptibility to C. difficile colonization and increased disease manifestations required the administration of antibiotics before challenge with spores of C. difficile. Zhou et al. did note that about 40% of animals that received DSS without antibiotic treatment could be colonized by C. difficile without antibiotic administration, but in this case, worsened histopathologic colitis was not observed in animals that harbored C. difficile (35). We previously showed that DSS administration alters the fecal microbiota prior to development of severe histopathologic disease (37), and it is possible that this alteration of the microbiota leads to a loss of colonization resistance.
In the current study, we describe a novel murine system that can be used to study the intersection between IBD and CDI in a model that does not require antibiotic administration to render animals susceptible to C. difficile colonization. Furthermore, the model of IBD that we employ is characterized by the development of gut inflammation in a susceptible host that exhibits a dysregulated immune response to the normal gut microbiota. In our SPF colony of IL-10 2/2 mice, colonization with H. hepaticus rapidly triggers the development of typhlocolitis (20,29), while no colitis is seen in wild-type animals carrying H. hepaticus. H. hepaticus and other enteric Helicobacter species are found as naturally occurring members of the gut microbiota of wild rodents (38)(39)(40). Recent studies have demonstrated that inbred laboratory mice carrying microbiota derived from wild mice, which includes Helicobacter species, exhibited immune responses that more closely reflected that seen in humans (41,42). Therefore, the model described here recapitulates the complex relationships between the host, the indigenous microbiota, and the pathogen seen in patients with IBD who are at risk for the development of C. difficile infection.
Our results demonstrate that loss of colonization resistance against C. difficile is not due solely to the changes in microbiota structure that follow colonization with H. hepaticus. While we demonstrate that H. hepaticus colonization of IL-10 2/2 mice results in an altered cecal and colonic microbiota, the development of inflammation is a critical requirement to overcome colonization resistance. Similar changes in the microbiota of the distal gastrointestinal tract were observed in IL-10 2/2 animals colonized with either wild-type H. hepaticus or an isogenic H. hepaticus mutant that does not express cytolethal distending toxin. However, only animals infected with wild-type H. hepaticus, which developed much more severe baseline intestinal inflammation, were susceptible to colonization with C. difficile. This indicates that in this system, changes in the microbiota itself do not lead to susceptibility to C. difficile. Only when these changes in the microbiota are accompanied by the development of intestinal inflammation is a luminal environment created that is permissible for the establishment of C. difficile colonization.
The relationship between inflammation, microbiota alterations, and pathogen susceptibility is a theme that has emerged as an important factor in the pathogenesis of a number of gastrointestinal bacterial infections (43). Salmonella has been shown to utilize the alternative electron acceptors present in the lumen of the inflamed intestine to provide a growth advantage over the indigenous microbiota (44). Furthermore, intestinal inflammation limits the availability of micronutrients, including transition metals, such as iron and zinc (45). Some bacterial pathogens have evolved systems to deal with this aspect of so-called "nutritional immunity." Indeed, the levels of zinc within the gastrointestinal tract have been shown to be a key factor in protection against C. difficile (46). C. difficile has been shown to be metabolically flexible, allowing it to occupy multiple nutrient niches within the gastrointestinal tract (47). One metabolic feature that has been shown to favor the growth of C. difficile is the presence of luminal amino acids, such as proline, which can be used by the pathogen in energy-generating Stickland fermentation reactions (48). Competition for proline is a critical determinant for the success of C. difficile versus other gut microbes (49). Furthermore, amino acid availability has been shown to be increased in patients with diarrhea, including inflammatory diarrhea, affording a permissive environment for C. difficile (50). The triggering of gut inflammation by the activity of C. difficile toxins was recently shown to specifically alter the intestinal environment in a manner that favors the growth and persistence of C. difficile (51). These findings are in concordance with our current finding that the development of inflammation in a murine model of IBD leads to susceptibility to C. difficile colonization.
The system described here will permit detailed studies of the complex interplay between the indigenous microbiota, host inflammatory responses, and pathogen that underlies the clinical relationship observed between C. difficile infection and inflammatory bowel disease. This model (i) will permit mechanistic studies of the interaction between altered host responses and gut microbes that leads to a breakdown in hostmicrobe homeostasis and (ii) also will serve as a test bed for novel strategies to prevent and treat C. difficile infection in patients with IBD.
MATERIALS AND METHODS
Mice. Male and female C57BL/6 wild-type or IL-10-deficient mice were maintained under specificpathogen-free (SPF), Helicobacter-free conditions. Mice were at least 8 weeks of age at the start of experiments. All mice were from a breeding colony at the University of Michigan that was originally derived from the Jackson Laboratories in 2002. Euthanasia was carried out via CO 2 inhalation at the conclusion of the experiment. Animal studies were approved by the University of Michigan's Committee on the Care and Use of Animals, and animal husbandry was performed in an AAALAC-accredited facility.
Bacterial strains and growth conditions. H. hepaticus strain 3B1 (ATCC 51488) was obtained from the American Type Culture Collection (Manassas, VA). The isogenic mutant 3B1::Tn20 has a transposon inserted near the start of cdtA and no longer produces cytolethal distending toxin (CDT) (29). Wild-type H. hepaticus 3B1 and 3B1::Tn20 were grown on tryptic soy agar (TSA) supplemented with 5% sheep blood at 37°C for 3 to 4 days in a microaerobic chamber (1 to 2% oxygen; Coy Laboratories). The isogenic mutant 3B1::Tn20 is chloramphenicol resistant and was grown on medium supplemented with 20 mg/ml chloramphenicol (Sigma, St. Louis, MO). Spores of C. difficile reference strain VPI 10463 (ATCC 43255) were prepared and used as previously described by Theriot et al. (22). Spores were enumerated by plating them on prereduced taurocholate cycloserine cefoxitin fructose agar (TCCFA), prepared as previously described (26).
Infection studies. H. hepaticus suspensions for animal inoculation were prepared by harvesting organisms from culture plates into Trypticase soy broth (TSB). Mice were challenged with 10 8 CFU of H. hepaticus by oral gavage. H. hepaticus colonization status was confirmed by PCR of the cdtA gene (52) on fecal DNA extracted using a DNeasy UltraClean microbial kit (Qiagen) by following the manufacturer's instructions. TCCFA plates with fecal or cecal samples or spore inoculum were incubated in an anaerobic chamber (Coy Industries) at 37°C for 18 h prior to colony enumeration.
For our previously published model of C. difficile infection following antibiotic administration, mice received 0.5 mg/ml cefoperazone (MP Pharmaceuticals) in sterile distilled drinking water (Gibco) ad libitum. The antibiotic-supplemented water was provided for 10 days, followed by 2 days of drinking water without antibiotics (22). Animals were then challenged by oral gavage with 10 3 to 10 4 CFU of C. difficile spores suspended in 50 ml of distilled water (Gibco) or mock challenged with water. To develop a model of C. difficile challenge after the development of colitis, 2 weeks after colonization with H. hepaticus or mock colonization with sterile TSB, animals were challenged by oral gavage with 10 3 to 10 4 CFU of C. difficile spores suspended in 50 ml of distilled water (Gibco) or mock challenged with water. Over the course of each experiment, mice were regularly weighed, and feces were collected for quantitative culture. Fresh feces were collected from each mouse into a preweighed sterile tube. Immediately following collection, the tubes were reweighed to determine fecal weight and passed into an anaerobic chamber (Coy Laboratories). Each sample was then diluted 10% (wt/vol) with prereduced sterile phosphate-buffered saline (PBS) and serially diluted onto prereduced TCCFA plates. The plates were incubated anaerobically at 37°C, and C. difficile colonies were enumerated after 18 to 24 h of incubation.
Clinical disease severity, necropsy, and histopathologic scoring. Mice were monitored daily for clinical signs of disease. Disease scores were averaged based on scoring of the following features for signs of disease: weight loss, activity, posture, coat, diarrhea, and eyes/nose. A 4-point scale was assigned to score each feature, and the sum of these scores determined the clinical disease severity score (53). At the termination of each experiment, animals were euthanized by CO 2 inhalation. The cecum and colon were harvested and fixed in formalin. Sections were stained with hematoxylin and eosin and scored by a veterinary pathologist (Ingrid L. Bergin) in a blind manner. Histopathologic damage was scored using epithelial destruction, immune cell infiltration, and edema on a 4-point scale for each category, and the sum of these scores determined the histological score (22,26,54).
Quantitative detection of C. difficile toxin in cecal contents. The levels of functional C. difficile toxin were measured using a real-time cellular analysis (RTCA) assay (55). The RTCA assay was used to detect changes in cell-induced electrical impedance in cultured colorectal cell monolayers in response to cecal contents collected from mice with CDI to determine concentrations of active toxin. Cecal contents collected from mice at the time of euthanasia were weighed and diluted 1:1,000 (wt/vol) with sterile PBS. After homogenization, particulate matter was allowed to settle in the original collection tubes prior to transference of supernatant aliquots to fresh tubes. Cecal content supernatants were then filtered through a sterile 0.22-mm 96-well filter plate, and plates were centrifuged at 5,000 Â g for 10 min at room temperature. HT-29 cells, a human colorectal adenocarcinoma cell line with epithelial morphology (ATCC HTB-38), were seeded in electrode-lined 96-well plates (E-Plate View 96; ACEA Biosciences) in Dulbecco's modified Eagle medium (DMEM) and allowed to grow to a confluent monolayer overnight prior to loading of the processed cecal content supernatant. Samples were run in triplicate. Prior to addition of the cecal content supernatant samples to the plates containing HT29 monolayers, an aliquot of each sample, also run in triplicate, was incubated in parallel with antitoxin specific for C. difficile toxins A and B (C. difficile toxin/antitoxin kit T5000; TechLab, Blacksburg, VA) for 40 min at room temperature as a specificity control for the presence of C. difficile toxin A and B in samples. Active C. difficile toxin has cytotoxic effects on HT29 cells, which results in a dose-dependent and time-dependent decrease in cell impedance (CI). A standard curve was generated using wells that received purified C. difficile toxin A (List Biological Labs). CI data following incubation with cecal contents from mice with CDI were acquired and analyzed using the xCELLigence RTCA system and software (ACEA Biosciences, San Diego, CA). A normalized CI was calculated for each sample by normalizing the CI to the last CI measured at the time point prior to the addition of cecal content to the well.
Fecal lipocalin-2 quantification. Fresh feces were collected from individual mice and stored at 280°C until processed and assayed by enzyme-linked immunosorbent assay (ELISA). Fecal pellets were weighed and homogenized in PBS with 0.1% Tween 20. The fecal suspension was centrifuged, and lipocalin-2 levels in the supernatant were quantified using the mouse lipocalin-2/NGAL DuoSet ELISA kit (R&D Systems, Minneapolis, MN) according to the manufacturer's instructions.
DNA extraction and 16S rRNA gene sequencing. Cecal and colon luminal contents were separately collected from mice with IBD and without IBD at the time point immediately preceding C. difficile spore challenge. The University of Michigan Microbiome Core extracted total DNA from cecal and colon contents and prepped DNA libraries as previously described (56). The V4 region of the 16S rRNA gene was amplified from each sample using the dual-indexing sequencing strategy as described previously (57). Sequencing was done on the Illumina MiSeq platform using the MiSeq reagent kit V2 (MS-102-2003) to sequence the amplicons (500 total cycles), with modifications found in the Schloss SOP (https://github .com/SchlossLab/MiSeq_WetLab_SOP). The V4 region of the mock community (ZymoBIOMICS Microbial Community DNA Standard; Zymo Research) was also sequenced to supervise sequencing error. Data were analyzed using mothur (v 1.42.3) (58).
Statistics. Statistical analysis for continuous variables was performed using the unpaired Student t test or one-way analysis of variance (ANOVA), and Tukey's post hoc test was performed using R. A P value less than 0.05 was considered statistically significant. Categorical and ordinal variables were analyzed using nonparametric tests as indicated.
Data availability. Code and processing information are available in GitHub repository at the following URL: https://github.com/AbernathyClose/AbernathyClose_IbdCdi_mBio_2020.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. | 6,830.6 | 2021-06-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Quantum pathways for charged track finding in high-energy collisions
In high-energy particle collisions, charged track finding is a complex yet crucial endeavor. We propose a quantum algorithm, specifically quantum template matching, to enhance the accuracy and efficiency of track finding. Abstracting the Quantum Amplitude Amplification routine by introducing a data register, and utilizing a novel oracle construction, allows data to be parsed to the circuit and matched with a hit-pattern template, without prior knowledge of the input data. Furthermore, we address the challenges posed by missing hit data, demonstrating the ability of the quantum template matching algorithm to successfully identify charged-particle tracks from hit patterns with missing hits. Our findings therefore propose quantum methodologies tailored for real-world applications and underline the potential of quantum computing in collider physics.
Introduction
In collider physics, the endeavour of accurately associating the multitude of hits in the detectors recorded during high-energy particle collisions with the original charged particle tracks that traversed the detector emerges as a particularly challenging combinatorial problem [1,2].The precise assignment of these detector hits is pivotal for deducing the underlying nature and dynamics that catalysed the fundamental interactions being probed in such collisions.The critical endeavour of tracking therefore fosters a deeper understanding and elucidation of new physics phenomena, thereby acting as a linchpin in advancing high-energy physics.
The gamut of issues encountered in high-energy physics often resembles database search algorithms, where the solution to a particular problem is embodied as a notable element within a specified dataset.A prime exemplification of this is identifying charged particle tracks within a detector experiment, as seen in the eminent CMS [3] and ATLAS [4] experiments at CERN.This task can be conceptualised as a variant of a search algorithm known as template matching.The primary objective is to discern charged particle tracks traversing the tracker detector by juxtaposing the raw detector response against a preestablished database encompassing hit patterns that correspond to physical particle tracks obtained from simulation.Upon the recognition of a physical track within the data, attributes of the track, such as momentum and angular distribution, can be gleaned from the template database.This procedure is prominently recognised as Associative Memory, and has been shown to be a highly effective approach to track finding in high-energy experiments, employing Application Specific Integrated Circuits (ASIC) [5] to perform the template matching.The method of template based track finding is used in modern detector experiments and is marked as one of the potential approaches to be used at future colliders.
The proficiency of template matching algorithms is heavily contingent on the efficiency at which one can traverse through the template database.In an unstructured database comprising N elements, conventional search algorithms exhibit a scaling of O(N ), necessitating, on average, N/2 queries to the database to pinpoint the matching element.Contemporary particle colliders witness an increase in the number of potential tracks encoded in the database, congruent with the escalating energy and luminosity of the collisions within the detectors.Concurrently, since the advent of advanced tracking technology, tracking detectors have been evolving to become highly granular, thereby amplifying the resolution of the tracks and, consequently, the quanta of track patterns necessitated to be encoded into the template database.As the frontier of high-energy and high-luminosity experiments beckons, the practice of identifying charged particle tracks via Associative Memory is confronted with a duo of challenges: (1) the rapidly increasing number of tracks encoded in the template database demands a significant amplification in storage capacity to accommodate the probable tracks, and (2) the temporal resources required to sift through a burgeoning number of tracks is inefficient for modern tracking objectives.
With its rapid and continuous development, quantum computing offers a paradigm shift in information science and has the potential to revolutionise modern computational techniques.Particle physics will benefit from any speedup that quantum computers can provide and the devices' ability to compute in a regime that has never been accessible before.Already, there has been a quickly developing research effort into proof-of-principle algorithms for applications in particle physics ranging from the simulation of quantum field theories [6][7][8][9][10][11][12] and collision events [13][14][15][16][17][18][19], to event classification [20,21] and analysis [22,23].Quantum tracking algorithms have gained a lot of interest [24][25][26][27][28] in an attempt to combat the problems facing classical techniques.Quantum computers offer a solution to the limitations of Associative Memory.The exponentially growing Hilbert space of qubit-based systems allows large datasets to be encoded onto quantum devices with efficient resource usage [24,29].Furthermore, it has been shown that a polynomial speedup can be achieved for search algorithms by leveraging the Grover Search Algorithm [30,31], which has been suggested as a tool to achieve the crucial speedup required for Associative Memory to be effective for tracking algorithms [24].
This paper proposes a proof-of-principle quantum algorithm which extends on the regular Grover search approach to track finding via Associative Memory, proposed in Reference [24], by abstracting the oracle operation to perform a template matching algorithm to match detector-hit data with a pre-established database of physical tracks.Following the oracle construction method of Reference [32], it will be shown that a single, general oracle operation can be constructed for the template matching approach to successfully identify particle tracks.Additionally, we will demonstrate that the template matching approach further improves on the regular Grover search approach by allowing for data with missing hits to be efficiently reconstructed, a highly non-trivial task for classical tracking algorithms.
Grover Search and Quantum Amplitude Amplification
The Grover Search is an optimal quantum search algorithm [30,31] which amplifies the amplitudes of marked states within a uniformly distributed database to successfully identify elements of interest, achieving a polynomial speedup over classical search techniques for unstructured databases.Consider an unstructured dataset of N elements, X = {x 1 , x 2 , . . ., x N }, which has one or more elements of interest, m j , and can be encoded on n = log 2 (N ) qubits as an equal superposition, where A G = H ⊗n is an n-qubit Hadamard transformation which prepares the state |s⟩, and |x i ⟩ represents the database element x i as a state on the quantum device.The Grover Search aims to identify the elements of interest, m j , in the database X and amplify their amplitudes.To first identify the marked elements, one can define a Boolean function, f (x), such that This function can then be used to construct the oracle, such that the amplitude of an element of interest is marked by inverting the amplitude of the state and leaving all over states unchanged.Marking the states alone is not enough to successfully identify the elements of interest, as a measurement at this stage will still return each element with equal probability.Therefore, one must amplify the amplitudes of the marked states such that a measurement returns one of the marked states with a high probability.Geometrically, the amplification process can be modelled as a reflection of the whole system about the equal state |s⟩ from Equation 2.1, reducing the amplitudes of the unmarked states, and amplifying the marked states.This can be achieved by applying the Grover diffuser, which has the form where S 0 is a phase inversion on the zero state, and in the case of the Grover Search, A † G = A G , as the Hadamard transform is Hermitian.
Combining the diffuser with the oracle, one step of the algorithm can be defined as a single, unitary operation, the Grover Iterator, which can be applied iteratively to amplify the amplitudes of all states of interest in the database.For an unstructured database of N -elements with m-elements of interest, the optimal number of applications of Q to achieve the highest probability of measuring a state of interest is (2.6) The Grover Search, therefore, scales as O( √ N ), providing a remarkable polynomial speedup over a classical search algorithm.Consequently, the Grover Search offers a substantial speedup when searching large databases, typical of those produced by modern particle physics experiments.
It should be noted that it is possible to construct a database for which the number of elements of interest, m, is not known a priori.Therefore it is not clear how many iterations of Q should be applied to reliably return an element of interest from the database upon measurement.To establish m, one can use Quantum Counting [33] which leverages Quantum Phase Estimation [34] to estimate the number of interesting states in the database, and, by extension, the number of applications of Q.For the examples considered in this paper, the number of elements of interest is known by construction of the database, however future implementations may benefit from the Quantum Counting routine.
Quantum Amplitude Amplification
The Grover algorithm performs a search on a uniform, unstructured database encoded onto n-qubits using a Hadamard transformation, A G = H ⊗n .However, it is often the case that it is not efficient to encode the database as a uniform superposition, but instead as an arbitrary state, |s ′ ⟩, prepared using the unitary operation A. Quantum Amplitude Amplification (QAA) [35] is a generalisation of the Grover Search algorithm which can perform a search on |s ′ ⟩ by modifying the Grover Iterator from Equation 2.5.
As shown in Equation 2.4, the amplification of a marked state is performed by reflecting around the state |s⟩.Generalising to an arbitrary initial state, the diffuser operation now reflects around the state |s ′ ⟩, and thus has the form D ′ = AS 0 A † .The Grover Iterator becomes, such that the Grover Iterator from Equation 2.5 can be retrieved by identifying the preparation of the state |s⟩ as an n-qubit Hadamard transform.Figure 1 shows a schematic circuit diagram for Quantum Amplitude Amplification.
Oracle construction
The explicit form of the oracle, S F , has so far remained an undefined black box in both the Grover and QAA routines.The only speculation is that the oracle must mark any interesting states within the database by inverting the phase of the marked states' amplitudes.
Consider the example where a database of four states is encoded onto two qubits via a Hadamard transform, . . .
Figure 1: Schematic circuit diagram for Quantum Amplitude Amplification (QAA) on n qubits.The circuit is initialised by preparing an arbitrary state using the unitary operation A. The state is then parsed to the QAA routine which amplifies the amplitudes of interesting states in the initial state.The QAA routine is applied t-times to return an interesting state with high probability, upon measurement.The QAA routine is constructed from two operations: the oracle, which marks the interesting states by inverting their phase, and the diffuser, which performs a reflection to amplify the amplitudes of the marked states.
It is possible to define an oracle which will search this database for the state |11⟩ by applying a controlled-Z gate operation, which will apply the Z-gate operation to the target qubit if the control qubit is in the '1' state.The oracle operation therefore has the form (2.9) Acting the oracle on the initial state, we find thus the state |11⟩ has been marked by the oracle.However, if the target state now changes from |11⟩, the form of the oracle has to change.For the problem of track finding, this is limiting as one would have to transpile the circuit each time a data string is retrieved from the detector to correctly search for, and identify, a matching hit-pattern template in the database.In Section 4, an algorithm is proposed that removes this limitation by generalising the oracle construction, allowing for the same circuit to be used for all data retrieved from the detector, without having to know how to construct the oracle a priori.
Silicon Layer
Primary Vertex
Possible Tracks
Figure 2: A single track through a 12-module detector, arranged in four layers of three detector modules.The red track shows the "true" track through the detector, with the blue circles representing the hits in the detector.The black tracks show a selection of possible tracks which can also lead to this hit pattern in the detector.Increasing the granularity of the detector decreases the number of tracks corresponding to a single hit pattern, but increases the combinatorial challenge of finding possible hit patterns left by charged particles.
Track Finding via Associative Memeory
Modern high-energy collider experiments collide particles together at unprecedented energies in the centre of close-to-fully hermetic detectors.These detectors comprise many sub-detector regions immersed in strong magnetic fields.The experiment aims to precisely reconstruct the energy and momentum of each particle created in the collision event to unveil the underlying physics in play.The reconstruction of particle tracks through the detector segments can be separated into three main steps: (1) the reconstruction of the charged particle trajectories as they traverse the detector layers through particle tracking, (2) the determination of the particle energies using calorimetry, and (3) the reconstruction of muons in dedicated tracking modules on the outer layer of the detector device.From this process, essential characteristics of the underlying physics can be obtained.For example, the particle species can be identified, and any missing energy can be established.This paper will focus on the first step, designing a quantum algorithm to identify charged particle tracks in the detector efficiently.
To successfully record the trajectory of a charged particle through a detector requires a method of measuring the particle without disturbing its path through the detector.In state-of-the-art collider experiments such as CMS and ATLAS at CERN, this is achieved using high-granularity silicon tracking layers, which can accurately record the precise track of the particle.These sub-millimetre-thick layers of silicon sample a particle's trajectory, recording a hit every time it passes through a layer.Strong magnetic fields are used to bend the track of the charged particle proportionally to the inverse of the particle's momentum.With a position granularity of tens of micro-metres, the detector can accurately reconstruct the particle's trajectory, allowing for the particle's charge and its momentum to be determined.Furthermore, the high precision of the trackers allows for accurate reconstruction of jets of particles in the detector, with individual constituents of jets being separately identified and displacements of hundreds of micro-metres being reconstructed, thus allowing for the identification of complete decay chains, such as those from b quarks.
In the tracking detectors, the only information collected is the position at which each track traversed a silicon layer.The hits belonging to an individual particle track must, therefore, be identified from the raw detector output.Once identified, a fitting operation is performed to dress the track with parameters such as the azimuthal angle, ϕ, at which the particle has been produced, and the reconstructed transverse momentum, p T .Due to the wide range of possible interactions, the paths of the charged particles in the tracking detectors vary, and the possible combination of hit patterns they produce is extensive.Furthermore, each hit pattern is associated with many tracks.For example, Figure 2 indicates some of the possible tracks that have the same hit pattern in a simple, 12-module detector.With increasingly granular detectors, the number of possible hit patterns is increasing to unmanageable levels.For a single particle crossing the detector, identifying the hits corresponding to the particle's track is seemingly a trivial task.However, in particle collisions, thousands of charged particles traverse the detector every fraction of a second.Each particle leaves a set of hits in the detector, leading to tens of thousands of hits in the detector from the particle trajectories, all overlapping.Therefore, the reconstruction of particles becomes a highly challenging combinatorial problem.
Classical techniques such as Associative Memory, which employs a template matching approach to track finding, have been shown to be highly effective at identifying hit patterns in the tracking detectors [36].However, as the number of particles through the detector increases with the collider energy and luminosity, and tracking detectors become more granular, the number of hit patterns that need to be stored and compared becomes increasingly unmanageable, and the time taken to find the correct match grows quickly.With the exponentially growing Hilbert space of a qubit system and the polynomial speedup of Quantum Amplitude Amplification (QAA), quantum computers provide a potentially powerful tool for tackling the track finding problem.In Section 4, a proof-of-principle quantum tracking algorithm is proposed, which harnesses the advantage over the QAA routine.In Section 5, it will be shown that this algorithm can be extended to handle imperfect data which has missing hits efficiently.
Quantum Template Matching for Track Finding
To successfully identify tracks in hit data from the detector via a quantum template matching algorithm, the oracle, S F , must have a general construction to identify the correct track without prior knowledge of the input data.Following the oracle construction from Reference [32], it is possible to design a general oracle by introducing an additional register to the QAA circuit in Figure 1, the data register.The retrieved detector-hits data from the experiment is encoded on this register for each event with the unitary operation A D .The register retained from the QAA routine will now encode the template database, the template register.For a tracker in the same configuration as Figure 2, with 12 tracker modules arranged in layers of threes, there are 15 possible hit patterns for particles traversing the detector, neglecting multiple track signatures and requiring one hit per detector layer.
These hit patterns are one-hot encoded into bit strings of 12 bits, with each bit corresponding to a detector module.If a hit is detected on the module, the bit is flipped to the '1' state.Otherwise, it remains in the '0' state.The templates are encoded onto the template register as a linear superposition of all possible tracks through the unitary operation A T * .
The individual hit-pattern encodings are displayed in Table 1.The state preparation has the general form where n = 12 for the example from Figure 2.
To construct the general oracle, we allow for the oracle to now act across the two registers, controlling from the data register and applying a series of cnot operations to The state preparation step encodes hit data from the detector onto the data register, and the database of hit-pattern templates onto the template register using the unitary operations A D and A T respectively.The Quantum Amplitude Amplification (QAA) routine is then applied t-times to correctly identify the hit pattern within the database, with high probability.The general oracle marks the state in the template database which corresponds to the hit pattern from the detector, encoded on the data register.The diffuser operator then amplifies the marked amplitudes.A measurement is then performed to return the matched template.
the template register.If the hit pattern encoded onto the data register is in the template database, then the corresponding state on the template register will be flipped to the zeroth state.The matched state can be marked by applying a phase inversion on the zero state, S 0 .Finally, the oracle returns the marked state to its original bit combination by applying the series of cnot operations again, in the same order, controlling from the data register and acting on the template register.Through this oracle operation, the track template matching the hit pattern encoded on the data register is marked with a negative phase without the need for a bespoke oracle operation designed for the input data.To amplify the marked state, the QAA diffuser from Equation 2.7 is then applied to the template register, where here A = A T .To achieve the greatest probability of selecting the correct track, the oracle and diffuser are then applied t-times, according to Equation 2.6.Figure 3 shows a schematic of the circuit for the quantum template matching algorithm, outlining the structure of the oracle explicitly.
Adopting the procedure of the quantum template matching algorithm for track finding allows for data from the detector to be parsed into the quantum algorithm and matched to a track template "on-the-fly", as the circuit is general for all possible hit patterns handed 1, shown in the top right-hand corner of the plots.The algorithm successfully identifies the correct hit-pattern templates from the database with high probability, greater than 90%.The algorithm requires three iterations of the QAA routine, and has been run on the qasm simulator for 10 4 shots on the device.to the algorithm.The data is one-hot encoded onto the device using the unitary operation A D , which applies a series of not-gate operations to load the data onto the device.The efficiency of the track finding algorithm has been tested for two hit patterns, corresponding to Tracks 1 and 5 in Table 1.To successfully determine the matching efficiency, the circuit has been run for three iterations of the QAA routine, and for 10 4 shots on the qasm simulator without a noise model † .The results are displayed in Figure 4, showing that the correct match is achieved with high probability, greater than 90% efficiency.
The success in matching the data to the correct hit pattern with very high probability and the QAA routine's polynomial speedup over classical search algorithms means the algorithm is well suited to quick and efficient track finding.In practice, one would only have to run one shot of the circuit to retrieve the correct track match, with high probability, thus providing fast track finding using the quantum device.Currently, however, the circuit from Figure 3 requires the data to match precisely with a hit-pattern template in the database.In practice, this is not always the case, as data from the detector may be missing hits from specific tracking modules.In Section 5, it will be shown that, by modifying the oracle further, the quantum template matching algorithm can identify possible tracks in imperfect data, a highly non-trivial task for current classical techniques.
Figure 5: Results from the quantum template matching algorithm with the modified oracle for data with a missing hit in the third detector layer.The results show the correct identification of the two possible hit patterns, Tracks 8 and 9 from Table 1.The algorithm successfully identifies the correct high-pattern templates with high probability, greater than 80%.The algorithm requires two iterations of the QAA routine, and has been run on qasm simulator for 10 4 shots on the device 5 Track Finding on Data with Missing Hits One of the primary challenges in track finding via Associative Memory arises when a particle passes through the detector and one or more of the detector modules on its trajectory fails to register a hit.Current, state-of-the-art track-reconstruction techniques struggle with this scenario as the combinatorics between layers with missing hits quickly become unmanageable.Overcoming this problem is paramount as the energy and luminosity of colliders are increasing, and the detectors are becoming more granular, elevating the combinatorial problem.In this Section, the quantum template matching algorithm from Section 4 is extended to allow for the identification of tracks from imperfect data, without an increase in computation complexity or resources.
In the quantum template matching circuit shown in Figure 3, the oracle is essential for accurately selecting and marking the identified track by comparing, exactly, the bit strings in the data register and the template register.Consider parsing a hit pattern to the algorithm which does not contain a hit in the third layer of a detector like the one shown in Figure 2. Running the algorithm for many shots would not return a decisive answer to which hit pattern matches the trajectory of the particle through the tracker, as the hit pattern without the third hit is not in the template database.To combat this problem, the oracle must be modified to identify the correct match to the hit-pattern templates correctly.
Using the example of a hit missing in the third detector layer, the oracle can be constructed such that it does not act on the qubits corresponding to the third detector level and acts only on the "good" subset of qubits corresponding to the first, second and fourth detector layers.The modified oracle then works in the same form as in Section 4, but only acting on this good subset.First, a series of cnot operations controlled from the good subset of qubits on the data register and acting on the good subset of qubits on the template register is applied.The good subset of qubits on the template register will have been flipped to the zeroth state if there is a match between the good subsets of the data and template registers.To mark the state, a phase inversion on the zeroth state is then performed on the subset of template qubits, S ′ 0 .Finally, the bit strings are returned to their original combinations by once again applying the series of cnot operations to the good subsets in the same order.
Employing this oracle in the quantum template matching algorithm from Section 4 will return all possible hit-pattern templates with the matching subset of qubits, allowing for efficient identification of the particle's trajectory through the detector.For the example outlined here, two states corresponding to Track 8 and 9 from Table 1 will be marked, therefore Equation 2.6 states that two iterations of the QAA routine will yield the best match ‡ .Figure 5 shows the results from 10 4 shots on the qasm simualtor for two iterations of the QAA routine using the modified oracle.The algorithm successfully predicts the possible path that the particle could have taken through the tracker, returning correct hit patterns for Tracks 8 and 9 from Table 1.Remarkably, the computational complexity and the required quantum resources do not increase when dealing with imperfect data, which is not the case using classical techniques.On the right of Figure 5, an illustrative number of combinations of tracks passing through the two hit patterns shows how the combinatorics for this problem will increase dramatically for missing-hits data.
When dealing with real detector data, it is not always known which detector modules will likely fail to record a hit.Therefore, the choice of which part of the data bit-string to examine can be randomised to correctly identify the possible hit-pattern matches in imperfect data to a high degree of accuracy.Due to the extreme combinatorics in modern particle collider experiments, this is becoming an unmanageable problem for classical approaches, such as Associative Memory.The algorithm presented here can match both perfect and imperfect data, retrieving the correct match with high probability without any increase in computational complexity or resources for the latter.The simple but effective algorithm, therefore, provides an advantage over classical template matching techniques, both in polynomial speedup and the ability to match data with missing hits.This speedup and accuracy will become necessary as the field moves to an era of higher energies and luminosities.
Conclusion
Charged-track finding in high-energy particle collisions is a complex combinatorial task, fraught with challenges stemming from the sheer volume of data, noise, and intricacies of particle interactions.In this article, we present general and extendable quantum algorithms for the identification of particle tracks through a detector.As an application, a ‡ In practice, Quantum Counting can be used to determine the number of interesting states, m.
simplified detector model has been used, constructed from 12 detector modules arranged in four layers of three tracking modules.The quantum algorithms employ a novel oracle design to successfully identify particle tracks traversing the detector by matching detectorhit data to a hit-pattern template in a pre-established database of possible hit patterns.By abstracting the Quantum Amplitude Amplification (QAA) routine to encompass an additional data register, the identified template has then been amplified to deliver the correct match upon measurement.Exploiting the established polynomial speedup provided by the QAA routine [35], the quantum template matching algorithm provides an advantage over classical tracking techniques via Associative Memory.Figure 4 contains the results from running the quantum template matching algorithm on the qasm simulator for 10 4 shots, showing that data parsed to the algorithm has been correctly matched to a hit-pattern template.
Confronting the prevalent issue of data with missing detector hits, the quantum template matching algorithm has been adapted and tested to mitigate the complexity of reconstructing tracks from imperfect data.By modifying the general oracle from Section 4, the quantum template matching algorithm has been used to correctly identify hit-pattern templates for a track traversing the detector with one detector layer failing to register a hit.This task is highly non-trivial for classical track-identification techniques.Remarkably, the quantum resources required, and the complexity of the circuit, do not increase for imperfect data, providing a quick and efficient method for identifying tracks from data with missing hits.Figure 5 demonstrates the quantum template matching algorithm's ability to successfully return possible hit-pattern templates for data with missing hits, with high probability.
The quantum methodologies presented in this article not only adeptly manage incomplete data but also underline the durability and adaptability of quantum algorithms when faced with real-world, inconsistent datasets.Furthermore, our exploration of track encoding and utilising templates for various hit patterns in detectors provides deeper insights into the potential of quantum techniques in collider physics.These findings emphasise the immense promise of quantum computing in high-energy physics.
To conclude, while acknowledging the challenges inherent to charged track finding, our research underscores the pivotal role and promise of quantum algorithms in setting new benchmarks and advancing the task of charged-track finding in particle collision studies.
Figure 1 :
Figure 1: Schematic circuit diagram for the Grover Search algorithm on n qubits.
Figure 2 : 2 Figure 3 :
Figure 2: Schematic circuit diagram for the the template matching algorithm.
Figure 4 :
Figure 4: Results from the quantum template matching algorithm for two detector-hit data scenarios.Figure (a) and (b) show the correct identification of Tracks 1 and 5 from Table1, shown in the top right-hand corner of the plots.The algorithm successfully identifies the correct hit-pattern templates from the database with high probability, greater than 90%.The algorithm requires three iterations of the QAA routine, and has been run on the qasm simulator for 10 4 shots on the device.
Table 1 :
Track templates for all possible hit patterns in a 12 module detector, with the modules arranged in four layers of three modules.Each track is required to have one hit in each layer.The track templates are one-hot encoded, with each bit corresponding to a detector module.If a hit has been detected in the module, the bit is flipped to the '1' state, otherwise it remains in the '0' state.The bit strings read left to right, with the first three bits corresponding to the first detector layer, the next three bits corresponding to the second layer, and so on. | 7,044.8 | 2023-11-01T00:00:00.000 | [
"Physics"
] |
MAPLE Assembled Acetylcholinesterase–Polyethylenimine Hybrid and Multilayered Interfaces for Toxic Gases Detection
Developing a controlled method for obtaining hybrid enzymatic-based interfaces for sensing application require the use of a multiuse, reusable sensor. By controlling the interface characteristics in terms of the surface chemistry, thickness, and roughness, a tailored response toward various toxic compounds can be obtained, regarding both materials used as active surfaces and fabrication methods. Herein, we report a preliminary study on using a laser-based method (i.e., matrix-assisted pulsed laser evaporation, or MAPLE) for obtaining active polymeric–enzymatic interfaces as hybrid or layered coatings for detecting toxic vapors. The MAPLE fabrication consisted of the simultaneous alternating evaporation of layers of polyethylenimine (PEI) and acetylcholinesterase (AchE) in order to obtain active surfaces as both hybrid PEI-AchE and a PEI/AchE layered coating, respectively. The deposition processes of the polymer and enzyme were carried out using a double-target system and a Nd:YAG pulsed laser, operating at 0.45 J/cm2 fluences with a wavelength of 266 nm and a repetition rate of 10 Hz. Fourier transform infrared spectroscopy revealed no significant changes in the functional groups of both hybrid and layered coatings compared with the initial material. The thickness and roughness, as well as the morphologies of the coatings revealed by atomic force microscopy and scanning electron microscopy showed coatings thicker than two μm that had smooth surfaces and average roughness values below six nm. The sensors were tested with simulants for nerve gases and pesticides containing phosphonate ester groups, namely dimethyl methylphosphonate (DMMP) and diisopropyl methylphosphonate (DIMP), and a different sensitivity was shown to the selected chemical agents for each of the sensors. The best sensitivities for DMMP and DIMP obtained by using a PEI-AchE coated sensor are 65 kHz and 200 kHz, respectively, whereas the best sensitivity when using multilayered interfaces is 30 kHz and 10 KHz for DIMP and DMMP, respectively.
Introduction
Nowadays, the fast, sensitive detection of specific harmful chemical agents presents interest as a research topic, which is caused by security concerns and safety hazards. Therefore, detecting quantities lower than those that can affect negatively life and health, as well as discriminating between chemical toxic compounds is of interest for wide area of applications related to health and security.
Among the most used sensors, surface acoustic wave (SAW) sensors are used at a larger scale for research related to the detection of volatile toxic compounds due to specific characteristics: high sensitivity, small size, low cost, very good response time, and ability to work in wireless mode.
The SAW sensor sensitivity and selectivity are related both to sensor characteristics and active area interface properties. This implies the use of specific compounds and methods that are compatible with them, which allow tailoring their characteristics [1][2][3][4][5][6][7].
The organophosphorus compounds have a significant negative impact as related to terrorism. Such compounds are also environmental and food chain pollutants (i.e., DMMP, or dimethyl methylphosphonate), as they are used as common additives for anti-foaming agents, plasticizers, stabilizers, textile conditioners, and antistatic agents [8]. Moreover, DMMP is used due to its nontoxicity and organophosphorus compound elemental composition for mimicking nerve agents, being considered an appropriate simulant for both insecticides and G-series nerve agents. Besides DMMP, diisopropyl methylphosphonate (DIMP) can be used as well as simulant for G-series nerve agents.
Therefore, one major requirement for obtaining highly sensitive active elements for sensors is the synergy between specific functional materials with advanced fabrication technology. Polyethylenimine (PEI) and acetylcholinesterase (AchE) were chosen for obtaining the hybrid and multilayered coatings due to their peculiar characteristics suitable for sensing DMMP and DIMP. Specifically, among the polyamines, polyethyleneimine (PEI) represents one of the most interesting candidates for binding specific volatile compounds due to its high amine density and accessible primary amine sites on the chain ends (for example, for CO 2 capture ability, etc.) [9]. In the literature, AchE immobilized through physical adsorption is mainly used for biosensors, especially for testing/screening therapeutic drugs for Parkinson's and Alzheimer's diseases, as well as in clinical diagnosis [10][11][12][13]. For this direction, films of silica sol-gel incorporating gold nanoparticles (AuNPs-Si-SG) coated with AchE, or platinum-coated with AchE, were used to test various drugs [14].
The use of active coatings with tunable characteristics in sensing fields is directly correlated with the surface chemical and topographical properties. Therefore, the method of preparation, and the type of analytes implied must to be correlated to the specific application.
There are chemical methods and physical methods that can be used to modify/coat a surface; this involves processes from adding suitable functional groups on the surface (i.e., chemical vapor deposition (CVD)) to adding physical material onto a surface (i.e., spin coating, dip coating, vapor deposition, sputtering, arc vapor deposition, and ion plating). If in the case of CVD, the thickness of the film can be controlled even to an atomic level, but the precursors are highly toxic, corrosive, or explosive, causing the destruction of the biocompounds or adverse toxic effects. In the case of spin coating and dip coating, it is difficult to control the thickness of the film, and no hybrid materials that imply organic solvents and proteins/enzymes can be obtained [15].
In the last years, matrix-assisted pulsed laser evaporation (MAPLE) was successfully used for depositing sensitive materials such as polymers and proteins, but it was also shown to be an appropriate approach for embedding in a controlled manner not only ceramic materials or graphene, but also active proteins such as lactoferrin, or, for a narrow window of parameters, functional Micrococcus bacteria for biosensing applications [16][17][18][19][20][21][22].
MAPLE provides a suitable process for transferring various small or large molecular weight species as coatings from the condensed phase into the vapor phase. The process starts with the laser energy being mostly absorbed by the solvent molecules, therefore preventing the target molecules from being damaged by the high-energy laser beam. The solvent vaporization mechanism includes the photo thermal process that converts the absorbed energy of the photons from the frozen solvent molecules to thermal energy [15]. Therefore, when "target molecules absorb enough energy through collisions with solvent molecules under the evaporation process, the target molecules are transferred to the vapor phase". It is important to underline that the MAPLE target solution is depleted layer by layer; the concentration does not change within the experiment time [15]. Depending on the experimental conditions, and due to the low solvent adhesion coefficient, most of the solvent molecules are pumped away [15][16][17][18][19][20][21][22]. When it is necessary to obtain coatings from not miscible elements, targets can also be designed as a multi-compartment system, therefore allowing a unique method to "mix" proteins with polymers that use organic solvents [18].
Within this context, in this work, the matrix-assisted pulsed laser evaporation (MAPLE) technique was used to yield hybrid PEI-AchE, which can form respectively layered PEI/AchE thin films for efficient DMMP and DIMP capture. In previous work by Viespe et al. [23], it was observed that if the PEI is deposited by air-brush technique, the film morphology is affected, with direct consequences on the signal-to-noise ratio (SNR).
In this work, MAPLE was used to evaporate simultaneously or alternatively layers of PEI and AchE. Micron-thick sensitive layers formed active surfaces: hybrid PEI-AchE and PEI/AchE layered coatings, repsectively. These were tested with DIMP and DMMP. Our approach presents not only the advantage of controlling the morphology of the polymer and enzyme films deposited on quartz substrates, but, more importantly, when using the MAPLE technique, the solvent used for the PEI polymer will not contact the AchE enzyme; therefore, it will not affect its functionality.
Target Solutions Preparation
The chemicals were obtained from Sigma-Aldrich (Saint Louis, MO, USA). Deionized water and methanol were used as solvents for AchE and PEI, respectively. Solutions of 2% weight PEI (408719 Aldrich, average Mw~800 by LS, average Mn~600 by GPC) in methanol, and 0.1% weight AchE (C3389 Type VI-S, lyophilized powder, 200-1000 units/mg protein) in double-distilled water were obtained. The PEI solutions were subsequently sonicated for several minutes (30 min) (Sharpertek Digital Ultrasonic cleaner XP PRO).
Matrix-Assisted Pulsed Laser Evaporation System
A "Surelite II" pulsed Nd:YAG laser system (Continuum Company) (five to seven ns pulse duration) at 266 nm and a 10-Hz repetition rate was used to irradiate the frozen targets. In order to avoid the influence of methanol on AChE, we used a double-target system, consisting of 75% area occupied by AchE and 30% occupied by PEI ( Figure 1). In order to form the target, the solutions were a sonicated for five minutes and rapidly frozen in a liquid nitrogen-cooled copper container. Firstly, the PEI solution was confined within 30% of the target by using a removable Teflon separator; secondly, the AchE solution was frozen, and the teflon separator was removed. The container was mounted on a cryogenic holder inside the deposition chamber (Neocera spherical vacuum chamber with 12" diameter). The target was maintained frozen by a circulating liquid nitrogen system; the temperature was checked by the two thermocouples placed directly onto the target holder. The laser fluence was 0.45 J/cm 2 and the number of pulses was kept at 54 k pulses (for AchE) and 36 k pulses (for obtaining a 2.4-um thick PEI and a 200-nm thick PEI-AchE enzyme coatings). Also, in order to avoid damage by local overheating and drilling following multiple pulses of laser irradiation, the target was rotated with 20 rpm using a motion feed-driven motor. Due to the absorbed laser energy in the first frozen layer of the target, which consisted mostly of frozen solvent, vaporization takes place in order to entrain the enzyme and polymer particles toward the substrate, while volatile solvent molecules were removed from the deposition chamber by the vacuum pumps. The substrates are placed parallel to the target and situated at a distance of 3.5 cm. The background pressure in the chamber was maintained at 1-2 × 10 −3 Pa. The hybrid PEI-AchE coatings were obtained by simultaneously scanning the laser beam onto the dual target; within the same experiment, respectively layered PEI/AchE thin films were obtained by firstly scanning the laser beam onto the PEI target, and secondly onto the AchE target surface ( Figure 1).
Substrate Preparation
Two types of substrates were used: double-polished Si (100) transparent in the infrared (Neyco), and SAW sensors.
The Si substrates that were used for Fourier transform infrared (FTIR) measurements, as well as for atomic force microscopy (AFM) and SEM were cleaned by sonication with alcohol and water and blow-dried under N 2 gas before use. All of the substrates were placed at a distance of 3.5 cm from the frozen target and kept at ambient temperature during the deposition.
The SAW sensors were fabricated on ST-cut quartz with propagation in the X-direction. The interdigital transducers consisted of 200 nm of gold on 10 nm of chromium as an adhesive layer. A double-double finger design was used with a periodicity of 11 µm. The SAW sensors' operation frequency was~69 MHz. For the measurement circuit, a DHPVA-100 FEMTO (10-60 dB, 100 MHz) amplifier was used, and the frequency shift of the system was read using a CNT-91 Pendulum counter analyzer [23,24].
Chemical and Morphological Characterization of the Deposited Thin Films
Fourier transform infrared spectroscopy (FTIR) was used to evaluate the characteristic vibrations of the functional groups of the substrate-deposited thin films. The infrared spectrum of the native molecule deposited by drop cast on the Si substrates was used as the control. The FTIR measurements were carried out using a Jasco FT/IR-6300 type spectrometer in the 400-4000 cm −1 range, with a resolution of 4 cm −1 . The spectra were measured by transmission through a coated Si wafer, and then the absorption was calculated by the accumulation of 1024 scans. For maintaining a steady atmosphere in the measurement chamber, silica gel and regularly purging the spectrometer with argon gas were used. The employed substrate that was used as the background as well was a thin silicon wafer (transparent to infrared).
Morphology characterizations were performed by optical microscopy, atomic force microscopy (AFM), and scanning electron microscopy (SEM). For optical microscopy, the images were acquired using an Axiovert 200 Microscope coupled to a Carl Zeiss AxioCamMRm camera. AFM (XE 100 AFM, Park systems) measurements were performed in non-contact mode, and allowed for surface roughness analyses.
SEM investigations were carried out on a field emission scanning electron microscope (JSM-531 Inspect S Electron Scanning Microscope, FEI Company (Hillsboro, OR, USA)).
DMMP and DIMP Measurements
The different concentrations of DMMP and DIMP were detected using a testing system developed by Viespe et al. [25]. The DMMP and DIMP liquid was injected into a gas mixture and mixed with air. The amount of DMMP and DIMP, as shown in Table 1, were six ppm and five ppm respectively, after the total evaporation of the analyte that was circulated in the system by a diaphragm pump (Pfeiffer model MVP 035-2). The mixture temperature and the flow rate were maintained constant during the experiments. Repeating 10 measurements of the frequency deviation for each of the sensor films yielded errors below as ± 4%.
Results and Discussion
There are various surface and interface characteristics of an active sensor element (i.e., thickness, uniformity, roughness, chemistry) that can influence and dictate the response of a sensor to a specific analyte. By the ability to control and tailor both the physical and chemical characteristics of the interfaces, an enhanced response and use of these active coatings can be obtained. In this work, we used the MAPLE technique for depositing hybrid and layered active surfaces based on PEI and AchE for detecting DMMP and DIMP, respectively. It was observed that the control samples, which were obtained by drop casting, as shown in Figure 2, are characterized by relatively smooth surfaces, but AchE presented an irregular accumulation of material as nanograins onto the surface. The optical microscopy images show a larger accumulation of these grain-like structures over the entire area of the samples. The irregular shapes are present in the case of PEI too; the material is spread non-uniformly.
Morphological Characterization
In contrast with the control samples, the main characteristics for the MAPLE-obtained samples, as confirmed by both SEM and AFM analysis, are uniformity and low roughness surface, as can be seen for the PEI, AchE, and PEI/AchE thin films (Figures 3-5).
However, it was observed as well that in the case of layered coatings, the protein can accumulate, leaving part of the PEI exposed. However, in the case of hybrid coatings, the proteins deposit seems to be embedded within the polymeric layer.
For a better visualization of the surface topography and understanding of the material organization on the surfaces, AFM measurements were performed, showing slightly porous surfaces, with pores varying within tens of nanometers in the case of hybrid PEI_AchE coatings ( Figure 5). The non-contact mode AFM images showed smooth structures of both the single elements and hybrid surfaces; roughness levels below six nm were observed (i.e., 2.2 nm for PEI, 4 nm for AchE, 2.8 nm PEI/AchE, and 5.9 nm for PEI_ACHE).
Chemical Characterization
The differences induced by deposition methods (MAPLE and drop cast) in the functional groups among the PEI, AchE, PEI-AchE, and PEI/AchE samples were determined by FTIR measurements. The similarity between the absorbance bands for drop cast and the thin films obtained by MAPLE are depicted in Figure 6 and Table 1. The significant absorption regions of the deposited material and control are shown, allowing the optimum visualization of the peaks and the changes induced by the MAPLE process. The composition was preserved after dissolving the PEI in methanol (drop cast), as well as for MAPLE transfer (Figure 6). The unmodified functional groups of PEI were indicated by the signals at 3500 cm −1 corresponding to NH asymmetric stretching, and at 3391 cm −1 and 3365 cm −1 , corresponding to NH symmetric stretching. The CH symmetric and asymmetric stretching were confirmed by the signals from 2881 cm −1 and 2915 cm −1 , respectively. The N-H deformation was confirmed at 1647 cm −1 , while the peaks observed at 1496 cm −1 and 1043 cm −1 corresponded to C-H deformation and C-N stretching, respectively [25].
Moreover, the functional groups of AchE samples obtained by MAPLE were indicated by the signal at 3282.25 cm −1 , corresponding to AchE's free hydroxyl stretching mode [26][27][28][29]. Due to adsorbed hydrocarbons on the surface, the C-H stretching vibrations are confirmed by the signal at 2926 cm −1 . Although the samples were maintained through using silica gel and purging the spectrometer with argon gas, the CO 2 interferences from ambient conditions are noticed as a double peak located at 2340 cm −1 and 2360 cm −1 (asymmetric stretching). The signals at 1641 cm −1 were associated with the NH bending and scissoring mode, while the signal in the range 1451-1410 cm −1 was assigned to the C==C stretching vibration arising from the deposited enzyme on the surface [26][27][28]30,31]. Furthermore, the spectrum of MAPLE-deposited AchE-PEI hybrid coatings showed absorptions similarities with that of the drop-casted samples for both components. However, as previously reported, the presence of enzyme immobilized onto a surface was confirmed by the observation of broad and intense bands for amide I at~1655 cm −1 (νC=O stretching vibrations) and amide II at~1530 cm −1 (combination of δN−H bending and νC−N stretching modes) [28], while in our case, despite the similarity between the drop-cast spectra and the MAPLE spectra, the peaks corresponding to amide I were observed at 1641 cm −1 , while the Amide II band was observed at 1514 cm −1 . If in the works reported previously, the bands follow the general trend of displaying broad unstructured bands without any overlaid fine structures, in our case, several components bands are observed at 1636 cm −1 , 1649 cm −1 , 1672 cm −1 , and 1690 cm −1 . As reported by Gorne-Tschelnokow et al., by analyzing the deconvoluted spectra of the enzyme, components were observed as well at 1631 cm −1 , 1648 cm −1 , and 1656 cm −1 ; meanwhile, weaker bands appeared near 1622 cm −1 , 1640 cm −1 , and 1672 cm −1 [26]. Other minor peaks (between 1700-1850 cm −1 ) are related to the carbonyl stretching absorption. Also, as previously observed in the work of Khaldi et al. [28], the peaks observed in the region of 1400−1200 cm −1 are assigned to amide III (νC−N stretch and νN−H bend near 1300 cm −1 ).
These observations demonstrate a non-destructive laser transfer of the PEI and AchE thin film, without methanol solvent molecules on the substrate or interfering with AchE after MAPLE deposition, allowing the deposition of two different configurations for the active element of the sensor, as both layered and mixture/hybrid coating.
DMMP and DIMP Measurements
A comparison between the frequency shifts values for the coated sensors with either PEI or hybrid and layered coatings, together with the effects of the presence of DMMP and DIMP, is shown in Table 2. An increase of the value of frequency shift-meaning a better response for DIMP-was observed for all of the samples, while hybrid PEI-AchE coatings gave a better response as compared with both PEI and PEI/AchE layered coatings. The frequency shift that was obtained for a DMMP concentration by us was better than the results obtained with SAW sensors using ZnO [29] and polysiloxane [32] for DMMP detection. The response time was between 9-15 s and 90-100 s in the case of DIMP and DMMP, respectively. This response could be explained by the vapor pressure of DMMP and DIMP. DMMP has a higher vapor pressure at 25 • C (0.962 mmHg) than DIMP (0.28 mmHg), which makes it evaporate over a longer time in site. By comparing the results for the three types of sensors, one can notice a difference in frequency change depending on the type of film deposited. Thus, for the PEI film, where interaction took place through weak hydrogen bonds [33], the lowest values were obtained. For the layered film, where the enzyme interacts directly with the analyte, the results are intermediate. In this case, the signal is received by an effect, π, (between the electrons p of the nitrogen and the electrons π of the phosphate group), providing stronger bonds than the hydrogen bonds formed by PEI [34][35][36].
The better response given by the hybrid coating can be explained by the synergistic effect of PEI and AchE, with both the polymer and the enzyme acting in the presence of DMMP and DIMP, through weak hydrogen bonds and the p-π effect. Also, the AchE is well-known for providing a binding site through the exposure of both an esteratic subsite (Ser-His-Glu) and peripheral binding anionic subsite that has the ability to bind to many different types of ligands [37]. Moreover, as shown in Figure 4, the hybrid surfaces are also characterized by the presence of nm-sized pores that could lead to increasing the active surface area of the sensor. Nevertheless, the differences between the results for DIMP and DMMP can be explained by the electron repellent inductive effect of the methoxy groups and the methyl radicals, which form a larger electronic cloud on the DIMP phosphorus than the DMMP [38].
These non-covalent interactions of this system with DMMP could also represent the basis for the realization of reversible detectors of nerve agent simulants in solution and also on solid supports. Nevertheless, as sensitivity and selectivity are crucial factors within sensor applicability, the perspective envisages discriminating volatile nerve agents and/or other toxic organophosphates compounds from non-toxic substances in a complex gaseous environment by using sensor array based on SAW resonators and active elements based on AchE, but in combination with different types of chemoselective polymers that could give the system both selectivity and sensitivity toward various chemical agents.
Conclusions
This study demonstrated the feasibility of obtaining hybrid polymer-enzyme active interfaces for PEI-AchE sensors by using MAPLE and a modular target system. The two different configurations of PEI-AchE depositions were characterized by SEM and AFM, and revealed smooth surfaces, except for the presence of nm-sized pores in the case of the hybrid surfaces. Moreover, no significant modifications of FTIR spectra were observed either after MAPLE deposition or AchE inclusion within the PEI polymer. On the other hand, the inclusion of AchE within the PEI coating played a major role in increasing the measurement sensitivity for both DIMP and DMMP detection. This demonstrates that the enzymes and polymer deposited by MAPLE can provide good premises for the development of a sensor with suitable stability, good reproducibility, and high sensitivity toward DMMP and DIMP.
Such a laser-based coating deposition approach demonstrates the great potential of PEI-AchE-based enzymatic interfaces in a sensing system, with potential perspectives to be used for biomedical and clinical diagnostic applications. | 5,290.6 | 2018-12-01T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Environmental Science",
"Engineering"
] |
Bernoulli Wavelets Operational Matrices Method for the Solution of Nonlinear Stochastic Itô-Volterra Integral Equations
This article gives an effective strategy to solve nonlinear stochastic Itô-Volterra integral equations (NSIVIE). These equations can be reduced to a system of nonlinear algebraic equations with unknown coefficients, using Bernoulli wavelets, their operational matrix of integration (OMI), stochastic operational matrix of integration (SOMI) and these equations can be solved numerically. Error analysis of the proposed method is given. Moreover, the results obtained are compared to exact solutions with numerical examples to show that the method described is accurate and precise.
Introduction
Wavelets are mathematical functions that isolate the data and analyze each variable with the corresponding resolution in various frequency components. As a mathematical tool, wavelet can be used to extract information from the different forms of data, including, seismic waves, earthquakes, music, image processing, signal processing, acoustics, nuclear engineering, and astronomy.
We lack enough knowledge in certain issues related to behavioral analysis for some processes to decide how it conforms, or its operation is so complex that it is irrelevant or difficult to explain precisely. In such a scenario a probabilistic model is always helpful. Due to their significance in modeling, science and technological phenomenon, nonlinear stochastic and deterministic functions have been widely investigated and studied.
The Itô integral typically used in applied mathematics is named after Kiyoshi Itô. In analytical forms, stochastic integral equations cannot be resolved normally and therefore a necessary prerequisite for their solution in applied mathematics become necessary. Throughout recent years, the computational methods such as [20][21][22][23][24] have been used to solve stochastic integral equations. However, very few articles exist on stochastic integral equations. The operational matrix method using Bernoulli wavelets for solving linear Itô-Volterra integral equations was recently employed by Mirzaee and Samadyar [25]. This article however attempts to frame a stochastic operational matrix of integration of Bernoulli wavelets (SOMIBW) and is employed to obtain the solution to the particular case of the NSIVIE as follows are functions of x and t, )) ( ( x y µ and )) ( ( x y σ are known functions, ) (x y is the unknown that is to be determined and ) (x W is the Brownian motion process defined on probability space ) , , ( P F Ω that consists of the sample space , Ω a σ -algebra F of subsets of Ω which we call events, and a real-valued set function P defined on F that is called probability.
Equation
(1) appears in various fields including engineering, mathematics, biology, health and social science. It is very hard or even difficult to solve this equation, so we develop an efficient method for solving it. In this article, equation (1) is numerically solved by the use of operational matrix of integration (OMI) and stochastic OMI based on Bernoulli wavelets. This equation is reduced to a nonlinear system of algebraic equations by the use of collocation points and these matrices can be solved by an effective numerical method such as Newton's method.
397
The rest of the work is structured accordingly. Section 2 provides some basic definitions and characteristics of stochastic calculus, wavelets, and Bernoulli wavelets. Also, in this section, OMI and stochastic OMI based on Bernoulli wavelets are obtained. The proposed method of solution is given to estimate the solution of non linear Itô-Volterra integral equation in Section 3. In Section 4, Numerical examples are presented to show the efficiency and reliability of the proposed method. Finally, the conclusion of the article is given in Section 5.
Brownian motion
has continuous sample paths.
• The process has independent, stationary increments.
Definition 2.2.
An n-dimensional process, ( is a standard Brownian motion and the i x W ) ( 's are independent of each other.
Wavelets
A family of functions is generated by mother wavelets by dilating and translating itself, which we call wavelets. The wavelet family is as given below [26]: where a and b are denoted respectively for the dilation parameter and translation parameter. Nevertheless, b varies continously.
the family of discrete wavelets may be given as, where the wavelet bases in )
Bernoulli polynomials
Bernoulli polynomials are defined in general as [25]: where , are Bernoulli numbers. For instance, starting five Bernoulli numbers are: . 30 And the first four Bernoulli polynomials are:
Bernoulli wavelets
Bernoulli wavelets are defined as follows [25]: denotes the order of the Bernoulli polynomials and
Approximation of function
∈ is expanded in terms of the Bernoulli wavelets as Truncating the above infinite series, we get ,
OMI and stochastic OMI of Bernoulli wavelets (SOMIBW)
OMI of Bernoulli wavelets is given in detail in [25]. Now, we derive the stochastic OMI of Bernoulli wavelets as follows: The stochastic integral of ) (x ψ can be obtained as follows: where s P is a m m× matrix and is called the SOMIBW. In particular, for 2 = M and 10 10 0 Using equations (9) to (12), we get where, ) (x ψ is the Bernoulli wavelet coefficient matrix and F is a m m× matrix given by , where .
Also, for a m m× matrix C, is a m -vector and .
Remark 2. If µ is an analytic function on R and
) (x C T ψ be the the expansion of ) (x f in terms of Bernoulli wavelets, where C is given in equation (6), then where [ ].
Remark 3. If µ is an analytic function on R and ) (x C T ψ be the the expansion of ) (x f in terms of Bernoulli wavelets, where C is given in equation (6), then is the Bernoulli wavelets coefficient matrix given in (7) and is given in Remark 2.
Bernoulli Wavelets Method of Solution (BWM)
In this section, we use the newly derived SOMIBW for the numerical solution of NSIVIE. Here we consider the equation in (1) as: with respect to Bernoulli wavelets as follows: , where C is given in equation (6) and is the unknown vector to be determined.
where C and F are Bernoulli wavelet coefficient vectors and 1 K and 2 K are Bernoulli wavelet matrices. Substituting (19), (20), (21) and (22) in (18), we get Now, by using Remark 3, equation (23) can be rewritten as, Using equation (24) and Remark 1, we get where 1 X and 2 X are m m× matrices described in Remark 1. Applying the OMI of Bernoulli wavelets P explained in [25] and the stochastic OMI of Bernoulli wavelets described in Section 2, equation (25) reduces to: Let us assume that P X K 1 1 1= δ and .
Again using Remark 1, where 1 δ and 2 δ are m -vectors containing a nonlinear combination of elements of C.
Replacing − by = , equation (27) reduces to a nonlinear system of equations .
Solving this nonlinear system, we get the unknown vector C. Substituting this obtained vector in equation (19), we obtain the solution of NSIVIE (1).
Numerical Experiments
Test problem 1. Consider the SLSVIE [27], is the unknown stochastic process defined on the probability space is the Brownian motion process. Table 1 where the exact solution of this problem is found to be , ) 20 where ) (x y is the unknown stochastic process defined on the probability space is the Brownian motion process. Table 3 shows the numerical results obtained by the method described in Section 3 (BWM), exact solution and absolute errors (AE) for Table 4 shows the comparison of absolute errors of test problem 2 for different values of k and M and Figure 2 shows the graph of exact and approximate values of test problem 2 for
Conclusion
In this article, an effective strategy is provided to solve NSIVIE. This technique reduces these equations to a system of nonlinear algebraic equations with unknown Bernoulli coefficients, by using Bernoulli wavelets, their operational matrix of integration and stochastic operational matrix of integration which are solved by Newton's method. Error analysis of the proposed method is given. Moreover, the results obtained are compared with the exact solution with two Volterra integral equations and two NSIVIE in order to show the difference between the nonlinear Volterra integral equations and NSIVIE, and these examples show that the method described is precise and accurate and the results are in good agreement with the exact solution. By this we can conclude that the predicted algorithm is well organized and efficient. | 1,969.8 | 2020-11-10T00:00:00.000 | [
"Mathematics"
] |
The Interdomain Linker of AAV-2 Rep68 Is an Integral Part of Its Oligomerization Domain: Role of a Conserved SF3 Helicase Residue in Oligomerization
The four Rep proteins of adeno-associated virus (AAV) orchestrate all aspects of its viral life cycle, including transcription regulation, DNA replication, virus assembly, and site-specific integration of the viral genome into the human chromosome 19. All Rep proteins share a central SF3 superfamily helicase domain. In other SF3 members this domain is sufficient to induce oligomerization. However, the helicase domain in AAV Rep proteins (i.e. Rep40/Rep52) as shown by its monomeric characteristic, is not able to mediate stable oligomerization. This observation led us to hypothesize the existence of an as yet undefined structural determinant that regulates Rep oligomerization. In this document, we described a detailed structural comparison between the helicase domains of AAV-2 Rep proteins and those of the other SF3 members. This analysis shows a major structural difference residing in the small oligomerization sub-domain (OD) of Rep helicase domain. In addition, secondary structure prediction of the linker connecting the helicase domain to the origin-binding domain (OBD) indicates the potential to form α-helices. We demonstrate that mutant Rep40 constructs containing different lengths of the linker are able to form dimers, and in the presence of ATP/ADP, larger oligomers. We further identified an aromatic linker residue (Y224) that is critical for oligomerization, establishing it as a conserved signature motif in SF3 helicases. Mutation of this residue critically affects oligomerization as well as completely abolishes the ability to produce infectious virus. Taken together, our data support a model where the linker residues preceding the helicase domain fold into an α-helix that becomes an integral part of the helicase domain and is critical for the oligomerization and function of Rep68/78 proteins through cooperative interaction with the OBD and helicase domains.
Introduction
The four adeno-associated virus (AAV) Rep proteins are generated from a single open reading frame by the transcriptional use of two different promoters (p5 and p19) and subsequent alternative splicing mechanisms [1,2,3]. These reactions produce proteins that share three functional domains: an origin binding domain (OBD), a SF3 helicase domain and a putative zinc-finger domain [4,5]. The combination of these domains imparts these proteins with striking multifunctionality. In particular, the larger proteins Rep78 and Rep68 function as initiators of DNA replication, transcriptional regulators, DNA helicases and as key factors in site-specific integration [6]. The smaller Rep proteins Rep40 and Rep52, play a critical role during packaging of viral DNA into preformed empty capsids, where they are thought to be part of the packaging motor complex [7,8,9]. Although in terms of domain architecture the AAV Rep proteins resemble other members of the SF3 protein family, the peculiar OBD with its additional nuclease activity and the complex character of their oligomeric properties, set them apart from other SF3 helicases such as simian virus 40 large T antigen (SV40-LTag) and papilloma virus E1 (PV-E1) proteins [10,11,12,13]. In both of these proteins, the minimal SF3 helicase domain assembles into a hexameric ring in a process that can be induced by the presence of ATP and/or single-stranded DNA [14,15]. In contrast, Rep40 containing only the helicase domain and Rep52 with an additional Zn-finger domain, appear to be monomeric [16,17]. This indicates that oligomerization of AAV Rep proteins requires the presence of both the OBD domain and the helicase domain. This combination imparts both Rep68 and Rep78 with a complex and dynamic oligomeric behavior in-vitro that is modulated in large part by the nature of the DNA substrate [18]. The monomeric behavior of both Rep40 and Rep52 is striking in that they appear to contain the required structural features that are present in other SF3 helicase members. The X-ray structures of both SV40-LTag and PV-E1 show that their helicase domains assemble as hexameric rings and that the oligomerization interface is bipartite [15,19]. One interface is formed by the interaction of neighbouring N-terminal oligomerization domains (OD). The second interface is formed by the interaction of the C-terminal AAA + domains and is further stabilized by the presence of nucleotides [11,15]. In order to understand the structural features that promote AAV Rep oligomerization, we pursued in this study a detailed structural comparison of SF3 helicases. We show that the OD domain in Rep40/52 has been hindered in its ability to oligomerize by the transcriptional use of the p19 promoter. This event generates proteins with a smaller OD domain as compared to other SF3 helicases. More importantly, we show that in the context of Rep68/78 the required oligomerization is supported by the interdomain linker which is directly involved in oligomerization interface and we provide evidence that the tyrosine residue preceding the start of Rep40/52 (Y224) is critical in the oligomerization and therefore activity of the large AAV Rep proteins. Taken together, our results support a model where oligomerization of Rep68/78 is mediated by a composite oligomerization interface formed by the OBD, helicase and linker domains, with the latter playing an essential role in the inducing the oligomerization process.
Results
The oligomerization domain (OD) of AAV Rep40 differs from the OD's of other hexameric SF3 helicases As a first step in our attempt to determine the structural features that promote oligomerization in AAV Rep proteins, we analyzed the oligomeric interface of SF3 family members SV40-LTag and PV-E1. As previously described, the helicase domain contains two subdomains: a N-terminal helical bundle of four a-helices known as the oligomerization domain (OD) and the C-terminal AAA + subdomain ( Figure 1A). In PV-E1 the oligomerization interface spans both subdomains forming two extended surfaces at opposite faces of the proteins. In the AAA + subdomain, one face comprises all the catalytic residues, including: the P-loop, its subsequent helix, the b-strands with the associated Walker B residues, sensor 1 motif, and one side of the b-hairpin ( Figure 1B). The neighboring subunit interacts through areas that are located in the a-helices ''behind'' the b-sheet and on the opposite side of the b-hairpin ( Figure 1B). Overall, about 20% of the solvent accessible area takes part in the interface and includes about 34% of all residues. In PV-E1, the OD domain consists of 68 residues forming a four helical bundle. The oligomeric interface comes from interaction of residues located in helices 1 and 4 in one monomer, with residues in helices 2, 3 and part of helix 4 in the other subunit ( Figure 1B). Most of the interface is hydrophobic with many tyrosine and isoleucine residues. Similar types of interactions are seen in the interface formed by the SV40-LTag OD domains. This domain is a lot bulkier, spanning 89 residues that form a five-helix bundle. The extra helix originates from an additional Zn-finger motif. Significantly, the OD of Rep40, on the other hand, has only 52 aminoacids and, thus, is significantly shorter than PV-E1 and SV40-LTag OD domains. The direct result of this difference is a decrease in the total accessible surface area by more than 1000 Å 2 . In addition, the packing of the helices is less compact, producing a more dynamic structure ( Figure 1C). We hypothesize that the smaller OD domain of AAV Rep proteins imparts these proteins unique oligomeric properties where the smaller Rep40/52 are mostly monomeric while Rep68/78 -with the additional OBD domain-form oligomers. However, the measurable ATPase activity in all Rep proteins, suggest that Rep40/52 should oligomerize in the presence of nucleotides [20].
AAV-2 Rep40 forms a transient dimer in the presence of nucleotides
To determine if the presence of nucleotides can induce oligomerization of Rep40 -containing the minimal helicase domain-, we carried out sedimentation velocity experiments in the presence and absence of nucleotides at different concentrations. The sedimentation velocity profiles offer a complete characterization of the number and type of oligomers in solution. The data were analyzed using the program sedfit [21,22]. Figure 2A shows plots of the c(s) distribution against the sedimentation coefficient (s) for two concentrations of Rep40 in the absence of nucleotides. A single peak whose s 20,w increases slightly with increasing concentrations is observed. The slight but significant increase in s and calculated molar mass is consistent with a weak and transient dimerization (for hydrodynamic reasons, s is expected to decrease with increasing concentrations of an ideal solute). The data where also fitted using the program sedphat to a monomer-dimer association were the process is in rapid exchange on the time scale of the centrifuge [22]. Table 1 shows that the dissociation constant in the absence of nucleotides is ,10 23 M, which is at the upper end of detection by sedimentation velocity. Similar distributions of Rep40 (at 36 mM) in the presence of either 5 mM ATP or ADP are shown in Figure 2B and 2C. Here an increase is observed in the width of these peaks if compared to those for Rep40 alone. This is a well-understood behavior for a associating system whose exchange kinetics are neither slow of fast on the time scale of the centrifuge, thus, broadening the c(s) distribution peak [23]. The presence of a small shoulder suggest that dimer formation is occurring here as well, although perhaps its rate of dissociation is slower than for Rep40 alone. The s-value of the shoulder is consistent with a transient Rep40 dimer that represents ,0.2% of the total amount of protein. The relatively low ATPase activity of Rep40 reported in the literature supports our model of transient dimerization promoted by the binding and/ or hydrolysis of ATP [20].
Addition of linker region to Rep40 constructs induces oligomerization
In order to assess whether the interdomain linker connecting the OBD domain and the helicase domains contains additional
Author Summary
Viruses have to optimize the limited size of their genomes in order to generate the proteins required for infection and replication. Several mechanisms are used to accomplish this including the use of multiple promoters and alternative splicing. These processes generate gene products with diverse functions through the combinatorial assembly of a small number of protein domains. The small genome of the adeno-associated virus has two major open reading frames that generate seven proteins, four non-structural Rep proteins and three capsid proteins. The non-structural Rep proteins share a motor domain that uses hydrolysis of ATP to generate the conformational changes that drive DNA replication, transcriptional regulation, site-specific integration and the packing of viral genome into capsids. These functions depend upon the oligomerization of Rep proteins on specific DNA sites through the cooperation of the N-terminal origin binding domain and the C-terminal helicase domain. We provide evidence that the linker that connects the two domains is an integral feature of the helicase domain and contains a conserved aromatic residue that is critical for oligomerization. This residue emerges to be a signature motif of SF3 helicases and is also present in a subset of bacterial Rep proteins that support rolling circle replication mechanism.
regions of distinct structure that may play a role in promoting oligomerization, we first carried out secondary structure prediction analysis to determined if the linker contains additional regions of structure. The results suggest that the region from residue 215 to 224 has the potential to form an a-helix ( Figure 3A). We hypothesized that this region could extend the first helix of the OD domain ( Figure 3A) and the ensuing increase in surface accessible area may be sufficient to drive oligomerization. To test this hypothesis, we designed a new Rep construct beginning at the start of the linker region and extending to aminoacid 536 (a truncated version of Rep68 without the OBD domain, Rep68D200), and performed sedimentation velocity and cross-linking studies in order to characterize its oligomerization properties. The sedimentation profile of Rep68DN200 shows the presence of two peaks, one corresponding to the monomeric species (,2.53S) and the other to a dimer (,3.71S). The amount of formed dimer increases at higher concentrations as expected from a monomer-dimer equilibrium system ( Figure 3B). Formation of dimers was also observed when we performed cross-linking experiments. Figure 3C shows that the amount of dimeric species has significantly increased in Rep68DN200 as compared to Rep40wt. We calculated the dimerization constants of Rep40wt and Rep68DN200 from a global fitting of the sedimentation velocity data to a monomer-dimer model (Table 1). In summary, we determined that the presence of the linker region increases the strength of dimerization by about 10-fold relative to that of Rep40.
Extension of the linker region to residue 215 defines the minimal length required to promote oligomerization Next, we sought to determine the minimal length of linker that is needed to promote oligomerization. We generated three additional constructs, named Rep68DN209, Rep68DN214 and Rep68DN219 and tested their ability to oligomerize ( Figure 4). Our results indicate that Rep68DN214 contains the minimal length of linker that is required to promote detectible oligomerization, although with the shorter construct Rep68DN219, a small shoulder is seen at higher concentration (data not shown). These results confirm that the linker region from 215 to 224 may fold into a a-helix, resulting in an increase of the surface accessible area of the OD domain that mediates oligomerization. This increase, however, is not sufficient to produce higher order oligomers.
ATP and ADP induce formation of higher order oligomers of the extended linker Rep protein constructs
In order to determine the contribution of ATP and ADP to the oligomerization of the extended linker Rep linker constructs, we performed sedimentation velocity studies in the presence of nucleotides. Our hypothesis was that if oligomerization reflects the functional state of these proteins, the addition of nucleotides should support and induce further oligomerization. Figure 5 shows that the presence of ATP and ADP induces the formation of higher order oligomers. Formation of dimeric species at this concentration can be seen with Rep68D214 as well as the longer constructs RepDN209 and RepDN200. In the later two, ADP produces two main populations sedimenting at ,3S and ,7S with additional intermediate oligomers. ATP on the other hand, seems to generate more stable species at ,7S. Again, these data show that the presence of the linker region induces oligomerization of the Rep constructs and that the addition of nucleotides, in particular ATP, induces formation of larger oligomers, possibly through the stabilization of the interface formed by the AAA + domains. This finding is in good agreement with the unique characteristics of the AAV Rep nucleotide binding pocket, which, based on its open conformation together with the presence of an arginine finger predicts the nucleotide contribution to oligomerization [24].
Linker substitution abolishes oligomerization of Rep68
To determine if the linker is critical for the oligomerization of Rep68, we replaced it with an unrelated sequence and examined its effect on oligomerization using sedimentation velocity. The only prerequisite for the substitute linker were a lack of structure and no impact on the native structures of the connected domains. We chose a sequence from the transcription factor Oct-1. This transcription factor has two DNA binding domains connected by a linker of 29 residues. The X-ray structure of this protein shows that the linker is unstructured and flexible. In addition, it has been used to connect different protein domains without affecting their properties [25,26]. We generated a Rep68 mutant protein (Rep68 octlink ), where residues 206 to 224 were replaced with 18 residues from the Oct-1 linker and tested its ability to oligomerize. The sedimentation profile of Rep68 typically shows two populations with sedimentation coefficients of ,3S and ,13S ( Figure 6A). We have determined that the 13S peak corresponds to a mixture of oligomeric rings (data not shown). Figure 6B shows that the replacement of the linker completely abolishes the oligomerization Figure 6C). These results show that replacement of the linker produces a Rep68 protein whose ability to oligomerize has been severely affected.
Presence of the linker region induces oligomerization of the OBD domain
The above findings indicate that the linker region plays a central role in the oligomerization of AAV Rep proteins. To confirm that the linker region has an intrinsic property to induce oligomerization, we generated a construct that spans the OBD domain and the linker region (OBD-linker residues 1-224) and measured its ability to oligomerize. We first analyzed the OBD domain (1-208) to determine any oligomerization up to concentrations of 1 mg/ml (43 mM). Our results show that while OBD is a monomer (Figure 7A), the OBD-linker protein construct displays formation of dimers at increasing protein concentrations ( Figure 7B). These results support the hypothesis that the linker region has an intrinsic property to induce oligomerization Linker residue Y224 is critical for oligomerization and represents a conserved feature in SF3 helicases We generated a model of the Rep68DN214 construct using the X-ray structure of Rep40 (residues 225-490) and 9 residues of the linker (215-224) that were added as a helical extension to the Nterminus. The model of the a-helix was generated using Robetta [27]. Figure 8A and 8B shows the structural alignment of the OD domain of the Rep68DN214 model with the OD domains of PV-E1 and SV40-LTag. The alignment shows that residue Y224 superimposes with aromatic residues F313 and W270 located at the beginning of helix 1 in the OD domains of PV-E1 and SV40-LTag respectively. Analysis of the structures of both proteins reveals that these aromatic residues play a critical role in forming and stabilizing the oligomerization interface. They pack against both the N-terminal end of helix 4 of the same subunit and the Cterminus end of helix 4 of the neighboring subunit. In order to test the hypothesis that Y224 plays an equivalent role in AAV Rep proteins, we mutated it to alanine and tested its effect on the oligomerization of Rep68DN200. Mutation to the smaller residue alanine should have a direct effect in the oligomerization of this protein because of the significant reduction of surface exposed area. Figure 8C shows the sedimentation profile of this mutant protein showing that it completely abolishes the formation of dimers. To confirm that residue Y224 plays an important role in the oligomerization of AAV Rep proteins, we generated a Rep68Y224A mutant and compared its ability to form oligomers with respect to wild type Rep68. Analysis of the Rep68Y224A mutant reveals that at low concentration the protein is mostly found as a monomer with a sedimentation coefficient of ,3S. At higher concentrations, we observed the appearance of multiple peaks that correspond to dimers, trimers and larger oligomers; nevertheless, the majority of the protein is present as a monomer. The presence of ATP induces a small degree of stability to the dimeric species at 5 mM and both the 5S and 11S species at 10 mM. However, the 13S complex observed with the wild type Rep68 is not formed and most of the protein is still found as a monomer ( Figure 8E). These results indicate that residue Y224 is critical for the oligomerization of AAV Rep proteins.
Residue Y224 is critical for AAV virus viability
To assess if the disruption of oligomerization observed with the Rep68Y224A mutant has any consequences on the AAV viral life cycle, we produced recombinant AAV2 particles expressing the GFP gene in presence of a helper virus containing the Y224A mutation in the Rep ORF. The cells were harvested and lysed, and the crude lysate (treated with an endonuclease) was used to infect Hela cells. Strikingly, the crude lysate from cells transfected with the mutant helper plasmid didn't contain any infectious rAAV2-GFP particles, as determined by FACS analysis of GFP positive cells (Figure 9). These results show that the residue Y224 of AAV Rep proteins, and the oligomeric properties it confers to these proteins, have a crucial role during the AAV life cycle.
Discussion
In this study we report that the interdomain linker present in the larger AAV Rep68/78 proteins is an integral part of their oligomerization interface. We showed that the linker region is in fact an extension of the OD domain of AAV Rep proteins. Our results have shown that Rep40 constructs containing either a complete or half linker have the ability to oligomerize. This effect is enhanced in presence of ATP or ADP. We hypothesized that the linker region from residues 215 to 224 forms a a-helix that is connected to the first a-helix of the SF3 helicase domain. Secondary structure prediction and modeling of the linker region supports this argument ( Figure 3A and 8B). Furthermore, we have identified a critical aromatic residue (Y224) located at the end of the linker region that is conserved in Rep proteins from all AAV serotypes. The bulky nature of this aromatic residue appears to be a conserved feature in SF3 helicases ( Figure 8A). Structural alignment of the OD domain of a Rep40 model with an extended helical linker and those of SV40-LTag and PV-E1 shows that residue Y224 aligns with equivalent aromatic residues Trp270 and Phe313 respectively ( Figure 8A, 8B). A detailed analysis of the oligomeric interface of these proteins shows that these aromatic residues have a dual role: they stabilize the hydrophobic core of the OD domain helical bundle, and are part of the oligomerization interface between neighboring subunits. Our results reveal the critical role of the OD domain in the formation of stable oligomers in SF3 helicases. The larger OD domains of SV40-Tag and PV-E1 proteins in cooperation with the AAA + motor domain generate a helicase domain that forms stable hexamers. Constructs of SV40-LTag and PV-E1 without the OD domain fail to oligomerize [14,19]. Another example that shows the fundamental role of the OD domain in oligomerization comes from the study of the evolutionary related proteins involved in rolling circle replication (RCR) of plasmids. The protein RepB from streptococcal RCR plasmid pMV158 is a hexameric protein that initiates replication of plasmid DNA and has a domain structure that resembles SF3 helicases but lacks the AAA + subdomain [28]. Its N-terminal OBD domain is structurally and functionally related to the OBD from AAV Rep proteins due to the presence of the HUH motif critical for DNA nicking. Its C-terminal domain only consists of a 4 helical bundle that is similar to the OD domains of SF3 helicases and is responsible for hexamerization. Structural alignment shows that RepB has an aromatic residue (Phe143) equivalent to residue Y224 in AAV Rep68/78. We hypothesize that the role of this residue has been conserved throughout evolution to serve as a modulator of oligomerization in SF3 helicases and related RCR proteins. The smaller AAV Rep proteins Rep40/52 with truncated OD domains are missing the Y224 residue and thus are not able to sustain a stable oligomerization interface and are mostly monomeric. Consequently, the stable oligomerization of AAV Rep proteins requires the cooperative interaction of the OBD domain, the linker and the helicase domain. In this context, the OD sub-domain, and in particular the aromatic residue at the C-terminus of linker, appear to be the triggering element required for the oligomerization of AAV Rep proteins.
The critical role of residue Y224 in the overall AAV-2 viral life cycle is illustrated by the complete abolishment of production of infectious particles from AAV-2 vector constructs produced in the context of Rep carrying the Y224A mutation ( Figure 9). This result prompts the question of which specific functions are affected by this mutation. We think that most of the biochemical activities of Rep68/78 will be affected due to the impairment in oligomerization. Remarkably, an earlier report by Walker et al. on the identification of residues necessary for site-specific endonuclease activity showed that a Y224 mutant was defective in AAV hairpin/DNA binding, trs endonuclease, DNA helicase and ATPase activity [29], suggesting that correct oligomerization of Rep proteins may be important in all of these functions.
In agreement with our results, a recent report has shown that the presence of the linker in an AAV5 Rep40 construct induces oligomerization in presence of DNA. However, the authors concluded that the linker effect is primarily due to its interaction with DNA [30]. As we demonstrated in this report, the oligomerization effect is an intrinsic property of the linker due to its critical role in the formation of an oligomerization interface as part of the OD domain. The presence of DNA induces further oligomerization as seen with all helicases [13]. However, it appears that the linker also plays an additional role in protein-DNA interaction that may be important during the assembly of Rep68/ 78 on DNA substrates such as the AAV origin of replication and AAVS1 integration site.
The use of alternative gene promoters is a common mechanism to generate protein diversity and flexibility in gene expression. At the same time it allows to obtain multiple functions from a limited number of genes, thus optimizing the size of the genome. It is clear that in the case of the Rep proteins from the AAV virus, nature has generated two sets of proteins that differ primarily in their ability to oligomerize. Rep proteins obtained from the AAV P 19 promoter generate Rep40 and Rep52 with truncated OD domains and are thus unable to oligomerize. Both proteins play a critical role during DNA packaging into capsids; however, the mechanism of action of monomeric Rep40/52 during packaging remains elusive. Rep proteins generated from the P 5 promoter, on the other hand, require the cooperative interaction of three different oligomeric interfaces produced by the OBD domain, the linker and the helicase domain. This feature potentially provides an additional dimension for the regulation of the diverse Rep activities when compared to the related proteins from SV40 and PV. We suggest that the cooperative interactions and the modulation of these interfaces -in particular in the presence of various specific DNA substrates -orchestrate the variety of functions performed by Rep68/Rep78 proteins and may thus represent a key to our understanding of the underlying mechanisms.
Finally, our report introduces the possibility of two distinct helicase modes for the biological functions supported by AAV Rep proteins. In the context of the large Rep proteins, a complete OD domain directs the formation of stable oligomers with a DNA unwinding mode likely to resemble that of the related viral proteins SV40-Tag and E1. The small Rep proteins, however, appear to utilize an incomplete OD domain that retains Rep40/52 in a monomeric state with formation of transitional dimeric complexes required for ATP hydrolysis. It is intriguing to speculate that this unique arrangement allows AAV to utilize two distinct motor activities with a single AAA + domain. As Rep40/52 have been demonstrated to be required for genome packaging it is feasible to address the question whether this process requires a Rep40/52-mediated dimeric DNA helicase activity by a mechanism that is as yet undiscovered or whether further oligomerization is induced by interaction with capsid proteins.
Cloning and mutagenesis of Rep expression constructs
All mutant proteins were generated using the pHisRep68/15b plasmid, which contains the AAV2 Rep68 ORF subcloned in vector PET-15b (Novagen). Site-directed mutagenesis for mutants Y224A was generated using the QuickChange mutagenesis kit (Stratagene). Rep constructs with different linker extensions were generated by PCR with primers designed to encompass the particular protein region. Primers included restriction enzyme sites NdeI and XhoI, and the sequence of the TEV protease site. The Rep68 protein used in these studies contained a Cys to Ser mutation that prevented aggregation but was functionally identical to the wild type protein (data not shown). The Rep68 octlink construct was generated by substitution of residues 206 to 224 of AAV2 Rep68 with the mouse Oct-1 linker residues 328-346 (GeneBank CAA49791) using the gene synthesis services from GeneScript. The sequences of all constructs were confirmed by DNA sequencing (GeneWiz).
Protein expression and purification
All proteins were expressed using the pET-15b vector, expressed in E. coli BL21(DE3) cells (Novagen), and purified as described before [18]. The final buffer contains (25 mM Tris-HCl [pH 8.0], 200 mM NaCl, and 2 mM TCEP). His6-PreScission Protease (PP) was expressed in BL21(DE3)-pLysS at 37uC for 3 h, in LB medium containing 1 mM IPTG. Cell pellets were lysed in Ni-Buffer A (20 mM Tris-HCl [pH 7.9 at 4uC], 500 mM NaCl, 5 mM Imidazole, 10% glycerol, 0.2% CHAPS, and 1 mM TCEP). After five 10-s cycles of sonication, the fusion protein was purified using a Ni-column -equilibrated in Ni-buffer A. Protein eluted was desalted using buffer A and a HiPrep TM 26/10 desalting column (GE Healthcare). His-PP tag was removed by PreScission protease treatment using 150 mg PP/mg His-PP-Rep68. After overnight incubation at 4uC, buffer was exchanged using the same desalting column and Ni-Buffer A. Subsequent Nicolumn chromatography using the buffer B (same as buffer A but with 1 M imidazole), was performed to remove the uncleaved fusion protein, and untagged Rep68 was eluted with 30 mM imidazole. Rep68 was finally purified by gel filtration chromatography using a HiLoad Superdex 200 16/60 column (GE Healthcare) and Size Exclusion buffer. N-terminus His6-tagged WT and mutant Rep68 proteins were concentrated to 10 mg/ml, flash-frozen in liquid N 2 , and kept at 280uC until use.
Cross-linking of Rep40
The cross-linking reactions for Rep40 and Rep68DN200 were made according to an adapted protocol from Packman and Perham [31]. The reaction mixture was in cross-linking buffer (25 mM HEPES, 200 mM of NaCl, pH 8.0) and protein concentration was 2 mg/ml. A 30 fold molar excess of 100 mM DMP (dimethyl pimelimidate dihydrochloride, MP Biomedicals, LLC) was added to the reaction and incubated 60 min at room temperature. The reaction was quenched by addition of 1 M Tris, pH 7.5 to a final concentration of 50 mM. The samples were analyzed in an 8% SDS-PAGE.
AAV Infectious particles assay
Hek 293T cells were triple transfected using polyethylenimine (PEI) with an AAV2 ITR-containing plasmid including the GFP gene, a helper plasmid expressing AAV2 Rep (wt or Y224A cloned from the pHisRep68Y224A/15b) and Cap, and a third construct containing the adenovirus helper functions (pXX6, University of North Carolina Vector Core Facility). The presence of the Y224A mutation was confirmed by sequencing (Eurofins). After 72 h, the cells were harvested and lysed in 150 mM NaCl, 50 mM Tris at pH 8.5, followed by three freeze -thaw cycles. The lysate was treated for 30 minutes at 37uC with 150 units/ml of benzonase endonuclease (Sigma). HeLa cells were infected with increasing amounts of crude lysate, and the percentage of GFP-positive cells was determined three days post-infection.
Analytical ultracentrifugation
Sedimentation velocity experiments were carried out using a Beckman Optima XL-I analytical ultracentrifuge (Beckman Coulter Inc.) equipped with a four and eight-position AN-60Ti rotor. Rep protein samples were loaded in the cells, using in all cases buffer used in the final purification step. Samples in double sector cells were centrifuged at 25,000 rpm for Rep68 proteins (Rep68 and Rep68Y224A). For Rep40 and linker constructs sedimentation was performed at 40,000 rpm. In all experiments, temperature was kept at 20uC. Sedimentation profiles were recorded using UV absorption (280 nm) and interference scanning optics. For the analysis of the results the program Sedfit was used to calculate sedimentation coefficient distribution profiles using the Lamm [21]. | 7,191.8 | 2012-06-01T00:00:00.000 | [
"Biology"
] |
Selective elimination of high constitutive activity or chemokine binding in the human herpesvirus 8 encoded seven transmembrane oncogene ORF74.
Open reading frame 74 (ORF74) encoded by human herpesvirus 8 is a highly constitutively active seven transmembrane (7TM) receptor stimulated by angiogenic chemokines, e.g. growth-related oncogene-alpha, and inhibited by angiostatic chemokines e.g. interferon-gamma-inducible protein. Transgenic mice expressing ORF74 under control of the CD2 promoter develop highly vascularized Kaposi's sarcoma-like tumors. Through targeted mutagenesis we here create three distinct phenotypes of ORF74: a receptor with normal, high constitutive signaling through the phospholipase C pathway but deprived of binding and action of chemokines obtained through deletion of 22 amino acids from the N-terminal extension; an ORF74 with high constitutive activity but with selective elimination of stimulatory regulation by angiogenic chemokines obtained through substitution of basic residues at the extracellular ends of TM-V or TM-VI; and an ORF74 lacking constitutive activity but with preserved ability to be stimulated by agonist chemokines obtained through introduction of an Asp residue on the hydrophobic, presumed membrane-exposed face of TM-II. It is concluded that careful molecular dissection can selectively eliminate either agonist or inverse agonist modulation as well as high constitutive activity of the virally encoded oncogene ORF74 and that these mutant forms presumably can be used in transgenic animals to identify the molecular mechanism of its transforming activity.
Open reading frame 74 (ORF74) encoded by human herpesvirus 8 is a highly constitutively active seven transmembrane (7TM) receptor stimulated by angiogenic chemokines, e.g. growth-related oncogene-␣, and inhibited by angiostatic chemokines e.g. interferon-␥inducible protein. Transgenic mice expressing ORF74 under control of the CD2 promoter develop highly vascularized Kaposi's sarcoma-like tumors. Through targeted mutagenesis we here create three distinct phenotypes of ORF74: a receptor with normal, high constitutive signaling through the phospholipase C pathway but deprived of binding and action of chemokines obtained through deletion of 22 amino acids from the N-terminal extension; an ORF74 with high constitutive activity but with selective elimination of stimulatory regulation by angiogenic chemokines obtained through substitution of basic residues at the extracellular ends of TM-V or TM-VI; and an ORF74 lacking constitutive activity but with preserved ability to be stimulated by agonist chemokines obtained through introduction of an Asp residue on the hydrophobic, presumed membrane-exposed face of TM-II. It is concluded that careful molecular dissection can selectively eliminate either agonist or inverse agonist modulation as well as high constitutive activity of the virally encoded oncogene ORF74 and that these mutant forms presumably can be used in transgenic animals to identify the molecular mechanism of its transforming activity.
Chemokines are chemotactic cytokines that regulate immunological processes through interaction with 7TM 1 G-proteincoupled receptors expressed mainly on leukocytes. For example, during inflammation chemokines secure appropriate cell recruitment. Chemokines are also involved in tissue householding processes such as angiogenesis (2,3). Genes coding for homologs of mammalian chemokine and chemokine receptors have been found in a number of herpes-and poxviruses (4 -7). These molecules have presumably been obtained by the virus through an ancient act of molecular piracy and are structurally optimized for a particular pharmacological phenotype of benefit to the virus. The proposed functional properties of virally encoded chemokines are multiple. Some act as chemokine antagonists, for example vMIP-II from HHV-8 (8,9) (HHV-8 is also known as Kaposi's sarcoma-associated herpesvirus) and MC148 from Molluscum contagiosum (9,10), and some act as agonists, for example UL146 from human cytomegalovirus (11) and vMIP-II (12).
In contrast to the viral chemokines, the function of the virally encoded chemokine receptors is not that clear yet. In general these receptors are not required for viral replication in vitro (13). However gene deletion experiments in both mouse and rat cytomegalovirus have shown that, for example the UL33 receptor is essential for targeting and/or replication of the virus in salivary glands (14). Several ␥2-herpesviruses including HHV-8 (15), herpesvirus Saimirii (16), equine herpesvirus 2 (17), and the murine ␥-herpesvirus 68 (18) encode homolog versions of a CXC chemokine receptor with highest homology to CXCR2 among mammalian chemokine receptors. In HHV-8 the receptor is known as ORF74, but it is also frequently referred to as Kaposi's sarcoma-associated herpesvirus-G-protein-coupled receptor (Fig. 1). A prominent pharmacologic feature of ORF74 from HHV-8 is its high degree of constitutive, ligand-independent signaling through the phospholipase C (19,20) as well as the c-Jun N-terminal kinase and the p38 mitogen-activated protein kinase pathways (21). Furthermore ORF74 has angiogenic properties and its signaling is closely coupled to production and secretion of vascular endothelial growth factor and to cellular transformation and formation of highly vascularized tumors in SCID mice (21). In humans, ORF74 is expressed in Kaposi's sarcoma lesions (15) and body cavity-associated lymphomas (22) and has been proposed to be causatively involved in these malignancies (21). Recently, transgenic mice expressing the ORF74 receptor under control of the CD2 promoter have been reported to develop highly vascularized Kaposi's sarcoma-like tumors (1), thus supporting the oncogenic potential.
ORF74 from HHV-8 binds various human CXC chemokines (19,20). The properties of these chemokines on ORF74 signaling cover the whole pharmacological spectrum: GRO␣, GRO and GRO␥ are agonists, IP-10, stromal cell-derived factor-1␣, granulocyte colony stimulating factor-2, and vMIP-II are inverse agonists, whereas the inflammatory CXC chemokines IL-8, neutrophil-activating peptide-2, and epithelial cell-derived activating peptide-78 are neutral ligands, which despite high affinity binding do not affect signaling of the receptor (20). Interestingly, the chemokines, which act as agonists on ORF74, are normally angiogenic chemokines in the host, whereas the chemokines that act as inverse agonists are normally angiosta-tic or angiomodulatory messengers (23). Constitutive activity has been described in many 7TM receptors, for instance the adrenergic (24 -27), the angiotensin and bradykinin (28,29), and the glucagon receptors (30).
The present study is aimed at trying to characterize, in the ORF74 receptor from HHV-8, the structural basis for its broad spectrum binding profile of chemokines as well as the structural basis for its high constitutive activity based on knowledge on ligand binding and signaling properties in other 7TM receptors. Thereby, we created specific ORF74 mutants in which certain elements of the pharmacological repertoire have been selectively eliminated; to exploit these mutants in future transgenic studies to identify the molecular mechanism that HHV-8 has exploited in ORF74 to precipitate the different clinical features of Kaposi's sarcoma and other HHV-8-related malignancies.
Iodination of IP-10 -The Bolton-Hunter reagent was dried by a gentle stream of nitrogen for 30 -60 min. 5-10 g of IP-10 was incubated on ice with 1.5 mCi of Bolton-Hunter reagent in a total volume of 50 l of 0.1 mM Borat buffer, pH 8.5, for 1 h, and the reaction was terminated by the addition of 0.5 ml of H 2 O supplemented with 0.1% v/v trifluoroacetic acid. The iodinated chemokines were purified by reverse phase high pressure liquid chromatography.
Construction of Mutant Receptors-The cDNA encoding the ORF74 receptor was cloned into the eukaryotic expression vector pTEJ-8 (31). Mutations were constructed by polymerase chain reaction using either the overlap extension method (32) for mutations located internal in the receptor or flanking-extended primers for the N-terminal truncated mutation. The polymerase chain reaction products were digested by the appropriate restriction endonucleases, purified, and cloned in the pTEJ8-ORF74. All experiments were performed using the pfu-polymerase, and mutations were verified by restriction endonuclease mapping followed by DNA sequence analysis using the Thermo Sequenase fluorescent labeled primer cycle sequencing kit with 7-deaza-dGTP on an Alfexpress DNA sequencer according to the manufacturer's instructions (Amersham Pharmacia Biotech).
Transfections and Tissue Culture-COS-7 cells were grown at 10% CO 2 and 37°C in Dulbecco's modified Eagle's medium 1885 supplemented with 10% fetal calf serum, 2 mM glutamine, and 0.01 mg/ml gentamicin. Transfection of the COS-7 cells was performed by the calcium phosphate precipitation method (33).
Binding Experiments-COS-7 cells were transferred to culture plates one day after transfection. The number of cells seeded/well was determined by the apparent expression efficiency of the individual clones and was aimed at obtaining 5-10% specific binding of the added radioactive ligand. 3 ϫ 10 5 and 5 ϫ 10 5 cells/well were used for mutations with eliminated specific binding. Two days after transfection, cells were assayed by competition binding for 3 h at 4°C using 12 pM 125 I-IL-8, 125 I-GRO␣, or 125 I-IP-10 plus unlabeled ligand in 0.5 ml of 50 mM Hepes buffer, pH 7.4, supplemented with 1 mM CaCl 2 , 5 mM MgCl 2 , and 0.5% (w/v) bovine serum albumin. After incubation cells were washed quickly four times in 4°C binding buffer supplemented with 0.5 M NaCl. Nonspecific binding was determined as the binding in the presence of 0.1 M unlabeled chemokine. Determinations were made in duplicates.
Phosphatidylinositol Assay (PI Turnover)-One day after transfection, COS-7 cells (5 ϫ 10 5 cells/well) were incubated for 24 h with 5 Ci of myo-[ 3 H]inositol in 1 ml/well inositol-free Dulbecco's 1885 medium supplemented with 10% fetal calf serum, 2 mM glutamine, and 0.01 mg/ml gentamicin. Cells were washed twice in 20 mM Hepes, pH 7.4, supplemented with 140 mM NaCl, 5 mM KCl, 1 mM MgSO 4 , 1 mM CaCl 2 , 10 mM glucose, and 0.05% (w/v) bovine serum albumin and were incubated in 0.8 ml of buffer supplemented with 10 mM LiCl at 37°C for 90 min in the presence of various concentrations of chemokine. Cells were extracted with 10% ice-cold perchloric acid followed by incubation on ice for 30 min. The resulting supernatant was neutralized with KOH in Hepes buffer, and the generated [ 3 H]inositol phosphates were purified on an AG 1-X8 anion-exchange resin (34). Determinations were made in duplicates.
Calculations-IC 50 and EC 50 values were determined by nonlinear regression, and B max values were calculated using the GraphPad-Prism 2 software (GraphPad Software, San Diego).
RESULTS
The N-terminal region of the ORF74 receptor is characterized by the occurrence of many acidic residues, whereas multiple basic residues are found at the extracellular ends of TM-V and TM-VI and in extracellular loops 2 and 3 as shown in Fig. 1. These two regions were initially targeted for mutagenesis to try to selectively alter the ligand binding without affecting signaling of the virally encoded receptor.
N-terminal Truncation of ORF74, a Phenotype with Eliminated Chemokine Binding and Activity but Preserved High Constitutive Activity-Gene dosage experiments showed that deletion of the N-terminal 22 amino acids, including 7 acidic residues (⌬22-N-terminal), did not affect the basal signaling activity of the receptor, i.e. as determined by PI turnover ( Fig. 2A). However, neither the agonist GRO␣ nor the inverse agonist IP-10 could affect the high constitutive signaling activity of the mutated receptor (Fig. 2C). A whole panel of human CXC chemokines were tested on the ⌬22-N-terminal mutant form of ORF74, but none of them were found to have any influence on its signaling (Fig. 2F), although several of these chemokines act either as agonists or inverse agonists on the wild type receptor (20). The three radioligands, 125 I-GRO␣, 125 I-IP-10, and 125 I-IL-8, which all bind with high affinity to the wild type receptor did not show any binding to the ⌬22-N-terminal mutation (Table I).
An ORF74 Phenotypes with Preserved High Constitutive Signaling and Preserved Inverse Agonist Activity but with Impaired Agonist Activity-Based on the observation that IL-8 binding to CXCR2 is highly dependent on the presence of two arginine residues located at the extracellular end of TM-V (35) and that they are among the few residues shared between CXCR2 and ORF74, we substituted these residues as well as two arginine residues located in the corresponding region at the extracellular end of the neighboring TM-VI in ORF74 (Fig. 1).
The two arginine-substituted ORF74 mutants, like the wild type receptor, displayed high constitutive signaling as demonstrated both by gene-dosing experiments and by the suppressive effect of the inverse agonist, IP-10 ( Fig. 2, A, D, and E). However, GRO␣, which acts as an agonist on the wild type ORF74, did not stimulate signaling detectably above the high basal level in the arginine-substituted ORF74 mutants (Fig. 2, D and E). This is surprising, because GRO␣ did bind with high, normal affinity, albeit with a diminished B max value, to the mutated receptors (4.5 and 8.3 fmol/10 5 cells for the TM-V mutant (R208H/R212H)-ORF74 and the TM-VI mutant (R278A/R279A)-ORF74, respectively, versus 44 fmol/10 5 cells for the wild type receptor, Table I). The binding of the neutral ligand IL-8 was totally eliminated in (R208H/R212H)-ORF74, whereas in (R278A/R279A)-ORF74, IL-8 binding was not affected in respect of affinity (K d ϭ 0.88 nM versus 1.5 nM for wild type receptor) but was severely decreased in respect of B max (2.5 fmol/10 5 cells as compared with 42 fmol/10 5 cells for the wild type ORF74) ( Table 1). In contrast, for the inverse agonist, IP-10-increased B max values were observed in both of the arginine-substituted ORF74 mutants (46 and 170 fmol/10 5 cells for (R208H/R212H]-ORF74 and [R278A/R279A)-ORF74, respec-tively, versus 28 fmol/10 5 cells for the wild type receptor) with preserved high affinity IP-10 binding (Table I).
An ORF74 Phenotype with Eliminated Constitutive Activity but Preserved Binding and Activity of Agonist Chemokine-A number of residues in the transmembrane segments have been implicated as being important for the signal transduction process in 7TM receptors in general. As shown in Fig. 1, many of these residues have been mutated by the virus in ORF74. In an attempt to localize the structural basis for the high constitutive activity of the virally encoded receptor, we re-introduced "normal" 7TM residues at a number of places in the transmembrane segments of ORF74.
Most of the substitutions in the transmembrane segments of ORF74 did not affect the receptor neither in respect of ligand binding nor in respect of constitutive signaling. Thus, in gene dosage experiments the (Y128S)-, (V142D)-, (N92D/V142D)-, and (V310N)-ORF74 mutants all had similar basal activity as compared with wild type ORF74 (Fig. 3A). These mutants were also affected by chemokine ligands in a similar manner as the wild type receptor was as they were unaffected by IL-8, stimulated by GRO␣, and inhibited by IP-10 (Fig. 3B). One of the mutants, (Y128S)-ORF74, did show a slightly higher constitutive activity and was affected less by both GRO␣ and IP-10 than the wild type receptor. The competition binding analysis for these mutations also gave results rather similar to those observed in the wild type receptor, although the expression level of (Y128S)-, (V142D)-, and (N92D/V142D)-ORF74 was lower for all tested radioactive ligands (Table I).
Further mutational analysis was performed in TM-II. This segment normally holds AspII:10, which is highly important for signaling in 7TMs in general. Asn 92 in ORF74 was the most obviously candidate for a mutated AspII:10. However, "reintroduction" of an Asp at this position either alone or in combina-tion with an Asp in position III:25 in the "DRY" motif did not have a major effect on either ligand binding (Table I) or receptor signaling (Fig. 3 and 4). We then decided to try to reintroduce an Asp residue at other positions in TM-II around Asn 92 . Substitution of the other polar residue in this region, Ser 93 , with an Asp had as little effect on receptor signaling and ligand binding as the substitution of Asn 92 (Fig. 4A).
Surprisingly, introduction of an Asp residue for either Leu 91 or Leu 94 , both of which would be expected to be facing more or less into the lipid bilayer ( Fig. 1), eliminated the high constitutive activity of the ORF74 receptor while preserving its ability to be stimulated by GRO␣ (Fig. 4). Because of the low basal activity, the relative stimulatory effect of GRO␣ was even larger, ϳ3-fold, in the (L91D)-and (L94D)-ORF74 constructs as compared with the 2-fold stimulation found in the wild type receptor. Also because of the low basal activity, which was similar to that observed in cells transfected with the empty expression vector, IP-10 was unable to show any inverse agonistic activity in these two mutants (Fig. 4D).
The expression level for the four different Asp mutations in TM-II was relatively similar, between 3.6 and 12 fmol/10 5 cells, but less than the expression level of the wild type receptor as determined by the B max values using the neutral ligand 125 I-IL-8 (Table I). Similarly, the B max values for 125 I-IP-10 were comparable in this group of mutants, between 3.2 and 16 fmol/ 10 5 cells (Table I). In contrast, the B max values for 125 I-GRO␣ were very low, 0.28 and 0.74 fmol/10 5 cells, for the two mutants in which the constitutive activity had been silenced, (L91D)and (L94D)-ORF74 as compared with the B max for GRO␣ in the (N92D)-and (S93D)-ORF74 and the wild type ORF74, 13, 14, and 44 fmol/10 5 cells (Table I). Nevertheless, despite the apparent low B max values, GRO␣ was able to efficiently stimulate signaling of the (L91D)-and (L94D)-ORF74 constructs.
DISCUSSION
In the present study three distinct phenotypes of ORF74 are generated through targeted mutagenesis directed toward either presumed ligand binding epitopes in the extracellular domains or toward the seven helical bundle, which is responsible for the signal transduction: 1) a receptor with eliminated ligand binding but with maintained high constitutive activity; 2) a receptor with selective elimination of agonist action, GRO␣, but with preserved inverse agonist action, IP-10, and also with preserved high constitutive activity; and 3) a receptor with eliminated constitutive activity but with preserved ligand binding and importantly with maintained ability to be stimulated by agonists.
General and Selective Elimination of Chemokine Binding and Action in ORF74 -The elimination of all chemokine binding through truncation of the N terminus of the receptor is in agreement with the observation that peptide and protein ligands of the 7TM receptors in general often have important interactions with the extracellular segments (36). Especially in chemokine receptors, ligand-receptor interactions have often been located to their N-terminal segment (37)(38)(39). Most significantly, Handel and co-workers (40) have recently by NMR demonstrated the precise molecular binding mode of a peptide from the N-terminal segment of the CCR2 receptor to a structurally complementary groove on the main agonist for this receptor, MCP-1 (monocyte chemotactic protein-1). In the case of ORF74, Gershengorn and co-workers (41) recently found in analogy with the observation in the present study that truncation of the N terminus eliminated agonist binding. Importantly, in both studies it was found that the high constitutive activity of the N-terminally truncated ORF74 receptor was similar to that of the wild type receptor (Fig. 2).
In an attempt to affect the binding and action of various types of chemokine ligands selectively, we focused on the extracellular ends of TM-V and -VI, because these epitopes are known often to be involved in both agonist and antagonist binding in 7TM receptors (36). Specifically in CXCR1 and -2, which are the endogenous receptors with ligand binding profiles most similar to that of ORF74, two Arg residues are located at positions V:01 and V:05 facing the main ligand binding crevice and are known to be crucially involved in the binding of IL-8 (35). These residues are conserved in ORF74, and we find here that substitution of the two Args with His residues in ORF74 also eliminated binding of IL-8, which is a neutral ligand in this virally encoded receptor. However, from a functional point of view this double mutation was even more interesting than expected, because it surprisingly eliminated the action of the agonist chemokine GRO␣ without affecting the binding or action of the inverse agonist chemokine, IP-10 ( Fig. 2). GRO␣ still bound to the mutated receptor, albeit with a 5-fold reduced affinity and a 10-fold reduced B max . The inability of the R208H/R212H mutation to be stimulated by GRO␣ cannot be explained simply by the low B max for the agonist, because, for example the L91D and L94D mutants display even lower B max values for GRO␣ both in total numbers and relative to the B max values for the inverse agonist IP-10 (Table I). Instead, an explanation could be that the double mutation eliminates a ligand-receptor interaction, which although being essential for the binding of the neutral ligand, IL-8, is only of limited importance for the binding of the agonist GRO␣ but is an interaction that is essential for the function of GRO␣ as an agonist. Importantly, the mutated receptor signals with high constitutive activity and this activity can be blocked normally by the inverse agonist peptide IP-10. Thus this mutation provides a unique tool to determine the relative importance of agonist versus inverse agonist regulation of ORF74 activity for the angiogenic property of the virally encoded receptor to be determined, for example in transgenic animals (1).
"Normalization" of the Basal Signaling of ORF74 without Affecting Ligand Binding-The virally encoded receptor differs structurally at many positions from the classical 7TM pattern of conserved residues. Based on these differences, a number of substitutions were made in the transmembrane helical bundle to normalize the receptor structurally and thereby try to nor- malize the high constitutive activity of ORF74. For example, at the intracellular end of TM-III in most 7TM receptors is found the tri-peptide sequence Asp-Arg-Tyr (DRY) of which the Asp and Arg are highly conserved and crucially involved in receptor G-protein interaction and signaling (26,27). In ORF74 the Asp in the DRY sequence is substituted with a Val. In the middle of TM-III of ORF74 a large aromatic side chain is located at position III:11 (Tyr 128 ), which in the angiotensin AT1 receptor (28) and in the bradykinin receptor (29) creates a receptor phenotype dominated by high constitutive activity. In TM-VII the highly conserved Asn at position VII:16, which normally is believed to be involved in creating an important interhelical hydrogen bond network, is in ORF74 substituted with a Val lacking the ability to form hydrogen bonds. In TM-II the functionally highly important Asp residue is substituted with either an Asn or a Ser residue depending on how the sequence alignment is performed. However, normalization of ORF74 at these positions did not affect the high constitutive signaling of the receptor (Fig. 4). This was most surprising in the case of the DRY sequence, because substitution of the Asp in several cases, one of which is CXCR2, has created highly constitutively active receptors (26,27,42). In the CXCR2 study, it was even argued that this substitution probably was the reason for the high constitutive activity of ORF74 (42); however, the mutation performed here in the ORF74 could not confirm that notion. Thus, the rational approach to normalizing ORF74 signaling was not successful in our hands.
Nevertheless, introduction of an Asp residue at either position 91 or position 94 in TM-II surprisingly created the desired phenotype, i.e. low constitutive signaling activity combined with the ability to be stimulated by agonists. Because of the poorly conserved primary structure around TM-II, it was unclear how the sequence of ORF74 was best aligned with the consensus 7TM sequences, which basically only dictates the presence of an Asp residue located at position II:10 in an otherwise rather hydrophobic segment. Most likely it is either the Asn 92 or perhaps the Ser 93 residue that corresponds to the TM-II Asp found in almost all 7TM receptors. However, be-cause the introduction of an Asp at these positions had almost no effect on the basal signaling of ORF74, the neighboring positions were also probed and with a surprisingly positive result (Fig. 1). The L91D and L94D substitutions abolished the constitutive activity, without influencing the surface expression of the receptor more than introduction of Asps at the neighboring positions 92 and 93 did; the latter two mutations were displayed as high basal activity as observed in the wild type receptor. From the helical wheel diagram of TM-II, it is predicted that Leu 91 and Leu 94 are both located on a hydrophobic, presumed membrane-exposed face of this helix (Fig. 1). It is unclear what the structural basis for the effect of these substitutions is. It could be envisioned that introduction of the polar, potentially charged Asp residue in the middle of the hydrophobic face would destabilize the receptor structure. However, how this could lead to diminished basal signaling but preserved responsiveness to the agonist is unclear.
Exploiting ORF74 Mutants to Characterize Relationship between HHV-8, Kaposi's Sarcoma, and Angiogenesis-It has been suggested that ORF74 through its ligand-independent high constitutive activity should be causatively involved in the development of Kaposi's sarcoma as well as certain types of B-cell lymphomas (19,21). Recently it was elegantly shown by Lira and co-workers (1) that transgenic mice expressing the wild type ORF74 receptor under control of the CD2 promoter develop Kaposi's sarcoma-like lesions. This obviously strongly supports a pathogenic connection between ORF74 from HHV-8 and Kaposi's sarcoma. However, it is still unclear whether it is merely the constitutive activity of the virus-encoded oncogene as such that causes the lesions or whether the carefully evolved control of this activity by angiogenic as well as angiostatic or modulator chemokines is involved in the development of these specially highly vascularized tumors. It should also be noted that Kaposi's sarcoma predominantly develops in immune compromised individuals and that the function of ORF74 in the life circle of HHV-8 may be more subtle in the normal HHV-8infected individuals. Nevertheless, expression of the various ORF74 mutants with and without high constitutive activity as well as with and without chemokine or selective angiogenic chemokine regulation in transgenic animals should be able to clarify the molecular mechanism exploited by HHV-8 in cell transformation and angiogenesis. FIG. 4. PI turnover in TM-II located mutants. Biological activity measured as PI turnover in whole COS-7 cells transiently expressing the ORF74 constructs as described under "Experimental Procedures." A, basal activity, measured as cpm/90 min/5 ϫ 10 5 cells in a gene dosage experiment, using 5, 10, 20, and 40 g of DNA of the following constructs given from the left side: empty expressing vector, pTEJ8 (white bars), ORF74 wild type (black bars); TM-II mutants, (L91D)-(green bars), (N92D)-(light gray bars), (S93D)-(dark gray bars), and (L94D)-ORF74 (green bars). B and C, gene dosage experiments with total activity of unstimulated receptors (filled circles), receptors stimulated with IL-8, 10 Ϫ7 M (unfilled triangles) and GRO␣, 10 Ϫ8 M (unfilled circles), and receptors inhibited with IP-10, 10 Ϫ8 M (unfilled squares) in the (L91D)-(B) and in the (L94D)-ORF74 (C). D, influence on the basal activity (first bar) of IL-8, 10 Ϫ7 M (second bar), GRO␣, 10 Ϫ8 M (third bar), and IP-10, 10 Ϫ8 M (fourth bar) for mutants presented in A, with the same color code as presented in A, given in percent of basal activity, where the basal activity corresponding to 40 g of each mutation equals 100%. | 6,179.6 | 2000-08-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Simulation Study of Radio Frequency Safety and the Optimal Size of a Single-Channel Surface Radio Frequency Coil for Mice at 9.4 T Magnetic Resonance Imaging
The optimized size of a single-channel surface radio frequency (RF) coil for mouse body images in a 9.4 T magnetic resonance imaging (MRI) system was determined via electromagnetic-field analysis of the signal depth according to the size of a single-channel coil. The single-channel surface RF coils used in electromagnetic field simulations were configured to operate in transmission/reception mode at a frequency of 9.4 T–400 MHz. Computational analysis using the finite-difference time-domain method was used to assess the single-channel surface RF coil by comparing single-channel surface RF coils of varying sizes in terms of |B1|-, |B1+|-, |B1−|- and |E|-field distribution. RF safety for the prevention of burn injuries to small animals was assessed using an analysis of the specific absorption rate. A single-channel surface RF coil with a 20 mm diameter provided optimal B1-field distribution and RF safety, thus confirming that single-channel surface RF coils with ≥25 mm diameter could not provide typical B1-field distribution. A single-channel surface RF coil with a 20 mm diameter for mouse body imaging at 9.4 T MRI was recommended to preserve the characteristics of single-channel surface RF coils, and ensured that RF signals were applied correctly to the target point within RF safety guidelines.
Introduction
Recently, preclinical researches of human diseases have been frequently studied by using small animal models of artificially generated human diseases such as tumors and neurodegeneration. The use of mice or rats enables significant cost savings, particularly for studies using expensive chemicals such as contrast mediums, even when factoring in the cost of managing animals. Therefore, the rodent has become a key animal in the development of model diseases [1][2][3][4][5][6][7][8][9][10][11]. Magnetic resonance imaging (MRI) is particularly suitable for animal model investigation because it provides functional, anatomical, and/or physiological information without affecting animal integrity. Furthermore, due to the high image quality produced by ultra-high field (UHF) MRI scanners, they have been used in a wide range of preclinical applications. As interest grows in the use of small rodents such as mice and rats in animal models for conducting researches on human diseases, there has emerged a new trend of utilizing high-resolution images produced with preclinical MRI scanners at UHF strengths (≥7.0 T) [12,13]. For this reason, preclinical researches in UHF MRI have been actively underway, and the importance of high-performance radiofrequency (RF) coils has emerged to obtain high-resolution anatomical images in preclinical MRI systems for small animals [14][15][16][17][18][19]. The researches to improve the performance of RF coils have been conducted in various ways in both preclinical MRI and clinical MRI fields, of which studies using additional structures (such as wireless element and HPM) [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35], transmission line based RF coil to allow it to be independent of the length of the RF coil [36][37][38][39][40][41], and researches on the dedicated shapes of RF coils themselves have mainly focused [15,[42][43][44][45][46][47][48]. In addition, research on volume coils such as birdcage coils at various magnetic field strengths [49] and the development of new RF coil techniques have been actively conducted [50].
Especially in preclinical magnetic resonance (MR) experiments, MR images are obtained by controlling animal movements with anesthesia. Preclinical studies are mainly conducted using single-channel surface RF coils, thus making it easy to install and capable of high signal sensitivity images that are close to the target; however, preclinical MR images must be acquired quickly before the animal awakens from anesthesia. Due to these limitations, single-channel surface RF coils have been designed for operation primarily in transmit/receive (Tx/Rx) configurations [24,51,52]. However, preclinical experiments have been conducted without considering the optimal size of RF coils depending on the main magnetic field strength and type of animal models. Therefore, it is necessary to configure a single-channel RF coil that provides sufficient signal depth for the size of the animal used in the preclinical experiment, as well as the distance to the target point for image acquisition.
In addition, preclinical MRI studies should be conducted in adherence to ethical guidelines for animal use [53][54][55][56][57][58][59]. In terms of animal ethics and morality of animal use in human society, this paper aimed to improve animal ethics by securing the minimum RF safety of experimental animals sacrificed for humans in MRI preclinical experiments of mice. In MRI experiments, the RF safety guideline that is relevant to the selection of RF coils is the risk of burn injuries, which may occur when the RF coil system is not configured properly for the animal models; it can be mitigated by considering the specific absorption rate (SAR) [60][61][62][63][64][65]. Although RF safety is generally considered when conducting clinical MRI studies, most preclinical MRI studies have relatively neglected the consideration of RF safety due to a lack of established ethical guidelines for the SAR in small animal models. Nevertheless, the physiological effects of tissue heating due to heavy RF deposition in small animals could be a serious confounding factor in preclinical MRI studies and should not be ignored. Because it may be difficult to obtain accurate MR images without considering the SAR, tissue changes caused by tissue heating may not protect the rights of animals. In MRI studies, most burn injuries occur due to SAR problems in RF coils, and this risk increases as magnetic field strength increases [66][67][68].
In this work, we proposed the optimal size of a single-channel surface RF coil that could provide sufficient signal depth when acquiring mouse body images using preclinical MRI at 9.4 T. To propose the optimal size and RF safety of a single-channel surface RF coil, electromagnetic field (EM-field) analysis was performed using finite-difference timedomain (FDTD) methods by adjusting the diameter of the single-channel coil from 30 mm to 10 mm with 5 mm intervals. The single-channel surface RF coils used in the EM-field simulation were 10 mm, 15 mm, 20 mm, 25 mm, and 30 mm in diameter. Three types of numerical phantoms for EM-field simulation were used: oil phantom, water phantom, and mouse phantom. The oil phantom and water phantom were used to compare the relative changes in signal depth under various dielectric properties to the size of a single-channel surface RF coil, whereas the mouse phantom was used to assume an actual MR experiment.
Materials and Methods
To evaluate the optimal size of single-channel surface RF coils for preclinical MRI, EM-field simulations were performed using Sim4Life™ v4.4 (Zurich MedTech AG, Zürich, Switzerland) commercial software, which is widely used in numerical calculations using the FDTD method based on Yee cells [69]. EM-field analysis was validated in terms of the signal depth of |B 1 |-field and SAR distribution due to the |E|-field concentration, depending on the size of the single-channel surface RF coil. In Figure 1, for EM-field simulation, the single-channel surface RF coils were constructed in a square shape using perfect electric conductor material. The diameter of single-channel surface RF coils was set at a 5 mm interval from 10 mm to 30 mm (1 mm, 15 mm, 20 mm, 25 mm, and 30 mm). The single-channel surface RF coil for the Tx/Rx mode had four ports as a voltage source of 1 V and the geometrical phase (0 • , 90 • , 180 • , and 270 • ). The EM-field calculation was performed with a target frequency of 400 MHz for the 9.4 T MRI system. The target frequency (ω) was defined as the gyromagnetic ratio (γ) and main magnetic field strength (B 0 ) as following: using the FDTD method based on Yee cells [69]. EM-field analysis was validated in term of the signal depth of |B1|-field and SAR distribution due to the |E|-field concentratio depending on the size of the single-channel surface RF coil. In Figure 1, for EM-field simulation, the single-channel surface RF coils were co structed in a square shape using perfect electric conductor material. The diameter of si gle-channel surface RF coils was set at a 5 mm interval from 10 mm to 30 mm (10 mm, mm, 20 mm, 25 mm, and 30 mm). The single-channel surface RF coil for the Tx/Rx mod had four ports as a voltage source of 1 V and the geometrical phase (0°, 90°, 180°, an 270°). The EM-field calculation was performed with a target frequency of 400 MHz for th 9.4 T MRI system. The target frequency ( ) was defined as the gyromagnetic ratio ( ) an main magnetic field strength (B 0 ) as following: The volumetric RF coil, since the RF transmission field was excited by the iso-cent of the RF coil, achieved a B1 + -field strength of 2.0 μT based on the center of the RF co However, it was difficult to quantitatively analyze the surface coil because various resul were derived according to the change in the position of the target point when analyzin the field by randomly setting the target point. Therefore, in general, in the FDTD analys of the surface coil, the RF power was fixed, and the analysis was performed in terms signal depth and sensitivity of the RF coil.
The numerical phantoms for EM-field simulation utilized the cylindrical phanto (oil phantom and the water phantom (shown in Figure 1a)) and the mouse phantom (Ma PIM1 Mouse by IT'IS Foundation (Information Technologies in Society), Switzerland), shown in Figure 1b. The oil phantom and water phantom had a diameter of 30 mm and length of 100 mm. The oil phantom consisted of dielectric properties with a conductivi of 0 S·m −1 and a permittivity of 4. Meanwhile, the water phantom using distilled wat consisted of dielectric properties with a conductivity of 5 × 10 −5 S·m −1 and a permittivity 76.7. The reason for performing the simulation using the oil phantom was to evaluate th quantitative performance of the RF coil and to verify the EM-fields generated by the R coil itself under ideal conditions. As the strength of the magnetic field increases, RF fie The volumetric RF coil, since the RF transmission field was excited by the iso-center of the RF coil, achieved a B 1 + -field strength of 2.0 µT based on the center of the RF coil. However, it was difficult to quantitatively analyze the surface coil because various results were derived according to the change in the position of the target point when analyzing the field by randomly setting the target point. Therefore, in general, in the FDTD analysis of the surface coil, the RF power was fixed, and the analysis was performed in terms of signal depth and sensitivity of the RF coil.
The numerical phantoms for EM-field simulation utilized the cylindrical phantom (oil phantom and the water phantom (shown in Figure 1a)) and the mouse phantom (Male PIM1 Mouse by IT'IS Foundation (Information Technologies in Society), Switzerland), as shown in Figure 1b. The oil phantom and water phantom had a diameter of 30 mm and a length of 100 mm. The oil phantom consisted of dielectric properties with a conductivity of 0 S·m −1 and a permittivity of 4. Meanwhile, the water phantom using distilled water consisted of dielectric properties with a conductivity of 5 × 10 −5 S·m −1 and a permittivity of 76.7. The reason for performing the simulation using the oil phantom was to evaluate the quantitative performance of the RF coil and to verify the EM-fields generated by the RF coil itself under ideal conditions. As the strength of the magnetic field increases, RF field inhomogeneity occurred by shifted |B 1 + |-field and |B 1 − |-field, it became difficult to quantitatively evaluate the unique characteristics of RF coils [70,71]. In particular, it was difficult to quantitatively evaluate the RF field due to the inhomogeneity of the |B 1 + |-field and |B 1 − |-field since the reciprocity theorem was not applied to ultra-high field MRI above 7.0 T, and this inhomogeneity became more severe as it has higher conductivity. For this reason, the EM-field generated by the RF coil could be quantitatively evaluated in simulation using the oil phantom, whereas the EM-field simulation using the water phantom, which consisted of distilled water, could calculate EM-fields similar to the MR images using the actual mice assuming that there was a 1 H proton signal inside the water molecule of the experimental mice. The mouse phantom had a length of 98 mm without a tail and a mass of 45 g, with 49 tissue parameters. The PIM1 mouse phantom was based on MR segmented data, and each of the 49 defined tissue parameters was described analytically. The tissue parameters were assigned, such as electric conductivity, permittivity, and permeability for EM-field simulations. Tissue parameters for application to thermal simulations (such as thermal conductivity, heat generation rate, heat transfer rate, and heat capacity) were also defined in detail. The tissue parameter values, including density and dielectric properties, were included in the material database provided by IT'IS Foundation (DOI: 10.13099/VIP91201-01-0) [72].
The distance between the single-channel surface RF coil and each phantom (Oil phantom, water phantom, and mouse phantom) was set to be the closest possible distance of 1 mm. In clinical MRI, MR images could be obtained that provide higher sensitivity and uniformity by using a multi-channel RF coil configured with volumetric features in ultra-high field MRI. However, in the case of preclinical MRI, it was difficult to insert both volumetric multi-channel RF coils and experimental animals within the limited magnet bore size. Therefore, in order to obtain a higher SNR MR image, the surface coil was used mainly in contact with object as close as possible to obtain the image.
For numerical calculations, the computational space composed of Yee cells was set to 86 × 83 × 156 cells (1.114 mega cells) for oil phantom and 171 × 166 × 418 cells (11.865 mega cells) for mouse phantom along the x, y, and z directions. Along the x, y, and z-axis, three-dimensional Yee cells were adopted with a resolution of less than 1 mm, including the absorbing boundary condition with a perfectly matched layer for the acquisition of accurate EM-field distribution and −70 dB conversions to produce a steady-state equilibrium condition of a single-channel surface RF coil. EM-field simulations were calculated by a complex data matrix using MATLAB (Version 2020a, MathWorks, Inc., Natick, MA, USA). The EM-field simulation results calculated using MATLAB were displayed by setting the voxel resolution to 0.2 × 0.2 × 0.45 mm.
The single-channel surface RF coil produces a highly local EM-field sensitivity depending on the signal depth. The signal depth is defined as the point at which the sensitivity of the single-channel surface RF coil drops to 37% of that at the center of the single-channel surface RF coil. The signal depth of the single-channel surface RF coil is proportional to the diameter of the coil [73,74]. In other words, as the coil diameter increases, the interference with the target to obtain the MR image increases, which means that noise coming through the target tissue is affected. In addition, as the size of the single-channel surface RF coil decreases, a lot of RF power is required to acquire MR images in the target region. For this reason, signal depth optimization had to be performed to optimize the single-channel surface RF coil. In addition, RF safety optimization considered SAR should also be performed to ensure the safety of experimental mice.
To analyze the performance of the single-channel surface RF coil in terms of the signal depth and sensitivity, we compared the |B 1 |-, |B 1 + |-, and |B 1 − |-field distributions depending on the size of the single-channel surface RF coil at the center slice. |B 1 |-, |B 1 + |-, and |B 1 − |-fields refer to absolute values of B 1 , B 1 + , and B 1 − fields. The B 1 field distribution involving the xand y-component as expressed as B 1xy as follows: (2) The signal depth of the |B 1 |-field was compared using a slice profile in the direction of RF penetration from P1 to P2 in the A-P (anterior to posterior) direction along the y-axis, from the center of the single-channel surface RF coil, as shown in Figures 2 and 3. Meanwhile, the SAR-map on the basis of the |E|-field concentration was validated for determining the RF safety of a single-channel surface RF coil. In addition, for a more quantitative comparison, the maximum, mean, and standard deviation (STD) values were measured from the SAR-map results.
And B1 includes two circularly polarized components defined as B1 + and B1 − , where B1x and B1y are B1 components on the x-axis and y-axis, respectively. B1 + and B1 − can be defined as follows: The signal depth of the |B1|-field was compared using a slice profile in the direction of RF penetration from P1 to P2 in the A-P (anterior to posterior) direction along the yaxis, from the center of the single-channel surface RF coil, as shown in Figures 2 and 3. Meanwhile, the SAR-map on the basis of the |E|-field concentration was validated for determining the RF safety of a single-channel surface RF coil. In addition, for a more quantitative comparison, the maximum, mean, and standard deviation (STD) values were measured from the SAR-map results.
Results and Discussion
EM-field distributions using an oil phantom were calculated using the |B1|-, |B1 + |-, |B1 − |-, and |E|-field and were compared according to changes in the size of the singlechannel surface RF coil, as shown in Figure 2. Typical surface RF coils exhibited strong signal sensitivity at the center of the RF coil and formed a semicircular |B1|-field distribution, where relatively low signal sensitivity was observed; however, the signal sensitivity and |B1|-field distribution changed as the diameter increased for the single-channel surface RF coils. Figure 2 shows EM-field distributions in the axial slice (x-y plane) using an oil phantom with cylindrical geometry. In Figure 2a, the |B1|-field distributions of the single-channel surface RF coil with diameters of 10 mm and 15 mm each appeared as typical semicircles in the oil phantom. On the other hand, in the case of a single-channel surface RF coil with a diameter of 20 mm, the difference in |B1|-field sensitivity between the center of the coil and the periphery regions was markedly reduced. For the |B1|-field with diameters of 25 mm and 30 mm, a lower signal sensitivity was observed at the center of the singlechannel surface RF coil than in the periphery region. In Figure 2b, the |B1 + |-field distribution tended to be similar to that of the |B1|-field distribution, whereas the |B1 − |-field distribution in Figure 2c showed a different tendency from that of the |B1 + |-field distribution. According to the reciprocity theorem [70], the surface RF coils had an equal distribution as that of the |B1 + |-field for RF transmission and the |B1 − |-field for RF reception. However,
Results and Discussion
EM-field distributions using an oil phantom were calculated using the |B 1 |-, |B 1 + |-, |B 1 − |-, and |E|-field and were compared according to changes in the size of the singlechannel surface RF coil, as shown in Figure 2. Typical surface RF coils exhibited strong signal sensitivity at the center of the RF coil and formed a semicircular |B 1 |-field distribution, where relatively low signal sensitivity was observed; however, the signal sensitivity and |B 1 |-field distribution changed as the diameter increased for the single-channel surface RF coils. Figure 2 shows EM-field distributions in the axial slice (x-y plane) using an oil phantom with cylindrical geometry. In Figure 2a, the |B 1 |-field distributions of the single-channel surface RF coil with diameters of 10 mm and 15 mm each appeared as typical semicircles in the oil phantom. On the other hand, in the case of a single-channel surface RF coil with a diameter of 20 mm, the difference in |B 1 |-field sensitivity between the center of the coil and the periphery regions was markedly reduced. For the |B 1 |-field with diameters of 25 mm and 30 mm, a lower signal sensitivity was observed at the center of the single-channel surface RF coil than in the periphery region. In Figure 2b, the |B 1 + |field distribution tended to be similar to that of the |B 1 |-field distribution, whereas the |B 1 − |-field distribution in Figure 2c showed a different tendency from that of the |B 1 + |field distribution. According to the reciprocity theorem [70], the surface RF coils had an equal distribution as that of the |B 1 + |-field for RF transmission and the |B 1 − |-field for RF reception. However, as the magnetic field strength increased, the reciprocity theorem was not established due to inhomogeneity between RF transmission and RF reception. Figure 2d shows the |E|-field concentration of a single-channel surface RF coil. The maximum |E|-field value was observed with a diameter of 10 mm, and the |E|-field distribution relatively decreased as the diameter of the single-channel surface RF coils increased. According to the EM-field simulation results shown in Figure 2, the farther the distance from the single-channel surface RF coil along P1-P2, a more linear reduction field pattern was observed in the EM field. As shown in Figure 2, a single-channel surface RF coil consisting of a diameter of 20 mm provided optimal |B|-field distribution. On the other hand, surface RF coils with a diameter of more than 25 mm increased the signal sensitivity of the |B|-field in the periphery region rather than in the center of the coil.
EM-field simulation results of the water phantom using distilled water modeled to the same size as the oil phantom were calculated as shown in Figure 3. EM-field distributions using the water phantom were calculated using the |B 1 |-, |B 1 + |-, |B 1 − |-, and |E|-fields and were compared in the same way as the EM-field simulation results using the oil phantom in Figure 2.
In the EM-field simulation result using the water phantom, as shown in Figure 3, a deeper signal depth could be observed in the |B 1 |-and |B 1 + |-fields compared to the result using the oil phantom (in Figure 2). This was due to the characteristics of the water phantom using distilled water that had higher dielectric properties than the oil phantom. The simulation results using the water phantom showed that the EM-field was distorted along the y-direction (near the P2 point), but it was not found in the EM-field simulation result using the oil phantom. It was possible to observe an EM-field distribution in which the signal strength increases in the opposite direction where the single-channel surface RF coil was located.
In Figure 4, EM-field distributions using a mouse phantom were verified for the |B 1 |field, |B 1 + |-field, |B 1 − |-field, |E|-field, and SAR-map, depending on the change in the size of the single-channel surface RF coil. Figure 4 shows that the results of using a mouse phantom were similar to the results obtained using an oil phantom and water phantom, as shown in Figures 2 and 3. However, due to the narrowed dorsal shape of the mouse phantom, high signal sensitivity patterns close to the microstrip line of the single-channel surface RF coil were not observed, which contrasted with the patterns shown in the oil phantom and water phantom results. In addition, the distorted EM-field distribution with reversed signal intensity near the P2 point observed in the water phantom result was equally observed in the |B 1 |-, |B 1 + |-, and |B 1 − |-field distribution. In Figure 4a, typical |B 1 |-field distribution was observed in a single-channel surface RF coil with a diameter of 20 mm, but a single-channel surface RF coil with a diameter of more than 25 mm was observed to flatten the |B 1 |-field distribution. For the |B 1 + |-field distribution, shown in Figure 4b, we verified field patterns that were similar to those of the |B 1 |-field distribution, as shown in Figure 4a. The |B 1 − |-field distribution shown in Figure 4c produced a non-linear field pattern in which the |B 1 − |-field distribution increased at a certain point without linearly decreasing. Figure 4d shows the |E|-field distribution using a mouse phantom as the single-channel surface RF coil diameter increased. The |E|-field concentration generated a maximum value near the single-channel surface RF coil, and penetration into the mouse phantom was observed as the diameter of the single-channel surface RF coil increased. Meanwhile, unlike that shown in Figure 2d, the penetration depth of the |E|-field was observed to deepen due to the dielectric properties of the mouse phantom consisting of various tissues. In terms of RF safety, the SAR-map results in Figure 4e showed a similar tendency as the |E|-field results shown in Figure 4d. The results of the single-channel surface RF coil with a diameter of 10 mm showed that the SAR field was concentrated close to the single-channel surface RF coil. Meanwhile, the SAR distribution penetrated the mouse model due to an increase in the diameter of the single-channel surface RF coil. In Figure 4a, typical |B1|-field distribution was observed in a single-channel surface RF coil with a diameter of 20 mm, but a single-channel surface RF coil with a diameter of more than 25 mm was observed to flatten the |B1|-field distribution. For the |B1 + |-field distribution, shown in Figure 4b, we verified field patterns that were similar to those of the |B1|-field distribution, as shown in Figure 4a. The |B1 − |-field distribution shown in Figure 4c produced a non-linear field pattern in which the |B1 − |-field distribution increased at a certain point without linearly decreasing. Figure 4d shows the |E|-field distribution using a mouse phantom as the single-channel surface RF coil diameter increased. The |E|-field concentration generated a maximum value near the single-channel surface RF coil, and penetration into the mouse phantom was observed as the diameter of the single-channel surface RF coil increased. Meanwhile, unlike that shown in Figure 2d, the penetration depth of the |E|-field was observed to deepen due to the dielectric properties of the mouse phantom consisting of various tissues. In terms of RF safety, the SAR-map results in Figure 4e showed a similar tendency as the |E|-field results shown in Figure 4d. The results of the single-channel surface RF coil with a diameter of 10 mm showed that Figure 4. The maximum SAR value was highest at 4.661 × 10 −3 W/Kg on a single-channel surface RF coil with a diameter of 10 mm and decreased as the diameter of the single-channel surface RF coil increased. The maximum value of the SAR-map was reduced by approximately 25% when the diameter of the single-channel surface RF coil increased from 10 mm to 15 mm, and by approximately 47% when the diameter increased from 15 mm to 20 mm. Increasing the diameter of a single-channel surface RF coil from 20 mm to 25 mm reduced the maximum SAR by 20%, and increasing from 25 mm to 30 mm reduced the maximum SAR by 16%. The SAR values varied rapidly with a single-channel surface RF coil with a diameter of 20 mm, and there was a similar tendency in the mean value and STD distributions. In Figure 5, the signal depth according to the size of the single-channel surface RF coil was compared with a resolution of 0.2 mm per point in the y-axis direction using a slice profile between the P1 and P2 points, as shown in Figures 2-4. As can be seen from the simulation results using the oil phantom in Figure 5a,d,g, |B 1 |-and |B 1 + |-fields indicated similar slice profiles, except for differences in the signal sensitivity. The signal sensitivity of the |B 1 |-and |B 1 + |-fields tended to decrease with the increasing diameter of the single-channel surface RF coil, and in the |B 1 − |-field distribution, the signal sensitivity tended to increase with the increasing diameter of the single-channel surface RF coil. The |B 1 − |-field results in Figure 5g showed an opposite tendency from the |B 1 |-and |B 1 + |field distributions. This tendency was similar in the EM simulation of the water phantom (shown in Figure 5b,e,h) and mouse phantom (shown in Figure 5c,f,i), but unlike EM simulation with the oil phantom, signal sensitivity was reversed at approximately 23.6 mm (at 118 points) from a single-channel surface RF coil in the |B 1 − |-field distribution of the water phantom (in Figure 5h). This signal reversal was also observed in the |B 1 − |-field distribution of the mouse phantom (in Figure 5i), with the signal sensitivity reversed at approximately 21 mm (105) on the single-channel surface RF coil. The |B 1 − |-field sensitivity reversed in this way was caused by the inhomogeneity of the main magnetic field and the complex dielectric properties of the mouse phantom in UHF MRI [75,76].
The signal depth, an important measure for evaluating the performance of a single channel surface coil, was defined as the point where the coil's sensitivity drops to 37% of that at the center of the coil [73,74]. To evaluate the signal depth of the central slice profile in the |B 1 |-and |B 1 + |-fields, the point at which the signal intensity decreased by 37% based on the field sensitivity value of the point P1 (in Table 2) was measured and shown in Table 3. In the results of the oil phantom and mouse phantom in Figure 5 and Table 3, for single-channel surface RF coils with a size of 10 mm and 15 mm, the signal depth of |B 1 |-and |B 1 + |-fields were less than 10 mm and sufficient signal depths were not provided. However, in the results of the |B 1 |-and |B 1 + |-fields, a single-channel surface RF coil with a diameter of 20 mm provided a sufficient signal depth of more than 10 mm. Exceptionally, the water phantom results showed that a sufficient signal depth of more than 10 mm was provided in a single-channel surface RF coil with a diameter of 15 mm. The results of the |B 1 |-and |B 1 + |-fields of the water phantom showed that the signal depth increased rapidly as the diameter of the single-channel surface RF coil decreased compared to the oil phantom result. It was confirmed that there was a high similarity between the water phantom result and the mouse phantom result in the |B 1 − |-field distribution. In |B 1 − |-field results (shown in Table 3), the difference in signal depth according to the size change of the single-channel surface RF coil was measured to be up to 3.8 mm with the water phantom and up to 1.6 mm with the mouse phantom. Center slice profile (from P1-P2) of |B1|-, |B1 + |-, and |B1 − |-field using oil phantom, water phantom, and mouse phantom: (a) center slice profile of |B1|-field using oil phantom, (b) center slice profile of |B1|-field using water phantom, (c) center slice profile of |B1|-field using mouse phantom, (d) center slice profile of |B1 + |-field using oil phantom, (e) center slice profile of |B1 + |field using water phantom, (f) center slice profile of |B1 + |-field using mouse phantom, (g) center slice profile of |B1 − |-field using oil phantom, (h) center slice profile of |B1 − |-field using water phantom, (i) center slice profile of |B1 − |-field using mouse phantom.
The signal depth, an important measure for evaluating the performance of a single Figure 5. Center slice profile (from P1-P2) of |B 1 |-, |B 1 + |-, and |B 1 − |-field using oil phantom, water phantom, and mouse phantom: (a) center slice profile of |B 1 |-field using oil phantom, (b) center slice profile of |B 1 |-field using water phantom, (c) center slice profile of |B 1 |-field using mouse phantom, (d) center slice profile of |B 1 + |-field using oil phantom, (e) center slice profile of |B 1 + |-field using water phantom, (f) center slice profile of |B 1 + |-field using mouse phantom, (g) center slice profile of |B 1 − |-field using oil phantom, (h) center slice profile of |B 1 − |-field using water phantom, (i) center slice profile of |B 1 − |-field using mouse phantom. The brain of the mouse phantom used in the EM-field simulation had a height of 8 mm, and the signal depth had to be 10 mm or more to obtain an image of the entire mouse brain region, considering the skull and skin. In addition, the signal depth had to be at least 10 mm to obtain spinal cord images of the mouse's body. In other words, in order to obtain mouse brain and spinal code images in preclinical MRI experiments using mice, a single-channel surface RF coil with a diameter of at least 20 mm had to be used.
The effectiveness of the proposed 20 mm diameter surface RF coil in mouse experiments using preclinical 9.4 T MRI could be confirmed not only in axial slices but also in the results of the sagittal slice (in Figures S1-S3) and coronal slice (in Figures S4-S6). As a result of the distribution of EM-fields using the oil phantom (in Figure S1), water phantom (in Figure S2), and mouse phantom (in Figure S3) in the sagittal slice, it was confirmed that the same distribution as the axial slice appeared. In addition, according to the results of the EM-fields using the oil phantom (in Figure S4), water phantom (in Figure S5), and mouse phantom (in Figure S6) in the coronal slice, it was confirmed that the single-channel surface RF coil with a diameter of 20 mm is the minimum usage criteria.
In this paper, according to the results while using the oil phantom, water phantom, and mouse phantom, a single-channel surface RF coil consisting of a 20 mm diameter provided an optimal |B 1 |-field distribution and confirmed that single-channel surface RF coils with a diameter of more than 25 mm could not provide a typical |B 1 |-field distribution within the guidelines for RF safety. To preserve the characteristics of typical single-channel surface RF coils and ensure that the RF signal is correctly applied to the target point, we proposed using a single-channel coil with a diameter of 20 mm for 9.4 T MRI.
To discuss the results of this study, we proposed a single-channel RF coil with a diameter of 20 mm optimized for small mice at 9.4 T MRI. The proposed 20mm diameter single-channel RF coil provided signal depth of RF coil suitable for mouse experiments in the 9.4 T MRI study and confirmed the RF safety of experimental animals.
Although the proposed 20 mm diameter single-channel RF coil was expected to be readily applicable to preclinical 9.4 T MRI studies, this study was conducted with two limitations.
First, comparative studies on MRIs of various magnetic field intensities had not been conducted. The EM field characteristics and SAR distribution of the RF coil change due to the decrease in the wavelength of the RF frequency depending on the strength of the magnetic field. In this paper, we proposed a single-channel RF coil optimized only for 9.4 T MRI but failed to compare the electromagnetic field and RF safety of a single-channel RF coil in preclinical MRIs with different magnetic field intensities. In the following study, a comparative analysis of preclinical MRI of various magnetic field strengths such as 3.0 T, 7.0 T, 11.4 T, and 21.0 T is required [44,[77][78][79][80].
Second, the RF coil size required in MRI experiments is typically defined according to the type and size of the target animal and the image acquisition region. The EM-field simulations using various experimental animal models are required to accurately assess the performance of the single-channel surface RF coil in preclinical MRI. Therefore, comparative studies on experimental animals of various sizes and types are essential. The EM-field simulation program used in this study can apply various animal phantoms provided by the IT'IS Foundation, enabling comparative studies on animals of various sizes and types.
Experimental animal models for EM-field simulations provided by the IT'IS Foundation provide 17 animal phantoms consisting of a total of 9 species of animals, as shown in Table S1. However, it takes a lot of time to conduct comparative research on various species of animals, and it is difficult to compare all of them within a limited page-length manuscript. Therefore, in this study, size optimization was performed according to the size of the single-channel surface RF coil using only mice, which are the most frequently used in animal experiments [81,82]. In recent research trends using experimental animals (especially in Asia, the European Union, and the United Kingdom), experiments using mice and rats account for 70-80% of the total animal experiments, of which the utilization rate of mice is overwhelmingly high at 60-70% [83].
To solve these two limitations, a comparative study using various main magnetic field strengths and various animal phantoms will be performed. In addition, based on the proposed single-channel surface RF coil with a diameter of 20 mm in this study, we are planning to extend the suggested single-channel surface RF coil to a multi-channel RF coil and are also planning an optimized diameter to provide RF safety of experimental animals.
Conclusions
In preclinical MRI, single-channel surface RF coils have been used in the Tx/Rx mode for small animal studies using rat or mouse models due to their high signal sensitivity. However, as the main magnetic field strength increases, single-channel surface RF coils of optimized size have been used without performing quantitative analysis or following the guidelines of RF safety. Therefore, it is essential to define a single-channel surface RF coil that has been optimized according to the main magnetic field strength and with consideration for the signal depth to ensure that the RF signal is correctly applied to the target image acquisition point within the RF safety guidelines.
From the results of this study, the sensitivity and signal depth of |B 1 |-, |B 1 + |-, and |B 1 − |-field, including RF safety, were verified by using numerical calculations for preclinical MRI. A single-channel surface RF coil of 10 mm and 15 mm each provided high |B 1 |-field sensitivity, and the small size of the single-channel surface RF coil caused an |E|-field concentration, thus resulting in an increase in the SAR value. The use of singlechannel surface RF coils with diameters of 10 mm and 15 mm in a preclinical MRI doubled the SAR compared to the use of single-channel surface RF coils with a diameter of 20 mm at a location close to a single-channel surface RF coil, thus resulting in many drawbacks in SAR safety for experiments using small animals. In preclinical MRI, single-channel surface RF coil acquires images as close as possible to the target small animal, and an increase in the maximum SAR due to the |E|-field concentration lead to a burn injury or changes in the biological properties of the small animal.
Considering RF safety for small animals, the optimal single-channel surface RF coil size should be at least 20 mm, and it is recommended to use a 20 mm single-channel surface RF coil while considering high signal sensitivity, signal depth, and RF safety. In this paper, we verified the efficiency and safety of the single-channel surface RF coil with a diameter of 20 mm for mouse experiments in 9.4 T preclinical MRI. The proposed 20 mm diameter single-channel surface RF coil showed excellent performance in terms of |B 1 |-field signal depth and SAR, which could be utilized to improve the quality of small animal images for preclinical MRI.
In addition, we also expect to be able to conduct research on RF coils that are suitable for more diverse purposes using methods considering the signal depth and SAR analysis for determining optimized single-channel surface RF coils for preclinical MRI studies. In addition, the proposed 20 mm diameter single-channel surface RF coil can be easily applied to multi-channel RF coils in preclinical 9.4 T MRI systems and will provide high |B 1 |-field sensitivity with sufficient signal depth, and will also satisfy SAR safety of experimental mice. Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,445.8 | 2022-06-01T00:00:00.000 | [
"Physics"
] |
An Effective Hybrid Algorithm Based on Particle Swarm Optimization with Migration Method for Solving the Multiskill Resource-Constrained Project Scheduling Problem
The paper proposed a new algorithm to solve the Multiskill Resource-Constrained Project Scheduling Problem (MS-RCPSP), a combinational optimization problem proved in NP-Hard classification, so it cannot get an optimal solution in polynomial time. The NP-Hard problems can be solved using metaheuristic methods to evolve the population over many generations, thereby finding approximate solutions. However, most metaheuristics have a weakness that can be dropping into local extreme after a number of evolution generations. The new algorithm proposed in this paper will resolve that by detecting local extremes and escaping that by moving the population to new space. That is executed using the Migration technique combined with the Particle Swarm Optimization (PSO) method. The new algorithm is called M-PSO. The experiments were conducted with the iMOPSE benchmark dataset and showed that the M-PSO was more practical than the early algorithms.
Introduction
Scheduling problems are studied to solve many cases of science and practice, especially big projects which have many tasks. Actually, each real project often has a much larger amount of tasks than the number of renewable resources: people, tools, machines, devices, etc., which can be used to do all of them. erefore, it is very important to find a good solution to assign resource-to-task efficiently, satisfying the project's preconstraints. e research to solve this problem has many useful implications such as scheduling of resource coordination in operating systems [1] and scheduling for production lines or applications in subjects of economy and finance [2], military [3], cloud computing [4], fog computing [5], wireless sensor networks [6], etc. Many research results published have shown that this problem is classified into NP-Hard [7][8][9].
Resource-Constrained Project Scheduling Problem (RCPSP) [1,10,11] is a project scheduling problem with limited resources, which has been proven to be an NP-Hard problem. e goal is to find a good schedule with minimal time in terms of resource constraints. Currently, this problem is studied and applied in many fields. According to the problem definition, it has two important constraints: (i) Each renewable resource can only perform a task at a time. Until that task completes, the resource can be used to execute another task.
(ii) When a task has started, it cannot be interrupted until it completes.
In each project, tasks have precedence constraints, which means some tasks need to be finished before others.
An important problem extended from RCPSP is MS-RCPSP [7,8,12] (multiskill RCPSP) that has a new constraint on the skill factor of implementation resources. e new characterization of this problem states that a resource may have more than one skill type, and each skill type has a specific skill level. Each task will have exact requirements about the skill type and skill level of the performing resource.
is definition makes the MS-RCPSP more appropriate to real-life projects. e problem is solved by initializing a population of many feasible schedules for the project. e population will evolve step by step through generations to find the best schedule. However, evolution methods often fall into local extremes, so the population cannot run further.
is paper proposes a new algorithm using the Migration method combined with PSO traditional to crack the disadvantage when falling into the local extremes. e innovations and main contributions of this paper are described as follows.
(i) Detecting local extremes in the population evolution. (ii) Proposing the method to escape local extremes by moving the population to a new values space using the Migration. e rest of the paper is separated into six sections. Section 2 introduces some early approaches to decoding the MS-RCPSP problem. In Section 3, we raise the mathematical statement of the MS-RCPSP problem. Section 4 presents a new efficient algorithm for finding the approximate solution based on the PSO strategy [2,13]. In this section, we focus on detailing the Migration technique, which is a major component that makes up the power of the proposed algorithm. To evaluate the algorithm, this section also introduces the particle presentation. Section 5 experiments to demonstrate the efficiency of the proposed algorithm. e examination is conducted on some instances of the iMOPSE benchmark dataset. All experimental results are compared and analyzed with other algorithms to show the effectiveness of the new algorithm. Finally, Section 6 arranges the paper and draws further research directions to enhance the quality of the MS-RCPSP problem.
Approximation Methods for MS-RCPSP.
In recent years, researching good methods to solve the MS-RCPSP [7,12,14] problem has become important in deploying it into many domains. Since this is a combinatorial optimization problem belonging to the NP-Hard [11,15,16] classification, we cannot find an optimal solution in polynomial time, so the objective of methods is to find an approximate result based on metaheuristic techniques. Authors usually use evolutionary approaches as GA [17,18], PSO [19][20][21][22], Greedy, Min-Max, etc. to solve and get out the approximate solutions. Some of the outstanding related works are shown in Table 1.
Myszkowski et al. [8,9,23] have researched this problem for many years, and they have many quality publication papers presented regarding that. e authors proposed algorithms based on evaluation methods such as Tabu Search [8], Ant colony, GA [9,24,25], etc. Most of them are traditional methods, so they usually have limited effective results. However, the best contribution of Myszkowski is to build and publish the standard dataset used to experiment with the new algorithms for the MS-RCPSP. It is called the iMOPSE dataset [9].
Hosseinian and Baradaran [7,26] have many published papers about MS-RCPSP too. In 2018, Hosseinian and Baradaran published a paper [7] on the Multimode MS-RCSPSP (MMSRCPSP), a subclass of the MS-RCPSP problem. MMSRCPSP has added constraints where each task can only be executed in a few predefined modes. When the mode is selected, it cannot be changed. e authors suggested a method to make individuals in the next generation, which is built from the GA algorithm combined with the decision-making method based on Shanon-entropy data measure. Experiments were carried out on randomly generated datasets by ProGen software. After that, in 2019, the authors published a paper [26] on the Dandelion Algorithm [27]. In 2020, this group of authors continued to publish an article that solves the multiobjective MS-RCPSP problem with two objectives: total time and cost to complete a project. e authors use the Pareto-based Gray Wolf Optimizer algorithm and continue to test the results on the iMOPSE standard dataset [9].
In 2017, Javanmard et al. used two traditional evolutionary algorithms, GA and PSO, to schedule multiskill resources (actually workers and engineers) to minimize the total cost in the chemical industry [28]. e proposed algorithms are tried on the PSPLIB dataset [29] not fully suitable for scheduling problems because of no cost parameter objectives. Moreover, the authors cannot compare their traditional algorithms with newer ones at the experimental stage but only compare them.
Also studying multiobjective problems such as Hosseinian et al., Davari-Ardakani [30] proposed a multiobjective variant of the MS-RCPSP problem to minimize the time and cost of the current project. e proposed issue is called MSPSP, which is limited to projects with the following two characteristics: (i) e timing of implementation is arbitrary; for example, the project can be done in the evenings or on weekends (ii) Energy costs are very high, comparable to wages paid to employees.
However, the authors only gave the problem model without any new solution method to solve it. ey only conducted experiments with the Max-Min method, a very traditional method.
PSO Method.
Particle Swarm Optimization (PSO) [31][32][33][34][35] is an evolutionary algorithm. Similar to other evolutionary algorithms, PSO will perform a populationbased search; initially the population is randomly initialized with a certain number of individuals. However, PSO differs from other evolutionary algorithms in that each individual in the population is determined by two basic parameters, the position vector (representing the individual's experience over generations) and the velocity vector (representing the population's experience over generations). Each individual will move in the solution space at a particular velocity. After each generation, the individuals will move towards the best position of the individual in the past and the best position of the population and after each generation, the instances will move towards better search regions in the search space. In the process of finding individuals, the displacement vector and the position vector will be updated according to the best value of that individual in the past and the best position of the population according to the following formula.
An effective algorithm suggested by Kennedy and Eberhart [32] in 1995 in the evaluation optimization method is PSO (Particle Swarm Optimization). In the PSO, each particle in each generation is evaluated from two values containing position and velocity where the position is calculated as follows: where i are velocity of particle i at generation k and k + 1.
i are position of the particle i at generation k and k + 1. (iii) ω is inertia weight; c 1 , c 2 are speedup coefficients. (iv) rand 1 , rand 2 are the values between 0 and 1 randomly generated. (v) pbest i is the best position of particle i. (vi) gbest: the best particle position in a population.
Problem Definitions
e MS-RCPSP [7,8,12,14] is a subproblem of RCPSP (Resource-Constrained Project Scheduling Problem) that added the skill domain to the renewable resources. e objective of this problem is to find the minimum makespan, i.e., the minimum time to complete the full project. In the running time, each task requires a particular skill, where skill level is equal to or greater than the task's requirement.
To define formulations of the MS-RCPSP, we need a notation system presented in Table 2.
Using the notations presented in Table 2, we can construct the mathematical model for the MS-RCPSP as follows.
subject to where we have the following: (i) Constraint (3): a resource must have one or more skills. (ii) Constraints (4) and (5): the duration of any task must be equal to or greater than zero (in fact, any real task's duration is a positive number; only two dummy tasks have the duration equal to 0, which shows the start and finish time of the project). (iii) Constraint (6): the parent task (task i) must be finished before the child task (task j) starts. When task i ends are denoted E i , the time when subtask j starts is E j − t j . (iv) Constraint (7): any task i ∈ W k , having skill S ∈ S k satisfied to g S � g S i shows that the skill of r is the save with skill of L i . h S q ≥ h r i : skill level of using the resource is equal to or greater than the task's requirement. (v) Constraint (8): at any time, each resource can execute only one task. To check the availability of the resource, we use the expression n i�1 A q i,k ; if it results in 0 value, the k resource cannot be assigned to do any task, else k is used. (vi) Constraint (9): each task is executed by only one resource.
In the MS-RCPSP problem, the resource must meet the skill type and skill level to perform a task. Accordingly, a Hosseinian [4], [5], [6], [7] GA, PSO ProGen, iMOPSE 4 2019 H. Davari-Ardakani [21] Min-Max iMOPSE resource can commit the task if its skill is the same as the task's required, and the skill level must be equal to or greater than the tasks needed. e resource-to-task assignment can be presented as a matrix illustrated in Figure 1.
In Figure 1, S i.j denotes S i skill type and it has j skill level. Tasks that require execution resources must meet a specific skill type and skill level. e feasible resource should have the same skill type and skill level greater than or equal to the required skill level. Example 1. Figure 1 presents a project with four tasks and four renewable resources. e resources have skill types from S 1 to S 2 , and each of them can have a skill level from 1 to 3. According to MS-RCPSP constraint, to do task W 1 , the resource must meet the task's requirement that means the resource has to have skill type S 2 and skill level equal to or greater than 2. Looking into the resource list, we determine that resource L 1 has skill type S 2 and skill level is 2, so L 1 can be allocated to execute W 1 .
Schedule Representation.
In the scheduling algorithm, a schedule is represented as a vector and the vector's item number is equal to the task number of the project. e value of each item shows the resource index to perform it. e corresponding resource has to meet the requirement constraint of the task. e priority graph of the tasks is shown in Figure 2. e duration of tasks (in hours) is shown in Table 3. Figure 3 shows the assigning of resources to execute tasks of the project. e resource to perform the task needs to meet the priority constraints of the problem containing the skill requirements and skill level. Can be assigned Can not be assigned Tasks Resources Figure 1: e relation matrix of resources-tasks assignment.
Applied Computational Intelligence and Soft Computing
A vector can present the schedule shown in Figure 3 as Table 4. Table 4 presents the task-resource assignment of the project, where the resource L 1 performs tasks: W 1 , W 4 , W 5 , W, W 7 , W 10 ; the resource L 2 is assigned to execute tasks: W 2 , W 3 , W 8 , W 9 .
Migration Method.
To find a feasible solution with the minimum makespan for the MS-RCPSP problem, we study the Migration method combined with the PSO [32][33][34]; the new algorithm is named M-PSO.
Migration Method.
e PSO algorithm tends to fall into the local extreme when performing evolution. e Migration method moves the population from the local extreme to new space while expanding the search space.
Definition 1.
A population is said to be not successfully evaluated if the makespan of the population is still fixed between two continuous generations. e Migration method runs over steps as follows: Step 1: detecting the local extremes: To detect local extremes, we use a variable n f to count the number of times the population has failed to evolve consecutively; if this value is greater than a specified threshold (n max ), then the population has fallen into the local extreme. Equation (10) shows how to evaluate n f .
Step 2: moving the population to a new space: To move the population staying local extreme to new space, we continue with some next steps: (i) Consider each particle of the population. (ii) For each task i of the particle, find the set L i of resources that can perform task i, sorting the resources in L i by the index of each resource.
(iii) Replace the current resource used to do i task by resource staying at the opposite position in L i sorted. Equation (11) represents this step. Figure 4 illustrates the Migration method by changing execution resources. Specifically, the task being performed by resource L 2 will be assigned to L m − 1 and the task being performed by resource L m will be allocated to L 1 .
e new assigning resource-to-task has to ensure the problem's constraints. After all the particles of the population are moved, we will get a new population that is migrated to a new space, which means the population is escaping to the local extreme.
e Migration pseudocode is implemented as follows (Algorithm 1):
e M-PSO Algorithm.
e M-PSO algorithm is improved from the PSO algorithm integrated with the Migration method to improve the efficiency of results. e detail of M-PSO is described in Algorithm 2 as follows: Lines 21 to 25 show the way to find out the local extreme. Lines 31 to 34 check the threshold and call Migration function to move the population to a new space, making the population escape the current local extremum area and expand finding space.
Experiment
To estimate the efficiency of our proposed method, we developed the simulator on the Matlab environment conducted on the benchmark iMOPSE dataset. e test results are compared with GA-M [9] and GRASP [9,23], showing that M-PSO has a positive effect.
e Benchmark Dataset.
In the simulated program, we use the iMOPSE [9] installers to investigate many existing algorithms as GRASP, GA-M, etc. e iMOPSE dataset has the following characteristics: For j � 1 to n (5) L i ← the subset of resource can be performed the task L i (6) L i ← Sort(L i ) (7) idx ← index of resource executed the task i End for // j (11) End Function ALGORITHM 1: Migration.
Input: n max: the threshold to find local extremal, t max: number of evolution generations. Output: the best solution g best (1) Begin (2) P all ← Init data from iMOPSE dataset and create population.
(ii) Data: 30 iMOPSE installers that are shown in Table 5. (iii) Number of particles in the population N p : 100. (iv) Number of evolutionary generations N g : 50,000.
(v) Number of executing times for a data install: 35.
All result data contains average, standard deviation, and best values.
Experiment Results.
e results of the M-PSO are presented in Table 6; this table also shows the values of GA-M and GRASP algorithms published together with the iMOPSE dataset.
In Table 6 Comparing the M-PSO with GRASP, we have the following: (i) e results of M-PSO have better results than GRASP in most cases, specifically better than GRASP 27/30 instances (from 0.6% to 15.8%) with the BEST value and 27/30 instances data with AVG value (from 0% to 16.9%).
e M-PSO can detect local extremes and escape them using the Migration technique, achieving higher efficiency than other comparison algorithms. e experiment results also show that the more the tasks in the project are, the more the M-PSO becomes satisfactory. e superiority of M-PSO 100_5_22_15 100_5_46_15 100_5_64_15 100_5_64_9 100_10_64_9 100_10_65_15 100_20_22_15 100_20_46_15 100_20_47_9 100_20_65_15 100_20_65_9 100_10_26_15 100_10_47_9 100_10_48_15 100_5_48_9
Conclusion and Future Work
In this paper, we have presented the MS-RCPSP problem, a combinatorial optimization problem with many scientific and practical applications. It has described the mathematical model of the problem and proposes a new algorithm to find a feasible solution for the MS-RCPSP problem. e proposed algorithm is M-PSO, which is improved from the PSO algorithm combined with the migration method to escape the local extremes and expand the searching space. e conduction evaluated the effectiveness of our proposed algorithm on the iMOPSE dataset (standard dataset used for MS-RCPSP problem). All experiment results were collected and compared with other algorithms such as GA-M and GRASP. Experimental results show that M-PSO's results are better than previous algorithms, specifically better than GA-M from 6.2% to 32.97% and GRASP from 0.6% to 15.8%.
In the future, the authors will continue to research and improve the algorithm based on other approximation methods, using random moves based on Gauss, Cauchy, etc. to improve the suggested effectiveness of the algorithm.
Data Availability e paper uses the standard iMOPSE dataset to test the efficiency of the algorithm. is dataset is publicly available at https://imopse.ii.pwr.wroc.pl/ and is free of charge.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 4,561.2 | 2022-02-15T00:00:00.000 | [
"Computer Science"
] |
Murine splenic B cells express corticotropin-releasing hormone receptor 2 that affect their viability during a stress response
Chronic stress is now recognized as a risk factor for disease development and/or exacerbation. It has been shown to affect negatively the immune system and notably the humoral immune response. Corticotropin-releasing hormone (CRH) is known to play a crucial role in stress response. CRH receptors are expressed on different immune cells such as granulocytes, monocytes and T cells. However, up to now, no CRH receptor has been described on B cells which are key players of the humoral immune response. In order to highlight new pathways by which stress may impact immunity, we investigated the role of CRH in B cells. Here we show that splenic B cells express the CRH receptor 2 (CRHR2), but not CRHR1. This receptor is functional since CRH treatment of B cells activates different signaling pathways (e.g. p38) and decreases B cell viability. Finally, we show that immunization of mice with two types of antigens induces a more intense CRHR staining in secondary lymphoid organs where B cells are known to respond to the antigen. Altogether our results demonstrate, for the first time, that CRH is able to modulate directly B cell activity through the presence of CRHR2.
B cells are key players of humoral immunity through their ability to produce antibodies and enhance antibody affinity via somatic hypermutation 28 . This latter phenomenon contributes to a better protection of the organism. Depending on the nature of the antigen (T cell-dependent or T cell-independent), B cells require or not cooperation with T cells to mount their response. As T cells express CRHR, CRH can affect this cell type and consequently B cell responses in the case of T cell-dependent antigens (indirect action). However, it is also of crucial interest to determine if B cells can be directly affected by CRH. Some studies have tried to address this question but conflicting results were reported. Using human blood mononuclear cells, Leu and Singh showed that CRH inhibits antibody production while Smith et al., using murine splenocytes, showed that CRH enhances antigen-specific antibody production 29,30 . CRH microinjection into the lateral ventricle of rats was shown to slow down antibody induction in response to a T cell-dependent antigen both after primary or secondary immunization 31 . When CRH was injected intraperitoneally or subcutaneously, this diminution did not occur. Finally, it was shown that immunization of CRH transgenic (CRH-Tg) mice decreased humoral immune response and germinal center formation 32,33 . Taken together, these studies strongly suggest that CRH is able to modulate B cell activity. However, the presence of CRH receptors on these cells has never been reported. Thus, it is not known if CRH can acts directly on B cells.
As a first step to determine whether CRH action on B cells could be direct, we searched for CRHR on purified splenic murine B cells and discovered that these cells express CRHR2 but not CRHR1. These CRH receptors are functional as the addition of CRH to splenic B cells activates p38β, p38δ MAP kinases, GSK3β phosphorylation and the PKA/CREB signaling pathway. Secondly, we were interested in the CRH effect on splenocyte viability. In the presence of the hormone, splenic B cell viability was decreased while the one of T cell remained unchanged. Finally, we showed that immunizations led to a more specific CRHR labeling in secondary lymphoid organs correlating with B cell areas labeling. Taken together, these data show that CRH can act directly on B cells and affect their physiology.
Murine splenic B cells express CRHR2.
The presence of CRH receptors has been reported on murine splenic T cells and macrophages but not on splenic B cells. Thus, we investigated the presence of CRH receptors on B cells. As two types of CRH receptors have been described, CRHR1 and CRHR2, we wondered which one could be expressed by murine splenic B cells. To address this question, B cells were purified by negative selection from murine splenocytes and purity, determined by flow cytometry, was comprised between 92 and 97%. Then, RNA was extracted from these splenic B cells and RT-PCRs were performed with two pairs of primers defined in specific parts of CRHR1 and CRHR2 transcripts. As shown in Fig. 1a, no amplification was obtained with CRHR1 primers even after 40 cycles of amplification whereas amplicons were obtained with both pairs of CRHR2 oligonucleotides. Sequencing of these PCR products confirmed that amplicons were CRHR2 cDNA fragments demonstrating that murine B cells express only these receptor transcripts.
Then, we investigated whether CRHR2 protein expression level could be modulated by CRH. To address this question, splenocytes were cultured for 48 h with CRH 1 or 100 nM. As for RT-PCR experiments, B cells were purified by negative selection and the expression of CRHR2 was assessed by western blotting. Figure 1b shows that B cells express CRHR2 independently of the CRH treatment. Interestingly, two bands were observed and might correspond to different CRHR2 isoforms. To confirm CRHR2 expression by murine splenic B cells, splenocytes were cultured with 1 or 100 nM of CRH for 48 h and analyzed by immunofluorescence. As shown in Fig. 1c, CRHR2 was found to be expressed on B cells (CD19 + ). Altogether these results firmly demonstrate that splenic B cells express CRH receptors and more specifically CRHR2.
CRH activates different signaling pathways in murine splenic B cells. To determine if CRHR2
are functional, splenic B cells were purified and incubated with CRH 100 nM for different durations (15,30 or 60 min). Proteins were then extracted and applied onto a proteome phospho-MAPK array to screen signaling transduction pathways that can be activated by this hormone. No activation was noted after 15 min of incubation with CRH (Fig. 2). After 30 min, an increase of the phosphorylation levels of mTOR (≈ 20%), GSK3β and p38β (≈45%), and p38δ (≈100%) was observed in B cells treated with CRH compared to non CRH-treated B cells. Moreover, the phosphorylation level of CREB was increased by approximately 45% after 60 min of CRH incubation. These results indicate that different signaling pathways could be activated by CRH and that CRHR2 are functional in B cells.
CRH affects murine splenic B cell viability. As it was shown that CRH can induce peripheral blood lymphocyte apoptosis 34 , we next studied the effect of CRH on B cell viability. Splenocytes were cultured with or without CRH 1 or 100 nM for 48 h. Then, T and B cells were labeled with anti-CD3 and anti-CD19 antibodies, respectively, before (T0) and after 48 h of culture (T48). Flow cytometry analyses showed that the percentage of B cells decreased after 48 h of culture with or without CRH while the percentage of the T cells increased (≈ 60% of B cells and 30% of T cells at T0 with a T/B cells ratio of 0.48; ≈50% of B cells and 45% of T cells at T48 with a T/B cells ratio ranging from 0.88 to 0.96) (Fig. 3a and c). Analysis of Annexin-V (Anx-V) staining in both T and B cell populations showed that B cells were more sensitive to apoptosis than T cells (≈10% of B cells and 3% of T cells were Anx-V positive at T0 while 66% of B cells and 35% of T cells were Anx-V positive at T48 without CRH), (Fig. 3b and d). Furthermore, the treatment of splenocytes with CRH significantly increased the rate of Anx-V stained cells only in the B cell population (+6.5% and +5% with CRH 1 nM and 100 nM respectively). These last results demonstrated that, in vitro, CRH decreased B cell survival.
Scientific REPORtS | (2018) 8:143 | DOI:10.1038/s41598-017-18401-y CRHR expression is increased in the spleen after intraperitoneal immunization. As reduced humoral immune response and germinal center formation were noted after immunization of CRH-Tg mice, we performed in vivo experiments to further understand the function of CRH receptors on splenic B cells. Mice were immunized with two T cell-dependent antigens, BSA (bovine serum albumin) and NP-KLH (4-hydroxy-3-nitrophenylacetyl hapten conjugated to keyhole limpet hemocyanin), or with a T cell-independent antigen, LPS (lipopolysaccharide). Then, immunofluorescence staining was used to assess the expression of CRHR within the spleen (Fig. 4). In non-immunized mice, splenic CRHR labeling showed no precise localization. After immunization with BSA, CRHR staining was increased into B cell areas corresponding to follicles where B cells are known to respond to T cell-dependent antigens and lead to germinal center formation. This result did not depend on antigen composition because immunization with another T cell-dependent antigen, NP-KLH, led to the same CRHR staining localization (white arrows). After immunization with LPS, a more specific CRHR labeling was observed around B cell areas, corresponding to marginal zones (red arrows). In these areas, B cells are known to be activated with T cell-independent antigens. Taken together, these results suggest that whatever the antigen used for immune system activation, the splenic CRHR expression is more specifically localized in B cell areas.
Discussion
Over the past few years, it has become clear that CRH can exert direct effects on immune tissues, such as the spleen, through the presence of CRH receptors in these tissues. Nevertheless, direct CRH effects on leukocytes, notably on B cells, are not well known yet. Thus, the purpose of our study was to better understand the impact of CRH on B cells.
We first demonstrated by western blotting and immunofluorescence that splenic murine B cells express CRHR and, by sequencing of RT-PCR products, that they express only CRHR2 but not CRHR1. This result is very interesting because it shows for the first time that CRH can act directly on B cells to modulate their physiology during a stress response. Western blotting experiments performed with the anti-CRHR antibody revealed two bands. In rodents, different CRHR2 mRNAs are generated by alternative splicing leading to two isoforms: CRHR2α and CRH2β 35 . This alternative splicing leads to the deletion of exons 1 and 2 in CRHR2α mRNA and to the deletion of exon 3 in CRHR2β mRNA 36 . The predictive size of these two isoforms is 47 kDa for CRHR2α and 50 kDa for CRHR2β (UniProt; http://www.uniprot.org/uniprot/Q60748). Thus, the two bands observed in our western blotting experiments likely correspond to CRHR2α for the lower band and to CRHR2β for the higher band. This indicates that murine splenic B cells express both isoforms.
CRH is known for having a greater affinity for CRHR1 than for CRHR2 (Kd of 3 nM and 10-40 nM, respectively) 37 . However, different studies have demonstrated that CRH can act via its type 2 receptor on neurons or macrophages [38][39][40] . Once activated, rodent CRHR can activate different phospho-MAP kinases pathways like CREB (cyclic AMP response element-binding), p38 and GSK3β [41][42][43] . Consequently, we screened different signaling pathways that could be activated by CRH in murine B cells.
Based on the literature, cells were incubated with CRH 100 nM, to mimic high stress levels, because in vivo studies estimated that CRH level could rise above 100-200 nM in the hypothalamus and hippocampus of animals placed under stressing conditions 44 . Our results suggest that CRHR2 binding induces the activation of different MAP kinases pathways such as p38 (more precisely p38β and p38δ), GSK3β and CREB. Then, we further investigated the consequences of the presence of CRHR on B cell survival. Indeed, it has been shown that CRHR2 activation induces apoptosis in a murine macrophage cell line (RAW264.7) 43 . Our data show that both CRH 1 (to mimic mild stress levels) and 100 nM lead to an increase of murine splenic T/B cells ratio which is due to a decrease of B but not T cell viability. These results are consistent with in vitro studies showing that 1 nM of CRH could affect PC-12 rat adrenal cell line apoptosis and murine oocyte maturation 45,46 . Interestingly, the diminution of B cell survival correlates with proteome phospho-MAPK array results, showing the activation of different MAP kinases pathways such as p38, GSK3β and CREB known to be involved in cell death. Indeed, the increase of the phosphorylation levels of such protein kinases has been associated with apoptosis induction in various cell types such as p38δ in keratinocytes 47 or GSK3β in neurons 48 and hepatocytes 49 . In the same manner, in animal models of ischemia-reperfusion injury or neonatal hypoxia-ischemia, GSK3 inhibition protects the heart and the neurons respectively via the diminution of inflammation and apoptosis 50,51 . How CRH specifically impacts B cell survival remains unclear. However, based on our data and on the literature, one could speculate that such early signaling events induced by CRH might lead to the reduced B cell survival. Studies performed on CRH-Tg mice revealed the same alteration of T/B cells ratio in the spleen, associated with a B cell maturation blockage and a decrease of germinal center formation after immunization 32,33 . Nevertheless, in vivo studies did not dissociate CRH and glucocorticoid action. As CRH overexpression leads to glucocorticoid overproduction and because B cells express glucocorticoid receptors 53 , it is difficult to know if such results are due to a direct impact of CRH on lymphocytes. CRH could potentiate glucocorticoid action and vice-versa. Indeed, in our study, we observed an increase of glucocorticoid receptor transcription levels in murine splenocytes after CRH treatment during 48 h (Supplementary Fig. S2). However, during our in vitro studies, there was no glucocorticoid stimulation. Thus, the impact on B cell viability is the direct consequence of CRH action. Finally, we focused on the in vivo role of CRH in splenic B cell maturation and selection. Immunofluorescence staining of spleen sections from mice immunized with different antigens was performed. In non-immunized mice, immunostaining did not show any CRHR specific localization. This result is not surprising because other splenic cells such as macrophages, dendritic cells and T cells are known to express CRHR 54,55 . After immunization with T cell-dependent antigens (BSA or NP-KLH), CRHR staining was more localized in B cell follicles where germinal centers can develop after B cell activation. Germinal center formation, necessary for B cell maturation and selection, takes place in response to T cell-dependent antigens. After immunization with a T cell-independent antigen (LPS), CRHR staining seemed to be situated in marginal zones located around B cell follicles. These observations suggest that, in addition of being a stress hormone able to act directly on B cells, CRH could also play a role in B cell selection and maturation during immunization/infection. This hypothesis could explain why CRH-Tg mice exhibit a diminution of germinal center formation in the spleen, associated with a decrease of immunoglobulin class switching and antibody affinity maturation 32,33 ; this last observation having also been done in stressed animals 56 .
Conclusion and Perspectives
In conclusion, we show for the first time the expression of CRHR2 on splenic murine B cells thereby rendering them more sensitive to death induced by CRH produced during stress responses. Moreover, the increase of CRHR expression in or around B cell follicles after immunization raises the question of the function of CRH in B cell maturation and selection during immune responses. In the future, it would be interesting to determine if peripheral blood B cells also express CRHR2. Indeed, when a tissue is injured, inflammation occurs and permits not only the entry of circulating leukocytes in the stressed tissue but also the increase of HPA axis activity. As this activation leads to an increase of CRH production, it will be important to investigate if this overproduction modulates the physiology of infiltrated leukocytes, especially in the brain where CRH concentration is high. Indeed, neuro-inflammation studies have shown that blood B cells infiltrating the brain could cause deleterious or protective effects depending on the B cell subset [57][58][59] .
Methods
Mice. C57Bl/6J male mice aged 14-18 weeks (Charles River Laboratories, Saint Germain Nuelles, France) were housed in vented animal cabinets (Noroit, Bouaye, France) under controlled temperature (22 °C) and 12 h light-dark cycle with free access to food and water. Animals were treated in accordance with the national legislation. Experimental protocols were approved by the local ethics committee (Comité d'Ethique Lorrain en Matière d'Expérimentation Animale, permit number: CELMEA-2012-008).
Cell culture and B cell isolation. Mice were anaesthetized with isoflurane (AbbVie, Rungis, France) and euthanized by cervical dislocation. Each spleen was collected and dissociated with 70 μm nylon cell EASYstrainer (Greiner bio-one, Dutscher, Brumath, France). Then, splenic red blood cells were lyzed into 2 mL of 1X RBC lyzis buffer (eBioscience, Affymetrix, Rennes, France) for 2 min at 37 °C and the reaction was stopped by adding 10 mL of RPMI 1640 medium. After centrifugation at 300 g during 5 min, the pellet was suspended in RPMI 1640 medium enriched with 10% heat inactivated foetal calf serum (Sigma-Aldrich, L'Isles d' Abeau Chesnes, France), 100 U/mL penicillin, 100 μg/mL streptomycin, 10 mM HEPES, 2 mM L-glutamine, 1 mM sodium pyruvate and 1X non-essential amino-acids (all purchased from Sigma-Aldrich). After cell count, splenocytes were cultured at a density of 1 × 10 6 cells/mL at 37 °C under 5% CO 2 for 48 h with or without CRH 1 or 100 nM (to mimic mild and high stress levels, respectively), (Bachem, Interchim, Montluçon, France). After incubation with CRH, splenic murine B cells were purified using the MagCellect TM Mouse B cell isolation kit according to manufacturer's instructions (Stemcell Technologies, Grenoble, France) to perform western blotting experiments. B cell purity was checked by flow cytometry after co-staining with anti CD19-PC7 and anti CD3-APC antibodies. The degree of purity was comprised between 92 and 97%. For RNA extraction followed by RT-PCR and for protein array experiments, B cells were directly purified from mice spleen with the MagCellect TM Mouse B cell isolation kit.
RT-PCR.
Total RNA was extracted from purified splenic B cells using Trizol reagent (Invitrogen, Thermo Purification and sequencing of amplicons. CRHR1 and CRHR2 PCR products were separated on an agarose gel stained with ethidium bromide. Amplicons were excised and purified using the lysis NucleoSpin Gel and PCR clean-up kit (Macherey-Nagel, Hoerdt, France). Then, purified PCR products were sent for sequencing to GATC Biotech (Mulhouse, France).
Protein preparation. After CRH stimulation, purified splenic B cells were suspended in NP-40 cell lysis buffer (150 mM sodium chloride, 50 mM Tris pH 8.0, 1% NP-40) and protein extraction was performed in the presence of a complete protease and phosphatase inhibitor mixture (Roche Molecular Biochemicals, Mannheim, Germany). Protein concentrations were determined using a Coomassie protein assay (Bio-Rad, Ivry sur Seine, France).
Scientific REPORtS | (2018) 8:143 | DOI:10.1038/s41598-017-18401-y Protein Array. Purified B cells from a pool of three spleens were incubated with or without 100 nM of CRH during 15, 30 or 60 min. Proteins were then extracted and CRH-activated signaling pathways were analyzed using the "proteome profiler human phospho-MAPK array" kit according to manufacturer's instructions (R&D Systems, Lille, France). 80% of the antibodies present on this array can detect murine proteins according to the manufacturer. Signals were visualized by chemiluminescence (FX7, Vilbert-Lourmat) and analyzed by densitometry (ImageJ ® ).
Immunization of mice and immunostaining of spleen sections. Mice were immunized intraperitoneally with 10 µg of lipopolysaccharide (LPS, Sigma-Aldrich), 100 µg of BSA or 50 µg of 4-hydroxy-3-nitrophenylacetyl hapten conjugated to keyhole limpet hemocyanin (NP-KLH, LGC BioSearch Technologies, Steinach, Germany). All antigens were mixed with an equal volume of Freund's adjuvant (final volume of 100 µL; Sigma-Aldrich). Ten days later, animals were euthanized. After fixation by whole body perfusion, spleens were incubated for 24 h in 4% paraformaldehyde followed by 12 h in PBS containing 30% of sucrose. Frozen spleens were cut using a cryostat to obtain 14 μm slice sections that were fixed on polylysine glass slides (Thermo Scientific Fischer). Sections were permeabilized, blocked as described above and stained overnight at 4 °C with anti-CD19 (1/100, eBioscience) and anti-CRHR antibodies (1/50, BIOSS USA, Interchim) diluted in PBS-BSA 0.5% containing 0.1% of Triton X-100 and 5% of goat serum. After washing, secondary antibodies were applied as described above except that incubation lasted 2 h. Hoechst staining was also performed as described above. Signals were detected using an epifluorescence microscope Eclipse 80i (Objective 20x; Nikon). Statistical analysis. Homogeneity of variances and normality of distribution were controlled with Levene and Kolmogorov-Smirnov tests, respectively. The level of statistical significance was set at p ≤ 0.05. Intergroup differences were estimated by analysis of variances with one-way ANOVA tests followed by pairwise comparisons with Fisher's PLSD tests. All statistical tests were performed using the Statview ® software (SAS institute Inc, Cary, USA). | 4,786.6 | 2018-01-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
Application of Augmented Reality Technology to Access Facial Sunscreen Product Label Information
The goals of this study were to 1) examine the behaviors and demands of users who purchased face sunscreens online, and 2) build augmented reality technology for use in the present, 3) to determine the acceptance and satisfactory after using augmented reality technology to provide facial sunscreen data. This study included a sample group of 30 adults who purchased sunscreen for their faces. The results revealed that it had the highest average system use. Users are concerned with the clarity of the content presented on the screen, as well as the convenience and accessibility with which they may utilize the system. Because of the small size of the product labels on the package, they are difficult to read. Furthermore, consumers have expressed an interest in using the application. Because the use of AR technology to present information in a format that gives users a new experience and feel that they can easily access it. However, there may be constraints in terms of processing time and the display of data details while the data is processed. As a result, the system's performance is the worst of the bunch.
Introduction
More and more individuals are turning their attention to the use of sunscreen these days because it is greatly beneficial. Sunlight does not harm the skin until it burns or develops dark patches hence, this process decreases the risk of skin cancer. Sunscreen ingredients protect the skin in a variety of ways, including by absorbing UV radiation and shielding the skin's layers. that has a rich color or reflects UV rays It will have detrimental consequences on the skin if it receives too much sunlight, such as red skin burns, freckles, sunburn, and aging skin from the sun. As a result, wearing tight clothing or carrying an umbrella may not be enough [17]. To avoid excessive sun exposure, use sunscreen when participating in the non-sun segment. By using a sunscreen with good characteristics, you can hide your protection. Sun protection of at least SPF 15 (SPF15) is recommended.
In case long stay under the sunlight is unavoidable, specific sun cream with higher SPF value is crucially recommended. A proper sunscreen will protect your skin from both UVA and UVB radiation while also preventing inflammation. Protects skin from the sun's burning feeling and aids in the maintenance of a natural skin tone. The greater the SPF, the more visible it is. UV rays are also more effectively blocked by sunscreens [12]. Another key factor to consider before making a purchase is whether the product will suit the user's skin type. One should look over the ingredients to see if there are any compounds that should be avoided and determine whether the user is allergic to/ irritated by certain substances, such as alcohol, perfume, and parabens [10], as well as what type of skin the user has, such as normal, oily, dry, or combo skin, and then choose a product based on the condition for the best results on the user's skin [4].
The product label information on the container is written in a small font that is difficult to see, this may cause customers to select a sunscreen that is inappropriate for their skin type [3], [6]. As a result, the researchers devised a strategy to make it easier for consumers to obtain information on packaging. Using augmented reality technology, a mobile application may access sunscreen product label information for the face [2], [5], [15], [8].
Objective
a. To research consumer behavior and demand for augmented reality technology for obtaining facial sunscreen product label information using a mobile application. b. To develop augmented reality technology to present facial sunscreen information. c. To evaluate the satisfaction study on the application of augmented reality technology to present sunscreen data for the face.
Research conceptual framework
User Response aspect System usage aspect
System operation
Application of augmented reality technology to access facial sunscreen product label information Fig. 1. The research conceptual framework model for application of augmented reality technology to access facial sunscreen product label information Figure 1 presents the conceptual framework of this research, to show the procedure for using augmented reality technology to access facial sunscreen product label information.
Research design
To develop mobile applications, this study used the SDLC (System Development Life Cycle) information system development cycle based on the Water Fall model and Augmented Reality [1], [7], [9], [14], [15]. This study used a sample of 30 buyers who used augmented reality technology to obtain information on facial sunscreen labels via a smartphone application.
a. Data gathering Documents, papers, interviews, and survey reports were used to research the issues related to the sunscreen category, problems and conditions with the skin labels and packaging for sunscreen and exploring the behavior and demands for augmented reality technologies through the use of Augmented Reality. A survey of 30 face sunscreen purchasers' behaviors and demands for augmented reality technology to get facial sunscreen product label information via a mobile application revealed that: In the case of facial sunscreens, 96.7 percent are concerned about the components and would like to use augmented reality technology to get face sunscreen label information via a user's mobile application [2], [3].
b. Design and development
The researcher has concluded the work process of the mobile application in the following system flow diagram below. The procedure for using augmented reality technology to display facial sunscreen information is shown in Figure 2. It is made up of user and administrator workflows that are arranged in a logical manner. This enables users to scan a QR code. The system uses augmented reality technology to deliver information in a user-friendly manner. Shoppers can also use the button to go straight to the store's page and place an order.
All screens and application components, including graphics, were created by the researcher with the following components; images, text, colors, buttons, a menu bar, and an interface for data input and output: The research has been done on the sample group of 30 people.
d. Evaluation
The researchers assessed the satisfaction study on applying augmented reality technology to present sunscreen data for face using an assessment form and a questionnaire.
The satisfaction score is a Likert scale with 5 levels, ranging from 1 to 5. The meaning of each score is listed below so that the analysis of customers satisfaction can be performed [16].
Findings
The researcher brought the developed mobile application to a trial group of 30 people and asked them to answer the questionnaire immediately after using the application. which can be summarized as shown in the table. In terms of system consumption, it has the highest average, as seen in the table above. Because people respect the system's ease of use and accessibility, as well as the clarity of the content presented on the screen [13]. As a result, product labels are small and difficult to read on packaging. Furthermore, consumers have expressed an interest in using the application. While in the user response aspect, the mean is in the second order because the use of AR technology to present information in a format that gives users a new experience and feel that they can easily access it. However, there may be limitations in terms of processing time and the display of details of the data waiting for some time. As a result, the system's performance is the least average.
Discussion
Even now, customers of facial sunscreens can get all of their information from the Internet. However, sunscreen for the face is a product that might cause allergic responses and other adverse effects when used. As a result, the group of purchasers must still go shopping and try out the product before making a purchase. As a result, the usage of mobile applications to display product label information using augmented reality technology has been approved because users still place a premium on readable font sizes and the ease with which they may obtain information via mobile devices. However, the extent of display for AR technology may be limited because it can't be read if it's not in the marker's region. Developing augmented reality technology for access to sunscreen product label information via mobile applications this time. Consider the ease of use and complete information on the product label. Creating a unique way of presenting information on product labels in order to market the product. It's also an easy and quick way to get product label information [18].
Conclusions
Most face sunscreen users prefer to purchase from www.facesunscreen.com which leads to the the goal of this study that is to employ augmented reality technology to access information on face sunscreen product labels via a mobile application. A place where things are sold so that you can try them out and get additional information before buying them, as well as read the product label. However, the label's letters are small, making it difficult to read. As a result, shoppers will be able to access information on product labels more readily and conveniently thanks to the usage of AR technology.
After experimenting with a sample group, the findings of using augmented reality technology to access information on product labels revealed that the system utilization aspect had the highest average because people respect the system's ease of use and accessibility, as well as the clarity of the content presented on the screen. Consequently, product labels are small and difficult to read on packaging. Furthermore, consumers have expressed an interest in using the application.
While in the user response aspect, the mean is in the second order because there is the use of AR technology to present information in a format that gives users a new experience and feel that they can easily access it. However, there may be limitations in terms of processing time and the display of details of the data waiting for some time. As a result, the system's performance is the least average.
AR remains a technology that gives shoppers a fresh impression and experience, making it easier for them to obtain label information and make purchasing decisions [11]. | 2,359 | 2022-01-28T00:00:00.000 | [
"Computer Science"
] |
Logging Data Completion Based on an MC-GAN-BiLSTM Model
Due to environmental interference and operational errors, problems such as incomplete and random missing logging data have occurred during the geophysical logging data collection process. Since it is difficult to establish a geophysical model based on logging data and geological information, the data complementation effect of conventional methods is not very satisfactory. In this paper, we propose an MC-GAN-BiLSTM model based on spatiotemporal sequence prediction. In the model, we adopt a generative adversarial network (GAN) as a network framework, and a long short-term memory (LSTM) neural network and a bi-directional long short-term memory (Bi-LSTM) as the basic modules. We use the LSTM instead of a fully-connected layer in the GAN to extract the potential information in the logging data depth domain. We complete the logging data missing values through an encoding-decoding structure that includes the Bi-LSTM. In addition, the generator module also uses multiscale convolution to fully extract the logging data features. We use logging data random missing values and consecutive missing values to simulate a field data acquisition environment and threshold control to simulate a laboratory processing environment for experiments. The experimental results show that the coefficient of determination (R2) of the GAN-LSTM model reaches 0.906 when 30% of random logging data are missing and 0.851 when 30% of consecutive logging data are missing. The effect of the model proposed in this paper is significantly higher than the commonly used random forest (RF), sequence to sequence (seq2seq) and generative adversarial interpolation network (GAIN) models.
I. INTRODUCTION
High-quality and high-integrity logging data are the prerequisite and basis for geological work such as lithology identification. In the process of geophysical logging data collection, due to the objective natural environment and subjective human operations, problems such as incomplete logging data collection, data omission, and random loss are often caused. To complement the missing logging data, conventional methods of analysis and comparison can be used. These methods use existing complete logging curve data to establish a geophysical model of logging data and stratum lithology based on geological and rock geophysical properties to directly fill in the missing data.
The associate editor coordinating the review of this manuscript and approving it for publication was Nazar Zaki .
These methods have strong theoretical basis and strong pertinence, and are widely used in actual production [1]- [3]. However, the theoretical basis of these methods is based on an extreme simplification of an underground geological environment, which is very hypothetical. When staff are faced with different geological environments, the effect is often unsatisfactory. Additionally, the choice of model parameters is highly subjective. This is manifested in the fact that when facing the same problem, different workers using the same model may have different understandings of the geological conditions and choose different parameters, leading to different complement results.
Another effective method is based on statistics. This type of method uses the principles of statistics, such as regression algorithms, to find the internal relationships and overall characteristics of logging data, and complete a prediction of the missing logging data. For example, Fan et al.. used a ridge regression method (RRM) for logging acoustic curves and achieved high accuracy [4]. However, the relationship between logging data and formation lithology is not a simple linear mapping, but a complex nonlinear relationship. In recent years, many scholars have tried to use data-driven machine learning methods to characterize the relationship between logging data and formation lithology. Cheng et al.. used a combination of principal component analysis and support vectors to establish a nonlinear relationship between logging curves and reservoir permeability [5]. Shi et al. successfully used multiple regression analysis (MRA), a backpropagation neural network (BPNN), and a support vector machine (SVM) to model earth science data [6]. Ibrahim et al.. used an SVM and random forest (RF) to reconstruct a gamma logging curve. The experimental results show that both the SVM and RF-produced models were able to predict the gamma ray (GR) log with high accuracies, and the SVM predicted the natural GR log with R 2 and AAPE values of 0.98, and 1.42%, respectively [7]. Garcia A P et al. strengthened the rock attribute evaluation of the missing data wells, and reconstructed the missing logging data through machine learning and supervised neural networks. The reconstructed well logs agreed with the actual measurements with relative errors of less than 10% [8].
In recent years, deep learning has been widely used in various fields and has achieved better results than traditional machine learning methods [9]- [11]. Deep learning can learn extremely complex multi-layer neural networks and build multiple hidden layers between the input and output layers to extract high-dimensional features of the data for complex and nonlinear modeling [12]- [15]. Deep learning methods are very good at constructing complex nonlinear relationships, and they are used by many scholars in the completion of logging data. For example, Mo etal. used a genetic neural network (GNN), which is better than a traditional back propagation neural network (BPNN), to reconstruct a logging curve [16]. Zhang et al.. proposed a cascade system based on long short-term memory (LSTM) neural networks. Testing using real well log data shows that the results from the LSTM neural network are of higher accuracy than those of a traditional fully-connected neural network (FCNN) [17]. Rolon et al.. uses a generalized regression neural network (GRNN) to generate artificial logging curves, and, compared with the results of an MRA, the network has higher accuracy [18]. Alizedel et al. uses artificial neural network and cluster analysis technology (CA) to successfully establish a model between logging data and organic carbon (TOC) [19].
Logging data not only has horizontal spatial characteristics but also has vertical time series characteristics [20]. Although most deep learning models can learn the distribution of the original data and characterize the spatial characteristics of the data when they are used for logging data completion, it is easy to ignore the longitudinal correlation and change trend of the logging data [21]- [23]. The result is simple, isolated, and lacks the geological significance of the continuity of the underground horizon. A recurrent neural network (RNN) uses previous data and information to comprehensively process current tasks [24]. As a special RNN, an LSTM network can not only process the prior and subsequent data information but also avoid the problem of gradient explosion or disappearance as the sequence increases by using a gated recurrent unit [25], [26]. This feature is very suitable for processing long-sequence logging data that needs to be compared with prior and subsequent information [27]- [29]. A bi-directional long short-term memory (Bi-LSTM) combines a forward LSTM network with a backward LSTM network, making it more suitable for long-sequence data. Khan et al. use multilayer bi-directional long short-term memory (MBD-LSTM) to extract features of Mitochondrial proteins of Plasmodium falciparum. The identification rate of Plasmodium mitochondrial proteins using this model is as high as 99.5% [30]. A generative adversarial network (GAN) provides us with a tolerant framework. Theoretically, all the differentiable functions can construct the generating and discriminating modules of a GAN framework. This paper proposes a GAN-LSTM model for the completion of logging data from the perspective of spatial and temporal characteristics. In the generator, we use a codec structure to obtain the low-dimensional representative features of the logging data [31]. In the encoder of the generator, we abandon the fully-connected (FC) layer and replace it with Bi-LSTM to strengthen the potential connection between the missing data and the logging data of the upper and lower horizon. We trained and tested the model using the data of 170 wells. The performance of the model proposed in this article is better than a RF, sequence to sequence (Seq2seq) model, and generative adversarial interpolation network (GAIN) in random missing data and consecutive missing data completion experiments.
II. PRINCIPLE OF MODELS A. MULTISCALE CONVOLUTION MODULE
When we use the deep learning method to complete the logging data, the missing rate of a certain logging curve data is too large, and it can be directly discarded. If we forcibly complete, the obtained complete data features may introduce noise data, which will affect the final result and subsequent work.
In the GAN generator, we use a multiscale convolution module to extract the spatial features of the logging data. In the convolution operation, different convolution kernels have different receptive fields, and the features of the extracted data are also different. The large-scale convolution kernel is suitable for extracting global information, and the small-scale convolution kernel is suitable for extracting local information. The features of the logging data extracted in this way are more comprehensive. The multiscale convolution structure is shown in Fig. 1. VOLUME 10, 2022 FIGURE 1. Two types of cyclic neural network calculation diagrams. The multiscale convolution adopted by MC-GAN-BiLSTM has three channels. The first convolution of each channel uses a 1 × 1 convolution kernel. The second channel adds a 3 × 3 convolution kernel, and the third channel adds two 3 × 3 convolution kernels.
B. BiLSTM MODULE
An RNN mainly processes sequence-based data, traversing all recurrent units in a recursive manner along the direction of the sequence. An RNN is used to process data with temporal characteristics, such as natural speech sequence processing, speech image processing, and machine translation. As a neural network with short-term memory, an RNN has two major characteristics. First, an RNN can not only receive the current self-information but also the information from the previous time point. Second, an RNN uses the same parameters at all times to realize parameter sharing in the time dimension, as presented in Fig. 2(a). At time t, assuming that the network input is x t , the state of current hidden layer h t is not only related to input x t at the current time but also related to the hidden state h t−1 at the previous time. They are calculated using Eq. (1) and Eq. (2).
where f is an activation function, o t is the output value of the current layer, U, W , and V are weight coefficients, and b and c are bias terms. An RNN can only predict the output of the next time point based on the timing information of the previous time point. However, the predicted value to be output may depend on the entire input sequence in the application of logging data completion. It requires that the current output is not only related to the previous state but also has some relationship with the future state. A bi-directional recurrent neural network (BRNN) can satisfy such needs. In the design of a BRNN, at time t, the input will pass the data in two directions to a hidden layer neuron. The neuron will calculate the deep-level feature information from the forward and reverse directions. The calculated result represents the ''past'' and ''future'' information, and the BRNN combines them to yield an output result, as presented in Fig. 2(b). The values are calculated using Eq. (3), Eq. (4) and Eq. (5).
where f is an activation function, h t is the current forward hidden layer state, g t is the current reverse hidden layer state, o t is the current layer output value, U 1 , W 1 , and V 1 are the weight coefficients of the forward cyclic network, U 2 , W 2 , and V 2 are the weight coefficients of the reverse cyclic network, and b 1 , b 2 , and c are the corresponding bias terms. From the perspective of ''time'' order, a BRNN has two states: one is passed from the beginning of the sequence back with time; the other is passed from the end of the sequence forward with time. As a result, the output can benefit from past data-related features as well as future datarelated features. In general, a BRNN's structure allows the output unit to have the data feature constraints of the past and future sequences at the same time, and it can also maintain the sensitivity of the current data-related features. Additionally, when a BRNN evaluates the state of the current input data, it is not necessary to expand the scope of input in order to increase the overall control. However, whether it is an RNN or BRNN, when long-term tasks are involved, the gradient decreases exponentially with the corresponding calculation, or even disappears. This ultimately causes the weight to update slowly, and the model performance decreases. In practical applications, a gating system is usually used to solve the problem of gradient disappearance in an RNN. LSTM is the most typical gating system, and its exploded view is presented in Fig. 2. LSTM controls the flow of the history and current information through a forget gate, input gate, output gate and cell state. State C is like dealing with a ''fast car'' on a highspeed channel, which only runs in the channel and rarely interacts with the hidden unit of the recurrent neural network. Then it is relatively easy for state C to maintain smooth fluctuations in a long-time scale, as shown in Fig. 3(a). State C passes through the discriminative control of the door, and transmits the original information state h t to itself by adding or removing, as shown in Figure 4-6(b) The forget gate controls the degree of forgetting previous cell states, as shown in Fig. 3(a)- Fig. 3(f). The first gate is called the forget gate, which discriminates the information that the system should discard from the original cell state, see Fig. 3(c) and Eqn. (6).
For the status update, see Figure 3(d) and Figure 3(e). The process of updating from C t−1 to C t determines what new content is delivered to the state. For Figure 3(d), the input is still the accumulation state h t−1 of the hidden unit of the Then, the output can benefit from past data-related features as well as data-related features from the future. recurrent neural network and the current data sequence input x t is then divided into two parts of calculation. The first part, similar to the previous forget gate, uses the sigmoid neural network layer to determine the updated content, and the output is i t ; the second part uses the tanh function layer to create a new candidate value vectorC t . The operating calculations of i t andC t are as follows: Finally, the LSTM merges the product of the updated content i t of the current input information and the candidate valueC t . The specific calculation diagram is shown in Figure 3(e), and the calculation is as follows: whereC t is the candidate information of the current depth. It is multiplied by the input gate state i t to determine which new information is updated to the cell state. The output gate determines the information that is output from the cell state at the current depth, see Fig. 3(f) and Eqn. (10), Eqn. (11).
where x t is the input to the current LSTM unit, h t is the hidden state of the output of the current LSTM unit, σ is a sigmoid function, and is the corresponding element multiplication.
In general, ordinary neurons can only extract data features from the past and current inputs to evaluate the current state. In the process of logging data completion, many current predictions need to rely on the input sequence of the overall logging signal data. For example, when logging data are consecutively missing due to the different logging signal characteristics of different formations, the logging data characteristics of the formation boundary not only depend on the logging data of the previous layer but also the analysis and comparison of the logging data from the next layer or even the overall logging data. If the current data changes are not obvious or abnormal (including unreasonable data processing in the previous period, such as data distortion and other reasons), it may be necessary to expand the data area forward (into the future) or backward (into the past) for identification. An ordinary RNN cannot do this, which brings us to a BRNN. For the problem of vanishing or exploding gradients, LSTM can solve this problem. Therefore, we use Bi-LSTM based on LSTM as the basic module to be introduced into the entire logging data completion framework.
C. NETWORK FRAMEWORK GAN AND GAIN
A GAN is a deep learning architecture based on game theory. Since end-to-end training can be performed in the neural network, it can learn the potential distribution of training samples. In the case of fewer artificial priors, it can use random noise to generate ''real'' samples, and finally generate usable data. Compared with other generation algorithms, a GAN has obvious advantages. It can generate samples in parallel and can be flexibly combined with other networks, such as a CNN or RNN.
The structure of a GAN is composed of two parts: a generator and discriminator. The generator is used to synthesize simulated samples that are almost the same as real samples. The discriminator is used to determine whether a sample comes from the real world or was generated by simulation. The function of the generator is to generate samples that ''mix the spurious with the genuine'', which makes it difficult for the discriminator to distinguish. The function of the discriminator is to distinguish between synthetic samples and real samples. The purpose of the generator and the discriminator is opposite and there is a contradictory relationship. By putting two mutuallyindependent models together for synchronous training, the simulated samples generated by the generator will be more realistic, and the discriminator will make more accurate judgments of the samples.
Specifically, the generator and discriminator are both functions in nature. The generator is responsible for capturing the distribution of sample data and mapping it to a new data space with a noise vector z that obeys a uniform or Gaussian distribution. Finally, it tries to generate simulated samples G(z) that obey the distribution of real data G(z). The discriminator takes simulated sample G(z) or real sample x as input, and the output is a scalar, representing the probability that the input sample is a real sample. The optimization goals of a GAN are as follows: (12) where x is the real sample, z is random noise, p data is the real data distribution, and p z is the noise data distribution. The GAN optimization problem is a maximum and minimum problem. First, we need to define a priori input noise distribution p z (z). During the model learning process, the discriminator should try to improve the prediction accuracy of the discriminant generated samples, and the generator should try to reduce the optimal performance of the discriminator. When the optimal performance of the discriminator reaches the lowest, the generator distribution p g is closest to the real data distribution p data at this moment. The process and structure of a GAN is presented in Fig. 4: The GAIN network was proposed by Yoon etal. [32] in 2018. A GAIN is similar to the GAN model, and it is also composed of a generator and a discriminator. The difference from a GAN is that a GAIN has no special requirements for input data. A GAIN does not need complete data as the input to the model, and directly uses missing data that needs to be completed, as shown in Fig. 5.
Generally, the input data to a GAN is random noise, while the input to a GAIN is data that needs to be completed. The discriminator of a GAN model mainly discriminates the authenticity of a sample, while the discriminator of a GAIN model discriminates the authenticity of each element in the sample. Specifically, the missing data are converted into three different forms as input data. First, the GAIN fills in the missing data with 0, and combines the original data to form a Data matrix. Second, the GAIN fills in the original data with 0, and fills in the missing data with 1814 VOLUME 10, 2022 random data to generate a Random matrix. Finally, the GAIN replaces the original data with 1 and the missing data with 0 to generate a Mask matrix that records the location of the missing data. The above three matrices are used as inputs to the generator. The output of the generator is the Imputed Matrix of the GAIN model. Additionally, the Mask matrix is transformed into a Hint matrix through a Hint generator. The Hint matrix combines the Imputed Matrix output from the generator as the input to the discriminator. The output of the discriminator is also a matrix, and the value of each element of the matrix represents the probability of missing data. The model calculates the error between the input of the generator and the initial completion matrix, which is also called reconstruction error. The reconstruction error, the input to the discriminator and the Mask matrix are used to calculate the cross entropy, and this cross entropy is used as the loss function of the model. Finally, the generator and the discriminator are updated iteratively by backpropagation until the network converges. At this time, the performance of the generator and the discriminator has reached a relatively optimal level. The generator can complete the missing data, and the result of the completion is close to the real data.
D. NETWORK FRAMEWORK MC-GAN-BiLSTM
In a traditional GAN and GAIN, the generator and discriminator use an FCNN. The generator of the MC-GAN-BiLSTM is realized by a self-encoder which consists of an encoder and a decoder. Before input, we use multiscale convolution to fully extract the spatial features of the logging data. In addition, logging data are the response of underground lithology. The formation and lithology of an oil field are related to the depositional cycle. The depositional cycle is caused by periodic changes in the global sea level. Specifically, a depositional cycle affects the deposition and the deposition conditions that allows them to be repeated in the same order and deposited into a sequence. Therefore, a depositional cycle has the characteristics of alternating lithology. As a response to lithology, logging data also has a similar trend. This trend leads to the data that needs to be completed not only being related to the overall characteristics of the input sequence but also related to the top (past information) and bottom (future information) logging data of the missing data. From a geological point of view, missing lithology data are not only related to the entire sedimentary environment but also related to the upper and lower contact lithology of the strata. Therefore, we used a Bi-LSTM network that can extract past information, future information, and overall information as the basic module of the model. The encoder of the MC-GAN-BiLSTM adopts a Bi-LSTM model, which can establish the potential connection between a missing value and the logging value of the upper and lower formations. It compresses the input missing logging data into a low-dimensional intermediate vector z. The decoder uses the LSTM model to obtain the generated complete logging data by decoding z. The discriminator is composed of an LSTM cyclic layer and a fully-connected layer, and its input includes two types: generated complete logging data and real complete logging data. Through the loop layer and the fullyconnected layer, the discriminator maps the input to a onedimensional vector, and obtains the probability value that the input data are ''true''. When the generator and discriminator reach a balanced state, the model training is completed. The network structure of the MC-GAN-BiLSTM logging curve completion model is presented in Fig. 6.
The methodology implemented in this research is shown in Fig. 7. First, due to the different measurement methods, the dimensions of each type of logging data are also different. Unifying the dimensions of the logging data facilitates the joint input of data. Second, according to the content of the experiment, we generated two sets of 5 different test sets, which represented different missing rates. Finally, we use an RF, sequence to sequence Seq2seq, GAIN and the MC-GAN-BiLSTM to analyze and compare the results of each experiment. Through 10 different missing rate experiments, we determined the maximum missing rate of each model when dealing with different types of missing data.
III. MC-GAN-BiLSTM LOGGING DATA COMPLETION PROCESS
The problem solved by a logging data complement method is to complement the logging data of each well in the data set, that is, to replace all missing values with reasonable logging values, as presented in Fig. 5 of the estimated mask matrix.
When the MC-GAN-BiLSTM method complements missing logging data, the first step is to determine the input and output variables. First, we define the missing log data and missing identification matrix. (15) where x t is the logging value at sampling point t, x d t is the d th logging value at sampling point t, m t is the missing identifier at sampling point t, D is the dimension of the data, and T is the length of the data. Therefore, the corresponding missing VOLUME 10, 2022 identification is: where 1 means data exists and none means data are missing. So, the generated complete logging data and real and complete logging data can be expressed as: After determining the input and output variables, we use the generator to generate simulation data from them. The sum of the missing data and the noise vector z is used as the generator input. Adding noise makes the input data and the original data have a certain degree of difference to reduce the over-fitting phenomenon in the process of generating data. The input to the generator is mapped to a low-dimensional semantic encoding vector c through the encoder. The task of the decoder is to reconstruct vector c into complete logging data X and obtain a complete logging sequence.
The loss function of the generator includes two parts: the discrimination loss that makes the discriminator misjudgment and the reconstruction loss that reconstructs the original data. The generator loss is presented in Eq. (19): +λ M ((G(X + z) − X) 2 (19) where λ is a coefficient of reconstruction loss and · 2 is the L2 norm.
We use the discriminator to distinguish the authenticity of the data. The real samples Y and the generated samples X are input into the discriminator. The discriminator is composed of an LSTM cyclic layer and a fully-connected layer. The cyclic layer is responsible for processing the real or generated logging data and obtaining the historical memory vector from the sample data. The fully-connected layer maps the historical memory vector into a one-dimensional output and uses a Sigmoid function to calculate the probability that the input data are ''true''. The training goal of the discriminator is to recognize the real samples as ''true'' and the generated samples as ''false'' as much as possible. The discriminator loss function is: Finally, we optimize the objective function to obtain complementary data through iteration. The generator and the discriminator have opposite goals and oppose each other. In the optimization process of iterative training, the overall performance of the model continues to improve until it reaches a balance. The distribution of the simulated sample X generated at this time is sufficiently close to the distribution of the real sample Y . We replace missing values with generated values to obtain the completed data Y .
IV. DATA PREPARATION, EVALUATION INDEX, AND PARAMETER ADJUSTMENT A. DATA PREPARATION
The study area is the Jing'an Oilfield Dalugou II, located in the north central Shanbei Slope Basin, and the overall monoclinic structure is west-inclined at an angle less than 1 • . The structure in the area is relatively simple, with only a few faults and folds developed, and the distribution of wells is shown in Fig. 8. During the logging phase in the study area, four types of logging curves of spontaneous potential (SP), natural gamma ray (GR), acoustic time difference (AC) and true resistivity (RT) were recorded. There is a complex correlation between these four curves. Physical parameters such as porosity (POR), permeability (PERM) and water saturation (SW) can also be expressed linearly by these logging data, as in Fig. 9.
In the experiment, we build a model for these 7 types of data to complement the 18 wells in the area that had missing logging curves. We used the missing rate to represent the proportion of missing values in the input data, which is expressed by Eq. (24).
where m d t is the missing identifier of the d th type logging value at sampling point t, D is the dimension of the data, that is the type of curve, and T is the length of the data.
According to our statistics, 18 wells in the study area had missing data, and the missing rate was approximately 5-40%. We used the remaining 177 wells with complete logging curves as standard data to establish a complementary model. Among them, 150 wells were used as the training set, and 30% of the data was randomly discarded. Under the guidance of the real data, the training model made the missing values approach the known values. Twenty-seven wells were used as the test set, and the discarding ratios were set to 10%, 20%, 30%, 40%, and 50%. According to the different discarding ratios, two types of random deletion and consecutive deletion in the actual logging data were simulated.
B. PERFORMANCE EVALUATION INDEX
The evaluation index of the model performance adopts the root mean square error (RMSE) and coefficient of determination (R 2 ). The RMSE reflects the difference between the complement value and the true value, which is calculated using Eq. (23). The R 2 represents the degree of fit between the complemented log curve and the original curve, which is calculated in Eq. (24). To prevent the mutation data from adversely affecting the statistics of the completion results, we also adopted the Mean Absolute Error (MAE), which is calculated in Eq. (25) [33]. The MAE is the deviation between the complement value and the true measurement value, and then the absolute value is taken and averaged. The deviation is the absolute value, and there will be no positive and negative offset, Therefore, the MAE can better reflect the actual situation of the predicted value error [34]. where y i is the real log value of the i th sampling point in the missing section, y i is the generated log value of the i th sampling point in the missing section, and N is the total number of sampling points in the missing section.
C. PARAMETER ADJUSTMENT
During the training process, the performance of the model was optimized by adjusting various parameters. In this process, the RMSE was used as the evaluation index.
According to the comparison of the parameters listed in Fig. 10, when the learning rate is 0.001 and the number of hidden layer nodes is 16, the RMSE is the lowest and the model has the best performance. So, we set the number of hidden layer nodes of the LSTM layer and the learning rate to 16 and 0.001, respectively, and then adjusted the parameter λ of the model. The completion effect under different λ values is presented in Fig. 10(c). When the parameter is close to 1 or 0, the completion effect is poor. This implies that the two parts of the generator's loss function are not functioning as they should. When parameter λ is set to 0.3, the RMSE is the lowest and the completion effect is the best. When the number of iterations is 400, the RMSE basically stabilizes, as seen in Fig. 10(d). So, we set the maximum number of iterations to 400. In summary, the learning rate, the number of hidden nodes, the λ parameter, and the number of iterations are set to 0.001, 16, 0.3, and 400, respectively. While optimizing the model parameters, we also adjusted the parameters of the GAIN to be optimized. The GAIN uses a traditional GAN framework, and both the generator and discriminator use a fully-connected neural network. In the following experiments, we use the GAIN model as a reference to compare the effects of the model proposed in this article.
A. RANDOM MISSING DATA COMPLETION EXPERIMENT
In this section, we use conventional RF and Seq2seq models, a traditional GAIN, and the MC-GAN-BiLSTM to compare their accuracies. In the study area, the lack of logging data is mainly concentrated on the AC measurement. We combine the four types of logging data, GR, SP, RT and AC, and the three physical parameters, SO, POR and PERM, as the data input to the model. The advantage is that the joint multi-dimensional data have better constraints on the model, and can increase the completion accuracy of the model after training. For example, when the AC data in the study area is severely missing, it may affect the model to extract the characteristics of the AC data. However, when we combine other logging data as input, we can dilute the impact of the serious lack. In other words, when a certain logging curve data are seriously missing, we can rely on the other 6 logging data to complement the seriously missing data. Compared with the input of a single logging data type, the joint input increases the dimensionality of the input data from 1 to 7 dimensions, which increases the difficulty and cost of calculation. However, we believe that the processing of 7-D data is within an acceptable range compared with the dimensions of image processing.
We focus on the AC measurements with a large range of missing data in the study area. In this experiment, 27 wells were selected as the test set, and the amount of deleted data was controlled at 10%, 20%, 30%, 40%, and 50% by randomly deleting measurement points. This can simulate the real situation of field logging data loss. The performance results of the four models on the test set are shown in Table 1 and Fig. 11. We compare the performance of the four models through three evaluation indicators: the MAE, RMSE, and R 2 . The smaller the value of the RMSE and MAE, the more accurate the completion value, and the better the performance of the model. The larger the R 2 value, the better the fit between the completed data and the real data. In general, the completion results of the MC-GAN-BiLSTM are slightly better than the other three models. The average values of the MAE and RMSE are 0.189 and 0.295, respectively, which is lower than the 0.36 and 0.417, 0.313 and 0.35, and 0.241 and 0.331 of the RF, Seq2seq and GAIN. The R 2 average value is 0.911, which is higher than the 0.813, 0.862, and 0.875 of the RF, Seq2seq and GAIN. Specifically, when the random missing rate is less than 40%, the R 2 completion accuracy is not much different. The three evaluation indicators are all within an acceptable range except for the RF. When the random missing rate exceeds 40%, the accuracy of the RF, Seq2seq and GAIN decreases. The R 2 of the RF is less than 0.85. A large proportion of random data missing has caused similar effects to consecutive missing data, which brings difficulty to the model. For example, the RF ignores the time series characteristics of the logging data, and the completed data are only related to other data values at that location. The RF lost the correlation of logging data from the upper and lower formations. When large areas of data are missing, the accuracy of completion decreases. The MC-GAN-BiLSTM proposed in this paper can effectively extract the spatial and temporal characteristics of the logging data. When the missing rate increases, the MC-GAN-BiLSTM can use the features of the intact data to ensure the accuracy of completion. The specific completion effects of each model are shown in Fig. 12.
Through the above experiments, we can count the missing data in the model applications. When there is less randomly missing data in the study area, we prefer a conventional completion model. This is because the model is simple, and the result will be more efficient within an acceptable range. When the missing rate is large, the MC-GAN-BiLSTM is used to ensure the accuracy of the completion
B. RANDOM MISSING DATA COMPLETION EXPERIMENT
We used the same four models to complete consecutive missing data. Due to the consecutive lack of data, the models lack the context of the data during the process of completion, which makes the experiment in this section more challenging than the random missing data completion experiment in the previous section. The training set also uses 150 wells with complete data, and each well also contains 7 types of measurement data. The test set selects 27 wells to control the VOLUME 10, 2022 consecutive missing rate in the form of deletion at 10%, 20%, 30%, 40%, and 50%, which forms five test sets. We gradually increase the missing rate and analyze the accuracy of the model to determine the maximum missing rate for model completion.
We also train the model according to the joint input of 7 types of data. The output results of the 27 wells in the test set are shown in Table 2. Compared with the random data missing completion experiments, the accuracy of the consecutive data missing completion experiments decreased. For example, in the GAN-LSMT's random missing data completion experiment with a missing rate of 30%, the MAE is 0.3, the RMSE is 0.4, and the R 2 is 0.85. In the consecutive data missing and complete experiment, the MAE is 0.17, the RMSE is 0.27, and the R 2 is 0.92. When the missing rate is less than 30%, compared with the real data, the completion data of the four models have certain errors in their details, as shown with the blue marker boxes in Fig. 13(a). The trend of the curve shape has not changed much, and it is within an acceptable range. As the missing rate increases, the performance drops faster, as shown in Fig. 10. When the missing rate increases to more than 40%, the curve shape trend of the RF, Seq2seq and GAIN model data completion changes greatly (Fig. (b)). When the missing rate exceeds 50%, the curve shape trend of the GAIN-LSTM data completion has also changed ( Fig. (c)). At this time, we should consider manual completion based on geological data.
C. DISCUSSION OF THE EXPERIMENT
The ultimate goal of this experiment is to use an objective and intelligent method to complement the missing logging data as much as possible. In this paper, we use the MC-GAN-BiLSTM model. The biggest advantage of the MC-GAN-BiLSTM model is that not only the spatial characteristics of logging data are used, but the RF and Seq2seq modules in the model can effectively extract the timing characteristics of the logging data. This advantage greatly increases the performance of the model. In addition, we adopt a method of the joint input of 7 types of data, which can effectively improve the accuracy of all models. For example, when the incomplete data are randomly missing, the accuracy of the four methods involved in this article are all within an acceptable range. Finally, we simulate the lack of real data collected in the field through two types of test sets with random deletion and consecutive deletion, so that the experiment is more in line with the actual application environment. Additionally, we gradually increase the missing rate of the data, determine the application scope of the model developed in this article according to the application effect, and indirectly improve the efficiency of logging data completion.
Although the MC-GAN-BiLSTM model proposed in this article has achieved results, it also has shortcomings. First, we use 7 types of data in a joint input methods. The method of joint input helps to improve the performance of the model, but it has strict data requirements. Although the logging data of most oilfields in China contains the same data used in this paper, a very small number of oilfields lack one or more types. Second, the model proposed in this paper requires a large amount of complete data to train the model, which forces the application of the model to be based on the completion of logging work. Finally, because the input of the LSTM module requires a certain amount of data space, the model is not useful for the completion of real-time data from logging while drilling.
VI. CONCLUSION
This paper proposes a new method of complementing logging data based on a MC-GAN-BiLSTM. This method combines the characteristics of a GAN module to learn the distribution of real data and the advantages of an BiLSTM module to process the time series. It can memorize the trend of logging curve changes with depth and the spatial correlation of logging data at different depths, which is more in line with actual geological analysis experience experiments. In addition, we use multiscale convolution to fully extract the spatial features of the logging data. Multiscale convolution combined with BiLSTM makes the extracted features have spatiotemporal characteristics. On this basis, we construct a log curve complement model, using known values to complement the missing values of the curve. The experimental results show that the completion effect of the MC-GAN-BiLSTM method proposed in this paper is better than the traditional GAIN method. It can complete the missing part of the logging curve with reasonable simulation values that are close to the real logging data. This method provides certain guidance for researchers and achieves the goal of reducing logging costs as much as possible. However, due to the complexity of the underground geological environment, when the missing rate of the curve is large (>30%), it is difficult to accurately reflect the missing section based only on the limited known logging data, and there is a large deviation in the completion of this method. For the 18 wells in the study area with missing data, 15 of them were complemented by the MC-GAN-BiLSTM method, and logging data of 3 wells with a high missing rate were discarded. | 9,809.8 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
Recirculation of Process Water in a Wet Fermentation of Organic Fraction of Municipal Solid Waste
The mechanical biological treatment plant of Freienhufen is used to stabilize residual waste. Since the rural districts Elbe Elster and Oberspreewald Lausitz match their waste management with federal law, organic fraction of municipal solid waste (OFMSW) will be collected separately in future. Hence the anaerobic digestion process has to be converted. The accomplishment has to refer to the existing operating regime to reduce investment costs. This contains a wet fermentation. In order to facilitate the conversion of the operating process, suitable particle sizes and volumetric loads have to be examined. In addition, the liquid phase of the digestate shall be recirculated maximal to save both fresh water and waste water disposal costs. The one year lasting investigations were performed in lab-scale with a various number of reactors. Before feeding the bio-waste was pre-treated. In order to do that, the bio-waste was milled to particle sizes of 2, 4, 8, and 10 mm. In addition, the digestate was dewatered to gain process water. While using the process water fresh water was substituted in varying proportions. The feeding of the reactors was adjusted to the standards of the operating plant. For that reason, the dry matter content in the reactor was adjusted at 10.5 %. Depending on the delivered raw material, this restriction led both to unsteady water requirements and volumetric loading. As a result of investigations an optimal particle size as well as optimal proportion of recirculated process water were defined. For that reason, comprehensive analyses were conducted weekly to characterize the delivered raw material as well as the solid and liquid phase of the digestate in order to determine critical moments due to recirculation of process water. In conclusion, liquid and solid phase of the digestate should be evaluated with regards to application as fertilizer. doi: 10.5829/ijee.2019.10.01.08
INTRODUCTION 1
The mechanical biological treatment plant (MBT) of Freienhufen is used to stabilize residual waste. The landfill volume can be reduced while using the organic content for biogas production. Since the rural districts Elbe -Elster and Oberspreewald -Lausitz match their waste management with federal law, organic fraction of municipal solid waste (OFMSW) will be collected separately in future.
Hence the anaerobic digestion (AD) process of the MBT has to be converted. The accomplishment has to refer to the existing operating regime to reduce investment costs. This contains a wet fermentation. In order to facilitate the conversion of the operating process, suitable particle sizes and volumetric loads have to be examined. In addition, the liquid phase of the digestate shall be recirculated maximal to save both fresh water and waste water disposal costs. In the framework of one year lasting experiment the focus was segmented into five parts: Description of quality and quantity of separately collected OFMSW in the course of the seasons Pretreatment of separately collected OFMSW with respect to an efficient operating regime and with due regard to laboratory requirements Elaboration of best practice in AD conforming to specifications of plant operator Development of a process water recirculation management taking enrichment of nutrients and accumulation of contaminants into account Assessment of residuals (digestate) of AD regarding application as fertilizer
CHARACTERISATION OF OFMSW IN COURSE OF SEASON
The OFMSW in two-week cycle in two different territories was separately collected. Following the technical rules of LAGA PN 98 the sampling and pretreatment was realized by the plant operator. For that purpose, the heterogeneous A. Zentner*, C. Dornack Organic fraction Anaerobic digestion
Process water
Wet fermentation Solid waste OFMSW was homogenized and primary grinded to a particle size of approximately 20 mm ( Figure 1). Subsequently, the sampling material was delivered to the Institute of Waste Management and Circular Economy (IAK). The material was sampled on-site again following the technical rules of LAGA PN 98 ( Figure 2). Approximately 6 kg FM were taken and pretreated for AD. Another 6 kg FM conduced as retained samples and for tests in terms of elution.
Impact by course of seasons and settlement structure
The samples from OFMSW were taken both for fresh matter analytics and dry matter analytics. The analytical data of the substrates provide an overview about typical properties of OFMSW in the course of seasons as well as about the composition of waste in dependence of the settlement structure.
All samples of even sampling numbers have their origin in rural area with one bigger core of settlement. The samples with odd sampling numbers have their origin in more rural areas. The one year lasting experiment includes the handling of 52 samples in the period of April 2017 to March 2018. For better understanding an overview about the samples is given in Table 1.
Since the existing operating regime at MBT Freienhufen is adjusted to a dry matter content of 6 % the water content of OFMSW is granted high priority. Additionally, the water content provides data about the composition of the waste and is related to the organic dry matter (Figure 3).
The content of impurities (e.g. metal, batteries, plastics) was below 0.5 wt%. One can obtain that the water content of OFMSW has a huge variety in the range of 48 -70 %. The minimum water content appears in spring and the maximum appears in winter. The organoleptic investigation showed that the composition of OFMSW is dominated by green waste prior in spring and autumn. Thus, both rain events and ratio of green waste/ kitchen waste had a measurable impact to the water content. The impact of settlement structure negligible since there cannot be evaluated any tendencies.
Beyond that, the content of green waste is related to the organic dry matter content. Green waste is scratched and raked from the surface of the ground. Due to this circumstances dirt and soil is collected by coincidence. Figure 4 will provide an overview about that relations. The OFMSW was dried and grinded to a particle size of 10 mm. The distribution of particle sizes after sieving leads to the perception that most of the organic matter occurs in particle sizes between 2 -4 mm. The evaluation of relative concentration reveals that the content of organic matterincreases from 27 % at a particle sizes of less than 0.63 mm to 60 % at a particle sizes of 4 mm. Thus, the thesis above can be proved. While scratching and raking green waste from the surface of the ground inert material such as sand, stones, and soil will be collected as well.
Chemical properties of OFMSW
In addition, the bio-waste was analyzed in the laboratory in regard to parameters being relevant for AD. The focus was on nutrients (e.g. ammonia, TKN, sulphur, phosphorous, TOC) and heavy metals (e.g. Pb, Cd, Zn) (see Table 2). Within evaluation of analytical data of OFMSW inhibiting effects can be excluded. The obtained data are comparable both with literature values and from one's own experience [1][2][3][4][5][6][7][8][9].
Most of the carbon, approximately 98 %, is present as organic matter. The carbon content increases in November 2017 (BA 32). From this point until the end of the experiment it is 30 -55 % higher than in spring and summer 2017 crucial influenced by less content of green waste and major acceptance of separately collection of OFMSW at households. The optimal C:N:P:Sratio of > 600:15:5:1 cannot be adjusted by using OFMSW for AD exclusively [2]. This leads to the assumption of less biogas production due to missing carbon source and ammonification because of surplus of nitrogen.
METHODOLOGY
The investigations were performed in lab-scale. The reactors were driven with a specific volume of three liters. The AD was realized by quasi-continuous feeding using cow manure as inoculum. In order to that, feeding and sampling took place five times a week. Five incremental samples of one week were unified to one mixed sample and prepared for analytics. The focus of process variation was on: Particle size of OFMSW for feeding Ratio of recirculated process water and fresh water in feeding Accumulation of inhibiting contaminants For that reason, comprehensive analyses were conducted weekly to characterize both solid and liquid phase of the digestate. These include important parameters for AD and composting as for instance TOC, DOC, TKN, ammonium, COD, phosphate, organic acids, FOS/TAC, nutrients and
Distribution
Mesh size [mm] particle distribution heavy metals. Additionally, the biogas composition was measured daily to evaluate the effect of varying volumetric loads and determine critical moments due to recirculation of process water. In conclusion, liquid and solid phase of the digestate should be evaluated with regards to application as fertilizer.
Pretreatment of OFMSW for experimental investigations
As shown in section 2 sampling of separately collected OFMSW took place at plant site as well as at IAK. The particle size amounted to 20 mm. Larger wooden components were separated since they are not biodegradable in AD. These wooden components will serve as structure material in subsequently conducted composting facility. By drying the bio-waste at a temperature of 105 °C for 24 hours to different effects were achieved. On the one hand the biowaste was stabilized and a stable quality was guaranteed. On the other hand, the substrate could be grinded to various particle size in the range of 2 -10 mm ( Figure 5).
AD in experimental scale
The experimental investigations were realized in two stages. The first stage represented a preliminary set up consisting of three reactors with a specific volume of three liters. Two reactors were fed with dried and grinded bio-waste using cow manure as inoculum. The third reactor was driven with cow manure as blank value. The first set up pursued the following goals: Adaptation of microorganisms Adjust stable process conditions Adjust dry matter content of 10.5 % Preliminary investigations in dewatering of digestate to gain process water The second stage constituted the major experiment with six reactors ( Figure 6). Two reactors were driven exclusively with cow manure. Four reactors represented two varying scenarios in duplicate. The variations of the process were geared to diversity in particle size and enhancement of fresh water substitution by process water recirculation.
The process is based on operating AD at MBT Freienhufen. Thus, the retention time was adjusted at 25 days in mesophilic conditions of 35 °C. The dry matter content was raised from 6 to 10.5 % since grinded OFMSW has a lower resistance in stirring than residual waste. The feeding of the reactors took place quasi-continuously from Monday till Friday. For that reason, dried and grinded was mixed with fresh water according to the retention time within the reactor. Following adjustments were realized: Different particle sizes were fed: 10/8/4/2 mm Fresh water was gradually substituted by recirculated process water: 50/65/75/100 %
Process water recirculation
The sampling took place simultaneously to feeding. Five incremental samples of one week were unified to one mixed sample and prepared for analytics. According to the retention time five times 168 g (840 g) of digestate were removed of each AD reactor every week. Subsequently the gathered digestate was separated into solid and liquid phase (Figure 7).
RESULTS OF MAIN AD EXPERIMENTS
Varying scenarios were performed during the long-term experiment. Following Table 3 will provide an overview about adjusted parameters. The sampling was conducted five times a week from Monday till Friday. Just in time and onsite the conductivity, pH and redox potential were measured to evaluate the process stability.
pH, conductivity and redox potential
The pH value decreased from 7.8 to a stable range between 7.2 to 7.35 independently from process variation. The high starting point is caused by cow manure which is used as inoculum. Since one stage reactors performed AD the pH is in stable state. Acetogenic and methanogenic microorganisms will find best circumstances for their performance [2,7]. The conductivity decreased from 17.8 mS/cm a stable range between 10 to 12 mS/cm independently from process variation. The high starting point is caused by cow manure [5]. The redox potential is stable within the first six month of the major experiment at -400 to -300 mV independently from process variation. Thus acetogenic and methanogenic microorganisms will find best circumstances for their performance. At the end of the experiment the redox potential raised to -200 mV. In comparison with simultaneously conducted experiments one can conclude that the measuring device was non-conforming.
Production and composition of biogas
The specific biogas yields were approximately 180 NL/(kg org. DM). One could monitor that the yields increased over time (Figure 8) until ca. 240 NL/(kg org. DM). The yields were higher in A-reactors than in B-reactors even when process was changed in particle size and ratio of recirculated process water. In December an obvious change took place and the yields achieved peaks of approximately 500 NL/(kg org. DM). The results were obviously comparable to literature values [9]. This leads to the assumption that biogas production is not influenced by process variation in particle size and ratio of recirculated process water. Moreover, feeding and volumetric loading have an impact to biogas production. According to Figure 9 the biogas production is related to volumetric loading. The volumetric loading increased over time since organic content of OFMSW increased measurable and successively more organic matter is recirculated by process water. Thus, the volumetric loading increased from approximately 2.2 -3.6 kg org. DM/(m³*d). Another effect occurred in relation to feeding procedure. At Christmas break the feeding was converted from quasicontinuous to discontinuous feeding. Simultaneously the specific biogas yield increased to a large extent. After Christmas break the feeding was reconverted to quasicontinuous feeding and the biogas yield decreased again.
The composition of biogas was measured five times a week according to the sampling. As shown in the above figures the development of the biogas composition followed the same tendencies. Methane concentration increased from 45 % to 52 % over time and within increase of volumetric loading (Figure 10). Thus the maximum of volumetric loading is not reached. In other circumstances the biogas yield and the methane concentration would decrease [8]. The measured concentrations for hydrogen and hydrogen sulphide are negligible. Since they were below 200 ppm they were within the related measurement tolerance (Figure 11). There is no tendency to be extrapolated. At the end of the experiment the concentration of hydrogen sulphide raised. This effect is related to process water recirculation and should be investigated in future. Gaps in Figures 10 and 11 Figure 11. Occurrence of hydrogen and hydrogen sulphide
Solid and liquid phase of digestate
The digestate was evaluated for separate utilization of solid and liquid phase. Thus both phases were analyzed for inhibiting effects in AD and composting. Parameters of investigation are presented in the following Table 5.
Liquid phase
The liquid phase gained by dewatering of digestate was foreseen to recirculate as process water. Thus, fresh water should be substituted and waste water should be avoided. For this reason, the quality and quantity of the liquid phase was evaluated. Following descriptions constitute an excerpt.
The dry matter content of the liquid phase increased from approximately 3 to 7.7 % within the major AD experiment. Since we know about the particle size distribution of section 2.1 OFMSW contained a large amount of particle sizes ≤ 1mm. These particles passed the filter in dewatering partly and accumulate in the process water. The organic content of dry matter in liquid phase was about the whole time 70 ± 3 %. As a consequence, the volumetric loading increased by raising the ratio of recirculated process water.
In terms of recirculation of process water nitrogen occupies an important role. Especially ammonia which is soluble in water and accessible to plant. At the beginning of the major experiment the ammonia concentration was about 2 g/l, caused by cow manure. Over time the concentration decreased rapidly to 0.75 -0.85 g/l independent of process variation. After Christmas break the concentration raised again to 1.05 g/l independent of process variation. This leads to the assumption that will accumulate under conditions like high volumetric loading. Additionally, the activity of microorganisms is affected by conversion from quasicontinuous to discontinuous [4]. The inhibiting effect of ammonia is in dependence of temperature and pH-value. The adjusted conditions in major AD experiment should allow concentrations until 3.5 g/l [3]. As a consequence, the liquid phase is suitable for recirculation but should be monitored over time to avoid inhibiting and toxic effects to microorganisms. Additionally, there was an evaluation of organic acids. The sum of organic acids in all reactors and process variations was below 4 g/l. So the process stability was not affected by toxic concentrations. The ratio of acetic acid and propionic acid was > 2. Thus the efficiency in degradation and metabolism is in good circumstances. Furthermore, the FOS/TAC was analyzed. The ratio was below 0.3 and represented stable conditions. Sufficient buffer capacities in order to organic acids are delivered by cow manure as inoculum [6].
Solid phase
The solid phase of digestate contained approximately 73 % water. While using smaller particle sizes the water content tended to 80 % due to large surface area and stronger water binding capacities. While using AD approximately 40 % of organic matter could be degraded. The solid phase of digestate has to be composted after AD since the adjusted parameters were not sufficient for land use. In dependence of available co-substrate, the parameters will be evaluated more in detail in future to achieve certificated quality compost.
WATER BALANCE
Selected scenarios represent the substitution of fresh water by recirculation of process water. A treatment of process water did not take place in lab-scale experiments. The required amount of water was calculated by moisture of OFMSW, retention time and adjusted dry matter content in the reactor. Five incremental samples of one week were unified to one mixed sample. The liquid phase after dewatering was used partly for recirculation. Scenario 1 represents the process without process water recirculation. After dewatering 11.3 % of the digestate was separated in form of solid digestate. 88.5 % of the resulting process water respectively 83.7 % of the entire digestate has to be treated in a waste water treatment plant. The loss in performance and transferring amounts to 5.6 % (Figure 12). Scenario 2 represents the recirculation of process water of approximately 73 %. After dewatering 17.2 % of the digestate was separated in form of solid digestate. 82.9 % of the resulting process water respectively 60.6 % of the entire digestate were recirculated to the AD process. The loss in performance and transferring amounts to 1.8 % (Figure 13). Figure 14 represents the normalization of scenario to an input of 1 ton of bio-waste.
CONCLUSIONS
The project focused on the conversion of input material in AD of MBT Freienhufen. Residual waste shall be substituted by separately collected OFMSW. The experiments in AD, dewatering and recirculation of process water revealed reliable results in anaerobic degradation while implementing different process variations. The success and quality of anaerobic degradation and biogas production are mainly influenced by volumetric loading. Thus the course of seasons as well as quality and ratio of recirculated process water have had the major impact.
Since there was no measurable and reproducible impact of particle sizes it is recommended to avoid maximum shredding. The input of energy will not be in appropriate proportion to higher biogas yields.
If liquid phase of digestate will be used as recirculated process water the MBT Freienhufen is able to substitute large amounts of fresh water. It is recommended to substitute in a ratio of 65 to 70 % to avoid toxic risks (e.g. ammonification) for microorganisms. If the recirculation of process water will be implemented in future to MBT Freienhufen a storage tank with aeration has to be added to the technical concept. The aeration will inhibit AD processes in the storage tank and will prevent settlement of dry matter content from process water.
In order to use solid and liquid phase as fertilizer a post treatment is necessary. The properties of solid and liquid phase did not match the requirements by law. Thus, composting with adequate co-substrate as well as mixing with structure material from pretreatment are possible pathways to ensure a good composition for composting.
The course of seasons revealed a huge variety in composition of separately collected OFMSW. The MBT Freienhufen has to manage the balancing act between municipal waste disposal and gain in energy. Kitchen waste is suitable for AD processes because of high water contents and high content of organic dry matter. By contrast, green waste is more suitable for composting. The implementation of the new bin for OFMSW has to be accepted by citizen. Hence the ratio of kitchen waste can increase which leads to higher performance in AD. In addition, higher ratios of kitchen waste would support the available technical process. The wet fermentation requires large amounts of water which can be covered partly by kitchen waste with high moisture.
AKNOWLEDGEMENTS
This project is kindly financed by Abfallentsorgungsverband Schwarze Elster/ MBT Freienhufen. Authors wish to thank the scientific and technical staff for collaboration and realization of the project. The author is responsible for the content of this publication. | 4,928 | 2019-03-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
The grid codes generation for solving problems of the cosmic plasma hydrodynamics on supercomputers
In this paper an attempt is made to create a system for generating the parallel codes to simulate some astrophysical phenomena on modern computing architectures. A code construction scheme for the numerical solution of various problems of the cosmic plasma hydrodynamics is proposed using a knowledge base with an ontological representation of a given mathematical simulation area. The ontology of numerical methods and parallel algorithms and the ontology of parallel architectures and technologies together with the expert (inference) rules allow selecting an efficient numerical method, a parallel algorithm and a computational architecture for solving a necessary problem from the outlined domain. The corresponding program units that implement various stages of solving the problem on a chosen computational architecture are substituted into scheme of solving the problem constructed in this way. Program units can be based on both the author’s implementation and on calling various parallel libraries.
Introduction
It is known that more than 90% of the visible matter in the universe consists of plasma, so the plasma-astrophysics discipline has numerous fields of application. For example, plasma is a component of stars, nebulae, interplanetary, interstellar and intergalactic media. In the formation and dynamics of astrophysical objects, the magnetic field can fundamental affect the plasma dynamics.
The study of many astrophysical problems (for example, the formation of jets and magnetized disks around the newborn black holes, etc [1]) requires the investigation of the hydrodynamics of magnetized liquids.
An crucial part in studying of astrophysical processes is played by mathematical modeling, for which a large number of parallel codes have been developed. Among them, we can distinguish the codes based on the SPH method [2]- [4], the grid codes [5]- [7], codes using the adaptive [8]- [10] and the moving [11]- [13] meshes. Most of them are focused on the classical supercomputer architectures usage, but there are also representatives focused on graphics accelerators [14]- [16] and Intel Xeon Phi accelerators [17].
Most of the existing codes for simulating the cosmic plasma hydrodynamics are aimed at solving certain types of problems, but there are also universal codes [18]- [21] with a modular structure that support the variability of the problems being solved and the methods used. For example, the MPI-AMRVAC code has a certain versatility for solving the problems of stellar [22], but, nevertheless, it has a certain restrictions on solving the problems of galactic evolution or the problems of supernova explosions.
There are also attempts to create systems for generating astrophysical codes. For example, a code generation system using the EXCALC package has been created at the University of Costa Rica. This system transforms MHD equations to the Cartesian coordinates, followed by discretizing the equations by a method selected [23]. Similar works are carried out in Russia. Thus, the Institute of Astronomy of RAS, under the leadership of D. V. Bisikalo, Corresponding member of RAS, has created a code generation system for the evolution of the close binary stars [NSCF, 2013]. Still, all these systems are not completely universal.
This paper proposes a concept for creating a system of generating the grid parallel codes for the numerical solution of various hydrodynamic problems of the cosmic plasma based on a generalized representation of the magnetic hydrodynamics equations using the Godunov method. Using the ontological knowledge of the area in question and the modular structure of the code make it easier to prepare the necessary blocks for its generation.
A generalized description of the problem statement and solution method
The basic equations of the gravitational magnetic hydrodynamics for the vector of conservative variables U have the following form: where F(U) is the flow vector of conservative variables, Q is the right-hand side vector (a source), Φ is the gravitational potential, P (U) is the gravity source. The operator splitting approach allows us to represent this system as a combination of a hyperbolic system ∂U ∂t + ∇ · F(U) = 0, an elliptic system ∆Φ = P (U) and a right-hand side ∂U ∂t = Q. Equation (1) can also include a parabolic term describing, for example, the heat conduction effect, but at this stage of our study this was not considered.
For the numerical solution of system (1-2), various discretization methods can be used. In this paper, we consider a finite difference approach, which is more suitable for modeling media without significant curvilinear inclusions, in contrast to the finite volume method or finite element method. To discretize the computational domain, we introduce a uniform grid with a step h, at which the conservative variables are determined at the cell centers using fractional indices, and the conservative variables flows -at the centers of the interfaces between the cells. Then the Godunov scheme for equation (1) in the general form can be written down as follows: where τ is the time step, calculated from the CFL condition, F x , F y , F z are the conservative variables flows in terms of x,y,z coordinates.
To implement such a computational scheme, it is necessary to find the flow values at the interfaces between the cells, that is, to solve the Riemann problem. For this, various solvers can be applied, including high-order ones with different ways of representing a solution, for example, such as: MUSCL schemes, piecewise-linear and parabolic approximations, WENO schemes, HLL method, etc.
To numerically solve the Poisson equation (2), one can use any known grid method for the elliptic systems, for example, using the fast Fourier transform, various iterative methods, etc. Such above-mentioned generalized description of the equations system and the numerical method for solving problems of magnetohydrodynamics allows us to compose a unified upper algorithm level suitable for solving a wide class of astrophysical problems and to propose a code generation scheme based on it.
A code generation scheme
Let us describe in greater detail the proposed structure of the generated code and the necessary modules from which it is assembled.
The general code organization scheme for the numerical solution of equations of the forms of (1-2) is shown in Figure 1. The first step is to initialize the necessary grid arrays for a selected data structure, which is determined by the specificity of solving equations and the effect on the parallelization efficiency. The second step is to upload the information about a medium under consideration. Further, in the loop, while the time condition is fulfilled, the following actions are sequentially performed: recalculating the current time step to fulfill the CFL condition, fulfilling boundary conditions for primitive variables, reconstructing primitive variables based on the selected local template, fulfilling boundary conditions for reconstructed variables, starting the non-blocking exchange of boundary values between the adjacent nodes (parallelization by decomposition of the computational domain), solving the Riemann problem, computing the solution to equation (1) according to the Godunov scheme, solving the Poisson equation (2), verification of the exchanges completion, saving the solution at the current time step. Figure 1 also lists the necessary modules for assembling the final code, which, together with the universal function main, based on the described scheme, compound the finished version of the code (MPI-program). Let us note that as specific modules one can use both the library solutions and author's implementations. Moreover, specific solving equations determine only a set of conservative and primitive variables; the general solution scheme remains universal. The choice of specific methods to solving the arising sub problem and the corresponding modules is up to the researchers. Therewith, modules that implement solutions to the Riemann problem and calculations according to the Godunov scheme assume a parallel implementation, for example, using OpenMP technology and vector extensions for existing Intel processors and co-processors or using the CUDA core-function on the threads grid for calculations on graphic accelerators. The main structure can be expanded in the case of using the operator splitting approach or adding the calculation of chemical reactions, adaptive grids. However in any case the main framework and all the above modules will remain the same.
Intelligent support ontology for solving compute-intensive astrophysics problems
The development of efficiently parallel codes for modern supercomputer systems requires the knowledge about relevant computational methods, parallel architectures and technologies. To simplify the problem of choosing optimal numerical methods and suitable target architectures, it is proposed to present the accumulated knowledge about solving compute-intensive problems of astrophysics in an obvious ontological form. The ontology of mathematical methods and parallel algorithms and the ontology of parallel architectures and technologies, tied together with the rules of inference, allow one to build an optimal scheme for solving the problem. On this basis, the necessary modules for generating a code are determined according to the above described scheme. Such an approach can essentially simplify the process of developing scientific software and make it accessible to a researcher who is poorly skilled in some blocks of knowledge represented in ontology. Figure 2 shows the top level of the compiled ontology, some basic concepts of which are supplemented by a finite set of objects. The solution to a problem begins with the determination of a Physical Phenomenon under study described by the corresponding Equation system. The Equation System is solved by a Computational Method constructed using the previously mentioned methods. The final Computational Method, in turn, is implemented by a Parallel Algorithm, which is encoded by program Code optimized for a selected Computing Device using assigned Parallel Programming Technologies. In the future, the ontology will be expanded by expert (inference) rules containing the information not explicitly presented in the ontology about the properties of mapping computational methods to computational architectures. The rules will allow one to come to a comparative assessment of the computation time, the size of the memory used, the accuracy of the solution, scalability for a selected bundle: the computational method -the target architecture.
Conclusion
This paper presents the code generation concept for solving the problems of cosmic plasma simulation on parallel computing systems using an ontological representation. The generated code structure and the main modules are described, as well as the top-level ontology of the intelligent support for solving compute-intensive problems of astrophysics. The proposed generation system maintains multipurposeness with respect to the conservation laws of magnetic hydrodynamics and numerical methods in question. The generated code allows one to simulate the dynamics of cosmic plasma for a wide range of problems. | 2,406.6 | 2019-11-01T00:00:00.000 | [
"Physics"
] |
Recovery of Polyphenols from Grape Pomace Using Polyethylene Glycol (PEG)-Grafted Silica Particles and PEG-Assisted Cosolvent Elution
Adsorption on a functionalized surface can be an effective way of purifying polyphenols from complex plant extracts. Polymeric resins that rely on hydrophobic interactions suffer from low selectivity, weak affinity towards polyphenols, and lack tunability therefore making the purification of polyphenols less efficient. In this study, a purification process for the recovery of polyphenols from grape pomace extract was successfully developed using hydrogen bonding affinity ligands grafted on silica particles and PEG-assisted elution solvents. Bare silica (SiO2) and polyethylene glycol (mPEG)-grafted silica microparticles with molecular weights of 2000 and 5000 were tested to determine their polyphenol binding and release characteristics. Functionalizing the surface of bare silica with mPEG ligands increased the adsorption capacity by 7.1- and 11.4-fold for mPEG-2000 and mPEG-5000 compared to bare silica particles, respectively. This was likely due to the introduction of more polyphenol binding sites with mPEG functionalization. Altering the molecular weight (MW) of mPEG grafted on silica surfaces provided tunability in the adsorption capacity. A complete recovery of polyphenols (~99.9%) from mPEG-grafted silica particles was achieved by utilizing PEG–ethanol or PEG–water cosolvent systems. Recovered polyphenols showed up to ~12-fold antioxidant activity compared to grape pomace extract. This study demonstrates that mPEG-grafted silica particles and elution of polyphenols with PEG cosolvents can potentially be used for large-scale purification of polyphenols from complex plant extracts and simplify the use of polyphenols, as PEG facilitates remarkable solvation and is an ideal medium for the final formulation of polyphenols.
Characterization of Silica Particles
Surface functionalization of silica particles with mPEG ligands was confirmed by spectroscopic and gravimetric techniques. Surface area and pore volume of bare and mPEG-grafted silica particles were determined by Brunauer-Emmett-Teller (BET) analysis.
Fourier Transform Infrared (FTIR) Analysis
The surface functional groups on the surface of silica particles were confirmed based on the presence of the characteristic peaks for different moieties, such as C=O, N−H, and −CH 2 (Figure 1). For both bare and mPEG-grafted silica particles, the peaks at~1045 cm −1 are attributed to asymmetric vibrations of Si−O−Si [33]. A characteristic peak of carbonyl group (C=O) stretching in the range of 1715 ± 100 cm −1 appeared for mPEG-functionalized silica particles due to presence carbonyl groups in the mPEG-silane structure (Figure 1). Note that the carbonyl moiety seen in the mPEG structure is not the part of the repetitive unit, however, is present in the structure due to the coupling chemistry used to prepare mPEG-silane. It was also anticipated that the characteristic broad peak in the range of 3000-3600 cm −1 for mPEG-grafted silica would appear due to O−H stretching [34]. The residual moisture upon grafting and washing steps of mPEG-grafted silica particles with water and ethanol caused the broad −OH peak to appear in the aforementioned region. The presence of the residual moisture content is supported by the very small mass loss seen in the initial phase of the thermogravimetric (TGA) profile where the silica particles were heated to 120 • C and remained for 3 min (Figure 2). Although an N−H stretching peak is expected in 3000-3600 cm −1 region, due to the strong −OH peak, the signal associated with the N−H stretching is likely masked. An −NH group is also present in the structure of mPEG silane due to coupling chemistry used. Moreover, the peak around 2800 ± 100 cm −1 appeared is attributed to −CH 2 stretching vibration from the repetitive units of mPEG grafted on silica particles [33,35]. The small peaks at~1462 and 1347 cm −1 are attributed to −CH 2 bending and wagging vibration, respectively [33,35]. In summary, identification of characteristic peaks of carbonyl (C=O), N−H, and −CH 2 suggest that successful coupling of mPEG-silane to silica particles was achieved.
Molecules 2019, 24, x FOR PEER REVIEW 3 of 18 PEG-assisted cosolvent recovery can be an effective and environmentally benign process for obtaining highly potent polyphenols from grape pomace.
Characterization of Silica Particles
Surface functionalization of silica particles with mPEG ligands was confirmed by spectroscopic and gravimetric techniques. Surface area and pore volume of bare and mPEG-grafted silica particles were determined by Brunauer-Emmett-Teller (BET) analysis.
Fourier Transform Infrared (FTIR) Analysis
The surface functional groups on the surface of silica particles were confirmed based on the presence of the characteristic peaks for different moieties, such as C=O, N−H, and −CH2 (Figure 1). For both bare and mPEG-grafted silica particles, the peaks at ~1045 cm −1 are attributed to asymmetric vibrations of Si−O−Si [33]. A characteristic peak of carbonyl group (C=O) stretching in the range of 1715 ± 100 cm −1 appeared for mPEG-functionalized silica particles due to presence carbonyl groups in the mPEG-silane structure ( Figure 1). Note that the carbonyl moiety seen in the mPEG structure is not the part of the repetitive unit, however, is present in the structure due to the coupling chemistry used to prepare mPEG-silane. It was also anticipated that the characteristic broad peak in the range of 3000-3600 cm −1 for mPEG-grafted silica would appear due to O−H stretching [34]. The residual moisture upon grafting and washing steps of mPEG-grafted silica particles with water and ethanol caused the broad −OH peak to appear in the aforementioned region. The presence of the residual moisture content is supported by the very small mass loss seen in the initial phase of the thermogravimetric (TGA) profile where the silica particles were heated to 120 °C and remained for ~3 minutes (Figure 2). Although an N−H stretching peak is expected in 3000-3600 cm −1 region, due to the strong −OH peak, the signal associated with the N−H stretching is likely masked. An −NH group is also present in the structure of mPEG silane due to coupling chemistry used. Moreover, the peak around 2800 ± 100 cm −1 appeared is attributed to −CH2 stretching vibration from the repetitive units of mPEG grafted on silica particles [33,35]. The small peaks at ~1462 and 1347 cm −1 are attributed to −CH2 bending and wagging vibration, respectively [33,35]. In summary, identification of characteristic peaks of carbonyl (C=O), N−H, and −CH2 suggest that successful coupling of mPEGsilane to silica particles was achieved. compared to bare silica ( Figure 2). The additional mass loss observed for mPEG-grafted silica particles suggests that both mPEG-2000 and mPEG-5000 were successfully grafted onto the silica surface. The additional mass loss of 5.0% and 2.1% in mPEG-5000 and mPEG-2000, respectively, compared to bare silica correspond to 0.01 mmol mPEG/g silica for both mPEG-5000 and mPEG-2000. This suggests that grafting densities of mPEG-5000 and -2000 are similar and any differences observed in polyphenol adsorption capacity is associated with the chain length of PEG.
Elemental Analysis of Bare and mPEG-Grafted Silica Particles
Elemental analysis of mPEG-grafted and bare silica particles verified the surface modification. Table 1 shows the C, H, and N composition of bare and mPEG-grafted silica particles. Surface modification with mPEG resulted in higher C, H, and N weight percentages in comparison to bare silica particles where mPEG-5000 showed ~2-fold higher wt % of C and H in comparison to mPEG-2000. This is also in good agreement with TGA analysis showing ~2.5-fold difference in % weight loss between mPEG-2000 and -5000. Increase in the percentage of elemental C, H, and N of mPEG-grafted silica particles in comparison to bare silica is attributed to the successful surface functionalization of silica particles with mPEG ligands. It is worthy to note that bare silica shows ~1% C, ~0.5% H, and ~0.07% N contents, where it is believed that the organic contamination associated with the sample handling during the measurements likely resulted in detection of carbon, hydrogen, and very limited nitrogen content. Similar organic contents of bare silica were reported elsewhere [37,38]. Table 1. Elemental composition and physical characteristics of bare and mPEG-grafted silica particles.
2.1.2. Thermogravimetric (TGA) Analysis of Bare and mPEG-Grafted Silica Particles Figure 2 shows the percent mass loss of bare and mPEG-grafted silica particles. mPEG-2000 decomposition starts at~400 • C, whereas mPEG-5000 started to decompose at~300 • C, which compares to previous research showing a decomposition temperature of 392 • C for mPEG-2000-grafted silica [36]. In order to quantify the mass loss due to mPEG decomposition, the mass loss of bare silica was used as a baseline and any additional mass loss observed in mPEG-grafted silica particles was associated with the decomposition of mPEG. The percent weight loss difference between bare silica and mPEG-2000 was 2.1%, whereas mPEG-5000 had 5.0% additional mass loss compared to bare silica ( Figure 2). The additional mass loss observed for mPEG-grafted silica particles suggests that both mPEG-2000 and mPEG-5000 were successfully grafted onto the silica surface. The additional mass loss of 5.0% and 2.1% in mPEG-5000 and mPEG-2000, respectively, compared to bare silica correspond to 0.01 mmol mPEG/g silica for both mPEG-5000 and mPEG-2000. This suggests that grafting densities of mPEG-5000 and -2000 are similar and any differences observed in polyphenol adsorption capacity is associated with the chain length of PEG.
Elemental Analysis of Bare and mPEG-Grafted Silica Particles
Elemental analysis of mPEG-grafted and bare silica particles verified the surface modification. Table 1 shows the C, H, and N composition of bare and mPEG-grafted silica particles. Surface modification with mPEG resulted in higher C, H, and N weight percentages in comparison to bare silica particles where mPEG-5000 showed~2-fold higher wt % of C and H in comparison to mPEG-2000. This is also in good agreement with TGA analysis showing~2.5-fold difference in % weight loss between mPEG-2000 and -5000. Increase in the percentage of elemental C, H, and N of mPEG-grafted silica particles in comparison to bare silica is attributed to the successful surface functionalization of silica particles with mPEG ligands. It is worthy to note that bare silica shows~1% C,~0.5% H, and~0.07% N contents, where it is believed that the organic contamination associated with the sample handling during the measurements likely resulted in detection of carbon, hydrogen, and very limited nitrogen content. Similar organic contents of bare silica were reported elsewhere [37,38]. Figure 3 shows the adsorption capacities of the adsorbents used in this study. Bare silica showed only 10.1 mg GAE (gallic acid equivalents)/g adsorption capacity whereas surface grafting with mPEG-2000 and -5000 increased the adsorption capacity by 7.1-and 11.4-fold compared to bare silica, respectively. Higher adsorption capacities observed in mPEG-grafted silica particles compared to bare silica can be attributed to the more polyphenol binding sites. Grafting bare silica with mPEG introduces a high number of ether sites which presumably facilitates hydrogen bonding between polyphenols and mPEG ligands. A similar hydrogen bonding phenomenon can be seen between bare silica and polyphenols. Bare silica surface carries silanol groups which can potentially be involved in hydrogen bonding [39]; however, it is likely that the number of these sites are limited. Therefore, bare silica yields poor polyphenols adsorption capacity. This is in good agreement with other studies showing that bare silica surfaces showed very low quercetin adsorption capacity compared to titania-modified silica particles [40,41]. mPEG-5000-grafted silica particles showed 1.6-fold higher polyphenol adsorption capacity compared to mPEG-2000. Although the conformation of mPEG on the silica surface is not known, it is likely that mPEG-5000 exposes more hydrogen bonding sites compared to mPEG-2000 due to the presence of higher number of repetitive PEG units. Exposing more ether sites would likely increase the probability and affinity of polyphenols to bind the PEG chains resulting in higher adsorption capacities. These results are in good agreement with literature showing that increasing the chain length of PEG from 6 to 43 repetitive units resulted in strong bonding to biomolecules available on cell surfaces, presumably through hydrogen bonding [26]. This unique ability of PEG provides surface tunable properties which regulate the polyphenol adsorption capacity with mPEG-grafted silica particles. In addition to the tunable properties, mPEG-grafted silica particles demonstrated higher adsorption capacities in comparison to traditionally used food-grade polymeric resins. Soto et al. evaluated 13 different food grade resins such as Amberlites, Diaions, and SepaBeads with various structures and reported~1.5-2.6 mg GAE/g adsorption capacities for polyphenols recovered from winery waste [42]. It is important to note that the surface areas of these resins range from 300 to 1200 m 2 /g. mPEG-2000-and mPEG-5000-grafted silica particles used in our study have less surface area (Table 1) compared to the resins used in Soto et al. [42] while demonstrating~28-48-and~44-77-fold higher total polyphenol adsorption capacities, respectively. These results confirm the high potential for mPEG-grafted silica particles to be used in the recovery of polyphenolic compounds present in the winery waste. Since mPEG-5000-grafted silica particles showed the highest polyphenol adsorption capacity, desorption experiments were performed using these particles only.
Screening Solvent Systems Coupled with Reagents That Can Break Hydrogen Bonding for the Recovery of Polyphenols from mPEG-Grafted Silica Particles
Effect of Sorbitol and Salt in Aqueous Ethanol on Polyphenol Recovery Figure 4 shows the desorption ratios of various solvent systems. When sorbitol or NaCl was individually added to 30% aqueous ethanol, the desorption ratios increased by~8-fold in comparison to 30% aqueous ethanol, yet the desorption ratios were at~13% level, which is likely considered as a noneffective recovery process for polyphenols. The stronger elution power, seen with the addition of NaCl to the aqueous ethanol, can be partially explained by the salt effect seen in the reverse phase chromatography [43]. When NaCl and sorbitol were added to 70% aqueous ethanol, desorption ratios increased by~4-and 7-fold in comparison to 70% aqueous ethanol, whereas combined sorbitol/NaCl increased the desorption ratios by~11-fold, resulting in a desorption ratio of 36.1% ( Figure 4). We attempted to use hydrogen bond breaker reagents to release the polyphenols strongly bonded to mPEG-grafted silica particles. Sorbitol is a monosaccharide with six hydrogen donors and acceptors and is commonly used as a biomolecule stabilizer. Sorbitol was previously used to elute large molecule biologics presumably by breaking the hydrogen bonds between the resin and the biomolecule of interest [44]. Moreover, in combination with salts, the elution power was strengthened [44]. In analogy to elution of biologics in hydrogen bonding chromatography [44], sorbitol and NaCl was used in an aqueous ethanol binary solvent system. Although the desorption ratio of 100 mM NaCl + 200 mM sorbitol in 70% aqueous ethanol was promising, further optimization of sorbitol concentration was not performed due to the viscosity constraints.
polyphenol adsorption capacity compared to mPEG-2000. Although the conformation of mPEG on the silica surface is not known, it is likely that mPEG-5000 exposes more hydrogen bonding sites compared to mPEG-2000 due to the presence of higher number of repetitive PEG units. Exposing more ether sites would likely increase the probability and affinity of polyphenols to bind the PEG chains resulting in higher adsorption capacities. These results are in good agreement with literature showing that increasing the chain length of PEG from 6 to 43 repetitive units resulted in strong bonding to biomolecules available on cell surfaces, presumably through hydrogen bonding [26]. This unique ability of PEG provides surface tunable properties which regulate the polyphenol adsorption capacity with mPEG-grafted silica particles. In addition to the tunable properties, mPEG-grafted silica particles demonstrated higher adsorption capacities in comparison to traditionally used foodgrade polymeric resins. Soto et al. evaluated 13 different food grade resins such as Amberlites, Diaions, and SepaBeads with various structures and reported ~1.5-2.6 mg GAE/g adsorption capacities for polyphenols recovered from winery waste [42]. It is important to note that the surface areas of these resins range from 300 to 1200 m 2 /g. mPEG-2000-and mPEG-5000-grafted silica particles used in our study have less surface area (Table 1) compared to the resins used in Soto et al. [42] while demonstrating ~28-48-and ~44-77-fold higher total polyphenol adsorption capacities, respectively. These results confirm the high potential for mPEG-grafted silica particles to be used in the recovery of polyphenolic compounds present in the winery waste. Since mPEG-5000-grafted silica particles showed the highest polyphenol adsorption capacity, desorption experiments were performed using these particles only.
Effect of Acid and Base on Polyphenol Recovery
Acidic and basic conditions were screened to test the ability of these conditions to release the polyphenols from mPEG-grafted silica particles. Both 1% HCl and 1% NaOH in 70% aqueous ethanol showed poor desorption ratios (<14%) (Figure 4), whereas 1% HCl showed~2-fold higher desorption ratios compared to 1% NaOH, suggesting that acidic conditions are more favorable than basic for the release of polyphenols. However, further optimization was not performed with the strong acids due to the concerns about leaching mPEG ligands from the surface of silica particles [41]. In contrast to strong acids, previous studies using a weak acid-citric acid-showed promising elution performance for the recovery of a model polyphenol-quercetin-from titania-modified silica particles [41]. Citric acid was also supplemented to an extraction solvent used to isolate rutin from Sophora japonica [45]. This work, therefore, aimed to increase the desorption ratio by supplementing anhydrous ethanol with varying concentrations of citric acid (Figure 4). Unlike quercetin recovery from titania-modified silica particles using 10 and 20% citric acid in ethanol [41], only~12 to 15% desorption ratios were observed when polyphenols were eluted from mPEG-grafted silica particles (Figure 4). Increasing the citric acid concentration to 40% increased desorption ratio by~2-fold in comparison to 10 and 20% citric acid in ethanol with a final desorption ratio of 29%. It was concluded that addition of citric acid was not sufficient to release most of the polyphenols adsorbed onto mPEG-modified silica particles, yet achieved comparable desorption ratios that were observed in 100 mM + 200 mM sorbitol in 70% aqueous ethanol (Figure 4). attempted to use hydrogen bond breaker reagents to release the polyphenols strongly bonded to mPEG-grafted silica particles. Sorbitol is a monosaccharide with six hydrogen donors and acceptors and is commonly used as a biomolecule stabilizer. Sorbitol was previously used to elute large molecule biologics presumably by breaking the hydrogen bonds between the resin and the biomolecule of interest [44]. Moreover, in combination with salts, the elution power was strengthened [44]. In analogy to elution of biologics in hydrogen bonding chromatography [44], sorbitol and NaCl was used in an aqueous ethanol binary solvent system. Although the desorption ratio of 100 mM NaCl + 200 mM sorbitol in 70% aqueous ethanol was promising, further optimization of sorbitol concentration was not performed due to the viscosity constraints. Figure 4. Desorption of total polyphenols from mPEG-5000 grafted silica particles using various solvents. Ethanol is abbreviated as EtOH. A 1 mL desorption solvent was used for recovery of grape pomace polyphenols adsorbed on 25 mg mPEG-5000-grafted silica particles. Error bars indicate a standard deviation observed for duplicates.
Screening PEG Cosolvent Systems for the Recovery of Polyphenols from mPEG-Grafted Silica Particles: Developing PEG-Water and PEG-Ethanol Cosolvent Systems
Due to the limited polyphenol desorption abilities of the solvents discussed above, an alternative solvent system capable of releasing the majority of polyphenols from silica particles was needed. Cosolvent systems are widely used in the pharmaceutical industry to solubilize poorly water-soluble drugs [46]. Propylene glycol (PG), ethanol, glycerin, and PEG-400 are commonly used and the Food and Drug Administration (FDA) approved the cosolvents [47]. A system using PEG-200 or PEG-400 as a cosolvent was developed in this study for the effective release of polyphenols from mPEG-modified silica particles. It is hypothesized that the interactions between polyphenols and mPEG-grafted surfaces are unusually strong [48] and are likely due to the hydrogen bonding network. Therefore, solvents behaving like PEG can possibly break this network effectively. It was intended to create a competition between mPEG on the silica surface and PEG-200 or PEG-400 in the cosolvent and achieve complete recovery of polyphenols from silica particles. Figure 4 shows the desorption ratios of PEG-water and PEG-ethanol cosolvent systems with variable PEG concentrations (v/v). The slight addition of PEG-400 (10%) to ethanol increased the desorption ratio by~7-and 14-fold in comparison to 70% and 30% aqueous ethanol, respectively ( Figure 4). A similar trend was observed when only 10% PEG-400 was added to water, where an increase in the desorption ratio of~7-and 14-fold in comparison to 70% and 30% aqueous ethanol, was determined, respectively. When the PEG-400 concentration was increased from 10% to 30 and 50% (v/v) in ethanol, 3-and 4-fold increases in the desorption ratios compared to the 10% solution were observed, respectively. With that,~99.9% of the adsorbed polyphenols was recovered using 50% PEG-400-ethanol. These high desorption ratios observed in PEG-400-ethanol cosolvent systems are attributed to the ability of PEG to break the strong hydrogen bond network between mPEG-grafted silica particles and polyphenols. It is likely that adding more PEG introduces competition at the polyphenols' binding sites and induces the release of more polyphenols. Previous research showed that PEG-400 has remarkable polyphenol solubility in comparison to traditional solvents such as alcohol, castor oil, and water [30]. Therefore, polyphenols released in PEG solvent will be an ideal platform for the downstream use of polyphenols.
Effect of Water and Ethanol in PEG Cosolvent Systems for the Recovery of Polyphenols
In order to understand the effect of cosolvents in PEG binary solvent systems on polyphenol recovery, ethanol was replaced with water and the concentration of PEG was varied at 10, 30, and 50% levels corresponding to the same concentrations used in ethanolic PEG cosolvent systems. When ethanol was replaced with water in PEG cosolvent systems, the desorption ratios decreased by 7 and 21% in comparison to PEG-400-ethanol cosolvent systems at 30 and 50% levels, respectively (p < 0.05) ( Figure 4). This suggests that ethanol is the preferable cosolvent for the recovery of total polyphenols from mPEG-grafted silica particles in comparison to water in PEG cosolvent systems. However, PEG-water cosolvent system still performed well compared to the rest of the solvent systems tested in this study (Figure 4). There was no statistical difference between ethanol and water cosolvent at 10% PEG level (p > 0.05). This was likely due to low desorption ratios observed with a low amount of PEG available in the cosolvent and therefore the difference between ethanol and water was likely masked at this level. Interestingly, when water and ethanol were interchanged in PEG cosolvent system, the type of polyphenols released from mPEG-grafted silica particles also varied as confirmed by the analysis of individual polyphenols via high-performance liquid chromatography (HPLC) ( Table 2). When water was used as a cosolvent in PEG-400 system, gallic acid, p-coumaric acid, (+)-catechin, malvidin chloride, and isoquercetin were recovered more preferably in comparison to ethanol-PEG-400 cosolvent system (Table 2). Similarly, myricetin was recovered more preferably with water in comparison to ethanol in PEG-200 cosolvent systems (Table 2). In contrast to the individual polyphenols discussed above, only kaempferol was selectively recovered by using ethanol in comparison to water in both PEG-400 and PEG-200 cosolvent systems ( Table 2). The different abilities of water and ethanol to release variable polyphenols can be attributed to their degree of solvation or, in order words, their ability to break hydrogen bonds to a different extent between various type of polyphenols and mPEG-grafted silica particles. This solvent swing approach provides unique flexibility on the selection of the types of the polyphenols to be recovered from mPEG-grafted silica particles. Moreover, a sequential desorption with water and ethanol cosolvent systems can be performed for the effective fractionation of the polyphenol of interest.
Tunable Recovery of Polyphenols Using Cosolvent Systems with PEG-200 and PEG-400
In contrast to adsorption capacity (Figure 3), altering the MW of PEG in the desorption medium did not affect the desorption ratio because there was no statistical difference between 50% PEG-400 in both water and ethanol and 50% PEG-200 in both water and ethanol (Figure 4) (p > 0.05). However, the type and amount of specific polyphenols recovered from the silica particles varied when PEG-400 was replaced with PEG-200 in the desorption medium (Table 2). Table 2 shows the relative peak areas of eluted polyphenols as a function of different solvent systems for individual polyphenolic compounds. When PEG-200 was used in the desorption medium, myricetin and procyanidin-B2 were selectively recovered from silica particles, whereas there was no detectable myricetin and procyanidin-B2 present in PEG-400 cosolvent systems after elution. On the other hand, there was no detectable gallic acid, p-coumaric acid, and isoquercetin present in PEG-200 solvent systems (Table 2). A recovery of~40-50% of gallic, p-coumaric, and isoquercetin from grape pomace extract was achieved by the use of a 50% PEG-400-water cosolvent system. In addition,~82% of malvidin chloride was recovered using the 50% PEG-400-water cosolvent system, whereas only~28% of malvidin chloride was recovered using 50% PEG-200-water cosolvent system (Table 2). It is worthy to highlight the biological activity of malvidin as it is one of the most abundant polyphenol types isolated with PEG cosolvent systems. Theoretical studies showed that malvidin carries high antioxidant activity [49] and experimental studies demonstrated the ability of malvidin to inhibit the growth of human tumor cells in vitro [50]. These higher recovery percentages obtained in PEG-400 cosolvent systems in comparison to PEG-200 cosolvent systems are attributed to the ability of PEG-400 to break hydrogen bonds between these polyphenols and mPEG-grafted silica particles better in comparison to PEG-200. Since the hydrogen bond is directional and the strength of the bond depends on the dipole moment of two interacting moieties [51], the ability of different cosolvents to break these bonds likely varies. Variability in the relative peak areas of polyphenols released from mPEG-grafted particles with various solvents is attributed to the heterogeneity of the structures observed for different polyphenols (Table 2). Interestingly, recovery of delphinidin chloride was not sensitive to the type of the solvents used for the release of the polyphenols from silica particles ( Table 2). No detectable quercetin was identified in the desorption medium for all of the solvents tested and, similarly, a very limited amount of (+)-catechin (~0.1-1%) was recovered with PEG-200 and PEG-400 cosolvents system (Table 2). It is worthy to note that there are considerable amounts of (+)-catechin and quercetin present in the grape pomace extract compared to other polyphenols investigated (Table S1). These poor affinities of elution solvents and mPEG-grafted silica particles to quercetin and (+)-catechin suggest that desorption medium and mPEG-grafted silica particles are selective for certain polyphenols. Figure 5 shows the antioxidant activity of grape pomace extract and recovered polyphenol mixture using various desorption solvents. In order to compare the antioxidant activity of grape pomace extract and recovered polyphenols, the antioxidant activity was normalized based on total polyphenol content present in the grape pomace extract and recovered polyphenols, respectively. This allowed us to evaluate the effect of activity observed due to polyphenol composition and not the amount of polyphenols used in the assay. In all cases, recovered polyphenols with mPEG-grafted silica particles showed stronger antioxidant activity in comparison to grape pomace extract, while the degree of antioxidant activity varied with the PEG percentages in the cosolvent and the types of the secondary solvent used ( Figure 5). It is generally believed that the antioxidant activity of polyphenols is facilitated through hydroxyl groups available in the structure [44]. Since the hydroxyl groups are likely involved in the hydrogen bonding of polyphenols to mPEG-grafted silica particles, it is attributed that the recovery of polyphenols via hydrogen bonding mechanism would yield better antioxidant activity compared to grape pomace extract. In support of this statement, polyphenols recovered with 50% PEG-200-water showed the highest potency in comparison to polyphenols recovered using three other solvents used in this study ( Figure 5). The high potency observed in PEG-200 can be explained by the higher content of both myricetin and procyanidin-B2 in comparison to PEG-400 cosolvent systems (Table 2). Previous antioxidant assays showed that myricetin and procyanidin-B2 have the highest potencies among 20 well-known polyphenols including gallic acid, resveratrol, kaempferol, quercetin, and (+)-catechin [52]. In addition to its antioxidant activity, myricetin shows significant health promoting activities such as anticarcinogenic, antiviral, and antidiabetic activities. An extensive review of the biological effects of myricetin is provided elsewhere [53] and this highlights the importance of the preferable recovery of myricetin using 50% PEG-200-water cosolvent systems. PEG-400-water showed 1.2-fold higher potency in comparison to PEG-400-ethanol ( Figure 5). The high potency observed in PEG-400-water is attributed to~3.5-fold higher gallic acid content in comparison to PEG-400-ethanol (Table 2). Gallic acid was listed as one of the highest potent polyphenols [52] after myricetin and procyanidin-B2. Since PEG-400 cosolvent systems do not contain these polyphenols, it is likely that higher gallic acid content would result in better antioxidant activity seen in the PEG-400-water cosolvent system. Overall, it is likely that enrichment of gallic acid, procyanidin-B2, and myricetin by the use of mPEG-grafted silica particles and PEG cosolvent elution resulted in better antioxidant activity in comparison to grape pomace extract. (Table 2). Previous antioxidant assays showed that myricetin and procyanidin-B2 have the highest potencies among 20 well-known polyphenols including gallic acid, resveratrol, kaempferol, quercetin, and (+)-catechin [52]. In addition to its antioxidant activity, myricetin shows significant health promoting activities such as anticarcinogenic, antiviral, and antidiabetic activities. An extensive review of the biological effects of myricetin is provided elsewhere [53] and this highlights the importance of the preferable recovery of myricetin using 50% PEG-200water cosolvent systems. PEG-400-water showed 1.2-fold higher potency in comparison to PEG-400ethanol ( Figure 5). The high potency observed in PEG-400-water is attributed to ~3.5-fold higher gallic acid content in comparison to PEG-400-ethanol (Table 2). Gallic acid was listed as one of the highest potent polyphenols [52] after myricetin and procyanidin-B2. Since PEG-400 cosolvent systems do not contain these polyphenols, it is likely that higher gallic acid content would result in better antioxidant activity seen in the PEG-400-water cosolvent system. Overall, it is likely that enrichment of gallic acid, procyanidin-B2, and myricetin by the use of mPEG-grafted silica particles and PEG cosolvent elution resulted in better antioxidant activity in comparison to grape pomace extract. Figure 5. Normalized antioxidant activity of grape pomace extract and polyphenols recovered from mPEG-5000-grafted silica particles using various cosolvent systems. Ethanol is abbreviated as EtOH. A statistical difference was observed for all measurements (p < 0.05) Error bars indicate a standard deviation observed for duplicates.
Materials and Methods
Several spectroscopic, gravimetric, and analytical techniques were used for the characterization of the adsorbents and polyphenols. Surface modification of silica particles was assessed by Fourier transform infrared (FTIR), elemental, and thermogravimetric (TGA) analyses. Surface area and pore volumes of silica particles were estimated by N2 adsorption using the Brunauer-Emmett-Teller (BET) method. Grape pomace extract and purified polyphenols were characterized with high-performance Figure 5. Normalized antioxidant activity of grape pomace extract and polyphenols recovered from mPEG-5000-grafted silica particles using various cosolvent systems. Ethanol is abbreviated as EtOH. A statistical difference was observed for all measurements (p < 0.05) Error bars indicate a standard deviation observed for duplicates.
Materials and Methods
Several spectroscopic, gravimetric, and analytical techniques were used for the characterization of the adsorbents and polyphenols. Surface modification of silica particles was assessed by Fourier transform infrared (FTIR), elemental, and thermogravimetric (TGA) analyses. Surface area and pore volumes of silica particles were estimated by N 2 adsorption using the Brunauer-Emmett-Teller (BET) method. Grape pomace extract and purified polyphenols were characterized with high-performance liquid chromatography (HPLC) and, finally, potency of recovered polyphenols was assessed using an antioxidant activity assay.
Preparation of Grape Pomace Samples
Grape pomace (grape skin, seeds, and stem) from red winemaking using merlot grapes harvested in 2017 was kindly provided by Chateau Ste. Michelle Winery (Woodinville, WA). Upon receipt of grape pomace, the samples were frozen at −20 • C and lyophilized. Then, the dried grape pomace samples were ground to powder (>40 mesh) and stored in plastic bags at −20 • C until used for further experiments.
Preparation of Grape Pomace Extracts
An aqueous ethanol solvent extraction was performed to isolate the polyphenolic compounds from grape pomace. The polyphenolic compounds were extracted at a solid to liquid ratio of 1:10 (w/v) using 70:30 (v/v) ethanol: water as the extraction solvent. Following 15 min of sonication treatment, the extraction was performed at 30 • C in an orbital shaker in dark. After 24 h extraction, the grape pomace extract was filtered through Whatman 42 ashless filter paper. The resulting extract was stored at −20 • C until used in further experiments.
Preparation of Silica Particles with Functional Groups
Silica particles were selected as the support material for the surface modification. Surface modification of silica particles was performed by using methoxy polyethylene glycol Silane-2000-and -5000 (mPEG), a type of polyethylene glycol, as the functional groups. A cleaning procedure was performed by sonicating the silica particles in water and ethanol, respectively. Silica particles were dried at 105 • C. Then, silica particles were further washed with piranha solution (4:1 (v/v) H 2 SO 4 : H 2 O 2 ) for an hour at 90 • C to remove the organic impurities. Caution: piranha solution should be handled with extreme care due to its high reactivity. Piranha-solution-treated silica particles were washed with water followed by ethanol. Then silica particles were dried in a stream of nitrogen and stored in a nitrogen environment until further usage. The surface modification of silica particles was performed via a silanization process [54,55]. During the silanization reaction, ethoxysilane moieties available in the mPEG silane (Figure 1) are detached with the coupling reaction and Si-O-Si bonds between Si atoms of mPEG and silica surface are formed [54,56]. The structure of mPEG silane contains −NH and −CO moieties as indicated by the manufacturer (Figure 1). These two moieties and the repetitive ethylene glycol units will remain on the surface of the silica particles upon silanization. It is worthy to note that the number of −NH and −CO moieties compared to repetitive ethylene glycol units are significantly low. The silanization of 1.0 g of dried silica particles was performed in a 3 mM mPEG-silane solution with a working volume of 10 mL toluene [54]. The silanization reaction was performed at room temperature for 18 h under a nitrogen environment. After silanization, the particles were washed with toluene, ethanol, and water twice for each washing steps. The silica particles were then dried in a nitrogen stream and stored in a nitrogen environment until further use.
FTIR Spectroscopy
Surface functionality of bare and mPEG-grafted silica particles was characterized using a Fourier transform infrared spectrometer with an attenuated total reflectance attachment (Shimadzu, Japan). The FTIR spectra were recorded at a resolution of 4 cm −1 over 64 scans in the range of 4000-400 cm −1 .
Elemental Analysis
Elemental analyses of bare and PEG-grafted silica particles were performed using a TruSpec-CHN Micro elemental analyzer (LECO, Saint Joseph, MI, USA) according to the corresponding ASTM procedures (D-5291, E-777, E-778). Briefly, 0.15 g of dried samples were analyzed to determine the total carbon (C), hydrogen (H), and nitrogen (N) content.
TGA
TGA was used to characterize the grafted mPEG groups on the surface of silica particles. A thermogravimetric analyzer, SDTA 851 (Mettler Toledo, Columbus, OH, USA) equipped with STARe data analysis software was used. Approximately 4 mg dried sample was heated under nitrogen flow with a heating ramp of 50 • C/minute from room temperature to 120 • C. The sample was maintained at 120 • C for 3 min to ensure the removal of remaining water/solvent in the sample prior to a new 100 • C/minute heating ramp to 950 • C, where the sample was kept for 5 min. The percent mass loss between 120 • C to 950 • C was calculated for both bare and mPEG-grafted silica particles.
Determination of the Surface Area and Pore Volume of Bare and mPEG-Grafted Silica Particles
The surface area of mPEG-grafted silica particles was determined in order to compare the adsorption capacity of mPEG-grafted silica particles and commonly used polymeric adsorbents without incorporation of the surface area effect. Surface areas and pore volumes of bare and mPEG-grafted silica particles were estimated by nitrogen (N 2 ) gas physisorption analysis with Micromeritics TriStar II PLUS Surface Area and Porosity Analyzer (Norcross, GA, USA). All samples were degassed at 40 • C under a vacuum of 0.05-0.1 mbar for 18 h. N 2 gas adsorption studies were performed at 77 K and in the partial pressure range of p/p o = 0.0001 to 0.99. Surface area and pore volumes were estimated by N 2 adsorption using the Brunauer-Emmett-Teller (BET) method [57].
Analysis of Total Polyphenols
The total polyphenol contents of grape pomace extracts and purified extracts were determined using the Folin-Ciocalteu colorimetric method [58,59]. Briefly, 20 µL of sample was mixed with 1.58 mL water and 100 µL of Folin-Ciocalteu reagent. After 8 min, 300 µL of 20% sodium carbonate solution was added. The mixture was vortexed for 10 s and then incubated at 40 • C for 30 min. The absorbance was measured at 765 nm using ultraviolet-visible spectrophotometer (Shimadzu, Japan). Total polyphenolic contents of samples were expressed as gallic acid equivalents (GAE) in mg L −1 of bulk solution. It is important to note that individual polyphenols can show different gallic acid equivalency [60,61]. Although two different mixtures of polyphenols result in the same GAE, the distribution of the individual polyphenols in a given mixture can significantly differ from the other mixture. Therefore, the composition of the grape pomace extract and recovered polyphenols were further characterized via HPLC.
Batch Adsorption and Desorption of Polyphenols
It is well known that surface functional groups regulate the ability of the adsorbents to capture the molecule of interest. Here, the ability of mPEG grafting to bind polyphenols from grape pomace extract was tested. Bare silica particles were used as a control for evaluating the effect of mPEG grafting on polyphenol recovery from grape pomace extract. The batch adsorption experiments were performed in 2 mL centrifuge tubes loaded with 25 mg dry adsorbents. Prior to adsorption experiments, the bare and mPEG silica particles were prewetted with 1 mL of extraction solvent (70:30 (v/v) ethanol: water) for 2 h. The particles were centrifuged for 15 min at 15,000× g and the supernatant was removed. Then, 1 mL of grape pomace extract (9.1 ± 0.3 mg/mL) was loaded to the prewetted particles. The tubes were shaken in an orbital shaker at 125 rpm for 24 h at 25 • C in dark. After the attainment of adsorption equilibrium, the supernatant was collected and analyzed for total phenolic content. The total phenolic contents of the initial solution and the solution at the equilibrium were measured. The adsorption capacities of the particles were calculated from the difference between the concentrations of initial and final solutions by using Equation (1): where Q e is equilibrium adsorption capacity (mg/g), C e is the equilibrium concentration of solute in bulk solution (mg/L), C 0 is the initial concentration of solute in bulk solution (mg/L), V is the volume of the initial sample solution, and m is the amount of dry adsorbent (g). After acquiring the samples for total phenol analysis, the silica particles were spun and extracts were decanted. Polyphenol-loaded silica particles were washed with 1 mL of extraction solvent (70:30 (v/v) ethanol: water). After vortex mixing, the particles were centrifuged for 5 min at 15,000× g and the supernatant was discarded. Following this wash step, the particles were resuspended in 1 mL of different elution solvents (200 mM NaCl in 30% ethanol, 200 mM sorbitol in 30% ethanol, 100 mM NaCl and 200 mM sorbitol in 30% ethanol, 200 mM NaCl in 70% ethanol, 200 mM sorbitol in 70% ethanol, 100 mM NaCl and 200 mM sorbitol in 70% ethanol, 10-40% (w/v) citric acid in ethanol, 1% HCl in 70% ethanol, 1% NaOH in 70% ethanol, 70% ethanol, 30% ethanol, 10-50% (v/v) PEG-400 in ethanol, 10-50% (v/v) PEG-400 in water, 50% (v/v) PEG-200 in ethanol, 50% (v/v) PEG-200 in water) to recover the polyphenols from silica particles. The tubes were shaken in dark at 125 rpm in an orbital shaker. After 24 h, an aliquot of supernatant was collected and analyzed for the total and individual phenolic content. The desorption ratio of the silica particles was estimated using Equation (2): where D is percent desorption ratio, C d is the concentration of solute in desorption solution (mg L −1 ), V d is the volume of desorption solution (mL), and C 0 and V 0 are the same as described in Equation (1) [62].
Developing the Green Solvents for the Recovery of Adsorbed Polyphenols onto the mPEG-Modified Silica Particles
Since there is strong interaction expected between mPEG ligands and polyphenols, a robust process for the complete recovery of polyphenols is needed. The release of total polyphenols can be facilitated with an elution step. Choosing the right elution solvent for the recovery process is extremely important due to the strict regulations imposed for food and pharmaceutical industry. Therefore, a "green" solvent approach has to be implemented for the complete recovery of polyphenols from silica particles. Twenty-one different solvent systems were tested in this study aiming to achieve a complete recovery. Moreover, tunable cosolvent systems were designed to facilitate preferable recovery of certain types of polyphenols from mPEG-grafted silica particles.
Antioxidant Activity
Antioxidant activities of the polyphenols in grape pomace extract and the polyphenols recovered from mPEG-5000-grafted silica particles were tested using a 2, 2-diphenyl-1-picrylhydrazyl (DPPH) free radical assay [63,64]. Briefly, the activity of each sample was determined by mixing 20 µL sample with 980 µL 6.1 × 10 −5 M DPPH• methanol solution in 2 mL cuvettes. The cuvettes were covered and incubated in the dark for 16 min. The absorbance at 517 nm was measured against a blank by using ultraviolet-visible spectrophotometer (Shimadzu, Japan). Since the total polyphenol contents of each sample vary, the results were normalized to the total polyphenolic content of each sample and expressed as normalized antioxidant activity (inhibition percentage (%)/(mg GAE/L)) in Equation (3).
Analysis of Individual Polyphenols via High-Performance Liquid Chromatography (HPLC)
HPLC analysis of individual polyphenols was performed using an Agilent 1200 series HPLC (Agilent Technologies, Santa Clara, CA, USA) equipped with ChemStation software, a degasser, a quaternary gradient pump, an autosampler, a column oven, and a multiwavelength detector. Separation of polyphenols was achieved using a Poroshell 120 Stable bound SB-C18 column (150 × 4.6 mm, 2.7 µm particle size Agilent Technologies, Santa Clara, CA, USA), with a Poroshell 120-C18 guard column (5 × 4.6 mm, 2.7 µm particle size Agilent Technologies, Santa Clara, CA, USA), operated at 30 • C. The mobile phase consisted of 0.1% formic acid in water (Solvent A) and 0.1% formic acid in acetonitrile (Solvent B). The multiwavelength detector was set to 280 nm. Then, 1 µL of each sample was directly injected after filtering through a 0.2-µm PTFE (Polytetrafluoroethylene) syringe filter. The flow rate was set at 0.3 mL/min. The gradient program started at 0% B and increased to 15% B over the course of 5.25 min and held at 15% B for 5.4 min. The percentage of mobile phase B was then increased to 40% over 26.85 min and kept at 40% B for 1.5 min. The mobile phase was ramped to 100% B in 1.5 min and held at 100% B for 3 min. Then, the gradient program was set to its initial conditions of 0% B within 1.5 min and followed by re-equilibration of the column for 10 min. Individual polyphenolic compounds were identified by comparing the retention times of analytical reference materials. Concentrations of the individual phenolic compounds were quantified using external calibration curves generated for the analytical standards.
Statistical Analysis
All of the experiments were performed in duplicates and data were reported as mean values ± standard deviation. A statistical test was performed to determine the statistical difference between samples investigated using one-way analysis of variance (ANOVA) followed by a pairwise comparison using Tukey test. The mean values were considered to be significant when p < 0.05. The mean of the coefficient of variation (CV) of all measurements in this study was 5.3% indicating that duplicate measurements are sufficient to describe the population of the measurements.
Conclusions
Functionalized silica particles are promising adsorbents for the recovery of polyphenolic compounds from grape pomace. Modifying bare silica surface with mPEG ligands provided a dramatic increase in the adsorption capacity whereas altering the MW of mPEG-grafted on silica surface provided tunability in the adsorption capacity. A complete recovery of polyphenols from mPEG-grafted silica particles was achieved by utilizing PEG-ethanol or PEG-water cosolvent systems. Recovered polyphenols showed up to~12-fold potency in comparison to grape pomace extract as determined by DPPH antioxidant activity. Alternating the cosolvent from ethanol to water in PEG cosolvent systems provided preferable isolation of certain type of polyphenols adsorbed onto mPEG-grafted silica particles. This study demonstrated the suitability of the use of mPEG-grafted adsorbents and PEG cosolvents for the large-scale recovery of polyphenols from grape pomace extract.
Supplementary Materials: The following are available online, Table S1: Individual phenolic compounds of grape pomace extract. | 10,407.6 | 2019-06-01T00:00:00.000 | [
"Chemistry"
] |
Optimization method for human-robot command combinations of hexapod robot based on multi-objective constraints
Due to the heavy burden on human drivers when remotely controlling hexapod robots in complex terrain environments, there is a critical need for robot intelligence to assist in generating control commands. Therefore, this study proposes a mapping process framework that generates a combination of human-robot commands based on decision target values, focusing on the task of robot intelligence assisting drivers in generating human-robot command combinations. Furthermore, human-robot state constraints are quantified as geometric constraints on robot motion and driver fatigue constraints. By optimizing and filtering the feasible set of human-robot commands based on human-robot state constraints, instruction combinations are formed and recommended to the driver in real-time, thereby enhancing the efficiency and safety of human-machine coordination. To validate the effectiveness of the proposed method, a remote human-robot collaborative driving control system based on wearable devices is designed and implemented. Experimental results demonstrate that drivers utilizing the human-robot command recommendation system exhibit significantly improved robot walking stability and reduced collision rates compared to individual driving.
Introduction
Different from conventional terrestrial moving equipment such as wheeled or tracked vehicle, legged robot's track on ground is a series of discrete footprints, and this non-continuous support characteristic effectively increases its adaptability to the uneven road.Legged robots have fully studied from the structural characteristics and movement patterns of legged animals and insects.For example, quadruped robots have drawn inspiration from the musculoskeletal structures of animals like gazelles (Li et al., 2022a), cheetahs (Lei et al., 2022), and mice (Bing et al., 2023), as well as the movement patterns of quadruped animals (Massi et al., 2019).Considering the impressive load-bearing capacity and motion stability of arthropod leg structures, hexapod robots have also borrowed from creatures such Chen et al. 10.3389/fnbot.2024.1393738Frontiers in Neurorobotics 02 frontiersin.orgas cockroaches (Massi et al., 2019), ants (Zhakypov et al., 2019), and lobsters (Shim et al., 2016).In recent years, an increasing number of scholars have fully recognized that hexapod robots, with non-continuous contact points with the ground, can adapt to terrain environments with geometric and physical feature variations.They exhibit high load-bearing capacity and stability, making them an ideal mobile system for outdoor environments.Unlike conventional robots with simple structures, hexapod robots have as many as 18 degrees of freedom in their legs alone.This high level of complexity, especially when carrying out tasks in complex environments, can impose a heavy burden on the operator and significantly reduce the overall motion coordination of the robot.Therefore, conventional approaches to legged robot self-locomotion intelligence on uneven terrain have yielded increasingly complex self-training architectures.Many rely on training locomotion controller by reinforcement learning in simulation, then transplant the training result to real terrain.ETH Zurich's ANYmal is one of the most promising legged systems of this kind (Wangbo et al., 2019).They deployed learning agile and dynamic motor skills for their quadrupedal robot system.Other systems use rapid adaptation training at the robot motors (Choi et al., 2023), which can be successful in 70% of the trials when walking downstairs along a hiking trail (Kumar et al., 2021).
However, research on autonomous intelligent systems for robots in recent years has shown that the emergence and development of artificial intelligence technology has provided many new methods for robot intelligence, greatly advancing the process of robot intelligence.As for autonomous intelligent systems for robots, it is a highly complex control system that integrates various functions such as environmental perception, dynamic decision-making and planning, behavior control, and execution.Due to the lack of human drivers' ability to handle unexpected and imprecise events, the overall intelligence level, flexibility, and adaptability of the system have been greatly limited.This is particularly true for legged mobile robots, as their walking environments are mostly characterized by unknown and rugged complexity, making it difficult for them to rely solely on autonomous intelligent systems.In fact, legged mobile robots often use a human-in-the-loop collaboration approach to accomplish mobility tasks.
Different from early human-robot collaborative methods that required real-time switching of control between humans and robots (Merat et al., 2008(Merat et al., , 2014;;Eriksson and Stanton, 2016), the current mainstream human-robot collaboration method is human-in-the-loop coordination.According to the position of the human operator, it can mainly be divided into two categories: manned shared control and driver remote participation coordination.Among them, the first type, manned shared control, has been widely applied in the fields of intelligent manufacturing and intelligent driving of vehicles.For example, Ma proposed a shared steering controller based on Nash game strategy, considering the differences in human-machine goal consistency (Ma et al., 2019).They used a non-cooperative MPC method to model the interaction path tracking tasks between the driver and the automated system, achieving the correctness of cooperative path tracking control between the driver and the vehicle's onboard intelligent system.Huang proposed a humandriver in-loop coordination/shared steering control framework, applying state space small gain theory to the driver-vehicle coupled system, enabling the onboard intelligent system to work in coordination with the driver to achieve ideal lane-keeping performance (Huang et al., 2019).In addition, manned shared control theory not only enables machine intelligence at the operational control layer (Zhou et al., 2022;Xu et al., 2023) but also starts to share human work at the motion planning layer of robots (Xu et al., 2023).
For the second type of human-in-the-loop collaborative method, namely driver remote participation coordination, it is mostly used for hexapod robots in underwater (Yoo et al., 2016;Picardi et al., 2020), planetary surface (Arm et al., 2023), resource extraction, and other hazardous environments.This is because the mobile operating environment poses risks that make it unsuitable for manned shared control of human-robots collaboration (Si et al., 2022).Li developed a new semiautonomous bilateral control dual-master/single-slave tactile remote operation system for hexapod robots.Through this system, not only was the sharing of environmental haptic information between the robot and the operator achieved, but also the maneuverability and local autonomy of the robot's remote operation system were improved (Li et al., 2022b).Schwarz developed a control system for the rescue robot Momaro that can perform multi-task collaborative processing (Schwarz et al., 2017).By coordinating multiple operators to manipulate the robot, they completed the supervision and control of the entire operation process of the robot.However, the main issue faced by driver remote participation coordination at present is that the status information between humans and robots cannot be timely exchanged, severely limiting the effectiveness of human-robots collaboration.
To address the issue of insufficient flow of status-constrained information between humans and machines, particularly the challenge of robots being unable to perceive drivers' dynamically adjusting collaborative strategies, researchers utilize wearable physiological signal acquisition equipment to detect and assess driver states.For example, by wearing muscle electrical signal acquisition devices to sense and identify drivers' motion intentions, facilitating interpersonal collaborative control (Zhang et al., 2022;Lyu et al., 2023).After obtaining driver status information, Seet determine the required assistance level based on the driver's workload and performance, increasing the involvement of the assistance system when the driver is overloaded or distracted, and reducing the assistance level when the driver's workload is moderate to ensure driving stability and safety (Seet et al., 2023).Nguyen proposed a human-machine collaborative steering control strategy considering driver behavior states (Nguyen et al., 2017).They allocate assistance weights based on the driver's behavior state and use fuzzy control theory to address speed and assistance weight variability issues, reducing human-machine conflicts and enhancing collaborative performance between humans and vehicles.Bueno et al. analyzed the impact of changes in driver cognitive load on human-machine driving authority switching through simulating non-driving tasks, indicating that regardless of the cognitive load size, engaging in non-driving tasks negatively affects the switching of human-machine driving authority due to reduced concentration (Bueno et al., 2016).Additionally, in driver remote participation Frontiers in Neurorobotics 03 frontiersin.orgcollaborative control, the intelligent system interacts with the driver using tactile, visual, and auditory information to stimulate driver focus, while Ji experimentally verified that using tactile seats effectively enhances driver focus during driving, thereby improving safety and smoothness during human-machine driving authority switching (Ji et al., 2011).Forster use voice prompts and warning sounds to alert drivers about upcoming authority switches (Forster and Naujoks, 2017).These methods aim to enhance mutual perception between humans and machines, utilizing perceptual information to promote and assist the emerging trend of remote collaborative control between drivers and robots more effectively.Based on the above discussion, in this paper, we consider how to quantitatively analyze the state constraints between humans and robots in remote control mode, assisting drivers in forming reasonable human-robot collaborative control commands.Especially, we place great emphasis on the geometric motion constraints of hexapod robots in irregular terrains and the fatigue state constraints of drivers.Using these two types of humanrobot constraint conditions, we filter the feasible set of all human-robot collaborative control command solutions.The selected human-robot commands combinations by the driver are then chosen and issued to the robot, greatly reducing the driver's burden and enhancing the safety and efficiency of remote collaboration The remainder of this paper is divided into the following sections: Section 2 proposes the mapping process framework of human-robot decision target values to command combinations.Section 3 quantifies the geometric motion constraints of hexapod robots in irregular terrains and the fatigue state constraints of drivers.Experimental investigations are conducted in Section 4.
2 Method for generating command combinations from human-robot decision target values
Framework of overall process
For robots performing tasks in unstructured terrain environments, the complexity of behavioral decision-making and control by remote operators is a crucial issue that cannot be ignored.In particular, unlike structurally simple conventional wheeled robots, hexapod robots have as many as 18 degrees of freedom.If controlled one by one, it not only imposes a heavy driving burden on the driver but also significantly reduces the overall motion coordination of the robot.During the phase of issuing commands with high control workload, it is particularly necessary to utilize the intelligent system carried by the robot to assist in rapid and efficient command issuance, thereby reducing the workload of the driver.
Our team recorded and summarized the real-time decisionmaking and control processes of highly experienced hexapod robot drivers through a large number of experiments.After summarizing, it was found that both drivers and robot decision intelligence tend to focus on the top-level decision-making of hexapod robot motion behavior, specifically targeting the next moment's target walking distance, walking speed, and walking direction of the hexapod robot, forming decision goal values mutually recognized by humans and machines.Furthermore, the driver or robot intelligence system then decomposes and maps the decision goal values into corresponding specific control commands.In this process, for the driver, instructions are formulated in the brain based on the observed environment and robot state information, as well as driving experience, and implemented through operating external hardware devices; for the robot, theoretical formulas are established based on the robot's kinematic characteristics to autonomously calculate positions and speeds at the bottom execution layer and generate instructions.
Specifically, as shown in Figure 1, this article outlines the main steps in the process from behavior decision goal values to recommended selectable human-robot command combinations as follows: (1) confirming and inputting behavior decision goal values; (2) mapping and calculating all human-robot commands from the decision goal values to form a feasible set of human-robot commands, including all four types of command combinations under humanrobot collaborative modes (driver control, human primary and Hexapod robot physical prototype. machine auxiliary, machine primary and human auxiliary, and robot autonomous mode); (3) filtering the command combinations in the feasible set based on command constraints, which include geometric motion constraints of the robot and driver fatigue constraints; (4) after filtering based on constraints, recommending human-robot commands are output to assist the driver in control.
For example, When a robot is moving in the slop terrain, a command combination generally includes the selection command for the robot's gait type, commands for gait period, step stride, step stroke, and body posture adjustment.Moreover, the commands included in the combination correspond to specific recommended values and the authority for human-robot modifications.Therefore, the primary function of command combinations is to provide the human operator with the types, values, and permissions of recommended commands.Additionally, driver can make real-time modifications to the command online before the robot carries them out.
Hexapod robot motion characteristics
Unlike drivers who rely on experience to generate control commands, machine intelligence needs to establish a kinematic model based on robot motion characteristics to generate control commands.The physical prototype of the hexapod mobile robot is shown in Figure 2, which belongs to a type of insect-like electrically driven multi-legged robot.The robot mainly consists of a body and six legs.The body is hexagonal in shape, with the six legs evenly distributed on each side.Each leg has three degrees of freedom, composed of the coxa segment, thigh segment, and shank segment.The coxa segment is connected to the body via a base joint, the thigh segment is connected to the coxa segment by a hip joint, and the shank segment is connected to the thigh segment by knee joint.The robot's foot is rigidly connected to the end of the shank segment.Each of the mentioned rotating joints is driven by a motor. Σ , and Σ L X Y Z − 3 3 3 represent the global, body and single-leg coordinate systems, respectively.The base joint angle is denoted by α , the hip joint angle is denoted by β , and the ankle joint angle is denoted by γ .The length of the coxa segment is represented by L c The length of the thigh segment is represented by L t , and the length of the is represented by L s .The vertical height from the body's centroid to the ground is denoted by H.The forward kinematics and inverse kinematics models of a single leg of the hexapod mobile robot can be determined by Formula (1) and Formula (2): 3 Quantitative methods for human-robot state constraints
Geometric motion constraints of the robot
In order to ensure the safety of hexapod robots walking in complex terrain environments, it is necessary to impose specific constraints on the generated commands for both the robot and the driver based on terrain features.This article establishes geometric constraint models between terrain and joint space for sloped terrain, obstacle terrain, and ditch terrain, thereby ensuring that the robot's joint motion space remains within a safe range.This includes constraint equations for joint motion based on terrain feature values, resulting in target step stride, step stroke, pitch, and roll angle constraints for the robot's body pose changes.
Specifically, considering that terrain geometry features can greatly impact the robot's joint motion space, the control process of the robot requires real-time monitoring of the joint's safe working space to prevent issues such as joint position exceeding limits, instability and overturning during movement, and body collisions.
In this section, we first utilize the robot's body perception characteristics to establish terrain features, such as estimating the slope of sloped terrain, dimensions of obstacles in obstacle terrain, and the width of ditches in ditch terrain.Subsequently, based on constraints for robot body collision safety, joint limit constraints, and walking safety constraints, a mathematical model for the joint constraints of the hexapod robot is established.Finally, the constraints for the target commands for the robot's body pose changes are obtained based on the constraints imposed by the terrain on the joints.This achieves the necessary rationalization of the feasible command set for human-robot instructions, narrowing the range of recommended commands while improving their rationality.This enhancement ensures that both the driver and the robot intelligence effectively improve the efficiency and safety of controlling the robot's movement using the feasible command set.
Geometric constraint model for sloped terrain
When a hexapod robot walks on sloped terrain, it needs to adjust the pitch and roll angles of its body as well as the step length in real time to adapt to the changing terrain based on the estimated slope of the ground and joint constraints.Specifically, when the robot is traversing sloped terrain, constraints need to be established based on the joint's extreme positions or potential interference between the robot's body and the geometric terrain, in order to obtain constraints for the numerical values of the hexapod robot's motion commands.
Specifically, during the uphill process, in order to ensure the stability margin of the hexapod robot, a uniformly distributed standing method is adopted, as shown in Figure 3.When the terrain slope is steep, the knee joints of the front and rear legs will reach their limit positions.Therefore, by establishing geometric constraints on the height from the body to the slope surface, the geometric relationship between the terrain and the robot's body can be mapped.The geometric relationships between the joints and the ground during the transition phase from flat ground to a slope for the hexapod robot are shown in Figure 3.The height of the front leg base joint position from the slope surface is determined by the knee joint's limit position and the walking step length.The defined limit height of the base joint from the slope surface is denoted as h lim , with the vertical distance being the length of point AB.According to forward kinematic analysis, the limit position γ lim of the knee joint will mainly affect the value of h lim .Based on the limit height of h lim , the limit value of the knee joint position can be determined by the Formula (3): (3) Since the limit position of the knee joint depends on the robot's leg mechanical structure and joint motor limits, it is a fixed value.According to the Formula (3), it can be seen that the limit height of the base joint from the slope surface, the robot's real-time step stride, and the slope angle will determine the real-time position of the knee joint.Considering that the limit height of the base joint from the slope surface is a predetermined value for safety reasons, and the slope angle is also an estimated determined value based on the robot's body perception.Therefore, the real-time step length is an important factor determining the knee joint position in real time.To ensure that the knee joint's limit position does not exceed its maximum set value, the real-time step length must not exceed a maximum limit value, as shown in Formula (4).By establishing the maximum real-time step length for a hexapod robot walking uphill, it can set practical constraints on step length.This will improve the effectiveness of instruction sets used by both the driver and the robot for controlling robot motion, enhancing human-robot interaction during driving.
Geometric constraint model for obstacle terrain
Obstacle terrain is the most common non-flat terrain encountered by hexapod robots in complex outdoor environments.According to the geometric dimensions of the obstacles, obstacle terrain can be divided into two categories: obstacles that can be crossed (obstacle width is less than the leg support width, and obstacle height is lower than the robot's standing height), as shown in Figure 4; obstacles that can be climbed (slope greater than the leg support width, and obstacle height is lower than the robot's standing height), as shown in Figure 5.For obstacles that can be crossed, due to the long lengths of the robot's thigh and shank joints, the robot's standing height can be raised above the height of the obstacle terrain, and a normal walking gait can be used to pass through the obstacle terrain smoothly.For obstacles that can be crossed, when the robot's standing height is greater than the height of the obstacle, the leg posture can be adjusted to achieve a new body standing height.The constraints that need to be satisfied in this state as shown in Formula (5).
Robot in slope terrain.
Where L c represents the length of the hexapod robot's leg base joint; L t represents the length of the hexapod robot's leg tibia joint; L s represents the length of the hexapod robot's leg femur joint; L B represents the width of the hexapod robot's body, w 1 represents the width of a local obstacle in the terrain environment, hrepresents the height of a local obstacle in the terrain environment, andh s represents the safety distance between the hexapod robot's leg base joint and the obstacle.
For obstacles in a climbable form, where the obstacle width is greater than the leg's support width and the body height is lower than the maximum standing height, the legs can step on the obstacle and perform climbing actions.By setting a limit value h s for the distance between the leg base and the obstacle surface, we can determine the limit value η robot for the body's pitch angle and establish a geometric constraint model between joint space and obstacle terrain.When the robot's front legs land on the obstacle surface, the joint motion space of the front legs is limited, requiring adjustment of the body's pitch angle to adapt to the terrain changes.The constraints that need to be satisfied in this state as shown in Formula (6): Therefore, for climbable obstacles, the motion instructions of the hexapod robot adhere to the above constraints, effectively achieving reasonable and effective constraints on the pitch angle for both the driver and the robot intelligence when utilizing feasible instruction sets for robot motion control.This enhances the effectiveness of the feasible instruction set in assisting human-machine interaction during driving and control.
Geometric constraint model for ditch terrain
Based on the different geometric dimensions of the ditch terrain, the ditch terrain can be divided into two categories: ditches that can be crossed in a single step where the width of the channel is less than the robot's single support width; and ridges that can be crossed in multiple steps where the width of the channel is greater than the robot's single support width.For ridges that can be crossed in a single step, where the width of the channel is less than the robot's single support width, the robot can increase its step length to autonomously cross the channel, as shown in Figure 6.The constraints that need to be satisfied in this state as shown in Formula (7): Where λ min represents the real-time minimum step length of the hexapod robot, and w represents the width of the channel.
For ditches that can be crossed in multiple steps, where the width of the channel is greater than the robot's single support width, it is not possible to cross the ridge with a single adjustment.However, the robot can achieve the crossing by making multiple adjustments with its legs.In this case, the supporting legs need to take larger steps, which may lead to situations where the joint reaches its limit position, as shown in Figure 7.The constraints that need to be satisfied in this state as shown in Formula (8): Where γ lim represents the knee joint limit value, λ D represents the real-time dynamic step length of the hexapod robot, and w represents the width of the channel.
Through the above equation, the robot's real-time dynamic maximum step stride can be calculated.When a single leg reaches its maximum step stride and cannot cross the channel, it is necessary to Obstacle that can be climbed.Ditch that can be crossed in a single step.Obstacle that can be crossed.
readjust the positions of each leg and the body, and then retry the crossing.Based on the geometric constraints model of the channel terrain mentioned above, autonomous step adjustment for the robot to cross the channel within the range of leg joint limit positions is achieved, enabling the robot to perform the crossing action.Moreover, when encountering obstacles that cannot be overcome or when the landing area is complex and requires finding a suitable landing point, external visual perception of the robot can be used to model the terrain and detect landing points.By modeling the terrain using external visual sensors and equivalent the robot's envelope range to a virtual body model, obstacle detection and avoidance are carried out based on artificial potential field methods.Furthermore, analyzing the ruggedness of the terrain, terrain height, and the area of safe landing zones based on visual information, an evaluation function for the terrain is established to select landing points, avoiding instability of the robot caused by walking on special terrain.
Therefore, for ditches that can be crossed in multiple steps, the motion command s for hexapod robots should prioritize the constraints mentioned above.This will effectively realize the collaboration between the driver and the machine intelligence when using a feasible command set for robot motion control, providing reasonable and effective constraints on dynamic step length.It enhances the effectiveness of the feasible command set in assisting human-machine collaboration in driving and control tasks.
Driver fatigue constraint
Due to its inherent stability under high load and its ability to maneuver in extreme environments, hexapod robots are more likely to perform tasks in complex environments compared to other types of mobile robots.In order to ensure the passability and safety of hexapod robots in complex and unknown environments, remote operation and control of the robot's motion behavior are often carried out through human-robot collaboration.However, the redundancy of the robot's control degrees of freedom and the complexity of environmental tasks will impose a significant burden on the remote operators.This not only significantly affects the comfort of the operators but also has a detrimental impact on the safety and efficiency of the hexapod robot's movement.
Therefore, it is necessary to assess the driver's fatigue status in real-time to determine the optimal human-robot collaborative control mode, which can then be used to optimize the combined form of control commands for humans and robots.For example, when the driver is not fatigued or only mildly fatigued, the control system can switch to manual control mode, allowing the driver to participate in the position control of the hexapod robot's single leg, foot end, and joints.When the driver is moderately fatigued, the control system can switch to human primary and machine auxiliary mode, enabling the driver to participate in the control of the hexapod robot's body posture and gait parameters while disabling manual control mode.In cases of severe fatigue, the control system can switch to machine primary and human auxiliary mode, where the driver is only required to monitor and intervene in emergency situations concerning the hexapod robot.
As shown in Figure 8 The remote human-machine collaborative driving control system.Chen et al. 10.3389/fnbot.2024.1393738Frontiers in Neurorobotics 08 frontiersin.orgthe normalized signals using a 50-sample moving window.The main formulas and their meanings involved in each process are as follows.
Sub-process 1: Denoising of the original signal by subtracting the mean amplitude of the signal from the signal amplitude within the window, as shown in Formula (9): Where S i 1 ( ) represents the sEMG value with linear noise removed; S i 0 ( ) represents the value of the original sEMG signal; N represents the sampling window size; i represents the instantaneous moment of processing the sEMG signal.Sub-process 2: Full-wave rectification of the signal obtained from process 1, as shown in the formula, as shown in Formula (10): Where S i 2 ( ) represents the amplitude of the sEMG signal after full-wave rectification, ensuring that the amplitude of the abs signal is entirely non-negative; N represents the sampling window size; i represents the instantaneous moment of processing the sEMG signal.Sub-process 3: Using a 4th-order Butterworth bandpass filter to limit the frequency to the range of 30-100 Hz, the signal is processed to remove high-frequency noise through filtering.This mainly involves processing the amplitude of the denoised signal, as shown in the Formula (11): Sub-process 4: Normalizing the sEMG signal obtained from process 3, as shown in the Formula (12): Where S i 4 ( ) represents the signal amplitude, and MVC represents the maximum voluntary contraction strength of the muscle.Sub-process 5: Smoothing the signal after normalization, as shown in the Formula (13): Where S i 5 ( ) represents the amplitude of the signal after processing, and t t i i − −1 represents the time difference value.The sEMG signal obtained through the above five processing steps can directly reflect the changing characteristics of the sEMG signal, including the linear variation pattern of the sEMG signal amplitude.
After the original electromyographic (EMG) signals are collected, it is necessary to preprocess the EMG signals and extract features based on the processed signals.The purpose is to extract components of the EMG signals that can reflect the degree of fatigue.Different degrees of fatigue have their own characteristics, and the more representative the feature selection, the more accurate the pattern recognition.Based on the common time-domain and frequency-domain features of EMG signals and their clinical significance, this study selects four main features-mean power frequency (MPF), median frequency (MF), root mean square (RMS), and integrated electromyogram (IEMG)-to reflect the muscle's fatigue state.
The time-domain features of muscle fatigue can be used to describe the amplitude changes in electromyographic (EMG) signals during the process of muscle fatigue.Calculating the integrated electromyogram (IEMG) and root mean square (RMS) can visually reflect this change.Let y t ( ) represent the preprocessed original EMG signal, the calculation formulas are shown in Formula ( 14) and Formula (15): The frequency-domain features of electromyographic (EMG) signals are obtained by transforming their time-domain signals into frequency-domain signals using Fourier transform, and then analyzing the signal's power spectrum or frequency spectrum.The selected features in this study are median frequency (MF) and mean power frequency (MPF).Let P f ( )represent the power spectral density and d f ( )represent the signal's frequency resolution, the calculation formulas are as shown in Formula (16) and Formula (17): The similarity between the two lies in that the types of sEMG features covered are all included in the four features listed in this paper.The difference lies in the fact that general sports movements have larger amplitude and intensity but are relatively singular.This results in sports-related sEMG features showing large numerical values but being singular in type, usually consisting of 1-2 of the four features.However, during the driving operation of a hexapod robot driver, although the amplitude of movements is not large, the driver's movements are more diverse and of longer duration, generally encompassing all four listed features.It is necessary to comprehensively analyze all four features to determine the driver's fatigue state.In order to establish and utilize a neural network model for real-time assessment of driver upper limb fatigue, this study recorded the electromyographic (EMG) signal data of several hexapod robot drivers while operating the hexapod robot, along with their subjective perception data of upper limb fatigue.A BP neural network was used to correlate the EMG feature data with the upper limb fatigue data.The subjective perception data of upper limb fatigue come from the drivers' self-rated mental fatigue scores, where higher scores indicate higher levels of current mental fatigue.The feature data of the EMG signals, which include various information such as EMG integral value, median frequency, root mean square value, change when the muscles are fatigued, were used as inputs.Participants' comprehensive fatigue values were provided as outputs for model training.The training model includes input layer, output layer, and intermediate layers.A total of 500 sets of data were collected from different participants, with 400 sets chosen for training and 100 sets for testing.The training set includes EMG feature data and drivers' subjective fatigue levels.An intermediate layer was set up, and during the training process, the connection weights and thresholds of each layer were calculated to obtain the network model for predicting driver arm fatigue (DAF).
Since drivers may still persist in operating the vehicle with mental strength when experiencing muscular fatigue, sEMG features may not reflect the driver's mental fatigue at this time.Considering that the driver's mental fatigue can be reflected by operational error rate and trajectory deviation, this paper utilizes the Driver Manipulation Error Rate (DMER) and Trajectory Offset Rate (TOR) to assist sEMG features in determining the driver's fatigue state together.
Specifically, the Driver Manipulation Error Rate (DMER) is used to describe the rate of inappropriate manipulation by the driver when issuing control commands to the hexapod robot.
Specifically, the Trajectory Offset (TO) can be described as the deviation of the actual path traveled by the hexapod robot from the average trajectory while being driven by the driver.Assuming the sampling interval for the distance traveled by the hexapod robot is T, the average speed of the hexapod robot within this interval is v, and the number of samples is N, with the actual position traveled denoted as S, the trajectory offset TO can be expressed as Formula (19):
Experiment
In order to enable drivers to remotely control hexapod robots in a human-machine collaborative manner and validate the effectiveness of the proposed method, this study has built a remote human-machine collaborative driving control system based on wearable VR glasses, EMG armbands, and other human-machine interaction devices.Using wireless network signals, drivers can control the robot from an operating platform 20 meters away.Specifically, considering that the perception and feedback loops between humans and machines constrain the efficiency of human-machine collaborative decision-making, to effectively enhance the depth of human-machine integration, this study processes robot visual camera data and transmits it to virtual reality devices, allowing drivers to experience immersive driving from a firstperson perspective.Additionally, drivers wear EMG armbands on both arms to monitor upper limb fatigue in real time.The system has successfully integrated various control hardware such as multifunctional joysticks, throttle levers, touchscreens, etc., greatly enhancing the driver's sense of presence during remote driving control and enabling better collaborative decision-making tasks with the robot.The remote human-machine collaborative driving control system described above is shown in the Figure 9.
Based on the remote human-machine collaborative driving control system described above, this study conducted physical experiments on integrated terrains with obstacles, gullies, and slopes.As shown in Figure 10, the speed of the robot's center of mass when driven alone through integrated terrains is recorded (green dashed line).In addition, with the assistance of the auxiliary system proposed in this study, the driver navigates through integrated terrains and records the speed of the robot's center of mass (blue dashed line).The auxiliary system can generate a feasible set of instructions in real-time during the robot's travel based on decision-making target values, and, after filtering through constraint conditions, provide real-time recommended human-machine instructions to improve the efficiency of driver instruction issuance.The red solid line in the figure represents the maximum limit value of the center of mass speed prompted by the auxiliary system.It can be observed that from 0 to 18 s, the robot walks on flat terrain, during which the maximum prompted speed limit for the center of mass by the auxiliary system is 0.25 m/s.From 18-60s, the robot moves through obstacle terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system decreases to 0.15 m/s.From 60 to 78 s, the robot once again walks on flat terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system returns to 0.25 m/s.From 78 to 106 s, the robot walks through gully terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system decreases to 0.09 m/s.From 106 to 122 s, the robot walks on flat terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system is 0.25 m/s.From 122 to 170 s, the robot walks on uphill terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system decreases to 0.09 m/s.From 170 to 185 s, the robot walks on flat terrain, during which the prompted maximum speed limit for the center of mass by the auxiliary system returns to 0.25 m/s.In summary, when the driver navigates alone through integrated terrains, there is a slight lag in speed control during transitional phases of terrain changes.However, after adopting the auxiliary system for maximum speed prompts, the driver can promptly adjust the speed before the terrain changes and, knowing the maximum speed limit, can also raise the speed in real-time in a reasonable manner, ensuring travel safety and efficiency.
As shown in Figure 11, the robot's stride when driven alone through integrated terrains is recorded (green dashed line).Additionally, with the assistance of the auxiliary system proposed in this study, the driver navigates through integrated terrains and records the robot's stride (blue dashed line).The red solid line represents the maximum and minimum limit values prompted by the auxiliary system.It can be observed that from 18-60s, the robot walks on obstacle terrain, during which the maximum stride limit prompted by the auxiliary system is 0.1 m and the minimum limit is 0.5 m; from 78 to 106 s, the robot walks on gully terrain, during which the maximum stride limit prompted by the auxiliary system is 0.14 m and the minimum limit is 0.1 m; from 122 to 170 s, the robot walks on uphill terrain, during which the maximum stride limit prompted by the auxiliary system is 0.09 m.In summary, when the driver navigates alone through integrated terrains, there is a slight lag in controlling the robot's stride during transitional phases of terrain changes.However, after using the auxiliary system for maximum stride prompts, the driver can promptly adjust the stride before the terrain changes and, knowing the real-time limits of the stride, can increase it in a reasonable manner in real time, further ensuring travel safety and efficiency.
As shown in Figure 12, the robot's step height when driven alone through integrated terrains is recorded (green dotted line).Additionally, with the assistance of the auxiliary system proposed in this study, the driver navigates through integrated terrains and records the robot's step height (blue dashed line).The red solid line represents the maximum and minimum limit values prompted by the auxiliary system.It can be observed that from 18 to 60 s, the robot walks on obstacle terrain, during which the minimum step height limit prompted by the auxiliary system is 0.05 m; from 78 to 106 s, the robot walks on gully terrain, during which the minimum step height limit prompted by the auxiliary system is 0.01 m; from 122 to 170 s, the robot walks on uphill terrain, during which the The velocity of robot in terrain.
FIGURE 11
The leg stride of robot in terrain.
FIGURE 9
The remote human-machine collaborative driving control system.et al. 10.3389/fnbot.2024.1393738Frontiers in Neurorobotics 11 frontiersin.orgmaximum step height limit prompted by the auxiliary system is 0.06 m.In conclusion, compared to driving alone through integrated terrains, using the auxiliary system for maximum step height prompts allows for real-time reasonable reduction in step height based on the known real-time limit values, further ensuring travel safety and efficiency.
Chen
To further compare the impact of driving alone versus using a driving assistance system on the performance of the hexapod robot, this study utilized two driving evaluation indicators: static stability margin and collision coefficient.The stability was quantitatively evaluated during the robot's travel process using established static stability margin evaluation standards, while the collision count was determined and defined by detecting the pausing of swinging legs.As shown in Figure 13, it can be observed that when the driver utilizes the driving assistance system to remotely control the robot, the average stability is higher than the average stability when driving alone.This is particularly evident when traversing obstacle and uphill terrains, where the robot's stability is significantly higher than when driven alone, as depicted in Figure 14.When the driver utilizes the driving assistance system for remote control, the collision count between the robot and the environment is noticeably lower than when driving alone, especially when traversing obstacle and gully terrains.Further analysis indicates that, compared to human driving alone, a hexapod driver with the assistance of auxiliary systems improves robot stability by 12.5% and reduces the number of collisions between the robot and the surrounding environment by 50%.
Based on the analysis of the experimental results, it can be seen that the assistance of auxiliary systems provides command combinations, shifting the driver's task from making decisions to making choices, effectively reducing the driver's decision-making burden.At the same time, by providing the extreme values of each command, not only can it enhance the safety of robot locomotion, but it also to some extent improves the robot's moving speed and traffic efficiency in complex environments.
Conclusion
The most important achievement in this paper is the development of a novel neural human-robot command combination method for improving the hexapod robot's walking performance and reducing the burden on drivers' control.This article first proposes a mapping process that generates human-robot command combinations from decision target values, focusing the robot intelligence on assisting drivers by generating human-robot instruction combinations.In addition, this article quantifies robot motion geometric constraints and driver fatigue constraints.By using constraints to optimize and filter the feasible set of human-robot commands, a small number of human-machine command combinations are formed.A human-robot command assistance recommendation system is developed to provide real-time recommendations of human-robot command combinations to drivers.The results of the designed experimental platform demonstrate the validity of the human-robot command assistance recommendation system.In the future, considering the situation where both humans and machines have operational authority over the same command combination, we will continue to research humanrobot command fusion based on the game theory.The stability margin of robot in terrain.
Collisions Number
Control with driving auxiliary system The collision numbers of robot in terrain.
FIGURE 4
FIGURE 4 , for the quantitative analysis of driver arm fatigue, this paper designs a framework for quantifying upper limb fatigue.The main process includes: real-time collection of the driver's raw electromyography signals from the upper limbs using a myoelectric armband; preprocessing the raw electromyography signals of the upper limbs using data processing methods to extract feature signals; training a BP neural network using the feature signals and the driver's subjective fatigue values as training samples, thereby ultimately establishing and utilizing a neural network model for realtime assessment of driver arm fatigue.Specifically, in the process of collecting the driver's raw electromyography signals from the upper limbs, considering that electromyography signals are the electrophysiological signals generated when muscle tissue contracts, this paper collects 8-channel electromyography signal data using the gForcePro+ myoelectric acquisition armband.In the preprocessing stage of the raw signals, to improve the accuracy and anti-interference ability of the data, the sampling frequency of the signals is set to 1,000 Hz, and methods such as linear noise elimination, low-pass filtering, and moving average filtering are used to preprocess the original sEMG signals.This stage involves roughly five sub-processes: first, linear noise elimination for DC; second, square rectification of the obtained signals; third, further filtering of the rectified signals using filters; fourth, normalization of the processed signals; and fifth, moving average envelope processing of
FIGURE 7
FIGURE 7Ditch that can be crossed in multiple steps.
FIGURE 8
FIGURE 8 given by the driver in a non-fatigued driving state for a particular terrain, and let I i k ( ) represent the control instructions given in different fatigue states on the same terrain, where k = 1, 2… N and i = 1, 2… n.N represents the number of instruction sets, and n represents the number of different fatigue states.The MER can be expressed as Formula (18):
FIGURE
FIGUREThe leg stroke of robot in terrain. | 10,004.4 | 2024-04-05T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
A Platform to Build Mobile Health Apps: The Personal Health Intervention Toolkit (PHIT)
Personal Health Intervention Toolkit (PHIT) is an advanced cross-platform software framework targeted at personal self-help research on mobile devices. Following the subjective and objective measurement, assessment, and plan methodology for health assessment and intervention recommendations, the PHIT platform lets researchers quickly build mobile health research Android and iOS apps. They can (1) create complex data-collection instruments using a simple extensible markup language (XML) schema; (2) use Bluetooth wireless sensors; (3) create targeted self-help interventions based on collected data via XML-coded logic; (4) facilitate cross-study reuse from the library of existing instruments and interventions such as stress, anxiety, sleep quality, and substance abuse; and (5) monitor longitudinal intervention studies via daily upload to a Web-based dashboard portal. For physiological data, Bluetooth sensors collect real-time data with on-device processing. For example, using the BinarHeartSensor, the PHIT platform processes the heart rate data into heart rate variability measures, and plots these data as time-series waveforms. Subjective data instruments are user data-entry screens, comprising a series of forms with validation and processing logic. The PHIT instrument library consists of over 70 reusable instruments for various domains including cognitive, environmental, psychiatric, psychosocial, and substance abuse. Many are standardized instruments, such as the Alcohol Use Disorder Identification Test, Patient Health Questionnaire-8, and Post-Traumatic Stress Disorder Checklist. Autonomous instruments such as battery and global positioning system location support continuous background data collection. All data are acquired using a schedule appropriate to the app’s deployment. The PHIT intelligent virtual advisor (iVA) is an expert system logic layer, which analyzes the data in real time on the device. This data analysis results in a tailored app of interventions and other data-collection instruments. For example, if a user anxiety score exceeds a threshold, the iVA might add a meditation intervention to the task list in order to teach the user how to relax, and schedule a reassessment using the anxiety instrument 2 weeks later to re-evaluate. If the anxiety score exceeds a higher threshold, then an advisory to seek professional help would be displayed. Using the easy-to-use PHIT scripting language, the researcher can program new instruments, the iVA, and interventions to their domain-specific needs. The iVA, instruments, and interventions are defined via XML files, which facilities rapid app development and deployment. The PHIT Web-based dashboard portal provides the researcher access to all the uploaded data. After a secure login, the data can be filtered by criteria such as study, protocol, domain, and user. Data can also be exported into a comma-delimited file for further processing. The PHIT framework has proven to be an extensible, reconfigurable technology that facilitates mobile data collection and health intervention research. Additional plans include instrument development in other domains, additional health sensors, and a text messaging notification system.
Introduction
With 968 million units sold worldwide, mobile phones accounted for 53.6% of the overall mobile phone sales in 2013 [1]. In the United States in 2013, 56% of adults own mobile phones [2] and 34% own tablets [3]. Because mobile devices have become more prevalent, mobile health care (mHealth) apps will play a growing role for those managing their health concerns [4]. Nielsen's Connected Life Report from November 2013 indicates that approximately 46 million users in the United States have accessed apps in the fitness and health category, an 18% increase over the previous year [5]. Unfortunately, many of the available mHealth apps are not evidence based. For example, a review of 98 smoking-cessation apps found most to have a low level of adherence to proven methods defined by the US Public Health Service's Clinical Practice Guidelines for Treating Tobacco Use and Dependence [6].Therefore, mHealth app development and evaluations should be conducted in collaboration with a health researcher who understands the science and can objectively transfer this knowledge to the app developer. Although researchers desire to build quality evidence-based apps to test the mHealth interventions, app development may impose costs of US $50,000-150,000 [7]. For grant-funded research, this can be a significant fraction of project funds, leaving fewer resources for validation studies and efficacy trials. Personal Health Intervention Toolkit (PHIT) helps address this problem by providing a common platform and reusable content for both development and evaluation.
The PHIT platform was conceived to support the PHIT for Duty research projects addressing secondary prevention of psychological and behavioral health problems in persons experiencing symptoms of post-traumatic stress that had not yet risen to the level of a psychological disorder, such as post-traumatic stress disorder (PTSD). In doing this work, we soon realized that a common process model could be derived for mHealth intervention research and implemented in a way to support cost-effective reuse in other health domains and research apps [8]. We therefore set out to design and develop our PHIT framework, with the following goals: • Creating a common platform from which other mHealth intervention apps can be developed; • Standardizing how data collection instruments and interventions are implemented, fostering reuse from a common cross-study library; and • Masking the complexities of software development to reduce development time and enable researchers to focus on the research aims. This paper describes the PHIT platform and illustrates our high-level programming tool set, which facilitates implementation of mHealth apps through reuse of existing software content and easy development of new content according to study requirements.
Overview
The PHIT architectural model is based on the subjective and objective measurement, assessment, and plan note methodology [9] for health status analysis, intervention recommendation, self-help activities, and data presentation, creating a feedback loop of personalized health ( Figure 1). All data collection, analysis, and planning are performed locally on the mobile device, rather than via Internet services, with secure local data storage. PHIT has the following primary features: • Integrates self-reported and physiological sensor instruments; • Analyzes the data in real time on the device via an intelligent virtual advisor (iVA); • Presents a suite of custom self-help activities and interventions; • Collects data and adjusts the activities and interventions over time, tailoring the app to the individual needs of each user; and • Transparently transfers data to a centralized database for conducting data analysis.
Research studies are supported using data objects that tag data with information on the study, protocol, participant identification, and other related information to facilitate analysis. Data access is facilitated using a website dashboard allowing the researcher to monitor the state of longitudinal studies and download study data into comma-separated files for easy analysis. Furthermore, 90% of the app configuration is done via extensible markup language (XML) making it easy to change the behavior of the app.
Instrumentation System: Data Collection
The PHIT architecture allows the researcher to reuse data-collection instruments or create their own instruments. Using XML, each instrument has a series of forms (screens) composed of data-collection entities, which represent a data point the researcher wants to collect. For example, a user history instrument may ask for age, weight, and gender information on the first screen and vital signs such as blood pressure and body temperature on the second. These two screens are considered forms in PHIT, and age, weight, gender, systolic and diastolic measures, and temperature are entities. Entities represent the data the researcher is collecting and have two facets to them, namely, (1) the internal, logic side and (2) the graphical user interface. If the app is configured for data storage, the user's responses are automatically saved. If desired, a researcher may configure the PHIT app to periodically upload data from the local secure database to a backend server, thereby allowing the researcher to examine the collected data and monitor study progress.
The majority of instruments are data-entry instruments, using familiar entities such as text fields, checkboxes, and selection lists. Many of these implement well-established subjective health screeners and assessment instruments previously administered through paper forms. Others had been developed ad hoc as needed for a particular study, such as the military deployment history instrument for the PHIT for Duty project. These also can be reused, or adapted as needed, to support new studies. PHIT has a library of over 50 such standardized instruments, including the examples in Table 1. In addition to the standard data entry fields, radio buttons, and checkboxes, other entities provided by PHIT include audio playback, charts, date picker, image display, Likert scale, and specialty entities, such as game-like cognitive tests (eg, reaction time). These can be combined in various ways for building both data-collection and intervention instruments. For example, following several alcohol reduction-related screens, a series of slides can be combined with audio narration for constructing an integrated multimedia alcohol education instrument.
Instrument Coding
By coding the entire instrument in XML, the question definitions, codebook responses, data validation, skip logic, and overall flow of the instrument are defined in one place, making it easy to adjust as necessary. The XML definition in Textbox 1 drives the first form from the PTSD checklist instrument used in our "Flight Attendant Wellness" app ( Figure 2).
Using simple descriptive text, the form is made up of a single entity, named Q1, comprising a question with a series of radio buttons for the responses. The codebook values are defined in the code attribute of the item element and both the user text and the code are stored in the database. When the user selects one of the five radio buttons, a variable is created as instrumentName_entityName (ie, PCL_Q1), and the variable, a string type, is set to the selected code attribute.
Forms are not restricted in the number of entities they can contain. For purposes of this paper, the form example was kept simple. Textbox 2 shows a more complex example showing 4 entities on a form and the use of the vertical and horizontal elements to control user interface layout ( Figure 3).
In Figure 4, the second form (F2) is expanded to highlight the different events. Of particular note are the onValueChanged and onValidate events. The event is first sent to the entity where the data change occurred to provide specific entity-level processing, and then passed up to the form itself where the form can look at all the entities collectively. All scripted logic is written with simple commands (Textbox 3).
Logic within the instrument XML file allows the researcher to decide exactly what validation and skip logic to have, to set initial conditions for instrument variables, and to execute code when the instrument terminates, such as calculating an overall score or saving data to the local secure database. With the instrument completely defined in the XML file, it becomes a reusable object to be shared across apps with the same expected behavior. Other than content used by the instrument, no other dependencies are required.
Sensors
In addition to form-based data entry, the PHIT platform can also collect objective data from internal device sensors (eg, global positioning system coordinates) and external Bluetooth sensors (eg, heart rate monitor or fitness accelerometer). In the PHIT for Duty study, where individuals with post-traumatic stress are taught mindfulness exercises for stress reduction, the mobile app uses a heart rate monitor during the mindfulness meditation to calculate heart rate variability (HRV) and graphically show whether the user is achieving a more calm state. This is illustrated in the middle-line chart of Figure 5.
Notice the rise in the middle graph after the onset of meditation as the user goes from a stressful state to a calm state.
Whenever sensor data are acquired (eg, the heart pulse rate) and processed to produce a derived measurement (eg, the HRV index), the PHIT software allows for saving interim data at each stage of data processing. Such storage facilitates verification of data-processing algorithms and supports both reanalysis and alternative analysis of raw data at a later date without repeating the data-collection activities. This facilitates exploratory analyses of mHealth data for determining optimal processing methodologies without the expense and effort of repeated field studies, saving a considerable amount of both cost and time.
Background Tasks
In addition to user-facing instruments, PHIT provides a means for performing background tasks using instruments without a user interface. Some examples are (1) querying the battery state, (2) uploading data, and (3) retrieving the current global positioning system location. These script instruments, because they run in the background, do not have forms or entities but they do have the onInstrumentEnter and onInstrumentExit events for which custom logic can be written. Such tasks can be developed to execute singularly on demand, or execute repeatedly at a specified interval, like every 5 minutes.
Home Screen
Tying it all together, the PHIT platform interprets the instrument definitions and creates the user experience, displaying the appropriate instruments in a task list on the "Home" screen. Attributes of the instrument such as title, description, icon filename, and menu index determine exactly what the user sees and in what order. To reflect a change in state, an XML logic can be written to modify these attributes to highlight changing conditions, alert the user to perform a critical task, or simply change from day to day according to a protocol.
To avoid overwhelming the user with many instruments, instruments can be scheduled. This minimizes burden on the user by cleaning up the user interface and only displaying what is currently relevant. Using the built-in scheduler and the hide and show commands callable from the XML logic, only those tasks appropriate at a certain point in time will be displayed. The PHIT scheduler extends RFC5545, the Internet Calendaring and Scheduling (iCalendar) specification [31], specifically the DTSTART, DTEND, DURATION, and RRULE properties of the VEVENT calendar component. For example, the PHIT scheduler can set an instrument to be scheduled to be on the task list each Friday at 8 am.
Virtual Advisor: Tailoring the App for the Users Based on Their Input
PHIT's iVA is an expert system logic layer where data are analyzed and plans are created in real time on the mobile device. The iVA tailors its analysis so that the help the user receives is personal and timely, with reanalysis occurring as frequently as the researcher wants it to happen (ie, daily, weekly, or monthly) using the PHIT scheduling function. The iVA program modules are used to stratify health assessments (eg, normal, moderate dysfunction) and to prescribe and schedule self-help activities (eg, exercise, meditation, alcohol reduction) according to the evidence-based criteria provided by the mHealth app researcher.
Consider this example in which a person with a sleep disorder is being evaluated using the Pittsburgh Sleep Quality Index (PSQI) instrument. Upon completing the PSQI, the sleep improvement protocol may recommend the following activities whenever the score exceeds a value of 16: • Display a slide show on improving the sleeping environment; • Provide a narrated meditation exercise at bedtime for stress relaxation; and • Schedule the PSQI instrument to run every 3rd day at 8 am for reassessment until a downward trend is established in the PSQI score.
Just like the instrument logic, iVA logic is defined in XML that is organized by domain for easy reuse. Continuing the aforementioned example, the sleep assessment portion of the iVA script is shown in Textbox 4. Planning is also handled within the iVA logic. Continuing the previous sleep example where the PSQI score is above 16, the iVA sets the iVA_sleepRisk variable based on the PSQI_score and tests that condition as defined by the protocol in Textbox 5. The PHIT interventions and activities are implemented using the same scripting and script-processing methods as data-collection instruments; the XML constructs are identical. An intervention might display a series of slides, collect heart rate data during a relaxation exercise, or support behavior modification. For example, a series of images can be combined with audio narration to create an alcohol education module that can be reused across different PHIT apps. Interventions are study and protocol specific and with PHIT's library of reusable interventions (
Logic Processing
The logic written for either instruments or advisor coding is processed by the same logic processor. It supports common programming constructs in a simplified language format. Each logic statement reads like a sentence that ends with a semicolon.
Variables are used to keep track of information in the system, can be accessed globally across XML scripts, and persist until the app terminates. The PHIT naming convention for a variable is <objectName>_<variableName>, which provides a somewhat object-oriented variable naming construct. The PHIT variables are wrapped in "{}" (curly braces) to simplify runtime parsing of the logic code as in Textbox 6. When processing this example code statement, PHIT looks up the value of ptHx_status, compares it with the string new and if they are equal, sets ptHx_status to the string started. The variable ptHx_status refers to the instrument named ptHx (patient history), and the status entity within the ptHx instrument. As no other instrument, or object, may have the same name, the identity of the global variable is assured.
When a variable is evaluated, an attempt is made to determine whether it is a number, and if so, evaluates it as a number. Otherwise, it is treated as a string. If ptHx_age is set to 1, then if ("{ptHx_age}" <= "2") becomes if (1 <= 2). The PHIT logic processor is not rigid; it will automatically convert a quoted number to a real number object when the evaluation is performed, allowing the nonprogrammer who is writing this logic to have more flexibility. Boolean variables remain represented as strings, with "true/false" being the default, designated Boolean values.
A variety of programming statements are supported (Table 3), which are sufficient to meet most programming requirements. These statements are supplemented by the PHIT application programming interface (API), a set of global functions to provide scripted access to frequently used processes such as data conversion, database storage, and retrieval, playing sounds for effects and notifications, displaying pop-up messages, and formatting number variables as strings. Table 3 To analyze historical trends, the API provides query functions to the local database as the following shows: • "findLatest" retrieves the last saved value • "findLatestN" retrieves the last N saved values • "findByDate" retrieves values based on data range (eg, everything between June and September) • "findByPlacement" retrieves values based on placement range (eg, everything between the fourth and eighth save) Should a particularly complex XML script cause performance problems due to runtime compilation, you may recode the XML script in native code as an API function as "isPSQITrendingDownward." Although not common, any native code you write is automatically compiled into your PHIT app when the app is built.
Creating Your Own PHIT App: How the Pieces Fit Together
Unlike most mHealth apps, PHIT apps can be configured to support specific research requirements including multiple-treatment research studies. Using XML configuration files, you further define the app into studies and protocols. Each PHIT study contains one or more protocols and each protocol contains the instruments, virtual advisor, and interventions to be used for that protocol ( Figure 6). In this way, different treatment groups are automatically embodied in the app. With customization at the protocol level, each protocol in a study can have very different instruments, interventions, and virtual advisor; thus, creating a very different app. One example is a study in which Protocol A collects data and runs the virtual advisor to display appropriate interventions based on assessment scores, whereas Protocol B may merely collect data and have neither a virtual advisor nor any interventions.
Data Storage
Instrument-defined entity data are optionally saved in a local database with each record tagged with project id, study id, protocol id, case id, observation id, and a date-timestamp. Although not required, the PHIT platform supports, and strongly recommends that all data be stored using encryption to ensure data privacy. If the database is encrypted, PHIT automatically enforces the use of a password to access the app.
Although apps usually require a single database, the PHIT framework allows for multiple databases, including both create, update, and delete and read-only databases. A read-only database might be a lookup table to support app requirements, such as percentile growth chart for a pediatric wellness check [32].
Data Upload
In addition to the local data store, PHIT provides a data upload capability to a backend database. The following two upload options are available: (1) a user-initiated upload function, which initiates transfer and provides status feedback via an upload progress bar, and (2) a utility instrument for uploading to the server in the background. Because instruments can be scheduled, the mobile app can be configured to initiate a background data transfer on a prescheduled time, such as once a day at 12 am, thereby providing little disruption to the user. Of course, the app must be running on the device for this to occur. For privacy, data are uploaded over hypertext transfer protocol secure (HTTPS) with the option to encrypt the data before being uploaded over HTTPS, providing a doubly encrypted upload. Uploading data to a central server is an optional feature, which is most useful for research studies; however, use of a central data store is not a required element of a PHIT app.
To accommodate the typical software development process, three-dimensional different upload URLs can be specified to mimic the different phases of software development: This has the benefit of not polluting the production database with test data.
When merged into one dataset on a backend data server, data can be visualized, studied, and extracted. Access to the project data is controlled via an access control list to ensure data privacy. By default, PHIT uploads no personally identifiable information (PII) ensuring that all data are deidentified but allowing for data to be reported up to the case id level. However, it is up to the app development team creating the mobile app to ensure that PII data are not misidentified, leading to accidental upload.
User Interface Customization
Mobile app developers need to address differences in screen size and density, and differences across mobile device software platforms. Each platform has a distinct user experience and human interface guidelines. The technology PHIT is built on uses a neutral look and feel across these platforms. The appearance of the mobile app can be changed through the use of cascading style sheets (CSS), custom skins, and icons, giving each app its own unique look and feel (Figures 7-10).
What Skills Do You Need?
With minimal customization of the app, only knowledge of XML and workflow logic is required to implement most mHealth apps. Custom apps require various software engineering skills. Some of these optional skills are as follows:
Overview
To date, we have completed 4 mobile apps and 1 desktop app using the PHIT platform. Several additional apps are in development and a half-dozen are in the concept phase. We have found that the time from concept to completion and the cost of implementation were substantially reduced in the later projects, compared with the initial work attesting to the flexibility of the PHIT platform and the reusability of developed components. Examples of our completed apps are as follows:
PHIT for Duty, a Personal Health Intervention Tool for Psychological Health and Traumatic Brain Injury
PHIT for Duty, deployed on Android devices, includes over 30 psychometric, personal/medical history, trauma exposure, and other data-collection instruments and evaluations [8]. Self-help interventions have been developed for stress, sleep problems, and alcohol abuse, including multimedia health information modules, stress relaxation exercises, and cognitive behavior therapies for sleep and alcohol.
Clinical Decision Support for Cardiovascular Health and Risk Reduction in Children and Adolescents
This app implements data collection, risk assessment, and intervention recommendation requirements of a subset of the Guidelines on Pediatric Cardiovascular Health and Risk Reduction [33]. The mobile app, intended for use by pediatricians aids to facilitate their use of the guidelines in daily clinical practice, is easy to use, and available for both Android and iOS devices.
Pre-Deployment Stress Inoculation Training (PRESIT)
This is a desktop app for training in stress reduction techniques as a preventative measure for reducing incidence of post-traumatic stress in service men and women.
ActiSleep
An app used for collecting research data in a study of sleep habits, sleep quality, and substance use in teenagers. The app includes daily diaries for prebedtime activities, substance use, and sleep quality. It also provides step-by-step multimedia instructions to enable participants to carry out biosample collection (ie, saliva) and facilitate use of a sleep activity monitor, thereby maintaining data quality in these ancillary data-collection processes.
Flight Attendant Wellness
An app providing screeners and education to support the prevention of prescription drug abuse, the federal model drug-free workplace, and the Workplace Prevention Research initiative.
Use of the PHIT framework for your apps does not limit you in how you distribute your apps nor do they require any oversight or verification. You are in complete control in distributing your PHIT app, whether it is made available via a public app store or a private distribution. The PHIT for Duty and ActiSleep apps are both for private research studies with a private distribution. The Flight Attendant Wellness app and the Clinical Decision Support for Cardiovascular Health and Risk Reduction in Children and Adolescents app are in the process of being made publically available and should be in the app stores soon.
Discussion
The PHIT framework has proven to be an extensible, reusable, and reconfigurable technology that facilitates mobile data collection and health intervention research. In addition to specific project requirements to enhance the platform, plans are to grow the library of instruments and interventions, add simple texting service prompting and notification, provide distributed advisor processing on the backend, and improve the Bluetooth layer for access to sensors, including wearable sensors. | 5,980.8 | 2015-06-01T00:00:00.000 | [
"Computer Science"
] |
Ion-selective asymmetric carbon electrodes for enhanced capacitive deionization
With the development of capacitive deionization technology, charge efficiency and electrosorption capacity have become some of the biggest technical bottlenecks. Asymmetric activated carbon electrodes with ion-selective functional groups inspired by membrane capacitive deionization were developed to conquer these issues. The deionization capacity increased from 11.0 mg g−1 to 23.2 mg g−1, and the charge efficiency increased from 0.54 to 0.84, due to ion-selective functional groups minimizing the co-ion effect. The charge efficiency and electrosorption capacity resulting from better wettability of these electrodes are effectively enhanced by grafting ion-selective functional groups, which are propitious to ion movement. In addition, asymmetric deionization capacitors show better cycling stability and higher desalination rates. These experimental results have demonstrated that the modification of the ion-selective (oxygen-containing) functional groups on the surfaces of activated carbon could greatly minimize the co-ion effects and increase the salt removal from the solution. These results have indicated that the ion-selective asymmetric carbon electrodes can promote well the development of deionization capacitors for practical desalination.
Introduction
The water crisis is one of the most threatening issues in the foreseeable future due to the decrease of available fresh water caused by the growing population and environmental pollution. [1][2][3][4] Deionization of brackish water can provide abundant fresh water for humans. Various desalination technologies have been developed to conquer this threat. 1,3,5,6 Capacitive deionization (CDI) as a new technology is more energy-efficient compared to such traditional desalination approaches as reverses osmosis and multistage distillation. [7][8][9][10][11][12][13] The cardinal principle of CDI is the same as for electric double layer capacitors (EDLCs), but with distinct differences. [14][15][16] The main difference is that the solution is uid and ions are removed and stored on the electrodes with the applied voltage (1-2 V) during the deionization process, whereas the electrolyte is immovable in EDLCs and is only for energy storage. 14,17,18 The goal of CDI is the removal of ions from the electrolyte rather than energy storage. 16 According to the principle of CDI, carbon materials are more suitable due to their unique properties. Hence, various carbon materials have been developed by us and other research groups. [19][20][21][22] Most of them show good desalination performance but prepared with a complex procedure, high cost and low-scale yield. 23,24 Among them, activated carbon (AC) has been recognized as the most commercially practical electrode material for CDI due to its higher specic surface area, larger pore volume, better stability, lower cost and mass production. [25][26][27][28] Unfortunately, AC has high surface area but always accompanied with a large amount of disordered arrangement of micropores, which restrict the diffusion of salt ions and mass transfer in desalination process. 25,28,29 Although advanced strategies were proposed to solve this issue including increasing the ratio of mesoporous structure and introducing some hydrophilic groups on the AC electrodes. 25,30 The electrosorption capacity and charge efficiency are still much lower than that expected due to the co-ion expulsion effects.
Although the CDI technology has many advantages, the coion expulsion effect is an unavoidable issue, in which the oppositely charged counter-ions are adsorbed to the electrode and co-ions are repelled by applying voltage to electrodes. 2,31 The adsorption and desorption of salt ions occur at the same time in the deionization process, which result in reducing the charge efficiency (CE, dened as the ratio of adsorbed salt over charge) and increasing the energy consumption. 10 Generally, higher charge efficiency leads to lower energy consumption. However, the charge efficiency of most carbon electrodes in the CDI process is lower than 0.6, which is far less than 1 and limits its large-scale industrial application. 2,4,10 To promote the practical application of CDI technology, it is quite urgent to improve the charge efficiency and reducing energy consumption of the electrodes.
These limitations may be overcome effectively by introducing ion-exchange membranes (IEM) into the CDI. 7,32 The ion-exchange membranes capacitive deionization (MCDI) have ion selectivity which prevents reverse adsorption and prohibits the mobility of co-ions. 33 The movement of counter-ions is free and the co-ions are prohibited in the IEM. 34 It will minimize the co-ions expulsion effect and increase CE and salt removal efficiency. It has been demonstrated that the CE of MCDI or revised-MCDI is even up to 0.9. 32 However, there are two disadvantages limiting the commercial application of MCDI. One is that the price of IEM is very high and the other is the high contact resistance caused by the inferior contact adhesion between the CDI electrodes and IEM. 31 The electrode with graing ion-selective groups through covalent bonds is analogous to the MCDI. It can simplify the equipment and overcome the disadvantages of MCDI, which is more economical. Inspired by the MCDI, carbon electrodes with graing ion-selective groups have attracted great interest of the CDI technology. 2,31,[35][36][37] Previously, we designed and prepared a novel ion-selective 3D graphene electrode to overcome the co-ions expulsion effect. 31 However, the construction of ion-selective 3D graphene suffered from high production cost, long-time treatment and only suitable for lab-scale fabrication. 31 It is noted that most of asymmetric CDI were designed and prepared to overcome the co-ions expulsion effects, in which only one pair of asymmetric electrode were studied in a low concentration and small volume of NaCl solution for laboratory research. 31,[35][36][37] The CDI for practical applications is still challenging.
Hence, asymmetric AC electrodes have been designed and graed by sulfonic groups and amine groups on the surface of AC through covalent bonds as a cation-selective electrode and an anion-selective electrode respectively for enhanced capacitive deionization. The NaCl aqueous solution with an initial concentration of 1000 mg L À1 in a total volume of 400 mL was pumped to the cell from beginning to end in this work. Ten pairs of asymmetric electrodes are assembled with sulfonic AC as positive electrodes and aminated AC as negative electrodes. The scheme of co-ion minimization is illustrated in Fig. 1. The ions are selectively absorbed on the surface of asymmetric AC electrodes even before applying the voltage. Ions ux into and store in the oppositely charged electrode pores with electrostatic attraction aer applying voltage. Ion-selective functional groups can prevent reverse adsorption and prohibit the mobility of co-ions similar to MCDI. As a result, the electrosorption capacity and charge efficiency of asymmetric AC electrodes have been signicantly improved because of not only preventing coion expulsion effect but also promoting the wettability and accelerating salt solution inltration. These results will be benecial to solve the technical bottlenecks and accelerate the practical engineering application of CDI.
Experimental details
Preparation AC (TF-B518) was supplied by Shanghai Sino Tech Investment Management Co. Ltd and modied with diluted nitric acid. Sinopharm Chemical Reagent Company provided other chemicals. DI water was used in the whole experiment process.
The preparation of ion-selective function AC was according to the literature with some improvements. 31 For anion groups graing, 10.5 g of sulfanilic acid was dissolved to the NaOH aqueous solution in an ice bath. 65 mL of icy hydrochloric acid was added to the above solution for 15 min. Then, 30 mL of sodium nitrite was added to the above solution slowly for 30 min. Next, the diazosalt of sulfanilic acid was obtained and added slowly to the AC dispersion for 3 h. The products were collected by centrifugation, washed several times with deionized water and absolute ethanol, and dried at 60 C for 12 h. The obtained sample was labelled as S-AC. For cation groups graing, 2.5 mL of 3-aminopropyltriethoxysilane was dispersed to 1000 mL of acetone dispersion of AC and then evaporate all the acetone at about 70 C. The obtained samples was labelled as N-AC. The corresponding electrodes preparation. The carbon ingredient and PVDF were uniformly mixed in N-methyl pyrrolidone with ratio of 85 : 15 in a vacuum mixer. Then the mixture was pressed onto graphite sheets with a mass of 1.0 g and a size of 7.0 cm  11.0 cm  0.01 cm. The corresponding electrodes are obtained aer dried at 40 C overnight.
Electrochemical and desalination experiments
Cyclic voltammetry (CV) was actualized in a 3-electrode system using a CHI 660D, which include an S-AC (or N-AC or AC) as the working electrode, a graphite sheet as the counter electrode, and a saturated calomel electrode as the working electrode.
The electrosoption performance was conducted in electrosorption cell. The electrosorptive cell includes 10 pairs of electrodes and each pair electrodes separated by an inert spacer. Four different CDI cells were assembled for comparison: (1) AC versus N-AC, which is cathode and anode correspondingly; (2) AC versus S-AC, which is anode and cathode respectively; (3) N-AC versus S-AC, where is anode and cathode correspondingly; (4) AC versus AC, which is symmetric combination. Aqueous sodium chloride solution with fresh concentration of 1000 mg L À1 and total volume of 400 mL was pumped to the cell from beginning to end. The conductivity of NaCl solution was detected by a conductivity meter to reect the concentration changes.
Characteristics
The surface prole of the obtained materials is determined by SEM images and shown in Fig. 2. As seen from Fig. 2a and d, both S-AC and N-AC exhibit well connected and irregular network-like porous architectures. It indicates that the morphology and structure of AC are mainly maintained aer modication. The EDS mapping (Fig. 2b-f) conrms that S and N element are uniformly scattered on the whole surface of the S-AC and N-AC correspondingly. It well proved that the ionselective groups are effectively graed on the surface of AC, which should be benecial to electrosorption.
Pore features and specic surface area are represented by N 2 sorption isotherms ( Fig. 3a and b). Generally, the surface area of AC includes micropore surface area and the external surface area. 29 Fig. 3a shows the N 2 adsorption isothermal of AC, N-AC and S-AC. According to the IUPAC classication, all the samples exhibit a typical type I, which indicate that the presence of relatively large micropore in the frameworks. [35][36][37] The BET specic area decreased from 2759 m 2 g À1 for AC to 1090 m 2 g À1 for S-AC and 696 m 2 g À1 for N-AC, especially, the specic area of N-AC is the lowest because the graed groups may increase the total weight of the samples and decrease the specic surface area correspondingly. Although the decrease of the specic surface area of S-AC and N-AC, the wettability of S-AC and N-AC are greatly increased due to the hydrophilic -SO 3 À and -NH 3 +groups on the surface, which can result in full contact between the salt solution and electrodes, accelerate salt ion inltration, and enhance the deionization performance. 2,31 The presence of ion-selective functional groups of the AC, N-AC and S-NC are conrmed by the FT-IR spectra (Fig. 3c). The peaks at around 3500-3400 cm À1 and 1750-1600 cm À1 exist in all the samples, which are assigned to O-H stretching and CO asymmetric stretching vibration, respectively. The characteristic peaks at around 1150-1050 cm À1 and 650-575 cm À1 in the The surface compositions of asymmetric AC was detected by XPS. As displayed in Fig. 3d, the N-AC sample shows an obvious peak at $400 eV assigned to N 1s, and the atomic percentage of N element was about 4.39%. Other samples shows no obvious N 1s peak. The S-AC sample shows a notable peak at 167.5 eV of S 2p. From the inset picture, the atomic percentage of S element was about 0.28%. In addition, no obvious S 2p peak was detected even aer the signal amplication for others. All the results can prove that the ion selective groups have been successfully graed on AC.
The dynamic contact angle measurements are carried out to better understand the function of modication on the wettability. The contact angle changes over time as presented in Fig. 4. The contact angle of AC (105 ) is higher than N-AC (100 ) and S-AC (95 ) when water drops on the electrode surfaces as soon as possible. The contact angle of the N-AC and S-AC are decreased faster with time increasing. Then, contact angles of AC, S-AC and N-AC are down to 35 , 10 , and 20 at the contact time about 90 s. It should be pointed out that N-AC and S-AC have lower contact angle, which indicates that the hydrophilic -SO 3 À and -NH 3 + groups on the surface increase the electrode's wettability. It is consistent with the above XPS and FT-IR results.
Electrochemical properties
The CV was usually employed to evaluate the electrosorption ability of electrodes. 38 All the CV curves show typical capacitorlike characteristics with no obvious oxidation/reduction peaks in the selected voltage range (Fig. S1 †). It manifests that the capacitive behavior of all samples result from EDLC assigning to the coulombic interactions. 39 But the shape of CV curves is This journal is © The Royal Society of Chemistry 2018 slightly deviated from rectangle because of the inherent resistivity of the salt solutions. 40 Generally, the larger area of CV curves indicates higher specic capacitance under the same condition. It needs to note that the CV curve of N-AC and S-AC electrodes exhibit much larger area as compared with that of AC, suggesting a higher specic capacitance of N-AC and S-AC electrodes. The specic capacitances of AC, N-AC and S-AC electrodes are 39.3 Fg À1 , 44.6 Fg À1 , and 57.9 Fg À1 according to the eqn (S1), † respectively. The enhanced capacitances can be attributed to the graed -SO 3 À and -NH 3 + groups. The CV curves at 5 mV s À1 are also carried out (Fig. S1b †). It can be seen that the shape of CV curves are relatively rectangular. Generally, the lower scan rate, the higher specic capacitance. The CV curves at 10 mV s À1 in a 1000 mg L À1 NaCl solution are seriously deviated from the relatively rectangular shape (Fig. S1c †). The area of the CV proles are smaller, indicating a lower specic capacitance.
Desalination performance
In order to investigate the inuence of ion-selective groups of asymmetric activated carbon on deionization performance, four sets of different CDI cells were evaluated for comparison. Four different enhanced CDI cells include three sets of asymmetric electrodes and one set of symmetric electrodes: (1) AC versus N-AC, which is cathode and anode correspondingly; (2) AC versus S-AC, which is anode and cathode correspondingly; (3) N-AC versus S-AC, which is anode and cathode correspondingly. As mentioned above, these three pairs of electrodes work as asymmetric cells. (4) AC versus AC, which is a symmetric combination. The CDI performance of four different cells were carried out and shown in Fig. 5a. The salt adsorption capacity (SAC) is a useful and insightful performance indicator of the electrodes materials itself and doesn't change with any other cell component under the given experimental conditions. 10 Fig. 5a displays the SAC variation along with the desalination time. Obviously, the SAC of all samples increase rapidly at the beginning 10 min, suggesting that Na + and Cl À are absorbed onto the anode and cathode electrodes as soon as the external voltage is applied. With the time going by, the SAC increase slowly in the 10-60 min, indicating that most of ions have been absorbed onto the electrodes. The SAC keeps nearly smooth aer about 60 min, demonstrating that the electrosorption equilibrium time is approximate 60 min.
As calculated, the SAC of asymmetric electrodes were 23.2 mg g À1 , 20.1 mg g À1 and 16.5 mg g À1 for the S-ACkN-AC, S-ACkAC, N-ACkAC cells compared with the symmetric ACkAC cells (11.0 mg g À1 ) at 60 min as the external voltage applied. The corresponding Regone plot SAR (salt adsorption rate) versus SAC (according to the eqn (S2) and (S3) †) are provided in Fig. 5b. The plot curves of the S-ACkN-AC electrodes are in the top and right region compared with ACkAC electrodes in the down and le region, indicating that the S-ACkN-AC electrodes have higher SAC and faster SAR. The results can be attributed to the graed ion-selective -SO 3 À and -NH 3 + functional groups, which can increase the electrostatic interaction force for the counter-ions and repel co-ion to move to the opposite electrode. The hydrophilic -SO 3 À and -NH 3 + groups on the surface increases the electrode's wettability and heighten the interaction between salt solutions and the electrodes' surface. 36 That is why asymmetric electrode have higher electricsorption performance than symmetric electrodes. Generally, for the given electrodes material, the CDI performance mainly depends on the operating conditions such as the cell voltage and ow rate. 10,[41][42][43] The CDI performance of symmetric and asymmetric electrodes at different cell voltages were also investigated ( Fig. 6a and b). When the voltage is zero, there is nearly no electro sorption. It can be obviously seen that the electrosorption capacity increase sharply as soon as the external voltage is applied and then reach the electrosorption equilibrium aer about 60 min when the voltage increase from 0.4 to 1.4 V. There is no hydrolysis of water observed during the desalination process due to the intrinsic resistance in the whole circuit. 29 Too higher voltage is not selected because of too higher voltage leading to the hydrolysis of water. The SAC of the S-ACkN-AC electrodes increased from 8.9 mg g À1 to 23.2 mg g À1 , when the voltage increased from 0.4 V to 1.4 V. However, the symmetric ACkAC electrodes only increase from 4 mg g À1 to 11.0 mg g À1 . The SAC of asymmetric S-ACkN-AC electrodes are higher than that of symmetric AC-AC electrodes ( Fig. S2a and b †) at any voltage. The higher cell voltage can provide stronger electrostatic interaction which results in a higher salt adsorption.
The experiments at different ow rates were also performed ( Fig. 6c and d). The SAC increase with the ow rate increasing from 20 to 40 mL min À1 . The highest SAC is achieved at ow rate of 40 mL min À1 for both S-ACkN-AC and ACkAC capacitors. This may be due to that more salt ions can be provided for the adsorption at higher ow rate. The SAC of asymmetric S-ACkN-AC capacitors at each ow rate are higher than that of symmetric ACkAC capacitors. All the results indicate that the asymmetric S-ACkN-AC capacitors have enhanced desalination performance compared with symmetric ACkAC capacitor ( Fig. S2c and d †).
The repeated deionization-regeneration experiments of S-ACkN-AC and ACkAC capacitors were also evaluated (Fig. 7a). The Na + and Cl À ions are absorbed onto the electrode surface by electrostatic attraction during the charging process. The adsorbed Na + , Cl À ions can be effectively and rapidly back to the solution when the applied potential was reversed. As shown in Fig. 7a, the solution conductivity of S-ACkN-AC capacitors drops more quickly under the applied voltage, and it is more rapidly restored to the initial conductivity once the voltage is reversed compared with ACkAC capacitors. To nish one deionizationregeneration cycle, it takes about one hour for S-ACkN-AC capacitors, but two or three hours for ACkAC capacitors.
The desalination decline of S-ACkN-AC capacitors did not appear even aer 10 cycles. However, the desalination performances tend to decline for ACkAC capacitors aer nishing each deionization-regeneration cycle. The perfect regeneration of the S-ACkN-AC capacitors can be contributed to the enhanced electrostatic adsorption which successfully prohibits co-ion effect by graing ion selective functional groups.
The CE is a powerful tool to evaluate charge utilization and energy consumption. [44][45][46][47][48][49][50][51] The higher CE means the lower energy consumption during the deionization process. 45 The current response curves of S-ACkN-AC and ACkAC capacitors are shown in Fig. 7b. The CE of the S-ACkN-AC capacitors is calculated to be 0.84 which is higher than that of ACkAC (0.54) according to the eqn (S4) † and indicating lower energy consumption in this work. It is also higher than most of that reported in literatures (Table 1). This is attributed to following reasons: (i) the expelling of co-ions are blocked and cannot leave the electrode regions through the additional electrostatic adsorption by the ion selective and charged functional groups, and consequently, the charge efficiency of asymmetric electrodes is effectively improved. (ii) The improved surface wettability by graed ionselective -SO 3 À and -NH 3 + functional groups is benecial to the EDL formation and thus ensured the greater and faster desalination. The higher CE means lower energy consumption during this the deionization process.
Conclusions
In summary, we prepared asymmetric activated carbon electrodes with ion-selective functional groups. The prepared electrodes have higher charge efficiency (0.84) and higher electrosorption capacity (23.2 mg g À1 ) compared to pristine AC electrodes (0.54 of charge efficiency and 11.0 mg g À1 of electrosorption capacity) due to the wettability and hydrophilicity effectively enhanced by graing ion-selective functional groups. Asymmetric AC electrodes show the better cycling stability. These experimental results have demonstrated that the modi-cation of the ion-selective functional groups on the surfaces of AC could reduce the co-ion effects effectively and improve the salt removal efficiency from the solution. This will open the new opportunity to development of capacitive deionization technology for practical desalination.
Conflicts of interest
There are no conicts of interest to declare. | 4,965.8 | 2018-01-09T00:00:00.000 | [
"Chemistry"
] |
Seabed morphodynamics of a coastal lagoon of the Gulf of California
A two-dimensional, vertically integrated, non-linear numerical model was applied to investigate the Urias coastal lagoon’s (URCOL) tide-driven currents, bed load sediment transport, and seabed morphodynamics. The coastal body of water, located on the eastern side of the Gulf of California, includes the Mazatlán harbour, the most important port on the Pacific Mexican coast due to the relevant activities like the heavy vessel traffic. URCOL also includes an extensive aquaculture infrastructure at the lagoon head. The tidal hydrodynamic modelling revealed the mixed character of tides predominantly semidiurnal in the lagoon. The numerical computation at the harbour entrance showed an ebb-dominant tidal distortion. The distribution patterns of the erosion and accretion rates are consistent with the convergent and divergent character of the vectors of sediment transport rates. The sediment accretion has been predicted mainly in the middle part of the channel, right where the channel starts curving, changing the alignment of the lagoon. The tidal hydrodynamics, sharp topographic gradient, and geometric features of the lagoon seem to determine the location of accretion and erosion areas. A mixed character of predominantly semidiurnal tides and an ebb-dominant tidal distortion were predicted in the lagoon. The erosion and accretion areas are consistent with the sediment transport vectors predicted by the model. A non-linear numerical model study determined that tidal hydrodynamics and morphodynamics influence sediment transport in the lagoon. The sediment accretion-erosion areas mostly occur in the middle part of the lagoon. Sediment exchange took place occur between the lagoon and the Gulf of California; the predicted net bed load sediment transport was seaward. A mixed character of predominantly semidiurnal tides and an ebb-dominant tidal distortion were predicted in the lagoon. The erosion and accretion areas are consistent with the sediment transport vectors predicted by the model. A non-linear numerical model study determined that tidal hydrodynamics and morphodynamics influence sediment transport in the lagoon. The sediment accretion-erosion areas mostly occur in the middle part of the lagoon. Sediment exchange took place occur between the lagoon and the Gulf of California; the predicted net bed load sediment transport was seaward.
Introduction
Tides in the Gulf of California play a significant role in the circulation and vertical mixing thereby influencing the coastal lagoon dynamics [1]. The Urias coastal lagoon (URCOL), which includes the Mazatlan harbour, is relevant for the economic development of the northwestern Mexico. The harbour is the outlet overseas and for cargo from several states of Mexico. The ecosystems of the lagoon constitute substantial fishery grounds and excellent conditions for aquaculture; in fact, the lagoon provides a nursery ground for post-larval shrimp [2]. Since most of these grounds are exposed to accretion or erosion threats caused by tidal hydrodynamics or meteorological events, understanding the hydrodynamic regime and sediment transport processes in the URCOL is critical for environment protection and preservation. To achieve this, it is important to know the sediment transport characteristics and erosion and accretion patterns within the lagoon, as well as the mechanisms that force morphological changes. Eventually, the generated information inherent to the URCOL morphodynamics may help solve accretion or erosion problems, and to conserve this invaluable coastal ecosystem.
Regardless of the relevance of URCOL, little is known about the sediment transport processes of this subtropical coastal lagoon. To our knowledge, there are just a few studies on the transport of sediments in the Urias Coastal Lagoon. One was conducted only in the URCOL entrance [3], and the other was about bottom sediment distribution within the lagoon [4]. Montaño-Ley et al. [5] investigated the hydrodynamics and pollutant dispersion in the URCOL. Cardoso-Mohedano et al. [6] implemented a hydrodynamic-biogeochemical model to evaluate the dispersion, under specific tidal conditions, of nutrient discharges from a semi-intensive shrimp farm during spring and neap tide.
In a broad geographical context, in recent years, relevant studies have been addressed to investigate the sediment transport and the morphodynamics of coastal lagoons: Carbajal et al. [7] investigated the tide-driven bed load transport of sediments in the shallow coastal lagoon of Yavaros, located in the southeastern part of the Gulf of California, Mexico. Petti et al. [8] studied the morphodynamics of the Marano and Grado lagoon in Italy using a two-dimensional horizontal (2DH) morphological-hydrodynamic model.
Ahmed-Syed et al. [9] evaluated the morphodynamics of Miani Hor, a coastal lagoon of Lasbela, Balochistan, Pakistan. Mengual et al. [10] investigated bed load transport in the Seine Estuary (France). In the literature, few investigations exist concerning bed load sediment transport and the morphodynamics in coastal lagoons. Hence, it is imperative to investigate this issue.
The main objective of this research was to investigate the sediment transport and the seabed morphodynamics of the URCOL. It has been emphasized the investigation of the influence of the bathymetry, tidal asymmetries, geometry of the lagoon in the tidal hydrodynamics, and the mechanism that forces the bed load sediment transport and sea bottom morphodynamics of the lagoon. This investigation may contribute to the better understanding of similar coastal lagoons in a broader geographical context.
The principal characteristics of field velocities were computed to analyze the mechanisms involved in the lagoon's sediment transport and bottom morphodynamics processes. A two-dimensional non-linear hydro-dynamical-finite difference, semi-implicit model like that described by Backaus [11] and implemented by Carbajal [1], was applied. Van-Rijn [12] parameterized the bed load sediment transport. Also, we incorporated the sediment conservation equation into the two-dimensional hydrodynamic model to determine the seafloor morphodynamics. The seabed erosion/accretion areas have been delineated, providing a conceptual understanding of the sub-aquatic geomorphology development of URCOL.
Study area
The Urias coastal lagoon (23° 11′ N-106° 22′ W) is adjacent to Mazatlán City, Sinaloa State, is located on the western coast of Mexico in the Gulf of California (Fig. 1). URCOL has been classified by Lankford [28] as a coastal lagoon with an internal platform barrier with an L shape. It has a surface area of 18 km 2 and a length of 17 km. A mixed tide dominates this coastal lagoon with an average range of about 1.0 m [13]. Montaño-Ley et al. [5] reports a maximum tidal velocity of 0.6 m s −1 in the navigation channel and predicted tidal ranges of 1.2 m under the spring tide. Soto-Jiménez and Páez-Osuna [14] described estuarine behaviour during the rainy season in this water body but anti-estuarine in the dry season. Its complex morphology causes abrupt changes in the tidal current speed along the lagoon, of 0.91 m s −1 in the harbour area, 0.83 m s −1 in the intermediate area, and 0.31 m s −1 in the upper area [6]. According to Alvarez-Leon [15], the average annual rainfall in the area is 800 mm, the average annual surface temperature is 25 °C, and the monthly average temperature ranges from 19.7 °C in February to 28 °C in August.
Because of limited freshwater discharges, mainly through Jabalies creek (site b in Fig. 1) during the rainy seasons, the salinity is usually high throughout the year. The average annual salinity has been reported to have a value of 34 PSU (practical salinity unit), a maximum during the drought season (39.4 PSU) and a minimum during the rainy season (31.7 PSU) [6]. Water depth varies between 1 and 3 m except in the navigation channel, which is up to 13 m. Prevailing winds are associated with weather systems from the NW [5]. Occasionally, tropical storms migrate along the Pacific Coast of Mexico from the SW, striking Mazatlán city. Confined by two breakwaters, the Mazatlán Harbor is within the Urias Coastal Lagoon. Between the breakwaters (inlet), a water depth of up to 13 m allows the exchange of lagoon and ocean waters and the ship to enter safely the harbour through a navigation channel extends for about 3 km inside the coastal lagoon.
It is essential to emphasize the spatial scales of some specific locations in URCOL, including the mouth of the Infiernillo creek (located approximately 4 km from the inlet and 12 km from the head of the lagoon, with 4 m in water depth), the inlet (13 m in water depth and 300 m wide), and head of the lagoon (2 m in water depth and 400 m wide). The middle part of the URCOL is about 2 km wide and consists of a narrow channel and a wide shallow area (less than 2 m in water depth).
Model description and model set-up
The model area of the URCOL comprises a matrix of 81 × 144 elements representing the coastal lagoon's bathymetry. The mesh spacing used was ΔL = 75 m. All data sets are referred to as the mean low water level datum, MLW.
We applied a two-dimensional semi-implicit non-linear model like that described by Backaus [11] and Carbajal [1]. It considers only the effect of tidal hydrodynamics in a homogeneous sea. It has been successfully used to calculate the propagation of tidal waves in regions like the North Sea, Indonesian Archipelago, and coastal lagoons of the Gulf of California [5,16,17]. The model, based on the vertically integrated Navier-Stokes equations of motion for shallow water and the continuity equation, is described as: where x and y are the horizontal space variables, t is the time, U is the water transport (m 2 s −1 ) in the x direction, V is the water transport (m 2 s −1 ) in the y direction, is the sea surface elevation (m); A H is the horizontal turbulent exchange coefficient (m 2 s −1 ), f = 2Ω sin is the Coriolis parameter, Ω = 7.29 × 10 −5 s −1 is the angular velocity of the earth, is the latitude (~ 23°12ʹ N in these calculations), H is the depth (m); g is the gravitational acceleration (ms −2 ); and r is the friction coefficient.
The system was forced at the open boundary by seven tidal constituents, each one of the form: where 0 is the amplitude, the phase and the frequency of the forcing wave. The tidal amplitudes and phase data were available from Carbajal (1993). The values of the tidal harmonics K 2 , S 2 , M 2 , N 2 , K 1 , P 1 , and O 1. The hydrodynamic model applied in the present investigation was calibrated by Montaño-Ley et al. [5] with water level data recorded with a tidal gauge placed at site T (the location has been shown in Fig. 1b). The tidal elevation observations were used to calibrate the model (Fig. 2a). In addition, the observed and predicted current velocities have been shown in Fig. 2b, c, respectively. The current velocities observations have been carried out with a Sensor Data Current meter deployed at site b of the URCOL (Fig. 1). More details of the hydrodynamic model can be seen in Carbajal (1993). A friction coefficient r = 0.003, previously used by Montaño-Ley et al. [5] in the Urias coastal lagoon was also applied in this research.
The erosion-accretion patterns of sediments, i.e., the changes of the sea bottom, were determined by the sediment transport divergence: where H is the water depth, and ⃗ S b is the sediment transport. A review of approaches to long-term modeling of coastal morphology is given by De Vriend et al. [18]. The seabed shape in our URCOL model is updated at every time interval, and the current and sediment transport patterns are recomputed using the new bathymetry. The water elevation over the reference level was imposed at the open ocean boundary using tidal predictions. (1) The depth-integrated tidal velocities calculated through the 2D hydrodynamic model were used as input data to calculate the sediment transport through Eq. 7. The transport parameterized in flow models as a function of the depth-averaged velocities is given by Huthnance [19], Hulsher [20], Schuttelaars and De Swart [21], and Carbajal and Montaño [22]. The volumetric sediment flux (bed load transport) in the active layer, given by Van Rijn [12], is: Here | ⃗ u | is the magnitude of the velocity vector, u cr is the critical velocity for erosion, which for fine sand is 0.3 m s −1 . The term s is the ratio of densities of sediment and water [23]. The diffusive term measured by k * is a bed slope correction term, which models the preferred downhill transport of sediment. The typical value of b = 3 and k * = 2 was chosen in this work.
The parameter s is a function of the sediment properties (Eqs. 7-8), ρ s is the sediment density (2,650 kg/m 3 ) and d 50 the mean sediment diameter. The value of s was calculated for d 50 = 170 microns, according to Eqs. (8) and (9) [12,24,25] that define s as: Sediment properties are fundamentals for obtaining the value of s. Very often, the assumption of homogeneous grain size is considered to study sediment transport in coastal lagoons or coastal areas [26,27]. According to this, and with existing data of URCOL indicating that the dominant sediment in the study area is fine and medium sands, homogeneous grain size values of 170 μ were used to run the sediment transport model [14]. The water density is given by ρ.
Results
This study investigated the tidal hydrodynamic and seabed interaction processes that generate the bed load sediment transport producing areas, either of erosion or accretion, focusing on the seabed morphodynamics. We have gained an insight into the bed load sediment transport in the study area for fine sand (170 μ) and the seabed sedimentation and morphodynamics controlled by tidal hydrodynamics.
Numerous experiments were performed to investigate the hydrodynamic and bedload sediment transport in the Urias Coastal Lagoon. The experiments consisted of running a modified 2D hydrodynamic finite differences model for the actual topography of the seabed (Fig. 1b). The output data consist of the depth-integrated velocity components, which are used as input for the parameterization of the bed load sediment transport module. After their discretization, the bed load sediment transport and the mass conservation equation for sediment were solved numerically to obtain the seafloor changes. The sediment transport computation did not include the effect of orbital velocities generated by the gravity waves or the wind effect.
Because the critical threshold velocity should be exceeded by the tidal flow to start the transport of sediments, the knowledge of patterns of instantaneous velocities is relevant. The spatial distribution of the instantaneous tidal currents at two different tidal stages in the URCOL is shown in Fig. 3.
In general, the magnitude of the velocities in the navigation channel is large and undergoes attenuation as it travels away from the main channels. In the upper part of the lagoon, at the head, tidal currents are small, hardly reaching 0.10-0.30 m s −1 in magnitude. The tidal prism is discharged into the ocean during the ebb (Fig. 3a) and the water from the Ocean enters through the gap between the breakwaters into the lagoon during the flood (Fig. 3b). After leaving the main channel, the intensity of velocities becomes weaker due to bottom friction in the shallower area of the lagoon. In this location, the channel turns east, presenting the most significant tidal velocities.
The predicted tidal signals at four different locations inside the URCOL have been included in Fig. 4. The tidal elevation and the horizontal components of the x-direction, u, and in the y-direction, v, reflect the diurnal-semidiurnal mixed character of the tidal signal. Near the lagoon entrance at point A (see Fig. 1a for the position of the points), v component of the velocity is overriding, tidal currents are aligned along the principal channel, and u component close to zero. At point B, the two components of the velocity are out of phase by 2 and v is slightly larger than u. The calculated duration of rising and fall of the tide revealed in the sea surface elevation a signal distortion of ebb-dominant type, i.e., a longer rise and a short tidal fall. These few time series provide an idea of the complex behavior of the tidal flow in the shallow and relatively small URCOL. Bathymetry control these flows. The time series of the components (u, v) of the horizontal velocity vector show a considerable distortion of the tidal signal, indicating a complicated behavior of acceleration processes. The duration of falling and rising tidal velocities varied for the different flats and channels. At sites C and D (see Fig. 1a for the position of the points), the y component of the tidal velocities v is almost zero, and the u velocity component is also tiny.
The instantaneous bed load sediment transport fields for different tidal stages: t = T/4, t = T/2, t = 3 T/4, and t = T have been displayed in Fig. 5. For the tidal stages, t = T/4 and t = T/2 prevail in the ebb conditions, and the bed load sediment transport is toward the lower part of the lagoon. The bed load transport reverses direction for the flood conditions when t = 3 T/4 and t = T and sediment moves toward the upper part of the lagoon. The order of magnitude of the sediment transport rate is 10 -5 m 3 m −1 s −1 .
The net sediment transport distribution field after one tidal period (M 2 ) has been plotted in Fig. 6. The highest sediment transport rates were predicted in the channel's middle part, right where the channel starts curving, changing its alignment. The time series of cumulative sediment transport at point P in the y-direction has been shown for 60 M 2 tidal periods in Fig. 7a. The units are m 3 per m of transversal bottom length or m 2 . It may be observed that the y component keeps growing negatively, indicating the outflux of sediment from the lagoon into the Ocean. It is worth mentioning that between the tidal periods 9 and 18 and between periods 34 and 48, the cumulative sediment transport is not growing. These intervals correspond to the interval of neap tides (Fig. 7b) when the tidal range is relatively small (approximately 0.65 m) and the critical velocities to move sediment are below the threshold values. After 60 M 2 tidal periods, the cumulative sediment transport has reached 1.0 × 10 −1 m 2 . This implies that has occurred a net sediment transport to seaward of 1 × 10 -1 m 3 per depth meter. Figure 7c shows the computed net bed load sediment transport at the inlet of URCOL. The outcome of the computed time series indicates a seaward net transport of sediment toward the Gulf of California. The seaward net transport reflects the difference between the imported and exported bed load transport in about a year. For extended period, expected changes in sediment transport rates or reverse in the net transport direction might be expected.
Bed erosion and accretion patterns of sediment were determined from the divergence of the bed load sediment transport. The patterns in URCOL related to the vertical changes of the seafloor have been displayed in Fig. 8. The magnitude of the vertical changes of the bottom of the coastal lagoon is shown in 2 (Fig. 8a), 4 (Fig. 8b), 6 (Fig. 8c), and 8 (Fig. 8d) months of calculation. Changes of about 2 × 10 −2 m in the sea bottom have been predicted Fig. 7 a Cumulative bed load sediment transport in the y direction predicted (m 2 ) at point P (see in Fig. 1a). b Tidal elevation (m) at point P. c Cumulative bed load sediment transport through the inlet in m 3 (see in Fig. 1b) 1 3 in this process. After 1.5 years of numerical simulation of sediment transport and applying a constant threshold velocity (0.3 ms −1 ) and friction coefficient (0.003), the areas of noticeable vertical changes on the seafloor have been revealed in the middle section (MS) of the lagoon (see Fig. 1b for location). The affected areas are larger, and the bottom changes are about 5 × 10 -2 m. The influence of the lagoon's L-shape alignment on the bed's vertical changes is displayed in Fig. 9 (see square A).
Discussion
The hydrodynamics and the induced sediment transport in coastal lagoons are essentially modulated by the astronomical ocean tides, the geometry of the lagoons, and sediment availability [28]. The energy available to transport the sediment is provided by tidal forcing, which in Urias coastal lagoon, according to the results of the present study, can generate tidal current speeds close to 1.0 m s −1 at the central section (MS) of the lagoon. Most of the URCOL is very shallow (less than 2 m water depth) with extensive flat areas (Fig. 1b), and the amplitude of the M2 tidal component to the mean water depth of the lagoon exceeds 0.1. Under these circumstances, a depth-integrated two-dimensional numerical model is justified in studying the tidal hydrodynamics at the URCOL. The model implemented was barotropic because it considers only the effect of the tidal hydrodynamics without density effect due to salinity and temperature changes. URCOL receives only negligible freshwater inputs. Because of this, gravitational effects are minimal. According to Aubrey [29], when river inflow is absent, "sedimentation in coastal lagoons primarily depends on the reworking of material within the lagoon, the influx of sediment through the tidal inlet from the ocean, runoff from marginal areas, and biological sources (i.e., marsh grass and shell material)". Therefore, without river influx, as in URCOL, the tidal velocities (Fig. 3) will govern most of the sediment transport regardless sediment transport occurs as bedload or suspended. Hence the flow velocity pattern in URCOL controls much of the sediment transport. We predicted the highest sediment transport rates (Fig. 5) in the middle part of the channel (MS) where the fastest tidal currents, the maxima velocity gradients, and morphological changes of the seafloor occurred.
Two principal factors indicate whether a coastal lagoon or estuary undergoes an asymmetry of the tidal signal [30]. A lagoon dominantly characterized by channels develops an asymmetry typified by a longer falling tide. As the tide propagates, the asymmetry increases with greater friction and a time-variable channel cross-sectional area. If channels and flats characterizes the coastal lagoon, it can produce a longer rising tide if the area with tidal flats is large enough to overcome the effect of channels. According to the above discussion, the effect of channels is overruled in the URCOL lagoon by the tidal flats. Consequently, it produces an asymmetry with a longer rising tide (Fig. 4), i.e., it is an ebb-dominant coastal lagoon. The present investigation reveals that the URCOL contains both characteristics-a channel mesh and extensive flat areas.
Montaño-Ley et al. [3] investigated the sediment exchange between the ocean and URCOL. Based on observations of water-sediment concentration and currents velocities measured at the lagoon entrance, they suggest the advection of sediment in suspension from the lagoon to the adjacent Ocean. The authors also found high suspended sediment concentrations in the upper part of the lagoon. The outcome of present investigation indicates moderated exchange of sediment between the lagoon and the Gulf of California through the inlet. The net bed load sediment transport calculated through the gap between the two breakwaters, shown by the computed time series, resulted in low and seaward transport toward the Gulf of California (Fig. 7). The time series also indicated that in URCOL, lower low water, follows higher high water. This condition usually produces maximum velocities during ebb tide. The low seaward net transport computed does not mean, in any way, low exchange between the coastal lagoon and the Gulf of California but just reflects the difference between the imported and exported bed load transport in about a year. On a longer time, expected changes in sediment transport rates or even an inflection or reversing in the net transport direction might be expected since the rate of export and import of sediments exhibit semi-annual, monthly, and fortnightly oscillations.
Other investigations carried out in coastal lagoons of the region of the Gulf of California like Topolobampo [16] and Yavaros [7], indicate exchange of sediment between the lagoon and the Ocean through the inlets. Both lagoons are exposed to similar tidal conditions than the Urias Coastal Lagoon. The net bed load sediment transport, as in the present investigation, was seaward toward the Gulf of California. In the case of Topolobampo it was also found that the largest bedload sediment transport rates and seabed morphodynamics took place in a narrow passage that connect the northern and southern part of the lagoon. Similarly, in Urias coastal lagoon the more intense sediment transport took place in a narrow area (400 m wide) located in the middle section of the lagoon. In this section also developed the most intense bottom morphodynamics (Figs. 8 and 9), The geometry of the Topolobampo and Urias Lagoon seemed to affect the sediment dynamics of both lagoons. In the case of Yavaros, as the lagoon has a more regular shape, it does not seem to exist any relation between geometry and sediment dynamics.
In other regions of the world like Miani Hor (a coastal lagoon of Lasbela, Balochistan, Pakistan) as in the present investigation, the tidal hydrodynamics is the main forcing function of the sediment transport. According to Ahmed-Syed et al. [9], the rise and fall of the tide in the adjacent sea generates horizontal flow of water into and out of the channel that in turn causes fluctuation of water level of the lagoon. As in the present investigation, wave condition within the lagoon and adjacent sea indicates that offshore deep-water waves do not penetrate the lagoon due to the single narrow inlet. Strong tidal currents generate most of the erosion and accretion within and out of the lagoon.
The bed load sediment transport according to Eq. (6) [12] depends on sediment properties. Seabed sediment in URCOL was assumed to be non-cohesive of uniform grain size. A review of the scientific literature on this subject revealed that strong assumptions had to be made in most sediment transport modeling. A frequent assumption is to consider a uniform grain size in coastal lagoons or coastal areas to study sediment transport [26,27]. On these bases, and with existing data of URCOL indicating that the dominant sediment in the study area is fine and medium sand, uniform grain size values of 170 μ were selected to run the sediment transport model. Using a uniform grain size in the model probably leads to an overestimate of the transport in some areas of the lagoon where the presence of coarser sediment limits the actual transport [26].
A seabed area's net accretion or erosion rates depends on the balance in the entering or leaving sediments. The erosion and accretion patterns were obtained from the bed load sediment transport divergence. Seabed will be eroded if the value of the divergence is positive, whereas sediment will be accreted if the value is negative. The distribution patterns of the erosion and accretion rates are consistent with the convergent and divergent character of the vectors of sediment transport rates. The accretion of sediments has been predicted, mainly shoals on the middle part of the lagoon right where the channel starts curving. The predicted erosion areas were usually very close to where sediment was accumulated, shaping some pattern, as was already expected from the sediment conservation equations.
A seabed area's net accretion or erosion rates depends on the balance in the entering or leaving sediments. The erosion and accretion patterns were obtained from the bed load sediment transport divergence. Seabed will be eroded if the value of the divergence is positive, whereas sediment will be accreted if the value is negative. The distribution patterns of the erosion and accretion rates are consistent with the convergent and divergent character of the vectors of sediment transport rates. The accretion of sediments has been predicted, mainly shoals on the middle part of the lagoon right where the channel starts curving (Figs. 8 and 9). The predicted erosion areas were usually very close to where sediment was accumulated, shaping some pattern, as was already expected from the sediment conservation equations.
The bedload transport in the Seine Estuary (France), according to Mengual et al.
[10] appears as a non-dominant but relevant contributor to the sediment dynamics of the Seine Estuary causing, as in the present investigation of the URCOL lagoon, erosion-accretion patterns, especially over shoals. The sediment flux and the resulting patterns of erosion and accretion look usually similar between suspended load and bedload dynamics. Petti et al. [8] studied the morphodynamics of the Marano and Grado lagoon in Italy, by means of a two-dimensional horizontal (2DH) morphological-hydrodynamic model. They found that most of the erosion areas were concentrated in tidal flats and marshes. In the present investigation, the narrow middle section (MS) of the lagoon (see Fig. 1b for location) showed the largest vertical changes of the seabed of about 5 × 10 -2 m. The influence of the lagoon's inverted L-shape as well as the shallow water (2-6 m) in the narrow (400 m wide) middle section (MS) of the lagoon, seemed to be relevant for the increase of the bottom morphodynamics, either erosion or accretion (Fig. 8) in this part of the lagoon.
This investigation provides an insight into the local seabed geomorphic changes in the Urias Coastal Lagoon. In a broad geographical context, comparison of the present investigation with other coastal lagoons of the Gulf California and around the world, shows that tidal asymmetries were determinant of the net bedload sediment transport direction though the inlets, and that the tidal hydrodynamics and geometry of the lagoons (like narrow channels) play an important role on the location of the intense bed load sediment transport and accretion-erosion areas within the lagoons.
Conclusions
The bedload sediment transport induced by the tidal hydrodynamics and the consequent erosion and accretion process in Urias Coastal Lagoon in the Gulf of California coast was investigated, providing insight into the local seabed geomorphic evolution.
The instantaneous tidal currents predicted in the URCOL showed high spatial and temporal variability. This investigation provides a first approach of the complex behavior of the tidal flow and induced bedload sediment transport in the URCOL. These flows appeared controlled by bathymetric features. The instantaneous velocities patterns indicate that the magnitude of the velocities in the navigation channel is large and undergoes attenuation as it travels away from the main channels. At the head of the lagoon, the tidal currents are relatively minor in magnitude. The location where the channel turns to the east, resembling an inverted L, presented the largest tidal velocities.
The highest sediment transport rates were predicted in the middle part of the lagoon, right on the place where sharp topographic gradients occur and the channel starts curving, changing the lagoon alignment. The sediment pattern suggests sediment exchange between the lagoon and the Gulf of California. The net movement of bedload sediment transport caused by the tidal hydrodynamics resulted in being seaward. The time series indicated that in URCOL, lower low water follows higher high water. This condition usually produces maximum velocities during ebb tide. Ebb velocities affect the flushing sediments from the coastal lagoon into the Gulf of California. The calculations indicate that a relatively stable bed prevails nearby the head of the lagoon.
The distribution patterns of the erosion and accretion rates are consistent with the convergent and divergent character of the vectors of sediment transport rates. The sediment accretion was predicted mainly in the middle part of the channel, right at the place where the channel starts curving to the SW.
Noticeable vertical changes (0.01-0.04 m) in the seabed level of the lagoon were found in the middle part of the channel, where the sediment transport gradients were also more significant. | 7,403.2 | 2023-03-24T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Dependence of Creep Performance and Microstructure Evolution on Solution Cooling Rate in a Polycrystalline Superalloy
It is well known that the solution cooling rate has a great effect on the creep life of superalloys. In this research, three typical cooling rates were applied to generate different distributions of γ’ precipitates for creep tests. Ingots used to make specimens were manufactured by hot extrusion, and the master alloy had the composition of an FGH4096 power metallurgy superalloy. SEM and SESD were used to observe the microstructure’s evolution. The experimental results show that the fastest cooling rate corresponds to the highest creep life as well as the smallest rupture strain, and vice versa. The microscopic observations disclose that with an increasing cooling rate, the size and area fraction of γ’ precipitates decrease, and the rupture mechanism changes from transgranular to intergranular. Moreover, some γ’ precipitates changed to cuboid after the creep test. The results will provide new technological processes to design more creep-resistant, nickel-base superalloys.
Introduction
Polycrystalline nickel-base superalloy is widely used for fabricating the turbine discs of aero-engines, owing to its superior tensile strength, creep resistance and fatigue crack propagation resistance at elevated temperatures [1][2][3][4].These excellent mechanical properties are closely related to the grain-size distribution and γ' precipitates [5][6][7].It is widely accepted that the hub portion typically suffers from higher stress and lower temperatures compared to the rim, leading to tensile strength, low-cycle fatigue cracks and propagation resistance being the main considered factors in designing the microstructure of turbine discs.Recently, turbine-entry temperatures have been further increased, and the in-service working temperature of turbine disc components has been raised accordingly to improve the operation efficiency of jet-engines, resulting in the design optimization of microstructures and mechanical properties [3].Previous results have documented that creep resistance becomes the paramount concern under higher temperatures, i.e., 700 • C to 900 • C [3,5].
In general, the creep property can be remarkably improved by the optimal heat treatment, which entails controlling the grain size and γ' precipitates.It is widely reported that the solution's cooling rate governs the attributes of γ' precipitates [8][9][10][11][12].A study on the Udimet 500 Ni-base superalloy indicates that size, shape and the area fraction of γ' precipitates are largely influenced by the cooling rate following partial solution treatments, and the increased cooling rate reinforces alloy strength but decreases ductility [9].In other research, the morphology and size of γ' precipitates can be greatly changed by the cooling rate [11,12].According to Caron, the area fraction of γ', the precipitate size, the composition and the coherency strains due to the lattice misfit are the four main aspects that Metals 2018, 8, 4; doi:10.3390/met8010004www.mdpi.com/journal/metalsinfluence creep properties [8].Bhowal et al. (1990) found a deformation mechanism transition when the inter-spacing of precipitates equals to 50 nm in Rene 95 superalloys [13].In addition, there are many researchers adopting different cooling rates at the end of solution heat treatments to investigate the interaction between dislocation and γ' precipitates [14,15].However, the technological process of powder metallurgy superalloys is long and complicated, which usually includes powder making, hot isostatic pressing (HIP) and isothermal forging (ISF).In this research, we investigate an experimental alloy which has the composition of FGH96 powder metallurgy superalloys, but is made by casting and hot extrusion, a relatively short process.To formulate a better heat treatment scheme, different solution cooling modes are selected to see the influences on the microstructure and creep properties.After the creep test, γ' precipitates are investigated near the fracture.
Experimental Procedures
The master alloy was prepared by vacuum induction melting and cast into an ingot with a diameter of 83 mm.The ingot was then coated with a stainless-steel container and annealed for 8 h at 1100 • C. The hot extrusion temperature was 1100 • C and the reduction area ratio was 4:1.The nominal composition is presented in Table 1.To ensure homogeneity, we found an annular region to manufacture the specimens, and the selected area is marked by red-dashed lines in Figure 1a.The grain of the extruded material is shown in Figure 1b, with the average size measured to be 5.22 µm.The cylinder samples with a size of 11 mm in diameter and 80 mm in length were then cut from the extrusion ingot using electric discharging machining.The heat treatment consisted of solution-based heat treatments and aging treatments.The solution treatment was at 1150 • C for 1 h, followed by three different cooling modes: furnace cooling, air cooling, and oil quenching to room temperature.Finally, all the samples were aged at 780 • C for 8 h, then air cooled to room temperature.During the heat treatment, the temperatures were strictly controlled to be within ±5 • C. To measure the cooling rate, a thermocouple was attached to the specimen to monitor the temperature.The cooling rates were determined by averaging the temperature between the solution temperature and 650 • C.
After the heat treatment, the samples were fine-machined to be 68 mm in total length, 27 mm in gauge length and 5 ± 0.1 mm in diameter.Figure 1c shows the sample drawing.All creep tests were conducted in air at 750 • C/650 MPa to failure.Sample temperatures were controlled with about ±1 • C precision, using three K-type thermocouples in the upper, middle, bottom positions, respectively.After creep tests, the fracture and small slices near the fracture were cut off to observe the microstructure.
The γ' precipitates were observed by the scanning electron microscope (SEM, Quanta 650, FEI) (FEI, Hillsboro, OR, USA) through a series of mechanical grinding, polishing and chemical etching with a solution of 10 mL H 2 O, 10 mL acetic acid, 10 mL HNO 3 , and three drops of HF.To remove the residual stress during grinding and polishing, vibration polishing was applied for about 8 h after the standard metallographic procedure for the EBSD (Electron Backscattered Diffraction) observation.A field-emission SEM (FEI, Hillsboro, OR, USA) equipped with an EBSD detector and Channel 5 software (5.11.20405.0,Oxford Instruments, High Wycombe, UK) was used to measure the average grain size.
Results and Discussions
Heat treatment is important in the manufacture and repair of gas turbine blades [11].The purpose of the heat treatment is to optimize the microstructure and its properties in superalloys.The experimental results and discussions that follow are to show the influence of the solution cooling rate on the microstructure and its creep properties.
Microstructure
Cooling curves and the corresponding γ' precipitations are presented in Figure 2. It shows that the γ' precipitates are strongly dependent on the cooling rate.The cooling rate of furnace cooling, air cooling and oil quenching are measured to be 8 °C/min, 307 °C/min and 2029 °C/min, respectively.In the context of furnace cooling, the average size of the γ' precipitates is calculated to be 185 nm and their morphologies are irregularly-shaped, occasionally presenting butterfly-like features.As the cooling rate is increased, such as air cooling, the average size of the γ' precipitates decreases to 64.5 nm and their morphologies tend to become spherical, due to more undercooling.When the specimen is oil-quenched, the sizes of γ' precipitates further decrease and their average value reaches 22.5 nm.The morphologies of γ' precipitates are nearly-spherical.These results are in good agreement with other studies about the impact of cooling rates on the γ' precipitation [11,16].The area fraction of γ' precipitates, as determined from the micrographs, was 37.7%, 29.7% and 20.7% for furnace cooling, air cooling, and oil quenching, respectively.The relationship between the solution cooling rate and average size as well as the volume fraction of γ' precipitates are presented in Figure 2b.
Results and Discussions
Heat treatment is important in the manufacture and repair of gas turbine blades [11].The purpose of the heat treatment is to optimize the microstructure and its properties in superalloys.The experimental results and discussions that follow are to show the influence of the solution cooling rate on the microstructure and its creep properties.
Microstructure
Cooling curves and the corresponding γ' precipitations are presented in Figure 2. It shows that the γ' precipitates are strongly dependent on the cooling rate.The cooling rate of furnace cooling, air cooling and oil quenching are measured to be 8 • C/min, 307 • C/min and 2029 • C/min, respectively.In the context of furnace cooling, the average size of the γ' precipitates is calculated to be 185 nm and their morphologies are irregularly-shaped, occasionally presenting butterfly-like features.As the cooling rate is increased, such as air cooling, the average size of the γ' precipitates decreases to 64.5 nm and their morphologies tend to become spherical, due to more undercooling.When the specimen is oil-quenched, the sizes of γ' precipitates further decrease and their average value reaches 22.5 nm.The morphologies of γ' precipitates are nearly-spherical.These results are in good agreement with other studies about the impact of cooling rates on the γ' precipitation [11,16].The area fraction of γ' precipitates, as determined from the micrographs, was 37.7%, 29.7% and 20.7% for furnace cooling, air cooling, and oil quenching, respectively.The relationship between the solution cooling rate and average size as well as the volume fraction of γ' precipitates are presented in Figure 2b.Typical grains are shown in Figure 3.The grain sizes are inhomogeneous, with big grains surrounded by multiple small ones.The statistics show that the average grain sizes are 11.3 μm, 12.2 μm, 11.8 μm for the furnace cooling, air cooling and oil quenching specimens through collecting more than 1000 grains.This indicates that the solution cooling rate has no remarkable influence on the average grain size.We can conclude from the microstructure analysis that after the heat treatments with different solution cooling modes, grain sizes remain unchanged, but the size and area fraction of γ' precipitate change considerably.
Creep Results
Based on three types of specimens under different cooling rates, creep tests at 750 °C/650 MPa were carried out to monitor the high-temperature behavior.For each cooling rate, two specimens were tested.Figure 4a shows the creep curves, and it is worth noting that the curves of furnace cooling specimens are not complete.The relationship between cooling rate and the creep life, as well as rupture strain are plotted in Figure 4b.In the case of air cooling and oil quenching, a three-stage creep can be clearly observed in Figure 4a, and the air cooling specimens showed early onset of tertiary creep.In addition, the steady-state creep was short compared to the oil quenching specimens.For the furnace cooling specimens, there were no obvious first and second stages.We can see from Figure 4b that the creep life significantly increased with an increasing cooling rate.The failure life of the oilquenched specimen exceeds 80 h and the failure strain is less than 1.5%, while the furnace cooling specimen failed within 32 h and the rupture strain observed is more than 7%.Through comparing this data, it is observed that the microstructure of smaller γ' precipitates benefits creep performance.Typical grains are shown in Figure 3.The grain sizes are inhomogeneous, with big grains surrounded by multiple small ones.The statistics show that the average grain sizes are 11.3 µm, 12.2 µm, 11.8 µm for the furnace cooling, air cooling and oil quenching specimens through collecting more than 1000 grains.This indicates that the solution cooling rate has no remarkable influence on the average grain size.We can conclude from the microstructure analysis that after the heat treatments with different solution cooling modes, grain sizes remain unchanged, but the size and area fraction of γ' precipitate change considerably.Typical grains are shown in Figure 3.The grain sizes are inhomogeneous, with big grains surrounded by multiple small ones.The statistics show that the average grain sizes are 11.3 μm, 12.2 μm, 11.8 μm for the furnace cooling, air cooling and oil quenching specimens through collecting more than 1000 grains.This indicates that the solution cooling rate has no remarkable influence on the average grain size.We can conclude from the microstructure analysis that after the heat treatments with different solution cooling modes, grain sizes remain unchanged, but the size and area fraction of γ' precipitate change considerably.
Creep Results
Based on three types of specimens under different cooling rates, creep tests at 750 °C/650 MPa were carried out to monitor the high-temperature behavior.For each cooling rate, two specimens were tested.Figure 4a shows the creep curves, and it is worth noting that the curves of furnace cooling specimens are not complete.The relationship between cooling rate and the creep life, as well as rupture strain are plotted in Figure 4b.In the case of air cooling and oil quenching, a three-stage creep can be clearly observed in Figure 4a, and the air cooling specimens showed early onset of tertiary creep.In addition, the steady-state creep was short compared to the oil quenching specimens.For the furnace cooling specimens, there were no obvious first and second stages.We can see from Figure 4b that the creep life significantly increased with an increasing cooling rate.The failure life of the oilquenched specimen exceeds 80 h and the failure strain is less than 1.5%, while the furnace cooling specimen failed within 32 h and the rupture strain observed is more than 7%.Through comparing this data, it is observed that the microstructure of smaller γ' precipitates benefits creep performance.
Creep Results
Based on three types of specimens under different cooling rates, creep tests at 750 • C/650 MPa were carried out to monitor the high-temperature behavior.For each cooling rate, two specimens were tested.Figure 4a shows the creep curves, and it is worth noting that the curves of furnace cooling specimens are not complete.The relationship between cooling rate and the creep life, as well as rupture strain are plotted in Figure 4b.In the case of air cooling and oil quenching, a three-stage creep can be clearly observed in Figure 4a, and the air cooling specimens showed early onset of tertiary creep.In addition, the steady-state creep was short compared to the oil quenching specimens.For the furnace cooling specimens, there were no obvious first and second stages.We can see from Figure 4b that the creep life significantly increased with an increasing cooling rate.The failure life of the oil-quenched specimen exceeds 80 h and the failure strain is less than 1.5%, while the furnace cooling specimen failed within 32 h and the rupture strain observed is more than 7%.Through comparing this data, it is observed that the microstructure of smaller γ' precipitates benefits creep performance.Research on U720 powder metallurgy superalloys reported a creep of 650 °C, as well as reporting that a large γ' spacing promotes the Orowan mechanism whereas a small γ' spacing promotes the shearing mechanism; the creep resistance benefits from the shearing mechanism are greater than those of the Orowan mechanism [17], in agreement with the conclusion drawn from another report about an experimental alloy at 700 °C [15].
The shear-stress threshold needed for the Orowan looping may be determined by the following equation [13,18]: Here, μ is the shear module (72 GPa at 750 °C), b is the Burgers vector (2.54 Å), ν is Poisson's ratio (0.379), rs is the average radius of γ' precipitates which can be calculated by , ls is the mean surface-to-surface spacing of γ' precipitates and can be determined by the relation: The calculated values of rs, ls and τ0 for different cooling modes are shown in Table 2. τ0 is calculated to be 281.9MPa, 478 MPa and 726.5 MPa for furnace cooling, air cooling, and oil quenching specimens, respectively.In experiment conditions, the shear stress is about 325 MPa (half of the tensile stress), so only the furnace cooling specimens would promote the Orowan looping.For the air cooling and oil quenching specimens, γ' shearing may be the main mechanism, thus leading to better creep properties.
Fracture Morphology
Figures 5-7 show the morphology after the creep failure, and indicate that the crack behavior is strongly dependent on the cooling rate.Figure 5 shows the macro appearance of the fractures.Based on different appearances, we divide the macro fracture into the initial region, the propagation region, and the final rupture region.The initial fracture areas can be clearly identified by the color, because they are gravely oxidized compared with other regions.Moreover, cracks are confirmed to initiate along the specimen surfaces.The final rupture area is often convex in shape, where cracks extend quickly, thus causing rapid failure at the end of the creep test.Between the initial area and the final Research on U720 powder metallurgy superalloys reported a creep of 650 • C, as well as reporting that a large γ' spacing promotes the Orowan mechanism whereas a small γ' spacing promotes the shearing mechanism; the creep resistance benefits from the shearing mechanism are greater than those of the Orowan mechanism [17], in agreement with the conclusion drawn from another report about an experimental alloy at 700 • C [15].
The shear-stress threshold needed for the Orowan looping may be determined by the following equation [13,18]: Here, µ is the shear module (72 GPa at 750 • C), b is the Burgers vector (2.54 Å), ν is Poisson's ratio (0.379), r s is the average radius of γ' precipitates which can be calculated by r s = (2/3) 1/2 r, l s is the mean surface-to-surface spacing of γ' precipitates and can be determined by the relation: The calculated values of r s , l s and τ 0 for different cooling modes are shown in Table 2. τ 0 is calculated to be 281.9MPa, 478 MPa and 726.5 MPa for furnace cooling, air cooling, and oil quenching specimens, respectively.In experiment conditions, the shear stress is about 325 MPa (half of the tensile stress), so only the furnace cooling specimens would promote the Orowan looping.For the air cooling and oil quenching specimens, γ' shearing may be the main mechanism, thus leading to better creep properties.
Fracture Morphology
Figures 5-7 show the morphology after the creep failure, and indicate that the crack behavior is strongly dependent on the cooling rate.Figure 5 shows the macro appearance of the fractures.Based on different appearances, we divide the macro fracture into the initial region, the propagation region, and the final rupture region.The initial fracture areas can be clearly identified by the color, because they are gravely oxidized compared with other regions.Moreover, cracks are confirmed to initiate along the specimen surfaces.The final rupture area is often convex in shape, where cracks extend quickly, thus causing rapid failure at the end of the creep test.Between the initial area and the final rupture area is the propagation area which is often a transition region.The three regions are marked in Figure 5, with 1 representing the initial fracture area, 2 the propagation area and 3 the final rupture area.
We can see that, for the oil quenching specimen, the fracture surface is quite rough, and the surface oxidation is heavy, based on the color change (Figure 5a).In the context of the air cooling specimen, the fracture pathway is macroscopically zigzagged, yet, the fracture is relatively flat and perpendicular to the direction of tension for the oil quenching specimen.More notably, two or more initial regions can be found in the oil quenching specimen (Figure 5c), which are likely to contribute to failure from multiple directions.In addition, based on the SEM observation, the area reduction ratio of furnace cooling, air cooling and oil quenching specimens are 33.8%,14.5% and 3.5%, respectively.The value of oil quenching specimens is much smaller than that of furnace cooling specimens, indicating a stronger creep resistance.rupture area is the propagation area which is often a transition region.The three regions are marked in Figure 5, with ① representing the initial fracture area, ② the propagation area and ③ the final rupture area.We can see that, for the oil quenching specimen, the fracture surface is quite rough, and the surface oxidation is heavy, based on the color change (Figure 5a).In the context of the air cooling specimen, the fracture pathway is macroscopically zigzagged, yet, the fracture is relatively flat and perpendicular to the direction of tension for the oil quenching specimen.More notably, two or more initial regions can be found in the oil quenching specimen (Figure 5c), which are likely to contribute to failure from multiple directions.In addition, based on the SEM observation, the area reduction ratio of furnace cooling, air cooling and oil quenching specimens are 33.8%,14.5% and 3.5%, respectively.The value of oil quenching specimens is much smaller than that of furnace cooling specimens, indicating a stronger creep resistance.Figure 6 shows the micro morphology of the initial rupture area, small cracks are found in the furnace cooling specimen, which are marked with white arrows in Figure 6a, and the rugged surface is full of steps.For the air cooling specimen, ubiquitous faceted grains are found at the fracture boundaries (Figure 6b), which is different from the observation in the furnace cooling specimen.Small dimples can be seen on the grain facets (Figure 6e), which means that the fracture is not brittle.For the oil quenching specimen, we can also see small dimples in the initial region, but the grain facets are smoother than the air-cooling specimen, with fewer dimples found in the plane (Figure 6f).White dotted lines divide the fracture into three areas: 1 The initial rupture area; 2 The propagation area; 3 The final rupture area.
Figure 6 shows the micro morphology of the initial rupture area, small cracks are found in the furnace cooling specimen, which are marked with white arrows in Figure 6a, and the rugged surface is full of steps.For the air cooling specimen, ubiquitous faceted grains are found at the fracture boundaries (Figure 6b), which is different from the observation in the furnace cooling specimen.Small dimples can be seen on the grain facets (Figure 6e), which means that the fracture is not brittle.For the oil quenching specimen, we can also see small dimples in the initial region, but the grain facets are smoother than the air-cooling specimen, with fewer dimples found in the plane (Figure 6f).rupture area is the propagation area which is often a transition region.The three regions are marked in Figure 5, with ① representing the initial fracture area, ② the propagation area and ③ the final rupture area.We can see that, for the oil quenching specimen, the fracture surface is quite rough, and the surface oxidation is heavy, based on the color change (Figure 5a).In the context of the air cooling specimen, the fracture pathway is macroscopically zigzagged, yet, the fracture is relatively flat and perpendicular to the direction of tension for the oil quenching specimen.More notably, two or more initial regions can be found in the oil quenching specimen (Figure 5c), which are likely to contribute to failure from multiple directions.In addition, based on the SEM observation, the area reduction ratio of furnace cooling, air cooling and oil quenching specimens are 33.8%,14.5% and 3.5%, respectively.The value of oil quenching specimens is much smaller than that of furnace cooling specimens, indicating a stronger creep resistance.Figure 6 shows the micro morphology of the initial rupture area, small cracks are found in the furnace cooling specimen, which are marked with white arrows in Figure 6a, and the rugged surface is full of steps.For the air cooling specimen, ubiquitous faceted grains are found at the fracture boundaries (Figure 6b), which is different from the observation in the furnace cooling specimen.Small dimples can be seen on the grain facets (Figure 6e), which means that the fracture is not brittle.For the oil quenching specimen, we can also see small dimples in the initial region, but the grain facets are smoother than the air-cooling specimen, with fewer dimples found in the plane (Figure 6f).As for the propagation area shown in Figure 7, dimples and tearing traces are prevalent in all specimens.We can also see some grain facets in the air cooling and oil quenching specimens, but they are heavily deformed (Figure 7b,c).The size of dimples in the oil quenching specimen is smaller than that of dimples in the air cooling specimen (Figure 7e,f), which is probably attributable to the smaller γ' precipitates in the oil-quenching specimen.For the final rupture region, all specimens exhibit shallow dimples, and there are not many differences between specimens.The microscopic examination indicates that the transgranular cracking is the dominant deformation mode for the furnace cooling specimen, while the intergranular fracture is the primary deformation mechanism for the air cooling specimen, and it becomes more evident that the intergranular crack governs the failure process for the oil quenching specimen.Therefore, we concluded that with the increasing of the solution cooling rate, the rupture mode gradually changed from transgranular to intergranular.
As for the propagation area shown in Figure 7, dimples and tearing traces are prevalent in all specimens.We can also see some grain facets in the air cooling and oil quenching specimens, but they are heavily deformed (Figure 7b,c).The size of dimples in the oil quenching specimen is smaller than that of dimples in the air cooling specimen (Figure 7e,f), which is probably attributable to the smaller γ' precipitates in the oil-quenching specimen.For the final rupture region, all specimens exhibit shallow dimples, and there are not many differences between specimens.The microscopic examination indicates that the transgranular cracking is the dominant deformation mode for the furnace cooling specimen, while the intergranular fracture is the primary deformation mechanism for the air cooling specimen, and it becomes more evident that the intergranular crack governs the failure process for the oil quenching specimen.Therefore, we concluded that with the increasing of the solution cooling rate, the rupture mode gradually changed from transgranular to intergranular.
Microstructure after the Creep
After the creep test, a small section of 7 mm, near the fracture, was cut off and split along the load axis for every specimen.Figure 8 demonstrates the microscopic observation of γ' precipitates at different positions along the gage length.For the furnace cooling specimen (Figure 8a-c) after creep deformation, most γ' precipitates changed to cuboid from previous irregular-shapes, which is similar to the result of the Ni115 multimodal superalloy after aging for 48 h at 900-1000 °C [19].This transition is attributed to the coalescence of γ' precipitates based on the size measurement.According to the observation from three different positions, the area fraction of γ' precipitates remains stable and implies the absence of temperature and deformation gradients.In the air cooling specimen (Figure 8d-f), a portion of γ' precipitates have evolved into a cuboid morphology with an increased size due to the particle coalescence.It is noteworthy that the size of finer γ' precipitates is increased accordingly, but the morphology remains to be pristine.Likewise, the coalescence of γ' precipitates takes place and the morphology becomes cuboid with a larger size, compared to the air cooling specimen.Noteworthy, the density of large coalesced γ' precipitates is lower and a dominated percentage of γ' precipitates exhibit a very small size.In the oil quenching specimen (Figure 8g-i), the area fraction of cuboid γ' precipitates is lower than in the air cooling specimen, and the remaining spherical γ' precipitates are too small to observe from the figure.These particles are so fine that even
Microstructure after the Creep
After the creep test, a small section of 7 mm, near the fracture, was cut off and split along the load axis for every specimen.Figure 8 demonstrates the microscopic observation of γ' precipitates at different positions along the gage length.For the furnace cooling specimen (Figure 8a-c) after creep deformation, most γ' precipitates changed to cuboid from previous irregular-shapes, which is similar to the result of the Ni115 multimodal superalloy after aging for 48 h at 900-1000 • C [19].This transition is attributed to the coalescence of γ' precipitates based on the size measurement.According to the observation from three different positions, the area fraction of γ' precipitates remains stable and implies the absence of temperature and deformation gradients.In the air cooling specimen (Figure 8d-f), a portion of γ' precipitates have evolved into a cuboid morphology with an increased size due to the particle coalescence.It is noteworthy that the size of finer γ' precipitates is increased accordingly, but the morphology remains to be pristine.Likewise, the coalescence of γ' precipitates takes place and the morphology becomes cuboid with a larger size, compared to the air cooling specimen.Noteworthy, the density of large coalesced γ' precipitates is lower and a dominated percentage of γ' precipitates exhibit a very small size.In the oil quenching specimen (Figure 8g-i), the area fraction of cuboid γ' precipitates is lower than in the air cooling specimen, and the remaining spherical γ' precipitates are too small to observe from the figure.These particles are so fine that even during long-term aging, they could not grow, nor could their magnitude increase.We can also see from the figure that the size of cuboid γ' precipitates is almost the same for these three specimens.
during long-term aging, they could not grow, nor could their magnitude increase.We can also see from the figure that the size of cuboid γ' precipitates is almost the same for these three specimens.Microscopic observation is also carried out on the plane perpendicular to the direction of tension near the fracture surface.Interestingly, in the vicinity of the grain boundaries, a few γ' precipitates approach closely in an aligned manner, which are shown in Figure 9a-c.Figure 9d-f shows the typical microstructures marked with red frames in Figure 9a-c.
For the furnace cooling specimen, a lot of small γ' precipitates are aligned to form the plateshaped precipitates.For the air cooling specimen, a similar process can be observed (Figure 9b) but the large-sized γ' precipitates become further coalesced and connect in a step-wise way (Figure 9e).The black arrows in Figure 9d,e demonstrate the merging of γ' precipitates.The morphology is like the rafted microstructure observed in a directionally solidified superalloy and single-crystal superalloys, such as CM-247LC and CMSX-4 [3,20].Few researchers have focused on rafting in polycrystalline superalloys.In one of them, some Co-base superalloys under compression creep conditions have been analyzed, and rafts perpendicular to the load axis have been discovered [21].A recent study has observed rafting in the Ni-base superalloy IN713LC during the compression creep, but the temperature range used in the creep test was 950-1050 °C [22], which is much higher than the creep temperature in this paper.Rafting formed at intermediate temperatures, i.e., 650-800 °C and on the tensile load has seldom been reported.A ruptured specimen was discovered after a creep test by Takao Murakumo et al. [23], and a rafted structure perpendicular to the stress axis was observed in the alloys with no more than 60% γ' precipitates.This finding agrees with our research.Microscopic observation is also carried out on the plane perpendicular to the direction of tension near the fracture surface.Interestingly, in the vicinity of the grain boundaries, a few γ' precipitates approach closely in an aligned manner, which are shown in Figure 9a-c.Figure 9d-f shows the typical microstructures marked with red frames in Figure 9a-c.
For the furnace cooling specimen, a lot of small γ' precipitates are aligned to form the plate-shaped precipitates.For the air cooling specimen, a similar process can be observed (Figure 9b) but the large-sized γ' precipitates become further coalesced and connect in a step-wise way (Figure 9e).The black arrows in Figure 9d,e demonstrate the merging of γ' precipitates.The morphology is like the rafted microstructure observed in a directionally solidified superalloy and single-crystal superalloys, such as CM-247LC and CMSX-4 [3,20].Few researchers have focused on rafting in polycrystalline superalloys.In one of them, some Co-base superalloys under compression creep conditions have been analyzed, and rafts perpendicular to the load axis have been discovered [21].A recent study has observed rafting in the Ni-base superalloy IN713LC during the compression creep, but the temperature range used in the creep test was 950-1050 • C [22], which is much higher than the creep temperature in this paper.Rafting formed at intermediate temperatures, i.e., 650-800 • C and on the tensile load has seldom been reported.A ruptured specimen was discovered after a creep test by Takao Murakumo et al. [23], and a rafted structure perpendicular to the stress axis was observed in the alloys with no more than 60% γ' precipitates.This finding agrees with our research.That is, rafting is not discovered along the load axis, but observed on the plane perpendicular to the direction of tension.The rafted structure is not observed in the oil quenching specimen.Figure 9f demonstrates the typical feature analogous to Figure 9e, but the grown γ' precipitates have coalesced into green onion-like aggregates, which ends at the grain boundary.Merging of γ' precipitates has been discovered before [12,24], but this special morphology has seldom been observed, and the assembly mechanism remains an open question to be investigated.Noteworthy, the density of large coalesced γ' precipitates is higher near the grain boundary, but lower in the grains, and a dominated percentage of γ' precipitates still exhibit a small size.That is, rafting is not discovered along the load axis, but observed on the plane perpendicular to the direction of tension.The rafted structure is not observed in the oil quenching specimen.Figure 9f demonstrates the typical feature analogous to Figure 9e, but the grown γ' precipitates have coalesced into green onion-like aggregates, which ends at the grain boundary.Merging of γ' precipitates has been discovered before [12,24], but this special morphology has seldom been observed, and the assembly mechanism remains an open question to be investigated.Noteworthy, the density of large coalesced γ' precipitates is higher near the grain boundary, but lower in the grains, and a dominated percentage of γ' precipitates still exhibit a small size.
Conclusions
After the supersolvus solution heat treatment, different cooling rates were applied to the superalloys used in this study, in order to examine their influences on microstructure as well as the creep properties.Based on the experimental results, the conclusions may be summarized as follows: 1.The cooling rate of furnace cooling, air cooling and oil quenching is measured to be 8 °C/min, 307 °C/min and 2029 °C/min, respectively.With increasing cooling rates, γ' size changed from 185 nm to 22.5 nm, the area fraction of γ' changed from 37.7% to 20.7%, but the grain size remained at about 12 μm.2. The data of the creep test at 750 °C/650 MPa shows that for the furnace cooling, air cooling, and oil quenching specimens, the mean creep life is 23.45 h, 70.9 h, 83.35 h, respectively.The creep life of the oil quenching specimen is three times longer than that of the furnace cooling specimen.Deformation mechanisms may change from Orowan looping to γ' shearing as the cooling rates increased.It seems that oil quenching is the best solution cooling mode for the experiment alloy.3. The rupture mechanism changed from transgranular for the furnace cooling specimen to intergranular for the oil quenching specimen.After the creep test, some γ' precipitates changed to cuboid, and aligned similarly to the rafted structure which is usually observed in single crystal superalloys.Other special γ' merging morphologies were also found.The reason remains to be investigated.
Conclusions
After the supersolvus solution heat treatment, different cooling rates were applied to the superalloys used in this study, in order to examine their influences on microstructure as well as the creep properties.Based on the experimental results, the conclusions may be summarized as follows: 1.
The cooling rate of furnace cooling, air cooling and oil quenching is measured to be 8 • C/min, 307 • C/min and 2029 • C/min, respectively.With increasing cooling rates, γ' size changed from 185 nm to 22.5 nm, the area fraction of γ' changed from 37.7% to 20.7%, but the grain size remained at about 12 µm.
2.
The data of the creep test at 750 • C/650 MPa shows that for the furnace cooling, air cooling, and oil quenching specimens, the mean creep life is 23.45 h, 70.9 h, 83.35 h, respectively.The creep life of the oil quenching specimen is three times longer than that of the furnace cooling specimen.Deformation mechanisms may change from Orowan looping to γ' shearing as the cooling rates increased.It seems that oil quenching is the best solution cooling mode for the experiment alloy.
3.
The rupture mechanism changed from transgranular for the furnace cooling specimen to intergranular for the oil quenching specimen.After the creep test, some γ' precipitates changed to cuboid, and aligned similarly to the rafted structure which is usually observed in single crystal superalloys.Other special γ' merging morphologies were also found.The reason remains to be investigated.
Figure 1 .
Figure 1.(a) Cross section of the extrusion ingot.The annular region between red dashed lines is the selected area to manufacture the specimens; (b) Microstructure of the as-extruded material; (c) Specimen drawing for creep test (unit: mm).
Figure 1 .
Figure 1.(a) Cross section of the extrusion ingot.The annular region between red dashed lines is the selected area to manufacture the specimens; (b) Microstructure of the as-extruded material; (c) Specimen drawing for creep test (unit: mm).
Figure 2 .
Figure 2. (a) Cooling curves of the three different cooling modes.The insets in the frame show typical γ' precipitates after heat treatment; (b) Relationship between cooling rate and average size as well as volume fraction of γ' precipitates.Notice that the horizontal axis is logarithmic.
Figure 3 .
Figure 3.The color map of grains after different heat treatments.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.
Figure 2 .
Figure 2. (a) Cooling curves of the three different cooling modes.The insets in the frame show typical γ' precipitates after heat treatment; (b) Relationship between cooling rate and average size as well as volume fraction of γ' precipitates.Notice that the horizontal axis is logarithmic.
Figure 2 .
Figure 2. (a) Cooling curves of the three different cooling modes.The insets in the frame show typical γ' precipitates after heat treatment; (b) Relationship between cooling rate and average size as well as volume fraction of γ' precipitates.Notice that the horizontal axis is logarithmic.
Figure 3 .
Figure 3.The color map of grains after different heat treatments.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.
Figure 3 .
Figure 3.The color map of grains after different heat treatments.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.
Figure 4 .
Figure 4. (a) Creep curves of alloy with different cooling rates; (b) The relationship between cooling rate and creep life as well as rupture strain.Notice that the horizontal axis is logarithmic.
Figure 4 .
Figure 4. (a) Creep curves of alloy with different cooling rates; (b) The relationship between cooling rate and creep life as well as rupture strain.Notice that the horizontal axis is logarithmic.
Figure 5 .
Figure 5. Macro fracture after creep tests.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.White dotted lines divide the fracture into three areas: ① The initial rupture area; ② The propagation area; ③ The final rupture area.
Figure 6 .
Figure 6.Micro morphology of the initial rupture area.(a,d) Furnace cooling; (b,e) Air cooling; (c,f) Oil quenching.White arrows in (a) show the micro cracks.
Figure 5 .
Figure 5. Macro fracture after creep tests.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.White dotted lines divide the fracture into three areas:1 The initial rupture area;2 The propagation area;3 The final rupture area.
Figure 5 .
Figure 5. Macro fracture after creep tests.(a) Furnace cooling; (b) Air cooling; (c) Oil quenching.White dotted lines divide the fracture into three areas: ① The initial rupture area; ② The propagation area; ③ The final rupture area.
Figure 6 .
Figure 6.Micro morphology of the initial rupture area.(a,d) Furnace cooling; (b,e) Air cooling; (c,f) Oil quenching.White arrows in (a) show the micro cracks.
Figure 6 .
Figure 6.Micro morphology of the initial rupture area.(a,d) Furnace cooling; (b,e) Air cooling; (c,f) Oil quenching.White arrows in (a) show the micro cracks.
Figure 8 .
Figure 8. Microstructures near the fracture plane; the plane parallel to the tensile axis is displayed: (a-c) Furnace cooling; (d-f) Air cooling; (g-i) Oil quenching; (a,d,g) About 6 mm away from the fracture plane; (b,e,f) About 3 mm away from the fracture plane; (c,f,i) Near the fracture plane.
Figure 8 .
Figure 8. Microstructures near the fracture plane; the plane parallel to the tensile axis is displayed: (a-c) Furnace cooling; (d-f) Air cooling; (g-i) Oil quenching; (a,d,g) About 6 mm away from the fracture plane; (b,e,f) About 3 mm away from the fracture plane; (c,f,i) Near the fracture plane.
Figure 9 .
Figure 9. Microstructure of the plane perpendicular to the direction of tension: (a,d) Furnace cooling; (b,e) Air cooling; (c,f) Oil quenching.(d-f) shows the typical morphology of red frames in (a-c), respectively.
Figure 9 .
Figure 9. Microstructure of the plane perpendicular to the direction of tension: (a,d) Furnace cooling; (b,e) Air cooling; (c,f) Oil quenching.(d-f) shows the typical morphology of red frames in (a-c), respectively.
Table 1 .
The chemical composition of the superalloy (mass fraction, %). | 9,373 | 2017-12-22T00:00:00.000 | [
"Materials Science"
] |
Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016
This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public.
INTRODUCTION Noninvasive brain-computer interfaces (BCIs), which measure a human's brain activity and use it to control machines, have the potential to improve human-machine interaction in numerous ways. As assistive devices, they can be used by people with disabilities to control wheelchairs (Carlson and Millán, 2013), orthoses (Ortner et al., 2011;Do et al., 2013), neuroprostheses (Rohm et al., 2013) and robots as well as to write messages . They can also be used by unimpaired people to play computer games (Coyle et al., 2013) and control devices such as mobile robots (LaFleur et al., 2013) or aircraft (Kryger et al., 2017). Alternatively, passive BCIs, which monitor human cognitive and affective states, can be used to detect drowsiness in car drivers (Picot et al., 2012) or high workload in pilots (Berka et al., 2007) and air traffic controllers (Aricò et al., 2016).
While the first BCIs were inaccurate, cumbersome and time-consuming to apply, advances such as dry (Chi et al., 2010) or water-based (Volosyak et al., 2010) electrodes and new sensor fusion methods (Müller-Putz et al., 2015;Novak and Riener, 2015) have significantly increased both their performance and user-friendliness. Nonetheless, the question remains: just how accurate and reliable are BCIs?
Previous Brain-Computer Interface Competitions
Benchmarking the performance of BCI technology is critical, as it allows researchers to evaluate how different approaches compare to each other (Zander et al., 2011). For instance, benchmarking allows us to determine whether a certain classifier allows intended commands to be more accurately identified from electroencephalographic (EEG) data (Zander et al., 2011). In the past, BCI benchmarking mainly focused on comparing different signal processing and classification methods on the same set of previously recorded EEG data. This was done as part of BCI competitions involving many different researchers. The first such competition was announced in 2001 to a smaller BCI community (Sajda et al., 2003), while follow-up competitions focused on topical challenges-from single-trial EEG classification (Blankertz et al., 2004) to classification of continuous EEG data without trial structures and classification of signals affected by artifacts (Tangermann et al., 2012).
The common feature of all previous BCI competitions was that they performed offline analysis of a previously recorded dataset, allowing different algorithms to be compared using exactly the same raw data. However, when different algorithms are compared offline, they can only perform signal analysis; they cannot perform actions in response to the information extracted from the dataset. For example, if the algorithm determines that the data was recorded when the user wanted to perform a certain action, it cannot assist with this action. The next step is thus to benchmark BCIs online: in a situation where the user sends commands and controls devices using the BCI. This is more challenging than offline benchmarking, as multiple BCI approaches cannot be used online with the same user and same device simultaneously. However, it is critical, as online performance cannot necessarily be predicted from offline performance: as users use a BCI for online control, they learn to compensate for systematic errors and better use the device (Cunningham et al., 2011). Similar issues have been noted in other electrophysiological devices such as myoelectric prostheses (Jiang et al., 2014) and emphasize the need for online benchmarking.
The Goals of the Cybathlon 2016
To fulfill the need for online BCI benchmarking outside the laboratory, we established a BCI competition as part of the Cybathlon-a larger event that aims to both evaluate different assistive technologies as well as showcase them to the general public (Riener, 2016). The first Cybathlon was held in October 2016 in Zurich, Switzerland, and was preceded by a rehearsal in 2015. It consisted of six disciplines: powered arm prostheses, powered leg prostheses, powered exoskeletons, powered wheelchairs, functional electrical stimulation, and, Abbreviations: BCI, Brain-Computer Interface; EEG, Electroencephalogram; EMG, Electromyogram; EOG, Electrooculogram; SSVEP, Steady-State Visually Evoked Potential. finally BCIs. The overarching aim of the BCI competition was to benchmark different systems in realistic conditions outside the lab using pilots with severe physical impairments. Specific goals were: -Goal 1: Develop a "benchmark" task that is inherently safe but nonetheless provides a reasonable estimate of how well a given BCI system would perform in a real-world assistive application outside the laboratory. -Goal 2: Establish a set of benchmarking rules that allow different BCIs to be compared to each other in a fair and safe way, using pilots with disabilities who would be most likely to use such BCIs in everyday life. -Goal 3: Using results of the 2016 BCI competition, compare different BCI approaches with regard to task performance in order to identify more or less effective approaches. -Goal 4: Act as an outreach event that increases the general public's interest in BCIs and other assistive technologies.
While the first three goals were purely scientific, the Cybathlon is also a "popular science" event that should be accessible to a broader public. Therefore, the task being performed should be easily understood by an audience of laypersons. For example, commands sent by the BCI should have clearly identifiable consequences in the task. This represents a new type of BCI competition that is less structured and may require different signal processing approaches, but its results would also be more directly applicable to future real-world applications of BCI. This paper is structured as follows. The "Methods for BCI Benchmarking at the Cybathlon" section presents the Benchmarking methods used in the Cybathlon BCI competition: the task to be performed (Goal 1) and the rules (Goal 2). The "Results of the Cybathlon BCI Competition" section presents the results of the Cybathlon BCI competition as well as how they relate to Goals 3 and 4. The Discussion section then discusses our progress toward all four goals as well as how the rules of the Cybathlon could be modified for future BCI benchmarking either at the Cybathlon or at other events.
METHODS FOR BCI BENCHMARKING AT THE CYBATHLON
In offline analysis, BCI performance is generally quantified using its classification accuracy (how often the correct desired command is identified from brain activity). However, while classification accuracy is generally correlated with BCI controllability and overall user satisfaction (van de Laar et al., 2013), there is no guarantee that better offline classification accuracy will translate to better online performance (Cunningham et al., 2011). An alternative performance metric is the information transfer rate (how many commands can be sent to a controlled device per minute) (Nicolas-Alonso and Gomez-Gil, 2012), but this metric depends heavily on the context. Instead, we chose to benchmark BCIs at the Cybathlon according to a different metric: the time it takes a user to successfully complete a real-world task using the BCI. The development of an appropriate benchmarking task was Goal 1 of the Cybathlon, and this task is described in the "Benchmark Game" section. Improvements in BCI performance can be facilitated by improvements at any level of BCI, from hardware improvement to better noise removal and more accurate classification. While we cannot analyze the individual contribution of each BCI component in an online application, it is still important to set rules for each level of BCI (including the human user) to ensure fair comparison of the results between participating teams. Setting these rules was Goal 2 of the Cybathlon; they were first developed prior to the 2015 rehearsal, then modified after the rehearsal following feedback from participating teams. The rules and the justification for them are described in the "Benchmarking Rules" section.
Concept
As stated in Goal 1, a desirable BCI benchmarking task should represent a realistic challenge for BCI (i.e. similar to an actual assistive application), can be understood by the general public, and is inherently safe. We first considered using a BCI to control a wheelchair, as this is a common application (Carlson and Millán, 2013) and since wheelchairs are already present at the Cybathlon. However, the idea was eventually discarded for two reasons. First, there was a risk of an inaccurate BCI causing unsafe behavior of the wheelchair and injuring the pilot. Second, the additional hardware would increase the uncertainty of the benchmarking process, as it would introduce many variables (e.g., the construction of the wheelchair, the control algorithms) that differ from device to device and are unrelated to the BCI itself. Controlling a mobile robot (LaFleur et al., 2013) or any other remote-controlled device was discarded for similar reasons. To maximize safety and minimize dependence on hardware unrelated to BCIs, we instead chose to use a computer game. Computer games are often used as a demonstration of BCIs, are inherently safe, and would look attractive to a layperson, so their use in the Cybathlon was considered appropriate.
The most intuitive competitive multiplayer game is a racing game, and most other Cybathlon disciplines involve racing over an obstacle course. Therefore, we decided to use a racing game where up to four pilots' in-game avatars can compete either on the same track (e.g., car racing) or on parallel tracks (e.g., horse race, sprint). To save development time and effort, commercially available racing games were first considered for use with BCIs. However, while commercial games are visually attractive and would undoubtedly be well-received by the audience, they are not meant for BCI control, as they require many commands to be sent to the game with split-second precision. By contrast, many state-of-the-art BCIs achieve information transfer rates of only approximately 20 bits/min (Nicolas-Alonso and Gomez-Gil, 2012). A racing game that is slow enough to be controllable with BCIs was thus developed in cooperation between ETH Zurich and the Zurich University of the Arts (ZHdK), Switzerland.
A major concern in the design of the BCI-controlled racing game was that some BCIs at the competition could have a very low accuracy and might be unable to effectively control the game. Therefore, one game design rule was that a racer should eventually reach the finish line even if he/she sends no correct commands-correct commands should speed racers up while incorrect commands should slow them down but not stop them completely. Thus, an "obstacle course" game was created where up to four pilots' virtual avatars run along a track with different types of obstacles. Each obstacle appears at the same spot on the track for all competitors. Sending the correct command through the BCI at the correct time, for example, may make the avatar jump over an obstacle while sending no command or sending the wrong command makes the avatar hit the obstacle and temporarily slow down.
To ensure that pilots are not distracted by too many visual features, another design decision was made: different visual displays of the game were provided to pilots and to the audience. Pilots were shown a simplified display that is focused on their own avatar and has no distracting elements (for example, background is removed and textures are simplified). The audience, on the other hand, was shown a more visually rich version of the game. A screenshot of the audience view of the BCI game, which is titled BrainRunners, is shown in Figure 1.
Number and Type of Commands
Assuming different types of obstacles, each pilot's avatar should accept different commands. For example, one command would make the avatar jump while another would make the avatar slide. Each would only be effective for a specific type of obstacle: for example, an obstacle at low height would require the avatar to jump while a high obstacle would require the avatar to slide underneath it. This then raises the question of how many commands can realistically be sent by a BCI. The simplest BCIs only have two output states: "command" or "no command", which are sent depending on the level of EEG activity (low or high). More complex devices can produce several different commands as well as a "no command" state depending on, e.g., which regions of the brain are active. How can we enable simpler BCIs to participate in the competition while still allowing more complex devices to have an advantage?
The first decision was that the game should operate in asynchronous mode (Mason and Birch, 2000): there should be times in the game when no command should be sent to it, and sending any command should be penalized at those times. Such asynchronous control was a major component of previous BCI competitions (Tangermann et al., 2012), and is commonly used in assistive devices-the user may require assistance at any moment, but there are also times when no assistance is needed (Pfurtscheller et al., 2005;Ortner et al., 2011;Sakurada et al., 2013). For example, a BCI-controlled wheelchair should remain stationary while the user focuses on activities such as eating or working, but should also be ready to receive movement commands at any time. The alternative to such asynchronous control would have been to have the game ignore BCI inputs until the avatar reaches an obstacle, then check if the user has sent the correct command. However, given that assistive devices commonly operate in asynchronous mode, this alternative was unrealistic for online benchmarking.
After settling on asynchronous operation with a "no command" state, we implemented three different commands FIGURE 1 | A screenshot of the audience view of the BrainRunners game, which allows up to four avatars to compete on parallel tracks. Each track consists of multiple instances of three different "action" fields colored purple, cyan and yellow (on which the pilot must send the correct command via the brain-computer interface in order to speed up their avatar) as well as gray "no-input" fields (on which the pilot should not send any commands).
("rotate, " "jump, " and "slide") that the pilot can send via the BCI. Each command has a specific time when it should be used; using it at the correct time gives the pilot a bonus while using it at an inappropriate time penalizes the pilot. However, even if the pilot never sends any command, their avatar will eventually reach the finish line. This allows BCI devices based on 2-class classifiers to compete by only using one command (e.g., "jump"), while developers of more complex BCIs have to consider whether the potential benefits of the other two commands outweigh the potential penalties of using them incorrectly.
As seen in Figure 1, the BrainRunners game allows up to four pilots to compete simultaneously, with their avatars running in parallel. There are four types of fields on the track: • No-input field (where no command should be sent)-gray in Figure 1, • spinning winds (where the pilot can send the "rotate" command to speed up and is otherwise slowed down)-cyan in Figure 1, • stumbling blocks (where the pilot can send the "jump" command to quickly hover over the stumbling blocks and is otherwise slowed down)-purple in Figure 1, and • sticky lasers (where the pilot can send the "slide" command to quickly slide under the lasers and is otherwise slowed down)-yellow in Figure 1.
These fields can be seen by pilots at least 10 s before their avatar reaches them, giving pilots time to react to upcoming fields and accounting for potentially slow BCI paradigms.
Sending the correct BCI command on its corresponding field causes the pilot's avatar to speed up and run at a higher speed until the end of the field or until another command is sent. This can be done whenever the avatar is on the field; doing it as soon as the avatar reaches the field results in the greatest benefit, but doing it later is still beneficial (as it makes the avatar cross the remaining part of the field faster). On the other hand, sending the incorrect command (or any command on the noinput field) penalizes the pilot by making their avatar slow down. This penalty slowdown can be overridden on an action field by sending the correct command, which will speed the avatar up again. Similarly, if the player first sends a correct command, then an incorrect one, the speed-up will be overridden by the incorrect command. On an action field, any speed-up or slowdown lasts until the end of the field, after which the pilot's avatar returns to the default speed. On the no-input field, the penalty slowdown lasts for a predetermined amount of time, and the avatar can thus return to the default speed before the end of the field. This is because there is no way to correct an incorrectly sent command on a no-input field.
The penalty and override structure ensures that randomly sending all possible commands one after another is not an optimal strategy and is (depending on the weighting of reward and penalty) possibly worse than sending no command at all. This structure mimics a real-life application of controlling an assistive device where a command being executed can be overridden with another command.
Six different parameters are used to balance the game mechanics: 1. Default speed on no-input field if pilot has (correctly) not sent a command: s default,NoInput . 2. Reduced (penalty) speed on no-input field if pilot has sent any command: s punish,NoInput . 3. Maximum penalty (reduced speed) time on the no-input field: t maxPunish,NoInput . 4. Default speed on an action field (rotate/jump/slide) if pilot has (incorrectly) not sent a command: s default,Action . 5. Increased (reward) speed on action field if the correct command has been sent: s reward,Action . 6. Reduced (penalty) speed on action field if a wrong command has been sent: s punish,Action .
All versions of BrainRunners obey the inequality: s reward,Action > s default,NoInput > s default,Action > s punish,NoInput = s punish,Action . This means that the pilot's avatar has four possible speeds: • The default medium speed is given on the no-input field if no command is sent, • the high speed is only given if the correct command is sent on an action field, • sending no command on an action field results in the low speed, • sending the wrong command on an action field or any command on the no-input field results in the very low speed. This is done because sending no BCI command should still be a better option than sending the wrong command.
An example of game speed on different fields in response to pilot commands is shown in Figure 2. In the game version used at the 2016 Cybathlon, the values are as follows: s reward,Action = 3, s default,NoInput = 1, s default,Action = 0.5, s punish,NoInput = 0.3, t maxPunish,NoInput = 4. This means, for example, that if the pilot sends any command on a no-input field, the avatar's speed decreases from the default value to 30% of the default value for a punishment period of 4 s. Conversely, if the pilot sends the correct command on an action field, the avatar's speed increases from 50 to 300% of the default value until the avatar crosses the field or a different command is sent. If the pilot sends no command, their avatar needs about 6 s to cross a no-input field and about 11 s to cross an action field. The exact values of the different speeds and the maximum penalty time were not made public prior to the 2016 Cybathlon and could only be obtained indirectly by practicing with the training version of the game. This was done since a real-world application would not have a perfectly definable cost/reward structure for BCI control. The values were balanced so that a realistic BCI accuracy achieves a better result than sending no command at all, which in turn achieves a better result than randomly sending commands. For future competitions, we intend to change the different values in order to provide participating teams with a new challenge. A last balancing factor is the prevalence of the different fields. Each action field (rotate/jump/slide) must have the same number of appearances in the race. This ensures that teams that use fewer than three commands are indifferent as to which one they omit. Furthermore, the ratio of no-input vs. action fields has a direct influence on how important it is to avoid false positives or false negatives. In the final version of the game, there are four instances of each action field as well as four no-input fields, for a total of 16 fields that can appear in different orders. We acknowledge that such a short race is vulnerable to random effects and thus does not necessarily produce unbiased BCI results; however, a short duration was necessary since the Cybathlon BCI race was watched by a live audience of laypersons.
Benchmarking Rules
Goal 2 of the Cybathlon BCI competition was to establish a set of benchmarking rules that allow different BCIs to be compared to each other in a fair and safe way. These rules are described in this section, and include acceptable BCI paradigms, acceptable hardware and software, and inclusion/exclusion criteria for human pilots.
Brain-Computer Interface Paradigms
The first, fundamental decision regarding acceptable BCI technology was whether to limit the competition to EEG-based devices. EEG is the most popular BCI modality, and was the focus of previous BCI competitions (Sajda et al., 2003;Blankertz et al., 2004;Tangermann et al., 2012). Other technologies, such as magnetoencephalography and functional magnetic resonance imaging, were excluded simply because they are not portable. Furthermore, invasive (e.g., implanted) BCI technologies were excluded since it would be difficult to find sufficient human pilots with implanted BCI devices and since pilots with noninvasive devices may potentially be at a disadvantage compared to pilots with implanted devices. In the end, we allowed BCIs based on EEG as well as those based on functional near-infrared spectroscopy. Functional near-infrared spectroscopy has been previously used in BCIs (Sitaram et al., 2007;Zimmermann et al., 2013) and can even be combined with EEG to improve performance (Fazli et al., 2012). However, all teams used only EEG-based BCIs.
Within EEG-based BCIs, there are several paradigms for how the user can use their brain activity to control a device. To decide which paradigms to permit, our guideline was that the BCIs should do what the general public believes they do: read internal thought processes. In what proved to be a somewhat controversial decision, we thus forbade BCIs that require additional external stimuli (e.g., a second screen). Principally, this excluded BCIs based on SSVEPs and visually evoked P300 waves. For example, SSVEP-based BCIs provide the user with a dedicated screen, and different parts of the screen flash at different frequencies. The user selects a desired BCI command by looking at its corresponding part of the screen, and the BCI recognizes the command by measuring the dominant frequency of the visual cortex's EEG activity (Middendorf et al., 2000;Gao et al., 2003;Pfurtscheller et al., 2010;Ortner et al., 2011;Sakurada et al., 2013). Similarly, P300-based BCIs commonly evoke P300 waves via the oddball paradigm: different stimuli (most commonly rows and columns of letters) are successively highlighted on a dedicated screen, and the user exhibits a P300 wave in response to the stimulus of interest (e.g., the letter they wish to write) (Fazel-Rezai et al., 2012). Though SSVEP-and P300-based BCIs are relatively fast and accurate (Nicolas-Alonso and Gomez-Gil, 2012), we feel that the required additional screen is inappropriate for many applications and that, in the case of SSVEPs, the same result (gaze-based selection) can be achieved with a camera-based eye tracker.
Due to the exclusion of paradigms that rely on external stimuli, we expected that the most commonly used paradigms would involve motor imagery, where the pilot generates EEG by imagining limb motions, and/or mental imagery, where the pilot generates EEG by carrying out specific mental tasks such as arithmetic (Obermaier et al., 2001;Friedrich et al., 2012). This was indeed the case at the Cybathlon 2016, as described in the "Results of the Cybathlon BCI Competition" section; however, other potentially valid approaches have been proposed. For example, an anonymous reviewer of this paper proposed using error-related potentials (Chavarriaga et al., 2014), which are evoked by external stimuli provided by the game itself (e.g., seeing the pilot's avatar slow-down in response to an incorrectly sent command). In future BCI races, we would consider the use of error-related potentials (and other brain activity evoked by the game itself) to be acceptable since it does not require any stimuli that are not already present in the application.
Hardware, Skin Preparation, and Signal Sites
To capture low-amplitude EEG signals, it is critical to use multiple electrodes (ranging from 4 to 64 in state-of-the-art systems Nicolas-Alonso and Gomez-Gil, 2012) with a high signal-to-noise ratio. For the Cybathlon, the primary rules regarding electrodes were that they should be safe for human use and should not penetrate the skin, though hair removal and light skin abrasion at the electrode sites were permitted. As long as this rule was met, we permitted both wired and wireless electrodes, and placed no restrictions on the use of gel-gel-based, waterbased (Volosyak et al., 2010) and dry electrodes were all acceptable.
The electrodes are connected to signal amplifiers that generally compromise between bulkiness and signal quality. The main Cybathlon rule for amplifiers, again, was that they should be safe for human use; no restrictions were placed on, for example, the use of wireless amplifiers. Teams using commercial devices were asked to provide the manufacturer's statements of conformity while teams that wanted to use their own (built inhouse) devices were required to conduct a risk analysis and provide a full report. All documentation was reviewed by two independent BCI experts, who were provided with a checklist of safety items (e.g., surge protection) and could note outstanding issues to be checked in person prior to the competition. This checklist is provided in Appendix I (Supplementary Materials). However, these safety issues were minimal, as all teams used commercially available devices rather than in-house prototypes.
Finally, all EEG electrode sites (frontal, central, parietal etc.) were permitted. While some areas are known to be more closely related to, e.g., motor or mental imagery, we saw no need to limit teams to those areas. Indeed, most teams chose to use a large number of evenly spread electrodes so that they could perform spatial filtering on the data.
Artifact Removal and Classification
EEG signals have amplitudes in the microvolt range and are vulnerable to different artifacts. For instance, signals from frontal areas could be contaminated by electrooculographic (EOG) artifacts while signals from parietal areas could be contaminated by neck muscle activity (Delorme et al., 2007). Thus, EEG signals are almost always filtered to remove noise and artifacts prior to further analysis (Ramoser et al., 2000;Blankertz et al., 2008). However, unscrupulous participants could also make use of artifacts to unfairly boost the performance of their BCI system: for instance, since blink artifacts have a high amplitude compared to actual EEG, a team could potentially train their pilot to blink in order to send a command.
While we did not expect intentional cheating, we did request that teams provide us with a description of their artifact removal procedure. As EOG was considered to be the most problematic potential source of artifacts, we also requested that teams include EOG recordings in their setup. This way, judges could check whether eye artifacts have been adequately removed from EEG. As an alternative to a description of each team's artifact removal procedure, we briefly considered simply implementing standard artifact removal software and requiring each team to use the same software. However, while this would make it much easier to monitor the teams, it would also require extensive testing to ensure that the artifact removal software is compatible with all teams' systems. Furthermore, as artifact removal is an important part of ensuring real-world BCI performance , we wanted to encourage teams to develop novel artifact removal methods.
Once EEG signals have been filtered to remove noise and artifacts, useful features need to be extracted from the signals, and classification algorithms need to be applied in order to turn the extracted EEG features into discrete commands that are sent to the controlled device Brunner et al., 2011;Nicolas-Alonso and Gomez-Gil, 2012;Müller-Putz et al., 2015;Novak and Riener, 2015). For online performance, asynchronous operations represent a particular challenge: the BCI should allow the user to perform a command at any time, but should also remain idle most of the time when the user does not desire to use the BCI (Mason and Birch, 2000;Pan et al., 2013;Williams et al., 2013). No restrictions were placed on feature extraction and classification, as we again wanted to encourage teams to develop novel classification methods.
Inclusion and Exclusion Criteria for Human Pilots
The final part of planning the Cybathlon's BCI benchmarking process was to define the type of people that will operate the BCI. A few generic requirements were set: pilots should be at least 18 years old, should understand the competition, and should not suffer from epilepsy or cybersickness. This would be sufficient for a general benchmarking competition; however, the Cybathlon focuses on assistive technologies, so the pilots should be drawn from the population that would use such technology.
The main target population for assistive BCIs are severely paralyzed people who cannot use their limbs to operate technology. For example, they could be used by tetraplegics to control wheelchairs (Carlson and Millán, 2013). However, it was difficult to define the minimal acceptable level of disability. Should participation require tetraplegia or only paraplegia? And should the disability be due to a specific cause (e.g., spinal cord injury)? This was primarily a question of fairness, as there is evidence that neural representation of different thoughts differs between spinal cord injury and brain injury survivors (Murphy and Corbett, 2009), that BCI classification accuracies differ between tetraplegics and paraplegics (Müller-Putz et al., 2014), and that brain activation after injury is affected by the degree of clinical recovery (Kokotilo et al., 2009).
Several solutions were considered: allowing only tetraplegia, allowing both tetraplegia and paraplegia, and having separate BCI subdisciplines for severe and lighter impairment. At the 2015 rehearsal, we permitted teams to use unimpaired BCI pilots if they were unable to field a pilot with motor impairment. After the rehearsal, the inclusion and exclusion criteria were finalized: pilots should be tetraplegic or tetraparetic (American Spinal Cord Injury classification levels A to C) due to any injury, with each pilot's eligibility determined individually. This requirement was chosen since all teams at the rehearsal agreed that tetraplegic pilots represent the most promising target population for assistive BCIs. No restrictions were implemented with regard to available sensory channels (e.g., damaged sensory nerves).
To ensure ethical integrity with regard to involvement of human pilots, the Zurich Cantonal Ethics Commission was consulted about the Cybathlon (request no. 2016-00161). The Commission ruled that, as an event, the Cybathlon was exempt from ethics approval since it was considered to be primarily an exhibition and outreach event rather than a fundamental scientific study. Nonetheless, informed consent was obtained from all participating pilots, and participating teams were advised to obtain ethics approval from their own institutions if required for BCI training.
RESULTS OF THE CYBATHLON BCI COMPETITION Event Schedule
Both the 2015 rehearsal and 2016 Cybathlon began with two inspections: a technical check of the hardware and software by independent BCI experts (to ensure safety and general adherence to the rules) and a medical check of each pilot's health status by a team of physicians. After that, qualification races were performed, followed by two final races (A-final and B-final).
There were three qualification races, with four teams participating in each race. As previously described, each race contains four instances of each action field as well as four noinput fields, for a total of 16 fields. All three qualification races had these 16 fields appear in different orders that were not known to participants in advance. In each race, the time needed for each team to reach the finish line was measured. The four fastest teams from all three races advanced to the A-final while the next four teams (ranked 5-8) advanced to the B-final. Each race lasted approximately 3 min, though moving the teams to and from the competition stage took approximately 15 min per race. Again, we acknowledge that such short races were likely influenced by random factors and may not have captured the best performance of each BCI technology; however, the need for short and attractive races was dictated by the live audience of laypersons. Eleven teams then participated in the 2016 competition, though there were significant changes from the 2015 lineup: some teams dropped out after the rehearsal because they could not find a tetraplegic pilot (required in 2016 but not 2015) while other teams either signed up after the rehearsal or had chosen not to participate in the rehearsal. Tables 1-3 present a list of the participating teams at the 2016 competition, as well as a brief overview of the teams' hardware (Table 1), software ( Table 2) and pilots ( Table 3). While information about all teams' approaches was collected by the technical experts of the Cybathlon, it was originally kept confidential. After the competition, we asked teams if they would provide us with a brief publishable summary of their approach; Tables 1-3 contain information for those teams that provided a summary.
Team Approaches and Results
As can be seen from Tables 1, 2, the best approach is difficult to identify. There was no clear advantage to using a particular amplifier, a particular number of electrodes, or particular electrode locations. Most teams used the g.USBamp (g.tec Medical Technologies GmbH, Austria) or BrainVision actiCHamp (Brain Products GmbH, Germany) amplifiers and their associated electrodes, but this was partially for nonscientific reasons: g.tec and Brain Products were sponsors of the Cybathlon, and agreed to provide free BCI hardware to several participating teams. Still, it is interesting that all teams used gelled electrodes, and may suggest that dry electrodes are not yet suitable for use outside the lab; however, due to lack of data, this is only speculation. Furthermore, while some teams originally expressed an interest in consumer-grade EEG devices Teams are ranked according to the time they needed to complete the qualifier race. The top four teams from the qualifier participated in the A-final while the next four participated in the B-final. Blank spaces in equipment columns indicate the team did not provide this data.
*The Brain Tweakers team participated with two pilots, both of which used the same hardware and software. **The pilot of the Lyon team did not pass the medical check and thus did not participate in the qualifier race.
from Neurosky (USA) and Emotiv Systems (Australia), none participated with such devices. Similarly, there is no clear advantage to any software approach, and many teams had similar approaches (e.g., use of spatial filtering and power spectral density based features). This use of similar approaches was likely because we forbade the use of additional external stimuli, which prevented teams from using other approaches such as SSVEPs or visually evoked P300 waves. There is also no clear advantage due to pilot age or injury severity (Table 3), and we believe that Goal 3 was not successfully fulfilled-we were unable to identify any factors that clearly improve BCI performance. We believe that future competitions should examine the effect of other human factors on performance, as explored further in the Discussion section.
Spectator Feedback and Scope of Outreach
The Cybathlon was highly effective from an outreach perspective. In addition to coverage from Swiss news agencies, 140 media representatives from 15 countries registered to cover the event. They included representatives from, e.g., Al Jazeera English, BBC, Bloomberg News, CNN, Die Zeit, ORF, Reuters, Wired Japan, and many more. Over 300 articles were published about the Cybathlon in 2016. Furthermore, the Cybathlon website (www. cybathlon.ethz.ch) was visited by approximately 47,000 different people in October 2016 alone, and approximately 4,800 people from all over the world watched the Cybathlon live via the stream on the Cybathlon website. Finally, the Cybathlon trailer has been viewed over 152,000 times on Youtube (as of November 2017).
The Cybathlon was attended in person by more than 4,600 spectators; of those, 283 participated in a brief survey about their experience. The majority (54%) were between 19 and 30 years old, and an additional 38% were between 31 and 60 years old. 70% were from Switzerland, 25% were from other European countries, and 5% were from outside Europe. 51% attended due to an interest in research and technology while 27% attended due to an interest in medical topics. When asked if the Cybathlon fulfilled their expectations, 44% said that their expectations were fully fulfilled and 47% said that the event exceeded their expectations. Thus, we believe that we successfully generated significant public interest, especially among young people with an interest in technology.
The above data was collected for the entire Cybathlon event, regardless of discipline. In addition to that, spectators were also asked what their favorite discipline was. This result was less encouraging: only 9% chose BCIs as their favorite among the six disciplines (as opposed to, e.g., 33% for wheelchairs and 20% for lower limb prostheses). Spectators commented that BCIs were not as attractive as other disciplines because they involved no physical movement-only pilots motionlessly playing the benchmarking game with their mind. Furthermore, some spectators still had difficulty understanding the cause-effect relationship between the BCI and the in-game actions without the help of the announcer, and requested live commentary of the game in multiple languages.
Feedback from Teams and Cybathlon Staff
The participating teams and the Cybathlon volunteers also provided feedback about how future BCI competitions could be improved. First, the teams emphasized the need to keep locker rooms close to the stage and carefully temperature-controlled. This was because teams needed to set up the EEG cap in the locker room, then wait for the competition and move the (usually wheelchair-bound) pilot through the building to the stage; reducing the distance between the stage and locker room also reduces the need to readjust the EEG cap on stage.
Second, monitoring the pilots and ensuring fair play proved to be a significant challenge. While we asked the teams to Reported only for teams that agreed to make this information public. All teams used imagery (motor or otherwise) to generate BCI commands.
provide access to EEG and EOG recordings for the judges, it was practically impossible to obtain enough expert judges to truly monitor all recordings during the competition. We limited ourselves to watching for voluntary limb movement and excessive eye movement, but this also was not trivial. Impaired pilots could make involuntary motions (e.g., spasms), and it was difficult to draw the line between normal and excessive eye activity. At the rehearsal, one team was warned after the qualification round since their pilot appeared to be blinking very often; however, such assessments are subjective and cheating can be difficult to prove. Nonetheless, despite challenges in monitoring the pilots, both the 2015 rehearsal and 2016 competition proceeded smoothly. All teams were able to set up successfully, and no BCIs failed to function. Furthermore, the teams agreed that the game was a good testing ground for online BCIs, with three commands and a "no input" state being realistic.
DISCUSSION
Based on our experiences, we believe that Goals 1, 2 and 4 of the Cybathlon were successfully achieved while Goal 3 was not fully achieved. Sections Goal 1: Benchmarking game, Goal 2: Rules and inclusion criteria, Goal 3: Identifying effective BCI approaches, and Goal 4: Outreach discuss each individual goal in more detail. The discussion then concludes with a few words of advice to teams who may be interested in participating in the next Cybathlon BCI competition (section Advice to future Cybathlon competitors) as well as a few thoughts about the organizational aspects of the Cybathlon BCI competition (section Importance of organizational aspects).
Goal 1: Benchmarking Game
We successfully developed a benchmarking game that allowed BCI performance to be measured via task completion time, and the teams agreed that the game was an appropriate stand-in for actual assistive technologies without the danger present in actual assistive devices. Similarly to our game, actual assistive technologies require users to choose among a small number of possible BCI commands, such as left/right for wheelchairs (Carlson and Millán, 2013), walk/idle for robotic gait orthoses (Do et al., 2013), left/right/both/none for BCI-controlled artificial arms (Onose et al., 2012) and so on. As in our game, the command must be sent at an appropriate time in order to avoid undesired consequences such as collisions. Furthermore, an incorrectly sent command can be overridden by a correct one-for example, if a BCI-controlled wheelchair is told to move to the kitchen, this command can be overridden by a command to move to the living room. Finally, the game was played in a less structured setting with many possible distractors and stressors, thus more closely approximating real-world conditions compared to a laboratory.
One change that could easily be made to the game would be to make some fields more common than others-for example, to have 90% of the fields be "no-input" fields. This would likely be a more realistic approximation of an assistive BCI, as BCIs are expected to spend most of their time idle (Mason and Birch, 2000), and not all assistive actions would be required equally often. This would also force teams to decide, for example, which paradigm (e.g., motor or mental imagery) to assign to which command. We originally chose to have all three actions equally probable so that BCIs with only one possible command could still compete; however, given that all teams at the 2016 Cybathlon used a 3-command BCI, we believe that this modification would be very useful in future BCI competitions.
Goal 2: Rules and Inclusion Criteria
Most of the rules of the Cybathlon BCI race were not controversial, but it is worth briefly discussing acceptable BCI paradigms and inclusion/exclusion criteria. With regard to BCI paradigms, we excluded approaches that require additional external stimuli: SSVEPs, visually evoked P300 waves and others. A few participating teams complained that they felt constrained by the exclusion of these paradigms, and future real-time benchmarking approaches could consider including all BCI types in order to maximize potential impact. This may, however, require a different game design: a SSVEP-based technology would require the pilot to divide their attention between the game and a secondary visual display, but would also support more than 3-4 commands and would allow faster gameplay due to higher information transfer rates (Nicolas-Alonso and Gomez-Gil, 2012). Furthermore, with regard to pilot inclusion criteria, we focused on tetraplegic pilots since the Cybathlon is meant to be a competition for people with disabilities who use cuttingedge assistive technologies. Future online benchmarking events, however, should choose the pilot inclusion/exclusion criteria based on the characteristics of the target population. A general BCI benchmarking competition could incorporate unimpaired adults with a similar age and level of BCI experience in order to minimize intersubject variability. A competition focused on, for example, P300/SSVEP-based spelling devices, on the other hand, may recruit only tetraplegics and people with locked-in syndrome since these are the ones most likely to use spelling devices (Li et al., 2008;De Vos et al., 2014). Finally, if the goal of a competition is to test generalizability to a wide range of people with disabilities, we could even ask multiple pilots (with different disability levels) to test the same device.
Goal 3: Identifying Effective BCI Approaches
The greatest weakness of the Cybathlon BCI competition was that we were unable to identify any factors that had a clear effect on BCI performance: there was no clear difference between different hardware and software approaches with regard to the race results. Furthermore, there was no clear effect of the measured pilot characteristics (e.g., pilot age, ASIA A vs. ASIA B injuries). This was partially likely due to the small sample size, as it is difficult to find clear effects based on 11 competing teams. Furthermore, the fact that some newer EEG technologies (e.g., dry electrodes) were not present at the Cybathlon may imply that these technologies are not yet ready for widespread use. Nonetheless, we cannot consider Goal 3 to have been successfully completed. In future competitions, this goal could be achieved more successfully by collecting more data about factors that affect BCI performance and/or using additional BCI performance metrics.
Factors that Affect BCI Performance
For future BCI competitions, we recommend not only increasing the number of pilots, but also obtaining more information about the pilots. For example, BCI performance tends to improve as users train with the system (Neuper and Pfurtscheller, 2010;Lotte et al., 2013), and there was significant variability in the amount of time that Cybathlon pilots spent training with their BCI system, from days to months. Furthermore, different user personalities and cognitive profiles may be more effective at controlling BCIs (Hammer et al., 2012;Jeunet et al., 2016), and pilots with higher motivation may be more effective as well (Sheets et al., 2014). None of these factors were evaluated at the Cybathlon, but future BCI races could capture and analyze them by asking pilots about the time they spent training as well as, for example, using personality and motivation questionnaires. Some data on this topic was reported by the winning team (Brain Tweakers), which fielded two pilots: in the months leading up to the Cybathlon, the pilot who eventually won the qualifier race completed 183 practice races while the pilot who won the final race completed 57 practice races against computer opponents (Perdikis et al., 2017).
In addition to training time and pilot characteristics, another human factor should be investigated in more detail: the strategies that pilots develop to deal with the game. For example, each pilot must decide on their own whether to attempt sending a BCI command on an action field as early as possible (and thus risk sending the command too early, resulting in penalties) or to wait until the avatar is comfortably on the field (thus reducing the potential benefit). Several pilots told us that they had developed their own strategies, and the winning team emphasized that it was critical for the pilot and BCI to adapt to each other (Perdikis et al., 2017). However, information about this factor was not recorded in detail, and we thus do not reliably know how it affected pilots' scores.
Other factors that may have contributed to BCI performance include for example, electromagnetic noise in the environment and sudden distractions in the arena. Such factors could be averaged out by holding multiple races. This would be a suitable solution for in-house evaluations of BCIs by their developers, but was not appropriate for the Cybathlon, which was broadcast on television and over the Internet, necessitating a single exciting race. The winning team compensated for possible distractions in the arena by carrying out mock races in the lab with loud spectators (Perdikis et al., 2017), and we believe that training in uncontrolled environments is essential for optimal BCI performance in any real-world setting.
Additional BCI Performance Metrics
We used only a single BCI performance metric: the time needed to complete the race. However, a more detailed analysis of performance could be achieved using more comprehensive measurements of BCI behavior. For example, future races could measure the total number of incorrectly sent commands, the time needed to successfully send a command on an action field, and the time needed to correct an incorrect command (i.e., the time between an incorrectly sent command and a follow-up correct command on the same action field). By comparing these metrics between teams, such an analysis could determine whether some BCI approaches are, for example, slower or more prone to incorrect commands than others.
Goal 4: Outreach
Based on the feedback obtained from spectators and the high media profile of the Cybathlon, we believe that we were successfully able to showcase BCI technologies to a general public that has, with very few exceptions, only seen it in science fiction movies, books and news articles. Admittedly, not all reactions were positive: as mentioned in section Spectator feedback and scope of outreach many spectators did not find the BCI discipline as attractive as other Cybathlon disciplines (e.g., powered wheelchairs) due to the lack of physical movement and the difficulty relating the BCI to in-game commands. Furthermore, some were surprised by the low accuracy and slow speed of state-of-the-art BCI systems.
While future Cybathlon events will take steps to make the BCI competition even more accessible to laypersons (by, for example, providing more detailed live commentary), we believe that it is very important to provide spectators with a realistic picture of the technology: not only its advantages, but also its disadvantages. This balanced presentation helps the public understand not only the potential of the technology, but also its current state and opportunities for improvement. Furthermore, it encourages developers to take steps to improve BCI performance outside the lab, bringing the technology closer to practical, real-world use.
Advice to Future Cybathlon Competitors
As the next Cybathlon is already planned for 2020, we wish to provide a few suggestions to teams interested in participating. Admittedly, these suggestions are purely subjective and based only on our qualitative observations of the 2015 rehearsal and 2016 competition; nonetheless, we believe that they are worth considering.
First, it is critical to find an appropriate pilot early: most of the dropout from the 2015 rehearsal to the 2016 competition was because teams could not find a tetraplegic pilot. Finding an appropriate pilot early also allows training to start early, providing critical BCI experience to the pilot and allowing classification algorithms to be tailored to the pilot. Second, teams should not only aim to improve offline classification accuracy, but should work with the pilot on online testing of the BCI and developing effective strategies for online BCI control (e.g., teaching the pilot how to compensate for possible systematic errors). These points were also emphasized as critical by the winning team (Perdikis et al., 2017). Third, we encourage teams to develop BCI approaches not based only on motor imagery. For example, a combination of motor and mental imagery (used by some of our 2016 teams) could be combined with detection of error-related potentials to provide immediate error correction.
Importance of Organizational Aspects
Finally, we wish to briefly comment on a few additional organizational aspects of the BCI competition: the pilot health checks, the technical safety inspections, and judging the competition for fairness. The Cybathlon 2016 organizational structure was such that all six disciplines had roughly the same technical safety inspections, which may have been excessive for BCIs. BCI systems are safer than the technologies used in the other disciplines, which can seriously injure the user (e.g., via the powerful motors of a robotic wheelchair), and all teams used commercially available hardware (unlike, e.g., the wheelchair discipline, where most devices were prototypes). Thus, future BCI competitions may consider a more streamlined safety check-for example, skipping the hardware check for all cases where common commercial amplifiers are used. We do consider pilot health checks, however, to be important and appropriate: though BCI devices are relatively safe, the pilots were tetraplegic and thus potentially at more risk than pilots in other disciplines (who were either paraplegic or amputees), so it was necessary to be aware of any possible health issues.
Judging the competition for fairness was expected in advance to be difficult, as we did not have the resources to truly monitor all pilots and collected signals during the competition, and even checking for possible voluntary/involuntary movement is not trivial (see section Feedback from teams and Cybathlon staff). One way to address this issue would be to require teams to record both EOG and the electromyogram (EMG) of different muscles during the event, then have judges check the recordings after the race and disqualify competitors later if necessary. This option was considered before the Cybathlon, but ultimately abandoned since it would be prohibitively time-consuming for the small organizing team to manually check all EOG/EMG recordings and cross-reference them with EEG, BCI outputs, and in-game events to determine whether cheating took place. As an alternative, future events could use a "peer review" method where each participating team leader would be randomly assigned to monitor another team during the race, ensuring fairness. A member of the Cybathlon organizing committee (who would be independent of the participating teams) would then be responsible for resolving any issues noticed by these peer reviewers. However, given that we do not expect teams to intentionally try to cheat during the competition, it is unclear whether such steps are truly necessary.
CONCLUSION
Having successfully concluded the Cybathlon 2016, we are confident that our BCI race represents a valid method of benchmarking BCIs online and is a reasonable approximation of actual assistive BCI applications. Furthermore, it provides an easily quantifiable outcome metric (the time needed to complete the task) that reflects both the technical quality of the system as well as the skill of the human user. Finally, different use scenarios can be simulated by changing the game parameters-for example, by changing how often each command is required, how often the user needs to remain idle, and what the penalty for failure is.
Our procedure could be used for online BCI benchmarking in academic and commercial settings. While our race involved tetraplegics, EEG and motor/mental imagery, it could also be modified for other user groups and BCI paradigms. This would allow the advantages of different hardware and software approaches to BCI to be evaluated for many applications. Furthermore, by collecting data about the human user, future benchmarking procedures could examine the influence of both technological and human factors. In this way, the scientific community will obtain a complete picture about how different factors affect online BCI performance in different populations, paving the way for broader adoption of BCIs in many applications.
Finally, as the Cybathlon was streamed over the Internet and recorded by camera teams from all over the world, it has the potential to showcase this technology to a general public that has only seen it in science fiction movies. The Cybathlon has received extensive support, and will become a recurring event, with a Cybathlon 2020 already in planning stages and smaller spin-off competitions foreseen around the world.
AUTHOR CONTRIBUTIONS
DN is an executive board member of the Cybathlon, led the development of most BCI discipline rules, served as a judge and technical expert at the 2015 and 2016 Cybathlon BCI race, and led the manuscript writing process. RS and DW are co-directors of the Cybathlon, and supervised all six disciplines. NG was the head organizer of the BCI race at the 2015 rehearsal and 2016 Cybathlon. RB and UG led the development of the BrainRunners game. Finally, RR is the initiator of the Cybathlon, and was responsible for high-level organization of all the disciplines. | 12,994.8 | 2018-01-11T00:00:00.000 | [
"Computer Science",
"Engineering",
"Medicine"
] |
A Novel Floating Point Fast Confluence Adaptive Independent Component Analysis for Signal Processing Applications
Independent component analysis (ICA) is a technique that separates the independent source signals from their mixtures by minimizing the statistical dependence between components. This paper presents a floating point implementation of a novel fast confluence adaptive independent component analysis (FCAICA) technique with reduced number of iterations that provides the high convergence speed. Fixed point ICA algorithms cover only smaller range of numbers. To handle large as well as tiny numbers and hence to improve the dynamic range of the signal values,floating point operations are performed in ICA. The high convergence speed is achieved by a novel optimization scheme that adaptively changes the weight vector based on the kurtosis value. To validate the performance of the proposed FCAICA, simulation and synthesis are performed with super-gaussian mixtures and sub Gaussian mixtures and experimental results provided. The proposed FCAICA processor separates the super-Gaussian signals with a maximum operating frequency of 2.91MHz with improved convergence speed.
Introduction
ICA, a statistical signal processing technique, is one of the most commonly used algorithms in blind source separation. The term "blind" means that both the original independent sources and the way the sources were mixed are all unknown. Estimates of the source signals are found only from the observed signal mixtures. ICA recovers source signals from their mixtures by finding a linear transformation that maximizes the mutual independence or non-gaussianity of the mixtures regardless of the probability distribution. It plays an important role in a variety of signal processing, image processing techniques and communication networks. Though different ICA algorithms have been reported, the FastICA algorithm has been shown to have advantages in terms of convergence speed [1].It measures non-Gaussianity using kurtosis to find the independent sources from their mixtures [2]. Algebraic ICA Algorithm performs ICA by solving simultaneous equations derived from the definition of the independence. It works very fast for two sources separation but it becomes extremely complex when the number of sources goes more than two [3]. Infomax Estimation is a desirable choice due to its asymptotic optimality properties when the number of samples is large. The simplest algorithm for maximizing the likelihood uses stochastic gradient methods [4]. Maximum likelihood (ML) estimation is based on the assumption that the unknown parameters to be estimated are constants or no prior information is available. Nonlinear Decorrelation Algorithm has been proposed in order to reduce the computational overhead and to improve stability [5]. Another approach to ICA that is related to PCA is the non-linear method. Since learning rule uses higher order information in the learning when nonlinearities are introduced, this method indeed performs ICA, if the data is whitened. Algorithms for exactly maximizing the nonlinear PCA criteria are introduced in [6]. Simple algorithms are derived from the one-unit contrast functions using the principle of stochastic gradient descent. Hebbian like learning rule is obtained by taking the instantaneous gradient of the contrast function with respect to w [7] .Joint approximate diagonalization of eigenmatrices (JADE) is based on the principle of computing several cumulant Tensors. With low dimensional data, JADE is a competitive alternative to more popular FastICA algorithms. Other approaches include maximization of squared cumulants [8] and fourth-order cumulant based methods [9].Fourth-order blind identification (FOBI) method deals with the Eigen value decomposition (EVD) of the weighted correlation matrix [10]. A frequency-domain method of blind source separation (FD-BSS) is able to separate acoustic sources under highly reverberant challenging conditions [11]. In frequency-domain BSS, the separation is generally performed by applying ICA at each frequency envelope. ICA is also done by entropy bound minimization (ICA-EBM) [12].
Fixed-point VLSI architecture was proposed for 2-Dimensional Kurtotic FastICA with reduced and optimized arithmetic units [13]. Implementation of the ICA algorithm on a fixed point platform and floating point processor shows that the accuracy and speed of the fixed point platform were found to be acceptable. In addition, the fixed point processor needs less space and consumes less power. But fixed point processor can handle only smaller range of real values [14]. Due to the computation complexities and convergence rates, ICA is very time-consuming for high volume or high dimensional data set like hyperspectral images. In Parallel ICA (pICA), ICA module is partitioned into three temporally independent functional modules, and each of them is synthesized individually. All these modules are developed for reuse and retargeting purpose. It provides an optimal parallelism environment, a potential faster and real-time solution [15]. FPGA implementation of ICA in digital chip is reported with the modular design concept in [16] and with systolic architecture in [17]. A mixed-signal VLSI system that operates on spatial and temporal differences in the acoustic field at very small aperture to separate and localize mixtures of traveling wave sources is presented in [18].
Evolutionary computation techniques which are population search based optimization methods like genetic algorithms and Particle swarm optimization are used in ICA [19][20] [21]. The only disadvantage of evolutionary computation based ICA technique is that it has heavy computational complexity. But with the advent of highly parallel processors and new technologies like VLSI, these methods provide competitive solutions to the problems.
Though current speech-recognition technologies are quite successful for clean speech signals, their performance is poor in real-world noisy environment which prevents it from becoming popular. Speech enhancement techniques to be made to overcome this difficulty are adaptive noise canceling (ANC) and blind signal separation (BSS). ANC reduces noise when there is knowledge of reference noise signals. When no reference signal is known, then the problem is the BSS problem [23]. ICA technique is mostly used in BSS problems. Two different ICA methods named as Shuffled Frog Leap optimization based ICA and Fast Confluence Adaptive ICA are proposed in floating point arithmetic in this paper. Most commonly used Fast ICA algorithm that provides high convergence speed is also developed for comparison purpose. In order to enable the real-time ICA processing in VLSI and to speed up the computation, the ICA algorithms are written by hand coding HDL code. Various analog VLSI implementations of ICA also exist in the literature. Since digital adaptation offers the flexibility of reconfigurable ICA learning rules, digital implementations are common practice in this field. Though there is software that translates the high-level languages such as C code, MATLAB, and even Simulink into HDL code, hand coding gives the power optimized Implementation with improved performance.
The originality of the proposed FCA ICA is summarized as follows: The early determination of converging weight vector and demixing matrix reduces the number of operations and hence the power consumption. Convergent speed is improved by changing the weight vectors according to fitness value. Floating point arithmetic improves the precision and dynamic range of the signals. This paper is organized as follows. Section II describes the background of ICA.
Section III presents the implementation of the FP arithmetic units. Section IV describes the FastICA algorithm. Section V describes proposed SFLO optimization algorithms for ICA that is based on evolutionary algorithms. Section VI describes proposed FCA ICA and Section VII demonstrates the simulation and implementation results. Finally, conclusions are drawn in Section VIII.
Background of ICA
A long-standing problem in statistics and related areas is to find a suitable representation of multivariate data. Representation here means that data is transformed so that its hidden, essential structure is made more visible or accessible. Blind source separation is a problem of finding a linear representation of hidden data from the mixture in which the components are statistically independent. In practical situations, we cannot in general find a representation where the components are really independent, but we can at least find components that are as independent as possible. Independent component analysis is a major task in signal processing to extract the source signals from the observed mixtures. The relationship between source signals S and observed mixtures X is given in matrix notation as in (1).
A is a full rank matrix which is called mixing matrix. Under some assumptions, ICA solves the BSS problem by finding inverse linear transformation such that, it maximizes the statistical independence between the observed mixtures. In doing this, ICA finds unmixing matrix B. Then the estimate of the source signal (S_est) is found from (2) S_est= B X=S (2)
ICA Preprocessing
In order to simplify the ICA process, it is highly recommended to perform preprocessing of mixtures before applying them to the ICA algorithm. The preprocessing of Advances in Signal Processing 1(3): 37-43, 2013 39 mixed signal involves finding the mixing matrix P. The first step in preprocessing is called centering. Let N statistically independent sources be mixed through NxN nonsingular mixing matrix A so that we obtain the observed signal mixtures given by x 1 (t),x 2 (t)..xn(t) which are the amplitudes of the recorded signals at time point t . For N=2, representation of original source signals is given by s 1 (t),s 2 (t) and mixtures are x 1 (t) and x 2 (t). Centering consists of subtracting the mean from each observed mixtures X 1 (t) and X 2 (t) to produce zero mean outputs C_X 1 and C_X 2 as shown in Fig.1. The second step is called Whitening and consists in linear transformation of the centered mixtures, to obtain new vectors which are white. Fig.2 shows the whitening process. The components of a whitened vector are uncorrelated and their variances equals to unity. This means that the covariance matrix of whitened data is equal to the identity matrix. One way to perform whitening is to use Eigen value Decomposition (EVD). The whitening matrix can be found by using (6) Where E is the orthogonal matrix of eigenvector found from the covariance matrix E {XX T }.D is the diagonal matrix of the eigenvalues associated with each eigenvector. The efficiency of ICA is based on the selection of cost functions, also called objective functions or contrast functions. The Cost function in some way or other is a measure of independence [2]. Some measures of independence are Kurtosis, negentropy and mutual information. Though there are different contrast functions, the most popular contrast function used in ICA is kurtosis.
Floating Point Arithmetic
Based on the storage area available, there are two variants of floating point representation of a real number i.e. IEEE single-precision representation and IEEE double-precision representation. IEEE single precision format, that uses 32 bits, has been used for this proposed ICA algorithm. Format of 32 bit Floating Point Representation is shown in Fig 3.
The Fast ICA Algorithm
Due to simplicity and fast convergence, Fast ICA is considered as one of the most popular solutions for linear ICA/BSS problem .The VLSI implementation of this algorithm involves the preprocessing discussed in chapter II and iteration scheme.
Iteration for One Unit
The fast ICA algorithm for one unit estimates one row of the demixing matrix as a vector that is an extremum of contrast functions. Fast ICA is an iterative algorithm, derived from kurtosis based contrast function. Assuming Z as the whitened data vector and ) 1 ( + k w T as one of the rows of the separating matrix,estimation of ) 1 ( + k w T is done iteratively until convergence is achieved. The Fast ICA algorithm involves the following steps. Choose an initial random vector of unit norm (w old ). Find norm of vectors and divide by corresponding norms. Update the frog by the formula using whitened data vector Z to find w new If w new -w old < ε is not satisfied then go back to step 2. where ε is a convergence parameter (~10 -4 ) and w old is the value of before it's replacement by the newly calculated value w new .
Fixed-Point Iteration for Finding Several ICs
More than one independent components are estimated 40 A Novel Floating Point Fast Confluence Adaptive Independent Component Analysis for Signal Processing Applications using a deflationary approach one by one or estimated simultaneously by symmetric approach.In order to prevent that the algorithm estimates the same component more than one time, the orthogonalization is made using (3) and (4). This verification is done by subtracting the projections of all previously estimated vectors from the current estimate after every iteration step and before normalization.
In the symmetric approach the iteration step is computed for all w p and the matrix W is orthogonalized as The results are obtained in Fast ICA following the deflationary approach.
SFLO ICA
In this ICA method, contrast function optimization is performed based on SFLO for improving the optimality and convergence performance. Mutation operator introduced in SFLO Algorithm avoids the solution from getting trapped in local minima. It converges better in lesser time when compared to other optimization algorithms. In this algorithm, initial weight vectors for estimating the demixing matrix are assumed as frogs and updated by step 3 of the algorithm. Then fitness value is calculated and sorting is done according to fitness value. Based on the fitness values, total population is partitioned into q groups (memeplexes) of p frogs,that search independently. In this process, the first frog goes to the first memeplex, the second frog goes to the second memeplex, frog p goes to the pth memeplex, and frog p +1 goes back to the first memeplex and so on. In each memeplex, the frogs with the best and the worst fitnesses are identified as Xb and Xw , respectively. Also, the frog with the most qualified fitness level among all the memeplexes is identified as Xg. Then improvement is done to improve only the frog with the worst fitness according to step (8). If this process produces a better solution, it replaces the worst frog. Otherwise, a new population is randomly generated to replace that population. This process continues for a specific number of iterations (I max1 ).Then all memeplexes are combined and sorted. Then mutation operation is included using (11) to avoid local minima. If the current iteration number reaches (I max2 ), the search procedure is stopped; otherwise it goes to Step 5. The last Xg is the solution of the problem.
1. Floating Point Iteration
Estimation of ) 1 ( + k w T is done iteratively with following steps until a convergence is achieved. Choose initial population of 'n' frogs (weights) at random. Find norm of pair of frogs and divide by corresponding norms.
Update the frogs by the formula Sort the initial population based on the fitness values with decreasing manner. Partition the sorted population into p memeplexes of q frogs. Select the best frog (Xb), worst frog (Xw) in each memeplex and globally best frog (Xg). Update the position of Xw using X w (new) =X w(old) +C.
where C= rand( ).(Xb-Xw). If it produces better solution, older frog is replaced by updated frog and this process continues for a specific number of iterations (Imax1). Otherwise a new frog is randomly generated to replace Xw and algorithm goes to step 2. All memeplexes are combined and sorted again. Apply mutation If the current iteration number reaches Imax2, the search procedure is stopped or it goes to Step 5. The last X g is the solution of the problem.
Here i rand X is a randomly generated vector, Nmem is the number of memeplexs, i=1,2,……Nmem , (.) rand is random number between (0 and 1) and ε is a convergence parameter (~10 -4 ). With the above steps, Deflationary othogonalization is made to find second independent component.
Novel Floating Point Fast Confluence
Adaptive ICA Though the above algorithm which is based on SFLO improves the optimality performance, it suffers from computational complexity due to large number of iterative calculations in floating point iteration scheme. For reducing the number of manipulations and for improving the performance of ICA algorithm in terms of convergence speed, adaptive optimization of contrast function is proposed in floating point arithmetic. Here, initial weight vectors for estimating the demixing matrix B in (2), are assumed as frogs and are described as memetic vectors. This algorithm computes new weights (frogs) from the initial weights in adaptive manner based on fitness value.
Floating Point Iteration for One Unit
Having done the preprocessing to whiten the mixed signal, this algorithms is used to find the independent component. The proposed Fast Confluence Adaptive ICA algorithm for one unit estimates one row of the demixing matrix. Updation Advances in Signal Processing 1(3): 37-43, 2013 41 of weights continues in iterative manner with following steps until a convergence is achieved.
Choose initial frogs (wi) of 'N' numbers at random. Find norm of frogs and divide by corresponding norms. Update all the N frogs by using the formula and sort the frogs according to the fitness values. Divide the N frogs into M groups (N=2*M) with 2 frogs in each group. The division is done in such a way that 1st frog goes to 1st group, 2nd frog goes to 2nd group and continuous up to M frogs. Then (M+1)th frog goes to 1st group and so on. In each group, determine the best and worst individuals. Update the worst frogs using step 3.
is not satisfied, then go back to step 2 by adaptively taking a new 'wi' lesser than that of worst frog where ε is a convergence parameter (~10 -4 ). When is satisfied, move to next group of frogs and repeat from step 6 until Iteration limit is reached. Then the two vectors with good fitness value can be used as row vectors of demixing matrix. With the above steps, Deflationary othogonalization is then made to find second independent component.
Results and Discussion
For verification of the validity and performance of the Fast ICA and the two proposed ICA algorithms, two different sub Gaussian and super-gaussian signal mixtures are taken and applied to the algorithms. The original signals are mixed with artificial mixing matrix A. The mixing matrix is a full rank matrix of 2 rows and 2 columns. The experiment was carried out first, for small-sized problem with 256 samples. Because the defined algorithm must be capable of efficiently solving the real-world sized instances, another experiment is carried out for large-sized problem with 3000 samples each. The algorithms are written in VHDL and the simulation results are obtained from Modelsim 10.0c Tool. Table 1 compares the performance of Fast ICA, SFLO ICA and FCA-ICA in terms of convergence speed T. Here T represents the time taken for each of the algorithms to reach convergence. The FCA-ICA based extraction of components from their mixtures possesses faster convergence compared to other two.
Results of Subgaussian Mixture
Subgaussian signals have kurtosis value lesser than zero. When kurtosis is zero, the signal is Gaussian for which ICA cannot be applied. Sine waves, sawtooth waves are examples of subgaussian signals. Two signals as in (5) and (6) each are taken and instantaneously mixed by the artificial mixing matrix A shown in (14) S 1 =sin(2*pi*50*t) S 2 =square(2*pi*50*t) (6)
Conclusion
In this paper, new time-domain approaches to estimate the independent components of from observed super Gaussian and sub-gaussian mixtures have been presented. Use of modularity and hierarchy simplifies the design, reduces the area and speeds up the convergence process of ICA. The usage of optimization algorithm enables finding optimal solutions. Floating point manipulations enable increased input signal range. The peculiarity of the resulting system is the capability of providing faster convergence with reduced power. Further research includes the application of the proposed method for other signals, such as Electroencephalograph (EEG), Spread spectrum signals and images under poor Signal to noise ratio (SNR) circumstances. Further improvement is possible by employing this technique with more than two sources. The FCA ICA, Fast ICA and SFLO ICA (Shuffle Frog Leap Optimization based ICA) algorithms converge to the optimal solution at 300ps, 200ps and 500ps respectively. x 10 | 4,642.4 | 2013-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Making quadratic functions interesting: Students teams-achievement division instructional strategy
Algebra plays a central role in school mathematics as it is a critical factor for success in mathematics courses such as calculus and geometry. This investigation aimed to establish the effectiveness of students teams-achievement division (STAD) strategy on students ’ academic achievement in algebra, Lagos State, Nigeria. A quasi-experimental model was employed for the study, which lasted for six weeks. Two secondary schools were chosen randomly and divided into experimental and control groups. The sample comprised 136 students (61 students in control group and 75 students in experimental group) from Ikorodu City, Lagos State, Nigeria. The quadratic functions assessment test, four questions (each with four items) with appropriate validation, served as the tool for data collection. Reliability was attained using the test-re-test method, with Pearson ’ s product-moment correlation analysis yielding a reliability index of 0.86. Three hypotheses were generated and analyzed using analysis of covariance at a significance level of 0.05. The results showed that students exposed to STAD had enhanced performance in algebra. Likewise, gender did not impact the achievement of the participants in algebra. It was recommended that the Nigerian Educational Research Development Council arrange workshops, training, and seminars on incorporating learner-friendly instructional strategies such as STAD.
INTRODUCTION
Algebra is one of the topics that students find challenging when learning mathematics (Hamzat, 2018).Algebra plays a central role in school mathematics as it is a critical factor for success in mathematics courses such as calculus and geometry.It also serves as a "gatekeeper" for further education and producing a skilled workforce for a modern high-technology society.However, Oyekan (2019) pointed out that the algebra taught in schools takes a different form than the algebra taught to mathematics majors.Moreover, the algebra taught in schools is not easily defined.It is thus related to various conceptions such as algebra as generalized arithmetic, equivalence, expressions, equations, inequalities, and functional thinking (Blanton et al., 2018).In this study, we focus on functional thinking (e.g., quadratic functions), which includes "generalizing relationships between co-varying quantities and representing, justifying, and reasoning with these generalizations through natural language, variable notation, drawing, tables, and graphs" (Karadeniz et al., 2017;Musa, 2022).
The covariance approach to functions requires understanding how a change in one variable relates to another or how those variables change together (Kezar, 2018).As a prerequisite for understanding calculus that underlies innovation and economic success across many science and engineering domains, a significant amount of time and emphasis is placed on teaching functions in the high school mathematics curriculum.Therefore, students' ability to define and make sense of functions is significant and should be emphasized (Abidin et al., 2021;Bardini et al., 2014).While the need for a dynamic conceptualization of the understanding and use of functions has been highlighted similarly, the lack of students' understanding of functions has been documented (Mulungye et al., 2016).Students' Odutayo & Fonseca / Making quadratic functions interesting 2 / 11 challenges in understanding functions include the following but are not limited to these; understanding functions in terms of input and output, thinking constant functions (e.g., y=6) are not functions because they do not vary, simply viewing functions as two expressions separated by an equal sign, believing that all functions should be definable by a single algebraic formula, and assuming that functions are linear or quadratic in unwarranted situations (Al-Rababaha et al., 2020;Booth et al., 2017).The high school mathematics curriculum includes numerous types of functions and their concepts, such as the linear, quadratic, cubic, exponential, hyperbolic, and trigonometric functions (Booth et al., 2014).
The quadratic function is one of the most significant ideas for students to learn about in school mathematics (Nielsen & Weber, 2015).Thus, a good understanding is needed since it is foundational in calculus courses (Burns-Childers & Vidakovic, 2018).However, as found by previous studies, high school and undergraduate students need help understanding functions-related concepts.Transitioning between graphical and algebraic representations, the link between the various expressions of an algebraic version of a quadratic function, and variable misconceptions are all common problems for students' understanding of quadratic functions (Ocal, 2017).A'yun and Lukito (2018) added a view of graphs as whole objects, struggles to interpret the role of parameters correctly, and a tendency to generalize from linear functions incorrectly.The science of equations is associated with algebra, and the Greek mathematician Diophantus is regarded as the author of algebra.Algebra is an Arabic word that means "reunification of broken parts" (Oyekan, 2019).It is a type of generalized arithmetic in which the numbers are represented by letters and symbols.
Teachers, parents, and educational authorities across Nigeria have always been concerned about students' low performance in mathematics examinations, specifically in quadratic functions (internal or external).According to the 2022 West African examinations council chief examiners' report, most students avoided quadratic function problems, and those who attempted them failed significantly (Musa, 2022).This observation was linked to students' misunderstandings before, during, and after their classroom experience.Additionally, education specialists conclude that several causes may be connected to this alarming trend, including teachers' adoption of suitable and acceptable teaching strategies (Hamzat, 2018).The conventional method that most teachers employ during classroom experience has been reported to be unfriendly and discourages students from comprehending mathematics, and by extension algebra (Oyekan, 2019).Therefore, efficient delivery techniques while teaching quadratic functions are essential to ensure complete comprehension, correct mastering, and a significant improvement in memory capacity.Students teams-achievement division (STAD) is a strategy that experts advise.This cooperative strategy, STAD instructional strategy, has been reported to promote both collaborative and independent learning simultaneously (Tran, 2013;Victor-Akinyemi, 2022).This helps achieve clearly stated learning objectives.A small group of students, each with a different ability level, work together to complete an expected learning outcome in this type of learning technique.After the teacher has finished teaching the material, students interact in mixed-gender groups to support each other for better comprehension (Wyk, 2013).This is when STAD occurs.Students were assigned to learning groups of four or five people who were diverse in ability, gender, and ethnicity.After the teacher provides a lesson, students work in groups to ensure everyone understands the task.Finally, each student takes a separate exam on the subject, during which they are not allowed to consult with one another (Tiantong & Teemuangsai, 2013).When calculating the number of points granted, the degree to which pupils match or outperform their prior performance is considered.Test results for each student are compiled, and the aggregate of each score creates the team score.Teams may receive certificates or other awards if they achieve specific requirements (Okechukwu et al., 2016).STAD strategy is appropriate for subjects with clearly stated objectives, such as geography, science concepts, and mathematical applications and computations.However, it can be revised for practice with less noticeably identified objectives by combining more essay tests (Khalil & Elkhider, 2016).Based on the students' positive interactions with one another, their improved attitudes toward the subject, their elevated sense of self-worth, and their honed interpersonal skills, it was decided to enroll them in STAD.Also, some
Contribution to the literature
• This study determined that STAD instructional strategy effectively boosted student attitudes toward math and reduced anxiety associated with studying quadratic functions.• Also, STAD method of instruction is an effective tool for instructors seeking to increase student results and foster a good learning environment.• This study has contributed to the existing literature because it reports that STAD is a learner-friendly instructional strategy, not gender bias.
3 / 11 accomplished students assume a tutor position and produce intended outcomes.Additionally, STAD introduces a second method for group learning.Additionally, it prepares pupils for the pressures of modern life by teaching them good teamwork (Oludipe & Oludipe, 2017).
Today, an overwhelming majority of students continue to have difficulties in mathematics.They believe mathematics is challenging because it involves understanding a complex system of rules and dealing with large numbers (Thompson, 2015).The main objective of this study is to ascertain the effectiveness of STAD instructional strategy on secondary school students' academic achievement in algebra in Lagos State.Specifically, the study would investigate the effect of STAD instructional strategy on their academic achievement in algebra based on gender and academic ability level.
Research Hypotheses
The following research hypotheses were used to guide this study.
HO1. STAD instructional strategy does not
significantly improve students' achievement in quadratic functions.
HO2.
There is no significant difference in the performance of male and female students in quadratic functions taught using STAD instructional strategy.
HO3. STAD instructional strategy does not significantly improve students' achievement in quadratic functions based on academic ability level.
Theoretical Framework
The theoretical foundation for this study was founded on Vygotsky's social interdependence theory, which was first proposed in 1962.Vygotsky represented students as social beings entwined with others and eager to learn new things and develop new talents.When students are working toward common objectives, according to this theory's core tenet, their respective actions affect the general outcome of the group (McLeod, 2023).They all have similar objectives, and the actions of other students impact each student's performance.When a student's successes and losses affect those of other students, there is social interdependence.McLeod (2014) states that knowledge is social and created via group efforts to analyze, learn, and solve problems.For pupils to advance beyond their current development, according to Vygotsky, they must connect with other student(s) who are more knowledgeable than themselves.Vygotsky saw learning and development as dynamic processes placed in social and cultural contexts.Therefore, in a constructivist environment, the teacher does not focus mainly on the correctness of the answer but on the procedure followed by the students to arrive at the solution.
Teachers' ought to guide students and allow them to collaborate with more competent classmates.Students' intellectual development may be stunned if cooperative activities are not present to create such a learning environment.This framework demonstrated how communication and social interaction are crucial to learning.Those in the group who get along well together will produce good outcomes.There are only individual efforts when there is neither social interdependence nor social reliance (McLeod, 2014).Integration is necessary for increased learning.As a result, the foundation of this philosophy is collaborative teaching methods.According to Vygotsky, abstract thinking cannot develop independently; it needs language and Western education.Underprivileged in Uzbekistan of varied ages, sexes, and levels of exposure to the newly constructed schools were tested to see if this was true by Vygotsky.Only formal education showed a correlation between abstract thinking and these other factors.This theory is based on the idea that a more competent person (teacher) may give students temporary frameworks that help them reach higher-order thought stages, which are identified as the range of students' sole accomplishments and what they can accomplish with the help of the best possible social support (Victor-Akinyemi, 2022).
Literature Review
Equations are a key algebraic idea.It performs arithmetic operations in accordance with several rules.Data sets with two or more variables can be understood using specific rules.This study examines past studies related to the influence of STAD, cooperative instructional strategy, gender, and ability level on students' academic achievement in algebra and mathematics.
Few studies were available to the researchers on the influence of STAD on the academic achievement of students in mathematics.In a comprehensive review, STAD studies on different subjects were also presented in this section.Majoka et al. (2015) investigated the efficacy of STAD in secondary mathematics classrooms.This study's findings showed that STAD was a better instructional paradigm for mathematics than the conventional approach.Using STAD learning paradigm, Rusi et al. ( 2019) investigated practices to enhance the mathematics learning results of fifth-grade pupils.According to their study, STAD strategy can be employed as a pleasant learning tool and a solution to the challenging issue of pupils comprehending the fifthgrade mathematics curriculum.Simamora (2017) investigated how STAD learning model affected students' capacity for conceptual thinking in mathematics.According to reports, STAD learning approach impacts arithmetic students' conceptual comprehension abilities.The impact of STAD strategy on learning outcomes in terms of numerical skills was assessed by Sa'adiah et al. (2021).The study discovered a substantial relationship between STAD cooperative instructional strategy regarding numerical skills and learning outcomes in mathematics.According to other researchers, STAD encourages the robust and comprehensive advancement of students who participate in classroom activities (Ahmed, 2016;Olaide, 2019;Wang et al., 2017).
Likewise, empirical studies on STAD in other subjects such as sciences and arts, Tiantong and Teemuangsai (2013) explored the use of STAD strategy through the sectional Moodle to develop learning accomplishment in computer programming courses.They claimed that Moodle might be used to successfully implement the student team accomplishment STAD strategy to improve learning objectives on courses in computer programming.Yusuf et al. (2015) investigated the differences between two nations, namely the German nation and the Turkish nation, which have varying degrees of historical, cultural, and political ties to the Balkans.The achievement levels of STAD were used in this investigation.The study's conclusions showed that the introduction of STAD resulted in a significant change in the negative stereotype scores of students according to the country in which learners were raised.Furthermore, Shahedi (2016) researched to ascertain whether cooperative learning can enhance primary students' speaking abilities with a focus on STAD and its potential to affect students' interaction ability.According to this study, adopting this strategy can help students become more accurate speakers of English.
Moreover, using Kemmis and McTaggart's participatory action theory, Jamaludin and Mokhtar (2018) used STAD to assess first-semester students' attitudes toward the tourism geography course.The attitude and the teamwork fulfillment scale were assessed to comprehend students' feedback on STAD teaching method.The study found that STAD technique improved the experimental group's pupils' accomplishment test scores, attitudes, and teamwork.The effectiveness of STAD in enhancing students' talking abilities with English as an alternate language was critically investigated (Ibrahim & Adnan, 2019).According to this review, comprehending STAD improved speaking ability and collaboration fulfillment among students and provided a thorough analysis of contemporary dialogue.Lantajo and Tipolo (2018) investigated how STAD affected grade 8 student's academic performance in physics at a public school.They concluded that by working with their classmates, the students' academic achievement had been enhanced by applying STAD.Using STAD as a technique for peersupported cooperative learning in neuroanatomy was examined by Motwani et al. (2022).They discovered that STAD significantly improves student performance over traditional self-learning.According to Kikiola (2018), Kolapo (2016), Ogunsakin (2020), and Victor-Akinyemi (2022), STAD is a helpful instrument for peer-assisted cooperative learning in other courses like chemistry, economics, English, geography, and physics.Concerning gender differences in the achievement of students, Gupta et al. (2014) investigated the impact of gender on students' mathematics achievement when cooperative learning is utilized as an instructional technique.When students were exposed to STAD, gender did not affect their mathematical achievement.The gender of students exposed to algebra employing PBL did not substantially contrast in performance and retention grades, according to Adediran et al. (2015), Ajai and Imoko (2015), and Nikou et al. (2014); Similarly, Lindberg et al. (2010) and Sinaga and Mukhtar (2019) found no significant influence of STAD on male and female students' academic achievement.Weisman et al. (2020) investigated the association between gender, GPA, and multi-ethnic students in STAD classes.It has been shown that when students are exposed to STAD, gender does not affect their learning outcomes.Jiang (2021) and Rodrguez et al. (2020) claimed that female students' attitudes were not on par with their male counterparts, but no difference in mathematics performance was seen.Furthermore, Anyanwu and Iwuamadi (2015), Victor-Akinyemi (2022), and Yusuf et al. (2014) found no gender differences in student performance.Brown and Kanyongo (2017), on the other hand, discovered a gender difference in mathematics achievement.They stated that male students achieved better than their female colleagues.According to Kyavoa (2017), female pupils underperform male students in mathematics.Male students scored much higher in mathematics than their female counterparts, according to Bertoletti et al. (2020) and Kaiser and Zhu (2022).In contrast, Iweka (2017) and Sam-Kayode and Salman (2015) reported in independent research that student performance in mathematics varied greatly.They went on to say that the mean score of female students was higher than that of male students.Ability level refers to a student's present capacities at a given time.Students are presently adept at doing it individually with a great degree of precision.The capability level is also known as the scoring level because it is based completing of the assignment without assistance from a student (Odutayo & Yusuf, 2020).Students' ability levels are classified as high, average, and low based on their academic ability.There have been studies carried out by researchers on the influence of ability level on students' achievement with varying findings.The survey by Musa (2022) concluded that students' ability levels play no role in their academic achievement in mathematics.In line with this finding, Akintunde (2017) submitted that cooperative instructional strategy influences mathematics students' 5 / 11 learning outcomes based on their scoring level.On the contrary, Ayinla (2018) and Salman (2016) submitted that the ability level of learners impacts the learning outcome of learners.They further stated that averageability students performed significantly better than high and low-ability students.Similarly, Dambatta (2019) and Enikanolaye (2021) also reported in their respective studies that the ability level of learners dictates the outcome of students in mathematics when exposed to collaborative learning styles.However, they opined that high-ability students fared better than the average and low-ability students during instruction.
MATERIALS & DESIGNS
This study's population would be all Lagos State secondary school students.While the target population would be all secondary school two (SS II) students in Ikorodu City, Lagos State, Nigeria.This study employed a quasi-experimental, non-randomized, and nonequivalent control group design.Random sampling techniques were used to select the two secondary schools and subsequently used to assign the schools to either treatment or control groups.Two secondary schools with intact classes in Ikorodu City, Lagos State, Nigeria served as the sample for this study.A total of 136 SS II students served as the sample in this study (STAD group comprising 75 students, and the control group, having 61 students).The instructional package focused on the teaching of graphing quadratic functions.
The instructional activity lasted for six weeks; the first week was used for approval from the authorities of the sampled schools, introduction, and interactions with the participants on the experiment process.The treatment spanned from the second to the fifth week, while the post-test was carried out on the sixth week.A researcher-designed assessment test titled "quadratic functions assessment test (QFAT)" was used to measure students' achievement.Quadratic graphs and quadratic equations were the topics both sets of students were exposed to.QFAT consisting of four questions (each with four items), was used for the pre-and post-tests.The treatment group was taught with STAD instructional strategy, while the control group was taught with the traditional instructional strategy.STAD instructional strategy entails separating the students into small groups, each with a different ability level, working together to complete an expected learning outcome.After that, students interact in mixed-gender groups to support each other for better comprehension.On the other hand, the conventional group were taught with the traditional/conventional method.
QFAT test assessed students' knowledge and skills of graphing quadratic skills.The assessment test items' validity was achieved with the assistance of mathematics, educational research, measurement and evaluation, and teacher education experts.Their comments and observations were integrated into the test items for improvement.The instrument's or QFAT's reliability was tested using the test-re-test method.This instrument was administered to 40 participants who were not students at participating schools.The tests were administered over two weeks.Data from the first and second administrations were collected separately and tested for dependability using the Pearson productmoment correlation statistic, yielding a result of 0.86.With such a strong correlation, the instrument was deemed suitable for the investigation.Analysis of covariance (ANCOVA) was used to test all the hypotheses formulated at 0.05 significance level.The statistical method known as ANCOVA, or analysis of covariance, is frequently utilized to account for the effects of variables in experimental research.Covariates are factors that are not the primary subject of the investigation but might affect the outcome.ANCOVA technique enables researchers to decrease confounding variables and improve the precision of their findings.Furthermore, ANCOVA enhances statistical power by reducing error variance and improving accuracy in STAD treatment estimation.
Ho1. STAD instructional strategy does not significantly improve students' achievement in quadratic functions
The analysis result is shown in Table 1 for the descriptive statistics of mean for the pre-and post-test scores of experimental and control groups and Table 2 for ANCOVA results for the difference between the experimental and control groups.
As shown in Table 1, students taught with STAD had higher mean scores (54.49±13.87)than students acquainted with the conventional strategy, with a mean score (52.58±12.41),which is then reflected in ANCOVA result in Table 2, the significant difference in the performance of the groups, F(2,133=23.1, p<0.00).Consequently, a significant difference exists in students' performance in quadratic functions exposed to STAD instructional strategy.Odutayo & Fonseca / Making quadratic functions interesting
/ 11
Ho2.There is no significant difference in the performance of male and female students in quadratic functions taught using STAD instructional strategy.
For testing hypothesis two, the analysis results are shown in Table 3 for the descriptive statistics of mean for male and female students taught with STAD instructional strategy, and Table 4 for ANCOVA results for male and female students taught with STAD instructional strategy.
Table 3 shows that male students taught with STAD had a higher mean score (55.91±13.17)than female students taught with peer tutoring (54.63±12.01).
The results of ANCOVA analysis, as depicted in Table 4, indicated no significant difference in the performance of students taught with STAD instructional strategy based on gender, F(1, 131)=0.011,p>0.917.Thus, the improvement in the performance of students in quadratic functions exposed to STAD instructional strategy was not affected by gender.
Ho3. STAD instructional strategy does not significantly improve students' achievement in quadratic functions based on academic ability level For testing hypothesis three, the analysis results are shown in Table 5 for the descriptive statistics of mean for students' ability level taught with STAD instructional strategy, and Table 6 for ANCOVA result for students scoring level taught with STAD instructional strategy.Table 7 for Scheffe post-hoc pair-wise comparison to show the source of established significant difference.
As shown in Table 5, students with high scoring level taught with peer tutoring had the highest mean score (73.78±4.63),followed by medium-ability level students (55.61±8.15)with low-ability level students with the lowest mean score (29.71±4.15),which reflected ANCOVA result in Table 6 the significant difference in the scoring level of students taught with STAD instructional strategy, F (1, 131)=6.77,p<0.002.Table 7 showed the high-ability level students taught with STAD strategy were statistically different (18.17) than
DISCUSSION
This study discovered that STAD instructional technique increases students' quadratic function achievement.Compared to traditional teaching strategy, STAD strategy fosters a collaborative learning environment in which students' team up and support each other's knowledge of algebraic ideas.The advantages of this strategy go beyond academic accomplishment since it helps students develop crucial social skills, including communication, teamwork, and leadership.This study's findings are consistent with those of Ahmed (2016), Sa'adiah et al. (2021), Simamora (2017), Olaide (2019), and Wang et al. (2017), all of which found that students exposed to STAD strategy performed better in mathematics and other disciplines than those who were taught using traditional methods.Overall, STAD instructional strategy is promising for improving algebra education and helping students succeed in this important subject area.
This study also discovered that gender does not affect the improvement in quadratic function achievement of students exposed to STAD instructional strategy.This significant outcome implies that male and female students benefit from STAD strategy.It also suggests that gender has no bearing on the success of this teaching strategy.This is crucial information for educators who want to increase student performance in their classes.STAD is a cooperative learning system that emphasizes group work and peer tutoring.It has been proven helpful in raising academic achievement in various subject areas.It is an appealing alternative for teachers wishing to establish an inclusive learning environment because it works equally well for both sexes.Teachers can assist their students in developing crucial abilities such as communication, teamwork, and critical thinking by employing STAD strategy.Overall, this study sheds light on the efficacy of STAD teaching strategy and its capacity to improve student performance regardless of gender (Yusuf et al., 2014;Anyanwu & Iwuamadi, 2015;Rodrguez et al., 2020;Jiang, 2021;Victor-Akinyemi, 2022).However, the findings of this study contradict those of Sam-Kayode and Salman (2015); Iweka (2017); Bertoletti et al. (2020); Kaiser and Zhu (2022), all of which found gender differences in achievement in favor of either male or female students.
Finally, this study found that STAD strategy substantially influenced average ability students' quadratic functions achievement.STAD strategy has been found to improve the academic achievement of average-scoring algebra students significantly.This conclusion is significant because it implies that this method can help bridge the achievement gap between high-and low-achieving pupils.STAD approach is designed to promote collaboration and active learning among students, which can lead to deeper understanding and improved performance.By working together in small groups, students can share their knowledge and skills, identify areas, where they need additional support, and receive feedback from their peers.This collaborative approach also helps build social skills and teamwork abilities, which are essential for success in school and beyond.Overall, STAD instructional strategy is a powerful tool for improving academic outcomes for all students, especially for average achievers in math.The findings of this study are separate from the outcome of Akintunde (2017) and Musa (2022), who concluded that students' ability levels play no role in their academic achievement in mathematics.On the contrary, Salman (2016) and Ayinla (2018) submitted that the ability level of learners impacts the learning outcome of learners in favor of proficient students.
CONCLUSIONS
Based on the findings of this study, it was determined that STAD teaching strategy effectively boosted student attitudes toward math and reduced anxiety associated with studying quadratic functions.Quadratic functions allow pupils to grasp complicated, changing, and abstract concepts, which stimulates the brain and helps students learn to think in new ways.Overall, STAD method of instruction is an effective tool for instructors seeking to increase student results and foster a good learning environment.This will be accomplished because algebra will assist students in organizing their thoughts, making it easier for them to formulate logical solutions when confronted with complex or dynamic situations.
The study's limitation is that not all educational environments or student demographics may be covered by it.STAD teaching strategy's efficacy may vary based on the student's prior knowledge, motivation, and unique learning preferences.Additionally, because the study's primary focus is on quadratic functions, its results could not transfer to other mathematical ideas or fields of study.Therefore, it is recommended that: 1.As a mechanism of the Ministry of Education, the Nigerian Educational Research Development Council should arrange workshops, training, and seminars on incorporating learner-friendly instructional strategies such as STAD.
2. Curriculum planners should support implementing STAD instructional strategy for teaching mathematics while monitoring and supervising schools continuously to ensure compliance.
Educational administrators must encourage a
research culture by urging mathematics teachers to use STAD and submit thoughtful analysis and empirical submissions at the end of the academic session.
4. Administrators should plan incentives and recognition for educators who go above and beyond the standard teaching style to utilize STAD instructional strategy.
Table 1 .
Descriptive statistics for pre-& post-test scores of treatment & control
Table 2 .
Results of analysis of covariance on differences between treatment & control groups
Table 3 .
Result of descriptive statistics for post-test scores students exposed to STAD based on gender
Table 4 .
Results of analysis of covariance on differences between treatment group based on gender
Table 5 .
Result of descriptive statistics on scoring level of students exposed to STAD instructional strategy
Table 6 .
Result of analysis of covariance on post-test scores of different scoring level of students exposed to STAD strategy
Table 7 .
Result of Scheffe post-hoc pair-wise comparisons on students' ability levels in post-test mean scores | 6,409.2 | 2024-01-03T00:00:00.000 | [
"Mathematics",
"Education"
] |
Exact Quantum Decay of an Interacting Many-Particle System: the Calogero-Sutherland model
The exact quantum decay of a one-dimensional Bose gas with inverse-square interactions is presented. The system is equivalent to a gas of particles obeying generalized exclusion statistics. We consider the expansion dynamics of a cloud initially confined in a harmonic trap that is suddenly switched off. The decay is characterized by analyzing the fidelity between the initial and the time-evolving states, also known as the survival probability. It exhibits early on a quadratic dependence on time that turns into a power-law decay, during the course of the evolution. It is shown that the particle number and the strength of interactions determine the power-law exponent in the latter regime, as recently conjectured. The nonexponential character of the decay is linked to the many-particle reconstruction of the initial state from the decaying products.
where A(t) = Ψ(0)|Ψ(t) is the survival amplitude. To appreciate that the decay dynamics of S(t) is generally non exponential, it is convenient to use the Ersak equation, that relates the values of survival amplitude at different times during the course of evolution [7], Above, the memory term is given by where U (t, t ) = T exp(−i t t H(s)ds/ ) is the time evolution operator from time t to t, and Q = 1 − P is the complement of the projector on the initial state P = |Ψ 0 Ψ 0 |. Hence, M (t, τ ) represents the probability amplitude for the initial state to evolve into decay products at time τ , and subsequently reconstruct the initial state at time t. For an exponential decay to hold, the memory term is to vanish. However, the memory term plays a dominant role at short-and long-times of evolution. The short-time decay is known to be governed by the energy fluctuations of the unstable state. Experimentally, it was first demonstrated in [8] and its existence sets the ground for the quantum Zeno effect [9]. It is a consequence of unitary time-evolution, provided that the first and second moment of the Hamiltonian exist, see [10] for exceptional cases. That deviations from exponential decay are as well to be expected at long times was pointed out by Khalfin in 1957, for systems whose energy spectrum is bounded from below [6]. Measurements consistent with these deviations were reported in [11].
The decay dynamics of many-particle quantum systems has recently received a great deal of attention [12][13][14][15][16][17][18][19][20][21][22]. These studies show the need to characterize quantum decay at a truly many-particle level, beyond its description in terms of one-body observables such as, e.g., the integrated density profile [23][24][25][26]. Achieving this is a challenging goal, due to the limitation of reliable numerical techniques. Even at the single-particle or mean-field level, propagation methods based on space discretization in a finite spatial domain can introduce artifacts due to the enhanced reflection from the boundaries of the numerical box, unavoidable for long-time expansions [27]. An attempt to palliate this effect with complex absorbing potentials [28] explicitly suppresses state reconstruction, and delays the onset of power-law behavior in an unphysical way [29]. By contrast, these issues are absent in studies of fidelity decay in spin systems [30]. As an outcome, analytical results in quantum decay, often based on time-dependent scattering theory, are highly desirable. Recent theoretical progress has been mainly restricted to two particle systems [12][13][14][15][16][17][18] and quasi-free few-particle quantum fluids [19,20,22]. Experimentally, the role of Pauli exclusion principle has been demonstrated using analogue simulation in photonic lattices [31], while interaction-induced particle correlations have been measured in optical lattices [32].
In this Letter, we present the exact quantum decay dynamics of an interacting many-body system that is equivalent to a gas of particles obeying generalized exclusion statistics. We rigorously show that the survival probability decays as a power law at long times with an exponent arXiv:1504.01620v1 [quant-ph] 7 Apr 2015 that depends on the strength of the interactions and the particle number. In the non-interacting limit, this result proves the scaling conjectured based on the study of few particles systems [12,13,18,19,21]. The non exponential character of the evolution is linked to the multi-particle reconstruction of the initial state.
Model.-The Hamiltonian model we will consider is that of N particles effectively confined in one-dimension in a harmonic trap and interacting with each other through a 1/r 2 potential. This is the so-called Calogero-Sutherland (CS) model [33,34] Its energy spectrum and the complete set of eigenstates are known [33][34][35]. The CS Hamiltonian includes several relevant limiting cases. It reduces to non-interacting bosons for λ = 0, while the Tonks-Girardeau gas [36] describing hard-core bosons is recovered for λ = 1. For λ = {0, 1}, it describes Haldane anyons [37]. We shall assume that the frequency of the trap for t < 0 is given by To describe the decay of the survival probability, we first note that the CS model belongs to a broad class of systems for which the exact time-dependent coherent states can be found [38,39]. A stationary state Ψ of the system (4) at t = 0 with energy E, follows a self-similar evolution dictated by the SU (1, 1) dynamical symmetry group, where τ (t) = t 0 dt /b 2 (t ). Here, the scaling factor b = b(t) > 0 is the solution of the Ermakov differential equationb where K(t) = [ω(t)/ω 0 ] 2 , and the boundary conditions b(0) = 1 andḃ(0) = 0 follow from the stationarity of the initial state. The ground-state of the CS model has a Bijl-Jastrow form [34] Ψ Here, the normalization constant C N,λ reads it is related to the normalization constant of the probability distribution function for the Gaussian (β = 2λ)ensembles in random matrix theory and derived using Mehta's integral [34,40,41]. Using the dynamics (4), it is found that Upon explicit computation, the following closed expression is obtained where the function α(t) is given by As a result of the boundary conditions, b(0) = 1 anḋ b(0) = 0, α(t) reduces to unity at t = 0. For N = 1, one recovers the survival probability of a single particle in a time-dependent harmonic trap S 1, It follows that the survival probability of a N particle CS system is identical to that of N non-interacting particles obeying Generalized Exclusion Statistics (GES) with exclusion parameter g = λ. GES was introduced by Haldane for systems with a finite Hilbert space [37], and extended by Wu to unbounded Hamiltonians [42]. It accounts for the number of available states excluded by a particle in the presence of others, and it smoothly extrapolates between bosons and fermions, and beyond. The exclusion parameter is defined as the ratio g = −∆d/∆N of the change in the available states ∆d as the particle number number is varied by ∆N. For the CS model the exclusion parameter is precisely given by λ [43]. Particles with fractional excitation λ = {0, 1} are Haldane anyons. From (8), the survival probability for N non-interaction bosons and fermions is recovered for λ = 0, 1 respectively. This leads to the duality that represents a signature on the quantum decay dynamics imprinted by the transmutation of statistics observed in a CS system for different values of the GES parameter λ. Indeed, Eq. (10) resembles the known relation for the equilibrium partition functions obeyed by Haldane anyons [43]. More generally, Sudden expansion.-Suddenly switching off the trap (e.g., K(t) = Θ(−t)), leads to an expansion dynamics with the scaling factor given by b(t) = √ 1 + t 2 > 0. The survival probability decays monotonically as a function of time and there is a smooth transition between the short and long time asymptotics, as shown in Fig. 1 for different values of λ. At short-times, S N,λ (t) is a quadratic function of time, . (12) where the coefficient of t 2 can be interpreted as the variance of the kinetic energy in the initial state |Ψ 0 , and all odd moments vanish identically. As a result of the free dynamics, the exponential regime [5] is absent due to the lack of resonant states and a transition to the long-time behavior follows, when the survival probability is given by, Hence, the survival probability decays as a power-law in time. The power law exponent depends linearly on the interaction strength λ (the GES parameter) and exhibits an at most quadratic dependence on the particle number N. This is the main result of this Letter. Its derivation required (i) the scaling dynamics of the exact timedependent coherent states, (ii) the use the Bijl-Jastrow form of the initial state, and (iii) the identification of the leading term at long times. The scaling dynamics holds exactly for the CS gas and simplifies the ensuing analysis by contrast to other many-body systems such as, e.g., the 1D Bose gas, where only recently moderate progress accounting for its dynamics has been reported [44][45][46][47].
Comparing the first leading terms in a long-time asymptotic expansion, it is found that the power-law (13) sets in when the time of evolution satisfies t 2N(1 + λ(N − 1)).
For free-bosons, the power-law exponent becomes linear in the particle number The case of hard-core bosons correspond to λ = 1, and leads to a power-law exponent quadratic in the particle number The change in the scaling with N was conjectured analyzing quasi-free systems of N = 2, 3 particles [12,13,18,19,21]. Eqs. (13), (15), and (16) prove that this is indeed the case.
Robustness.-It is worth emphasizing that the powerlaw behavior (13) is observed in the long-time dynamics of other multi-particle observables such as the non-escape probability from a region of space, e.g., where the initial state is initially localized. Explicitly, we define the Nparticle non-escape probability as which is the probability for the N particles to be found simultaneously in the ∆-region and can be extracted from the full-counting statistics [19]. Explicit computation shows that Taking ∆ = [−a/2, a/2], the integral I N,λ (t) becomes time-independent at long expansion times when b √ λa, i.e. t |λa 2 − 1| 1/2 , where S n (α, β, γ) is the Selberg integral [41]. As a result, the same power-law scaling sets in, i.e., However, the dependence of the power-law exponent on N and λ is lost when studying the decay in terms of onebody observables such as the one-particle density profile n(x, t) integrated over the region of interest ∆, To illustrate this, let us consider the exact evolution of the density profile that under scaling dynamics is given by n(q, t) = n(q/b, 0)/b. Although an explicit computation of the density profile n(q, 0) is possible in the CS model, it would suffice to consider the large N limit. Then, n(q, 0) follows Wigner's semicircular distribution n(q, 0) = 1 − q 2 2N/π which is already independent of λ. Under free expansion, it is found that p(t) ∼ 2aN/(πt), where the power-law exponent is independent of N. The same conclusion holds when using the expressions for low N and large λ available in the literature for n(q, 0) [41,48], but for the fact that the prefactor acquires a dependence on λ/N. Generally, the 1/t powerlaw decay of p(t) can be expected as the density profile flattens out at long expansion times, becoming approximately constant over the region ∆, so that the integrated density profile p(t) is governed by the normalization factor 1/b(t). One might also wonder whether a non-sudden modulation of the trapping frequency will affect the long time power-law behavior. We show next that as long as the frequency of the trap is permanently switched off after a given time t = t 0 , the same power law scaling sets in. In- which tends to b(t) ∼ t(v 2 0 + 1/b 2 0 ) 1/2 at large expansion times. As a result, only the prefactors of the survival and nonescape probability are affected, and the scaling is still dictated by (20).
Many-particle state reconstruction.-We next analyze the relevance of state reconstruction in the CS model as an example of a many-particle system. Using the Ersak equation (2) [7,29], the following decomposition of survival probability is obtained where the first two terms admit a classical interpretation. In particular, S N,λ (t − τ )S N,λ (τ ) is the probability for the system to survive in the initial state at time t provided that it was in the initial state at time τ . Similarly, M(t, τ ) = |M (t, τ )| 2 accounts for state reconstruction in a classical sense, i.e., it is the probability that the state has decayed at time τ and reconstruct the initial state at time t. The last term in (23) represents the interference between the amplitudes for the two histories just described, i.e., I N, Here, the survival amplitude is to the survival probability. Thus, the state reconstruction governs the long-time decay, except for values of t/τ close to {0, 1}, when all processes remain relevant.
In conclusion, we have characterized the the exact decay of an interacting many-body quantum fluid released from a harmonic trap. Exploiting the self-similarity of the ensuing dynamics, the long-time power-law behavior of the survival probability was shown to be dictated by the strength of the interactions even at arbitrarily large expansion times. The scaling of the power-law exponent is at most quadratic in the particle number. The nonexponential character of the decay can be attributed to the many-particle state reconstruction of the initial state. Our results can be extended to other systems including SU (ν) spin degrees of freedom and fermionic exchange statistics [35]. As an outlook, it is worth exploring higher dimensional systems like the 2D Bose gas, for which selfsimilar dynamics holds [49] up to quantum anomalies [50] and the generalized exclusion parameter is known [51], and systems lacking self-similar dynamics such as the 1D Bose gas [44][45][46][47].
Acknowlegments.-It is a great pleasure to thank | 3,255 | 2015-04-07T00:00:00.000 | [
"Physics"
] |
Impact of the heap shape formation on the local vertical force profile of ensiled granular materials
The free surface, which results from the filling of a container by a dry granular material, takes the shape of a heap. This shape is induced by the particle properties like their size, their friction, their restitution coefficient but also by their kinetic energy induced by the filling conditions. The stabilization of the intergranular contact network inside the heap leads to the formation of a spe-cific local stress field. For five wheat-based powders of same origin but with contrasted properties, beds are formed in a 2D semi-confined cell using a point source. Captures of the bed free surface are realized from pictures taken at different filling states. The local vertical force is measured in the center of the cell at different depths. This work highlights a correlation between the local vertical force profile and the shape of the heap during the filling. Two modes of force distribution can be distinguished and classified according to a parameter of uniformity. These distinctions are linked to the different shapes of the heap, quasi-layering or dune. Our experimental study demonstrates that the history of the powder filling directly impacts on the resulting local force profile.
Introduction
During a container filling by a dry granular material (e.g. storage in a silo, mixer container, packing), the powder flow interacts with the side walls and the bottom. It leads to the formation of a semi-confined free surface in the 'feeding zone' and particle jamming under the free surface [1]. The resultant free surface takes the shape of a heap and the surface raises upward the container with the powder filling. The stabilization of the intergranular force network inside the heap leads to the formation of a local stress field [2,3]. The knowledge of the local stress field is important to evaluate the mechanical stability of storages [4] or the grain re-mobilization under the influence of a shear (mixing, flowing) or a compression (packing, tapping, compaction) stress [5]. This work concerns the experimental identification of relationships between the static mechanical state within an ensiled granular bed and its free surface formation during filling.
Different wheat-based powders of same origin but with contrasted properties have been considered in order to elucidate a potential relation between the structure of the free surface, which results from the local granular arrangements, and the local distribution of the vertical force.
Materials
Five different wheat powders of 1.47 g.cm −3 were selected. The main raw material was durum wheat semolina of industrial quality (Panzani group, France).
The fine fraction of durum wheat semolina was collected by sieving under a metallic sieve of 315 µm mesh. To investigate larger particle size, agglomerated grains of durum wheat semolina, called couscous grains, were selected. Fine size couscous grains and medium size couscous grains came from an industrial production (Zakia, France). The large size couscous grains were collected by sieving the medium size couscous grains over a metallic sieve of 1250 µm mesh.
Particle diameters were characterized using laser granulometry (Mastersizer 2000, Malvern, England). The median diameter, d 50 (i.e. the volume-equivalent 2 diameter for which the number of particles is respectively inferior to 50%), and the size width, d 90 −d 10 (where d 10 and d 90 are the volume-equivalent diameters for which the number of particles is respectively inferior to 10% and 90%), are given in Table 1. We observe a range of median diameters with a factor of five between the smallest and the largest particles and a large size distribution of widths with lower values for sieved powders (i.e. fine semolina and large couscous).
The apparent friction coefficient, µ, of wheat powders has been measured with a FT4 powder rheometer (Freeman Technology Ltd., Worcestershire) and the angle of repose, θ, was deduced from pictures of the heap obtained by pouring 60 g of powder over a plane surface (without wall effect). The measurements have been repeated five times and the mean values are reported in Table 1. The apparent friction coefficient and the angle of repose decrease with the median diameter, but the last parameter allows to discriminate semolina and couscous family. In comparison with couscous, semolinas are characterized by a smaller size and frictional properties more important.
The large variability of the particle surface did not allow us to measure the restitution coefficient. However, we qualitatively observed that the rebound properties are contrasted between both families. Couscous grains have a rebound capacity much more important than the one of semolinas.
Methods
The local weight profile was defined as the vertical contribution of the force network acting above the surface of a probe immersed in the granular bed. This weight was measured using an original device developed by Mandato et al. [6] and
Bed formation
For each selected powder, the bed formation has been studied by analyzing their final compactness and their free surface evolution in 2D configuration. The compactness, Φ, has been calculated from the mass and volume of the powder bed (see Table 2). For couscous, its value is approximatively constant (≈ 0.576) and in comparison with semolina, it is about higher for 30%. Whatever the wheat-powder and the depth, the shape of the free surface of the heap is globally the same. As shown in Fig. 3b, this shape is not an ideal prism because the top and the base of the heap are rounded off. We observe that the shape of the heap depends on the properties of the powder (Fig. 3a) but also on the depth (Fig. 2). In order to compare all these shapes, we have considered two representative criteria of these curves, the deflection, f , and the apparent angle of heap, α, deduced from a Gaussian fit of the free surface data (Fig. 3b). The equation used to represent the shape of the free surface is described with a good agreement by : x is the horizontal coordinate, z is the depth, p(x, z) is the coordinates of the free surface at each depth, f (z) is the deflection of the heap at each depth, and 2ω(z) = ω 1 (z)/ ln(4), where ω 1 (z) is the width of the Gaussian fit at a position equals to f (z)/2. Thanks to this representation, it is also possible to define the apparent angle of the heap, α, corresponding to the value of the slope when the inflection of the free surface is changed [8]. At each depth, this angle is calculated from tan (α(z)) = f (z)exp 0.5 /ω(z). For each granular media, the deflection and the apparent angle of the heap are non-constant during the filling step due to the position of the powder filling which is fixed during time (Fig. 3a). The drop height of the particles decreases with time and slows down their kinetic energy on the surface of the heap. This phenomenon is well described by Grasselli et al. [1].
For each granular media, the deflection fluctuates during the container filling by the granular medium. It can be considered as the sum of its mean value, f , and its fluctuation, δf (z) (f (z) = f + δf (z)). For couscous, the mean value of the deflection, or the mean value of the apparent angle of the heap ( Table 2) and its fluctuation shape (Fig. 4) Table 2). The deflection during pouring strongly fluctuates and probably in a periodic way (Fig. 4). While the heap deflection of the couscous bed increases during pouring, the one of the semolina bed increases then decreases alternatively (Fig. 4). Such a fluctuating behavior is reduced close to the top of the cell because of the progressive decrease of the particle drop height. Each particle family has therefore its own arrangement mode during the bed formation. As the mean deflection is smaller for couscous than semolinas (Table 2), then couscous are spread into a horizontal layer (heap of low amplitude) whereas the semolinas are distributed into sloping heap (as a 'dune'), constituted of oblique layers. We experimentally notice that the largest particles (couscous family) have a rebound ability much more important than the smallest (semolina family). Such effect leads to a heap formation with a deflection which increases monotonically between the bottom and the top of the cell. However, we observe that from a characteristic depth, λ, which is about the width of the cell (about 50 mm), the behavior of both families is almost the same (Fig. 4).
Vertical force profiles in the center of the cell
The both modes of the heap formation (layer mode for couscous or dune mode for semolinas) have an impact on the force distribution, F (x = 0, z), within the granular medium and measured in the center of the cell at different depths for the different selected wheat powders. Fluctuations of the force are not significant. Therefore its mean value, F z , is sufficient to describe the mechanical state in the vertical direction. For each wheat powder, the local vertical force profile (Fig. 5) starts to be linear from the surface, then deviates from it (for semolinas this deviation is positive, i.e. F z becomes higher, and for couscous this deviation is negative, i.e. F z becomes lower) for decreasing until the bottom due to the the capacity of the bottom to redirect the vertical force laterally [9,10,11,12].
Considering the first part from the bed surface, the vertical force profile is well fitted by a linear trend (Fig. 5), where the gradient, S −1 , is comprised between 53.10 −4 N/mm for fine semolina and 97.10 −4 N/mm for fine couscous (Table 3). These values are higher than the gradient of the equivalent hydrostatic state, φρ s gΩ, defined as the weight of the granular column for which the section is equal to the surface of the probe Ω (Table 3). In fact, the gradient S −1 can be associated with a 'pseudo'-hydrostatic weight (its value is approximatively equal to the hydrostatic value increased by 18%) [5] taking into account all the contribution of forces acting on the probe and coming from the different parts of the force network [2,3].
Globally, these vertical force profiles can be compared with Janssen's model [4,10] which describes the normal stress at the bottom of an ensiled granular medium versus the height of the grain bed. This model introduces a characteristic length, the Janssen length λ J = D h /4µK (where D h is the hydraulic diameter of the column, and K is a coefficient which relates the horizontal redirection of the vertical stress), which allows to distinct a linear dependence of the pressure, consistent with a hydrostatic behavior (defined as the total weight of the granular bed which would weigh on the bottom) for heights lower than λ J , and a constant 'saturation' pressure, induced by the walls which absorb the vertical force deflection for higher heights [3,10]. The local vertical profiles of couscous could be then assimilated to a like-Janssen behavior; the layer formation mode of the heap ensuring a hydrostatic mechanic state at each depth far from the bottom.
Local vertical force versus heap formation
To compare the whole data coming from the free surface and the local vertical force, we defined a dimensionless parameter, D, which depends on the depth z: This parameter highlights the correlation between the normalized force (by the 'pseudo'-hydrostatic state) measured at each depth in the center of the cell and the deflection of the heap which has contributed to the establishment of the local mechanical state. As for the deflection, we have written D(z) as follow: close to the bottom, where the interaction of the first poured particles leads to different organizations according to their dissipation properties (restitution coefficient and friction coefficient) [9], the behavior of the powders seems to follow a coherent distribution. Indeed, the dependence of D parameter with the depth indicates that the level of the local vertical force is correlated with the mean shape of the free surface (Fig. 6a). For couscous grains, the particle organization into a flat heap (small f values), which can be assimilated to a layer distribution, is in agreement with the hypothesis of Janssen which deals with the lateral uniformity of the stresses [4]. This behavior, highlighted by a 'pseudo'-Janssen profile, is characterized by a lateral deflection of the force and therefore large values of δD. On the other hand for semolinas the particle | 2,899.4 | 2018-10-01T00:00:00.000 | [
"Physics"
] |
Broadband physical layer cognitive radio with an integrated photonic processor for blind source separation
The expansion of telecommunications incurs increasingly severe crosstalk and interference, and a physical layer cognitive method, called blind source separation (BSS), can effectively address these issues. BSS requires minimal prior knowledge to recover signals from their mixtures, agnostic to the carrier frequency, signal format, and channel conditions. However, previous electronic implementations did not fulfil this versatility due to the inherently narrow bandwidth of radio-frequency (RF) components, the high energy consumption of digital signal processors (DSP), and their shared weaknesses of low scalability. Here, we report a photonic BSS approach that inherits the advantages of optical devices and fully fulfils its “blindness” aspect. Using a microring weight bank integrated on a photonic chip, we demonstrate energy-efficient, wavelength-division multiplexing (WDM) scalable BSS across 19.2 GHz processing bandwidth. Our system also has a high (9-bit) resolution for signal demixing thanks to a recently developed dithering control method, resulting in higher signal-to-interference ratios (SIR) even for ill-conditioned mixtures.
the existing multiplexing schemes. However, this relies on a complicated software-layer radio-signal identification mechanism susceptible to privacy breaches 14,15 . Substantial signal processing and analysis are required to ascertain whether transmissions are attributable to specific users or others. In situations with many users in wide frequency bands, performing these computations in real time may not be possible. The only feasible way to oversee activity and enforce compliance from a regulatory standpoint may be to record spectral data for offline analysis, which introduces a serious risk to content privacy 16,17 . As soon as information is recorded to disk, its security is compromised. Even if the monitoring operator is deemed benign, it may be unknowingly harbouring malware that can access the content of all the spectrum users.
A physical layer cognitive technique called blind source separation (BSS) 6,18 can extract unknown signals (e.g., a signal of interest and an interferer) from their mixtures with minimal assumptions, as shown in Fig. 1b, c. Operating in the physical layer allows the isolation of unwanted transmissions in the analogue domain, reducing the risk of privacy leakage by eliminating the need to record the transmission content digitally 19 . Such a "blindness" feature is also critically demanded by recovering the scientific signals, for which no prior knowledge can be obtained. Another advantage is the agility in recovering sources with arbitrary characteristics. This means discarding substantial information before digitisation and total obliviousness to the frequency, modulation type, and power ratio. This advantage can only be realised when BSS is performed across a wide frequency range. Conventional BSS implementations by electronics are competent in separating narrow-band and low-frequency signals, such as audio signals 20 , but are challenging to achieve a broadband operation due to the limited bandwidth of RF technology. For example, the spectrum of ultrawideband (UWB) signals 21 covers up to 7.5 GHz, and that of Wi-Fi signals has expanded from 2.4 GHz (802.11) to 6 GHz (802.11ax). Having broadband coverage is challenging with a single RF system, as depicted in Fig. 1. Thus, alternative techniques other than conventional electronic processors are required to effectively process RF signals in next generation wireless systems 22 .
By upconverting to frequencies of hundreds of terahertz, photonic signal processors can deal with broadband information [23][24][25] , where GHz signals are regarded as narrowband. As a result, photonic processing can easily satisfy the demanded processing bandwidth requirements of incoming wireless technologies and comes with low energy consumption that does not scale with the signal frequency. A promising on-chip processor is the microring resonator (MRR) weight bank 26,27 , a bank of tunable filters implemented with tiny circular optical waveguides, which provides energy-efficient tuning 28 and scalable parallel processing through wavelength-division-multiplexing (WDM). Such an RF frontend powered by a photonic processor (Fig. 1b) can share the workload of signal processing with a digital signal processing (DSP) backend while enhancing overall performance. One key factor determining BSS performance is the resolution of the weights (the tuning accuracy of MRRs), which was reported up to 7 bits on MRRs 29,30 . We recently developed a dithering control method 31 that improves the tuning accuracy beyond 9 bits.
Here, we report a photonic implementation for BSS based on the dithering-controlled MRR weight bank. We also demonstrate a fully packaged photonic processor with a silicon photonic chip integrated with the MRR driver and control electronics on a single printed circuit board (PCB). We prove this setup can recover a weak transmitted signal in the presence of broadband jamming noise and successfully test it in real-time on a wireless transceiver system. In terms of performance, our setup fully realises the "blindness" agility by achieving a processing bandwidth of up to 19.2 GHz and signalto-interference ratios (SIR) of more than 40 dB in some cases. Besides, the dithering weight control enables a photonic processor with 9-bit accuracy beyond many electronic counterparts, enhancing the BSS with at least one-half reduced residual error than the setup without the dithering control. This work introduces a functional BSS system capable of operating at broad bandwidths. When included in transceiver circuits, it can help cancel interference signals, with potential implications in next-generation wireless cognitive radio to deal with signals across tens of Gigahertz. It also benefits radio astronomy in detecting unknown weak between electronic and photonic BSS implementation. a Frequency allocation for common bands (from 0.5 to 19 GHz) and example processing bandwidths for the two BSS implementations (photonics and electronics). Science service bands comprise the earth explorer satellite service (EESS) and radio astronomy service (RAS). Bands for civil and military applications include global positioning system (GPS), microwave oven, ultra-wideband (UWB), and standard frequency and time signal service (for the first row); broadcastingsatellite service, radar altimeter, Wi-Fi, and aircraft weather radar (for the second row); 5 G cellular networks and military radar (for the third row). b Photonic BSS system diagram. c Electronic BSS system diagram. In electronic BSS, covering the demanded bandwidth requires a complex switching system that consists of multiple electronic BSS setups (quantity of M) corresponding to distinct frequency regimes. Each one requires N ADCs (N is the number of receivers) and a dedicated DSP with N inputs to perform the multiply-accumulate (MAC) operation. Since ADC and DSP require power in scale with the signal frequency, the total power consumption of the electronic BSS approaches is proportional to N × M × f × ðE ADC + E MAC Þ. In contrast, achieving the same coverage requires only one photonic BSS setup consisting of N electrical-to-optical (E/O) converters with only one ADC and a low-end DSP with a single input. This architecture consumes much less power and is proportional to signals that require more than 30 dB signal-to-noise and distortion ratio (SINAD).
Photonic solution for BSS problem
The BSS problem can be formulated as separating an unknown mixture of unknown independent signals. The instantaneous model of generic mixing in a transmission channel is ½r 1 ,r 2 = H½s 1 ,s 2 , where ðs 1 ,s 2 Þ are the source signals, H is a mixing matrix representing the wireless channel, and ðr 1 ,r 2 Þ are the received signals. In the most general case, the source signals and their mixing matrix are unknown to the receiver. The goal of BSS is to estimate the corresponding demixing matrix, H −1 , and apply it to the received signals to arrive at ½s 0 1 ,s 0 2 = H À1 ½r 1 ,r 2 , where ðs 0 1 ,s 0 2 Þ is the estimated recovery of the source signals. In general, minimal prior assumptions are needed about signal characteristics. For example, s 1 and s 2 can occupy the same frequency band, meaning an implementation based on filtering would fail regardless of any receiver-side analysis. It is assumed that the two sources are statistically independent and that all mixing happened linearly. These assumptions are very realistic and are widely used in the radio community 32 .
For a given mixing matrix H, separating each signal component requires the mixtures to be weighted and summed with weights represented by each column of the inverse matrix H −1 . This operation can be implemented with a set of two MRR weight banks acting as an on-chip signal processor. Retaining the signal of interest and eliminating the other one utilises one of the two columns, which requires only one weight bank. As shown in Fig. 2a, c, the MRR weight bank consists of several microring resonators with slightly different radii; each has a Lorentzian-shaped transmission profile (as shown in Fig. 2c) centred at different wavelengths. Each MRR is equipped with a metal heater to allow thermo-optic displacement of the centre wavelength by varying the current applied 33 . The MRR weight bank independently weights the laser amplitudes of different wavelengths. The sum of all optical power can be obtained by a balanced photodetector (BPD) at the output. Utilising this ability of weighted addition, we develop a photonic BSS algorithm, which follows a pipeline consisting of three steps, including principal component analysis (PCA) 34 , whitening, and (bottom left), which also delivers the power for the whole PCB. e Estimation of weighing accuracy. The 2-MRR weight bank in a was tested to tune the weights represented by each grid. The dithering control yielded the red points, and the grey points were obtained without the dithering. f Errors of all the tested weights in d. A 9.0-bit of precision resulted from the dithering control and 6.7-bit for the control without the dithering. The precision is calculated by the standard deviation of the error. Quantitatively, there are 161 among the total 243 tested points inside the 9-bit bound (the solid circle). independent component analysis (ICA) (See details in ref. 35 ). For carrying out these analyses, an iterative algorithm is preferred because of its simplicity in that only one vector needs to be commanded to the MRR weight bank in each step. Essentially, a constrained Nelder-Mead algorithm 36 is carried out that performs an iterative projection-pursuit of the mixtures to search the optimised weighting vectors. The goal is to find the mixture that outputs a weighted addition (Σw i s i ,w i 2 À1,1 ½ ,i 2 ½1,2, Á Á Á N, N is the number of the mixtures) with maximal variance (the second-order statistic) for PCA and the maximal non-Gaussianity (the fourth-order statistic or kurtosis) for ICA.
Hardware implementation
The hardware realisation of this algorithm appears as a control loop (as shown in Fig. 2a). Apart from the photonic chip, also included are a BPD for electrical-to-optical (E/O) conversion (Discovery semiconductor DSC-R405ER), an analogue-to-digital converter (ADC) for signal digitisation (Tektronix DPO73304SX; sampling rate: 625 MS s −1 −100 GS s −1 ), a computer for statistical analysis and actuating weight commands, and a multi-channel current source for MRR tuning (custom-built as shown in Fig. 2d). The sampling rate of the ADC is set to the minimum of 625 MS s −1 for all the tested signals as the BSS method is agnostic to waveform frequencies, and it is switched to the maximum of 100 GS s −1 for recording resulted waveforms with the highest definition. The dithering control 31 implemented here allows tuning the MRRs with less complicated drivers instead of the sourcemeasurement unit (SMU) 29 . In this setup, the MRR driver is directly integrated into the PCB interposer, packaged close to the photonics chip with a much-reduced footprint and cable management 31 . The signal path starts from the Mach-Zehnder Modulator (MZM) and ends at the scope. The highest supported RF frequencies are determined at up to 20 GHz by the BPD and the transmission profile of the MRRs, providing coverage for many commonly used RF bands. Detailed discussion on the bandwidth of the MRR filtering function can be found in Supplementary Fig. 2. It is also worth noting that most of the signal path is in the optical domain, bringing about broadband and flat response and very low latency, which is estimated to be 15 ns by dividing the total waveguide length (3 m) by the speed of light in the waveguide (c 0 =n≈2 × 10 8 m s −1 ).
The photonic chip in this setup has a 4-channel MRR weight bank with resonance frequencies roughly spaced by 200 GHz. The spectra of the four MRRs (at 25 degrees) are shown in Fig. 2c, with the resonance peaks located at 1551.7, 1553.0, 1554.6, and 1555.7 nm. The waveguide of these MRRs is N-doped which can be efficiently thermaltuned from on to off positions with a power consumption of 10 mW 28 . Since this work recovers source signals from two mixtures, we use two MRRs (the leftmost and rightmost ones). The corresponding lasers (Pure Photonics PPCL500) are tuned to be 1551.5 and 1556 nm, then amplified (Pritel FA-23) and combined into a shared waveguide by a WDM multiplexer (Santec MDM-15-8) before coupling into the MRR weight bank through grating couplers.
The implemented dithering control method overcomes the low accuracy incurred by the high sensitivity of the weight bank. As shown in Fig. 2a, the lasers are modulated with either the received mixtures or pre-defined dithering signals. Each time a set of commanded weights are applied, the RF switch (Mini-Circuits RC-2SPDT-A26; DC − 26.5 GHz) passes the dithering signals into the photonic path, which helps adjust the driving currents of the MRRs until the output weights reach the demanded values. Then, the actual mixtures are switched into the weight bank and processed. The weight accuracy is reflected in the error between the target and actual weights. Usually, we quantify the accuracy in bits, which is calculated as log 2 ð2=ðw actual À w target ÞÞ. Figure 2e, f illustrates the weighting accuracy, showing the resulting weights (red dots) of the two MRRs being examined at the tested values represented by each grid point. The grey dots correspond to the same targeted weight without dithering control. An improvement of over 2 bits (from 6.7 to 9 bits) is observed, enabling the MRR weight bank to have competitive performance with its electronic counterparts.
While our setup included the demonstration of the wireless transceiver system, we also had a versatile control setup that provided flexibility and accuracy in controlling the carrier frequencies and the mixing matrix. In the control setup, the signal mixtures were generated by a high-speed multi-channel arbitrary waveform generator (Keysight N8196A; 92 GS s −1 ) and sent to each MZM directly. The generation of the two baseband signals, the up-conversion, and the signal mixing, are all performed via software tools (Python). The mixed signals are then transmitted to the photonic system.
Theoretical impact of weighting accuracy BSS must be able to deal with mixtures that are difficult to separate. Denoting the jth signal component in the ith mixture as y ij ðtÞ, the signal-to-interference ratio (SIR) 18 is defined as which is the ratio of the signal power (||y ij 0 t ð Þ 2 ||) to the rest interference power (Σ j≠j 0 || y ij t ð Þ 2 ||) of the ith mixture. Given a problem with N mixtures containing N sources to be separated, an often-used merit is the overall SIR, which is the average of the SIR of every mixture (1=N × Σ i SIR i , i = 1,2, Á Á Á N). The SIR captures the accuracy of the system, and it can also be stated as a function of carrier frequency, baseband bandwidth, or other metrics. A higher SIR means better suppression of the interference signals. This metric of SIR can be extended to any mixing H through the ill-condition number, κðHÞ 37 , defined as Eq. (2).
This describes the demixing difficulty calculated by the mixing matrix H. Mixtures with a small ill-condition number are easier to solve. Conversely, problems with a sizeable ill-condition number are challenging and prone to smaller SIR. An ill-conditioned BSS problem typically requires the weighting to represent the inverse matrix accurately.
To prove this, consider a simple case where the mixing matrix is symmetric, such that H = ½ 1,1 À a ½ ,½1 À a,a, a 2 ½0:5, 1, which is often the case when two receiver antennas and the two transmitter sources are symmetrically positioned and have identical power. The inverse of matrix H is given in Eq. (3).
And this can be further normalised to the form of Eq. (4), multiplying ð2a À 1Þ=a.
To introduce the weighting error, Eq. (5) describes the actual matrix of mixtures that is input to the photonic BSS, where d denotes the weight error caused by the inaccurate MRR control.
Demonstration on wireless transceivers
Based on the setup described above, we firstly demonstrated our proposed BSS photonic processor in a wireless transceiver system, which emulates the case where a communication link is deteriorated by nearby RF interference. As shown in Fig. 3i, two antennas (Southwest Antennas 1009-002; 1.7-2.5 GHz) transmit the signal of interest and a broadband jammer mixed over the air with a transmission distance of 0.75 m. Then, the mixtures are received by a 2 × 2 MIMO antenna (Southwest Antennas 1055-368; 1.7-2.5 GHz), with two outputs corresponding to the polarisation of 45-degree slant left and 45-degree slant right. The signals emitted from the transceivers have peak-to-peak voltages of 1 V and received signals are 150 and 140 mV peak-to-peak. The transmitted signal carries a repeating sequence of 200 random bits at a baud rate of 50 MHz with a binary phase-shift keying (BPSK) modulation format and a carrier frequency of 2.1 GHz. The interference is an instantaneous broadband jamming signal generated by adding 10,000 single tones with random phases and has a spectrum from 1.7 to 2.5 GHz, covering the entire bandwidth of the antennas. Thus, extracting the signal of interest is largely ineffective via spectral filtering even if the carrier frequency is known. If doable, the signal-to-noise ratio remained the same with the received mixtures at best. In contrast, photonic BSS can recover the communication link without prior assumptions and suppress the interference noise. Figure 3a-h illustrates the spectrum and the constellation diagram before and after the BSS process. The signal-to-noise ratio has almost a 15 dB improvement (20.8-35.5 dB), accompanied by a twofold increase of the Q value (5.84-12.6) from the constellation. This result demonstrates the effective suppression of nearby interference with high power and broadband coverage, maintaining the transmission quality of the wireless communication link.
Broadband capability
Next, we examined the proposed photonic BSS system under different scenarios, with different signal mixtures enabled by programming the arbitrary waveform generator (AWG). The original two signals are repeating patterns of 16 bits, and are in format of binary phase-shift keying (BPSK; bit pattern = [0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1]) and on-off keying (OOK; bit pattern = [0,0,1,0,0,0,1,0,0,1,0,0,0,1,0,0]), respectively. This configuration provided two short data periods that can be easily illustrated and also contributed all the potential bits combinations in their mixtures for a complete examination. Note that our BSS algorithm is based on the kurtosis analysis, independent of the modulation format (see Methods and Supplementary Fig. 1 for details). For consistency, all the experiments shown in the main content were performed on signals in the same digital modulation formats (BPSK or OOK). For reference, we performed an additional experiment showing similar BSS performance on analogue modulated signals (pulsed ultrawideband signal) in Supplementary Fig. 3. Based on this setup, we firstly tested the broadband capability by performing BSS on mixtures of signals from 20 MHz to 19.2 GHz by varying the carrier frequencies, as shown in Fig. 4. The baseband frequencies were also adjusted according to the carrier frequencies, which were 4 MHz for <80 MHz, 16 MHz for 80-480 MHz, 160 MHz for 0.48-1 GHz, 400 MHz for 1-3 GHz, 800 MHz for 3-6 GHz, and 1600 MHz for f carrier ≥ 6:4 GHz. In the spectrum, the transmitted signals are centred close to the carrier frequency with a width (instantaneous bandwidth) of approximately double the baseband. Annotating the two mixtures with M 1 and M 2 and the two sources with S 1 and S 2 , the mixing can be expressed as M 1 = 0:8 × S 1 + 0:2 × S 2 and M 2 = 0:2 × S 1 + 0:8 × S 2 , denoting an illcondition number of 2.26 (according to Eq. (2)). The two mixtures are identical in power, with both the peak-to-peak voltages set to 1 V.
As shown in Fig. 4d, the 27 tested carrier frequencies from 20 MHz to 19.2 GHz show SIR >30 dB. This result proves that our demonstrated photonic BSS system can process RF waveform with a carrier frequency lower than 19.2 GHz and a baseband of up to 1.6 GHz (an instantaneous band of 3.2 GHz). Compared with previous photonic demonstration 35 that dealt with problems of a similar ill-condition number, we obtain almost 55 times broader processing bandwidth (19.2 GHz versus previously 350 MHz centred at 900 MHz) while maintaining a clean signal separation across the entire band (SIR > 30 vs previously SIR≈14). This improvement in error suppression confirms the benefit of the improved dithering control method. Also, based on the Federal Communications Commission (FCC) frequency allocation chart (partly shown in Fig. 1a), this broadband coverage by a single silicon photonic chip translates into the agility of processing multiple commonly used bands. Examples of included bands are cellular (620 MHz-6.425 GHz), Wi-Fi (2.4 GHz, 5-7.125 GHz), military radar (extensive spectral usage above 8.5 GHz), and those for earth explorer satellite and radio astronomy (sparsely spread from 1.4-17.3 GHz). This wide processing bandwidth can also provide full coverage to some challenging bands, such as the ultra-wideband (UWB) services. Since signals remain narrowband at higher frequencies for photonic devices and the state-of-the-art BPD can be 100 GHz or more 38 , this system can easily expand the coverage to other important spectrums like millimetre-wave with just higher speed photodetectors. Due to the shape of the spectral profile (as shown in Fig. 2c), MRR weight banks apply uneven filtering on signals with large instantaneous bands, which could degrade the BSS performance to some extent. Additional discussion and experiments are included in Supplementary Information, where we show that this BSS system maintains decent performance (SIR ≥ 33 dB) on two ultra-wideband (UWB) signals with a 7.5 GHz wide instantaneous spectral coverage (3.1-10.6 GHz). The practical device footprint on-chip is 0.13 mm × 0.42 mm, including four MRRs and the waveguide routing, which slowly scales up linearly with the number of sources.
Test on ill-conditioned mixing
Last, we investigated the performance of the proposed photonic BSS system in solving problems with different ill-condition numbers, which is to justify the significance of the improved weighting accuracy. Here, we fixed the carrier frequency at 1 GHz. The mixing matrix is of a symmetric form H = ½½a, 1 À a,½1 À a,a, where a is varied from 0.1 to 0.45, resulting in ill-condition numbers ranging from 2.05 to 10.1. Figure 5 shows the SIR of the mixtures obtained from the same setup but with the dithering control (compared to the previous control method without the dithering). Even in the presence of similar experimental noise levels, lower SIR is always obtained when not using dithering control. Conversely, the dithering controlled setup maintained constantly high SIR such that the influence of ill-conditioning was less distinguishable. The average SIR is above 35 dB for all tested cases, which generally shows around 20 dB improvement compared to the previous control method (the orange curve in Fig. 5), as expected in the analysis above. This improvement confirms the significance of accurate weight control for MRR-based applications, such as the BSS in this paper.
Discussion
In summary, we explored a physical layer cognitive radio solution based on BSS performed on a dithering-controlled silicon MRR weight bank. This solution is a complete RF frontend with a photonic signal processor that can do intelligent learning through the fully programmable and integrated electronic-photonic system. This setup has a large processing bandwidth of up to 19.2 GHz that can fully demonstrate the capability of the BSS technique. In addition, the high SIR observed for all the frequencies and ill-conditioned problems, together with an example of a wireless transceiver system, confirms the benefits of real-world applications brought by the improved MRR control method. Given the increasing carrier frequencies in the next generation of wireless communications technologies, the superior performance of this photonic approach illustrates the readiness to replace pure-electrical RF implementations, effectively addressing the incoming challenges, including bandwidth limitations, energy efficiency, and latency.
With the availability of higher speed modulators 39 and photodetectors 38 on the silicon platform, as well as the maturity of packaging, including photonic wire bonding 40 , laser integration 41 , and monolithic cointegration of complementary metal-oxide-semiconductor (CMOS) and silicon photonics 42 , this proposed photonic BSS can have future implementations with higher integration and broader bandwidth. With these, we envision a standalone BSS device of a small form factor that is field-deployable for various applications, including interference cancellation in autonomous vehicles and aviation.
Photonic BSS algorithm
The photonic BSS algorithm follows a pipeline of two steps, including principal component analysis (PCA) and independent component analysis (ICA). Firstly, PCA is to find the principal components (PC) from the received mixtures and construct a whitening matrix. In PCA, we use the MRR weight bank to perform a weighted addition of the received mixtures. Regarding all the applied weights as a vector, the PCA is to find the vector that the corresponding output weighted addition has the maximal possible variance. Finding the PC vectors is through the constrained Nelder-Mead (NM) method 43 . Starting from a random vector, the NM method iteratively tries new vectors and updates the old one if the new vector results in a larger variance. The new vectors are generated by manoeuvring the old ones in several different ways, including reflection, expansion, shrinking, etc. Usually, the search converges within 20 steps and takes no more than 5 minutes. Denoting this founded PC as the first PC, the vector perpendicular to the first PC should associate with the minimum variance and is called the second PC. So, finding the first one can easily derive the other by a π=2 shift. With this, a whitening matrix can be constructed using the two PCs and their associated variances. Secondly, the ICA step follows a similar search procedure and finds the vector (denoted as the first independent component (IC)) corresponding to the maximal kurtosis. Based on the central limit theory, the mixture of two uncorrelated signals is more Gaussian-like than any original signals. The kurtosis is a metric describing the non-Gaussianity. Thus, the vector obtained by ICA is exactly one of the correct demixing weights that can recover the one original signal from the mixtures. Finally, using the first IC and the whitening matrix obtained in the PCA step, the second IC can be directly calculated and used to recover the second original signal. This BSS pipeline does not require explicit waveform digitisation but only the second-order and fourth-order statistics (variance and kurtosis) of weighted addition output, unlike conventional BSS solutions such as FastICA. This feature permits a low-cost ADC and DSP working at the sub-Nyquist sampling regime, generally more applicable to broadband BSS. Detailed pseudo codes regarding the algorithm mentioned above can be found in our previous works 34,44 .
Fully packaged photonic processor
We fully packaged the silicon photonic chip with its driver and control circuitry in the same interposer PCB in our experimental setup, as shown in Fig. 2d. This integration benefits a simplified lab setup, lower power consumption (<100 mW), and neat connectivity that a single USB cable handles both the digital interface and the power delivery. In terms of power management, the 5 V input (from the USB cable) generates isolated and low-noise power rails by several dedicated power management chips (Analog Devices LT1533CS and LT3042). These rails power up the precision current sources (Analog Devices LTC2662-16; 16-bit), which drive the MRRs on the photonic chip. The digital interface between the current source and the controller (Sparkfun Arduino Pro Micro) is built on the serial peripheral interface (SPI) protocol and isolated by a dedicated chip (Analog device ADUM4151). This fully isolated MRR driving circuity helps with noise suppression. The host computer commands the weights through serial communication via the USB cable. The onboard controller phrases the received command and talks to the current sources to adjust the current of each microring.
Device fabrication
The silicon photonic chip was fabricated on a silicon-on-insulator wafer with a silicon thickness of 220 nm and a buried oxide thickness of 2 µm. The waveguide is 500 nm wide. The weight bank consists of four MRRs (radius around r = 22 μm) coupled with two bus waveguides in an add/drop configuration, and two among the four were used in the experiments. A slight difference (Δr = 0:32 μm) was introduced in the ring radii to avoid resonance collision. The gap between the ring and bus waveguide is 200 nm, yielding a Q factor of about 6000. Circular metal heaters were built on top of each MRR for thermally weight tuning. Metal vias and traces were deposited to connect the heater contacts of the MRR weight bank to electrical metal pads.
Data availability
The source data generated in this study have been deposited in the Figshare database under accession code https://doi.org/10.6084/m9. figshare.21670880. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/. | 7,056 | 2022-05-07T00:00:00.000 | [
"Computer Science"
] |
"Chiral'' and"Non-chiral'' 3d Seiberg duality
We propose a Seiberg duality for a 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with $F$ spinor matters. For $F \ge 6$, the theory allows a magnetic dual description with an $SU(F-4)$ gauge group. The matter content on the magnetic side is ``chiral'' and the duality connects ``chiral'' and ``non-chiral'' 3d gauge theories. As a corollary, we can construct a Seiberg duality for a 3d $\mathcal{N}=2$ $G_2$ gauge theory with fundamental matters.
Introduction
Supersymmetric gauge theories exhibit various low-energy phases depending on the number of dynamical quarks [1]. For small flavors, strongly-coupled gauge dynamics typically breaks supersymmetry and confines the quarks into mesons or baryons. For more flavors, supersymmetric theories flow to a non-abelian Coulomb phase and there are two equivalent ways of describing the low-energy dynamics [2], which is called "Seiberg duality." After the first Seiberg duality was proposed in a 4d N = 1 supersymmetric SU(N) gauge theory with fundamental flavors, this was immediately generalized to various gauge groups and more complicated matters [3][4][5][6][7]. The similar dualities, which are sometimes called Seiberg-like dualities, were also constructed in three-dimensional spacetime.
Although a lot of 3d Seiberg dualities have been proposed [8][9][10][11][12][13][14][15], its variety of the duality is still limited compared to the 4d dualities. This is because in 3d, there are additional flat directions (moduli spaces) called Coulomb branch from the vector superfield and because the lack of the understanding of its quantum structure makes the construction of the 3d dualities difficult compared to the 4d examples. In this paper, we propose the 3d Seiberg duality for the 3d N = 2 Spin(7) gauge theory with F spinor matters. The corresponding 4d duality was proposed in [16] and generalized to the Spin(N) cases [17][18][19][20][21][22][23][24][25]. We here claim that a similar duality holds with a small modification of the superpotential. As a by-product of the Spin(7) duality, we can also obtain the duality for the 3d N = 2 G 2 gauge theory with F fundamental matters. These dualities connect the "chiral" and "non-chiral" gauge theories from a four-dimensional point of view.
The rest of this paper is organized as follows. In Section 2, we study the low-energy dynamics of the 3d N = 2 Spin(7) gauge theory with spinor matters, which becomes an electric description of the proposed duality. This section is mostly a review of [26]. In Section 3, we propose a chiral magnetic description dual to the theory in Section 2 and give various tests of the proposed "chiral"-"non-chiral" duality. In Section 4, we swap the roles of the electric and magnetic theories. We take the "chiral" theory as an electric description and propose a Kutasov-type duality. In Section 5, we propose the Seiberg duality for the 3d N = 2 G 2 gauge theory with fundamental matters. This duality will be derived from the Spin(7) duality via a certain deformation.
2 3d N = 2 Spin(7) gauge theory We start with the analysis of the Coulomb branch in the 3d N = 2 Spin(7) gauge theory with F v vector matters and F s spinor matters [26,27]. The Coulomb branch is a flat direction spanned by an adjoint scalar in the vector superfield. When the adjoint scalar obtains a non-zero expectation value, the gauge group is spontaneously broken to some subgroup including U(1) factors. Owing to these compact U(1) subgroups, the theory admits monopole configurations which generate a non-perturbative superpotential [28][29][30]. This drastically changes the classical picture of the Coulomb branch and a few directions of the Coulomb branch can be quantum-mechanically stable. The quantum Coulomb branch for the Spin(N) theory was studied in [13,26,27,31].
The first Coulomb branch Y was studied in [13,31] for describing the Coulomb branch of the 3d N = 2 SO(N) or O(N) gauge theory with vector matters. When Y obtains a non-zero vev, the gauge group is broken into 1 so(7) → so(5) × u(1) (2.1) When the Spin(7) gauge theory includes vector matters, the low-energy effective theory along Y becomes the Spin(5) gauge theory with massless vectors. The vacuum of this lowenergy theory is stable and supersymmetric due to these massless dynamical matters [31].
On the other hand, if we consider the Spin(7) gauge theory only with spinor matters, the low-energy theory along Y includes the 3d N = 2 Spin(5) theory without a dynamical quark, which leads to an unstable vacuum and the supersymmetry is lost due to the Affleck-Harvey-Witten superpotential [28]. As a result, we need to introduce this Coulomb branch operator for the Spin(7) gauge theory with vector matters. In what follows, we will only study the 3d N = 2 Spin(7) gauge theory with spinor matters and Y is not necessary. For the Spin(7) gauge theory with spinor matters, there is an additional Coulomb branch [26,27]. The second Coulomb branch denoted by Z corresponds to the following gauge symmetry breaking where the Coulomb branch Z is defined from the U(1) subgroup by dualizing the U(1) vector superfield into a chiral superfied. Since the components charged under the U(1) subgroup have non-zero masses, the vector representation reduces to (3, 1) 0 . When we consider the Spin(7) gauge theory only with vector matters, the low-energy theory along this direction includes a 3d N = 2 pure SU(2) gauge theory whose vacuum is runaway and unstable [28]. Therefore, the Spin(7) gauge theory only with vector matters cannot have this flat direction. 1 For branching rules in this paper, see for example [32,33].
For the theory with spinor matters, there are massless components (2, 2) 0 and the low-energy SO(3) × SU(2) × U(1) gauge theory can have a stable and supersymmetric vacuum. In this paper, we will discuss the Seiberg duality of the 3d N = 2 Spin(7) gauge theory with F spinor matters. Therefore, we only consider the single Coulomb branch Z. Table 1 summarizes the numbers of the fermion zero-modes for the monopoles associated with Y and Z.
The electric theory is a 3d N = 2 Spin(7) gauge theory with F spinor matters [26]. The Higgs branch is described by the meson operator M := QQ and the baryon operator B := Q 4 . The Coulomb branch is described by Z. The quantum numbers of the elementary fields and moduli coordinates are summarized in Table 2.
We briefly summarize the low-energy dynamics for the cases with small flavors F ≤ 5 [26]: For F = 5, the theory is in an s-confinement phase. The low-energy dynamics is described by the gauge-invariant chiral superfields M, B and Z with an effective superpotential At the origin of the moduli space, the confinement phase without symmetry breaking is realized, which is called s-confinement. For F = 4, the theory is again in a confining phase described by M, B and Z with a single quantum constraint where λ is a Lagrange multiplier. One should notice that the origin of the moduli space is eliminated by this constraint. Therefore, the confinement for F = 4 necessarily induces some symmetry breaking. For F ≤ 3, the moduli space is described by M and Z. We can write down an effective superpotential and there is no stable supersymmetric vacuum for F ≤ 3. For F ≥ 6, we can anticipate that the theory is in a non-abelian Coulomb phase and that there is a magnetic dual description. This expectation is plausible as follows: The 3d N = 2 Spin(7) gauge theory with F spinors flows to the 3d N = 2 SU(3) gauge theory with F − 2 fundamental flavors via a deformation with rank M = 2. The low-energy SU(3) gauge theory exhibits a non-abelian Coulomb phase for F ≥ 6 [12,29] and there are additional massless degrees of freedom at the origin of the moduli space. Therefore, it is natural to think that the Spin(7) theory with F ≥ 6 also exhibits an interacting non-abelian Coulomb phase. In the next section, we will propose a magnetic description dual to Table 2 by imitating the 4d construction of the Spin (7) duality [16].
The SU (F − 4) magnetic dual
Now, we propose a magnetic description dual to Table 2, which is very similar to the corresponding 4d duality proposed by Pouliot [16]. The magnetic side becomes a 3d N = 2 SU(F − 4) gauge theory with F anti-fundamental mattersq, a symmetric tensor s and a meson singlet M. The meson field M is identified with the electric meson QQ. Table 3 summarizes the quantum numbers of these elementary fields.
Let us study the Coulomb branch of the magnetic theory. When the bare Coulomb branch operator Y bare SU (F −6) obtains a non-zero expectation value, the gauge group is spontaneously broken to The Coulomb branch is associated with the U(1) 1 subgroup and its coordinate Y bare SU (F −6) is constructed from the U(1) 1 vector superfield. Along this breaking, the components charged under the U(1) 1 symmetry are massive and integrated out. Since the matter content is "chiral," the fermion one-loop diagrams with these massive components induce the mixed Chern-Simons term between the U(1) 1 and U(1) 2 groups. This results in a non-zero U(1) 2 charge of Y bare SU (F −6) [34]. The quantum numbers of the bare Coulomb branch Y bare SU (F −6) is listed in Table 3. In order to parametrize the Coulomb branch, we need to define a so-called dressed monopole operator where the color indices of s F −6 are contracted with an epsilon tensor of the SU(F − 6) subgroup. The U(1) 2 charge of the bare operator is correctly canceled as it should be.
The magnetic theory has a tree-level superpotential which is consistent with all the global symmetries in Table 3. Several comments about this superpotential are in order: As opposed to the 4d case, we need not introduce a term proportional to det s into the superpotential (3.5). The magnetic Coulomb branch Y dressed is excluded from the chiral ring elements via the superpotential (3.5). This monopole superpotential is absent in the corresponding 4d duality but reminiscent of the 4d superpotential since the 4d theory includes the term proportional to det From this point of view, the superpotential (3.5) is similar to the 4d one. Notice that the assignment of the global U(1) charge on the magnetic elementary fields are completely fixed by requiring that the baryon matching B ∼q F −4 and the consistency of the first term Msqq in the superpotential (3.5). The availability of the monopole potential and the other operator matching Z ∼ det s give us a non-trivial consistency check of the proposed duality.
First, we consider a complex mass deformation. This is completely the same as the 4d argument [16]. Let us introduce a complex mass to the last flavor of the spinor matters W = mQ F Q F = mM F F . By taking a low-energy limit at the origin of the moduli space, the electric theory flows to the Spin(7) theory with F − 1 spinor matters. On the magnetic side, the deformation W = mM F F corresponds to the higgsing sq FqF = −m which breaks the gauge group into SU(F − 5). At the low-energy limit, we obtain the SU(F − 5) gauge theory with F − 1 anti-fundamental matters and a symmetric tensor. In this way, the mass deformation preserves the duality with reduction F → F − 1.
Let us further test the validity of our duality. For F = 5, the electric Spin (7) gauge theory exhibits the s-confinement phase (2.9). On the magnetic side, the gauge group is vanishing. The theory is described by the gauge-singlet chiral superfieldsq, s and M which are identified with the electric moduli coordinates B, Z and M, respectively. The magnetic superpotential becomes which reproduces a part of the s-confinement superpotential (2.9). We expect that the missing term Z det M is non-perturbatively generated, which is consistent with the global symmetries in Table 3. Finally, we can compute the superconformal indices [35][36][37][38] from the electric and magnetic descriptions and find a beautiful agreement. We here focus on the case with F = 6 where the magnetic side becomes the 3d N = 2 SU(2) gauge theory with six doublets and an adjoint matter. The magnetic Coulomb branch need not be dressed in the SU(2) case. We computed the superconformal indices from both the electric and magnetic sides and the results are given by
7)
where we set the R-charge of Q to be r Q = 1 4 for simplicity. The fugacity u is associated with the global U(1) symmetry and x counts the weight plus the third component of spin. The second term 21u 2 x 1/2 corresponds to the meson operator M := QQ. The baryon operator B := Q 4 is represented as 15u 4 x and the remaining 231u 4 x is M 2 . The higher order terms are symmetric products of these operators and fermion contributions. We checked the agreement of the electric and magnetic indices up to O(x 3 ). We also computed the superconformal indices for the case with F = 7 and observed the agreement up to O(x 3 ).
Kutasov-type duality
In this section, we regard the SU(N) gauge theory with a symmetric tensor as an electric description and propose a Kutasov-type 2 duality according to Pouliot's argument [16]. The electric side becomes a 3d N = 2 SU(N) gauge theory with N + 4 fundamental matters Q and a symmetric-bar tensorS, where we shifted F → N + 4 for simplicity. The electric theory includes the following superpotential where Y dressed is a dressed Coulomb branch operator and its precise form will be defined below. Notice that Pouliot's 4d duality [16] has a different superpotential W 4d ele = detS whereas the 3d case includes a monopole superpotential (4.1). Since there is no superpotential for the Higgs branch coordinates, the Higgs branch operators are not truncated at all and described by where the Coulomb branch corresponds to the vector superfield of the unbroken U(1) 1 subgroup. Along this flat direction, the mixed Chern-Simons term between the U(1) 1 and U(1) 2 symmetries is generated. This turns on a non-trivial
) and the dressed operator is defined by
where the color indices ofS N −2 are contracted by an epsilon tensor of the unbroken SU(N −2) subgroup. This dressed operator is eliminated from the chiral ring elements due to the superpotential (4.1). Table 4 summarizes the quantum numbers of the elementary fields and moduli coordinates. One should notice that the U(1) charge assignment is completely fixed by the monopole superpotential (4.1).
The magnetic side becomes a 3d N = 2 Spin(7) gauge theory with N + 4 spinor matters q and a meson singlet M which is identified with the electric mesonSQQ. The magnetic theory includes a tree-level superpotential W mag = Mqq.
(4.7) Table 5 summarizes the quantum numbers of the elementary fields in the magnetic Spin (7) gauge theory. Notice that the assignment of the global U(1) charge in Table 5 is fixed by the meson matching M ∼SQQ and the superpotential (4.7). Therefore, the matching of the other operators becomes a non-trivial test of the duality as we will see below. Table 5: The Spin(7) magnetic dual description of Table 4 Spin (7) Due to the superpotential (4.7), the magnetic meson qq is eliminated from the moduli space of the magnetic theory. The matching of the gauge invariant operators are easily obtained by comparing Table 4 with Table 5: Notice that in the corresponding 4d duality [16], the composite operator detS was excluded due to a tree-level superpotential W 4d ele = detS while it is here mapped to the Coulomb branch operator Z.
Let us compare the flat directions of the electric and magnetic theories. In the 4d case where a different superpotential W 4d ele = detS is introduced, the F -flatness condition imposes rank M = QQS ≤ N − 2. On the other hand, in 3d,S is not constrained by the superpotential (4.1). Therefore,S (and M := QQS as well) can have non-zero expectation values such that rank M ≤ N. We can see that the meson singlet M on the magnetic side is in the same situation: If the singlet M had a non-zero vev with rank M ≥ N + 1, the magnetic theory flows to a 3d N = 2 Spin(7) theory with less than four spinor matters. As we explained in Section 2, such a theory exhibits a runaway superpotential and loses all the supersymmetric vacua. Therefore, the rank of the meson singlet on the magnetic side should be less than or equal to N. In this way, the electric and magnetic theories have the same mesonic flat directions parametrized by M =SQQ.
As a further consistency check, let us consider a trivial case with N = 1, where SU(N) is vanishing. The electric side becomes a non-gauge theory with two gauge-singlet chiral superfields Q andS. These are free fields. When N = 1, the magnetic side becomes a 3d N = 2 Spin(7) gauge theory with five spinors. As explained in Section 2, this magnetic theory is s-confined and described by the effective superpotential where we defined a magnetic meson N := qq. The two mesonic fields M and N are massive. By integrating out these massive fields, we are left with the two free fields B and Z which are identified with Q andS, respectively.
3d G 2 Seiberg duality
In this section, we propose a Seiberg duality for the 3d N = 2 G 2 gauge theory with F fundamental matters [39]. The dimension of the fundamental representation in G 2 is 7. This duality can be derived from the Spin(7) duality with F +1 spinor matters, which was studied in Section 2 and Section 3, by introducing a rank-one vev to M. On the electric side, the vev for the last flavor M F +1,F +1 = 0 breaks the gauge group into G 2 and one spinor is eaten via the Higgs mechanism. On the magnetic side (shifting F → F + 1), the gauge group is SU (F −3) and not higgsed. The vev for M F +1,F +1 = 0 decomposes Msqq → Msqq + sq 0q0 , where we absorbed the vev of M F +1,F +1 intoq 0 's. In this way, we can derive the 3d duality between the G 2 and SU(F − 3) gauge theories. In what follows, we will investigate this duality in further detail. The electric theory is a 3d N = 2 G 2 gauge theory with F fundamental matters. The supersymmetry-breaking and confinement phases for F ≤ 4 were investigated in [39]. Here, we focus on F ≥ 5. The global symmetry is SU(F )×U(1)×U(1) R where the U(1) symmetry counts the number of quarks. Notice that there is no chiral anomaly in 3d and that the 3d parity anomaly impose no constraint on the matter content. There is no tree-level superpotential on the electric side. The quantum numbers of the elementary fields are summarized in Table 6.
The Higgs branch, which is identical to the 4d one, is described by the following gaugeinvariant composites [40][41][42] M := QQ, B := Q 3 , F := Q 4 , where the color indices are contracted by δ a 1 a 2 , f a 1 a 2 a 3 and ǫ a 1 ···a 7 f a 5 a 6 a 7 (a i = 1, · · · , 7). Next, we consider the Coulomb branch from the G 2 vector superfield. The G 2 Coulomb branch denoted by Z SU (2) was studied in [39,43]. When Z SU (2) obtains a non-zero expectation value, the gauge group is spontaneously broken to where 14 is an adjoint representation. Since the fundamental matter reduces to a massless SU(2) triplet along this flat direction, the low-energy SU(2) gauge theory can have a stable supersymmetric vacuum [28][29][30]. Therefore, the Coulomb branch labeled by Z SU (2) is quantum-mechanically allowed. For other Coulomb branch directions, there is no massless dynamical quark from the fundamental representation and hence the low-energy SU(2) gauge theory makes its vacuum unstable [28]. As a result, the quantum Coulomb branch is described by a single coordinate Z SU (2) . Table 6: 3d N = 2 G 2 gauge theory with F fundamental matters The magnetic description is given by the 3d N = 2 SU(F − 3) gauge theory with F + 1 anti-fundamental matters, a symmetric tensor s and a meson singlet M. The meson field M is identified with the electric meson QQ. The anti-fundamental matters are decomposed into F anti-quarksq and a single oneq 0 as we explained before. The magnetic theory includes a tree-level superpotential which distinguishesq 0 fromq. The last term Y dressed is a dressed Coulomb branch operator of the magnetic theory, which will be defined below. The quantum numbers of the magnetic elementary fields are summarized in Table 7. The global U(1) charge assignment on the magnetic side is completely determined only by requiring the baryon matching B := Q 3 ∼ q F −3 and the availability of Msqq + sq 0q0 in the superpotential. Therefore, it is a non-trivial consistency check that we can have the superpotential W = Y dressed consistent with the symmetries in Table 7. As we will see below, it is the case. The analysis of the Coulomb branch is the same as the previous sections. The bare Coulomb branch operator denoted by Y bare SU (F −5) corresponds to the gauge symmetry breaking where Y bare SU (F −5) is constructed from the U(1) 1 vector superfield. Since the bare operator has a non-zero U(1) 2 charge proportional to the effective mixed Chern-Simons term between U(1) 1 and U(1) 2 [34], we need to define the dressed monopole operator where the color indices of s F −5 are contracted with an epsilon tensor of the SU(F − 5) gauge group. Note that the global U(1) charge is correctly canceled in this combination as in Table 7. Table 6 SU The matching of the chiral ring elements under the G 2 duality is transparent from Table 6 and Table 7: Notice that the matching of the quartic baryon F := Q 4 ∼q F −4q 0 and the Coulomb branch Z SU (2) ∼ det s is non-trivial and serves as a consistency check of our duality. The magnetic superpotential lifts unnecessary flat directions sqq, sq 0q0 and Y dressed .
We can verify further consistencies of the G 2 duality. For F = 4, the electric theory is in an s-confinement phase [39]. The confined degrees of freedom are M, B, F and Z SU (2) . The effective superpotential for F = 4 becomes which governs the low-energy dynamics. On the magnetic side with F = 4, the gauge group SU(F − 3) vanishes. Thus, the theory is described by the gauge-invariant chiral superfields q,q 0 , s and M. The magnetic superpotential becomes where we used the following operator identification B ∼q, F ∼q 0 , Z SU (2) ∼ s, (5.12) which is consistent with the global symmetries in Table 6 and Table 7. In this way, we can reproduce a part of the electric superpotential (5.10) except for Z SU (2) det M. We expect that this missing term is dynamically generated on the magnetic side since Z SU (2) det M is consistent with the magnetic global symmetries. As a final consistency check, we compute the superconformal indices [35][36][37][38] by using the electric and magnetic descriptions. We here focus on the case with F = 5. We find that the two descriptions give us an identical result: 13) where t is a fugacity for the global U(1) symmetry. The R-charge of Q is set to be r Q = 1 3 for simplicity. The second term 1 t 10 + 15t 2 x 2/3 consists of the meson M and the Coulomb branch operator Z SU (2) . The third term 10t 3 x corresponds to the cubic baryon B. The quartic baryon F is represented as 5t 4 x 4/3 . The higher order terms are fermion contributions and symmetric products of the bosonic operators. We verified the agreement of the electric and magnetic indices up to O(x 3 ).
Summary and Discussion
In this paper, we proposed the Seiberg duality for the 3d N = 2 Spin(7) gauge theory with F spinor matters. The dual description is given by the 3d N = 2 SU(F − 4) gauge theory with F anti-fundamental matters, a symmetric tensor and a meson singlet. Since the matter content on the magnetic side is "chiral" in a four-dimensional sense, this duality equates "chiral" and "non-chiral" 3d gauge theories. In Section 4, we switched the roles of the electric and magnetic theories and proposed the 3d Kutasov-type duality for the SU(N) gauge theory with a symmetric tensor and fundamental matters with a (dressed) monopole superpotential. As a by-product of the Spin(7) duality, we found the Seiberg duality for the 3d N = 2 G 2 gauge theory with fundamental matters. These dualities are very similar to the corresponding 4d dualities discovered by Pouliot [16] except for the superpotential.
An important observation in this paper is that the composite operator det s, which was truncated in the 4d duality, is not removed but mapped to the Coulomb branch operator of the dual theory. This is a new feature of the 3d Spin(7) and G 2 dualities. As a validity check of our analysis, we computed the superconformal indices by using the electric and magnetic theories and observed a beautiful agreement.
Although the dualities we found here are very similar to the 4d dualities [16], we couldn't derive these 3d dualities from the 4d ones via dimensional reduction [12,13]. An important difference between the 3d and 4d Spin (7) dualities is the absence of the superpotential proportional to det s in 3d. It would be interesting if we could find the connection between the 3d and 4d Spin(7) and G 2 dualities. In principle, it is possible to reduce the 4d superconformal indices to the 3d partition functions of these dualities [12,[44][45][46][47]. This will give us another consistency check of the dualities and deepen our understanding.
The Seiberg dualities proposed here include the (dressed) monopole superpotential. Therefore, this can be regarded as a generalized version of the monopole-deformed dualities [48]. It is interesting to further generalize the 3d dualities with dressed monopole superpotential in this direction. Since the theory includes a monopole potential, these dualities have a UVcompletion problem. We do not know how to express the dressed Coulomb branch operator in terms of the UV degrees of freedom in a gauge-invariant way. Of course, this does not ruin the validity of the dualities because the proposed dualities are valid in a far-infrared regime. Nonetheless, it is nice if we could understand the origin of the (dressed) monopole superpotential and obtain the UV-complete dualities where both of the electric and magnetic theories are UV-complete and have well-defined Lagrangian descriptions.
It is very important to generalize our argument to more general cases: One can in principle construct similar dualities for the 3d N = 2 Spin(N)gauge theories with vector and spinor matters according to the 4d dualities [17][18][19][20][21][22][23][24]. We will soon come back to this problem elsewhere. | 6,426 | 2019-07-07T00:00:00.000 | [
"Physics"
] |
INTERNATIONAL TRADE IN THE COVID-19 OUTBREAK: IS THE DIGITAL ECONOMY WORKING?
This paper presents the strengthening of the digital economy which acts as a stabilizer in the trade sector during the Covid-19 outbreak. In a contactless world, most interactions between customers and employees must occur virtually. While still making exceptions to some things, operating digitally is the only way to survive restrictions on activities by the government, including in the economic sector. The instructions for switching in a digitizing pattern are nothing new, they are simply brought into sharp focus. The most recent event, the Covid-19 outbreak, has accelerated a transformation, as evidenced by the sharp strengthening in the digitizing economic sector. Based on a literature study, the conclusions explain the opportunities for the digital economy to support the trade sector which experienced inconsistencies in growth tabulations during this outbreak. Through the perspective of neoliberalism, it can be used to explain how it is related to strengthening the digital economy with the stability of international trade at macro and micro levels
INTRODUCTION
The coronavirus outbreak in 2019 is still spreading almost globally.A number of countries are still experiencing a fairly rapid increase in cases.It has been more than half a year that the virus outbreak originating in the city of Wuhan (China) has become a health threat to the world community.The virus that attacks the human respiratory system has also spread in 215 countries.Recorded until September 2, 2020, this outbreak has infected 25,602,665 people in the world, of which 852,758 people (3.34%) have been confirmed dead.The good news, the healing rate is 17,704,454 people.When compared to each country, the United States is currently in first place for the highest number of positive cases, which is more than 6 million cases.Meanwhile, Anguilla is a country with the number of cases and the number of people who have recovered the least among others (3 people).Thus, Covid-19 not only threatens global health conditions but also greatly impacts the global economy if serious steps are not taken (WHO, 2020;Darma et al., 2020a).
Economic factors that experience a recession, over time, will lead countries to new problems (education, environment, culture, and legal certainty).For example, those who work in the trade sector play a major role in the progress of the country's economy.If production, distribution, and consumption patterns are hampered, then the key to a country's success in meeting the needs of its population will be disrupted.Micro, Small, and Medium Enterprises (MSMEs) have had a significant impact so far because they do not have much capital, so they experience a decrease in turnover spikes.The Gross Domestic Product (GDP), which is highly dependent on independent activities, will collapse and there will be many reductions in wages and massive employee layoffs.The supplement that needs to be taken is to revitalize it with a policy (from previously traditional to modern).After all, it was not only trade that collapsed, but primary sectors (such as agriculture) were also affected.A bold breakthrough (such as subsidies, business guarantees, and technical assistance) in the short term, is considered a solution rather than not taking concrete steps in order to save the economy (Muliadi, 2020;Darma et al., 2020b).
To explain this phenomenon, we use the theoretical framework of neoliberalism in international relations.Based on the proportions of the neoliberalism framework, we argue that the strengthening of the digital economy is one of the contemporary-based international relations phenomena that are interesting to discuss, especially in the current era.Obtaining statistical data from various institutions shows a significant increase in consumers of digital service providers in various parts of the world.This paper will then be divided into three parts, namely a section on the framework of thinking about neoliberalism in international relations, a discussion section with a description of the strengthening of the digital economy as a phenomenon of contemporary international relations in relation to the perspective of neoliberalism, and the last part is a conclusion.
NEOLIBERALISM AND INTERNATIONAL RELATIONS
The presence of neoliberalism is a form of actualization as well as a criticism of its predecessor theory, neorealism, which is considered too pessimistic so that it is unable to predict changes in the global political situation after the end of the last cold war.This triggered the interest of a number of researchers to revisit the formula presented by the liberalism perspective in overcoming global problems through the form of cooperation.In this way, neoliberalism is present as a new perspective that focuses on the operationalization scheme of international organizations and non-state actors through international cooperation (Mammadov and Hasanov, 2016;Nye, 1988;Simon, 1995;Stephen, 2011).
Neoliberalism recognizes the state as the main actor in international relations but rejects its position as a single actor and emphasizes that the presence of non-state actors plays a significant role.This departs from the international political and economic reality which is basically institutionalized.This institution acts as a mediator in realizing cooperation between actors in the international system.In addition, the existence of international institutions and regimes is very important in order to facilitate cooperation between countries that present principles, norms, rules, and procedures in the decision-making process (Lamy, 2005).Diversity in various fields ranging from economic, social, environmental to political causes various problems and interests which then require collaboration to find solutions.Thus, dependence on these various fields encourages inter-actors to then carry out a collaboration.The interdependence of both parties will create itself when each of them is familiar in establishing a cooperative relationship.The conditions of the international order competitive in nature, a driving factor for the state to maximize its achievements.Even so, the state tends to heed the benefits obtained in this cooperation but still tries to get absolute benefits (Castañeda and Shemesh, 2020;Tauss, 2012;Wills, 2014).Each country is interdependent with other countries and has their respective comparative advantages.This advantage makes them an opportunity to join in international trade.The value of export commodities and services will increase the trade balance of exporting and importing countries.As a result of the Covid-19 phenomenon, Figure 1 and Figure 2 present that the decline in trade fell drastically.Both scenarios (optimists and pessimists) both indicate an inevitable recession.For example, the intensity of export trade from the North American Continent reaches around 40% and the export value of the Asian Continent is up to the level of 36%.In terms of imports, only the European Union experienced the lowest loss of 12%.The domino effect due to the collapse of the global trade pattern also has fatal consequences for the unemployment rate.The occurrence of a massive reduction in the workforce of 5.9% can be explored for the Chinese case.In general, the prediction of global trade volume could fall by 12.9% in 2020 (it could even reach 31.9%).The USA is not alone in facing the increasing number of unemployed.Australia and South Korea also noted an increase in the unemployment rate and the situation would be even worse.
Based on this description, we take the current case through the strengthening of the digital economy as a phenomenon in contemporary international relations.In essence, digital transformation in the economic sector is nothing new.Prior to this outbreak, the shift towards economic digitization had already occurred in various countries.However, the existence of recent events has been able to accelerate the digital economy transformation so as to achieve extraordinary performance and experience significant strengthening.When viewed from the perspective of neoliberalism, this strengthening can occur due to interests in the economic sector which causes inter-factors to cooperate in order to gain profits.Economic digitization has succeeded in bridging social distances that are now turning virtually.
CONTEMPORARY INTERNATIONAL RELATIONS WITH THE ROLE OF THE DIGITAL ECONOMY
Industrial revolution 4.0 has brought significant changes to various areas of life.One of them is digital transformation innovation carried out in line with business competencies and involving digital technology in the implementation stage.The concept of the digital economy explains that the socio-political situation and economic system have the characteristics of an intellectual space, including access to instruments, capacity, and ordering information.The digitalization era is now growing rapidly by blurring the boundaries between networks.This was followed by a collaboration that was able to give birth to the concept of sharing economy, the internet of things, e-commerce, and financial technology in the economic field (Tapscott, 1995;Maria et al., 2019).
Covid-19 has brought humanity around the world into an unprecedented phase of isolation.Various efforts were made including social restrictions which were considered the most effective in slowing the spread of the virus until the discovery of a Covid-19 control vaccine.This forced business managers to transform their business into digital form.For example, the food and beverage producing industry in China have expanded take away options by up to 40% where this was first applied when times were difficult.In addition, according to data compiled by QuantumMetric, the level of urgency for online shopping increased by 8.8% in February (Sheth, 2020;Ye and Kulathunga, 2019;Wardhono et al., 2019).
Not only consumption in terms of fashion and food, but digital content also experienced a surge in users to reach 51% through streaming services.For example, during the first quarter of 2020, Netflix has recorded 16 million new user registrations for its services worldwide.In Indonesia itself, the use of digital services has increased during the period of social distancing.Consumers decide to buy daily necessities through an online platform in the form of e-commerce.As a comparison, the development of e-commerce in Indonesia is able to penetrate the figure of IDR 40 billion in 2019 which makes Indonesia the country with the largest digital economy value in Southeast Asia and is predicted to increase to IDR 130 billion in 2025 (BDO United States, 2020).
Millions of people choose to change their behavior in terms of meeting economic needs at the same time for reasons of time effectiveness and personal convenience.Economic digitization has proven to have a positive impact related to efficiency, productivity, reduced production costs, and agility of excellence by quickly connecting one party to another.In fact, the utilization of digital technology is able to optimize activities so that they are more focused and reduce waste of resources.In addition, the existence of instructions from international institutions to carry out Work From Home (WFH) is able to maximize workforce productivity while maintaining company credibility.And last but not least, the application of data-based technology has the flexibility for customers so that they are able to make or change decisions more quickly.
This phenomenon then proves that the existence of non-state actors has a significant influence on international relations.Especially at this time, the digital economy is capable of playing a key role in stabilizing the economic sector through micro and macro policies.Even though the neoliberalists believe in an anarchist international system, they still show optimism about the possibility of a collaboration.Neoliberalism also embraces utility value in the economic sector which is based on this case manifested through digital economic services by involving the e-commerce platform as an actor.The wide-open opportunities in international trade during the Covid-19 outbreak caused the market to act as the rulers, especially in this case digital service providers.
CONCLUSION
We conclude that the existence of Neoliberalism as a theory of international relations provides an opportunity for non-state actors to contribute to the sector of life.The existence of an interest in the economic sector is one of the factors causing inter-factors to cooperate in order to gain profits.In strengthening the digital economy that occurred during the Covid-19 outbreak, the government left the matters of determining wages and reciprocity to work owners.In addition to increasing profits by service providers, consumers also get various advantages in carrying out transactions.The digital economy has succeeded in becoming a key role in stabilizing the trade sector at both micro and macro levels.Thus, we have proven that the perspective of neoliberalism can be used to explain how its role and relationship are to strengthen the digital economy as a stabilizer for the trade sector that was affected during the Covid-19 outbreak. | 2,745.4 | 2020-10-28T00:00:00.000 | [
"Economics"
] |
TF-YOLO: An Improved Incremental Network for Real-Time Object Detection
: In recent years, significant advances have been gained in visual detection, and an abundance of outstanding models have been proposed. However, state-of-the-art object detection networks have some ine ffi ciencies in detecting small targets. They commonly fail to run on portable devices or embedded systems due to their high complexity. In this workpaper, a real-time object detection model, termed as Tiny Fast You Only Look Once (TF-YOLO), is developed to implement in an embedded system. Firstly, the k-means ++ algorithm is applied to cluster the dataset, which contributes to more excellent priori boxes of the targets. Secondly, inspired by the multi-scale prediction idea in the Feature Pyramid Networks (FPN) algorithm, the framework in YOLOv3 is e ff ectively improved and optimized, by three scales to detect the earlier extracted features. In this way, the modified network is sensitive for small targets. Experimental results demonstrate that the proposed TF-YOLO method is a smaller, faster and more e ffi cient network model increasing the performance of end-to-end training and real-time object detection for a variety of devices.
Introduction
When people glance at an image, they can immediately know what the objects are, which type of the targets are in the image, and where they are [1]. In order to match the excellent human visual system, fast and accurate object detection represents a significant focus of research in the field of computer vision. Their achievements in this area are used in various applications, such as video surveillance, face recognition, human-computer interaction, and self-driving technology, to name just a few [2]. Robust object detection in a simple environment is relatively easy to achieve, but it is hard to guarantee both speed and accuracy of recognition in practice since the real-world environments may look more complex [3].
Deep learning techniques have been widely employed in the field of object detection during the past decade and have become efficient approaches of extracting features from images. Generic object detection based on deep learning is characterized by two factors: plentiful features and robust feature representation capabilities. They are also combined with traditional hand-crafted features [4]. Existing object detection methods based on deep learning can be generally grouped into two categories, the models based on region proposals and the models based on regression [5].
Currently, classical object detection methods based on region proposals include Region-based Convolutional Neural Networks (R-CNNs) [6], Spatial Pyramid Pooling Networks (SPP-net) [7], Fast R-CNNs [8], Faster R-CNNs [9], and Region-based Fully Convolutional Networks (R-FPN) [10]. However, these approaches fail to achieve real-time speed due to the expensive running process and inefficiency of region propositions. The R-CNNs hypothesizes object locations which depends on region proposal algorithms. Features in the R-CNN are, first, extracted from each candidate region, and then fed into convolutional neural networks. Finally, they are evaluated by Support Vector Machines score are defined in each grid cell. Each grid cell predicts C conditional probabilities, denoted as P(Class i Object). If there is an object in the grid, P(Object) = 1. Otherwise, P(Object)= 0. The confidence score here refers to the accuracy the box predicts and the probability the objects are contained, which is defined as P(Object) * IOU truth pred . Intersection-Over-Union (IOU), here, refers to the intersection area between the predicted bounding box and ground truth box, representing a fraction ranging from 0 to 1. It is noteworthy that conditional class probability (P Class i Object) ) is quite different from the confidence score (P(Object) * IOU truth pred ). The former is predicted in each grid, while the other is predicted in each bounding box [1]. Multiply these values in the test process, the class-specific scores in each box is defined in Equation (1). These scores encode the probability of the object appearing in the box, as well as how well the bounding box fits the object. P(Class i Object) * P(Object) * IOU truth pred = P(Class i ) * IOU truth pred (1)
The Network of Darknet19
YOLOv3 follows the principle of coordinate prediction in YOLOv2. For predicting the categories, multi-label and multi-classification are applied instead of original single-label and multi-classification. Meanwhile, YOLOv3 adopts binary cross entropy loss function instead of multi-class loss function.
On a standard computer with Graphics Processing Unit (GPU), it is easy for YOLOv3 to achieve real-time performance [22]. However, in the miniaturized embedded devices, such as Nvidia SoM, the conventional YOLOv3 algorithm runs slowly. The YOLOv3-tiny network can basically satisfy real-time requirements based on limited hardware resource [23]. Therefore, this paper switches to the YOLOv3-tiny algorithm. The Darknet19 structure of the YOLOv3-tiny network is shown in Table 1, which shows streamlined and enables the YOLOv3-tiny network to achieve the desired effect in miniaturized devices.
Multi-Scale Prediction in Detecting Objects
Detecting objects at different scales used to be a challenging research topic in the field of computer vision [24]. Feature pyramids built on image pyramids form the fundamentals of a standard solution, partly because of their intensity in computing and their memory. Recently proposed target detectors based on deep learning have avoided pyramid representations [25]. Nevertheless, image pyramids are not the best approach to compute a multi-scale feature representation. In order to naturally leverage the inherent multi-scale and pyramidal shape in the feature hierarchy, the in-network feature pyramids can replace the image pyramids [26] without sacrificing speed and memory.
Relative top-down architectures with skip connections are popular in state-of-the-art object detection research. The YOLOv3-tiny creates a feature pyramid with strong semantics at two scales by adopting subsampling layers and a fusion approach. As shown in Figure 1, the size of the two scales are 13 × 13 and 26 × 26, which are obtained in the detection of ordinary size target, respectively. Finally, two scales are merged in the end. The architecture is constructed as a feature pyramid, wherein predictions are independently made on each level. The feature pyramid has rich semantics via a top-town pathway and lateral connections [27]. In this way, YOLOv3-tiny has the ability to detect small targets. [25]. Nevertheless, image pyramids are not the best approach to compute a multi-scale feature representation. In order to naturally leverage the inherent multi-scale and pyramidal shape in the feature hierarchy, the in-network feature pyramids can replace the image pyramids [26] without sacrificing speed and memory. Relative top-down architectures with skip connections are popular in state-of-the-art object detection research. The YOLOv3-tiny creates a feature pyramid with strong semantics at two scales by adopting subsampling layers and a fusion approach. As shown in Figure 1, the size of the two scales are 13 × 13 and 26 × 26, which are obtained in the detection of ordinary size target, respectively. Finally, two scales are merged in the end. The architecture is constructed as a feature pyramid, wherein predictions are independently made on each level. The feature pyramid has rich semantics via a top-town pathway and lateral connections [27]. In this way, YOLOv3-tiny has the ability to detect small targets.
Proposed TF-YOLO Network
Previous methods, such as SSD and YOLOv3, regard detection as a regression problem, which have successfully achieved efficient and accurate results. Nevertheless, these methods fail to detect objects on an embedded system in real-time. This section introduces the proposed Tiny Fast YOLO (TF-YOLO) network in detail. As shown in Figure 2, the proposed TF-YOLO network is designed based on the YOLOv3-tiny algorithm, and it attempts to process more efficiently on the above devices. Owning to multi-scale fusion, multi-scale detection, and k-means++ clustering, the TF-YOLO network enables end-to-end training and real-time speeds while keeping high average precision. Therefore, TF-YOLO network performs well on detecting multi-scale targets, especially on recognizing smaller targets. The framework flowchart of TF-YOLO network is shown in Figure 3.
Proposed TF-YOLO Network
Previous methods, such as SSD and YOLOv3, regard detection as a regression problem, which have successfully achieved efficient and accurate results. Nevertheless, these methods fail to detect objects on an embedded system in real-time. This section introduces the proposed Tiny Fast YOLO (TF-YOLO) network in detail. As shown in Figure 2, the proposed TF-YOLO network is designed based on the YOLOv3-tiny algorithm, and it attempts to process more efficiently on the above devices. Owning to multi-scale fusion, multi-scale detection, and k-means++ clustering, the TF-YOLO network enables end-to-end training and real-time speeds while keeping high average precision. Therefore, TF-YOLO network performs well on detecting multi-scale targets, especially on recognizing smaller targets. The framework flowchart of TF-YOLO network is shown in Figure 3. [25]. Nevertheless, image pyramids are not the best approach to compute a multi-scale feature representation. In order to naturally leverage the inherent multi-scale and pyramidal shape in the feature hierarchy, the in-network feature pyramids can replace the image pyramids [26] without sacrificing speed and memory. Relative top-down architectures with skip connections are popular in state-of-the-art object detection research. The YOLOv3-tiny creates a feature pyramid with strong semantics at two scales by adopting subsampling layers and a fusion approach. As shown in Figure 1, the size of the two scales are 13 × 13 and 26 × 26, which are obtained in the detection of ordinary size target, respectively. Finally, two scales are merged in the end. The architecture is constructed as a feature pyramid, wherein predictions are independently made on each level. The feature pyramid has rich semantics via a top-town pathway and lateral connections [27]. In this way, YOLOv3-tiny has the ability to detect small targets.
Proposed TF-YOLO Network
Previous methods, such as SSD and YOLOv3, regard detection as a regression problem, which have successfully achieved efficient and accurate results. Nevertheless, these methods fail to detect objects on an embedded system in real-time. This section introduces the proposed Tiny Fast YOLO (TF-YOLO) network in detail. As shown in Figure 2, the proposed TF-YOLO network is designed based on the YOLOv3-tiny algorithm, and it attempts to process more efficiently on the above devices. Owning to multi-scale fusion, multi-scale detection, and k-means++ clustering, the TF-YOLO network enables end-to-end training and real-time speeds while keeping high average precision. Therefore, TF-YOLO network performs well on detecting multi-scale targets, especially on recognizing smaller targets. The framework flowchart of TF-YOLO network is shown in Figure 3. Multi-scale detection is more sensitive to small targets. Multi-scale fusion makes full use of features of the input image. k-means++ is applied to find more excellent priori boxes of the targets.
The Features of Multiple Layers Concatenation
When deep neural networks start converging, the degradation problem will be exposed, and the accuracy will deteriorate rapidly in the end. Aiming to address that problem, this paper follows
The Features of Multiple Layers Concatenation
When deep neural networks start converging, the degradation problem will be exposed, and the accuracy will deteriorate rapidly in the end. Aiming to address that problem, this paper follows DenseNet proposed by Huang et al. [2]. Short connections in DenseNet enable the training process to be easier and more accurate in CNN, which is of great importance in image classification. In the Dense block, previous convolutional layer output is transferred to the subsequent one. Hence, more complicated features are extracted by the filters of the convolutional layers, which are included by multi-layer networks. It can also be understood that all layers are directly connected with each other, thus apparently alleviating gradient disappearance. By incorporating those ideas into the hidden network of DenseNet, the sufficient features of the above network can be extracted. Since the layer is not very deep, the output layer can extract the previous features. Subsequently all of them have been connected together. This paper adopts the principle of multiple layers concatenation and eventually achieves satisfactory performance, which will be thoroughly presented in Section 4.
The detailed all-layer connection method utilized in the proposed TF-YOLO detection approach is display in Figure 4. Specifically, the tenth, eleventh and thirteenth layers of the designed network are connected, and then these layers feed into the convolutional layer followed by the up-sampling layer. Similarly, the tensors in eighth and ninth layers are processed together, forming an innovative input and entering into the next layer. Finally, the corresponding in sixth and seventh layers and the tensor are connected to the next hidden layer. DenseNet proposed by Huang et al. [2]. Short connections in DenseNet enable the training process to be easier and more accurate in CNN, which is of great importance in image classification. In the Dense block, previous convolutional layer output is transferred to the subsequent one. Hence, more complicated features are extracted by the filters of the convolutional layers, which are included by multi-layer networks. It can also be understood that all layers are directly connected with each other, thus apparently alleviating gradient disappearance. By incorporating those ideas into the hidden network of DenseNet, the sufficient features of the above network can be extracted. Since the layer is not very deep, the output layer can extract the previous features. Subsequently all of them have been connected together. This paper adopts the principle of multiple layers concatenation and eventually achieves satisfactory performance, which will be thoroughly presented in Section 4. The detailed all-layer connection method utilized in the proposed TF-YOLO detection approach is display in Figure 4. Specifically, the tenth, eleventh and thirteenth layers of the designed network are connected, and then these layers feed into the convolutional layer followed by the up-sampling layer. Similarly, the tensors in eighth and ninth layers are processed together, forming an innovative input and entering into the next layer. Finally, the corresponding in sixth and seventh layers and the tensor are connected to the next hidden layer.
Scale 1
Up-sample Figure 4. The workflow of connecting multiple layers.
Multi-Scale Prediction Framework
Small target detection is a significant challenge that commonly causes traditional detectors to fail. Multi-scale prediction is an important step towards understanding and inferring different objects, especially small targets, and their arrangements observed in a scene. This section presents an improved FPN-based multi-scale prediction framework and integrates it to a particular filter detector to address that problem. The major advantage of FPNs is that they produce a multi-scale feature representation in which all levels are semantically strong. As a result, predictions can be made on the finest level. In addition, it can be trained end-to-end with three scales and be used consistently during
Multi-Scale Prediction Framework
Small target detection is a significant challenge that commonly causes traditional detectors to fail. Multi-scale prediction is an important step towards understanding and inferring different objects, especially small targets, and their arrangements observed in a scene. This section presents an improved FPN-based multi-scale prediction framework and integrates it to a particular filter detector to address that problem. The major advantage of FPNs is that they produce a multi-scale feature representation in which all levels are semantically strong. As a result, predictions can be made on the finest level. In addition, it can be trained end-to-end with three scales and be used consistently during training and testing process. Therefore, FPNs are able to achieve higher accuracy without increasing testing time over the single-scale baseline.
In the proposed work, several convolutional layers are added from the feature extractor. The last three predict a three-dimensional tensor encoding bounding box coordinates, object prediction, and class predictions [1]. Assuming that there are 10 classes in the experiment, and three boxes at each scale are predicted. Thus, the tensor is M × M × [3 × (4 + 1 + 10)], for four bounding box offsets, one object prediction, and ten class predictions.
The proposed TF-YOLO network adopts the advances of the Darknet structure. The neural network is not deep, whereas, the features in different scales are merged in the proposed TF-YOLO network. It connects the feature maps with the same feature scale in the above mentioned Darknet structure. Meanwhile, the network extracts the feature map from two former convolution layers, followed by the up-sampling layer. Then the tensors above are connected together. In this way, the characteristics of the hidden layer, as well as the deep features can be extracted by the full-connection layer. Section 3.1 explains how to make hierarchical connections in detail. In the first layer, the size of the tensor is 13 × 13 × 18. Via two convolutional layers and an up-sampling layer, the tensor becomes 26 × 26 × 18, which predicts the second scale. This procedure repeats one more time, and the tensors becomes 52 × 52 × 18.
In the proposed TF-YOLO network, these three scales are used to detect targets in various sizes. The network performs large-scale detection in 13 × 13 size map and detects the moderate-scale target in 26 × 26. The small target is detected in 52 × 52 size map. By connecting the multiple features of the same scale, the TF-YOLO network prominently promotes the ability to detect objects. The feature extraction workflow of the TF-YOLO network is shown in Figure 5. training and testing process. Therefore, FPNs are able to achieve higher accuracy without increasing testing time over the single-scale baseline.
In the proposed work, several convolutional layers are added from the feature extractor. The last three predict a three-dimensional tensor encoding bounding box coordinates, object prediction, and class predictions [1]. Assuming that there are 10 classes in the experiment, and three boxes at each scale are predicted. Thus, the tensor is , for four bounding box offsets, one object prediction, and ten class predictions. The proposed TF-YOLO network adopts the advances of the Darknet structure. The neural network is not deep, whereas, the features in different scales are merged in the proposed TF-YOLO network. It connects the feature maps with the same feature scale in the above mentioned Darknet structure. Meanwhile, the network extracts the feature map from two former convolution layers, followed by the up-sampling layer. Then the tensors above are connected together. In this way, the characteristics of the hidden layer, as well as the deep features can be extracted by the full-connection layer. Section 3.1 explains how to make hierarchical connections in detail. In the first layer, the size of the tensor is13 13 18 × × . Via two convolutional layers and an up-sampling layer, the tensor becomes 26 26 18 × × , which predicts the second scale. This procedure repeats one more time, and the tensors becomes 52 52 18 × × .
In the proposed TF-YOLO network, these three scales are used to detect targets in various sizes.
The network performs large-scale detection in 13 13 × size map and detects the moderate-scale target in 26 26 × . The small target is detected in 52 52 × size map. By connecting the multiple features of the same scale, the TF-YOLO network prominently promotes the ability to detect objects. The feature extraction workflow of the TF-YOLO network is shown in Figure 5.
K-means++ Clustering in Pre-processing
Clustering algorithms is generally viewed as an unsupervised method for data analysis [28]. K-means++ clustering is an approach commonly used to adaptively partition a dataset into groups. It is necessary to specify the number of cluster centers in advance, since k clustering centers are
K-means++ Clustering in Pre-Processing
Clustering algorithms is generally viewed as an unsupervised method for data analysis [28]. K-means++ clustering is an approach commonly used to adaptively partition a dataset into groups. It is necessary to specify the number of cluster centers in advance, since k clustering centers are simultaneously initialized. The way to choose the cluster center is to select the maximum or minimum value randomly in the dataset as the initial cluster center. Therefore, the better the cluster centers are selected, the more effective the algorithm would be obtained.
As mentioned above, K-means++ algorithm is used to conduct latitude clustering at first. It randomly selects a sample point in the dataset as the first initialized cluster center, and it calculates the rest for each cluster center [29]. After the sample points are initialized, the distances between the cluster centers are calculated, then the shortest distance from its own cluster center is chosen. On the other hand, the sample with the greatest distance is selected as the new cluster center. The above process is repeated until there is no further change in the assignment of distances to clusters. Thus, the algorithm converges. Eventually, the final cluster center is calculated using the k-means++ algorithm to determine the specific parameters of the anchor.
The k-means++ algorithm generally uses the Euclidean distance to measure the distance between two points. Nevertheless, there are the following three scale targets in the dataset: large-scale targets, moderate-scale targets, and small-scale targets [30]. The main steps of the K-means++ method are as follows:
•
Step 1: Choose an initial center t 1 uniformly at random from the dataset X.
•
Step 2: Choose the next center t i , selecting t i = x ∈ X with probability is the distance from a data point x to the closest center.
•
Step 3: Repeat Step 2, until D(x) is the shortest distance. After which, k center is chosen.
•
Step 4: Define center T = {t 1 , t 2 , · · · , t k }. • Step 5: For i ∈ {1, 2, · · · , k}, set the cluster T i to be the set of points in X that are closer to t i than they are to t j for all j i. • Step 6: For i ∈ {1, 2, · · · , k}, set t i to be the center of mass of all points in T i : Step 7: Repeat Step 5 and Step 6 until T converges.
The definition of the three sizes of targets here refers to the proportion that they are in the entire image. Furthermore, in terms of the Euclidean distance, more errors would occur in the larger bounding boxes rather than in smaller bounding boxes. Since the goal is to get better IOU through anchor boxes, Jake's distance is a better choice. Jake's distance is adapted to the variable box size, which is a good solution to resolve the error caused by Euclidean distance. The distance formula is defined as: where box represents the sample, and centroid represents the center of the cluster, and IOU(box, centroid) represents the intersection of the cluster's center box and the cluster box [31]. The intersection ratio IOU can indicate the accuracy of the prediction box by Equation (3).
where bb gt represents the real box, and bb dt represents the prediction box. Combining the above two equations, the final distance can be calculated as d(box, centroid) = bb gt ∪ bb dt − bb gt ∩ bb dt bb gt ∪ bb dt (4) As mentioned above, YOLOv3 algorithm has achieved end-to-end training and high-speed target detection. However, some problems still exist. Conventional YOLO divides each image into a fixed grid, which results in the number of detected objects will be limited. The fixed parameters provided by anchor are suitable for the targets in the VOC datasets, while they are not adapted to the targets in specific scenes. Common targets, such as vehicles, tanks, and airplanes, have a large aspect ratio. Therefore, this section takes advantage of the ideas in Fast R-CNN and SSD to re-cluster according to a spacific scenario. In the beginning, the network manually sets a priori box. To guarantees the selection of network is more subjective, and it would make the deep network easier to learn. Furthermore, its predictions perform better than the original method. In order to optimize the adapted parameters and select appropriate anchor box size, the designed network needs to re-cluster according to real application domains. In this way, the TF-YOLO network performs well on multi-scale prediction, meanwhile, it is insensitive to small objects particularly.
Experimental Verification and Result Analysis
In this section, the performance of the proposed TF-YOLO network is evaluated. Specifically, aerial remote sensing images from NWPU VHR-10 [32][33][34] are used for training and testing in the experiment. The NWPU VHR-10 dataset contains a total of 800 very-high-resolution (VHR) remote sensing images cropped from Google Earth and the Vaihingen dataset, and manually annotated by experts. The differences between remote sensing and conventional natural images can be briefly described as follows. First of all, there are numerous small targets with a little visual information in the remote sensing images. Assuming that the CNN's pooling layer further reduces the amount of information to the small targets. After four pooling layers, a 24 × 24 target has only approximately 1 pixel, leading to the dimension too diminutive to distinguish. Secondly, there are various scale and perspective of remote sensing images. The perspectives of aerial remote sensing images are basically high-altitude, and the targets on the ground may have different size and mode. Some detectors well trained on conventional datasets may fail to perform well in remotely sensed images. Thirdly, the background of remote sensing images is complex for the large field of view. What's more, a variety of backgrounds will extend a certain amount of interference on testing targets.
Comparison of Speed and Precision
For this test, a total of 500 pictures containing 10 types of objects were selected in NWPU VHR-10 dataset. The detailed 10 categories are airplane, ship, storage tank, baseball diamond, tennis court, basketball court, ground track field, harbor, bridge, and vehicle. Cross validation is used to evaluate the precision of TF-YOLO. During the process of data pre-processing, nine cluster centers are selected, and the targets are distributed into three scales by K-means++. The size of these nine scales are arranged from small to large as follows: (22,19), Generally, the trade-off between accuracy and recall is a tricky problem. In order to evaluate the precision among different types of targets, mAP is introduced, which is one of significant measure metrics to evaluate the test results [36].
Meanwhile, in order to maintain the consistency of the data distribution in each subset, the feature can be extracted through hierarchical sampling layers. AP and mAP of the three sets of comparative experiments are illustrated in Table 2. The first set of comparison experiments is the YOLOv3-tiny network. Inspired by YOLOv3-tiny, the second set of comparison experiments is defined as the YOLO_k network. Without data pre-processing, its structure is the same as TF-YOLO. The third group of comparative experiments is the TF-YOLO network, which improves the size of anchors through K-means clustering. One significant advantage of the proposed TF-YOLO network is real-time working on portable devices. For instance, the TF-YOLO network can be applied to the embedded system on the Nvidia Jetson TX2. After training models in NWPU VHR-10, the TF-YOLO network runs at about 24.3 FPS. However, the YOLOv3-tiny network runs at 24.6 FPS. In embedded systems, the YOLOv3-tiny network is slightly faster than the TF-YOLO network. Partly because the network in TF-YOLO is deeper than YOLOv3-tiny, and more parameters in TF-YOLO are learnt during the training period. However, the accurate of the TF-YOLO network is significantly improved than the YOLOv3-tiny network, which results in the TF-YOLO network performing is well on detecting multi-scale targets with real-time speed.
As obviously revealed in Table 2, the mAPs of TF-YOLO network are prominently higher than those of the YOLOv3-tiny and YOLO_k networks, regardless of whether complex objects are included. The YOLOv3-tiny network does not respond satisfactory to the targets in testing images. Besides, by improving the network structure in the YOLO_k network, mAP significantly improves to 0.86413, but it is still unsatisfactory. On the other hand, the proposed TF-YOLO network, within K-means++ clustering to change anchors, demonstrates the highest AP of the single-class target and achieves 0.89449 in mAP.
In VOC 2007 dataset, a total of 200 images containing small targets were selected. In consideration of the detection results, the TF-YOLO and YOLOv3-tiny networks were chosen for comparison with state-of-art methods based on region proposals, including SPP-net, RCNN, Faster RCNN, and YOLOv3. The APs and mAPs of the above methods are displayed in Table 3. Table 3 shows that the AP scores of the TF-YOLO method are higher than those of the classical methods in every object class. When small objects are included in complicated background, the mAP of the TF-YOLO method is 31.5%, which is higher than that of the YOLOv3-tiny method by 27.2%, and SPP-net by 30.3%. When compared with YOLOv3, RCNN, and Faster RCNN, the precision of the TF-YOLO method is lower than these methods. Nevertheless, the detection speed of the TF-YOLO method is much faster than these classical methods, which is shown in Table 4. Taking both accuracy and detection speed into consideration, TF-YOLO method exhibits the best performance in small object detection among the above methods in an embedded system.
Comparison of Loss Curves
As revealed in Figure 6, the loss curve of the TF-YOLO network converges faster than the YOLOv3-tiny network. Specifically, the loss curve of the YOLOv3-tiny network converges approximately from 0.1. Whereas, the loss curve of TF-YOLO starts to converge approximately from 0.05. and SPP-net by 30.3%. When compared with YOLOv3, RCNN, and Faster RCNN, the precision of the TF-YOLO method is lower than these methods. Nevertheless, the detection speed of the TF-YOLO method is much faster than these classical methods, which is shown in Table 4. Taking both accuracy and detection speed into consideration, TF-YOLO method exhibits the best performance in small object detection among the above methods in an embedded system.
Comparison of Loss Curves
As revealed in Figure 6, the loss curve of the TF-YOLO network converges faster than the YOLOv3-tiny network. Specifically, the loss curve of the YOLOv3-tiny network converges approximately from 0.1. Whereas, the loss curve of TF-YOLO starts to converge approximately from 0.05.
Comparison of IOU Curve
IOU is the intersection over union of predicted bounding box and ground truth. The ideal situation is complete overlap, and IOU should approach 1. In general, if IOU > 0.7, it can be considered as a good result [37]. For the loss function of the training model, the sum-squared error is used to integrate the localization error (bounding boxes coordinate error) and the classification error. Figure 7 shows the IOU curve of the YOLOv3-tiny network and the loss curve of the TF-YOLO network, respectively.
Comparison of IOU Curve
IOU is the intersection over union of predicted bounding box and ground truth. The ideal situation is complete overlap, and IOU should approach 1. In general, if IOU > 0.7, it can be considered as a good result [37]. For the loss function of the training model, the sum-squared error is used to integrate the localization error (bounding boxes coordinate error) and the classification error. Figure 7 shows the IOU curve of the YOLOv3-tiny network and the loss curve of the TF-YOLO network, respectively.
Compared with the IOU curve of the YOLOv3-tiny network, the area under the curve of the TF-YOLO network is larger. Furthermore, the IOU curve of the TF-YOLO network converges faster. The TF-YOLO network achieves a higher overlap between the candidate bound and the ground truth bound, which means the ratio of their intersection to union is greater, which indicates that the predicted bounding box is close to the ground truth. In summary, the performance of the TF-YOLO network has been greatly improved. Compared with the IOU curve of the YOLOv3-tiny network, the area under the curve of the TF-YOLO network is larger. Furthermore, the IOU curve of the TF-YOLO network converges faster. The TF-YOLO network achieves a higher overlap between the candidate bound and the ground truth bound, which means the ratio of their intersection to union is greater, which indicates that the predicted bounding box is close to the ground truth. In summary, the performance of the TF-YOLO network has been greatly improved.
Qualitative Analysis
In aerial photography and remote sensing images, the targets are prone to be smaller in size and are under complicated background. Sometimes, the scale and orientation would be various. Therefore, the conventional algorithms generally fail to detect these targets.
In this section, the YOLOv3-tiny network is chosen for comparison with the proposed TF-YOLO network. Figure 8 depicts a total of 16 pictures of 8 scenarios. The pictures in the first and third rows are performed by the YOLOv3-tiny network. For comparison, the pictures in the second and fourth rows are detected by the proposed TF-YOLO network. It is easy to learn that there is certain missed and false detection in the YOLOv3-tiny network. In contrast, almost all visible targets are effectively monitored by the TF-YOLO network. For a clearer representation, the above corresponds to the same picture. It is easy to see that there are certain missed and false detection in the YOLOv3-tiny network. On the other hand, TF-YOLO network has a much better detection effect, and almost all the targets to be inspected are detected.
Qualitative Analysis
In aerial photography and remote sensing images, the targets are prone to be smaller in size and are under complicated background. Sometimes, the scale and orientation would be various. Therefore, the conventional algorithms generally fail to detect these targets.
In this section, the YOLOv3-tiny network is chosen for comparison with the proposed TF-YOLO network. Figure 8 depicts a total of 16 pictures of 8 scenarios. The pictures in the first and third rows are performed by the YOLOv3-tiny network. For comparison, the pictures in the second and fourth rows are detected by the proposed TF-YOLO network. It is easy to learn that there is certain missed and false detection in the YOLOv3-tiny network. In contrast, almost all visible targets are effectively monitored by the TF-YOLO network. For a clearer representation, the above corresponds to the same picture. It is easy to see that there are certain missed and false detection in the YOLOv3-tiny network. On the other hand, TF-YOLO network has a much better detection effect, and almost all the targets to be inspected are detected. Compared with the IOU curve of the YOLOv3-tiny network, the area under the curve of the TF-YOLO network is larger. Furthermore, the IOU curve of the TF-YOLO network converges faster. The TF-YOLO network achieves a higher overlap between the candidate bound and the ground truth bound, which means the ratio of their intersection to union is greater, which indicates that the predicted bounding box is close to the ground truth. In summary, the performance of the TF-YOLO network has been greatly improved.
Qualitative Analysis
In aerial photography and remote sensing images, the targets are prone to be smaller in size and are under complicated background. Sometimes, the scale and orientation would be various. Therefore, the conventional algorithms generally fail to detect these targets.
In this section, the YOLOv3-tiny network is chosen for comparison with the proposed TF-YOLO network. Figure 8 depicts a total of 16 pictures of 8 scenarios. The pictures in the first and third rows are performed by the YOLOv3-tiny network. For comparison, the pictures in the second and fourth rows are detected by the proposed TF-YOLO network. It is easy to learn that there is certain missed and false detection in the YOLOv3-tiny network. In contrast, almost all visible targets are effectively monitored by the TF-YOLO network. For a clearer representation, the above corresponds to the same picture. It is easy to see that there are certain missed and false detection in the YOLOv3-tiny network. On the other hand, TF-YOLO network has a much better detection effect, and almost all the targets to be inspected are detected. To guarantee the objectiveness of evaluating the performance of the proposed TF-YOLO network, detection results of 20 randomly selected test images from the test-set are also illustrated. The testing results are shown in Figure 9. Experimental results indicate that the proposed TF-YOLO network enables a better retrieval capability and a higher detection accuracy for object detection, and it is sensitive for small targets. To guarantee the objectiveness of evaluating the performance of the proposed TF-YOLO network, detection results of 20 randomly selected test images from the test-set are also illustrated. The testing results are shown in Figure 9. Experimental results indicate that the proposed TF-YOLO network enables a better retrieval capability and a higher detection accuracy for object detection, and it is sensitive for small targets. To guarantee the objectiveness of evaluating the performance of the proposed TF-YOLO network, detection results of 20 randomly selected test images from the test-set are also illustrated. The testing results are shown in Figure 9. Experimental results indicate that the proposed TF-YOLO network enables a better retrieval capability and a higher detection accuracy for object detection, and it is sensitive for small targets.
Conclusions
This paper proposes a multi-scale object detection approach for small targets detection. The improved incremental network for real-time object detection is termed as Tiny Fast YOLO (TF-YOLO) network. The major improvement of the TF-YOLO network is owning to its structural optimization of the YOLOv3-tiny network. In addition, introducing the K-means++ algorithm as a starting point, the TF-YOLO network can get a better priori box for each target, with clustering the dataset and selecting the number and specifications of the candidate frames. In this way, the TF-YOLO network can carry out multi-scale prediction, and the accuracy of the detection of small targets has also been significantly improved. Compare to conventional detectors, this paper is a smaller, faster and more efficient detector, increasing the performance of end-to-end training and real-time object detection to a variety of devices, even the embedded systems or portable devices. Experimental results demonstrate that the TF-YOLO network takes full advantage of image features in the framework and improves the performance on small targets with less time consuming. Considering a trade-off between accuracy and speed, the proposed TF-YOLO network exhibits the best performance in small object detection among the state-of-the-art methods. In future work, the lower level features will be richly extracted with the multi-scale paradigm to promote detection performance. | 8,796.2 | 2019-08-07T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
General relativity as a biconformal gauge theory
We consider the conformal group of a space of dim n=p+q, with SO(p,q) metric. The quotient of this group by its homogeneous Weyl subgroup gives a principal fiber bundle with 2n-dim base manifold and Weyl fibers. The Cartan generalization to a curved 2n-dim geometry admits an action functional linear in the curvatures. Because symmetry is maintained between the translations and the special conformal transformations in the construction, these spaces are called biconformal; this same symmetry gives biconformal spaces overlapping structures with double field theories, including manifest T duality. We establish that biconformal geometry is a form of double field theory, showing how general relativity with integrable local scale invariance arises from its field equations. While we discuss the relationship between biconformal geometries and the double field theories of T-dual string theories, our principal interest is the study of the gravity theory. We show that vanishing torsion and vanishing co-torsion solutions to the field equations overconstrain the system, implying a trivial biconformal space. Wih co-torsion unconstrained, we show that (1) the torsion-free solutions are foliated by copies of an n-dim Lie group, (2) torsion-free solutions generically describe locally scale-invariant general relativity with symmetric, divergence-free sources on either the co-tangent bundle of n-dim (p,q)-spacetime or the torus of double field theory, and (3) torsion-free solutions admit a subclass of spacetimes with n-dim non-abelian Lie symmetry. These latter cases include the possibility of a gravity-electroweak unification. It is notable that the field equations reduce all curvature components to dependence only on the solder form of an n-dim Lagrangian submanifold, despite the increased number of curvature components and doubled number of initial independent variables.
Biconformal spaces and the biconformal action
It was shown in the 1950s and 1960s that general relativity may be cast as a Lorentz [1] or Poincaré [2] gauge theory. Subsequent approaches [3,4,5,6,7,8,9] refined the methods and extended the initial symmetry to Weyl, deSitter and conformal. A systematic approach to the resulting gauge theories of gravity shows that it is possible to formulate general relativity in several ways [10].
Conformally based theories of gravity
Generally, the use of conformal symmetries for gravity theories (and in the MacDowell-Mansouri case, de Sitter) leads to actions functionals which are quadratic in the curvature and apply only to 4-dimensional spacetimes. This is because conformal scaling by e φ changes the volume form by e nφ in n-dimensions. In four dimensions this factor may be offset by two factors of the curvature, but in even dimension n = 2k, we require k factors of the curvature to make the action dimensionless. Various techniques allow these quadratic theories to nonetheless reduce to general relativity [4,8,9,11]. The most studied case is that of Weyl (conformal) gravity, in which the difficulty of higher order field equations has been alternatively exploited and overcome. When only the metric is varied, the field equations are fourth order, and include solutions not found in general relativity [12,13,14]. Mannheim [15] attempts to use the additional scaling properties to explain galactic rotation curves (but also see [16]). Alternatively, it has been shown in [17] that by reformulating Weyl gravity as a gauge theory and varying all of the gauge fields, the additional field equations give the integrability conditions needed to reduce the order and exactly reproduce locally scale invariant general relativity.
There are two exceptions to these higher-order curvature requirements, which have actions linear in the curvature and can be formulated in any dimension. In the first of the two approaches, Dirac [3] builds on previous work with scalar-tensor theories [18,19,20,21], achieving a curvature-linear action by including a scalar field to help offset the scaling of the volume form. Dirac considers a Weyl geometry in which the curvature is coupled to the Weyl vector and a scalar field. Generalizing his action functional to n-dimensions, it takes the form S =ˆ κ 2 g αβ R αβ − β 2 κ 2( n−4 n−2 ) g αµ g βν Ω αβ Ω µν − 4 (n − 1) n − 2 g αβ D α κD β κ − λκ where κ is a scalar field of conformal weight w κ = − n−2 2 and Ω αβ = W α,β −W β,α is the dilatational curvature. The only occurrence of the Weyl vector is in the dilatational curvature, Ω αβ , which must vanish or be unmeasurably small to avoid unphysical size changes. Dirac interpreted Ω αβ as the electromagnetic field and the time dependence of the scalar field κ as a time-dependent gravitational constant. The electromagnetic interpretation is untenable, but solutions with Ω αβ = 0 or with other interpretations remain to be explored.
The second curvature-linear action arises when the volume element is like that of a phase space, since the "momentum" directions have opposite conformal weight from the "space" dimensions. Such a space arises naturally from the quotient of the conformal group by its Weyl subgroup (SO (p, q) and dilatations), with the most general curvature-linear action built from the SO (p, q) and dilatational curvatures [11] being Here, Ω a b is the curvature of the SO (p, q) gauge field, Ω is the dilatational curvature, and α, β and γ are dimensionless constants. The differential forms e a and f a are the gauge fields of translations and special conformal transformations, respectively. Together the latter give an orthonormal frame field on a 2n-dimensional manifold. Because of the symmetry maintained between the translations of the conformal group and the special conformal transformations, these spaces are called biconformal. Here we explore the large class of torsion-free biconformal spaces, showing that they reduce to general relativity on Lagrangian submanifolds. It is the consequences of the action, Eq.(1) that will occupy our present inquiry.
The biconformal gauging
The construction of a biconformal space begins with a flat, n-dimensional space with SO (p, q) invariant metric, p + q = n. Compactifying this space by adding an appropriate point or null cones at infinity (See Appendix A) allows us to define its conformal group, SO (p + 1, q + 1). The biconformal quotient [8,9] is then SO (p + 1, q + 1) /SO(p, q) × SO (1, 1), where SO (1, 1) transformations represent dilatations and the full subgroup W ≡ SO(p, q)×SO (1, 1) is the homogeneous Weyl group. The quotient gives rise to a principal fiber bundle with 2n dimensional base manifold and homogeneous Weyl fibers. The connection of this flat biconformal space is then generalized, giving rise to conformal Lie algebra-valued 2-form curvatures. These are required to be horizontal and the resulting Cartan equations integrable. The 2n-dimensional curved base manifolds are biconformal spaces while local SO (p, q) and dilatational invariance remain, together comprising the biconformal bundle.
Biconformal gravity is the gravity theory following from variation of the action, Eq.(1), with respect to each of the conformal gauge fields, together with the Cartan structure equations to define the curvatures in terms of the connection, and the generalized Bianchi identities arising as integrability conditions. The construction of these models is described in full detail in [22]. See also [8,9,11,23].
Relationship to double field theory
Biconformal spaces share many features in common with double field theories.
Double field theory is a means of making the O (d, d) symmetry of T -duality manifest. By introducing scalars to produce an additional d dimension, Duff [27] doubled the X(σ, τ ) string variables to make this O (d, d) symmetry manifest. Siegel brought the idea to full fruition by deriving results from superstring theory [24,25,26]. Allowing fields to depend on all 2d coordinates, Siegel introduced generalized Lie brackets, gauge transformations, covariant derivatives, and a section condition on the full doubled space, thereby introducing torsions and curvatures in addition to the manifest T-duality.
There has been substantial subsequent development. Much of this development is reviewed in [28]; the introduction to [29] gives a concise summary. Briefly, double field theory arises by making T -duality manifest in string theory. When we compactify n dimensions on a torus, the windings of string about the torus can be interpreted as momenta. T -duality is a mapping between the original spatial directions and these momenta. Double field theory arises when these two n-spaces are kept present simultaneously, making T -duality manifest and leading to an overall O (n, n) symmetry. The T -duality is identified with the Weyl group of O (n, n), consisting of permutations of the distinct circles of the maximum torus and interchange of phases.
Invariant tensors in double field theory and biconformal spaces
In double field theory, doubled coordinates are introduced, extending the spacetime coordinate x α by an equal number of momenta, where M, N, · · · = 1, · · · , 2n and α, β, · · · = 1, · · · , n are coordinate indices. There are at least two important invariant tensors identified in [29]. Defining the O (n, n) symmetry, there is the 2n × 2n quadratic form with A, B, · · · , L = 1, · · · , 2n and lower case Latin indices, a, b, · · · = 1, · · · , n orthonormal. The second invariant object is the spacetime/dual generalized metric, M ab , built from the spacetime metric and the Kalb-Ramond potential, which takes the orthonormal form M AB = η ab 0 0 η ab where η ab is either Euclidean or Lorentzian, depending on the model considered. These are only half the invariant structures in biconformal spaces, all of which arise from natural invariances of the conformal group. Again letting η ab be Euclidean or Lorentzian (or in our main development, any (p, q) metric), we make use of the Killing form of the conformal group of a compactified (p, q) space, where the upper left block is the norm on Lorentz or Euclidean transformations, the next n rows and columns arise from tranlations and the next n from special conformal transformations. The final 1 in the lower right gives the Killing norm on dilatations. Upper case Greek indices run over the dimension of the conformal group. When K Σ∆ is restricted to the biconformal manifold, we have only the translations and special conformal portion, and this is precisely the O (n, n) metric, Use of the Killing form as metric was first mentioned in [9] with explicit use in biconformal spaces in [30,31,22] where the orthonormal basis (e a , f b ) is taken to satisfy General linear changes of the original (e a , f b ) basis are allowed, These become O (n, n) transformations when they are required to preserve the inner product given in Eqs.
(2) - (4). These basis forms (χ a , ψ b ) are local, but may be defined globally when the structure equations and field equations provide an appropriate involution. Such alternative choices of basis have been explored in [31,22,32]. There are further objects, discussed in detail in [31] and more comprehensively in [22], where it is shown that there exists a Kähler structure on biconformal space. The complex structure arises from the symmetry of the conformal Lie algebra given by interchanging translation and special conformal transformation generators and changing the sign of the dilatation generator. This is essentially an inversion, and when carried through to its effect as a linear operation on the basis forms, may be written as Further, it has long been recognized that the Maurer-Cartan equation of dilatations and, generically, its Cartan generalization describe a symplectic form, The symplectic character is manifest since the left side shows the 2-form to be closed while the right shows it to be non-degenerate. As a matrix in this basis, the symplectic form is The two of these may be used to define a Kähler metric via which is exactly the M AB of double field theory. All of these objects arise from properties of the conformal group. We note that the Killing metric K AB is not the metric defined by the almost Kähler structure. The change of basis of Eq.(5) and Eq.(6) will be restricted further depending on which of these objects the change is required to preserve. For example, the time theorem of [31] requires invariance of the inner product, Eqs.(2) -(4), and preservation of the symplectic form, S AB , reducing the allowed change of basis to the form This is simply an instance of spontaneous symmetry breaking. Solutions typically do not preserve the full symmetry of a system of equations.
Connection and action
The principal differences between the usual treatment of double field theories and biconformal gravity lie in the connection, the action, and the means by which the doubled dimension is reduced back to an ndimensional spacetime.
There have been multiple proposals for a connection in dual field theory [29], including the Weitzenböch connection, While this is compatible with the double field theory structures, it leads to vanishing curvature and nonvanishing torsion. Even its generalization has vanishing curvature. Constructing an action becomes problematic. One proposal, given in [29], is an action on the full doubled space given by generalizes the dilaton Φ and L is given by There are also multiple proposals in double field theory for finding a condition that will reduce the full space back to an n-dimensional spacetime. One proposal [33], due to Scherk and Schwarz, proposes requiring the functional dependence of fields to be of the form with the scalar field additively separable, d (X, Y ) =d (X) + λ (Y ). Here the hatted indices,Â, refer to the gauged double field theory while unhatted A, B are associated to the double field theory before applying the Scherk-Schwarz reduction. Alternatively, Berman et al. [29] propose additive separability with the "section condition" η AB ∂ A ∂ B = 0 acting on all fields. Faced with these divergent approaches, the authors of [29] summarize a set of desirable properties for a connection on double field theory. We quote (replacing their notation with ours and numbering the points for convenient reference below): ". . . we might want it to 1. define a covariant derivative that maps generalised tensors into generalised tensors, 2. be compatible with the generalised metric M AB , 3. be compatible with the O (n, n) structure K AB , 4. be completely determined in terms of the physical fields, in particular the vielbein and its derivatives, 5. be torsion-free, 6. lead to a curvature that may be contracted with the metric to give the scalar which appears in the action." Their proposal satisfies conditions 1 − 4.
The situation is quite different in biconformal geometry because it has been developed first as a gravity theory, and all the relevant structures are present from the Cartan construction. In particular, the connection is automatically given by the SO (p, q) spin connection and the Weyl vector, and these are compatible with not only the generalized metric (the biconformal Kähler metric) M AB and the O (n, n) structure K AB present as the restricted conformal Killing form, but also the almost complex structure J A B and symplectic form S AB . This satisfies points 1, 2 and 3.
While an O (n) rather than an O (n, n) connection may seem restrictive, O (n, n) transformations of the orthonormal basis still retain the larger symmetry. Moreover, the spin connection and Weyl vector start as general 1-forms on a 2n-dimensional space, It is because the spin connection performs the same O (n) rotation simultaneously on each subspace that it is able to preserve the multiple structures. In fact, Eq.(7) displays far more generality than we ultimately want: we would like for all fields to be determined purely the the spacetime solder form, e c (x α ), and this will require reduction of both components (e.g., (ω a bc , ω a c b ) → ω a bc ) and of independent variables ((x α , y β ) → x α ). Accomplishing this will satisfy point 4 above, and this is the central accomplishment of the current presentation.
The only assumption we make, beyond the Cartan biconformal construction and the the field equations following from the action (1), is vanishing torsion. This is a natural constraint for a spacetime gravity theory, is consistent with existing measurements in general relativity, and satisfies point 5.
Point 6 is satisfied by the action, (1), which despite retaining scale invariance, is linear in the biconformal curvatures. Notice that the α term in Eq.(1) is completely analogous to the Einstein-Hilbert action written in similar language, i.e., S EH =´R ab ∧ e c ∧ . . . ∧ e d ε abc···d .
We therefore claim that the reduction presented here satisfies all six desired conditions.
Additional potential advantages
There are further potential advantages of biconformal models. The biconformal theory developed here also overlaps strongly with calculations in twistor space. Twistor space, in arbitrary dimension, is the space of spinors of the conformal group, up to projection by an overall factor. Witten showed in [34] (see also [35,36,37]) that when a topological string theory is formulated in twistor space it is equivalent to N = 4 supersymmetric Yang-Mills theory. Most notably, twistor string theory provided a string/gauge theory equivalence that allowed remarkably efficient calculations of scattering amplitudes for gauge theory, reducing months of supercomputer calculations to fewer than two dozen integrals. The effort slowed considerably when it was thought that it would necessarily lead to fourth order Weyl gravity instead of general relativity. While a few alternative formulations were found, Mason was led to conclude [37]: Clearly, more work is required to discover what other twistor-string theories can be constructed. In particular, one would like to have twistor-string theories that give rise to Poincaré supergravities, or to pure super-Yang-Mills, or that incorporate other representations of the gauge and Lorentz groups.
Biconformal gravity might be an ideal ground state for twistor string since it arises from conformal symmetry, maintains scale invariance, and reduces to general relativity. It is therefore of interest to formulate twistor string theory in a biconformal space. These will naturally use the spinor representation, as in the supergravity extension of biconformal space [38].
Biconformal spaces also seem well suited to string compactification. In the present work, we show that these 2n-dimensional gravity theories reduce via their field equations to n-dimensional general relativity. As a result, a string theory written in a 10-dimensional biconformal background will require only two dimensional compactification to describe 4-dimensional general relativity. There are only a countable number of 2dimensional topologies, compared to the truly huge number of 6-dimensionsal compact spaces available when going from 10 directly to 4-dimensions.
The situation is even more restrictive than all 2-dimensional compact spaces because the compactification is required to go between two biconformal spaces. It therefore must include one basis direction of each conformal weight. This necessarily restricts the compactification to a 2-torus or possibly a 2-sphere.
Organization
The organization of the paper begins with the basic equations of biconformal gravity, including some notational conventions and ending with the field equations. In Section 3, we show the effects of vanishing torsion on the remaining curvatures, using the Bianchi identites and field equations to reduce the number of components. From the effect of vanishing torsion on the curvtures, it is immediate to see by symmetry that if the co-torsion were also to be set to zero, the additional constraints would force the solution to be trivial. We begin solving for the connection in Section 4 by making use of the Frobenius theorem on the involution of the solder form. This clarifies the meaning of the doubled dimension, showing that the biconformal space is foliated by an n-dimensional Lie group. This foliation may be interpreted as the translation group of the co-tangent bundle, the torus of double field theory, or as a new, nonabelian internal symmetry.
In Section ??, we extend the partial solution for the connection from the involution back to the full biconformal space and substitute into each structure equation to find the resulting form of the curvatures, then use these results to reduce the field equations. From this point on, the solution divides into two cases depending on whether the Lie group of the foliation is abelian or non-abelian. Each of these cases merits a Section (6,??). Finally, we summarize our results in Section 8.
The field equations of biconformal gravity
The first construction of the biconformal quotient was carried out by Ivanov and Niederle [8], who used it to describe a gravity theory using a curvature-quadratic action. Subsequently, the geometry was revived [9] and a curvature-linear action was introduced [11] to give biconformal gravity. The details of the construction are given in [22], along with a demonstration of the signature-changing properties derived in [31]. Here, we rely on the specifics given in [22], providing only a basic description and introducing some convenient nomenclature, then moving quickly to the Cartan structure equations, Bianchi identities, and the linear action.
From the action, we find the field equations and study their consequences with only the assumption of vanishing torsion. Throughout, we work in arbitrary dimension with arbitrary signature for the conformal metric class.
Building the structure equations
Consider a space of dimension n = p + q, with an SO (p, q)-symmetric orthonormal metric η. We compactify with appropriate null cones at infinity, to permit the inversions that give the space a well-defined conformal symmetry, C = SO (p + 1, q + 1). The homogeneous Weyl subgroup W = SO (p, q) × SO (1, 1) ⊂ C consists of the pseudo-rotations and dilatations. The quotient C/W is a 2n-dimensional homogeneous manifold from which we immediately have a principal fiber bundle with fiber symmetry W. We take the local structure of this bundle as a model for a curved space à la Cartan, modifying the manifold (if desired) and altering the connection subject to two conditions: 1. The resulting curvature 2-forms must be horizontal.
2. The resulting Cartan structure equations satisfy their integrability conditions (generalized Bianchi identities).
Let the connection forms dual to the generators of the Lie algebra be written as ω a b (SO (p, q) transformations), e a (translations), f a (special conformal transformations, called co-translations in the context of these biconformal geometries), and ω (dilatations). Then the Cartan structure equations are: Horizontality requires the curvature to be expanded in the (e a , f b ) basis, giving each of the components (Ω a b , T a , S a , Ω) the general form 2 and integrability follows from the Poincaré lemma, d 2 ≡ 0. The (n−1)(n+2) 2 curvature components (Ω a b , T a , S a , Ω) together comprise a single conformal curvature tensor. However, the local symmetries of the homogeneous Weyl symmetry of the biconformal bundle do not mix these four separate parts. Thereofore, we call the SO (p, q) part of the full conformal curvature Ω a b the curvature, the translational part of the curvature T a the torsion, the special conformal part of the curvature the co-torsion, S a , and the dilatational portion Ω the dilatational curvature or simply the dilatation.
Each of the curvatures each has three distinguishable parts, as seen in Eq. (13). We call the e a ∧ e b term the spacetime term, the f a ∧ e b term the cross term, and the f a ∧ f b term the momentum term. While it may be somewhat abusive to call a signature (p, q) space "spacetime", for the gravitational applications we consider the name is ultimately appropriate. In the cases where the co-solder forms generate a nonabelian Lie group, the name "momentum" is not appropriate, and we will speak of the relevant group manifold.
To avoid introducing too many symbols, the symbols for the three parts of curvatures are distinguished purely by index position. Thus, Ω a c b d denotes the cross-term of the SO (p, q) curvature and Ω a bcd the spacetime term of the SO (p, q) curvature. These are independent functions. We therefore do not raise or lower indices unless, on some submanifold, there is no chance for ambiguity. Note also that the raised and lowered index positions indicate the conformal weights, +1 and −1 respectively, of all definite weight objects. Therefore, the torsion cross-term T ab c has net conformal weight +1, the spacetime term of the co-torsion S abc has conformal weight −3, and the full torsion 2-form T a has conformal weight +1.
Note the similarity between Eqs.(10) and (11). This occurs because, by taking the quotient of the conformal group by its homogeneous Weyl subgroup instead of the more common inhomogeneous Weyl group, symmetry is maintained between the translations and the special conformal transformations. Indeed, in their action on the defining compactified (p, q) space, the special conformal transformations are simply translations in inverse coordinates, y µ = xµ x 2 . As a result, they behave near infinity exactly as translations do at the origin; correspondingly, the effect of a simple translation expressed in inverse coordinates is the same as that of a special conformal transformation at the origin. In the biconformal space, the resulting gauge field of translations, e a , and the gauge field of special conformal transformations, f a , form a cotangent basis. Each locally spans an n-dimensional subspace of the full biconformal cotangent space, which we ultimately show to be submanifolds. In parallel to calling e a the solder form, we call f a the co-solder form. Similarly, just as the field strength of the solder form is called the torsion, T a , we refer to the field strength of the co-solder form as the co-torsion, S a .
Bianchi identities
The generalized Bianchi identities are the integrability conditions for the Cartan equations. They are found by applying the Poincaré lemma, d 2 ≡ 0, to each structure equation, then using the structure equations again to eliminate all but curvature terms. They always give covariant expressions -we are guaranteed that all purely connection terms must cancel because when all curvatures vanish the Cartan equations reduce to the Maurer-Cartan equations, for which the integrability conditions are the Jacobi identities, and therefore are automatically satisfied.
Knowing that all connection terms must cancel when we replace exterior derivatives with the corresponding curvatures makes it easier to derive the identities. Furthermore, every exterior derivative of a curvature becomes a covariant derivative. Using this knowledge, we may quickly find the identities. Thus, for the SO (p, q) curvature, we take the exterior derivative of Eq.(9), Proceeding through Eqs.(9) -(12), we find the full set of integrability conditions, where the covariant derivatives are given by Since each Bianchi identity contains the covariant derivative of a curvature, it is typically difficult to use them to help find solutions to the field equations. They are simply the conditions on the curvatures that guarantee that a solution exists, and if we find a solution to the field equations, the Bianchi identities are necessarily satisfied. However, if one of the curvatures vanishes the relations become algebraic and can be extremely helpful.
Notational conventions
The metric is η ab with pseudo-rotational invariance under SO (p, q), p + q = n. Lower case Latin indices run a, b . . . = 1, 2 . . . , n, and refer to orthonormal frames (e a , f a ). When coordinates are introduced they are given Greek indices. Thus, we may write e a = e a µ dx µ + e µa dy µ Until we have established appropriate submanifolds, we cannot use the components of the solder form, e a µ , to change basis.
An antisymmetric projection operator on type 0 2 tensors may be written as If we raise the f index and lower e, this becomes the antisymmetric projection operator on type 1 1 tensors. This symbol occurs frequently.
The volume form
The volume form is unusual, having two types of index. Since we can distinguish the conformal weight +1 solder forms e a from the conformal weight −1 co-solder forms, f a , we can always partially re-arrange. Thus, while the 2n-dim volume form may be written as represents an index that contracts with ω a and a · represents an index that contracts with ω a , we can always insist that the weight +1 indices go first and the weight −1 go last, This convention means that a contraction, e ab...c ae...f is meaningful, when it would vanish immediately with the full antisymmetrization. This is, nonetheless, correct since there exists an unambiguous local separation by conformal weight, each with its own induced volume form. Since variation of the action is local, we may use this to find the field equations. A single contraction gives e ab...c ae...f = δ b...c e...f and in general, contracting all but k pairs of indices, The presence of this conformal separation also allows the dilatational curvature to be included as the β term in the action. It may be argued that this is not allowed if the subspaces are not integrable. We find that the subspaces are integrable, but have checked that setting β = 0 throughout does not alter any of our conclusions.
There exist conditions that guarantee that such a splitting into subspaces is integrable across the full biconformal space. For example, the e a subspace is certainly integrable to a submanifold if the basis structure equation, is in involution, and this is true if the torsion T a is suitably restricted. Specifically, if the momentum term of the torsion vanishes, T acd = 0, then the Eq.(13) for the torsion reduces to and e a is in involution. Similarly, the co-solder equation will be involute provided S acd = 0.
We make no assumptions about the torsion or co-torsion in deriving the field equations. Though neither occurs explicitly in the curvature-linear action, integrations by parts after variation nonetheless introduce them into the field equations.
We define the Hodge dual of unity as a convenient volume form, It follows that Eq. (22) is useful for finding the field equations. Taking a second dual, * Φ ≡ * * 1 regardless of the dimension or signature.
The action functional
Eq.(1) is the most general action linear in biconformal curvatures. It is defined on the 2n-dimensional base manifold of the bundle, spanned by (e a , f a ). The initial conformally symmetric space has metric η ab of any dimension n > 2 and any signature (p, q). We find the field equations by varying the full set of connection 1-forms, {ω a b , e a , f a , ω}. Each variation has two parts when we expand in the (e a , f b ) basis, for example, with δA a bc and δB a c b independent, arbitrary variations. We therefore find eight sets of field equations. To illustrate details of the variation technique, the variation of the spin connection ω a b is given in Appendix B. Carrying out each of the connection variations, we arrive at the final field equations: where the constant Λ is defined to be Λ ≡ (n − 1) α − β + n 2 γ . Ultimately, all our results depend on a single parameter, χ = 1
Biconformal spaces
The system we wish to study consists of the structure equations Eqs.(9)-(12), their associated Bianchi identities Eqs. (14)- (17), and the field equations Eqs.(23)- (30). These have been written above with no additional conditions, and they apply to the biconformal geometry constructed from the conformal group in any dimension n and any signature (p, q).
Our goal is to show how the full set of biconformal curvatures in 2n-dimensions reduces to only those required to describe n-dimensional general relativity. Assuming only vanishing torsion and the field equations, we show in the next Section that the curvatures, each initially in the general form given in Eq. (13), reduce to (4) and (??), we use the structure equations to reduce the coordinate dependence to x α only, with the exception of a few explicit terms linear in y α . There is also further reduction of the curvatures.
Reducing the curvatures of torsion-free biconformal spaces
We seek to reduce the field equations as far as possible. In particular, we will show that scale-invariant general relativity emerges from the vanishing torsion field equations. As in Riemannian geometry, vanishing torsion is a natural constraint on the full generality of a biconformal space. This has three definite consequences corresponding to the three parts in the expansion given by Eq. (13), We expect the first term in this expansion to give the spacetime torsion, which is zero in general relativity. The cross term, T ac d f c ∧e d , gives the extrinsic curvature of the spacetime submanifold in the full space, while the final term measures the non-involution -the degree to which the solder form fails to be in involution [23]. Taking the full torsion to vanish therefore has clear geometric consequences: it guarantees the existence of a spacetime submanifold with vanishing spacetime torsion, embeded with no extrinsic curvature in the larger biconformal space.
Note that it is important that we do not constrain the co-torsion. Indeed, we show at the end of this Section that setting both torsion and co-torsion to zero is overly restrictive, forcing the full space to have at most constant curvature and dilatation. Naturally, setting the torsion but not the co-torsion to zero breaks some of the symmetry between the solder form and the co-solder form. It would be equivalent to break the symmetry the other way, setting the co-torsion to zero and not the torsion.
We begin with the consequences of vanishing torsion in the Bianchi identities.
Consequences of the Bianchi identities
If the torsion vanishes, T a = 0, then the second Bianchi identity, eq.(15), becomes an algebraic condition on the curvature and dilatation: Expanding each of the curvatures in components, This breaks into three independent equations, with components related by Note in particular that since Ω a cd b is antisymmetric on ab, the trace gives Ω a cd a = 0, and the final condition requires and therefore, the momentum space component of the SO (p, q) curvature vanishes, The cross-term Bianchi may be used to express the cross curvature in terms of the cross dilatation. Expanding the antisymmetry in Eq. (32), we formally lower the a index to e, and cycle the b, e, d indices. Then adding the first two permutations and subtracting the third, we get , Vanishing torsion also affects the remaining Bianchi identities, but the effects are most pronounced when those are combined with the field equations. Therefore, we turn next to the simplification of the field equations.
Simplifications of the torsion and co-torsion equations
Of the torsion and co-torsion equations, the first two relate various traces. Equations (23) and (24) identify two relationships between these traces. When the torsion vanishes, these become Using these in the next pair, Eqs. (25) is now identically satisfied while (26) determines the antisymmetric part of the cross-terms of the co-torsion, in terms of its trace, The rc trace fixes the remaining possible contraction, There is only one independent contraction of the cross-term, and it determines the antisymmetric part of the full cross-term.
Simplifications of the curvature and dilatation equations
Now consider the remaining four equations for the curvature and dilatation, Eqs.
where we define The cross-term of the cuvature is now given by Eq. (36), Next, we examine the remaining vanishing torsion Bianchi identity, Eq. (31). Expanding the antisymmetry and taking the ad trace, Combining this with the field equation, Eq.(28), for the corresponding components, Ω a bac = − β α Ω bc , we have ((n − 2) α − 2β) Ω bc = 0 so the spacetime dilatation generically vanishes. The field equation then implies The special case when ((n − 2) α − 2β) = 0 allows a non-integrable Weyl geometry and, likely being unphysical, will not concern us further. Because of the constant form of the components of the dilatation, Eq.(41), the dilatation Bianchi identity gives constraints on the co-torsion. Starting with Eq.(17) with vanishing torsion and the complete dilatation now given by Ω = χe a f a , Eq. (17) gives For generic constants in the action we may cancel the 1 + χ factor, but the χ = −1 case permits the presence of a non-abelian internal symmetry.
A theorem: Vanishing torsion and co-torsion
We digress briefly to prove a useful result. From our results so far, we can easily prove the following theorem. We start with the definition of a flat and trivial biconformal space. Because of the "cosmological constant" term Λ in Eqs. (27) and (29), we cannot, in general, set all curvatures to zero unless Λ = 0 as well. We therefore define a flat biconformal space [9] to have vanishing curvatures and Λ = 0, and a trivial biconformal space to have vanishing curvatures except for constant curvature and dilatation cross-terms, which then have the Λ-dependent forms given in Eqs. (41) and (42). That these constant values of the curvatures yield solutions to the field equations follows as a special case of the generic torsion free solution below.
Triviality Theorem : Biconformal spaces in which both the torsion and the co-torsion vanish are trivial biconformal spaces.
Proof: With vanishing torsion, we have already seen that the momentum curvature and dilatation vanish. By the symmetry of biconformal spaces, zero co-torsion requires the spacetime curvature and dilatation to vanish as well. Since, by assumption we have both T a = 0 and S a = 0, the only nonvanishing curvature components are the dilatation and curvature cross-terms, shown above to necessarily have the forms given in Eqs. (41) and (42), There are interesting properties to trivial biconformal spaces. These homogeneous manifolds have been shown to be Kähler [22], and allow time to emerge as part of the solution from the properties of the underlying conformal group [31,22]. Still, there can be no spacetime or momentum space curvature if both the torsion and the co-torsion vanish completely, and therefore no local gravity. To achieve a meaningful gravity theory it is necessary that at least part of either the torsion or the co-torsion remains nonzero.
Summary of curvatures and remaining field equations
Initially, the four curvatures ("curvature", torsion, co-torsion, and dilatation) have the three independent terms displayed in eq. (13). Using the assumption of vanishing torsion, we have now reduced these to together with the remaining field equations and remaining Bianchi conditions, Even when 1 + χ = 0, the equations involving the co-torsion cross-term do not determine the co-torsion further; we must turn to the structure equations to proceed.
While the severe restrictions evident in Eqs. (46) reduce the space considerably toward an n-dimensional theory, the remaining fields are still functions of all 2n coordinates. It is only by using the structure equations that we fully reduce the theory to n-dimensional scale-covariant general relativity.
The meaning of the doubled dimension
With T a = 0, the torsion Eq.(10) is in involution. This lets us first solve the structure equations on a submanifold and results in a substantial restriction of the connection forms. Extending back to the full space, we then work through the full structure equations to determine the final form of each connection form.
The involution
The involution of the solder form, allows us to apply the Frobenius theorem, which tells us that there exist n functions on the manifold, x µ , such that e a = e a µ dx µ Furthermore, holding those functions constant, x µ = x µ 0 , so that dx µ = 0 and e a = 0, the remaining structure equations describe submanifolds of a foliation of the full space. These remaining equations are where the tilde indicates the restriction to vanishing solder form, e.g., We will also examine the restriction of the integrability (i.e., the Bianchi identity) of the co-solder equation, When e a = 0, the curvature, co-torsion, and dilatation simplify toΩ In the previous section we showed that these components of the curvature and dilatation, Ω a cd b and Ω cd , vanish. Therefore, the structure equations and basis integrability reduce to Let this submanifold be spanned by coordinates y µ . The first two equations show that the spin connection, ω a b , and Weyl vector,ω, are pure gauge on the submanifold, where at each x µ 0 we are free to choose a local SO (p, q) transformation Λ a c (y) and a local dilatation φ (y). This allows us to gauge bothω a b (x 0 , y) andω (x 0 , y) to zero if desired. It proves convenient to rename the restriction of the basis, h a ≡f a and the restriction of the spin connection as ξ a b ≡ω a b (x 0 , y), while gauging the Weyl vector to zero. The basis h a must span the co-tangent space to the submanifold, so it must be nondegenerate. The submanifold is then described by To continue, we examine a manifold with these conditions, Eqs. (49)(50)(51). Notice that Eqs.(49)-(51) describe a differentiable manifold with flat connection for which the momentum part of the co-torsion 1 2 S cd a h c ∧ h d is the torsion of the submanifold. This submanifold torsion is constrained by Eq. (45), The importance of these properties will be seen in this and the following Sections.
Foliation by a Lie group
We quote a well-known theorem due to Auslander and Markus [49]: THEOREM 5. Let M be a differentiable manifold with complete, flat, affine connection Γ and holonomy group H(M; Γ) = 0. Then M is a complete Riemann space with Christoffel connection Γ and M is diferentiably isometric with a torus space.
F. W. Kamber and Ph. Tondeur generalize this theorem [50], introducing their proof with the following: Consider a linear connection on a smooth manifold. The connection is flat, if the curvature tensor R is zero. If the torsion tensor T has vanishing covariant derivative, the torsion is said to be parallel. A linear connection is complete, if every geodesic can be defined for any real value of the affine parameter. In this note the following structure theorem for smooth manifolds admitting a complete flat connection with parallel torsion is proved: Any such manifold is the orbit space of a simply connected Lie group G under a properly discontinuous and fixed-point free action of a subgroup of the affine group of G. This Theorem includes the classical cases of flat Riemannian manifolds and flat affine manifolds (Auslander and Markus), where the torsion is assumed to be zero and G turns out to be R n , and also generalizes a theorem of Hicks [Theorem 6] for complete connections with trivial holonomy group and parallel torsion tensor, stating that a manifold with such a connection is homogeneous. We consider the case where the curvature vanishes, without requiring the holonomy group to be trivial.
As noted above, our equations, Eqs. (49)(50)(51), exactly describe a manifold with flat connection ξ a b but with torsion satisfying only DS a = 0. This torsion conditions is weaker than those of the theorems above. Moreover, the additional conditions may or may not hold. Spacetime, and the more general SO (p, q) spaces we consider may be pseudo-Riemannian rather than Riemannian. Further, we know that spacetimes are generically incomplete [46,47,48] and that our physical spacetime contains black hole singularities and initial time incompleteness; the corresponding properties of the momentum subspace depend on the manifold chosen during the quotient construction. Finally, with our general considerations we cannot be certain of the remaining specifications regarding holonomy present in both theorems. Therefore, we do not attempt to apply Auslander-Markus or Kamber-Tondeur theorems, but derive our results directly, making our assumptions explicit.
We consider the two possible solutions to Eq.(52): Case 1: S bc a = 0. In Sec.(6) below, we show that with no further assumptions, generic biconformal spaces (i.e., those with χ = −1) are foliated by an abelian Lie group. They therefore describe either the co-tangent bundle or torus space foliations over SO (p, q) spaces. Generically, therefore, the conclusion of Theorem 5 of Auslander and Markus holds for the momentum submanifolds of biconformal space.
Case 2: χ = −1. In Sec. (7) below, we show that the subclass of biconformal spaces with 1 + χ = 0 allows the possibility of foliation by a nonabelian Lie group. The result is consistent with the claim of Kamber and Tondeur. To make further progress, we too assume vanishing covariant derivative of the torsion rather than vanishing covariant exterior derivative.
In the remainder of this Section and in Sec. (5), we show results that hold for either Case 1 or Case 2 by assuming S bc a constant and placing no condition on χ. This is sufficient to show foliation by a Lie group; we leave detailed topological discussion to subsequent studies. In Sec. (5) we extend back to the full biconformal space, substituting the form of the connection into the structure equations to continue the reduction of the system toward general relativity. The program is completed in two different ways for Case 1 and Case 2, in Sections (6) and (7) respectively.
Co-torsion Bianchi
We have seen that the vanishing torsion, T a = 0, combined with the dilatation Bianchi identity gives Eqs. (52). For the remainder of this Section, we will place a weaker constraint on the momentum co-torsion and χ consistent with both Cases above. Thus, the conclusions of this Section for the discussions of both Sec. (6) and Sec. (7).
The integrability condition for the submanifold co-torsion, Eq. (51) is so the covariant exterior y-derivative of 1 2 S cd a h c ∧ h d vanishes. Choosing the y α -dependent part of the gauge so that the submanifold spin connection and Weyl vector vanish, the covariant derivative reduces to a partial derivative, 0 =dS a and therefore for some 1-form, ξ aS a =d ξ a In coordinate components, However, instead of such a general potentialξ a , we assume This is one of the assumptions of the Kamber-Tondeur Theorem.
With the momentum co-torsion independent of y µ the structure equation on the e a = 0 submanifolds becomes In terms of the basis h a the integrability condition for eq.(55) is ≡ −S bc a , and observe that the pair form the Maurer-Cartan equations and the Jacobi identity (in the adjoint representation) for an n-dimensional Lie algebra. The field equation for the momentum co-torsion, Eq. (37) shows that the adjoint generators are traceless, so when the adjoint representation is faithful the Lie group elements will have unit determinant. With the observation that h a has an n-dimensional SO (p, q) or Spin (p, q) index (depending on which representation we have chosen for the beginning group), we have therefore proved the following theorem: Theorem: In any 2n-dimensional, torsion-free biconformal spaces with ∂ µ S αβ a = 0, there exists an ndimensional foliation by a Lie group. If the adjoint representation is faithful, the group is special.
We conjecture that the theorem holds for all torsion free biconformal spaces. This is one of our most important new results, giving a definitive interpretation to the doubled dimension of biconformal spaces.
Introducing vector fields G a dual to the one forms h a , we have and Let G be the Lie group generated by the G a . Then the y-submanifold at each x 0 is the group manifold.
Since h a transforms as a vector under SO (p, q), there may be constraints between SO (p, q) and G. Acting with Λ a b ∈ SO (p, q) on the structure equation of h a , invariance of the structure equation requires so the structure constants must transform as a 2 1 tensor, consistent with S a being a tensor. Since SO (p, q) acts on itself, any subgroup of SO (p, q) will be allowed, but it is clear that there are additional possibilities. For example, the vanishing structure constants of an abelian group will be preserved, as will partly abelian combinations. We develop a concrete example. Starting with a 3-dim representation of SO (3), we require a 3-dimensional Lie group with structure constants that transform as a tensor under SO (3). Consider ISO (2), the two translations and one rotation of the plane. The Lie algebra is where we may think of R as the generator of rotations about the z-axis and T k as translations in the xy-plane.
Then the structure constants may be written in terms of the unit vectors as which now manifestly transforms as a 1 2 tensor under rotations.
For n = 4, the electroweak group, SU (2) × U (1), naturally springs to mind. This, and other particular cases will be explored explicitly in subsequent work. If the group G is abelian, the structure constants are zero and the momentum co-torsion vanishes. In this case, dh a = 0 and we have h a = f µ a (x) dy µ where the y-dependence of the coefficients must now vanish, though in this case it is useful to allow the dependence on x µ . This describes an exact, orthonormal frame and therefore a flat space. Since we evaluate at fixed x α = x α 0 , the coefficients f µ a (x 0 ) are constants on the submanifold, but may be functions of x µ when we extend back to the full biconformal space. This is equivalent to the abelian Lie algebra of n translations, and the G-foliation of the biconformal space may be identified as the co-tangent bundle of the remaining SO (p, q) space. Alternatively, the abelian group may be taken as a compactification on a torus.
Parameterization of the group elements as coordinates
The integral of the structure equations, gives the group manifold, which is most easily coordinatized by the group elements. We may find these by exponentiating the Lie algebra, V = {y a G a |y a ∈ R n } where G a satisfy the Lie algebra relations in Eqs. (58) and (59). The group elements may be parameterized by coordinates y a , by exponentiating the Lie algebra, g (y) = e yaG a ∈ G. The basis forms, h a may be explicitly turned into Lie algebra valued one forms using any desired linear representation of the generators, . For example, using the adjoint representation we contract a copy of the structure constants with the Maurer-Cartan equations, eq.(56), and define Then, using the Jacobi identity, This is the structure equation of a connection on a flat manifold, so we may write ξ a b as a pure gauge connection, . We check that this solves the structure equation, as required. We note that while this construction gives an explicit form for h a , this is not the usual connection, , which gives constant curvature [51].
Returning to the full space
We have established a geometric breakdown of the 2n-dim biconformal space into an n-dimensional foliation with a Lie group for leaves. However, the connection forms still retain their dependence on the full set of coordinates (x α , y β ). In this Section, and for the generic χ = −1 case in the next Section, we show that the structure equations further restrict this dependence so that except for certain explicit linear y α dependence, all fields depend only on the x α , up to coordinate choices and gauge transformations. To this end, we turn to the full space and the Cartan structure equations. When we restore the solder form, letting x µ vary again, the connection forms must be given by their e a = 0 parts, Eqs. (49,56, and the vanishing Weyl vector) plus additional parts proportional to the solder form. Therefore, Eqs. (60) -(63) hold as long as we perform only x-dependent fiber transformations. While this form is convenient for recognizing the content of the geometry, the biconformal space is unchanged by more general transformations. General (x α , y β )-dependent transformations on biconformal space act similarly to canonical transformations on phase spaces. They do not change the underlying physics. We substitute these forms into the structure equations, with the reduced curvatures as given in Eq. (46), where χ ≡ 1 n−1 1 (n−1)α−β Λ and (1 + χ) factors appear where we have combined the dilatation and curvature cross terms with matching pieces of the connection.
The basis structure equations
The sole mixed term must vanish, dy µ ∧ ∂ µ e a = 0, and this requires the solder form to be independent of y α . Therefore, e a µ (x, y) = e a µ (x) (68)
Solving the solder form equation for the spin connection
The next step is to solve the solder form equation for the spin connection. In the remaining e a ∧e b part of the reduced solder form equation, Eq.(61) we may separate the connection into the familiar metric compatible piece, and a Weyl vector piece. Let α a b be chosen as the e a -compatible connection, so that We note that as a consequence of Eqs. (68) and (69), α a b = α a b (x). Then writing ω a b = α a b + β a b with antisymmetry on each piece, α a b = −η ac η bd α d c and β a b = −η ac η bd β d c , we must have Since the solution is unique up to local Weyl transformations, we need only find an expression that works.
Using the antisymmetric 1 1 projection operator ∆ ac bd to impose antisymmetry, and requiring linearity in the Weyl vector and the solder form, we guess that and check as required. Therefore, the spin connection is where α a b is the connection compatible with e a . Any y-dependence must come from the Weyl vector.
Coordinate form of the connection
We can also find the coordinate form of the connection. Starting from the solder form equation we expand, As the antisymmetric part of the coefficient expression in parentheses must vanish, it must equal a symmetric object, i.e., Σ a νµ ≡ ∂ µ e a ν + e b ν ω a bµ − W µ e a ν where Σ a νµ = Σ a µν . Writing Σ a νµ = e a α Σ α νµ the equation takes the form of a vanishing covariant derivative, We easily check that Σ α νµ is indeed the expected connection. First, contract with η ab e b β , Now symmetrize on βν and use g αβ = η ab e a α e b β , This is precisely the conformal metric compatibility of g νβ with Σ α βµ in a Weyl geometry. Solving for the connection by the usual cyclic permution of µνβ, adding the first two permutations and subtracting the third, we recover the explicit form of the compatible connection of a Weyl geometry [39]: where Γ νβµ is the Christoffel connection. This connection is compatible with the conformal class of metrics, g αβ e 2φ all φ (x, y) .
The covariant derivative of the solder form
The Weyl covariant derivative is compatible with the component matrix of the solder form, Knowing the coordinate form of the covariant derivative lets us compute the covariant derivative of y a . Notice that multiplying by e µ a changes the conformal weight.
This will be of use shortly.
Curvature equation
We next study the curvature equation, Eq.(64). We begin with its integrability condition, which places strong constraints on the co-torsion. Then, substituting the connection forms from Eqs.(60)-(62), we impose the curvature field equation, Ω c acb = 0.
Curvature Bianchi
Expanding the curvature Bianchi identity, Eq. (14), and substititing the reduced curvatures, it becomes To find the independent parts, we must break the exterior derivative into d (x) and d (y) . This also requires separating the independent parts of the co-solder form, Taking the af trace and using the field equation, Ω a bac = 0, A further contraction with η bc shows that (1 + χ) (n − 1) η ad S e d a = 0 and therefore, 0 = (1 + χ) S a b c . Substitutinng this back into Eq.(76) shows that the spacetime curvature is independent of y α , ∂ α Ω a bcd = 0.
The ad trace of Eq.(82) requires 0 = (n − 1) (∂ µ W b + (1 + χ) h µ b ) which solves the full cross-term equation. This same condition is also required by the dilatation equation below. For the e c e d terms, notice that we may still have some y µ dependence.
The spacetime equation
It is convenient to define the curvature 2-form of the connection compatible with the solder form, This is the Riemann curvature built from α a b (x), not the full scale-invariant curvature of the biconformal space. Writing the spin connection as 2 Ω a bcd e c ∧ e d Solving for 1 2 Ω a bcd e c ∧ e d , using Eq.(83), and recognizing the α-covariant derivative of β a b as Recalling that D (α,x) e a = 0 the covariant exterior derivative of β a b becomes The β c b ∧ β a c term may be simplified considerably. With Therefore, It is convenient to define the curvature 2-form of the full spin connection as well, where ω a b = α a b − 2∆ ac db W c e d . This may be recognized as the curvature tensor of an n-dim Weyl geometry [39].
We also identify the Schouten tensor In any dimension greater than two, knowing the Schouten tensor is equivalent to knowing the Ricci tensor, since we may always invert, R ab = (n − 2) R ab + η ab R. In terms of the Schouten tensor, the decomposition of the Riemann curvature into the traceless Weyl conformal tensor, C a b and its Ricci parts, takes the simple form [39], Using this decomposition, the Ricci parts of the curvature combine with the additional terms from the scale covariance, To impose the field equation, set P ec ≡ R ec + D Since C c bcd = 0, this has the component form, for any P ab , 0 = ∆ ce db P ec − ∆ ce cb P ed , which, expanding the projections and combining result with the further contraction with η bd , is seen to be true if and only if P ab = 0.
Applying this general result to the field equation by replacing P ec , we have This determines c ab unless 1 + χ = 0. The symmetric part of the first four terms on the left side is the Weyl-Schouten tensor, and we see that there is an antisymmetric part to the trace of the Riemann-Weyl tensor, This agrees with the trace of the corresponding term of the torsion-free Bianchi identity arising from Eq.(65), and shows that c ab may have both symmetric and antisymmetric parts.
Returning to the full spacetime curvature after satisfying the field equation, so the spacetime piece of the biconformal curvature reduces to the Weyl (conformal) curvature of the metric compatible connection. This part of the curvature is independent of y µ as required by Eq.(80), since the only y µ -dependence of the connection must arise from the Weyl vector, and as seen in Eq.(87) the Weyl vector is only present in the trace terms of the curvature. The full SO (p, q) curvature may now be written as with h a = h α a dy α . The second expression holds only when 1 + χ = 0.
Form of the spacetime Bianchi identity
When we combine this solution for Ω a b with the spacetime part of the curvature Bianchi identity, we have Eq.(77) where we have expanded ω a c = α a c + β a c . But C a b is the usual traceless part of the Riemann curvature, which satisfies x) R e ∧ e d and we may rewrite the covariant exterior derivative of the Weyl curvature in terms of the derivative of the Schouten tensor. Making this replacement and setting β a c = −2∆ ae dc W e e d , the Bianchi identity may be written as where we set S (ee) b ≡ 1 2 S emn e m ∧ e n . Now we expand the ∆-projections, distribute and re-collect terms, then use the first Riemannian Bianchi C db ∧ e d = 0, to show that Expanding both the ∆-projection and the triple antisymmetrization, we show that for all n > 3, Eq.(90) holds if and only if Restoring two basis forms, we may write this as which solves the full Bianchi relation, Eq.(90). From Eq.(91) it follows that: Theorem: In any torsion-free biconformal space with integrable Weyl vector, W α = ∂ α φ, and 1 + χ = 0, the spacetime co-torsion is the obstruction to conformal Ricci flatness.
Complete details of the algebra leading to Eq.(90) are given in Appendix C.
The dilatation and co-torsion structure equations
Expanding the dilatation equation, Eq.(67), using Eqs.(61)-(63), to display the independent parts where we have set c ≡ e c ∧ c c . The Bianchi identity reduces to 0 ≡ d 2 ω = (1 + χ) d (e a ∧ f a ) = (1 + χ) D (e a ∧ f a ) = − (1 + χ) e a ∧ S a , which reproduces Eqs. (78) and (79) and shows that The dilatation structure equations may be integrated exactly, but the result depends crucially on whether or not (1 + χ) = 0. The two cases will be handled separately in the next two Sections.
The co-torsion structure equation also depends on the case considered. In addition to the structure equations (66), we still have the field equations and constraints from the curvature and dilatation Bianchi identities, To complete the reduction of the biconformal space, we turn to the 1 + χ = 0 and 1 + χ = 0 cases.
6 Generic case: 1 + χ = 0 In this Section, we consider the final reduction to spacetime for generic values of the constants α, β, γ in the original action, assuming 1 + χ = 0 (96) It follows that we must have S bc a = 0 and therefore on the e a = 0 submanifold, where the functions y a may be written as coordinates with an x 0 -dependent linear transformation, ) When we return to the full biconformal space, the linear coefficients h µ a (x) and β µ (x) remain as arbitrary coordinate choices. The full co-solder form, Eq.(62), then satisfies Thus, the coordinate choice of the origin for y µ at each x α changes c ab . We continue to define the co-basis h a as only the dy µ part, The momentum space is therefore foliated by abelian group manifolds. The foliation may be identified as R n or a toroidal compactification of all or part of R n . Being principally interested in the underlying presence of general relativity, we take it to be R n and eventually identify it with the cotangent space at each x α . However, it may also be taken as the torus T d of double field theory for other applications to string theory. Given the form Eq.(97) of the co-basis h a , it is useful to begin with the natural inner product arising from the conformal Killing form together with our freedom to choose the x-coordinates. The coordinate freedom allows us to conveniently choose the functions h a µ to be the inverse solder form, enabling us to integrate the dilatation equation for the Weyl vector.
The Killing metric
The field h µ a (x) lets us choose a convenient orthonormal basis for the y α space at each point of the x α space. Taking the restriction of the conformal Killing form to the biconformal manifold, , as the biconformal metric lets us usefully control this coordinate freedom. The Killing metric gives the orthonormal inner product of (e a , f b ) basis, Choosing arbitrary coordinates,x µ as the complement to y µ , the first of these three relations, e a , e b = 0, shows that dx µ , dx ν = 0. Substituting f a = h a + c ab e b into e a , f b and expanding in coordinates, We see from Eq.(98) that the inner product of dy µ with dx ν cannot depend on y µ , Moreover, like e ν b (x) and h µ b (x), k ν µ (x) must be invertible. Let x α = x α (x) be any coordinate transformation ofx α . Then in the new x-coordinates the inner product becomes Since ∂x ν ∂x α is an arbitrary general linear transformation at each point and k ν µ is invertible, we may choose ∂x ν ∂x α to be its inverse. Then dx ν , dy µ = δ ν µ Writing eq.(98) in these new coordinates, we have is just the inverse matrix to e a µ (x). This fixes h a = e µ b (x) dy µ (100)
The dilatation equation
With the change of x-coordinate, the basis forms are now given by with the spin connection and Weyl vector given by Eq.(60) and Eq.(63). The x-dependent translation β µ remains an arbitrary coordinate choice.
Using the coefficients e a µ to change basis in the usual way to convert between coordinate and orthonormal indices, we expand c a = e α a c αν dx ν and the Weyl vector ω = W a e a = W µ dx µ in Eq.(67) in coordinates.
Equating independent parts, Eq.(104) is integrated immediately, This must satisfy both equations, so substituting into Eq.(103), Before making the obvious coordinate choice, β µ = −α µ , it is suggestive to comment on the form of Eq.(105). The integration "constant" α µ (x) is a potential for the antisymmetric part of c µν and the antisymmetric part is independent of y µ . Since an x-dependent rescaling does not affect the vanishing of the f a component of the Weyl vector, we may perform a dilatation to modify α µ (x). This is precisely the form of the gauge transformation of the electromagnetic potential, but as with the failed Weyl theory of electromagnetism, it may lead to unphysical size changes since the dilatational curvature, Ω µν does not necessarily vanish. However, notice that biconformal space has a symplectic form. Eq.(67) describes a manifestly nondegenerate 2-form, e a ∧ f a , which is exact and therefore closed. This means we may interpret the full biconformal space as a relativistic particle phase space with canonical coordinates (x α , y β ). In this view, y µ is a momentum and the Weyl vector (105) has exactly the form and gauge properties of the electromagnetic conjugate momentum if α µ is taken proportional to the vector potential. Moreover, the previous well-known conflict with observation is avoided. The transformation by β µ to remove α µ is then the cannonical transformation between the conjugate electromagnetic momentum, π µ = p µ − eA µ and the simple particle momentum p µ .
The original ill-fated attempt by Weyl to identify the Weyl vector of a Weyl geometry as the vector potential of electromagnetism, W µ = eA µ , leads to nonvanishing dilatation in the presence of electromagnetic fields, Ω µν = eF µν . Einstein immediately observed that this conflicts with experiment, and it is easy to show, for example, that two hydrogen atoms moving to produce a closed path that encloses some electromagnetic flux would emerge with different sizes, and therefore very different spectra. The precision of atomic spectra therefore disproves the simplest version of the theory. The situation is completely different in the biconformal setting. Because of the extra e a f a term in the dilatation equation, it is possible to have vanishing dilatational curvature and retain the interpretation of α µ (rather than W µ ) as the vector potential. The idea has been explored to some extent in [9]. Here, the form of the dilatation is given by or since we may also write Ω = 1 2 Ω µν dx µ dx ν + Ω µ ν dy µ dx ν + Ω µν dy µ dy ν we have coordinate components Therefore, while the space spanned by f a = 0 shows no unphysical size changes, Ω ab = 0, the space defined by setting y µ = 0 has Ω µν given by the antisymmetric part of c µν . It is possible to avoid dilatational curvature altogether by setting χ = 0. In this case, the full dilatational curvature is identically zero. There is still a symplectic form in this subclass of theories, since we still have dω = e a ∧ f a . This permits the consistent interpretation of the Weyl vector as the conjugate electromagnetic momentum according to Eq.(105).
Notice that setting χ = 0 is inconsistent with the 1 + χ = 0 cases to be studied in the next Section. The possibility of a geometric graviweak theory with 1 + χ = 0 is more appealing than this χ = 0 case, since the success of the standard model strongly suggests that the electromagnetic and weak interactions should arise together. We continue with the generic picture, but eventually choose the y a coordinate to be offset by β µ (x) = −α µ (x). This makes Ω µν = χc [µν] vanish without restricting the action, while it leaves the cross-dilatation nonzero and c µν symmetric. There is no effect of this on spacetime, but, identifying y µ = i p µ as argued in [41,42] it leads to a non-integrability in phase space of the form i ¸p µ dx µ = 0 arising from the interesting conjunction of the dilatational curvature with the symplectic form. The result might be consistent with a quantum interpretation. This idea has been explored in [42,41,30].
Without further conjecture on the interpretation of the geometry, we continue with the generic case of the reduction toward general relativity. Without loss of generality, we choose the y µ coordinate so that α µ =α (x)+β µ (x) = 0, but this is merely a convenient coordinate choice. The solution retains full coordinate covariance.
Collecting the forms for the connection and basis established in Eqs.(71) and (101), and writing the gauged form of the Weyl vector co-solder form, we now have where The dilatation may now be written as
The co-solder equation
Now consider the co-solder equation, Eq.(66), with the co-torsion constrained by the Bianchi identities, Eqs.(95) First, note that the the Bianchi identity S [abc] = 0 is identically satisfied, since contraction with the solder form vanishes identically: Now, solving Eq.(113) for the co-torsion and substituting for the connection forms We first need the dy β dependent pieces of dc a , where c a is given by Eq. (111). Since the only y-dependence is in the Weyl vector, Expanding the x-dependent, α-covariant derivative of the Weyl vector, the y-derivatives become Substituting, the dy α terms cancel identically, leaving a + ω ∧ c a This shows once again that the cross-term of the co-torsion vanishes, S c a b = 0. Now we expand the spin connection, rewriting all of the derivatives as x-dependent, α-covariant, D (α,x) .
After distributing the covariant derivative and expanding β b a , we separate curvature terms and simplify, Using the partition of the Riemann tensor, Eq.(86), and the Ricci identity, If W a were the gradient of a function of x µ , then Eq.(114) would be the condition for the spacetime to be conformal to a Ricci-flat spacetime. Since this is the case, but only at constant y α .
This is in agreement with our conclusion, Eq.(91), from the spacetime Bianchi equation combined with the usual Riemannian Bianchi identity. The result means that for vanishing co-torsion and constant y α there exists an x-dependent gauge transformation to a Ricci flat spacetime. This form is not unfamiliar, the same expression having been noted in another context in [17]. The remaining y-dependence is the only obstruction to the Triviality Theorem: if a conformal transformation could make S abc vanish for all y α , then the biconformal space would necessarily be trivial. Note that the field equations Eqs.(94) and the Bianchi identities Eqs.(95) for the co-torsion are now all satisfied for any allowed S a .
Collecting the results (1 + χ) = 0
We have now solved for the full connection and satisfied all of the field equations.
Notice that dω = (1 + χ) e c f c defines a symplectic form. The curvatures follow from the structure equations as T a = 0 (116) The combination χe a ∧ h a = χdx α ∧ dy α is also non-degenerate and closed, and therefore symplectic.
The Lagrangian submanifold of spacetime
The basis forms h a = e µ a dy µ are manifestly involute. Holding y µ constant so that h a = 0, the resulting vanishing of the symplectic form shows that the h a = 0 submanifold is Lagrangian. The structure equations for the resulting Lagrangian submanifold are dω = 0 and the remaining part of the co-solder equation is, With y µ = y 0 µ constant, the form of the connection is Notice that the Weyl vector is now the gradient of − (1 + χ) y 0 α x α with respect to x α . There is one further consequence of the curvature field equation, and the curvatures are as given in Eqs.(115)-(117) with y µ = y 0 µ , together with Ω = 0.
Interpreting c a
Combining Eq.(117) at constant y µ with Eq.(121), we expand the derivatives and replace the Weyl curvature using the partition of the Riemann tensor, Eq.(86), and therefore This is exactly the condition for the existence of a conformal transformation to a spacetime satisfying the Einstein equation with matter sources, found in [39], where the matter source (1 + χ) c ab is given in terms of the energy-momentum tensor T ab by Therefore, there exists an x α -dependent rescaling of the solder form -the Riemannian gauge -such that the co-torsion equation becomes which, in turn, is the Einstein equation with source given by T ab . Explicitly, substituting the definition of the Schouten tensor from Eq.(85) and the form of c ab from Eq.(123), we substitute for the trace of the energy momentum, T = − 1 2 (n − 2) R. Solving for the energy-momentum tensor, we find the usual form of the Einstein equation, The h a = 0 submanifold is therefore a spacetime satisfying the locally scale-covariant Einstein equation, including phenomenological matter sources. Study is underway to determine whether the energy-momentum of fundamental source fields automatically enter in this way in place of c ab , or if special couplings to matter are required in the Lagrangian. Notice that in the Riemannian gauge Eq.(122) reduces to For a single vector to annihilate the curvature tensor can happen only in the simplest Petrov type spacetimes (O and N , and these are already conformally Ricci flat; see [17]) and we conclude that, generically, the gauge transformation that makes the spacetime Riemannian is simultaneously the one which makes the Weyl vector vanish.
Contractions of the Bianchi identity for the curvature on the Lagrangian submanifold
In components, the Bianchi identity for the Riemann-Weyl curvature on the spacetime submanifold, Eq.(91) becomes Taking the bc contraction, In the Riemannian gauge, and therefore any gauge, the first two terms cancel since follows from the Bianchi identity for the Riemannian curvature. Therefore, Now, expanding the co-torsion using Eq.(121) and substituting for c ab from Eq.(123) in and we have established c ab to be both symmetric and conserved on spacetime, in agreement with the requirments for the energy momentum tensor. Naturally, this last condition also follows directly from the vanishing divergence of the Einstein tensor.
Metric on the Lagrangian submanifold
While we have used the Killing form to motivate the choice of h µ a (x) in the basis form on the cotangent spaces, Eq.(100), the use of the Killing form is not necessary. Indeed, when general relativity is developed as a gauge theory from the Poincaré group, the Killing form vanishes when restricted to the base manifold. Instead, the spacetime metric may be motivated by the spin connection, which is compatible with Lorentzian signature. Ultimately, there is no inherent group structure that requires the choice except this compatibility. Similarly, in the biconformal gauging, we may introduce an SO (p, q) compatible metric by hand on the Lagrangian submanifolds where the restriction of the Killing form vanishes. For Lorentzian cases, with an original SO (n − 1, 1) spin connection, it is natural to introduce the corresponding Minkowski metric on each Lagrangian submanifold.
This choice is sufficient for the generic case, but since the restriction of the conformal killing form to biconformal space is non-degenerate, there are alternatives that trace their origin to the conformal group. These have been explored in a variety of ways ( [41,31,22,44,32]) but these considerations take us too far beyond the scope of this class of solutions.
7 Non-abelian case: 1 + χ = 0 We note that the condition 1 + χ = 0 becomes, in terms of the parameters of the original action, and this does not coincide with any other special conditions.
We now return to the form of the connection, structure equations, and curvatures established at the end of Sec. (5), and set 1 + χ = 0. The connection forms still take the form, The form of the spin connection immediately gives the solution for Ω a bcd as the Riemann-Weyl curvature tensor of an integrable Weyl geometry, given by Eq.(84). The field equation is the vanishing of the Weyl-Schouten tensor, and therefore vanishing Weyl-Ricci tensor. The field equation reduces the full spacetime curvature to the Weyl curvature Ω a bcd = C a bcd (α) with α a b the metric compatible connection. The dilatation structure equation is now simply dω = 0 so to simplify the form of the field equations, we may gauge to ω = 0. In the W a = 0 gauge, the Weyl connection becomes Riemannian, ω a b = α a b and the curvature is where α a b is the metric compatible connection, de a = e b ∧ α a b . The curvature field equation is simply the vacuum Einstein equation. The dilatation and curvature cross terms are now of unit magnitude, The only remaining field equations are those describing the co-torsion, and the only remaining structure equation is the co-solder equation, c cd a f c ∧ f d When 1 + χ = 0, the remaining structure equation on the x µ = constant, e a = 0 submanifold is given by Eq.(56), which is precisely the Maurer-Cartan equation for a Lie group G with structure constants c cd a . Here we see the realization of one of the motivations for the use of the conformal group as the starting point for Poincaré gravity, and the subsequent motivation for the biconformal gauging. One anticipates that by starting with the larger conformal group and taking the quotient by the inhomogeneous Weyl group C/IW that the resulting additional symmetry might account for some known or new fundamental interaction beyond gravity. This hope is frustrated by the finding that the additional special conformal gauge fields f a are always auxiliary, and determined by the Ricci tensor [43]. When these auxiliary gauge fields are substituted back into the rest of the model, they serve to turn the Riemann curvature tensor into the Weyl curvature tensor. As a result, though they enforce conformal symmetry, they never provide an additional interaction. As a way to avoid the elimination of f a , we are led to the biconformal gauging, C/W, the idea being that if both e a and f b together are required to span the base manifold, then f b cannot possibly be removed as auxiliary [8,9]. Although considerable subsequent work continues to find f a serving to turn R a b to C a b as in subsection (5.2.3), the emergence of an additional symmetry group is now realized in the 1 + χ subclass of cases.
The biconformal space comes equipped with the SO (p, q) pseudo-rotation group of an n-dimensional space, but these rotations and boosts acts on a 2n-dimensional manifold. This is much less than the SO (n, n) one might expect. Indeed, as seen above, the generic torsion-free solution dictates that half the space is flat, so there is no curvature corresponding to the dy µ part of the spin connection. The spin connection reduces, essentially, to the metric compatible connection of e a , ω a b = ω a bµ (x, y) dx µ + ω a µ b (x, y) dy µ ⇒ α a bµ (x) + 2∆ ac µb y c dx µ which is fully expressed on the n-dimensional, constant y µ Lagrangian submanifolds. This reduction of the spin connection reduces the number of physical fields, but when 1 + χ = 0 the extra translational gauge fields -the co-solder form -make up for it by providing a new connection and field strength: there is necessarily an n-dimensional Lie group G acting at each x µ . Since the n-dimensions of this group are labeled by an SO (p, q) index, SO (p, q) must act on G. We show in this section that this internal symmetry group is gauged.
For a particularly pertinent example, suppose we have started with Euclidean 4-space. This does not preclude a spacetime Lagrangian submanifold, for it has been shown in [31] that time emerges uniquely from a Euclidean starting point, while [22] shows that this emergence arises purely from properties of the conformal group. With the 4-dim Euclidean starting point, the spin connection has symmetry SO (4) = SU (2)×SU (2). The obvious 4-dimensional subgroup is the electroweak symmetry, SU (2)×U (1). In this case, the symmetry breaking from a left-right symmetric electroweak theory to left-handed representations of SU (2) is forced by the requirement of an n-dimensional subgroup. In a spinor representation of the conformal group, the P a and K a (x µ and y µ submanifolds) are left-and right-handed, respectively. Details of this case are currently under investigation.
It is important to note that although this symmetry G is restricted to be acted on by SO (p, q) the connection gauge field, structure constants and field strength arise completely independently. The biconformal gauging has the additional fields required for this further symmetry.
While a fiber bundle gives us a foliation, the converse is not always the case. The central requirment to have a principal fiber bundle is the existence of a projection from the bundle to the base manifold. We establish this by showing a second involution. Separating the co-solder equation into independent parts, we observe that the exterior y α -derivative of c a must be linear in dy α , and so linear in h a . We may therefore write d (y) c a = C b a h b ∧ e a and solve for dh a on the full biconformal space, This shows that h a is in involution. Setting h a = 0 constitutes a projection to an n-dimensional submanifold spanned by e a .
Gauging G
We compare our usual gauging of G with the structures already present in the biconformal geometry. Our usual gauging of a symmetry is to take the Cartan generalization of the Maurer-Cartan equation for G, eq.(126). For this we replace the Maurer-Cartan connection h a with a general connection, leading to the introduction of a field strength. Taking A a to be the generalization of h a , the Maurer-Cartan equation becomes the Cartan equation, where the field strength F a is required to be horizontal and the equation integrable. Horizontality demands while integrability requires = 0 by the Jacobi identity for G. Within the biconformal solution, we interpret the co-solder forms f a as these generalized potentials A a for the G-connection. The full structure equation for f a , however, is not exactly what we expect for a typical gauging. Instead we have The situation appears to be similar to what Kibble encountered in writing general relativity as a gauge theory of the Poincaré group [2]. Kibble introduced Poincaré fibers over spacetime, then "soldered" the translational gauge fields of the fiber symmetry to the cotangent basis of the bundle. This identification avoided double counting the translations. With the quotient method, such identification is no longer needed since the quotient of the Poincaré group by the Lorentz group automatically changes the translational symmetry into the base manifold.
Here, we already have an SO (p, q) symmetry on the fibers and find that the base manifold has a similar, but restricted symmetry. It is this latter, emergent symmetry we would like to use. The present situation differs from the Poincaré case since it is the connection and not the frame field that is doubled, with ω d a ∧ f d and − 1 2 c cd a f c ∧ f d both acting on the same index of f d . We cannot simply solder the two together because they produce covariance with respect to different symmetries. Moreover, we still need the original symmetry to act on the remaining gauge fields in the usual way. There are a number of possible resolutions to the difficulty. First, it might be possible to build the gauging of the co-solder form into the original quotient. However, though closer examination may reveal a natural way to do this, it would seem to lose the appealing feature of an emergent internal symmetry. A second approach is suggested by [22,23,44], in which an initially SO (n) connection is written as a Lorentz connection plus additional terms, which then introduce physical fields. If a similar technique is applied here, perhaps along with the transformation of [22], it is possible that the restriction of the connection can occur directly.
A third approach is to keep both connections but keep careful track of which fields transform under which symmetry. This actually causes no problem for additional fields we might wish to introduce, since these will enter as representations of either the SO (p, q) transformations, the G transformations, or both, leaving no ambiguity about their transformations. The only potential conflict arises with f a ⇔ f A itself. Thinking of G as a subgroup of SO (p, q), we must wonder whether a full SO (p, q) transformation would introduce sensible but distinct copies of the gauge potential. In the case of electroweak symmetry, we could construct a theory with SO (4) breaking to SU (2) × U (1) on the fibers while the full SO (4) is written following [22], as a Lorentz connection plus additional scalar field and cosmological constant. It is not clear whether the resulting weak fields would violate the causal structure of the Lorentz sector.
However, these conjectures will require-and are the subject of-further study.
For the present we content ourselves with the following. First, we satisfy the final field equation for the cross-term of the co-torsion, by the sufficient condition S b c a = 0. The only remaining field equation is S ba a = −c ba a = 0, the tracelessness of the structure constants. This leads to unit determinant for elements of G.
Having solved the field equations, we identify f a as the gauge field A a , we modify the indices to make it clear which gauge group applies. For fields that transform under SO (p, q), we retain the lower case Latin indices, while for fields that transform under G we replace the relevant indices by upper case Latin. Both sets range from 1 to n. The connection becomes a pair, f C so in the new notation, α A B = 0 and 1 2 c cd a f c = 0. The curvature and solder form equations are unchanged, but the co-torsion equation becomes This exactly reproduces the form of a Yang-Mills field, Eqs.(127) and (128). Consistency of this restriction is now immediate because in the full set of structure equations, S Acd e c ∧ e d the internal symmetry G has fully decoupled from the spactime geometry.
Metric on the submanifolds in the nonabelian case
The nonabelian case still allows us to simply choose the spacetime metric as is done in general relativity and was used in the generic case above. For the group submanifold, it is natural to choose the Killing form of G if it is nondegenerate. While this assignment is certainly possible for semisimple G, the details depend on the particular group and will be discussed elsewhere. As with the generic case, the considerations of [41,31,22,44,32] may be relevant.
Remaining issues
While we have arrived at a satisfactory separation of a new internal symmetry, we still lack both the field equation for S A and the contribution of this additional field to the Einstein equation. We consider three possible resolutions: 1. The absence of a source for gravity arising from G is not surprising given the restriction of the action to linear curvature terms. Including up to quadratic curvature terms in the original action could provide both field equation and gravitational coupling.
2. Allow nonvanishing cross-term to the torsion. This preserves the involution of the solder form and avoids spacetime torsion. The cross-term gives extrinsic curvature to the embedding of the submanifolds into the full biconformal space, and allows curvature of the group manifold as in [51]. The cross-term of the torsion, driven by the internal symmetry, then enters the spacetime curvature quadratically and might supply the required gravitational source.
3. The gravitational instanton has been shown to introduce both field equations and gravitational source in the usual form [52,53].
Leaving these considerations to further work, we make only the following observation. It would be natural to introduce a quadratic term such as´T a ∧ * S a into the action. With vanishing torsion, only the variation δT a = Dδe a yields nonvanishing contributions δ eˆT a ∧ * S a =ˆδe a ∧ D * S a + surf ace term + terms proportional to torsion This term leads to a divergence of S a in the curvature equation, but it because of the presence of f a in the volume form also gives terms quadratic in the components of S a . However, with vanishing co-torsion cross-term the quadratic terms added to the spacetime curvature field equations involve only products of the spacetime and the momentum components, and these are not in the form of energy momentum tensor. This suggests that a combination of this quadratic term and nonvanishing cross-term for torsion may provide a solution.
Conclusions
We have shown how general relativity emerges from the torsion-free solutions to biconformal gravity. The derivations involve field equation driven dimensional reduction and may therefore have relevance to dimensional reduction of twistor string, or the reduction of Drinfeld doubles.
Results in biconformal gravity
We began with the conformal group C p,q of a compactified space with (p, q)-signature metric in n = p + q dimensions. The quotient of C p,q by its homogeneous Weyl subgroup W gives a 2n-dimensional Kahler manifold with local SO (p, q) and scale invariance. Generalizing this local structure leads to a curved geometry characterized by SO (p, q) curvature, torsion, co-torsion and dilatational curvature corresponding to the generators of the conformal group. Throughout, SO (p, q) may be replaced by Spin (p, q) when a spinor representation is desired. This biconformal space admits a scale invariant action functional linear in the Cartan curvatures, Eq.(1). Varying the action yields a gravity theory in 2n dimensions. All models with (n − 2) α − 2β nonzero are considered. The special case when ((n − 2) α − 2β) = 0 may include non-integrable Weyl geometries and, likely being unphysical, these are not considered in detail. We established the following distinct results for this model.
1. Triviality with vanishing torsion and co-torsion. If, in addition to vanishing torsion, the cotorsion (the field strength of special conformal transformations) is taken to vanish, the biconformal space takes a trivial form, with the only nonvanishing components of the SO (p, q) and dilatational curvatures being constant cross-terms.
2. Foliation by a Lie subgroup. We prove that half of the biconformal space is foliated by an ndimensional Lie group G with structure constants lying in a representation of SO (p, q). This result follows from the involution guaranteed by vanishing torsion, together with the field equations. When α, β and γ are chosen such that χ = 1 n−1 1 + n 2 γ (n−1)α−β = −1, G may be non-abelian, otherwise it is abelian.
3. Generic solution. The generic solution assumes only that 1 + χ is nonzero, together with vanishing torsion, T a = 0. The field equations reduce the only nontrivial dependence of all the remaining curvatures -the SO(p, q) curvature, co-torsion, and dilatation -from 2n independent variables (x α , y β ) down to n variables (x α ) and reduce the number and form of the independent components. Explicitly, the form and dependency of each curvature Ω A ∈ {Ω a b , S a , Ω} begins as three distinct tensors dependent on 2n coordinates, and after application of the field equations, each is reduced as follows: The resultant curvatures, 1 2 C b acd (x) e c ∧ e d and R ab (x) are the Weyl and Schouten parts of the Riemann curvature tensor computed from the connection compatible with the n-dimensional solder form, e a (x). Each of the 2-forms dω = (1 + χ) e c ∧ f c Ω = χe a ∧ h a = χdx α ∧ dy α is closed and non-degenerate, hence symplectic on the full biconformal space.
The basis forms e a , f b are given by and the manifest involution of h a = e α a dy α shows that setting y α = constant gives a Lagrangian submanifold for spacetime. Conversely, setting x α = constant gives conjugate Lagrangian submanifolds, each the leaf of a foliation by flat manifolds. The entire 2n-dimensional biconformal spaces is therefore interpreted as the cotangent bundle of spacetime. The spin connection and Weyl vector are given by is the metric compatible connection, de a = e b ∧α a b (x). Here the orthonormal frame field and spin connection pair (e a , α a b ) is equivalent to the metric and Christoffel connection pair, g µν , Γ α µν .
The sole remaining constraint on the system is the locally scale covariant Einstein equation with source c ab , Eq.(122): which has been shown in [39] to be the condition for the existence of a conformal transformation to the sourced Einstein equation when the source c a = c ab e b is written as We showed from the properties of c ab that T ab is symmetric and divergence free, and in the Riemannian gauge (in which the Weyl vector vanishes), We conclude that the generic case describes the locally scale covariant n-dimensional Einstein equation sourced by a symmetric, divergence free tensor and formulated on the co-tangent bundle of spacetime. The reduction to n-dimensions is accomplished using only the field equations with vanishing torsion. 4. The non-abelian cases. When α, β and γ are chosen such that χ = 1 n−1 1 + n 2 γ (n−1)α−β = −1, G may be non-abelian, and there are substantial differences in the Cartan structure equations. For these cases, we again showed the reduction to dependence on the spacetime solder form, e a (x) , but now the final forms of the curvature and dilatation are The curvature is subject to the scale-covariant vacuum Einstein equation, For the co-torsion, Lorentz transformations are suppressed while the appearance of the G-connection is automatic. This leads to the co-solder form becoming the G gauge field and the spacetime cotorsion becoming the usual Yang-Mills field strength, The biconformal space becomes a principal G-bundle over spacetime. Effectively, the total principal bundle has homogeneous Weyl and G symmetry, W × G. This is not what occurs if the original quotient is of the conformal group by inhomogeneous Weyl, C (p,q) /IW, which essentially gives Poincaré fibers over spacetime [43,17]. The emergence of non-abelian symmetry from the degrees of freedom of the special conformal gauge fields of the conformal group is a new result. In 4-dimensions, with SO (p, q) = SO (4), the maximal G is the electroweak symmetry with necessary parity violation.
A note on the metric and signature change
To formulate general relativity as a gauge theory using the Cartan techniques, we take the quotient of the Poincaré group by the Lorentz group. The only guidance as to the metric is the presence of the Lorentzian connection, which will leave the usual orthonormal metric, η ab = diag (−1, 1, 1, 1) invariant. We then introduce the metric by hand using the orthonormal frame field, g αβ = e a α e b β η ab . The situation is different in biconformal space, where there are natural metric structures present. Of course, there is the SO (p, q) connection, making it possible to introduce a (p, q) signature metric by hand, just as we do in general relativity. However, the Killing form of the conformal group has a non-degenerate restriction to biconformal space, and we may use this instead. The resulting metric on spacetime depends not only on the original signature, but also on what submanifold is taken as spacetime.
Using the restricted Killing form, the metric is and the restriction to the e a or f b subspaces has vanishing restriction. It is shown in [31], however, that if we seek orthogonal Lagrangian submanifolds on which the Killing metric is non-degenerate, there are limited possibilities, with initial spaces of signature (n, 0) , n 2 , n 2 or (0, n) being the only consistent starting points, and the two Euclidean cases leading uniquely to Lorentzian signature, (n − 1, 1) or (1, n − 1). This development of a time direction from an initially Euclidean space is appealing.
There are other possibilities. If we drop the orthogonality requirement from the theorem of [31], it becomes possible to have different signature on the two Lagrangian submanifolds. This too has its advantages, as we might arrange for Lorentzian signature on one submanifold and Euclidean on the other, enabling an additional compact internal symmetry.
Some of these avenues have been explored. The Euclidean starting point leading to Lorentzian signature on both Lagrangian submanifolds is studied in [22], in which all the results are seen to depend only on structures inherent in the conformal group. In [32] connections of both types are introduced, and some possibilities are explored in [45]. Work is currently underway to examine a 4 + 4 dimensional model with mixed signatures, to take advantage of the potential graviweak theory.
There is still another metric possibility, because the metric compatible with the Kähler structure is different, having signature (2p, 2q) while the the Killing metric has signature (n, n).
For the present results, it seems best to simply impose the metric we choose. If we let the original SO (p, q) be Lorentzian, SO (n − 1, 1) then the natural choice for spacetime is Lorentzian (but see [22]).
Discussion
Biconformal spaces with appropriate signature give rise to general relativity, generically formulated on the cotangent bundle of spacetime. In a subclass of cases there may be an additional non-abelian internal symmetry. While this internal symmetry ultimately arises from the special conformal transformations, no previous gauging of the conformal group has shown the direct possibility of a non-abelian symmetry. This opens the possibility of a graviweak unification, which, while still requiring additional structure for the strong force, holds out the hope of a deeper understanding of parity violation and the breaking of a left-right symmetric SU (2) × SU (2) model. This possibility is under current investigation.
As described in the introduction, these gravity models may provide new insights into string theory. The existence of a conformal route to general relativity, as opposed to fourth-order Weyl gravity, allows for the consistent use of twistor string models. In addition, the doubled dimension makes possible a compactification from 10-dimensionsal superstring theory to an 8-dimensional biconformal space with an immediate interpretation as 4-dimensional general relativity. In this way, the myriad 6-dimensional compact spaces are avoided, to be replaced by compactification of only 2-dimensions and possibly uniquely to a torus if other structures are to be maintained.
Finally, this reduction of the biconformal gauging shares many features with Drinfeld doubles. The match between the Killing form and the symmetric form of the Drinfeld product may suggest systematic ways of reducing the doubles to their half-dimension.
so we see the mapping between x α and w α is a bijection.
We extend W 0 by taking the union with a new setŴ of pointsŵ α satisfyinĝ W = w α | η αβŵ αŵβ = 0 ClearlyŴ ∩ W 0 = φ. We suggest that W ≡Ŵ ∪ W 0 provides a compactification of the space. If the signature is Euclidean, (p, q) = (n, 0), then N consists of the origin alone, andŴ is the 1-point compactification of R n .
More generally, a detailed proof of compactness must rely on the specification of a topology on indefinite spaces. For general spacetimes this relies on introducing ideal points [46,47,48]. While such methods should meet no obstruction in the flat, nonsingular spaces considered here, the definition of the conformal group requires only the existence of inverses. This much has already been accomplished with the definition of W. We therefore content ourselves by defining a suitable extension and indicating compactness by studying the resulting extensions of spacetime curves.
Appendix B: Variation of the spin connection
Here we give details of the variation of the spin connection, since some of the steps are novel. Because many of the expressions are long, we introduce some notational conventions to make expressions more compact and transparent. Specifically, since all differential forms are rendered in boldface, there is no loss of information if we assume wedge products between all adjacent forms, dropping the explicit wedge. We further define a multi-index form, ω c···d ≡ ω c ω c1 . . . ω d ≡ ω c ∧ ω c1 ∧ . . . ∧ ω d for any number of basis 1-forms ω c . It is always possible to deduce the correct number of indices from the Levi-Civita tensor. The spin connection occurs only in the SO (p, q) curvature, Ω a b , so the spin connection variation affects only the α term of the action, The covariant derivatives of the basis forms divide naturally into two tensors, the torsion, T a = De a and the co-torsion, S a = Df a . We show below that if both of these vanish, the solution must be trivial (i.e., non-gravitating) so it is important to realize when considering torsion-free solutions that the co-torsion S a remains non-zero. The variation must preserve the antisymmetry, η bc η ad ω c d = −ω a b , of the spin connection, so , δω a b ∆ ar sb = δω r s ,. Therefore, the coefficients of the variation, δA a bc and δB a c b , are antisymmetric on the first pair of indices. As a result, only the antisymmetric part of the rest of the integrand vanishes, so we require the projection operator, ∆ ac db ≡ 1 2 (δ a d δ c b − η ac η bd ) = 1 2 η an η md (δ m n δ c b − δ c n δ m b ) with ∆ ar sb ∆ sm nr = ∆ am nb . This acts to antisymmetrize 1 1 tensors, ∆ ac db T d c = 1 2 η an (T nb − T bn ).
For the second equation, the same steps yield, | 24,619 | 2018-08-21T00:00:00.000 | [
"Physics"
] |
Lactobacillus Suppresses Tumorigenesis of Oropharyngeal Cancer via Enhancing Anti-Tumor Immune Response
Deficiency in T cell-mediated adaptive immunity, such as low CD8+ T cell infiltration, inhibits the immune surveillance, promotes malignant transformation, and facilitates tumor growth. Microbiota dysbiosis diminishes the immune system and contributes to the occurrence of cancer. However, the impact of oral dysbiosis on the occurrence and molecular mechanisms of oropharyngeal cancer (OPC) remains largely unknown. In the current study, we used 4-nitroquinoline-1-oxide (4NQO) to mimic tobacco-related carcinogenesis to generate a murine OPC model and determine the role of microbiota changes in OPC tumorigenesis. Our results showed that the oral flora composition of mice was deregulated during the tumorigenesis of OPC. The abundance of Streptococcus, Veillonella, Muribacter, Rodentibacter, and Gemella was increased, whereas the dominant genus Lactobacillus was gradually decreased with disease progression. We further demonstrated that infiltration of CD8+ T lymphocytes was markedly reduced due to the reduction of Lactobacillus. Supplementation of Lactobacillus increased the infiltration of CD8+ T cells, promoted the expression of IFN-γ and granzyme B, and lessened the OPC progression. Analyzing the metabolites of the Lactobacillus, we demonstrated that Lactobacillus enhanced the anti-tumor immune response by producing acetate in OPC development. Administration of acetate to mice could increase the expression of IFN-γ and IFN-γ-inducible chemokines in tumor tissues by activating GPR43 to promote the infiltration of CD8+ T lymphocytes and substantially delay the development of OPC. Together, our data suggest that dysbiosis of oral microbiota promotes the tumorigenesis of OPC through downregulation of cytotoxic T lymphocytes. Lactobacillus and its metabolite acetate improve the tumor microenvironment, which could be applied in the treatment of OPC.
INTRODUCTION
Oropharyngeal cancer (OPC) is a universal malignant tumor in the head and neck with a high incidence in patients with a history of smoking. Ninety percent of OPCs are squamous cell carcinoma (Parkin et al., 1999). Although surgery, radiotherapy and chemotherapy have made significant progresses in the treatment of OPC, the 5-year survival rate of patients is only about 30% (Rusthoven et al., 2008;Siegel et al., 2014). Currently, tumor immunotherapy shows continuously clinical response and is triggering a shift in cancer treatment. CD8 + T cells play a dominant role in tumor immunity. Less infiltration or dysfunction of CD8 + T cells in the tumor microenvironment (TME) has led to poor clinical outcomes for many cancers (Yu and Fu, 2006;Houot et al., 2015;He et al., 2021). Therefore, promoting the infiltration and function of CD8 + T cells in TME is beneficial to improve the efficacy of cancer treatment.
The microbiota plays a vital role in human health. Studies have shown that 15-20% of cancers are caused by microbial dysbiosis (D'Souza et al., 2007). Katz et al. found a large number of periodontal pathogens-Porphyromonas gingivalis (P. gingivalis) in gingival squamous cell carcinoma (GSCC) tissues (Katz et al., 2011;Ritter and Greten, 2019), in which microorganismmediated chronic inflammation plays a role in promoting disease progression (Bhatt et al., 2017). Recent studies on mice and humans have shown that gut microbiome modulates the anti-tumor efficacy in chemotherapy and immunotherapy by shaping host immunity (Iida et al., 2013;Gopalakrishnan et al., 2018;Matson et al., 2018;Routy et al., 2018;Tanoue et al., 2019;Jian et al., 2021). Compared with the untreated group, cancer patients treated with antibiotics had a lower response to anti-PD-1 immunotherapy. Restructure of sterile mice with feces from anti-PD-1 responding patients improves tumor control and T cell responses (Routy et al., 2018). The intestinal microbiota regulates dendritic cells and CD4 + T cells to enhance cancer immune surveillance and advance therapeutic effects (Iida et al., 2013;Vétizou et al., 2015;Gopalakrishnan et al., 2018;Matson et al., 2018;Routy et al., 2018;Tanoue et al., 2019;Jian et al., 2021). However, it is still not fully understood how the oral microbiota changes and whether the oral microbiota could regulate the tumor-infiltrating CD8 + T cell responses in OPC.
The microbiota synthesizes multiple metabolites, which may affect the development of cancer and impact systemic immune responses (Levy et al., 2017;Mariño et al., 2017;Zitvogel et al., 2017). Short-chain fatty acids (SCFAs), including acetic acid, butyrate, and propionic acid, are produced by bacterial fermentation of dietary fiber (Koh et al., 2016). The SCFAmediated activation of G protein-coupled receptor 43 (GPR43) promotes the expression of IFN. GPR43 deficiency leads to decreased IFN production (Antunes et al., 2019). IFN-γinducible CXCL9, CXCL10, and CXCL11 are chemokines attracting cytotoxic T cells to infiltrate into tumor tissues and exert anti-tumor effects (Gandhi et al., 2021). In addition, SCFAs also affect autoimmune CD8 +T cell response to prevent diabetes (Tan et al., 2017). These data prompted us to investigate whether the oral microbial metabolites could promote anti-cancer immunity and improve the therapeutic effect.
In order to investigate the effects of oral flora in OPC development, we collected pharyngeal tissues from mice. Through 16S rRNA sequencing, we analyzed the alterations in the composition and diversity of the flora and studied the impact of microbiota changes on the immune system. We identified that Lactobacillus is the dominant genus in the oral cavity and Lactobacillus metabolites slowed the OPC progression by activating GPR43 to increase the number of tumor-infiltrating CD8 + T cells, indicating that supplementation of specific microbiota and metabolite may have an impact on future cancer therapy.
Establishment of Animal Model
Healthy BALB/c male mice were housed in animal barrier facility in Henan University on a 12 h light-dark cycle with food and water available ad libitum. Mice were randomly assigned into two cages in each group for collecting feces and recording survival. We randomly divided BABL/c male mice into an experimental group (n = 20) provided with drinking water containing 100 mg/L 4NQO (sigma, United States) and into a control group (n = 20) provided with drinking water without 4NQO (Bouaoud et al., 2021). Five mice were randomly sacrificed every 4 weeks, continuously for 16 weeks. Experiments involving mice were under the regulation of ethical requirements and approved by the institutional animal care and use committee (IACUC) of Henan University in China.
Glossopharyngeal Tissue Collection
The mice were sacrificed by cervical dislocation and then dissected immediately. The mouse glossopharyngeal tissue was taken out and divided longitudinally from the midline, and washed with normal saline. One part was subjected to 16S rRNA sequencing. The other one was fixed in 10% neutral formalin buffer and dehydrated by ethanol gradient after 24 h, followed by embedded in conventional paraffin, sectioned at 2 μm, stained with HE, and observed under an optical microscope.
Application of Lactobacillus in Mice
Lactobacillus was purchased from American Type Culture Collection (202195) and cultured in Lactobacillus MRS broth (Panigrahi et al., 2017). Lactobacillus was inactivated by pasteurization for 30 min at 70°C . After pasteurization, no viable Lactobacillus could be recovered in culture. 2 × 10 9 CFU of Lactobacillus in 0.2 ml PBS or PBS alone was dripped into the mouse mouth for 16 weeks (Si et al., 2021).
HE Staining
A total of 2-μm sections of glossopharyngeal tissue from different groups were processed for HE staining for histopathology. Histopathological examination was performed by three experienced oral pathologist in a blind manner basically according to the criteria described by Kramer et al. (Kramer et al., 1978).
Diagnosis and Grading Criteria for Epithelial Dysplasia
Epithelial dysplasia was diagnosed as previous described (Stanley et al., 1992). 1) The polarity of epithelial basal cells disappeared; 2) More than one layer of basal-like cells appeared; 3) The nucleoplasma ratio of cells increased; 4) The epithelial spikes were droplet; 5) Epithelial hierarchy was disordered; 6) Mitotic index increased with a few abnormal mitosis; 7) Mitosis appeared in the superficial half of the epithelium; 8) Cellular atypia; 9) Nuclear hyperchromia; 10) Nucleolar enlargement; 11) Decreased intercellular cohesion; 12) Keratinization of single or clustered cells in the spinous layer. Tissues with two items of the above conditions are mild dysplasia. Those with three to four items are moderate dysplasia, and focuses with more than five, or epithelial hierarchy severely disordered, are severe dysplasia. Carcinoma in situ refers to severe dysplasia involving the whole layer of epithelium, but not yet invading the basement membrane and growing downward. Those who break through the basement membrane are invasive cancer (Stanley et al., 1992).
Short-Chain Fatty Acids Detection by Gas Chromatography-Mass Spectrometry Quantitation
SCFAs in the glossopharyngeal tissue and serum of mice were quantified as described previously (Guo et al., 2020) with minor modifications. Appropriate amount of sample was added in a 2-ml centrifuge tube followed by mixing with 50 μL of 15% phosphoric acid, 10 μL of 75 μg/ml internal standard (isohexanoic acid) solution, and 140 μL ether for 1 min. The mixture was then centrifuged at 12,000 rpm for 10 min at 4°C, and the supernatant was used for analysis with the Chromatographic Agilent HP-INNOWAX capillary column (30 m*0.25 mm ID*0.25 μm). Split injection volume was 1 μL and split ratio was 10:1. The carrier gas was helium and the carrier gas flow rate was 1.0 ml/min.
Immunohistochemistry
After deparaffinizing, the 2-μm glossopharyngeal tissue section was hydrated in gradient ethanol. For antigen retrieval, slides were immersed in 0.01 M sodium citrate buffer and heated for 30 min. After inactivating endogenous peroxidase with 3% hydrogen peroxide and blocking with goat serum albumin, the sections were incubated with anti-CD8 antibody (1:200, Abcam, United Kingdom) at 4°C for overnight. The slides were incubated with biotinylated secondary antibodies. Then the sections are incubated with peroxidase-streptavidin and stained with 3,3′-diaminobenzidine tetrahydrochloride (DAB) for 30 s. Finally, the nuclei were counterstained with hematoxylin. As a negative control, tissue sections were processed in parallel by incubating with PBS instead of primary antibody.
The semi-quantitative assessment of IHC staining was reviewed by three different pathologists and classified as negative, weak, medium, or strong. To determine the H score, antibody-stained tissue was scored by calculating the product of the percentage of cells staining for each intensity level and the intensity level (0, negative; 1+, weak; 2+, medium; 3+, strong). Then the individual intensity level scores are added together to calculate the H score (Sun et al., 2020).
16S rRNA Sequencing
DNA in the glossopharyngeal tissue of mice was extracted with a DNA extraction kit (GenElute ™ Stool DNA Isolation Kit, Sigma-Aldrich), and the purity and concentration of sample DNA were detected by ultraviolet spectrophotometer. The sample DNA was subjected to 1% agarose gel electrophoresis to determine the integrity of DNA. After DNA was qualified, PCR amplification was carried out, and 16S rRNA gene V3-V4 region was amplified. PCR amplification primers for V3-V4 region were 338 F (ACTCCTACGGGAGGCAGCAG) and 806 R (GGACTACHVGGGTWTCTAAT) (Liu et al., 2016). The amplified sample was validated by 2% agarose gel electrophoresis, and the target band size of PCR products was confirmed before any subsequent experiments were performed. The high-throughput sequencing and data analysis involved were completed by Shanghai Meiji Biology Company (Shanghai, China).
Cytokine Analysis
The secretion of Granzyme B, IFN-γ and IFN-γ-inducible chemokines from tumors was detected by ELISA kits (R&D Systems, Minneapolis, MN). After the mouse was euthanized, the glossopharyngeal tumors were isolated from the mouse immediately. The glossopharyngeal tumors were mechanically homogenized in PBS containing protease inhibitors (10 ug/mL aprotinin, 10 ug/mL leupeptin and 10 ug/mL pepstatin). After homogenization, Triton X-100 (Applichem, Darmstadt, Germany) was added to a final concentration of 1%. The samples were frozen at −80°C, thawed and centrifuged at 10,000 × g for 5 min to remove cell debris. The supernatant of tumors homogenate was examined for cytokine production by ELISA according to the manufacturer's instructions. The homogenates from three tumors per group were pooled for the analysis. Signal intensity was calculated using Image J (Aindelis et al., 2020).
Statistical Analysis
Statistical analysis was performed in SPSS 21.0 program. The quantitative data were expressed as mean ± standard deviations (SD) and were analyzed by one way ANOVA. p < 0.05 was considered significant. (pp < 0.05; ppp < 0.01; pppp < 0.001; ns, no significant difference).
Development of Premalignant and Oropharyngeal Cancer in 4NQO-Treated Mice in a Time-dependent Manner
In order to establish an OPC mice model, we treated BABL/c male mice with 4NQO in drinking water for 16 weeks. Five mice were randomly Figure S1A). At the beginning of the experiment, there was no significant difference in body weight between control and 4NQO-treated mice (p > 0.05). At the end of 16 weeks, body weights of 4NQO-treated mice decreased as compared with those in control mice (p < 0.05) (Supplementary Figure S1B). With the prolongation of 4NQO treatment, oropharyngeal tissues of mice displayed a typical progression from mild, moderate, and severe dysplasia to OPC (Supplementary Figure S1C).
Changes of Oral Flora Composition and Diversity in Mice With Dysplasia and Oropharyngeal Cancer
To determine the alteration of oral microbiome in the development of OPC, we examined dynamics of the oral microbiota throughout the formation of OPC in our animal model. We performed a 16S rRNA sequencing on the oral flora of mice. PCA analysis of the sequencing results showed that control mice and dysplasia or OPC mice bore different clusters. With the extension of dysplasia, a farther distance in the composition of the oral flora from control mice were observed, indicating that components of microbiota were altered during OPC development ( Figure 1A). The hierarchical clustering analysis also yielded parallel results ( Figure 1B), supporting our notion that oral microbiota composition of mice with dysplasia and OPC was different from that of control animals. We further examined oral microbiota diversity of the mice with dysplasia and OPC. Shannon and Chao indexes were used to measure the diversity and abundance of microbiota. We found that alpha diversity and abundance of the oral flora in OPC mice were significantly different from those in control mice (p < 0.05). As progression of atypical hyperplasia to OPC, the diversity and abundance of oral microflora in mice were significantly higher than those in control mice ( Figures 1C,D). At the phylum level, the oral bacteria of control mice are mainly composed of Firmicutes, Proteobacteria, and Bacteroidetes. Firmicutes gradually decreased with the aggravation of atypical hyperplasia, accompanied by the increase of Proteobacteria ( Figure 1E). At the genus level, the abundance of Streptococcus, Veillonella, Muribacter, Rodentibacter, and Gemella was increased in dysplasia or OPC mice, whereas the abundance of Lactobacillus was decreased ( Figure 1F). Lactobacillus was the dominant genus, which gradually decreased with worsening of atypical hyperplasia ( Figure 1F). These results strongly suggest that OPC tumorigenesis is related to the dysbiosis of the oral microbiota, as highlighted by significant shifts in bacterial populations from a broad range of taxonomic groups.
Abundance of Lactobacillus Was Reduced in Mice With Oropharyngeal Cancer
To further explore the impact of oral microbiota on tumorigenesis of OPC, we analyzed the differences in oral flora between control mice and tumor-producing mice. We analyzed the proportions of different microorganisms through Pie diagrams. As compared with control mice, OPC mice showed an upregulated abundance of Muribacter, Rodentibacter, Veillonella, Gemella, Streptococcus and Porphyromoras, and a downregulated abundance of Lactobacillus (Figure 2A). More importantly, as compared with control mice, Lactobacillus gradually decreased with worsening of atypical hyperplasia, being the highest reduction in OPC mice ( Figure 2B), indicating that Lactobacillus may be involved in the pathogenesis of OPC.
Supplement of Lactobacillus Slows Down Oropharyngeal Cancer Development
Since Lactobacillus is significantly reduced in OPC tissues, we inquired whether supplement of Lactobacillus could slow down OPC progression. We treated mice with Lactobacillus and 4NQO Frontiers in Cell and Developmental Biology | www.frontiersin.org March 2022 | Volume 10 | Article 842153 5 simultaneously for 16 weeks ( Figure 3A), and found that application of Lactobacillus significantly alleviated 4NQO-induced body weight reduction of the mice ( Figure 3B). Histologically, oropharyngeal and tongue tissues in control-and Lactobacillus-treated mice were normal. Those in mice treated with 4NQO alone experienced typical pathological changes from normal to mild, moderate, and severe dysplasia to early invasive carcinoma. Application of Lactobacillus to 4NQO-treated mice markedly delayed the progression of oropharyngeal tissue dysplasia with only moderate to severe dysplasia being observed at the end of 16w-exposure ( Figure 3C), indicating that Lactobacillus supplementation slows down the carcinogenesis of 4NQO.
Short-Chain Fatty Acids, in Particular Acetate, Contribute to Lactobacillus-Induced Anti-Tumor Effects
To determine how Lactobacillus slows down the development of OPC, we tested whether Lactobacillus acts through its own components or metabolites to exert anti-tumor effects. We found that viable, but not pasteurization-inactivated, Lactobacillus or its culture supernatant markedly delayed the onset of OPC induced by 4NQO ( Figure 4A). It is reported that Lactobacillus produces shortchain fatty acids (SCFAs), which reduce the expression and function of intestinal P-glycoprotein (P-gp) and upregulate breast cancer resistance protein (BCRP), leading to tumor suppression via the inhibition of HDAC/NF-κB and activation of PPARγ (Xie et al., 2021). To determine whether SCFAs mediate the anti-tumor effect of Lactobacillus, we analyzed SCFA contents in the glossopharyngeal tissue of mice treated with or without Lactobacillus by gas chromatography-mass spectrometry. We found that levels of acetate, propionate and butyrate in the glossopharyngeal tissue of Lactobacillus-treated and Lactobacillus supernatant-treated mice were substantially increased, whereas no difference was identified in the levels of isobutyrate, isovalerate, valerate, and caproate ( Figures 4B-H). Compared with mice treated with the medium, acetate, propionate and butyrate in the serum of the mice treated with Lactobacillus supernatant increased significantly, and there was no statistical difference in the remaining SCFAs ( Supplementary Figures S2A-G). Among SCFAs tested, acetate possessed a highest concentration (Figures 4B-H; Supplementary Figures S2A-G). To assess the role of SCFA(s) secreted by Lactobacillus in slowing the progression of OPC, we evaluated the anti-tumor effect of acetate. We found that treatment with 150 mM acetate (Xie et al., 2021) in drinking water for 16 weeks significantly slowed down 4NQO-induced OPC development ( Figure 5A), increased the weight of mice ( Figure 5B) and reduced the number of tumors ( Figure 5C). Together, our results support the notion that acetate is a critical mediator in Lactobacillus-reduced the OPC progression.
Metabolites of Lactobacillus Enhance Anti-Tumor Immunity Against Oropharyngeal Cancer in Mice
The infiltration and activation of immune cells in the tumor microenvironment affect the occurrence and development of tumors, especially that lower CD8 + T cell infiltration directly promotes tumor growth (He et al., 2021). T cell-mediated adaptive immunity protects against the malignant transformation. Oral microecological dysbiosis diminishes the immune system and hence contributes to the occurrence of oral cancer and OPC (Bhatt et al., 2017). To examine the effect of acetate on immune microenvironment of OPC, we analyzed CD8 + T lymphocytes in 4NQO-induced OPC treated with or without acetate. We observed an increased infiltration of CD8 + cytotoxic T cells into the sublesional areas of mice that received a simultaneous treatment of acetate and 4NQO, as compared with lesions in mice treated with 4NQO alone (Figures 6A,B). In addition, levels of IFN-γ and granzyme B were significantly higher in OPCs from mice fed with acetate and 4NQO than those in mice treated with 4NQO alone (Figures 6C,D). With the increase of CD8 + cytotoxic T cells in the lesions from mice treated with acetate, lower degree of dysplasia in oropharyngeal tissues of 4NQO-treated mice was observed, supporting our notion that Lactobacillus secretes acetate to slow down the OPC progression by enhancing the anti-tumor immunity.
Lactobacillus Metabolites Activate GPR43 to Enhance Anti-Tumor Immunity via Increasing the Production of IFN-γ-Inducible Chemokines SCFA-mediated activation of G protein-coupled receptor 43 (GPR43) promotes the expression of IFN. GPR43 deficiency leads to decreased IFN production stimulated by SCFAs (Antunes et al., 2019). We found that the transcription and expression of GPR43 in OPC tumors in mice treated with acetate and 4NQO were strikingly increased as compared with those in mice treated with 4NQO alone (Figures 7A,B). To determine the role of GPR43 in the production of IFN-γ in OPC tissues from acetate-treated mice, we treated mice with GPR43 inhibitor GLPG0974 once a week for 16 weeks, during which acetate and 4NQO were administrated. As expected, treatment of acetate significantly elevated the level of IFN-γ in OPC tissues of mice. However, IFN-γ expression was significantly reduced in OPC tissues from Acetate+4NQO mice with the addition of GLPG0974 ( Figure 7C). IFN-γ-inducible CXCL9, CXCL10 and CXCL11 are chemokines for the activation of cytotoxic T cells, which attract more cytotoxic T cells to infiltrate into tumor tissues and exert antitumor effects (Gandhi et al., 2021). We found that inhibition of GPR43 with GLPG0974 significantly diminished the expression of CXCL9, CXCL10 and CXCL11 in OPC tissues from acetate and 4NQO-treated mice, leading to a reduction in the infiltration of CD8 + T cells and promoting the OPC progression ( Figures 7D-H). In brief, the Lactobacillus metabolites enhanced the expression of IFN-γ and IFN-γ-inducible chemokines in tumor tissues by activating GPR43, thereby promoting the infiltration of CD8 + T cells and inhibiting the OPC development ( Figure 8).
DISCUSSION
4NQO, a precursor carcinogen, is an aromatic amine heterocyclic compound, which is metabolized in two pathways in the body.
The first metabolic pathway is the formation of near-carcinogen 4-hydroxyaminoquinoline-1-oxide through the catalyzation of 4NQO reductase, which is further metabolized into final carcinogen 4-acetylaminoquinoline-1-oxide by prolylation. Finally, it binds to the nucleophilic structure of the target organ's DNA to form DNA admixture, leading to G→A transformation in the 12th codon of H-ras gene on mouse chromosome 7 and chromosome damage (Makita et al., 1996). The second pathway is the detoxification of 4NQO, which forms glutathione (GHT) conjugates by glutathione transferase S (GSTs) in the body (Li et al., 1997). Imbalance of the two metabolic pathways is the main reason that 4NQO induces tumors. 4NQO reductase plays a key role in the carcinogenic process of 4NQO. Tanaka et al. reported that 4NQO reductase bears the highest content in the glossopharynx mucosa of mice (Tanaka et al., 1997). We induced tumors in mice at 8 weeks old and formed tumors 16 weeks after induction. This process spans the animal growth period (4-8 weeks after birth), sexual maturity, body maturity and middle and old age, and the tumorigenesis process is long, similar to the natural course of human OPC. We induced OPC with 4NQO in BABL/c mice, and the severity of the lesions increased gradually with the prolongation of 4NQO treatment. It has undergone the pathological process of mild dysplasia, moderate dysplasia, severe dysplasia, carcinoma in situ, and early invasive carcinoma, indicating that the pathological Frontiers in Cell and Developmental Biology | www.frontiersin.org March 2022 | Volume 10 | Article 842153 8 process and results of the oropharyngeal mucosa in mice are similar to human OPC. The 12-week and 16-week exposure are corresponded to the early and middle-advanced stages of the mucosal lesions respectively, with a consistency in lesions among individuals. Thus, the model should be able to successfully reflect the early and middle-advanced stages of OPC.
The oral cavity is one of the microbial reservoirs of the body. About 700 kinds of microorganisms are colonized to maintain the microecological balance in the oral cavity (Zhang et al., 2019). If the homeostasis is broken, it will cause a variety of oral diseases, such as caries, periodontal disease, oral cancer, etc. (Aas et al., 2005;Lamont et al., 2018). Besides, it is also associated with diseases other than oral cavity and oropharyngeal disorders, such as cardiovascular diseases, pancreatic cancer, Alzheimer's disease, etc. (Koren et al., 2011;Lee et al., 2017;Gaiser et al., 2019). Microorganisms are involved in pathogenesis of multiple cancers, such as H. pylori in gastric cancer (Mager, 2006), Chlamydia trachomatis in cervical cancer, and Fusobacterium nucleatum in colon cancer (Kostic et al., 2012). The role of oral flora in the development and progression of OPC is still unclear. In this study, we found that the diversity and composition of the oral microbiota in OPC mice were significantly different from those in the control mice through 16S rRNA sequencing and bioinformatics analysis. Firmicutes, Proteobacteria, and Bacteroidetes are three phyla with the highest abundance in the mice oral cavity, which constituted the dominant community, according with previous studies (Pushalkar et al., 2011;Guerrero-Preston et al., 2016). The relative abundance of Firmicutes decreased in OPC, whereas the Proteobacteria increased significantly. Compared with the control mice, OPC mice showed an abundance in Streptococcus, Veillonella, Muribacter, Rodentibacter, and Gemella, and a decline in Lactobacillus. Lactobacillus was the dominant genus, which gradually decreased with the disease severity, indicating that the genus may be closely related to OPC.
With the gradual aggravation of dysplasia, the diversity and abundance of the oral flora in OPC mice were significantly higher than those in control mice. The reason for the alteration in OPC mice may be due to the lessen of Lactobacillus, the dominant genus, which leads to the colonization of a variety of uncommon conditional pathogens, causing the increase of the diversity and abundance of oral bacteria. Interestingly, Guerrero-Preston et al. collected saliva samples from patients with oral squamous cell carcinoma and found that the diversity of the microbial community in the saliva of patients with oral squamous cell carcinoma was significantly reduced compared to healthy subjects (Guerrero-Preston et al., 2016). In another study, Hu et al. collected saliva samples from oral squamous cell carcinoma and found that microorganisms in the saliva of these patients were more diverse than those in healthy subjects (Hu et al., 2016). The types of microorganisms in saliva are greatly affected by factors such as saliva flow rate, secretion volume, and pH value, which may contribute to the difference in diversity and abundance of oral bacteria. We detected the microbial colonization of glossopharyngeal tissue, including bacteria in the mucosa and saliva, with a wider detection range and more representative. The microbial community structure in OPC mice was significantly different from that in the control and oral microbiota dysbiosis existed in mice with OPC.
Tumor-bearing mice showed significant decline in Lactobacillus, as compared with control mice. We further demonstrated that the reduction of Lactobacillus played an important role in the occurrence and development of OPC. We treated the mice with Lactobacillus and 4NQO simultaneously for 16 weeks, and observed that the carcinogenesis of the mouse glossopharyngeal tissue was significantly suppressed ( Figure 3C), indicating that supplementation of Lactobacillus could slow down the cancerous process of OPC.
Microbiota regulates host immunity and hence promotes antitumor response. CD8 + T cells is a critical effector for anti-tumor immunity. However, it remains unclear whether microbiota directly regulates the function of anti-tumor cytotoxic CD8 + T cells. In the current study, we demonstrated that microbiota directly promotes anti-tumor CD8 + T cell immunity, and Lactobacillus slowed down the OPC process through its metabolites, especially acetate. SCFAs, such as acetate, can bind to GPR43, with varying affinities to promote cellular effects in metabolism or changes in immune function (Docampo et al., 2021). In addition, GPR43 promotes the expression of IFN (Antunes et al., 2019). IFN-regulated gene transcription is increased and correlates with the extent of immune cell infiltration, indicating that IFN-related immune responses play an important role in immune surveillance (Wenzel et al., 2008). It is known that the supernatant derived from tumors can increase the frequency of CD8 + T cells that produce IFN-γ (Khou et al., 2020). IFN-γ-inducible CXCL9, CXCL10 and CXCL11 are chemokines for cytotoxic T cells, which attract more cytotoxic T cells to infiltrate into tumor tissues and exert anti-tumor effects (Gandhi et al., 2021). Interestingly, we found that acetate upregulated the transcription and expression of GPR43, accompanied by an increase in the FIGURE 8 | Lactobacillus metabolites suppress the development of OPC through anti-tumor immunity. Lactobacillus metabolites increased the expression of IFN-γ and IFN-γ-inducible chemokines CXCL9, CXCL10 and CXCL11 in oropharyngeal tissues by activating GPR43, thereby promoting the infiltration of CD8 + T cells and the secretion of IFN-γ and Granzyme B and inhibiting the OPC development.
Frontiers in Cell and Developmental Biology | www.frontiersin.org March 2022 | Volume 10 | Article 842153 production of IFN-γ and IFN-γ-inducible CXCL9, CXCL10 and CXCL11, enhanced the infiltration of CD8 + T lymphocytes, and delayed tumorigenesis of OPC, suggesting that acetate generated by microorganisms could promote anti-tumor immunity to sufficiently improve the therapeutic efficacy. Therefore, supplement of probiotics or application of microbial metabolites could be an essential procedure for the improvement of anti-tumor immunity. Whether microbiota directly or indirectly regulates the anti-tumor CD8 + T cell response or other microbial metabolites participated in the anti-tumor immunity in OPC needs further research.
DATA AVAILABILITY STATEMENT
The authors declare that all the data supporting the findings of this study are available within the paper and its Supplementary Materialfiles.
ETHICS STATEMENT
The animal study was reviewed and approved by The institutional animal care and use committee of Henan University in China.
AUTHOR CONTRIBUTIONS
Y-PJ designed the study, and wrote the manuscript. K-KW, K-YH, J-YY, M-JL, and J-RG performed the experiments. J-YL and J-HW explained and discussed the data. Z-XX contributed to the conception and writing. All authors read and approved the final manuscript.
FUNDING
This work was supported by grants from the National Natural Science Foundation of China (Nos. 82020108024 and 81772924).
Supplementary Figure S2 | SCFAs levels in the serum of mice treated with Lactobacillus culture supernatant. Lactobacillus culture supernatant (Lac-Medium) or Medium alone was dripped into mouth of mice treated with 4NQO for 16 weeks. SCFAs in murine serum were quantified as described in Materials and Methods. (A-G) Levels (µg/ml) of acetate, propionate, isobutyrate, butyrate, isovalerate, valerate, and caproate in sera of the mice. Data represent mean ± SEM. * p < 0.05, n = 5, compared with Medium. | 6,840.4 | 2022-03-01T00:00:00.000 | [
"Biology"
] |
Exploiting Position and Contextual Word Embeddings for Keyphrase Extraction from Scientific Papers
Keyphrases associated with research papers provide an effective way to find useful information in the large and growing scholarly digital collections. In this paper, we present KPRank, an unsupervised graph-based algorithm for keyphrase extraction that exploits both positional information and contextual word embeddings into a biased PageRank. Our experimental results on five benchmark datasets show that KPRank that uses contextual word embeddings with additional position signal outperforms previous approaches and strong baselines for this task.
Introduction
Keyphrase extraction is the task of automatically extracting a small set of descriptive words or phrases that can accurately summarize the topics discussed in a document Ng, 2010, 2014). Keyphrases are useful in many applications such as document indexing and summarization (Abu-Jbara and Radev, 2011;Qazvinian et al., 2010;Turney, 2003), topic tracking (Augenstein et al., 2017), contextual advertising (Yih et al., 2006), and opinion mining (Berend, 2011).
Most of the previous approaches to keyphrase extraction are either supervised or unsupervised. While supervised approaches perform generally better (Kim et al., 2013), the unsupervised ones have the advantage that they do not require large human-annotated corpora for training reliable models. Unsupervised keyphrase extraction methods usually use graph-based ranking algorithms such as PageRank that work on the word graph constructed from the target document (Mihalcea and Tarau, 2004). Various PageRank extensions have been proposed that incorporate different types of information (Wan and Xiao, 2008;Gollapalli and Caragea, 2014). For example, Wan and Xiao (2008) proposed to incorporate a local neighborhood of the target document into the graph construction, with the neighborhood being determined based on the textual similarity between documents. Liu et al. (2010) exploited topical information to select keyphrases from all major topics. More recently, Mahata et al. (2018) proposed a theme-weighted biased PageRank, called Key2Vec, for keyphrase extraction. In Key2Vec, a theme-vector is computed by averaging the embeddings of words and phrases from the title of a scientific document to capture its theme and the PageRank is biased based on the similarity of candidate words or phrases to the computed theme vector. However, this model is oblivious to the position of words in a scientific document, in which more important words appear not only frequently, but also close to the beginning of the document (Florescu and Caragea, 2017).
Inspired by the Transformer models (Vaswani et al., 2017) that infuse positional information into the word embeddings to produce embeddings with time signal, we propose an extension of Key2Vec that incorporates words' positions into a biased PageRank. Moreover, different from Mahata et al. (2018), who used non-contextual FastText embeddings (Mikolov et al., 2018), we propose to integrate SciBERT contextual embeddings (Beltagy et al., 2019) into our biased PageRank extension. Our contributions are as follows: (1) We propose KPRank, an unsupervised graph-based algorithm that exploits both the position of words in a document and the contextual word embeddings for computing a biased PageRank score for ranking candidate phrases; (2) We show empirically that infusing position information into our biased KPRank model yields better performance compared with its counterpart that does not use the position information. In addition, KPRank with contextual SciB-ERT embeddings performs better than FastTextbased KPRank; (3) Finally, we show that KPRank outperforms many previous unsupervised models.
Proposed Approach
In this section, we describe our unsupervised graphbased algorithm called KPRank, that exploits both position information of the words in a document along with contextual word embeddings for computing a biased PageRank score for each candidate word. Our approach consists of three steps: (1) candidate word selection and word graph construction; (2) word scoring by biased PageRank; and (3) candidate phrase formation.
Candidate Word Selection and Graph Construction
For a target doucment D, we first apply a partof-speech filter 1 and select only nouns and adjectives as candidate words, consistent with previous works (Gollapalli and Caragea, 2014;Mihalcea and Tarau, 2004;Wan and Xiao, 2008). We build a word graph G = (N, E) for D using the candidate words as nodes in G. N and E are the sets of nodes and edges, respectively. We consider an edge (n i , n j ) ∈ E between two nodes n i and n j in N if the words corresponding to these nodes appear within a window of k consecutive words in the content of D. We experimented with values of k from 1 to 10 and obtained best results with k = 10, which is consistent with (Wan and Xiao, 2008). The weight of an edge (n i , n j ), denoted as w ij , is computed based on the co-occurrence count of the two words within k consecutive words in D (k = 10). Here, we build undirected graphs because prior work (Mihalcea and Tarau, 2004;Liu et al., 2010) observed that the type of graph (directed or undirected) used to represent the text does not significantly influence the performance of the keyphrase extraction task.
Biased PageRank
Preliminaries. PageRank (Page et al., 1998) is a graph-based ranking algorithm that iteratively calculates the importance of each node in a graph through endorsements from its neighbors. For document D, we construct an undirected graph G as explained above. Initially, the score of each node in G is set to 1 |N | . This score is then iteratively updated using PageRank. That is, the score s for node n i is obtained by applying the equation: s(nj) (1) 1 We used Python's NLTK toolkit for POS tagging.
where O(n j ) = n k ∈Adj(n j ) w jk and Adj(n j ) is the set of all adjacent nodes of node n j ∈ N . p i is defined below. In order to prevent the PageRank from getting stuck in cycles or dead ends, a dumping factor α was added to Eq. (1) to allow the PageRank to randomly jump to any node in the graph (α = 0.85). Let p = [ p 1 , · · · , p i , · · · , p |N | ] be the probability distribution of randomly jumping to any node in the graph. For an unbiased PageRank, this is a uniform distribution, with p i = 1 |N | , for all i from 1 to |N |. For a biased PageRank, this probability distribution is not uniform, but rather the nodes in the graph are visited preferentially, with some nodes being visited more often than others, depending on the p i value for node n i (Haveliwala, 2003). Key2Vec is an example of (topic) biased PageRank for keyphrase extraction that computes p i for node n i using the cosine similarity between the embedding of word/phrase corresponding to node n i and a theme vector for the entire document, which corresponds to the aggregated word/phrase embeddings from the document's title (Mahata et al., 2018). That is, p i is higher for words/phrases that are topically (semantically) more similar to the overall theme vector for the document. Next, we describe our extension KPRank of Key2Vec.
KPRank. In our proposed approach, we calculate p i for node n i using two types of scores: theme (or topic) score and positional score. We multiply both scores to assign a final weight to node n i before running the biased PageRank algorithm. Both scores and their calculation are explained below.
To calculate the theme score (ts i ) for node n i ∈ N , we first calculate a theme vector (T D ) for document D. A theme vector is obtained by averaging SciBERT (Beltagy et al., 2019) word embeddings of adjectives and nouns from D's title. The theme score for node n i is calculated using the cosine similarity of the SciBERT word embedding corresponding to node n i and T D . The idea is to assign a higher score to a word if that word is closer to the theme (topic) of a given document. For obtaining word embeddings, for all the words with similar stemmed version (obtained with Porter stemmer), we averaged the contextualized word embeddings of a word obtained by using SciBERT. We used the title and abstract of a document as input to the SciBERT model. We also experimented with pretrained BERT (Devlin et al., 2018), and found that the performance of BERT-based KPRank and SciBERT-based KPRank are very similar. To calculate the positional score (ps i ) for node n i , we consider the set P i that contains all the positions of occurrence in the text of the word corresponding to node n i . Then, ps i is calculated as ps i = j∈P i 1 j . For example, for a word occurring on positions 1 and 10 in the text, its ps i score is 1 1 + 1 10 , whereas for a word occurring on position 100, its ps i score is 1 100 . The intuition behind this weighting scheme is to give higher weight to words appearing in the beginning of a document since in scientific writing, authors tend to use keyphrases very early in the document (even from the title) (Florescu and Caragea, 2017). Based on these considerations, the first position of a phrase/word and its relative position are also used in many supervised approaches as powerful features (Patel and Caragea, 2019;Hulth, 2003;Wu et al., 2005) To calculate the weight w i for n i , we perform multiplication of both the theme score (ts i ) and the positional score (ps i ). The intuition is that we give preference to words that appear near the beginning of the document and are more frequent as compared with less frequent words appearing later in document even though both words may be equally close to the theme of the document or may have similar theme score. The vector p is last set to the normalized weights for each node as follows: The biased PageRank scores for each node n i are finally calculated by iteratively applying Eq. (1) with p as in Eq. (2). Figure 1 shows the illustration of our approach. As can be seen, even though both n 1 and n 4 have similar theme score, final weights are different based on different positional scores.
In our experiments, the PageRank scores are updated until the difference between two consecutive iterations is ≤ 0.001 or for 100 iterations.
Candidate Phrases Formation
Candidate words appearing continuously in the document are concatenated to generate candidate phrases. We consider phrases with the regular expression (adjective)*(noun)+, of length up to four words, to generate candidate phrases. We used stemmed version of each word using Porter stemmer. We use POS tagger from Python's NLTK toolkit. The score for each candidate phrase is calculated by summing up the scores of its individual words (Wan and Xiao, 2008). The top-scoring phrases are output as predicted keyphrases for a given document.
Data
For evaluation, we use five datasets, which we describe below. We use the combination of controlled (author assigned) and uncontrolled (reader assigned) keyphrases as gold-standard phrases. We used uncontrolled keyphrases when available. Table 1 shows the summary of the datasets. SemEval (Kim et al., 2010) contains 288 research papers with a train and test split consisting of 188 and 100 papers, respectively.
Krapivin (Krapivin et al., 2009) contains 2,304 ACM research papers with full text and authorassigned keyphrases. Similar to (Meng et al., 2017), since the dataset does not have a train-test split, we sampled 400 papers as the test set.
NUS (Nguyen and Kan, 2007) contains 211 research papers. This dataset does not have a train and test split and it is relatively small. Hence, consistent with (Meng et al., 2017) we used entire dataset as the test set.
ACM (Patel and Caragea, 2019) contains 30,000 papers published in ACM conferences with a train and test split consisting of 10, 000 and 20, 000 papers, respectively. For each dataset we use its test set for evaluation.
Experimental Setup and Results
Evaluation metrics. To evaluate the performance of different methods, we use micro avg. F1-score. We report the performance for the top 5 and 10 candidate phrases returned by different methods as in (Meng et al., 2017). To create a word graph for a given document, we use its title and abstract. To match the predicted keyphrases with gold-standard keyphrases, we do exact match between the stemmed version of each.
The effect of position, contextual embeddings, and the comparison with previous works. To see the effect of positional information, we compare the performance of KPRank that uses contextual SciBERT (SB) embeddings along with positional information (denoted as KPRank(SB)) with that of its counterpart that does not use positional information (denoted as KPRank(SB−POS)). Moreover, to see the effect of contextual embeddings, we compare the performance of SciBERT-based KPRank (KPRank(SB)) with that of KPRank that uses FastText non-contextual word embeddings (Mikolov et al., 2018) (denoted as KPRank(FastText)). For FastText, we used pretrained 300 dimensional embeddings trained on subword information on Common Crawl. Note that KPRank(SciBERT) and KPRank(FastText) use positional information along with the theme score. Last, we compare the performance of KPRank with Tf-Idf and six PageRank based unsupervised methods as baselines: PositionRank (Florescu and Caragea, 2017), Key2Vec (Mahata et al., 2018), TextRank (Mihalcea and Tarau, 2004), SingleRank (Wan and Xiao, 2008), ExpandRank (Wan and Xiao, 2008), TopicRank (Bougouin et al., 2013).
Tables 2 shows these comparisons on SemEval, Inspec, Krapivin, NUS, and ACM. It can be seen from the table that adding position information shows much higher improvement in the performance of KPRank, i.e. KPRank(SB) substantially outperforms KPRank(SB−POS). Moreover, KPRank(SB) outperforms KPRank(FastText) on all the datasets except for Krapivin. Importantly, KPRank(SB) outperforms most baseline methods, including Key2Vec (by a large margin) e.g., on Se-mEval, KPRank(SB) achieves an F1@5 of 22.51% as compared with 17.54% achieved by Key2Vec. We can also notice from Table 2 that KPRank(SB) achieves comparable performance whenever any baseline method achieves the best performance. Figure 2 shows the confusion matrices of KPRank(SB) using @5 predictions on all five datasets. Each matrix is represented as a heat map, i.e., the darker the blue color the higher the value at that position and the darker the blue on the main diagonal, the more accurate the model is.
Organization design: The continuing influence of information technology
Drawing from an information processing perspective, this paper examines how information technology (IT) has been a catalyst in the development of new forms of organizational structures. [...] to the present environmental instability that now characterizes many industries. Specifically, the authors suggest that advances in IT have enabled managers to adapt existing forms and create new models for organizational design that better fit requirements of an unstable environment. [...]. IT has gone from a support mechanism to a substitute for organizational structures in the form of the shadow structure. [...] Gold-standard keyphrases: Organization design, Information processing perspective, Organizational structures, Environmental instability, Information technology Predicted keyphrases: Organization design, Information technology, Information processing perspective, Organizational structures, Organizational design, Organization, Information processing, Shadow structure, New forms, Bureaucratic structure Comparison with a supervised approach. Usually, the performance of the supervised keyphrase extraction models is better than the unsupervised models (Kim et al., 2013). We compare the performance of KPRank(SB) with the CRF based sequence classification model for the keyphrase extraction (Patel and Caragea, 2019) that uses word embeddings as features along with document specific features. The CRF model outperforms KPRank(SB) on all five datasets, e.g., CRF model achieves an F1 of 45.73% as compared with 25.76% achieved by KPRank(SB) on SemEval.
Anecdotal example. To see the quality of predicted phrases by the KPRank(SB), we randomly selected a paper from the Inspec dataset and evaluated the KPRank(SB) on it. We manually inspected the top-10 predictions by the KPRank(SB) and contrasted them with the gold-standard keyphrases. The title, abstract, gold-standard keyphrases and top-10 predicted keyphrases for this paper are shown in Figure 3. Precisely, in the figure, the cyan italic phrases shown in the text on the top of the figure represent gold-standard keyphrases, whereas the bottom of the figure shows gold-standard keyphrases and the top-10 predicted keyphrases by KPRank(SB) (shown in the order of their prediction). It can be seen from the figure that four out of five gold-standard keyphrases are present in the top-5 predicted keyphrases.
We can also see that KPRank(SB) did not predict gold-standard phrase "environmental instabily." A closer inspection of the document and both types of scores (theme score and positional score) assigned by KPRank(SB) to both constituent words of the gold-standard phrase that was not ranked in top-10 predictions revealed that these constituent words have lower values of theme score and they both appear only once in the document. Hence, the Pagerank algorithm will not boost these words. Inspecting other errors, we found that KPRank can fail to predict phrases that contain words that are less frequent in the document and their word embeddings are far from the theme vector.
Conclusion and Future Work
In this paper, we proposed a novel unsupervised graph-based algorithm, named KPRank, which incorporates both positional appearances of the words along with contextual word embeddings for computing a biased PageRank score for each candidate word. Our experimental results on five datasets show that incorporating position information into our biased KPRank model yields better performance compared with a KPRank that does not use the position information, and SciBERTbased KPRank usually outperforms FastText-based KPRank on this task. Moreover, KPRank outperforms strong baseline methods. In the future, it would be interesting to explore KPRank on other domains, such as Biology, and Social Science. | 4,090.8 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Unified understanding of nonparametric causality detection in time series
Most complex systems in the real world are driven by multiple interactions among components. Identifying these interactions is critical for understanding, forecasting, and controlling system-level phenomena. Transfer entropy (TE) and convergent cross mapping (CCM) are two widely-used approaches for nonparametric causality detection based on time-series data. However, the theoretical relationship between TE and CCM has not been formally explained. Here, we provide a theoretical formulation that links TE and CCM in an information-theoretic framework, showing that they have different definitions of causal influence. Furthermore, from the formulation, we propose a novel nonparametric causality test, named unified information-theoretic causality (UIC), which has lower data requirements than those of TE (due to robustness to noise) and lower false-positive rates than those of CCM. Numerical experiments confirmed that UIC outperforms TE and CCM in terms of the performance of causality detection for both linear and nonlinear dynamical systems. Finally, we demonstrate the practical importance of the conditional test based on UIC using numerical simulations with indirect causality and empirical data for microbial communities in irrigated water samples from rice fields. Our results contribute to a unified understanding of nonparametric causality tests in time series and the accurate reconstruction of complex real-world systems.
I. INTRODUCTION
Detection of cause-effect relationships among variables, events or objects is a fundamental topic in both natural and social sciences. Most attempts at causality detection are easily disturbed by confounding factors [1] and nonlinear dynamics [2]. These issues are particularly challenging in large systems where intervention is technically or ethically difficult or even impossible [3]. In recent years, nonparametric tests for time series have attracted attention as promising methods for detecting potential causal relations with the minimal need to intervene in the target system [2,4]. Time-series data support Granger's statistical concepts of causality: (1) the cause necessarily occurs before the effect, and (2) the cause contains information about the effect when accounting for confounding variables [5,6].
Nonparametric causality tests based on time-series data have been proposed from the perspectives of information theory and dynamical systems theory. In this study, we focus on two widely-used methods: transfer entropy (TE) [7] and convergent cross mapping (CCM) [2]. TE is an information-theoretic causality test, known as a nonparametric generalization of Granger causality [8]. TE has a clear mathematical definition of causal influence, which is advantageous for theoretical advances [9,10] and practical applications [3,11]. CCM is a nonparametric causality test developed mainly for nonlinear dynamical systems [2,12]. The most important feature of CCM is the use of time-delay embedding theorems [13], which allow for the detection of causal relations without addressing confounding variables embedded by state space reconstruction [2]. As the performance of these methods can be reversed depending on the system [2,14], it is essential to clarify the mathematical connection between TE and CCM in order to understand their advan-tages and disadvantages as causal detection in various systems.
Here, we provide a theoretical formulation that links TE and CCM in an information-theoretic framework. Furthermore, based on the formulation, we propose a novel nonparametric causality test, named unified information-theoretic causality (UIC). UIC incorporates the advantages of both TE and CCM, including (1) a clear mathematical definition of causal influence, (2) noise robustness for causal variables in nonlinear dynamical systems, and (3) efficient computation of statistical significance. Based on the previously developed extension of TE [9], we also introduce a conditional UIC test and apply it to an artificial model system and an empirical microbial system. In the numerical experiments, UIC outperformed TE and CCM for both linear and nonlinear dynamical systems, consistent with the theoretical results.
A. Setting
We consider a multivariate time series with x = {x t } as the effect variable and y = {y t } as the cause variable and test whether y t−p has a causal influence on x t with time lag p > 0. Under generic conditions, x t can be embedded by y t−p and x (E,τ ) t = {x t−τ , x t−2τ , ..., x t−Eτ } based on the time-delay embedding theorem [13,15]. Note that, in most real-world cases, the embedding dimensions (E > 0) and time interval (τ > 0) are not known a priori and need to be estimated [16].
B. TE and CCM
In the above setting, we give the definitions of TE and CCM following their original formulations. Using the joint probability p(x, y) and the conditional probability p(x|y), TE measures causal influence as the flow of information from a cause variable to an effect variable as follows [7]: . (1) Thus, the causal influence from y t−p to x t is tested by rejecting the null hypothesis, TE(y t−p → x t |x (E,τ ) t ) = 0. CCM evaluates the predictive performance of cross mapping (i.e., a nonparametric regression model based on the nearest neighbor method) to test for causal influence. Under the assumption of Gaussian noise, the predictive performance X(L, P ) is defined as follows [2]: where L and P are the sets of time indices for state space reconstruction (i.e., model training) and prediction (i.e., model cross-validation), respectively. x L = {x t |t ∈ L} and y L = {y t |t ∈ L}. w denotes the model parameters (e.g., projection weights). Note that p(w|x t , x (E,τ ) t , x L ) and p(y t−p |y L , w) describe the process where nearest neighbors are searched from the effect variable and the process where the predictive performance of the cause variable is evaluated based on the searched nearest neighbors, respectively. Sugihara et al. [2] measured causal influence by the improvement in predictive performance with increasing training data (so-called "convergence") as follows: where L min is the set of time indices of training data at the minimum size (minimum library length) [16]. As the training data size approaches zero, we can prove that the expectation of causal influence defined by Eq. (2) is equivalent to that of the following mutual information (see Supplemental Material S1): which is considered the information-theoretic definition of the CCM causal influence.
C. Unified Information-theoretic Causality
Using the information-theoretic definitions, here we clarify the theoretical connection between TE and CCM. We define the following novel measure of causal influence, hereafter referred to as unified informationtheoretic causality (UIC): . (4) By the Bayes' rule, p(y|x, z)p(x|z) = p(x|y, z)p(y|z), we can immediately prove the equivalence between TE and UIC from Eqs. (1) and (4). If there are enough data to accurately estimate the conditional probabilities, we will have the same value of causal influence for TE and UIC. It is also obvious from Eqs. (3) and (4) that CCM and UIC (or TE) are equivalent only if y t−p and x (E,τ ) t are independent. In general, CCM tends to perform better for nonlinear systems because this independence can be more satisfied due to long-term unpredictability. CCM is more likely to detect false causality than TE [3,14] because y t−p may have causal influence on x t via x (E,τ ) t (i.e., the previous states of x t ).
In CCM and UIC, x and y show complete separation as conditional and predicted variables for conditional probabilities. This variable separation is not trivial for nonparametric causality tests for two reasons. First, it relates to noise robustness because noise of conditional variables has a significant effect on the estimation of causal influence, particularly in nonlinear dynamical systems. As CCM and UIC do not include any cause variables (i.e., y t−p ) as conditional variables, they show greater robustness to noise than that of TE. Second, variable separation allows us to efficiently generate surrogate data (see Supplemental Material S3 for details), which are required to evaluate the statistical significance of causal influences [17]. These theoretical expectations are indeed confirmed in the following numerical experiments.
D. Numerical Experiments
Using artificial model systems, we compared the detection performance of three causality tests. Synthetic timeseries were generated by numerical simulations from two dynamical systems with varying noise: a nonlinear logistic model and linear vector autoregression (VAR) model (Supplemental Material S5). The performance of causality tests was evaluated by the area under the receiveroperator characteristic curve (AUC). We found that UIC outperforms TE and CCM for both nonlinear and linear systems (Fig. 1). The performance of CCM was similar to that of UIC for the nonlinear system (Fig. 1a). However, for the linear system, with an increasing number of time points, the improvement in performance was slower for CCM than for TE and UIC (Fig. 1b) because false positives do not decrease in CCM; even with an infinite number of observations, CCM did not necessarily detect true causality (i.e., a lack of statistical consistency). Consistent with its theoretical expectation, TE was quite sensitive to the noise levels of causal variables for the nonlinear dynamical system and thus requires a considerable number of time points for robust causality detection (Fig. 1a).
III. DEVELOPMENT OF CONDITIONAL CAUSALITY TESTS
In most large real-world systems, the causal effect between two variables can be detected by causality tests even when there are no direct relations between them. There are two possible scenarios for the misidentification of causal effects. In the first scenario, the causal effect is indirectly mediated by third variables; for example, when y t−2 affects z t−1 , which affects x t , we find the causal effect of y t−2 on x t . In the second scenario, target variables are strongly driven by external forces, which are never embedded by time-delayed variables. In this case, an external force z t−2 affects both y t−1 and x t , resulting in the spurious detection of a causal effect of y t−1 on x t . The misidentifications of causality are ameliorated by conducting the causality test conditional on third variables. In the following section, we introduce the conditional UIC test based on previous studies of TE [9].
A. Conditional UIC
The measure of causal influence conditional on third variables, z = {z t }, is defined as follows: The difference from unconditional UIC defined by Eq. 4 is that the conditional probabilities include z t as conditional variables. By conducting two causal tests: we can examine whether the causal influence of y t−p on x t is indirectly mediated by z t or not.
B. Applications
First, we applied the conditional UIC test to a fourspecies food chain model to demonstrate the identification of direct and indirect effects [12]. In the food chain model, one species directly affects another species at time-lag 1 along a transitive causal chain ( Fig. 2; Supplemental Material S5). Using conditional UIC tests expressed by conditions (6), we successfully detected the direct causal effects at time lag 1 (red circle in Fig. 2; P < 0.05 using a surrogate-based significance test). However, when using only unconditional UIC tests, we detected many indirect effects (blue circle in Fig. 2) in addition to direct effects.
Next, we applied the conditional UIC test to bacterial DNA time-series data collected from an experimental rice field [18]. These time-series data were obtained by quantifying DNA copy numbers for over 1000 taxa from irrigated water in experimental rice plots. In this application, we chose 20 abundant bacterial taxa with sufficient temporal fluctuations (see Supplemental Material S6 for more details). Conditional UIC detected 16 direct interactions and 28 indirect interactions among these bacterial taxa. To investigate the effects of identifying direct and indirect interactions on dynamical properties, we further estimated the signs and strengths of these interactions by S-map [19,20]. Then, we evaluated the dynamic stability of the system [21] in two situations: (1) the system is incorrectly assumed to be driven by both direct and indirect interactions (Fig. 3a) and (2) the system is correctly assumed to be driven by only direct interactions (Fig. 3b). In this analysis, the sign was reversed for 4 of 16 direct interactions due to a failure to distinguish between indirect and direct interactions. These interactions had weak (i.e., near-zero) strengths, and their estimates may be influenced by other strong interactions. Using the conditional UIC, the system was identified as more stable because the reconstructed network eliminated suspicious indirect interactions (lower panels in Fig. 3), suggesting that the time-series analyses that do not distinguish between direct and indirect effects will misestimate dynamical properties such as system stability.
IV. DISCUSSION
In this paper, we develop a unified framework for nonparametric causal tests based on information-theoretic formulations and propose a novel causality test (i.e., UIC), establishing the theoretical connection between TE and CCM. Numerical experiments reveal that UIC shows higher robustness to noise in time-series data and have lower data requirements than TE and CCM (Fig. 1). Importantly, UIC outperforms TE and CCM in both linear and nonlinear dynamical systems. These results support the real-world applications of UIC because we do not know a priori whether TE or CCM is an appropriate causality test in weakly nonlinear systems.
In addition to its high performance, our method has two key advantages over conventional approaches. First, UIC has access to previous theoretical advances in both information theory [6,10] and dynamical systems theory [22,23]. As such, our framework facilitates the development of causality tests in an integrative way, with the help of both theories. Second, our method enables efficient computations of statistical significance. Specifically, we can reduce the computational requirements for searching nearest neighbors for state space reconstruction, which is the greatest computational challenge in nonparametric causality tests (Supplemental Material S3). Because recent studies have focused on large systems with thousands of components or more [18], efficient computation is important for real-world applications.
Interestingly, despite the mathematical equivalence of their definitions, TE and UIC show distinct statistical behavior. This arises from the following three characteristics of the nonparametric causality tests we studied: (1) the use of time-delayed variables for state space re- construction, (2) the need to search nearest neighbors to obtain nonparametric estimators, and (3) the need to separately calculate two (conditional) probabilities and their ratio to obtain causal influence. In contrast to (1) and (2), the last characteristic may depend on the algorithm used in nonparametric causality tests. For example, the Frenzel-Pompe algorithm [9] gives an identical value for causal influence for both TE and UIC. Although comparing computational algorithms is beyond the scope of our study, we stress the caveat that TE and UIC might have similar statistical behavior when implemented using other computational algorithms.
To understand complex systems in the real world, it is critical to identify interactions among components. For example, under anthropogenic stress, ecosystems sometimes exhibit unexpected regime shifts, including decreases in net primary production [24,25] and mass bleaching of corals [26], due to transitions in dynamical structure emergent from interacting components [27]. With improved ecosystem monitoring techniques [28], nonparametric causality detection allows us to capture such critical transitions, providing a basis for preventing unexpected regime shifts and supporting ecosystem management to maintain biodiversity and ecosystem functions. Our method can improve the application of causality detection to real-world complex systems, thereby contributing to human well-being in a changing world. | 3,518.2 | 2023-07-13T00:00:00.000 | [
"Computer Science"
] |