Research trend on sustainable architecture: a bibliometric analysis emphasizing building, material, façade, and thermal keywords
Studies on sustainability have continued to increase recently due to the global trend and movement towards greener lifestyles. The keywords building, material, façade, and thermal represent current issues in sustainable architecture. This study aims to map trends in sustainable architecture research for specific keywords and periods. The publishing period 1976-2020 with particular limitations applied yields 859 publication documents to be analyzed. Scopus' on-site analysis system and VOSviewer were employed to generate graphical and visual analyses of research trends. The analysis identifies five significant research clusters: energy, façade technology, measurement, thermal, and climate. Besides the clusters, the analysis also indicates that most of the countries involved are from Europe. Hence, this study points to potential research on similar topics in tropical countries, where the demand for sustainable buildings encounters a humidity challenge. This study fills a gap in bibliometric analysis regarding sustainable architecture with specific keywords and publishing period.
Introduction
The high demand for green building concepts brings a new environmentally-friendly culture to the construction industry. The government and stakeholders are expected to take this issue seriously and commit to lasting sustainable construction to establish better environmental, social, and economic conditions [1].
Complex decisions and environmentally thoughtful design are required to meet sustainable living standards [2]. In research on the interpretation of sustainable building, Berardi [3] clarifies that a building qualifies as sustainable if it contributes to all sustainability domains holistically. However, despite all its promising benefits, the green building movement still finds difficulties in realization, particularly in developing countries. A lack of information, unclear regulation, and insufficient marketing strategies for green products lead to a sluggish green building movement in developing countries [4]. Hence, the government and the green building council should encourage stakeholders to promote the concept through certification, regulation, and rewards [5].
In green building, material is an inseparable part of establishing sustainability. Research has discussed the contribution of materials to green building through the use of earthen materials [6]. Efficient material usage also considerably reduces material waste, promoting sustainability in materials beyond earthen ones [7]. Recent technology, Building Information Modelling (BIM), supports sustainable material usage [8]: it comprises information on volumes and the material selected for each building element [9] and supports waste management [10]. Materials are also entangled with the greenhouse effect [11]. Glass, a typical envelope-wall material for high-rise building façades, can contribute to wasted energy. Issues with glass envelope walls include poor insulation and reliance on artificial ventilation [12].
Hence, the envelope wall as a high-rise building façade remains an appealing research subject. Preventing thermal transfer while retaining glass as the envelope wall is a challenge for architects. Strategies to obtain thermal comfort include double-layer glass walls with angled sun blinds and an air-circulated space in between [13]. Meanwhile, wall materials continue to be updated with the latest technologies and formulations to prevent thermal transfer, such as glazing and prismatic glass [14]. Research regarding building, material, façade, and thermal topics has proven a desirable theme to study. Thermal comfort, mitigation of heat transfer in buildings, and the pursuit of a better environment are the purposes of such research, and this research trend will continue to develop in the future. Bibliometric analysis is a legitimate research method using big data from Scopus, Web of Science, Dimensions, or PubMed.
The keywords mentioned in the title concern the architectural and construction industry. Li et al. [15] analyzed two decades of publications about green building sourced from the Web of Science Core Collection database. Their research revealed that thermal comfort and natural ventilation are appealing issues for green building themes [15]. Lima et al. [16] conducted a bibliographic analysis using VOSviewer to identify potential future research in the construction industry. Analyzing big data from literature databases is fruitful for bridging historical gaps, understanding research evolution, and predicting future research [15,16]. This research therefore aims to map and visualize research on the keywords building, material, façade, and thermal in the Scopus database. The outcome enables future researchers to envision potential and active research themes.
Methodology
This study employs bibliometric analysis to provide knowledge mapping and visualize the maps graphically [17]. Bibliometric analysis likewise illustrates the geographical sources and identifies leading authors and affiliations involved in particular research keywords [18]. The data used in this study were acquired from Scopus and comprise citation information, abstracts, and keywords. As a preliminary step, four keywords determined by the authors during discussion were used to explore the Scopus database: building, material, façade, and thermal. These keywords, explained in the introduction, are central to current research on sustainability in modern urban architecture. The Scopus database returned 984 documents for these keywords when accessed on June 13, 2021. These documents span publications from 1976-2021, numerous subject areas, various source types, and two final publication stages.
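The document screening that follows can also be reproduced programmatically on a Scopus CSV export. The minimal Python sketch below is only an equivalent post-hoc filter, not the authors' actual procedure (in practice the limits were applied within Scopus before export); the column names such as "Year" and "Language of Original Document" are assumed defaults of the Scopus export.

```python
import pandas as pd

# Load the raw Scopus export (citation information, abstract and keywords).
docs = pd.read_csv("scopus_export.csv")

# L1: restrict the publishing period to 1976-2020.
docs = docs[(docs["Year"] >= 1976) & (docs["Year"] <= 2020)]

# L4: keep English-language documents only.
docs = docs[docs["Language of Original Document"] == "English"]

# (L2 and L3, subject area and publication stage, were applied within Scopus
#  before export and are not repeated here.)

print(f"{len(docs)} documents retained for VOSviewer")
docs.to_csv("scopus_screened.csv", index=False)
```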
The exclusion sets limit the documents before exporting the data for meta-analysis purposes. The limitations (L) encompass year (L1), subject area (L2), publication stage (L3), and language (L4) (table 1). After searching and screening the document results, the data were exported to a CSV file, including citation information and abstracts & keywords. The extracted CSV file was then processed using VOSviewer to generate a meta-analysis of 859 publications. VOSviewer allows the researchers to understand the meta-analysis visually by presenting label view, density view, cluster density view, and scatter view [19].
Result and discussion
Publications peaked in 2018, the year of a work revealing glass innovation for façades that became the most cited document outright [23]. Nevertheless, in the following years the number of publications decreased only slightly (figure 1). By mid-2021, publications for 2021 had already reached 68 documents, indicating that the total for the whole of 2021 could exceed that of 2018. Regarding the subject areas of the publications found in Scopus with the four keywords, table 2 indicates that engineering has the most documents, followed by energy, material science, and environmental science. Table 3 shows that Energy and Buildings is the journal with the most publications in this research area among the screened documents.
Countries involvement in research
European countries dominate the ten most active countries in research on these four keywords. China and the US are the only countries outside the European continent among the top ten. Italy leads as the most involved country, with 121 research documents on Scopus. Research in Italy mainly focuses on achieving thermal comfort in buildings through the use of vegetation, natural ventilation, and material innovation [24,25,26]. The meta-analysis through VOSviewer presents the network between the countries involved in research with the keywords building, material, façade, and thermal.
Keyword analysis
Author keywords are used in this study to recognize the relations between keywords and to indicate clusters. The analysis requires a minimum of five occurrences of a keyword; VOSviewer thus visualizes the 74 keywords meeting the threshold (figure 5). The meta-analysis of the selected Scopus documents generates 9 clusters indicated by different colours. Manual labelling is required to designate the clusters based on the items they contain (Table 4). Five clusters are the most influential and show inter-cluster linkage. The most prominent cluster, red, labelled Energy, possesses the most keywords in the research area. Energy efficiency is the hottest item in this label and has the broadest linkage outright: energy efficiency is tightly associated with thermal concerns, façade, and material. The second cluster, the green dots labelled Façade Technology, contains façade and shading innovations to mitigate heat transfer. The blue dots form the third cluster, labelled Measurement because of the assessment and simulation items it contains. Fourth, the yellow dots are labelled Thermal since they are closely associated with natural ventilation and thermal comfort. The fifth cluster, the purple dots, is labelled Climate, since the urban heat island is the main issue in this cluster. The most frequently publishing authors in this research area are listed in Table 6, based on the documents they published on Scopus.
"Economics",
"Education"
] |
Constraining Electroweakinos in the Minimal Dirac Gaugino Model
Supersymmetric models with Dirac instead of Majorana gaugino masses have distinct phenomenological consequences. In this paper, we investigate the electroweakino sector of the Minimal Dirac Gaugino Supersymmetric Standard Model (MDGSSM) with regards to dark matter (DM) and collider constraints. We delineate the parameter space where the lightest neutralino of the MDGSSM is a viable DM candidate, which makes up at least part of the observed relic abundance while evading constraints from DM direct detection, LEP and low-energy data, and LHC Higgs measurements. The collider phenomenology of the thus emerging scenarios is characterised by the richer electroweakino spectrum as compared to the Minimal Supersymmetric Standard Model (MSSM) -- 6 neutralinos and 3 charginos instead of 4 and 2 in the MSSM, naturally small mass splittings, and the frequent presence of long-lived particles, both charginos and/or neutralinos. Reinterpreting ATLAS and CMS analyses with the help of SModelS and MadAnalysis 5, we discuss the sensitivity of existing LHC searches for new physics to these scenarios and show which cases can be constrained and which escape detection. Finally, we propose a set of benchmark points which can be useful for further studies, designing dedicated experimental analyses and/or investigating the potential of future experiments.
Introduction
The lightest neutralino [1][2][3] in supersymmetric models with conserved R-parity has been the prototype for particle dark matter (DM) for decades, motivating a multitude of phenomenological studies regarding both astrophysical properties and collider signatures. The ever tightening experimental constraints, in particular from the null results in direct DM detection experiments, are however severely challenging many of the most popular realisations. This is in particular true for the so-called well-tempered neutralino [4] of the Minimal Supersymmetric Standard Model (MSSM), which has been pushed into blind spots [5] of direct DM detection. One sub-TeV scenario that survives in the MSSM is bino-wino DM [6][7][8][9], whose discovery is, however, very difficult experimentally [10][11][12].
It is thus interesting to investigate neutralino DM beyond the MSSM. While a large literature exists on this topic, most of it concentrates on models where the neutralinos, or gauginos in general, have Majorana soft masses. Models with Dirac gauginos (DG) have received much less attention, despite excellent theoretical and phenomenological motivations. The phenomenology of neutralinos and charginos ("electroweakinos" or "EW-inos") in DG models is indeed quite different from that of the MSSM. The aim of this work is therefore to provide up-to-date constraints on this sector for a specific realisation of DGs, within the context of the Minimal Dirac Gaugino Supersymmetric Standard Model (MDGSSM). The colourful states in DG models can be easily looked for at the LHC, even if they are "supersafe" compared to the MSSM, see e.g. [47,58,60-68]. The properties of the Higgs sector have been well studied, and also point to the colourful states being heavy [38,56,59,69-71]. However, currently there is no reason that the electroweak fermions must be heavy, and so far the only real constraints on them have come from DM studies. Therefore we shall begin by revisiting neutralino DM, previously examined in detail in [72] (see also [73,74]), which we update in this work. We will focus on the EW-ino sector, considering the lightest neutralino χ̃⁰₁ as the Lightest Supersymmetric Particle (LSP), and look for scenarios where the χ̃⁰₁ is a good DM candidate in agreement with relic density and direct detection constraints. In this, we assume that all other new particles apart from the EW-inos are heavy and play no role in the phenomenological considerations.
While the measurement of the DM abundance and the limits on its interactions with nuclei have improved since previous analyses of the model, our major new contribution shall be the examination of up-to-date LHC constraints, in view of DM-collider complementarity. For example, certain collider searches are optimal for scenarios that can only over-populate the relic density of dark matter in the universe, so by considering both together we obtain a more complete picture.
Owing to the additional singlet, triplet and octet chiral superfields necessary for introducing DG masses, the EW-ino sector of the MDGSSM comprises six neutralinos and three charginos, as compared to four and two, respectively, in the MSSM. More concretely, one obtains pairs of bino-like, wino-like and higgsino-like neutralinos, with small mass splittings within the bino (wino) pairs induced by the couplings λ_S (λ_T) between the singlet (triplet) fermions and the Higgs and higgsino fields. As we recently pointed out in [66], this can potentially lead to a long-lived χ̃⁰₂ due to a small splitting between the bino-like states. Moreover, as we will see, one may also have long-lived charginos.
The results of our study are presented in section 4. We first delineate the viable parameter space where the lightest neutralino of the MDGSSM is at least part of the DM of the universe, and then discuss consequences for collider phenomenology. Re-interpreting ATLAS and CMS searches for new physics, we characterise the scenarios that are excluded and those that escape detection at the LHC. In addition, we give a comparison of the applicability of a simplified models approach to the limits obtained with a full recasting. We also briefly comment on the prospects of the MATHUSLA experiment. In section 5 we then propose a set of benchmark points for further studies. A summary and conclusions are given in section 6.
The appendices contain additional details on the implementation of the parameter scan of the EW-ino sector (appendix A.2), and on the identification of parameter space wherein lie experimentally acceptable values of the Higgs mass (appendix A.3). Finally, in appendix A.4, we provide some details on the reinterpretation of a 139 fb −1 EW-ino search from ATLAS, which we developed for this study.
Classes of models
Models with Dirac gaugino masses differ in the choice of fields that are added to extend those of the MSSM, and also in the treatment of the R-symmetry. Both of these have significant consequences for the scalar ("Higgs") and EW-ino sectors. In this work, we shall focus on constraints on the EW-ino sector in the MDGSSM. Therefore, to understand the potential generality of our results, we shall here summarise the different choices that can be made in other models, before giving the details for ours.
To introduce Dirac masses for the gauginos, we need to add a Weyl fermion in the adjoint representation of each gauge group; these are embedded in chiral superfields S, T, O which are respectively a singlet, triplet and octet, and carry zero R-charge. Some model variants neglect a field for one or more gauge groups, see e.g. [28,75]; limits for those cases will therefore be very different.
The Dirac mass terms are written via the supersoft [16] operators involving the supersymmetric gauge field strengths W^{iα} and the adjoint superfields. It is possible to add Dirac gaugino masses through other operators, but this leads to a hard breaking of supersymmetry unless the singlet field is omitted, see e.g. [55]. On the other hand, whether we add supersoft operators or not, the difference appears in the scalar sector (the above operators lead to scalar trilinear terms proportional to the Dirac mass), so this would not make a large difference to our results.
There are then two classes of Dirac gaugino models: ones for which the R-symmetry is conserved, and those for which it is violated. If it is conserved, with the canonical example being the MRSSM, then since the gauginos all carry R-charge, the EW-inos must be exactly Dirac fermions. For a concise review of the EW sector of the MRSSM see [50]; we also discuss it later in appendix A.1. However, in that class of models the phenomenology is different to that described here. The second major class of models is those for which the R-symmetry is violated. This includes the minimal choices in terms of numbers of additional fields, namely the SOHDM [28], the "MSSM without µ term" [76] and the MDGSSM, as well as extensions with more fields, e.g. to allow unification of the gauge couplings, such as the CMDGSSM [69,74]. The constraints on the EW-ino sectors of these models should be broadly similar. Crucially, in these models, in contrast to those where the EW-inos are exactly Dirac, the neutralinos are pseudo-Dirac Majorana fermions. This means that they come in pairs with a small mass splitting, in particular between the neutral partner of a bino or wino LSP and the LSP itself. This has significant consequences for dark matter in the model, as has already been explored in e.g. [72,74]: coannihilation occurs naturally. However, we shall also see here that it has significant consequences for the collider constraints: the decays from χ̃⁰₂ to χ̃⁰₁ are generally soft and hard to observe, and lead to a long-lived particle in some of the parameter space.
Electroweakinos in the MDGSSM
Here we shall summarise the important features of the EW-ino sector of the MDGSSM. Our notation and definitions are essentially identical to [72], to which we refer the reader for a more complete treatment.
The MDGSSM can be defined as the minimal extension of the MSSM allowing for Dirac gaugino masses. We add one adjoint chiral superfield for each gauge group, and nothing else: the field content is summarised in Table 1. We also assume that there is an underlying R-symmetry that prevents R-symmetry-violating couplings in the superpotential and supersymmetry-breaking sector, except for an explicit breaking in the Higgs sector through a (small) B µ term. This was suggested in the "MSSM without µ-term" [76] as such a term naturally has a special origin through gravity mediation; it is also stable under renormalisation group evolution, as the B µ term does not induce other R-symmetry violating terms.
The singlet and triplet fields can have new superpotential couplings, λ_S and λ_T, with the Higgs fields. These new couplings may or may not have an underlying motivation from N = 2 supersymmetry, which has been explored in detail in [59]. After electroweak symmetry breaking (EWSB), we obtain 6 neutralino and 3 chargino mass eigenstates (as compared to 4 and 2, respectively, in the MSSM). In the neutralino mass matrix M_N of eq. (3), s_W = sin θ_W, s_β = sin β and c_β = cos β; tan β = v_u/v_d is the ratio of the Higgs vevs; m_DY and m_D2 are the 'bino' and 'wino' Dirac mass parameters; µ is the higgsino mass term; and λ_S and λ_T are the couplings between the singlet and triplet fermions and the Higgs and higgsino fields. By diagonalising eq. (3), one obtains pairs of bino-like, wino-like and higgsino-like neutralinos, with small mass splittings within the bino or wino pairs induced by λ_S or λ_T, respectively. For instance, if m_DY is sufficiently smaller than m_D2 and µ, we find mostly bino/U(1)-adjoint χ̃⁰₁,₂ as the lightest states, with a mass splitting given by eq. (4); alternative approximate formulae for the mass splitting in other cases were also given in [72].
Turning to the charged EW-inos, the chargino mass matrix of eq. (5) can give a higgsino-like χ̃± as in the MSSM, but we now have two wino-like χ̃±, the latter again with a small splitting driven by λ_T. A wino LSP therefore consists of a set of two neutral Majorana fermions and two Dirac charginos, all with similar masses.
Note that in both eqs. (3) and (5), Majorana mass terms are absent, since we assume that the only source of R-symmetry breaking in the model is the B µ term. If we were to add Majorana masses for the gauginos, or supersymmetric masses for the singlet/triplet fields, then they would appear as diagonal terms in the above matrices (see e.g. [72] for the neutralino and chargino mass matrices with such terms included), and would generically lead to larger splitting of the pseudo-Dirac states.
Parameter scan
We now turn to the numerical analysis. Focusing solely on the EW-ino sector, the parameter space we consider is spanned by the EW-ino sector parameters, with ranges specified in eq. (6). The rest of the sparticle content of the MDGSSM is assumed to be heavy, with slepton masses fixed at 2 TeV, soft masses of the 1st/2nd and 3rd generation squarks set to 3 TeV and 3.5 TeV, respectively, and gluino masses set to 4 TeV. The rest of the parameters are set to the same values as in [66]; in particular, trilinear A-terms are set to zero. The mass spectrum and branching ratios are computed with SPheno v4.0.3 [77,78], using the DiracGauginos model [79] exported from SARAH [80-83]. This is interfaced to micrOMEGAs v5.2 [84-86]² for the computation of the relic density, direct detection limits and other constraints explained below. To efficiently scan over the EW-ino parameters, eq. (6), we implemented a Markov Chain Monte Carlo (MCMC) Metropolis-Hastings algorithm that walks towards the minimum of the negative log-likelihood function, − log(L), defined in eq. (7) in terms of the following quantities:
• χ²_Ωh² is the χ²-test of the computed neutralino relic density compared to the observed relic density, Ωh²_Planck = 0.12 [87]. In a first scan, this is implemented as an upper bound only.
• p_X1T is the p-value for the parameter point being excluded by the XENON1T results [88]. The confidence level (CL) being given by 1 − p_X1T, a value of p_X1T = 0.1 (0.05) corresponds to 90% (95%) CL exclusion. To compute p_X1T, the LSP-nucleon scattering cross sections are rescaled by a factor Ωh²/Ωh²_Planck.
• m LSP is the mass of the neutralino LSP, added to avoid the potential curse of dimensionality. 3 In order to explore the whole parameter space, a small jump probability is introduced which prevents the scan from getting stuck in local minima of − log(L). We ran several Markov Chains from different, randomly drawn starting points; the algorithm is outlined step-by-step in Appendix A.2.
2 More precisely, we used a private pre-release version of micrOMEGAs v5.2, which does however give the same results as the official release. 3 Due to the exponential increase in the volume of the parameter space, one risks having too many points with an m LSP at the TeV scale. Current LHC searches are not sensitive to such heavy EW-inos.
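A minimal sketch of such a Metropolis-Hastings walk is given below. It is only illustrative: the function neg_log_L is a toy stand-in (in the real scan each evaluation runs SPheno and micrOMEGAs, cf. eq. (7)), and the parameter names, ranges, step size and jump probability are assumptions rather than the values actually used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative EW-ino parameters and scan ranges (assumptions; see eq. (6) for the real ones).
PARAMS = ["mDY", "mD2", "mu", "tanb", "lamS", "lamT"]
RANGES = {"mDY": (10, 2000), "mD2": (10, 2000), "mu": (10, 2000),
          "tanb": (2, 50), "lamS": (-1.5, 1.5), "lamT": (-1.5, 1.5)}

def neg_log_L(point):
    # Toy stand-in: in the real scan this calls SPheno + micrOMEGAs and returns
    # chi2(Omega h^2) (upper-bound only) + the XENON1T term + the m_LSP term.
    return ((point["mu"] - 1000.0) / 200.0) ** 2 + 1e-3 * point["mDY"]

def propose(point, frac=0.05):
    """Gaussian step with width equal to a fraction of each parameter range."""
    new = {}
    for p in PARAMS:
        lo, hi = RANGES[p]
        new[p] = float(np.clip(point[p] + rng.normal(0, frac * (hi - lo)), lo, hi))
    return new

def run_chain(start, n_steps=10000, p_jump=0.01):
    point, nll = start, neg_log_L(start)
    chain = []
    for _ in range(n_steps):
        # Occasional random jump to escape local minima of -log(L).
        cand = ({p: rng.uniform(*RANGES[p]) for p in PARAMS}
                if rng.random() < p_jump else propose(point))
        cand_nll = neg_log_L(cand)
        # Metropolis acceptance: always accept downhill, sometimes uphill.
        if cand_nll < nll or rng.random() < np.exp(nll - cand_nll):
            point, nll = cand, cand_nll
        chain.append((dict(point), nll))
    return chain

# Usage: one chain from a random starting point (several such chains would be run in practice).
start = {p: rng.uniform(*RANGES[p]) for p in PARAMS}
samples = run_chain(start, n_steps=1000)
```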
The light Higgs mass, m h , also depends on the input parameters, and it is thus important to find the subset of the parameter space where it agrees with the experimentally measured value. Instead of including m h in the likelihood function, eq. (7), that guides the MCMC scan, we implemented a Random Forest Classifier that predicts whether a given input point has m h within a specific target range. As the desired range we take 120 < m h < 130 GeV, assuming m h ≃ 125 GeV can then always be achieved by tuning parameters in the stop sector. Points outside 120 < m h < 130 GeV are discarded. This significantly speeds up the scan. Details on the Higgs mass classifier are given in Appendix A.3.
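A minimal sketch of such a classifier with scikit-learn is shown below; the training-sample files, feature set and hyperparameters are illustrative assumptions (the actual setup is described in Appendix A.3).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed files of previously computed points: input parameters and the SPheno m_h values.
X = np.load("scan_inputs.npy")
mh = np.load("scan_mh.npy")
y = ((mh > 120.0) & (mh < 130.0)).astype(int)   # label = 1 if m_h is in the target window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# During the MCMC scan, candidate points predicted to fall outside the target m_h
# window are discarded before running the full spectrum calculation.
def mh_in_window(candidate_params):
    return bool(clf.predict(np.asarray(candidate_params).reshape(1, -1))[0])
```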
In the various MCMC runs we kept for further analysis all points scanned over that pass the constraints described above. With this procedure, many points with a very light LSP, in the mass range below m_h/2 and even below m_Z/2, are retained. We therefore added two more constraints a posteriori. Namely, we require for valid points that
6. ∆ρ lies within 3σ of the measured value ∆ρ_exp = (3.9 ± 1.9) × 10⁻⁴ [89], the 3σ range being chosen in order to include the SM value of ∆ρ = 0;
7. signal-strength constraints from the SM-like Higgs boson as computed with Lilith-2 [90] give a p-value of p_Lilith > 0.05; this eliminates in particular points with m_LSP < m_h/2, where the branching ratio of the SM-like Higgs boson into neutralinos or charginos is too large.
Points which do not fulfil these conditions are discarded. We thus collect in total 52550 scan points, which fulfil all constraints, as the basis for our phenomenological analysis.
Treatment of electroweakino decays
As argued above, and as will become apparent in the next section, many of the interesting scenarios in the MDGSSM feature the second neutralino and/or the lightest chargino very close in mass to the LSP. With mass splittings of O(1) GeV, decays into χ̃⁰,±₁ plus soft pion(s) or a photon become important. These decays were in the first case not implemented, and in the second case not treated correctly, in the standard SPheno/SARAH. We therefore describe below how these decays are computed in our analysis; the corresponding modified code is available online [91].⁴ Note that the precise calculation of the chargino and neutralino decays is important not only for the collider signatures (influencing branching ratios and decay lengths), but can also impact the DM relic abundance and/or direct detection cross sections.
4 We leave the decays χ̃⁰ᵢ to χ̃±ⱼ + pion(s) to future work.
Chargino decays into pions
When the mass splitting between the chargino and the lightest neutralino becomes sufficiently small, three-body decays via an off-shell W-boson start to dominate. However, when ∆m ≲ 1.5 GeV it is not accurate to describe the W* decays in terms of quarks; instead we should treat the final states as one, two or three pions (with kaon final states being Cabibbo-suppressed), and for ∆m < m_π the hadronic channel is closed. Surprisingly, these decays have not previously been fully implemented in spectrum generators; SPheno contains only decays to single pions from neutralinos or charginos in the MSSM via an off-shell W or Z boson, and SARAH does not currently include even these. A full generic calculation of decays with mesons as final states for both charged and neutral EW-inos (and its implementation in SARAH) will be presented elsewhere; for this work we have adapted the results of [92-94], which include only the decays via an off-shell W. There, m̃₋ and m̃₀ denote the masses of the chargino and neutralino, respectively. The couplings of the W-boson to the light quarks and the W mass are encoded in G_F; in SARAH we make the corresponding substitution in terms of the coupling of the up and down quarks to the W-boson.
While the single-pion decay can be simply understood in terms of the overlap of the axial current with the pion, the two- and three-pion decays proceed via exchange of virtual mesons which then decay to pions. The form factors for these processes are determined by QCD, and so, working at leading order in the electroweak couplings, we can use experimental data for processes involving the same final states; in this case we can use τ decays. The two-pion decays are dominated by ρ and ρ′ meson exchange, and the form factor F(q²) was defined in eqs. (A3) and (A4) of [93]. The expressions for the Breit-Wigner propagator BW_a of the a₁ meson (and not the a₂ meson as stated in [92-94]), which dominates 3π production, as well as for the three-pion phase-space factor g(q²), can be found in eqs. (3.16)-(3.18) of [95].
As in [92-94] we use the propagator without "dispersive correction", and so include a factor of 1.35 to compensate for the underestimate of τ⁻ → 3πν_τ decays by 35%. Note finally that the three-pion decay includes both π⁻π⁰π⁰ and π⁻π⁻π⁺ modes, which are assumed to be equal. In Figure 1 we compare our results to those of [92-94] for charginos and neutralinos whose masses can be easily tuned relative to each other by changing the bino mass.
In Figure 2 we show the equivalent expressions evaluated in the case of interest for this paper, where there are no Majorana masses for the gauginos. We take tan β = 34.664, µ = 2 TeV, v_T = −0.568 GeV, v_S = 0.92 GeV, λ_S = −0.2, 2λ_T = 0.2687, m_D2 = 200 GeV, and vary m_DY between 210 and 221 GeV. We find identical behaviour for both models, except that the overall decay rate is slightly different; note that in this scenario the χ̃⁰₂ is almost degenerate with the χ̃⁰₁, so we include decays of the χ̃±₁ to both states of the pseudo-Dirac LSP. Finally, we implemented the decays of neutralinos to single pions via an analogous expression, where now m̃₁,₂ are the masses of χ̃⁰₁,₂ and c_L, c_R are the couplings of the neutralinos to the Z-boson, defined analogously to the above; since the neutralino is Majorana in nature we must have c_R = −c_L*.
Neutralino decays into photons
In the MDGSSM, the mass splitting between the two lightest neutralinos is naturally small.⁵ Therefore, in a significant part of the parameter space, the dominant χ̃⁰₂ decay mode is the loop-induced process χ̃⁰₂ → χ̃⁰₁ + γ. This is controlled by an effective dipole operator with coefficient C₁₂ connecting the two (Majorana) neutralino spinors, which yields the decay width in terms of |C₁₂| and the mass splitting. Our expectation (and indeed what we find for most of our points) is that |C₁₂| ∼ 10⁻⁵-10⁻⁶ GeV⁻¹. This loop decay process is calculated in SPheno/SARAH using the routines described in [96]. However, we found that the spin-structure summation for fermionic two-body decays involving photons or gluons was not handled correctly. Suppose we have an S-matrix element M for a decay F(p₁) → F(p₂) + V(p₃), with the vector having wavefunction ε_µ; then we can decompose the amplitude according to its Lorentz structures (putting v_i for the antifermion wavefunctions) into four scalar amplitudes {x_i}. This is the decomposition made in SARAH, which computes the values of the amplitudes {x_i}. Now, if V is massless, then since M is an S-matrix element the Ward identity requires (p₃)_µ M^µ = 0 (note that this requires that we include self-energy diagrams in the case of charged fermions), and this leads to two equations relating the {x_i}, involving the masses m₁ and m₂ of the first and second fermion, respectively. Performing the spin and polarisation sums naively yields the expression of eq. (17). When we substitute in the Ward identities and re-express everything in terms of just x₁, x₂, we obtain instead eq. (18). The latter yields real, positive-definite widths for any value of the matrix elements x₁, x₂, whereas this is not manifestly true for eq. (17). Therefore, as of SARAH version 4.14.3, we implemented the spin summation for loop decay matrix elements as given in eq. (18), i.e. in such decays we compute the Lorentz structures corresponding to x₁, x₂ and ignore x₃, x₄. This applies to all such loop-induced radiative decays.
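To get a rough feeling for the resulting lifetimes, a partial width can be converted into a proper decay length via cτ = ħc/Γ. The sketch below does this for the radiative decay using the generic dipole-transition estimate Γ ≃ |C₁₂|² (m₂² − m₁²)³ / (8π m₂³); this formula, the overall normalisation, and the numerical inputs are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

HBARC_MM = 1.9732698e-13  # hbar*c in GeV*mm

def gamma_dipole(c12, m2, m1):
    """Radiative width chi2 -> chi1 + gamma from a transition dipole moment c12 [GeV^-1].
    Generic dipole-transition estimate; the exact prefactor depends on the operator
    normalisation, so treat this as an order-of-magnitude number."""
    return c12**2 * (m2**2 - m1**2)**3 / (8.0 * np.pi * m2**3)

def ctau_mm(width_gev):
    """Proper decay length in mm for a given width in GeV."""
    return HBARC_MM / width_gev

# Example with illustrative numbers: a 200 GeV bino pair split by 50 MeV, |C12| = 1e-5 GeV^-1.
m1, dm, c12 = 200.0, 0.05, 1e-5
gam = gamma_dipole(c12, m1 + dm, m1)
print(f"Gamma ~ {gam:.2e} GeV, ctau ~ {ctau_mm(gam):.3g} mm")  # roughly a few cm
```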
Properties of viable scan points
We are now in the position to discuss the results from the MCMC scans. We begin by considering the properties of theχ 0 1 as a DM candidate. Figure 3(a) shows the bino, wino and higgsino composition of theχ 0 1 when only an upper bound on Ωh 2 is imposed; all points in the plot also satisfy XENON1T (p X1T > 0.1) and all other constraints listed in section 3.1. We see that cases where theχ 0 1 is a mixture of all states (bino, wino and higgsino) are excluded, while cases where it is a mixture of only two states, with one component being dominant, can satisfy all constraints. Also noteworthy is that there are plenty of points in the low-mass region, m LSP < 400 GeV. Figure 3(b) shows the points where theχ 0 1 makes for all the DM abundance. This, of course, imposes much stronger constraints. In general, scenarios with strong admixtures of two or more EW-ino states are excluded and the valid points are confined to the corners of (almost) pure bino, wino or higgsino. Similar to the MSSM, the higgsino and especially the wino DM cases are heavy, with masses ≳ 1 TeV, and only about a 5% admixture of another interaction eigenstate; in the wino case, the MCMC scan gave only one surviving point within the parameter ranges scanned over. Light masses are found only for bino-like DM; in this case there can also be slightly larger admixtures of another state: concretely we find up to about 10% wino or up to 35% higgsino components. As mentioned, we assume that all other sparticles besides the EW-inos are heavy. Hence, co-annihilations of EW-inos which are close in mass to the LSP must be the dominating processes to achieve Ωh 2 of the order of 0.1 or below. The relation between mass, bino/wino/higgsino nature of the LSP, relic density and mass difference to the next-to-lightest sparticle (NLSP) is illustrated in Figure 4. The three panels of this figure show m LSP vs. Ωh 2 for the points from Figure 3(a), where the LSP is > 50% bino, wino, or higgsino, respectively. The NLSP-LSP mass difference is shown in colour, while different symbols denote neutral and charged NLSPs. Two things are apparent besides the dependence of Ωh 2 on mχ0 1 for the different scenarios: 1. All three cases feature small NLSP-LSP mass differences. For a wino-like LSP, this mass difference is at most 3 GeV. For bino-like and higgsino-like LSPs it can go up to nearly 25 GeV, though for most points it is just few GeV.
2. The NLSP can be neutral or charged; that is, in all three cases we can have either mass ordering, mχ̃⁰₂ < mχ̃±₁ or mχ̃±₁ < mχ̃⁰₂.
For bino-like LSP points outside the Z and Higgs-funnel regions, a small mass difference between the LSP and NLSP is however not sufficient; co-annihilations with other nearby states are required to achieve Ωh² ≤ 0.132. Indeed, as shown in Figure 5, we have m_D2 ≈ m_DY, with typically m_D2/m_DY ≈ 0.9-1.4, over much of the bino-LSP parameter space outside the funnel regions. This leads to bino-wino co-annihilation scenarios as also found in the MSSM. The scattered points with large ratios m_D2/m_DY have µ ≈ m_DY, i.e. a triplet of higgsinos close to the binos. Outside the funnel regions, the bino-like LSP points therefore feature mχ̃±₁ − mχ̃⁰₁ ≲ 30 GeV and mχ̃⁰₃,₄ − mχ̃⁰₁ ≲ 60 GeV in addition to mχ̃⁰₂ − mχ̃⁰₁ ≲ 20 GeV. For completeness, we also give the maximal mass differences found within the triplets (quadruplets) of higgsino (wino) states in the higgsino (wino) LSP scenarios. Concretely, in the higgsino LSP case we have mχ̃⁰₂ − mχ̃⁰₁ ≲ 15 GeV and mχ̃±₁ − mχ̃⁰₁ ≲ 50-10 GeV (decreasing with increasing mχ̃⁰₁). In the wino LSP case, the mass differences within the wino quadruplet remain small, mostly below 10 GeV. However, as noted before, either mass ordering, mχ̃⁰₂ < mχ̃±₁ or mχ̃±₁ < mχ̃⁰₂, is possible. (Panel captions of Figure 4: (a) LSP more than 50% bino; (b) LSP more than 50% wino; (c) LSP more than 50% higgsino.)
An important point to note is that the mass differences are often so small that the NLSP (and sometimes even the NNLSP) becomes long-lived on collider scales, i.e. it has a potentially visible decay length of cτ > 1 mm. This is illustrated in Figure 6, which shows in the left panel the mean decay length of the LLPs as a function of their mass difference to the LSP. Long-lived charginos will lead to charged tracks in the detector, while long-lived neutralinos could potentially lead to displaced vertices. However, given the small mass differences involved, the decay products of the latter will be very soft. The right panel of Figure 6 shows the importance of the radiative decay of long-lived χ̃⁰₂'s.
From the collider point of view, the bino-like DM region is perhaps the most interesting one, as it has masses below a TeV. We find that, in this case, the NLSP is always the χ̃⁰₂, with mass differences mχ̃⁰₂ − mχ̃⁰₁ ranging from about 0.2 GeV to 16 GeV. As already pointed out in [72,73], this small mass splitting helps achieve the correct relic density through χ̃⁰₁-χ̃⁰₂ co-annihilation. This is shown explicitly in the right panel of the same figure. Concretely, we have mχ̃±₁ − mχ̃⁰₁ ≲ 30 GeV and mχ̃⁰₃,₄ − mχ̃⁰₁ ≈ 10-60 GeV. Often, that is when the LSP has a small wino admixture, the χ̃±₂ is also close in mass. In most cases mχ̃±₁ < mχ̃⁰₃, although the opposite case also occurs. All in all, this creates peculiar compressed EW-ino spectra; they are similar to the bino-wino DM scenario in the MSSM, but with additional nearby states.
Finally, we show the scattering cross sections on protons, with the p-value from XENON1T indicated in colour. While the bulk of the points has cross sections that should be testable in future DM direct detection experiments, there are also a few points with cross sections below the neutrino floor. We note in passing that the scattering cross section on neutrons (not shown) is not exactly the same in this model but can differ from that on protons by a few percent.
LHC constraints
Let us now turn to the question of how the DG EW-ino scenarios from the previous subsection can be constrained at the LHC. Before reinterpreting various ATLAS and CMS SUSY searches, it is important to point out that the cross sections for EW-ino production are larger in the MDGSSM than in the MSSM. For illustration, Figure 9 compares the production cross sections for pp collisions at 13 TeV in the two models. The cross sections are shown as a function of the wino mass parameter, with m_D2 = 1.2 m_DY (M_2 = 1.2 M_1) for the MDGSSM (MSSM); the other parameters are µ ≃ 1400 GeV, tan β ≃ 10, λ_S ≃ −0.29 and 2λ_T ≃ −1.40. While LSP-LSP production is almost the same in the two models, chargino-neutralino and chargino-chargino production is about a factor 3-5 larger in the MDGSSM, due to the larger number of degrees of freedom.
Constraints from prompt searches
SModelS
We start by checking the constraints from searches for promptly decaying new particles with SModelS [97-100]. The working principle of SModelS is to decompose all signatures occurring in a given model or scenario into simplified-model topologies, also referred to as simplified model spectra (SMS). Each SMS is defined by the masses of the BSM states, the vertex structure, and the SM and BSM final states. After this decomposition, the signal weights, determined in terms of cross sections times branching ratios, σ × BR, are matched against a database of LHC results. SModelS reports its results in the form of r-values, defined as the ratio of the theory prediction over the observed upper limit, for each experimental constraint that is matched in the database. All points for which at least one r-value equals or exceeds unity (r_max ≥ 1) are considered as excluded. The SLHA files produced with SPheno in our MCMC scan contain the mass spectrum and decay tables. For evaluating the simplified-model constraints with SModelS, the LHC cross sections at √s = 8 and 13 TeV are also needed. They are conveniently added to the SLHA files by means of the SModelS-micrOMEGAs interface [85], which moreover automatically produces the correct particles.py file to declare the even and odd particle content for SModelS. Once the cross sections are computed, the evaluation of LHC constraints in SModelS takes a few seconds per point, which makes it possible to check the full dataset of 52.5k scan points.
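The bookkeeping behind the exclusion statement can be illustrated with a few lines of Python; the analysis names and numbers below are invented for illustration, whereas in practice SModelS computes the theory predictions from the decomposed σ × BR and looks up the corresponding upper limits in its database.

```python
# sigma x BR predictions (in fb) for simplified-model topologies of one scan point,
# together with the observed 95% CL upper limits -- illustrative numbers only.
predictions = {
    "EWino-WZ-search": {"theory_fb": 3.2, "upper_limit_fb": 5.1},
    "EWino-WH-search": {"theory_fb": 1.8, "upper_limit_fb": 1.4},
}

def r_values(preds):
    """r = theory prediction / observed upper limit, per matched result."""
    return {name: d["theory_fb"] / d["upper_limit_fb"] for name, d in preds.items()}

rs = r_values(predictions)
r_max = max(rs.values())
print(rs, "-> excluded" if r_max >= 1.0 else "-> allowed")
```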
The results are shown in Figures 10 and 11. The left panels in Figure 10 show the points excluded by SModelS (r max ≥ 1), in the plane of mχ0 1 vs. mχ0 3,4 (top left) and mχ ± j vs. mχ0 3,4 (bottom left), the difference betweenχ 0 3,4 not being discernible on the plots. Points with binolike or higgsino-like LSPs are distinguished by different colours and symbols: light blue dots for bino-like LSP points and magenta/pink triangles for higgsino-like LSP points. There are no excluded points with wino-like LSPs.
As can be seen, apart from two exceptions, all bino LSP points excluded by SModelS lie in the Z or h funnel region and have almost mass-degenerate heavier charginos and neutralinos. (Figure 10 caption: the left panels show the excluded points, r_max ≥ 1, in the mχ̃⁰₁ vs. mχ̃⁰₃,₄ (top) and mχ̃±ⱼ vs. mχ̃⁰₃,₄ (bottom) planes, with bino-like or higgsino-like LSP points distinguished by different colours and symbols as indicated in the plot labels. The right panels show the same mass planes but distinguish the signatures responsible for the exclusion by different colours/symbols; moreover, the region with r_max ≥ 0.5 is shown in yellow, and that covered by all scan points in grey.)
The excluded masses extend to about 750 GeV for wino-like χ̃⁰₃,₄ and to below about 500 GeV for higgsino-like χ̃⁰₃. In terms of soft terms, the excluded bino LSP points have m_D2 < 750 GeV or µ < 400 GeV, while the excluded higgsino LSP points have µ < 200 GeV and m_D2 < 500 GeV (see Figure 11).
The right panels of Figures 10 and 11 show the same mass and parameter planes as the left panels but distinguish the signatures responsible for the exclusion by different colours/symbols. We see that the W H + E_T^miss simplified-model results exclude only bino-LSP points in the h-funnel region, but can reach up to mχ̃⁰₃,₄ ≲ 750 GeV.
MadAnalysis 5
One disadvantage of the simplified model constraints is that they assume that charginos and neutralinos leading to W Z signatures are mass degenerate. SModelS allows a small deviation from this assumption, butχ ± iχ 0 j production with sizeable differences between mχ ± i and mχ0 j will not be constrained. Moreover, the simplified model results from [101][102][103][104] are cross section upper limits only, which means that different contributions to the same signal region cannot be combined (to that end efficiency maps would be necessary [98]). It is therefore interesting to check whether full recasting based on Monte Carlo event simulation can extend the limits derived with SModelS.
Here we use the recast codes [105][106][107] for Run 2 EW-ino searches available in MadAnalysis 5 [108][109][110][111]. For these analyses we again treat the two lightest neutralino states as LSPs, assuming the transitionχ 0 2 →χ 0 1 is too soft as to be visible in the detector. For the CMS 35.9 fb −1 analyses, we simulate all possible combinations ofχ 0 1,2 with the heavy neutralinos, charginos, and pair production of charginos; while to recast the analysis of [102] we must simulate pp →χ ± iχ 0 j>2 + njets, where n is between zero and two. The hard process is simulated in MadGraph5_aMC@NLO [115] v2.6 and passed to Pythia 8.2 [116] for showering. MadAnalysis 5 handles the detector simulation with Delphes 3 [117] with different cards for each analysis, and then computes exclusion confidence levels (1 − CL s ), including the combination of signal regions for the multi-lepton analysis. For the two 35.9 fb −1 analyses we simulate 50k events, and the whole simulation takes more than an hour per point on an 8-core desktop PC. For the ATLAS 139 fb −1 analysis, we simulate 100k events (because of the loss of efficiency in merging jets, and targeting only b-jets from the Higgs and in particular the leptonic decay channel of the W ) and each point requires 3 hours.
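For orientation, the exclusion measure quoted here, 1 − CL_s, can be illustrated for a single counting signal region, where CL_s = CL_{s+b}/CL_b is built from Poisson probabilities. The snippet below is only this schematic single-bin version with assumed numbers; MadAnalysis 5 computes the actual values from the simulated signal, the analysis' background model and uncertainties, including signal-region combination where likelihood information is available.

```python
from scipy.stats import poisson

def cls(n_obs, b, s):
    """Single-bin CLs: p-values of the s+b and b-only hypotheses from Poisson counts."""
    cl_sb = poisson.cdf(n_obs, s + b)   # P(n <= n_obs | s+b)
    cl_b = poisson.cdf(n_obs, b)        # P(n <= n_obs | b)
    return cl_sb / cl_b

# Illustrative numbers: 12 events observed, 10 expected background, 9 expected signal.
n_obs, b, s = 12, 10.0, 9.0
print("1 - CLs =", 1.0 - cls(n_obs, b, s))   # excluded at 95% CL if this exceeds 0.95
```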
The reach of collider searches depends greatly on the wino fraction of the EW-inos. Winos have a much higher production cross section than higgsinos or binos, and thus we can divide the scan points into those where m D2 is "light" and "heavy." The results are shown in Figure 12. They show the distribution of points in our scan in the mχ0 1 − mχ0 3 plane. In our model, there is always a pseudo-Dirac LSP, so the lightest neutralinos are nearly degenerate; for a higgsino-or wino-like LSP the lightest chargino is nearly degenerate with the LSP. However, mχ0 3 gives the location of the next lightest states, irrespective of the LSP type. In this plane we show the points that we tested using MadAnalysis 5, and delineate the region encompassing all excluded points.
For "light" m D2 < 900 GeV, nearly all tested points in the Higgs funnel are excluded by [102] up to mχ 3 = 800 GeV; the Z-funnel is excluded for mχ 3 ≲ 300 GeV. Otherwise we can find excluded points in the region mχ0 1 ≲ 200 GeV, mχ0 3 ≲ 520 GeV. While for small mχ0 3 − mχ0 1 the ATLAS-SUSY-2019-08 search [102] is not effective, at large values of mχ0 3 some points are excluded by this analysis, and others still by CMS-SUS-16-039 [112] and/or CMS-SUS-16-048 [114]. We note here that the availability of the covariance matrix for signal regions A of [112] is quite crucial for achieving a good sensitivity. It would be highly beneficial to have more such (full or simplified) likelihood data that allows for the combination of signal regions! For "heavy" m D2 > 700 GeV, 9 we barely constrain the model at all: clearly Z-funnel points are excluded up to about mχ0 3 = 260 GeV; but we only find excluded points for mχ0 1 ≲ 100 GeV, mχ 3 ≲ 300 GeV. Hence one of the main conclusions of this work is that higgsino/bino mixtures in this model, where m D2 > 700 GeV, are essentially unconstrained for mχ0 1 ≳ 120 GeV.
In general, as in [66], one may expect a full recast in MadAnalysis 5 to be much more powerful than a simplified-models approach. However, comparing the results from MadAnalysis 5 to those from SModelS, a surprisingly good agreement is found between the r-values from SModelS and the exclusion levels obtained for like searches (such as the W H + E_T^miss channel in the same analysis).¹⁰ Indeed, comparing with [112], in terms of the ratio r_MA5 of predicted over excluded (visible) cross sections, two example points give r_MA5 = 0.67 and 0.71, somewhat lower than the values from SModelS.
• The W H + E_T^miss signal for the two example points above splits up into several components (corresponding to different mass vectors) in SModelS, which each give r-values of roughly 0.3 but cannot be combined. The recast of ATLAS-SUSY-2019-08 [102] with MadAnalysis 5, on the other hand, takes the complete signal into account and gives 1 − CL_s = 0.77 for the first and 0.96 for the second point.
9 The regions are not disjoint only so that we can include the entire constrained reach of the Higgs funnel in the "light" plot; away from the Higgs funnel there would be no difference in the "light" m_D2 plot if we took m_D2 < 700 GeV.
10 We shall see this explicitly for some benchmark scenarios in section 5.
• The points excluded with MadAnalysis 5 but not with SModelS typically contain complex spectra with all EW-inos below about 800 GeV, which all contribute to the signal.
• Most tested points away from the Higgs funnel region, which are excluded with Mad-Analysis 5 but not with SModelS, have r max > 0.8.
• There also exist points which are excluded by SModelS but not by the recasting with MadAnalysis 5. In these cases the exclusion typically comes from the CMS EW-ino combination [104]; detailed likelihood information would be needed to emulate this combination in recasting codes.
It would be interesting to revisit these conclusions once more EW-ino analyses are implemented in full recasting tools, but it is clear that, since adding more luminosity does not dramatically alter the constraints, the SModelS approach can be used as a reliable (and much faster) way of constraining the EW-ino sector; and that the constraints on EW-inos in Dirac gaugino models are still rather weak, particularly for higgsino LSPs where the wino is heavy.
Constraints from searches for long-lived particles
As mentioned in section 4.1, a relevant fraction (about 20%) of the points in our dataset contain LLPs. Long-lived charginos, which occur in about 14% of all points, can be constrained by Heavy Stable Charged Particles (HSCP) and Disappearing Tracks (DT) searches. Displaced vertex (DV) searches could potentially be sensitive to long-lived neutralinos; in our case however, the decay products of long-lived neutralinos are typically soft photons, and there is no ATLAS or CMS analysis which would be sensitive to these.
We therefore concentrate on constraints from HSCP and DT searches. They can conveniently be treated in the context of simplified models. For HSCP constraints we again use SModelS, which has upper limit and efficiency maps from the full 8 TeV [118] and early 13 TeV (13 fb −1 ) [119] CMS analyses implemented. (The treatment of LLPs in SModelS is described in detail in Refs. [99,120].) A new 13 TeV analysis for 36 fb −1 is available from ATLAS [121], but not yet included in SModelS; we will come back to this below.
For the DT case, the ATLAS [122] and CMS [123] analyses for 36 fb⁻¹ provide 95% CL upper limits on σ × BR in terms of chargino mass and lifetime¹¹ on HEPData [124,125]. Here, σ × BR stands for the cross section of direct production of charginos, which includes χ̃±₁χ̃∓₁ and χ̃±₁χ̃⁰ production. There is also a new CMS DT analysis [126], which presents full Run 2 results for 140 fb⁻¹. At the time of our study, this analysis did not yet provide any auxiliary (numerical) material for reinterpretation. We therefore digitised the limit curves from Figures 1a-1d of that paper, and used them to construct linearly interpolated limit maps, which are employed in the same way as described in the previous paragraph. Since the interpolation is based on only four values of the chargino lifetime, τχ̃±₁ = 0.33, 3.34, 33.4 and 333 ns, this is however less precise than the interpolated limits for 36 fb⁻¹.
11 This is 95 < mχ̃±₁ < 600 GeV and 0.05 < τχ̃±₁ < 4 ns (15 < cτχ̃±₁ < 1200 mm) for the ATLAS analysis [122], and 100 < mχ̃±₁ < 900 GeV and 0.067 < τχ̃±₁ < 333.56 ns (20 < cτχ̃±₁ < 100068 mm) for the CMS analysis [123].
The results are shown in Figure 13 in the plane of chargino mass vs. mean decay length; on the left for points with long-lived charginos, on the right for points with long-lived charginos and neutralinos. Red points are excluded by the HSCP searches implemented in SModelS; orange points are excluded by DT searches. The HSCP limits from [118,119] eliminate basically all long-lived chargino scenarios with cτχ̃± ≳ 1 m up to about 1 TeV chargino mass. The exclusion by the DT searches [122,123] covers 10 mm ≲ cτχ̃±₁ ≲ 1 m and mχ̃±₁ up to about 600 GeV; this is only slightly extended to higher masses by our reinterpretation of the limits of [126].
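A minimal sketch of how such digitised curves can be turned into an interpolated limit map is given below. The numbers are placeholders, not the actual digitisation of [126], and the choice of interpolating linearly in mass and in log10 of the lifetime is our assumption.

```python
import numpy as np

# Digitised 95% CL upper limits on sigma x BR [fb] vs chargino mass [GeV],
# one curve per lifetime -- placeholder values only.
masses = np.array([100.0, 300.0, 600.0, 900.0])
curves = {  # lifetime [ns] -> upper limits [fb] at the masses above
    0.33:  np.array([500.0, 60.0, 15.0, 8.0]),
    3.34:  np.array([100.0, 12.0, 3.0, 1.5]),
    33.4:  np.array([150.0, 20.0, 5.0, 2.5]),
    333.0: np.array([800.0, 90.0, 25.0, 12.0]),
}
taus = sorted(curves)

def upper_limit(mass, tau):
    """Bilinear interpolation: linear in mass, linear in log10(lifetime)."""
    per_tau = np.array([np.interp(mass, masses, curves[t]) for t in taus])
    return float(np.interp(np.log10(tau), np.log10(taus), per_tau))

# A point is excluded if its predicted sigma x BR exceeds the interpolated limit.
print(upper_limit(450.0, 10.0))
```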
To verify the HSCP results from SModelS and extend them to 36 fb⁻¹, we adapted the code for recasting the ATLAS analysis [121] written by A. Lessa and hosted at https://github.com/llprecasting/recastingCodes. This requires simulating the hard process of single/double chargino LLP production with two additional hard jets, which was performed at leading order with MadGraph5_aMC@NLO. The above code then calls Pythia 8.2 to shower and decay the events, and applies the cuts. It uses experiment-provided efficiency tables for truth-level events rather than detector simulation, and therefore does not simulate the presence of a magnetic field. However, the code was validated by the original author for the MSSM chargino case and found to give excellent agreement.
We wrote a parallelised version of the recast code to speed up the workflow (available upon request); the bottleneck in this case is actually the simulation of the hard process (unlike for the prompt recasting case in the previous section), and our sample was simulated on one desktop. We show the result in Figure 14. For decay lengths cτχ̃±₁ > 1 m, the exclusion is very similar to that from SModelS, only slightly extending it in the region mχ̃±₁ ≈ 1 TeV.
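Since the per-point jobs are independent, the parallelisation itself is straightforward. A minimal sketch is shown below; the helper script run_hscp_recast.sh, which would wrap the MadGraph/Pythia/recasting chain for one parameter point, is hypothetical and stands in for whatever driver is used locally.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_point(slha_file):
    """Hypothetical per-point job: generate events, shower, and apply the HSCP recasting
    by shelling out to an assumed driver script; returns its textual output."""
    result = subprocess.run(["./run_hscp_recast.sh", slha_file],
                            capture_output=True, text=True, check=True)
    return slha_file, result.stdout.strip()

slha_files = [f"point_{i}.slha" for i in range(200)]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, out in pool.map(run_point, slha_files):
            print(name, out)
```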
Future experiments: MATHUSLA
We also investigated the possibility of seeing events in the MATHUSLA detector [127], which would be built O(100)m from the collision point at the LHC, and so would be able to detect neutral particles that decay after such a long distance. Prima facie this would seem ideal to search for the decays of long-lived neutralino NLSPs; pseudo-Dirac states should be excellent candidates for this (indeed, the possibility of looking for similar particles if they were of O(GeV) in mass at the SHiP detector was investigated in [128]). However, in our case the only states that have sufficient lifetime to reach the detector have mass splittings of O(10) MeV (or less), and decaysχ 0 2 →χ 0 1 + γ vastly dominate, with a tiny fraction of decays to electrons.
In the detectors in the roof of MATHUSLA the photons must have more than 200 MeV (or 1 GeV for electrons) to be registered. Moreover, it is anticipated to reconstruct the decay vertex in the decay region, requiring more than one track; in our case only one track would appear, and much too soft to trigger a response. Hence, unless new search strategies are employed, our long-livedχ 0 2 will escape detection.
Benchmark points
In this section we present a few sample points which may serve as benchmarks for further studies, designing dedicated experimental analyses and/or investigating the potential of future experiments. Parameters, masses, and other relevant quantities are listed in Tables 2 and 3.
For Point 3, the total relevant EW-ino production cross section is only 41 fb at √s = 13 TeV, compared to ≈ 2.6 pb for Point 2. Therefore, again, no relevant constraints are obtained from the current LHC searches. In particular, SModelS does not give any constraints from EW-ino searches but reports 34 fb as missing topology cross section, 64% of which is on account of W*(→ 2 jets or lν) + γ + E_T^miss signatures.
Point 4 (SPhenoDiracGauginos_2231) has bino and wino masses of the order of 600 GeV, similar to Point 3, but features smaller mass splittings. We also note that the χ̃⁰₂ is long-lived with a mean decay length of about 0.5 m; however, given the tiny mass difference to the χ̃⁰₁, its decay products are very soft. For the heavier benchmark points, with EW-ino masses above 1 TeV, the LHC production cross sections are very low, below 1 fb at 13-14 TeV. This is clearly a case for the high-luminosity (HL) LHC, or a higher-energy machine.
Point 7 (SPhenoDiracGauginos_37) is another higgsino DM point with mχ̃⁰₁ ≃ 1.1 TeV but small, sub-GeV mass splittings between the higgsino-like states, mχ̃⁰₂ − mχ̃⁰₁ ≃ 120 MeV and mχ̃±₁ − mχ̃⁰₁ ≃ 400 MeV. Co-annihilations between these nearly degenerate higgsino states are important for the relic density.
The SLHA files for these 10 points, which can be used as input for MadGraph, micrOMEGAs or SModelS, are available via Zenodo [130]. The main difference between the SLHA files for MadGraph5_aMC@NLO and micrOMEGAs is that the MadGraph5_aMC@NLO ones have complex mixing matrices, while the micrOMEGAs ones have real mixing matrices and thus neutralino masses can have a negative sign. The SModelS input files consist of masses, decay tables and cross sections in SLHA format but do not include mixing matrices. The CalcHEP model files for micrOMEGAs are also provided at [130]. The UFO model for MadGraph5_aMC@NLO is available at [79], and the SPheno code at [91].
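For convenience, such SLHA files can be inspected programmatically. A minimal sketch assuming the pyslha package is given below; the file name is a placeholder, and the PDG codes 1000022 and 1000023 are the standard MSSM-like codes for the two lightest neutralinos (the actual codes in the model files may differ).

```python
import pyslha  # assumed: the pyslha package by A. Buckley

doc = pyslha.read("SPhenoDiracGauginos_37.slha")   # placeholder file name

m_lsp = doc.blocks["MASS"][1000022]    # lightest neutralino
m_nlsp = doc.blocks["MASS"][1000023]   # second neutralino
print(f"m(chi0_1) = {m_lsp:.1f} GeV, splitting = {m_nlsp - m_lsp:.3f} GeV")

# Total width of the chi0_2 and its proper decay length in mm (hbar*c = 1.973e-13 GeV mm).
width = doc.decays[1000023].totalwidth
if width > 0:
    print(f"ctau(chi0_2) = {1.9732698e-13 / width:.3g} mm")
```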
Conclusions
Supersymmetric models with Dirac instead of Majorana gaugino masses have distinct phenomenological features. In this paper, we investigated the electroweakino sector of the Minimal Dirac Gaugino Supersymmetric Standard Model. The MDGSSM can be defined as the minimal Dirac gaugino extension of the MSSM: to introduce DG masses, one adjoint chiral superfield is added for each gauge group, but nothing else. The model has an underlying R-symmetry that is explicitly broken in the Higgs sector through a (small) B µ term, and new superpotential couplings λ S and λ T of the singlet and triplet fields with the Higgs. The resulting EW-ino sector thus comprises two bino, four wino and three higgsino states, which mix to form six neutralino and three chargino mass eigenstates (as compared to four and two, respectively, in the MSSM) with naturally small mass splittings induced by λ S and λ T . All this has interesting consequences for dark matter and collider phenomenology. We explored the parameter space where theχ 0 1 is a good DM candidate in agreement with relic density and direct detection constraints, updating previous such studies. The collider phenomenology of the emerging DM-motivated scenarios is characterised by the richer EW-ino spectrum as compared to the MSSM, naturally small mass splittings as mentioned above, and the frequent presence of long-lived charginos and/or neutralinos.
We worked out the current LHC constraints on these scenarios by re-interpreting SUSY and LLP searches from ATLAS and CMS, in both a simplified model approach and full recasting using Monte Carlo event simulation. While HSCP and disappearing track searches give quite powerful limits on scenarios with charged LLPs, scenarios with mostly E miss T signatures remain poorly constrained. Indeed, the prompt SUSY searches only allow the exclusion of (certain) points with an LSP below 200 GeV, which drops to about 100 GeV when the winos are heavy. This is a stark contrast to the picture for constraints on colourful sparticles, and indicates that this sector of the theory is likely most promising for future work. We provided a set of 10 benchmark points to this end.
We also demonstrated the usefulness of a simplified models approach for EW-inos, in comparing it to a full recasting. While cross section upper limits have the in-built shortcoming of not being able to properly account for complex spectra (where several signals overlap), the results are close enough to give a good estimate of the excluded region. This is particularly true since it is a much faster method of obtaining constraints, and the implementation of new results is much more straightforward (and hence more complete and up-to-date). Moreover, the constraining power could easily be improved if more efficiency maps and likelihood information were available and implemented. This holds for both prompt and LLP searches.
We note in this context that, while this study was being finalised, ATLAS made pyhf likelihood files for the 1l + H(→ bb) + E_T^miss EW-ino search [102] available on HEPData [131], in addition to digitised acceptance and efficiency maps. We appreciate this very much and look forward to using this data in future studies. To go a step further, it would be very interesting if the assumption m_χ^±_1 = m_χ^0_2 could be lifted in the simplified model interpretations.
Furthermore, the implementation in other recasting tools of more analyses with the full ≈ 140 fb −1 integrated luminosity from Run 2 would be of high utility in constraining the EW-ino sector. Here, the recasting of LLP searches is also a high priority, as theories with such particles are very easily constrained, with the limits reaching much higher masses than for searches for promptly decaying particles. A review of available tools for reinterpretation and detailed recommendations for the presentation of results from new physics searches are available in [132].
Last but not least, we note that the automation of the calculation of particle decays when there is little phase space will also be a fruitful avenue for future work.
Funding information This work was supported in part by the IN2P3 through the projects "Théorie -LHCiTools" (2019) and "Théorie -BSMGA" (2020). This work has also been done within the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02, and the Labex "Institut Lagrange de Paris" (ANR-11-IDEX-0004-02, ANR-10-LABX-63) which in particular funded the scholarship of SLW. SLW has also been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 -TRR 257. MDG acknowledges the support of the Agence Nationale de Recherche grant ANR-15-CE31-0002 "HiggsAutomator." HRG is funded by the Consejo Nacional de Ciencia y Tecnología, CONACyT, scholarship no. 291169.
A.1 Electroweakinos in the MRSSM
In this appendix we provide a review of the EW-ino sector of the MRSSM in our notation, to contrast with the phenomenology of the MDGSSM.
The MRSSM [19] is characterised by preserving a U(1) R-symmetry even after EWSB. To allow the Higgs fields to obtain vacuum expectation values, they must have vanishing R-charges, and we therefore need to add additional partner fields R_{u,d} so that the higgsinos can obtain a mass (analogous to the µ-term in the MSSM). The relevant field content is summarised in Table 4. The superpotential of the MRSSM, the definition of the triplet, and the condition under which the model has an enhanced N = 2 supersymmetry follow definitions common to e.g. [38,59,72], which can be translated to the notation of [50]. The Higgs fields as well as the triplet and singlet scalars have R-charge 0, so their fermionic partners all have R-charge −1. The R_{u,d} fields have R-charge 2, so the R-higgsinos have R-charge 1. Together with the "conventional" bino and wino fields, which also have R-charge 1, this gives 2 × four Dirac spinors with opposite R-charges. After EWSB, the EW gauginos and (R-)higgsinos thus form four Dirac neutralinos, whose mass matrix looks very similar to that of the MSSM in the case of N = 2 supersymmetry. For the charginos, although there are eight Weyl spinors, these organise into four Dirac spinors, and again into two pairs with opposite R-charges. The MRSSM therefore does not entail naturally small splittings between EW-ino states. However, if the R-symmetry is broken by a small parameter, this situation is reversed: small mass splittings would appear between each of the Dirac states.
A.3 Higgs mass classifier
A common drawback for the efficiency of phenomenological parameter scans is finding the subset of the parameter space where the Higgs mass m_h is around the experimentally measured value. Our case is no exception, as m_h depends on all the input variables considered in our study. This is clear for µ, the mass term in the scalar potential, and tan β, the ratio between the vevs. For the soft terms, the dependence becomes apparent when one realises that in DG models the Higgs quartic coupling receives corrections involving m_SR and m_TP, the tree-level masses of the singlet and triplet scalars, respectively, which are given large values to avoid a significant suppression of the Higgs mass.
To overcome this issue, we have implemented Random Forest Classifiers (RFCs) that predict, from the initial input values, whether a parameter point has m_h inside (p_in) or outside (p_out) our desired range 120 < m_h < 130 GeV. A sample of 50623 points was chosen so as to have an even distribution of inside/outside range points. The data were then divided into training and test data in a 67:33 split. We trained the classifier using the RFC algorithm in the scikit-learn python module with 150 trees in the forest (n_estimators=150).
The obtained mean accuracy score for the trained RFC was 93.75%. However, we are interested in discarding as many points with m_h outside of range as possible while keeping all the p_in ones. To do so, we rejected only the points with an estimated probability of being p_out above 70%. In this way, we obtained an improved accuracy of 98.8% for discarding p_out points while still rejecting 86% of them. The cut value of the estimated p_out probability was chosen as an approximately optimal balance between accuracy and rejection percentage: above the 70% value there is no significant improvement in the accuracy, but the rejection percentage deteriorates. This behaviour is schematised in Figure 15, where the estimated probability of p_out is shown as a function of m_h.
Finally, to estimate the overall improvement on the scan efficiency, we multiplied the percentage of real p out (roughly 88%) by the p out rejection percentage (86%) and obtained an overall 75% rejection percentage. Hence, the inclusion of the classifier yields a scan approximately four times faster.
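A minimal sketch of such a classifier in scikit-learn is given below; only the quoted settings (150 trees, the 67:33 split, the 120-130 GeV window, and the 70% probability cut) are taken from the text, while the array names and data handling are illustrative assumptions.

```python
# Minimal sketch of the Higgs-mass pre-selection classifier described above.
# X is an (n_points, n_inputs) array of scan inputs and mh the corresponding
# Higgs masses; these arrays and the helper itself are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_higgs_classifier(X, mh, prob_cut=0.70):
    y = ((mh > 120.0) & (mh < 130.0)).astype(int)        # 1 = p_in, 0 = p_out
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

    clf = RandomForestClassifier(n_estimators=150, random_state=0)
    clf.fit(X_tr, y_tr)

    # Reject a point only if its estimated probability of being p_out exceeds
    # prob_cut; this keeps almost all p_in points at the cost of some rejection power.
    prob_out = clf.predict_proba(X_te)[:, 0]             # column 0 <-> class 0 (p_out)
    keep = prob_out < prob_cut

    print(f"mean accuracy        : {clf.score(X_te, y_te):.3f}")
    print(f"p_out points rejected: {(~keep)[y_te == 0].mean():.1%}")
    print(f"p_in points kept     : {keep[y_te == 1].mean():.1%}")
    return clf
```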
A.4 Recast of ATLAS-SUSY-2019-08
ATLAS reported a search in final states with E_T^miss, one lepton (e or µ) and a Higgs boson decaying into bb, with 139 fb−1, in [102]. This is particularly powerful for searching for winos with a lighter LSP (such as a bino or higgsino), and so we implemented a recast of this analysis in MadAnalysis 5 [108][109][110][111]. The analysis targets electroweakinos produced in the combination of a chargino and a heavy neutralino, where the neutralino decays by emitting an on-shell Higgs and the chargino decays by emitting a W boson, i.e. a W H + E_T^miss final state. This search should be particularly effective when other supersymmetric particles (such as sleptons and additional Higgs fields) are heavy. Given constraints on heavy Higgs sectors and colourful particles, it is rather model independent and difficult to evade in a minimal model. The ATLAS collaboration made available substantial additional data via HEPData at [131], in particular including detailed cutflows and tables for the exclusion curves, which are essential for validating our recast code.
The implementation in MadAnalysis 5 follows the cuts of [102] and implements the lepton isolation and a jet/lepton removal procedure as described in that paper directly in the analysis. Jet reconstruction is performed using fastjet [133] in Delphes 3 [117], where b-tagging and lepton/jet reconstruction efficiencies are taken from a standard ATLAS Delphes 3 card used in other recasting analyses [134][135][136][137]. The analysis was validated by comparing signals generated for the same MSSM simplified scenario as in [102]: this consists of a degenerate wino-like chargino and heavy neutralino, together with a light bino-like neutralino. The analysis requires two or three signal jets, two of which must be b-jets (to target the Higgs decay); the signal is simulated with a hard process of p p → χ^±_1 χ^0_2 (plus up to two jets).

Table 5: Number of events expected in each signal region in [102] (columns labelled "ATLAS") against the result from recasting in MadAnalysis 5 (columns labelled "MA") for different parameter points. The quoted error bands are Monte Carlo uncertainties, but the cross-section uncertainties can also reach 10% for some regions.
In the validation, up to 2 hard jets are simulated at leading order in MadGraph5_aMC@NLO, the parton shower is performed in Pythia 8.2, and the jet merging is performed by the MLM algorithm using MadGraph5_aMC@NLO defaults. In addition, to select only leptonic decays of the W boson and b-quark decays of the Higgs, the branching ratios are modified in the SLHA file (taking care that Pythia does not override them with the SM values) and the signal cross sections are weighted accordingly: this improves the efficiency of the simulation by a factor of roughly 8, since the leptonic branching ratio of the W is 0.2157 and the Higgs decays into b-quarks 58.3% of the time. A detailed validation note will be presented elsewhere, including a detailed cutflow analysis and a comparison of the reproduced exclusion region with that found in [102]. Here we reproduce the expected (according to the calculated cross section and experimental integrated luminosity) final number of events passing the cuts for the "exclusive" signal regions, for the three benchmark points where cutflows are available, in Table 5, where excellent agreement can be seen. For each point, 30k events were simulated, leading to small but non-negligible Monte Carlo uncertainties listed in the table.
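As a quick arithmetic check of the quoted efficiency factor (values taken from the text above):

```python
# Combined branching ratio of the forced decays, and the resulting gain in
# simulation efficiency when only these decays are generated.
br_w_lep = 0.2157   # W -> (e or mu) + nu, as quoted above
br_h_bb  = 0.583    # H -> b bbar
combined = br_w_lep * br_h_bb
print(f"combined BR = {combined:.4f}, efficiency gain ~ {1/combined:.1f}x")
# -> combined BR = 0.1258, efficiency gain ~ 8.0x
```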
Application to the MDGSSM
To apply this analysis to our model, we first treat both of the two lightest neutralino states as LSP states; we must also simulate the production of all heavy neutralinos (χ^0_i, i > 2) and charginos in pairs. It is no longer reasonable to select only leptonic decays of the W, because several processes can contribute to the signal in our case. Therefore we do not modify the decays of the electroweakinos in the SLHA files, and simulate p p → χ^±_i (i ≥ 1), χ^0_j (j ≥ 3) + n jets, n ≤ 2, as the hard process in MadGraph5_aMC@NLO, before showering with Pythia 8.2 and passing to the analysis as before.
We have not produced an exclusion contour plot for this analysis comparable to the MSSM case in [102], because a heavy wino with a light bino always leads to an excess of dark matter unless the bino is near a resonance. We should generally expect the reach of the exclusion to be better than for the MSSM, due to the increase in cross section from pseudo-Dirac states; since we can only compare our results directly for points on the Higgs-funnel, for m_χ^0_1 ≈ m_h/2, we find a limit on the heavy wino mass of about 800 GeV in our model, compared to 740 GeV in the MSSM. | 14,351.4 | 2020-07-16T00:00:00.000 | [
"Physics"
] |
Motion‐insensitive susceptibility weighted imaging
To enable SWI that is robust to severe head movement.
| INTRODUCTION
Sensitivity to tissue susceptibility can be encoded into the magnitude and phase of gradient-echo (GRE) images with the use of long TEs. 1 In T2*-weighted images, only the magnitude information is used, whereas the key feature of SWI is to also exploit the image phase for increased sensitivity. 2 This lends SWI the excellent ability to depict veins and makes it useful for the detection of calcifications and iron deposition, including cerebral microbleeds. 3 Unfortunately, the blessing of susceptibility contrast is also a curse, as the same susceptibility-sensitizing phase renders these images particularly sensitive to motion. 4 The reason for this is that the susceptibility-induced phase is not rotation-invariant, at least not if the head orientation is changed relative to the B0 field. 5 Prospective motion-correction techniques adjust the imaging gradients to follow the motion in real time, but cannot compensate for this residual motion-induced phase. 6 Figure 1 and Supporting Information Videos S1-S4 illustrate how the image phase is affected by motion, despite prospective correction. Motion during the acquisition of an image may therefore induce phase changes that interfere with the spatial encoding and cause ghosting artifacts, even if the acquisition was perfectly prospectively motion-corrected. The motion sensitivity of SWI may restrict its availability for patients who are inclined to move, such as children 7 who would otherwise benefit from this technique. 8 Susceptibility-weighted imaging is typically based on spoiled 3D-GRE acquisitions. 2 However, the SNR efficiency of these acquisitions is rather low due to the idle time between the excitation pulse and the long TE required for susceptibility sensitivity. The SNR efficiency can be greatly improved using 3D EPI, 9 in which any gaps in the pulse sequence are used to sample additional k-space lines. Many drawbacks of single-shot EPI, such as distortion from field inhomogeneity, can be moderated using multishot 3D-EPI sequences, in which each plane in k-space is acquired with multiple interleaved shots. 10 The remaining spatial distortions at some locations are often tolerable, as these locations would suffer from signal dropout anyway, due to intravoxel dephasing. Multishot 3D EPI offers SWI with increased spatial resolution without excessive scan time. 11 Alternatively, the high SNR efficiency can be used to shorten acquisition time by an order of magnitude. 12 Even if multishot 3D EPI is sensitive to motion for the same reasons as 3D GRE, this smaller temporal footprint of k-space improves the situation, as there is simply less time for motion to occur. 13 In this work, we investigate the efficacy of prospective motion correction for SWI using a markerless optical motion-tracking system. 14 We explore the motion sensitivity of SWI based on 3D GRE and fast multishot 3D EPI. Furthermore, we revisit 2D interleaved snapshot EPI [15][16][17][18] in the context of SWI. In this sequence, all shots of a given slice are acquired consecutively, which reduces the temporal footprint by a factor equal to the number of slices, greatly decreasing the sensitivity to motion. Unfortunately, the TR will also be reduced by the same factor, having the effect of restricted T1 relaxation, hence imposing lower SNR. To improve SNR, imaging is made in the transient state using a variable flip-angle train tailored to the TR and number of shots. 16
Further SNR is gained by acquiring multiple signal averages, which also enables retrospective motion correction between the averages before combining them. Conventional interleaved 2D EPI is included for comparison. The sequences are evaluated at 3 T by controlled motion experiments involving 2 cooperative volunteers. Lesion conspicuity is assessed by SWI of a tumor patient.

Figure 1: Effect of motion on the phase of gradient-echo (GRE) images (single-shot EPI, TE = 35 ms). The conformance of the magnitude images demonstrates successful prospective motion correction. The numbers indicate the estimated rotation about the x-, y-, and z-axes, relative to the reference image (A). The phase images show the difference with respect to the reference. The image phase is heavily disturbed by rotation perpendicular to the B0 field (B,C), whereas rotation about the direction of B0 has a much smaller effect (D).
| METHODS
Images were acquired using a 3 T Signa Premier MR system and a 48-channel head coil (GE Healthcare, Milwaukee, WI). All pulse sequences were prospectively motion-corrected by updating the FOV and slice/slab position and orientation before each excitation pulse, according to the latest available rigid-body motion estimate. These were provided by a research version of a markerless optical motion-tracking system (Tracoline TCL3.1m; TracInnovations, Ballerup, Denmark). The setup of the tracking system was done as described elsewhere, 19 but the scanner/tracker cross-calibration image volume was acquired using an anthropomorphic phantom before the subject entered the scanner. The experiments were performed under local ethical approval, and informed consent was obtained from both volunteers and the patient. All pulse sequences were developed in-house using the KS Foundation framework. 20 The vendor's reconstruction routine was used for 3D GRE without SWI processing. The 3D-GRE-SWI reconstruction and the EPI reconstruction were developed in-house in C++ and Matlab (The MathWorks, Natick, MA), respectively.
| Prospectively corrected EPI
Because each EPI shot was prospectively updated, zero- and first-order phase alignment between odd and even lines was done separately for each shot, referred to as "dynamic ghost correction." This was enabled by acquiring four navigator lines without phase encoding immediately after each excitation pulse. 19 Intershot zero- and first-order phase alignment was then made based on the even navigator lines of each shot and a reference shot.
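The general idea behind such navigator-based ghost correction can be sketched as follows; this is an illustrative outline, not the sequence's actual implementation, and the function and array names are assumptions.

```python
# Illustrative sketch: estimate zero- and first-order phase differences between
# odd and even navigator echoes (readout direction only) and apply them to the
# even k-space lines of the corresponding shot.
import numpy as np

def ghost_correction_terms(nav_odd, nav_even):
    """Fit the linear phase difference (slope, offset) between odd and even echoes."""
    prof_odd = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(nav_odd)))
    prof_even = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(nav_even)))
    cross = prof_odd * np.conj(prof_even)          # phase = phi_odd - phi_even

    x = np.arange(cross.size) - cross.size / 2
    slope, offset = np.polyfit(x, np.angle(cross), 1, w=np.abs(cross))
    return slope, offset

def correct_even_lines(even_lines, slope, offset):
    """Apply the fitted phase ramp to even k-space lines (last axis = readout)."""
    n = even_lines.shape[-1]
    x = np.arange(n) - n / 2
    proj = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(even_lines, axes=-1), axis=-1), axes=-1)
    proj = proj * np.exp(1j * (slope * x + offset))
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(proj, axes=-1), axis=-1), axes=-1)
```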
Echo-time shifting 16 and RF spoiling with hexagonal gradient spoiling 21 were used in all EPI sequences. For 2D EPI, GRAPPA acceleration was enabled by a leading 6.4-s calibration volume based on the fast low-angle excitation echo-planar technique, 18 acquired with a 96 × 96 matrix with three interleaves, flip angle (FA) = 5°, TE = 18 ms, and TR = 44 ms. The calibration volume was also prospectively updated and ghost-corrected by four navigator lines.
The images shown in Figure 1 and Supporting Information Videos S1-S4 were acquired using a prospectively corrected single-slice and single-shot GRE-EPI sequence without GRAPPA acceleration. The sequence playout was repeated continuously to enable cine imaging during rotational motion about each of the gradient axes. Frames corresponding to 10° rotation about each axis were extracted for the figure. Imaging parameters were as follows: square 240-mm FOV, 96 × 96 matrix, 4-mm slice thickness, FA = 15°, TE = 35 ms, and TR = 79 ms.
| Interleaved snapshot EPI
In standard multishot 2D EPI, each shot is acquired for all slices before acquiring the next shot. The sequence was modified to optionally enable the acquisition of all shots for a given slice before proceeding with the next slice, denoted as "interleaved snapshot EPI." 15 Repeated averages were acquired after all shots and slices of the preceding average had been acquired. Note that this effectively yields two TR intervals: an inner TR (time between shots) and an outer TR (time between averages). In Figure 2, the loop ordering of standard EPI and interleaved snapshot EPI are illustrated. Following McKinnon, 16 a variable flip angle (VFA) train was calculated based on the number of shots, the inner TR, and an assumed T 1 relaxation time (T 1 = 1 s was used as a middle ground between white and gray brain matter). No dummy shots were played. It can be noted that interleaved snapshot EPI has also been referred to as consecutive multishot EPI 22 and VFA fast low-angle excitation echo-planar technique. 18 To avoid signal fluctuation between shots due to deviations from the assumed T 1 and prescribed flip angle, the flip angle train was calculated for a larger number of excitations than actually played out. 16 An experiment was performed comparing VFA trains calculated with zero, one, and two extra shots. In addition, constant flip-angle imaging was done at the Ernst angle, which was 18° for an inner TR of 58 ms. The theoretical SNR was calculated for all FA trains. A single average with 12 k-space interleaves was acquired with a GRAPPA acceleration factor R = 3, resulting in four acquired shots. 22 Other imaging parameters were as follows: square 240-mm FOV, 36 slices of 4-mm thickness, 288 × 288 matrix, FOV bandwidth = ±250 kHz, and TE = 26 ms.
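A minimal sketch of such a VFA calculation is shown below, assuming full longitudinal recovery before the first shot of each slice (plausible given the long outer TR); the bisection over the common per-shot signal level is an illustrative implementation choice rather than the calculation actually used here.

```python
# Minimal sketch of a McKinnon-style variable flip-angle (VFA) train: choose
# the flip angles so that every shot yields the same transverse signal, with
# the last calculated shot at 90 degrees.
import math

def vfa_train(n_shots, tr_s=0.058, t1_s=1.0, n_extra=0):
    """Flip angles (degrees) for n_shots, optionally calculated for n_extra
    additional shots of which only the first n_shots are played out."""
    n = n_shots + n_extra
    e1 = math.exp(-tr_s / t1_s)             # longitudinal recovery per inner TR

    def angles_for(signal):
        m, angles = 1.0, []                 # normalised M0 = 1 before first shot
        for _ in range(n):
            theta = math.asin(min(signal / m, 1.0))
            angles.append(theta)
            m = m * math.cos(theta) * e1 + (1.0 - e1)
        return angles

    # Bisect the common signal level so that the final shot comes out at 90 deg.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if angles_for(mid)[-1] < math.pi / 2 - 1e-9:
            lo = mid                        # signal too low: last angle < 90 deg
        else:
            hi = mid                        # too high: magnetization runs out early
    return [round(math.degrees(a), 1) for a in angles_for(lo)[:n_shots]]

print(vfa_train(4))             # ~[31, 37, 47, 90] for 4 shots, TR 58 ms, T1 1 s
print(vfa_train(4, n_extra=2))  # ~[25, 28, 32, 38] with two "extra" shots
```

For four shots with an inner TR of 58 ms and T1 = 1 s, this reproduces trains close to those quoted in the Results.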
| Motion experiments
The sequences were compared in three different scan sessions with intentional head motion. To reproduce the motion patterns with and without prospective correction, the volunteer was guided by movies played back on a functional MRI screen visible through the coil-mounted mirrors and instructed to aim at the moving cross-hairs with their nose.
The first scan session aimed to evaluate 3D techniques in the presence of continuous head motion, combined with prospective motion correction, and to investigate the effect of TE. Therefore, short-TE 3D-GRE, long-TE 3D-GRE, and
long-TE 3D-EPI acquisitions were made, all with RF spoiling. Each sequence was repeated without intentional motion, with yaw motion (rotation axis parallel to B 0 , ~5 cycles/minute), and with pitch motion (rotation axis perpendicular to B 0 , ~4 cycles/minute). The acquisitions with motion were each repeated with prospective correction on and off. The whole brain was covered by 48 slices of 3-mm thickness. The 3D-GRE acquisitions were flow-compensated in the readout and slab-selection directions. A 240 × 192 mm FOV with 384 × 320 matrix was collected with R = 2, 16 GRAPPA autocalibration lines, and elliptical k-space. Imaging parameters for the short/long TE acquisitions were as follows: TE = 4.9/20 ms, TR = 9.1/29 ms, FA = 10°/15°, and FOV bandwidth = ±41.7/13.9 kHz. The total acquisition time was 52 seconds/2:43 minutes. The 3D-EPI acquisitions were made with a 100-mm inferior spatial saturation band, as the flow compensation of the EPI train appeared to be insufficient. A square 240-mm FOV with 320 × 320 matrix was collected with 16 interleaves and R = 2, resulting in eight shots per k z plane, but the 16 centermost planes were fully sampled with 16 shots to enable GRAPPA autocalibration. Other imaging parameters were as follows: TE = 29 ms, TR = 71 ms, FA = 18°, and FOV bandwidth = ±125 kHz. The total acquisition time was 39 seconds. No SWI processing was performed.
In the second scan session, the purpose was to compare prospectively corrected SWI using interleaved snapshot 2D EPI to standard interleaved 2D EPI, 3D EPI, and 3D GRE.
All sequences were acquired without intentional motion, as well as with and without prospective motion correction during continuous circular motion (drawing a circle with the nose, ~6 cycles/minute). The imaging parameters of the 3D sequences were identical to the first scan session. The imaging parameters of the 2D-EPI sequences were the same as for the VFA experiment, except 48 slices of 3-mm thickness were acquired with four signal averages. The standard 2D EPI used one dummy shot, FA = 90°, and TR = 2.8 seconds. The snapshot 2D EPI used a VFA train without dummies and had an outer TR of 11.2 seconds. The total acquisition time was 51 seconds. The SWI processing as described by Reichenbach et al 2 was performed for each average separately after adaptive coil combination, 23 but before retrospective motion correction and RMS combination of the averages.
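For reference, a minimal sketch of this type of phase-mask SWI processing on a single coil-combined complex slice is shown below; the Hann-window size, the fourth-power mask, and the sign convention for the suppressed phase are common choices assumed here for illustration, not the parameters of the in-house implementation.

```python
# Illustrative sketch of phase-mask SWI processing (in the spirit of
# Reichenbach et al.) applied to one complex 2D image.
import numpy as np

def swi_process(complex_img, hp_size=64, mask_power=4):
    """Return a susceptibility-weighted magnitude image."""
    ny, nx = complex_img.shape

    # Low-pass reference: keep only a Hann-windowed block around the k-space centre.
    k = np.fft.fftshift(np.fft.fft2(complex_img))
    win = np.zeros((ny, nx))
    y0, x0 = ny // 2 - hp_size // 2, nx // 2 - hp_size // 2
    win[y0:y0 + hp_size, x0:x0 + hp_size] = np.outer(np.hanning(hp_size), np.hanning(hp_size))
    lowpass = np.fft.ifft2(np.fft.ifftshift(k * win))

    # High-pass-filtered phase: remove the slowly varying background phase.
    hp_phase = np.angle(complex_img * np.conj(lowpass))

    # Negative phase mask: linearly suppress voxels with negative filtered phase.
    mask = np.where(hp_phase < 0, (np.pi + hp_phase) / np.pi, 1.0)

    return np.abs(complex_img) * mask ** mask_power
```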
The effect of head orientation on SWI processing was investigated in the third scan session. A second volunteer was instructed to change head pose between acquisitions of interleaved snapshot EPI. The SWI images were then reconstructed by combining and aligning one average from each of the four poses, as well as for each pose separately. The image-acquisition parameters were identical to the previous scan session.
A 52-year-old tumor patient was imaged to assess the lesion conspicuity of interleaved snapshot EPI. The acquisition parameters were identical to the previous scan sessions. For reference, 3D EPI was also acquired with identical scan coverage. The TE and spatial resolution were matched to the interleaved snapshot EPI.

Figure 2: Illustration of the swapped shot/slice loop order for standard EPI and interleaved snapshot EPI. For simplicity, only two shots (red) and two slices are shown. In standard EPI, the first shot is acquired for all slices before proceeding to the next shot. Therefore, each slice will be excited at regular intervals (TR). For interleaved snapshot EPI, all shots are acquired for the first slice before proceeding to the next slice. If multiple averages are acquired, this gives two different TR intervals. In this example, it can be noted that k-space is segmented into four interleaves, but only two shots are played out. The unacquired lines are synthesized by GRAPPA, resulting in an effective acceleration factor R = 2.
| RESULTS
The motion estimates from the first scan session are shown in Figure 3. The timing and magnitude of the motion were similar for the acquisitions repeated with prospective motion correction on and off. The performed motion was in the range of 5°-8°. Figure 4 shows the same slice from each of the acquisitions. Although the motion severely degraded all uncorrected acquisitions, the prospectively corrected yaw motion resulted in acceptable image quality for all sequences. In contrast, for the prospectively corrected pitch motion, the image quality was acceptable for the short-TE acquisition only.
The VFA trains calculated with zero, one, and two extra shots were [31°, 37°, 47°, 90°], [28°, 32°, 37°, 47°], and [25°, 28°, 32°, 38°], respectively. The corresponding SNRs were 54%, 49%, and 45% relative to the SNR of standard 2D EPI, whereas the SNR for Ernst-angle excitation was 30%. Reconstructed images from the VFA experiment are shown in Figure 5. The VFA train without extra shots corresponds to an effective total FA of 90°, and therefore had the highest SNR. However, it was also more sensitive to mismatch between the prescribed and achieved FAs, as well as between the assumed and actual T1 values. This mismatch results in signal variation between the shots, which leads to ghost artifacts. Using two extra shots in the VFA train calculation appeared to be a good compromise between artifact level and SNR. Imaging at the Ernst angle resulted in significantly lower SNR. Therefore, VFA calculated with two extra shots was used in the following interleaved snapshot EPI acquisitions.

Figure 3: Estimates of rotational motion from the first scan session, comparing 3D GRE with short and long TE, and 3D EPI. Acquisitions were made with continuous yaw and pitch motion patterns, as well as without intentional motion for reference. The volunteer was guided by videos of moving cross-hairs, to perform comparable motion patterns, as the acquisitions were repeated with prospective motion correction (PMC) on and off.

Figure 4: Motion experiments comparing 3D GRE with short and long TE, and 3D EPI. Prospective motion correction works quite well for all sequences in the case of rotation about the B0 field (yaw motion, third column). However, prospective correction of rotation perpendicular to B0 (pitch motion, last column) works only if the TE is short.

Figure 5: Comparing different RF excitation strategies for interleaved snapshot EPI. The image SNR is maximized with the use of a variable flip angle (VFA) train, tailored to the TR and an assumed T1 relaxation time. Deviation from the assumed T1 yields signal variation between the shots, which causes ghost artifacts (arrows). The sensitivity to T1 errors can be decreased by calculating the flip-angle train for a greater number of shots than prescribed. Using two "extra" shots resulted in an acceptable ghost level, while still providing a significant SNR advantage compared with a constant flip angle corresponding to the Ernst angle, which is SNR-optimal for steady-state imaging.
Motion estimates from the second scan session are shown in Figure 6. Again, the timing and magnitude of the motion matched well between acquisitions with and without prospective motion correction. The range of the circular motion was about 8°.
A slice reconstructed without SWI processing for the 2D-EPI acquisitions is shown in Figure 7. For the standard 2D EPI, the acquisitions with motion showed severe artifacts, regardless of the application of prospective and retrospective motion correction. The consecutive shot order of interleaved snapshot EPI made the sequence much less sensitive to motion, as demonstrated by the uncorrected acquisition, which was greatly improved, albeit still blurry. This blur was considerably reduced by retrospective correction alone, and even more by prospective correction alone. The benefit of adding retrospective correction to the prospectively corrected acquisition was typically subtle, as demonstrated in Figure 7, although a greater improvement was seen in a few slices, presumably due to inaccuracies of the prospective correction. The combination of retrospective and prospective motion correction resulted in similar image quality as for the acquisition without motion. Figure 8 shows images from the third scan session, demonstrating the effect of head pose on the SWI processing. The slices from the different acquisitions with different head poses were well aligned, as they were prospectively motion-corrected using the same reference position. Even though the varying head poses induced large phase differences between the acquisitions, the effect on the SWI filter was limited, as the low spatial frequencies had been removed. One SWI-processed average from each head pose was aligned with retrospective motion correction and combined with magnitude averaging (Figure 8B). The quality of the combined SWI image was similar to a corresponding motion-free image (Figure 8C), even though the head poses spanned 11.7°. The combined image demonstrated a subtle loss of sharpness seen in the frontal portion of the brain.
Another slice from the second scan session is shown in Figure 9, where SWI with interleaved snapshot EPI is compared with 3D EPI and 3D GRE. The 3D sequences did not tolerate the motion well, even when prospectively corrected. In contrast, interleaved snapshot EPI demonstrated much greater robustness to the motion. With both prospective and retrospective motion correction applied, the image quality of interleaved snapshot EPI was comparable with and without motion. Figure 10 shows several slices from the tumor patient, comparing interleaved snapshot EPI to 3D EPI with matching spatial resolution. The patient did not move significantly in either acquisition (< 0.2°). The sequences display an evident difference in image contrast; although the interleaved snapshot EPI is purely T * 2 -weighted, the 3D EPI shows additional T 1 weighting due to the combination of TR and FA. Both sequences appeared equally sensitive to susceptibility changes, and the lesion conspicuity was equal. Interleaved snapshot EPI showed more ghost artifacts in some slices.
Figure 6: Estimates of rotational motion from the second scan session, comparing conventional 2D interleaved EPI, 2D interleaved snapshot EPI, 3D GRE, and 3D EPI. Acquisitions were made with a continuous circular motion pattern, repeated with PMC on and off, as well as without intentional motion for reference.
| DISCUSSION
This work has investigated the potential of prospective motion correction for susceptibility-based imaging. It was shown that applying prospective correction alone to 3D imaging techniques is insufficient in the presence of rotational motion perpendicular to the B0 field (Figure 4). This applies for long-TE GRE imaging, that is, the very same conditions that enable susceptibility contrast. Prospective motion correction has previously been combined with quantitative susceptibility mapping, but only small or involuntary motion was examined, in the context of 7 T high-resolution imaging. 6 The present study investigated more challenging motion patterns with the aim of enabling diagnostic SWI for patients who are likely to move.
When 3D SWI fails due to motion, a common clinical workflow is to acquire a (low-resolution) single-shot 2D-EPI T2*-weighted image instead. Such images are very motion-robust due to the snapshot quality of single-shot EPI, but are less sensitive to lesions such as cerebral microbleeds. 3 This motivated us to aim for the motion robustness of single-shot EPI, the spatial resolution of multishot EPI, and the increased sensitivity of SWI. Our proposed solution is SWI based on 2D spoiled GRE interleaved snapshot EPI, which effectively averts motion-induced phase discrepancies by greatly reducing the temporal footprint of the acquisition of each slice. Interleaved snapshot EPI has previously been used for T1-weighted and T2*-weighted imaging, 17 functional MRI, 15,22,24 diffusion MRI, 25 and parallel imaging calibration, 18 but not for SWI, to the best of our knowledge. The present work demonstrates that interleaved snapshot EPI in combination with prospective motion correction (aided by dynamic ghost correction) yields sharp SWI in the presence of large continuous motion. The sequence is, however, motion robust in itself, and therefore suitable for routine clinical imaging even when prospective motion correction is not available. A further advantage is that the sequence does not require any additional complicated reconstruction routines. The minimization of the temporal footprint requires a short (inner) TR to be used, which leads to low FA and low SNR for steady-state imaging. Therefore, it was found to be beneficial to use transient-state imaging with a variable flip angle train, 16 which aims to distribute all of the available longitudinal magnetization equally over the shots. Assuming some "extra" shots in the VFA train calculation poses a more conservative approach with lower artifact level at the expense of SNR (see Figure 5). Two extra shots were found to be a good trade-off, in agreement with the advice of McKinnon. 16 Even with VFA transient-state imaging, the SNR efficiency of interleaved snapshot EPI is significantly lower than for standard 2D EPI and 3D EPI. The SNR could be further improved by signal averaging, which also enabled retrospective motion correction that could compensate for prospective correction flaws, as demonstrated in Figure 7. The long outer TR increases the probability of motion-induced phase mismatch between the averages, but this appeared tolerable, as the SWI processing was performed for each average separately before magnitude combination (Figure 8).

Figure 7: Motion experiments for 2D multishot interleaved EPI. The standard interleaved EPI (first row) is sensitive to motion, as phase incoherence develops during the temporal footprint of one average for a given slice (11.2 seconds). This induced ghost artifacts, which could not be recovered by prospective or retrospective motion correction (RMC). Interleaved snapshot EPI (last row) acquires all shots for one average of a given slice before moving to the next slice, which greatly reduced the temporal footprint (0.2 seconds) and therefore the motion-induced ghost artifacts. Retrospective motion correction attempts to align the averages, which decreased motion blur. The prospective correction is even more potent, as each shot can be aligned, thus offering correction also within averages. Differences in image sharpness are best seen in the frontal interhemispheric fissure. Prospective and retrospective correction together could provide sharpness comparable to the reference without motion.
We also implemented the option to acquire all averages for a slice before proceeding to the next slice, with a VFA train over all shots and averages. This, however, defeats the purpose of averaging, as the same amount of available longitudinal magnetization must be shared between the averages with only a minor SNR gain from T 1 recovery during the VFA train (results not shown).
An alternative sequence is short-axis propeller EPI, 26 which has been proposed for motion-robust 3D SWI. 27 The idea is that the amount of motion within each volumetric blade will be limited, but the duration of these "3D snapshots" is an order of magnitude longer compared with those of 2D interleaved snapshot EPI. Short-axis propeller EPI also suffers from lower scan efficiency due to frequent gradient ramping, and B 0 distortions that rotate with the blade angle.
Another approach to cope with motion is to simply acquire images faster. This reduces the opportunity for motion to happen in the first place. 13,28 Even if shorter acquisition time is desirable in itself, this is not a satisfying solution, as devastating motion may still happen during the short acquisition (see, for example, the 39-second 3D EPI in Figure 9). Three-dimensional multislab SWI is also less motion sensitive, only by virtue of its shorter total acquisition time. 29 In contrast, the sequence proposed here is fast, but can be prolonged without affecting its motion robustness. If, for example, thinner slices are acquired, the SNR loss can be compensated by adding more averages. Unlike for 3D imaging, the minimum slice thickness will be limited by the RF pulse. If a higher in-plane resolution is needed, it is probably useful to add more shots to keep a desired TE, although this will slightly increase the sensitivity to motion from intershot phase errors. In addition, the VFA train will need to distribute the available longitudinal magnetization over more shots, to the disadvantage of the SNR efficiency. This could be compensated by more averages, increasing the acquisition time.

Figure 8: Susceptibility-weighted image processing was performed separately for each signal average. A, The columns correspond to interleaved snapshot EPI averages, each acquired with a different head pose. The numbers indicate the estimated rotation about the x-, y-, and z-axes, relative to the reference pose in the leftmost column. Prospective motion correction assured consistency across acquisitions for the signal magnitude (first row), but the phase (second row) varied with head pose. Despite this, the SWI filter (third row, inverted contrast) remained quite consistent, as it only keeps the high spatial frequency content of the phase image. This resulted in consistent SWI-processed averages (last row). B, The SWI-
For the prospectively corrected interleaved EPI sequences, artifacts related to intershot mismatch were observed. This can manifest as ghosting or more subtle shading, as seen in Figure 7. If this is due to imperfect ghost correction, improved algorithms could be used, or the problem could be avoided by prospectively updating the interleaved snapshot EPI once per slice rather than each shot, at the cost of increasing the prospective update interval from 0.05 seconds to 0.2 seconds. Another possibility is that the mismatch is an effect of the nonrectangular slice profile, accumulating an error over the course of the VFA train. 24 Deeper investigation of this issue, however, was considered to fall outside the scope of this work.
An evident difference in image contrast can be noted when comparing the interleaved snapshot EPI to the other sequences (Figure 10); most conspicuously, fluid appears much brighter. This is explained by the absence of T1-weighting; not much T1 contrast builds up during the 0.2-second RF train, whereas full T1 recovery is permitted during the 11-second outer TR. Although not investigated in detail here, the contrast between fluid and soft tissue may impair the conspicuity of vessels or lesions at fluid/soft tissue interfaces, due to partial-volume effects. In the tumor patient, the lesions were equally well depicted by interleaved snapshot EPI and 3D EPI, despite this difference in contrast. If desired, an SWI contrast similar to the other sequences can be achieved by acquiring the slices in several concatenations to lower the outer TR, or by suitable magnetization preparation pulses. 17 Both approaches would result in an SNR reduction.

Figure 9: The SWI-processed images acquired with 3D GRE, 3D EPI, and 2D interleaved snapshot EPI. Compared with the motion-free references (first column), uncorrected circular motion (second column) induces severe blurring and ghosting for the 3D sequences and some blurring for interleaved snapshot EPI. For the same motion pattern, the image quality of the 3D sequences was much improved by PMC (last column), although not to an acceptable level. For the interleaved snapshot EPI, PMC and RMC resulted in an image quality comparable to the acquisition without motion. The different contrast for interleaved snapshot EPI is due to the absence of T1-weighting with the prescribed imaging parameters.
Beyond bulk head movement, phase errors due to spatiotemporal field changes may also be induced by other movements or physiological motion such as breathing. Although the proposed snapshot SWI approach primarily avoids these phase errors by minimizing the temporal footprint, the dynamic ghost correction also inherently provides zero-order and first-order phase correction in the frequency-encoding direction. Several alternative approaches have been developed, primarily for susceptibility-based imaging at 7 T, in which respiratory-induced field changes are significant. 30 The field changes may be measured using multichannel FID navigators, 4 a respiratory signal correlated to a reference scan, 31 NMR field probes, 32-35 navigator echoes, 36,37 image navigators, [38][39][40] or the mismatch between image data and coil-sensitivity profiles. 41 Correction may then be implemented by real-time shimming [31][32][33][34]36 or retrospectively during reconstruction. 35,[37][38][39][40][41] However, many of these approaches are limited to low-order spatial correction, which has been suggested to be insufficient to counter the effects of head movement. 42 The volumetric navigator approach by Liu et al enables phase correction at 4-mm isotropic resolution every 0.5 seconds, which could retrospectively correct involuntary motion and intentional stepwise motion for T2*-weighted 3D GRE at 7 T. 40 However, volumetric navigators may require complex reconstruction schemes, and would significantly increase the acquisition time if combined with an EPI-based main sequence. Alternatively, intershot phase errors can be incorporated into advanced parallel-imaging reconstruction algorithms such as multiplexed sensitivity encoding, 43 which has previously been successfully combined with prospectively motion-corrected DWI. 44 A different approach is to predict field changes based on the estimated motion, to correct the image retrospectively. This is feasible if a susceptibility map of the subject is available, 45 although this is generally not the case. There are also sophisticated approaches to estimate motion-induced field changes from the actual data. 46 This has been demonstrated for distortion correction of diffusion-encoded data, but relies on estimating field changes between volumes rather than between shots, and is therefore not applicable to the problem at hand.

Figure 10: A 52-year-old patient with a previously operated and radiated oligodendroglioma was imaged with resolution-matched SWI based on 3D EPI and interleaved snapshot EPI, respectively. The interleaved snapshot EPI had no T1-weighting, which made the operation cavity seen in the left frontal lobe on the top two rows appear hyperintense. The extension of metal artifacts from the craniofix (signal void) is similar for both sequences, and their depiction of tumor calcifications in the basal ganglia and septum pellucidum (arrows, second row) is equal. The conspicuity of punctate radiation-induced microhemorrhages in the cerebellar hemispheres (bottom row, solid arrows) is also equal, although ghost artifacts are seen for interleaved snapshot EPI (dashed arrows).
| CONCLUSIONS
The present study showed that susceptibility-based imaging can be severely corrupted by motion-induced phase errors caused by spatiotemporal B 0 field changes. Therefore, prospective motion correction alone is insufficient for SWI in the presence of substantial head motion.
These errors can be avoided by interleaved snapshot EPI, as the temporal footprint of each slice is short compared with the rate of spatiotemporal B0 field changes, even in the presence of continuous head motion. This makes interleaved snapshot EPI robust to motion with fast acquisition and simple image reconstruction, and therefore offers improved availability of susceptibility contrast for patients prone to move, even when prospective motion correction is not available. When combined with prospective and retrospective motion correction, interleaved snapshot EPI enables SWI insensitive to motion. Although the snapshot property is gained at the expense of SNR and the minimum slice thickness will be limited for 2D sequences, interleaved snapshot EPI is a superior alternative for susceptibility-based imaging of patients inclined to move. | 7,396.2 | 2021-06-02T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Genomic loss of HLA alleles may affect the clinical outcome in low-risk myelodysplastic syndrome patients
The Revised International Prognostic Score and some somatic mutations in myelodysplastic syndrome (MDS) are independently associated with transformation to acute myeloid leukemia (AML). Immunity has also been implicated in the pathogenesis of MDS, although the underlying mechanism remains unclear. We performed a SNP array on chromosome 6 in CD34+ purified blasts from 19 patients diagnosed with advanced MDS and 8 patients with other myeloid malignancies to evaluate the presence of loss of heterozygosity (LOH) in HLA and its impact on disease progression. Three patients had acquired copy-neutral LOH (CN-LOH) on the 6p arm, which may disrupt antigen presentation and act as a mechanism for immune system evasion. Interestingly, these patients had previously been classified at low risk of AML progression, and the poor outcome cannot be explained by the acquisition of adverse mutations. LOH HLA was not detected in the remaining 24 patients, who all had adverse risk factors. In summary, the clinical outcome of patients with advanced MDS might be influenced by HLA allelic loss, which allows subclonal expansions to evade cytotoxic T and NK cell attack. CN-LOH HLA may therefore be a factor favoring MDS progression to AML independently of the somatic tumor mutation load.
INTRODUCTION
Myelodysplastic Syndromes (MDS) are a range of heterogeneous clonal hematologic diseases characterized by ineffective hematopoiesis and a tendency to develop acute myeloid leukemia (AML) [1]. Given the heterogeneity of the disease, several prognostic scoring systems are currently used to stratify patients according to the risk of AML development, including the International Prognostic Scoring System (IPSS) and the revised IPSS (IPSS-R), which incorporates a cytogenetic risk classification [2,3]. Recurrent somatic mutations affect genes involved in the epigenetic regulation of DNA, including methylation (TET2, DNMT3A, IDH1/2) and chromatin regulation (ASXL1, EZH2) processes. Mutations of genes that participate in cellular signaling pathways (FLT3, NRAS) are less frequent in these patients and are acquired at later stages of disease progression [4][5][6]. In addition, markers of high molecular risk (TP53, EZH2, ETV6, RUNX1, ASXL1, SRSF2) have been defined that predict worse overall survival and a greater risk of leukemic transformation and post-transplantation relapse, independently of prognostic scores, whereas mutations in SF3B1 have been associated with improved survival outcomes [7][8][9].
Dysregulation of the immune system also appears to be implicated in the pathogenesis of MDS, although most studies have focused on the role of the tumor microenvironment [10,11]. Immune evasion is a hallmark of cancer [12][13][14][15], and one of the main escape mechanisms is thought to be a reduction in antigen presentation due to HLA class I (HLA-I) abnormality. Although total lack of HLA-I antigen expression is frequent in tumor tissue, it is rarely observed in leukemia at presentation [16][17][18]. Limited research has been conducted on HLA-I antigen expression in hematologic malignancies such as B-cell and Hodgkin lymphoma [19,20], chronic lymphoblastic leukemia (CLL) [21], acute lymphoblastic leukemia (ALL), and acute myeloid leukemia (AML) [22][23][24]. Copy neutral loss of heterozygosity (CN-LOH) in the HLA region has been described in approximately 13% of aplastic anemia patients as a possible mechanism to escape the autoimmune response of cytotoxic T-CD8 lymphocytes (CTLs) [25,26]. This mechanism has also been reported in a significant proportion of AML patients who relapsed after donor-lymphocyte infusions following transplantation with hematopoietic stem cells from haploidentical donors [24,27,28]. It is likely that selective rather than total loss contributes more effectively to simultaneous escape from T and NK cells [13,29]. Haplotype loss is a frequent signature in various human tumors and is particularly relevant in non-small cell lung cancer (NSCLC) [30,31]. These data have been reported in other studies, which suggest that the high prevalence of LOH HLA is attributable to positive selection during tumor evolution, facilitating immune escape [32]. In the present study, single nucleotide polymorphism (SNP) array techniques were used to explore the contribution of the LOH HLA mechanism to MDS progression. Extensive 6p LOH, including the complete HLA region, may be an immune escape mechanism, explaining its impact on clonal evolution and disease progression.
Patient characteristics
The incidence of MDS in Spain is estimated at 4-5 cases per 100,000 persons/year [1]. Over the past 4 years, 120 new cases have been recorded in our geographical area. The present study includes 19 of these cases, including 8 cases of advanced MDS with excess blasts (MDS EB) and 11 cases of AML secondary to MDS (sAML).
Of the 27 patients included in the study, 21 had MDS, sAML, or CMML; for 14 of these, the baseline IPSS-R score was Very High Risk (VHR, n=6), High Risk (HR, n=1), Intermediate (n=3), or Low/Very Low Risk (LR or VLR, n=4), and no data were available for the remaining 7 patients. The cytogenetic risk score was very poor (n=6), intermediate (n=4), or good/very good (n=5); no data were available for 6 patients (Table 1).
Mutational analysis
Next, the mutational profile of the patients was analyzed, sequencing target regions of 54 genes associated with myeloid neoplasms using NGS techniques. One patient (Patient 3) could not be studied by this procedure. Out of the 26 patients studied, 23 (85.2%) had mutations in driver genes (Table 1 and Supplementary Table 1). Most mutant driver genes were splicing genes (SF3B1, U2AF1, SRSF2), methylation genes (TET2, IDH1/2, DNMT3A), and/or chromatin regulation genes (ASXL1, EZH2). At least one of the aforementioned genes was mutated in 15 of the 26 patients (57.7%). There was also a notable frequency of mutations affecting RUNX1 (n=6) and TP53 (n=8). The majority of patients with mutations in TP53 (5 out of 8 patients) had no alterations in other genes.
Among the 20 patients with MDS-EB, sAML, or CMML, 8 (40%) (4 MDS-EB, 3 sAML and 1 CMML) had ≤2 mutations in driver genes, 10 (50%) (2 MDS-EB, 7 sAML and 1 CMML) had ≥3 mutations, and only 2 (10%) (1 MDS-EB and 1 sAML) had no mutations in the sequenced genes. Among the 6 patients in the de novo AML group, 4 had ≤2 mutations in driver genes. Only patient 22 had 3 mutations, while patient 23 had no mutations in the studied genes. Hence, the patients with sAML had a larger number of mutations affecting driver genes in comparison to patients with MDS-EB or de novo AML.
Furthermore, 23 of the 26 patients had a mutation with allelic frequency (Variant Allele Frequency, VAF) ≥40% in at least one driver gene, and only 3 patients (1 sAML and 2 de novo AML) had VAF<40% in mutated genes (Supplementary Table 1). In addition, 13 (72%) of the 18 patients with MDS-EB or sAML had at least one mutation in a High-Molecular-Risk (HMR) gene (TP53, RUNX1, ASXL1, ETV6, EZH2). Interestingly, SF3B1 mutation, considered a good prognosis factor, was observed in three of the five patients with no HMR gene mutation (Table 1 and Supplementary Table 1).
LOH analysis in HLA region of chromosome 6
We performed SNP array studies of chromosome 6 to analyze LOH in the HLA region (6p21) (LOH HLA) using DNA from purified CD34 + blasts and DNA from autologous CD3 + control cells. In 24 (89%) out of the 27 patients studied, no alterations were detected on chromosome 6 in the CD34 + cell fraction in comparison to control cells (data not shown). These results were confirmed by HLA typing techniques using Luminex technology. The purified CD34 + cell fraction of the 24 patients was found to retain the two HLA haplotypes observed in the autologous control cells ( Table 2). Among the 24 patients with no alterations on chromosome 6, 17 had MDS or sAML with high-risk IPSS-R scores and/or mutations affecting HMR genes, while 2 of the remaining 7 patients had CMML and 5 had de novo AMLs.
LOH HLA was detected in 3 (11%) of the 27 patients (2 with AML secondary to MDS with isolated del(5q), and 1 with de novo AML), as detailed below.
Patient 10 had isolated del(5q) MDS with Low-Risk IPSS-R (score of 2); no mutational studies were carried out at the diagnosis in April 2012. After 5 cycles of lenalidomide therapy with no hematologic response and high toxicity, this patient underwent RIC-Allo-HSCT from a related donor with compatibility at all HLA loci (10/10) in October 2012. The patient suffered engraftment failure in May 2014, and the mutational profile at that time predicted a good prognosis (SF3B1, VAF= 40%) (Supplementary Figure 1A). A second RIC-Allo-HSCT obtained a complete response with full chimerism [34]. In November 2016, the patient relapsed and progressed to a secondary AML (mixed chimera with 35% of recipient) and received 2 cycles of 5-azacitidine. In February 2017, the chronic hepatic, cutaneous and digestive-graft versus host disease (GVHD) of the patient worsened, followed by death due to multiple organ failure and chronic grade 4 GVHD one month later ( Figure 1A).
The cytogenetic features of leukemic cells (isolated del(5q)) and driver mutation (SF3B1, VAF=17%) were the same before the second RIC-Allo-HSCT as after the subsequent relapse ( Figure 1A and Supplementary Figure 1B). No additional mutations were detected in sequenced genes. Chimera studies on CD34 + isolated cells obtained at the time of the second relapse showed that the leukemic cells all belonged to the patient (data not shown). SNP array analysis was carried out in DNA from CD34 + and CD3 + autologous cells, detecting LOH in the CD34 + fraction due to a deletion of 38 Mb (approximately from p25.2 to p21.2) involving a large part of the short arm of chromosome 6 that encompasses the HLA region ( Figure 2). B-allele frequency (BAF) plots on CD34 + cells revealed a homozygosity pattern in the distribution of SNPs (BAF= 0 or 1) in the 6p region ( Figure 2B and 2D) compared with the control sample, which showed a heterozygosity pattern (BAF= 0, 0.5 or 1) (Figure 2A and 2C). In addition, log2 ratio studies in CD34 + cells showed conserved CN (log2 ratio= 0) similar to that of control cells ( Figure 2E and 2F). All these data suggested a CN-LOH in CD34 + leukemic cells due to acquired uniparental disomy (aUPD) mechanisms.
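As an illustration of how the BAF and log2-ratio information combine to indicate CN-LOH, a simplified sketch is given below; it is not the pipeline used in this study, and the array names and thresholds are assumptions.

```python
# Illustrative sketch: flag a copy-neutral LOH (CN-LOH) region from SNP-array
# A/B allele intensities. a_tumor/b_tumor hold per-SNP allele intensities for
# the CD34+ blasts and a_ctrl/b_ctrl for the autologous CD3+ control, all
# restricted to the genomic region of interest (e.g. 6p).
import numpy as np

def cn_loh_flag(a_tumor, b_tumor, a_ctrl, b_ctrl,
                het_band=(0.25, 0.75), max_het_frac=0.05, max_abs_log2r=0.2):
    baf_tumor = b_tumor / (a_tumor + b_tumor)                  # BAF near 0, 0.5, or 1
    baf_ctrl = b_ctrl / (a_ctrl + b_ctrl)
    log2r = np.log2((a_tumor + b_tumor) / (a_ctrl + b_ctrl))   # total-intensity ratio

    # Only SNPs that are heterozygous in the control (BAF ~ 0.5) are informative.
    het = (baf_ctrl > het_band[0]) & (baf_ctrl < het_band[1])

    # LOH: informative SNPs collapse to BAF ~ 0 or ~ 1 in the tumor fraction;
    # copy-neutral: the median log2 ratio stays near 0 (two copies retained).
    het_frac = np.mean((baf_tumor[het] > het_band[0]) & (baf_tumor[het] < het_band[1]))
    copy_neutral = abs(np.median(log2r)) < max_abs_log2r
    return (het_frac < max_het_frac) and copy_neutral
```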
Genomic HLA typing was performed, based on Sanger Sequencing, in order to verify the finding of LOH in the HLA region of leukemic cells ( Figure 2G and 2H). Comparison of the sequencing electropherogram of CD34 + cells with that of control cells revealed LOH at all polymorphic positions in exons 2, 3, and 4 of loci HLA-B, -C and -DRB1, retaining HLA-B * 18:01, C * 07:01 and DRB1 * 11:04 alleles. HLA-A and -DQB1 loci were homozygous in CD3 + control cells and were therefore not informative for the sequencing analysis. In conclusion, CD34 + cells of patient 10 lost HLA-B * 39:01, C * 12:03 and DRB1 * 11:01 alleles ( Figure 2G and 2H). Subsequent HLA Luminex typing of leukemic blasts confirmed the HLA allele losses detected by sequencing analysis ( Table 2). Patient 9 had isolated del(5q) MDS with Low-Risk IPSS-R (score of 2) ( Table 1). Mutational studies were not carried out at the diagnosis in May 2013. After 36 cycles of lenalidomide therapy, a complete hematologic and cytogenetic response was observed [34]. In June 2016, the patient relapsed and progressed to sAML and finally died due to multiple organ failure. At the moment of the relapse/progression, no mutations were detected by NGS techniques. DNA from purified CD34 + and CD3 + autologous cells at the time of the relapse were used for SNP array analysis. Results revealed a complex pattern of LOH produced by a non-simultaneous double deletion affecting a region of 40Mb (approximately, from p25.2 to p12.3) that included the HLA loci in leukemic blasts compared with CD3 + cells ( Figure 3A and 3B). This complex pattern is due to a small region telomeric to the HLA loci with heterozygosity retention, but this technique did not reveal its location at chromosomal level ( Figure 3B). In addition, the Log2 Ratio plot showed a conserved copy number; therefore, this patient also had CN-LOH in CD34 + leukemic cells (data not shown).
SNP array results for HLA loci genotyping in CD34+ cells showed a lower amplification signal for SNPs associated with the alleles corresponding to the haplotype HLA-A*66:01; HLA-B*51:01; C*07:01; DRB1*13:03; DQB1*03:01, in comparison to the signal for haplotype HLA-A*30:02; HLA-B*18:01; C*05:01; DRB1*03:01; DQB1*02:01, a frequent haplotype in Spanish hematopoietic patients [35]. Next, analysis of the Sanger sequencing electropherogram in the CD34+ cell fraction revealed, at all polymorphic positions of the analyzed loci (HLA-DRB1 and -DQB1), a striking reduction in the height of the peaks that constituted the nucleotide sequence corresponding to the alleles HLA-DRB1*13:03 and DQB1*03:01 in comparison to the signal intensity of the sequence for alleles DRB1*03:01 and DQB1*02:01 (Figure 3C and 3D). These findings suggested that the haplotype loss might not be present in all purified cells of the CD34+ cell fraction, so that there would be a small proportion of pathological cells with heterozygosity retention, conserving the two HLA haplotypes. Likewise, the HLA typing analysis by Luminex technology did not assign DRB1*13:03 or DQB1*03:01 alleles in CD34+ samples, because most of the adjusted median fluorescence intensity (MFI) values were close to or lower than the cut-off value of the informative sequence-specific oligonucleotide (SSO) probes (Table 2 and Figure 3E and 3F). Patient 22 had de novo AML (AML M1 FAB classification) with no cytogenetic anomaly; mutation studies were not carried out at the diagnosis in August 2014. After induction and two consolidation cycles of chemotherapy, a morphologic complete response (CR) and positive minimal residual disease (MRD) were recorded, and the patient underwent autologous transplantation in February 2015. The patient suffered a relapse in July 2017 and received second-line treatment with IdaFlag induction, obtaining CR with negative MRD. The patient then underwent RIC-Allo-HSCT from a related donor (10/10) in November 2017, obtaining a CR and full chimerism [34]. In January 2018 (52 days after RIC-Allo-HSCT), the acute hepatic, cutaneous and digestive-GVHD of the patient deteriorated, followed by death due to multiple organ failure and invasive pulmonary aspergillosis (Figure 1B).
Leukemic cells at the relapse in July 2017 (post-autologous transplantation) were FLT3-TKD-positive (VAF=43%) with mutations in the NPM1 (VAF=34%) and WT1 (VAF=47.2%) genes (Table 1, Supplementary Table 1 and Supplementary Figure 1C-1E). SNP array analysis of DNA from purified CD34+ and CD3+ autologous cells at this time showed LOH due to a deletion of 36 Mb (approximately from p25.2 to p21.2) that encompassed the HLA region in leukemic blasts in comparison to control cells. CD34+ cells showed a homozygotic pattern in the altered region with a conserved copy number, suggesting CN-LOH, which also affected the HLA region (Table 2).
DISCUSSION
Alteration of HLA-I expression on the cell surface is frequently used by tumors to evade T-cell control [13,36-38]. Various HLA-I phenotypes have been reported in tumors arising in different tissues, including total loss or downregulation of HLA-I antigens and HLA-haplotype, -locus or -allele loss, among others, and these changes have been attributed to multiple molecular mechanisms [13,39,40]. The most frequently cited mechanism involves LOH HLA, suggesting a common immune evasion strategy in cancers that may result from positive selection, according to recent studies [32,41,42]. This mechanism involves the presentation of a smaller repertoire of putative neoantigens to CD8+ T cells in comparison to heterozygous status, resulting in a less effective antitumor response by cytotoxic T (CD8) cells [32,42-44]. Most investigations of HLA expression have been performed in solid tumors, detecting a higher frequency of HLA losses in comparison to hematological neoplasms, with reports of 90% in cervical cancer [45], 49% in lung carcinoma [30,32], and 20-70% in both melanoma [39,46,47] and laryngeal carcinoma [48]. It has been reported that partial loss of HLA alleles is a more frequent mechanism than total haplotype loss in some hematological neoplasms [23,49]. In the present study of 27 patients, loss of a full HLA haplotype was detected in three (11%), and we provide the first report of LOH HLA in two patients (patient 10 and patient 22) who relapsed after identical-HSCT. Intriguingly, both relapsing patients had a favorable clinical-biological profile, with no risk factors such as HMR mutations or complex karyotypes. One of them (patient 10) had isolated del(5q) with a low risk according to the IPSS/IPSS-R system and a good karyotype (low cytogenetic risk); the same SF3B1 mutation was detected before the second HSCT and at the subsequent relapse/AML transformation, with no additional mutations. SF3B1 is a molecular marker of a good prognosis and has been associated with positive post-HSCT outcomes [9]. The other patient (patient 22) had a de novo AML without cytogenetic anomaly, and the mutations detected at the time of relapse (FLT3-TKD, NPM1, WT1) have not been considered molecular markers of an unfavorable prognosis [50].
The remaining patient with LOH HLA (patient 9) was diagnosed with sAML and, in common with patient 10, had a favorable clinical profile. It is likely that the selective HLA loss in these patients favors immune evasion and may explain the expansion and proliferation of the malignant clone. In this line, a recent study observed an increased homozygosity rate at HLA-A, B, C and DRB1 loci in chronic lymphocytic leukemia patients in comparison to the general population, suggesting LOH HLA as a possible mechanism that evolves through positive selection [51]. Furthermore, in a study of patients with aplastic anemia, the finding of CN-LOH at 6p arms involving the HLA region was described as an escape mechanism from CTL autoimmunity [25].
In contrast, we found no case of LOH HLA in patients with a high risk of progression to leukemia according to their IPSS and IPSS-R scores, the presence of HMR mutations, and their complex karyotypes. It is likely that a proliferative advantage from the accumulation of these risk factors accounts for the disease progression, although other immune evasion mechanisms might be involved. Alternatively, it has been found that the role of cellular immune responses significantly differs between low-and high-risk (IPSS-classified) MDS. Low-risk MDS is associated with autoimmune disease-like characteristics, increased NK cell levels, activation of CTLs, reduced regulatory T-cell (Treg) count, and increased type 17 T-helper (Th17) cell count. In contrast, high-risk MDS bone marrow is characterized by a microenvironment that suppresses immune responses through the presence of dysfunctional NK cells, reduced and exhausted CTLs, and higher levels of Tregs and immunosuppressive cytokines. We propose that certain features of the tumor microenvironment in low-risk MDS patients permit immunoediting of the cancer cells, which thereby acquire a weakly immunogenic phenotype (HLA loss) that facilitates immune escape [52].
Haplotype loss in chromosome 6 has been reported in patients after haploidentical-HSCT, with findings of the loss of mismatched HLA by the aUPD mechanism in around 20% of relapsing patients [27,28,53]. The fact that leukemic cells are in direct contact with NK cells in myelodysplastic syndromes and other hematological disorders may explain why complete loss of HLA alleles has rarely or never been observed in these diseases. No case of total loss of HLA-I from the cell surface was detected in the present study (data not shown). Neoplastic cells are less exposed to the action of NK cells in solid tumors, for which various immune escape mechanisms have been reported, including homing defects/difficulties that exclude NK cells from direct contact with cancer cells [54-56]. Our group found that tumor nests in HLA-I-negative NSCLC cases are poorly infiltrated by CTLs and NK cells [41]. Progressive alteration in the phenotype of NK cells from healthy tissue to tumor tissue has also been described, with the emergence of a non-cytotoxic phenotype in tumor tissue [53-57]. Interestingly, defects in NK cell function have also been described in myelodysplastic syndromes, mostly attributed to the unsuccessful or inadequate generation of mature/functionally competent NK cells, which might contribute to disease progression through impaired immune surveillance [58,59]. Our tumor microenvironment study in bone marrow samples from MDS patients detected striking alterations in the functional phenotype of NK cell populations (data not shown). Notably, the patient with sAML and partial LOH HLA (not present in the total purified CD34+ cell fraction) had HLA-C alleles belonging to different C-groups (HLA-C*07:01 (C1 group) and HLA-C*05:01 (C2 group)) that are suppressor ligands of killer immunoglobulin-like receptors (KIRs) in NK cells. In this context, a single haplotype loss in neoplastic cells would expose the cells to attack by NK cells, given that the KIRs would not interact with the lost HLA-C antigens. In addition, LOH HLA was restricted to HLA-B alleles that belonged to the HLA-Bw6 group, whereas the HLA-Bw4 group was conserved, and HLA-Bw6 loss has been described as an escape mechanism not only from CTLs but also from NK cells [60].
In contrast, the two patients with LOH HLA after identical-HSCT had the same two HLA-C antigens (HLA-Cw7, Cw12), both belonging to the C1 group. LOH HLA may take place, at least in patients with a permissive phenotype (inhibitory ligands of the same C-group), with tumor cells becoming invisible to T and NK cell attack. These results also explain why downregulation of some alleles, but not complete loss of HLA class I antigen expression, is observed in leukemias prior to transplant and may lead to escape from immune surveillance and adversely impact clinical outcome [61]. In fact, mutations in the gene that encodes the β2-microglobulin chain have not been described in hematological neoplasms with peripheral expression, but have been reported in patients with lymphomas [62]. A likely explanation is that complete loss of HLA class I antigen expression renders cells susceptible to NK cell-mediated killing, whereas partial loss of HLA class I alleles might protect the tumor cell from both a T cell- and NK cell-mediated immune response [13,29,63,64].
In conclusion, in the absence of mechanisms that satisfactorily explain the aggressive behavior of the disease in the three patients described in this paper, who all had favorable clinical and mutational profiles, these data suggest that LOH HLA may be an important immune evasion mechanism that allows clonal evolution of the disease. Furthermore, the study of LOH HLA may help to explain poor clinical outcomes in apparently low-risk patients.
Patients
The study included 27 patients from Granada region in Spain diagnosed between December 2016 and February 2018. Patients were classified according to the WHO-2016 classification [33]. Eight patients (6 males and 2 females, mean age 73 years) were diagnosed with MDS, including one with excess blasts-1 (MDS EB-1) and seven with excess blasts-2 (MDS EB-2); eleven (8 males, 3 females; mean age 73 years) were diagnosed with AML secondary to MDS (sAML), two (78-yr-old male and 66-yr-old female) with chronic myelomonocytic leukemia (CMML), and six (3 males, 3 females; mean age 59 years) with de novo AML. Patient characteristics are exhibited in Table 1. All patients signed informed consent to participate in the study, which followed the principles of the Helsinki Declaration and was approved by the ethical committee of our hospital.
Automated CD34+ and CD3+ cell isolation
An automatic immunomagnetic cell processing system (autoMACS Pro, Miltenyi Biotec) was used to isolate CD34 + and CD3 + cells from the bone marrow and peripheral blood of patients. Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Hypaque centrifugation (GE Healthcare Bio-Sciences). CD34 + cells were isolated with a CD34 + cell isolation kit (MicroBead Kit, human, Miltenyi Biotec), according to the manufacturer's instructions. MACS buffers (Miltenyi Biotec) were used for incubation with beads and for cell separations on the AutoMACS Cell Separator. CD3 + cells isolation was performed using CD3 + cell isolation kit (MicroBead Kit, human, Miltenyi Biotec) following the same methodology as the isolation of CD34 + cells. Flow cytometry was used to evaluate CD34 + and CD3 + purity. All samples showed purity ≥ 96%.
Flow cytometry analysis
The effectiveness of CD34+ cell separation was verified by flow cytometry, in line with the purity values (≥ 96%) reported above.
DNA isolation
Genomic DNA was obtained from peripheral blood and bone marrow samples or from CD34+- and CD3+-purified cells using a QIAamp DNA Blood Mini Kit (QIAGEN). Extracted DNA was quantified using a Qubit dsDNA BR Assay Kit (ThermoFisher Scientific, Waltham, MA) and a Qubit 2.0 Fluorometer.
Next-generation sequencing (NGS) of myeloid gene panel
The mutational profile of the patients was studied by NGS using a commercial gene panel (TruSight Myeloid Sequencing Panel, Illumina, San Diego, CA) that includes 54 myeloid target genes. Amplicon sequencing libraries were prepared from 50 ng of DNA per sample using the TruSeq Custom Amplicon Assay and TruSight Myeloid Sequencing Panel Oligos. Libraries were normalized to 4 nM and pooled in groups of 8 patient libraries. Paired-end sequencing (2x150 cycles) of each library pool was performed on a MiSeq platform with a reagent kit V3 (Illumina, San Diego, CA) following the manufacturer's instructions. The fastq files obtained after sequencing were loaded in the Sophia Genetics application (version 4.6.2) for sequence alignment, variant annotation and subsequent analysis. Integrative Genomics Viewer version 2.3.68 (Broad Institute, Cambridge, MA) was used to visualize read alignment data.
Single nucleotide polymorphism (SNP) array analysis
DNA samples from leukemic cells and controls (autologous CD3+ cells) were genotyped using the Illumina Infinium assay on the Immunochip (v2), following the manufacturer's instructions (Illumina, San Diego, CA), which detects 253,703 SNPs selected according to the GWAS of immune system diseases. Illumina GenomeStudio software was used to obtain data on the loss of heterozygosity (LOH) and copy number (CN), expressed as "theta" and "R" values, respectively, with "theta" representing the B-allele frequency and "R" the combined fluorescence intensity of both channels. "Theta" can be interpreted directly to detect LOH using BCFtools [PMID:26826718], while "R" must be compared with a reference standard to detect regions of CN loss or gain. In the present study, this standard was based on the median fluorescence value per probe in Immunochip data from 1632 non-cancer samples of European ancestry, subsequently obtaining log-ratios. A log-ratio distribution around zero can be regarded as neutral CN, while chromosomal intervals of mainly positive (or negative) log-ratios can be interpreted as CN gain (or loss). Chromosomal stretches of B-allele frequencies with values of mainly zero or one can be interpreted as LOH.
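The LOH/CN interpretation rules just described (B-allele frequencies clustering at 0 or 1 pointing to LOH, and a median log-ratio near zero pointing to a neutral copy number, hence CN-LOH when both hold) can be sketched in a few lines of Python. This is only an illustrative sketch with hypothetical thresholds and function names, not the pipeline used in the study.

```python
import statistics

def classify_region(baf_values, log_ratios, het_tol=0.15, cn_tol=0.2):
    """Rough classification of a chromosomal stretch from SNP-array data,
    following the interpretation given in the text: B-allele frequencies
    clustered at 0 or 1 suggest LOH, and a median log-ratio near zero means
    a neutral copy number, so LOH plus neutral CN points to CN-LOH (aUPD).
    Thresholds here are illustrative, not the ones used in the study."""
    heterozygous = [b for b in baf_values if het_tol < b < 1 - het_tol]
    loh = len(heterozygous) / len(baf_values) < 0.05   # almost no BAF ~ 0.5 calls
    median_lr = statistics.median(log_ratios)
    if median_lr > cn_tol:
        cn_state = "gain"
    elif median_lr < -cn_tol:
        cn_state = "loss"
    else:
        cn_state = "neutral"
    return {"LOH": loh, "copy_number": cn_state,
            "CN-LOH": loh and cn_state == "neutral"}
```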
HLA genomic typing by Luminex technology
DNA from CD34 + cells and autologous CD3 + lymphocytes were used to perform HLA genomic typing with the LIFECODES HLA-A, -B, -C, -DRB1 and -DQB1 Typing Kits-Rapid (IMMUCOR, Georgia) following the manufacturer's instructions. The Luminex 100/200 TM System, based on xMAP Technology (Luminex®, Austin, Texas) and the Match-It DNA v1.2 software (IMMUCOR) were used to analyze HLA typing, enabling detection of haplotype, locus or allele losses in HLA genes in leukemic cell samples.
HLA genomic typing by Sanger sequencing
Sanger sequencing analysis was performed with the GenDx AlleleSEQR kits (GenDx, Utrecht) to confirm the results obtained by SNP array, using DNA from CD34+ cells and autologous CD3+ T cells. CE-marked SBTengine® software was used for high-resolution analysis of HLA sequencing data.
Author contributions
PM and MK contributed equally. PM and MB contributed to sequencing and data analysis. MK, AM and JM contributed to LOH experiments and analysis. FH, PG and MJ contributed to the clinical and hematologic characterization of the patients. PM, MB, PJ, FG and FRC were involved with all aspects of the study's design and contributed to manuscript preparation.
"Biology",
"Medicine"
] |
The Largest Condorcet Domain on 8 Alternatives
In this note, we report on a Condorcet domain of record-breaking size for n = 8 alternatives. We show that there exists a Condorcet domain of size 224 and that this is the largest possible size for 8 alternatives. Our search also shows that this domain is unique up to isomorphism and reversal. In this note we investigate properties of the new domain and relate them to various open problems and conjectures.
Introduction
Condorcet domains (CD), which are sets of linear orders giving rise to voting profiles with an acyclic pairwise majority relation, have been studied by mathematicians, economists, and mathematical social scientists since the 1950s [3,2]. Condorcet domains find use in Arrovian aggregation and social choice theory [13,10]. In social choice theory, a Condorcet winner is a candidate who would win over every other candidate in a pairwise comparison by securing the majority of votes [11]. However, the existence of such a candidate is not always guaranteed, leading to the relevance of Condorcet domains. A central question in this field has revolved around identifying large Condorcet domains; see Fishburn, Galambos & Reiner, Monjardet, Danilov, Karzanov & Koshevoy, Puppe & Slinko, Karpov & Slinko, and Karpov [5,6,12,4,14,7,9].
A significant category of Condorcet domains is rooted in Fishburn's alternating scheme, which alternates between two restriction rules on a subset of candidates and has been employed to construct numerous maximum-size Condorcet domains. We refer to such domains based on the alternating scheme as Fishburn domains.
Fishburn introduced a function f(n) in [5], defined to be the maximum size of a Condorcet domain on a set of n alternatives, and posed the problem of determining the growth rate of f(n). Fishburn also proved that for n = 16 the Fishburn domain is not the largest CD. This was followed by further research and bounds on f(n) by Galambos & Reiner, Danilov & Karzanov, and Monjardet [6,4,12]. This work was extended and refined by Karpov & Slinko [8] and by Zhou & Riis [17].
Although extensive research has been conducted, all known maximum-sized Condorcet domains have been built using components based on either Fishburn's alternating scheme or his replacement scheme. For instance, Karpov & Slinko [7] introduced a novel construction that enabled the creation of new Condorcet domains with unprecedented sizes. This allowed the authors to construct a Condorcet domain superseding the size of Fishburn's domain for 13 alternatives. Recently, Zhou & Riis [17] constructed Condorcet domains on 10 and 11 alternatives, superseding the size of the corresponding Fishburn domains.
This paper shows that n = 8 is the smallest number of alternatives for which the Fishburn domain (size 222) is not the largest and that there is a Condorcet domain of size 224. Furthermore, relying on extensive computer calculation on the super-computer Abisko at Umeå, we also established 224 as an upper bound and that, up to isomorphism, there is only one such Condorcet domain. The need for a supercomputer, and a carefully devised algorithm, reflects the fact that a naive search would lead to a search tree with more than 6^112 vertices. We also analyse some of the properties of this new domain.
Preliminaries
There are many equivalent definitions of Condorcet domains. In this paper, we adopt the definition proposed by Ward in [16]. According to this definition, a Condorcet domain of degree n ≥ 3 is a set of orderings of X_n = {1, 2, ..., n} that satisfies certain local conditions. Specifically, a Condorcet domain of degree n = 3 is defined as a set of orderings of X_3 that satisfies one of nine laws, denoted by xNi, where x is an element of X_3 and i is an integer between 1 and 3. The law xNi requires that x does not come in the i-th position in any order in the Condorcet domain. For example, xN1 means that x may never come first, while xN3 means that x may never come last.
A Condorcet domain of degree n > 3 is a set A of orderings of X_n that satisfies the following property: the restriction of A to every subset of X_n of size 3 is a Condorcet domain. In other words, for every triple a, b, c of elements of X_n, one of the nine laws xNi must be satisfied, where x ∈ {a, b, c}. For example, cN2 would mean that c may not come between a and b in any ordering in A.
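As a minimal illustration of this triple-wise definition, the following Python sketch (our own naming, not part of the paper) checks whether a given set of linear orders, represented as tuples on {1, ..., n}, is a Condorcet domain.

```python
from itertools import combinations

def satisfies_never_law(orders, triple):
    """Return True if, restricted to this triple, the orders obey at least one
    law xNi, i.e. some element x of the triple never appears in position i."""
    restrictions = [tuple(e for e in order if e in triple) for order in orders]
    for x in triple:                 # candidate element x
        for i in range(3):           # candidate forbidden position (0-based)
            if all(r[i] != x for r in restrictions):
                return True
    return False

def is_condorcet_domain(orders, n):
    """A set of linear orders on {1..n} is a Condorcet domain iff every triple
    of alternatives satisfies some never law."""
    return all(satisfies_never_law(orders, t)
               for t in combinations(range(1, n + 1), 3))
```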
A maximal Condorcet domain of degree n is a Condorcet domain of degree n that is maximal under inclusion among the set of all Condorcet domains of degree n. A maximum Condorcet domain is a Condorcet domain of the largest possible size for a given value of n.
To avoid repetition, we will use the acronyms CD and MCD to refer to Condorcet domain and maximal Condorcet domain, respectively.
For the case of degree 3, there are nine MCDs, each corresponding to one of the nine different laws xNi. It is easy to verify that these nine MCDs contain exactly four elements: two transpositions and two even permutations (either the identity or a 3-cycle). Among the 9 MCDs of order 3, precisely six contain the identity order 1 > 2 > 3, since the laws 1N1, 2N2, and 3N3 each rule out one CD of degree 3.
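These counts are easy to confirm by brute force; the following short check (again our own illustrative code, not the paper's) enumerates, for each law xNi, the orders on {1, 2, 3} that obey it and reports its size and whether the identity survives.

```python
from itertools import permutations

all_orders = list(permutations(range(1, 4)))
identity = (1, 2, 3)
for x in range(1, 4):
    for i in range(3):
        # The MCD defined by the single law xN(i+1): x never in position i.
        domain = [o for o in all_orders if o[i] != x]
        print(f"{x}N{i + 1}: size {len(domain)}, contains identity: {identity in domain}")
```

Each of the nine domains has size 4, and the identity is missing exactly for the laws 1N1, 2N2 and 3N3, as stated above.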
Transformations and isomorphism of Condorcet domains
First, recall that each linear order A in a CD B can also be viewed as a finite sequence of integers, obtained by ordering the elements of X_n according to the linear order, and as the permutation which permutes X_n to this sequence. We let S_n denote the set of all permutations on X_n.
Let g ∈ S_n and i ∈ X_n. We define ig as g(i); and if A is a sequence of elements of X_n, we define Ag to be the sequence obtained by applying g to the elements of A in turn. If B is a CD, regarded as a set of sequences, we define Bg to be the set of sequences obtained by applying g to the sequences in B, and then Bg is also a CD. Specifically, if B satisfies the law xNi on a triple (a, b, c) for some x ∈ {a, b, c}, then Bg satisfies the law xgNi on the triple (ag, bg, cg). We call the CDs B and Bg isomorphic. Therefore, two isomorphic CDs differ only by a relabelling of the elements of X_n.
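A minimal sketch of this relabelling, in the same tuple representation as above (function and variable names are ours):

```python
def relabel(domain, g):
    """Apply a permutation g (a dict mapping i -> g(i)) to every order in the
    domain; the resulting set Bg is again a Condorcet domain."""
    return {tuple(g[i] for i in order) for order in domain}

# Example: the 2N3 domain on {1, 2, 3} (2 never last), relabelled by swapping
# 1 and 2, becomes the 1N3 domain (1 never last).
B = {(1, 2, 3), (2, 1, 3), (2, 3, 1), (3, 2, 1)}
g = {1: 2, 2: 1, 3: 3}
Bg = relabel(B, g)   # {(1, 2, 3), (2, 1, 3), (1, 3, 2), (3, 1, 2)}
```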
The core of a CD B is the set of permutations g ∈ B such that Bg = B. The core of a CD that contains the identity permutation is a group. We provide a more detailed discussion of the core in [1].
Table 1: The Condorcet domains for 3 alternatives which contain the identity order, together with their cores. Each rule assigned to the triple (i, j, k) with i < j < k is associated with a CD (given on the same line). The CDs displayed fall into 3 isomorphism classes, and each CD has a core of size 2.
It can be readily shown that, for any Condorcet domain, the total number of 1N3 and 2N3 rules remains invariant under isomorphism. Likewise, this holds for the total number of 2N1 and 3N1 rules and the total number of 1N2 and 3N2 rules.
Search methodology
We developed an algorithm to generate all MCDs of a given degree n and size at least equal to a user-specified cutoff value (e.g. size ≥ 222 for n = 8). We implemented this algorithm in C in a serial version, which is sufficient for n ≤ 6, and a parallelized version that we used for n = 7 and 8. It is important to stress that this algorithm, unlike the one used by Zhou & Riis [17], aims to construct all MCDs above some user-specified size.
Our algorithm works by starting with the unrestricted domain of all linear orders on n alternatives and then stepwise applying never laws to those triples which do not already satisfy some such law. The algorithm works with unitary CDs, meaning CDs which contain the identity permutation. Since every CD is isomorphic to some unitary CD, this is without loss of generality. However, by using unitary CDs we reduce the set of possible never laws from 9 to 6, thereby speeding up our search. We will next sketch some of the details required in order to see that the algorithm is complete, though at first inefficient, and then how to also make it efficient.
We define the Condorcet tree of rank n, which is a homogeneous rooted tree of valency 6 and depth C(n, 3), as follows. The C(n, 3) triples of elements of X_n are arranged in some order, so that the vertices of the tree at a given depth t are associated with the corresponding triple T_t. The six laws that a unitary CD may obey on a given triple are also ordered, and each child w of a non-leaf v of the tree is associated with one such law L_w. Every vertex v is associated with a set c_v of linear orders on X_n. If v is the root then c_v is the set of all orderings. If w is a child of v, where v has depth t, then c_w is obtained from c_v by removing those orderings that do not satisfy the law L_w when applied to T_t.
It is possible, in theory, to process the entire tree, depth first, constructing the sets c_v for every vertex v. Then the unitary MCDs of degree n, as well as many non-maximal CDs, are found among the sets c_v for the leaves v. In practice this is impracticable for n > 5 as the tree is too big.
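To make the naive processing concrete, here is an illustrative Python sketch of this depth-first walk for small n, with only the trivial pruning that a branch whose current set is no larger than the best set found so far cannot improve on it. It is not the authors' C implementation, and it omits the implied-law and duplicate pruning described next; even so, it recovers the known maximum for n = 4 and becomes impractical well before n = 8.

```python
from itertools import combinations, permutations

def largest_unitary_cd(n):
    """Naive depth-first walk of the Condorcet tree: choose one of the six
    never laws compatible with the identity order for each triple in turn,
    keeping only the orders that satisfy every law chosen so far."""
    triples = list(combinations(range(1, n + 1), 3))
    all_orders = list(permutations(range(1, n + 1)))

    # On a triple (a, b, c) with a < b < c the identity restricts to (a, b, c),
    # so the laws aN1, bN2, cN3 would exclude it; the other six remain allowed.
    def allowed_laws(triple):
        a, b, c = triple
        return [(a, 1), (a, 2), (b, 0), (b, 2), (c, 0), (c, 1)]  # (x, i) = law xN(i+1)

    def obeys(order, triple, law):
        x, i = law
        restriction = [e for e in order if e in triple]
        return restriction[i] != x

    best = []

    def search(depth, current):
        nonlocal best
        if len(current) <= len(best):
            return                  # applying further laws can only shrink the set
        if depth == len(triples):
            best = current
            return
        for law in allowed_laws(triples[depth]):
            search(depth + 1, [o for o in current if obeys(o, triples[depth], law)])

    search(0, all_orders)
    return best

# For n = 4 this recovers the known maximum size f(4) = 9.
print(len(largest_unitary_cd(4)))
```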
For any leaf v the set c_v is a unitary CD, but these are not always maximal, and there will be very many duplicates. This arises from the fact that, as we move down the tree, the sets c_v will often not only obey the laws that have been explicitly applied on triples but may also obey laws on triples which are implied by the applied laws. Using this observation allows a massive reduction in the number of vertices that need to be processed, giving us a tree with 0, 1 or 6 descendants from v depending on whether c_v cannot be maximal or must be a duplicate, has an implied law, or is unrestricted by earlier laws. This is determined as follows.
When a vertex v of height t is processed, the law that was enforced on each triple T_s for s ≤ t to define v (in other words, the path from the root to v) is recorded, and c_v is constructed by taking c_u, where u is the parent of v, and deleting all elements that do not satisfy the corresponding never law N_v. For each s ≤ t + 1, the set L_s of laws that all the elements of c_v obey when applied to the triple T_s is determined. If, for some s ≤ t, the set L_s contains a law that precedes the law N_u, where u is the ancestor of v of depth s, then the vertex v is not processed any further, on the grounds of duplication, and its descendants are not visited. Otherwise, for each s ≤ t, a law from L_s is selected, and the set of sequences that obey all these laws is computed. This set clearly contains c_v, and if, for some such selection of laws, this set strictly contains c_v, then again c_v is not processed further. In this case, any unitary CD arising from a leaf descendant of v must either fail to be maximal, or will be a duplicate of a unitary MCD constructed from a descendant of another vertex of depth t. If v passes these tests, and L_{t+1} is non-empty, the only descendant of v that will be processed is the child w defined by the least element of L_{t+1}, and then c_w = c_v. Otherwise all children of v are processed.
The validity of these restrictions of the full Condorcet tree follows from a recursive argument which is given in full in [1].
Condorcet domains on 8 alternatives with size 224
Relying on extensive computer calculation on the super-computer Abisko at Umeå, we have established that:
Theorem 4.1. The maximum size of a CD on 8 alternatives is 224. Up to isomorphism, there is only one such CD. This CD has a core of size 4. There are no MCDs of size 223.
The largest Condorcet domain containing the identity permutation and its reverse for n = 8 alternatives is the Fishburn domain, which has a size of 222.
We aim to extend this with more precise counts and analysis of other large Condorcet domains on 8 alternatives in an upcoming paper. Now let us investigate the properties of the MCD of size 224.
1. The Fishburn domain has size 222 and hence is not the maximum CD for n = 8 alternatives.
2. There are 56 isomorphic Condorcet domains of size 224 which contain the identity order. Among these there is one special MCD we will refer to as D224, where each never-rule (except for the two triplets (123) and (678)) is 1N3 or 3N1. We display the rules for D224 in Table 2 and its linear orders in Table 3.
3. The domain does not have maximal width, i.e. it does not contain a pair of reversed orders.
4. The domain is self-dual. That is, the domain is isomorphic to the domain obtained by reversing each of its linear orders.
5. The restriction of the domain to each triple of alternatives has size 4. This means that this domain is copious in the terminology of [15] and is equivalent to the fact that the domain satisfies exactly one never-rule on each triple.
6. The domain is a peak-pit domain in the sense of [4], i.e. every triple satisfies a condition of either the form xN1 or xN3, for some x in the triple.
7. The authors of [7] asked for examples of maximum CDs which are not peak-pit domains of maximal width. Our domain is the first known such example and shows that n = 8 is the smallest n for which this occurs.
8. The domain is connected (see [12] for the lengthy definition of this well-used property). This is in line with the conjecture from [14] that all maximal peak-pit CDs are connected.
9. The domain has a core of size 4, which is given in captions of Tables 2 and 3.
Conclusion
In conclusion, our work has demonstrated a record-breaking maximum Condorcet domain for n = 8 alternatives, which is essentially unique (up to isomorphism and reversal). We have also investigated how our domain relates to various well-studied properties of MCDs. Our findings contribute to understanding the structure of Condorcet domains and have potential applications in voting theory and social choice.
Overall, our work highlights the importance of understanding the properties and structures of CDs in order to construct larger examples and might pave the way for future research in this area.
We also observe that some record-breaking CDs for n = 8 alternatives exhibit almost all rules of the form 1N3 and 3N1. These rules can be interpreted as a form of seeded voting. In such a system, for each set of three alternatives, a seeding is implemented to restrict the lowest-seeded alternative from being the highest-ranked preference or the highest-seeded alternative from being the lowest-ranked preference. A better understanding of the global effects of this type of local seeding could serve as a foundation for future research, potentially offering insights into algorithmic fairness and impartiality in computer-supported decision-making.
Table 2: Table of triplets and rules that produce the Condorcet domain D224 of size 224 for 8 alternatives. This specific CD is invariant under the action of the permutation group G formed by its core (listed in the caption of Table 3).
Table 3: Permutations in the Condorcet domain corresponding to the rules in Table 2. The CD's core consists of the underlined permutations 12345678, 12346587, 21435678 and 21436587.
"Mathematics"
] |
Developing a Method of Calculating the Operational Flow of Methanol to Prevent the Formation of Crystalline Hydrates in the Operation of Underground Gas Storage Facilities
In the operation of underground gas storage (UGS) facilities, failure to remove gas hydrates in time can lead to serious consequences, up to a complete shut-in of the facility. For a storage facility with a small stock of operating wells with high daily output, this would entail a violation of the technological regime, failure to meet gas sales plans, and increased downtime of the operating well stock. Therefore, ensuring the smooth and reliable operation of the UGS well stock is an urgent task. The authors of the article developed a methodology for calculating the operational flow of methanol needed to prevent the formation of gas hydrates during UGS operation. On the basis of the developed technique and industrial operating data from the Punginskoye UGS, the technological modes of its operation were studied and recommendations were made to prevent hydrate formation in the underground gas storage wells.
Introduction
In natural gas production, processing, and transportation systems, the formation of crystalline hydrates is a serious problem associated with disruption of the technological processes of gas production equipment and pipelines. Typical locations of gas hydrate formation in field conditions are the near-wellbore zone of wells, cables, and infield collectors.
To restore normal well operation after such complications, serious measures are required to thaw long hydrate plugs. Considerable effort and resources are spent on these measures, yet hydrate formation in gas wells cannot be prevented completely. Unlike gas fields, a characteristic feature of underground gas storage (UGS) facilities is the cyclical nature of their operation, with a periodic change in the direction of gas flow: from the reservoir during the withdrawal season and into the reservoir during the injection season. Under these operating conditions, hydrates most often form during the withdrawal period, at sub-zero outdoor air temperatures and high withdrawal rates. The withdrawal season usually runs from October to March, sometimes into April.
The object of research
In field conditions the most common way to combat hydrates is the use of a volatile hydrate inhibitor, methanol. Methanol strongly lowers the hydrate-formation temperature, rapidly decomposes already formed hydrate plugs, and mixes with water in any ratio. It has low viscosity and a low freezing point [3,17]. The advantage of using methanol as an anti-hydrate reagent is that this technology not only prevents hydrate formation but, under certain conditions, is also an effective means of removing hydrate deposits that have already formed.
Using the unique physical and chemical properties of methanol, in particular its ability to mix at any concentration and to pass into the vapor phase without losing its initial characteristics, and exploiting the cyclic operation mode of underground storage (injection of gas into the reservoir followed by withdrawal of gas from the reservoir [19]), a technology was developed for supplying methanol during the injection period into the bottomhole zone of individual wells whose technological mode of operation is accompanied by hydrate formation during the withdrawal period. In practice, this preventive method of ensuring continued hydrate-free well operation is much more effective than methods for eliminating a problem that has already arisen.
The technology provides for methanol injection into the formation before the end of the gas storage filling season [14,16,18]. The hydrate inhibitor is supplied into the gas flow of wells whose technological mode of operation is classified as hydrate-prone. At the stage of technology development it was found that methanol must be delivered to the bottomhole of the wells one to two months before the end of the gas injection season at the UGS.
There is a VNIIGAZ method for calculating methanol consumption which, in accordance with WFD 39-1.13-010-2000, involves calculation of the hydrate-formation temperature and indirectly reflects the influence of the gas composition [10,20]. This technique is very effective for calculating the required quantities when methanol is supplied continuously.
For periodic injection into the reservoir under changing operating conditions, it is necessary to create flexible methodologies that account for changes in the conditions, the frequency of injection, the methanol concentration, and the relationships between process parameters [4].
To develop a new method that would reliably remove the hydrate problem with the least possible negative consequences, and to define measures ensuring the smooth operation of the gas transportation system, industrial studies were carried out under different technological modes of UGS operation.
The initial data for the calculations were taken from the dispatching data of Punginskoye UGS operation. The gas compositions correspond to averages over a five-year period and are identified as Gas №1 (Cenomanian), Gas №2, and Gas №3 (Valanginian).
In underground gas storage, hydrates can form directly in the formation if injection is conducted into a cooled aquifer [4]. Hydrates accumulate directly in the bottomhole formation zone, or at a considerable distance from the well bottom, if high-temperature gas is injected at a pressure greatly exceeding the initial hydrostatic pressure [15].
The estimated dependence for determining the specific consumption of methanol injected into the gas stream to prevent hydrate formation at the "protected" point is given in [2] in terms of the following quantities: ΔW, the amount of liquid water contained in the gas (or condensate), kg/1000 m3; C2, the minimum necessary concentration of methanol in the aqueous phase required to prevent hydrate formation at the protected point, % wt.; C1, the concentration of the injected methanol (90-95% wt.); qg1, the amount of methanol contained in the feed gas, kg/1000 m3; qg2, the amount of methanol dissolving in the gas phase at an aqueous-solution concentration C2, kg/1000 m3; qc1, the amount of methanol contained in the hydrocarbon condensate of the feed gas, kg/1000 m3; qc2, the amount of methanol dissolved in the hydrocarbon condensate at a water-methanol solution concentration C2, kg/1000 m3.
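The explicit form of the dependence referenced to [2] did not survive extraction. A common way to assemble a specific consumption figure from exactly the quantities listed is the three-term balance sketched below in Python; the formula, function name, and numbers are assumptions for illustration, not the authors' equation.

```python
def methanol_specific_consumption(dW, C1, C2, qg1, qg2, qc1, qc2):
    """Specific methanol consumption, kg per 1000 m3 of gas, assuming the
    usual three-term balance: methanol needed to bring the free water to the
    inhibiting concentration C2, plus the extra methanol lost to the gas
    phase, plus the extra methanol dissolved in the hydrocarbon condensate.
    This form is an assumption based on the quantities listed in the text."""
    aqueous    = dW * C2 / (C1 - C2)   # methanol retained in the water phase
    gas_phase  = qg2 - qg1             # additional methanol carried by the gas
    condensate = qc2 - qc1             # additional methanol in the condensate
    return aqueous + gas_phase + condensate

# Illustrative (made-up) numbers: 0.5 kg water per 1000 m3, injected methanol
# at 95 % wt., required aqueous concentration 30 % wt.
q = methanol_specific_consumption(dW=0.5, C1=95.0, C2=30.0,
                                  qg1=0.0, qg2=0.8, qc1=0.0, qc2=0.05)
print(q)
```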
The resulting dependence of the methanol flow rate on pressure and temperature, as shown in [1], can be described with a high correlation coefficient by an equation of this form (for pressures of 60 to 75 kgf/cm2 and gas temperatures of 0 to 20 °C), in which p is the pressure, kgf/cm2; t is the temperature, °C; and N and M are factors that depend on the composition of the gas.
This dependence can be used to calculate the daily consumption of methanol from the gas pressure for various compositions.
The methanol consumption calculation technique suggested by IUT is based on the obtained dependence of inhibitor consumption on pressure and gas composition at a constant gas flow rate [9], where N and M are coefficients that depend on the gas composition. This dependence can be used to calculate the daily consumption of methanol directly from the pressure for different gas compositions. An example of the dependence obtained for different gases is shown in Figure 1.
Obviously, as the pressure increases [13], so does the hydrate-formation temperature, increasing the daily consumption of methanol. According to the obtained graphical dependence, the gas composition also affects methanol consumption. As the proportion of heavy hydrocarbons in the gas increases, the methanol consumption curve as a function of pressure becomes steeper, indicating that, compared with methane, methanol consumption grows more sharply with increasing pressure [11,12]. To determine the daily methanol consumption at a constant gas flow rate by the proposed method, it is necessary to know the pressure and to determine the coefficient B, which depends on the reduced specific gravity γ', a concept suggested by G. V. Ponomarev.
The concept of the reduced specific gravity is the following. The equilibrium conditions for the existence of hydrates depend on the gas composition, which can be roughly characterized by a molecular weight or specific gravity. If we calculate the sum of the partial specific gravities of the hydrate-forming components of the gas and divide it by the sum of the mole fractions of the hydrate-forming components ΣYi, the resulting value γ' characterizes the hydrate-forming ability of the gas more strictly. The analytical dependence for determining the coefficient B (0.555 < γ' ≤ 1) is based on the analysis of data obtained in UIT, where γ' is the reduced specific gravity. The proposed relationship considerably simplifies finding the coefficient B in the calculation of the daily consumption of methanol.
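A small Python sketch of these quantities may help: it computes the reduced specific gravity from a gas composition and evaluates N and M from the fitted relations quoted later in the caption of Figure 1 (N = 63.83·e^(0.017·B), M = 0.215·B^2 + 9.755·B + 37.27). The analytical dependence B(γ') itself is not recoverable from the text, so B is treated as an input here; all names are ours, not the authors'.

```python
import math

def reduced_specific_gravity(mole_fractions, specific_gravities, hydrate_formers):
    """Reduced specific gravity of the hydrate-forming part of the gas: the sum
    of partial specific gravities of the hydrate-forming components divided by
    the sum of their mole fractions, as described in the text."""
    num = sum(mole_fractions[c] * specific_gravities[c] for c in hydrate_formers)
    den = sum(mole_fractions[c] for c in hydrate_formers)
    return num / den

def coefficients_N_M(B):
    """Fitted coefficients from the relations quoted in the Figure 1 caption.
    The dependence B(gamma') is not given here, so B must be supplied."""
    N = 63.83 * math.exp(0.017 * B)
    M = 0.215 * B**2 + 9.755 * B + 37.27
    return N, M
```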
Table 1 shows the values of the coefficients for Gas №1 to №3. To determine the amount of methanol required for injection before the withdrawal period, the following dependence is suggested:
where QI is the amount of methanol for well injection; C is the methanol concentration; TI is the methanol supply frequency; and KE is the coefficient characterizing the effectiveness of the inhibitor over the period between methanol supplies.
Results and discussion
The resulting dependence of the daily methanol consumption on the gas flow is shown in Figures 2-4. We see that as the proportion of heavy hydrocarbons in the gas increases, the methanol consumption at a given gas flow increases, owing to the higher hydrate-formation temperature and, consequently, the greater minimum concentration of methanol in the water phase required to prevent hydrate formation at the protected point.
Thus, according to the analysis performed with the developed technique for modeling the hydrate-formation process under the conditions of the Punginskoye UGS, heavier gas is more dangerous: a larger amount of methanol is necessary to prevent its hydration.
1. A method of preventing hydrate formation has been developed. It has been successfully tested in the gas-injection mode of UGS operation by supplying methanol to problem wells whose operation had been complicated by hydrate formation in the wellhead and surface communications.
2. It has been found that reliable hydrate-free operation of wells and pipelines at the UGS is achieved with a sufficient content of methanol in the whole volume of gas withdrawn from the storage. The methodology for calculating the specific methanol consumption for hydrate-free operation of each well provides a certain margin in the methanol feed, since the temperature drop at the end of the loop and the appearance of produced water in the well production affect its consumption.
3. It has been found that, to calculate the time and period of methanol injection into the reservoir, it is necessary to know exactly the start of the upcoming withdrawal season. An ideal variant is methanol feed during the entire injection season in problem wells, but this requires an additional feasibility study and may lead to large reagent costs. It has been found that the positive effect of methanol injection is achieved when it starts 1-2 months before the start of the withdrawal season, i.e., treatment of the bottomhole zones should start in August-September.
Fig. 1. The dependence of the methanol flow rate on the pressure and gas composition.
The dependence of the daily methanol consumption on the gas composition can be expressed in terms of the coefficient B, which depends on the reduced specific gravity. By varying the value of the coefficient B in the original hydrate model, the dependences of the coefficients N and M on B were obtained: N = 63.83·e^(0.017·B) and M = 0.215·B^2 + 9.755·B + 37.27.
Fig. 2. The graph of the methanol consumption dependence on pressure and temperature for Gas №1.
Fig. 3. The graph of the methanol consumption dependence on pressure and temperature for Gas №2.
Fig. 4. The graph of the methanol consumption dependence on pressure and temperature for Gas №3.
Table 1. Values of the coefficients N and M.
"Environmental Science",
"Engineering"
] |
The Geometry of the Universe
In the late 1990s, observations of type Ia supernovae led to the astounding discovery that the universe is expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. We propose cosmological models that can simply and elegantly explain the cosmic acceleration via the geometric structure of the spacetime continuum, without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. In this geometry, the three fundamental physical dimensions length, time, and mass are related in a new kind of relativity. There are four conspicuous features of these models: 1) the speed of light and the gravitational constant are not constant, but vary with the evolution of the universe, 2) time has no beginning and no end; i.e., there is neither a big bang nor a big crunch singularity, 3) the spatial section of the universe is a 3-sphere, and 4) in the process of evolution, the universe experiences phases of both acceleration and deceleration. One of these models is selected and tested against current cosmological observations, and is found to fit the redshift-luminosity distance data quite well.
Introduction
In the late 1990s, observations of type Ia supernovae made by two groups, the Supernova Cosmology Project [23] and the High-z Supernova Search Team [29], indicated that the universe appears to be expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. Currently, there are three mainstream approaches to explaining the accelerating expansion of the universe: the introduction of a cosmological constant; the proposal of scalar fields; and the developments of modified gravity. The first two approaches are commonly referred to as dark energy models. In the spatially flat ΛCDM model, presently the best-fit model to the available cosmological data, dark energy accounts for nearly three-quarters of the total mass-energy of the universe [3]. These approaches all raise several theoretical difficulties, and understanding the anomalous cosmic acceleration has become one of the greatest challenges of theoretical physics. Chapter 14 of Ellis, Maartens and MacCallum [11] provides a brief overview on this issue.
In this paper we propose cosmological models that can simply and elegantly explain the accelerating universe via the geometric structure of the spacetime continuum, without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. In this geometry, the three fundamental physical dimensions length, time, and mass are related in a new kind of relativity. There are four conspicuous features of these models: The speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe. Time has no beginning and no end; i.e., there is neither a big bang nor a big crunch singularity. The spatial section of the universe is a 3-sphere, ruling out the possibility of a flat or hyperboloid geometry. In the process of evolution, the universe experiences phases of both acceleration and deceleration.
One of these models is selected and tested against current cosmological observations, and is found to fit the redshift-luminosity distance data quite well.
The paper is organized as follows: In the next section, the cosmological models are developed, with the details of the calculations presented in the Appendix. In Section 3, the dynamical evolution of the universe is determined by solving the field equation under various conditions. In Section 4, a selected model is tested against current cosmological observations.
The Stress-energy-momentum Tensor
The universe is assumed to contain both matter and radiation. The content of the universe is described in terms of a stress-energy-momentum tensor T_ab. We shall take T_ab to have the general perfect fluid form, in which u_a, ρ and P are, respectively, a time-like vector field representing the 4-velocity, the proper average mass-energy density, and the pressure as measured in the instantaneous rest frame of the cosmological fluid.
The Field Equation
In a cosmology where the speed of light c and the Newtonian gravitational constant G are assumed constant, the interaction between the curvature of spacetime at any event and the mass-energy content at that event is depicted through Einstein's field equation, R_ab - (1/2) R g_ab = (8πG/c^4) T_ab, where R_ab is the Ricci tensor and R is the curvature scalar. In a cosmology with a varying c and a varying G, one needs a new field equation for attaining consistency; this is discussed in detail in Barrow [4] and Ellis and Uzan [12].
In a geometrical theory of gravity like general relativity, mass is measured in units of length. Noting that G/c^2 is the conversion factor that translates a unit of mass into a unit of length, we assert that c and G vary in such a way that G(t)/c^2(t) must be absolutely constant with respect to the cosmic time t. Most of all, the constancy of G/c^2 makes equation (3) and the time-variations of c and G consistent with the usual meaning of conservation of mass-energy, as it will be shown that, with constancy of G/c^2, the vanishing covariant divergence of the right-hand side of (3) still expresses the conservation of mass-energy. We can make G(t)/c^2(t) = 1 by choosing proper units of mass and length. Accordingly, we obtain that in a cosmology with a varying c and a varying G, the field equation describing the interaction between the spacetime geometry and the mass-energy content is given as equation (4). Equations (5) and (6) express the conservation of proper mass-energy.
Dynamics of the Universe
In this section we determine the dynamical behavior, which is characterized by the functions a(t) and c(t) in metric (2), of the universe as described by our cosmological models. To obtain predictions for the dynamical evolution, we substitute the components of metric (2) into those of the field equation (4) and solve for a(t) and c(t). Computing the components of G_ab in terms of a(t) and c(t), and then plugging the expressions for them into those of the field equation (4), after some tedious but straightforward calculation (see Appendix, Sections A1 and A3), we arrive at the evolution equations (8) and (9) for a homogeneous and isotropic universe: for the universe composed of pressure-free dust, M ≡ 4πρ(t)a^3(t)/3, which is constant by equation (5); while for the universe composed of dust and radiation, M' ≡ 4πρ(t)a^4(t)/3, which is constant by equation (6).
There are two unknown functions, c(t) and a(t), to be determined. To solve equations (8) and (9) we need a further postulate on the relationship between c(t) and a(t). For this we argue as follows: Time has no absolute meaning. The concept of time arises from the observation that the distribution of mass-energy contained in the universe is dynamic. There is no time apart from dynamicity. In the procedure of any measurement, one needs a standard to refer to. No matter how time is parameterized, the intrinsic length, in the geometry of the spacetime continuum, of a span of time should be measured by the change in the mass-energy distribution during this period. The derivative dρ/dt of the cosmological proper density is the very quantity that manifests the dynamicity of a homogeneous universe. When the magnitude of an increment in time, dt, is to be measured in units of length, dρ/dt must be taken to be the standard to refer to. If the distribution were static, i.e., dρ/dt = 0, there would be no time. Therefore the conversion between time and length can be expressed in terms of dρ/dt, with a factor κ_0 that is constant with respect to the cosmic time t. Since the speed of light c(t) is viewed simply as a conversion factor between time and length in the geometry of the spacetime continuum, we also have the conversion dt → c(t) dt. Comparing the right-hand sides of these two conversions, we conclude that c(t) is tied to dρ/dt. Accordingly, we speculate relation (10), where κ is constant with respect to the cosmic time t. We are now ready to solve equations (8) and (9) for a(t) and c(t).
Given equations (5) and (10), equation (8) is redundant, so equation (9) is all we need to arrive at a solution. We will solve equation (9) for the universe composed of pressure-free dust and with spatially 3-sphere geometry (k=1) in detail and discuss the other cases briefly. Substituting (10) into (9), then simplifying and preparing the resulting equation (11) for integration, yields the solution (13) for a(t). In this model, a(t) is the hyper-radius of the universe at cosmic time t. The radius will get smaller and smaller as t approaches ±∞; however, it can never reach zero, and therefore time has no beginning and no end, and there is neither a big bang nor a big crunch singularity. Setting γ(t) = a(t)/2M, the universe is accelerating in the epoch when γ(t) < 7/8 and is decelerating when γ(t) > 7/8. The graph of a(t)/2M versus t/σ is displayed in Fig. 1.
Figure 1. The evolution of the universe composed of pressure-free dust and with spatially 3-sphere geometry. The hyper-radius of the universe, a(t), can never reach zero. The universe is accelerating in the epoch when γ < 7/8 and is decelerating when γ > 7/8.
From equations (10) and (13), the speed of light, as a function of the cosmic time t, can be calculated as given in (14).
Since the speed of light c, wavelength λ, and frequency ν are related by c = λν, a varying c could be interpreted in different ways. We assume that a varying c arises from a varying λ with ν kept constant. We further assume that the relation between the energy E of a photon and the wavelength λ of its associated electromagnetic wave is given by the equation E(t) = η/λ(t), where η is a constant that does not vary over cosmic time. Consequently, combining this with E = hν and c = λν gives h(t) = η/c(t); therefore, the so-called Planck's constant h actually varies with the evolution of the universe.
Following the same procedure as above, the solutions for the other 5 cases are given as follows. For a universe composed of pressure-free dust and with spatially flat geometry, we have chosen the time origin (t = 0) to be the moment when a reaches the value 2M. In this case a(t) will blow up at a finite future time t = σ. The graph of a(t)/2M versus t/σ is displayed in Fig. 2.
For a universe composed of pressure-free dust and with spatially hyperboloid geometry, solving equation (15) for a in terms of t yields a(t) explicitly. In this case a(0) = 2M and a(t) will blow up at a finite future time t = σ(2^(3/4) - 1). The graph of a(t)/2M versus t/σ is displayed in Fig. 2.
For a universe composed of dust and radiation, and with spatially 3-sphere geometry, we have chosen the time origin (t = 0) to be the moment when a achieves its maximum value 2M'. In this case a(t) ~ |t|^(-1) as t → ±∞.
For a universe composed of dust and radiation, and with spatially flat geometry, the solution is obtained in the same way. From these results, we see that a spatially flat or spatially hyperboloid geometry is not feasible for describing our universe, since in either case a(t) will blow up at a finite future time.
The Cosmological Redshift and Data Fitting
In this section we test the model for the universe composed of pressure-free dust and with spatially 3-sphere geometry against cosmological observations. Theoretical predictions of luminosity distance as a function of redshift will be compared with the observations of the type Ia supernovae contained in the Supernova Cosmology Project (SCP) Union 2.1 Compilation (http://supernova.lbl.gov/Union/).
Suppose that a photon of frequency (wavelength) ν_e (λ_e) is emitted at cosmic time t_e by an isotropic observer E with fixed spatial coordinates (ψ_E, θ_E, φ_E). Suppose this photon is observed at time t_o by another isotropic observer O at fixed co-moving coordinates. We may take O to be at the origin of our spatial coordinate system. Let ν_o (λ_o) be the frequency (wavelength) measured by this second observer. The redshift factor is defined as z = λ_o/λ_e - 1 = ν_e/ν_o - 1. Substituting the expression for a(t) in (13) and that for c(t) in (14) into this relation yields equation (17). In the SCP Union 2.1 Compilation, the luminosity distance is represented by the stretch-luminosity corrected effective B-band peak magnitude [23]. For a given γ_o, the quantity γ_e(z) in (20), as a function of the redshift factor z, is defined implicitly by equation (17).
Plugging (20) into the expression for m_{γ0,β}(z), after some tedious calculations (see Appendix, Section A4), yields the model prediction for the magnitude as a function of redshift. The best-fit parameter is determined by minimizing the chi-square quantity comparing the predicted and observed magnitudes.
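The explicit form of the model magnitude and of the minimized quantity did not survive extraction, so the sketch below assumes the standard chi-square comparison of predicted and observed effective magnitudes over the Union 2.1 points. The function names, the single-parameter model interface, and the commented usage are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chi_squared(param, z_obs, m_obs, m_err, model_magnitude):
    """Standard chi-square statistic comparing observed effective B-band peak
    magnitudes with a model prediction m(z; param); the explicit magnitude
    formula from the paper is not reproduced here, so it is passed in as a
    callable."""
    residuals = (m_obs - model_magnitude(z_obs, param)) / m_err
    return np.sum(residuals**2)

# Hypothetical usage, with the Union 2.1 columns (z, m, sigma_m) loaded into
# NumPy arrays and some single-parameter model `m_model`:
# best = minimize_scalar(chi_squared, bounds=(0.1, 10.0), method="bounded",
#                        args=(z, m, dm, m_model))
```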
Discussions
In the Friedmann cosmology [13], a homogeneous and isotropic universe must have begun in a singular state. Hawking and Penrose [15] proved that singularities are generic features of cosmological solutions, given only fairly general physical conditions. The prediction of singularities represents a breakdown of general relativity. Many people felt that the idea of singularities was repulsive and spoiled the beauty of Einstein's theory. There were therefore a number of attempts [5; 16; 17; 19] to avoid the conclusion that there had been a big bang, but these were all eventually abandoned. Negating the existence of singularities restores beauty to Einstein's theory of general relativity.
The cosmological constant Λ was introduced into the field equation of gravity by Einstein as a modification of his original theory to ensure a static universe. After Hubble's redshift observations [18] indicated that the universe is not static, the original motivation for the introduction of Λ was lost. However, Λ has been reintroduced on numerous occasions when it might be needed to reconcile theory and observations, in particular with the discovery of cosmic acceleration in the 1990s. With our models successfully explaining the accelerating universe without the introduction of Λ, the concept of a cosmological constant should be discarded again from the point of view of logical economy, as suggested by Einstein [10].
Beginning with Dirac [7] in 1937, some physicists have speculated that several so-called physical constants may actually vary [6]. Theories of a varying speed of light (VSL) have been proposed independently by Petit [25;26;27] from 1988, by Moffat [22] in 1993, and then by Barrow [4] and Albrecht and Magueijo [1] in 1999 as an alternative to cosmic inflation [2; 14; 20] for solving several cosmological puzzles such as the flatness and horizon problems (for a detailed discussion of these problems, see Section 4.1 of Weinberg [32] and Section 9.7.1 of Ellis, Maartens and MacCallum [11]; for reviews of VSL, see Magueijo [21]). In the standard big bang cosmological models, the flatness problem arises from the observation that the initial density of matter and energy in the universe must be fine-tuned to a very specific critical value to yield a flat universe. With our models asserting that the spatial section of the universe is a 3-sphere, the flatness problem disappears automatically. The horizon problem of the standard cosmology is a consequence of the big bang origin and of the deceleration in the expansion of the universe. Without a big bang origin, and with the universe accelerating in the epoch when γ(t) < 7/8, our models may thus provide a solution to the horizon problem.
Essentially, this work is a novel theory about how the magnitudes of the three fundamental physical dimensions length, time, and mass are converted into each other, or equivalently, a novel theory about how the distribution of mass-energy and the geometry of spacetime interact. The theory resolves problems in cosmology, such as those of the big bang, dark energy, and flatness, in one fell stroke, by postulating specific time dependences of c(t) and G(t) whose appropriate combinations remain constant.
Since there are three fundamental physical dimensions, any cosmological model requires two constants to describe the relationship between them. Einstein took c and G as the two constants, whereas we assert that the two constants are κ, the conversion factor between time and length, and τ, the conversion factor between mass and length. These two constants, κ and τ, together with η, the constant relating the energy of a photon to the wavelength of its associated electromagnetic wave, can be used to define natural units of measurement for the three fundamental physical dimensions, obtained by dimensional analysis. By comparing cosmological models, we refute the claim [8] that the time variation of a dimensional quantity such as the speed of light has no intrinsic physical significance. We illustrate our point as follows: in Friedmann's closed universe, which results from the constancy of the speed of light, the time span is a closed and bounded interval, from the big bang to the big crunch, while in ours the time span is an open interval, with neither beginning nor end. The two models can be discriminated by the topological structures of their time spans: the former is compact, whereas the latter is not. The mathematical definition of compactness, and the proof that a closed and bounded interval is compact, can be found in Chapter XI of Dugundji [9]. Since compactness is a topological property, it is impossible to find a one-to-one and onto continuous correspondence between the two time spans (for the proof, see Theorem 1.4, Dugundji [9], p. 224). This fact makes the two models intrinsically different.
Acknowledgements
The author would like to thank Mr. Ching-Lung Huang (黃經龍) of the Molecular BioPhotonics Laboratory (MBPL) of the Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Taiwan, for designing computer programs for searching for the best parameter in data fitting and for providing the figures used in this paper.
A1 Calculations for the Components of Γ^c_ab and R_ab
For the case of 3-sphere geometry, in the synchronous time coordinate and co-moving spatial spherical coordinates (t, ψ, θ, φ), the covariant components of the metric g_ab are specified accordingly. For a given γ_o, the quantity γ_e(z) in (27), as a function of the redshift factor z, is defined implicitly by equation (26); likewise, for a given γ_o, the quantity γ_e(z) in (32) is defined implicitly by equation (31). | 4,288.2 | 2010-07-11T00:00:00.000 | [
"Physics"
] |
Multi-attribute scientific documents retrieval and ranking model based on GBDT and LR
: Scientific documents contain a large number of mathematical expressions and texts containing mathematical semantics. Simply using mathematical expressions or text to retrieve scientific documents can hardly meet retrieval needs. The real difficulty in retrieving scientific documents is to effectively integrate mathematical expressions and related textual features. Therefore, this study proposes a multi-attribute scientific documents retrieval and ranking model based on GBDT (gradient boosting decision tree) and LR (logistic regression) by integrating the expressions and text contained in scientific documents. First, the similarities of the five attributes are calculated, including mathematical expression symbols, mathematical expression sub-forms, mathematical expression context, scientific document keywords and the frequency of mathematical expressions. Next, the GBDT model is used to discretize and reorganize the five attributes. Finally, the reorganized features are input into the LR model, and the final retrieval and ranking results of scientific documents are obtained. The experiment in this study was carried out on the NTCIR dataset. The average value of the final MAP@20 of the scientific document recall was 81.92%. The average value of the scientific document ranking nDCG@20 was 86.05%.
Introduction
Most existing search engines support text retrieval, but still have problems retrieving mathematical expressions, especially expressions without natural language annotations. While traditional search engines are losing their roles in this respect, recent research on mathematical expression retrieval has achieved relatively rich results [1−5].
Focusing on mathematical expressions in LaTeX format, Zhong et al. [6] proposed a mathematical formula retrieval algorithm based on Operator Tree. By matching multiple disjoint common subtrees with the same structure, the maximum number of sub-formulas is matched, which improves the efficiency of formula matching. Although the maximum number of matching sub-forms can improve retrieval accuracy, most sub-forms are more complicated. Therefore, the response time of real-time retrieval is approximately 20 s, which cannot meet the needs of real-time mathematical formula retrieval. To achieve faster sub-formulas retrieval, the team also proposed a strategy based on an inverted index and dynamic pruning [7], which improves the time efficiency of retrieval while ensuring that the retrieval results are still valid.
Focusing on mathematical expressions in MathML format, Schubotz et al. [8] proposed the VMEXT system, which can realize a visual tree of expressions in MathML format. It can also realize human-computer interaction, which is convenient for users to quickly find and improve the expression tree. In addition, similar or identical elements of two expressions can be visualized to calculate the similarity of expressions.
Focusing on mathematical expression images, Davila et al. [9] proposed a mathematical formula matching system. The system is mainly aimed at matching handwritten formulas on the teaching whiteboard with the formulas in course notes. First, the entire image was preprocessed, including formula search and structure correction. Then, the largest match in each image was identified by the symbolic consistent spatial alignment and similar relative sizes. Finally, each mathematical formula was divided into multiple symbol pairs. Symbol pairs are two symbols in a formula that are the nearest geometric neighbor of each other, which indicates the logical relations between them. The angle of a symbol pair is the angle between the line connecting the centers of the symbols and a horizontal line, which is helpful for judging the relationship between the two symbols. The images were sorted by the angle of the symbol pair.
With the development of deep learning, text embedding methods are widely used in natural language processing. Gao et al. [10] tried to apply the same method to formula embedding. They applied neural networks to mathematical information retrieval and proposed the "symbol2vec" model. This model was used to learn the vector representation of mathematical symbols and perform similarity calculations. Similarly, the NTFEM model [11] used an N-ary tree to convert the mathematical formula into a linear sequence. The word embedding model is used to embed the formula, and a weighted average embedding vector is obtained by using a weighting function. In mathematical formula retrieval, the BERT (bidirectional encoder representations from transformer)-based embedding model [12] is proposed to introduce more semantic information when the formula is embedded. The model uses the LaTeX format as the input and the BERT model is used to encode the formula. The index is built according to the embedded formula vector, formula id and post id from which the formula originates, and finally, the cosine similarity is used to obtain the final ranking of the formula.
In terms of fusion retrieval and ranking of mathematical expressions and scientific documents, Pathak et al. [13−15] committed to fusing expressions and related texts for retrieval. First, they proposed the MathIR system composed of three modules: "TS", "MS" and "TMS". This made scientific documents retrieval a similarity calculation of expression and text fusion rather than a simple expression search. Next, the "context-formula" pair was extracted, and the context of the formula was merged for retrieval. Finally, the modules of the system were optimized, and the formula retrieval was effectively integrated with the retrieval module for the text. Similarly, Schubotz et al. [16] regarded formulas and natural text as a single information source. The description of mathematical formula symbols was extracted from the surrounding text of the formula. These mathematical symbol descriptions were used to represent the definition of mathematical symbols. The namespace was formed as an internal data structure for mathematical information retrieval. This method can eliminate the ambiguity of mathematical symbols and better meet the retrieval needs of users. While retrieving mathematical expressions, Wang et al. [17] integrated other attributes to rank scientific documents, such as document category, types of journals to which scientific documents belong, and document citations. The sorting results were optimized by fusing these attributes of scientific documents. To better integrate mathematical expressions and text in scientific document retrieval, a weight parameter was proposed [18]. Based on formula similarity and text similarity, the proportion of text and mathematical expressions is manually adjusted.
In conclusion, current scientific document retrieval and ranking methods can be roughly divided into two types. The first type recalls by mathematical expression similarity and sorts by text similarity, or recalls by text similarity and sorts by expression similarity; regardless of which similarity is used for the final sorting, the other is weakened. The second type manually adjusts a weight to fuse expression similarity and text similarity, but it is difficult for users with little experience to choose the specific values of the parameters. To solve the above problems, this study proposes a multi-attribute retrieval and ranking model of scientific documents that combines mathematical expressions and related texts. This model is an improvement of the second type and eliminates the need to manually adjust the weights of expressions and texts.
The similarity of five attributes is calculated: mathematical expression symbols (MESY), mathematical expression sub-forms (MESF), mathematical expression context (MECT), scientific document keywords (SDKY) and the frequency of mathematical expressions in scientific documents (FOME). A gradient boosting decision tree (GBDT) and logistic regression (LR) are used for feature reorganization and calculation to obtain the final search results, which improves the rationality of the retrieval. Figure 1 shows a flow chart of the scientific documents retrieval and ranking system (solid lines denote online query flows and dotted lines denote offline index flows). The whole process consists of four parts: query preprocessing, scientific document preprocessing, multi-attribute similarity measure and scientific document retrieval and ranking.
Overview
The query preprocess module is used to process the input query. The query is a combination of mathematical expressions and text, which need to be split. The scientific document preprocessing module is used to extract mathematical expressions and related text, preliminarily decompose mathematical expression symbols and calculate the weights of related text. Then, the module interacts with the database module to store and index the information corresponding to the scientific documents to facilitate subsequent similarity calculations. The multi-attribute similarity measure module calculates the similarity of the five attributes of scientific documents. According to the different characteristics of each attribute, different similarity calculation algorithms are set up. The module interacts with the database module to store the calculated similarity. The scientific document retrieval and ranking module combines the similarity of the multiple attributes of scientific documents to fuse and calculate the attributes. Finally, the similarity between the scientific documents and the input query is obtained, and the scientific documents are ranked according to the similarity.
Similarity calculation of mathematical expression symbols (MESY)
For the retrieval of mathematical expressions, there will be problems when inputting query expressions, such as inaccurate input and incorrect input of mathematical symbols. It is necessary to retrieve each mathematical symbol one by one to improve the fault-tolerant performance of the system. Definition 1: Q_ME is the query expression, D_ME is the mathematical expression dataset drawn from the scientific documents, and T_ME is the number of mathematical expressions in the dataset. First, FDS [19] is used to normalize the mathematical expressions in various formats into a unified form by decomposing them into multiple mathematical symbols with the corresponding five attribute values called level, flag, count, ratio, and operator.
The "level" attribute represents the level of a mathematical symbol, based on its position relative to the horizontal baseline; for example, in an expression with nested superscripts, the base symbol has level 0, symbols in its first-level exponent have level 1, and symbols nested one level deeper have level 2. "Flag" represents the spatial flag bit of a symbol; Table 1 shows the values of the flag, taking x as an example. "Count" refers to the sequential position of a symbol in the mathematical expression. "Ratio" refers to the frequency of the operator in the mathematical expression. "Operator" refers to whether a mathematical symbol is an operator: if a symbol is an operator, it is marked as 1; otherwise, it is marked as 0.
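As a rough illustration of this decomposition (not the FDS procedure itself [19]), the sketch below builds attribute tuples for a pre-tokenized expression; the level and flag values, which require the structural analysis, are supplied as inputs, and the operator set is only an example.

# Sketch: attribute tuples for the symbols of a (pre-tokenized) expression.
# level/flag require the FDS structural analysis [19] and are taken as given here;
# the operator set is purely illustrative.
from collections import Counter

OPERATORS = {"+", "-", "*", "/", "=", "^"}

def symbol_attributes(tokens, levels, flags):
    counts = Counter(tokens)
    n = len(tokens)
    out = []
    for i, (sym, lvl, flg) in enumerate(zip(tokens, levels, flags), start=1):
        out.append({"symbol": sym,
                    "level": lvl,
                    "flag": flg,
                    "count": i,                       # sequential position in the expression
                    "ratio": counts[sym] / n,         # frequency of the symbol in the expression
                    "operator": int(sym in OPERATORS)})
    return out

print(symbol_attributes(["x", "^", "2", "+", "y"], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0]))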
In this way, the mathematical expression is converted into a list, which is convenient for the subsequent retrieval of expression symbols. Table 2 shows the membership functions of the five attributes [20]. According to the distribution of the values taken by each attribute over the symbols in the dataset, the balance factors in the functions are determined by curve fitting, which fixes the value of each balance factor.
After the membership calculation is completed, each symbol corresponds to a five-tuple membership-degree vector, indexed by sym and ex, where sym refers to the current mathematical symbol and ex refers to the id of the expression containing it.
Take Q_ME = "x x" and Dt_ME = "2 x y" as examples. The three mathematical symbols that are the same in the two expressions include the two occurrences of "x" together with one further shared symbol. Table 3 shows the attribute values and membership degrees after the decomposition of the three symbols.
Next, hesitant fuzzy sets [21−23] are used to calculate the membership degree of each mathematical symbol. Hesitant fuzzy sets have advantages in dealing with multi-attribute decision-making problems. The formula for calculating the similarity of expressions using hesitant fuzzy sets is shown in Eq (2).
Finally, the normalization calculation of the mathematical symbols is performed to obtain the similarity of the expressions. The specific algorithm is shown in Algorithm 1.
Definition 3
The formula [20] for calculating Symbol_Sim in Algorithm 1 is shown in Eq (2). When the distance parameter equals 1, the formula degenerates to the standard Hamming distance; when it equals 2, it degenerates to the standard Euclidean distance. In this study, the parameter is set to 2.
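A minimal sketch of such a symbol-level similarity, assuming Eq. (2) is a generalized (Hamming/Euclidean-type) distance between the five-tuple membership vectors; the exact expression used in the paper may differ, and lam = 2 reproduces the Euclidean-type case.

# Hedged sketch of a symbol-level similarity in the spirit of Eq. (2).
import numpy as np

def symbol_sim(mu_q, mu_d, lam=2):
    mu_q, mu_d = np.asarray(mu_q, float), np.asarray(mu_d, float)
    dist = (np.mean(np.abs(mu_q - mu_d) ** lam)) ** (1.0 / lam)
    return 1.0 - dist          # memberships lie in [0, 1], so the distance does too

# membership vectors for the attributes (level, flag, count, ratio, operator)
print(symbol_sim([1.0, 0.8, 0.9, 1.0, 0.0], [0.9, 0.8, 0.7, 1.0, 0.0]))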
Take the two mathematical expressions in Table 3 as an example; suppose that "x x" is the query and "2 x y" is the mathematical expression with id = 1 in the dataset. Algorithm 1 is used to calculate the similarity of these two expressions. The result of the first update is [(1, 1, 1, 1, 1), (1, 1, 1, 1, 1), (1, 1, 1, 1, 1), (1, 1, 1, 1, 1)]. The final calculated SIM = 0.1425. The mathematical expression sub-form similarity calculation refers to the retrieval of Q_ME as a whole object. Table 4 shows the membership functions corresponding to its three attributes [16], which give the membership values of the attributes length, level, and flag, respectively.
Contextual text similarity calculation of mathematical expressions (MECT)
BERT (bidirectional encoder representations from transformer) [22−24] is a pre-training language model that uses unsupervised data for pre-training and is fine-tuned on the task corpus, and it has excellent performance on natural language understanding tasks. There are two tasks in the model pre-training phase: masked language modeling and next sentence prediction. The joint training of these two tasks makes the word vectors obtained by training more accurate and comprehensive, and it can solve the polysemy problem that cannot be solved by word2vec.
This study uses mathematical expression contextual text to fine-tune BERT to achieve the similarity calculation of the contextual text. The specific algorithm is shown in Algorithm 3.
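A minimal sketch of one way to score contextual similarity with a BERT encoder (Hugging Face transformers); mean pooling plus cosine similarity is an assumption here, and the fine-tuning step described in Algorithm 3 is omitted.

# Sketch: encode two context snippets with BERT and compare them by cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state          # (1, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over tokens

q = embed("area of a circle of radius r")
d = embed("the circle's area is pi r squared")
print(torch.cosine_similarity(q, d).item())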
Similarity calculation of scientific document keywords (SDKY)
The Jaccard coefficient is used to calculate the similarity of two sets (G_A, G_B). It is expressed as the ratio of the intersection to the union of the two sets, and can effectively measure the degree of overlap between the two sets to obtain their similarity. Its definition is shown in Eq (3).
Jaccard(G_A, G_B) = |G_A ∩ G_B| / |G_A ∪ G_B| (3)
Each scientific document often centers on a specific topic. The keywords of the documents are extracted, and similarity matching with the query words can improve the accuracy of the search results. The contents of the scientific document are divided into words; by calculating the weight of the words, the 5 words with the highest weights are selected as the keywords of the scientific document. The weight calculation method is shown in Eq (4). Since a difference in text length will affect the calculated keyword similarity, this study improves the Jaccard coefficient by adding a length-difference term. The calculation of the similarity is shown in Eq (5),
where WE_DT refers to the keyword collection of the scientific document and a balance factor weights the length-difference term.
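A sketch of the keyword similarity: the standard Jaccard overlap of Eq. (3) plus a hedged length-difference correction standing in for Eq. (5); the exact penalty form and the balance factor value are illustrative assumptions, not the paper's.

# Sketch: Jaccard overlap of keyword sets with a length-difference penalty.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def keyword_sim(query_words, doc_keywords, balance=0.1):
    base = jaccard(query_words, doc_keywords)
    length_diff = abs(len(set(query_words)) - len(set(doc_keywords)))
    return max(0.0, base - balance * length_diff)   # penalize large length differences

print(keyword_sim(["matrix", "rank"], ["matrix", "rank", "determinant", "inverse", "trace"]))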
The frequency of mathematical expressions in scientific documents (FOME)
When retrieving scientific documents, the same mathematical expression appears differently in different scientific documents, and the importance and retrieval order of scientific documents are also different.
The frequency of mathematical expressions in scientific documents is the product of the frequency of a mathematical expression in the document (EF) and the inverse document frequency (EIDF), which is similar to TF-IDF. The difference is that, when text frequency is calculated, the query text must be exactly the same as the text in the document to be counted as an occurrence, whereas in the process of searching for mathematical expressions, partially identical expressions can also be counted as occurrences; for example, when Q_ME is "U IR", an expression containing it as a sub-expression also counts as an appearance. The calculation of EIDF requires the number of occurrences of Q_ME in the dataset: if Q_ME appears multiple times in different scientific documents, its importance decreases accordingly. The calculation method of the EIDF is shown in Eq (7).
where N refers to the total number of scientific documents in the dataset, and INCLUDE(exp) refers to the number of scientific documents containing exp. The specific calculation of INCLUDE(exp) is shown in Eq (8).
Finally, the calculation method of the frequency of mathematical expressions in scientific documents is shown in Eq (9): Sim_fre = EF × EIDF. (9)
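A sketch of the EF·EIDF score of Eqs. (6)-(9); the partial-match predicate and the +1 smoothing are illustrative assumptions rather than the paper's exact definitions, and the example corpus is invented.

# Sketch: expression frequency (EF) times inverse document frequency (EIDF).
import math

def matches(query_expr, expr):
    return query_expr in expr                      # crude stand-in for partial matching

def ef(query_expr, doc_exprs):
    return sum(matches(query_expr, e) for e in doc_exprs) / max(len(doc_exprs), 1)

def eidf(query_expr, corpus):                      # corpus: list of per-document expression lists
    include = sum(any(matches(query_expr, e) for e in doc) for doc in corpus)
    return math.log((len(corpus) + 1) / (include + 1))

def fome(query_expr, doc_exprs, corpus):
    return ef(query_expr, doc_exprs) * eidf(query_expr, corpus)

corpus = [["U=IR", "P=UI"], ["E=mc^2"], ["U=IR"]]
print(fome("U=IR", corpus[0], corpus))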
Multi-attribute integration of scientific documents
The LR (logistic regression) model is based on linear regression followed by a (non-linear) sigmoid mapping. It is shown as Eq (10): h_θ(x) = 1 / (1 + e^{−θ^T x}).
Here θ^T x is the input of the sigmoid, and θ and x are both matrices: θ is the linear regression parameter, T denotes the transpose of a matrix, and x is the input feature. The LR model has a simple structure and runs fast, but its learning and expressive abilities are very limited. A large amount of feature engineering is required for feature discretization and feature combination to increase the learning ability of the model. Therefore, an approach is needed for automatically discovering effective features and feature combinations and for shortening the LR feature-engineering cycle. The GBDT model can automatically discover features and carry out effective feature combinations.
GBDT (gradient boosting decision tree) [25−28] is a boosted tree model based on the CART regression tree. In the process of generating each tree, the residual of the previous tree is calculated, and the next tree is fitted on the basis of the residuals so that the residual obtained on the next tree decreases, as expressed in Eq (11). As an illustration, a sample x is evaluated by two trees and falls into different leaf nodes of the two trees. The leaf nodes of the two trees are coded: the leaf nodes to which the sample belongs are marked as 1 and the others as 0, and the leaf-node codes of the two trees are concatenated to form a seven-dimensional sample (1, 0, 0, 1, 0, 0, 0).
Each sample will go through multiple GBDT trees to recombine features. For a GBDT tree, the path from the root node to a leaf node is a combination of different features, so the leaf node uniquely represents this path. The leaf nodes are input into the LR model as discrete features for training. In the final prediction, the input sample passes through each tree of the GBDT to obtain a discrete feature (a set of feature combinations) corresponding to a certain leaf node. The feature is then passed to LR in one-hot form for linearly weighted prediction, and the final similarity SIM is obtained. Figure 3 shows the specific flow chart. For the LR model, the L2 penalty term is used, and the value of the inverse of the regularization strength is 0.05. For the GBDT model, the metric is "binary_logloss", num_leaves is 32, num_trees is 60 and the learning_rate is 0.005.
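A minimal sketch of the GBDT-to-LR cascade with LightGBM and scikit-learn, echoing the quoted hyperparameters; the feature matrix below is synthetic and only illustrates the wiring, not the paper's training data.

# Sketch: map samples to GBDT leaf indices, one-hot encode them, and train LR on top.
import numpy as np
import lightgbm as lgb
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 5))                      # five attribute similarities per candidate
y = (X @ [0.4, 0.3, 0.1, 0.1, 0.1] + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

params = {"objective": "binary", "metric": "binary_logloss",
          "num_leaves": 32, "learning_rate": 0.005, "verbose": -1}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=60)

leaves = booster.predict(X, pred_leaf=True)   # (n_samples, n_trees) leaf indices
enc = OneHotEncoder(handle_unknown="ignore")
X_leaf = enc.fit_transform(leaves)

lr = LogisticRegression(penalty="l2", C=0.05, max_iter=1000).fit(X_leaf, y)
scores = lr.predict_proba(X_leaf)[:, 1]       # final similarity used to rank documents
print(scores[:5])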
Experimental data and environment
The dataset used in the experiment is "MathTagArticles" in the NTCIR-12_MathIR_Wikipedia_Corpus, which includes 31742 scientific documents. "MathTagArticles" comprises 16 archive files (coded wpmath0000001-wpmath0000016), and each archive file contains about 2000 scientific documents. In this study, the hold-out method is used: wpmath0000001-wpmath0000008 are used for training, wpmath0000009-wpmath0000012 for validation, and wpmath0000013-wpmath0000016 for testing. Table 5 shows the experimental environment.
Relevance ratings
The evaluators are five mathematics graduate students who are familiar with mathematical expressions and scientific documents. For each set of queries, the top 10 results are selected for evaluation. The evaluation indicators are relevant, partially relevant and not relevant. Among them, relevant ones are marked as 2, partially relevant ones are marked as 1, and not relevant ones are marked as 0. The results of the same query will be marked separately by five evaluators. Different evaluators should not mark the same retrieval result too differently. For example, for the same search result, when some commenters are marked as 2, other commenters can mark 1 or 2, but cannot mark 0. So, another labeling rule is set: for the same result, the difference between the scores of different evaluators should be less than or equal to 1. If it is greater than 1, the marks are invalid.
Finally, the results of the five evaluators are summarized. The reviewer's score is converted to a comprehensive score in Table 6. Based on the principle of obedience to the majority, a total score greater than 7 is considered relevant, a total score greater than 2 is considered partially relevant, otherwise it is not relevant. In the subsequent evaluation of results, if the evaluation metrics only require relevant and not relevant, the partial relevant will default to relevant. Reciprocal rank (RR) is the reciprocal of the ranking of the first related document in the retrieved results. MRR is the average of the reciprocal rankings of multiple queries, and the calculation method is shown in Eq (13).
where rank(i) refers to the rank of the first relevant document for the i-th query. Table 8 shows the values of P@3, P@5, P@10, and MRR for the 20 queries in Table 6, and Figure 4 shows the values of P@3, P@5 and P@10 for the 20 queries in Table 7. Figure 4 shows that the P@3 of some queries can reach 100%. However, the precision of some queries is low, which is related to the fact that there are fewer scientific documents matching them in the dataset.
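For reference, precision at k and MRR can be computed from ranked binary relevance labels as in the sketch below (the relevance lists are illustrative).

# Sketch: precision@k and mean reciprocal rank over per-query ranked relevance labels.
def precision_at_k(rels, k):
    return sum(rels[:k]) / k

def mrr(all_rels):
    total = 0.0
    for rels in all_rels:
        rank = next((i + 1 for i, r in enumerate(rels) if r), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(all_rels)

runs = [[1, 0, 1, 0, 0], [0, 0, 1, 1, 0]]    # top-5 relevance labels for two queries
print(precision_at_k(runs[0], 3), mrr(runs))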
Ablation experiment
Average precision takes the position factor into account on the basis of precision and is therefore more sensitive to the ordering of results. The calculation method is shown in Eq (14),
where r refers to the total number of relevant documents and pos(i) refers to the position of the i-th relevant document in the retrieved results.
NDCG is the normalized discounted cumulative gain. The calculation method of DCG (discounted cumulative gain) is shown in Eq (15), where rel_i refers to the relevance of the i-th document. There are three levels of relevance: good, fair and bad, assigned scores of 3, 2 and 1, respectively.
In the ideal case, with the documents ordered by relevance from largest to smallest, DCG takes its maximum value, which is the IDCG, as in Eq (16), where REL refers to the ordering of the documents in the ideal state and k refers to the set of the first k documents. NDCG uses IDCG to normalize the evaluation indicator.
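A sketch of AP and nDCG in their common textbook forms; the exact DCG variant of Eqs. (14)-(16) may differ slightly, and the graded scores follow the 3/2/1 convention mentioned above.

# Sketch: average precision and nDCG@k from ranked relevance labels.
import math

def average_precision(rels):                     # rels: binary relevance in ranked order
    hits, score = 0, 0.0
    for pos, r in enumerate(rels, start=1):
        if r:
            hits += 1
            score += hits / pos
    return score / hits if hits else 0.0

def ndcg_at_k(grades, k):                        # grades: graded relevance in ranked order
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(grades[:k]))
    ideal = sum(g / math.log2(i + 2) for i, g in enumerate(sorted(grades, reverse=True)[:k]))
    return dcg / ideal if ideal else 0.0

print(average_precision([1, 0, 1, 1, 0]), ndcg_at_k([3, 1, 2, 3, 1], 5))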
The similarities of the five attributes of scientific documents are calculated separately, they are MESY, MESF, MECT, SDKY and FOME. In order to verify the role of each attribute in the experiment, an ablation experiment was carried out in this study. One of the five attributes is removed in turn, and the remaining four attributes are input into GBDT and LR for training, then five models are obtained. Experiments with these five models are compared with the original model, and the results obtained are shown in the Figure 5. In Figure 5, model A represents MESF + MECT + SDKY + FOME, model B represents MESY + MECT + SDKY + FOME, model C represents MESY + MESF + SDKY + FOME, model D represents MESY + MESF + MECT + FOME, model E represents MESY + MESF + MECT + SDKY, and the model F represents MESY + MESF + MECT + SDKY + FOME. As shown in Figure 5, the MESY attribute affects the precision of the model. There are fewer relevant results retrieved, and the less relevant results are ranked relatively higher, so the MAP and nDCG of model A will be slightly higher. MESF also affects the precision of the model, but has little effect on the ranking. The two attributes of MECT and FOME have little effect on precision, but they will affect the ranking of results. The SDKY attribute will get more relevant results and affects the ordering of the model to some extent. Figures 6 and 7 show the comparison results of the algorithm in this study with Tangent-CFT [4] and MIaS [3], MIaS system is an open-source system. The Tangent-CFT model was reproduced experimentally. Table 9 gives the average comparisons of MAP and NDCG. Tangent-CFT [4] is a mathematical expression embedding model realized by word2Vec, that can achieve precise matching of mathematical expression structure. To locate a scientific document according to a mathematical expression, the retrieval of "mathematical expression-scientific document"(scientific document pairs corresponding to mathematical expressions) is realized. MIaS [3] is an open search engine for mathematical expressions. It can also retrieve corresponding scientific and technological documents based on the similarity of mathematical expressions. The system builds an XML tree through the structure of mathematical expressions to retrieve query expressions and expressions with query expressions as sub-expressions.
Conclusions
This study proposes a multi-attribute retrieval and ranking model based on GBDT + LR to solve the problem of poor integration of mathematical expressions and relevant texts in scientific document retrieval. This method combines the five attributes MESY, MESF, MECT, SDKY and FOME. GBDT is used to reorganize the features, and LR trains the reorganized features. Finally, the similarity of the final scientific documents is obtained and sorted.
Future research is expected to complete the semantic retrieval of expression symbols based on the context of expressions. Meanwhile, in terms of semantics, it is better to effectively integrate expressions and text. When sorting the final scientific documents, the attributes of the scientific | 5,742.8 | 2022-02-10T00:00:00.000 | [
"Computer Science"
] |
Ultrafast critical ground state preparation via bang-bang protocols
The fast and faithful preparation of the ground state of quantum systems is a challenging task but crucial for several applications in the realm of quantum-based technologies. Decoherence poses a limit to the maximum time-window allowed to an experiment to faithfully achieve such desired states. This is of particular significance in critical systems, where the vanishing energy gap challenges an adiabatic ground state preparation. We show that a bang-bang protocol, consisting of a time evolution under two different values of an externally tunable parameter, allows for a high-fidelity ground state preparation in evolution times no longer than those required by the application of standard optimal control techniques, such as the chopped-random basis quantum optimization. In addition, owing to their reduced number of variables, such bang-bang protocols are very well suited to optimization purposes, reducing the increasing computational cost of other optimal control protocols. We benchmark the performance of such approach through two paradigmatic models, namely the Landau-Zener and the Lipkin-Meshkov-Glick model. Remarkably, the critical ground state of the latter model can be prepared with a high fidelity in a total evolution time that scales slower than the inverse of the vanishing energy gap.
I. INTRODUCTION
Quantum technologies have seen considerable progress in recent years [1], thanks to the unprecedented degree of isolation and manipulation capabilities achieved over individual quantum systems [2][3][4], paving the way to the development of novel technologies and furthering our fundamental understanding of quantum information processing [5]. Yet, continued development of these technologies requires fast and robust schemes to prepare and manipulate quantum states. In particular, reducing the preparation time of target quantum states would have a profound impact on several quantum technologies, embodying an area of active research [1,6]. The ability to prepare ground states of a given Hamiltonian is especially important for many reasons. On one hand, arbitrary states can be encoded as ground states of suitably arranged Hamiltonians, which is important for adiabatic quantum computation [7]. On the other hand, the ground state of quantum many-body systems is pivotal to the investigation of quantum phase transitions (QPTs) [8]. Indeed, close to the critical point of a second-order QPT, the ground states feature non-analytic behavior, and are very sensitive to variations of the underlying control parameter. This provides advantages for tasks such as quantum metrology [9][10][11]. Critical ground states of many-body systems also often possess a large degree of entanglement, making them an invaluable resource for several quantum information tasks [12][13][14][15][16][17]. Nevertheless, the preparation of a critical ground state is experimentally challenging. This stems from the extremely long time required by adiabatic ground state preparation, due to the vanishing energy gap close to the critical point of a second-order QPT [8]. Devising fast and robust protocols for the generation of critical ground states is thus an important avenue of research. Such efforts would shed further insight into the study of QPTs, such as the experimental determination of their universality class and the fundamental time constraints posed by their vanishing energy gap. Here, we will focus on the preparation of the ground state of a second-order quantum critical model, aiming to shorten the time duration of the protocol.
Currently known fast state-preparation strategies include local adiabatic protocols [18][19][20], shortcuts to adiabaticity [21][22][23] and fast quasi-adiabatic ramps [24]. These methods typically require the system to be analytically solvable or numerically treatable. In addition, further demanding control of the system, embodied for instance by additional time-dependent parameters, is often required.
In this paper, we show that even simple protocols can provide remarkable results, in some cases even outperforming algorithms as sophisticated as CRAB. We showcase this, in particular, for the task of ground state preparation close to a second-order QPT. We propose the use of a double-bang protocol, which consists of two constant evolutions under a Hamiltonian with fixed parameters, rather than a single one as considered in [50,51]. We focus on two paradigmatic models: the Landau-Zener (LZ) [52] and the Lipkin-Meshkov-Glick (LMG) one [53]. The latter describes an interacting quantum many-body system featuring a mean-field second-order QPT [54][55][56][57][58]. Furthermore, we provide strong numerical evidence in support of the optimality of double-bang protocols. Remarkably, our approaches are computationally resource-efficient owing to the small number of parameters defining the protocol. At the same time, they allow us to reach almost-unit fidelities in quite short times, compared to pulse shapes obtained via state-of-the-art QOC methods such as CRAB. As further evidence of the good performance of bang-bang protocols, we show that the time required to achieve the critical ground state of the LMG model with good fidelity scales slower than the inverse of the minimum energy gap, which is the type of scaling observed in previous analyses [40,59]. The remainder of this paper is organised as follows. In Sec. II we formulate the problem, while in Sec. III we discuss the application of optimal control techniques to the ground state preparation problem, focusing on bang-bang protocols. In Sec. IV we showcase the performance and advantages of our method through its application to the LZ and LMG models. Finally, in Sec. V we summarise our main findings and briefly discuss further avenues of investigation.
II. GROUND STATE PREPARATION AND FUNDAMENTAL QUANTUM LIMITS
Let us consider a Hamiltonian H which, without loss of generality, we can assume to depend on a single tunable and dimensionless parameter g according to the decomposition H(g) = H_0 + g H_1, where H_{0,1} are time-independent Hamiltonian operators. Given the initial and final values g_0 and g_1 of the (externally controllable) parameter, our goal is to find a time-dependent protocol g(t) such that |φ_0(g_0)⟩ evolves into |φ_0(g_1)⟩ in the shortest possible evolution time τ, where |φ_0(g)⟩ denotes the ground state of H(g). In general, the associated dynamics cannot be solved exactly, making it necessary to resort to numerical optimization techniques. We can broadly identify two distinct dynamical regimes. Given a typical energy scale ω, for evolution times such that ωτ → ∞, any continuous ramp is sufficient to achieve the target state, and the evolving state follows the instantaneous ground state of the system, as a consequence of the adiabatic theorem [60]. On the other hand, for very short evolution times ωτ ≪ 1, the evolution is far from adiabatic. In such a regime, quantum speed limits [61][62][63][64][65][66][67][68][69] provide fundamental bounds on the minimum evolution time τ required to evolve between two states under a given time-independent dynamics. Such a time is lower-bounded by a quantity proportional to the Bures angle between initial and final states, and inversely proportional to either the variance or the average energy along the trajectory. It is worth stressing here that such quantum speed limits do not provide any information on the optimal dynamics implementing the target transition, but rather give an estimate of the evolution time for a given dynamics. The task of finding the optimal Hamiltonian achieving a given evolution is a more difficult problem, sometimes referred to as the quantum brachistochrone problem [70] or minimum control time [71]. The notion of control at the quantum speed limit has attracted considerable attention [40,71,72]. In particular, it has been observed that the minimal evolution time to generate a ground state scales as τ* ∝ Δ_min^{-1}, where Δ_min denotes the minimum energy gap of the Hamiltonian during the evolution [59]. This is particularly interesting for the LMG model, where Δ_min occurs at the QPT and vanishes as Δ_min ∝ N^{-z}, with N the size of the system and z = 1/3 the dynamical critical exponent [54]. However, we will provide examples in which this does not hold, and the minimal evolution time τ* scales slower than Δ_min^{-1}, namely τ* Δ_min ∝ N^{-α}, with α > 0 a scaling exponent.
III. OPTIMAL CONTROL
To find an optimal time-dependent protocol, we define the cost function F_X as the state fidelity between output and target state for a given protocol parameterisation g_X. The output state |ψ_X(τ)⟩ corresponds to a dynamics with pulse shape g_X, generated by the time-ordered evolution under H(g_X(t)), and F_X = |⟨ψ_X(τ)|φ_0(g_1)⟩|² is the state fidelity. Here, |φ_0(g)⟩ is the ground state of H(g), so that |φ_0(g_0)⟩ and |φ_0(g_1)⟩ are the initial and target states, respectively, while T is the Dyson time-ordering operator. Numerical optimization is used to maximise F_X with respect to X. The different methods put forward to achieve this goal differ in how the function g_X is parameterised, that is, in the choice of ansatz being considered. Common choices include CRAB [39,40], local adiabatic ramps [18,19] and bang-bang protocols [50,51].
Here we focus on bang-bang protocols, and in particular double-bang protocols, benchmarking our results against those obtained via CRAB.
We also constrain the magnitude of the interaction g_X(t), imposing |g_X(t)| ≤ g_max for all t. This ensures that the optimized protocols only require finite energy to be implemented, and ensures the existence of a maximum, i.e. non-zero, evolution time. We refer to App. A for the details of the employed optimization procedures.
A. Bang-bang protocols
Bang-bang protocols involve a piece-wise constant function built from indicator functions, where χ_I(t) = 1 for t ∈ I and χ_I(t) = 0 otherwise. Here, t_1 = 0 and the final switching time equal to τ are the fixed initial and final evolution times, respectively, and X ≡ (g_1, ..., t_2, ...) collects the optimization parameters, namely the values of g on each interval and the intermediate switching times (with the added constraints t_{i−1} ≤ t_i ≤ t_{i+1}). Note how bang-bang protocols involving one additional bang include as a subset the bang-bang protocols with fewer bangs.
In particular, the double-bang protocols we will use take the value g_A on [0, t_B) and g_B on [t_B, τ], where X ≡ (g_A, g_B, t_B). When clear from the context, we will omit the explicit functional dependence of g_DB on its parameters, writing g^{X,(0,τ)}_DB ≡ g_DB. In double-bang protocols, the control parameter g(t) is thus instantaneously changed from g_0 to g_A at the beginning of the protocol, then suddenly quenched to take the value g_B at some time t_B, and finally changed into g_1 at the end of the evolution [73]. An example of a double-bang protocol is given in Fig. 1. It is worth stressing that our use of the term bang-bang differs from the way it is used in the context of NMR, where it refers to a technique to avoid environmental interactions [74]. The piecewise-constant nature of the bang-bang protocols allows one to simplify the time-evolution operators, which can be written as products of exponentials of time-independent Hamiltonians. This makes simulating the associated dynamics computationally easier, compared to simulating the evolution of a state through a generic time-dependent dynamics, as required for instance by CRAB or Krotov protocols.
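A minimal sketch of a double-bang optimization for the LZ example of Sec. IV A, assuming ħ = ω = 1 and the bound |g| ≤ 10 quoted later; the optimizer settings are illustrative and not necessarily those of Appendix A.

# Sketch: evolve under two constant Hamiltonians and optimize (g_A, g_B, t_B) with Powell.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
H = lambda g: sx + g * sz                              # H_LZ(g) with omega = 1

def ground_state(g):
    vals, vecs = np.linalg.eigh(H(g))
    return vecs[:, 0]                                  # eigenvector of the lowest eigenvalue

psi0, target, tau, gmax = ground_state(-5.0), ground_state(0.0), 1.0, 10.0

def infidelity(x):
    gA, gB, tB = x
    U = expm(-1j * H(gB) * (tau - tB)) @ expm(-1j * H(gA) * tB)
    return 1.0 - abs(target.conj() @ (U @ psi0)) ** 2

res = minimize(infidelity, x0=[1.0, -1.0, 0.5 * tau], method="Powell",
               bounds=[(-gmax, gmax), (-gmax, gmax), (0.0, tau)])
print("fidelity:", 1.0 - res.fun, "parameters:", res.x)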
B. Chopped-random basis quantum optimization (CRAB)
In order to benchmark our results and highlight the advantages offered by the bang-bang protocols, we compare them with the results obtained via CRAB. This method uses a time-dependent pulse shape written as a modulation of a linear ramp connecting initial and final parameter values. The variation is written in terms of trigonometric functions with randomly chosen frequencies; more precisely, the ansatz [39][40][41][42] involves a truncated Fourier correction of the form Σ_{n=1}^{N_c} (x_n cos(ω_n t) + y_n sin(ω_n t)), where:
• g_Lin(t) ≡ g_0 + (g_1 − g_0) t/τ is the linear ramp connecting g_0 and g_1 in a total time τ.
• The integer N_c is the total number of frequencies in the ansatz. Its value is set before the start of the evolution, together with the total evolution time τ.
• The frequencies ω_n are uniformly sampled around the principal harmonics, ω_n = 2πnω_0(1 + ξ_n), with ξ_n ∈ [−1/2, 1/2] independent uniform random numbers. The use of random frequencies implies that the functional basis being used is not constrained to be orthogonal, a feature that was found to sometimes enhance the performance of the search algorithm [39,40].
• The function b(t) is used to normalise the CRAB correction, ensuring g(0) = g_0 and g(τ) = g_1. A possible choice for this is b(t) = ct(t − τ) for some constant c > 0.
The optimization is run on the 2N_c parameters X ≡ (x_1, ..., x_{N_c}, y_1, ..., y_{N_c}). Whereas g_0, g_1, t_0, t_1 are set by the problem, the values of ω_n (equivalently, ξ_n) are chosen empirically (often randomly) before the evolution starts. The optimization algorithm is often further run for different sets of frequencies ω_n, keeping only the best result.
Notice that while the most general formulation of CRAB in principle encompasses a large class of parameterisations [40], which include bang-bang protocols as a special case, in this work we refer to the most common CRAB methods based on a truncated random Fourier basis.
To compute the evolution of a state through a CRAB protocol, we need to numerically simulate the dynamics through the time-dependent Hamiltonian. This is in general not as efficient as computing the evolution through piecewise-constant protocols. Notice that the dynamics must be simulated a large number of times while looking for the optimal protocol, which builds up to a significant difference in computational times, as illustrated in the example addressed in Sec. IV A.
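A sketch of a CRAB-style pulse, assuming an additive correction g(t) = g_Lin(t) + b(t) Σ_n [x_n cos(ω_n t) + y_n sin(ω_n t)] with b(t) = t(t − τ); the exact way the correction is combined with the ramp in the paper may differ in detail.

# Sketch: evaluate a CRAB-style pulse with randomized frequencies.
import numpy as np

def crab_pulse(t, tau, g0, g1, x, y, xi, omega0=1.0):
    t = np.asarray(t, float)
    n = np.arange(1, len(x) + 1)
    omegas = 2 * np.pi * n * omega0 * (1 + xi)            # randomized principal harmonics
    g_lin = g0 + (g1 - g0) * t / tau                      # linear ramp between endpoints
    corr = sum(xn * np.cos(w * t) + yn * np.sin(w * t) for xn, yn, w in zip(x, y, omegas))
    b = t * (t - tau)                                     # vanishes at t = 0 and t = tau
    return g_lin + b * corr

rng = np.random.default_rng(1)
Nc, tau = 4, 1.0
xi = rng.uniform(-0.5, 0.5, Nc)
print(crab_pulse(np.linspace(0, tau, 5), tau, -5.0, 0.0,
                 rng.normal(size=Nc), rng.normal(size=Nc), xi))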
IV. APPLICATIONS
We here discuss the effectiveness of bang-bang protocols for generating the ground state at the critical point of the LZ and LMG models, comparing them in particular with the results achieved using CRAB protocols. In [75] we make available the data as well as all the corresponding parameter values employed to generate the results.
A. Landau-Zener model
The LZ Hamiltonian is H_LZ(g) = ωσ_x + gωσ_z, with H_0 ≡ ωσ_x, H_1 ≡ ωσ_z, and σ_k the k-Pauli matrix (k = x, y, z). Without loss of generality, we set the initial state to be the ground state of H_LZ(−5), and use the ground state of H_LZ(0), that is, the ground state at the avoided crossing, as a target. The initial Hamiltonian H_LZ(−5) is an approximation of the asymptotic one H_LZ(−∞) ∼ −σ_z. This approximation is sensible in this context, as the state fidelity between the ground states of H_LZ(−5) and H_LZ(−∞) is ∼ 0.99. We optimize over double-bang protocols for different evolution times. Our goal is to find simple protocols achieving the transition between initial and target ground states in the shortest possible time. We thus scan different values of the evolution time τ, optimizing the protocol for each chosen value. As shown in [76], depending on the imposed constraints and the time τ, the control landscape shows a rich structure. We test both the bang-bang and CRAB protocols with different numerical optimization algorithms, and find that the double-bang protocols achieve better results in shorter times τ, while requiring significantly less computational time. Studying the optimal protocols at several different times allows us to pinpoint the minimum value of τ required to reach the target state with our protocol, with a given fidelity. We show in Fig. 1 examples of such optimized bang-bang and CRAB protocols (based on N_c = 4 and 10 frequencies). The first point to appreciate is the difference in the number of parameters that need to be optimized: while double-bang requires the management of only 3 parameters, CRAB with N_c = 10 frequencies needs 20 coefficients (in addition to the frequencies in the optimization). This can be quite demanding for numerical optimization toolboxes, with differences in optimization times going from the order of hours for CRAB to seconds or minutes for double-bang. A second point of note is that, intuitively, the search space grows with N_c, thus allowing CRAB to effectively encompass double-bang protocols. However, this would also make the associated optimization task demanding enough to be practically unfeasible.
FIG. 2. Representation of the dynamics corresponding to double-bang (red) and CRAB (orange) protocols, optimized to transport the ground state of an LZ model from H = ωσ_x + g_0 ωσ_z with g_0 = −5 to H = ωσ_x with g_1 = 0. Note that σ_x|±⟩ = ±|±⟩, while |1⟩ and |0⟩ (|R⟩ and |L⟩) are the eigenstates of σ_z (σ_y) with eigenvalues +1 and −1, respectively. The total evolution time is ωτ = 1, and the CRAB protocol shown has N_c = 4 frequencies. For such an evolution time, both protocols reach the target state up to numerical precision.
In Fig. 2 we give a representation of the state evolution on the Bloch sphere under the protocols addressed here, while in Fig. 3 we report the fidelities obtained optimizing double-bang and CRAB protocols to achieve the ground state at the avoided crossing. We find that double-bang, despite its simplicity, realises the target transition with good fidelity faster than CRAB, achieving fidelities F > 1 − 10^{−10} in a time ωτ* ≈ 0.8, whereas CRAB requires ωτ* ≈ 0.9 to reach similar fidelities. We find that increasing the number of frequencies N_c in CRAB does not bring about significant improvements, while making the optimization considerably more computationally demanding.
To test further the minimum control time, we also performed the optimization with different protocols. In particular, we tested a variation of CRAB in which the initial and final values of the protocol are also included in the optimization, as well as triple-bang protocols. We find that both such approaches achieve F > 1 − 10^{−10} at a shorter time ωτ* ≈ 0.76. This suggests that the sub-optimality of CRAB for this particular case might be partly due to the fixed initial and final parameter values and the inherent analyticity of the ansatz.
B. Lipkin-Meshkov-Glick model
The LMG model [53], originally introduced in the context of nuclear physics, describes a fully long-range interaction of N spin-1/2 particles subjected to a transverse magnetic field. Thanks to its experimental realisation with cold atoms [77] and trapped ions [78], the model has gained renewed attention [79][80][81][82][83], and has served as a test bed to study several aspects of quantum critical systems [84][85][86][87][88][89][90]. The model is described by a Hamiltonian with S_k = (1/2) Σ_i σ^i_k the k = x, y, z collective angular momentum operators. The model exhibits a second-order mean-field QPT at a critical value g_c = 1 [54][55][56][57][58] and belongs to the same universality class as the quantum Rabi [91,92] and Dicke [93] models. We focus on the task of driving the ground state of H_LMG(g_0 = 0) to the ground state at the critical point, H_LMG(g_c = 1). See also Ref. [94] for a similar task using a variational quantum-classical simulation [95]. As shown in Fig. 4, in line with the results achieved for the LZ model, the double-bang protocols achieve the target transition with high fidelity faster than CRAB, and with scaling behavior better than the expected speed-limit scaling τ* ∼ Δ_min^{-1}. More precisely, with double-bang protocols we find fidelities F ≳ 0.999 for very short evolution times ωτ ∼ 1. While F ≳ 0.99 for ωτ = 0.75 with a double-bang protocol, CRAB with N_c = 10 frequencies only achieves F ≈ 0.9 for the same τ. Appendix B reports further details and results of the performance corresponding to the use of double-bang protocols.
FIG. 3. Optimization results for generating the ground state at the avoided crossing of an LZ model [75]. We give the optimized fidelity F when using both double-bang and CRAB protocols for different total evolution times ωτ. In each optimization, we constrain the available energy by imposing |g(t)| ≤ 10 ∀t. Each point gives the fidelity obtained optimizing a double-bang (blue circles) or CRAB (orange crosses and green triangles) protocol to evolve the ground state of H_LZ(−5) to the ground state of H_LZ(0). The shaded region marks results for which numerical precision starts being an important factor, and additional care must be taken to maintain the required level of accuracy while simulating the state. All the points shown in the figure correspond to F > 1 − 10^{−14}. Optimizations with up to N_c = 10 CRAB frequencies do not provide a significant improvement in the fidelity.
Increasing the system size leads to a closing of the energy gap at the critical point according to Δ_min ∼ N^{-z} with z = 1/3 [54]. Hence, larger systems exhibit a smaller gap, which translates into longer evolution times to faithfully prepare the ground state at g_c. In Fig. 5 we plot the results upon optimizing double-bang protocols for different system sizes. Without loss of generality, we choose a constraint |g(t)| ≤ g_max = 1.7. As argued in Ref. [40], an optimized protocol will only be able to find F ≈ 1 for protocols of duration τ ≥ τ* ∝ Δ_min^{-1}. Since the energy gap scales as N^{-z}, it follows that τ* ∼ N^z. Hence, τ* Δ_min = O(1) should remain constant when the protocol operates at the quantum speed limit. We find that the double-bang approach allows one to prepare critical ground states with fidelities F ≳ 0.999 in a time τ* Δ_min ∝ N^{-α} with α > 0. We obtain an estimate of τ* in two different ways: first, as the time at which the fidelity surpasses F = 0.998 and, second, as the time at which the kink displayed in Fig. 5 in the fidelity is reached. Both criteria for τ* lead to the same scaling, as shown in Fig. 6, where we find τ* Δ_min ∝ N^{-α} with α = 0.21(1).
We also study the dependence of the minimal evolution time on the energy constraints imposed on the protocol. As shown in Fig. 7, increasing the allowed energy decreases the minimum control time. Another interesting observed phenomenon is the existence of a threshold, at around F ∼ 0.999, above which it is harder to push the fidelity. We find that the maximum fidelity, for both bang-bang and CRAB protocols, increases rapidly at first, but then hits a threshold, at which the increase is very slow with τ. Moreover, this threshold seems to be unaffected by the allowed energy, suggesting that it cannot be avoided by simply pumping more energy into the system, being instead related to the constraints inherent to the model under consideration. This same behavior can be seen also in Figs. 4 and 5.
Our findings suggest the optimality of double-bang protocols for this task. Even allowing for more complex protocols, we never find better fidelities than those achieved using the simple double-bang. More precisely, we analyzed bang-bang protocols involving three and four bangs [cf. Eq. (4)], finding no improvement with respect to the performance of double-bang protocols. Indeed, it appears that the optimal protocols use only two distinct values of the parameter (as opposed to the three values allowed for by triple-bang protocols). This strongly suggests the optimality of a double bang for this task. As in the LZ case, this hints at a possible explanation of the sub-optimality of CRAB, which is constrained to use fixed initial and final values of g(t). Optimal paths that involve a sudden quench at the beginning and/or end of the protocol are hardly attainable with a continuous CRAB with a finite number of frequencies. As further evidence in this direction, we considered a variation of CRAB in which the endpoints g_0, g_1 are also optimized. Consistently with our conjecture, this improves the results, pushing the minimal time for N = 50 spins to ωτ* ≈ 1. As can be seen in Fig. 4, the ωτ* obtained with optimized endpoints lies between the ωτ* achieved with double-bang and that of CRAB with fixed initial and final points.
V. CONCLUSIONS
We have shown that simple double-bang protocols can be employed for fast and faithful ground state preparation. In particular, we have explicitly addressed the paradigmatic LZ and LMG models, the latter to illustrate the possibility of reliably preparing a critical ground state. In these models, optimized double-bang protocols can perform better than well-established optimal control techniques, such as CRAB. Owing to their nature, these double-bang protocols are very well suited for optimization purposes, offering a large computational advantage with respect to other optimal control methods.
In the LMG model, double-bang protocols allow the preparation of the ground state at the critical point in a time that scales slower than the inverse of the energy gap at the QPT. Other quantum critical models can be investigated following similar routes. Although distinct optimal control techniques may reach results similar to those reported here under a double-bang scheme, the often large number of variables to be maximised makes these protocols very difficult to optimize, thus hindering this key observation.
Our results motivate further theoretical studies in the realm of quantum speed limits in many-body systems. It is worth stressing that our double-bang protocol can be readily implemented in different experimental setups, allowing for the fast preparation of interesting quantum states, such as highly entangled states of a large number of ions [51].
Appendix A: Implementation details
The optimizations reported here have been carried out in Python, using the algorithms provided by the SciPy scientific library [96]. We used the Nelder-Mead [97] and Powell [98] optimization methods, which were found to give the best performances. Nelder-Mead, in its adaptive variant [99], is found to give better results when using the CRAB protocol, while Powell gives better results when optimizing bang-bang protocols. In each plot we report the fidelity corresponding to a saturated double-bang protocol, in which the interaction strength is g_max for times ranging from 0 to ωτ_1, and −g_max between ωτ_1 and ωτ. All plots use the same color scale, with dark blue corresponding to values close to zero and bright red to fidelities F > 0.99. The dashed vertical green lines are only used to mark the values ωτ = 0.5, 1.0 and 1.5. For each total evolution time ωτ and value of g_max, we report the corresponding fidelity. The optimal value of the fidelity is achieved for all times at values of g_max between 0.5 and 0.9. Recall that the QPT takes place at g = 1.
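As an illustration of this workflow, the following minimal Python sketch optimizes a double-bang protocol with SciPy's Powell method. The two-level Hamiltonian, the initial value g_start, and the target value g_target are placeholders chosen for illustration, not the LZ or LMG Hamiltonians of the paper; only the structure of the optimization (piecewise-constant bangs under a constraint |g(t)| ≤ g_max) follows the text.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(g, delta=1.0):
    # Hypothetical two-level H(g); replace with the model of interest (LZ, LMG, ...).
    return -0.5 * (delta * sx + g * sz)

def ground_state(g):
    _, vecs = np.linalg.eigh(hamiltonian(g))
    return vecs[:, 0]                                  # eigh sorts eigenvalues in ascending order

def infidelity(params, tau, g_max, g_start=-2.0, g_target=0.0):
    t1_frac, g1, g2 = params
    g1, g2 = np.clip([g1, g2], -g_max, g_max)          # enforce |g(t)| <= g_max
    t1 = np.clip(t1_frac, 0.0, 1.0) * tau
    psi = ground_state(g_start)                        # start in the initial ground state
    psi = expm(-1j * hamiltonian(g1) * t1) @ psi       # first bang
    psi = expm(-1j * hamiltonian(g2) * (tau - t1)) @ psi   # second bang
    return 1.0 - abs(np.vdot(ground_state(g_target), psi)) ** 2

tau, g_max = 1.0, 1.7
res = minimize(infidelity, x0=[0.5, g_max, -g_max], args=(tau, g_max), method="Powell")
print("best fidelity:", 1.0 - res.fun, "optimal parameters:", res.x)
```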
Appendix B: Saturated-boundary double-bang protocols
In the phase in which the optimal fidelity increases quickly, before the saturation point, the optimal double-bang protocols are found to be of the following form: g(t) = g_max for t ∈ [0, τ_1], for some threshold time τ_1, and g(t) = −g_max for t ∈ [τ_1, τ], with τ the total evolution time. We analyse this further in Fig. 8, where, for different energy constraints g_max, we show the fidelities of the different possible saturated double-bang protocols, varying ωτ and τ_1/τ to explore the different possible shapes. We find that the saturation threshold observed in Figs. 4, 5 and 7 corresponds to a marked change in the behavior of the fidelity. Although not explicitly shown, we analyse the scaling of the optimal time ωτ* as a function of the energy constraint g_max, which is found to follow ωτ* = 1.819 · g_max^−0.559, where the values are determined via a numerical fit. For completeness, we also give in Fig. 9 the fidelities obtained using a constant protocol with g(t) fixed at the g_max value. As expected, in this simple model it is not possible to exploit the available energy to speed up the transition, and fidelities F > 0.99 are only possible for small energies, with times always larger than those obtainable using the double-bang.
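A minimal sketch of how such a power-law dependence ωτ* = a · g_max^b can be extracted with a numerical fit is given below; the (g_max, ωτ*) data points are purely illustrative placeholders, not values obtained in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (g_max, omega*tau_star) pairs, for illustration only.
g_max = np.array([0.5, 0.7, 1.0, 1.3, 1.7])
omega_tau_star = np.array([2.68, 2.22, 1.82, 1.57, 1.35])

def power_law(g, a, b):
    return a * g ** b

(a, b), _ = curve_fit(power_law, g_max, omega_tau_star, p0=[1.8, -0.5])
print(f"omega*tau_star ~ {a:.3f} * g_max^({b:.3f})")   # the text reports 1.819 * g_max^(-0.559)
```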
AUTOMATIC CROWD ANALYSIS FROM VERY HIGH RESOLUTION SATELLITE IMAGES
Abstract. Automatic detection of people crowds from images has recently become a very important research field, since it can provide crucial information, especially for police departments and crisis management teams. Due to the importance of the topic, many researchers have tried to solve this problem using street cameras. However, these cameras cannot be used to monitor very large outdoor public events. In order to bring a solution to the problem, herein we propose a novel approach to detect crowds automatically from remotely sensed images, and especially from very high resolution satellite images. To do so, we use a local feature based probabilistic framework. We extract local features from the color components of the input image. In order to eliminate redundant local features coming from other objects in the given scene, we apply a feature selection method. For feature selection purposes, we benefit from three different types of information: the digital elevation model (DEM) of the region, which is automatically generated using stereo satellite images; possible street segments, which are obtained by segmentation; and shadow information. After eliminating redundant local features, the remaining features are used to detect individual persons. Those local feature coordinates are also taken as observations of the probability density function (pdf) of the crowds to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding pdf, which gives us information about dense crowd and people locations. We test our algorithm using Worldview-2 satellite images over the cities of Cairo and Munich. Besides, we also provide test results on airborne images for comparison of the detection accuracy. Our experimental results indicate the possible usage of the proposed approach in real-life mass events.
INTRODUCTION
Recently, automatic detection of people and crowds from images has gained high importance, since it can provide very crucial information to police departments and crisis management teams. In particular, detection of very dense crowds might help to prevent possible accidents or unpleasant situations. Due to their limited coverage area, street or indoor cameras are not sufficient for monitoring big events. In addition, it is not always possible to find close-range cameras in every place where an event occurs.
Due to the importance of the topic, many researchers have tried to monitor the behavior of people using street or indoor cameras, which are also known as close-range cameras. However, most of the previous studies aimed to detect boundaries of large groups and to extract information about them. The early studies in this field were developed from closed-circuit television images (Davies et al., 1995), (Regazzoni and Tesei, 1994), (Regazzoni and Tesei, 1996). Unfortunately, these cameras can only monitor a few square meters in indoor regions, and it is not possible to adapt those algorithms to street or airborne cameras since the human face and body contours will not appear as clearly as in close-range indoor camera images due to the resolution and scale differences. In order to be able to monitor bigger events, researchers tried to develop algorithms which can work on outdoor camera images or video streams. Arandjelovic (Arandjelovic, Sep. 2008) developed a local interest point extraction based crowd detection method to classify single terrestrial images into crowd and non-crowd regions. They observed that dense crowds produce a high number of interest points. Therefore, they used the density of SIFT features for classification. After generating crowd and non-crowd training sets, they used SVM based classification to detect crowds. They obtained scale invariant and good results on terrestrial images. Unfortunately, these images do not enable monitoring large events, and different crowd samples should be detected beforehand to train the classifier. Ge and Collins (Ge and Collins, 2009) proposed a Bayesian marked point process to detect and count people in single images. They used football match images, and also street camera images, for testing their algorithm. It requires clear detection of body boundaries, which is not possible in airborne images. In another study, Ge and Collins (Ge and Collins, 2010) used multiple close-range images which are taken at the same time from different viewing angles. They used three-dimensional heights of the objects to detect people on streets. Unfortunately, it is not always possible to obtain such multi-view close-range images for the street where an event occurs. Chao et al. (Lin et al., Nov. 2001) wanted to obtain quantitative measures about crowds using single images. They used the Haar wavelet transform to detect head-like contours, and then, using an SVM, they classified detected contours as head or non-head regions. They provided quantitative measures about the number of people in a crowd and the sizes of crowds. Although the results are promising, this method requires clear detection of human head contours and training of the classifier. Unfortunately, street cameras also have a limited coverage area to monitor large outdoor events. In addition, in most cases it is not possible to obtain close-range street images or video streams in the place where an event occurs. Therefore, in order to monitor the behavior of large groups of people in very big outdoor events, the best way is to use airborne images, which began to give more information to researchers with the development of sensor technology. Since most of the previous approaches in this field needed clear detection of face or body features, curves, or boundaries to detect people and crowd boundaries, which is not possible in airborne images, new approaches are needed to extract information from these images. Hinz et al.
(Hinz, 2009) registered airborne image sequences to estimate the density and motion of people in crowded regions. For this purpose, first a training background segment is selected manually to classify the image into foreground and background pixels. They used the ratio of background pixels to foreground pixels in a neighborhood to plot a density map. By observing the change of the density map in the sequence, they estimated the motion of people. Unfortunately, their approach did not provide quantitative measures about crowds. In a following study (Burkert et al., Sep. 2010), they used the previous approach to detect individuals. Positions of detected people are linked with graphs. They used these graphs for understanding the behavior of people.
In order to bring a fully automatic solution to the problem, we propose a novel framework to detect people from remotely sensed images. One of the best solutions to monitor large mass events is to use airborne sensors, which can provide images with approximately 0.3 m spatial resolution. In previous studies (Sirmacek and Reinartz, 2011a) and (Sirmacek and Reinartz, 2011b), we used airborne images to monitor mass events. In the first study (Sirmacek and Reinartz, 2011a), we proposed a novel method to detect very dense crowd regions based on local feature extraction. Besides detecting dense crowds, we also estimated the number of people and people densities in crowd regions. In the following study (Sirmacek and Reinartz, 2011b), by applying a background control, individual persons are also detected in airborne images. Moreover, in a given airborne image sequence, detected people are tracked using a Kalman filtering approach. Although airborne images are useful to monitor large events, unfortunately sometimes flying over a mass event might not be allowed, or it might be an expensive solution. Therefore, detecting and monitoring crowds from satellite images can provide crucial information to control large mass events. As sensor technology develops, new satellites can provide images with higher spatial resolutions. With those new satellite sensors, it became possible to notice human crowds, and even individual persons, in satellite images. Therefore, herein we propose a novel approach to detect crowds automatically from very high resolution satellite images. Although the resolution of satellite images is still not enough to see each person with sharp contours, we can still notice a slight change of intensity and color components at the place where a person exists. Therefore, the proposed algorithm is based on local features which are extracted from the intensity and color bands of the satellite image. In order to eliminate redundant local features which are generated by other objects or texture on building rooftops, we apply a feature selection method which consists of three steps: a street classification approach, elimination of high objects on streets using shadow information, and use of the digital elevation model (DEM) of the region, which is automatically generated using stereo satellite images, to eliminate buildings. After applying feature selection, using the selected local features as observations, we generate a probability density function (pdf). The obtained pdf helps us to detect crowded regions, and also some of the individual people, automatically. We test our algorithm using Worldview-2 satellite images which are taken over the cities of Cairo and Munich. Our experimental results indicate the possible usage of the proposed approach in real-life mass events and to provide a rough estimation of the location and size of crowds from satellite data. Next, we introduce the steps of the approach in detail.
LOCAL FEATURE EXTRACTION
In order to illustrate the algorithm steps, we pick the Munich1 image from our dataset. In Fig. 1.(a), we represent the original Munich1 panchromatic WorldView-2 satellite test image, and in Fig. 1.(b), we represent a subpart of this image in order to give information about the real resolution. As can be seen there, satellite image resolutions do not enable seeing each single person with sharp details. On the contrary, each person is represented by two or three mixed pixels, and sometimes additionally two or three mixed shadow pixels. All those pixels coming from a human appearance make a change of intensity components at the place where the person exists, which can be detected with a suitable feature extraction method. Therefore, our crowd and people detection method depends on local features extracted from the input image. For local feature extraction, we use features from accelerated segment test (FAST). The FAST feature extraction method was especially developed for corner detection purposes by Rosten et al. (Rosten et al., Nov. 2010); however, it also gives high responses on small regions which are significantly different from surrounding pixels. The method depends on wedge-model-style corner detection and machine learning techniques. For each feature candidate pixel, its 16 neighbors on a surrounding circle are checked. If there exist nine contiguous pixels among these neighbors that are all significantly brighter or all significantly darker than the candidate, the candidate pixel is labeled as a feature location. In the FAST method, these tests are done using machine learning techniques to speed up the operation. For a detailed explanation of the FAST feature extraction method please see (Rosten et al., Nov. 2010).
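As an illustration only, FAST features can be extracted with OpenCV as in the sketch below; the file name and detector threshold are hypothetical and not taken from the paper.

```python
import cv2
import numpy as np

image = cv2.imread("munich1_pan.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical input file

# FAST detector: a pixel is a feature if a contiguous arc of circle pixels differs enough from it.
fast = cv2.FastFeatureDetector_create(threshold=15, nonmaxSuppression=True)
keypoints = fast.detect(image, None)

coords = np.array([kp.pt for kp in keypoints])   # (x_i, y_i) observations used in the next steps
print(f"{len(coords)} FAST features detected")
```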
We denote by (x_i, y_i), i ∈ [1, 2, ..., K_i], the FAST local features which are extracted from the input image. Here, K_i indicates the maximum number of features extracted from the panchromatic band of the input image. We represent the locations of the detected local features for the Munich1 test image in Fig. 2.(b). As can be seen in this image, we have extracted local features on the street at places where each individual person exists. Unfortunately, many redundant features are also detected, generally on building rooftops and corners. For the detection of people and crowds, local features coming from other objects should first be eliminated. For this purpose, we apply a feature selection method, which we present in the next section in detail.
FEATURE SELECTION
For eliminating redundant features coming from building rooftop textures or corners of other objects in the scene, we use three masks as follows. The first mask (B1(x, y)) is obtained by street segmentation using a training street patch which is selected by the user. The second mask (B2(x, y)) is generated using shadow information, in order to remove high objects which appear on the detected street network. Finally, the third mask (B3(x, y)) is obtained using height information from the DEM.
For street segmentation, we first choose a 20×20 pixel training patch (t(x, y)) from the input image. We benefit from normalized cross correlation to extract possible road segments. The normalized cross correlation between the training patch and the input image is computed as γ(u, v) = Σ_{x,y} [t(x, y) − t̄][g(x + u, y + v) − ḡ_{u,v}] / ( Σ_{x,y} [t(x, y) − t̄]² Σ_{x,y} [g(x + u, y + v) − ḡ_{u,v}]² )^{1/2}. Here, t̄ represents the mean of the intensity values in the template patch, and ḡ_{u,v} represents the mean of the input image intensity values which lie under the template image in the correlation operation. In the normalized cross correlation result γ(u, v), road segment pixels appear highlighted due to their high similarity to the training patch. By applying Otsu's automatic thresholding algorithm (Otsu, 2009) to the normalized cross correlation result, we obtain the road-like segments as in Fig. 3.(a). This binary image is taken as the first mask (B1(x, y)), which is going to be used for feature selection.
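A minimal sketch of this street-segmentation step, using scikit-image for the normalized cross correlation and Otsu thresholding, is given below; the input file name and the location of the user-selected 20×20 patch are hypothetical.

```python
import numpy as np
from skimage import io
from skimage.feature import match_template
from skimage.filters import threshold_otsu

image = io.imread("munich1_pan.tif").astype(float)    # hypothetical input file
patch = image[500:520, 800:820]                       # hypothetical user-selected street patch t(x, y)

gamma = match_template(image, patch, pad_input=True)  # normalized cross correlation gamma(u, v)
B1 = gamma > threshold_otsu(gamma)                    # road-like pixels form the first mask B1(x, y)
```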
Although the estimated street segment helps us in feature selection, we still cannot eliminate features coming from high objects on the street, such as street lamps, statues, small kiosks, etc. Unfortunately, those small objects also do not appear in the DEM of the region, so they cannot be eliminated using height information coming from the DEM. In order to eliminate features coming from these objects, in this step we try to detect them using shadow information. For shadow extraction, we use local image histograms. For each 100 × 100 pixel window of the input image, the first local minimum in the grayscale histogram is taken as a threshold value to apply local thresholding to the image. After applying our automatic local thresholding method, we obtain a binary shadow map. In Fig. 3.(b), we represent the detected shadow pixels on the original image.
After detecting shadow pixels, we use the sun illumination angle to generate our high object mask. For labeling high objects, each shadow pixel should be shifted to the opposite side of the illumination direction. Assume that (xs, ys) is an array of shadow pixel coordinates, which are represented in Fig. 3.(b). The new positions of the shadow pixels (x̃s, ỹs) are computed as x̃s = xs + l sin(−θ) and ỹs = ys + l cos(−θ). Here θ is the opposite of the illumination angle, which is given by the user, and l is the amount of shift in the θ direction in pixels. For better accuracy, l should be chosen as the width of the shadow in the illumination direction. However, in order to decrease computation time and complexity, we take l equal to the length of the minor axis of an ellipse which fits the shadow shape. After shifting the shadow pixels, we generate our second binary mask B2(x, y), where B2(x, y) = 1 at the shifted locations (x̃s, ỹs). In Fig. 4, we illustrate the shadow pixel shifting operation.
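The shadow-shifting step can be sketched as follows; the function signature is hypothetical, and the shift amount l and the angle θ are assumed to be supplied as described in the text.

```python
import numpy as np

def high_object_mask(shadow_mask, theta_deg, l):
    """Shift each shadow pixel by l pixels against the illumination direction to mark high objects."""
    ys, xs = np.nonzero(shadow_mask)
    theta = np.deg2rad(theta_deg)                    # opposite of the illumination angle
    xs_new = np.round(xs + l * np.sin(-theta)).astype(int)
    ys_new = np.round(ys + l * np.cos(-theta)).astype(int)
    B2 = np.zeros_like(shadow_mask, dtype=bool)
    valid = (xs_new >= 0) & (xs_new < shadow_mask.shape[1]) & \
            (ys_new >= 0) & (ys_new < shadow_mask.shape[0])
    B2[ys_new[valid], xs_new[valid]] = True          # B2(x, y) = 1 at the shifted locations
    return B2
```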
In order to obtain the last mask B3(x, y), we use the DEM of the corresponding region, which is generated from stereo Ikonos images using the DEM generation method of d'Angelo et al. (d'Angelo et al., 2009). We obtain the B3(x, y) binary mask by applying local thresholding to the DEM. We provide the original DEM corresponding to the Munich1 image and the obtained binary mask in Fig. 5.(a) and (b), respectively. As can be seen, building rooftop regions are eliminated; however, other low regions like park areas, parking lots with cars (or the sea surface for some other test areas) cannot be eliminated with this mask. Therefore, we use the information coming from the three masks we generated. We take our interest area as S(x, y) = B1(x, y) ∧ B2(x, y)′ ∧ B3(x, y), where '∧' represents the logical AND operation for binary images and B2(x, y)′ denotes the complement of B2(x, y).
We use the detected interest area S(x, y) for removing FAST features which were extracted from other objects. We eliminate a FAST feature at coordinates (x_i, y_i) if S(x_i, y_i) = 0. The remaining FAST features serve as observations of the probability density function (pdf) of the people to be estimated. In the next step, we introduce an adaptive kernel density estimation method to estimate the corresponding pdf, which will help us to detect dense people groups and also other people in sparse groups.
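A minimal sketch of the mask combination and feature selection is shown below; B1, B2, B3 and the coords array of FAST locations are assumed to come from the previous steps.

```python
import numpy as np

def select_features(coords, B1, B2, B3):
    S = B1 & ~B2 & B3                        # S(x, y) = B1 AND (NOT B2) AND B3
    x = np.round(coords[:, 0]).astype(int)   # coords holds the (x_i, y_i) FAST feature locations
    y = np.round(coords[:, 1]).astype(int)
    return coords[S[y, x]]                   # keep only features inside the interest area
```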
DETECTING INDIVIDUALS AND DENSE CROWDS
Since we have no prior information about possible crowd locations in the image, we formulate the crowd detection method using a probabilistic framework. Assume that (x_i, y_i) is the ith FAST feature, where i ∈ [1, 2, ..., K_i]. Each FAST feature indicates a local color change which might be a human to be detected. Therefore, we take each FAST feature as an observation of a crowd pdf. For crowded regions, we assume that more local features should come together. Therefore, knowing the pdf will lead to the detection of crowds. For pdf estimation, we benefit from a kernel based density estimation method, as Sirmacek and Unsalan presented for local feature based building detection (Sirmacek and Unsalan, 2010).
Silverman (Silverman, 1986) defined the kernel density estimator for a discrete and bivariate pdf as follows. The bivariate kernel function N(x, y) should satisfy N(x, y) ≥ 0 and ∫∫ N(x, y) dx dy = 1. The pdf estimator with kernel N(x, y) is then defined by p̂(x, y) = (1/(n h²)) Σ_{i=1}^{n} N((x − x_i)/h, (y − y_i)/h), where h is the width of the window, which is also called the smoothing parameter. In this equation, (x_i, y_i) for i = 1, 2, ..., n are observations from the pdf that we want to estimate. We take N(x, y) as a Gaussian symmetric pdf, which is used in most density estimation applications. Then, the estimated pdf is formed as p_n(x, y) = (1/R) Σ_{i=1}^{n} exp(−((x − x_i)² + (y − y_i)²)/(2σ²)), where σ is the bandwidth of the Gaussian kernel (also called the smoothing parameter), and R is the normalizing constant to normalize p_n(x, y) values between [0, 1].
In kernel based density estimation, the main problem is how to choose the bandwidth of the Gaussian kernel for a given test image, since the estimated pdf directly depends on this value. For images of different resolutions, the pixel distance between two persons will change. That means that Gaussian kernels with different bandwidths are needed to make two nearby persons connected, so that they are detected as a group. Otherwise, there will be many separate peaks in the pdf, and we will not be able to find large hills which indicate crowds. As a result, using a Gaussian kernel with a fixed bandwidth will lead to poor estimates. Therefore, the bandwidth of the Gaussian kernel should be adapted to any given input image.
In probability theory, there are several methods to estimate the bandwidth of kernel functions for given observations. One well-known approach is using statistical classification. This method is based on computing the pdf using different bandwidth parameters and then comparing them. Unfortunately, in our field such a framework can be very time consuming for large input images. The other well-known approach is called balloon estimators. This method checks the k-nearest neighborhoods of each observation point to understand the density in that area. If the density is high, the bandwidth is reduced proportionally to the detected density measure. This method is generally used for variable kernel density estimation, where a different kernel bandwidth is used for each observation point. However, in our study we need to compute one fixed kernel bandwidth to use at all observation points. To this end, we follow an approach which is slightly different from balloon estimators. First, we pick K_i/2 random observations (FAST feature locations) to reduce the computation time. For each observation location, we compute the distance to the nearest neighbor observation point. Then, the mean of all distances gives us a number l. We assume that the variance of the Gaussian kernel (σ²) should be equal to or greater than l. In order to guarantee that the kernels of two close observations intersect, we take the variance of the Gaussian kernel as 5l in our study. Consequently, the bandwidth of the Gaussian kernel is estimated as σ = √(5l). For a given sequence, this value is computed only once over one image. Then, the same σ value is used for all observations which are extracted from images of the same sequence. The introduced automatic kernel bandwidth estimation method makes the algorithm robust to scale and resolution changes.
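The adaptive kernel density estimation can be sketched as below: the bandwidth is derived from the mean nearest-neighbor distance of a random half of the observations, and the sum of Gaussian kernels is implemented with a Gaussian filter over an impulse image. Parameter names and the filtering shortcut are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import gaussian_filter

def crowd_pdf(coords, image_shape, rng=np.random.default_rng(0)):
    # Bandwidth estimation from K_i / 2 random observations.
    sample = coords[rng.choice(len(coords), size=len(coords) // 2, replace=False)]
    dists, _ = cKDTree(coords).query(sample, k=2)    # k=2: the first neighbor is the point itself
    l = dists[:, 1].mean()                           # mean nearest-neighbor distance
    sigma = np.sqrt(5.0 * l)                         # variance 5*l, as described in the text

    impulses = np.zeros(image_shape, dtype=float)
    xs = np.round(coords[:, 0]).astype(int)
    ys = np.round(coords[:, 1]).astype(int)
    np.add.at(impulses, (ys, xs), 1.0)
    pdf = gaussian_filter(impulses, sigma=sigma)     # sum of Gaussian kernels at feature locations
    return pdf / pdf.max()                           # normalize to [0, 1]
```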
We use Otsu's automatic thresholding method on the obtained pdf to detect regions having high probability values (Otsu, 2009). After thresholding our pdf function, in the obtained binary image we eliminate regions with an area smaller than 1000 pixels, since they cannot indicate large human crowds. The resulting binary image B_c(x, y) holds the dense crowd regions. Since our Munich1 test image does not include very dense crowds, in Fig. 7 we illustrate an example dense crowd detection result on another Worldview-2 satellite test image, which was taken over Cairo city during an outdoor event.
After detecting dense crowds automatically, we focus on detecting individuals in sparse areas.Since they indicate local changes, we assume that detected local features can give information about individuals.
In most cases, shadows of people or small gaps between people also generate a feature. In order to decrease counting errors caused by double-counted people due to their shadows, we follow a different strategy to detect individuals. We use a binary mask B_f(x, y) in which the (x_i, y_i) feature locations have value 1. Then, we dilate B_f(x, y) using a disk-shaped structuring element with a radius of 2 to connect close feature locations. Finally, we apply connected component analysis to the mask, and we take the mass center of each connected component as a detected person position. In this process, a slight change of the radius of the structuring element does not make a significant change in the number of truly detected people. However, an appreciable increase in radius can connect features coming from different persons, which leads to underestimates.
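A minimal sketch of this individual-detection step with SciPy's morphology tools is given below; only the disk radius of 2 follows the text, the rest is an illustrative implementation.

```python
import numpy as np
from scipy import ndimage

def detect_individuals(coords, image_shape, radius=2):
    Bf = np.zeros(image_shape, dtype=bool)
    Bf[np.round(coords[:, 1]).astype(int), np.round(coords[:, 0]).astype(int)] = True

    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = xx ** 2 + yy ** 2 <= radius ** 2              # disk-shaped structuring element
    dilated = ndimage.binary_dilation(Bf, structure=disk)

    labels, n = ndimage.label(dilated)                   # connected component analysis
    return ndimage.center_of_mass(dilated, labels, range(1, n + 1))   # one (row, col) per person
```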
EXPERIMENTS
To test the proposed algorithm, we use a Worldview-2 satellite image dataset which consists of four multitemporal panchromatic images taken over Munich city (Munich1-4 images) and one panchromatic image taken over Cairo city (Cairo1). Those panchromatic Worldview-2 satellite images have approximately half-meter spatial resolution. We also test the proposed algorithm on an airborne image (with 30 cm spatial resolution) taken over the same region in Munich city, in order to show the robustness of the algorithm to resolution and sensor differences. In Fig. 6, we represent the people detection results for the Munich1-4 images. For these four multitemporal images, the true individual person detection performances are counted as 92.02%, 70.73%, 88.57%, and 89.19%, respectively. Besides, the false alarm ratios are obtained as 14.49%, 40.34%, 24.29%, and 27.03%, respectively. In Fig. 7.(a), we present dense crowd detection and people detection results in the Worldview-2 satellite image taken over Cairo city. Robust detection of dense crowd boundaries indicates the usefulness of the proposed algorithm to monitor large mass events. Finally, in Fig. 7.(b), we represent the people detection results on an airborne image which was taken in the same test area over Munich city. The obtained result shows the robustness of the algorithm to scale and sensor differences of the input images.
CONCLUSION
In order to solve the crowd and people detection problem, herein we introduced a novel approach to detect crowded areas automatically from very high resolution satellite images. Although the resolution of those images is not enough to see each person with sharp details, we can still notice a change of color components at the place where a person exists. Therefore, we developed an algorithm which is based on local feature extraction from the input image. After eliminating local features coming from different objects or rooftop textures by applying a feature selection step, we generated a probability density function using Gaussian kernel functions with a constant bandwidth. For deciding the bandwidth of the Gaussian kernel to be used, we used our adaptive bandwidth selection method. In this way, we obtained a robust algorithm which can cope with input images having different resolutions. By automatically thresholding the obtained pdf function, dense crowds are robustly detected. After that, local features in sparse regions are analyzed to find other individuals. We have tested our algorithm on a panchromatic Worldview-2 satellite image dataset, and also compared it with the result obtained from an airborne image of the same test area. Our experimental results indicate possible usage of the algorithm in real-life events. We believe that the proposed fully automatic algorithm will gain more importance in the near future with the increasing spatial resolutions of satellite sensors.
Figure 1: (a) Munich1 test image from our Worldview-2 satellite image dataset, (b) Real resolution of a small region in the Munich1 test image.
Figure 2: (a) Original Munich1 test image, (b) FAST feature locations which are extracted from the Munich1 test image.
Figure 3: (a) Road-like pixels which are segmented from the Munich1 test image, (b) Automatically extracted shadow pixels from the Munich1 test image.
Figure 4: Illustration of the shadow pixel shifting operation.
Figure 5: (a) Digital elevation model corresponding to the Munich1 test image, which is generated using stereo WorldView-2 satellite images. (b) Low regions in the Munich1 image obtained by applying local thresholding to the DEM.
Domain Adaptation in Remote Sensing Image Classification: A Survey
Traditional remote sensing (RS) image classification methods heavily rely on labeled samples for model training. When labeled samples are unavailable or have different distributions from the samples to be classified, the classification model may fail. Cross-domain or cross-scene remote sensing image classification is developed for this case, where an existing image is used for training and an unknown image from a different scene or domain is to be classified. The distribution inconsistency problem may be caused by differences in acquisition environment conditions, acquisition scene, acquisition time, and/or changing sensors. To cope with the cross-domain remote sensing image classification problem, many domain adaptation (DA) techniques have been developed. In this article, we review DA methods in the field of RS, especially hyperspectral image classification, and organize the survey into traditional shallow DA methods (e.g., instance-based, feature-based, and classifier-based adaptations) and recently developed deep DA methods (e.g., discrepancy-based and adversarial-based adaptations).
Remote sensing (RS) systems provide various types of Earth observations, which can be used for large-scale and long-term applications.
Although different types of RS images are available, they are difficult to process collaboratively due to the large distribution differences (e.g., different sensors and acquisition conditions) among these images. In particular, for RS image classification, we usually want to build a model on a known image to classify an unknown one. If these two images have different data distributions, traditional classification methods may not provide satisfying results. Fortunately, if the distributions of the two images are related, we can use a domain adaptation (DA) technique to build a connection between the images and transfer knowledge from one image to the other. For RS image processing, there are various cases where two images have different but related distributions [2], [3]: 1) difference in sensors: two images are acquired by two different sensors over the same scene; 2) difference in spatial locations (sampling bias): two images correspond to two disjoint regions in a large scene; 3) difference in scenes: two images correspond to two different scenes with similar materials; 4) difference in acquisition conditions (atmospheric, illumination, or acquisition angle): two images are acquired under different imaging conditions; 5) difference in acquisition times: two images are acquired at different times and the ground materials have changed. In all these cases, the two images share some related characteristics, such as the same or similar scenes, the same sensor, or similar acquisition conditions. Thanks to such correlation, the data inconsistency problem can be solved by DA techniques, which cast the differences in sensors or imaging environmental conditions as a data or feature transfer problem.
DA aims to solve the distribution discrepancy between domains [4], [5]. Depending on the availability of target labels, DA methods can be categorized into unsupervised, semisupervised, and supervised methods, among which unsupervised DA, with no labels in the target domain, is a hotspot because it matches many practical situations. In early research on unsupervised DA, scholars mainly focused on traditional DA methods that aim to align distributions from the aspects of instance, feature, or classifier. The instance-based methods mainly adjust the marginal distribution of source or target samples. Feature-based methods align the subspace features of different domains to minimize their distribution differences. Classifier-based methods mainly aim to adapt a classifier trained on the source domain to the target domain. In recent years, many deep learning-based DA methods have been proposed [6]. By means of deep network architectures, deep DA methods can automatically extract deep features from domains and further learn transferable features by adding feature adaptation layers to an original deep network architecture or constructing feature learning modules (e.g., adversarial learning). Considering different network architectures, deep DA methods are mainly categorized into discrepancy-based methods and adversarial-based methods [5]. Although these traditional and deep DA methods are widely applied to computer vision tasks, their applicability to RS images is not clear. In this article, we provide a review of these unsupervised DA methods and test their performance on cross-domain RS image classification.
The rest of this article is organized as follows. Section II introduces notations. Sections III, IV, and V describe the shallow DA methods, namely instance-based, feature-based, and classifier-based methods, respectively. Section VI presents the deep DA methods, such as discrepancy-based and adversarial-based methods. Section VII provides experimental results. Finally, Section VIII concludes this article.
II. NOTATIONS
DA considers classification problems where the class spaces of the source and target domains are the same but the distributions of the domains are different yet related. The objective of DA is to classify target samples using the model built on source samples.
The DA methods can be categorized as traditional shallow DA methods and recent deep DA methods. The traditional DA methods can be instance-based, feature-based, and classifier-based DA methods. In the following, they are introduced in detail. Table I shows the taxonomy of DA methods discussed in this article.
III. INSTANCE-BASED METHODS
Instance-based DA methods mainly adjust the marginal distribution of source or target samples such that the distributions of the domains are aligned. Let p_s(x) and p_t(x) be the marginal density distributions of the source and target samples, respectively; an importance weight can then be defined as w(x) = p_t(x)/p_s(x) [5]. By adjusting the reweighting factor w(x), the sample selection bias and covariate shift problems can be alleviated to a certain extent [146], [147]. As shown in Fig. 1, the instance reweighting strategy reweights the source data [i.e., the solid points in Fig. 1(c) have large weights] to minimize the marginal distribution difference between domains, and then a classifier built on the reweighted source data [i.e., the black line in Fig. 1(c)] can be used to classify target samples.
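As a simple illustration of the idea (not of any specific method cited below), the sketch estimates w(x) with kernel density estimates of the two marginals and passes it to a standard classifier as sample weights; the choice of classifier and the weight clipping are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

def reweighted_source_classifier(Xs, ys, Xt, clip=10.0):
    p_s = gaussian_kde(Xs.T)                      # source marginal density p_s(x)
    p_t = gaussian_kde(Xt.T)                      # target marginal density p_t(x)
    w = p_t(Xs.T) / np.maximum(p_s(Xs.T), 1e-12)  # importance weights w(x) = p_t(x) / p_s(x)
    w = np.clip(w, 0.0, clip)                     # clip extreme ratios for numerical stability
    clf = LogisticRegression(max_iter=1000)
    clf.fit(Xs, ys, sample_weight=w)              # classifier trained on reweighted source data
    return clf
```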
To solve the sample selection bias, Huang et al. [7] presented a nonparametric kernel mean matching (KMM) method to directly produce resampling weights without distribution estimation. Yaras et al. proposed a randomized histogram matching (RHM) method to augment training data to describe domain shifts of satellite images. In detail, they analyzed different causes of the domain shift, such as changing sensors, illumination variations, and imaging conditions, modeled these factors as nonlinear pixelwise transformations, and then employed training data augmentation with deep neural networks to increase the model robustness to these transformations [8]. Cui et al. [9] proposed an iterative weighted active transfer learning framework (IWATL) for hyperspectral image (HSI) classification. It weighted the source samples by considering the distance between the samples and the classification hyperplane as well as the similarity between the source and target distributions. Li et al. [10] proposed a cost-sensitive self-paced learning (CSSPL) framework for the classification of multitemporal images, which automatically assigned sample weights via a mixture weight regularizer. To reuse a large number of existing labeled images, a historical and target training data weighting strategy was proposed in an extreme learning machine (ELM)-based RS image transfer classification framework [11].
In the instance-based DA methods, source and/or target sample reweighting and landmark selection are widely used strategies [146], [147]. These strategies can also be embedded into the feature-based and classifier-based adaptations for domain-invariant feature learning or classifier refinement [25], [46], [148], respectively.
IV. FEATURE-BASED METHODS
Feature-based DA methods transform the source and target data into a feature space such that the data distributions of both domains in the feature space are similar. Then, the source features and corresponding labels can be used to build a classifier to predict the labels of the target samples. Feature-based adaptation is usually realized by joint feature extraction, and typical methods are subspace-based and transformation-based adaptation methods.
A. Subspace-Based Adaptation
Subspace-based DA methods usually project the source and target samples into individual subspaces according to subspace learning or dimensionality reduction methods, and then align the subspaces [5].
Gopalan et al. [12] proposed a sampling geodesic flow (SGF) method for DA, which learns intermediate representations of source and target samples via Grassmann manifolds to describe domain shift. However, the SGF approach has several limitations, such as the difficulty of its sampling strategy and the high dimensionality of the new representations. To solve these problems, the geodesic flow kernel (GFK) method was proposed [13]. It constructs a GFK to model domain shift and provides a simple solution to compute the kernel, so the GFK method is easier to implement than SGF [13], [41]. Banerjee et al. [14] proposed a coclustering-based method for DA in the absence of source samples. The samples from both domains are projected into a shared space by the GFK-based projection, followed by a probabilistic support vector machine (SVM)-based iterative coclustering method.
Fernando et al. [15] proposed a subspace alignment (SA) method, which first employs principal component analysis (PCA) to generate individual subspaces for the source and target domains and then learns a linear transformation M that aligns these subspaces by minimizing ‖P_s M − P_t‖²_F, where ‖·‖_F is the Frobenius norm, and P_s, P_t ∈ R^{D×d} are the low-dimensional representations (i.e., basis vectors) of the source and target data, respectively. The procedure of SA is illustrated in Fig. 2.
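A minimal sketch of SA is shown below: PCA bases are computed per domain, the closed-form alignment M = P_s^T P_t is applied to the source basis, and a simple nearest-neighbor classifier is trained on the aligned source features. The subspace dimension d and the classifier are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_alignment_predict(Xs, ys, Xt, d=20):
    Ps = PCA(n_components=d).fit(Xs).components_.T   # D x d source basis
    Pt = PCA(n_components=d).fit(Xt).components_.T   # D x d target basis
    M = Ps.T @ Pt                                    # closed-form alignment matrix
    Zs = Xs @ Ps @ M                                 # source features projected into the aligned subspace
    Zt = Xt @ Pt                                     # target features in their own subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(Zs, ys)
    return clf.predict(Zt)
```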
Sun et al. [16] directly applied the SA for cross-view RS scene classification, where the partial least squares (PLS) method was used to generate discriminative subspace of source domain. They further proposed a transfer sparse subspace analysis (TSSA) algorithm for unsupervised cross-view RS scene classification [18]. It minimized the maximum mean discrepancy (MMD) distance between domains and preserved the self-expressiveness property of the data in a reproducing kernel Hilbert space (RKHS) according to sparse subspace clustering. Wei et al. [17] proposed a robust DA method for HSIs which employed SA to perform subspace feature-level alignment. The SA was extended to a tensor alignment (TA) for HSI classification [19], where tensors of source and target domains were constructed and SA was performed on the tensors. Gao et al. [20] proposed an unsupervised tensorized principal component alignment framework for multimodal RS image classification. Gui et al. [21] proposed a statistical scattering component-based SA for cross-domain polarimetric synthetic aperture radar (PolSAR) image classification.
It can be seen that SA only aligns the subspace bases without considering the distributions of subspaces. To incorporate the distribution alignment into SA, a subspace distribution alignment (SDA) method was proposed to align both subspace bases and subspace distributions [22]. For the cross-scene classification of HSIs, a discriminative cooperative alignment (DCA) method was proposed to alleviate spectral shift [23]. In the DCA, SA and distribution alignment work cooperatively through the subspace correlation constraint and MMD [23]. Zhang et al. [24] proposed a correlation subspace dynamic distribution alignment (CS-DDA) method for RS scene classification, which maximizes the correlation between source and target subspaces and meanwhile dynamically minimizes the statistical distribution difference between domains.
To handle nonlinearity, Aljundi et al. [25] extended the linear SA to a kernel-based SA (KSA). The kernels of the source and target domains are first constructed on the selected landmarks, and then SA is performed on the source and target kernels to align the kernel-based subspaces [25]. To further exploit the source labels and multiple kernel representations, an ideal regularized discriminative multiple kernel subspace alignment (IRDMKSA) was proposed for HSI classification [26]. It performs SA in composite-kernel-based spaces to reduce the distribution differences between domains.
Traditional subspace learning-based strategies usually assume the existence of a single subspace for both domains. However, such an assumption may not be true in many scenarios due to the diversity in the statistical properties of the underlying classes [149]. Banerjee et al. [149] proposed a hierarchical subspace learning-based unsupervised DA technique for multitemporal RS image classification, where node-specific subspaces are learned from a binary-tree. Shen et al. [27] presented a hyperspectral feature adaptation and augmentation (HFAA) method for cross-scene HSI classification, which iteratively learns a common subspace by introducing two separate projection matrices and augments it with a feature selection strategy. Li et al. [28] proposed an iterative reweighting heterogeneous transfer learning (IRHTL) framework, which iteratively learns a shared space of source and target data based on a weighted SVM and conducts an iterative reweighting strategy to reweight the source samples.
Invariant feature-based methods can be regarded as a special case of subspace-based adaptation. It aims to select a set of features that are not affected by shifting factors. The selected features can form a new subspace. Bruzzone et al. [29] proposed a multiobjective optimization framework to select spatially invariant features for the classification of spatially disjoint scenes. The multiobjective framework ensures the selected features with both high discrimination ability and high spatially invariance. The invariant feature selection can also be performed in an RKHS [30]. Paris et al. [31] presented an invariant-feature-based sensor-driven hierarchical DA method. Yan et al. [32] proposed a TrAdaBoost based on an improved particle swarm optimization (PSO) method for cross-domain scene classification, which can select an optimal feature subspace for classifying "harder" and "easier" instances.
There are other subspace-based adaptation methods. Ye et al. [33] proposed a dictionary learning-based feature-level DA technique, which learns a common dictionary to represent source and target data and then aligns their representation coefficient features to reduce the spectral shifts between domains. Wang et al. [34] proposed a pairwise constraint discriminant analysis and nonnegative sparse divergence (PCDA-NSD) method for HSI classification. The PCDA learned potential discriminant information of sample sets in the source and target domains by using pairwise constraints and NSD measured the divergence between different distributions. Lin et al. [35] proposed a dual space unsupervised structure preserving transfer learning (DSTL) framework for HSI classification. It first transfers the data on both domains to a specific subspace, on which the initial classification results for the target HSI are obtained. Then, the initial results on the original target data space are optimized by applying the Markov random field (MRF) approach. Chen et al. [36] proposed a semisupervised dual-dictionary nonnegative matrix factorization (SS-DDNMF) method for heterogeneous transfer learning on cross-scene HSIs, where two different dictionaries are designed for source and target scenes to project two different feature spaces into a shared subspace. Gui et al. [37] proposed a general feature paradigm (GFP) for PolSAR image classification, where information scattering and statistical information are used to reduce the domain shifts.
B. Transformation-Based Adaptation
The transformation-based DA methods transform the original data into new representations to minimize the statistical distribution (i.e., marginal and conditional distributions) discrepancy and geometrical divergence between domains while preserving the underlying structure of original data [5], as shown in Fig. 3.
The MMD, Kullback-Leibler divergence (KL-divergence), or Bregman divergence is usually used to measure the distribution discrepancy between domains [38]. The MMD is defined as MMD(X_s, X_t) = ‖(1/n_s) Σ_{i=1}^{n_s} φ(x_i^s) − (1/n_t) Σ_{j=1}^{n_t} φ(x_j^t)‖², where φ is a nonlinear map induced by a universal kernel. Pan et al. [38] introduced the MMD to measure the distribution discrepancy and proposed a transfer component analysis (TCA) method. TCA intends to learn a set of transfer components in an RKHS using the MMD such that the marginal distribution differences between domains are reduced and the data variance is maximized [38]. The transformation matrix W ∈ R^{(n_s+n_t)×d} in TCA is obtained by minimizing the MMD between the transformed domains together with a regularization term μ tr(W^T W), subject to a variance-preserving constraint, where μ is a regularization parameter, I_m ∈ R^{m×m} is an identity matrix, H is the centering matrix, and K is the kernel matrix defined on all source and target data. The unsupervised TCA can be extended to semisupervised TCA (SSTCA) by using the source labels [38]. Matasci et al. [39] directly applied TCA and SSTCA for the DA of RS image classification. Long et al. [40] proposed a transfer joint matching (TJM) method, which performs feature matching and instance reweighting simultaneously in a unified optimization framework to reduce the marginal distribution differences between domains. Peng et al. [41] proposed a discriminative transfer joint matching (DTJM) method for HSI classification by considering the label information of the source domain.
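For reference, an empirical RBF-kernel MMD estimate between two sample sets can be computed as in the sketch below; this is the quantity that TCA-style methods minimize in the learned feature space, and the kernel bandwidth gamma is an illustrative parameter.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd_rbf(Xs, Xt, gamma=1.0):
    Kss = rbf_kernel(Xs, Xs, gamma=gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma=gamma)
    Kst = rbf_kernel(Xs, Xt, gamma=gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()   # squared MMD estimate
```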
To align both the marginal and conditional distributions between domains, Long et al. [42] further proposed a joint distribution adaptation (JDA) method. JDA finds a linear transformation A ∈ R^{D×d} that aligns the marginal distribution based on the MMD and the conditional distribution based on a classwise (conditional) MMD, where M_0 and M_c denote the corresponding marginal and classwise MMD matrices [42].
The above TCA, TJM, and JDA methods assume that there exists a unified transformation A to map the source and target samples into a common space. However, if the domain shift is large, it is very difficult to find a common transformation [43]. Zhang et al. [43] proposed a joint geometrical and statistical alignment (JGSA) method to learn two coupled mappings A and B for the source and target domains, respectively, which jointly minimize the statistical and geometrical divergence between the two mapped domains. Inspired by the JGSA, Zhou et al. [44], [45] proposed a DA technique based on transformation learning (DATL) for HSI classification. It learns two different transformations using the idea of linear discriminant analysis (LDA) to minimize the ratio of the within-class distance to the between-class distance. A distance-based objective function is designed to optimize the transformations and meanwhile to preserve stochastic neighborhood and discriminative information of the domains in a latent space [44], [45]. Li et al. [46] proposed a locality preserving joint transfer (LPJT) method to improve the JGSA by considering local discriminative information preservation and landmark selection in a unified optimization framework. Huang et al. [47] proposed a graph embedding and distribution alignment (GEDA) method for HSI classification, which uses graph embedding to preserve the discriminative information of the source and target domains and a pseudo-label learning method to refine the target pseudo labels [47]. Similarly, they further proposed a distribution alignment and discriminative feature learning (DADFL) method [48], which performs classwise discriminative information preservation and uses a structural prediction method to learn the pseudo labels of the target samples.
Sun et al. [49] proposed a correlation alignment (CORAL) method, which finds a linear transformation A for the source data such that the distance between the second-order statistics of the transformed source data and the target data, ‖C_Ŝ − C_T‖²_F, is minimized, where C_Ŝ is the covariance of the transformed source features A^T X_S and C_T is the covariance of the target features. This problem has the closed-form solution A* = C_S^{−1/2} C_T^{1/2}, i.e., the source features are first whitened with their own covariance and then re-colored with the target covariance. Peng et al. [50] proposed a sparse matrix transform-based CORAL method for HSI classification. Zhu et al. [51] proposed a class centroid alignment method, which aligns the class centroids by moving the target domain samples toward the source domain. To consider the first- and second-order statistical alignment, a class centroid and covariance alignment (CCCA) method was developed for the classification of RS images [52]. The proposed method includes three main steps: 1) spatial filtering preprocessing, 2) overall centroid alignment-based coarse adaptation, and 3) CCCA-based refined adaptation.
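A minimal sketch of the CORAL transformation is given below; the covariance regularization eps is an assumption added for numerical stability and is not part of the closed-form solution itself.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral_transform(Xs, Xt, eps=1.0):
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])   # regularized source covariance
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])   # regularized target covariance
    A = fractional_matrix_power(Cs, -0.5) @ fractional_matrix_power(Ct, 0.5)
    return np.real(Xs @ A)      # source features whitened and re-colored with the target covariance
```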
Many canonical correlation analysis (CCA)-based transformation methods were proposed for DA. Qin et al. proposed a cross-domain collaborative learning (CDCL) method for heterogeneous DA of HSIs. It consisted of three parts, i.e., random walker (RW)-based pseudo-labeling, cross-domain learning via cluster canonical correlation analysis (C-CCA), and final classification based on extended RW (ERW) algorithm [53]. Samat et al. [54] proposed a supervised and semisupervised multiview CCA ensemble method for heterogeneous DA in RS image classification. Li et al. [55] proposed a sparse subspace correlation analysis-based supervised classification (SSCA-SC) method for HSI classification, which integrated the idea of CCA into a sparse representation subspace learning framework and directly classified the target samples based on the sparse representation reconstruction residuals. Volpi et al. [56] proposed a kernel CCA transformation (kCCA) method to align spectral characteristics of multitemporal cross-sensor images for change detection.
To correct nonlinear variations between domains, Tuia et al. constructed a nonlinear transform based on vector quantization and graph matching to describe the data changes under different acquisition conditions [57]. They further proposed a semisupervised manifold alignment (SS-MA) method to align the manifolds of RS images [58] by solving a standard Rayleigh quotient problem, in which the affinity matrix V is used to maximize the distances between samples of different classes, and U enhances the class similarity between the labeled instances across domains. The matrices U and V can be constructed based on the graph Laplacian. The SS-MA method can be used for multitemporal, multisource, and multiangular classification. Yang et al. [59] proposed a global aligned local manifold (GALM) method to align two globally similar manifolds and to minimize the impact of spectral changes at the local scale. They further extended MA and proposed a spectral and spatial proximity-based MA for multitemporal HSI classification [60]. Hong et al. [61] proposed a learnable MA (LeMA) for semisupervised cross-modality hyperspectral-multispectral classification. Ma et al. [62] proposed an unsupervised MA method for cross-domain classification of RS images, which used an SVM-prediction-based cross-domain similarity matrix and a per-class MMD constraint. To exploit the manifold structure of the data, Luo et al. [63] proposed a manifold regularized distribution adaptation (MRDA) algorithm to minimize the per-class MMD and meanwhile preserve the manifold structure of the source and target data in the subspace. Wang et al. [64] proposed a DA broad learning (DABL) method for HSI classification, which combines DA and the broad learning system (BLS) to perform MMD-based distribution alignment and manifold structure preservation. Dong et al. proposed a spectral-spatial weighted kernel manifold embedded distribution alignment (SSWK-MEDA) method for RS image classification [65], which applies a spatial filter to preprocess the hyperspectral data and constructs a spatial-spectral composite kernel for kernel-based adaptation. Gross et al. [66] proposed a nonlinear feature normalization alignment (NFNalign) transformation to mitigate nonlinear effects in hyperspectral data.
There are also some other transform-based methods. Chakraborty et al. [150] proposed an artificial neural network-based DA strategy, which unifies common data transformation and transfer learning methods. Tardy et al. [67] applied optimal transport (OT) to land-cover mapping of high-resolution satellite image time series. Jia et al. [68] applied the 3-D Gabor transformation to extract spatial-spectral features of HSIs for DA.
V. CLASSIFIER-BASED ADAPTATION
The classifier-based DA methods adapt a classifier trained on a source domain to a target domain by considering unlabeled samples of the target domain.
The classifier-based DA can be performed by the adaptation of classifier parameters [69], [70], [71], [72], [151]. Bruzzone et al. [69] proposed a classifier-based DA method to solve the data distribution difference between multitemporal RS images by updating the parameters of a trained maximum-likelihood (ML) classifier on the basis of the distribution of a new image to be classified. The ML-based DA technique was further extended to the Bayesian cascade classifier, multiple-classifier, and multiple cascade-classifier cases [70], [71], [72]. Zhong et al. [152] proposed a classifier updating method by considering spectral features and guided-filter-based a posteriori spatial features. Izquierdo-Verdiguier et al. [73] updated the SVM classifier by adding virtual support vectors (VSVs) for training, where the VSVs contain invariances to rotations, reflections, and object scale. An SVM-based sequential classifier training (SCT-SVM) approach was proposed for multitemporal RS image classification [74]. By casting DA as a multitask or multiple-kernel learning problem, many multiple-kernel learning-based DA methods were proposed [75], [76], [77], [78]. Xu et al. [79] proposed a DA method based on transferring the parameters of an ELM. Considering the simplicity of ELM, many ELM-based classifier adaptation methods were proposed, such as cross-domain ELM (CDELM) [80], ELM-based heterogeneous DA [81], interpretable rule-based fuzzy ELM (IRF-ELM) [82], and ensemble transfer learning based on ELM (TL-ELM) [83]. Wei et al. investigated the combination of multiple classifiers for DA of RS image classification. The multiple domain adaptation fusion (MDAF) method and the multiple base classifier fusion (MBCF) method were proposed to obtain a more stable classification performance [84]. Zhang et al. [85] considered the open set DA problem for RS scene classification via updating the classifier by exploring transferability and discriminability. Wang et al. [86] proposed an easy transfer learning (EasyTL) approach by exploiting intradomain structures to learn both nonparametric transfer features and classifiers.
Semisupervised learning (SSL) and active learning (AL) techniques can also be used to solve the domain shift problem by updating a source classifier with the use of unlabeled target samples [153], [154]. Rajan et al. [87] proposed a binary hierarchical classifier (BHC) framework for knowledge transfer from an existing labeled source domain to a spatially separate and multitemporal target domain, where an SSL technique was used to update the BHC to reflect the characteristics of new data. The classical supervised SVM was also extended to a semisupervised case to solve the DA problem [88], [155]. A typical semisupervised SVM is the domain adaptation SVM (DASVM) [88], which built a standard SVM on a source domain and then iteratively adjusted the SVM model using unlabeled target samples. Kim et al. proposed an adaptive manifold classifier (MRC) in a semisupervised setting, where a kernel machine was first trained with labeled data and then iteratively adapted to new data using manifold regularization [89]. The AL technique was also adopted to update existing classifiers [90], [91], [92], [93], [94], [95], [96], [97]. As shown in Fig. 4, the AL method first built a classifier on the source data and then classified target samples. By selecting some candidates with the highest uncertainty and providing user labels for them, these samples can be used to expand the training set to update the classifier. Deng et al. [140] proposed an active multikernel DA method for HSI classification, which combines the AL with multikernel learning for DA. Kalita et al. [156] proposed a standard deviation (SD)-based AL technique to exploit the labeled source images to generate the "most-informative" target samples. Saboori et al. [157] proposed an active multiple kernel Fredholm learning (AMKFL) method, where a Fredholm kernel regularized model was presented to label samples.
VI. DEEP DOMAIN ADAPTATION
Unlike traditional DA methods that rely on hand-crafted features, deep learning methods can automatically learn features using deep neural networks (DNNs) [158]. Most current deep DA methods add adaptation layers to an original deep network architecture to realize source-to-target adaptation or adopt an adversarial learning strategy to minimize the cross-domain discrepancy. Deep DA methods are mainly divided into discrepancy-based methods, adversarial-based methods, and others [5], [159].
A. Discrepancy-Based Adaptation
The discrepancy-based deep DA methods mainly aim to match marginal and/or conditional distributions between domains by adding adaptation layers (e.g., an MMD-based metric) into deep neural networks for task-specific representations, as shown in Fig. 5. Long et al. [98] proposed the deep adaptation network (DAN), one of the first methods to utilize DNNs to learn transferable features across domains for DA. In DAN, three adaptation layers based on the multiple kernel variant of MMD (MK-MMD) are designed to align marginal distributions between domains. In order to reduce both marginal and conditional distribution differences, they further proposed a joint adaptation network (JAN), which uses source labels and target pseudo labels to construct MMD and conditional MMD terms to align the joint distribution, following an adversarial training strategy. Based on DAN, Zhu et al. [99] proposed a multirepresentation adaptation network (MRAN) [100], which performs cross-domain classification tasks through multirepresentation alignment. They further proposed a deep subdomain adaptation network (DSAN) using the idea of subdomain adaptation [101]. Zhu et al. [160] developed a weakly pseudo-supervised decorrelated subdomain adaptation (WPS-DSA) network for cross-domain land-use classification. Sun et al. [102] proposed Deep CORAL, which extends CORAL to deep learning.
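As a rough illustration of the adaptation layers mentioned above, the following sketch computes a single-kernel (RBF) estimate of the squared MMD between source and target batch features and adds it to the task loss; the cited methods typically use multi-kernel variants (MK-MMD) attached to several layers, so the kernel choice, bandwidth, and the weighting factor lam are assumptions.

```python
# Minimal PyTorch sketch of an MMD adaptation penalty between source and target
# batch features (single RBF kernel for brevity). Feature tensors are assumed
# to be (batch, dim) activations from a shared backbone.
import torch

def rbf_kernel(x, y, gamma=1.0):
    # pairwise squared distances -> RBF kernel matrix
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-gamma * d2)

def mmd2(source_feat, target_feat, gamma=1.0):
    k_ss = rbf_kernel(source_feat, source_feat, gamma).mean()
    k_tt = rbf_kernel(target_feat, target_feat, gamma).mean()
    k_st = rbf_kernel(source_feat, target_feat, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st  # biased estimate of squared MMD

# total loss = task loss on labeled source data + lam * MMD^2 on the adaptation layer:
# loss = torch.nn.functional.cross_entropy(classifier(fs), ys) + lam * mmd2(fs, ft)
```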
For HSI classification, Ma et al. [103] designed a class centroid alignment module in the DNN for cross-domain HSI classification. Garea et al. [104] proposed a TCA-based network (TCANet) for DA of HSIs, which used the TCA to construct an adaptation layer. Wang et al. [105] proposed a deep DA with MMD-based classwise distribution alignment and manifold structure preservation in the target domain. Ma et al. [106] proposed a deep DA network (DDA-Net) for cross-dataset HSI classification, which minimized the domain discrepancy and transferred the task-relevant knowledge from source to target in an unsupervised way. Li et al. [107] proposed a two-stage deep DA (TDDA) method, where in the first stage, the distribution distance between domains is minimized based on the MMD to learn a deep embedding space, and in the second stage, a spatial-spectral Siamese network is constructed to learn discriminative spatial-spectral features to further decrease the distribution discrepancy. Zhang et al. [108] proposed a topological structure and semantic information transfer network (TSTnet). It employs the graph structure to characterize topological relationships and combines the graph convolutional network (GCN) and CNN for cross-scene HSI classification. The optimal transport (OT)-based graph alignment and MMD-based distribution alignment work cooperatively. Wang et al. [109] proposed a graph neural network (GNN) DA method for multitemporal HSIs, which incorporated the domainwise and classwise CORAL into the GNN network to align the joint distributions of domains. Liang et al. [110] proposed an attention multisource fusion-based deep few-shot learning (AMF-FSL) method for HSI classification with small sample sizes, which contains three modules, namely, target-based class alignment, domain attention assignment, and multisource data fusion, and transfers the classification ability learned from multiple source datasets to the target data.
Othman et al. [161] proposed a DA network for cross-scene classification. It uses a pretraining and fine-tuning strategy to ensure that the network correctly classifies the source samples, aligns the source and target distributions, and preserves the geometrical structure of the target data [161]. Similarly, Lu et al. [111] proposed a multisource compensation network (MSCN) for the cross-scene classification task. In this network, a cross-domain alignment module and a classifier complement module are designed to reduce the domain shift and to align categories across multiple sources, respectively. Zhu et al. [112] proposed an attention-based multiscale residual adaptation network (AMRAN) for cross-scene classification, which contains a residual adaptation module for marginal distribution alignment, an attention module for robust feature extraction, and a multiscale adaptation module for multiscale feature extraction and conditional distribution alignment [112]. Geng et al. [113] proposed deep joint distribution adaptation networks (DJDANs) for transfer learning in SAR image classification, where marginal and conditional distribution adaptation networks are developed.
B. Adversarial-Based Adaptation
Inspired by generative adversarial nets (GANs) [114], [115], adversarial DA approaches learn transferable and domain-invariant features through adversarial learning. A GAN contains a generator model G and a discriminator model D. The generator aims to produce samples similar to those of the source domain and to confuse the discriminator into making a wrong decision, while the discriminator learns to discriminate between the true source data and the counterfeits generated by G [5]. The training process can be summarized as jointly minimizing the classification loss L_cls of the classifier C on labeled source data together with the generator's adversarial loss L_adv_G, while the discriminator D is trained with its own adversarial loss L_adv_D. After GAN-based adaptation, a task-specific classifier built on the source domain can be used to classify target samples. Fig. 6 illustrates the GAN-based DA.
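The following is a minimal DANN-style sketch of this adversarial training scheme, assuming a feature extractor G, label classifier C, and domain discriminator D implemented as PyTorch modules; the exact losses and update order of the cited methods may differ.

```python
# Minimal adversarial DA training step: G and C minimize the source classification
# loss while G additionally tries to fool D on target features; D learns to separate
# source from target features. G, C, D are assumed nn.Module instances; opt_gc
# optimizes G and C, opt_d optimizes D.
import torch
import torch.nn.functional as F

def train_step(G, C, D, opt_gc, opt_d, xs, ys, xt):
    fs, ft = G(xs), G(xt)

    # discriminator step: source features -> label 1, target features -> label 0
    ds, dt = D(fs.detach()), D(ft.detach())
    d_loss = (F.binary_cross_entropy_with_logits(ds, torch.ones_like(ds))
              + F.binary_cross_entropy_with_logits(dt, torch.zeros_like(dt)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator/classifier step: classify source correctly and confuse D on target
    cls_loss = F.cross_entropy(C(fs), ys)                                   # L_cls
    adv_loss = F.binary_cross_entropy_with_logits(D(ft), torch.ones_like(dt))  # L_adv_G
    opt_gc.zero_grad(); (cls_loss + adv_loss).backward(); opt_gc.step()
    return cls_loss.item(), adv_loss.item(), d_loss.item()
```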
Tzeng et al. [116] introduced an adversarial CNN-based architecture that aligned distributions between domains by minimizing the classification loss, soft label loss, domain classifier loss, and domain confusion loss. Pei et al. [117] proposed a multiadversarial DA (MADA) method that constructed multiple classwise domain discriminators to reduce the joint distribution difference between domains. Yu et al. [118] proposed a dynamic adversarial adaptation network (DAAN), which can dynamically assess the relative importance of global and local domain distributions. Saito et al. [119] proposed a deep DA method based on the maximum classifier discrepancy (MCD) that leverages task-specific decision boundaries and adversarial learning ideas to adjust the distribution of source and target domains. The MCD method aims to learn domain-invariant features, which may have lower discriminative ability. To solve this problem, a dynamic weighted learning (DWL) method was proposed to adjust the weights of domain alignment learning and class discrimination learning in the MCD framework [120].
Recently, many adversarial-based DA approaches were developed for HSI classification. Ma et al. [121] proposed an adversarial learning-based DA method for the classification of HSIs, which included a variational autoencoder (VAE)-based generator and a multiclassifier-based discriminator. The generator learns features such that the source classification error is minimized and the classification disagreement on the target dataset is maximized. The discriminator deceives the generator by adjusting the classifiers such that the classification disagreement on the target dataset is minimized. Miao et al. [122] also used the VAE module to construct a generative model and further designed a joint distribution alignment module to perform coarse-to-fine joint distribution alignment for HSI classification. Yu et al. [123] proposed a contentwise alignment method within an adversarial learning framework. Pande et al. [124] proposed a class reconstruction driven adversarial DA method, which incorporates an additional class-level cross-sample reconstruction loss to enforce classwise compactness in the learned space and an additional orthogonality constraint over the source domain to avoid redundancy within the encoded features. Liu et al. [125] proposed a classwise adversarial adaptation network for HSI classification, which performed classwise adversarial learning. Saboori et al. [126] proposed an adversarial discriminative active deep learning (ADADL) method for HSI classification. Similar to MCD, it incorporates two different land-cover classifiers as a discriminator to consider class boundaries when aligning feature distributions, and combines an entropy measure with the cross-entropy loss during training to use the information in unlabeled target data [126]. Wang et al. [127] proposed a domain adversarial broad adaptation network (DABAN) for HSI classification. It includes a domain adversarial adaptation network (DAAN) and a conditional adaptation broad network (CBAN), which align the statistical distributions between domains and also enhance the representation ability of domain-invariant features. Yu et al. [123] proposed an unsupervised DA architecture with dense-based compaction (UDAD) for cross-scene HSI classification. It incorporates spectral-spatial feature compaction, unsupervised DA, and classifier training into an integrated framework and utilizes adversarial domain learning to reduce the domain discrepancy. Deng et al. [128] proposed a deep metric learning-based feature embedding method for HSI classification, which uses an adversarial learning strategy to align source and target features and to preserve the similar clustering structure of source and target features. Fang et al. [162] developed a confident learning (CL)-based DA (CLDA) method for HSI classification, where the CL module is designed to select high-confidence pseudo-labeled target samples. Li et al. [129] proposed a deep cross-domain few-shot learning (DCFSL) method for HSI classification, which combines FSL and DA, where conditional adversarial DA is employed to reduce the domain shift and FSL is used to learn transferable knowledge from source to target for classification.
Adversarial-based DA approaches were also used for the scene classification of RS images. Teng et al. [130] presented a classifier-constrained deep adversarial domain adaptation (CDADA) method exploiting the idea of MCD for cross-domain semisupervised classification of RS scene images, where a deep convolutional neural network (DCNN) is used to build feature representations and adversarial DA is used to align the feature distributions of domains. Zhang et al. [131] proposed a domain feature enhancement network (DFENet) to enhance the discriminative ability of the learned features for dealing with the domain variances of scene classification. Specifically, a context-aware feature refinement module is first designed to recalibrate global and local features by explicitly modeling interdependencies between channel and spatial dimensions for each domain. Then, a multilevel adversarial dropout module is further designed to strengthen the generalization capability of the network. Yan et al. [132] proposed a triplet adversarial domain adaptation (TriADA) method for pixel-level classification of very high resolution (VHR) RS images, which learns a domain-invariant classifier with a domain similarity discriminator. Zhu et al. [133] proposed a semisupervised center-based discriminative adversarial learning (SCDAL) method for cross-domain scene classification of aerial images using adversarial learning with a center loss. Liu et al. [163] proposed an unsupervised adversarial DA network for remotely sensed scene classification, where a GAN-based feature extractor brings the source and target distributions closer, and a classifier trained on transferred source domain features achieves better classification accuracy on the target domain. Zheng et al. [134] proposed a two-stage adaptation network (TSAN) for RS scene classification considering a single source domain and multiple target domains, which utilizes adversarial learning to align single-source features with mixed-multiple-target features and self-supervised learning to distinguish the mixed-multiple-target domain. Adayel et al. [164] developed a deep open-set DA method for cross-scene classification using adversarial learning and Pareto ranking. To exploit the classification information in the target domain, Zheng et al. [165] proposed a DA via task-specific classifier (DATSNET) method for RS scene classification, where an adversarial learning strategy is used to adjust task-specific classification decision boundaries.
Adversarial-based approaches were also applied to other DA tasks of RS images. Bejiga et al. [135] proposed a domain adversarial neural network (DANN) for large-scale land cover classification of multispectral images, where the network consists of a feature extractor, a class predictor, and domain classifier blocks. Rahhal et al. [166] proposed an adversarial learning method for DA from multiple remote sensing sources, which aligns the source and target distributions using a min-max entropy optimization method. Elshamli et al. [136] employed denoising autoencoders (DAEs) and domain-adversarial neural networks (DANNs) to tackle the DA problem for multispatial and multitemporal RS images. Martini et al. [3] developed self-attention-based domain-adversarial networks for land cover classification using multitemporal satellite images, where the deep adversarial network can reduce the domain discrepancy between distinct geographical zones. Ji et al. [167] proposed an end-to-end GAN-based DA method for land cover classification from multiple-source RS images, where the source images are translated to the style of the target images through adversarial learning to train a fully convolutional network (FCN) for semantic segmentation of the target images. Tasar et al. [137] proposed a multisource DA method (i.e., StandardGAN) for semantic segmentation of VHR satellite images. They further designed an unsupervised, multisource, multitarget, and life-long DA method for semantic segmentation of satellite images [168]. Wittich et al. [169] deployed a deep adversarial DA network using semantically consistent appearance adaptation for the classification of aerial images. A color mapping generative adversarial network (ColorMapGAN) was built for DA of RS image semantic segmentation [170]. Makkar et al. [171] adopted adversarial learning to extract discriminative target domain features that are aligned with the source domain for geospatial image analysis. Mateo et al. [172] investigated a cross-sensor adversarial DA method for Landsat-8 and Proba-V images for cloud detection.
C. Others
Several other deep DA methods do not fall into the above two categories. Yang et al. [138] proposed a transfer learning-based two-branch CNN model for HSI classification, where spatial and spectral CNNs are used to extract joint spectral-spatial features from HSIs, followed by target network training using transfer learning with limited labeled samples of the target domain. Zhou and Prasad [139] proposed a deep feature alignment neural network (FANN) for HSI classification, where discriminative features for both domains are extracted using deep convolutional recurrent neural networks (CRNNs) and then aligned layer-by-layer according to a transformation learning-based domain adaptation (DATL) method. Deng et al. [140] proposed an active transfer learning network for HSI classification, which exploits a hierarchical stacked sparse autoencoder (SSAE) network to extract deep joint spectral-spatial features and an active TL strategy to transfer the pretrained SSAE network and the limited training samples from the source to the target domain. Song et al. [173] added an SA layer into CNN models for DA, which can fulfill domain alignment in the feature subspace by fine-tuning the modified CNN models. Liu et al. [174] combined transfer learning and virtual samples in a 3D-CNN model to solve the problem of insufficient samples. Zhong et al. [141] proposed a cross-scene deep transfer learning network with spectral feature adaptation (SFA) for HSI classification, which designed a multiscale spectral-spatial unified network (MSSN) with a two-branch architecture and a multiscale bank to extract discriminating features of HSIs. Chen et al. [142] proposed an augmented associative learning-based DA (AALDA) method for HSI classification, which employs the criterion of cycle consistency to generate features that are domain-invariant and discriminative. Mdrafi et al. [143] proposed an attention-based DA method using a residual network for HSI classification, which considers different levels of attention. Saha et al. [175] developed a graph neural network for multitarget DA in RS classification. Lasloum et al. [176] presented a multisource semisupervised DA method using a pretrained CNN for RS scene classification.
Othman et al. [144] designed a three-layer convex network termed 3CN for DA in multitemporal VHR RS images. It is composed of three main layers: 1) mapping source training samples to the target domain via ELM; 2) target image classification via ELM; and 3) spatial regularization via the random-walker algorithm. Kellenberger et al. [177] combined CNNs with AL for animal detection in UAV images and used OT to find corresponding regions between the source and target datasets in the space of CNN activations. Kalita et al. [178] investigated the DA problem for land cover classification by utilizing an ensemble decision approach of deep neural networks to address the extra and missing class problem. Chakraborty et al. [179] proposed a multilevel weighted transformation-based neuro-fuzzy DA method using a stacked autoencoder for land-cover classification. Lucas et al. [145] proposed a Bayesian-inspired CNN-based semisupervised DA method to produce land cover maps from satellite image time series data. Tong et al. [180] proposed a transferable deep model for land-cover classification of multisource high-resolution RS images, which uses a pseudo-label learning strategy to automatically select training samples from the target domain and extracts multiscale contextual information of RS images for classification.
VII. EXPERIMENTAL RESULTS AND ANALYSIS
Two images from the 2013 and 2018 IEEE GRSS data fusion contests, i.e., Houston2013 and Houston2018, are used in the experiments. The two images were acquired by the ITRES Compact Airborne Spectrographic Imager (CASI)-1500 sensor over the University of Houston campus and the neighboring urban area on June 23, 2012 and February 16, 2017, respectively [181], [182]. Houston2013 has a size of 349 × 1905 pixels, 144 spectral bands, and 15 categories. Houston2018 has a size of 4172 × 1202 pixels, 48 spectral bands, and 20 categories. For consistency, 48 spectral bands of Houston2013 are selected and seven common classes in these two images are considered for the DA task [108]. The Houston2013 and Houston2018 images are set as the source and target domains, respectively. The RGB composite images and ground-truth maps of the two images are shown in Fig. 7. The numbers of samples are shown in Table II. In the experiments, we compare some traditional shallow methods and recent deep DA methods, shown in the upper and lower parts of Table III, respectively. The 1-nearest neighbor (1-NN) classifier is chosen as the base classifier. The NA (no adaptation) baseline uses the 1-NN classifier built on the source domain to directly classify target samples. The GFK [13], SA [15], and KSA [25] are subspace-based adaptation methods. The TCA [38], [39], JDA [42], JGSA [43], LPJT [46], DADFL [48], [101], DeepCORAL [102], and TSTnet [108] are discrepancy-based methods. The DAAN [118], MCD [119], and DWL [120] are adversarial-based methods. For the subspace-based DA algorithms, the dimensionality of the subspace is set to 20. The optimal learning rate lr for each deep learning algorithm is chosen from {0.0001, 0.001, 0.01, 0.1}, the batch size is set to 128 for all networks, and the number of training iterations is 100. For DAN, DAAN, MRAN, and DSAN, there is a regularization parameter λ whose value is chosen from {0.001, 0.01, 0.1}. For all compared deep learning algorithms, to ensure the fairness of the comparison experiments, the backbone network is ResNet-18. The classification evaluation indicators are the overall accuracy (OA) and the kappa coefficient (κ).
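For reference, a minimal sketch of the evaluation protocol: OA and the kappa coefficient are computed from predicted and reference labels of the target domain, and the NA baseline corresponds to a 1-NN classifier fitted on source samples only (array names are hypothetical).

```python
# Minimal sketch of the evaluation used here: overall accuracy (OA) and the kappa
# coefficient (kappa) over the seven shared classes of the target domain.
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate(y_true, y_pred):
    return accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred)

# NA baseline: a 1-NN classifier trained on source spectra only, applied to the target.
# na = KNeighborsClassifier(n_neighbors=1).fit(X_source, y_source)
# oa, kappa = evaluate(y_target, na.predict(X_target))
```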
From Table III, we can see that some traditional DA methods provide poor adaptation performance, with OAs even worse than NA. This demonstrates that not all DA methods can reduce the distribution discrepancy, and negative transfer is likely when there is a significant spectral difference between domains. The JGSA, LPJT, and DADFL provide relatively better results than NA because these methods align the statistical and geometrical discrepancy between domains by learning two projections, one for the source and one for the target domain, and by taking into account the local or global discriminative information of the domains. The Houston2013 and Houston2018 datasets have large spectral differences, so a shared subspace generated by a single unified transformation may not exist. In addition, the discriminative information of the source and/or target domain can be used to improve the DA performance.
For deep DA methods, the adversarial methods, such as DAAN, MCD, and DWL, show better results than NA. Through adversarial learning, the abilities of the generator and discriminator are improved simultaneously. The feature generator is likely to produce target features that are highly similar to the source features, so that a task-specific classifier built on the source domain can be used to classify target samples. Among all methods, the recently proposed TSTnet produces the best results. In the feature extraction part of TSTnet, the GCN and CNN are used to extract convolutional and topological structure features. In the adaptation part, the optimal transport-based graph alignment and MMD-based distribution alignment work cooperatively. By exploiting the topological structure and semantic information of HSIs and considering both distribution alignment and topological relationship alignment, TSTnet generates excellent results.
The classification maps of different methods on the target domain are shown in Fig. 8. It can be seen that some methods, such as TCA, CORAL, and EasyTL, misclassify the class "Grass stressed" in green color as the class "Grass healthy" in purple color due to the high spectral similarity between these two classes. In addition, many methods misclassify the class "Nonresidential buildings" as the class "Road." Methods that iteratively update the transformation matrix and pseudo labels have relatively long running times.
VIII. CONCLUSION
Early DA methods focus on instance reweighting, subspace adaptation, or transformation-based adaptation.
For RS image classification, due to the large spectral drift between domains, it is usually necessary to simultaneously consider instance reweighting, subspace learning, and feature transformation. For traditional DA methods, landmark selection or feature weighting, target pseudo-label learning, and local discriminative preservation can be incorporated into a subspace-based transformation framework to improve the discriminative ability of DA models.
For deep DA methods, the feature extraction module can be further improved by considering the data characteristics of RS images. The adversarial learning strategy can be combined with discrepancy-based adaptation. In addition, target pseudo-label learning can be used in deep DA methods to iteratively update the network and improve the discriminative ability.
Currently, many existing RS DA methods focus on the general homogeneous unsupervised DA problem, where the source and target domains have feature spaces of similar or identical dimensionality and there are no labeled instances in the target domain. In real situations, the RS classification problem may be more complex: the feature space and class space of the source and target domains may differ. The classical DA problem can be extended to the following cases.
1) Heterogeneous DA [183]: The dimensionalities of the source and target domains are different and the features of the two domains are disjoint. For example, due to differences in hyperspectral sensors, different HSIs usually have different spectral bands. 2) Multisource DA [184]: There are multiple source domains.
The challenges lie in the unavailability of target labels and the complex composition of multiple source domains [185]. For long-term RS image series analysis, multiple historical labeled images may exist as sources. Compared with a single source domain, the joint use of multiple sources is likely to improve the DA performance. 3) Open set DA [186]: Only a few categories of interest are shared between the source and target data. That is, the class spaces of the source and target domains are different but intersect. Due to changes of ground materials and acquisition regions, the source and target domains usually have some different classes, especially for large-scale RS classification. 4) Partial DA [187]: It is assumed that the target label space is a subspace of the source label space. For example, if we only focus on some special classes in the target domain, the rich information of the source domain can be used to perform partial DA. 5) Few-shot DA [188], [189]: The combination of DA with few-shot learning to use very few labeled target samples in training. When the source and target domains have great distribution differences and the number of classes is large, unsupervised DA will fail. In this case, the limited labeled target samples can play a great role in building a connection between source and target classes. 6) Domain Generalization [190]: It aims to achieve out-of-distribution generalization by using only source data for model learning. There are many RS images obtained by different sensors or acquired under different conditions. It is desirable to learn a model with high generalization ability from the available RS images and then apply the model to classify other images in real-time RS analysis. The DA techniques can be used for large-scene and long-term RS image processing. The labeling process for a large scene is costly and time-consuming. The DA technique can help to transfer labels from a small region to the whole scene. For long-term image processing, historical images can be used to predict unseen images, and change analysis among RS images of different times can be performed. | 11,358.2 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
CoRT: Complementary Rankings from Transformers
Many recent approaches towards neural information retrieval mitigate their computational costs by using a multi-stage ranking pipeline. In the first stage, a number of potentially relevant candidates are retrieved using an efficient retrieval model such as BM25. Although BM25 has proven decent performance as a first-stage ranker, it tends to miss relevant passages. In this context we propose CoRT, a simple neural first-stage ranking model that leverages contextual representations from pretrained language models such as BERT to complement term-based ranking functions while causing no significant delay at query time. Using the MS MARCO dataset, we show that CoRT significantly increases the candidate recall by complementing BM25 with missing candidates. Consequently, we find subsequent re-rankers achieve superior results with fewer candidates. We further demonstrate that passage retrieval using CoRT can be realized with surprisingly low latencies.
INTRODUCTION
The successful development of neural ranking models over the past few years has rapidly advanced state-of-the-art performance in information retrieval [5,12]. One key aspect of this success is the exploitation of query-document interactions based on representations from self-supervised language models (LMs) [9,26,28]. While early models use static word embeddings for this purpose [7,11,35], more recent models incorporate contextualized embeddings from transformer-based models such as BERT [19,21,27]. However, models using such query-document interactions have an important shortcoming: to calculate a single relevance score for one passage in response to a given query, a forward pass through an often massive neural network is necessary [21]. Hence, it is not feasible to rank an entire corpus with such an interaction-focused approach. Instead, it is common practice to re-rank relatively small subsets of potentially relevant candidates. These candidates are retrieved by a scalable first-stage ranker, commonly a term-based bag-of-words model such as BM25. However, such a first-stage ranker may act as a "gate-keeper" [37], which effectively hinders the re-ranking model from discovering more relevant documents. We argue that relevant passages for some particular queries, especially if they are naturally formulated questions, can only be retrieved effectively if a) a soft matching between strongly related terms is involved and b) the model recognizes that a term's meaning may change in interaction with its context. Hence, we suggest to employ a neural ranking approach that acts as a complementizer to existing term-based models and uses candidates from both to compile more complete candidate sets for subsequent re-rankers. Boytsov et al. [2] utilized nearest neighbor search based on IDF-weighted averages of static word embeddings for queries and documents to retrieve re-ranking candidates in addition to BM25. In contrast, we propose COmplementary Rankings from Transformers (CoRT), a neural first-stage ranking model and framework that aims to leverage contextual representations from transformer-based language models [9,32] with the goal to complement term-based first-stage rankings. CoRT optimizes an underlying text encoder towards representations that reflect the concept of relevance through vector similarity. The model is specifically trained to act complementary to BM25 by sampling negative examples from BM25 rankings. To our knowledge, CoRT is the first representation-focused neural ranking approach that makes use of the self-attention mechanism [32] of a pretrained language model [9] to produce context-sensitive vector representations of fixed size per query and document.
We study the characteristics of CoRT with four types of experiments based on the MS MARCO dataset. First, we measure various ranking metrics and compare the results with various first-stage ranking baselines. In the course of this, we demonstrate the complementary portion of relevant candidates that were added due to CoRT. Second, we combine the candidates from CoRT and BM25 with a state-of-the-art re-ranker based on BERT [9,21] and investigate how many candidates are needed to saturate the ranking quality. Third, we train CoRT with various representation sizes and measure the impact on first-stage ranking quality. Finally, we measure the retrieval latencies of CoRT with two retrieval modalities: a distributed exhaustive search on four GPUs and an approximate search based on a graph-based nearest-neighbor index with pruning heuristics [15]. We consider our study on the relationship between the re-ranking quality, the number of candidates used, and the quality of the candidates as one of our main contributions. We also contribute a framework for training and deploying a representation-focused and transformer-based first-stage ranker that complements term-based candidate retrievers, for which we will provide an open-source implementation 1.
RELATED WORK
Neural ranking approaches employ neural networks to rank documents in response to a query. In this section we describe key concepts of neural ranking and reference exemplary works for each of them. Subsequently, we present neural first-stage ranking approaches that allow a direct comparison with our results.
Key Concepts of Neural Ranking
According to Guo et al. [11], neural ranking approaches can be categorized into two types of models depending on the architecture. Representation-focused approaches [14,31,37] aim to build representations for queries and documents which are used to predict relevance scores with a simple distance or similarity measure. In this context, exploiting local interactions between neighboring terms is often seen as an important technique for good representations [31,37]. Related models often follow the idea of Siamese Networks [3], where identical neural networks are joined at their outputs using a distance measure [8,30]. Pair-wise learning objectives are an effective training modality in this setting [12,37]. In contrast to point-wise objectives [8,21], where relevance prediction is modeled as binary classification, pair-wise objectives optimize the relative preference between positive and negative documents [8,12,37].
Models of the interaction-focused type exploit interactions between query and document terms [7,11,21,35]. In contrast to representation-focused approaches, this requires one forward pass through the whole model for each potentially relevant document. However, this kind of interaction usually leads to superior ranking quality [11,12,27]. Many related models employ a dedicated layer to enforce interactions via soft matching between query and document terms [7,11,35]. Another approach is to use the attention mechanism [32], or more specifically, a pretrained transformer encoder such as BERT [9] to exploit both local and query-document interactions [21,27].
Recently, some hybrid approaches have been proposed that combine typical representation-focused techniques with interaction-focused approaches to reduce computational cost. Gao et al. [10] propose a model architecture comprising three modules for document understanding, query understanding and relevance judging as part of their framework EARL. The understanding modules produce token-level representations, which can be cached as usual in representation-focused approaches. The relevance judging module, on the other hand, uses those representations to apply query-document interactions more quickly when document representations are cached. Each module is a stack of transformer layers [32], initialized with weights from BERT. Khattab and Zaharia [16] propose a related approach, namely ColBERT. The model architecture incorporates a relatively inexpensive max-similarity mechanism instead of a shallow transformer network to perform query-document interactions. The authors propose to store token representations in an Approximate Nearest Neighbor (ANN) index [15] to quickly retrieve only those documents that have token representations in proximity to those of the query. Thus, ColBERT can be described as a re-ranker that brings its own first-stage retrieval mechanism, allowing it to perform a full ranking in less than a second (458 ms on MS MARCO [16]).
Neural First-stage Ranking
A first-stage ranker can be characterized as an efficient full ranker that is used to retrieve documents for a subsequent re-ranker. In this context, various neural approaches have been proposed with the goal to overcome the limitations of traditional ranking functions. Many of them make use of existing infrastructure for term-based ranking functions based on inverted indexing [20]. Zamani et al. [37] propose SNRM, a representation-focused approach with sparse representations that can be used for inverted indexing as if each feature dimension corresponds to a term in a bag-of-words representation. SNRM uses pretrained GloVe word embeddings [26] to model soft-matched n-grams, which are then encoded and aggregated into a sparse representation. Nogueira et al. [23] predict queries for given documents to expand those documents by corresponding query terms. In their first work, known as doc2query, they used a sequence-to-sequence (seq2seq) transformer model [32]. Nogueira and Lin [22] reported large effectiveness gains for their follow-up model docTTTTTquery by replacing the employed seq2seq model with T5 [29]. Another approach aims to predict near-optimal document term weights as a function of a term's context. DeepCT, proposed by Dai and Callan [6], utilizes BERT to predict those context-aware weights based on associated queries in the training data. Inverted indexing is only applicable for either sparse representations or approaches that extend or re-weight existing bag-of-words representations. Representation-focused models using dense representations can instead employ an ANN index, which heuristically prunes documents that are unlikely to be in the top proximity of the query representation, to realize low response latencies [2,13].
PROPOSED APPROACH
With CoRT we describe a first-stage ranking model that acts as a complementary ranker to existing term-based retrieval models such as BM25. To achieve this, we make use of local interactions [31] and sample negative training examples from BM25 rankings.
Architecture
The model architecture of CoRT, illustrated in Figure 1, follows the idea of a Siamese Neural Network [3]. Thus, passages and queries are encoded using the identical model with shared weights, except for one detail: the passage encoder and the query encoder use different segment embeddings [9]. CoRT computes relevance scores as the angular similarity between query and passage representations. The parameters of the encoder are trained using a pair-wise ranking objective.
Encoding
CoRT can incorporate the encoder of any BERT-like language model as its underlying text encoder. Here, we choose a pretrained ALBERT [17] encoder for its smaller model size, its tougher sentence coherence pretraining, and the better first-stage ranking results it achieved compared to BERT throughout our early-stage experiments. The tokenizer of ALBERT is a WordPiece tokenizer [34] including the special tokens [CLS] and [SEP] known from BERT. From the text encoder we seek a single representation vector for the whole passage or query, which we call the context representation. From ALBERT we take the last-layer representation of the [CLS] token for this purpose. The context representation obtained from the underlying encoder for an arbitrary string s is denoted by ψ(s) ∈ R^h, where h is the output representation size.
(AL)BERT's language modeling approach involves training of sentence coherence, for which segment embeddings are used to signal different input segments. Although we only feed single segments to the encoder, i.e. a query or a passage, we use segment embeddings allowing the model to represent queries and passages differently. We refer to the context encoder functions that use the passage and query segment embeddings (illustrated in Figure 1) as ψ_p and ψ_q, respectively. The context representation is further projected to the desired representation size m using a linear layer followed by a tanh activation function. Thus, the complete passage encoder function is η_p(p) := tanh(ψ_p(p) W + b), where W ∈ R^{h×m} and b ∈ R^m are the parameters of the linear layer. The query encoder η_q is defined analogously with shared parameters.
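A minimal sketch of this encoder head, assuming a HuggingFace ALBERT backbone: the last-layer [CLS] representation is projected by a linear layer followed by tanh, and queries and passages are distinguished only by their token type (segment) ids. Class and parameter names are illustrative, not the authors' implementation.

```python
# Minimal sketch of the CoRT encoder head: take the last-layer [CLS] representation
# psi(s) from ALBERT and project it to the representation size m with a linear layer
# followed by tanh. Queries and passages share all weights but use different
# token_type_ids (segment embeddings).
import torch
from transformers import AlbertModel

class CoRTEncoder(torch.nn.Module):
    def __init__(self, m=768, model_name="albert-base-v2"):
        super().__init__()
        self.albert = AlbertModel.from_pretrained(model_name)
        self.proj = torch.nn.Linear(self.albert.config.hidden_size, m)

    def forward(self, input_ids, attention_mask, is_query: bool):
        token_type_ids = torch.full_like(input_ids, 1 if is_query else 0)
        out = self.albert(input_ids=input_ids, attention_mask=attention_mask,
                          token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]     # psi(s): [CLS] context representation
        return torch.tanh(self.proj(cls))     # eta(s) = tanh(psi(s) W + b)
```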
Training
Training CoRT corresponds to updating the parameters of the encoder towards a representation that reflects relevance between queries and documents through vector similarity. Each training sample is a triple comprising a query q, a positive passage p+ and a negative passage p-. While positive passages are taken from relevance assessments, negative passages are sampled from term-based rankings (i.e. BM25) to support the complementary property of CoRT. The relevance score for a query-passage pair (q, p) is calculated using the angular cosine similarity function. 2 As illustrated in Figure 1, the training objective is to score the positive example p+ at least the margin α higher than the negative one p-. We use the regular triplet margin loss max(0, α - score(q, p+) + score(q, p-)) as part of our batch-wise loss function. Inspired by Oh Song et al. [24], we aim to take full advantage of the whole training batch: for each query, each passage in the batch is used as a negative example except for the corresponding positive one. In the batch-wise loss, q_i, p_i+ and p_i- denote the triple of the i-th sample in a batch of n samples. We found this technique makes the training process more robust towards exploding gradients, so the model can be trained without gradient clipping [38]. It also positively affects the first-stage ranking results 3.
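Because the batch-wise formulation is only described verbally here, the following sketch makes the stated assumptions explicit: the score is the angular cosine similarity, α is the triplet margin, and for each query every other passage in the batch serves as an additional negative; the exact averaging and weighting are guesses rather than the authors' definition.

```python
# Sketch of a batch-wise triplet margin loss over angular cosine similarities.
# q, pos, neg are (n, m) tensors of query, positive-passage and negative-passage
# representations of one batch; alpha is the triplet margin.
import math
import torch
import torch.nn.functional as F

def angular_sim(a, b):
    # pairwise angular cosine similarity in [0, 1]: 1 - arccos(cosine)/pi
    cos = F.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1)
    return 1.0 - torch.arccos(cos.clamp(-1 + 1e-7, 1 - 1e-7)) / math.pi

def batch_triplet_loss(q, pos, neg, alpha=0.1):
    sim_pos = angular_sim(q, pos)              # (n, n): query i vs. positive passage j
    sim_neg = angular_sim(q, neg)              # (n, n): query i vs. sampled negative j
    own_pos = sim_pos.diag().unsqueeze(1)      # score(q_i, p_i+)
    off_diag = ~torch.eye(len(q), dtype=torch.bool, device=q.device)
    # hinge terms: every other positive and every sampled negative acts as a negative
    loss_other_pos = F.relu(alpha - own_pos + sim_pos)[off_diag]
    loss_sampled_neg = F.relu(alpha - own_pos + sim_neg).flatten()
    return torch.cat([loss_other_pos, loss_sampled_neg]).mean()
```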
Indexing and Retrieval
For retrieval with CoRT, each passage must be encoded by the passage encoder η_p. Subsequent normalization of each vector allows us to use the dot product as a proxy score function for the angular similarity, which is sufficient to accurately compile the ranking. Given a query q, we calculate its representation η_q(q) and the dot product with each normalized passage vector. From those, the highest scores are selected and sorted to form the CoRT ranking. This procedure can be implemented in a heavily parallelized fashion using GPU matrix operations. Alternatively, the passage representations can be indexed in an ANN index to avoid exhaustive similarity search. Finally, we combine the resulting ranking of CoRT with the respective BM25 ranking by zipping the positions, beginning with CoRT, until a new ranking of the same length has been arranged. During this process, each passage that was already added by the other ranking is omitted.
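A small sketch of the zipping step described above, with hypothetical function and variable names: positions are taken alternately from the CoRT and BM25 rankings, duplicates are skipped, and the merged list is cut to the original length.

```python
# Merge a CoRT ranking with a BM25 ranking by "zipping": alternate positions
# (starting with CoRT), skip duplicates, stop at the original ranking length.
def merge_rankings(cort_ids, bm25_ids):
    merged, seen = [], set()
    limit = len(cort_ids)
    for c, b in zip(cort_ids, bm25_ids):
        for pid in (c, b):
            if pid not in seen:
                merged.append(pid)
                seen.add(pid)
            if len(merged) == limit:
                return merged
    return merged

# Example: merge_rankings([5, 7, 9], [7, 2, 1]) -> [5, 7, 2]
```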
EXPERIMENTS
We conducted four experiments studying the ranking quality and recall of CoRT, the connection between the number of candidates and re-ranking effectiveness, the impact of the representation size m, and CoRT's retrieval latencies. Finally, we outline a competitive end-to-end ranking setup with CoRT and a BERT-based re-ranker.
Datasets
MS MARCO Passage Retrieval. The Microsoft Machine Reading Comprehension [1] dataset for passage ranking was introduced in 2018 and provides a benchmark for passage retrieval with real-world queries and passages gathered from Microsoft's Bing search. It comprises 8.8M passages sampled from web pages and about 1M queries that are formulated as questions. The objective is to rank those passages high that were labeled as relevant to answer the respective question. The annotations, however, are sparse. There are 530k positive relevance labels distributed over 808k queries in the training set, whereby most queries are associated with one passage. Only 25k queries are associated with more than one passage. The validation and evaluation sets, dev and eval, comprise 101k queries each. An official subset of dev, called dev.small, comprises 6980 queries and 7437 relevance labels and is often used for publicly reported evaluations. We follow this convention and use dev.small for testing. The associated labels for eval, however, are not publicly available. The dataset does not contain any true negatives; hence, any unlabeled passage is assumed to be negative. This means there might be situations where assumed negative passages are actually more relevant than labeled ones. The creators suggest using the mean reciprocal rank cut at the tenth position (MRR@10) as the primary evaluation measure. Additionally, we measure NDCG@20 [20] as a less punishing ranking quality measure and the recall at various positions to indicate how many relevant passages a re-ranker would miss when the respective ranking is used for candidate selection. TREC 2019 Deep Learning Track. This evaluation set provides manual relevance assessments per query for a set of 43 MS MARCO queries. Each assessment corresponds to a rating on a scale from 0 (not relevant) to 3 (perfectly relevant). The passages to be rated by human assessors were selected with a pooling strategy: the pool comprises the top-10 rankings of submitted runs plus at least 100 passages selected by a special tool that models relevance based on already found relevant passages. We adopt the evaluation metrics MRR (uncut), NDCG@10 and MAP from the official TREC overview [5]. In contrast to the original MS MARCO benchmark, this evaluation set provides dense annotations, but only for few queries.
First-Stage Ranking
We trained CoRT as described in Section 3.3 using a representation size of m = 768. In this section we discuss the first-stage ranking results of our model using the datasets and their associated metrics described in Section 4.1.
MS MARCO Passage Retrieval
The results of CoRT and its baselines on the MS MARCO passage retrieval task (dev.small) are reported in Table 1. Next to the obvious choice of BM25 as a baseline, we include DeepCT [6], doc2query [23] and its successor docTTTTTquery [22]. All three are recent first-stage rankers with average retrieval latencies below 100ms per query on the MS MARCO passage corpus. The metrics MRR@10 and NDCG@20 reveal a quite decent ranking quality for the standalone CoRT ranker. Both only slightly increase due to merging with BM25 (CoRT BM25). The recall, however, increases by a large margin. Since CoRT's primary use is candidate retrieval rather than standalone ranking, we pay particular attention to the recall at various cuts. From the perspective of BM25, the absolute increase of recall due to merging with CoRT ranges between 15.1 (RECALL@50) and 9.2 (RECALL@1000).
Candidate Re-ranking
We re-rank candidates from both BM25 and CoRT BM25 to study the impact of the candidates on a subsequent interaction-focused re-ranking. By varying the numbers of candidates, we investigate when adding more candidates becomes ineffective. The corresponding metrics, reported in Table 3, have been calculated based on MS MARCO (dev.small).
Re-ranking Model. Similar to [21], we use a simple binary classifier based on BERT. The model takes a query-passage pair and yields a relevance confidence. The pair (q, p) is concatenated into one token sequence with two segments (as is conventional in BERT). This sequence is processed by the BERT encoder, and the [CLS] embedding of the last layer, which we denote by φ(q, p), is projected to a single classification logit. We then apply the sigmoid activation function σ to obtain the relevance confidence for query q and passage p. This procedure can be formalized as r(q, p) = σ(φ(q, p) W′ + b′), where W′ ∈ R^{h×1} and b′ ∈ R are the parameters of a linear layer with a single output activation. To form a ranking at inference time, we sort the candidates by the model's confidence. Following [21], this model is trained using a point-wise objective: we sample query-passage pairs, each associated with a binary relevance label y ∈ {0, 1}, and minimize the binary cross-entropy loss -[y log r(q, p) + (1 - y) log(1 - r(q, p))].
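A minimal sketch of this point-wise re-ranker, assuming a HuggingFace BERT backbone; the training loop, tokenization details, and hyperparameters are omitted, and the class is illustrative rather than the authors' implementation.

```python
# Point-wise BERT re-ranker sketch: the query-passage pair is encoded as one
# two-segment sequence, the last-layer [CLS] embedding phi(q, p) is projected to a
# single logit, and sigmoid yields the relevance confidence r(q, p).
import torch
from transformers import BertModel

class BertReranker(torch.nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.out = torch.nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        cls = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids).last_hidden_state[:, 0]
        return self.out(cls).squeeze(-1)          # logit; sigmoid gives the confidence

# Training step with binary labels y in {0, 1}:
# tokenizer = transformers.BertTokenizerFast.from_pretrained("bert-base-uncased")
# batch = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")
# loss = torch.nn.functional.binary_cross_entropy_with_logits(model(**batch), y.float())
```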
Re-ranking Results. The results, reported in Table 3, show superior ranking quality in terms of MRR@10 for candidates from CoRT BM25. This is especially true if low numbers of candidates are used. We also notice earlier saturation 4 of MRR@10 for CoRT BM25, which is illustrated in Figure 2. Only 64 candidates from CoRT BM25 are sufficient to achieve top results with this re-ranker. In contrast, 256 candidates from BM25 are needed to reach the point of saturation, which translates into a quadrupled re-ranking time. We also report the recall for the top-20 re-ranked positions (RECALL@20) and the recall for all candidates that were available to the re-ranker (RECALL@ALL).
Impact of Representation Size
As described in Section 3.2, CoRT projects the context representation of the underlying encoder to an arbitrary representation size m. This size determines the size of the final index and also influences the retrieval latency. The total size of the encoded corpus is easy to calculate. For example, with m = 128 and the MS MARCO corpus, the index size (without overhead) amounts to 8.8M passages × 128 dimensions/passage × 4 bytes/dimension ≈ 4.5 × 10^9 bytes ≈ 4.2 GB. Thus, m is proportional to the total size, and reducing m to 64 would halve the memory footprint. If m is small, however, it is more difficult to attain the margin objective (Eq. 2). Thus, m can be used for a trade-off between ranking quality and computational effort / resource cost. We investigate the relation between the representation size and the ranking quality by conducting identical training runs with different values of m. The results in Table 4 show that MRR@10 already saturates at m = 128. However, if m is reduced to 64 or below, a loss in ranking quality can be noticed. We also report the corresponding metrics for CoRT BM25 to illustrate whether the loss in ranking quality due to the size reduction is intercepted by BM25. Indeed, the loss of recall when reducing m is much lower if we use the compound ranking, indicating that many candidates that we lose due to representation size reduction are already covered by BM25. We conclude that m = 64 would still perform similarly to m = 768 in a re-ranking setting.
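The back-of-the-envelope calculation above can be reproduced with a few lines (assuming float32 representations and no index overhead):

```python
# Index size for float32 CoRT representations: passages x representation size m x 4 bytes.
def index_size_bytes(num_passages=8_800_000, m=128, bytes_per_dim=4):
    return num_passages * m * bytes_per_dim

# index_size_bytes(m=128) -> ~4.5e9 bytes (~4.2 GiB); halving m to 64 halves the footprint.
```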
Latency Measurement
We propose two methods for the deployment of CoRT. The first exhaustively calculates similarity scores using multiple GPUs, while the second incorporates an Approximate Nearest Neighbor (ANN) index. We measure retrieval latencies of those methods and compare them with BM25 as a representative of term-based retrieval models based on inverted indexing. Approaches that are based on the bag-of-words model, such as DeepCT or doc2query, have latencies slightly greater than or equal to those of BM25. We conduct the latency measurement based on top-1000 retrieval for the 6980 queries of the dev.small split while using the MS MARCO passage corpus containing around 8.8M passages. Since some approaches profit from batch computing, we also measure the latency for batches of 32 queries. As representation size, we have chosen m = 128 because it is the smallest representation size investigated in Section 4.4 that does not hurt the ranking quality of CoRT BM25. Lucene BM25 Baseline. As retrieval latency baseline, we use a Lucene index generated by the Anserini toolkit [36]. The retrieval was performed on an Intel Core i9-9900KS with 16 logical cores (8 physical) and enough memory to fit the whole corpus. Single queries were processed using the single-threaded search function, while batch-wise search has been performed with 16 threads. First-Stage Ranking using multiple GPUs. Multiple GPUs can be used to deploy CoRT for fast large-scale ranking. We propose to uniformly distribute the vector representations of the corpus over the available GPUs. Each GPU ranks its own partition of the corpus as described in Section 3.4. Afterwards, the results for each partition are aggregated by selecting the top-k candidates with the highest scores. First-Stage Ranking using ANN. Since CoRT operates on vector similarities, it can make use of ANN search. We measure the retrieval latency and the loss of ranking quality that occurs due to the imperfection of pruning heuristics. For this purpose, we use a graph-based index including a special optimization method called ONNG [15]. An implementation of this method is publicly available as part of the NGT Library 5. To control the trade-off between retrieval latency and accuracy, we can tweak the search range coefficient ε. Latency Results. The latency measurements are reported in Table 5. For CoRT, the total retrieval latency per query consists of two factors: query encoding and the actual retrieval. The query encoding has to be performed by the query encoder η_q, which we highly recommend running on a GPU. The latency of the actual retrieval depends on the retrieval methods described above. The exhaustive search using 4 GPUs takes 17 ms for a single query. Together with the encoding, the total retrieval time per query sums up to 17 + 8 = 25 ms, which is below the BM25 baseline. However, this is only possible due to the ability of GPUs to perform massively parallel computing. Furthermore, we observe a substantial increase in efficiency when processing queries batch-wise on the GPUs. The retrieval of 32 queries at once takes only about twice as long as a single query. The tested BM25 index, on the other hand, seems to suffer from multiprocessing overhead or other computational limitations when operating on a single instance. The latencies for the ANN index have been measured with three different values for the search range coefficient ε. While ε significantly affects the retrieval latency, only slight differences in the quality of the first-stage ranking can be observed.
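A minimal sketch of the exhaustive multi-GPU retrieval path under the assumptions stated above: each partition holds normalized passage vectors, dot products serve as scores, and local top-k results are merged into a global top-k. Device placement and the ANN (ONNG) alternative are omitted.

```python
# Partitioned exhaustive top-k retrieval over normalized passage representations.
# `partitions` is a list of (offset, matrix) pairs, where `matrix` is an (n_i, m)
# tensor of already-normalized passage vectors held by one GPU and `offset` maps
# local indices back to global passage ids.
import torch

def search_partitions(query_repr, partitions, k=1000):
    q = torch.nn.functional.normalize(query_repr, dim=-1)      # (b, m)
    all_scores, all_ids = [], []
    for offset, part in partitions:
        scores = q @ part.T                                     # dot product == cosine here
        top = torch.topk(scores, min(k, part.shape[0]), dim=-1)
        all_scores.append(top.values)
        all_ids.append(top.indices + offset)
    scores = torch.cat(all_scores, dim=-1)
    ids = torch.cat(all_ids, dim=-1)
    final = torch.topk(scores, k, dim=-1)                       # global top-k
    return torch.gather(ids, -1, final.indices), final.values
```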
End-to-end Retrieval
Intrigued by the exceptional ratio of retrieval latency and ranking quality of ColBERT's full-ranking approach [16], we used our above findings to create a competitive end-to-end ranking setup. We suggest re-ranking the top-64 candidates from CoRT BM25 with m = 128, retrieved by an ANN index (ε = 0.4). The end-to-end latency comprises 8 ms for query encoding, 71 ms for CoRT retrieval based on ONNG, 38 ms for BM25 retrieval 6, and 192 ms for re-ranking, which sums up to 309 ms per query. As reported in Table 5, we outperform ColBERT's end-to-end ranking performance in terms of MRR@10 and retrieval latency 7. CoRT's representations for the MS MARCO corpus only weigh 4.3GB when m is set to 128, or 7.0GB when indexed in an ONNG index. The size of the query encoder amounts to about 50MB, which is due to ALBERT's parameter sharing. To compile the full CoRT BM25 candidates, the corresponding BM25 index is needed, which amounts to 2.2 GB on disk. Although more memory is needed to deploy and operate both indexes, this is by far less than the 154GB footprint reported by Khattab and Zaharia [16] for ColBERT's end-to-end approach.
CONCLUSION
In this paper, we propose CoRT, a framework and neural first-stage ranking model that combines term-based retrieval models with the benefits of local interactions in a neural ranking model to compile improved re-ranking candidates. As a result, we observe high recall measures on our candidates, improving re-ranking results in multi-stage pipelines. At the same time, we are able to decrease the number of candidates without hurting the end-to-end ranking performance. Our further experiments reveal the sweet spots for CoRT's representation size and the number of re-ranking candidates. We also propose two deployment strategies for CoRT and measured their performance in terms of efficiency and effectiveness. Finally, we demonstrate that CoRT can be used to create a highly competitive multi-stage ranking pipeline.
We used PyTorch 1 and HuggingFace's Transformers [33] as deep learning libraries. PyTorch has been built from source code to ensure computation as fast as possible. All BM25 rankings were generated by the Anserini toolkit [36]. Anserini ensures reproducibility by providing optimized parameter sets and ranking scripts based on Apache Lucene for several datasets including MS MARCO.
CoRT Training Details. We trained CoRT based on the pretrained ALBERT model "albert-base-v2", which is the lightest available version in HuggingFace's repository (https://huggingface.co/transformers/pretrained_models.html) [33]. Each model has been trained for 10 epochs, where each epoch includes all queries that are associated with at least one relevant document, plus one randomly sampled positive and one negative passage. Most queries are only associated with one relevant passage, though. Negative examples are sampled from the corresponding top-100 BM25 ranking to support the complementary property of our model. We filter out positively labeled passages as well as the 8 highest ranks because of their relatively high probability of actually being relevant. Due to the high computational effort, this filter depth was not tuned systematically. However, we achieved 0.7 p.p. higher MRR@10 and 1.2 p.p. higher RECALL@100 on the MS MARCO dataset when training with a filter depth of 8 compared to 0. As usual for BERT-based models, we use the ADAM optimizer including the weight decay fix [18] with the default parameters β1 = 0.9, β2 = 0.999, ε = 10^-6, a weight decay rate of 0.1 and a linearly decreasing learning rate schedule starting with 2 × 10^-6 after 2,000 warm-up steps. We train mini-batches of 6 samples (triples) while accumulating the gradients of 100 mini-batches before performing one update step. The triplet margin (Eq. 2 in Section 3.3) has been set to α = 0.1, which has been tuned in the range of [0.01, 0.2]. Re-ranker Training Details. Our BERT re-ranking experiment utilized the pretrained "bert-base-uncased" model, hosted by HuggingFace [33]. We used the same optimizer settings as for CoRT except for the learning rate, which we empirically set to 5 × 10^-5. The batch size has been set to 8, and we accumulated the gradients of 16 batches before performing one update step. Originally, we trained dedicated model instances for the varying numbers of candidates in Section 4.3. However, we found that a model that is trained with negatives from the top-1000 BM25 ranking generally performs better than a model that only uses the top-100 candidates. Hence, all re-ranking models have been trained on random negatives from the top-1000 candidates of the corresponding first-stage ranking. Table 6 shows top-1 retrieval examples of CoRT and BM25. The first query exemplifies the advantage of local interactions in the query encoder. We hypothesize that the query could successfully be "interpreted" as a question about density although the term density was not included. The second query is an example where BM25 works well due to favorable keywords in the passage. Although CoRT's top result is not labeled, it clearly is somewhat relevant to the question. Since the passage misses the keyword "insane", it is difficult to retrieve for a term-based model. We hypothesize that, due to the terms "hallucinations" and "paranoia", CoRT can correctly match the context in this example. | 6,756.4 | 2020-10-20T00:00:00.000 | [
"Computer Science"
] |
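The training recipe quoted in the entry above (negatives sampled from a top-100 BM25 ranking with labeled positives and the top 8 ranks filtered out, a triplet margin of 0.1, AdamW, and 100-step gradient accumulation) can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the authors' implementation: the `encoder` stub, the `batches` iterable, and the use of cosine similarity as the scoring function are assumptions; only the hyperparameter values named above are taken from the entry.

```python
import random
import torch
import torch.nn.functional as F

MARGIN = 0.1          # triplet margin quoted in the entry
FILTER_TOP_K = 8      # highest BM25 ranks excluded from negative sampling
ACCUM_STEPS = 100     # mini-batches accumulated per optimizer update

def sample_negative(bm25_ranking, positive_ids):
    """Pick a random negative from a top-100 BM25 ranking, skipping
    labeled positives and the FILTER_TOP_K highest-ranked passages."""
    pool = [pid for pid in bm25_ranking[FILTER_TOP_K:] if pid not in positive_ids]
    return random.choice(pool)

def triplet_margin_loss(q_vec, pos_vec, neg_vec):
    """Hinge loss on similarity scores: the positive passage should score
    at least MARGIN higher than the sampled negative."""
    s_pos = F.cosine_similarity(q_vec, pos_vec, dim=-1)
    s_neg = F.cosine_similarity(q_vec, neg_vec, dim=-1)
    return torch.clamp(MARGIN - (s_pos - s_neg), min=0.0).mean()

def train(encoder, batches, lr=2e-6):
    """Training loop with gradient accumulation; `encoder` is a placeholder
    module mapping token ids to fixed-size vectors, not the real CoRT encoder."""
    optimizer = torch.optim.AdamW(encoder.parameters(), lr=lr, weight_decay=0.1)
    optimizer.zero_grad()
    for step, (q, pos, neg) in enumerate(batches, start=1):
        loss = triplet_margin_loss(encoder(q), encoder(pos), encoder(neg))
        (loss / ACCUM_STEPS).backward()      # scale loss for accumulation
        if step % ACCUM_STEPS == 0:
            optimizer.step()
            optimizer.zero_grad()
```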
A qualitative and life cycle-based study of the energy performance gap in building construction: Perspectives of Finnish project professionals and property maintenance experts
ABSTRACT The significant share of buildings in total energy consumption across the world has been well documented and emphasized by several scholars. In this regard, there have been major developments and improvements in the expertise of developing and designing buildings to be adequately energy efficient. However, recent studies show that there is still a considerable deviation between the intended and actual energy consumption of completed buildings. Hence, this exploratory study aims to discover the origins of success and failure in achieving energy efficiency in building construction projects with a life cycle perspective, based on the viewpoints of key participants in the project and in the operation of the constructed building. To do so, 21 semi-structured interviews were conducted with Finnish project professionals representing clients, design/planning experts, contractors and building operation/maintenance experts. Both thematic analysis and content analysis methods were employed for analysing the obtained research data. The findings reveal a set of challenges/barriers and solutions/enablers which account for the failure and success of achieving energy-efficient buildings. The obtained results contribute to the existing body of knowledge and practices on achieving energy efficiency in building construction projects.
Introduction
The significant role of buildings in global energy consumption has been well recognized by the research community. According to previous studies, buildings account for almost 40% of the world's total energy consumption (e.g. Laconte & Gossop, 2016). This fact, in turn, has triggered the research community and regulatory bodies in the built environment sector to ponder the causes and possible solutions. In the European Union, the Energy Performance of Buildings Directive (EPBD) was implemented to guarantee that new buildings would be much more energy efficient than those built in the past (Directive 2010/31/EU).
In this regard, there have also been major developments and improvements in designing energy-efficient buildings. Examples of these developments are the theoretical and practical advancements in building information modelling, energy simulation and calculation methods, and the use of hybrid energy systems. However, other studies show that there is still a considerable deviation between the energy consumption target specified in the design phase and the actual energy performance of the constructed building. Laconte and Gossop (2016) stated that the actual energy consumption of buildings is usually two or even three times higher than the design intentions.
The mentioned deviation, which is called the performance gap, can be seen as one of the major obstacles for developed and developing countries that are trying to transform their built environment sector towards carbon neutrality. Such performance gaps have been observed in existing buildings, building retrofit projects and new construction (e.g. Mahdavi et al., 2021). The gap exists both between simulated and actual performance and between the target set by regulations and the actual performance (Zou et al., 2018). Even buildings certified to be energy-efficient often consume more energy than their conventional counterparts. While energy certificates can serve as a good indication of energy efficiency at the design level, they do not show whether the building is performing as well as it is supposed to. To take into account the actual use of the building and its practical performance, energy efficiency should be examined through the lens of the energy consumption targets that have been set for each specific building and type of energy. Thus, the performance gap between the targeted and actual energy consumption provides a useful look at energy efficiency on a practical level.
The performance gap increases the operating cost of the building over its life cycle, which negatively affects both the building owner and users. Higher energy consumption will increase the carbon footprint of the building, while incorrectly dimensioned or operated systems can also negatively impact the indoor air quality and temperature conditions within the building. Considering the negative impact of the performance gap, it is imperative to study this phenomenon in depth. In this regard, a few studies have been conducted to address the barriers to energy efficiency in building construction projects (Häkkinen & Belloni, 2011; Liang et al., 2019; Qian et al., 2015). However, it seems that adopting a life cycle perspective and addressing all the involved disciplines in building construction projects has been almost overlooked. Consequently, there is currently very limited research-based knowledge on the barriers to and solutions for energy efficiency through the lens of the different disciplines involved in the phases of the project life cycle.
In order to fill the mentioned knowledge gap, this study aims to explore and overcome the challenges and barriers of achieving energy efficiency in building construction projects. The resulting article is structured in six sections. The next section includes the theoretical background on the subject under study, which is followed by an explanation of the data collection and analysis process in the Methodology section. Then, the obtained results are presented and discussed. Finally, the conclusions are stated.
Barriers/challenges
Despite the environmental and even economic benefits of energy-efficient buildings, many new buildings end up consuming more energy than necessary. There are many reasons for this issue, ranging from the technical and regulatory level to knowledge and psychology. In this regard, there have been a few studies, which are explained and discussed in the following. Häkkinen and Belloni (2011) performed a literature review and surveyed and interviewed Finnish experts on sustainable buildings. They found that the major barriers to sustainable building relate to steering mechanisms, economics, lack of client understanding, process and underpinning knowledge, including access to methods and tools. In addition, Frei et al. (2017) reviewed many energy performance gap studies and noted that the causes of the energy performance gap can be linked to three life cycle phases of the building: (i) design and planning (poor early design decisions, uncertainty in energy modelling, oversizing of systems), (ii) construction and commissioning (economy over design, poor commissioning) and (iii) operation (equipment issues, user interaction, change of building purpose). Moreover, interviews by Dadzie et al. (2018) revealed that in Australia, many barriers to energy renovation relate to the financial side, such as the cost of sustainable technologies, perceived poor payback periods, unreliable energy-savings projections and hidden costs of renovation. The existing design and age of buildings also serve as barriers. Demolish-and-build is perceived to be a more economical alternative to retrofitting existing buildings.
Furthermore, based on a Norwegian survey of property owners, consultants and property managers in 2018, the greatest barriers to a building's usability and lifetime value creation were found to be bad decisions made in early-phase planning (Boge et al., 2018). Avoiding moderate investments in the early stage often requires substantial investments at a later stage to remedy the consequences. Not solving the issues in the early stage may even result in irreversible problems in the operation phase.
In the same year, energy audits of office buildings were done in Brazil to map the issues that lead to problems with energy efficiency and indoor environmental quality (Borgstein et al., 2018). As a result of this effort, 38 failure modes relating to energy performance were encountered. In small buildings, most of the problems were caused by management failure (bad contracts, guidelines without energy performance requirements, lack of proper setpoints and documentation, and high night-time loads). In large buildings, the issues were typically related to improper operation of systems. With large floor plans, there are also typically overly large control zones, where the automation does not work correctly. In both small and large buildings, failures happened mostly due to the operation and maintenance procedures and could be fixed without installing new equipment.
Besides previous research efforts, Liang et al. (2019) surveyed facility managers of commercial buildings in the United States to understand the reasons for the energy performance gap. The primary reasons found for the gap were occupants using more energy than designed, there being more occupants than designed and the failure of energy-efficient technologies. It was speculated that higher than expected occupant energy consumption often results from green building certification or the fact that perceived energy efficiency encourages wasteful behaviour (i.e. the rebound effect). The survey also revealed that facility managers were generally expected to work on improving energy efficiency in their buildings but were not incentivized nor really required to do so. In another study, Willan et al. (2020) examined how various building construction actors talk about the performance gap. The observed discourse indicated that, in fear of being held liable for any performance gap, instead of trying to reduce the gap, construction actors tend to rationalize it as an unavoidable difference between 'theory' and 'reality'. Similar findings were reported by Markus et al. (2022) through an exploration of facility managers' responses to data and KPIs collected from the buildings they supervise. Unfamiliarity with data-driven methodologies and tools and an aversion to risk in building operations may lead to disinterest in optimal control strategies and deter actionable insights from data-driven building operations analytics. Long-term insights generated through complex computational analyses may seem distant from existing heuristics and experience-based decision-making methods. Hesitancy may especially arise if the data-based recommendations conflict with existing operating procedures. Rasmussen and Jensen (2020) noted that alongside the energy performance gap, there are other types of building performance gaps, such as those related to operations and indoor conditions. However, the gaps are often interdependent, with shared causes for the different challenges. On the other hand, closing one performance gap might widen another type of performance gap. Interviews with Danish experts and focus groups revealed that performance gaps should be examined from various viewpoints, as building clients and facility managers may not share priorities. The performance gap was also noted to be project specific, meaning that even different types of facility managers may not perceive the same gap as the most important one. López-Bernabé et al. (2021) surveyed Spanish hotel owners to find out why the adoption of energy efficiency measures in hotels is so low. While hotel owners considered energy efficiency to be important, the majority of respondents did not know how much energy their buildings consume or what the cost of that energy is.
Ironically, establishments with appropriate thermal insulation regarded energy efficiency improvement as less important. Perceived good energy performance can thus reduce interest in other energy efficiency measures.
The Soft Landings framework is a multi-stage process intended to ensure that the energy performance of buildings is at the intended level. Samarakkody and Perera (2023) examined the barriers to the adoption of the process in Sri Lanka. One important barrier was the lack of interest in building performance.
Another key barrier was the lack of industry standards and the fact that quality certifications are not really adhered to. Other barriers were industry concerns about corruption, procurement challenges, high construction costs, adverse impacts of government changes, lack of appropriate policies and delays in administration and approvals. The inefficiency of local government and public administrative bodies was seen as the reason for many of the perceived barriers. Kazemi and Kazemi (2022) examined the financial barriers to residential buildings' energy efficiency in Iran. The main barriers were misplaced incentives and distortionary fiscal and regulatory policies, as well as unpriced costs and benefits, fear of hidden costs, and a focus on initial cost. Mistaken beliefs about energy efficiency also served as an obstacle. Finally, in a recent study, Too et al. (2023) performed interviews and examined project reports and documentation to reveal a lack of effective handover procedures after the completion of building projects.
Enablers/solutions
While various studies have identified many types of obstacles to energy-efficient construction, those same studies typically also reveal enablers that can help solve the issues and increase the uptake of energy-efficient building solutions. According to Häkkinen and Belloni (2011), the promotion of sustainable building requires improving the awareness of clients about the benefits of sustainable building and the adoption of methods for managing sustainable building requirements. Designers' competence and teamwork skills were also important. New tools and services should be developed and utilized.
Li and Yao (2012) recommended establishing stricter compliance and verification systems to guarantee that energy efficiency standards are implemented in practice. There should also be policies to remove structural barriers between construction professionals and all the other stakeholders in creating, operating and using buildings. Increased access to data on actual building performance would help confirm the actual energy performance of buildings and the efficacy of any energy efficiency policies. Frei et al. (2017) suggested data collection and monitoring and improved commissioning and management to improve building performance on the operational side. Training, design improvements and better communication between different stakeholders, alongside energy performance contracts, were also found important.
According to Boge et al. (2018), a building's usability and lifetime value creation is largely determined by decisions made in early-stage planning. More resources utilized in early-stage planning will reduce costs and problems later. Just time and attention are often adequate when the issues are addressed early enough. Facility management planning should also be started right at the beginning of construction projects to ensure successful facility management in the operational phase.
In the case of small office buildings, Borgstein et al. (2018) found that even simple checklists to identify performance failures could be used to improve building performance. As users have more control, the systems should be kept as simple as possible. In large buildings with more complex systems, commissioning and fine-tuning of operational parameters are needed. Investments in building automation would also improve energy efficiency. Liang et al. (2019) noted that to reduce energy consumption in buildings, facility managers should be incentivized to do so, for example, through rebates based on saved energy. The need for continuous energy consumption monitoring was highlighted. Energy performance targets and contracts would be useful for energy efficiency. Influencing the occupants' behaviour was also found to be important. Proper commissioning procedures and continuous monitoring as a way to achieve energy efficiency were also proposed by Mikhail et al. (2023). Knowledge of energy consumption at each moment in time allows fine-tuning of setpoints and system start times. Buildings should be designed with variable occupation in mind, so that the building energy systems can adjust to non-uniform occupation of spaces and distinguish between occupied and unoccupied hours. Willan et al. (2020) suggested that in addition to energy performance targets, there need to be incentives for companies to take responsibility for those targets. This could be realized through public procurement contracts that encourage collaboration and innovation, instead of focusing on accountability. Markus et al. (2022) noted that improving energy efficiency should be a key role of building operators and facility managers. This includes short- and long-term interventions in controls and equipment maintenance to continually improve building energy performance. Education of the operational staff is required so that there is enough understanding to apply insights obtained from data-driven methods, especially if such actions contradict existing operating procedures. Rasmussen and Jensen (2020) emphasized a multi-disciplinary view of building performance gaps. When solving one issue (e.g. energy performance), care should be taken to avoid worsening other issues (e.g. indoor air quality).
López-Bernabé et al. (2021) reported that information-based policy instruments such as labels, energy audits and feedback on bills would be useful to increase the adoption of energy efficiency in the hotel industry. Lower HVAC system prices and labels with additional monetary information on the impact of efficiency improvement would increase interest in energy efficiency. Mandatory energy efficiency standards would increase adoption among those who generally choose cheaper and lower-quality equipment. Samarakkody and Perera (2023) proposed the application of the Soft Landings (SL) framework to reduce the building performance gap in Sri Lanka. The existence of a significant performance gap serves as a motivator to implement SL. To enable the use of SL, building environmental assessment methodologies and certifications need to be available. Clients and facility managers need to be educated on the gap and the framework. Other enablers are increased valuation of cross-disciplinary collaboration and good practices followed by market leaders.
Providing more financial options through loan financing, grants, subsidies and fiscal incentives was recommended as a solution to increase the uptake of energy efficiency measures in Iranian buildings (Kazemi & Kazemi, 2022). Better regulation as well as training and information programmes were also recommended. Accounting for the negative environmental costs of fossil fuel supply chains would make energy efficiency a more economically feasible choice.
Research gap
A large number of different barriers to energy efficiency have been identified in past studies, along with corresponding solutions to enable improved building performance. The issues vary by country and by building type. In some regions, the problems relate mainly to facility management procedures, while elsewhere the issue might be inadequate enforcement of regulations or a lack of communication and proper performance incentives. However, it is clear that the energy performance gap can result from decisions and actions made in each stage of the construction process and that various professionals have different views and priorities on the issues. This study sets out to provide a comprehensive view of the issue in the Finnish construction industry by separately looking at each phase of the construction process and by involving professionals specifically responsible for each of those phases.
Research design
This study aims to explore and overcome the challenges/barriers of achieving energy efficiency in building construction projects. Due to the existence of literature related to the topic under study, a deductive approach was adopted (Saunders et al., 2019). Consequently, a literature study and semi-structured interviews were selected as the data collection methods. The exploratory purpose of the research justified the selection of the semi-structured interview as the data collection method, and thematic as well as content analysis as the data analysis methods (Saunders et al., 2019). The next step was defining the context and selecting the sampling method. Building construction and renovation projects were selected as the focus of the study. In terms of building type (construction category), the interviewed professionals' latest projects involved constructing or renovating residential buildings, institutional buildings (i.e. a school and a hospital) and commercial buildings (i.e. a shopping mall and an office building). Regarding the sampling method, a combination of quota sampling and purposive sampling was utilized in this study, through which the research team defined four groups of interviewees, including client project managers, contractor project managers, design managers, and maintenance experts, and targeted at least five interviewees in each group. According to the data collection possibilities and the available time and resources, the goal was to interview at least five people in each interviewee group. Then, the research team filled each quota by intentionally choosing individuals (i.e. interviewees) who were in possession of relevant knowledge and experience related to the quota and the research topic.
Data collection
Following the selection of the sampling method, the protocol and questions of the semi-structured interviews were formulated. The developed questions aimed to explore the causes behind the energy performance gap in building construction projects based on the viewpoints of the different project parties and their representatives involved in the different phases of the project life cycle. This approach was theoretically justified through the findings of the study conducted by Rasmussen and Jensen (2020), suggesting that the performance gap should be examined from various viewpoints. The interview protocol and questions can be seen in the Appendix.
The developed interview protocol and questions were piloted in the first three interviews to seek feedback from the interviewees. Since there was neither negative feedback nor any change to the interview protocol and questions, the first three interviews, which had been conducted for piloting purposes, were also considered valid for analysis in the data analysis stage.
Then, 21 semi-structured interviews were conducted in Finland with project professionals representing clients, design/planning experts, contractors and building operation/maintenance experts. The interview panel consisted of three individuals: the first was the leading researcher (i.e. the main author of this article); the second was a native Finnish-speaking member of the research team (the second author of this article), who asked the main and follow-up questions in Finnish; and the third member of the interview panel was a colleague with a background in building energy efficiency. The interviews were audio recorded based on the consent obtained from the interviewees.
Demographic information of the interviewees
Figure 1 shows the gender, age group and experience of the interviewees. In addition, Table 1 shows the interviewees' discipline, role and their latest project's type, budget, and duration.
Data analysis and results validation
The conducted interviews were transcribed and translated into English by the native Finnish-speaking member of the research team. Then, the translated transcripts were reviewed by the leading researcher to identify the challenges/barriers and solutions/enablers of realizing energy efficiency in the project and product (i.e. constructed building) life cycle. The extracted research data was then inductively coded (Saunders et al., 2019). Qualitative coding is a process of systematically categorizing excerpts in qualitative data (i.e. the interview transcripts in this study) in order to find themes and patterns. The labels of the codes were derived from the data by the researcher (Saunders et al., 2019). As a result of this effort, 51 codes were generated. These generated codes were then reviewed by the leading researcher.
Results
Thematic analysis of the discovered challenges/barriers and solutions/enablers revealed that they represent nine different themes. These themes were: (i) building operation, maintenance and optimization, (ii) building's energy system, (iii) client, (iv) competence development, (v) design, (vi) finance, (vii) information management, (viii) project delivery and (ix) regulatory issues. In this section, the identified challenges and solutions are presented according to their relevant themes.
Challenges/barriers for achieving energy efficiency in building construction projects
Analysing the interview transcripts resulted in the identification of more than 100 challenges to realizing energy efficiency in building construction projects. Among the discovered challenges, 11 were mentioned by more than one interviewee. These challenges are shown in Table 2. As can be seen in Table 2, the major obstacles to achieving energy efficiency in building construction projects seem to be rooted in the project delivery model, financial issues and the building's energy system. Regarding project delivery, the lack of early involvement of the building services designer, contractors and building operation/maintenance people in the project definition phase seems to be the key challenge. Concerning financial issues, an insufficient budget combined with the high initial cost of modern energy systems seem to be the dominant barriers. Finally, the complexity of modern energy systems and inaccurate calculation of building energy consumption are further challenges for achieving the targeted energy consumption in the operation phase of new buildings.
Solutions/enablers for achieving energy efficiency in building construction projects
Akin to the identified challenges, there were also some enablers which were mentioned by more than one interviewee. These nine solutions/enablers, representing four different themes (project delivery; building's energy system; building operation, maintenance, and optimization; competence development), were found to be of importance. Table 3 shows these solutions/enablers.
In terms of project delivery, having a life cycle contract, which extends the liability of the party into the building operation phase for a sufficient period, can be seen as the most important solution, mentioned by a few interviewees. A life cycle contract, in other words, shares the risk and reward not only in the project life cycle, but also in the product (i.e. building) life cycle. In addition to the life cycle contract, the involvement of the key project participants (e.g. building services designer and contractors, building operation and maintenance people) in the project definition phase, and the early definition of the building's end use/end user, are the key solutions mentioned by the interviewees. The leading author of this article has developed a collaborative project delivery model (featuring the mentioned solutions) for achieving energy efficiency in building construction projects, which will be reported in a separate publication.
In addition to the project delivery area, there were three frequently mentioned solutions representing the building's energy system and building operation/maintenance/optimization. As can be expected, the interviewees mentioned that developing guidelines for designing hybrid energy systems for buildings could have a significant impact on the high functionality of these systems in the building operation phase. Heat pump and thermal borehole field design guidelines were specifically requested. Moreover, appropriate calibration of the building energy system and its continuous monitoring and optimization were also found to be key enablers for achieving energy efficiency in building construction projects.
Mapping challenges and solutions of achieving energy efficiency in life cycle phases of building construction projects
Figure 2 shows the relevance of the challenges and solutions listed in Tables 2 and 3 to the life cycle phases of building construction projects. As can be seen in Figure 2, most of the identified barriers/challenges and solutions/enablers are related to the project definition and building operation phases.
This finding corresponds to previous research (e.g. Boge et al., 2018) and highlights the significance of life cycle-based decisions and investments in the project definition phase. Theoretically, this is aligned with the relationship between time and the cost of early versus late changes in a project, but with two differences. The first difference is that here it is the life cycle-based decisions and investments, rather than changes, that seem to have a relationship with time.
The second difference is that this relationship seems to be valid not only in the project life cycle, but also in the building life cycle. This relationship may look something like Figure 3.
Table 3. Most frequently mentioned solutions/enablers by interviewees for achieving energy efficiency in building construction projects.
According to Figure 3, it can be argued that the project definition phase and building commissioning are two important gates which seem to have a direct effect on the cost of completing the project and using the building.
Discussion
The identified challenges/barriers and enablers/solutions for achieving energy efficiency in building construction projects spotlight some key issues, which are discussed here. The identified challenges and solutions are discussed through the lens of their representing theme (mentioned in Tables 2 and 3). In this regard, project delivery is the first focus area. The project delivery model, through which the project organization and processes are formed and managed, has a key role in the success or failure of any project. The detected challenges and solutions under the project delivery theme provide a basis to argue that employing an appropriate and compatible project delivery model and contractual framework is of prime importance for buildings which are designed and built to be highly energy efficient. These identified challenges and solutions imply that utilizing traditional delivery models (e.g. design-bid-build) seems to be a major obstacle to realistic and accurate target setting for energy consumption and achieving it in the building operation phase. This argument is supported by the fact that the identified challenges/barriers in Table 2 clearly represent the characteristics (e.g. lack of early involvement of key project participants, isolated work, and dominance of low-price criteria) of traditional delivery models like design-bid-build (Moradi et al., 2021; Oakland & Marosszeky, 2017).
On the contrary, the solutions mentioned by the interviewees for the project delivery area emphasize the essence of employing collaborative project delivery models (e.g. alliance) which enable early involvement of key project participants, shared risk-reward, joint design and control of projects, mutual trust and competence-based selection of the contractor (Moradi & Kähkönen, 2022). Moreover, a wise choice of project delivery model can also overcome the mentioned financial challenges in terms of the high initial costs of modern energy systems. This can be realized through a fair share of risk and reward in the project and product (building) life cycle. This provides a great motivation for the client and contractor to justify the high initial costs of modern energy systems with the benefits in the payback period. This includes not only equipment, but also additional design work. Additional investment in design and planning can prevent incorrect system dimensioning, saving money in the procurement and/or operational phases.
The second focus area here is the mentioned challenges related to the building's energy system and building operation/maintenance/optimization. These challenges and solutions are again highly rooted in the type of project delivery model and contract. This means that if the building services and maintenance experts are involved in the project definition and design phases (which happens in collaborative project delivery), the chances of making better decisions, having a highly accurate calculation of the building's energy consumption and achieving a more functional and cost-effective design of the building's energy system increase considerably (e.g. Boge et al., 2018). This argument can also be supported by the map of identified barriers and enablers in Figure 3. Besides, when there is a collaborative project delivery model featuring a life cycle contract, it seems highly unlikely that proper commissioning and continuous monitoring as well as optimization of the building's energy system will be missed or neglected. This is because contractors are contractually responsible for the full commissioning and optimization of the building energy system.
The renewal of the maintenance process itself was also called out due to the increased complexity of modern hybrid energy systems. Traditionally, the responsibility for maintaining building energy systems has fallen to janitor-type personnel, who have no specific expertise in building energy management. As new systems may involve multiple parallel heating/cooling systems and complicated automation, there is a need for both specialist personnel and automated diagnostic tools. The smooth operation of hybrid energy systems requires constant optimization, and merely tuning the system settings during commissioning was seen as inadequate. In addition, the need for updating outdated building maintenance guidelines was brought forward.
Some interviewees also highlighted the missed potential of energy efficiency due to the lack of awareness of available solutions and their actual impact on energy cost. Many types of energy solutions are available on the market, but there has been no systematic review of their joint performance in practice. Detailed monitoring and publication of the results in an open database would provide all market participants and stakeholders a better understanding of available solutions and their expected performance. This could help reduce the energy performance gap while improving the baseline efficiency level.
Many of the observed barriers to energy efficiency seem to be the same as those reported more than 10 years ago by Häkkinen and Belloni (2011) regarding the uptake of sustainable buildings. Thus, clients need to be made aware of the long-term benefits of sustainable solutions that are more costly in the project initiation phase. Teamwork skills must be improved, and new service concepts need to be developed. The longevity of these issues highlights the fact that awareness of the problems is not enough, and active work must be done to fix them.
Conclusions
This study aimed to explore the challenges/barriers and solutions/enablers of realizing energy efficiency in building construction projects. This was accomplished through a qualitative study based on semi-structured interviews. The opinions of project professionals representing clients, design, contractors and property management were obtained and analysed. Accordingly, it is concluded that:
- The accuracy of energy consumption estimation (predicted building use and users) has a huge impact on the actual energy performance of the building, and therefore building services people need to get involved in the early stages of the project.
- Increasing the awareness of clients about hybrid systems and their life cycle costs and benefits is of prime importance to provide a basis for long-term decisions in the project definition phase.
- Employing traditional delivery models, like design-bid-build, seems to be a major obstacle to realizing energy efficiency in building construction projects due to the lack of early involvement of key project participants, isolated work, mistrust, unfair share of risk-reward and dominance of low-price criteria.
- On the contrary, collaborative project delivery models (like alliance) with a life cycle contract (covering risk and reward both in the project life cycle and the building life cycle) considerably contribute towards the realization of energy efficiency in building construction projects.
- The project definition and building operation phases seem to have the biggest impact on the realization of energy efficiency in building construction projects.
- Life cycle-based decisions and investments in the project definition phase directly affect the achievement of the energy consumption target in the building operation phase.
- To ensure that the building systems work as designed, a proper commissioning process must be performed, but in addition, there must be continuous monitoring and optimization of the systems throughout the whole operational phase.
This study contributes to the existing body of knowledge on the barriers and enablers of realizing energy efficiency in building construction projects. It also provides practical insights for the relevant project professionals. However, it is acknowledged that the generalizability of the findings could be limited, as a limited number of interviews were conducted, and only in Finland. Moreover, the purposive sampling technique utilized in this study might have affected the generalizability of the findings. Although the identified challenges in this study are most relevant to Finland, the explored solutions, such as collaboration and a fair share of risk-reward in the project and product life cycle, can be applied in other contexts as well. All in all, similar efforts in other regions remain a potential avenue for further research and development in this area.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
Appendix. Interview protocol and questions
Step 1. The interviewer(s) starts the session by explaining the purpose of the interview. This explanation is to be done by reading the following statement: This interview is conducted to collect data from project professionals for a research project which aims at exploring and overcoming challenges and barriers to realizing energy efficiency in building construction and renovation projects.
The motivation for conducting this research project comes from the latest research findings that there is a performance gap between the design intentions for the energy efficiency of buildings and the actual performance of the buildings in the operation phase. The interview will take 45-60 minutes.
The identity of the interviewees and the real names of the projects (if mentioned during the interview) will not be disclosed to any third party, neither during the research nor after it. Only the demographic information of the interviewees (i.e. age, education, experience) and projects (i.e. type, size, duration) will be used in the data analysis and research dissemination, without revealing their identity. The interview will be audio recorded for the purpose of transcribing and analyzing the collected data. The interviewee gives his/her consent for recording the interview by answering the following questions.
Step 2. The interviewer(s) opens the discussion by reading the following statement: Now, I will start asking seven questions, two of which are related to demographic information.
Step 3. The interviewer(s) asks the interviewees the following questions based on their discipline: o Q6. How do you think these challenges and barriers can be overcome? o Q7. Is there anything which we may have overlooked in our questions and you would like to say something about it?
Step 4. The interviewer(s) thanks the interviewee for his/her participation in the interview through the following statement: We truly appreciate your participation in this interview and your insightful responses to our questions.
Figure 1. Demographic information of the interviewees.
Figure 2. Important barriers and enablers of achieving energy efficiency in different life cycle phases of building construction projects.
Figure 3. Cost and benefit of poor and wise decisions in the project definition and operation phases.
Table 1. Interviewees' discipline, role and their latest project's type, budget, and duration.
- Questions for the interview with design managers:
o Q1: What is your educational degree, professional experience and age?
o Q2. What was/is the type, budget and duration of your last/current project?
o Q3. What are the challenges and/or barriers for realizing energy efficiency in the design of new buildings?
o Q4. How do you think these challenges can be overcome?
o Q5. What are the challenges and/or barriers for achieving energy efficiency in the renovation of existing buildings?
o Q6. How do you think these challenges can be overcome?
o Q7. Is there anything which we may have overlooked in our questions, and you would like to say something about it?
- Questions for the interview with client project managers:
o Q1: What is your educational degree, professional experience and age?
o Q2. What was/is the type, budget and duration of your last/current project?
o Q3. What are the challenges and/or barriers in the definition phase of building construction projects which negatively affect the realization of energy efficiency?
o Q4. How do you think these challenges and barriers can be overcome?
o Q5. What are the challenges and/or barriers in the definition phase of building renovation projects which negatively affect the realization of energy efficiency?
o Q6. How do you think these challenges and barriers can be overcome?
o Q7. Is there anything which we may have overlooked in our questions and you would like to say something about it?
- Questions for the interview with contractor project managers:
o Q1: What is your educational degree, professional experience and age?
o Q2. What was/is the type, budget and duration of your last/current project?
o Q3. What are the challenges and/or barriers in the delivery of building construction projects which are designed to be energy efficient?
o Q4. How do you think these challenges and barriers can be overcome?
o Q5. What are the challenges and/or barriers in the renovation of existing buildings which are retrofitted to be energy efficient?
o Q6. How do you think these challenges and barriers can be overcome?
o Q7. Is there anything which we may have overlooked in our questions and you would like to say something about it?
- Questions for the interview with property managers:
o Q1: What is your educational degree, professional experience and age?
o Q2. What was/is the type, budget and duration of your last/current project?
o Q3. What are the challenges and/or barriers of realizing energy efficiency in the utilization phase of the new buildings?
o Q4. How do you think these challenges and barriers can be overcome?
o Q5. What are the challenges and/or barriers of realizing energy efficiency in the utilization phase of the renovated buildings? | 8,502.2 | 2023-11-29T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Politeness in the Samawa Ethnic Language on Social Media
This research explains politeness in the Sumbawa ethnic language on social media, including the types of politeness that tend to be used. The politeness theories of Lakoff, Leech, Brown and Levinson, Grice, and Pranowo are used to explain this. Data was collected using documentation methods from language use on social media sites such as Instagram, Facebook, and WhatsApp, both individually and in groups. The data was then analyzed using principles from pragmatic studies. The research results show that not all types of language politeness proposed by experts are used by the Sumbawa ethnic group. The politeness types of giving sympathy, showing agreement or similarity, giving praise, and avoiding arguments are the dominant types of politeness used with the interlocutor. Comprehensive studies need to be carried out in all domains to obtain a politeness model for the Sumbawa ethnic language.
Introduction
Language is unique, and this can be observed at various linguistic levels, including phonology, morphology, syntax, lexicon, semantics, and pragmatics. Until the second decade of the 21st century, linguistic studies regarding the uniqueness of a language were more oriented towards structural aspects, while pragmatic aspects (especially language politeness) received less attention. This research aims to explain (types of) politeness in the Sumbawa ethnic language on social media from the perspective of various existing politeness theories.
On the other hand, the Sumbawa ethnic group uses social media both individually and in groups. Commonly used social media include Facebook, Instagram, and WhatsApp, in both Sumbawa and Indonesian. As an ethnic group, it is possible for its speech acts to be uniquely different from those of other ethnic groups. It is important to know these differences, not only to reveal the language politeness model but also to understand how the Sumbawa ethnic group defends itself, or does not, in a speech event. Knowledge about this is very important so that participants can identify things that they can and cannot do when communicating with the Sumbawa ethnic group, because, conceptually, participants in a speech event are faced with two speech act choices, namely threatening face or not threatening face.
Considering that communication on social media is generally used for transmitting information, building harmony, and other positive purposes, face-threatening speech acts are important to identify. This is based on the consideration that face-threatening speech acts can serve as learning material for social media users, especially members of the Sumbawa ethnic group who use social media, so that communication can be carried out politely when dealing with the interlocutor. Apart from that, as an ethnic group with its own characteristics, the Sumbawa ethnic group has a unique knowledge system regarding the types and tendencies of language politeness. The availability of communication signs on social media for the Sumbawa ethnic group is important, considering the increasing use of language in the digital era.
In a speech event, each speech act is a reflection of the attitude of an individual or group. This means that every speech act can reflect politeness, so it is very important for a speaker to master language politeness, especially in group communication. Participants who are less able to act appropriately will be placed in a disadvantageous position in a speech event. Such conditions are, in Brown and Levinson's pragmatic terminology, called face-threatening speech acts [1]. To avoid these face-threatening conditions, the authors proposed language strategies including, for example, avoiding conflict, increasing attention/sympathy, identifying oneself with the interlocutor, and so on. Such strategies have been proposed by many researchers [2][3][4][5][6][7][8][9][10][11][12][13].
How the speech acts of the Sumbawa ethnic group involved in speech events appear from the perspective of the various politeness theories above needs to be researched. In other words, this research explains the realization of politeness in the Sumbawa ethnic language from the theoretical perspectives of [1][2][3][4][13]. The realization of politeness was examined in speech events that occurred on WhatsApp, Facebook, and Instagram in the period from March to September 2023. This research also reveals the types of politeness that are dominantly used by the Sumbawa ethnic group.
Before explaining the research findings, it is necessary to state several important terms related to this study. This study belongs to the subfield of pragmatics (in the field of linguistics). In the study of pragmatics, there are two closely related terms for politeness, which are not treated as contradictory in this article. Politeness refers to behavior that is in accordance with social rules in society, which can be demonstrated through concern and sensitivity towards others [1,3,4,13-16]. [2] believes that politeness is related to three things, namely (1) formality (the speech participants involved must feel completely comfortable in the entire speech act); (2) indecisiveness (the participants feel comfortable with each other and have many choices for speaking); and (3) equality/friendship (the speaker must consider the interlocutor as their friend while speaking). Language politeness is related to six things, namely (1) maximizing the other person's benefits and minimizing the other person's losses; (2) making your own profits as small as possible and making your own losses as large as possible; (3) criticizing others as little as possible and praising others as much as possible; (4) praising yourself as little as possible and criticizing yourself as much as possible; (5) creating as little disagreement as possible and as much agreement as possible; and (6) reducing feelings of antipathy as much as possible and increasing feelings of sympathy as much as possible [3]. Another researcher believes that language politeness is characterized by several things, namely (1) the speaker is able to maintain the dignity of the interlocutor (not embarrassing them); (2) the speaker does not say anything bad about the interlocutor; (3) the speaker does not express joy at the interlocutor's misfortune; (4) the speaker must not express disagreement with the interlocutor in a way that makes them feel their self-esteem has fallen; and (5) speakers must not praise themselves [4]. Pranowo emphasized that language politeness can be expressed in six ways, namely (1) humility, (2) respect, (3) curtness, (4) satire, (5) a happy tone, and (6) a concerned tone [13].
Brown and Levinson specifically put forward a language politeness strategy known as "face-saving", namely speech acts that contain certain intentions and characteristics as a manifestation of appreciation or respect for individual members of society [1]. The face-saving language politeness strategy includes two interrelated aspects, namely (a) negative face (the speaker's desire to be free to act/do something) and (b) positive face (the speaker's desire to be accepted or liked by other parties). These are called negative politeness and positive politeness, respectively [1]. In accordance with the research objectives, positive politeness theory will be depicted in the speech acts carried out by the Sumbawa ethnic group. Positive politeness is an approach that creates an impression of commonality with the interlocutor. To reduce the disappointment of the interlocutor, Brown and Levinson offer fifteen strategies for acting towards the interlocutor, namely (1) paying attention to interests, desires, behavior, or goods; (2) exaggerating feelings of interest, approval, or sympathy; (3) increasing interest; (4) showing a common identity/group; (5) seeking approval; (6) avoiding conflict; (7) presupposing the perception of a number of similarities with the speaker; (8) making jokes; (9) presupposing that the speaker understands the wishes of the interlocutor; (10) making offers and promises; (11) showing a sense of optimism; (12) trying to involve the interlocutor in a particular activity; (13) giving and asking for reasons; (14) offering a reciprocal action, namely if the interlocutor does X then the speaker does action Y; and (15) giving sympathy to the interlocutor [1]. If we look closely, strategies (1), (2), (3), (11), and (15) can be grouped into one, and strategies (4), (5), and (6) can likewise be grouped into one, so that there are in fact nine politeness strategies.
A search of the accessible literature related to this research shows that most studies take Indonesian as their object, including those by [13,17-21]. Two studies examined Perceptions of Directive Politeness in Indonesian among Several Ethnicities in Jakarta and Implicature and Politeness in Language: Some Insights from the Drama Ludruk [17,18]; Rahardi studied Pragmatics: Imperative Politeness in Indonesian and Sociopragmatics: Imperative Studies in the Context of Sociocultural Context and Situational Context [19,20]; and Pranowo studied politeness in using Indonesian and speaking politely [13]. When viewed from the aspects and objects studied, those studies differ from the present study but overlap in terms of politeness. However, the discussion of politeness in this article is more focused on (positive) politeness strategies rather than politeness in general, as studied by the three linguists above. Other studies were conducted by [22][23][24][25][26][27] but are not related to this research.
Method
This research was conducted from March to September 2023 by observing the use of the Sumbawa language on social media sites such as Facebook, Instagram, and WhatsApp. Data collection covered language use both by individuals and in groups. Individual use of the Sumbawa language was taken from statements made on personal homepages, while group use was taken from Sumbawa community groups on the three social media platforms above. Thus, data was collected using the documentation method (cf. [21]). The data collected is in the form of speech acts that are thought to contain politeness, as hypothesized by the experts. The data was then analyzed by comparing the collected speech acts with existing politeness theories. This analysis model is called the intralingual matching method and the extralingual matching method [21]. The intralingual matching method compares the analyzed speech act with the speech acts before and after it, while the extralingual method compares the analyzed speech act with its context (units in the form of non-language elements, such as who is speaking, where, when, about what, etc.). Data were also analyzed based on the concepts previously described [28]. The analyzed data was then quantified simply by counting the total number of speech acts, overall and by type.
Results and Discussion
As stated above, Lakoff argued that language politeness is related to formality, indecisiveness, and similarity/friendship [2]. Of the three types, only indecisiveness and similarity are used by the Sumbawa ethnic group on social media.
(1) Aida nda maras lamin nonda Pak Saleh. 'Aida (phatic marker), it wouldn't be fun if Mr. Saleh wasn't there.' (2) Jam pida nawar ada waktu sia bos. 'What time do you (respectfully) have tomorrow, boss?' (3) Lamin nene sate maras, nene ajak gama Pak Saleh. 'If you want to have fun, you should invite Mr. Saleh.' The strategy of similarity or camaraderie is also used by the Sumbawa ethnic group, namely by treating the person they are speaking to as truly their friend.
(5) Khusus buat Ami Mack lagu Taliang. 'This song is especially for Mr. Mack Taliwang.' (6) Ko ko ko, na barola kotar ompa ke linglung yammara saya, karing tudadi ngajar online. 'Well (phatic marker), quickly get tired and confused like I am now, teaching online.' Of the six types of politeness proposed by Leech [3], five are used by the Sumbawa ethnic group. First, make your own profits as small as possible and make your own losses as large as possible.
(7) Kajulin eneng maaf luk ngka saya bau datang sarawi Dea Papen eee. 'I (gently) apologize because I couldn't come to Grandpa Haji's place last night.' Second, criticize others as little as possible and praise others as much as possible.
(13) Eee nanta kami, ta nya rua kami telas. 'Eee (phatic marker), poor us, this is our life.' (14) Eee, kami ta tau nonda. 'Eee (phatic marker), we are no one.' Fourth, make as few disagreements as possible and as many agreements as possible. (15) Second, the speaker does not say anything unfavorable about the person he is speaking to, including data (20) and (27). Third, the speaker must not express disagreement with the person he is speaking to in a way that makes him feel his self-esteem fall; this includes data (1), (16), and (27). Fourth, speakers should not praise themselves.
(25) Ngaro mo sia tulung. 'Please (respectfully) help.' Based on the theory of Brown & Levinson [1], nine types of politeness were found to reduce the disappointment of the interlocutor. First, pay attention to interests, desires, behavior, or goods (data (26)). | 2,766.4 | 2024-01-01T00:00:00.000 | [
"Linguistics",
"Sociology"
] |
Focused ion beam direct writing of magnetic patterns with controlled structural and magnetic properties
Focused ion beam irradiation of metastable Fe$_{78}$Ni$_{22}$ thin films grown on Cu(100) substrates is used to create ferromagnetic, body-centered-cubic patterns embedded in a paramagnetic, face-centered-cubic surrounding. The structural and magnetic phase transformation can be controlled by varying the parameters of the transforming gallium ion beam. Focused ion beam parameters such as ion dose, number of scans, and scanning direction can be used not only to control the degree of transformation, but also to change the otherwise four-fold in-plane magnetic anisotropy into a uniaxial anisotropy along a specific crystallographic direction. This change is associated with the preferred growth of specific crystallographic domains. The possibility to create magnetic patterns with continuous magnetization transitions and at the same time to create patterns with periodic changes in magnetic anisotropy makes this system an ideal candidate for rapid prototyping of a large variety of nanostructured samples. In particular, spin-wave waveguides and magnonic crystals can easily be combined into complex devices in a single fabrication step.
material 4. However, these approaches often lead to ferromagnetic structures embedded in a ferromagnetic (or antiferromagnetic) matrix. For many applications, it is more suitable to have ferromagnetic elements surrounded by nonmagnetic regions. This can be done either by destruction of magnetism in multilayers by, e.g., ion-induced alloying 5, or via a positive process by creating ferromagnetic elements through an ion-induced change of chemical 6,7 or structural order 8. Another possible approach is to use ion-induced chemical reactions to create magnetic patterns 9-11.
Metastable face-centered cubic (fcc) Fe thin films 12,13 are good candidates for magnetic patterning, because they are paramagnetic at room temperature and can be transformed by ion-beam irradiation to ferromagnetic body-centered cubic (bcc) Fe 8. Unfortunately, there is a thickness limit, as fcc Fe films thicker than approx. 2 nm transform spontaneously to bcc 12. It is possible to overcome this thickness limit by stabilizing the fcc phase either by depositing the Fe at increased CO background pressure 14,15 or by alloying with Ni 16. In this work we use 8-nm-thick Ni-stabilized fcc Fe films as a nonmagnetic template and study the influence of FIB parameters on the structural and magnetic properties of transformed patterns. We show that it is possible to control not only the degree of transformation (saturation magnetization) but also the growth of specific crystallographic domains exhibiting different magnetic anisotropies (uniaxial anisotropy directions).
The films were grown in an ultra-high vacuum (UHV) system by evaporation from Fe78Ni22 (2 mm thick rod, purity 99.99%) heated by electron bombardment. Prior to the experiments, the Cu(100) crystals were cleaned by several cycles of sputtering (2 keV Ar⁺ ions, 30 min) and annealing (600 °C, 10 min). The cleanliness of the surface as well as the film composition was checked by Auger Electron Spectroscopy (AES). The pressure during the deposition was 5×10⁻¹⁰ mbar, and the deposition rate of 0.02 Å/s (calibrated by a quartz-crystal microbalance) resulted in a deposition time of approx. 1 h for 8-nm films. To suppress high-energy ions which may modify the growth mode of the films 17, a repelling voltage of +1.5 kV was applied to a cylindrical electrode in the orifice of the evaporator. After the deposition, the crystallographic structure of the films was checked by low-energy electron diffraction (LEED).
The Cu crystal with the deposited metastable film was then removed from UHV and transferred into the high-vacuum chamber of the scanning electron microscope equipped with a focused ion beam column (FIB-SEM, Lyra3, Tescan), where we conducted the FIB transformation. The residual pressure in the FIB-SEM vacuum chamber during transformation was 9×10⁻⁷ mbar. For the experiments we used the following nominal parameters of the gallium ion beam: acceleration voltage 30 kV, beam current 145 pA, beam spot size 30 nm and scanning step size 10 nm. First, we performed a dose test where we transformed rectangles (6 µm × 14 µm) with an increasing ion dose. The transformation was performed by two different approaches: 1) by performing 100 fast scans over the full area of the rectangle and 2) by applying the full ion dose in one (slower) scan. The total irradiation time was the same in both cases. After the transformation, we imaged the transformed areas by SEM. Although the sample surface after transformation was perfectly flat, by using an electron energy of 5 keV and a conventional Everhart-Thornley (SE) detector we were able to observe a clear contrast between irradiated and non-irradiated areas. The contrast reversed from dark to white and back upon tilting the sample ±10° from the normal and also upon rotation (with 6-fold symmetry in bcc areas and 8-fold symmetry in fcc areas), which points to its crystallographic origin 18. This crystallographic contrast cannot be fully quantified, but it is sufficient to image the difference between untransformed fcc Fe and transformed bcc Fe areas and also to distinguish different orientations of bcc domains after the transformation. Additionally, we measured the Kerr ellipticity (which is proportional to magnetization) of the transformed areas with our home-built micro-Kerr magnetometer 19. In the multi-scan case, the Kerr ellipticity increases linearly with ion dose up to a value of 0.9×10⁻⁴ mrad at an ion dose of 2×10¹⁵ ions/cm². This suggests a stochastic process of transformation resulting in small bcc nuclei, where the number of nuclei is proportional to the number of incident ions (keeping the probability of creating bcc nuclei by an incident ion constant). From the linear fit of the Kerr ellipticity, and assuming that the maximum measured Kerr ellipticity corresponds to a fully transformed layer, we can estimate a transformation efficiency of approx. 3 Fe or Ni atoms per incoming Ga⁺ ion [see inset in FIG. 1 a)]. After the saturation ion dose (maximal magnetic signal) is reached, the ion-beam-induced intermixing and sputtering processes lower the magnetization down to the point where all the iron and nickel has been sputtered off and no magnetic signal is observed anymore.
In the case of a single scan the magnetization in the low-dose regime also increases linearly (suggesting the same mechanism as in the multi-scan approach), yet when a critical ion dose of 3×10¹⁵ ions/cm² is reached the transformation efficiency suddenly increases to approx. 12 Fe or Ni atoms per incoming Ga⁺ ion. The ion beam is now irradiating the fcc-bcc boundary with sufficient ion flux to achieve steady-state migration of the bcc structure into the fcc surroundings 20,21. The SEM image of the corresponding single-scan area [rectangle III] shows a fully transformed area, but the resulting magnetization measured inside the rectangle is lower than in the dark area of FIG. 1 c), rectangle III [compare also with the graph in FIG. 1 a)]. This is because the maximum is reached at a later stage, where sputtering and intermixing already decrease the magnetization. These results show that single-scan transformation is much more efficient than multiple-scan transformation and that, once the initial bcc nuclei are formed, the transformation proceeds mainly via grain growth of the already transformed areas. Once the initial grain is transformed, it is easier to move the grain boundary via collision-induced migration of vacancies and interstitials at the boundary 20,21.
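The quoted efficiencies of roughly 3 and 12 Fe or Ni atoms per incoming Ga⁺ ion can be cross-checked with a simple areal-density estimate. The sketch below is illustrative only: the bcc Fe atomic density and the saturation doses are assumed round numbers, not values taken from the measurement; only the 8 nm film thickness comes from the text.

```python
# Rough estimate of FIB transformation efficiency (atoms transformed per Ga+ ion).
# Assumed inputs (not from the paper): approximate bcc Fe atomic density and
# assumed saturation doses; the 8 nm film thickness is taken from the text.
FE_DENSITY = 8.5e22      # atoms/cm^3, approximate value for bcc Fe
THICKNESS_CM = 8e-7      # 8 nm film expressed in cm

areal_density = FE_DENSITY * THICKNESS_CM   # atoms/cm^2 in the film (~6.8e16)

def efficiency(saturation_dose_ions_cm2: float) -> float:
    """Atoms transformed per incident ion if the film is fully bcc at that dose."""
    return areal_density / saturation_dose_ions_cm2

# Hypothetical saturation doses for the two irradiation strategies:
print(efficiency(2.3e16))  # ~3 atoms/ion  (multi-scan-like regime)
print(efficiency(5.7e15))  # ~12 atoms/ion (single-scan-like regime)
```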
In the second experiment we transformed a 15×15 µm² square in a single scan with an ion dose of 2×10¹⁵ ions/cm². The ion beam was scanning in a square spiral from the center of the rectangle towards the border.
The resulting SEM image [FIG. 2. a)] shows clear division of the rectangle into four triangular domains;
each domain corresponds to a different scanning direction. Inside each triangular domain we can observe an additional texture. Unfortunately, the SEM observation does not allow us to extract any quantitative information about the crystallography of the areas with different contrast 18.
FIG. 2. Rectangle transformed by spiral scanning. a) Crystallographic contrast in SEM shows division into four domains resulting from different FIB scanning directions. Polar plots of remanent magnetization measured by micro-Kerr magnetometry show four-fold magnetic anisotropy in the center of the rectangle (point b) and uniaxial anisotropies with different directions inside the crystallographic domains (points c and d).
Magnetic measurements provide further insight into the behavior of the material. We used the micro-Kerr magnetometer 19 to measure the angular dependence of the remanent magnetization in the center of the transformed square and in the center of each triangular domain. The spot size of the micro-Kerr magnetometer was approx. 1 µm. In the center of the square the plot of remanent magnetization shows clear four-fold magnetic anisotropy [FIG. 2 b)]. To study the dependence of the magnetic anisotropy direction on the scanning direction we transformed 36 circles with 10 µm diameter. The circles were transformed by linear scanning with a varying angle of FIB scanning, starting from the fcc [011] direction in 10° steps. The results of the experiment are shown in FIG. 3 a). When the direction of FIB scanning was between 0° and 90° (between the fcc [011] and fcc [01̄1] directions), the resulting magnetic anisotropy direction, represented by the direction of the easy axis, rotated by approx. 20°, from 35° to 55°. When the direction of FIB scanning passed 90° (the fcc [01̄1] direction), the resulting easy-axis direction jumped from 55° to 125°. With further increase of the scan angle, the resulting easy-axis direction gradually changed from 125° to 145°. At 180° (fcc [01̄1̄]) the easy axis again jumped from 145° to 35°, and the angular dependence continued symmetrically in the third and fourth quadrants, with continuous rotation for FIB scanning in between the fcc low-index directions and jumps when the FIB scanning direction passed the fcc low-index directions.
FIG. 3. a) Magnetic anisotropy (easy-axis direction) as a function of the FIB scanning direction (0° corresponds to fcc [011]). b) LEED pattern showing four bcc(110) domains after transformation by a broad ion beam. c) Schematic of all four bcc(110) domains with arrows indicating the directions of the easy axes.
To explain the evolution of the uniaxial magnetic anisotropy with respect to the direction of FIB scanning, we need to look at the crystallographic structure of the bcc Fe thin films on Cu(100).
The LEED pattern [see FIG. 3 b)] of the 8 nm thin film transformed in UHV by a broad Ar⁺ ion beam shows four possible bcc(110) domains formed in the Pitsch orientational relationship 22. The magnetic easy axis of bcc Fe is aligned with its <001> directions 23. FIG. 3 c) shows all four possible bcc(110) domains, with blue arrows indicating the angles of the easy axes. For these domains, the azimuthal angles of the easy axes are 35°, 55°, 125° and 145°.
Putting together the magnetic and structural data reveals the behavior of the FIB-induced transformation.
In the case of transformation by a broad ion beam or by isotropic FIB scanning (and also when using multiple FIB scans) the transformed film contains all four bcc(110) domains [see FIG. 3 b)] and exhibits four-fold magnetic anisotropy [see FIG. 2 b)]. For linear scanning the situation is different: when the FIB scanning direction passes 90° (the fcc [01̄1] direction), the other two bcc(110) domains are preferred and the magnetic easy axis jumps by 70°, from 55° to 125°. Then, by further increasing the FIB scanning angle from 90° to 180°, it is again possible to control the ratio of transformed domains [domain 3 and domain 4 in FIG. 3 c)] and to rotate the easy axis between 125° and 145°. The exact reason why the direction of FIB scanning can control the nucleation of individual bcc domains is not clear. The most probable explanation is uniaxial strain propagating perpendicularly to the FIB scanning direction.
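The selection rule described above can be summarized in a toy model: linear scanning between two fcc low-index directions favors the bcc(110) domains whose [001] easy axes (35°, 55°, 125° or 145°) lie in the corresponding range, and the easy axis jumps when the scanning direction passes an fcc low-index direction. The linear interpolation between the two favored axes in the sketch below is an assumption made only for illustration, not a fit to the measured curve in FIG. 3 a).

```python
# Illustrative model of the observed easy-axis selection (not a fit to the data):
# for a FIB scanning angle phi (measured from fcc [011]), the favored bcc(110)
# domains are those whose [001] easy axes lie in the same half of the rotation,
# so the resulting easy axis stays between 35-55 deg or 125-145 deg.
EASY_AXES = (35.0, 55.0, 125.0, 145.0)   # deg, the four bcc(110) domain easy axes

def easy_axis(phi_deg: float) -> float:
    """Approximate easy-axis direction for a given FIB scanning angle (deg)."""
    phi = phi_deg % 180.0
    if phi <= 90.0:                      # scanning between fcc [011] and [0-11]
        lo, hi = 35.0, 55.0
    else:                                # scanning past fcc [0-11]
        lo, hi = 125.0, 145.0
        phi -= 90.0
    # assumed linear interpolation between the two favored domain axes
    return lo + (hi - lo) * (phi / 90.0)

for angle in (0, 45, 90, 91, 135, 180):
    print(angle, round(easy_axis(angle), 1))   # shows the jump just past 90 deg
```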
The films described in this paper are well suited to prepare magnetic patterns or structures which are extremely difficult or impossible to prepare by conventional lithography techniques. In FIG. 4 a) we show a magnonic crystal consisting of 500 nm wide stripes with alternating magnetic anisotropy. FIG. 4 b) shows another magnonic crystal with modulated magnetization. The modulation in magnetization can be either in steps, or it is also possible to fabricate a gradual magnetization transition. All these structures are the result of pure magnetic patterning without any apparent topography on the irradiated structures. The contrast in the SEM images is purely crystallographic. In summary, we have presented a very powerful method for magnetic patterning by direct FIB writing. The system allows precise control of the magnetic parameters of the transformed areas. We have shown that it is possible to control the degree of transformation (magnetization) by selecting a proper ion dose and using multiple scans over the sample area. Even more importantly, we have shown that it is also possible to control the magnetic anisotropy of the transformed patterns by changing the FIB scanning direction. With linear scanning, the bcc(110) domains having [001] directions (easy axes) parallel, or close to parallel, to the FIB scanning direction are preferentially formed. The examples of transformed patterns with sub-100-nm transitions show that FIB-patterned metastable Fe78Ni22 thin films on Cu(100) can be used as a rapid prototyping platform for many spintronic and magnonic applications.
The results of the dose test are shown in FIG. 1. The graph in FIG. 1 a) shows the dependence of the Kerr ellipticity (degree of transformation) on the ion dose for different numbers of scans. Each point in the graph is from a separate experiment. The results show a clear difference in the transformation process when the structures are transformed by using either multiple scans over the same area (dashed line with open circles) or a single scan only (solid line with open triangles). When irradiating the material by multiple passes of the ion beam, the magnetization of the structures increases linearly from the background value of 5×10⁻⁶ mrad up to 0.9×10⁻⁴ mrad at the saturation ion dose.
FIG. 1. Dose test and development of the FIB-induced transformation. a) Plot of the dependence of the Kerr ellipticity on the ion dose.
Inside the triangular domains, in contrast to the center of the square, the magnetic anisotropy is clearly uniaxial, and the direction of the uniaxial anisotropy changes with the FIB scanning direction [FIG. 2 c), d)].
The linear single-scan FIB transformation results in uniaxial magnetic anisotropy [see FIG. 2 c), d)], and the direction of the anisotropy depends on the direction of FIB scanning [see FIG. 3 a)]. The experimental data fit a model in which FIB scanning in between fcc low-index directions preferentially forms bcc domains whose [001] direction is parallel, or close to parallel, to the FIB scanning direction. For example, when the FIB is scanning between 0° and 90° there is preferential nucleation of the domains with [001] directions at 35° and 55° [domain 1 and domain 2 in FIG. 3 c)]. The FIB scanning angle can control the ratio of transformed domains, and the easy-axis direction can be continuously rotated between 35° and 55°. When the FIB scanning angle exceeds 90° (the fcc [01̄1] direction), the other two domains are preferred and the easy axis jumps to the 125°-145° range, as described above.
FIG. 4. Examples of magnetic patterns. a) Magnonic crystal with periodic changes in magnetic anisotropy. b) Magnonic crystal with modulated magnetization.
This research has been financially supported by the joint project of the Grant Agency of the Czech Republic (Project No. 15-34632L) and the Austrian Science Fund (Project No. I 1937-N20). The FIB transformation was carried out in the CEITEC Nano Research Infrastructure (ID LM2015041, MEYS CR, 2016-2019). L.F. was supported by the Brno PhD talent scholarship. | 3,421.6 | 2018-03-12T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Stretchable Textile Yarn Based on UHF RFID Helical Tag
In the context of wearable technology, several techniques have been used for the fabrication of radio frequency identification (RFID) tags, such as 3D printing, inkjet printing, and even embroidery. In contrast to these methods, where the tag is attached to the object by sewing or simple sticking, the E-Thread® technology is a novel assembling method that allows the integration of the RFID tag into a textile yarn and thus makes it embeddable into the object at the fabrication stage. The current E-Thread® yarn uses an RFID tag in which the antenna is a straight half-wave dipole, which makes the solution vulnerable to mechanical strains (i.e., elongation). In this paper, we propose an alternative to the current RFID yarn solution that uses an antenna with a helical geometry, which addresses the mechanical issues while keeping electrical and radiative properties similar to those of the present solution. The RFID helical tag was designed and simulated taking into consideration the constraints of the manufacturing process. The helical RFID tag was then fabricated using the E-Thread® technology, and experimental characterization showed that the obtained structure exhibited good performance, with a 10.6 m read range in the ultra high frequency (UHF) RFID band and 10% tolerance in terms of elongation.
Introduction
Radio frequency identification (RFID) is a very popular standardized technology that is mainly employed for the identification purposes of objects or people. More precisely, an object associated with a RFID tag is remotely identified by the means of a RFID reader. The communication principle is based on the tag's load modulation of the backscattered electromagnetic wave [1][2][3], which implies that in most of the cases, the RFID tag is passive (i.e., it uses the transmitted energy from the reader without the need for any additional energy source). RFID is a very interesting concept that contributes to the Internet of Things (IoT) development and, more generally, it is considered as a key technology for humanity [4,5]. The advantages that are offered by RFID tags such as communication without line of sight, low cost, small size, and unique identification have made them an essential candidate for a wide range of applications, for example, logistics, retail, access and identity cards as well as wireless payment systems.
Recently, the emergence of electronic devices that can be worn in, on, or near the body, called "wearables", has allowed for the possibility of recovering various physiological information from a human body and transmitting it wirelessly to a processing unit or even to a smartphone [6]. The information obtained from a wearable device can be very useful in a wide range of applications, especially in the health care sector, and one of the required operations is the unique identification of the device. For this purpose, in the last years, many efforts have been undertaken in order to develop wearable RFID tags that can be associated with clothing or an accessory in a way that is non-invasive, comfortable, and invisible for the wearer. Popular considerations during the design of wearable RFID tags are usually the impact of deformation on the RFID tag's performance, the effect of the human body's proximity on the tag's electrical and radiative properties, or the tag's washability [7][8][9][10][11]. However, in the encountered studies, the RFID tag's topology is often kept unchanged from the conventional one (i.e., a planar antenna on a substrate with properties that are specific to the application). In fact, the link between a RFID tag and the object it is associated with is often neglected, and the concept of integrating the tag into the object from the manufacturing phase onward is part of the "Industry 4.0" era.
One of the technologies that supports this idea is E-Thread®, in which the RFID tag's form factor is reinvented as a RFID textile yarn. The patented technique [12] consists of an automated assembling process during which the RFID chip is associated with a half-wave dipole antenna in a repeated operation. The obtained cascaded RFID tags are then wrapped by a textile material to constitute a spool of textile RFID yarns. When isolated from the spool, one RFID yarn operates in the European Ultra High Frequency (UHF) band (865.5 − 867.5) MHz and exhibits a reading range of 12 m [13]. The current E-Thread® RFID yarn constitutes a very interesting solution as it can be integrated within an object during the fabrication stage and, with its slender configuration, offers great advantages such as invisibility and comfort for the user. However, a RFID wearable tag has to be robust to mechanical constraints such as elongation, a robustness that the current RFID yarn lacks.
In this paper, we propose an alternative solution that consists of using, for the tag's antenna, a helical geometry that has mechanical properties similar to those of a string. A helical antenna is mainly fabricated by winding a conductive material, and its geometrical parameters have an important impact on its electromagnetic properties in terms of input impedance and radiation pattern. Usually, these helical antenna properties are exploited in several scenarios such as phased antenna arrays for millimeter waves and wireless power transfer applications [14,15], wireless sensor nodes in smart agriculture [16], as well as biomedical applications [17][18][19]. However, to the authors' knowledge, very few examples can be found in the literature where a helical antenna has been used in a RFID tag. For example, the study in [20] focused on the development of a helical RFID tag to be integrated into a vehicle tire. In this case, the impedance matching between the antenna and the chip was achieved using a transmission line. Meanwhile, in the study presented in [21], a helical antenna was developed for an RFID tag in which the impedance matching was achieved by tuning the geometrical parameters of the antenna.
In a previous work [22], the latter method was employed in order to design a helical antenna for the RFID tag yarn without the use of any additional elements to perform the impedance matching. That RFID helical tag exhibited a maximum read range at 1040 MHz, which is higher than the frequency of interest, and only 1 m of read range in the European UHF RFID band. As explained, the observed result is due to manufacturing process constraints, and one of the suggested improvements was to design a helical antenna with a larger spacing between turns while increasing the antenna's half-length h.
In this paper, the new UHF RFID helical tag-based textile yarn includes two significant improvements: (i) the helical RFID tag was designed while taking into consideration the manufacturing constraints (the nature of the employed materials and the limits on the physical dimensions), and (ii) a stretchable core material was integrated as a support for the elongation. Compared to the previous version of the helical RFID tag, the suggested design methodology also allows a manufactured structure to be obtained for which the dimensions and the electromagnetic characteristics are close to the simulated ones. This is possible through a more accurate modeling of the materials' characteristics in the design process. The rest of this paper is organized as follows. In Section 2, the topology of the helical antenna when integrated into a textile yarn is presented together with the design methodology, including electrical and manufacturing specifications. Moreover, criteria for the helical RFID tag characterization using simulation and experiments are given. Section 3 highlights the simulation results in terms of reflection coefficient and radiation pattern.
Moreover, the fabricated prototypes as well as the experimental characterization's results are presented. Finally, conclusions and future work are drawn in Section 4.
Topology of the RFID Textile Yarn Integrating a Helical Antenna
In free space, the helical antenna is characterized by its geometrical parameters, which are the diameter D; the half-length h; the turns number N; the pitch s; and the wire radius a, as shown in Figure 1a. As stated, these parameters impact the electromagnetic properties as follows: the diameter D and the pitch s mainly have an impact on the impedance matching while the half-length h and turns number N mainly modify the resonance frequency. Moreover, a helical antenna with a diameter much smaller than the wavelength allows a radiation pattern to be maintained with a normal mode similar to the dipole antenna of the current solution [23]. The RFID helical tags presented in this paper were fabricated using the E-Thread ® technology. The E-Thread technology consists of an automated assembling process where a dipole antenna is associated with a RFID chip for which the package was modified beforehand. On the RFID chip edges, two grooves receive two copper wires that form the tag's antenna [12]. This technique allows for several cascaded RFID tags to be obtained that can have a textile finishing during a wrapping process [13]. Furthermore, in order to obtain the helical shape, an additional step is required. This step consists of wrapping the textile material containing the cascaded RFID tags around a core material giving the helical aspect; here, a stretchable material is employed as the core of the helical antenna offering elongation capabilities. Details on the practical fabrication are given in [22]. Preliminary parametric simulations testing different dielectric constants for the used core material has allowed us to conclude that when the dielectric constant of the core increases, the impedance matching frequency shifts toward the low frequencies. Thus, it is important to identify and characterize the nature of the used material as the core during the antenna design. Indeed, any change after the manufacturing process is very difficult and may strongly deteriorate the RFID yarn.
For the simulation purpose, the textile material used for wrapping and the core material were modeled simply as dielectric materials characterized by their permittivity constant provided by the industrial partner. The dielectric constants for the employed nylon and lycra are ε r = 3.6 and ε r = 1.5, respectively. A cross section of the helical RFID textile yarn is shown in Figure 1b: D ext is the external diameter of the helical tag integrated in the textile; D int is the diameter of the cylindrical core material; and 2a is the helical antenna wire's diameter.
Design Specifications
In order to design a helical antenna for the RFID textile yarn, electrical specifications have to be guaranteed. In addition to these conditions, manufacturing constraints in terms of dimensioning are imposed by the manufacturing process.
• The helical RFID tag has to operate, here (but without loss of generality on the concept), within the European UHF RFID band (865.5 − 867.5) MHz; and
• The considered RFID integrated circuit (IC) is the Monza R6 [24], whose impedance is Z_chip = 15 − j150 Ω at 865 MHz. This RFID IC is used by the industrial partner for the current commercialized solution. However, the design methodology is independent of the IC choice.
Manufacturing Constraints
In order for the RFID helical tag design to be compatible with the E-Thread ® manufacturing process, some of the helical antenna's geometrical parameters have to respect certain limitations (which for the most part are therefore fixed according to manufacturing constraints):
• The helical antenna's pitch has to be higher than 0.7 mm. As explained in [22], the value of this parameter depends on the rotation speed of the conductive filament around the core material. Consequently, the value that meets the manufacturing process and is employed in our design methodology is s = 1.2 mm;
• The core material around which the copper conductive wire is wound has a diameter of 1 mm. A lower diameter strongly alters the impedance matching, while a higher value leads to a complex winding process. Consequently, this condition allows us to make a compromise between the manufacturing process and the helical RFID tag's performance;
• The external diameter D_ext, which depends on the textile material thickness, is provided by the industrial partner as D_ext = 1.35 mm; and
• The conductive wire diameter was fixed to 2a = 0.1 mm and corresponds to the copper diameter used in the E-Thread® process.
Hence, the geometrical parameters of the helical antenna that can be varied in order to design a helical RFID tag while meeting the specifications are: the half-length h and the turns number N. Table 1 summarizes the variable and the fixed geometrical parameters.
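Because only h and N remain free, a quick geometric sanity check of a candidate design can be useful before full-wave simulation. The sketch below uses the fixed values quoted above (pitch 1.2 mm, core diameter 1 mm, wire diameter 0.1 mm) together with the optimized h = 50 mm and N = 42 reported later; treating the effective winding diameter as the core diameter plus one wire diameter is an assumption made for this illustration.

```python
import math

# Fixed manufacturing parameters quoted in the text
D_CORE = 1.0      # mm, core (lycra) diameter
WIRE_D = 0.1      # mm, copper wire diameter
PITCH  = 1.2      # mm, pitch s

# Free design parameters (values of the optimized design in the text)
h = 50.0          # mm, half-length of the helical antenna
N = 42            # number of turns per arm

# Assumed effective winding diameter: core plus one wire diameter
D_eff = D_CORE + WIRE_D

turn_len = math.hypot(math.pi * D_eff, PITCH)   # wire length of one turn
wire_half_len = N * turn_len                    # unwound wire length of one arm

print(f"axial length from N*s : {N * PITCH:.1f} mm (compare with h = {h} mm)")
print(f"unwound wire per arm  : {wire_half_len:.1f} mm")
print(f"full dipole wire      : {2 * wire_half_len:.1f} mm "
      f"(~{2 * wire_half_len / 346.8:.2f} free-space wavelengths at 865 MHz)")
```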
Helical Antenna's Simulated Structure
All the presented simulations were performed using CST Microwave Studio 2018, electromagnetic simulation commercial software.
The described helical RFID tag was configured in 3D and a full view is shown in Figure 2a. In addition, a vertical cross section is illustrated in Figure 2b. The pitch s and the diameter D that strongly impact the impedance matching of the helical antenna have been fixed for manufacturing constraints and thus, only the resonance frequency can be modified. For this purpose, the number of turns N and the half-height h are simultaneously varied in order to obtain a resonance frequency in the UHF RFID band.
Characterization of the Helical RFID Tag
Here, the designed helical RFID tag was characterized in two ways. First, by simulation, and more precisely by evaluating its impedance matching and its radiation pattern. Second, the tag was evaluated by experimental tests through the measurements of the read range and by estimating its robustness to stretching.
Helical RFID Tag's Impedance Matching
Unlike other RF scenarios in which the antenna's impedance has to be matched to 50 Ω, in RFID the antenna's impedance has to be matched with the IC's impedance. The impedance matching is evaluated through the complex power wave reflection coefficient Γ, which can be expressed as in Equation (1):
Γ = (Z_chip − Z_ant*) / (Z_chip + Z_ant), (1)
where Z_ant is the helical antenna's input impedance and Z_ant* its complex conjugate.
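A short numerical illustration of Equation (1) is given below; the chip impedance is the Monza R6 value quoted above, while the antenna impedance is a made-up example value used only to show the calculation.

```python
import math

# Power-wave reflection coefficient between a RFID chip and its antenna.
Z_CHIP = complex(15, -150)   # ohm, Monza R6 at 865 MHz (from the text)

def gamma(z_ant: complex) -> complex:
    """Complex power-wave reflection coefficient, Equation (1)."""
    return (Z_CHIP - z_ant.conjugate()) / (Z_CHIP + z_ant)

def gamma_db(z_ant: complex) -> float:
    """Magnitude of the reflection coefficient in dB."""
    return 20 * math.log10(abs(gamma(z_ant)))

# Hypothetical antenna impedance, for illustration only:
Z_ANT = complex(10, 140)     # ohm
print(f"|Gamma| = {abs(gamma(Z_ANT)):.2f}  ({gamma_db(Z_ANT):.1f} dB)")
```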
Helical RFID Tag's Read Range
In most applicative contexts of UHF RFID, the read range is a very important criterion to describe the performance. In order to compare the experimental result to the one obtained by simulation, the read range can be calculated using the theoretical expression obtained from the Friis transmission equation:
r = (λ / 4π) √(P_t G_t G_r τ χ / P_th), (2)
where λ is the wavelength; P_t is the power transmitted by the reader; G_t is the reader's antenna gain; G_r is the tag's antenna gain; χ is the polarization loss; P_th is the tag's activation threshold, which represents the power needed for the IC to start operating; and τ is the power transmission coefficient, defined as:
τ = 4 R_chip R_ant / |Z_chip + Z_ant|² = 1 − |Γ|²,
where R_chip and R_ant are the real parts of the chip and antenna impedances. It is worth noting that the quantity P_t G_t represents the equivalent isotropic radiated power (EIRP). Its maximum value depends on the geographical location; for instance, the value imposed by the European Telecommunications Standards Institute (ETSI) is 3.28 W, whereas the tag's activation threshold is specific to the chosen IC.
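To make Equation (2) concrete, the sketch below plugs in the regulatory EIRP of 3.28 W, the −20 dBm activation threshold and the simulated 1.27 dB tag gain. The polarization loss χ = 0.5 (a 3 dB mismatch between a circularly polarized reader antenna and the linear tag) and the use of a −6.3 dB reflection coefficient to form τ are assumptions made for illustration; with them, the estimate lands close to the ~11 m figures discussed later.

```python
import math

C = 3e8                            # m/s, speed of light
FREQ = 865e6                       # Hz
EIRP = 3.28                        # W  (ETSI limit, P_t * G_t)
P_TH = 1e-3 * 10 ** (-20 / 10)     # W, chip activation threshold (-20 dBm)
G_TAG = 10 ** (1.27 / 10)          # linear gain from the simulated 1.27 dB

def read_range(tau: float, chi: float) -> float:
    """Friis-based read range, Equation (2), in metres."""
    lam = C / FREQ
    return (lam / (4 * math.pi)) * math.sqrt(EIRP * G_TAG * tau * chi / P_TH)

# Assumed values: tau from a -6.3 dB reflection coefficient, chi = 0.5
# (3 dB polarization mismatch between a circular reader antenna and the tag).
tau = 1 - abs(10 ** (-6.3 / 20)) ** 2
print(f"tau = {tau:.2f}, read range = {read_range(tau, 0.5):.1f} m")
```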
In the presented work, the Voyantic Tagformance commercial test bench [25] was used to measure the read range.
Helical RFID Tag's Robustness in Terms of Stretching
In order to measure the helical RFID tag's tolerance to elongation, the Voyantic Bench test was also used after performing some modifications in order to correspond to our application. More precisely, both the antenna extremities are attached to a basic textile filament that is wound around two spools. As shown in Figure 3, the spools' rotation, clockwise and counter clockwise, allows for the application of an elongation on the tag. The read range is then measured for each considered elongation.
Helical RFID Tag's Reflection Coefficient Γ and Its Radiation Pattern
After optimization, the helical RFID tag's geometrical parameters were: h = 50 mm and N = 42, in addition to the fixed ones given in Table 1. The reflection coefficient obtained from simulation is shown in Figure 4. It can be observed that the tag's antenna exhibited a minimum value of the reflection coefficient Γ of −6.27 dB at 865 MHz. The radiation pattern obtained by simulation is shown in Figure 5, where the antenna is positioned along the z-axis and has a maximum gain of 1.27 dB. It can be seen that the radiation pattern was omnidirectional in the xOy plane, which is identical to a half-wave dipole's radiation pattern. Moreover, through the obtained axial ratio (AR) shown in Figure 6, defined as E_θ/E_ϕ = 35.8 dB for the main lobe (E_θ and E_ϕ being the orthogonal components of the radiated electric field), the antenna is elliptically polarized with a vertical major axis [23]. Figure 7a presents the fabricated textile yarn obtained from the modified E-Thread® assembling process. The spool of the textile filament is composed of helical RFID tags, which are cascaded. Note that in practice, each tag can be cut at the appropriate length in order to be operational at the desired frequency. One helical RFID tag was isolated from the spool by cutting at the length that allowed it to have a resonance frequency in the UHF RFID band. The obtained RFID helical tag is shown in Figure 7b and has the following geometrical parameters: h = 47.5 mm and N = 40, in addition to the ones given in Table 1. An error of 5% can be observed regarding the height, which is due to the fact that in the simulation the material properties are known with a certain imprecision and the pitch s is not ideal. Thus, the helical tag's length has to be adjusted after fabrication.
Measured Read Range of the Helical RFID Tag
Considering that the RFID reader has an EIRP of 3.28 W and the IC has a threshold power P th = −20 dBm, the measured read range and the one deduced from the simulation using Equation (2) are shown in Figure 8. It was shown that the helical RFID tag exhibited a maximum measured read range of 10.6 m at the frequency of 865 MHz. Moreover, the RFID helical tag exhibited a wide band behavior as it can be operational in the U.S. UHF RFID band (902 − 928) MHz with a read range of 9 m. The measured result is coherent with respect to the simulation as the maximum read range obtained by the simulation was 11.3 m at 865 MHz. It is also worth remarking that the gain value of the antenna helped to compensate for the transmission coefficient and allowed a read range to be obtained closer to that of the current E-Thread solution (12 m). Moreover, it can be remarked that compared to the simulation, a wider frequency bandwidth was obtained in the experiment, which is very advantageous for an applicative scenario. The difference in the results may be explained by the manufacturing process (some inaccuracies in the dimensions and the permittivity values of materials), which does not allow for an exact fit with the dimensions employed in the simulation.
Evaluation of the Helical RFID Tag's Robustness in Terms of Stretching
The impact of the tag's elongation on the read range was measured and the results are presented in Figure 9. At the initial state (without elongation) for an antenna having the total length of 9.5 cm, the maximum read range was 11 m at 865 MHz, which is higher than the previously shown result. This small difference may be attributed to the fact that in the previous measurement, the antenna was slightly bent; this also shows the impact that the curvature will have for a tag in wire form. It can also be observed that up to a length of 10 cm, the helical RFID tag's read range is maintained at the frequency of interest. However, beyond this length, the resonance frequency is shifted to lower frequencies, which is coherent with the increase in the length of an antenna. At the maximum considered length of 10.6 cm, the tag was still readable at a range of 9 m (18% of loss) at the frequency of interest. From these measurements, the robustness of the proposed antenna was confirmed in terms of the read range performance as well as the structural aspect of the textile material.
Conclusions
In this paper, a helical RFID tag was designed to be integrated into a textile yarn using the E-Thread® technology. The simulation results showed that fixing parameters such as the pitch s and the diameter D makes the impedance matching process complex, due to the strong impact these parameters have on the helical antenna input impedance. However, the tag's read range may be improved to reach a value close to the one obtained with the current solution by ensuring an antenna gain that compensates for the reflection coefficient Γ. Another improvement could be adding lumped elements to achieve the impedance matching, with the inconvenience of a more complex manufacturing process. From the experimental measurements, the helical RFID tag exhibited a read range of 10.6 m, which is an improvement over the previous work [22]. Compared to the current solution of the RFID yarn using a half-wave dipole that has a read range of 12 m, the helical RFID tag offers a close read range with the advantage of being robust to elongation. Indeed, as the experiments have demonstrated, up to an elongation of 10% from the initial length, the helical RFID tag is still readable at 9 m.
The presented helical RFID tag may be used in a wide range of applications. The capabilities of the helical RFID tag could also be expanded beyond the classical identification purposes to some other functionalities, for example, using the antenna elasticity in order to measure strain deformation and thus the textile helical RFID tag becomes a sensor. | 5,009.2 | 2021-11-22T00:00:00.000 | [
"Materials Science"
] |
Ultrastable cellulosome-adhesion complex tightens under load
Challenging environments have guided nature in the development of ultrastable protein complexes. Specialized bacteria produce discrete multi-component protein networks called cellulosomes to effectively digest lignocellulosic biomass. While network assembly is enabled by protein interactions with commonplace affinities, we show that certain cellulosomal ligand–receptor interactions exhibit extreme resistance to applied force. Here, we characterize the ligand–receptor complex responsible for substrate anchoring in the Ruminococcus flavefaciens cellulosome using single-molecule force spectroscopy and steered molecular dynamics simulations. The complex withstands forces of 600–750 pN, making it one of the strongest bimolecular interactions reported, equivalent to half the mechanical strength of a covalent bond. Our findings demonstrate force activation and inter-domain stabilization of the complex, and suggest that certain network components serve as mechanical effectors for maintaining network integrity. This detailed understanding of cellulosomal network components may help in the development of biocatalysts for production of fuels and chemicals from renewable plant-derived biomass.
Cellulosomes are protein networks designed by nature to degrade lignocellulosic biomass 1. These networks comprise intricate assemblies of conserved subunits including catalytic domains, scaffold proteins, carbohydrate binding modules (CBMs), cohesins (Cohs), dockerins (Docs) and X-modules (XMods) of unknown function. Coh:Doc pairs form complexes with high affinity and specificity 2, and provide connectivity to a myriad of cellulosomal networks with varying Coh:Doc network topology 3-5. The most intricate cellulosome known to date is produced by Ruminococcus flavefaciens (R.f.) 6,7 and contains several primary and secondary scaffolds along with over 220 Doc-bearing protein subunits 8.
The importance of cellulolytic enzymes for the production of renewable fuels and chemicals from biomass has highlighted an urgent need for improved fundamental understanding of how cellulosomal networks achieve their impressive catalytic activity 9 . Two of the mechanisms known to increase the catalytic activity of cellulosomes are proximity and targeting effects 10 . Proximity refers to the high local concentration of enzymes afforded by incorporation into nanoscale networks, while targeting refers to specific binding of cellulosomes to substrates. Protein scaffolds and CBM domains are both critical in this context as they mediate interactions between comparatively large bacterial cells and cellulose particles. As many cellulosomal habitats (for example, cow rumen) exhibit strong flow gradients, shear forces will accordingly stress bridging scaffold components mechanically in vivo. Protein modules located at stressed positions within these networks should therefore be preselected for high mechanostability. However, thus far very few studies on the mechanics of carbohydrate-active proteins or cellulosomal network components have been reported 11 .
In the present study we sought to identify cellulosomal network junctions with maximal mechanical stability. We chose an XMod-Doc:Coh complex responsible for maintaining bacterial adhesion to cellulose in the rumen. The complex links the R. flavefaciens cell wall to the cellulose substrate via two CBM domains located at the N-terminus of the CttA scaffold, as shown in Fig. 1a. The crystal structure of the complex solved by X-ray crystallography 12 is shown in Fig. 1b. XMod-Doc tandem dyads such as this one are a common feature in cellulosomal networks. Bulk biochemical assays on XMod-Docs have demonstrated that XMods improve Doc solubility and increase biochemical affinity of Doc:Coh complex formation 13 . Crystallographic studies conducted on XMod-Doc:Coh complexes have revealed direct contacts between XMods and their adjacent Docs 12,14 . In addition, many XMods (for example, PDB 2B59, 1EHX, 3PDD) have high b-strand content and fold with N-and C-termini at opposite ends of the molecule, suggestive of robust mechanical clamp motifs at work 15,16 . These observations all suggest a mechanical role for XMods. Here we perform AFM single-molecule force spectroscopy experiments and steered molecular dynamics simulations to understand the mechanostability of the XMod-Doc:Coh cellulosomal ligand-receptor complex. We conclude that the high mechanostability we observe originates from molecular mechanisms, including stabilization of Doc by the adjacent XMod domain and catch bond behaviour that causes the complex to increase in contact area on application of force.
Results and Discussion
Single-molecule experiments. We performed single-molecule force spectroscopy (SMFS) experiments with an atomic force microscope (AFM) to probe the mechanical dissociation of XMod-Doc:Coh. Xylanase (Xyn) and CBM fusion domains on the XMod-Doc and Coh modules, respectively, provided identifiable unfolding patterns permitting screening of large data sets of force-distance curves 17-19. Engineered cysteines and/or peptide tags on the CBM and Xyn marker domains were used to covalently immobilize the binding partners in a site-specific manner to an AFM cantilever or cover glass via poly(ethylene glycol) (PEG) linkers. The pulling configuration with Coh-CBM immobilized on the cantilever is referred to as configuration I, as shown in Fig. 1c. The reverse configuration with Coh-CBM on the cover glass is referred to as configuration II. In a typical experimental run we collected about 50,000 force extension traces from a single cantilever. We note that the molecules immobilized on the cantilever and glass surfaces were stable over thousands of pulling cycles. We sorted the data by first searching for contour length increments that matched our specific xylanase and CBM fingerprint domains. After identifying these specific traces (Fig. 2a), we measured the loading rate dependency of the final Doc:Coh ruptures based on bond history. To assign protein subdomains to the observed unfolding patterns, we transformed the data into contour length space using a freely rotating chain model with quantum mechanical corrections for peptide backbone stretching (QM-FRC, Supplementary Note 1, Supplementary Fig. 1) 20,21. The fit-parameter-free QM-FRC model describes protein stretching at forces >200 pN more accurately than the commonly used worm-like chain (WLC) model 20,22. The resulting contour length histogram is shown in Fig. 2b. Peak-to-peak distances in the histogram represent contour length increments of unfolded protein domains. Assuming a length per stretched amino acid of 0.365 nm and accounting for the folded length of each subdomain, we compared the observed increments to the polypeptide lengths of individual subdomains of the Xyn-XMod-Doc and Coh-CBM fusion proteins. Details on contour length estimates and domain assignments are shown in Supplementary Table 1.
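The fingerprint assignment can be illustrated with a small calculation: the expected contour-length increment of an unfolding domain is its residue count times 0.365 nm minus the folded length it already contributed. The residue counts and folded lengths below are rough illustrative numbers, not the values from Supplementary Table 1.

```python
# Expected contour-length increment for an unfolding protein domain.
L_AA = 0.365   # nm added per stretched amino acid (value used in the text)

def increment(n_residues: int, folded_length_nm: float) -> float:
    """Contour length gained when a folded domain of n_residues unfolds."""
    return n_residues * L_AA - folded_length_nm

# Illustrative inputs only (approximate domain sizes, not the paper's table):
for n_res, folded in ((150, 3.5), (250, 4.5)):
    print(f"{n_res} residues, folded {folded} nm -> +{increment(n_res, folded):.0f} nm")
```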
Unfolding patterns in configuration I showed PEG stretching followed by a three-peaked Xyn fingerprint (Fig. 2a, top trace, green), which added 90 nm of contour length to the system. Xyn unfolding was followed by CBM unfolding at ≈150 pN, with 55 nm of contour length added. Finally, the XMod-Doc:Coh complex dissociated at an ultra-high rupture force of ≈600 pN. The loading rate dependence of the final rupture event for curves of subtype 1 is plotted in Fig. 2c (blue). The measured complex rupture force distributions are shown in Supplementary Fig. 2.
Less frequently (35-40% of traces) we observed a two-step dissociation process wherein the XMod unfolded before Doc:Coh rupture, as shown in Fig. 2a (middle trace, orange). In these cases, the final dissociation exhibited a much lower rupture force (≈300 pN) than the preceding XMod unfolding peak, indicating that the strengthening effect of XMod was lost and XMod was no longer able to protect the complex from dissociation at high force. The loading rate dependency of Doc:Coh rupture occurring immediately following XMod unfolding is shown in Fig. 2c (grey).
In configuration II (Fig. 2a, bottom trace), with the Xyn-XMod-Doc attached to the cantilever, the xylanase fingerprint was lost after the first few force extension traces acquired in the data set. This indicated the Xyn domain did not refold within the timescale of the experiment once unfolded, consistent with prior work 17,18 . CBM and XMod unfolding events were observed repeatedly throughout the series of acquired force traces in both configurations I and II, indicating these domains were able to refold while attached to the cantilever over the course of the experiment.
We employed the Bell-Evans model 23 (Supplementary Note 2) to analyse the final rupture of the complex through the effective distance to the transition state (Δx) and the natural off-rate (k_off). The fits to the model yielded values of Δx = 0.13 nm and k_off = 7.3 × 10⁻⁷ s⁻¹ for an intact XMod, and Δx = 0.19 nm and k_off = 4.7 × 10⁻⁴ s⁻¹ for the 'shielded' rupture following XMod unfolding (Fig. 2c). These values indicate that the distance to the transition state is increased following XMod unfolding, reflecting an overall softening of the binding interface. Distances to the transition state observed for other ligand-receptor pairs are typically on the order of ≈0.7 nm (ref. 17). The extremely short Δx of 0.13 nm observed here suggests that mechanical unbinding for this complex is highly coordinated. We further analysed the unfolding of XMod in the Bell-Evans picture and found values of Δx = 0.15 nm and k_off = 2.6 × 10⁻⁶ s⁻¹. The loading rate dependence for this unfolding event is shown in Supplementary Fig. 3. The exceptionally high rupture forces measured experimentally (Fig. 2) are hugely disproportionate to the XMod-Doc:Coh biochemical affinity, which at K_D ≈ 20 nM (ref. 12) is comparable to typical antibody-antigen interactions. Antibody-antigen interactions, however, will rupture at only ≈60 pN at similar loading rates 24, while bimolecular complexes found in muscle exposed to mechanical loading in vivo will rupture at ≈140 pN (ref. 25). Trimeric titin-telethonin complexes also found in muscle exhibit unfolding forces around 700 pN (ref. 26), while Ig domains from cardiac titin will unfold at ≈200 pN (ref. 27). The XMod-Doc:Coh ruptures reported here fell in a range from 600 to 750 pN at loading rates ranging from 10 to 100 nN s⁻¹. At around half the rupture force of a covalent gold-thiol bond 28, these bimolecular protein rupture forces are, to the best of our knowledge, among the highest of their kind ever reported. The covalent bonds in this system are primarily peptide bonds in the proteins and C-C and C-O bonds in the PEG linkers. These are significantly more mechanically stable than the quoted gold-thiol bond rupture force (≈1.2 nN) (ref. 29) and fall in a rupture force range >2.5 nN at similar loading rates. Therefore, breakage of covalent linkages under our experimental conditions is highly unlikely. We note that the high mechanostability observed here is not the result of fusing the proteins to the CBM or Xyn domains. The covalent linkages and pulling geometry are consistent with the wild-type complex and its dissociation pathway. In vivo, the Coh is anchored to the peptidoglycan cell wall through its C-terminal sortase motif. The XMod-Doc is attached to the cellulose substrate through two N-terminal CBM domains. By pulling the XMod-Doc through an N-terminal Xyn fusion domain, and the Coh through a C-terminal CBM, we established an experimental pulling geometry that matches loading of the complex in vivo. This pulling geometry was also used in all simulations. The discontinuity between its commonplace biochemical affinity and remarkable resistance to applied force illustrates how this complex is primed for mechanical stability and highlights differences in the unbinding pathway between dissociation at equilibrium and dissociation induced mechanically along a defined pulling coordinate.
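The fitted Bell-Evans parameters can be checked against the measured forces. In the Bell-Evans picture the most probable rupture force is F* = (k_B T/Δx)·ln(r·Δx/(k_off·k_B T)) for loading rate r; a minimal sketch with the fitted Δx = 0.13 nm and k_off = 7.3 × 10⁻⁷ s⁻¹ and the experimental loading-rate range gives forces in the observed 600-750 pN window.

```python
import math

KBT = 4.11  # pN*nm, thermal energy at room temperature

def most_probable_force(loading_rate_pN_s: float, dx_nm: float, koff_s: float) -> float:
    """Bell-Evans most probable rupture force F* = (kT/dx) * ln(r*dx/(koff*kT))."""
    return (KBT / dx_nm) * math.log(loading_rate_pN_s * dx_nm / (koff_s * KBT))

# Intact XMod-Doc:Coh complex (dx = 0.13 nm, koff = 7.3e-7 1/s, from the fits)
for rate in (1e4, 1e5):          # 10 and 100 nN/s expressed in pN/s
    print(f"{rate:.0e} pN/s -> {most_probable_force(rate, 0.13, 7.3e-7):.0f} pN")
```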
Steered molecular dynamics. To elucidate the molecular mechanisms at play that enable this extreme mechanostability, we carried out all-atom steered molecular dynamics (SMD) simulations. The Xyn and CBM domains were not modelled, to keep the simulated system small and reduce the usage of computational resources. This approximation was reasonable as we have no indication that these domains significantly affect the XMod-Doc:Coh binding strength 30. After equilibrating the crystal structure 12, the N-terminus of XMod-Doc was harmonically restrained while the C-terminus of Coh was pulled away at constant speed. The force applied to the harmonic pulling spring was stored at each time step. We tested pulling speeds of 0.25, 0.625 and 1.25 Å ns⁻¹, and note that the slowest simulated pulling speed was ≈4,000 times faster than our fastest experimental pulling speed of 6.4 μm s⁻¹. This difference is considered not to affect the force profile, but it is known to account for the scale difference in force measured by SMD and AFM 31,32.
SMD results showed the force increased with distance until the complex ruptured for all simulations. At the slowest pulling speed of 0.25 Å ns⁻¹ the rupture occurred at a peak force of ≈900 pN, as shown in Supplementary Fig. 4 and Supplementary Movie 1. We analysed the progression and prevalence of hydrogen-bonded contacts between the XMod-Doc and Coh domains to identify key residues in contact throughout the entire rupture process and particularly immediately before rupture. These residues are presented in Fig. 3a,c,d and Supplementary Figs 5,6. The simulation results clearly reproduced key hydrogen bonding contacts previously identified 12 as important for Doc:Coh recognition (Supplementary Fig. 5). The main interacting residues are shown in Fig. 3a,b. Both Coh and Doc exhibit a binding interface consisting of a hydrophobic centre (grey) surrounded by a ring of polar (green) and charged residues (blue, positive; red, negative). This residue pattern suggests the hydrophilic side chains protect the interior hydrophobic core from attack by water molecules, compensating for the flat binding interface that lacks a deep pocket. The geometry suggests a penalty to unbinding that stabilizes the bound state. Further, we analysed the contact surface areas of interacting residues (Fig. 3b-e). The total contact area was found to increase due to rearrangement of the interacting residues when the complex is mechanically stressed, as shown in Fig. 3e and Supplementary Movie 2. Doc residues in the simulated binding interface clamped down on Coh residues upon mechanical loading, resulting in increased stability and decreased accessibility of water into the hydrophobic core of the bound complex (Fig. 3b). These results suggest that a catch bond mechanism is responsible for the remarkable stability 33 under force and provide a molecular mechanism which the XMod-Doc:Coh complex uses to summon mechanical strength when needed, while still allowing relatively fast assembly and disassembly of the complex at equilibrium. The residues that increase most in contact area (Fig. 3c,d) present promising candidates for future mutagenesis studies.
Among the 223 Doc sequences from R. flavefaciens, six subfamilies have been explicitly identified using bioinformatics approaches 8. The XMod-Doc investigated here belongs to the 40-member Doc family 4a. A conserved feature of these Doc modules is the presence of three sequence inserts that interrupt the conserved duplicated F-hand motif Doc structure. In our system, these Doc sequence inserts make direct contacts with XMod in the crystallized complex (Fig. 1) and suggest an interaction between XMod and Doc that could potentially propagate to the Doc:Coh binding interface. To test this, an independent simulation was performed to unfold XMod (Fig. 4). The harmonic restraint was moved to the C-terminus of XMod so that force was applied from the N- to C-terminus of XMod only, while leaving Doc and Coh unrestrained. The results (Fig. 4b) showed XMod unfolded at forces slightly higher than but similar to the XMod-Doc:Coh complex rupture force determined from the standard simulation at the same pulling speed. This suggested XMod unfolding before Doc:Coh rupture was not probable, but could be observed on occasion due to the stochastic nature of domain unfolding. This was consistent with experiments, where XMod unfolding was observed in ≈35-40% of traces. Furthermore, analysis of the H-bonding between Doc and XMod (Fig. 4d, red) indicated loss of contact as XMod unfolded, dominated by contact loss between the three Doc insert sequences and XMod. Interestingly, XMod unfolding clearly led to a decrease in H-bonding between Doc and Coh at a later stage (≈200 ns), well after XMod had lost most of its contact with Doc, even though no force was being applied across the Doc:Coh binding interface. This provided evidence for direct stabilization of the Doc:Coh binding interface by XMod. As shown in Fig. 4e, the root mean squared deviation (RMSD) of Doc increased throughout the simulation as XMod unfolded. Coh RMSD remained stable until it started to lose H-bonds with Doc. Taken together this suggests that, as XMod unfolded, Coh and Doc became more mobile and lost interaction strength, potentially explaining the increase in Δx from 0.13 to 0.19 nm on unfolding of XMod in the experimental data sets. Apparently the XMod is able to directly stabilize the Doc:Coh interface, presumably through contact with Doc insert sequences that then propagate this stabilizing effect to the Doc:Coh binding interface.
In summary, we investigated an ultrastable XMod-Doc:Coh complex involved in bacterial adhesion to cellulose. While previously the role of XMod functioning in tandem XMod-Doc dyads was unclear 12,14 , we show that XMod serves as a mechanical stabilizer and force-shielding effector subdomain in the ultrastable ligand-receptor complex. The Doc:Coh complex presented here exhibits one of the most mechanically robust protein-protein interactions reported thus far, and points towards new mechanically stable artificial multi-component biocatalysts for industrial applications, including production of second-generation biofuels.
Methods
Site-directed mutagenesis. Site-directed mutagenesis of R. flavefaciens strain FD1 chimeric cellulosomal proteins. A pET28a vector containing the previously cloned R. flavefaciens CohE from ScaE fused to cellulose-binding module 3a (CBM3a) from C. thermocellum, and a pET28a vector containing the previously cloned R. flavefaciens XMod-Doc from the CttA scaffoldin fused to the XynT6 xylanase from Geobacillus stearothermophilus 12 were subjected to QuikChange mutagenesis 34 to install the following mutations: A2C in the CBM and T129C in the xylanase, respectively.
For the construction of the native configuration of the CohE-CBM A2C fusion protein, Gibson assembly 35 was used. For further analysis, CohE-CBM A2C was modified with a QuikChange PCR 36 to replace the two cysteines (C2 and C63) in the protein with alanine and serine (C2A and C63S). All mutagenesis products were confirmed by DNA sequencing analysis.
The XynT6-XDoc T129C was constructed using the following primers:
5′-acaaggaaggtaagccaatggttaatgaatgcgatccagtgaaacgtgaac-3′
5′-gttcacgtttcactggatcgcattcattaaccattggcttaccttccttgt-3′
The CBM-CohE A2C was constructed using the following primers: The CohE-CBM C2A C63S was constructed using the following phosphorylated primers:
5′-ccgaatgccatggccaatacaccgg-3′
5′-cagaccttctggagtgaccatgctgc-3′
Expression and purification of Xyn-XMod-Doc. The T129C Xyn-XMod-Doc protein was expressed in E. coli BL21 cells in kanamycin-containing media that also contained 2 mM calcium chloride, overnight at 16°C. After harvesting, cells were lysed using sonication. The lysate was then pelleted, and the supernatant fluids were applied to a Ni-NTA column and washed with tris-buffered saline (TBS) buffer containing 20 mM imidazole and 2 mM calcium chloride. The bound protein was eluted using TBS buffer containing 250 mM imidazole and 2 mM calcium chloride. The solution was dialysed with TBS to remove the imidazole, and then concentrated using an Amicon centrifugal filter device and stored in 50% (v/v) glycerol at −20°C. The concentrations of the protein stock solutions were determined to be ≈5 mg ml⁻¹ by absorption spectrophotometry.
Expression and purification of Coh-CBM. The Coh-CBM C2A, C63S fusion protein was expressed in E. coli BL21(DE3) RIPL in kanamycin- and chloramphenicol-containing ZYM-5052 media 37 overnight at 22°C. After harvesting, cells were lysed using sonication. The lysate was then pelleted, and the supernatant fluids were applied to a Ni-NTA column and washed with TBS buffer. The bound protein was eluted using TBS buffer containing 200 mM imidazole. Imidazole was removed with a polyacrylamide gravity flow column. The protein solution was concentrated with an Amicon centrifugal filter device and stored in 50% (v/v) glycerol at −80°C. The concentrations of the protein stock solutions were determined to be ~5 mg ml⁻¹ by absorption spectrophotometry.
Sample preparation. In sample preparation and single-molecule measurements calcium supplemented TBS buffer (Ca-TBS) was used (25 mM TRIS, 72 mM NaCl, 1 mM CaCl 2 , pH 7.2). Cantilevers and cover glasses were functionalized according to previously published protocols 18,38 . In brief, cantilevers and cover glasses were cleaned by UV-ozone treatment and piranha solution, respectively. Levers and glasses were silanized using (3-aminopropyl)-dimethyl-ethoxysilane (APDMES) to introduce surface amine groups. Amine groups on the cantilevers and cover glasses were subsequently conjugated to a 5 kDa NHS-PEG-Mal linker in sodium borate buffer. Disulfide-linked dimers of the Xyn-XMod-Doc proteins were reduced for 2 h at room temperature using a TCEP disulfide reducing bead slurry. The protein/ bead mixture was rinsed with Ca-TBS measurement buffer, centrifuged at 850 r.c.f. for 3 min, and the supernatant was collected with a micropipette. Reduced proteins were diluted with measurement buffer (1:3 (v/v) for cantilevers, and 1:1 (v/v) for cover glasses), and applied to PEGylated cantilevers and cover glasses for 1 h. Both cantilevers and cover glasses were then rinsed with Ca-TBS to remove unbound proteins and stored under Ca-TBS before force spectroscopy measurements.
Site-specific immobilization of the Coh-CBM-ybbR fusion proteins to previously PEGylated cantilevers or coverglasses was carried out according to previously published protocols 39 .
Single-molecule force spectroscopy measurements. SMFS measurements were performed on a custom-built AFM 40 controlled by an MFP-3D controller from Asylum Research running custom-written Igor Pro (Wavemetrics) software. Cantilever spring constants were calibrated using the thermal noise/equipartition method 41 . The cantilever was brought into contact with the surface and withdrawn at constant speeds ranging from 0.2 to 6.4 μm s⁻¹. An x-y stage was actuated after each force-extension trace to expose the molecules on the cantilever to a new molecule at a different surface location with each trace. Typically 20,000-50,000 force-extension curves were obtained with a single cantilever in an experimental run of 18-24 h. A low molecular density on the surface was used to avoid formation of multiple bonds. While the raw data sets contained a majority of unusable curves due to lack of interactions or nonspecific adhesion of molecules to the cantilever tip, select curves showed single-molecule interactions. We filtered the data using a combination of automated data processing and manual classification by searching for contour length increments that matched the lengths of our specific protein fingerprint domains: Xyn (~89 nm) and CBM (~56 nm). After identifying these specific traces, we measured the loading rate dependency of the final Doc:Coh ruptures based on bond history.
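As a rough illustration of the fingerprint-based curve selection described above, the sketch below (Python) assumes per-curve lists of contour-length increments are already available and uses an arbitrary matching tolerance; it keeps only traces that show both the Xyn (~89 nm) and CBM (~56 nm) increments.

```python
# Minimal sketch of the fingerprint-based trace filter described above.
# The tolerance value and data layout are assumptions for illustration only.
import numpy as np

XYN_NM, CBM_NM = 89.0, 56.0   # expected contour-length increments (nm)
TOL_NM = 5.0                  # hypothetical matching tolerance

def has_fingerprint(increments_nm):
    """Return True if a trace contains increments matching both fingerprints."""
    inc = np.asarray(increments_nm, dtype=float)
    match_xyn = np.any(np.abs(inc - XYN_NM) <= TOL_NM)
    match_cbm = np.any(np.abs(inc - CBM_NM) <= TOL_NM)
    return match_xyn and match_cbm

# Example: keep only curves whose increment lists show both domains.
curves = {"curve_001": [12.3, 55.8, 88.7], "curve_002": [31.0, 90.5]}
selected = [name for name, inc in curves.items() if has_fingerprint(inc)]
print(selected)  # -> ['curve_001']
```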
Data analysis. Data were analysed using previously published protocols 17,18,22 . Force extension traces were transformed into contour length space using the QM-FRC model with bonds of length b = 0.11 nm connected by a fixed angle γ = 41°, and assembled into barrier position histograms using cross-correlation. A detailed description of the contour length transformation can be found in Supplementary Note 1 and Supplementary Fig. 1.
For the loading rate analysis, the loading rate at the point of rupture was extracted by applying a line fit to the force vs time trace in the immediate vicinity before the rupture peak. The loading rate was determined from the slope of the fit. The most probable rupture forces and loading rates were determined by applying Gaussian fits to histograms of rupture forces and loading rates at each pulling speed.
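The following sketch illustrates one way the two steps just described could be implemented in Python; the fit window length, bin count and data layout are illustrative assumptions, not details of the original analysis code.

```python
# Sketch of the loading-rate extraction (line fit just before rupture) and the
# Gaussian estimate of the most probable value from a histogram.
import numpy as np
from scipy.optimize import curve_fit

def loading_rate(time_s, force_pN, rupture_idx, window=200):
    """Slope of a line fit to force vs. time just before the rupture peak (pN/s)."""
    lo = max(0, rupture_idx - window)
    slope, _ = np.polyfit(time_s[lo:rupture_idx], force_pN[lo:rupture_idx], 1)
    return slope

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def most_probable(values, bins=30):
    """Gaussian fit to a histogram; returns the peak position (most probable value)."""
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), centers[np.argmax(counts)], np.std(values))
    popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
    return popt[1]
```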
Molecular dynamics simulations. The structure of the XMod-Doc:Coh complex had been solved by means of X-ray crystallography at 1.97 Å resolution and is available at the protein data bank (PDB:4IU3). A protonation analysis performed in VMD 42 did not suggest any extra protonation and all the amino-acid residues were simulated with standard protonation states. The system was then solvated, keeping also the water molecules present in the crystal structure, and the net charge of the protein and the calcium ions was neutralized using sodium atoms as counter ions, which were randomly arranged in the solvent. Two other systems, based on the aforementioned one, were created using a similar salt concentration to the one used in the experiments (75 mM of NaCl). This additional salt caused little or no change in SMD results. The overall number of atoms included in MD simulations varied from 300,000 in the majority of the simulations to 580,000 for the unfolding of the X-Mod.
The MD simulations in the present study were performed employing the NAMD molecular dynamics package 43,44 . The CHARMM36 force field 45,46 along with the TIP3 water model 47 was used to describe all systems. The simulations were done assuming periodic boundary conditions in the NpT ensemble, with the temperature maintained at 300 K and the pressure kept at 1 bar using Langevin dynamics for temperature and pressure coupling. A distance cut-off of 11.0 Å was applied to short-range, non-bonded interactions, whereas long-range electrostatic interactions were treated using the particle-mesh Ewald (PME) 48 method. The equations of motion were integrated using the r-RESPA multiple time step scheme 44 to update the van der Waals interactions every two steps and electrostatic interactions every four steps. The time step of integration was chosen to be 2 fs for all simulations performed. Before the MD simulations, all the systems were submitted to an energy minimization protocol for 1,000 steps. The first two nanoseconds of the simulations served to equilibrate the systems before the production runs, which varied from 40 to 450 ns in the 10 different simulations that were carried out. The equilibration step consisted of 500 ps of simulation where the protein backbone was restrained and 1.5 ns where the system was completely free and no restriction or force was applied. During the equilibration the initial temperature was set to zero and was constantly increased by 1 K every 100 MD steps until the desired temperature (300 K) was reached.
To characterize the coupling between Doc and Coh, we performed SMD simulations 49 of constant velocity stretching (SMD-CV protocol) employing three different pulling speeds: 1.25, 0.625 and 0.25 Å ns⁻¹. In all simulations, SMD was employed by restraining the position of one end of the XMod-Doc domain harmonically (center of mass of ASN5), and moving a second restraint point, at the end of the Coh domain (center of mass of GLY210), with constant velocity in the desired direction. The procedure is equivalent to attaching one end of a harmonic spring to the end of a domain and pulling on the other end of the spring. The force applied to the harmonic spring is then monitored during the time of the molecular dynamics simulation. The pulling point was moved with constant velocity along the z-axis, and due to the single anchoring point and the single pulling point the system is quickly aligned along the z-axis. Owing to the flexibility of the linkers, this approach reproduces the experimental set-up. All analyses of MD trajectories were carried out employing VMD 42 and its plug-ins. Secondary structures were assigned using the Timeline plug-in, which employs STRIDE criteria 50 . Hydrogen bonds were assigned based on two geometric criteria for every trajectory frame saved: first, the distance between acceptor and hydrogen should be < 3.5 Å; second, the angle between hydrogen-donor-acceptor should be < 30°. Surface contact areas of interacting residues were calculated employing Volarea 51 implemented in VMD. The area is calculated using a probe radius defined as an in silico rolling spherical probe that is screened around the area of Doc exposed to Coh and also the Coh area exposed to Doc.
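A minimal Python sketch of the stated geometric hydrogen-bond criterion is given below, applied to single-frame coordinates; it is meant only to make the two criteria explicit and is not the VMD implementation used in the paper.

```python
# Sketch of the geometric hydrogen-bond criterion quoted above, applied to
# single-frame coordinates (numpy arrays of shape (3,), in Å). Illustrative only.
import numpy as np

def is_hbond(donor, hydrogen, acceptor, d_max=3.5, angle_max_deg=30.0):
    """Acceptor-hydrogen distance < d_max and hydrogen-donor-acceptor angle < angle_max."""
    d_ha = np.linalg.norm(acceptor - hydrogen)
    v1 = hydrogen - donor
    v2 = acceptor - donor
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return (d_ha < d_max) and (angle < angle_max_deg)

# Example frame: donor N, its H, and a carbonyl O acceptor.
print(is_hbond(np.array([0.0, 0.0, 0.0]),
               np.array([1.0, 0.0, 0.0]),
               np.array([2.9, 0.3, 0.0])))  # -> True
```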
"Biology",
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Interferometric delay tracking for low-noise Mach-Zehnder-type scanning measurements
Precise delay control is of paramount importance in optical pump-probe measurements. Here, we report on a high-precision delay tracking technique for mechanical scanning measurements in a Mach-Zehnder interferometer configuration. The setup employs a 1.55-μm continuous-wave laser beam propagating along the interferometer arms. Sinusoidal phase modulation at 30 MHz, and demodulation of the interference signal at the fundamental frequency and its second harmonic, enables delay tracking with sampling rates of up to 10 MHz. At an interferometer arm length of 1 m, root-mean-square error values of the relative delay tracking below 10 attoseconds for both stationary and mechanically scanned (0.2 mm/s) operation are demonstrated. By averaging several scans, a precision of the delay determination better than 1 as is reached. We demonstrate this performance with a mechanical chopper periodically interrupting one of the interferometer arms, which opens the door to the combination of high-sensitivity lock-in detection with (sub-)attosecond-precision relative delay determination. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Introduction
Ultrafast pump-probe measurements such as attosecond streaking [1,2] or THz-TDS [3,4] require precise time-delay measurements, sometimes with a precision down to a few attoseconds [5]. In addition, a long scan range is necessary, for example to follow reaction dynamics in attosecond streaking or to study molecular rovibrational signals with high spectral resolution and accuracy, particularly in gaseous samples probed by time-domain spectroscopy. Therefore, it is mandatory to measure over a long time range with a sufficiently short time step, with high precision and repeatability [6].
The conventional method for delaying in time-resolved pump-probe experiments is moving a retro-reflector mounted on a mechanical delay stage. At each step the position is held and the corresponding signal is processed (step-scan technique), or the average of several scans with higher speeds is calculated (rapid-scan technique). The position of the stage is determined by a linear encoder of the delay stage typically with 20 nm precision or by utilizing a Michelson interferometer for more precise position detection down to a few nanometers. The step-scan technique is well-known and widely used in most pump-probe methods and its drawbacks have been deeply investigated [7]. An analysis of the noise sources in THz-TDS is given in [8]. Low frequency intensity noise has been addressed in several pump-probe measurements like in THz-TDS and can be suppressed by a lock-in amplifier (LIA), where the THz beam is modulated (most commonly amplitude modulation by mechanical chopping) and the local oscillator (probe signal) is measured after a nonlinear optical interaction. Typically, the intensity noise of the local oscillator can be suppressed by 100 dB after the demodulation and proper filtering [7].
In this paper we focus on the real-time measurement and reduction of delay time uncertainties, for example stochastic and periodic fluctuations [9] originating from different sources such as refractive index fluctuations of air, airflow turbulences caused by the chopper wheel, and periodic sampling errors from acoustic vibrations and nonlinear movement of the opto-mechanical elements in particular the linear stage.
Several efforts have been made to avoid moving mechanical components in pump-probe setups. One proposed solution is an acousto-optical delay line with 15 as precision over the available time-delay window of 6 ps [10]. Another technique, asynchronous dual-comb spectroscopy, is ideal for rapid scans with tremendous spectral resolution as the repetition rate of the oscillators determines the time-delay window (from 100 ps up to several nanoseconds), and the frequency difference of the two phase-locked oscillators the scanning speed. However, the theoretical time resolution is limited due to residual timing jitter between the two lasers. In state-of-the-art systems, the time-delay fluctuations could be reduced down to 45 fs by active electronic stabilization of the laser frequencies over a time-delay window of 1 ns [3]. Other advanced scan methods in THz-TDS are based on single-shot techniques which encode the pump induced temporal dynamics into the spatial profile of the probe beam, or into the temporal profile of the chirped optical probe beam [11]. The drawbacks are the limited time-delay window determined by the expanded beam size or the length of the chirped probe pulse. For precision Fourier-transform infrared spectrometry (FTIR) measurements, birefringent delay lines provide: attosecond precision for short delay scans [12,13], increased robustness and position accuracy and can be combined with an LIA as shown in [14].
Nevertheless, delay stages are still commonly used because of their simplicity, low cost and universality. A technical solution, widely implemented in commercial FTIR devices, consists of a pilot beam from a frequency-stabilized HeNe laser propagating through the Michelson interferometer of the FTIR. The interferometrically detected position fluctuations are then actively corrected by a piezo-actuated mirror. Timing jitter of less than 20 as can be achieved through application of an active interferometric delay stabilization scheme [5], mainly limited by the locking bandwidth and quality of the feedback loop. Furthermore, it is necessary to take into account the beam pointing instability caused by the piezo actuator, especially for long beam arms. In THz-TDS, active interferometric delay stabilization has been demonstrated to measure THz absorption spectra up to 5 THz (corresponding to 200 as time delay precision) with 2 fs timing jitter of the THz signal at a fixed signal maximum position over a 10 minute measurement interval [5,15]. In this paper, we propose and characterize an interferometric delay tracking (IDT) method for accurate and real-time delay corrections in pump-probe setups using conventional delay stages and heterodyne measurements with an LIA. We employ a commercially available interferometer for position measurements with precision down to a few pm. This interferometer was adapted to a Mach-Zehnder interferometer (MZI) type test setup, which can be implemented in many general pump-probe arrangements, provided that there is a common path between the pilot, pump and probe beams. Instead of active delay correction (usually limited in bandwidth to a few tens of kHz), the tracked position data (acquired at a rate of up to several hundreds of kHz) is used directly for the correction of the nominal delay values in the data processing. The reproducibility of the position measurements is characterized with and without the chopper wheel, demonstrating the robustness of the method and highlighting the noise contributions introduced by the chopper wheel due to generated airflow turbulences.
Optical layout
The experimental setup consists of an MZI as shown in Fig. 1. The two arms (approx. 1 m long each) are defined by 50/50 beam splitters. For mechanical scanning, a servo-motor-driven linear stage (LS110 from PI miCos GmbH) was included in the delay line. The maximum scan range amounts to 90 mm, corresponding to a ~300 ps time window. A commercially available Michelson interferometer for displacement measurements (SmarAct PicoScale [16]) was implemented for tracking the position of the delay line. The light source is a pig-tailed distributed-feedback (DFB) laser diode emitting a 1.55-µm wavelength continuous-wave (CW) beam, connected via a single-mode fiber to a compact, monolithic Michelson-type sensor head. Here, the laser light is split by a beam splitter cube and one of the outputs is reflected by a coated surface of the cube, serving as the reference arm. The other beam, forming the measurement arm of the Michelson interferometer, is guided to the tracked object, reflected and recombined with the reference beam in the cube. The resulting interference signal is guided back through the optical fiber, measured, and the electronic signal is then evaluated inside a controller to infer the object's displacement [16]. This configuration, tracking only the stage movement, serves as a reference for our measurement (channel 2, CH2).
Fig. 1 caption (partial): The Mach-Zehnder channel (channel 1, CH1) is operated in parallel to a commercial Michelson-type sensor head (channel 2, CH2), which monitors the position of a mirror mounted onto the delay stage. In CH1, a CW pilot laser output of 140 µW at 1.55 µm is amplified by an erbium-doped fiber amplifier (Thorlabs, EDFA100S), then split and combined by 50/50 beamsplitters (BS). One part of the interference signal is coupled into a fiber and sent back to the commercial position readout unit. A 3-port fiber circulator is used to separate the input and output beams. From the other side of the BS the signal is coupled directly onto an InGaAs photodiode (DET), which acts as a placeholder for any delay-dependent setup. A chopper wheel for lock-in detection can be implemented in the focus of a one-to-one telescope, and an attenuator can be inserted to reduce the power in one arm.
In the Mach-Zehnder configuration (channel 1, CH1), a fiber circulator is employed to separate the output laser beam from the detected interference signal of the same port. The laser diode delivers an output power of 140 μW to each channel, which is sufficient for the normal configuration (CH2), considering a single reflection off a metal mirror and coupling back to the fiber. For the MZI, considering pump-probe setups with high optical losses, this power level is insufficient for an accurate position determination. An Erbium-doped fiber amplifier (Thorlabs, EDFA100S) was implemented to boost the CW signal resulting in an output of 80 mW in front of the interferometer. After recombination, the interfering beams are attenuated to 420 μW and coupled into the detection arm of the fiber circulator for position measurement. The other output port of the MZI beam splitter is sent onto an InGaAs photodiode (DET) for independent monitoring of the interference signal.
Optionally, a mechanical chopper with a frequency up to 10 kHz can be placed into one arm to simulate LIA detection. With additional attenuation in one arm, we can investigate the sensitivity of the position measurements for asymmetric optical losses.
Measurement control and data acquisition
Time-delay determination is based on the sinusoidal phase-modulation interferometry technique [17]. The DFB laser diode injection current is modulated at 30 MHz, which imprints a wavelength modulation on the carrier light. After demodulation at 30 MHz and its second harmonic (60 MHz) in the built-in LIA of the commercial system, two sinusoidal signals are extracted, that are phase-shifted by 90 degrees and thus, are in quadrature and dependent on the target mirror position. The quadrature signals, plotted as Lissajous figures, describe a circle in the ideal case of equal intensities and optimum alignment. At least one of the quadrature signals always exhibits high sensitivity due to high steepness. Furthermore, the direction of the movement can be followed from the sign of the phase difference between the quadrature signals. This technique requires an unbalanced interferometer and the sensitivity depends on the signal strength and working range [18]. By increasing the delay between the interferometer arms (i.e., scanning with the delay stage), the phase changes between the quadrature signals rotating the position vector in the Lissajous curve with a periodicity corresponding to an optical path delay of 1.55 µm (laser wavelength). For delays longer than 1.55 µm, the number of periods is counted. In this way, a nominal resolution of the delay measurement well below 100 pm is reached [19]. Under normal ambient conditions, mechanical and air fluctuations, as well as fast acquisition times, limit the precision to a few nanometers [20].
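The sketch below illustrates, under simplifying assumptions (ideal, offset-free quadrature signals; one Lissajous cycle per wavelength of optical path), how a position trace could be reconstructed from the two demodulated signals in Python; the commercial controller's actual algorithm is more involved.

```python
# Minimal sketch of the quadrature-signal evaluation described above: the two
# demodulated signals are treated as sine/cosine components, the phase is
# unwrapped, and one full turn of the Lissajous circle is mapped to one
# wavelength of optical path delay. Offset/gain-imbalance corrections are omitted.
import numpy as np

LAMBDA_NM = 1550.0  # pilot laser wavelength

def optical_path_from_quadrature(q1, q2, wavelength_nm=LAMBDA_NM):
    """Unwrapped optical path delay (nm) from two quadrature signals."""
    phase = np.unwrap(np.arctan2(np.asarray(q1), np.asarray(q2)))
    return phase / (2.0 * np.pi) * wavelength_nm

# Example: a synthetic 3-wavelength linear scan is recovered correctly.
true_path = np.linspace(0.0, 3 * LAMBDA_NM, 2000)
q1 = np.sin(2 * np.pi * true_path / LAMBDA_NM)
q2 = np.cos(2 * np.pi * true_path / LAMBDA_NM)
rec = optical_path_from_quadrature(q1, q2)
print(np.max(np.abs((rec - rec[0]) - true_path)))  # ~0 (numerical noise only)
```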
In measurements including the mechanical chopper, periodically blocking one of the arms results in a repeated loss of the interferometric signal. Continuous position data acquisition is achieved by (i) ensuring that the optical delay variation within one chopping period is considerably less than half the wavelength (λ = 1.55 µm), corresponding to a single cycle of the Lissajous graph, (ii) triggering the data acquisition of the interferometer with the chopping frequency and (iii) (manually) optimizing the timing of the chopping relative to the data acquisition. For a continuous, linear scan of the optical delay with the stage velocity ν, condition (i) imposes an upper limit on ν given by ν ≤ λ·f_chop/(2n), where the factor of 2 takes into account traversing the optical path twice (in the delay line), f_chop is the chopping frequency, and n is the number of points per wavelength. For the scans in this paper, n ≥ 19.
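Assuming the limit takes the reconstructed form ν ≤ λ·f_chop/(2n), the short check below uses the parameter values quoted later in the paper and reproduces the 0.2 mm/s scanning speed that was actually used.

```python
# Numeric check of the stage-velocity limit in the reconstructed form
# v_max = lambda * f_chop / (2 n), with lambda = 1.55 um, f_chop = 5 kHz, n = 19.
wavelength_m = 1.55e-6
f_chop_hz = 5e3
n_points = 19

v_max = wavelength_m * f_chop_hz / (2 * n_points)
print(f"v_max = {v_max * 1e3:.3f} mm/s")  # ~0.204 mm/s, consistent with the 0.2 mm/s scans
```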
The chopped photodiode signal is fed into a LIA or into an independent analog-to-digital converter (ADC) with 24-bit nominal resolution. Considering that the commercial LIA outputs have 14-bit quasi-analog outputs, the latter is advantageous for improving the SNR by reducing the digitization error. The inner clocks of the two ADCs (for position and signal detection) are synchronized by a 1 MHz reference clock signal. This common clock ensures the correct pairing of the position data and the corresponding signal data. The data acquisition of both devices is triggered simultaneously. All output signals, namely raw quadrature signals, calculated position from the position readout unit, and signal data from the 24-bit ADC output, along with the corresponding time logs from both ADCs are transferred to the measurement control PC, where the data processing is performed (Fig. 1).
Data processing
Although the clock and start time are synchronized, there can be different delays for the devices from the trigger to the first corresponding data point. When using the LIA the averaging/filtering leads to a delay by design. By shifting one of the time axes by a single constant value for forward and backward scans, the correct shift can be found easily as it will overlay the signals scanned in the two directions optimally.
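A minimal sketch of such a constant-shift search is shown below (illustrated on the position axis with synthetic data); the shift range and step are arbitrary assumptions.

```python
# Sketch of the constant-shift search described above: the backward scan is
# interpolated onto the forward axis for each candidate shift, and the shift
# minimizing the RMS difference between the two signals is kept.
import numpy as np

def best_shift(pos_fwd, sig_fwd, pos_bwd, sig_bwd, shifts):
    """Return the axis shift that best overlays forward and backward scans."""
    rms = []
    for s in shifts:
        sig_bwd_on_fwd = np.interp(pos_fwd, pos_bwd + s, sig_bwd)
        rms.append(np.sqrt(np.mean((sig_fwd - sig_bwd_on_fwd) ** 2)))
    return shifts[int(np.argmin(rms))]

# Example with synthetic sinusoidal scans offset by 12 nm.
pos = np.linspace(0, 5000, 4000)          # nm
sig = np.sin(2 * np.pi * pos / 775.0)     # one fringe per half wavelength
shift_found = best_shift(pos, sig, pos - 12.0, sig, np.linspace(-50, 50, 501))
print(shift_found)  # ~12 nm
```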
Results
Benchmarking measurements were performed for three different cases. First, using the MZI setup, the measured position fluctuations were corrected by the photodiode signal fluctuations for a fixed delay position. The main goal here was to demonstrate the capability of the system for highly sensitive delay measurements.
Second, for a nominally linear delay stage scan, the quality of the position signal is compared to the signal measured in the second interferometer output using an InGaAs photodiode (DET in Fig. 1) for both channels (CH1 and CH2). In this case, the effects of the chopping and intensity attenuation of one of the arms were investigated. The reproducibility of the measurements was investigated for 40 individual, 1-cm long, delay scans.
Third, the improvement of the precision with the number of averaged scans is scrutinized with a large number of short-range scans.
Delay time fluctuations at fixed position
Measuring the optical delay fluctuation at a fixed zero crossing (ZC) position demonstrates the sensitivity of the interferometric delay tracking method [21]. Furthermore, it opens a perspective towards real time optical phase measurements, like the carrier-envelope phase (CEP), if the MZI is implemented into a THz-TDS setup. Choosing a ZC position of the field amplitude in a TDS experiment, the signal fluctuations deriving from optical beam path delays can be corrected with high precision, with the remaining fluctuations originating from the CEP.
Here, we use the photodiode signal (DET) to correct the position data (CH1), which allows for a direct characterization of the method in nanometers. As we measure a complementary physical signal with the detector and the interferometer, a constant zero line is expected after the correction, affected only by the remaining noise. A 20 s long measurement was performed at a ZC position of the signal with a deviation of less than 10 nm, justifying a linear correction. The measured position fluctuations (red) are compared with the positions calculated from the photodiode signal (black), using just a scaling factor and setting the first point to zero. By subtraction we obtain the corrected curve (blue) around zero with a standard deviation of 1.36 nm delay, corresponding to a phase stability of 5.5 mrad at 1.55 µm over an observation time of 20 seconds. The phase data was recorded at a data acquisition rate of 156 kHz. The scan over 20 seconds is shown in Fig. 2(a) and a zoomed view covering an 80-ms window in Fig. 2(b). The fluctuations visible here are due to the servo feedback of the stage. The Fourier transforms before and after correction, yielding the power spectral density of the position fluctuations, are shown in Fig. 3, together with the frequency-integrated values. The external effects contributing to very slow noise (air fluctuations/some drifts) and noise up to several kHz can be reduced substantially, down to 1.36 nm RMS delay fluctuation, at a sampling rate of 156 kHz.
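The correction described here amounts to removing a scaled copy of the photodiode signal from the position data; the sketch below shows one possible single-scale-factor implementation in Python, not the actual analysis code.

```python
# Sketch of the single-scale-factor correction described above: the photodiode
# signal is converted to an equivalent displacement with one least-squares
# scaling factor, referenced to the first point, and subtracted from the
# interferometric position data. Illustrative only.
import numpy as np

def corrected_position(position_nm, pd_signal):
    """Residual position after removing the component tracked by the photodiode."""
    pos = np.asarray(position_nm, float) - position_nm[0]
    sig = np.asarray(pd_signal, float) - pd_signal[0]
    scale = np.dot(sig, pos) / np.dot(sig, sig)   # least-squares scaling factor
    residual = pos - scale * sig
    return residual, residual.std()
```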
Reproducibility of position for scanning operations
Depending on the application, it can be important to focus on the temporal and/or the spectral domain. The dynamic range and signal-to-noise ratio are important figures of merit in both domains, but the conversion between temporal- and spectral-domain values requires a careful treatment which is beyond the scope of this paper (e.g. the dynamic range in the spectral domain depends on the scanning range, the time step or the number of averages of full time-domain traces [22]). Furthermore, the impact of delay line noise becomes more essential for higher frequencies, because the same amount of jitter has a stronger effect on the signal where the steepness is higher [10]. In the following, we focus on a comparison of the position tracking measurement with the commercial Michelson-interferometer-based position readout (CH2/Ref.) and the MZI configuration (CH1).
Forty scans were recorded for each setting, pairing the photodiode signal (DET) data with the corresponding position data of CH1 and CH2, respectively. The quality of the measurement was verified in the time domain by comparing the deviation of the position of all zero-crossings of different scans in a 33-ps window (1 cm delay). All the scans were recorded with 0.2 mm/s scanning speed, 5 kHz chopping frequency, and 156 kHz sampling rate for the signal, while only one position point was measured during each chop with 1/156 ms acquisition time. The zero-crossing positions are those in which the interferometric signal (DET) crosses the mean value of its modulation. To determine each position, linear interpolation between the acquired points directly above and below the mean value is used. For each of the almost 13,000 crossings, the standard deviation of its position in the 40 scans is calculated (zero-crossing fluctuation).
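A possible Python implementation of this zero-crossing evaluation is sketched below; it assumes all scans yield the same number of crossings and ignores the detailed pairing of position and signal samples.

```python
# Sketch of the zero-crossing evaluation described above: crossings of the
# signal through its mean are located by linear interpolation between the two
# neighbouring samples, and the spread of each crossing position over repeated
# scans gives the repeatability. Illustrative only.
import numpy as np

def zero_crossings(position, signal):
    """Positions where the signal crosses its mean value (linear interpolation)."""
    s = np.asarray(signal, float) - np.mean(signal)
    idx = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    frac = -s[idx] / (s[idx + 1] - s[idx])
    return np.asarray(position)[idx] + frac * np.diff(position)[idx]

def zc_repeatability(scans):
    """Std of each crossing position over scans given as (position, signal) pairs."""
    zcs = np.array([zero_crossings(p, s) for p, s in scans])  # assumes equal counts
    return zcs.std(axis=0)
```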
In Fig. 4(a) a comparison between tracking only the stage movement (CH2/Ref.) and tracking the whole interferometer (CH1) is shown. As drifts within the interferometer affect the former, all scans were shifted in position such that the sinusoidal signals had the lowest deviation in phase in a 1-ps window [at 22 ps delay in Fig. 4(a)]. As expected, the chopper causes additional fluctuations and increases the mean repeatability of all ZCs from 62 to 127 as (18.6 to 38.2 nm). The corrected case, however, shows a dramatically improved repeatability, with 8.2 as and 11.2 as (2.46/3.36 nm) for the un-chopped and chopped case, respectively. The difference between chopped and un-chopped operation is very small, less than 3 as (the observed differences seem to be caused partially by different calibrations of the PicoScale device). Comparing these values to the stationary case, we notice an increase by almost a factor of two, reaching the instrument limit with these settings, due to the moving stage introducing additional fluctuations. Overall, the reproducibility of single scans with about 10 as per 2 points (to calculate the ZC) is remarkable.
In this measurement, the power coupled back to the PicoScale detector was attenuated from 33 mW to 420 µW. To test the performance at even further attenuation and also asymmetric power in the interferometer arms, one arm was attenuated by a neutral density filter by a factor of 20. The comparison only for CH1 between the attenuated case and the ideal case is shown in Fig. 4(b). The mean values are 14 and 16 as (4.2 and 4.8 nm) which is only slightly worse than without the filter. A significant contribution to this change is attributed to the reduction of the signal-to-noise ratio of the sinusoidal signal and not due to a reduction of the quality of the position data.
Long-term stability
The results presented so far were based on single scans with data and position acquired in 1/156 ms per point. While the manufacturer claims down to a few pm for longer integration times, it is informative to test how many scans can be averaged to further improve the reproducibility between averaged scans, evaluated in the same way as in Fig. 4(a). For these scans the velocity was 0.1 mm/s, with a delay of 600 µm containing 374 ZCs, and the chopper frequency was 5 kHz. Figure 5 shows an Allan-deviation-like measurement. For each scan the ZC positions are evaluated. Each ZC position is then averaged over a number of consecutive scans. This procedure is repeated for the same number of scans recorded immediately thereafter. The standard deviation for all 374 ZCs of these two samples is calculated, and the average over all 374 forms the Allan deviation of these two samples. In Fig. 5 the mean and the standard deviation of 10 consecutive Allan deviations are plotted. The behavior up to 32 scans (76 s) is almost noise limited, until the system starts to be affected by small drifts. The optimal value is reached for roughly 100 averages with an expected reproducibility of about half an attosecond (0.17 nm). This result makes this setup also very interesting whenever sub-attosecond precision is required. Notably, this averaging procedure still allows for quite fast acquisition rates; the cumulative acquisition time for one ZC over 100 scans is still only 1.3 ms.
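The sketch below shows one way such an Allan-deviation-like quantity could be computed from the per-scan zero-crossing positions; the exact grouping used for Fig. 5 may differ.

```python
# Sketch of the Allan-deviation-like evaluation described above: for a block
# size m, consecutive groups of m scans are averaged per zero crossing, the
# standard deviation between two adjacent group averages is taken for every
# crossing, and the result is averaged over all crossings and pairs.
import numpy as np

def allan_like(zc_positions, m):
    """zc_positions: array of shape (n_scans, n_zc); returns mean two-sample deviation."""
    n_scans, _ = zc_positions.shape
    n_pairs = n_scans // (2 * m)
    devs = []
    for k in range(n_pairs):
        block_a = zc_positions[2 * k * m:(2 * k + 1) * m].mean(axis=0)
        block_b = zc_positions[(2 * k + 1) * m:(2 * k + 2) * m].mean(axis=0)
        devs.append(np.std(np.stack([block_a, block_b]), axis=0).mean())
    return float(np.mean(devs))
```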
Conclusion and perspectives
In this paper, we present an interferometric delay tracking method for high-precision position detection and real-time correction in ultrafast pump-probe experiments using conventional, mechanical delay lines. A commercial fiber-based Michelson interferometer with nominally picometer position resolution was adapted for a typical Mach-Zehnder arrangement of pump-probe setups. Furthermore, to the best of our knowledge this is the first MZI positioning setup combined with mechanical chopping for lock-in amplifier detection. Applying our IDT in a test setup, we demonstrate the capability to track and compensate noise sources up to several kHz with a root-mean-square delay accuracy of 1.4 nm at 156 kHz sampling rate. In direct comparison to tracking the stage position only, the positioning fluctuations of 40 scans were an order of magnitude lower when tracking the absolute delay interferometrically, and over a 1-cm delay (with 2.5 nm and 3.3 nm when chopped) are almost as good as in the stationary case (1.36 nm). The implementation of the applied system into THz-TDS opens the way for more accurate measurements, which can be combined with lock-in detection. Furthermore, we demonstrate that with higher acquisition time sub-attosecond reproducibility for chopped delay scans is possible, which makes this setup also interesting for slower scans requiring higher precision. With an amplified beam, losses at 1.55 µm wavelength in real setups can be compensated, making this approach very versatile for most pump-probe setups.
Funding
Munich Centre for Advanced Photonics (www.munich-photonics.de), a DFG-funded Cluster of Excellence.
"Physics",
"Engineering"
] |
AGN / Starburst Connection
Two main physical processes characterize the activity in the nuclear region of active galaxies: an intense star formation (starburst, SB) and an Active Galactic Nucleus (AGN). While the existence of a starburst-AGN connection is undisputed, still it is not clear which process dominates the energetic output in both local and high redshift Universe. Moreover there is no consensus on whether AGN fueling is synchronous with star formation or follows it during a post-starburst phase. Here I first review how to disentangle the relative SB-AGN contribution, then I focus on the physical and geometrical properties of the circumnuclear environment.
Introduction
The issue of the AGN-star formation (AGN-SF) connection in local and distant galaxies is relevant for understanding several processes: from galaxy formation and evolution, and the star formation and metal enrichment history of the Universe, to the origin of the extragalactic background at low and high energies and the origin of nuclear activity in galaxies. It is well known that SF traces the growth of a galaxy in terms of stellar mass, and that galaxies assemble their mass through SB episodes. For example, local ULIRGs (Ultraluminous Infrared Galaxies) and high-z SMGs (submillimeter galaxies) harbor the large molecular-gas reservoirs that are necessary to switch on SB episodes (Tacconi et al. 2010, 2013). On the other hand, AGNs trace the growth of supermassive black holes (SMBHs, with masses M_BH > 10^6 M_⊙), and quiescent SMBHs are believed to dwell in almost all galaxy bulges as AGN relics, i.e. the result of a past AGN activity (Soltan 1982).
Evidence of AGN-SF connection
A direct link between bulge formation and the growth of the central SMBHs has been inferred from tight relations between the BH mass M_BH and the bulge structural parameters, such as velocity dispersion (Ferrarese & Merritt 2000, Gebhardt et al. 2000, Gültekin et al. 2009), luminosity (Kormendy & Richstone 1995, Marconi & Hunt 2003, Sani et al. 2011) and mass (Magorrian et al. 1998, MH03, Häring & Rix 2004). This may indicate a dichotomy in the formation history of galaxies (major merger vs. secular evolution) and/or that activity in galaxy nuclei can be stochastically driven by local processes such as minor mergers and bar/disk instabilities. In the following I concentrate the discussion on the major-merger scenario, its implications and observational evidence. The reader is anyhow advised about the importance of secular evolution processes, especially in low-luminosity, local Seyfert galaxies.
The major-merger scenario (ULIRGs-QSO path)
The merger between galaxies of approximately the same size is relevant for the cosmological evolution of SBs, quasars and early-type galaxies (Sanders et al.). During first encounters, the mergers between gas-rich galaxies (i.e. spirals) drag the gas which fuels both SF and AGN activity. Thus a violent SB occurs (ULIRG phase) as well as heavily embedded SMBH growth (observed as an obscured AGN). Then, the peak of accretion happens when the systems coalesce. At this point the quasar is cleaning the circumnuclear environment thanks to strong winds that blow out the gas, and the AGN becomes X-ray and optically thin. Finally, due to feedback processes, both BH growth and further SF are quenched, leaving a QSO relic in a red galaxy (passive evolution). Therefore, obscured QSOs and feedback processes are key ingredients in the BH-host galaxy co-evolution. In the following, after a brief description of the AGN and SF emission, I review the AGN-SF observations for both single sources (interesting for their physics/structure) and large samples (to infer statistical properties).
The AGN vs SF Emission
An issue in AGN-SF studies is to correctly identify AGNs, which represent only a small fraction of the sources detected at optical and IR wavelengths and whose properties can be elusive. Obscured AGNs can be elusive because their optical emission resembles that of normal galaxies, and therefore they cannot be identified by the color-selection techniques used for QSOs, or because their spectra do not show any AGN signature. Their IR emission can be dominated by PAH features, typical of star-forming galaxies, instead of continuum emission from hot dust. Radio observations reveal the radio-loud population of obscured AGNs (e.g., radio galaxies), but this comprises only a small fraction, ~10%, of the total AGN population. A hard X-ray selection fails to detect the luminous narrow-line AGNs in the numbers predicted by some models of the cosmic X-ray background. Polletta and collaborators (2007) show the differences in the spectral energy distributions (SEDs) of AGN and star-forming galaxies. The average SEDs of the three classes show some clear differences. They become increasingly blue in the optical-near-IR (λ < 1 µm), warmer in the IR (i.e. redder at λ ∼ 1-10 µm), and brighter in X-rays in the sequence SF→AGN2→AGN1.
In more detail, most of the diagnostics developed until recently fail in identifying an AGN when it is intrinsically faint and/or deeply obscured. As an example of a possible case where the AGN is hardly detectable, I consider in Fig. 2 an intrinsically faint AGN, contributing only 20% to the bolometric source luminosity, whose primary emission is moreover absorbed by gas and dust. Only two spectral regions are useful for the detection of such an AGN: the hard X-rays (above 2 keV) and the near-mid infrared between 3 and 10 µm. Indeed, in the near-IR and optical ranges the dust and gas extinction, respectively, prevent the AGN detection, and at radio wavelengths the SB emission peak dilutes the AGN emission properties.
Figure 2: Top: simulated SEDs of a SB (blue) and an AGN (black) contributing 80% and 20%, respectively, of L_bol. Bottom: extinction of the AGN emission by gas (green) and dust (red); the horizontal black line indicates an absorption factor of 5. The AGN is undetectable in the range 3 µm-2 Å due to dust and gas extinction, and at λ ≥ 10 µm due to the SB dilution. Only the hardest X-rays and 3-10 µm observations reveal the AGN.
It is thus clear that diagnostics performed in the X-ray and IR bands can highlight the main AGN-SF differences; these are also the wavelength ranges where we have the most detailed information.
X-Ray and IR Screenings: Imaging and Spectroscopy
Here I briefly describe recent results obtained thanks to X-ray and IR imaging and spectroscopy in two puzzling sources. The final goal is to constrain the physics of the ongoing processes and the structure of the nuclear region.
The case of NGC 6240
However, the marginal emission below 10 keV (Fig. 5, left) is consistent with the sole SB component, the typical Fe Kα line is undetected, and the AGN is only seen above 20 keV, where the much narrower Suzaku FOV eventually confirms an early tentative detection by BeppoSAX. The X-ray data are well fitted by an absorbed power-law with N_H = 3.5 × 10^24 cm^-2 and L_2-10 ∼ 7 × 10^44 erg/s, i.e. about 10% of the bolometric luminosity is given by the AGN intrinsic emission. The result establishes IRAS 12071-0444 as the closest Compton-thick type 2 quasar yet known. Finally, a simple visual comparison with the prototypical CT Seyfert NGC 4945 reveals that the reflection efficiency is about a factor of ten lower than for a standard geometry. A peculiar geometry of the absorber is thus required to explain the X-ray/IR data. There are basically two possibilities: (i) a geometrically thick torus, with a small opening angle (i.e. large covering factor); in this case both the direct and reflected components are absorbed. Alternatively, (ii) the reflection component is absent due to a small compact cloud along the line of sight (i.e. with a small covering factor). The latter picture implies both photoelectric and Compton absorption without scattering. The resulting intrinsic luminosity would exceed the bolometric one by a factor of 3, thus the compact-cloud scheme is unphysical. IRAS 12071-0444 thus represents a key evolutionary step of the galaxy-BH growth.
Statistical Analysis of Composite Sources
By selecting large samples of composite SB+AGN sources, it is possible to link the AGN-SF connection processes with the system properties.In the following I show recent results concerning IR and X-ray statistical analysis.
I ) The availability of a reliable measure of the relative AGN/SB contribution to ULIRGs (Risaliti et
What We Know and What Is Missing
• While the AGN accretion history is well traced, we still do not know how the SMBH mass function evolves.
• If the SF-AGN structure on large scales is well studied, we still lack constraints on the physical processes at small scales (i.e. with extreme adaptive optics corrections for imaging and IFU observations).
• We are confident that SMBH growth happens simultaneously with the bulk of SF in unobscured composite sources, but the issue is open for type 2 objects. To solve it, we need to identify the entire population and to measure their M_BH, L/L_Edd, SFR, etc.
• It is well known that AGN power strong winds in local galaxies. Nonetheless, most of them affect only the ionized gas on circumnuclear spatial scales, and further observational efforts are required to identify kpc-scale molecular outflows at the peak of AGN activity (z ∼ 2).
Since a SB is a natural consequence of dissipative gaseous processes associated with spheroid formation (e.g. Barnes & Hernquist 1991), and an AGN-SF connection dating back to the early Universe is implied by these results, a merger-driven scenario is well suited to reproduce such scaling relations. Recently, attention has been drawn to the coexistence of pseudobulges and BHs (Nowak et al. 2010, Sani et al. 2011). Indeed, pseudobulges seem to follow their own relation with BHs, hosting less massive BHs than classical bulges (Greene, Ho, & Barth 2008, Hu 2009), or they are at least displaced from the scaling relations for classical bulges (Sani et al. 2011, Kormendy et al. 2011).
Figure 1: Major-merger scenario. Starting from the merger of two spiral galaxies (left side), the merger produces a violent SB and the onset of QSO activity (middle) and ends with the formation of a dead spheroid (right side). From Hopkins et al. 2008.
3.2 IRAS 12071-0444: a CT type 2 QSO
Mid-IR (data from IRS/Spitzer) and X-ray spectroscopy (data from BeppoSAX, Chandra, Suzaku) allow us to investigate the nature and peculiar structure of the nearby (z = 0.128) ULIRG (L_IR = 9 × 10^45 erg/s) IRAS 12071-0444. It is one of the few ULIRGs optically classified as a type 2 AGN, and its mid-IR spectrum (in Fig. 5, right; Nardini et al. 2009) shows the typical features of ongoing SB (traced by PAH emission) plus a buried AGN (traced by a reddened power-law).
Figure 3: IRAS 12071-0444 data. Left: the Spitzer/IRS ∼5-8 µm spectrum (green dots) is reproduced (black line) by means of AGN (blue dashed line) and SB (red dot-dashed line) templates. Right: Suzaku data (blue) agree well with the Chandra/ACIS spectrum (red) in the 0.5-5 keV range, and beyond 15 keV with the BeppoSAX/PDS points (orange). The shaded blue and orange areas in the upper left corner allow a comparison of the Suzaku and BeppoSAX/PDS fields of view, respectively. I also compare IRAS 12071-0444 with the NGC 4945 X-ray spectrum observed by XMM-Newton (green). The NGC 4945 data are scaled to match IRAS 12071-0444 at 0.5-10 keV.
[…] (Bush et al. 2008, Medling et al. 2011, Feruglio et al. 2013) SF activity in the nuclei of local galaxies (e.g. NGC 1068, NGC 6240, Mrk 231, Arp 299, Circinus...). The angular resolution represents the only limit for this analysis. NGC 6240 is a well-known double-merging system 107 Mpc away. Here, the X-ray imaging and spectroscopy performed with Chandra (Komossa et al. 2003) revealed a Compton-thick double AGN, which was confirmed at all wavelengths, plus a SB covering a projected ∼2 kpc scale (Risaliti et al. 2006, Medling et al. 2011, Feruglio et al. 2013). HST and Spitzer images (Bush et al. 2008) show how the thermal emission follows the optical dust obscuration very closely. NGC 6240 is thus an active interacting remnant viewed at the point of nuclei merging, where two AGNs are visible. The results are well consistent with a major-merger scenario with the transition from disk galaxies to a spheroid.
[…] hosts is consistent with the value for normal galaxies (Daddi et al. 2007, Sarria et al. 2010, Mainieri et al. 2011). This is thus a debated matter, but we cannot relate the SF physics to the BH physics in a complete and unbiased way, because we cannot measure e.g. BH mass and accretion rate in type 2 objects. As mentioned at the beginning of this review, a key ingredient of the merger scenario is feedback. So far a direct detection of the AGN winds able to quench SF in the host galaxy is lacking. Indeed, according to Veilleux et al. 2005, ∼30% of QSOs show fast winds (100 km/s < v_out < 1000 km/s), but the outflow affects the circumnuclear environment only on the broad/narrow line region scales and thus cannot inhibit the SF in the host (Muller-Sanchez et al. 2011). Only in nearby sources like Mrk 231 is it possible to detect strong molecular outflows (Feruglio et al. 2010, Fisher et al. 2010) acting on kpc scales, as expected by QSO-feedback models. The current challenge is to detect such molecular winds at the epoch of the AGN activity peak (i.e. at about redshift 2).
[…] (Nardini et al. 2010; Goto 2005). In Nardini et al. 2010 we have quantified how SF and nuclear activity are the primary engine at the opposite ends of the ULIRG luminosity range. The SB component dominates at log(L_IR/L_⊙) < 12.5, where the AGN […] Interestingly, Seyfert 2s, LINERs and HII regions harbor a similar AGN content, supporting the idea of a connection between the SF activity and nuclear obscuration. Moreover, SB-dominated galaxies host the most obscured AGNs (see also Georgakakis et al. 2004, Sani et al. 2008, Nardini & Risaliti 2011). On the other hand, no enhancement of SF is observed in obscured AGNs […]
| 3,183.6 | 2014-12-04T00:00:00.000 | ["Physics"] |
Investigating the variation of the Sun’s visual shape, atmospheric refraction and Einstein’s special relativity considered
By experimental measurements and theoretical analyses, this paper investigates the variation of the Sun's visual shape and identifies the reasons for this variation. First, the method of image processing, the method of moments and the least-squares method are combined to perform experimental measurements and calculations, and the features of the Sun's visual shape are extracted from photos of the Sun. Second, theoretical analyses are conducted based on atmospheric refraction and Einstein's special relativity theory. A relationship model is established between the zenith and azimuth angles of the Sun, the velocity of the Sun relative to the Earth, and the observation time and position; the refraction index of the atmosphere is expressed as a function of altitude and wavelength of light; an iterative algorithm is constructed to trace rays of light in the atmosphere; and a set of formulas is derived to determine the contraction ratio and contraction direction of the Sun's visual shape. Finally, the theoretical and experimental results are compared; their relative errors are less than 0.3%, which verifies the theoretical analyses. Both theoretically and experimentally, this research shows that the Sun's visual shape is an ellipse; its shape variation mainly results from atmospheric refraction effects, and the length contraction effect of Einstein's special relativity also contributes a little, except at the times of sunrise and sunset.
Introduction
When you enjoy sunrise and sunset, do you pay attention to the variation of the Sun's visual shape? Comparatively speaking, the Sun's image at sunrise or sunset is flatter than that at noon; however, although the Sun's image at noon is the roundest of the day, it still appears to be an ellipse and not a perfect circle. Very few research papers systematically explain the theoretical reasons for the variation of the Sun's visual shape, and only a few very short articles on Wikipedia and in popular science magazines briefly outline the natural phenomenon based on atmospheric refraction.
The effects of atmospheric refraction have been studied widely in the fields of optical communications and weather forecasting. For free-space optical communication systems, by applying astronomical refraction formulas and considering meteorological conditions on the ground, Karin and Florian (2004) simulated the refraction angles of solar rays in the atmosphere and calculated the vertical deviation of solar beams at various wavelengths. For earth-to-satellite laser communications, Xiang (2008) analyzed the effect of atmospheric chromatic dispersion on the pointing error of the uplink laser beam. For the sake of improving the accuracy of satellite laser ranging, Yuan et al. (2011) used ray tracing to compute the various optical paths caused by atmospheric refraction, and gave a regional distribution of the optical path difference in China. Jiang et al. (2013) investigated the relationship between the optical paths and the satellite zenith angles, and proposed an atmospheric refraction compensation scheme for different satellite zenith angles. In order to find out about the effect of atmospheric refraction on the propagation of weather radar beams, Wang et al. (2018) explored the vertical variation of the refraction index in the first kilometer of the atmosphere with regional climate and topography. Balal and Pinhasi (2019) evaluated the effect of atmospheric refraction and absorption on the propagation of millimeter and sub-millimeter wavelengths from land to satellite. Chaim and Hall (2000) made use of ray tracing to determine atmospheric refraction, combined with a digital terrain model, to calculate the visual sunrise and sunset times at some cities in Israel. Kambezidis and Papanikolaou (1990) researched the relationship between solar position and atmospheric refraction. Kambezidis and Tsangrassoulis (1993) proposed a new correction of right ascension according to solar position. Kambezidis (1997) established a set of appropriate spherical trigonometric formulas to estimate sunrise and sunset hours by considering flat and complex terrains and atmospheric refraction. Instead of spherical trigonometry, Sproul (2007) used vector analysis to research the position relationship between the Sun and the Earth.
This paper investigates the variation of the Sun's visual shape on a day, and figures out the reasons for the shape variation by massive experimental measurements and systemic theoretical analyses. First, hundreds of photos of the Sun were taken from sunrise to sunset; an image processing was performed to extract features of the Sun's visual shape from the photos. The method of moments and the least-square method were combined to fit the periphery of the Sun in the processed images; and a set of formulas was derived to calculate the feature parameters of the elliptic Sun. Error analyses showed that the relative measurements accuracy was about 0.023%, the standard deviation of the fitting curve of the ellipse was only 4 pixels. The experimental results proved that the Sun's visual shape can be accurately approximated by an ellipse. Second, in order to investigate the reasons for the variation of the Sun's visual shape, the atmospheric refraction effects were first researched to simulate the influence on the Sun's visual shape at different observation times and positions. Based on Kepler's laws, the relative position and velocity between the Sun and the Earth were analyzed, and a set of formulas was derived to calculate the zenith and azimuth angles of the Sun for different observation positions and times; according to the differential equation of light propagation in an inhomogeneous medium, an iterative algorithm was developed to trace the rays of light in the atmosphere. Meanwhile, because the refraction index of the atmosphere mainly depends on air density, it is expressed as a function of altitude and wavelength of light. Under this expression, it is proved theoretically that the trajectory of sunlight is a planar curve. Nevertheless, as a result of atmospheric refraction, the Sun's visual shape contracts only in the zenith direction, resulting in an elliptic Sun in the observer's eyes. In addition, the length contraction phenomenon in the Einstein's special relativity theory due to the relative movement between the Sun and the observer was also investigated to simulate its effects on the Sun's visual shape at different observation times and positions. Because the relative velocity between the Sun and the observer varies with time, especially the direction of the motion, the effect of the length contraction phenomenon on the Sun's visual shape also varies with time; therefore, the analysis included the contraction ratio and the contraction direction of the Sun's visual shape, which describe a ratio of the minor to the major axis and the direction of semi-minor axis of the Sun's visual shape. A relationship between the length contraction effect, observation positions and times was established. Although the effect of the Einstein's special relativity theory also makes the Sun's visual shape contract in one direction and results in an elliptic Sun, this differs from the atmospheric refraction effect, because the contraction direction caused by the Einstein's special relativity theory varies with time. Comprehensively considering the effects of atmospheric refraction and the Einstein's special relativity theory on the Sun's visual shape, although different, we can prove that the Sun's visual shape is still an ellipse. Therefore, a formula calculating the shape parameter of the ellipse was derived. Finally, the observation position was chosen to be Dalian, and the observation time mid-December 2018. 
Then, the variation of the Sun's visual shape was simulated, including two cases: one with atmospheric refraction effects only, and another with both atmospheric refraction and the effects of Einstein's special relativity theory. By comparison, it can be found that the relative errors between the simulation results and the experimental data are less than 0.3%. These results show that the variation of the Sun's visual shape is mainly due to atmospheric refraction; at times other than sunrise and sunset, the length contraction effect of Einstein's special relativity theory also contributes, but only a little.
Experimental measurements and calculations
In order to investigate the variation of the Sun's visual shape, a high-resolution digital camera was used to take more than 300 photos of the Sun from sunrise to sunset. The time chosen was before the winter solstice in the year 2018; the position was Dalian, a city in northeast China. In Fig. 1, three photos are given, the projection plane is perpendicular to the direction from the camera to the Sun, and the horizontal direction of the photo is about parallel to the sea level. Here, the Sun's visual shape is assumed an ellipse. In the later Sections, the assumption will be verified by experimental results and theoretical analyses. A shape parameter describing the ellipse is defined as k = b/a, where a stands for the dimension of its semi-major axis, b the dimension of its semi-minor axis. When the sun rises or sets, the vertical direction of the photo or the direction of b is about perpendicular to the sea level. A bigger k means that the Sun's visual shape is rounder. Seeing from Fig. 1, at sunrise, noon and sunset, the major and minor axes of the Sun are basically along the horizontal and vertical directions of the images.
For extracting the shape parameter k from the Sun's photos, image processing, the method of moments and the least-squares method were employed. First, the Sun's image expressed in RGB was converted to a binary image expressed by two luminance values of 0 and 1; then the periphery of the Sun was extracted from the binary image by removing the interior pixels; next, considering the Sun's binary image as an ellipse, the method of moments was used to evaluate its feature parameters, such as the semi-major axis, the semi-minor axis, the coordinates of the center, and the angle between the major axis of the ellipse and the horizontal direction of the image, which were defined as the initial values of the iterative algorithm for fitting an ellipse to the periphery of the Sun. Finally, the least-squares method, i.e. the iterative algorithm, was applied to fit an ellipse to the periphery of the Sun and compute the feature parameters of the Sun's elliptical image.
Processing the Sun's image taken by a camera
A photo of the Sun taken by a camera is a true color image expressed in RGB. In order to extract the shape parameter k from the Sun's photo, the image must first be processed.
First, the true color image was converted to a grayscale image by removing the hue and saturation information while retaining the luminance, where the luminance value Y at every pixel ranges from 0 to 1, with Y = 1 denoting white and Y = 0 denoting black. Then, to reduce noise in the grayscale image, 3-by-3 neighborhood median filtering was performed. Fig. 2a is a true color image taken by the camera, and Fig. 2b is the processed image after median filtering. Next, according to the contrast of Fig. 2b, a luminance threshold Ỹ was estimated; the luminance values of all pixels with Y ≥ Ỹ were set to 1, while the luminance values of the other pixels were set to 0. In this way, Fig. 2b was converted to a binary image with two luminance values, 0 and 1. In addition, to eliminate isolated points and burrs on the periphery, morphological operations, namely image erosion and dilation, were applied to the binary image. A 3-by-3 matrix SE was used as the structuring element; three erosions were performed, followed by three dilations. Fig. 2c shows the binary image after the erosions and dilations. The last step was to remove the interior pixels and retain the periphery of the Sun: for a pixel in Fig. 2c, if the luminance values of all its 4-connected neighbors were 1, its luminance value was set to 0; otherwise, the pixel was retained as a point on the periphery of the Sun with a luminance value of 1. Fig. 2d illustrates the periphery of the Sun.
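As a rough illustration of this pipeline, the following sketch reproduces the steps with NumPy and SciPy. The RGB-to-luminance weights and the threshold value are assumptions (the paper's exact conversion formula and threshold are not reproduced here), so it should be read as a sketch rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def extract_sun_periphery(rgb, threshold=0.5):
    """Binary image and periphery of the Sun from an RGB photo.

    rgb: float array of shape (M, N, 3), values in [0, 1].
    threshold: assumed luminance threshold (chosen from the image contrast).
    """
    # Grayscale conversion; the ITU-R BT.601 weights are an assumption,
    # the paper's exact luminance mapping is not reproduced here.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # 3-by-3 neighborhood median filtering to reduce noise.
    gray = ndimage.median_filter(gray, size=3)

    # Thresholding to a binary image (1 = Sun, 0 = background).
    binary = gray >= threshold

    # Morphological cleanup: 3 erosions followed by 3 dilations with a
    # 3-by-3 structuring element, removing isolated points and burrs.
    se = np.ones((3, 3), dtype=bool)
    binary = ndimage.binary_erosion(binary, structure=se, iterations=3)
    binary = ndimage.binary_dilation(binary, structure=se, iterations=3)

    # Periphery: keep a pixel only if at least one 4-connected neighbor is 0.
    four_conn = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    interior = ndimage.binary_erosion(binary, structure=four_conn)
    periphery = binary & ~interior
    return binary, periphery
```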
Extracting features from the Sun's binary image by the method of moments
Fig. 2c is a binary image in which the Sun can be regarded as an ellipse. The feature parameters of the ellipse include the semi-major axis a, the semi-minor axis b, the center coordinate C(x_c, y_c), and the angle γ between the major axis of the ellipse and the horizontal direction of the image, as shown in Fig. 3.
The binary image is expressed as an M × N matrix of 0s and 1s, where 1 and 0 denote white and black, respectively. Denoting the matrix element by IM_{i,j}, i = 1, 2, ⋯, M; j = 1, 2, ⋯, N, the method of moments gives the Sun's area A and the center coordinate C(x_c, y_c) from the zeroth- and first-order moments of the image. The angle γ between the major axis of the ellipse and the horizontal direction of the image is then obtained from the secondary moments I_xx, I_xy and I_yy relative to C(x_c, y_c), which are likewise computed from the image. Applying Eqs. (4) and (5), the principal moments of inertia I_1 and I_2 are obtained. If the Sun in the binary image is regarded as an ellipse with semi-axes a and b, its area A and principal moments of inertia I_1 and I_2 can be expressed in terms of a and b; therefore, a and k can be solved for. Applying the method of moments in this way, the feature parameters of the Sun's binary image are extracted, including a, b, C(x_c, y_c), γ and k.
Taking the Sun's image in Fig. 2c as an example, the feature parameters are extracted. The resulting data are given in Table 1, where the length unit is the pixel. Comparing the binary image of the Sun with the ellipse obtained from Eq. (7), the area of the non-overlapping region is 37,418, and its error relative to the total area is about 0.38%.
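The moment-based feature extraction can be sketched as follows. The relations used (A = πab and the two principal second moments (π/4)a³b and (π/4)ab³ for a filled, uniform ellipse) are the standard ones and stand in for the paper's own equations, which are not reproduced verbatim above.

```python
import numpy as np

def ellipse_features_from_moments(binary):
    """Estimate (a, b, xc, yc, gamma, k) of an elliptical blob in a binary image.

    Assumes a filled, uniform ellipse, for which A = pi*a*b and the principal
    second moments are (pi/4)*a**3*b and (pi/4)*a*b**3 (textbook relations).
    """
    ys, xs = np.nonzero(binary)            # pixel coordinates of the white region
    A = float(xs.size)                     # zeroth-order moment: area in pixels
    xc, yc = xs.mean(), ys.mean()          # first-order moments: centroid

    dx, dy = xs - xc, ys - yc
    mu20, mu02, mu11 = np.sum(dx * dx), np.sum(dy * dy), np.sum(dx * dy)

    # Orientation of the major axis w.r.t. the image's horizontal direction
    # (image rows grow downwards, which flips the usual sign convention).
    gamma = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    # Principal second moments.
    mean = 0.5 * (mu20 + mu02)
    diff = np.hypot(0.5 * (mu20 - mu02), mu11)
    I1, I2 = mean + diff, mean - diff      # I1 >= I2

    k = np.sqrt(I2 / I1)                   # shape parameter b/a
    a = np.sqrt(A / (np.pi * k))           # from A = pi*a*b with b = k*a
    b = k * a
    return a, b, xc, yc, gamma, k
```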
Fitting an ellipse to the Sun's periphery by the least-squares method
Fig. 2d illustrates the periphery of the Sun's visual shape. It is assumed that P_i(x_i, y_i), i = 1, 2, ⋯, q, is a point on the periphery. In order to fit an ellipse to the periphery points, two coordinate systems are established: the global coordinate system O−XY and the local coordinate system C−X_eY_e, as shown in Fig. 3. In O−XY, X and Y are along the horizontal and vertical directions of the image, respectively. In C−X_eY_e, the origin is the center of the ellipse, and X_e and Y_e are along the semi-major and semi-minor axes of the ellipse, respectively.
In the local coordinate system C−X_eY_e, a point P_i(x_i, y_i) on the Sun's periphery can be expressed as a vector P_i. The fitting reference ellipse is written in parametric form, where r(θ) is the polar radius of an edge point on the reference ellipse, θ is the corresponding curve parameter, i_e and j_e are the unit vectors along X_e and Y_e, and (x_e, y_e) are the coordinates of the point in C−X_eY_e. The unit tangent vector T(θ) and unit normal vector N(θ) of the reference ellipse follow from this parametrization, with components T_x, T_y, N_x and N_y. The vector of parameter increments is [Δx_c, Δy_c, Δγ, Δa, Δb]^T, where the superscript "T" denotes the transpose of a vector or matrix. The least-squares method is applied to adjust the five feature parameters, the objective being to minimize the sum of squared residuals ∑_{i=1}^{q} ε_i². Based on Eq. (18), the normal equations of the least-squares method are formed as Eq. (19); solving Eq. (19) yields the parameter increments Δx_c, Δy_c, Δγ, Δa and Δb.
According to statistical principles, applying Eq. (18) gives the variance S_E² of the residuals of the fitted ellipse. The variances S_a² and S_b² are then obtained, where S_a² is the variance of a, S_b² is the variance of b, and V_{4,4} and V_{5,5} are the 4th and 5th diagonal elements of the parameter covariance matrix. By employing the least-squares method (LSM) to fit an ellipse to the periphery of the Sun's image, the optimal parameters are obtained, including the semi-major axis a_new, the semi-minor axis b_new, the angle γ_new and the center coordinate (x_c^new, y_c^new). Taking the Sun's periphery in Fig. 2d as an example, the least-squares method is used to fit an ellipse to the periphery. The initial values are the data listed in Table 1, i.e. the feature parameters calculated by the method of moments. The final results are shown in Fig. 4 and Table 2.
As can be seen from Table 2, the variances S_a² and S_b² are very small. At a confidence level of 99.73%, the measurement accuracy of the semi-major axis a is ±3S_a = ±0.1971, i.e. a relative accuracy of 0.011%; the measurement accuracy of the semi-minor axis b is ±3S_b = ±0.2016, i.e. a relative accuracy of 0.012%. The relative measurement accuracy of the shape parameter k in Table 2 can therefore be estimated as 0.023%, the sum of the relative accuracies of a and b. The standard deviation of the ellipse fitted to the periphery of the Sun is only about 4 pixels, while the average diameter of the Sun is about 1765 pixels. These results show that the Sun's image can be described as an ellipse.
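A minimal sketch of the iterative refinement described above is given below, assuming a simple implicit-equation residual in place of the paper's normal-direction linearisation; the covariance estimate mirrors the roles of S_E², S_a² and S_b².

```python
import numpy as np
from scipy.optimize import least_squares

def fit_ellipse(points, p0):
    """Refine ellipse parameters p = (xc, yc, gamma, a, b) from periphery points.

    points: (q, 2) array of (x, y) coordinates on the periphery.
    p0:     initial guess, e.g. the method-of-moments estimate.
    A simple implicit-equation residual is used here instead of the paper's
    normal-direction residual, so this is only an assumed stand-in.
    """
    def residuals(p):
        xc, yc, gamma, a, b = p
        c, s = np.cos(gamma), np.sin(gamma)
        dx, dy = points[:, 0] - xc, points[:, 1] - yc
        xe = c * dx + s * dy               # rotate into the ellipse-aligned frame
        ye = -s * dx + c * dy
        return (xe / a) ** 2 + (ye / b) ** 2 - 1.0

    sol = least_squares(residuals, p0)

    # Residual variance and parameter covariance, mirroring S_E^2 and V:
    q, npar = len(points), len(p0)
    SE2 = np.sum(sol.fun ** 2) / (q - npar)
    V = SE2 * np.linalg.inv(sol.jac.T @ sol.jac)  # V[3,3] ~ var(a), V[4,4] ~ var(b) (0-based)
    return sol.x, V
```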
Experimental results
The method of moments and the least-squares method are combined to calculate the shape parameter k of the Sun's images, which were taken from sunrise to sunset in mid-December at Dalian. In each binary image, the Sun is treated as an ellipse: the method of moments is first used to evaluate the feature parameters of the ellipse, which serve as the initial values of the iterative algorithm; the least-squares method, i.e. the iterative algorithm, is then applied to fit an ellipse to the periphery of the Sun and to compute the feature parameters of the ellipse.
The resulting data are listed in Table 3, including the shape parameter k, the semi-major axis a, the semi-minor axis b, and the corresponding errors; the maximum and minimum values of k, their corresponding times, and the maximum error of k are marked in bold. Clearly, the Sun's visual shape always appears to be an ellipse rather than a perfect circle. The Sun's image is roundest at noon, while at 7:11 Beijing time the Sun's image is a flat ellipse with the minimum shape parameter of 88.13%. It should be noted that the sunset photos were taken at Jinshitan Beach; because mountains obstruct part of the sunset view, the last time in Table 3 is 16:20 Beijing time.
To show the variation of the shape parameter more clearly, a variation curve is plotted from the data of Table 3, as shown in Fig. 5, where the horizontal axis is Beijing time and the vertical axis is k. Fig. 5 shows that k varies rapidly and abruptly around sunrise and sunset, whereas it is almost flat from 8:00 to 15:00 Beijing time.
Theoretical analyses and simulation
When the Sun has just risen in the morning, its image is flattened in the vertical direction and the shape parameter is at its minimum; as time goes on, the Sun's visual shape becomes rounder and rounder until noon, when the shape parameter reaches its maximum. However, even the roundest image of the Sun is still an ellipse, not a circle. In this section, theoretical analysis and simulation based on atmospheric refraction effects and the length contraction phenomenon of Einstein's special relativity theory are carried out to investigate the reasons for the variation of the Sun's visual shape.
Atmospheric refraction effects
Considering atmospheric refraction effects only, the Sun's visual shape is affected by the refractive index of the atmosphere, the Sun's position in the sky, and the observation time. Therefore, a model estimating the refractive index as a function of altitude was established, and a set of formulas was derived to determine the zenith and azimuth angles of the Sun for different observation times and positions. Finally, an iterative algorithm was constructed to trace rays of light through the atmosphere.
Model of the atmospheric altitude and the refractive index
The Sun's diameter is 1.392 × 10^6 km, and the average Sun-Earth distance is about 1.496 × 10^8 km. If the rays from the Sun are assumed to travel in straight lines, the angle of view of an observer on the Earth can be calculated as the Sun's diameter divided by the average Sun-Earth distance, equal to 32.004′ or 0.5334°. However, the Earth is surrounded by a thick atmosphere, and the rays of light must pass through it before reaching the ground. As altitude increases, the air density decreases, so the refractive index of the atmosphere decreases. As a result, the rays of light are bent in the atmosphere, which means the angle of view changes. This change occurs mainly in the vertical direction, while in the horizontal direction the angle of view remains almost unaffected. Therefore, the Sun in our eyes looks like an ellipse rather than a circle, and the shape of the ellipse varies with the observation time and position. Altitude influences the refractive index through the varying temperature and pressure. Above an altitude of 50 km the refractive index approaches 1, so this paper focuses mainly on the propagation of light rays in the atmosphere below this altitude.
The relationship between air temperature T (in K) and altitude H (in km) can be formulated as in (Karin and Florian 2004; Jiang et al. 2013; Wang et al. 2018), where T_0 is the air temperature at ground level. Based on Eq. (23), the temperature variation curve T(H) is plotted in Fig. 6, where the horizontal axis is the altitude H and the vertical axis is the temperature T.
According to hydrostatics, the air pressure P and the altitude H satisfy dP = −ρg dH, where ρ is the air density (in kg/m³) and g is the gravitational acceleration, equal to 9.8 m/s². In the atmosphere, the air satisfies the equation of state Pμ = ρRT, where R = 8.3144 J·mol⁻¹·K⁻¹ is the gas constant and μ = 28.966 g·mol⁻¹ is the molar mass of air. Substituting Eq. (25) into Eq. (24) gives the relationship between the air pressure P and the altitude H, and the relationship between the air density ρ and the altitude H follows from the equation of state. Substituting Eq. (23) into Eqs. (26) and (27) yields Eqs. (28) and (29), where P_0 and ρ_0 stand for the air pressure and air density at ground level; P_0, ρ_0 and T_0 satisfy Eq. (25), and P, T and ρ are functions of H. Setting T_0 = 269.15 K and P_0 = 1.01325 × 10^5 Pa, ρ_0 can be calculated from Eq. (25) as ρ_0 = P_0 μ / (R T_0) = 1.3115 kg/m³. Based on Eqs. (23), (28) and (29), the variations of atmospheric pressure and air density with altitude are obtained, as shown in Fig. 7.
Neglecting the influence of water vapor on the refractive index of the atmosphere and using Eq. (25), the refractive index n can be calculated following (Karin and Florian 2004; Jiang et al. 2013; Xiang 2008), where λ (in μm) is the wavelength of light. Based on Eq. (30), the refractive index n(H) is plotted in Fig. 8, where the red line represents the red light from the Sun at sunrise, with a wavelength of λ = 0.685 μm, and the black line represents the white light from the Sun at noon, with a wavelength of λ = 0.550 μm. Fig. 8 shows that the refractive index n decreases with increasing altitude; when H > 30 km, n approaches 1, i.e. the refractive index of a vacuum. To investigate the influence of the wavelength of light on the refractive index n, the difference in n between the red and white light is calculated; the results are shown in Fig. 9. The influence of wavelength on n occurs mainly in the atmosphere below 30 km, but it is very small and can be neglected.
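A compact numerical sketch of this atmosphere model is shown below. The temperature profile and the ground-level value of n − 1 are assumptions standing in for the paper's Eqs. (23) and (30); the pressure and density follow from the hydrostatic relation and the equation of state exactly as described above.

```python
import numpy as np

R_GAS = 8.3144          # J / (mol K)
MU_AIR = 0.028966       # kg / mol
G = 9.8                 # m / s^2
T0, P0 = 269.15, 1.01325e5      # ground-level temperature (K) and pressure (Pa)

def temperature(H_km):
    """Air temperature vs altitude. The paper's Eq. (23) is not reproduced;
    an ISA-like profile (6.5 K/km lapse rate up to 11 km, isothermal above)
    is assumed as a stand-in."""
    H = np.asarray(H_km, dtype=float)
    return np.where(H < 11.0, T0 - 6.5 * H, T0 - 6.5 * 11.0)

def pressure_density(H_km_grid):
    """March dP/dH = -rho*g upward on an ascending altitude grid (km, from 0),
    applying the equation of state P*mu = rho*R*T at each step."""
    H_m = np.asarray(H_km_grid, dtype=float) * 1000.0
    P = np.empty_like(H_m)
    P[0] = P0
    for i in range(1, H_m.size):
        T = temperature(H_m[i - 1] / 1000.0)
        rho = P[i - 1] * MU_AIR / (R_GAS * T)
        P[i] = P[i - 1] - rho * G * (H_m[i] - H_m[i - 1])
    rho = P * MU_AIR / (R_GAS * temperature(H_km_grid))
    return P, rho

def refractive_index(H_km_grid, n0_minus_1=2.8e-4):
    """n(H): the wavelength-dependent Eq. (30) is not reproduced; here (n - 1)
    is assumed proportional to air density, with an assumed ground value."""
    _, rho = pressure_density(H_km_grid)
    rho0 = P0 * MU_AIR / (R_GAS * T0)      # ~1.3115 kg/m^3, as in the text
    return 1.0 + n0_minus_1 * rho / rho0

H = np.linspace(0.0, 50.0, 501)            # the 0-50 km range used in the paper
P, rho = pressure_density(H)
n = refractive_index(H)
```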
Model of the theoretical zenith angle of the Sun, the observation time and position
The Sun's photos were taken in mid-December 2018 at Dalian. Besides the refractive index of the atmosphere, the observation time and position also affect the Sun's visual shape; therefore, a model relating the Sun's zenith angle to the observation time and position should be established. Here, neglecting atmospheric refraction effects, a set of formulas is derived to calculate the theoretical zenith angle of the Sun from the observation position and time. Figure 10 is a sectional view of the Earth surrounded by its thick atmosphere. Dalian is located at 121.62°E longitude and 38.92°N latitude. One line is drawn connecting Dalian (point A) and the Earth's center (point O), and another connecting the Sun and the Earth's center (point O); the angle φ_DL between the two lines is the theoretical zenith angle of the Sun.
The Earth rotates about its axis and simultaneously moves around the Sun in the ecliptic plane. The Earth's axis is not perpendicular to the ecliptic plane, since there is a declination angle between the equatorial plane and the ecliptic plane. Therefore, the theoretical zenith angle φ_DL of the Sun varies with the Earth's rotation and with its position in the ecliptic plane.
For convenience, it is assumed that the Sun moves around the Earth in an elliptical orbit, with the Earth located at one focal point of the orbit. The parameters of the elliptical orbit are: the semi-major axis a_S = 1.4960 × 10^8 km, the semi-minor axis b_S = 1.4958 × 10^8 km, the focal length c_S = 25 × 10^5 km, and the eccentricity e_S = 0.0167. The Sun coordinate system O−X_sY_s is shown in Fig. 11, where the origin is the focus at which the Earth is located, X_s points to the perihelion, and Y_s is perpendicular to X_s. In terms of X_s, the elliptical orbit is described by Eq. (31), where r(θ) is the polar radius, θ is the corresponding polar angle, and P_S = (b_S)²/c_S.
According to Kepler's second law, r(θ) satisfies Eq. (32), where C_0 is an undetermined constant and t is the time measured from the perihelion.
The area of the elliptical orbit is π·a_S·b_S, which is given by Eq. (34). Because the revolution period of the Earth is T_0 = 365.2422 days and T_0 = t(2π), substituting Eq. (34) into Eq. (33) yields C_0 as Eq. (35). Substituting Eqs. (31) and (35) into Eq. (33), t(θ) can be rewritten as Eq. (36). Introducing the substitution tan(φ̂/2) expressed in terms of θ and replacing the variable θ in Eq. (36) yields Eqs. (37) and (38). For a given time t_0, in order to determine the Sun's polar angle θ, φ̂ is first solved from Eq. (38); θ is then obtained by substituting φ̂ into Eq. (37).
Eq. (38) is a transcendental equation. To solve for φ̂, an iterative algorithm of the form of Eq. (39) is constructed, where φ̂_n is the iteration variable in the nth iteration, with the initial value φ̂_0 = 2π·t_0/T_0. The error of the iterative algorithm is expressed by Eq. (40), where φ̂* is the solution to Eq. (38) and L is the Lipschitz constant, i.e. the maximum value of the derivative of the iteration map, satisfying Eq. (41). From Eqs. (40) and (41) it can be seen that the iterative algorithm converges very quickly: when the accuracy is set to 10⁻⁶ degrees, no more than three iterations are required, i.e. n ≤ 3.
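The timing problem can be sketched as follows; the fixed-point iteration used here is the classical one for Kepler's equation and is assumed to play the role of Eq. (39), whose exact form is not reproduced above.

```python
import numpy as np

T0_DAYS = 365.2422      # period of the Earth's revolution
ECC = 0.0167            # orbital eccentricity e_S

def sun_polar_angle(t_days, tol_deg=1e-6, max_iter=20):
    """Polar angle theta (rad) of the Sun, measured from perihelion, at a time
    t_days after perihelion passage.

    The paper's iteration (Eq. (39)) is not reproduced; this sketch uses the
    classical fixed-point iteration for Kepler's equation E - e*sin(E) = M,
    which plays the same role, with the initial value E0 = M = 2*pi*t/T0.
    """
    M = 2.0 * np.pi * t_days / T0_DAYS          # mean anomaly
    E = M
    for _ in range(max_iter):
        E_next = M + ECC * np.sin(E)            # contraction map (Lipschitz constant ~ e)
        if abs(E_next - E) < np.radians(tol_deg):
            E = E_next
            break
        E = E_next
    # True anomaly from the eccentric-anomaly-like variable (tan(phi/2) substitution).
    theta = 2.0 * np.arctan2(np.sqrt(1.0 + ECC) * np.sin(E / 2.0),
                             np.sqrt(1.0 - ECC) * np.cos(E / 2.0))
    return theta % (2.0 * np.pi)

# Close to 90 deg after the 89.367789 days quoted in the text for the reference time.
print(np.degrees(sun_polar_angle(89.367789)))
```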
In Fig. 11, the Earth coordinate system O−X_EY_EZ_E is also established, where the origin is the Earth's center, coinciding with the origin of the Sun coordinate system O−X_sY_s; Z_E is the Earth's axis; Y_E lies along the line of intersection of the equatorial plane and the ecliptic plane and coincides with Y_s; X_E lies in the equatorial plane; and the angle δ is the declination angle, equal to 23.433°.
According to the Gregorian calendar, the Sun is located at an intersection of the equatorial plane and the ecliptic plane, i.e. on the Y_E axis, at 00:15:24 on March 21, 2018 (Beijing time); this moment is taken as the reference time in this paper. Beijing time is defined as the time at 120°E longitude. At 00:15:24 on March 21, ξ_BJ^0 denotes the angle between the projection of the 120°E meridian onto the plane O−X_EY_E and the X_E axis. At the reference time the Sun's polar angle is θ = 90°, and there are 11.7433 hours from 00:15:24 to 12:00:00 Beijing time; ξ_BJ^0 can then be calculated from Eq. (42), where ω_E = 15.0411°/hour is the average angular velocity of the Earth's rotation about its axis and ω_S = 0.0411°/hour is the average angular velocity of the Earth around the Sun. Dalian is located at 121.62°E longitude, so the longitude difference between Dalian and Beijing is 1.62°. ξ_DL denotes the angle between the X_E axis and the projection of the 121.62°E meridian onto O−X_EY_E, as shown in Fig. 11; its value at the reference time, denoted ξ_DL^0, is calculated by Eq. (43). Taking 00:15:24 on March 21, 2018 (Beijing time) as the reference time, ξ_DL after Δt hours is obtained from Eq. (44). In the Earth coordinate system O−X_EY_EZ_E, η_DL denotes the angle between the radial direction from the origin (point O) to Dalian (point A) and the coordinate plane O−X_EY_E, as shown in Fig. 11; because Dalian is located at 38.92°N latitude, η_DL = 38.92°. The unit vector OA along the radial direction OA is expressed by Eq. (45), where i_E, j_E and k_E are the unit vectors along X_E, Y_E and Z_E, respectively. Applying Eqs. (37) and (38) with θ = 90°, the travel time of the Sun from the perihelion to the reference time is obtained as Δt_0 = 89.367789 days. In the Sun coordinate system O−X_sY_s, applying the iterative algorithm of Eq. (39) with t_0 = Δt_0 + Δt, the polar angle θ of the Sun after Δt hours is obtained from Eqs. (37) and (38). At the moment t_0, OS is the unit vector along the radius vector r(θ) of the Sun, expressed in the Earth coordinate system by Eq. (46). The angle between OS and OA is the theoretical zenith angle φ_DL of the Sun relative to Dalian, as shown in Fig. 10; based on Eqs. (45) and (46), φ_DL is given by Eq. (47). Projecting OS onto the equatorial plane O−X_EY_E, ξ_Sun denotes the angle between the projection of OS and the X_E axis, which describes the direction of the Sun and satisfies Eq. (48). 12:00:00 Beijing time is not the moment when the Sun shines directly over Dalian; the corresponding time difference can be calculated from Eq. (49). In the Dalian coordinate system A−X_DLY_DLZ_DL shown in Fig. 11, the components of OS can be written out explicitly; thus, relative to Dalian, the Sun's azimuth angle ψ_DL can be obtained from Eq. (53). Defining clockwise rotation as positive, ψ_DL is the angle between the Sun's direction OS and due north. Neglecting atmospheric refraction effects, with Dalian as the observation position and Beijing time on one day in mid-December 2018 as the observation time, the variations of the theoretical zenith angle φ_DL and the azimuth angle ψ_DL calculated from Eqs. (47) and (53) are shown in Fig. 12.
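Once the unit vectors OA (observer) and OS (Sun) are available in the Earth frame, the zenith and azimuth angles follow from elementary vector geometry. The sketch below is a hedged stand-in for Eqs. (47) and (53): the local north/east construction is assumed rather than taken from the paper's Dalian coordinate system.

```python
import numpy as np

def zenith_azimuth(OA, OS, k_E=np.array([0.0, 0.0, 1.0])):
    """Zenith angle (deg) and azimuth (deg, clockwise from north) of the Sun
    for an observer at unit position vector OA, with OS the unit vector toward
    the Sun, both expressed in the Earth frame O-X_E Y_E Z_E."""
    up = OA / np.linalg.norm(OA)                 # local zenith direction
    sun = OS / np.linalg.norm(OS)

    zenith = np.degrees(np.arccos(np.clip(np.dot(sun, up), -1.0, 1.0)))

    # Local horizon basis: north = Earth's axis projected onto the horizon plane,
    # east = north x up (assumed convention; due east then maps to +90 deg).
    north = k_E - np.dot(k_E, up) * up
    north /= np.linalg.norm(north)
    east = np.cross(north, up)

    azimuth = np.degrees(np.arctan2(np.dot(sun, east), np.dot(sun, north)))
    return zenith, azimuth

# Example: observer at 38.92 deg N on the X_E meridian, Sun along +X_E.
lat = np.radians(38.92)
OA = np.array([np.cos(lat), 0.0, np.sin(lat)])
print(zenith_azimuth(OA, np.array([1.0, 0.0, 0.0])))   # ~ (38.92, 180.0): Sun due south
```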
In addition, the time at which the Sun shines directly over Dalian is calculated from Eq. (49). The simulation results are given in Fig. 13, where the horizontal axis is the date over the whole year of 2018 and the vertical axis is the time at which the Sun shines directly over Dalian. Fig. 13 shows that this time ranges from 11:36 to 12:08.
Iterative algorithm for tracing rays of light in the atmosphere
Because the refractive index of the atmosphere varies with altitude, rays of light are bent in the atmosphere surrounding the Earth. In an inhomogeneous medium, the propagation of light satisfies the ray equation d/ds [n (dR/ds)] = ∇n (Born and Wolf 2007), where R(s) is the trajectory of the light ray, s is the arc length, n is the refractive index of the atmosphere, and ∇n is the gradient of the refractive index.
According to Eq. (30), the refractive index of the atmosphere can be expressed as n(R), where R is the distance from a point in the atmosphere to the Earth's center. Because the contour surfaces of n(R) are spherical, the gradient of n is parallel to the radial direction of the Earth, that is, R × ∇n = 0. Eq. (54) can therefore be simplified, and Eq. (57) proves that the trajectory of light in the atmosphere is a plane curve, the plane being determined by the Earth's center and the incidence point and incidence direction of the light, as shown in Fig. 14. Using θ̃ to denote the angle between the propagation direction of the light and the radial direction of the Earth, taking the modulus of both sides of Eq. (57) yields the invariant relation Eq. (58).
In order to trace the rays of light in the atmosphere, the atmosphere below an altitude of 50 km is divided equally into 500 layers, and the refractive index of each layer is assumed to be constant. For convenience, the subscript "i" denotes the ith layer of the atmosphere, i = 1, 2, ⋯, 500. R_i is the distance of the ith layer from the Earth's center; n_i is the refractive index of the ith layer; θ_i is the angle between the radial direction of the Earth and the propagation direction of the light in the ith layer, as shown in Fig. 14. Eq. (58) is rewritten in layered form as Eq. (59), from which an iterative algorithm for tracing the rays of light can be constructed. In Fig. 14, a coordinate system O−XY is established, where the origin is the Earth's center, the horizontal axis X points to the Sun, and the Y axis is perpendicular to X in the propagation plane of the rays. Based on the reversibility of the optical path, the rays of light are assumed to travel from the Earth to the Sun.
The ray path can then be traced and calculated by Eq. (60), where β_{i+1} is the incidence angle of the light passing from the ith to the (i + 1)th layer. Eq. (60) is a one-step explicit scheme with first-order accuracy and can be used to simulate the propagation path of light in the atmosphere. During the simulation, the observation position coordinate P_0(x_0, y_0) and the initial angle θ̃_0 serve as the starting conditions. In Fig. 15, Dalian is set as the origin, the horizontal and vertical directions are the same as the X and Y axes in Fig. 10, and the time is Beijing time. During the simulation, τ_Sun = 0, which means that when the light reaches the atmosphere at an altitude of 50 km, its propagation direction is parallel to the X axis.
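The layer-by-layer tracing can be sketched as below. Instead of reproducing the explicit scheme of Eq. (60), the sketch steps the spherical-shell invariant n_i R_i sin θ_i = const through the 500 layers, which serves the same purpose under the same layering assumptions.

```python
import numpy as np

R_EARTH_KM = 6371.0

def trace_ray(theta0_deg, n_of_R, n_layers=500, h_top_km=50.0):
    """Trace a ray leaving the observer at zenith angle theta0 (deg) through
    concentric atmospheric shells of constant refractive index.

    n_of_R: callable giving the refractive index at geocentric radius R (km).
    Each step uses the straight-segment geometry sin(beta) = R_i*sin(theta_i)/R_{i+1}
    followed by Snell's law n_i*sin(beta) = n_{i+1}*sin(theta_{i+1}); this is an
    assumed stand-in for the paper's explicit scheme.
    Returns the zenith angle in every layer and the traversed geocentric angle.
    """
    radii = R_EARTH_KM + np.linspace(0.0, h_top_km, n_layers + 1)
    theta = np.radians(theta0_deg)
    thetas, psi = [theta], 0.0
    for i in range(n_layers):
        sin_beta = radii[i] * np.sin(theta) / radii[i + 1]
        beta = np.arcsin(np.clip(sin_beta, -1.0, 1.0))
        psi += theta - beta                                   # geocentric angle step
        sin_next = n_of_R(radii[i]) * sin_beta / n_of_R(radii[i + 1])
        theta = np.arcsin(np.clip(sin_next, -1.0, 1.0))       # refraction at the interface
        thetas.append(theta)
    return np.degrees(np.array(thetas)), np.degrees(psi)

# Toy exponential-density refractive index (assumed values, for illustration only).
n_toy = lambda R: 1.0 + 2.8e-4 * np.exp(-(R - R_EARTH_KM) / 8.0)
angles, psi = trace_ray(89.7, n_toy)      # a ray leaving the observer close to the horizon
```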
Defining δφ = φ̃_DL − φ_DL, where φ_DL is the theoretical zenith angle of the Sun and φ̃_DL is the observed zenith angle of the Sun, the difference between φ_DL and φ̃_DL is given in Fig. 16, where the horizontal axis is Beijing time over a day and the vertical axis is δφ. Fig. 16 shows that δφ is larger at sunrise and sunset than at noon, a result caused by atmospheric refraction.
In the Dalian coordinate system A−X_DLY_DLZ_DL, the direction of the Sun can be described by the observed zenith angle φ̃_DL and the azimuth angle ψ_DL, as expressed by Eq. (61). Based on Eq. (61), the direction of the Sun in A−X_DLY_DLZ_DL can be calculated; Fig. 17 shows the projection of the Sun's path onto the Y_DLZ_DL plane at X_DL = −1 km.
The length contraction effect in Einstein's special relativity theory
Here, atmospheric refraction effects are neglected, and the length contraction phenomenon of Einstein's special relativity theory is considered in order to investigate its effect on the Sun's visual shape. The length contraction phenomenon in the Sun's image can be described by the contraction ratio and the contraction direction. Because the Sun is far from the Earth, and the Earth rotates about its axis while simultaneously revolving around the Sun, the relative velocity between the Earth and the Sun is very large. According to the length contraction phenomenon of Einstein's special relativity theory, when an observer on the Earth looks at the Sun, the size of the Sun's visual shape in the direction of relative motion is shortened, while the size perpendicular to that direction remains unchanged. Assuming the Sun is a sphere, the Sun in the eyes of the observer will appear as an ellipsoid, and the ratio of its minor to major axis can be expressed as k_SRT = sqrt(1 − v²/c²) (Eq. (62)), where c = 3 × 10^8 m/s is the velocity of light in vacuum and v is the relative velocity between the Sun and the observer. Taking the average angular velocity of the Earth about its axis as ω_E = 15.0411°/hour, the average angular velocity of the Earth around the Sun as ω_S = 0.0411°/hour, and the average Sun-Earth distance as 1.496 × 10^8 km, the relative velocity between the Sun and the observer can be approximated as v ≈ 1.496 × 10^8 km × (ω_E − ω_S) ≈ 1.088 × 10^7 m/s (Eq. (63)). Substituting Eq. (63) into Eq. (62) yields Eq. (64); k_SRT is the shape parameter of the Sun's visual shape, describing the effect of Einstein's special relativity theory on the visual appearance of the Sun. Eq. (64) shows that even when atmospheric refraction effects are neglected, the Sun's shape in our eyes is an ellipse rather than a perfect circle, and the length contraction ratio (the ratio of the dimensional deviation between the minor and major axes to the major axis) is about 0.07%. However, as seen from Fig. 17, the direction of the Sun relative to the observer varies with time; as a result, the direction of the major axis of the Sun's elliptical image also varies.
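A quick numerical check of this estimate, using only the values quoted above (it evaluates the contraction factor, not the full time-dependent expression of Eq. (70)):

```python
import numpy as np

C_LIGHT = 3.0e8                              # m/s
SUN_EARTH_M = 1.496e11                       # average Sun-Earth distance, m
OMEGA_E = np.radians(15.0411) / 3600.0       # Earth's spin rate, rad/s
OMEGA_S = np.radians(0.0411) / 3600.0        # Earth's orbital rate, rad/s

# Approximate relative speed (Eq. (63)): distance times the relative angular rate.
v = SUN_EARTH_M * (OMEGA_E - OMEGA_S)        # ~1.088e7 m/s

# Length-contraction shape parameter (Eq. (62)).
k_srt = np.sqrt(1.0 - (v / C_LIGHT) ** 2)

print(f"v = {v:.3e} m/s, k_SRT = {k_srt:.6f}, contraction = {(1 - k_srt) * 100:.3f}%")
# Expected: v ~ 1.088e7 m/s, k_SRT ~ 0.99934, contraction ~ 0.066%
```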
Applying Eqs. (31), (32) and (35), the instantaneous angular velocity ω_S(θ) of the Sun around the Earth can be expressed as in Eq. (65). The Sun moves around the Earth in the orbital plane, as shown in Fig. 11, and its angular velocity vector ω_S(θ) can be written as in Eq. (66). The Earth rotates about its axis k_E, and its angular velocity vector ω_E can be written as in Eq. (67). Meanwhile, applying Eqs. (31) and (46), the relative radial velocity V_r(θ), which arises from the varying distance between the Sun and the Earth, is obtained as Eq. (68).
Based on Eqs. (66), (67) and (68), the instantaneous relative velocity V_SE(θ) between the Sun and the Earth can be calculated by Eq. (69). Substituting Eq. (69) into Eq. (62) yields Eq. (70), which describes how Einstein's special relativity theory affects the visual shape of the Sun as the relative position between the Sun and the Earth varies.
When a high-resolution digital camera is used to photograph the Sun, the camera is kept horizontal and pointed at the Sun, which means the Sun's image is taken along the direction OS. In the Dalian coordinate system A−X_DLY_DLZ_DL, the horizontal direction of the camera can be described by the Sun's azimuth angle ψ_DL and the unit vectors AN_DL and AW_DL along the two coordinate axes, as in Eq. (71), where H_camera is the horizontal direction of the camera or image. The vertical direction V_camera of the camera or image is determined by Eq. (72). In the Sun's image, the direction of the semi-minor axis caused by Einstein's special relativity theory lies along the projection of the Sun's velocity V_SE(θ) relative to the Earth onto the image plane. Defining ψ_bH as the angle between the horizontal direction of the camera and the semi-minor axis of the Sun's image, it describes the direction of the length contraction (the semi-minor axis) of the Sun's image and can be calculated by Eq. (73). Neglecting atmospheric refraction effects, i.e. considering only Einstein's special relativity theory, and regarding the Sun's visual shape as an ellipse, the shape parameter k_SRT and the angle ψ_bH can be evaluated from Eqs. (70) and (73), respectively. In Fig. 18, the variation of the shape parameter k_SRT is given, where the horizontal axis is the date over the whole year of 2018 and the vertical axis is k_SRT; during the simulation, the observation time is set to 12:00:00 Beijing time every day and the observation position is Dalian. Figure 18 shows that, under this effect alone, the Sun's visual shape is roundest at the winter solstice and flattest at the spring and autumn equinoxes.
In Fig. 19, the variation of the angle ψ_bH is given, where the horizontal axis is Beijing time over a day in mid-December 2018 and the vertical axis is the angle between the semi-minor axis of the Sun's elliptical image and the horizontal direction of the camera, which describes the direction of the length contraction in the Sun's image. Figure 19 shows that the angle ranges from 47° to −47°, with the extremes occurring at sunrise and sunset; the two contraction directions at sunrise and sunset are essentially mirror images of each other.
Numerical examples of the atmospheric refraction effects
Eq. (57) shows that the light in the atmosphere propagates in a plane, and the plane is determined by the Earth's center, the incidence point and incidence direction of light, as shown in Fig. 10 or Fig. 14. In the following numerical examples, the propagation path is simulated and analyzed by applying Eqs. (47) and (60).
When an observer on the Earth sees the Sun, the rays from the Sun pass through the thick atmosphere and then reach the observer's eyes. The Sun's visual shape is determined by the rays from the upper and lower edges of the Sun; therefore, only the rays from the edges of the Sun are simulated and analyzed. Meanwhile, owing to the reversibility of the optical path, the rays are assumed to travel from Dalian toward the Sun. In addition, when the altitude exceeds 50 km the refractive index approaches 1, so only the atmosphere below an altitude of 50 km is considered. If the Sun's rays propagated in straight lines, the observer on the Earth would have an angle of view of 0.5334°. Accordingly, when simulating the ray from the upper edge of the Sun, the angle constraint τ_Sun is set to +0.2667°, and when simulating the ray from the lower edge of the Sun, τ_Sun is set to −0.2667°. This means that when the rays reach the atmosphere at an altitude of 50 km, the angles between the rays and the X axis are restricted to ±0.2667°.
Dalian is taken as the observation position. The coordinates of Dalian are P_0 = [OA·cos(φ_DL), OA·sin(φ_DL)], with OA = 6371 km, and the observation times are in the morning, at noon, and in the afternoon of one day in mid-December 2018. In order to determine the initial angle θ̃_0 between the ray leaving Dalian and the radial direction of the Earth, the bisection method is applied over a suitable initial range. In Fig. 20, Dalian is set as the origin, and the horizontal and vertical directions are the same as the X and Y axes in Fig. 10. From the figure, it can be seen that in the morning the horizon is essentially tangent to the ray from the lower edge of the Sun. Figure 21 illustrates the bending of the rays in the atmosphere; its horizontal axis is the horizontal distance between the observer and the Sun, and its vertical axis is the angle between the propagation direction of the ray and the line connecting the Sun's and the Earth's centers. Compared with the morning, the propagation distance of the rays from the edge of the Sun within the 50-km-thick atmosphere becomes shorter at noon, and the angles between the rays from the Sun and the Earth's radial direction through Dalian (point A) also become smaller. As a result, the bending of the rays in the atmosphere is weakened, which can readily be observed in Fig. 23. When the rays reach Dalian at noon, the observer's angle of view becomes 0.5327°, larger than 0.4457° (the value in the morning) and slightly less than 0.5334° (the angular diameter of the Sun). According to Eq. (74), the shape parameter is k_Ref = 0.9986, which means that the Sun's image is rounder at noon than in the morning. To further investigate the effects of atmospheric refraction on the Sun's visual shape over the whole year, the variation of the shape parameter is given in Fig. 26, where the horizontal axis is the date in 2018 and the vertical axis is the shape parameter k_Ref calculated by Eq. (74); during this simulation, the observation time is set to 12:00:00 Beijing time every day and the observation position is Dalian. Fig. 26 shows that, under refraction alone, the Sun's visual shape is roundest at the summer solstice and flattest at the winter solstice.
Comparison and verification
When studying the effect of Einstein's theory of special relativity on the Sun's visual shape, it was shown that the contraction ratio in the Sun's image varies slowly and only slightly with time: the shape parameter k_SRT ranged from 0.99934 to 0.99948 over the whole year of 2018, as shown in Fig. 18. However, the contraction direction, i.e. the direction of the semi-minor axis of the Sun's elliptical image, varies rapidly and greatly with time: the angle ψ_bH between the horizontal direction of the image and the semi-minor axis ranged from 47° to −47° over one day in mid-December 2018, as shown in Fig. 19.
When studying the effect of atmospheric refraction on the Sun's visual shape, it was shown that the shape parameter k_Ref of the Sun's elliptical image changes greatly: k_Ref ranged from 0.8357 to 0.9986 over one day in mid-December 2018, as shown in Figs. 21, 23 and 25. However, the contraction direction of the Sun's elliptical image caused by atmospheric refraction remains unchanged: because Eq. (57) proves that the propagation path of light in the atmosphere is a planar curve, the semi-minor axis of the Sun's elliptical image is always perpendicular to the horizontal direction of the image.
Assuming that the Sun is a circle, it can be shown by an elementary transformation that after the effects of Einstein's theory of special relativity and of atmospheric refraction are applied to the circle, the circle becomes an ellipse, and the shape parameter k of this ellipse can be calculated by Eq. (75). Based on Eq. (70), the shape parameter k_SRT describing the effect of Einstein's theory of special relativity on the Sun's visual shape can be obtained; based on Eq. (74), the shape parameter k_Ref describing the effect of atmospheric refraction can be evaluated; and combining Eqs. (70), (73), (74) and (75), the overall shape parameter k describing the combined effects of atmospheric refraction and special relativity on the Sun's visual shape can be calculated.
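Because Eq. (75) itself is not reproduced above, one way to evaluate the combined shape parameter numerically is sketched below: both effects are assumed to act as linear contractions of the solar disc (refraction along the vertical direction of the image, relativity along a direction at angle ψ_bH from the horizontal), the two maps are composed, and the singular values of the result give the semi-axes of the final ellipse.

```python
import numpy as np

def combined_shape_parameter(k_ref, k_srt, psi_bH_deg):
    """Shape parameter of a unit circle after two successive linear contractions:
    by k_ref along the vertical image direction (atmospheric refraction) and by
    k_srt along a direction at angle psi_bH from the horizontal (special relativity).
    A numerical stand-in for Eq. (75), not its exact closed form.
    """
    psi = np.radians(psi_bH_deg)

    # Contraction by k_ref along the vertical (y) axis of the image.
    A_ref = np.diag([1.0, k_ref])

    # Contraction by k_srt along the unit direction u = (cos psi, sin psi):
    # identity minus (1 - k_srt) times the projector onto u.
    u = np.array([np.cos(psi), np.sin(psi)])
    A_srt = np.eye(2) - (1.0 - k_srt) * np.outer(u, u)

    # A circle mapped by a linear map becomes an ellipse whose semi-axes are the
    # singular values of the map; the shape parameter is their ratio.
    s = np.linalg.svd(A_ref @ A_srt, compute_uv=False)
    return s.min() / s.max()

# Noon-like values from the text, assuming the relativistic contraction is then
# roughly horizontal: the combined k (~0.9994) lies above k_Ref = 0.9987 alone.
print(combined_shape_parameter(k_ref=0.9987, k_srt=0.99934, psi_bH_deg=0.0))
```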
To identify the reasons for the variation of the Sun's visual shape, the theoretical simulation results and the experimental results were compared, as shown in Figs. 27 and 28. The horizontal axis is Beijing time from sunrise to sunset, the vertical axis is the shape parameter, and the pink circles are the experimental results from Table 3. In Fig. 27, the blue line represents the simulation results considering atmospheric refraction effects only; in Fig. 28, the blue line represents the simulation results considering atmospheric refraction effects combined with Einstein's theory of special relativity.
The relative error is defined as the difference between the simulated and experimental results divided by the experimental results. Fig. 29 shows the relative errors of the simulated results, where the blue line gives the errors calculated when considering atmospheric refraction only, and the green dotted line gives the errors calculated when considering atmospheric refraction combined with Einstein's theory of special relativity. Fig. 29 shows that Einstein's special relativity theory affects the Sun's visual shape much more at noon than at sunrise and sunset. For example, at 11:38 Beijing time, the shape parameter extracted from the Sun's photo is k_Meas = 0.9995; the shape parameter caused by atmospheric refraction alone is k_Ref = 0.9987, with a relative error of 0.8 × 10⁻³; and the shape parameter calculated by considering atmospheric refraction combined with Einstein's theory of special relativity is k = 0.9993, with a relative error of 0.2 × 10⁻³. Although the relative error of k_Ref is about four times that of k, the error is still very small and can be neglected. Figs. 27, 28 and 29 show that the simulated results agree with the experimental results in Table 3.
The comparison study showed that the theoretical analyses in this paper are correct: the variation of the Sun's visual shape is caused mainly by atmospheric refraction effects, while the length contraction effect of Einstein's special relativity theory also contributes slightly, except at sunrise and sunset. Moreover, Einstein's special relativity theory can explain why the Sun's visual shape always appears to be an ellipse instead of a perfect circle.
To further analyze the contributions of atmospheric refraction and Einstein's special relativity to the variation of the Sun's visual shape, three variations of the shape parameter are compared in Fig. 30, where the horizontal axis is the date in 2018 and the vertical axis is the shape parameter. The blue line is k_Ref caused by atmospheric refraction effects, similar to the curve in Fig. 26; the pink dotted line is k_SRT caused by the length contraction effect of Einstein's special relativity theory, the same as the curve in Fig. 18; and the black line is the shape parameter calculated by considering atmospheric refraction combined with Einstein's special relativity theory. During the simulation, the observation time was set to 12:00:00 Beijing time every day and the observation position was Dalian. Fig. 30 shows that, over the whole year, the Sun's visual shape at Dalian is roundest at the end of February and flattest at the winter solstice. This conclusion differs from those drawn by considering only atmospheric refraction or only special relativity, because both effects contribute to the variation of the Sun's visual shape.
Conclusions
From sunrise to sunset, the Sun's visual shape changes continuously from a flatter ellipse to an almost circular shape and back to a flatter ellipse. In this paper, experimental measurements and theoretical analyses were performed to investigate the reasons for this variation, and several meaningful conclusions were drawn. Image processing, the method of moments, and the least-squares method were combined to perform the experimental measurements and calculations. The statistical error analyses showed that the relative measurement accuracy was about 0.023% and that the standard deviation of the fitted ellipse was only 4 pixels. The experimental results showed that the Sun's visual shape can be accurately approximated by an ellipse.
Atmospheric refraction effects make the Sun's visual shape an ellipse and have a great influence on the shape during the day. Because the refractive index of the atmosphere was expressed as a function of altitude and wavelength of light, the trajectory of sunlight is a planar curve; when only atmospheric refraction effects are considered, the Sun's visual shape contracts in the zenith direction, resulting in an elliptical Sun. In the Sun's photo, the contraction direction remains unchanged, but the contraction ratio varies rapidly and greatly with time, especially at sunrise and sunset.
The length contraction effect of Einstein's special relativity theory also transforms the Sun's visual shape into an ellipse and contributes slightly to its variation over a day, especially at noon. Owing to the relative motion of the Sun with respect to an observer standing on the Earth, the Sun's visual shape contracts in one direction, resulting in an elliptical Sun. However, unlike the atmospheric refraction effect, the contraction ratio varies slowly and only slightly with time (it is about 0.06%), while the contraction direction varies rapidly and greatly with time.
Considering the effects of atmospheric refraction and Einstein's special relativity theory on the Sun's visual shape together, although their contraction directions differ, the Sun's visual shape is shown to remain an ellipse.
Over a day, the Sun's visual shape is roundest at noon and flattest at sunrise and sunset; over the whole year, the Sun's visual shape for an observer at Dalian is roundest at the end of February and flattest at the winter solstice. Observers at different geographic positions viewing the Sun at the same moment see different shapes; in theory, an observer at the equator would see the roundest Sun.
Comparing the theoretical simulations with the experimental measurements, the relative errors were less than 0.3%. These results thus verify the theoretical analyses in the paper, including the set of formulas relating the zenith angle of the Sun to the observation times and positions, the model of altitude and refractive index, the iterative algorithm for tracing rays of light in the atmosphere, and the calculation of the feature parameters of the Sun's visual shape. | 13,302.8 | 2019-12-05T00:00:00.000 | [
"Physics"
] |
Cs2CO3-Mediated Regio- and Stereoselective Sulfonylation of 1,1-Dibromo-1-alkenes with Sodium Sulfinates
Abstract A highly selective synthesis of (Z)-1-bromo-1-sulfonyl alkenes via Cs2CO3-promoted sulfonylation of 1,1-dibromo-1-alkenes with sodium sulfinates is described. Notably, using excess amounts of Cs2CO3 and sodium sulfinate in such a reaction regenerated the parent aldehyde. Interestingly, the reaction of 1-(2,2-dibromovinyl)-2-nitrobenzene in the presence of sulfinates and Cs2CO3 produced isatin. The Sonogashira cross coupling of synthesized (Z)-1-bromo-1-sulfonyl alkenes with phenylacetylene gave selectively the corresponding sulfonylalkynyl alkenes.
This paper is dedicated to Professor Majid M. Heravi on the occasion of his 68th birthday. Organosulfur compounds, particularly vinyl sulfone derivatives, have attracted considerable research interest because they are abundantly found in useful synthetic products and naturally occurring substances.1 In addition, the vinyl sulfonyl group is extensively used as a synthetic intermediate in organic synthesis.2 Usually, vinyl sulfones are prepared from alkene precursors using sulfone agents such as sodium sulfinates, sulfonyl hydrazides, sulfinic acids, and tosylmethyl isocyanide (TosMIC).3 Among the sulfone agents, sodium sulfinates are readily available and stable; therefore, they are widely applied in organic transformations.4 Furthermore, 1,1-dibromo-1-alkenes have been extensively used as efficient alkene and alkyne precursors for generating complex alkene and alkyne derivatives.5 They can easily be derived from aldehydes or ketones using CBr4/PPh3.6 One common and practical method for synthesizing terminal alkynes (the Corey-Fuchs reaction) and bromoacetylenes is the treatment of 1,1-dibromo-1-alkenes with a base.7 Although gem-dibromoalkenes are widely used in organic transformations, selective reactions that retain one Br atom and thereby produce 1,1-difunctionalized alkenes are rare. For instance, the monoalkenylation,8 selenation,9 alkynylation,10 arylation,11 etherification,12 and borylation13 of geminal dibromoalkenes have been reported. Chen et al. established the synthesis of 2-arylbenzofurans(thiophenes) via the tandem reaction of 2-(gem-dibromovinyl)phenols(thiophenols) and sodium arylsulfinates in the presence of the TBAF-PdCl2-Cu(OAc)2-NEt3 system.14 A stereoselective synthesis of vinyl triflones starting from gem-dibromovinyl derivatives has been achieved via triflyl migration reactions.15 Although the synthesis of vinyl sulfones has been studied widely and intensively, the synthesis of 1-bromo-1-sulfonylalkenes has been described in only a few reports. For instance, the sulfonylation of activated alkynes with sodium sulfinates,16 as well as with sulfinic acids,17 in water as the reaction medium has been developed. In both cases, ethyl 3-bromopropiolate reacted with sodium toluenesulfinate or toluenesulfinic acid to yield ethyl 3-bromo-3-tosylacrylate (Scheme 1, eq. 1). Fisher et al. prepared an α-bromovinyl sulfone for their study in four steps starting from propylene oxide (Scheme 1, eq. 2).18 In the first step, ring opening of the epoxide with a sulfinate salt produced an alcohol, which was converted to the corresponding vinyl sulfone via treatment with methanesulfonyl chloride and base. The vinyl sulfone was dibrominated with Br2 under radical conditions, and subsequent DBU-mediated dehydrobromination afforded the α-bromovinyl sulfone. Condensation of a bromomethyl sulfone with an aldehyde generated the corresponding vinyl bromide.19 Also, diethyl bromo(phenyl-
Scheme 1 Synthesis of α-bromovinyl sulfones
From the perspective of diversity-oriented synthesis, the incorporation of bromine and sulfone functional groups onto the terminal carbon atom of an alkene may be a significant achievement, as it produces novel and more complex molecules. Inspired by the aforementioned results, and based on our continued interest in exploring the applications of gem-dibromoalkenes,21 we envisioned that a selective debromosulfonylation of 1,1-dibromo-1-alkenes could be performed. Herein, we report a practical protocol for the highly regioselective synthesis of (Z)-1-bromo-1-sulfonylalkenes 3 from sodium sulfinates and 1,1-dibromoalkenes in a basic medium (Scheme 1, eq. 4).
Under the established optimal conditions, the scope of this reaction was examined with a variety of aromatic and heteroaromatic dibromoalkenes and with aliphatic and aromatic sodium sulfinates. As shown in Scheme 2, the tandem reaction of β,β-dibromostyrenes bearing Br, Cl, NO2, OMe, Me, or CF3 substituents in the para-, meta-, or ortho-position with sodium phenyl-, methyl-, or tolylsulfinate produced the corresponding β-bromo-β-sulfonylstyrenes 3a-k in 65-91% yields.
Next, we focused on the reaction of sodium phenylsulfinate with 2-chloro-3-(gem-dibromovinyl)quinoline because it bears an active C-Cl bond at the 2-position of the quinoline. Interestingly, debromosulfonylation occurred in the same way as described above, and the C-Cl bond remained intact to give 3l. When the other vinylquinolines were employed, the selectivity was identical for the aliphatic and aromatic sodium sulfinates, affording the corresponding products 3m-r in good to excellent yields.
To test the efficiency of this method in gram-scale synthesis, gem-dibromoalkene 1c (1.22 g) was chosen to react with sodium phenylsulfinate (0.778 g) in the presence of Cs 2 CO 3 (4 mmol) in DMSO (20 mL), which gave 3c in 71% yield after 10 hours (Scheme 3).
Scheme 3 Gram-scale synthesis of 3c
The debromosulfonylation reaction proceeded regio- and stereoselectively to produce the corresponding Z-brominated alkenyl sulfones 3 in good yields, and no E-isomer was observed. Although the reason for this high selectivity is not clear, the formation of an intramolecular hydrogen bond between the C2-H and the Br atom may be a factor.
The structure of compound 3c was confirmed via X-ray crystallographic analysis (Figure 1).22 Further, 1-(2,2-dibromovinyl)-2-nitrobenzene (1l) was converted to isatin (5) in 90% yield in the presence of sodium phenylsulfinate and Cs2CO3 (Scheme 4). The same result was obtained when sodium tolylsulfinate was used as the sulfone source. Considering the removal of the two oxygens of the nitro group and the appearance of oxygen at positions 2 and 3 of isatin, we propose the mechanism outlined in Scheme 4. The reaction commences with the base-promoted HBr elimination of 1l to generate bromoacetylene A.5b,7b To support this step, 1l was treated with Cs2CO3 in DMSO, which indeed yielded A. Subsequently, the addition of ArSO2− to intermediate A formed B.16,23 Intermediate B then underwent intramolecular oxa-Michael addition to yield C.24 Ring opening of C, followed by base-promoted intramolecular hydroalkylation of the nitroso group, afforded E.25 Finally, HBr elimination via oxaziridine formation led to the ring opening of F, assisted by the removal of the sulfonyl group, eventually affording 5 (Scheme 4).
Scheme 2 Scope of various 1,1-dibromo-1-alkenes and sodium sulfinates
Scheme 4 Plausible mechanism for the formation of isatin (5) from 1l
Furthermore, the feasibility of using 3 to obtain more complex molecules was investigated. In this regard, the Sonogashira cross coupling of 3a and 3c with phenylacetylene resulted in the corresponding alkynylated products 6a and 6b in 91% and 88% yield, respectively (Scheme 5).
Scheme 5 Synthetic utility of (Z)-α-bromovinyl sulfones
In summary, we have developed a robust transition-metal-free synthetic method for the highly regioselective and stereoselective debromosulfonylation of 1,1-dibromo-1-alkenes using sodium sulfinates. The Cs2CO3-mediated reaction of a wide range of aromatic and heteroaromatic substrates has wide applicability and good functional-group compatibility. From the reaction of 1-(2,2-dibromovinyl)-2-nitrobenzene with sodium phenyl- and tolylsulfinate, isatin was isolated as the sole product. As an example of the synthetic potential of 3, the selective palladium-catalyzed alkynylation of these compounds with phenylacetylene was demonstrated.
The solvents and chemicals were purchased from Merck and Aldrich and, unless otherwise mentioned, were used without further purification. The 1,1-dibromoalkenes were prepared according to reported procedures.26 Melting points were determined on an Electrothermal 9100 apparatus and are uncorrected. FT-IR spectra were recorded on a Shimadzu IR-435 infrared spectrometer. Nuclear magnetic resonance (NMR) spectra were recorded on a Bruker AVANCE spectrometer (400 MHz for 1H, 100 MHz for 13C) in DMSO-d6 as solvent. Mass spectra were recorded on an Agilent Technologies (HP) 5973 Network mass selective detector operating at an ionization potential of 70 eV, and a Leco CHNS analyzer, model 932, was used for elemental analysis.
Cs 2 CO 3 -Promoted Sulfonylation of 1,1-Dibromo-1-alkenes with Sodium Sulfinates; General Procedure
To a mixture of the respective gem-dibromoalkene 1 (1.0 mmol) and Cs2CO3 (326 mg, 1.0 mmol) in DMSO (5.0 mL) was added the corresponding sodium sulfinate (1.2 mmol). The mixture was stirred at 100 °C for 5 h. Upon completion of the reaction, H2O (20 mL) was added and the mixture was extracted with CH2Cl2 (20 mL). The organic layer was washed with brine and dried (MgSO4). The solvent was removed, and the residue was purified by column chromatography using n-hexane/EtOAc (9:1) to obtain 3 in pure form.
Funding Information
We are thankful to Alzahra University and the Iran National Science | 1,971.2 | 2020-10-06T00:00:00.000 | [
"Chemistry"
] |
Rectangular Porous-Core Photonic-Crystal Fiber With Ultra-Low Flattened Dispersion and High Birefringence for Terahertz Transmission
We propose a novel porous-core photonic crystal fiber (PCF) consisting of asymmetrical rectangular air holes in the core and six-ring hexagonal lattice circular air holes in the cladding for achieving low-loss polarization terahertz transmission in a wide frequency range. By assuming TOPAS as the host material, the finite element method (FEM) is used to investigate its properties. The near-zero flattened dispersion of −0.01±0.02 ps/THz/cm is achieved over a frequency range of 1.0–2.0 THz, as well as a high birefringence of 7.1 × 10−2 which can be useful for polarization-maintaining applications. Also, critical parameters such as mode field distribution, effective material loss, confinement loss, and effective mode area are discussed in detail. Further, fabrication possibilities are discussed briefly by comparing recent work on similar waveguide structures. OCIS Codes: 040.2235 (Far infrared or terahertz), 060.4005 (Micro-structured fibers), 060.2420 (Fibers, polarization-maintaining), 160.5470 (Polymers).
INTRODUCTION
For the past few decades, terahertz (THz) radiation, or THz waves, which lie between the microwave band and the infrared with a frequency range from 0.1 to 10 THz [1,2], have intrigued researchers because of their extensive applications, ranging from security-sensitive areas to medical imaging, sensing, and spectroscopy [3][4][5][6][7]. Nowadays imaging [5,[8][9][10], sensing [11], communication [11,12], astronomy [13], and biomedical engineering for diagnosis and detection [14][15][16][17] depend heavily on THz waveguides. The new-generation 6G communication technology requires highly integrated THz systems, which will further boost research attention on compact THz waveguides with higher birefringence, ultra-flattened dispersion, and low loss [18,19]. In particular, various novel properties can be exploited in ultrafast optics by exploring the peculiar dispersive properties of THz waveguides [20][21][22]. Several devices, such as optical delay lines, dispersion compensators for short-pulse generation, and white-light generators, have been developed. Recently, various types of waveguides have been proposed, such as metallic wire [23], dielectric metal-coated tube [24], plastic fiber [25], polymer Bragg fiber [26], polystyrene foam [27], hollow-core fiber [28], and solid-core fiber [29]. However, all of these are problematic owing to their narrow-band operation, high material loss, high bending loss, and strong coupling with the surrounding environment.
Therefore, greater attention is now focused on porous-core fibers [2,[30][31][32][33][34][35][36][37][38][39][40], in which waveguide parameters such as the core diameter, pitch size, air filling fraction, air hole radius, and operating frequency can be set by design. Furthermore, low effective material loss (EML), low confinement loss, low dispersion variation, high birefringence, and a high core power fraction can be achieved in a PCF by selecting the geometrical parameters [41]. A number of PCFs with high birefringence and low dispersion variation have been proposed in recent years [35][36][37][38][39][40][41][42][43]. For example, M. R. Islam et al. proposed a novel fiber with a honeycomb-like cladding structure and a hexagonal slotted core, which had a high birefringence of 0.083 together with a low confinement loss of 10^-8 cm^-1 and an effective material loss of 0.095 cm^-1 at an operating frequency of 1.5 THz [42]. A slotted porous-core circular THz waveguide was proposed that achieved an ultra-high birefringence of 0.075 and a dispersion value of 1 ps/THz/cm between 1 and 1.3 THz [35]. In 2016, a porous-core polarization-maintaining spiral photonic crystal fiber (PCF) with a birefringence of 0.0483 and a dispersion value of 0.57 ± 0.09 ps/THz/cm over the range 1.2-1.8 THz was proposed [36]. R. Islam et al. proposed a double asymmetrical fiber that achieved a birefringence of 0.045 and a dispersion value of 0.9 ± 0.26 ps/THz/cm between 0.5 and 2 THz [37]. More recently, Islam M. S. et al. proposed a Zeonex-based PCF with a high birefringence of 6.3 × 10^-2 and flattened dispersion [43]. K. Paul and K. Ahmed suggested a highly birefringent, ultra-low-material-loss PCF based on TOPAS, which has a birefringence of 1.34 × 10^-2 and a low material loss of 0.053 cm^-1 at a frequency of 1 THz [44]. Jakeya Sultana et al. reported an ultra-high birefringence of 0.086 and an ultra-flattened near-zero dispersion of 0.53 ± 0.07 ps/THz/cm over a broad frequency range in 2018 [45]. However, despite the excellent properties of the proposed waveguides, THz transmission efficiency can still be limited by either low birefringence with high dispersion or high birefringence with high confinement loss. There is therefore considerable scope for improving PCFs through the joint optimization of dispersion, birefringence, and loss.
In this paper, using the TOPAS cyclic-olefin copolymer as the substrate, we propose a novel porous-core PCF consisting of asymmetrical rectangular air holes in the core and hexagonally arranged circular air holes in the cladding. The aim is to increase the birefringence and reduce the confinement loss while flattening the dispersion. In addition, other crucial optical properties of the proposed PCF, such as the effective material loss (EML), core power fraction, and effective mode area, are discussed under the optimized structural parameters. The greatest advantage of the proposed PCF is its simple structure, which makes fabrication feasible with existing techniques such as extrusion, sol-gel casting, 3D printing, and in-situ polymerization [46][47][48][49].
DESIGN PRINCIPLE OF THE FIBER STRUCTURE AND THEORETICAL MODEL
The proposed THz-PCF is composed of asymmetrical rectangular air holes in the porous core and circular air holes arranged in a hexagonal lattice in the cladding; its cross-section is shown in Figure 1, along with an enlarged view of the porous-core region. Here, a simple cladding is adopted with a six-ring hexagonal lattice of rounded-corner air holes to flatten the dispersion and facilitate manufacturing. The pitch Λ is the distance between two adjacent air holes, and d is the diameter of the air holes. In our simulation, the ratio d/Λ is kept at a large value of 5/6, based on the fact that a larger d tends to confine the light better in the core [50]. To increase the birefringence, 17 rectangular air holes are introduced in the porous core to induce asymmetry in the structure. The length and width of the rectangular air holes are denoted by b and a, and the pitches Λ_x and Λ_y are the distances between two adjacent rectangular air holes in the horizontal and vertical directions, respectively. The core porosity P is defined as the ratio of the area of the rectangular air holes to the total area of the core. The diameter of the porous-core area along the horizontal direction is defined as D_core. The host background material of the entire THz-PCF is TOPAS (a cyclic-olefin copolymer), selected because of its unique and useful characteristics, such as insensitivity to humidity, suitability for bio-sensing, fabrication flexibility, and a high glass-transition temperature [39]. In addition, the bulk material loss of TOPAS is 0.06 cm⁻¹ at 0.4 THz and increases at a rate of 0.36 cm⁻¹/THz; it also has a nearly constant refractive index of 1.525 over a broad frequency range of 0.1 to 2 THz with low material absorption loss and dispersion [37]. The frequency-dependent character of the absorption coefficient of TOPAS is considered throughout the simulation.
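To make the porosity definition concrete, the short Python sketch below estimates P from the rectangular-hole dimensions, assuming a circular core of diameter D_core; the hole dimensions a and b are hypothetical values chosen only to land near the 30% porosity used later, not parameters taken from the paper.

```python
import math

def core_porosity(n_holes: int, a: float, b: float, d_core: float) -> float:
    """Core porosity P: total rectangular air-hole area over the core area.

    Assumes a circular core of diameter d_core; a and b are the width and
    length of each rectangular hole (all lengths in the same unit).
    """
    hole_area = n_holes * a * b
    core_area = math.pi * (d_core / 2.0) ** 2
    return hole_area / core_area

# Hypothetical hole dimensions (um), chosen only to give P ~ 30% for D_core = 378 um.
print(f"P = {core_porosity(17, 25.0, 79.0, 378.0):.2%}")
```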
The main concern of the proposed THz-PCF is to increase its birefringence and flatten its dispersion. To achieve this goal, it is important to optimize the porous-core parameters, namely the core porosity (P) and the core diameter D_core. The electric field distributions of the proposed THz-PCF for different core diameters D_core at f = 1.5 THz are shown in Figure 2. Here, the core porosity is fixed at 30%, because any further increase in core porosity may result in overlapping air holes, making fabrication a great challenge. It can be seen from Figure 2 that the mode power distribution is well confined in the core region, and the confinement of the y-polarized mode is better than that of the x-polarized mode, which is essential for high birefringence as well as low dispersion in the transmission of THz waves. It is worth noting that the mode power flow distribution is best constrained in the porous-core area when the diameter of the porous core is 378 µm.
Here, the key propagation properties of the proposed PCF THz waveguide are numerically simulated by calculating the effective indices of the electromagnetic modes with the FV-FEM. An anti-reflective layer, also known as a perfectly matched layer (PML), is used at the outer boundary of the waveguide to reduce the effect of the surrounding environment and materials on the confinement loss [51]. The thickness of the PML is set to 12.5% of the total fiber radius, which is compatible with fabrication possibilities. According to the functional relationship between the effective refractive index (RI) and the PML thickness shown in Figure 3, the influence of the PML thickness on the real part of the RI can be ignored, while its influence on the imaginary part cannot. Because the dispersion and birefringence depend directly on the real part of the RI, whereas the confinement loss depends on the imaginary part, the PML thickness has almost no effect on dispersion and birefringence, but its effect on the confinement loss cannot be ignored. For all the optimum parameters, PML thicknesses of 5, 12.5, and 20% show a lower imaginary part of the RI. Since a 5% PML thickness creates fabrication complexity and a 20% PML thickness makes the fiber bulky, we chose a PML thickness of 12.5% for the proposed fiber.
SIMULATION RESULTS AND DISCUSSION
Birefringence and dispersion are two crucial properties that limit the quality of signal transmission in high-speed THz communication systems. One of our design goals is to make the PCF operate with high birefringence and ultra-low flat dispersion. Therefore, it is important to examine the influence of parameters such as the core diameter (D_core) and core porosity (P) on birefringence and dispersion in the THz waveguide design.
Firstly, to obtain high birefringence for the proposed THz-PCF, the level of birefringence must be quantified. The mode birefringence (B) is analyzed first; it is defined as the difference between the real parts of the RI of the two fundamental orthogonal polarization modes and can be expressed as B = |n_x − n_y| [36], where n_x and n_y are the RIs of the x- and y-polarized modes, respectively. The dependence of the birefringence (B) on D_core and core porosity is demonstrated in Figure 4, with high birefringence at the level of 10⁻². Figure 4A depicts the birefringence as a function of frequency for different D_core at P = 30%. It can be seen from Figure 4A that the birefringence of the proposed THz-PCF increases gradually with the core-area diameter and presents a flat trend in the frequency range of 1.2-1.8 THz.
The reason is that the ratio of the length to the width of the rectangular air holes in the porous core increases when D_core is increased. Therefore, the asymmetry induced in the porous-core area becomes stronger and leads to an enhancement of the birefringence. When D_core = 378 µm, the birefringence stays above 7.0 × 10⁻² over the frequency range of 1.2-1.8 THz, and an ultra-high birefringence of 7.1 × 10⁻² is obtained at f = 1.5 THz. Then, with a fixed D_core = 378 µm, the birefringence as a function of frequency for different core porosities is shown in Figure 4B, where it is observed that the birefringence increases when the core porosity is decreased. When the core porosity is 30%, a birefringence higher than 6.0 × 10⁻² is obtained over the frequency range 0.9-2.0 THz, with the highest birefringence of 7.1 × 10⁻² at 1.5 THz. A vital feature of the proposed THz-PCF is its nearly constant birefringence over a wide frequency region from 1.2 to 2 THz, which is very significant for polarization-maintaining THz transmission applications. Secondly, dispersion is another important property in the application of a THz-PCF; it should be as low as possible to reduce the bit error rate and avoid optical signal overlap due to pulse broadening. Near-zero ultra-flat dispersion is particularly suitable for the effective transmission of broadband waves [36]. In general, dispersion can arise either from the bulk material used (material dispersion) or from the waveguide structure (waveguide dispersion). Here, we consider only the waveguide dispersion, because the material dispersion of TOPAS is negligible [37]; it can be expressed as β₂ = (2/c)(dn_eff/dω) + (ω/c)(d²n_eff/dω²) [13], where ω is the angular frequency, c is the speed of light in vacuum, and n_eff is the effective RI of the fiber. Obviously, the dispersion mainly depends on the change of the RI of the waveguide with frequency. The dispersion profiles of the proposed THz-PCF as a function of frequency for different core diameters D_core and core porosities are shown in Figure 5. Figure 5A shows β₂ as a function of frequency for different D_core at P = 30%. It can be seen that the dispersion decreases and the corresponding flattened region broadens gradually as D_core increases. There is a visible difference between the dispersion values of the x- and y-polarized modes: the dispersion of the x-polarized mode is higher than that of the y-polarized mode, which is consistent with the mode power flux being more tightly confined in the core for the y-polarized modes than for the x-polarized modes because of the asymmetrical PCF structure. It is worth noting that the flattened dispersion range nearly covers the frequency range of interest from 1.1 to 2.0 THz when D_core = 378 µm, which coincides with the frequency range of high birefringence. The variation of the dispersion value is ±0.02 ps/THz/cm for the y-polarized mode and ±0.05 ps/THz/cm for the x-polarized mode. Furthermore, the dependence of β₂ on core porosity is shown in Figure 5B with a fixed D_core = 378 µm. It is obvious that the dispersion value decreases and gets closer to zero with decreasing core porosity, because more mode power flux is confined in the core for lower core porosity. In addition, the y-polarized mode exhibits a lower dispersion than the x-polarized mode in the flattened dispersion region.
Also, a near-zero ultra-flattened dispersion value of −0.01 ± 0.02 ps/THz/cm for the y-polarized mode and 0.2 ± 0.05 ps/THz/cm for the x-polarized mode can be obtained over the entire frequency range of interest from 1.1 to 2.0 THz for P = 30%. Compared to the THz-PCFs reported in the literature [35][36][37][38][39][40][41][42][43][44][45][46][50][51][52][53], the proposed THz-PCF presents a near-zero ultra-flat dispersion over the widest frequency range of 1.1-2.0 THz, with two zero-dispersion points (1.06 THz and 1.74 THz). Therefore, D_core = 378 µm and a porosity of 30% are selected as the optimal design parameters.
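As a rough illustration of how the waveguide dispersion would be evaluated from simulated data, the sketch below differentiates a tabulated n_eff(f) curve with finite differences and converts the result to ps/THz/cm; the analytic n_eff used here is a made-up placeholder standing in for FV-FEM output.

```python
import numpy as np

c = 299792458.0                                   # speed of light, m/s

# Placeholder for FV-FEM output: effective index sampled on a frequency grid.
f_thz = np.linspace(0.8, 2.2, 141)                # THz
omega = 2.0 * np.pi * f_thz * 1e12                # angular frequency, rad/s
n_eff = 1.10 + 0.05 * np.tanh(f_thz - 1.2)        # made-up smooth curve, not real data

# beta2 = (2/c) dn/domega + (omega/c) d^2n/domega^2 (waveguide dispersion)
dn = np.gradient(n_eff, omega)
d2n = np.gradient(dn, omega)
beta2 = 2.0 / c * dn + omega / c * d2n            # s^2/m
beta2_ps_thz_cm = beta2 * 1e22                    # 1 s^2/m = 1e22 ps/THz/cm

i = np.argmin(np.abs(f_thz - 1.5))
print(f"beta2(1.5 THz) ~ {beta2_ps_thz_cm[i]:.3f} ps/THz/cm")
```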
It is well known that waveguides with low absorption loss and a high power fraction at higher frequencies are good candidates for broadband transmission in the THz regime. Losses arise as light propagates along the whole length of the fiber. The effective material loss (EML, α_eff) is one of the major causes of signal energy dissipation and can be calculated as [13] α_eff = √(ε₀/µ₀) (∫_mat n α_mat |E|² dA) / (2 ∫_all S_z dA), where ε₀ is the permittivity and µ₀ the permeability of vacuum, n = 1.525 is the refractive index of the material, α_mat is its bulk absorption coefficient, and S_z is the z-component of the Poynting vector, defined as S_z = ½ (E × H)·ẑ, with E and H the electric and magnetic field components of the transmitted power, respectively. The integral in the numerator covers the TOPAS material regions, and the one in the denominator covers all regions. Confinement loss is another critical factor that limits the propagation length of the transmitted signal through the THz waveguide; it is directly related to the number of air holes used, the spacing between adjacent air holes, and the number of rings in the cladding. In practice, the confinement loss should be as low as possible to obtain a longer propagation length for the THz signal, and it can be calculated as L_c = (4πf/c) Im(n_eff) [40,50], where f is the operating frequency, c is the speed of light, and Im(n_eff) is the imaginary part of the effective refractive index. Figures 6A,B give the EML and confinement loss as functions of frequency for different core porosities. It is observed that the EML increases with frequency for both the x- and y-polarized modes, and that the x-polarized mode exhibits a lower absorption loss than the y-polarized mode (see Figure 6A). This can be explained by the fact that the majority of the light propagates through the porous-core area in the y-polarized mode, which is consistent with the power-flow distributions shown in Figure 2. Also, the EML decreases with increasing core porosity at a fixed frequency, because more light propagates through the air holes in the core rather than through the material. Note that, for the optimal parameters (D_core = 378 µm and P = 30%), the EML is found to be 0.141 cm⁻¹ and 0.201 cm⁻¹ for the x- and y-polarized modes at 1.5 THz, respectively. The confinement loss decreases for both the x- and y-polarized modes with increasing frequency (see Figure 6B). The confinement loss of the y-polarized mode is lower than that of the x-polarized mode because of the better mode confinement for y than for x; as a result, less mode power leaks out to the cladding and the confinement loss decreases. It is worth noting that the confinement loss is found to be 7.08 × 10⁻¹² cm⁻¹ and 5.74 × 10⁻⁹ cm⁻¹ for the y- and x-polarized modes at 1.5 THz under the optimal parameters, respectively.
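A minimal check of the confinement-loss expression quoted above; the imaginary part of the effective index is a hypothetical value chosen only to reproduce the order of magnitude reported at 1.5 THz.

```python
import math

C = 299792458.0  # speed of light, m/s

def confinement_loss_cm(f_hz: float, im_n_eff: float) -> float:
    """Confinement loss (4*pi*f/c)*Im(n_eff), returned in cm^-1."""
    return 4.0 * math.pi * f_hz / C * im_n_eff / 100.0  # m^-1 -> cm^-1

# Hypothetical Im(n_eff), picked to illustrate the reported y-polarized loss scale.
print(confinement_loss_cm(1.5e12, 1.1e-14))  # ~7e-12 cm^-1
```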
The mode power fraction quantifies the amount of electromagnetic power propagating through a given region of the fiber; our goal is to achieve a high core power fraction. It can be calculated as [31,45]

η_x = ∫_x S_z dA / ∫_all S_z dA,
where x in the numerator denotes the region of interest and the denominator covers the total cross-section. The effective mode area (A_eff) quantifies how the electric field energy is distributed inside the waveguide and can be expressed as A_eff = [∫ I(r) dA]² / ∫ I²(r) dA [40], where I(r) = |E_t|² is the transverse electric field intensity in the proposed fiber. Figures 7A,B show the mode power fraction and the effective mode area as functions of frequency for different core porosities. The mode power fraction increases with core porosity at a fixed frequency, and the core power fraction of the y-polarized mode is smaller than that of the x-polarized mode, which corresponds to the EML result (see Figure 7A). The power fraction was found to remain unchanged over a wide frequency range from 1.2 to 2 THz, which is very desirable for broadband THz transmission with minimized confinement loss. More than 52% of the power fraction is obtained at 1.5 THz for the optimum THz-PCF structure. In addition, the effective mode area (A_eff) decreases with increasing frequency and increases with increasing core porosity (see Figure 7B). Therefore, the transmission quality of the light beam could be improved markedly by decreasing the core porosity, and the modes would be more confined in the fiber core region. When D_core = 378 µm and P = 30%, A_eff is 5.13 × 10⁻⁸ m² and 6.95 × 10⁻⁸ m² for the x- and y-polarized modes at 1.5 THz, respectively.
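The integrals behind A_eff and the power fraction can be approximated on a sampled intensity map as in the sketch below; the Gaussian profile, mode radius, and core mask are placeholders for actual FV-FEM fields (and the intensity is used as a stand-in for S_z), so the printed numbers are only illustrative.

```python
import numpy as np

# Toy transverse intensity map on a 2D grid (metres); placeholder for FV-FEM output.
x = np.linspace(-400e-6, 400e-6, 401)
y = np.linspace(-400e-6, 400e-6, 401)
X, Y = np.meshgrid(x, y)
w = 130e-6                                       # made-up mode radius
I = np.exp(-2.0 * (X**2 + Y**2) / w**2)          # I(r) = |E_t|^2 (arbitrary units)

dA = (x[1] - x[0]) * (y[1] - y[0])
A_eff = (I.sum() * dA) ** 2 / ((I**2).sum() * dA)       # [int I dA]^2 / int I^2 dA
core_mask = X**2 + Y**2 <= (189e-6) ** 2                # core of D_core = 378 um
power_fraction = I[core_mask].sum() / I.sum()           # proxy for int_core Sz / int_all Sz
print(f"A_eff ~ {A_eff:.2e} m^2, core power fraction ~ {power_fraction:.2f}")
```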
From the above discussion, it is clear that the proposed THz-PCF presents a near-zero ultra-flattened dispersion of −0.01 ± 0.02 ps/THz/cm and an ultra-high birefringence of 7.1 × 10⁻² over the broad frequency range of 1.1-2.0 THz, together with a low confinement loss of 7.08 × 10⁻¹² cm⁻¹ and a high core power fraction of 52%, which makes it attractive for broadband THz transmission systems. Table 1 compares several fundamental properties of the proposed THz-PCF with Refs. [35-38, 40, 42, 43, 45, 51-53]. It clearly outperforms the fibers reported in the literature in terms of high birefringence, ultra-low flat dispersion, and low confinement loss. The proposed THz-PCF, with its ultra-low flat dispersion and birefringence higher than 7.0 × 10⁻² over the wide frequency range of 1.1-2 THz, can play an important role in ultra-wideband polarized THz transmission systems. The proposed THz-PCF can be fabricated relatively easily because of its simple structure, consisting of circular air holes in the cladding and rectangular air holes in the core. In particular, recent technologies such as extrusion and 3D printing have been demonstrated for fabricating asymmetrical air holes of various complex shapes [43]. The extrusion technique developed by J. Wang et al. [47] offers fabrication freedom for complex structures, including crystalline and amorphous PCFs. The sol-gel casting technique and in-situ polymerization demonstrated in [48,49] offer the design freedom to fabricate micro-structured PCFs in which the air-hole size and spacing can be adjusted independently. Therefore, it can be anticipated that these verified techniques are sufficient to fabricate the proposed structure.
CONCLUSION
In conclusion, a near-zero ultra-flat dispersion and high-birefringence porous-core THz-PCF based on TOPAS has been designed for broadband THz transmission systems. It is composed of a porous core of rectangular air holes, which induces high birefringence, and a cladding of hexagonal-lattice circular air holes, which enhances the guided-mode confinement. Its guiding properties are characterized for various geometrical parameters, including different values of the core porosity and core diameter, over frequencies in the THz regime. In the broad frequency range of 1.1-2.0 THz, a birefringence as high as 7.1 × 10⁻² and an ultra-low flattened dispersion of −0.01 ± 0.02 ps/THz/cm can be achieved simultaneously. The proposed fiber also has other advantageous properties, such as good mode confinement due to low EML (≲0.2 cm⁻¹), ultra-low confinement loss (∼7.08 × 10⁻¹² cm⁻¹), and a high core power fraction (∼52%). These excellent guiding characteristics make the proposed THz-PCF potentially suitable for multiple functions, such as efficient THz transmission, THz sensing, and other optical system designs.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
YZ and XJ wrote the manuscript. DQ was a doctoral student who directed graduates during theoretical simulations. LX was a graduate student who implemented the simulation scheme. All the authors contributed to the conception and revised the text.
Gas accretion damped by dust back-reaction at the snow line
Context. The water snow line divides dry and icy solid material in protoplanetary disks. It has been thought to significantly affect planet formation at all stages. If dry particles break up more easily than icy ones, then the snow line causes a traffic jam because small grains drift inward at lower speeds than larger pebbles. Aims. We aim to evaluate the effect of high dust concentrations around the snow line on the gas dynamics. Methods. Using numerical simulations, we modeled the global radial evolution of an axisymmetric protoplanetary disk. Our model includes particle growth, the evaporation and recondensation of water, and the back-reaction of dust onto the gas. The model takes into account the vertical distribution of dust particles. Results. We find that the dust back-reaction can stop and even reverse the net flux of gas outside the snow line, decreasing the gas accretion rate onto the star to under 50% of its initial value. At the same time, the dust accumulates at the snow line, reaching dust-to-gas ratios of ε ≳ 0.8, and it delivers large amounts of water vapor towards the inner disk as the icy particles cross the snow line. However, the accumulation of dust at the snow line and the decrease in the gas accretion rate only take place if the global dust-to-gas ratio is high (ε_0 ≳ 0.03), the viscous turbulence is low (α_ν ≲ 10⁻³), the disk is large enough (r_c ≳ 100 au), and only during the early phases of the disk evolution (t ≲ 1 Myr). Otherwise the dust back-reaction fails to perturb the gas motion.
Introduction
Protoplanetary disks are composed of gas and dust. In the classical picture, a gas disk evolves through viscous evolution driven by outward transport of angular momentum (Lynden-Bell & Pringle 1974) and orbits at a sub-Keplerian speed due to its own pressure support. In turn, dust particles couple to the gas motion according to their size (Nakagawa et al. 1986; Takeuchi & Lin 2002): small grains quickly follow the motion of the gas, while large boulders are decoupled from it. The mid-sized grains, or pebbles, feel a strong headwind that causes them to drift towards the gas pressure maximum (Whipple 1972; Weidenschilling 1977), which, in a typical disk, is directed towards the star.
At interstellar dust-to-gas ratios of 1%, the force exerted by the dust onto the gas is mostly negligible. Yet in regions such as dead zones (Kretke et al. 2009; Pinilla et al. 2016), the outer edges of gaps carved by planets (Dipierro & Laibe 2017; Kanagawa et al. 2018), snow lines (Brauer et al. 2008a; Estrada et al. 2016; Drążkowska & Alibert 2017; Stammler et al. 2017; Hyodo et al. 2019), and pressure bumps in general (Pinilla et al. 2012), particles can accumulate and grow to larger sizes, reaching concentrations where the dust back-reaction may be strong enough to alter the dynamics of the gas (Taki et al. 2016; Onishi & Sekiya 2017; Kanagawa et al. 2017; Gonzalez et al. 2017; Dipierro et al. 2018).
In particular, the water snow line acts as a traffic jam for the dust if there is a change in the fragmentation velocity between silicates and ices (Birnstiel et al. 2010; Drążkowska & Alibert 2017; Pinilla et al. 2017). Previous results showed that the icy particles outside the snow line can grow to larger sizes (Gundlach et al. 2011) and drift faster to the inner regions. After crossing the snow line, the ice on the solid particles evaporates, leaving only dry silicates behind. Then the silicates in the inner regions fragment to smaller sizes and drift at lower speeds, creating a traffic jam. The traffic jam effect can concentrate enough material to trigger the formation of planetesimals through streaming instability (Schoonenberg & Ormel 2017; Drążkowska & Alibert 2017; Drążkowska & Dullemond 2018).
In this paper we study the dynamical effect of the snow line on gas dynamics by considering the effect of the dust back-reaction onto the gas. We want to find the conditions under which the dust can slow down or revert the gas accretion rate and to test if further structures can appear beyond the snow line.
We used one-dimensional simulations that consider gas and dust advection, dust growth, and the back-reaction effects. To treat the global evolution of the disk we used the model of Birnstiel et al. (2012), which includes the size evolution of solids by using representative species. We implemented the modifications introduced by Drążkowska & Alibert (2017), which model the evaporation and recondensation of water at the snow line.
The paper is structured as follows. In Sect. 2, we describe the gas and dust velocities considering the back-reaction and we present our model for the snow line. In Sect. 3, we present the setup of our simulations and list the parameter space that we explored. In Sect. 4, we show the conditions in which the accumulation of dust at the snow line results in strong back-reaction effects that are capable of damping the accretion of gas to the inner regions. In Sect. 5, we discuss the general effects of the back-reaction, when it should be considered, and what observational signatures might reveal dust-gas interactions in the inner regions. We summarize our results in Sect. 6.
Gas and dust evolution
The evolution of gas and dust can be described with advection-diffusion equations as in Birnstiel et al. (2010):

∂Σ_g/∂t + (1/r) ∂/∂r (r Σ_g v_g,r) = 0,   (1)

∂Σ_d/∂t + (1/r) ∂/∂r { r [ Σ_d v_d,r − D_d Σ_g ∂/∂r (Σ_d/Σ_g) ] } = 0,   (2)

where r is the radial distance to the star, Σ is the surface density, v_r is the radial velocity, and D_d is the dust diffusivity. The subindices "g" and "d" denote the gas and the dust, respectively.
An expression for the velocities can be obtained from the momentum conservation equations for both components (Nakagawa et al. 1986;Tanaka et al. 2005;Kanagawa et al. 2017;Dipierro et al. 2018), in which the gas experiences the stellar gravity, the pressure force, the viscous force, and the drag from multiple dust species, while each dust species only experiences the stellar gravity and the drag force from the gas.
Dust dynamics
Solving the momentum conservation equations, the radial and azimuthal velocities of the dust are given, as per Weidenschilling (1977), Nakagawa et al. (1986), and Takeuchi & Lin (2002), by

v_d,r = (v_g,r + 2 St Δv_g,θ) / (1 + St²),   (3)

Δv_d,θ = (Δv_g,θ − (St/2) v_g,r) / (1 + St²),   (4)

where, for convenience, the dust azimuthal velocity is written relative to the Keplerian velocity v_K as Δv_d,θ = v_d,θ − v_K. The same convention is used for the gas azimuthal velocity Δv_g,θ.
The Stokes number St is the dimensionless stopping time that measures the level of coupling of a dust species to the gas motion and is defined as

St = Ω_K t_stop,   (5)

where Ω_K is the Keplerian angular velocity and t_stop is the stopping time, which in the Epstein drag regime is

t_stop = √(π/8) a ρ_s / (ρ_g c_s),   (6)

with a the particle size, ρ_s the material density of the solids, and ρ_g the gas density. The isothermal sound speed c_s is

c_s = √(k_B T / (µ m_H)),   (7)

where k_B is the Boltzmann constant, T the gas temperature, m_H the hydrogen mass, and µ the mean molecular weight. From Eqs. (3) and (4), it can be inferred that small particles (St ≪ 1) move along with the gas, while large particles (St ≫ 1) are decoupled from it. Particles with St ∼ 1 experience the headwind from the gas with the strongest intensity and drift most efficiently towards the pressure maximum; in turn, these particles also exert the strongest back-reaction onto the gas.
At the midplane, the Stokes number can be conveniently written as

St = π a ρ_s / (2 Σ_g).   (8)

The size of the particles is not static in time (Birnstiel et al. 2010, 2012): the dust grows until it reaches the fragmentation barrier, where the particles are destroyed by high-velocity collisions among themselves (Brauer et al. 2008b), or until the drift limit, where the particles drift faster than they grow. The fragmentation barrier dominates the inner regions of the protoplanetary disk, and the maximum Stokes number that dust grains can reach before fragmenting is

St_frag = v_frag² / (3 α_t c_s²),   (9)

where v_frag is the fragmentation velocity, which depends on the dust composition, and α_t is the turbulence parameter for the dust fragmentation (Birnstiel et al. 2009).
Following Birnstiel et al. (2012), the drift limit can be approximated by

St_drift = ε (v_K / c_s)² |d ln P / d ln r|⁻¹,   (10)

with v_K the Keplerian velocity and P the isothermal gas pressure at the midplane,

P = (Σ_g / (√(2π) h_g)) c_s²,   (11)

with the gas scale height h_g = c_s/Ω_K. Additionally, we assume that the dust diffuses with

D_d = ν / (1 + ε),   (12)

with ε = Σ_d/Σ_g the vertically integrated dust-to-gas ratio and ν the turbulent viscosity of the gas (Shakura & Sunyaev 1973),

ν = α_ν c_s h_g,   (13)

controlled by the viscous turbulence parameter α_ν. We note that, as in Carrera et al. (2017), our model considers two different turbulence parameters: α_t for the dust turbulence (which controls the dust fragmentation, Eq. (9)) and α_ν for the viscous turbulence (which controls the gas viscosity, Eq. (13)).
The (1 + ε)⁻¹ factor in Eq. (12) comes from considering that the dust concentration diffuses with respect to the gas and dust mixture, instead of the gas only. We neglect the (1 + St²)⁻¹ factor from Youdin & Lithwick (2007), since the particle sizes in our simulations remain small (St² ≪ 1).
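A minimal numerical sketch of these growth limits, assuming the expressions above; the local disk conditions (temperature, surface density, pressure slope) are placeholder values rather than output of the simulations.

```python
import numpy as np

# Constants in cgs units
k_B, m_H, G, M_sun, au = 1.380649e-16, 1.6726e-24, 6.674e-8, 1.989e33, 1.496e13

def stokes_midplane(a, rho_s, sigma_g):
    """Midplane Stokes number, Eq. (8): St = pi * a * rho_s / (2 * Sigma_g)."""
    return np.pi * a * rho_s / (2.0 * sigma_g)

def growth_limits(r_au, T, eps, v_frag, alpha_t, mu=2.3, dlnp_dlnr=-2.75):
    """Fragmentation- (Eq. 9) and drift- (Eq. 10) limited Stokes numbers."""
    c_s = np.sqrt(k_B * T / (mu * m_H))
    v_K = np.sqrt(G * M_sun / (r_au * au))
    st_frag = v_frag**2 / (3.0 * alpha_t * c_s**2)
    st_drift = eps * (v_K / c_s) ** 2 / abs(dlnp_dlnr)
    return st_frag, st_drift

# Placeholder conditions at 5 au: icy grains with v_frag = 10 m/s = 1000 cm/s
print(stokes_midplane(a=0.1, rho_s=1.6, sigma_g=200.0))
print(growth_limits(r_au=5.0, T=134.0, eps=0.01, v_frag=1000.0, alpha_t=1e-3))
```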
Gas dynamics
The gas velocities, considering the dust back-reaction onto the gas, have the following form for the radial component:

v_g,r = A v_ν + 2 B v_P,   (14)

with an analogous expression (Eq. (15)) for the azimuthal component. The gas velocity thus depends on the viscous velocity v_ν, the pressure velocity v_P, and the back-reaction coefficients A and B (Gárate et al. 2019).
The information related to the dust back-reaction is contained in the coefficients A and B, which in a dust free disk have values of A = 1 and B = 0.
In the absence of dust, the gas moves with the viscous velocity (Lynden-Bell & Pringle 1974),

v_ν = −(3 / (Σ_g √r)) ∂/∂r (ν Σ_g √r).   (16)

Similarly, if there is no dust, the gas orbits at sub-Keplerian speed due to the pressure support; the corresponding pressure velocity v_P is given by Eq. (17). The back-reaction coefficients are defined as

A = (X + 1) / ((X + 1)² + Y²),   (18)

B = Y / ((X + 1)² + Y²),   (19)

where X and Y are the following sums over the dust species (Tanaka et al. 2005; Okuzumi et al. 2012; Dipierro et al. 2018):

X = ∑_m ε(m) / (1 + St(m)²),   (20)

Y = ∑_m ε(m) St(m) / (1 + St(m)²),   (21)

where ε(m) is the dust-to-gas ratio of the dust species with mass m, and the Stokes number can be related to the particle mass through m = 4πa³ρ_s/3 and Eq. (8).
In principle, Eqs. (14) and (15) describe the gas motion assuming that the dust is well mixed with the gas in the vertical direction (dust-to-gas ratio constant with the distance to the midplane). However, since dust grains settle towards the midplane (Dubrulle et al. 1995), the gas velocity in the midplane layers will be more affected by the dust back-reaction than that in the surface layers (Kanagawa et al. 2017; Dipierro et al. 2018). In Sect. 2.2.1, we show how to calculate the corrected gas velocity derived from the net mass flux.
We discuss a physical interpretation of the back-reaction coefficients A and B in Sect. 2.2.2 and provide an approximate expression valid for the case of a single dust species.
Effect of the vertical structure on the net mass flux
The corrected gas radial velocity v̄_g,r can be obtained from the net mass flux through the density-weighted vertical average

v̄_g,r = (1/Σ_g) ∫ ρ_g(z) v_g,r(z) dz,   (22)

where z is the distance to the midplane. The vertical profile of the radial velocity v_g,r(z) depends on the vertical density distributions of gas and dust (ρ_g(z), ρ_d(m, z)) and on the vertical profiles of the viscous and pressure velocities (v_ν(z), v_P(z)).
Assuming that the gas and dust are in vertical hydrostatic equilibrium, their respective density profiles are

ρ_g(z) = (Σ_g / (√(2π) h_g)) exp(−z²/(2 h_g²)),

ρ_d(m, z) = (Σ_d(m) / (√(2π) h_d(m))) exp(−z²/(2 h_d(m)²)),

where the vertical scale height of the dust species with mass m is defined in Birnstiel et al. (2010) as

h_d(m) = h_g min(1, √(α_t / (min(St, 1/2) (1 + St²)))).

From these profiles we can obtain the dust-to-gas ratio of every particle species at height z, ε(m, z) = ρ_d(m, z)/ρ_g(z), and plug it into Eqs. (20) and (21) to obtain the back-reaction coefficients A(z) and B(z) at every height. At this point we can generalize our expression for the gas radial velocity (Eq. (14)) to every height, v_g,r(z) = A(z) v_ν(z) + 2 B(z) v_P(z). The final step would be to define the vertical profiles of v_ν(z) and v_P(z); however, we find that assuming v_ν(z) = v_ν (as given in Eq. (16)) and v_P(z) = v_P (as given in Eq. (17)) is a good approximation. The net flux is then calculated with Eq. (22). We test the validity of our approximation in Appendix B and further discuss its physical interpretation.
It is worth noting that, under this assumption, the radial gas velocity takes the form v̄_g,r = Ā v_ν + 2 B̄ v_P, where Ā and B̄ are the back-reaction coefficients corrected for the vertical structure (i.e., derived from the mass flux in Eq. (22)).
We repeat the same process to obtain the corrected radial velocity for the dust by taking the net mass flux for each species of mass m,

v̄_d,r(m) = (1/Σ_d(m)) ∫ ρ_d(m, z) v_d,r(m, z) dz.   (28)

The gas and dust velocities derived from the net mass flux (v̄_g,r and v̄_d,r) are used to transport the gas and dust in the advection-diffusion Eqs. (1) and (2).
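The vertical averaging can be sketched numerically as below for a single dust species, assuming the Gaussian profiles above, height-independent v_ν and v_P, and the single-species back-reaction sums; all input numbers are placeholders chosen for illustration.

```python
import numpy as np

def flux_averaged_velocity(v_nu, v_p, eps_int, st, alpha_t, h_g, n_z=300):
    """Mass-flux-weighted gas radial velocity for a single dust species.

    v_nu, v_p : viscous and pressure velocities, assumed independent of height.
    eps_int   : vertically integrated dust-to-gas ratio Sigma_d / Sigma_g.
    """
    z = np.logspace(-5, 1, n_z) * h_g                        # vertical grid, 1e-5 to 10 h_g
    h_d = h_g * min(1.0, np.sqrt(alpha_t / (min(st, 0.5) * (1.0 + st**2))))
    rho_g = np.exp(-0.5 * (z / h_g) ** 2)                    # Gaussian profiles (midplane = 1)
    rho_d = eps_int * (h_g / h_d) * np.exp(-0.5 * (z / h_d) ** 2)
    eps_z = rho_d / rho_g                                     # local dust-to-gas ratio
    X = eps_z / (1.0 + st**2)                                 # single-species sums
    Y = eps_z * st / (1.0 + st**2)
    A = (X + 1.0) / ((X + 1.0) ** 2 + Y**2)
    B = Y / ((X + 1.0) ** 2 + Y**2)
    v_z = A * v_nu + 2.0 * B * v_p                            # v_g,r(z), Eq. (14)
    return np.trapz(rho_g * v_z, z) / np.trapz(rho_g, z)      # Eq. (22)

# Placeholder values (velocities in cm/s, h_g = 1), chosen only for illustration
print(flux_averaged_velocity(v_nu=-20.0, v_p=2000.0, eps_int=0.5,
                             st=0.05, alpha_t=1e-3, h_g=1.0))
```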
Understanding the back-reaction coefficients
While the back-reaction coefficients may seem rather obscure to interpret at first glance, they can be better understood as: (1) a "damping" factor (coefficient A) that slows the radial viscous evolution and reduces the pressure support and (2) a "pushing" factor (coefficient B) that tries to move the gas against the radial pressure gradient and adds some degree of pressure support to the orbital motion.
A quick estimate of these coefficients can be obtained if we consider the case of a single (well-mixed) particle species (Kanagawa et al. 2017; Dipierro et al. 2018; Gárate et al. 2019):

A = (1 + ε + St²) / ((1 + ε)² + St²),   (29)

B = ε St / ((1 + ε)² + St²).   (30)

From here we can see that both coefficients have values between 0 and 1, and that if the particles are small (St² ≪ 1), then A ≈ (ε + 1)⁻¹ and B ≈ ε St (ε + 1)⁻².
In the case where the gas velocity v_g,r is dominated by the viscous term, such that A v_ν > 2B v_P, the global evolution of gas and dust can be approximated as a damped viscous evolution. In Appendix A we further develop this idea and present a semi-analytical test comparing the evolution of a simulation with back-reaction and dust growth to the standard viscous evolution of a disk with a modified α_ν parameter.
An equivalent expression for the gas and dust velocities, including the contribution of the back-reaction coefficients (Eqs. (18) and (19)), can be found in Kretke et al. (2009) under the assumption that the particle sizes follow a single power-law distribution.
Further analysis of the effects of the back-reaction considering different particle size distributions can be found in Dipierro et al. (2018), where the velocities given in Eqs. (14) and (15) are equivalent to their Eqs. (11) and (12), while the integrals X and Y are equivalent to their λ_i (their Eq. (17)).
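A small sketch of how these single-species coefficients translate into the gas radial velocity of Eq. (14); the viscous and pressure velocities are placeholder magnitudes meant only to illustrate when the pushing term can reverse the flow, not values from the simulations.

```python
import numpy as np

def back_reaction_coeffs(eps, st):
    """Single-species back-reaction coefficients (Eqs. 29-30)."""
    denom = (1.0 + eps) ** 2 + st**2
    return (1.0 + eps + st**2) / denom, eps * st / denom

# Placeholder velocities (cm/s): inward viscous drift and outward pressure term.
v_nu, v_p = -20.0, 2000.0
for eps in (0.01, 0.1, 0.5, 1.0):
    for st in (1e-3, 1e-2, 1e-1):
        A, B = back_reaction_coeffs(eps, st)
        v_gas = A * v_nu + 2.0 * B * v_p       # Eq. (14): positive means reversed flow
        print(f"eps={eps:4.2f} St={st:7.3f} -> v_g,r = {v_gas:8.2f} cm/s")
```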
Evaporation and recondensation at the snow line
To include the snow line in our simulations, we follow the model given by Drążkowska & Alibert (2017), which evolves four different species: a mix of hydrogen and helium, water vapor, silicate dust, and water ice that freezes over the silicate grains.
The gas phase is the sum of both the hydrogen-helium mixture and the water vapor. It is traced by the surface density Σ_g and advected according to Eq. (1). The water vapor, with surface density Σ_vap, is advected with the same velocity as the gas, but it also diffuses according to the concentration gradient. The mean molecular weight of the gas phase is then

µ = Σ_g / (Σ_H2/µ_H2 + Σ_vap/µ_vap),   (31)

where µ_H2 = 2.3 and µ_vap = 18 are, respectively, the mean molecular weights of the hydrogen-helium mixture and the water vapor, and Σ_H2 = Σ_g − Σ_vap is the surface density of the standard hydrogen-helium mixture.
The dust grains are assumed to be a mixture of silicates and ices, traced by Σ_d and evolved according to Eq. (2), with a material density of

ρ_s = Σ_d / (Σ_sil/ρ_sil + Σ_ice/ρ_ice),   (32)

where ρ_sil = 3 g cm⁻³ and ρ_ice = 1 g cm⁻³ are the densities of the silicates and ices, respectively, and Σ_sil = Σ_d − Σ_ice is the surface density of the silicates. The composition of the dust grains determines the fragmentation velocity: icy grains are stickier and can grow to larger sizes than silicate grains. As in Drążkowska & Alibert (2017), we assume that the particles have the fragmentation velocity of ices, v_frag = 10 m s⁻¹ (Wada et al. 2011; Gundlach et al. 2011; Gundlach & Blum 2015), if there is more than 1% of ice in the mixture, and the fragmentation velocity of silicates, v_frag = 1 m s⁻¹ (Blum & Wurm 2000; Poppe et al. 2000; Güttler et al. 2010), otherwise.
The limit between evaporation and recondensation of water is given by the equilibrium pressure

P_eq = P_eq,0 exp(−A/T),   (33)

with P_eq,0 = 1.14 × 10¹³ g cm⁻¹ s⁻² and A = 6062 K (Lichtenegger & Komle 1991; Drążkowska & Alibert 2017). The evaporation and recondensation of water are set to maintain the pressure of the water vapor at the equilibrium pressure (Ciesla & Cuzzi 2006). When the water vapor pressure is below this threshold (P_vap < P_eq), the ice evaporates into vapor; vice versa, if the vapor pressure is higher, it recondenses into ice, where in both rates the factor multiplying ±(P_vap − P_eq) transforms the pressure difference at the midplane into a surface density.
As shown by Birnstiel et al. (2010) and Drążkowska & Alibert (2017), a traffic jam of dust is created at the snow line because of the difference in the fragmentation velocities of silicates and ices. Recondensation also contributes by enhancing the amount of solids when the vapor diffuses and freezes back beyond the snow line (Stammler et al. 2017).
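A short sketch of the equilibrium-pressure criterion, using the constants of Eq. (33) and the temperature power law adopted later in the setup; the midplane vapor pressure profile is a made-up placeholder, so the resulting snow-line radius is only illustrative.

```python
import numpy as np

P_EQ0, A_EQ = 1.14e13, 6062.0             # g cm^-1 s^-2 and K, Eq. (33)

def p_eq(T):
    """Equilibrium water vapor pressure (Eq. 33)."""
    return P_EQ0 * np.exp(-A_EQ / T)

r = np.logspace(-1, 1, 400)               # au
T = 300.0 * r**-0.5                        # temperature power law of the setup
P_vap = 0.5 * r**-2.75                     # placeholder midplane vapor pressure profile
recondense = P_vap >= p_eq(T)              # True -> vapor freezes back onto grains
print(f"rough snow-line location: {r[np.argmax(recondense)]:.2f} au")
```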
Simulation setup
We use the code twopoppy (Birnstiel et al. 2012) to study the global evolution of a protoplanetary disk for 0.4 Myr around a solar-mass star, advecting the gas and the dust according to the back-reaction velocities described in Sects. 2.1 and 2.2, with the snow line model of Drążkowska & Alibert (2017) summarized above in Sect. 2.3.
Two-population dust model
In twopoppy, the dust is modeled as a single fluid composed of two populations: an initial small-particle population and a large-particle population whose size is limited by the growth barriers (Eqs. (9) and (10)), with a correction factor: St_max = min(0.37 St_frag, 0.55 St_drift).
The dust velocity and the back-reaction coefficients are then calculated considering the mass fraction of the two populations. Birnstiel et al. (2012) found that the mass fraction of the large population is f_m = 0.97 for the drift-limited case and f_m = 0.75 for the fragmentation-limited case.
Disk initial conditions
The gas surface density and temperature profiles are defined by the following power laws:

Σ_g = Σ_0 (r/r_0)^−p,   T = T_0 (r/r_0)^−q,

with r_0 = 1 au, Σ_0 = 1000 g cm⁻², T_0 = 300 K, p = 1, and q = 1/2. The disk surface density initially extends to r = 300 au. The disk size is intentionally large to provide a continuous supply of material during the simulation and to make the interpretation of the back-reaction effects easier. We discuss the effect of the disk size on the outcome of the dust accumulation at the snow line in Sect. 4.4.
We start the simulations with a uniform dust-to-gas ratio ε_0 such that Σ_d = ε_0 Σ_g, assuming that the solid material is composed of a mixture of 50% ice and 50% silicate (Lodders 2003, Table 11). The water vapor is introduced in the simulation as the ice evaporates.
The dust phase has a turbulence parameter of α_t = 10⁻³ and an initial size of a_0 = 1 µm.
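A compact sketch of these initial conditions on the simulation grid described in the following subsection; the density floor applied beyond 300 au is an assumption made for illustration, not the scheme used in the paper.

```python
import numpy as np

au = 1.496e13                                                 # cm
r = np.logspace(np.log10(0.05), np.log10(600.0), 560) * au    # radial grid (560 cells)

# Initial power-law profiles (r_0 = 1 au, Sigma_0 = 1000 g/cm^2, T_0 = 300 K)
sigma_g = 1000.0 * (r / au) ** -1.0
sigma_g[r > 300.0 * au] = 1e-10                               # hypothetical floor beyond 300 au
T = 300.0 * (r / au) ** -0.5

eps_0 = 0.05                                                  # "High" dust-to-gas ratio case
sigma_d = eps_0 * sigma_g                                     # 50% ice + 50% silicate by mass
sigma_ice, sigma_sil = 0.5 * sigma_d, 0.5 * sigma_d
a0 = 1e-4                                                     # initial grain size: 1 micron in cm
```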
Grid and boundary conditions
The region of interest in our simulation extends from 0.1 to 300 au, with n r = 482 logarithmically spaced radial cells.To avoid any possible effects of the boundary conditions in our region of interest, we added 20 additional grid cells in the inner region between 0.05 and 0.1 au, and 58 additional grid cells in the outer region between 300 and 600 au.In total, our simulation consists of 560 grid cells from 0.05 to 600 au.
The additional cells at the inner region were added to avoid measuring the accretion rate onto the star too close to the inner boundary.The additional cells in the outer region were added to give the gas enough space to spread outwards without being affected by the outer boundary conditions.
At the inner boundary, we assume a constant slope for the quantity Σ_{g,d} · r. At the outer boundary we have an open boundary condition for the gas and set a constant dust-to-gas ratio (but, because of the additional grid cells, the gas never expands all the way to the outer boundary). To calculate the gas and dust velocities and take into account dust settling (Eqs. (22) and (28)), we construct a local vertical grid at every radius with n_z = 300 points, logarithmically spaced between 10⁻⁵ and 10 h_g.
Parameter space
The two most important parameters that control the strength of the back-reaction are the global dust-to-gas ratio ε 0 , and the gas viscous turbulence α ν .
We focus our study on three simulations with "Low", "Mid", and "High" global dust-to-gas ratios, with the respective values for ε 0 summarized in Table 1.
For the sake of clarity, throughout the paper we use a single value for the viscous turbulence, α_ν = 10⁻³. This turbulence is low enough for the back-reaction effects to start affecting the gas dynamics (i.e., the term 2B̄v_P becomes comparable to Āv_ν in the gas velocity, Eq. (14)).
For completeness, in Appendix C we further extend our parameter space to include different values of the viscous turbulence α_ν, although for simplicity we keep the dust turbulence constant, at α_t = 10⁻³.
Dust accumulation and gas depletion at the snow line
The evolution of the gas is initially dominated only by the viscous accretion, but as time passes and dust grows, the back-reaction effects start to become dynamically important for the gas. At the water snow line, the Stokes number changes by two orders of magnitude (Fig. 1). In the inner disk, the particles can only grow to small sizes given by the fragmentation limit of silicates, while in the outer regions the dust size is limited by the fragmentation of water ice or the drift limit. The simulations with the higher dust-to-gas ratios show an increment in the Stokes number at the snow line location that is caused by the higher concentration of water vapor, which increases the fragmentation limit (by increasing the mean molecular weight and decreasing the sound speed; see Eqs. (7), (9), and (31)).
In the "Low ε_0" simulation (Fig. 2, top panel), the change in particle size alone causes a traffic jam at the snow line location, as the small dry silicates drift more slowly than the large icy particles, which results in a higher concentration of dust in the inner regions. Outside the snow line, the dust-to-gas ratio remains low, so the back-reaction from the large particles is not strong enough to perturb the gas. In this scenario, the gas surface density remains very close to the initial steady state.
Further effects can be seen in the "Mid ε_0" simulation (Fig. 2, middle panel). First we notice an increment in the gas density profile at the snow line location, caused by the additional water vapor delivered by the icy grains (Ciesla & Cuzzi 2006). The water vapor and the dust are also more concentrated towards the snow line in this case, as the higher dust-to-gas ratio damps the viscous velocity (|Āv_ν| < |v_ν|) more efficiently, slowing the diffusion of both gas and small particles. At the same time, the additional water vapor also increases the gas pressure which, in turn, also increases the drift velocity of the large icy particles towards the snow line, resulting in higher dust concentrations. We also observe a small decrease in the gas surface density outside the snow line, caused by the dust back-reaction that slows down the gas velocity, reducing the supply to the inner regions. This effect becomes more pronounced for higher dust-to-gas ratios.
The back-reaction of dust onto the gas causes noticeable perturbations in the "High ε_0" simulation (Fig. 2, bottom panel). As in the "Mid ε_0" simulation, the solids also accumulate at the snow line location, but now the icy dust particles outside the snow line exert a stronger push onto the gas and reverse the gas accretion of the outer regions. This results in a depletion of gas outside the snow line (at r > 2.5 au), which reaches a minimum density of ∼50% of its initial value.

Fig. 2. Surface density radial profiles of gas (red) and dust (blue) around the snow line. The dashed lines mark the initial conditions, and solid lines mark the simulation after 0.4 Myr. The dotted line marks the snow line at 0.4 Myr. Top, middle, and bottom panels correspond to the cases with "Low", "Mid", and "High" ε_0, respectively.
Furthermore, the drop in gas density outside the snow line reduces the pressure gradient. Consequently, the drift speed of the large icy particles is also slowed down, allowing for an extended accumulation of dust in the outer regions. This process of gas depletion and dust accumulation is expected to continue as long as dust is supplied from the outer regions.
In the inner regions inside 1 au, the gas is depleted to ∼65% of its initial value. Only the additional water vapor supplied by the dust crossing the snow line prevents a further depletion of gas. The evolution of this simulation is illustrated in Fig. 3, where we can see the initial traffic jam caused by the change in particle size (t = 0.01 Myr), followed by a further concentration of solids once the vapor accumulates at the snow line (t = 0.1 Myr) and, finally, the depletion of gas outside the snow line, accompanied by the extended accumulation of dust (t = 0.4 Myr).

Fig. 3. Surface densities of gas (red), dust (blue), vapor (green), and ice (purple) of the "High ε_0" simulation (ε_0 = 0.05) at different times. As time passes, dust accumulates around the snow line and the gas surface density is perturbed by the back-reaction.
From Fig. 4, we see that the dust-to-gas ratios can reach extremely high values depending on the simulation parameters. The "Low ε_0" simulation reaches a concentration of ≈0.1 in the inner regions (where the particles are small) because of the traffic jam, but no further accumulation occurs outside the snow line.
In the "Mid ε_0" case, the dust-to-gas ratio reaches a high value of ≈0.85 at the snow line and ≈0.4 at 1 au. The dust is more concentrated towards the snow line in this case because the back-reaction slows down the viscous diffusion (Eq. (14)); yet, as time passes, the dust should spread more evenly towards the inner regions.
The most extreme case is the "High ε_0" simulation, where the dust accumulates both inside and outside the snow line. The dust accumulates in the inner regions due to the traffic jam caused by the change in particle size and the pressure maximum caused by the water vapor, reaching concentrations between ≈0.5 and 1.0. Outside the snow line, the dust back-reaction depletes the gas and reduces the pressure gradient, creating another concentration point between 2.5 and 4 au where the dust-to-gas ratio reaches values of ≈1.0-2.0. The recondensation of vapor also contributes by enhancing the concentration of solids outside the snow line (Drążkowska & Alibert 2017; Stammler et al. 2017).
Accretion damped by the back-reaction
The radial velocity of the gas now depends not only on the viscous evolution, but also on the pressure gradient and the dust distribution (Eqs. (14)-(21)). Therefore, for high dust-to-gas ratios and large particle sizes, the gas flow may be damped and even reversed.
Figure 5 shows the gas velocities of the different simulations. In the "Low ε_0" simulation, the dust-to-gas ratio is higher in the inner regions (where grain sizes are small) and lower in the outer regions (where particle sizes are large). This trade-off between concentration and size means that the dust back-reaction does not dominate the evolution of the gas and that the gas velocity is only damped with respect to the steady-state viscous velocity by a factor of a few.

Fig. 6. Gas accretion rate over time, measured at 0.5 au. The accretion rate decreases over time, dropping to 85% of the initial value for the "Low ε_0" simulation and to 30-45% for the higher ε_0 cases.
The gas velocity is roughly v_g,r ≈ 0.85 v_ν inside the snow line and v_g,r ≈ 0.80 v_ν outside the snow line, where the transition is caused by the change in both particle size and dust-to-gas ratio.
This damping of the viscous velocity also leads to a similar decrease in the gas accretion rate onto the star, from Ṁ = 8 × 10⁻⁹ M_⊙ yr⁻¹ to 6.8 × 10⁻⁹ M_⊙ yr⁻¹ (Fig. 6). Once the dust supply is depleted, the accretion rate should return to its steady-state value.
In the "High ε_0" simulation, where the dust concentrations are high inside and outside the snow line, we can see the full effects of the dust back-reaction. In the inner regions (r < 2.5 au), the particles are small (St ∼ 10⁻⁴), so the gas velocity is dominated by the term Āv_ν, which corresponds to the viscous velocity damped by a factor of Ā ≈ (1 + ε)⁻¹. In the outer region (r > 2.5 au), where the particles are large (St ≳ 10⁻²), the velocity is dominated by the pressure velocity term 2B̄v_P, which moves the gas outward, against the pressure gradient (Eq. (14)). This reversal of the gas velocity causes the observed depletion in the gas surface density. Figure 7 shows the damping and pushing terms of the gas velocity to illustrate how the gas motion is affected by the dust back-reaction.
Since the inner gas disk is disconnected from the outer disk at the snow line in terms of mass transport, the accretion rate onto the star is considerably reduced. As solid particles accumulate around the snow line and the inner regions become more and more depleted of gas, the accretion rate reaches a value as low as Ṁ = 2.5 × 10⁻⁹ M_⊙ yr⁻¹. The only reason the gas is not further depleted in the inner regions is the water vapor delivered by the icy dust particles crossing the snow line (Ciesla & Cuzzi 2006).
Meanwhile, the mass outside the snow line is transported outwards at a rate of ∼10⁻⁹-10⁻⁸ M_⊙ yr⁻¹. No instabilities seem to appear in the gas surface density in the outer regions, as the mass transported to the outer disk is only a small fraction of the total disk mass. Once the dust supply is exhausted, the back-reaction push will stop being effective and the gas accretion rate should return to the standard viscous evolution.
The behavior of the "Mid ε_0" simulation is consistently in between the "Low ε_0" and "High ε_0" cases, with the gas flux practically frozen (v_g,r ≈ 0) in the outer regions (r > 4 au).

Fig. 7. Gas velocity profile of the "High ε_0" simulation after 0.4 Myr (black), and the decomposition into the two velocity terms Āv_ν (red) and 2B̄v_P (blue) (see Eq. (14)). In the inner regions, the pushing term 2B̄v_P is negligible, as the particle Stokes number is too small, and the total velocity is dominated by the damped viscous velocity Āv_ν. In the outer regions, the term 2B̄v_P overcomes the viscous evolution and pushes the gas against the pressure gradient.
All simulations show that the back-reaction push is particularly strong in a narrow region outside the snow line (between r ≈ 2.5 and 4 au), where the concentration of icy particles increases because of the recondensation of water vapor.
In Sect. 5.3 we comment on the effects of dust settling on the accretion rate at different heights.
Depletion of H₂ and He inside the snow line
From the gas velocities, we see that in the cases where the back-reaction is effective, it can stop or reverse the accretion of gas outside the snow line, causing the inner regions to become relatively depleted of gas. In particular, the dust back-reaction reduces the supply of H₂ and He to the inner regions, as these are the dominant gas components outside the snow line.
At the same time, the icy grains cross the snow line and deliver water vapor to the inner regions. Therefore, the gas presents a lower H₂, He mass fraction in the inner disk than in the outer disk.
The total amount of water delivered to the inner regions depends on the initial dust-to-gas ratio ε_0, while the dust back-reaction affects how it is distributed.
Figure 8 shows that even in the "Low ε_0" case, the mass fraction of H₂ and He is reduced to 90%.
For the "Mid ε_0" and "High ε_0" cases, the dust back-reaction onto the gas reduces the supply of light gases to the inner regions, creating environments dominated by water vapor inside the snow line, with an H₂, He mass fraction between 40% and 65%. The depletion is more concentrated towards the snow line because the damping term of the gas velocity (Āv_ν) slows down the viscous diffusion of water vapor. After the dust supply is exhausted, the region inside the snow line will be gradually refilled with gas from the outer regions on the viscous timescale (t_ν ≈ 0.5 Myr at 4 au), and the H₂, He mixture will be replenished to become the dominant component once more.
What happens without the back-reaction?
So far we have studied the impact of the dust back-reaction on the gas and dust density profiles and on the gas velocity. So, how different is the situation when the back-reaction effect is ignored? In Fig. 9, we turn off the back-reaction effects (v_g,r = v_ν, Δv_g,θ = −v_P) and ignore the collective effect of dust on its diffusivity (D_d = ν). The simulation with ε_0 = 0.01 shows only minor differences, corresponding to a faster dust accretion. This is an indication that, for low dust-to-gas ratios, the back-reaction onto the gas is not important. For the simulations with ε_0 ≥ 0.03, we observe that without the back-reaction effect the dust only concentrates in the inner regions due to the traffic jam caused by the change in particle sizes at the snow line. Accordingly, the water vapor delivered by the icy particles also increases the total gas content.
Figure 10 shows how the dust-to-gas ratio profile is affected by the dust back-reaction. Only when the back-reaction is considered can the solid particles pile up outside the water snow line, due to the perturbed pressure gradient and the slower dust motion. For the simulations with ε_0 ≥ 0.03, the dust back-reaction increases the dust-to-gas ratio by over an order of magnitude outside the water snow line. This is in agreement with the previous results of Drążkowska & Alibert (2017) and Hyodo et al. (2019), where the dust back-reaction was incorporated as the collective drift of the dust species.
The importance of the disk profile and size
How much the dust can perturb the gas surface density depends on the dust-to-gas ratio and the dust sizes, and also on how long the back-reaction is effectively acting.
In the "High ε_0" case, the dust first creates a small depletion in the gas outside the snow line; the pressure slope then changes and allows large particles to accumulate further. Yet this scenario assumes that icy particles are constantly delivered towards the snow line, while in reality the supply is limited by the disk size.
We made a test simulation with ε_0 = 0.05, as in the "High ε_0" case, but this time starting with a self-similar profile (Lynden-Bell & Pringle 1974), following Σ_g ∝ (r/r_c)⁻¹ exp(−r/r_c), with a cut-off radius of r_c = 100 au.
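A brief sketch of such a tapered initial profile, assuming the self-similar form written above and normalizing to a hypothetical total disk mass; the 0.05 M_⊙ value is only a placeholder.

```python
import numpy as np

au, M_sun = 1.496e13, 1.989e33          # cm, g

def self_similar_sigma(r_cm, r_c_cm, m_disk_g):
    """Sigma_g proportional to (r/r_c)^-1 exp(-r/r_c), scaled to a total disk mass."""
    x = r_cm / r_c_cm
    sigma = x**-1 * np.exp(-x)
    norm = m_disk_g / (2.0 * np.pi * np.trapz(sigma * r_cm, r_cm))
    return norm * sigma

r = np.logspace(np.log10(0.05), np.log10(600.0), 560) * au
sigma_g = self_similar_sigma(r, 100.0 * au, 0.05 * M_sun)   # placeholder disk mass
sigma_d = 0.05 * sigma_g                                     # "High" eps_0 case
```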
From Fig. 11 we can see the evolution of this simulation until 1 Myr. Although we still observe that dust accumulates at the snow line, reaching dust-to-gas ratios between ε = 0.7 and 0.8, and that the back-reaction push still creates a small dip in the gas surface density outside the snow line, the supply of solids is not enough to perturb the gas over extended periods of time. In this disk of limited size, no extended dust accumulation outside the snow line is observed.

Fig. 9. Comparison of the surface density profiles when the back-reaction is considered (solid lines) and ignored (dashed lines), after 0.4 Myr. For the cases with ε_0 ≥ 0.03, the gas surface density in the inner regions is reduced when the back-reaction is considered, and the dust concentration is more extended.
The effect that still remains present is the decrease of the accretion rate (Fig. 12). As long as dust is delivered at the snow line, the accretion rate of gas is damped, and the mass fraction of the H₂, He mixture is decreased in the inner regions. We find that between 0.4 and 0.5 Myr the dust concentration reaches its maximum value at the snow line (roughly the time required for the dust in the outer regions to grow and drift through the disk), and the accretion rate reaches its minimum of 3.0 × 10⁻⁹ M_⊙ yr⁻¹, where only 60% of the accretion flow corresponds to H₂ and He.

Fig. 10. Comparison of the dust-to-gas ratio profiles when the back-reaction is considered (solid lines) and ignored (dashed lines) after 0.4 Myr. When the back-reaction is ignored, the dust accumulates only inside the snow line.
After 1 Myr the dust is completely depleted, the disk surface density roughly recovers the self-similar profile, and the accretion rate rises again.
When is dust back-reaction important?
So far, we have seen that when the back-reaction is effective, it can enhance the dust concentration at the snow line (Fig. 4), damp the gas accretion rate (Fig. 6), and deplete the inner regions of hydrogen and helium (Fig. 8).
All of these effects can be traced back to the push exerted by the dust back-reaction onto the gas (Eq. (14)), which reduces the pressure gradient (enhancing dust accumulation) and slows down the flux of material from outside the snow line to the inner regions.
As a rule of thumb, the gas dynamics are altered whenever the pressure velocity term is comparable to the damped viscous velocity (Āv_ν ∼ 2B̄v_P, Eq. (14)), which occurs roughly when the particles have a large Stokes number and high dust-to-gas ratios, such that St ε/(ε + 1) ∼ α_ν (Kanagawa et al. 2017; Dipierro et al. 2018).
In an inviscid disk (α_ν ≈ 0), the gas velocity is dominated by the term 2B̄v_P and the gas moves against the pressure gradient (Tanaka et al. 2005). On the other hand, if the disk is highly turbulent (α_ν ≫ St ε), then the gas evolves with a damped viscous velocity Āv_ν. In Appendix D, we include an equivalent criterion to determine the effect of the back-reaction, based on the angular momentum exchange between the dust and gas.
Throughout this paper we found that a high global dust-to-gas ratio of ε_0 ≳ 0.03 and a low viscous turbulence of α_ν ≲ 10⁻³ (see Appendix C) are necessary for the back-reaction push to perturb the combined evolution of gas and dust.
We also showed that the duration and magnitude of these effects depend on the disk size, as the dust accumulation and the perturbation onto the gas stop once the solid reservoir is exhausted (Fig. 11). In particular, for a disk with a cut-off radius of r_c = 100 au, the dust drifts from the outer regions to the snow line in 0.4 Myr. Afterwards, the back-reaction effects decay on the viscous timescale of the inner regions (roughly another 0.5 Myr). Moreover, part of the dust accumulated at the snow line will be converted into planetesimals through streaming instability (Youdin & Goodman 2005; Drążkowska & Alibert 2017), which, in turn, will reduce the dust-to-gas ratio and smear out the back-reaction effects.

Fig. 11. Surface density profiles of gas (red) and dust (blue) at different times (solid lines). The initial condition corresponds to the self-similar profile (dashed lines). Top: the simulation initially behaves in the same way as the power-law profile until 0.1 Myr. Mid: at 0.4 Myr the dust supply gets exhausted before the back-reaction push can further deplete the gaseous disk. Bottom: after 1 Myr, the gas profile looks very similar to its initial condition, but most of the dust has been accreted.
We should keep in mind, however, that the results presented in this paper only occur if the snow line acts as a traffic jam for dust accretion, which is caused by the difference in the fragmentation velocities of dry silicates and icy aggregates. Yet recent studies suggest that there is no difference between the sticking properties of silicates and ices (Gundlach et al. 2018; Musiolik & Wurm 2019; Steinpilz et al. 2019), implying that the traffic jam should not actually form in the first place.

Fig. 12. Accretion rate over time for the simulation with the self-similar profile and ε_0 = 0.05. The gas accretion rate (red) decreases as the dust back-reaction damps the gas velocity, and rises again after the dust is depleted. The accretion rate of H2, He (black) is even lower, as the gas supply from the outer regions is reduced at the snow line. The accretion rate of the standard self-similar solution (dotted line) is plotted for comparison.
Other scenarios where the back-reaction might be important
Similar traffic jams and dust traps can occur in different regions of the protoplanetary disk. Given high dust concentrations and large particle sizes, the dust back-reaction may perturb the gas in locations such as dead zones (Kretke et al. 2009; Ueda et al. 2019; Gárate et al. 2019), the outer edge of gaps carved by planets (Paardekooper & Mellema 2004; Rice et al. 2006; Weber et al. 2018), and the edge of a photo-evaporative gap (Alexander & Armitage 2007).
In numerical models of protoplanetary disks, the back-reaction effects should be considered when estimating the gas accretion rate (which is reduced by the interaction with the dust, Kanagawa et al. 2017), the planetesimal formation rate (which would be enhanced for higher dust concentrations, Drążkowska & Alibert 2017), or the width of a dusty ring at the outer edge of a gap carved by a planet (Kanagawa et al. 2018; Weber et al. 2018; Drążkowska et al. 2019).
The effects of the back-reaction could actually become effective at later stages of the disk lifetime, provided that other mechanisms continue to trap the dust delivered from the outer regions. For example, if a planet forms from the planetesimal population at the water snow line (Drążkowska & Alibert 2017), it would carve a gap that can effectively trap dust particles (Pinilla et al. 2012; Lambrechts et al. 2014) and create a new environment where the back-reaction can affect the gas and dust dynamics (Kanagawa et al. 2018).
On smaller scales, the dust back-reaction triggers the streaming instability, locally enhancing the concentration of dust particles until the solids become gravitationally unstable (Youdin & Goodman 2005), and close to the midplane, the friction between layers of gas and dust results in a Kelvin-Helmholtz instability between the two components (Johansen et al. 2006).
Finally, one scenario that we did not cover in our parameter space is when the turbulence is so low (α ν = 0) that the disk advection is reversed all the way to the inner boundary, which could lead to further perturbations at the snow line location, although a proper treatment of the dust sublimation should be included to account for this scenario.
Among our results, we could not reproduce the accumulation of dust in the outer regions of the disk described by Gonzalez et al. (2017), as the dust particles drift towards the inner regions before creating any perturbation in the outer gas disk. We also find that, by taking the growth limits into account, the back-reaction is less efficient than previously thought (Kanagawa et al. 2017), as the fragmentation barrier prevents the particles from growing to sizes beyond St_frag, limiting the effect of the back-reaction even if the gas surface density decreases.
We do not expect our results to be significantly affected by changes in the disk mass or the stellar mass. Since particle sizes around the snow line are limited by the fragmentation barrier, a change in either of these two parameters will only affect the physical size of the particles, but not their Stokes number (Eq. (9)), which controls the dynamical contribution of the particles to the gas motion. The timescales and the snow line location would change accordingly, but the qualitative results presented in this work should hold true.
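A back-of-the-envelope sketch of this argument can be made with the commonly adopted fragmentation-limited Stokes number St_frag ≈ v_frag²/(3 α c_s²) (e.g., Birnstiel et al. 2012); this closed form, the temperature, and the mean molecular weight below are assumptions for illustration, not the paper's exact growth model. It shows that St_frag depends on the fragmentation velocity, the turbulence, and the local temperature, but not on the gas surface density, which is why changing the disk or stellar mass mainly rescales the physical grain size.

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant [J/K]
m_p = 1.6726219e-27     # proton mass [kg]
mu  = 2.3               # mean molecular weight (assumed)

def sound_speed(T):
    """Isothermal sound speed for temperature T in Kelvin."""
    return np.sqrt(k_B * T / (mu * m_p))

def st_frag(v_frag, alpha, T):
    """Fragmentation-limited Stokes number, St_frag ~ v_frag^2 / (3 alpha c_s^2)."""
    cs = sound_speed(T)
    return v_frag**2 / (3.0 * alpha * cs**2)

# Icy aggregates (v_frag ~ 10 m/s) just outside the snow line (T ~ 150 K, assumed)
print(f"St_frag ≈ {st_frag(10.0, 1e-3, 150.0):.2f}")   # ~0.06, of order 1e-2 to 1e-1
```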
Layered accretion by dust settling
Because large particles settle towards the midplane, the back-reaction push onto the gas can be stronger at the disk midplane than at the surface (Kanagawa et al. 2017), which can result in the upper layers flowing inward (unperturbed by the dust), while the layers close to the midplane flow outward (due to the dust back-reaction). Depending on the particle sizes, this might result in different accretion rates at different heights.
While our approach to treating the vertical structure correctly traces the net mass transport (Sect. 2.2.1, Appendix B), it does not provide information about a layered accretion flow. To check whether there is a substantial inflow of material at the upper layers, we calculate the accretion rate at every height (using the vertical model from Takeuchi & Lin 2002, see Appendix B) and measure the total mass inflow and outflow separately (Fig. 13). We find that inside the snow line (r < 2.7 au), where the dust particles are small (St ∼ 10^−4) and well mixed with the gas, the back-reaction damps the gas motion uniformly at all heights, and the total inflow is 3.0 × 10^−9 M_⊙ yr^−1, only 6.0 × 10^−10 M_⊙ yr^−1 higher than the net accretion rate onto the star.
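The separation of the vertically resolved flux into inflow and outflow can be sketched as follows; the density and velocity profiles in this example are placeholders, not the Takeuchi & Lin (2002) expressions used in the paper.

```python
import numpy as np

def layered_accretion(r, z, rho, v_gr):
    """Split the vertically resolved mass flux 2*pi*r*rho*v_gr into inflow,
    outflow, and net transport (inflow defined positive toward the star)."""
    flux = 2.0 * np.pi * r * rho * v_gr                      # flux per unit height
    inflow  = -np.trapz(np.where(flux < 0, flux, 0.0), z)    # v_gr < 0: toward star
    outflow =  np.trapz(np.where(flux > 0, flux, 0.0), z)    # v_gr > 0: outward
    net     = -np.trapz(flux, z)                              # positive = net inflow
    return inflow, outflow, net

# Toy example: midplane layers pushed outward, surface layers accreting
z    = np.linspace(-3, 3, 601)                   # height in gas scale heights
rho  = np.exp(-0.5 * z**2)                       # Gaussian gas density (arbitrary units)
v_gr = np.where(np.abs(z) < 0.6, +1.0, -1.0)     # outflow near midplane, inflow above
print(layered_accretion(1.0, z, rho, v_gr))
```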
In the regions beyond the snow-line dust accumulation (r > 3.0 au), we find that the accretion rate is layered, with the disk midplane flowing outward while the surface layers move inward. The material inflow in this case is comparable to that of a dust-free disk (Ṁ ∼ 10^−8 M_⊙ yr^−1), even if the net mass flux is positive. This is in agreement with the results of Kanagawa et al. (2017). For these regions, we find that the dust back-reaction can revert the gas flow up to 2h_d, which for the large dust particles at 3 au (St ∼ 10^−2) corresponds to 0.6h_g. Interestingly, at the snow line location where the dust particles accumulate (2.7 au < r < 3.0 au), the dust back-reaction is strong enough to perturb the gas surface density. The steeper negative surface density slope found at the snow line causes the viscous accretion to be reduced or reversed at all heights (Takeuchi & Lin 2002). The accretion inflow is then reduced to Ṁ = 8.0 × 10^−10 M_⊙ yr^−1 for the simulation with ε_0 = 0.03 (in which the reduced inflow only occurs above 0.7h_g), and to Ṁ = 6 × 10^−12 M_⊙ yr^−1 for the simulation with ε_0 = 0.05, where the inflow only occurs 2.5h_g above the midplane.
The steepening of the surface density slope at the water snow line was not observed in the previous results of Kanagawa et al. (2017), as they did not include the snow line or a dust growth and recondensation model. We find that this perturbation caused by the dust accumulation at the snow line is key to reducing or stopping the accretion inflow over a wide vertical range, which can be larger than the dust scale height itself.
Given that the disk mass inside the snow line is 2.0 × 10^−3 M_⊙, the composition of the gas phase described in Sect. 4.2 should be corrected for the material flowing from the outer disk into the inner regions. For the simulation with ε_0 = 0.03, the H2 + He ratio should be higher by 20%, considering that the inflow is reduced by over an order of magnitude at the snow line location (though not completely stopped). For the simulation with ε_0 = 0.05, all our results hold.
Observational implications
The perturbation caused by the dust back-reaction at the snow line is only effective if the viscous turbulence is low and the dust-to-gas ratio is high, and it only acts at early times of the disk evolution, when dust is supplied towards the inner regions. Given these constraints, we want to find out which disk properties would fit in this parameter space and what signatures we can expect to find if the back-reaction is effectively perturbing the gas.
Ideal targets
Young Class 0 and Class I disks seem to have typical sizes around 100−200 au (Najita & Bergin 2018, Table 1), so solids can be delivered to the inner regions only until 0.5−1 Myr, before the disk is depleted of dust (unless a pressure bump prevents particles from moving towards the star). This means that older disks (t > 1 Myr) are unlikely to present any perturbation from the back-reaction push. Then, among young disks and assuming viscous accretion, only those with sufficiently low accretion rates could be subject to the back-reaction damping, as a low viscous evolution (α_ν ≲ 10^−3) is required for the dust to affect the gas.
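As a rough, order-of-magnitude illustration of why the α_ν constraint maps onto low accretion rates, one can evaluate the standard viscous accretion rate Ṁ ≈ 3π ν Σ_g with ν = α c_s H; the temperature, surface density, and radius below are assumptions made only for this example, not the values adopted in the paper.

```python
import numpy as np

G, M_sun, au, yr = 6.674e-11, 1.989e30, 1.496e11, 3.156e7   # SI units

def mdot_viscous(alpha, sigma_si, T, r_au, m_star=1.0, mu=2.3):
    """Viscous accretion rate Mdot = 3*pi*nu*Sigma for an alpha-disk [kg/s]."""
    k_B, m_p = 1.380649e-23, 1.6726219e-27
    cs    = np.sqrt(k_B * T / (mu * m_p))                 # sound speed
    omega = np.sqrt(G * m_star * M_sun / (r_au * au)**3)  # Keplerian frequency
    nu    = alpha * cs**2 / omega                          # alpha * c_s * H, H = c_s/Omega
    return 3.0 * np.pi * nu * sigma_si

sigma = 200.0 * 10.0   # 200 g/cm^2 at ~3 au (assumed), converted to kg/m^2
for alpha in (1e-2, 1e-3, 1e-4):
    mdot = mdot_viscous(alpha, sigma, T=150.0, r_au=3.0)
    print(f"alpha = {alpha:.0e}: Mdot ≈ {mdot / M_sun * yr:.1e} Msun/yr")
```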
In terms of the dimensionless accretion parameter η introduced by Rosotti et al. (2017), a disk of age τ would require η ≲ 0.1 for the dust back-reaction to effectively perturb the gas.
On the gas orbital velocity
If the concentration of dust in any region is high, then the gas pressure support is reduced and the orbital velocity approaches the Keplerian velocity v_K (Eq. (15)).
At the midplane, where large grains concentrate, the gas motion deviates from the Keplerian velocity; the √(St/α_t) factor in this deviation measures the concentration of large particles at the midplane by settling (see Appendix B). If in our disk the initial pressure velocity around the snow line was v_P ≈ 2 × 10^−3 v_K, then the dust back-reaction and the accumulation of water vapor make the gas orbit at velocities of ∆v_g,θ ≈ 7 × 10^−4 v_K. At 2.7 au, where the snow line is located in our simulations, this corresponds to a difference from the Keplerian velocity of approximately ∆v_g,θ ≈ 10 m s^−1.
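A quick numerical check of the quoted figure (assuming a solar-mass star, which the text does not state explicitly):

```python
import numpy as np

# Deviation from Keplerian rotation at the snow line, Δv ≈ 7e-4 * v_K at r ≈ 2.7 au.
G, M_sun, au = 6.674e-11, 1.989e30, 1.496e11   # SI units

r = 2.7 * au
v_K = np.sqrt(G * M_sun / r)                    # Keplerian speed
print(f"v_K ≈ {v_K/1e3:.1f} km/s")              # ~18 km/s
print(f"Δv  ≈ {7e-4 * v_K:.0f} m/s")            # ~13 m/s, i.e. of order 10 m/s
```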
We expect that in future observations, the deviations from the Keplerian velocity could be used to constrain the dust content. Teague et al. (2018) already showed that deviations from the Keplerian velocity can be used to kinematically detect a planet, reaching a precision of 2 m s^−1. Better characterizations of the orbital velocity profiles in dust rings may then be used to differentiate between a planet perturbation and a dust back-reaction perturbation, based on the profile shape.
Unfortunately, the spatial resolution required to observe this variation is less than 10 mas for a disk at a distance of 100 pc and a snow line at 3 au from the star, and next-generation instruments would be required. The velocity deviation could be easier to detect for disks around Herbig stars, where the snow line is located at larger radii.
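The resolution requirement follows from simple geometry; the distance unit here is assumed to be parsecs, consistent with the angular-size argument.

```python
# A snow line at ~3 au seen from ~100 pc subtends ~30 mas, so resolving
# structure within it indeed requires beams of ≲10 mas.
r_au = 3.0       # snow line radius
d_pc = 100.0     # distance to the disk (assumed)
theta_mas = r_au / d_pc * 1e3   # 1 au at 1 pc subtends 1 arcsec = 1000 mas
print(f"angular radius ≈ {theta_mas:.0f} mas")   # ~30 mas
```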
Shadows cast by dust accumulation
A recent study by Ueda et al. (2019) has shown that dust can accumulate at the inner edge of a dead zone (a region with low ionization and low turbulence, Gammie 1996) and cast shadows that extend up to 10 au. We notice that our accumulation of dust at the snow line is similar to the dead zone scenario, in the sense that high dust-to-gas ratios are reached in a narrow region of the inner disk (Fig. 4). Therefore, we hypothesize that similar shadows could be found in the regions just outside the snow line if enough dust is present. Still, radiative transfer simulations would be needed to determine the minimum dust-to-gas ratio necessary to cast a shadow.
Effects of the snow line traffic jam
The fast drift of the icy particles and the traffic jam at the snow line result in the accumulation of both small silicate dust and water vapor inside the snow line, even if the effect of the dust back-reaction is ignored (see Figs. 9 and 10). We find that if the initial dust supply is large enough (high ε_0 and large disk size), then, during the early stages of the disk evolution (t ≲ 1 Myr), we can expect the material accreted onto the star to be rich in silicates and refractory materials carried by the dust (see Fig. 4), rich in oxygen (which is carried by the water vapor), and relatively poor in hydrogen, helium, and other volatile elements mixed with the gas outside the water snow line (see Fig. 8), such as nitrogen and neon. The X-ray emission could provide estimates of the abundance ratios in the accreted material (Günther et al. 2006), although the coronal emission of neon in young stars could mask some of these abundances (H. M. Günther, priv. comm.).
The increased concentration of water vapor in the warm inner regions would also enhance the emission from the water rotational lines. These lines have already been detected in different disks (Carr & Najita 2008; Salyk et al. 2008) in the mid-IR with Spitzer IRS, and could be further observed in the future using the Mid-Infrared Instrument on the James Webb Space Telescope (MIRI, Rieke et al. 2015). Additionally, the excess of water should lead to low C/O ratios inside the snow line for young protoplanetary disks (Öberg et al. 2011; Booth & Clarke 2018).
Summary
In this study, we include the effects of the dust back-reaction on the gas in a model of the water snow line, which is known to act as a concentration point for dust particles due to the change in the fragmentation velocity between silicates and ices, together with the recondensation of water vapor onto the surface of icy particles (Drążkowska & Alibert 2017). Our model shows how the dust back-reaction can perturb the gas dynamics and disk evolution, although the parameter space required for this to happen is limited.
In the vicinity of the snow line, provided that the global dust-to-gas ratio is high (ε_0 ≳ 0.03) and the viscosity low (α_ν ≲ 10^−3), the effects of the dust back-reaction are to:
- revert the net gas flux outside the snow line;
- reduce the gas inflow at the snow line by over an order of magnitude;
- damp the gas accretion rate onto the star to 30−50% of its initial value;
- reduce the hydrogen-helium content in the inner regions and concentrate water vapor at the snow line;
- concentrate solids at the snow line, reaching dust-to-gas ratios of 0.8.
These effects build up as long as dust is supplied from the outer disk into the snow line, with the duration set by the growth and drift timescale of the outer regions. After the dust reservoir is exhausted, the back-reaction effects decay on the viscous timescale of the inner regions. For a disk with size r_c = 100 au, we find that dust accumulates only during the first 0.4 Myr and that the perturbation onto the gas has disappeared by the age of 1 Myr.
The high dust-to-gas ratios required to trigger the back-reaction effects and the traffic jam at the snow line can result in an enhanced water content in the inner regions, as well as in the accretion onto the star being enriched with refractory materials and oxygen and, perhaps, a shadow cast outside the snow line location by the accumulation of dust particles. Other types of dust traps could present similar behaviors, although each case must be revisited individually to evaluate the magnitude of the perturbation of the back-reaction onto the gas velocity.
- the disk is initialized with a fully grown particle distribution (so that the back-reaction effects are uniform through the disk);
- the back-reaction coefficients (in this test case) are implemented assuming that the dust-to-gas ratio is vertically uniform.
If the simulations are working properly, then the disk will remain in steady state, and the accretion rate will be constant in radius, with a value given by the damped equivalent viscosity α_eq (Eq. (A.6)). Since in this test case all the particles are small (St < α_ν) and the size distribution is constant with radius, the dust-to-gas ratio and the back-reaction effects should also remain approximately uniform in time.
As shown in Fig. A.1, after 0.1 Myr the disk surface density between 5 and 100 au remains close to the steady-state profile, with a deviation of less than 0.1% relative to its initial value. Figure A.2 shows that the mass accretion rate of the gas in the simulations is Ṁ ≈ 5.6 × 10^−8 M_⊙ yr^−1 and constant through the disk, in agreement with a steady-state solution. More importantly, the value of the accretion rate is constrained between the minimum and maximum values given by α_eq,min and α_eq,max through Eq. (A.6). In terms of the viscous accretion, the back-reaction effect in our setup is equivalent to reducing the viscous turbulence α_ν to a value of α_eq ≈ 0.57 α_ν.
Both twopoppy and DustPy deliver similar results, with a relative difference of roughly 5% in the α_eq and Ṁ values. From here we can conclude that the back-reaction effects observed in the two-population model are expected to be in agreement with those from a proper particle distribution.
A.2. Where the viscous approximation breaks
While we can always write the gas velocity in the form of Eq. (A.4) using the α_eq parameter (Eq. (A.3)), the global disk evolution will still differ from a regular viscous evolution (unless α_eq ∝ α_ν), as the value of γ_ν does not depend on the slope of α_eq.
In particular, the back-reaction effects cannot be treated as a viscous process if St ε/(1 + ε) ≳ α_ν (Dipierro et al. 2018). In this case, the back-reaction push becomes more important than the inward viscous transport and results in negative equivalent α_eq values, meaning that mass will be transported against the pressure gradient. Also, in the outer regions of the disk, where the surface density profile becomes steeper (as in the self-similar solution, Lynden-Bell & Pringle 1974), the viscous evolution spreads the gas outwards (γ_ν < 0, v_ν > 0). In these regions the dust back-reaction pushes the gas in the same direction as the viscous spreading (2B v_P > 0) and, therefore, contributes to the outer disk evolving faster than the inner disk.
Appendix B: Modeling the vertical structure
In Sect. 2.2.1, we discussed how to obtain the gas radial velocity from the net mass flux. For our simulations we considered the effect of the vertical structure of the gas and the settling of the dust on the back-reaction coefficients, but ignored the vertical profiles of the pressure velocity v_P(z) and viscous velocity v_ν(z). In this appendix, we show that our results hold if we assume a standard vertical profile for the viscous and pressure velocities, and why our simple approximation works in the same way.

B.1. Vertical profiles for v_ν and v_P

Following the Takeuchi & Lin (2002) model (see also Kanagawa et al. 2017; Dipierro et al. 2018), the vertical velocity profiles of v_ν and v_P are given by Eqs. (B.1) and (B.2), where p and q are the local exponents of the gas surface density and temperature profiles. We include this information in the gas and dust velocities derived from the mass fluxes (Eqs. (22) and (28)) and check how our results are affected. Figure B.1 shows that the dust-to-gas ratio profile and the evolution of the gas accretion present only minor differences when considering the vertical structure for the viscous and pressure velocities (Eqs. (B.1) and (B.2)). For the simulations with ε ≥ 0.03, only at the snow line location is the dust-to-gas ratio spread over a wider range of radii, reducing the maximum concentration by a factor of a few. Consequently, the accretion rate onto the star is approximately 5% higher when using the Takeuchi & Lin (2002) prescription. We find that our approximation reproduces the radial velocity profile calculated with the Takeuchi & Lin (2002) model well (Fig. B.2), except in a narrow region beyond the snow line where the change in the gas surface density slope creates a spike in the gas velocity. However, this variation does not alter the rest of the simulation fields, and our results are maintained independently of the vertical structure prescription used for the viscous and pressure velocities.
B.2. Explaining the vertical approximation
There are two distinguishable regimes for the gas and dust interaction. The first regime is the one in which the particles are small (St < α_ν). In this case, the particles are well mixed with the gas and, therefore, the dust-to-gas ratio (and the back-reaction coefficients) is uniform in the vertical direction. In this regime, the back-reaction pushing term is also negligible, and the gas velocity is well approximated by the damped viscous velocity v_g,r ≈ Ā v_ν.
The second regime is the case in which the particles are large (St ≳ α_ν). In this case, the dust settles towards the midplane and the back-reaction push becomes important. The dust particles near the midplane will move the gas against the local pressure gradient with a velocity of v_g,r(z ≲ h_d) ≈ 2B(z ≲ h_d) v_P. Meanwhile, the upper layers of the disk have a low dust concentration and allow the gas to move inward with the viscous velocity; therefore, the gas velocity above the characteristic dust scale height can be approximated by v_g,r(z ≳ h_d) ≈ v_ν. Then the velocity derived from the net flux (Eq. (22)) yields a good approximation, considering the upper layers flowing inward (dominated by the viscous flow) and the midplane layers flowing against the pressure gradient (dominated by the back-reaction push).
This approximation seems to be valid for most of the disk, except in a narrow region where the slope of the gas density is reversed (at r ≈ 3 au). However, this seems to be more related to a resolution problem than to a physical reason; the surface density slope was smoothed in this case to avoid numerical problems in this region. Because of this region, we prefer using an approximate solution over the Takeuchi & Lin (2002) prescription.

From Eq. (D.3) we can also infer that the dust drifting from a radius r_1 to r_0 loses angular momentum (and delivers it to the gas) at a rate of J̇_drift = v_0 r_0 Ṁ_d(r_1), where the accretion rate of dust at r_1 is Ṁ_d(r_1) = 2π r_1 Σ_d(r_1) v_d,r(r_1) (D.7), and where we assume a uniform dust-to-gas ratio, using Σ_d = ε Σ_g. Considering only the drifting component of the dust velocity (see Eq. (3)), taking the limit of small particles (St ≪ 1), and expanding the expression for the pressure velocity given in Eq. (A.2) using h/r = c_s/v_K, we can write the gap-opening timescale in terms of a function C(x) of the ratio x = r_1/r_0. This clearing timescale needs to be compared with the viscous timescale, which can be understood in this case as the time necessary to close the gap. We apply this formula to our simulations, using a scale height of h/r = 0.05 and γ_P = 2.75, and study the time required to clear a gap between r_0 = 2.5 au (which is approximately the location of the snow line) and r_1 = 10 au; from Eqs. (D.1), (D.9), and (D.12) we can derive a condition on α, ε, and St to see whether the dust back-reaction can clear a gap. We note that this condition is similar to 2B v_P ≳ Ā v_ν (Eq. (14)), which indicates whether the gas motion is locally dominated by the dust back-reaction. In conclusion, looking at the values of t_clear and t_ν, it is easy to see why some disks create a gap-like perturbation in Fig. C.1. The disks with α = 10^−2 have viscous timescales that are too short in comparison with the clearing timescale by an order of magnitude, while the disks with α = 10^−4 are easily dominated by the dust back-reaction, provided that dust is delivered for long enough to complete a clearing timescale.
Fig. 1. Stokes number radial profile after 0.4 Myr. Inside the water snow line (located between 2.5 and 3.0 au) the dust can grow only up to St ∼ 10^−4. Outside the snow line it can reach values of St ∼ 10^−2−10^−1.
Fig. 6. Gas accretion rate over time, measured at 0.5 au. The accretion rate decreases over time, dropping to 85% of the initial value for the "Low ε_0" simulation and to 30−45% for the higher ε_0 cases.
Fig. 8. H2, He mass fraction profile after 0.4 Myr. The mass fraction of light gases is lower inside the snow line, as the dust crossing the snow line delivers water vapor. As the global dust-to-gas ratio increases, the back-reaction push outside the snow line reduces the flux of H2, He into the inner regions.
Fig. 11. Surface density profiles of gas (red) and dust (blue) at different times (solid lines). The initial condition corresponds to the self-similar profile (dashed lines). Top: the simulation initially behaves in the same way as the power-law profile until 0.1 Myr. Mid: at 0.4 Myr the dust supply gets exhausted before the back-reaction push can further deplete the gaseous disk. Bottom: after 1 Myr, the gas profile looks very similar to its initial condition, but most of the dust has been accreted.
Fig. 13. Top: mass flux for the simulation with ε_0 = 5%, in the radial and vertical directions, obtained using the Takeuchi & Lin (2002) vertical velocity profiles. The blue regions show the material outflow, and the red regions show the inflow. Bottom: accretion inflow (red), outflow (blue), and total mass accretion (dotted) profiles.
Fig. A.2. Left: equivalent α_ν value (Eq. (A.3)) obtained from the simulations (red: DustPy, blue: TwoPopPy), and the analytical limits given by α_eq,min (green) and α_eq,max (black). The value obtained from the simulations is in between the two limits, in agreement with the analytical model. Right: accretion rate measured from the simulations, and the steady-state accretion rate for the different α_eq limits.
Fig. B.1. Comparison of the main results (top: dust-to-gas ratio profiles, bottom: accretion rate evolution) using our simple approximation for the vertical structure (solid lines) vs. the Takeuchi & Lin (2002) vertical structure model (dotted lines).
Diagnosing Circumburst Environment with Multiband Gamma-Ray Burst Radio Afterglows
It has been widely recognized that gamma-ray burst (GRB) afterglows arise from interactions between the GRB outflow and the circumburst medium, while their evolution follows the behaviors of relativistic shock waves. Assuming the distribution of the circumburst medium follows a general power-law form, that is, $n = A_{\ast} R^{-k}$, where $R$ denotes the distance from the burst, it is obvious that the value of the density-distribution index $k$ can affect the behaviors of the afterglow. In this paper, we analyze the temporal and spectral behaviors of GRB radio afterglows with arbitrary $k$-values. In the radio band, a standard GRB afterglow produced by the forward shock exhibits a late-time flux peak, and the relative peak fluxes as well as peak times at different frequencies show dependencies on $k$. Thus, with multi-band radio peak observations, one can determine the density profile of the circumburst medium by comparing the relations between peak flux/time and frequency at each observing band. Also, the effects of trans-relativistic shock waves, as well as jets, in afterglows are discussed. By analyzing 31 long and 1 short GRBs with multi-band data of radio afterglows, we find that nearly half of them can be explained with a uniform interstellar medium ($k=0$), $\sim 1/5$ can be constrained to a stellar wind environment ($k=2$), while less than $\sim 1/3$ of the samples show $0 < k < 2$.
INTRODUCTION
Gamma-ray bursts (GRBs) are stellar explosions arising from the collapses of massive stars (for long bursts, e.g., see Woosley 1993;Paczyński 1998;MacFadyen & Woosley 1999) or mergers of binary compact stars (for short bursts, e.g., see Paczynski 1986;Narayan et al. 1992;Gehrels et al. 2009;Abbott et al. 2017). The afterglows of GRBs are the results of interactions between relativistic ejecta from central engines and the circum-burst medium. The shock waves produced during such interactions can accelerate the swept-up electrons, thus giving rise to nonthermal radiation, such as synchrotron emission (e.g., see Piran 1999;van Paradijs et al. 2000;Mészáros et al. 2002). Our understanding of GRB afterglows has been greatly improved with the discoveries of GRB multiband afterglows (Costa et al. 1997;van Paradijs et al. 1997;Frail et al. 1997;Zhang 2007).
Assuming the profile of the circumburst density n follows a power-law form, that is, n = A_* R^−k, where R denotes the distance from the GRB central engine, it is obvious that different values of the power-law index k lead to different light-curve behaviors (e.g., see Wu et al. 2005). The most widely discussed circumburst density model is the homogeneous interstellar medium (ISM) case with k = 0 (e.g., see Sari et al. 1998 & Kumar 2004), while the stellar-wind environment with k = 2 has also been proposed (e.g., see Dai & Lu 1998; Mészáros et al. 1998; Chevalier & Li 2000; Panaitescu & Kumar 2000, 2004; Kobayashi & Zhang 2003; Wu et al. 2003, 2004; Zou et al. 2005). Previously, it was thought that because long GRBs originate from massive stars with significant mass losses via stellar winds before their demise, such events should preferably occur in stellar-wind environments. For example, Starling et al. (2008) performed a joint fitting of the X-ray to IR afterglows of 10 GRBs, and put constraints on k for half of the sample, with 4 wind-like cases and 1 ISM one. However, Panaitescu & Kumar (2002) found that half of their 10 GRBs favor a k = 0 medium, while Curran et al. (2009) constrained k for 6 out of 10 GRBs, with 2 consistent with the ISM environment, 2 with the wind case, and another 2 samples compatible with both the k = 0 and 2 cases. Yi et al. (2013) and van Eerten (2014) presented various relationships between observables in GRB X-ray and optical afterglows for arbitrary k. However, Yi et al. (2013) have drawn a typical value of k ∼ 1 for 19 Swift long bursts by analyzing their early afterglows that originated from forward-reverse-shock interactions, implying a mass-loss scheme of the GRB progenitors other than stellar wind.
In order to give a better understanding of circumburst medium distributions, it is necessary to investigate this issue with methods other than early X-ray/optical observations. In this work, we utilize a new way to constrain the density distribution of the circumburst medium, that is, to put a constraint on k by comparing multiband GRB radio afterglow peak times/fluxes with the corresponding frequencies. Observationally, the light curves of GRB radio afterglows usually show a double-peaked structure. In the rest frame, the typical peak times are 0.1−0.2 days and 2 days after the triggers of the prompt emission, respectively (Chandra & Frail 2012). Generally speaking, the first peak can be explained with forward-reverse-shock interactions at early times, while the second peak is due to late-time forward-shock evolution and is unique to the radio band. In fact, because a strong dependency exists between GRB afterglow behaviors in the radio band and the underlying synchrotron radiation, Barniol Duran (2014) and Beniamini & van der Horst (2017) have already put constraints on the GRB microphysical parameters with GRB radio peaks, including the fractions of electron and magnetic energy among the shock energy, that is, ε_e and ε_B, respectively.
In our circumburst density analysis, we ignore the effects of early radio peaks due to reverse shocks and only take the second peak into consideration. Since the peak flux F peak,ν and the peak time t peak,ν of this late peak depend on k, the value of k can be deduced with multiband radio-peak observations. Although in theory, the k value can also change the evolution of radio afterglows, it should be noted that radio observations are largely constrained by instrumental sensitivities, refractive interstellar scintillation (Chandra & Frail 2012), or the intrinsic properties of GRBs (Hancock et al. 2013).
Thus, the precise shapes of the complete light curves are usually hard to obtain. Hence, the main advantage of our radio-peak method is that the peak times/fluxes can be determined more easily and reliably due to the high fluxes of such peaks, as long as the afterglow light curves are sampled frequently enough. And because radio afterglows occur at a larger radius than the early forward-reverse shocks, by combining density profiles for earlier afterglows using the method proposed by Yi et al. (2013) with those for later radio peaks as shown in this work, a more complete circumburst medium profile at various R can be obtained.
This paper is organized as follows. In Section 2, we search for the peak time t_peak,ν in the analytical light curve of the GRB radio afterglow with an arbitrary k, and calculate the relations between t_peak,ν and F_peak,ν with observing frequencies, with the role played by the k value clarified. Meanwhile, the effects of transrelativistic shocks, as well as jets, are discussed. In Section 3, we apply our theoretical predictions to 31 long GRBs, as well as 1 short burst, with multiband radio afterglow observations, and draw a conclusion that most of them can be explained with transrelativistic shock waves under ISM environments, which is different from Yi et al. (2013). This implies that the circumburst density profile may change with radius. In Section 4 our results are discussed and summarized. In our calculations, the ΛCDM cosmology is adopted, with cosmological parameters H_0 = 71 km s^−1 Mpc^−1, Ω_m = 0.3, and Ω_Λ = 0.7.
GRB RADIO LIGHT CURVES AND MULTIFREQUENCY PEAK EVOLUTION
In this section, our goal is to derive the relations between the GRB radio afterglow peak times t_peak,ν and k, as well as the peak fluxes F_peak,ν and k, i.e., the k-dependent expressions of the power-law indices a and b for t_peak,ν ∝ ν^−a and F_peak,ν ∝ ν^b, respectively. We investigate the analytical behaviors of multiband afterglows with arbitrary circumburst density-distribution index k in order to get the expressions of a and b. Ignoring the early afterglow evolution due to forward-reverse-shock interactions, we consider the shock with radius R > R_dec only, in which R_dec is the deceleration radius of the relativistic shock, with a corresponding deceleration time t_dec = R_dec(1 + z)/(2Γ_0² c), a cosmological redshift z, and an initial bulk Lorentz factor Γ_0. By this time, the GRB afterglow has already entered the self-similar phase. Here we do calculations for semiradiative shock waves for generality, although by the time of the emergence of radio peaks, i.e., several days after the initial GRB trigger, the shock waves are usually adiabatic. Assuming the circumburst density can be described as n = A_* R^−k = A (R/R_*)^−k (k = 0 for a uniformly distributed interstellar medium, k = 2 for a stellar wind, with a typical value of the characteristic length R_* ∼ 10^17 cm; see Wu et al. 2005), the hydrodynamical evolution should be Γ² ∝ R^−m and R ∝ t^{1/(m+1)} (Blandford & McKee 1976), with the self-similar index m depending on k and the radiative efficiency ε of the shock, where R is the radius of the shock wave, Γ the bulk Lorentz factor of the shock, and t the observed time after the initial burst (Wu et al. 2005). The radiative efficiency ε can be considered a constant in the early fast-cooling regime, and it evolves slowly after entering the slow-cooling regime at later times. However, as long as we have power-law-distributed electrons, N_e(γ_e) ∝ γ_e^−p, with γ_e the Lorentz factor of the electrons and N_e the electron number, and the distribution index p does not deviate too much from 2, the effect of the evolving ε is not significant (Wu et al. 2005). Because the typical value of p for relativistic-shock-accelerated electrons is 2.2 (Achterberg et al. 2001; also see Curran et al. 2009 for observational constraints of an average value of p ∼ 2.24 with six Swift GRBs), which is close to 2, here we ignore the evolution of ε.
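As a point of reference (not the paper's general ε-dependent expression for m, which is given in Wu et al. 2005), the adiabatic limit of these scalings follows directly from energy conservation:

```latex
% Adiabatic limit (epsilon = 0): swept-up mass M \propto R^{3-k},
% and E \simeq \Gamma^{2} M c^{2} is conserved, so
\Gamma^{2}\propto R^{-m},\qquad m = 3-k,\qquad
R\propto t^{1/(m+1)}=t^{1/(4-k)},\qquad
\Gamma\propto t^{-(3-k)/(8-2k)}
```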
In this work, we focus on radio peaks occurring at ≳ T_90 + 0.5 days in the rest frame only; thus, the effects of the equal-arrival-time surface (EATS) can be ignored. That is because, for a typical GRB, the radio-peak time t_peak ∼ T_90 + 0.5 days exceeds Γ^−2 R/(2c), which is the maximum time delay due to the EATS effects. Also, because the major concern of this paper is the behavior of GRB radio afterglows, we ignore inverse Compton (IC) scattering, for IC is not a dominant factor at lower frequencies (Wu et al. 2005).
Following the procedures of Wu et al. (2005) and van der Horst (2007), we first consider the evolution of spherical shock waves with various k values. For synchrotron radiation caused by power-law-distributed electrons, a key time in the light curve is t_cm, which marks the transition point from the early fast-cooling phase to the late slow-cooling phase, with the electron minimum Lorentz factor γ_m equal to the electron cooling Lorentz factor γ_c, and the minimum frequency ν_m equal to the cooling frequency ν_c (Sari et al. 1998; Panaitescu & Kumar 2000). We define ν_cm = ν_m(t_cm) = ν_c(t_cm). The transition time is given by Eq. (2), where m_e is the electron mass, m_p the mass of a proton, e the charge of an electron, c the speed of light in vacuum, σ_T the electron's Thomson scattering cross section, E_cm the isotropic energy of the shock wave at t_cm, and ζ_{1/6} = 6(p − 2)/(p − 1). Following the assumptions of Wu et al. (2005), we take typical values for the parameters as follows: the isotropic energy E_cm = 10^53 erg, ε_e = 1/3, ε_B = 10^−2.5, and the electron distribution index p = 2.2. It should be noted that although some later works, including Nava et al. (2014), Beniamini & van der Horst (2017), as well as Santana et al. (2014), hinted that ε_e ∼ 0.15 and ε_B ∼ 10^−8−10^−3 should be the case for some GRBs, the exact values of ε_e and ε_B do not change the overall shape of the afterglow light curves. Thus, our calculations still apply to their samples.
The characteristic flux density F_ν,max at any given time can be obtained from the characteristic flux density of synchrotron radiation at a certain frequency at t_cm, F_ν,max(t_cm). (It should be noted that F_ν,max is different from F_peak,ν, because the latter denotes the peak flux of the radio light curve at a given frequency ν, rather than the radiation peak across the radio spectrum.) F_ν,max can be calculated, regardless of fast or slow cooling, as in Eq. (4). Also, in the radio band with low enough frequencies, the synchrotron self-absorption (SSA) effects cannot be ignored. In this case, we have another characteristic time t_a, which denotes the time when ν = ν_a, where ν_a is the self-absorption frequency. Because in this work the later-time behavior of the radio light curve is our main focus, we only discuss the later-time slow-cooling regime, with ν_a = min{ν_as,<, ν_as,>}, and where c_0 ≈ 10.4 [(p + 2)/(p + 2/3)] ≈ 15.24, and the strength of the magnetic field B as well as the shock radius R can be calculated from their corresponding values at t_cm (e.g., see Eqs. (13) and (14) in Wu et al. 2005).
In the slow-cooling phase, the transition from ν_as,< to ν_as,> occurs at t_am, when ν_a = ν_m. For various combinations of physical parameters, we typically have t_am ∼ 10³ t_cm. In many cases, the GRB afterglow may have already finished the ultra-relativistic self-similar evolution and even begun the transition to the Newtonian regime by the time of t_am, making the Blandford & McKee (1976) solution no longer applicable. It is worth noting that in our numerical discussions of the late-time afterglow, including the transrelativistic phase, in Section 2.1, the evolution of ν_as,> is included in the calculations, with the corresponding effects of this factor in this regime considered. Thus, we do not take ν = ν_as,> into consideration in the analytical calculations.
For typical parameter values, we have ν_cm ∼ 10³−10⁴ GHz, which is much higher than the radio band; hence in this work, we only take the ν < ν_cm band into consideration. Firstly, we calculate the light curves for radio afterglows. If the flux F_ν ∝ t^α has a positive temporal index α before a turning point and then shows a negative α, we define the turning point as a radio peak. We only consider the 0 ≤ k ≤ 2 case. The reason is that the self-similar solution proposed by Blandford & McKee (1976) only stands as long as k < 4. And in reality, it is difficult for the circumburst environment to show density distributions steeper than k = 2, since the original medium distribution and the stellar wind from the progenitor are the only main factors affecting k.
Thus, similar to Wu et al. (2005) and van der Horst (2007), by comparing the values of the characteristic times t_a, t_cm, t_c, and t_m, with the latter two corresponding to the times when ν_c and ν_m equal the observing frequency ν, respectively, we can get the overall profile of the GRB radio afterglow light curves. Depending on the value of k, three types of light curves can be obtained. For smaller k with 0 < k < (32 − 12p − 8ε)/(12 − 3p + (p − 4)ε), when ν_a(t_cm) < ν < ν_cm, where ν_a(t_cm) means the SSA frequency at t_cm, we have t_a < t_cm < t_m < t_c, and the peak occurs at t_m, with the flux before and after the peak given by Eq. (7). For ν < ν_a(t_cm), the order changes to t_cm < t_a < t_m < t_c, and the flux again peaks at t_m; the light curve around the peak time can be expressed as Eq. (8). It should be noted that, in this case, the upper limit of k, i.e., (32 − 12p − 8ε)/(12 − 3p + (p − 4)ε), depends on the electron distribution index p, as well as the shock radiation efficiency ε, and is quite sensitive to both parameters. For p ∼ 2.2 and ε ∼ 0, this upper limit is ∼ 1.037. When the value of k increases to (32 − 12p − 8ε)/(12 − 3p + (p − 4)ε) < k < 4/3, at higher frequencies, ν_a(t_cm) < ν < ν_cm, we have t_a < t_cm < t_m, with t_c disappearing in the radio band. In this case, the high-frequency light curve still peaks at t_m, and the radio flux around this time is given by Eq. (9). For ν < ν_a(t_cm), t_cm < t_a < t_m, t_m remains as t_peak,ν, with the peak-time light curve given by Eq. (10). Both the high- and low-frequency radio fluxes peak at t_m again.

Figure 2. Example radio light curves computed with the dynamical equations of Huang et al. (1999). Here we take the GRB isotropic energy E_iso = 10^53 erg, characteristic density A = 1 cm^−3, and an adiabatic shock wave (ε = 0). The y-axis is in arbitrary units. It can be seen that for the k = 0 case, the low-frequency afterglows peak during the transrelativistic-to-Newtonian phase, and the relations between peak times/fluxes and frequencies differ from the ultrarelativistic regime, while for the k = 1.1 and 2 cases, the radio afterglows never enter the trans-relativistic phase during our calculations.

Figure 3. The upper panel shows the temporal evolution of Γv/c with various isotropic energies. The transrelativistic evolution featuring 0.2 < Γv/c < 2 is marked with the gray-shaded region bounded by the dotted lines. Here we take k = 0 and n = 1 cm^−3. It can be seen that the lower the E_iso, the earlier the transrelativistic phase begins. The middle panel shows the temporal evolution of Γv/c for various circumburst densities, with k = 0 and E_iso = 10^53 erg. It can be seen that the larger the density n, the earlier the transition begins. The lower panel shows the temporal evolution of Γv/c with various k, E_iso = 10^53 erg, and characteristic density A = 100 cm^−3. It can be seen that the smaller the k, the earlier the transition time.
For a larger k (4/3 < k < 2), the SSA frequency at t_ac is lower than ν_cm, that is, ν_ac < ν_cm, and the radio band should be divided into three sections. For ν_ac < ν < ν_cm, t_a < t_c < t_cm < t_m, while we have t_c < t_a < t_cm < t_m for intermediate frequencies below ν_ac. In these two cases, the slope between t_cm and t_m is slightly negative as long as ε > 0, with only an early peak at t_a. However, a significant break does exist at t_m, as can be seen from Eq. (11); considering possible observational errors, such a turnover could still be taken as the peak time. For ν < ν_a(t_cm), we expect t_c < t_cm < t_a < t_m, again with t_m serving as the peak time; here the light curve can be expressed as Eq. (12). It can be seen that the GRB radio afterglow peaks at t_peak = t_m, regardless of the k value or observing frequency. Because we have t_m ∝ ν^{−2(m+1)/(4m+k)}, the peak times and peak fluxes can be calculated from Eqs. 7−12, yielding Eqs. 13 and 14; that is, the a and b indices we seek are given by Eqs. 15 and 16. For ε = 0, it can be seen that t_peak,ν ∝ ν^{−2/3}. Even for ε > 0 cases, as long as the shock radiation efficiency is low enough, the t_peak,ν ∝ ν^−a index a should not deviate too much from 2/3. For the F_peak,ν ∝ ν^b index b with ε = 0, b increases with k, that is, b = 0 when k = 0 and b = 1/3 when k = 2. If ε > 0, the b value is larger than in the ε = 0 case, although such a difference is not significant for larger k. In Figure 1, the relations between k and b for various ε are shown, from which the k value can be drawn from b. And it should be noted that for later peaks occurring at T_90 + several days, ε = 0 usually applies, and thus a single k value can be constrained with b, as shown in Eq. 16.
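Since Eqs. 15 and 16 are not reproduced here, the following sketch uses only the adiabatic-limit (ε = 0) scalings quoted above, t_peak ∝ ν^{−2/3} and F_peak ∝ ν^b with b = 0 at k = 0 and b = 1/3 at k = 2; a simple closed form consistent with these limits, b = k/[3(4 − k)], is assumed in order to invert for k. The multi-band peak values are hypothetical, purely for illustration of the fitting step.

```python
import numpy as np

def fit_power_law_index(nu_ghz, values):
    """Least-squares slope of log(values) vs log(nu): values ∝ nu^index."""
    index, _ = np.polyfit(np.log10(nu_ghz), np.log10(values), 1)
    return index

def k_from_b(b):
    """Invert the assumed adiabatic-limit relation b = k / (3 * (4 - k))."""
    return 12.0 * b / (1.0 + 3.0 * b)

# Hypothetical multi-band peak measurements (illustration only)
nu_ghz     = np.array([1.4, 4.9, 8.5, 22.5])      # observing frequencies
f_peak_mjy = np.array([0.30, 0.45, 0.54, 0.75])   # peak fluxes
t_peak_day = np.array([95.0, 42.0, 29.0, 15.0])   # peak times

a = -fit_power_law_index(nu_ghz, t_peak_day)      # t_peak ∝ nu^(-a)
b =  fit_power_law_index(nu_ghz, f_peak_mjy)      # F_peak ∝ nu^(+b)

print(f"a = {a:.2f} (adiabatic ultrarelativistic prediction: 2/3)")
print(f"b = {b:.2f}  ->  k = {k_from_b(b):.2f}")
```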
Eqs. 15 and 16 cannot be applied to high-frequency (ν_ac < ν < ν_cm) radio afterglows with k > 4/3, the flux of which peaks at t_a ≲ 1 day. Because for real GRBs radio follow-up observations usually do not begin at such early times, and the behaviors of afterglows are often contaminated by the early reverse shocks, such a high-frequency early peak with a large k has little value for our purpose, and its behavior is beyond the scope of this work.
Late-time Afterglow Behaviors and Transition from Relativistic to Newtonian Phase
The GRB afterglow begins the transition from the relativistic to the Newtonian phase (the so-called "transrelativistic" stage) when the shock wave expands to a larger radius. The evolutionary phase with Lorentz factor 0.2 ≲ Γv/c ≲ 2 can be considered "transrelativistic", because by this time the shock velocity has decreased significantly and the amount of swept-up circumburst medium has become large enough, while the afterglow evolution in this stage can be described by neither the relativistic self-similar solution proposed by Blandford & McKee (1976), nor the Newtonian Sedov-Taylor solution (Taylor 1950; Sedov 1969). Here, numerical calculations based upon the dynamical equations proposed by Huang et al. (1999), which provide a unified framework from the relativistic to the deep Newtonian stages of shock-wave evolution, are required, with examples shown in Figure 2. And because in late phases GRB afterglows should be adiabatic, it is safe to assume ε = 0 in our transrelativistic calculations. Figure 3 shows the temporal evolution of Γv/c for adiabatic shocks with various physical parameters, with the trans-relativistic phase occurring during T_90 + dozens to hundreds of days. And considering that the transrelativistic phase only begins when the swept-up mass is comparable with the mass of the shock wave itself, it is clearly seen that a larger E_iso, a smaller A, and a larger k can lead to a later transition.

Figure 4. The temporal evolution of characteristic frequencies for various k (lower panel). Here the dashed, dotted, and dash-dotted lines represent the evolution of the cooling frequency ν_c, the minimum frequency ν_m, and the SSA frequency ν_a. The solid lines show the evolution of the multi-band radio peak frequencies, as well as peak times. It can be seen that, considering trans-relativistic effects and the evolution of ν_a, the peak time t_peak,ν for these cases should be ≈ t_a. However, other possibilities for the peak also exist for various physical parameters.
According to the numerical calculations, for spherical ejecta the radio flux peak no longer appears at t_m during such late times, due to the change of the blast-wave dynamics, as well as the transition from ν_a = ν_as,< to ν_a = ν_as,>, which may occur before or after the beginning of the transrelativistic phase, depending on the circumburst density profiles and E_iso. For example, as can be seen from Figure 4, t_a acts as the peak time for our typical parameters, though other possibilities also exist. Hence, the values of a and b in this phase differ significantly from the predictions made by the self-similar shock-wave theory.
At late times covering the transrelativistic regime, analytical expressions describing the evolution of the bulk Lorentz factor Γ and other quantities no longer exist. However, the relations between t_peak,ν, F_peak,ν, and ν can be analyzed numerically. We fitted a and b for various parameter values and found that neither a nor b shows a strong dependence on E_iso or A, while these indices do change more significantly with k. For typical parameter values, the larger the k value, the flatter the late-time a, and the smaller the b. Generally speaking, for shocks with E_iso = 10^53 erg, A = 100 cm^−3, ε = 0, p = 2.2, ε_e = 1/3, and ε_B = 10^−2.5, we have a ∼ 1.6 and b ∼ 0.95 for the k = 0 case, while a decreases from ∼1.6 to ∼1.0 and b decreases from ∼1.0 to ∼0.89 when k increases from 0 to 2. By comparing Fig. 4 with Fig. 3(c), it can be seen that the late peaks due to the transition of ν_a could appear earlier than the beginning of the transrelativistic phase for circumburst environments with k ≲ 1 and small A. However, because our numerical calculations cover the complete evolution from as early as T_90 + 0.1 days, the general trend mentioned above still applies.
We also investigate the effects on the transrelativistic a and b caused by the electron distribution index p, the fractions of electron and magnetic-field energy ε_e and ε_B, as well as the shock radiation efficiency ε. We find that for electrons with distribution index 2.0 < p < 2.5, the values of a and b depend weakly on p, and a larger p can lead to a steeper b, as well as a shallower a. However, such a trend is not as clear as the k dependence mentioned above, and the change in a is hardly noticeable. For example, a decreases from 1.6 to 1.55 when p increases from 2.1 to 2.4. Meanwhile, a larger p can give rise to a b value as large as ∼1.1. Besides, both a and b depend weakly on ε_B, and a smaller ε_B leads to flatter a and b. For example, a decreases from ∼1.6 to ∼1.4 when ε_B decreases from 0.01 to 0.001, while neither ε_e nor ε shows a strong influence on a and b. In short, the variation of b within a wide parameter range is relatively small, while the value of a is largely determined by k. With the value of a in the transrelativistic regime, one can put a stringent constraint on k.
For wideband observations covering the ∼1−10² GHz frequency range, the values of the t_peak,ν−ν and F_peak,ν−ν indices a and b fitted from peak data should fall between a ∼ 2/3, b ∼ 0−1/3 for the ultrarelativistic regime and a ∼ 1.6−1, b ∼ 1−0.9 for the transrelativistic regime, depending on k ∼ 0−2. Similar to Eqs. 15 and 16, such a transrelativistic trend, with steeper a and b compared with the ultrarelativistic case, can also be used to put constraints on k qualitatively, using a and b derived from multiband light-curve peaks.
Corrections for the Jet
Many GRBs have already entered the post-jet-break evolution when reaching their radio afterglow peak times, especially at low observing frequencies, that is, t_peak,ν > t_b, where t_b denotes the jet break time (Chandra & Frail 2012). It is necessary to discuss the effects brought by a GRB jet in order to get a full understanding of multiband afterglows. Assuming the GRB jet-opening angle is θ_j, with an initial opening angle θ_0: when the bulk Lorentz factor of the GRB shock wave decreases to Γ < θ_j^−1, the jet-opening angle becomes larger than the relativistic beaming angle 1/Γ, and thus, approximately speaking, the flux density from the jet is ∼ Γ²θ_0² times that from a spherical shock.

Figure 6. Theoretical GRB radio afterglow light curves considering forward-reverse shock interactions with E_iso = 10^52 erg (left) and 10^53 erg (right), respectively. The other parameters are adopted as the typical values listed in Section 2. It can be seen that such interactions mainly affect the higher-frequency light curves at early times (≲ 1 day), and a smaller E_iso can give rise to an earlier FS/RS peak. Also, for frequencies < 4.8 GHz, the FS/RS peak is hardly noticeable, with the only prominent structure being the later peak shown in Fig. 2.
Firstly, we consider a jet without sideways expansion, that is, θ_j = θ_0. Here we perform an analysis similar to that of Mészáros & Rees (1999). Assume the flux density from a spherical shock evolves as F_ν,1 ∝ t^a1, while a jet with the same parameters gives rise to a flux density F_ν,2 ∝ t^a2. Because in the ultrarelativistic phase the bulk Lorentz factor evolves as Γ ∝ t^{−(3−k)/(8−2k)}, and F_ν,1/F_ν,2 ∝ t^{a1−a2} ∝ Γ^−2 θ_0^−2, it can be seen that, analytically speaking, for a relativistic jet without sideways expansion, the slope of the afterglow light curves during t > t_j is the same as the slopes presented by Eqs. 7−12 minus a correction factor (3 − k)/(4 − k). It can be seen that no radio peak exists after t_j, no matter the k value. If a jet exists during the radio afterglow evolution, observationally the radio-peak time t_peak,ν should equal the jet break time t_j if no peak occurs before t_j, and t_peak,ν no longer depends on the observing frequency ν in this case, making a = 0. If we assume that t_j < t_m, and that t_j is not far from t_m, we can calculate that b = 1/3. Such a b value cannot be distinguished from peaks caused by spherical shocks. Although Huang et al. (2000) show that the shock dynamics can be affected by the jet, according to the numerical calculation the afterglow flux decreases smoothly after the jet break when sideways expansion of the jet can be ignored, even with an evolving temporal index, and later emission peaks do not appear. Thus, the analytical conclusions on a and b still apply.
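As a quick worked example of the quoted steepening for a non-expanding jet:

```python
# Post-jet-break change of the light-curve temporal index quoted above:
# alpha_jet = alpha_spherical - (3 - k) / (4 - k).
for k in (0.0, 1.0, 2.0):
    print(f"k = {k:.0f}: extra steepening = {(3 - k) / (4 - k):.2f}")
# k = 0: 0.75, k = 1: 0.67, k = 2: 0.50
```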
Analytical expressions for jets with sideways expansion only exist when k = 0, according to Sari et al. (1999). In this case, we also have a_2 < a_1 and a_2 ≲ 0. That is, no emission peak exists after the jet break. Hence, similar to jets without expansion, the possible peak marks the jet break time t_j, with a = 0 and b = 1/3 (provided that t_j does not occur too early). For arbitrary k values, a numerical calculation based upon the dynamical equations provided by Huang et al. (2000) is needed. We take the local sound speed c_s = c/√3 as the speed of jet expansion to calculate the dynamical evolution and find that the existence of a jet has a greater impact on the late-time afterglow behaviors. During this time, the condition Γ > θ_j^−1 can be satisfied once more, making the afterglow flux rise again and forming a new emission peak. As with the early peaks produced by spherical shocks, the late peaks due to jet effects also appear earlier at higher frequencies. Because for typical parameters the late peak appears several hundred days after the GRB prompt emission, the analytical solutions proposed by Sari et al. (1999) or Rhoads (1999) do not apply, even for the k = 0 ISM environment. The flux level of such a late peak is much lower than that of the early peaks, for the afterglow has already entered the transrelativistic phase by this time, making the relativistic beaming effect much less significant. And it should be noted that when k ∼ 2, we can have Γ > θ_j^−1 satisfied almost all the way. As a result, it is possible that no jet break exists in the stellar-wind environment. Figure 5 shows the temporal evolution of the jet-opening angle for various k, and its relation with 1/Γ. According to the numerical results, the a value of the late emission peaks for sideways-expanding jets decreases with increasing k, which is similar to the case with spherical shock waves. The value of a decreases from ∼1.1 to ∼0.88 when k increases from 0 to 2. Meanwhile, the value of b for jets increases with larger k, which is opposite to the spherical case. And b is quite small this time, even smaller than in the ultrarelativistic case. For example, our results show that b ∼ 0.15 for k = 0, and b ∼ 0.45 for k = 2. However, it should be noted that all such results are based upon one single assumption, that is, that the jet expands at a constant speed of c/√3. In reality, this assumption may not be satisfied after the afterglow enters the transrelativistic regime. Late-time afterglows may expand quite slowly, making the jet-opening angle θ_j < 1/Γ, and thus no late peak can be formed. Also, earlier numerical simulations show that jets do not have significant sideways expansion. In one word, effects caused by the jet may be more complex in real GRBs.
Sample Selection
We collected data on GRBs with multiband radio light-curve observations (that is, GRBs with radio light curves in no fewer than two bands) from the literature, in order to put constraints on the circumburst density profiles based on Eqs. 13 and 14. Only the observed fluxes and corresponding epochs, rather than fitted peak values such as the ones shown in Table 4 of Chandra & Frail (2012) or Table 1 of Li et al. (2015), were utilized in this analysis. Because, as can be seen from Eqs. 13 and 14 as well as Section 2.1, theoretically speaking both the relativistic and transrelativistic cases give rise to nonnegative a and b regardless of the k value, so that high-frequency GRB radio afterglow light curves should exhibit a higher F_peak and a t_peak no later than the lower-frequency peaks, a general trend of later and weaker peaks at lower frequencies was required during our sample selection, in order to comply with the theoretical framework of this analysis. For this reason, GRB 980519 (Frail et al. 2000b) was excluded, due to the late appearance of its higher-frequency peaks compared with the lower-frequency ones, which is hard to explain with the GRB afterglow model based upon synchrotron radiation, unless we take large observational errors and light-curve modulation due to interstellar scintillation into consideration. Besides, GRBs 050416A (Soderberg et al. 2007), 020813, and 050713B were omitted, due to their lower-frequency peak fluxes being higher than the higher-frequency ones. Although our strategy is somewhat biased, such a bias could be considered as originating from model constraints, especially for our radio-peak-focused approach to get the k value.

Notes. (a) "VLA": Very Large Array; "JVLA": the upgraded Karl G. Jansky Very Large Array; "ATCA": Australia Telescope Compact Array; "GMRT": Giant Metrewave Radio Telescope; "WSRT": Westerbork Synthesis Radio Telescope; "Ryle": Ryle Telescope; "AMI": Arcminute Microkelvin Imager. (b) Error bars of t_peak,ν denote the epochs at which the adjacent measurements were taken around the corresponding peak of the light curve. (c) Flux decreases monotonically during the complete observing campaign without a definite peak; we took the highest S_ν as the peak flux. (d) Flux increases monotonically during the complete observing campaign without a definite peak; we took the highest S_ν as the peak flux. (e) The 4.9 GHz light curve of GRB 980329 exhibits two peaks, according to Taylor et al. (1998). Since the first peak appears earlier than the 8.3 GHz peak, which is in contradiction with the standard afterglow model, here we adopt the later peak for our analysis. (f) The 30 GHz data acquired by the Owens Valley Radio Observatory's 40 m telescope from Galama et al. (2000) were discarded due to a scintillated light curve without a definite peak, and the later appearance of the highest flux compared to the 15 GHz data; the 1.43 GHz data acquired by the VLA from Galama et al. (2000) were discarded due to the earlier peak time compared to higher frequencies. (g) The 22.46 GHz data acquired by the VLA from Berger et al. (2001) were discarded due to a monotonic light curve and the later appearance of the highest flux compared to the 15 GHz data. (h) The 8.46 GHz data point with S_ν = 566 ± 34 µJy acquired on 2000 October 5.216 (UTC) by the VLA from Harrison et al. (2001) was adopted as the peak flux; a later rebrightening on October 12.771 with S_ν = 644 ± 126 µJy was discarded due to its later appearance compared to the 4.86 GHz peak, as well as the larger error bar.
(i) The 1.43 GHz data acquired by the VLA from Berger et al. (2003) were discarded due to scintillated light curves and the earlier appearance of the highest flux measurement compared with the 4.86 GHz data; the 22.5 GHz data were discarded due to a monotonic light curve and a maximum flux significantly lower than the 8.46 GHz data. (j) Two flux measurements with similar flux levels, S_ν = 728 ± 55 and 749 ± 63 µJy, were performed at 2004 January 4.33 and January 15.35 (UTC), respectively, as shown in Soderberg et al. (2004b). The radio afterglow of this burst appears to have suffered significantly from interstellar scintillation, according to the data presented in Soderberg et al. (2004b), making the exact peak time somewhat hard to determine. (k) The 15 GHz measurements acquired by the Ryle Telescope from Soderberg et al. (2006) were discarded due to large measurement errors and a lack of data points; the 22.5 GHz measurements acquired by the VLA from Soderberg et al. (2006) were discarded due to a lack of data points. (l) An 8.46 GHz measurement performed at 2007 October 12.04 (UTC) shows a flux level of S_ν = 431 ± 51 µJy (Perley et al. 2008), which is slightly larger than the peak data listed in this table. The earlier S_ν = 430 ± 50 µJy measurement was chosen as the peak considering the theoretical prediction of earlier peaks at higher frequencies. However, because no 8.46 GHz data were taken between the two measurements, it is difficult to know the exact peak time of the GRB 071003 radio afterglow at this frequency. (m) The ATCA 5.5 GHz data for GRB 100418A listed in Moin et al. (2013) were discarded due to the monotonic light-curve behavior, which is inconsistent with the VLA data. 2015) were discarded due to the lower flux and later peak compared with the 4.8 GHz data. (p) The 13 and 15 GHz measurements acquired by the JVLA from Cucchiara et al. (2015) were discarded due to their lower flux compared with the 7 GHz data. (q) The 1.255 GHz measurements acquired by the JVLA from Laskar et al. (2016) were discarded due to a peak that appeared earlier than the 1.644 GHz peak, with a flux higher than the 5.0 GHz peak. (r) The 1.45 GHz measurements acquired by the JVLA from Alexander et al. (2017) were discarded due to an earlier peak compared with 1.77 GHz; all data with observed frequency > 13.5 GHz were discarded due to their monotonic decline with maximum S_ν lower than the 11.0 GHz peak, owing to the lack of early observations.
In total, we selected 32 GRBs, including 31 long bursts and 1 short one (GRB 170817A), as our sample. At certain frequencies, some GRBs in our sample do not comply with the theoretical trend of higher F_peak and earlier t_peak at higher ν; such data points have been omitted in our analysis. Meanwhile, we largely ignored the light-curve peaks occurring earlier than ∼ 1−2 days in the observer's frame to avoid possible contamination from early reverse shocks, because, as can be seen in Fig. 6, the typical peak time due to forward–reverse-shock interactions should be at ∼ T_90 + several × 10^-1 days, which corresponds to ∼ T_90 + 1−2 days in the observer's frame given that the typical redshift of GRBs is z ∼ 3 (Jakobsson et al. 2005). Also, 50 GHz data are not taken into consideration, because at such high frequencies the t_peak = t_m conclusion does not apply (see Section 2). Considering the sparsely sampled nature of GRB radio observations, as well as the not-so-steep temporal slopes of radio afterglow light-curve fluxes (see Fig. 2), we took an approximate approach in our analysis: the maximum observed flux is taken as the peak flux F_peak,ν at a given observing frequency, the corresponding observing time as the estimated peak time t_peak,ν, and the epochs of the adjacent observations give the upper and lower limits of t_peak,ν. Besides, in order to make constraints with as many observing frequencies as possible, we adopted another strategy to utilize monotonically evolving light curves: the reading of the last data point of a monotonically increasing light curve was taken as the lower limit of F_peak, and the second-to-last epoch as the possible lower bound of the error range for t_peak. Similarly, for a constantly declining light curve, the first data point sets the lower limit of the peak flux, and the second observing epoch the upper limit of the peak time.
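As an illustration of the estimation strategy described above, the following minimal Python sketch extracts the approximate peak (or the corresponding limits for monotonic light curves) from a sparsely sampled light curve. This is not the pipeline actually used in this work; the function name and the toy light-curve values are purely illustrative.

```python
import numpy as np

def estimate_peak(times, fluxes):
    """Approximate peak extraction following the strategy in the text:
    the maximum observed flux is taken as F_peak, its epoch as t_peak,
    and the adjacent epochs give the error range on t_peak.
    Monotonic light curves only yield limits."""
    times, fluxes = np.asarray(times, float), np.asarray(fluxes, float)
    order = np.argsort(times)
    times, fluxes = times[order], fluxes[order]
    i = int(np.argmax(fluxes))

    if i == len(fluxes) - 1:      # monotonically rising: lower limits only
        return {"kind": "rising", "F_peak_lower": fluxes[-1],
                "t_peak_lower": times[-2] if len(times) > 1 else times[-1]}
    if i == 0:                    # monotonically declining: limits only
        return {"kind": "declining", "F_peak_lower": fluxes[0],
                "t_peak_upper": times[1] if len(times) > 1 else times[0]}

    return {"kind": "peaked",
            "F_peak": fluxes[i], "t_peak": times[i],
            "t_peak_range": (times[i - 1], times[i + 1])}

# toy 8.46 GHz light curve (days, mJy) -- illustrative numbers only
print(estimate_peak([3.1, 7.5, 12.2, 20.4, 35.0], [0.21, 0.45, 0.61, 0.50, 0.33]))
```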
Some GRBs have been observed by various telescopes at similar frequencies. For example, GRB 030329 has been observed by the WSRT at 1.4 GHz (van der Horst et al. 2005) and by the GMRT at 1.28 GHz (van der Horst et al. 2008). Also, GRB 100418A has been observed by the VLA and ATCA at 4.95 and 5.5 GHz, respectively (Moin et al. 2013). Because cross-calibration between different instruments could lead to extra complications, in such cases we select peak data for the analysis from as few telescopes as possible. Our main uncertainties arise from errors in the estimation of t_peak,ν, especially for GRB light curves with few observed data points. Information on t_peak,ν and F_peak,ν for each band of our GRB sample, as well as descriptions of the discarded outlier data, is listed in Table 1. It is worth noting that, as seen in Fig. 3, considering the long-lasting transition between relativistic and deep Newtonian shock waves, nearly all of our samples should still be in the relativistic or transrelativistic phase when the light-curve peaks occur.
Constraints on Circumburst Environment
The values of the a and b indices of the 32 sample GRBs are listed in Table 2. Figure 8 shows the a values, observing frequencies, and peak times in the light curves of each sample GRB, while Figure 9 shows the corresponding b values, observing bands, and peak fluxes. The distribution of the inferred k values of the circumburst environment for our complete sample can be found in Figure 7. We compare these fitted indices with the theoretical predictions from Eqs. 15 and 16 and Section 2.1, taking the circumburst density data provided by Chandra & Frail (2012) as well as possible observational errors into consideration. It can be seen that more than 40% of our samples can be explained with a uniform ISM distribution, while nearly one-fifth of the samples can be described with a stellar-wind environment. Due to various observational complications, both k = 0 and k = 2, or any 0 < k < 2 circumburst density distribution, work for about 30% of the samples, while two GRBs cannot be properly constrained with our analysis using multiband radio peaks.
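For concreteness, the index fitting behind this comparison can be reproduced schematically as follows: a and b are obtained from straight-line fits in log-log space, with t_peak,ν ∝ ν^-a and F_peak,ν ∝ ν^b. The sketch below uses made-up multiband peak values and is not the actual fitting code used to produce Table 2.

```python
import numpy as np

def fit_peak_indices(nu, t_peak, F_peak):
    """Least-squares fit of t_peak ~ nu^-a and F_peak ~ nu^b in log-log space."""
    log_nu = np.log10(np.asarray(nu, float))
    a = -np.polyfit(log_nu, np.log10(np.asarray(t_peak, float)), 1)[0]
    b = np.polyfit(log_nu, np.log10(np.asarray(F_peak, float)), 1)[0]
    return a, b

# illustrative multiband peaks: frequency (GHz), peak time (days), peak flux (mJy)
nu     = [1.43, 4.86, 8.46, 22.5]
t_peak = [95.0, 32.0, 18.0, 7.5]
F_peak = [0.35, 0.55, 0.70, 0.95]
a, b = fit_peak_indices(nu, t_peak, F_peak)
print(f"a = {a:.2f}, b = {b:.2f}")
# The fitted a and b can then be compared with the theoretical ranges discussed
# in Section 2.1: e.g., b ~ 1 favours the k = 0 transrelativistic case, while
# a ~ 2/3 with b between 0 and 1/3 matches the relativistic predictions.
```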
GRBs with ISM Density Profiles
According to our method, a total of 14 GRBs out of the 32 samples can be solely explained with a k = 0 ISM environment. As listed in Table 2, we find that the values of a and b for the multiband radio afterglow peaks of GRBs 980329, 980703, 000301C, 020903, 031203, 070125, 100418A, and 171010A lie between the ultra- and transrelativistic cases, and thus can be explained with transrelativistic afterglows in an ISM environment. As can be seen in Figs. 8 and 9, although observational uncertainties give the time-frequency index a large fitting errors for GRBs 980329, 020903, and 100418A, which may lead to multiple possible k values, the flux-frequency index b can be constrained stringently enough. In fact, as noted in Section 2.1, considering the late occurrence of the transrelativistic phase for the k = 1.1 or 2 cases, a value of b ∼ 1 alone can still lead to a confident conclusion of k = 0, regardless of the a value. Besides, it is worth noting that the radio peaks of GRBs 020903 and 070125 show "breaks" in the a and b values at a certain frequency, which could imply a transition from the ultra- to the transrelativistic phase, as the high-frequency peaks could appear during the ultrarelativistic phase with flatter a and b values, while the low-frequency ones occur later with the steeper transrelativistic indices.
Figure 7. Distribution of the k values inferred for the 32 GRBs in our sample. The radio peaks of 14 GRBs can be explained by the k = 0 density profile (with six samples showing possible contamination by the reverse shock). Another six samples exhibit behaviors compatible with predictions for the k ∼ 2 case. In addition, 10 GRBs can be explained by density distributions between the ISM and stellar-wind environments, and the remaining 2 bursts occur in environments that are hard to constrain, owing to observational errors.
Compared with existing works using multiband modeling, it should be noted that Berger et al. (2000), Soderberg et al. (2004b), and Chandra et al. (2008) have drawn similar conclusions for GRBs 000301C, 031203, and 070125, respectively, i.e., all three of these GRBs occur in ISM environments. Starling et al. (2008) found a possible ISM explanation for GRB 980329 and demonstrated that GRB 980703 can be explained by both k = 0 and k = 2 environments. Frail et al. (2003) also showed that the wind model cannot be ruled out (although it is not favored) in fitting the light curve of GRB 980703, which is still compatible with our constraints. An exception is GRB 171010A, which has been explained by Bright et al. (2019) using a steep density profile, although some hard-to-explain features do exist. This conclusion is somewhat inconsistent with our constraints, which could be due to the relatively large fitting errors for the b index, thus leaving large room for alternative explanations.
However, although the remaining six GRBs in this category, namely GRBs 050820A, 051022, 130427A, 141121A, 160509A, and 160625B, generally follow the predictions of the k = 0 ISM environment, Cenko et al. (2006), Perley et al. (2014), and Cucchiara et al. (2015), along with Alexander et al. (2017), all pointed out a possible reverse-shock contribution to the radio afterglow peaks of these bursts. In the presence of reverse-shock contamination, the multiband peak method of analysis may no longer apply, and thus the reliability of our related results could be compromised.
GRBs Compatible with a Stellar-Wind Environment
GRBs occurring in k = 2 density profiles, as implied by their multiband radio peaks, include GRBs 980425, 000418, 011121, 030329, and 100814A, along with the only short GRB in our sample, GRB 170817A. Among these, the light-curve peaks of GRBs 980425 and 030329 follow the predictions for the late-time afterglow (ν_a = ν_a,>) in a wind medium. Berger et al. (2003) and van der Horst et al. (2005) have fitted the afterglow light curves of GRB 030329 with both ISM and wind models, and their results are compatible with our conclusions.
Meanwhile, the b indices of GRBs 000418 and 031203 are relatively shallow and are consistent with expanding jets in the ISM environment; however, their a indices are too small to be explained with jets. It should also be noted that for several bursts, the medium- and low-frequency radio peaks may be explained with jet breaks, although the details depend strongly on the jet-opening angles (and hence the jet break times). Analysis with multiband radio peaks may only hint at multiple explanations, and data from other bands (e.g., optical) should be utilized to clarify the existence of jets. Besides, it is quite possible that the jet's sideways expansion slows down during the transrelativistic phase, in which case the results drawn in Section 2.2 no longer stand.
The a index of GRB 011121 is consistent with the existence of a jet, while its b index falls within the predictions for the k = 2 case. Considering the nearly simultaneous appearance of the multiband peaks and the large errors in the peak-time estimates, which can be attributed to observational effects, a burst occurring in a wind density profile is the more suitable conclusion, which is supported by Price et al. (2002c). A similar case applies to GRB 100814A, for which De Pasquale et al. (2012) pinned down the existence of a jet.
For the gravitational-wave-associated GRB 170817A, the only short burst in our sample, the multiband radio afterglow exhibits a shallow a together with a shallow b. Such a case is more consistent with a relativistic jet, which is compatible with the off-axis jet identified by previous works (e.g., see Fong et al. 2019).
GRBs Explained by k between 0 and 2
This category consists of 10 samples: GRBs 970508, 970828, 990510, 991208, 000926, 010921, 021004, 060218, 111215A, and 140304A. The uncertainties here mainly arise from the relatively large errors in the peak-time/flux estimates. For example, when fitting errors are taken into consideration, the peak behavior of GRBs 970508, 990510, and 010921 lies somewhere between the k = 0 transrelativistic and k = 2 late-time (ν_a = ν_a,>) cases, making any 0 ≤ k ≤ 2 possible. This is compatible with the conclusion drawn by Starling et al. (2008), who found a homogeneous circumburst medium for GRBs 970508 and 990510 using X-ray, optical, and IR afterglows. However, the exact k for each sample is hard to pin down precisely, due to the large uncertainties in a and b.
For GRBs with high circumburst densities, including GRBs 991208, 000926, 021004, 060218, and 111215A, the a and b indices can be explained with later transrelativistic shocks in a k ∼ 0 environment. However, because a larger A can lead to a later t_cm (and thus a later peak time t_m in the radio afterglow), the larger-k possibility, which can produce a steeper b during the relativistic phase, cannot be excluded for these bursts. When compared with existing works, Harrison et al. (2001) showed that the temporal behavior of GRB 000926's light curve can be better modeled with a Comptonized uniform medium; Soderberg et al. (2006) fitted GRB 060218 with both k = 2 and k = 0 density profiles, as well as a low E_iso; and Zauderer et al. (2013) pointed out that the circumburst medium of GRB 111215A should be in the form of a stellar wind. These results are all consistent with our analysis.
For GRBs 970828 and 140304A, although the a and b indices can be explained with both k = 0 and 2 possibilities when taking fitting errors into consideration, it should be noted that the afterglows of these bursts were significantly affected by reverse shock, as shown by Djorgovski et al. (2001) and Laskar et al. (2018). And the radio light curve of GRB 140304A might even be contaminated by interstellar scintillation (Laskar et al. 2018). All of these factors can complicate our analysis, and thus compromise the robustness of the results.
GRBs Hard to Explain with Multiband Light-curve Peaks
Finally, we have two outliers that cannot be well constrained by the multiband peak method. One of them is GRB 000911, whose a and b values are much steeper than the theoretical predictions for both trans- and ultrarelativistic shock waves, making its afterglow behavior hard to explain even when the fitting errors are considered. Another hard-to-explain case is GRB 071003, which is located near the Galactic plane, with its radio data severely affected by scintillation (Perley et al. 2008). This leaves large room for errors in the a and b indices and makes multiple explanations possible, including both relativistic and transrelativistic ejecta propagating in either the k = 0 or the k = 2 circumburst environment.
SUMMARY AND DISCUSSION
In this paper, we investigated the radio afterglows of gamma-ray bursts occurring in a power-law-distributed circumburst medium with n ∝ R^-k and an arbitrary k. We show that one can use the multiband radio afterglow peak time t_peak,ν and peak flux F_peak,ν data to put a constraint on the density-distribution index k. We find that in the relativistic phase of the afterglow evolution, the peak time t_peak,ν corresponds to t_m, that is, the time when the minimum frequency of the electrons equals the observing frequency. We have t_peak,ν ∝ ν^-a with a ∼ 2/3, which is independent of the shock radiation efficiency, while for F_peak,ν ∝ ν^b, the value of b depends more strongly on k: for adiabatic shocks, b increases from 0 to 1/3 as k changes from 0 to 2. In the transrelativistic phase, similar dependencies between the peak time/flux and frequency exist, with steeper a and b values than in the ultrarelativistic phase.
Figure 8. The relations between radio afterglow peak times t_peak,ν and observing frequencies ν/ν_min for our sample GRBs. The dashed line with data points in each panel shows the fitting results from observations, the blue dotted lines denote the possible range of fitting errors, and the black solid lines from top to bottom represent the theoretical predictions for the relativistic jet break, relativistic spherical shock wave, k = 2 late-time (ν_a = ν_a,>), and k = 0 transrelativistic cases, respectively.
Figure 9. The relations between radio afterglow peak fluxes F_peak,ν and observing frequencies ν/ν_min for our sample GRBs. The blue dashed line with data points in each panel shows the fitting results from observations, the blue dotted lines denote the possible range of fitting errors, and the black solid lines from top to bottom represent the theoretical predictions for the transrelativistic, k = 2 relativistic, and k = 0 relativistic cases, respectively. It should be noted that, as seen in Section 2.1, the b value for the transrelativistic cases is insensitive to k.
It can be seen that, by comparing multiband radio-peak data, one can determine the value of the circumburst medium distribution index k. We carried out such an analysis for 32 GRBs with multiband radio afterglow light-curve data, and find that about half of them can be explained with transrelativistic afterglows in uniform ISM environments. Only ∼ 20% of our sample GRBs can be confidently determined to occur in wind-like environments with larger k, although it should also be noted that the radio peaks of nearly 30% of our sample GRBs can be interpreted with a k value between 0 and 2. Most of these results are compatible with existing analyses of individual bursts. However, Yi et al. (2013) found that nearly all of their 19 sample GRBs show signs of k ∼ 0.4−1.4, with a typical value of k = 1, by analyzing the early forward–reverse-shock evolution. Obviously, our conclusion differs from that of Yi et al. (2013). One possible explanation could be that the emission regions of GRB afterglows become larger after entering the self-similar phase, and the circumburst medium distributions in these regions may be quite different from those in the early forward–reverse-shock-dominated regions, which are much closer to the GRB central engine. That is because the mass-loss processes of GRB progenitors induced by the stellar wind or other mechanisms have limited influence and may not change the density distribution farther away. Besides, the sample utilized by Yi et al. (2013) does not overlap with our radio sample. It is difficult to judge whether such inconsistency is due to diversity among individual bursts or to changes in the density distribution at different distances around a single GRB. If complete follow-up observations for one burst become available in the future, from early X-ray and optical detections to late-time radio observations, we can use this information to fully understand the GRB circumburst medium distributions at various distances and possible changes within.
As seen in Section 3, the main uncertainty in our analysis comes from errors in the peak-time/flux estimation. For each GRB we take the maximum observed flux as its approximate peak flux and the corresponding observing time as the estimated peak time in our index fitting, while the real peak may be situated at any time between the two observing epochs adjacent to our estimate, with a flux no lower than our estimate. If the sampling of the light curve is not frequent enough during the follow-up observations, especially if the data points near the peak time are sparsely distributed, the fitted a and b values using our method will have larger errors, and the circumburst medium distribution cannot be pinned down unambiguously. However, it is worth noting that, although such uncertainty does exist, our constraints on k are still better than those from single-band light-curve slope fitting. On the one hand, in many cases it is nearly impossible to fit the light-curve slopes reliably in order to calculate k with only a handful of data points, while our approach described in Section 3 can at least provide some preliminary results. On the other hand, even if enough observations can be made for one GRB in a single band, the data far away from the emission peak often exhibit large errors due to their low fluxes and instrument limitations, and a precise fitting of k cannot be guaranteed. On the contrary, with more multiband data in hand, the uncertainties in the peak estimation are greatly reduced. Hence, with more high-quality light-curve observations becoming available in the future, the parameters of the multiband radio peaks can be better determined, and more stringent constraints on the circumburst density distributions can be obtained.
Because multiband afterglow observations covering the low-frequency range may show signs of transrelativistic transitions, the values of a and b in different bands could be quite different. Since existing radio observations usually sample only a few frequencies, we can only see signs of such transitions in a handful of bursts, so the uncertainties here cannot be ignored. If radio follow-ups at more frequencies can be carried out in the future, such transitions should be unveiled. Besides, the new generation of large radio telescopes, including the Five-hundred-meter Aperture Spherical radio Telescope (FAST) and the upcoming Square Kilometre Array (SKA), have much higher sensitivities at lower frequencies, and thus can be used to detect GRB late-time afterglows (e.g., see Li et al. 2015; Zhang et al. 2015; Ruggeri & Capozziello 2016) and to sample more frequency bands at longer wavelengths, in order to obtain a more complete picture of transrelativistic shock behaviors.
Powering Knowledge Versus Pouring Facts
Many problems related to the real world admit a mathematical description (i.e., a mathematical model) based on what is studied at school. Solving the mathematical model, however, often requires a higher level of mathematics, and this is the reason for not including such problems in the curriculum. We present several problems of this kind and propose solutions to their mathematical models by means of widely available dynamic mathematics software (DMS) systems. For some of the problems, it is possible to directly use the in-built functionalities of the DMS and to construct a computer representation of the problem that allows exploring the situation and obtaining a solution without developing a mathematical model first. Using DMS in this way can broaden the applicability of school mathematics and increase its appeal. The ability of students to solve problems with the help of DMS has been tested by means of two types of competitions.
Introduction
There are two partially contradicting trends in high school mathematics education. On one hand, we want mathematical knowledge to be based on a solid logical base (rigor). On the other hand, we want this knowledge to be rich both in content and applications. These two trends cannot always (and easily) be reconciled (De Lange 1996). One of the reasons for this contradiction is the fact that only a few problems related to practice allow mathematically pure and complete treatment with the traditional rigor. The demonstration of patterns of logical thinking is time consuming and often related to simplified mathematical content that does not properly reflect the unavoidable complexity of the real world. The formulation of a mathematical model for a real-life situation cannot be based on rigor only. Dropping out some features and keeping only the most essential ones in the mathematical model requires skills that have little to do with rigor, and this is an obstacle for the inclusion of complex real-life situations in the mathematics curriculum. Furthermore, there are many problems related to practice (some of which will be considered below) that can be equipped with a reasonable mathematical model based on what is studied at school. The corresponding model may be a system of equations, an optimization problem, or something else of a mathematical nature. Solving this mathematical model, however, with the traditional rigor within the frame of the school mathematics is not always possible. It may require a higher level of mathematical knowledge, for instance, advanced calculus and/or numerical methods for approximation of the exact solution. This is another reason for avoiding the consideration of genuine real-life applications within the school mathematics. However, with the appearance of powerful and widely accessible dynamic mathematics software (DMS) systems it became possible to reduce, at least partially, the mentioned contradiction between rigor and applications. Solving a model can be performed by means of DMS. As mentioned in Hegedus et al. (2017) "This leaves more time for essential mathematical skills, e.g., interpreting, reflecting, arguing and also modeling or model building for which there is mostly no time in traditional teaching" (p. 20). With the help of technology, it is possible to offer to students much more demanding mathematical content and interesting applications (Hoyles and Lagrange 2009). Such a change would drastically increase the realm of real-life problems that can be considered in school. We do not have in mind only the traditional application of computers where a mathematical model of the problem is solved by a computer; in addition, some examples will be described below where the standard in-built operations ("buttons") of the DMS system can be used directly to make a computer representation of the problem without first writing the formulas of a mathematical model. This DMS representation of the problem will be called a "computer model of the problem." By means of this model and the in-built functionalities of DMS (such as dragging, measuring distances, and areas), the solution of the problem can be found with a reasonable degree of precision. This direct DMS modelling of the problem as well as the mathematical modelling of the problem, followed by a DMS-assisted solution, are in the focus of this paper, which is mainly oriented toward problem solving. 
Both types of modelling support the most natural way of knowledge acquisition: by experimenting, by formulating and verifying conjectures, by discussing with peers, and by asking more experienced people. In a nutshell, the technology provides the opportunity to learn mathematics by inquiry. This refers not only to what happens (and how it happens) in class but also to extracurricular activities that provide a fruitful playground for building mathematical literacy and cultivating elements of computational thinking (Freiman et al. 2009). Another advantage of using technology in this way is that much larger and more operational mathematical content could be given to the students at an earlier age.
Later in the paper, several problems are considered for which it is easy to assign a proper mathematical model based on school mathematics but whose solution with the necessary rigor while remaining in the frame of school mathematics is relatively difficult (or at least not easy). On the other hand, these models are easily solvable by means of DMS systems. This way of problem solving opens further opportunities for inquiry and cultivates the elementary computational thinking skills of the students, thus powering (in the sense of "adding power to") their existing knowledge and skills. Problems such as the ones considered below and the inquiry-based approach to their solving can make the mathematics studied in school more applicable and more appealing in contrast to the now prevailing pouring of mathematical facts. The ability of school students of different ages to solve such problems has been tested by means of two online competitions called VIVA Mathematics with Computer and Theme of the Month. The participants' scores show that the use of DMS for problem solving is gradually gaining popularity in Bulgaria. The students are interested in this approach and many are capable of using it. The problems considered next have been used in these competitions.
The Sample of Problems
The problems in this section illustrate the differences in the uses of the models we consider in this paper: a mathematical model which can (or cannot) be solved in the traditional way, a mathematical model allowing a simple DMS supported solution, and a computer model (direct DMS representation of the problem). Each of the problems is easy to formulate as a mathematical model but not so easy to solve with the usual rigor within school mathematics. On the other hand, an approximate DMS-assisted solution is readily available, or a computer model of the problem is easy to construct by means of which the problem can be solved even at the earlier stages of secondary education.
The Parking Entrance Problem
This problem is a further elaboration of one of the Problems of the Month used in the European Project MASCIL (http://www.mascil-project.eu/). We present both the computer modeling, which is amenable for younger students using DMS, and the pencil-and-paper mathematical modeling, which requires rather advanced knowledge of mathematics.
Problem 1 A vehicle (car, baby carriage, or wheelchair) with a wheelbase b (the distance between the centers of the wheels) and clearance (ride height) c is to be moved from the street to the basement of a house over a slope of γ degrees (Fig. 17.1; γ = 20°). Is this possible without damaging the bottom of the vehicle? (Fig. 17.2) The answer to this problem depends on the concrete values of the parameters b, c, and γ. A steeper slope γ is more likely to cause damage to the vehicle. Damage will also occur if b is big enough. The clearance c is also decisive. The interplay between these parameters is not simple, and the usual intuition does not help much. The heavy scratches on the surface of many "sleeping policemen" (speed bumps) on the streets indicate that problems similar to this one are important. Further, for the sake of simplicity, we will depict the vehicle only by its two wheels (circles of radius c centered at A and B, respectively) and the segment AB (the wheelbase) connecting the wheels. Both the computer model of this problem and its mathematical model rely on the very basic geometric fact that the opposite angles formed by two intersecting lines are equal (the angles α in Fig. 17.3). Figure 17.3 shows the collision situation when the vertex at the beginning of the slope hits the bottom of the vehicle at some point C on the segment AB. The second arm of the angle β in Fig. 17.3 is the tangent from C to the front wheel.
A collision occurs only if α + β < γ (the front wheel is no longer rolling on the slope). This suggests the idea for the computer model that is visualized in Fig. 17.4. The numbers b, c, and γ are entered in the model as parameters (sliders in GeoGebra). Using the built-in operations of GeoGebra, one constructs a segment AB of length b and two circles (the wheels) of radius c centered at A and B, and takes an arbitrary point C on AB that is outside the two wheels. Further, tangents from C to these circles are drawn as shown in Fig. 17.4 and, finally, the angles α and β are measured by the corresponding operation in GeoGebra. The sum δ = α + β is a function of the position of the point C. By moving (dragging) point C along AB and observing the change of δ, one can establish experimentally that the function δ attains its minimum at the point M, which is the middle of AB. If this minimum is greater than or equal to γ, the vehicle can be parked safely in the basement; otherwise a collision occurs. This observation confirms the intuitive expectation that the middle M of the segment AB is the critical and most vulnerable point: if it passes above the slope vertex, the vehicle can be parked safely in the basement. This observation also shows that an even simpler computer model can solve the problem. Note that if C and M coincide, then α = β, and the condition for non-collision takes the form 2α ≥ γ. Given the numbers b, c, and γ, one finds the middle M of the segment AB, draws the tangents from M to the two wheels, and measures the angle δ between these tangents (Fig. 17.5). If δ ≥ γ, the vehicle can be moved safely. If δ < γ, there will be a collision and moving it without damage becomes impossible.
The second computer model solution of this problem is completely amenable for students at earlier stages of secondary education. In contrast, as we will now see, the mathematical model of the problem requires knowledge of inverse trigonometric functions, and the classical solution uses some elements of calculus. Denote by x the length of the segment CA in Fig. 17.4. Then α = arcsin(c/x) and β = arcsin(c/(b − x)). One has to find the minimum of the function δ(x) = arcsin(c/x) + arcsin(c/(b − x)) on the interval [c, b − c] (this is the interval where the function δ(x) is well defined; we implicitly assume here that b > 2c). By finding the zeros of the derivative of δ(x), one can derive that the minimum of this function is attained for x = b/2 and solve the problem. Here are some tasks for further inquiry with the computer or the mathematical model of this problem (a short numerical sketch of the model is given after Problem 1.3 below): Problem 1.1 What is the steepest slope (in degrees) that a baby carriage with b = 130 cm and c = 12 cm can overcome without trouble? Problem 1.2 If the slope to the basement is 20° and the wheelbase of the car is b = 290 cm, what is the smallest radius of the wheels such that moving the car to the basement will not be a problem? Problem 1.3 For some vehicles, the bottom line is different from the line connecting the centers of the wheels. Also, the front wheels and the rear wheels are not always of the same radius (Fig. 17.6). Develop a computer and a mathematical model for the exploration of the dangers of moving such vehicles down slopes.
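The following short Python sketch of the mathematical model evaluates δ(x) on a grid and compares the numerical minimum with the closed-form value 2·arcsin(2c/b) attained at x = b/2, using the data of Problem 1.1. It is only an illustration of the model, not a GeoGebra construction.

```python
import numpy as np

def delta(x, b, c):
    """delta(x) = arcsin(c/x) + arcsin(c/(b - x)), in degrees."""
    return np.degrees(np.arcsin(c / x) + np.arcsin(c / (b - x)))

b, c = 130.0, 12.0                              # baby carriage of Problem 1.1
x = np.linspace(c + 1e-6, b - c - 1e-6, 100_001)
d = delta(x, b, c)
print(x[d.argmin()], d.min())                   # minimum at x = b/2 = 65, delta ~ 21.3 deg
print(np.degrees(2 * np.arcsin(2 * c / b)))     # closed form 2*arcsin(2c/b), also ~ 21.3 deg
# A slope gamma can be negotiated safely exactly when gamma <= delta_min, so the
# steepest slope this carriage can overcome without scraping is about 21.3 degrees.
```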
One could further explore the parking problem by means of the more realistic computer model developed by Toni Chehlarova. The corresponding GeoGebra file is available at http://cabinet.bg/content/bg/html/d22178.html (last visited December 2016).
The Cylindrical Container Problem
Problem 2 Two thirds of the volume of a closed cylindrical can of radius 5 cm ( Fig. 17.7) is filled with some liquid. What is the height of the liquid if the can is laid horizontally?
The problem seems to be three dimensional but could be easily reduced to a two dimensional one. In the horizontal position, two thirds of the circle area of the can base are covered by the liquid. Hence, the problem is reduced to finding a horizontal chord AB ( Fig. 17.8) in a circle of radius 5 cm with center at O that cuts off a circular segment (slice) of area one third of the total area of the circle. This can be done in different ways. The in-built operations of the DMS can be used to find the area of the circular sector outlined by the segments OA, OB, and the arc from B to A (in the counterclockwise direction) and the area of the triangle AOB. The difference between the two areas is the area of the circular segment we are looking for. If the horizontal chord AB is made movable (the DMS takes care of the dynamics and automatically re-calculates the areas), a position for the chord AB can be found such that the area of the circular segment is one third of the area of the entire circle. If C is the middle of the chord AB at this position, then the height of the liquid in the horizontal can is equal to the radius of the can base (5 cm) plus the length of the segment CO (which can be measured by the functionalities of the DMS). In our case, an approximate value for the height of the liquid is 6.32 cm. The computer model just developed allows exploration of similar situations with other cylindrical cans (the radius of the can could be made changeable, the part of the can volume which is filled with liquid in vertical position can change, etc.).
We will now proceed to a mathematical model of the problem. For the sake of generality (and since this will not introduce further complications), we will denote the radius of the can base by r. Let α be the measure (in radians) of the angle in the circular sector considered above. The area of this sector is (α/2)r². The area of the triangle OBA is (1/2)r² sin α. Hence, the angle α that corresponds to a circular segment with area equal to one third of the area of the circle has to satisfy the equation (α/2)r² − (1/2)r² sin α = (1/3)πr². Equivalently, α − sin α − (2/3)π = 0. As we see, the mathematical model of this problem is an exotic equation. School mathematics does not deal with such equations, and this seems to be the reason for not including this important cistern problem in the curriculum. The numerical/graphical solution of this model by DMS, however, is available. The graph of the function f(x) = x − sin x − (2/3)π is depicted in Fig. 17.9. The point A has been constructed as the intersection of the graph of f and the x-axis. The first coordinate of A gives the angle we are looking for: α = 2.60533 (the precision of 5 digits after the decimal point is taken here arbitrarily; it can be increased or decreased).
The length of the segment OC corresponding to this α and r = 5 can be calculated: OC = r cos(α/2) = 1.32465. For the height of the liquid in the horizontal position of the can, we obtain 6.32465. If the angle α is measured in degrees, the area of the circular sector is (α/360)πr². Correspondingly, the equation from which the angle α will be determined takes the form (α/360)π − (1/2) sin α = (1/3)π. A short numerical sketch of this computation is given after Problem 2.4 below. For further inquiries with either the computer model or the mathematical model, one could consider the following related problems: Problem 2.1 A horizontally laid cylindrical tank with diameter 200 cm and length 500 cm is partially filled with petrol so that the level of the petrol is 80 cm. How many liters of petrol are there in the tank?
Problem 2.2 If the height of the can from Problem 2 is 24 cm, how much additional liquid should be poured into it in a horizontal position so that the level of the liquid is elevated by 1 cm? If after the addition of the liquid the can is turned into vertical position, what is the height of the liquid level? Problem 2.3 If the height of the can from Problem 2 is 24 cm, how much liquid should be removed from it so that in a horizontal position the liquid level drops down by 1 cm?
Problem 2.4 A heavy metal ball of radius 4 cm is placed into an empty vertically placed can of radius 5 cm and height 25 cm. Then liquid is poured into the can until its level reaches 20 cm and then the can is sealed. What would the liquid level be, if the can is laid horizontally (see Fig. 17.10)?
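Returning to Problem 2, here is the numerical sketch referred to above: it solves the radian-form equation α − sin α − 2π/3 = 0 by simple bisection and then computes the liquid height r + r·cos(α/2) for r = 5 cm. The bisection routine is only a stand-in for the graphical DMS solution described in the text.

```python
from math import sin, cos, pi

def bisect(f, lo, hi, tol=1e-10):
    """Simple bisection root finder (f(lo) and f(hi) must have opposite signs)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# central angle of the empty circular segment: alpha - sin(alpha) - 2*pi/3 = 0
alpha = bisect(lambda a: a - sin(a) - 2 * pi / 3, 0.0, 2 * pi)
r = 5.0
height = r + r * cos(alpha / 2)            # liquid height in the horizontal can
print(round(alpha, 5), round(height, 5))   # ~2.60533 rad and ~6.32465 cm
```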
The Conical Container Problem
This problem is a well-known mathematics exercise for university students. It can be settled by means of calculus or by a mathematical trick with inequalities. We present the mathematical model and demonstrate that by means of a DMS the problem can be considered and solved in school.
Problem 3 A circular sector of measure α (in degrees) has been cut out from a circular plastic sheet of radius l with center O (Fig. 17.11). From the remaining part, a right circular cone is made by sticking (gluing) the cuts (Fig. 17.12). What is the size of the angle α (in degrees) for which the volume of the resulting cone is maximal?
The mathematical model of this problem is based on the well-known formula for the volume V of the cone: V = (1/3)πR²h, where R is the radius of the cone base and h is the cone's height. Since α is measured in degrees, the length of the arc of the removed circular sector is (α/360)·2πl. Therefore, the length of the cone base circumference is what remains after the cutting: 2πl − (α/360)·2πl. Hence, 2πl − (α/360)·2πl = 2πR. It follows that the radius R can be expressed as a function of x = α/360: R = l(1 − x). Further, it follows from Pythagoras's theorem that h² = l² − R² = l²(1 − (1 − x)²). Thus, the volume of the cone is V = (1/3)πl³(1 − x)²√(1 − (1 − x)²). The essence of the problem, its mathematical model, is to find a number x, 0 ≤ x ≤ 1, for which the function f(x) = (1 − x)²√(1 − (1 − x)²) attains its maximal value. Once again we see that the derivation of the mathematical model is based on school mathematics. Solving this model, however, requires more advanced mathematics. Using calculus, one can find the extremal values of this function f by finding the zeros of its derivative. These zeros are x = 1 ± √(2/3), and only x = 1 − √(2/3) is of interest here. There is a nice trick which allows a solution of this mathematical model by means of the well-known inequality between the arithmetic mean and the geometric mean of any non-negative numbers a, b, and c: (a + b + c)/3 ≥ ∛(abc). It is also known that equality is attained in this inequality if and only if a = b = c. Applying this inequality with a = b = (1 − x)²/2 and c = 1 − (1 − x)², for which a + b + c = 1 is constant, we get f(x) = √((1 − x)⁴(1 − (1 − x)²)) = 2·√(((1 − x)²/2)·((1 − x)²/2)·(1 − (1 − x)²)), so f(x) is maximal when the three factors are equal. The equality will be reached when (1 − x)²/2 = 1 − (1 − x)². This again yields x = 1 − √(2/3). If at all, calculus and the mentioned trick with the inequality are available only at the last stages of school mathematics. With the help of DMS, however, the mathematical model of this problem can be solved by younger students. It is possible to draw the graph of the function f(x) and see where its maximum is. The graph of the function f(x) can be seen in Fig. 17.13.
It is clear from this picture that the function f has two maxima. Only the one in the interval [0, 1] on the x-axis is of interest to us. The DMS (GeoGebra) allows observation of the coordinates of a point A, which moves along the graph of the function. When A is dragged to the highest point of the graph, its first coordinate will be equal to the value of x we are looking for. In Fig. 17.13, this is the point A = (0.18, 0.38). If the precision of the calculations is increased, one gets x = 0.1835, which is a very good approximation of x = 1 − √2/√3. This value of x corresponds to α ≈ 66.06°, and the latter value could be accepted as a reasonable solution to Problem 3.
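The same answer can be checked with a few lines of Python: a dense grid search over f(x) = (1 − x)²√(1 − (1 − x)²) reproduces the maximum found by dragging the point A in GeoGebra. The grid size is arbitrary.

```python
import numpy as np

# f(x) is proportional to the cone volume, with x = alpha/360 the cut fraction
x = np.linspace(0.0, 1.0, 1_000_001)
f = (1 - x) ** 2 * np.sqrt(1 - (1 - x) ** 2)
i = f.argmax()
print(x[i], 360 * x[i])          # x ~ 0.18350  ->  alpha ~ 66.06 degrees
print(1 - np.sqrt(2.0 / 3.0))    # exact value 1 - sqrt(2/3), for comparison
```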
The Ice Cream Container Problem
The next problem is a challenge for pencil-and-paper technology, even for university students. With the help of DMS it is completely amenable for school students.
Problem 4 An ice cream container (as depicted in Fig. 17.14) is to be made of a circular plastic sheet of radius l with center O by cutting and gluing (sticking). The cutting and gluing operations allowed, and the order in which they are performed, are: (a) Cut a circular sector of measure α (in degrees) from the plastic sheet (Fig. 17.15) and, by gluing, make from it a cone that will serve as the lower part of the ice cream container. (b) Cut off from the remainder (Fig. 17.15) a full circular sector of radius t (this number t is to be specified later) and glue a truncated cone that will serve as the upper part of the ice cream container.
For what size of α will the ice cream container have the largest volume? The length of the arc of the circular sector of measure α is 2πlα/360. The cone made of this sector will have a base radius r determined from the equation 2πr = 2πlα/360, i.e., r = lx, where x = α/360. The radius t of the full circular sector mentioned in (b) is determined in such a way that the upper circle of the lower cone fits the lower circle of the upper truncated cone: ((360 − α)/360)·2πt = 2πr. Hence t = r/(1 − x). Note that the length of the generatrix of the truncated cone obtained in (b) is l − t. The resulting container is depicted in Fig. 17.16 (Fig. 17.14 shows the ice cream container and Fig. 17.15 the cutting and gluing process). As in Problem 3, we see that the radius R of the upper circle of the truncated cone is R = (1 − x)l. The altitude h1 of the lower cone is determined by Pythagoras's theorem, h1² = l² − r² = l²(1 − x²), and the volume of the lower cone is V1 = (1/3)πr²h1. The altitude h2 of the truncated cone is determined similarly (using Pythagoras's theorem): h2² = (l − t)² − (R − r)², and the volume V2 of the truncated cone is V2 = (1/3)πh2(R² + Rr + r²). The volume of the ice cream container is V = V1 + V2. We note here that x must belong to the interval [0, 1/2]. This follows from the fact that the number t = r/(1 − x) = lx/(1 − x) cannot be bigger than l. Finding the maximum of V by means of calculus is a challenge. With the help of a DMS it can be found, as in the previous problem, that the maximal value of V is attained for x ≈ 0.23088, which corresponds to α ≈ 83.12°.
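A short numerical sketch of the model derived above (with the sheet radius normalized to l = 1) confirms this value. The grid search below is only an illustration and relies on the standard volume formulas for the cone and the truncated cone stated earlier.

```python
import numpy as np

def container_volume(x, l=1.0):
    """Volume of the ice-cream container (lower cone + upper truncated cone)
    as a function of x = alpha/360, for a sheet of radius l."""
    r  = l * x                                   # base radius of the lower cone
    R  = l * (1 - x)                             # upper radius of the truncated cone
    t  = l * x / (1 - x)                         # radius of the full sector cut in step (b)
    h1 = np.sqrt(l**2 - r**2)                    # height of the lower cone
    h2 = np.sqrt((l - t)**2 - (R - r)**2)        # height of the truncated cone
    V1 = np.pi * r**2 * h1 / 3
    V2 = np.pi * h2 * (R**2 + R * r + r**2) / 3
    return V1 + V2

x = np.linspace(1e-6, 0.5, 500_001)
V = container_volume(x)
i = V.argmax()
print(x[i], 360 * x[i])    # x ~ 0.2309  ->  alpha ~ 83.1 degrees, as quoted above
```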
Here are some problems for further inquiry: Problem 4.1 What is the minimal radius l of the initial circle from which the ice cream container is produced in the above way so that its volume is at least 200 cm 3 ?
Problem 4.2 A bucket (the far right of Fig. 17.17) with a circular base of radius r = 10 cm has to be made from a circular plastic sheet of radius l = 60 cm with center O by cutting and gluing (sticking). The cuts that are allowed and the order in which they are performed are: (a) Cut circles centered at O (i.e., concentric with the initial circle).
(b) Cut from the remainder a radial segment of measure a (in degrees).
For what size of a will the volume of the bucket be the largest?
A geometrical problem
This is the last of the sample problems: Problem 5 For an arbitrary triangle ABC, denote by D, E, and F its orthocenter, incenter, and the centroid, correspondingly (Fig. 17.18). Are there triangles ABC for which the area of the triangle DEF is bigger than the area of the triangle ABC itself?
This problem deviates in style from the previously considered problems. It contains a research-like component that is suitable for project work by students. The computer model for this problem is easy to construct. The in-built operations of GeoGebra can be used to construct the orthocenter, the incenter, and the centroid of an arbitrary triangle. Using the "finding area of a polygon" command, the areas of the triangles ABC and DEF are calculated and displayed on the monitor. Due to the dynamic functionalities of GeoGebra, this computer model of Problem 5 allows the exploration of many triangles (by dragging some of the vertices A, B, and C). Playing with the vertices, one can establish experimentally that for some obtuse triangles ABC the answer to the question in Problem 5 is positive.
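For readers without GeoGebra at hand, the same experiment can be mimicked with a short Python sketch that computes the three centers from coordinate geometry and compares the two areas. The triangle coordinates below are an arbitrary obtuse example, not taken from the original competition files.

```python
import numpy as np

def area(P, Q, R):
    """Area of triangle PQR (shoelace formula)."""
    return 0.5 * abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1]))

def centers(A, B, C):
    """Orthocenter D, incenter E and centroid F of triangle ABC."""
    A, B, C = map(np.asarray, (A, B, C))
    # orthocenter: intersection of two altitudes, (P-A).(C-B)=0 and (P-B).(C-A)=0
    M = np.array([C - B, C - A], float)
    rhs = np.array([np.dot(C - B, A), np.dot(C - A, B)], float)
    D = np.linalg.solve(M, rhs)
    a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
    E = (a * A + b * B + c * C) / (a + b + c)    # incenter (side-length weights)
    F = (A + B + C) / 3                          # centroid
    return D, E, F

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 0.3)     # a very obtuse triangle
D, E, F = centers(A, B, C)
print(area(A, B, C), area(D, E, F))              # ~0.6 versus ~3.2, so DEF is larger
```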
Note that this computer model solution of the problem does not require knowledge of more advanced mathematics (trigonometry, analytical geometry, etc.). It relies on the knowledge of the basic notions involved (orthocenter, incenter, and centroid), on acquaintance with the functionalities of GeoGebra, and on some modeling skills.
Problem 5.1 For an arbitrary triangle ABC, find the area of the triangle with vertices at the orthocenter, the circumcenter, and the centroid of ABC.
Exploring this task with the corresponding computer model can show that the required area is always zero and, therefore, the three points are collinear (they lie on the famous Euler line of the triangle ABC).
The following simplified form of Problem 5 was given as one of the tasks in the competition VIVA Mathematics with Computer.
Problem 5.2 Given is a triangle ABC (by its sides or by the coordinates of its vertices; see Fig. 17.18). Find the area of the triangle with vertices at the orthocenter D, the incenter E, and the centroid F of the triangle ABC.
The Competitions Viva Mathematics with Computer and Theme of the Month
In order to examine the attitudes of Bulgarian students to problems like those in the previous section and to test the students' ability to solve such problems, two online competitions named VIVA Mathematics with Computer (VIVA MC) and Theme of the Month (TM) were launched in 2014 with the financial support of VIVACOM, a major telecommunication operator in the country (https://www.vivacom.bg/bg). The VIVA MC competition is for students from Grade 3 to Grade 12 and has two rounds. The first round is conducted twice during the academic year (in December and April) and is with open access. The second round takes place in September or early October and is only for the best performers in the December and April editions of the first round from the previous academic year. Pre-registration is needed at the VIVAcognita portal (http://vivacognita.org/) for participation in VIVA MC. Each registered student chooses how to participate in the competition: from any place with internet access by desktop, tablet, or laptop. On a fixed day and time every participant gets access for 60 min to a worksheet that contains 10 tasks corresponding to the participant's age group. The easier tasks are equipped with several possible answers. i.e., these are multiple-choice questions. The participant is expected to select the correct answer on the basis of performing some mathematical operations. The majority of the remaining tasks require a decimal number (usually up to two digits after the decimal point) as an answer that has to be entered in a special answer field.
To find this answer, the student has to make a computer model of the task and explore it with the functionalities of DMS. Some of the most difficult tasks are accompanied by a file (a computer model) that solves a similar problem, and participants must modify the files accordingly in order to solve the tasks assigned to them. The number of points given for the answer to a task depends on both how close the student's answer is to the one calculated by the jury and/or by the author of the task and the difficulty of the problem. The maximum possible score is 50 points. There are no restrictions concerning the use of resources: books, internet search, advice from specialists, etc. More information about this competition can be found in Chehlarova and Kenderov (2015). In April 2016, there were 474 participants while in December 2016 the number of participants was 1321. In both cases there were five age groups (two grades per group). An impression of the degree to which the participants were capable of solving problems with the help of DMS can be gained by the overview of their scores presented in Tables 17.1 and 17.2. Students' scores in solving the problems from Sect. 17.2 were similar. Problem 5.2 from Sect. 17.2 was proposed as a last (presumably most difficult) task in the very first edition of VIVA MC (December 2014) to 207 students from Grades 8 to 12. The lack of experience with such problems and the short time to work on the problems (60 min) is clearly seen from the obtained results: About half of the students (48%) did not enter any answer for this task, 13% provided precise answer, and 2% gave an answer with satisfactory precision. The cylindrical container problem (Problem 2 from Sect. 17.2) was given to 317 students from Grade 8 to Grade 12 at the December 2015 edition of VIVA MC. An auxiliary DMS file was provided in order to facilitate the exploration of the problem. Only 13% provided an answer with sufficiently high precision. The answers of a further 37% were given with satisfactory precision. The general feeling has been that with every new edition of VIVA MC the performance of the participants improves, though rather gradually.
The other competition, TM, is conducted monthly. A theme of five tasks related to a common mathematical idea is published at the beginning of the month on the abovementioned portal (vivacognita.org). The tasks are arranged in order of increasing difficulty. The participants are expected to solve the problems and send their responses online by the end of the month. Some of the problems are accompanied by auxiliary DMS files which allow the students to explore the mathematical problem, find suitable properties, try out different strategies, and find a (usually approximate) solution. To solve the more difficult tasks from the theme, the students have to adapt the auxiliary files from previous problems or to develop their own files for testing and solving the problem. Each problem brings at most 10 points (depending on the degree of preciseness of the answer). The maximum total score is 50 points. Usually there are hundreds of visits to the site where the theme is published; only dozens, however, submit solutions. The theme for February 2015 was related to the parking problem (Problem 1 from Sect. 17.2). Seventeen participants submitted their solutions; seven received between 35 and 50 points and two received between 20 and 34. The results from the September 2015 theme, which was related to conical containers (Problems 3, 4, and 4.2 from Sect. 17.2), were much better: sixteen students submitted their solutions, with 14 scoring between 41 and 50 points and one scoring 34 points. The results of the first several runs of TM are published in Kenderov et al. (2015) and Chehlarova and Kenderov (2015). After the April 2017 edition of VIVA MC, the participants (more than 500) were asked to fill in a questionnaire and submit it to the organizers. Of the 143 participants who returned the questionnaire, 95.51% said they liked the event. Here are some of their responses: "The problems are interesting because they require logical thinking." "I like it because I could use GeoGebra for each problem." "The contest is nice since I don't feel pressed when solving the problems." "The questions are at the right level for me. It is interesting and helps me develop." "I find the problems entertaining." "It was easy for me to understand the formulation of the problem by means of the dynamic file I could use." "Every problem is interesting in its own way. I like the fact that I can explore while solving the problem." "I like the parking entrance problem because it is something you could face in the real world." This relatively modest feedback confirms the expectation that providing the students with appropriate exploration tools can increase their awareness of both the beauty and the applicability of mathematics.
Evaluating Multimodal Transportation Education In Indonesia
This research investigates the effectiveness of transportation education programmes in Indonesia, focusing on multimodal transportation. Through qualitative analysis of 50 Indonesian cadets enrolled in transportation programmes, the study assesses knowledge retention and application. Findings reveal strengths in curriculum breadth and practical application, but challenges in knowledge retention, faculty competence, and industry relevance. Recommendations include curriculum enhancements, active learning strategies, faculty development, industry collaboration, technology integration, and a global perspective. By addressing these recommendations, Indonesia can enhance the quality and relevance of transportation education, better preparing cadets for the complexities of the transportation industry.
INTRODUCTION
Transportation plays a pivotal role in facilitating economic growth, trade, and social connectivity in any nation (Litman, 2016; Vuchic, 2017). As the global economy becomes increasingly interconnected, the demand for efficient and sustainable transportation systems continues to rise. In the Indonesian context, the transportation sector holds particular significance due to the country's vast archipelago comprising thousands of islands. Effective transportation management is crucial for ensuring the seamless movement of goods, people, and resources across the diverse terrain of Indonesia. However, achieving excellence in transportation management requires a well-trained workforce equipped with the requisite knowledge and skills to navigate the complexities of the industry.
Against this backdrop, the field of transportation education assumes paramount importance in shaping the future of the sector. Transportation education programmes play a crucial role in nurturing the next generation of transportation professionals who will spearhead innovation and advancement in the industry (Green, 2021). Recognising the need for high-quality education in this domain, institutions such as the Maritime Institute of Jakarta (Sekolah Tinggi Ilmu Pelayaran) have emerged as key players in imparting knowledge and skills to aspiring transportation professionals. These institutions adhere to international standards and strive to equip students with a comprehensive understanding of transportation management, safety protocols, and legal principles. However, despite the efforts to align educational programmes with global standards, there remains a pressing need to evaluate the effectiveness of transportation education in Indonesia. The ability of students to retain and apply knowledge acquired through these programmes is a critical indicator of their preparedness for real-world challenges. Moreover, with the transportation sector evolving rapidly in response to technological advancements and changing regulatory landscapes, it is essential to assess the relevance and adequacy of current educational curricula (Bruton, 2021; Hakim et al., 2022). Thus, this research aims to address the gap in existing literature by conducting a qualitative analysis of multimodal transportation education in Indonesia.
The primary objective of this research is to assess the retention and application of knowledge among 50 Indonesian cadets enrolled in transportation education programmes.
These cadets represent a diverse cohort of students pursuing degrees in multimodal transportation, logistics, transportation safety, and law and road management. By employing qualitative research methods, including interviews and descriptive analysis, this study seeks to gain insights into the effectiveness of current educational programmes in preparing students for the complexities of the transportation sector. Furthermore, the research aims to identify areas of improvement within existing curricula to enhance the quality and relevance of transportation education in Indonesia.
The research gap analysis reveals several key areas where existing literature falls short in addressing the specific challenges and nuances of transportation education in Indonesia.
Firstly, while there is ample research on transportation education in global contexts, there is a dearth of studies focusing specifically on Indonesia. Given the unique geographical and infrastructural characteristics of the country, it is essential to examine transportation education within the Indonesian context to identify region-specific challenges and opportunities.
Secondly, existing research often relies on quantitative methods to evaluate educational programmes, overlooking the rich qualitative insights that can be gleaned from in-depth interviews and descriptive analysis.By adopting a qualitative approach, this research aims to provide a nuanced understanding of the factors influencing knowledge retention and application among Indonesian cadets.Lastly, there is limited research evaluating the alignment of transportation education programmes in Indonesia with international standards and best practices.This research seeks to fill this gap by assessing the extent to which educational curricula meet global benchmarks and identifying areas for improvement to enhance the competitiveness of Indonesian transportation professionals in the global market.
METHOD
The research method employed in this study is rooted in qualitative inquiry, aiming to provide a comprehensive understanding of the retention and application of knowledge among Indonesian cadets enrolled in transportation education programmes (Kim et al., 2017; Willig, 2014). Qualitative research is deemed appropriate for exploring complex phenomena and capturing the depth and richness of human experiences, which aligns with the multifaceted nature of transportation education.
To begin with, the research design entails the selection of participants from the Maritime Institute of Jakarta (Sekolah Tinggi Ilmu Pelayaran), a prominent institution offering transportation education in Indonesia. The participants consist of 50 Indonesian cadets representing a diverse range of academic backgrounds and interests within the transportation field. These cadets are enrolled in programmes focusing on multimodal transportation, logistics, transportation safety, and law and road management, thus offering a broad spectrum of perspectives on the subject matter.
Data collection in this research is primarily facilitated through semi-structured interviews conducted with the participating cadets. Semi-structured interviews provide the flexibility to explore various aspects of the research topic while allowing for in-depth probing and clarification of responses (Castleberry & Nolen, 2018; Padgett, 2016). The interview questions are carefully crafted to elicit insights into the cadets' experiences, perceptions, and challenges related to transportation education. Topics covered in the interviews include the cadets' academic journey, the effectiveness of educational programmes, knowledge retention strategies, and the application of learned principles in real-world scenarios.
In addition to interviews, the research incorporates the use of descriptive analysis to systematically examine and interpret the collected data. Descriptive analysis involves organising and summarising qualitative data to identify patterns, themes, and trends. This method allows for the identification of commonalities and differences among participants' responses, facilitating a holistic understanding of the research phenomenon. Furthermore, the research employs a thematic analysis approach to analyse the qualitative data obtained from interviews and descriptive analysis. Thematic analysis involves identifying, analysing, and reporting patterns or themes within the data, which enables researchers to derive meaningful insights and draw conclusions. Through iterative coding and constant comparison of data, themes emerge, representing recurring ideas, concepts, or phenomena relevant to the research objectives.
To ensure the rigour and credibility of the research findings, various strategies are employed to enhance the trustworthiness of the data. These include member checking, whereby participants are given the opportunity to review and validate the accuracy of their responses, thereby enhancing the credibility of the findings. Additionally, peer debriefing and reflexivity are employed to critically reflect on the research process and mitigate potential biases or preconceptions that may influence data interpretation.
Findings
The findings of the research shed light on the retention and application of knowledge among Indonesian cadets enrolled in transportation education programmes.Through qualitative analysis of semi-structured interviews and descriptive analysis of data, several key themes emerged, providing insights into the effectiveness of educational programmes and areas for improvement.The analysis reveals that the curriculum content plays a significant role in shaping the cadets' understanding of transportation management, safety, and legal principles.Through content analysis of interview responses, it was evident that the curriculum covers a wide range of topics relevant to the transportation industry, with an intensity of importance rated as high by the majority of participants.This indicates that the content provided in educational programmes aligns well with the knowledge areas deemed essential for aspiring transportation professionals.However, while the breadth of content is commendable, there were some suggestions for enhancing the depth and relevance of certain topics to better prepare cadets for real-world challenges.
In terms of knowledge retention, the findings suggest a moderate level of success among cadets in retaining the information acquired through educational programmes.
Interview coding revealed that while cadets demonstrated an understanding of fundamental concepts, there were instances of gaps in knowledge, particularly in more complex or specialised areas. Factors influencing knowledge retention included teaching methods, individual study habits, and the level of engagement with course materials. The findings highlight the importance of adopting innovative teaching strategies and providing ongoing support to reinforce learning and promote deeper understanding among cadets.
Practical application emerged as a key area of focus, with cadets expressing a strong desire for hands-on learning experiences.Observation of cadets' practical skills and application of theoretical knowledge revealed a high level of enthusiasm and competence in applying learned principles to real-world scenarios.However, there were challenges reported in accessing practical training facilities and opportunities for experiential learning.This indicates a need for closer collaboration between educational institutions and industry stakeholders to bridge the gap between theory and practice and provide cadets with more immersive learning experiences.
Faculty competence emerged as a critical factor influencing the effectiveness of educational programmes. Interview coding highlighted the importance of knowledgeable and experienced instructors in delivering high-quality education and fostering a conducive learning environment. Cadets expressed appreciation for instructors who demonstrated expertise in their respective fields and were able to effectively communicate complex concepts. However, there were also concerns raised about inconsistencies in teaching quality across different courses, suggesting a need for ongoing professional development and standardisation of teaching practices within educational institutions. Lastly, the findings indicate a moderate level of industry relevance in educational programmes, with interview coding revealing mixed perceptions among cadets. While some cadets expressed satisfaction with the industry exposure provided through internships and guest lectures, others felt that there was room for improvement in terms of curriculum alignment with industry needs and emerging trends. This underscores the importance of regularly updating educational curricula to reflect the evolving demands of the transportation sector and ensure that cadets are equipped with the skills and knowledge required to succeed in the field.
The findings of this research provide valuable insights into the strengths and weaknesses of transportation education programmes in Indonesia. While the curriculum content and practical application received commendation from cadets, there are areas for improvement in knowledge retention, faculty competence, and industry relevance. By addressing these areas of concern and implementing targeted interventions, educational institutions can enhance the quality and relevance of transportation education, ultimately better preparing cadets for the challenges of the transportation industry.
Discussion
The discussion of the research findings provides a comprehensive analysis of the effectiveness of transportation education programmes in Indonesia and offers insights into ways to enhance the quality and relevance of these programmes. The findings reveal several key areas for improvement, including curriculum content, knowledge retention, practical application, faculty competence, and industry relevance.
One of the key strengths identified in the research is the breadth of curriculum content covered in transportation education programmes.The majority of cadets reported that the curriculum provided a solid foundation in transportation management, safety, and legal principles.This indicates that educational institutions are successfully imparting the necessary theoretical knowledge to cadets, equipping them with a broad understanding of the transportation industry.However, while the breadth of content is commendable, there were suggestions from cadets to enhance the depth and relevance of certain topics.This highlights the importance of regularly reviewing and updating educational curricula to ensure they remain current and aligned with industry needs.
Another area of strength identified in the research is the practical application of knowledge among cadets.Faculty competence emerged as another critical factor influencing the effectiveness of educational programmes.While the majority of cadets expressed satisfaction with the knowledge and expertise of their instructors, there were concerns raised about inconsistencies in teaching quality across different courses (Berg, 2013;House & Saeed, 2016).This suggests a need for ongoing professional development and standardisation of teaching practices within educational institutions.By ensuring that all instructors possess the necessary knowledge and skills to effectively communicate complex concepts, educational institutions can enhance the overall quality of transportation education programmes.
The research also highlighted the importance of industry relevance in educational programmes.While some cadets reported satisfaction with the industry exposure provided through internships and guest lectures, others felt that there was room for improvement in terms of curriculum alignment with industry needs and emerging trends.This underscores the importance of regularly updating educational curricula to reflect the evolving demands of the transportation sector.By incorporating feedback from industry stakeholders and integrating emerging trends into the curriculum, educational institutions can ensure that cadets are equipped with the skills and knowledge required to succeed in the field.By addressing the areas for improvement identified in the research, educational institutions can enhance the quality and relevance of these programmes, ultimately better preparing cadets for the challenges of the transportation industry (Chakroborty & Das, 2017).Through ongoing collaboration with industry stakeholders and a commitment to continuous improvement, Indonesia can develop a workforce of transportation professionals equipped to drive innovation and advancement in the sector.
RECOMMENDATION
Based on the findings and discussion of the research, several suggestions and recommendations can be made to enhance the effectiveness of transportation education programmes in Indonesia.
Curriculum Enhancement:
To address the need for more in-depth and relevant content, educational institutions should regularly review and update their curricula to ensure they reflect the latest industry trends and best practices. This can be achieved through collaboration with industry stakeholders, who can provide insights into emerging challenges and technological advancements in the transportation sector.
Additionally, incorporating more practical, hands-on learning experiences into the curriculum can help bridge the gap between theory and practice and better prepare cadets for real-world challenges.
Global Perspective: Given the global nature of the transportation industry, educational programmes should incorporate a global perspective into their curricula. This could include modules on international transportation regulations, cross-border logistics, and global supply chain management to prepare cadets for careers in the international transportation arena. By implementing these suggestions and recommendations, educational institutions in Indonesia can enhance the quality and relevance of their transportation education programmes, ultimately better preparing cadets for successful careers in the transportation sector. Through collaboration with industry stakeholders, continuous professional development for faculty members, and a commitment to integrating technology and active learning strategies into the curriculum, Indonesia can develop a workforce of transportation professionals equipped to address the challenges of the 21st century transportation industry.
CONCLUSION
This research has provided valuable insights into the effectiveness of transportation education programmes in Indonesia, particularly in the context of multimodal transportation. The findings highlight several strengths, including the breadth of curriculum content, practical application of knowledge, and faculty competence. However, there are also areas for improvement, such as enhancing the depth and relevance of curriculum content, improving knowledge retention strategies, and strengthening industry relevance. To address these challenges, several recommendations have been proposed, including regular curriculum reviews, implementation of active learning strategies, continuous professional development for faculty, collaboration with industry stakeholders, integration of technology, and a global perspective in educational programmes. By implementing these recommendations, educational institutions can enhance the quality and relevance of transportation education programmes, ultimately better preparing cadets for the challenges of the transportation industry. By addressing the identified gaps and implementing targeted interventions, Indonesia can develop a skilled workforce of transportation professionals capable of driving innovation and advancement in the transportation sector.
Table 1: Summary of Key Findings (columns: Indicator; Valuation Technique; Value of Intensity of Importance; Score; Percentage).
Note: Scores and percentages are based on a scale of 50, representing the total number of cadets interviewed. The findings indicate that cadets are enthusiastic about applying theoretical knowledge to real-world scenarios and demonstrate a high level of competence in doing so. This suggests that educational programmes are effectively incorporating practical components, such as internships and hands-on training, to enhance the learning experience. However, there were challenges reported in accessing practical training facilities. Given the rapid advancements in technology within the transportation sector, educational institutions should integrate technology into their curricula to ensure that cadets are familiar with the latest tools and techniques used in the industry. This could include incorporating simulation software, virtual reality tools, | 3,641.4 | 2024-04-03T00:00:00.000 | [
"Engineering",
"Education"
] |
Pharmacological assessment of the aqueous extract of rose oil waste from Rosa x damascena Herrm cultivated in Georgia
Among the Rosaceae family's most popular and important plants, Rosa x damascena Herrm. holds one of the top places due to its centuries-long application in perfumery, cosmetics, aromatherapy and medicine. Despite this
Introduction
Rosa x damascena Herrm., one of the most famous representatives of the genus Rosa L., which comprises more than 400 species [1] [2], has been introduced and grown in different countries owing to its excellent decorative features and the presence of a valuable product, essential oil, in the flower petals. Since ancient times, rose oil and rose water have been used as raw material for cosmetics, perfumery, aromatherapy, and diverse medical purposes [3].
The yield of rose oil from flower petals is quite low (0.030-0.045%), and the oil production process generates a significant amount of waste (solid residues and waste water) that still contains biologically active compounds. Polyphenols, flavonoids and polysaccharides were found in wastes from the rose oil industry [4][5][6]. On the other hand, the same constituents determine various pharmacological activities of R. damascena, including but not limited to anti-inflammatory [7] [8], antioxidant [7] [9] [10] [11], and analgesic [12].
Recently, industrial production of rose oil from a local cultivar of R. damascena has been established in the Kakheti region of Georgia. In the present study, we attempted to evaluate some pharmacological properties of the aqueous extract (RDE) of rose oil waste from the R. damascena Georgian cultivar. It has been established that the predominant constituents of RDE are represented by flavonoids [13] [14]. Flavonoids are among the most diverse and widespread groups of plant secondary metabolites, well known for having a broad spectrum of biological activity, e.g., anti-inflammatory, anticarcinogenic, antidepressant, antibacterial, antimutagenic, anti-HIV, etc. [8] [15]. Due to the generally known ability of flavonoids to scavenge reactive oxygen species and suppress the production of pro-inflammatory factors [16] [17], we did not repeat these assays and focused on in vivo experiments. In particular, the evaluation of the gastro- and hepatoprotective activity of the aforesaid extract, as well as its leucopoietic properties, was carried out in corresponding animal models.
Aqueous extract of R. damascena flower petals (RDE)
R. damascena flower petals were processed by hydrodistillation according to the standard procedure described in the European Pharmacopoeia (Ph. Eur. 2008). Waste water was concentrated using a rotary vacuum evaporator below 40 ºC, frozen in a 10 mm layer in Petri dishes at -20 °C for 12 h, and vacuum dried to constant weight at -90 °C under 3.33 Pa. Finally, the dried material was ground to a powder and stored in a vacuum desiccator until further use.
Animals
Inbred white mice weighing 28 ± 2 g (n = 40) were obtained from the animal house of the Tbilisi State Medical University I. Kutateladze Institute of Pharmacochemistry and quarantined for 1 week in the Department of Preclinical Pharmacological Research of the above Institute. Animals were kept under standard conditions (temperature 20 ± 2 ℃, humidity 55-65%, 12/12-hour light/darkness cycle, granulated food - 4 g/animal/day, water ad libitum). All experiments were carried out in accordance with the requirements of the EU Directive 2010/63. The research protocol was authorized by the Tbilisi State Medical University Ethics Committee on Animal Research (approval # AP-52-2021).
Determination of LD50
The study was conducted using the Lorke method [18] modified by Akhila et al. [19]. In brief, ten groups of three animals each were used. A range of doses of the RDE was tested, starting from the lowest dose (10 mg/kg, intraperitoneally), with increments of 2. The treated animals were monitored for 24 h for mortality after the administration of each dose. The geometric mean of the highest dose which did not kill any mice and the lowest dose which led to the death of all animals was taken as the median lethal dose (LD50).
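As a minimal illustration of this Lorke-type estimate, the sketch below computes the LD50 as the geometric mean of the two bracketing doses. The bracketing doses shown are hypothetical (they are not reported in the text) and are chosen only so that the result matches the LD50 of 350 mg/kg given later in the acute toxicity results.

```python
import math

def lorke_ld50(highest_nonlethal_mg_kg: float, lowest_lethal_mg_kg: float) -> float:
    """Geometric mean of the two bracketing doses, as in the Lorke-type design."""
    return math.sqrt(highest_nonlethal_mg_kg * lowest_lethal_mg_kg)

# Hypothetical bracketing doses from a second-phase run (illustrative values only):
print(lorke_ld50(245.0, 500.0))  # -> 350.0 mg/kg
```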
Hepatoprotective activity (modification of CCl4-induced prolongation of pentobarbital sleeping time in mice)
The hepatoprotective effect of RDE was evaluated in a modified model of CCl4-induced liver damage (potentiation of pentobarbital sleeping time). Experiments were performed on 30 mice of both sexes weighing 26-32 g (10 animals in a group). Group III (experimental) - RDE was administered intraperitoneally in a dose of 50 mg/kg for 3 days. Within 1 hour after the third injection, and also on the next day, CCl4 (diluted 1:1 with olive oil) in a dose of 1 ml/kg was injected subcutaneously. Then the same dose of RDE was given for another 6 days (9 in total). Group II (negative control) received saline (0.1 ml, i.p.) and the same dose of CCl4. Group I (intact control) was given only saline (0.1 ml, i.p.) for 9 days and sham injected on days 3-4. On day 10, pentobarbital (45 mg/kg i.p.) was injected into mice of all groups and the duration of pentobarbital sleeping time, defined as the time between loss and recovery of the righting reflex, was recorded. Hepatoprotective efficacy was calculated from Tcon and Texp, the mean differences between sleeping time in groups II and III and in groups II and I, correspondingly.
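Since only the variable definitions are given above, the following sketch shows one plausible way to compute the efficacy from mean sleeping times, taking Tcon as the group II minus group III difference and Texp as the group II minus group I difference. The sleeping times used are hypothetical and are chosen merely to reproduce an effect of about 63%, the magnitude reported later in the results; they are not data from the study.

```python
def hepatoprotective_efficacy(sleep_intact: float, sleep_ccl4: float, sleep_rde: float) -> float:
    """Percent efficacy from mean pentobarbital sleeping times (one plausible reading):
    T_con = group II - group III difference, T_exp = group II - group I difference."""
    t_con = sleep_ccl4 - sleep_rde     # CCl4-only vs. RDE-treated
    t_exp = sleep_ccl4 - sleep_intact  # CCl4-only vs. intact control
    return 100.0 * t_con / t_exp

# Hypothetical group means (minutes), chosen for illustration only.
print(round(hepatoprotective_efficacy(40.0, 90.0, 58.5), 1))  # -> 63.0
```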
Gastroprotective activity (Ethanol induced ulcer model)
The experiment was carried out in accordance with the method described by Adinortey et al. [20]. In brief, 24 outbred mice were randomly distributed into three groups, each consisting of eight mice. 24 hours prior to the experiment access to food was restricted, and animals were relocated to cages with raised floors of wide wire mesh to prevent coprophagy. During the fasting period, all mice received a nutritive solution of 8% sucrose in 0.2% NaCl to avoid excessive dehydration. On day 2, absolute ethanol was given orally (1 ml/100 g) to all animals. RDE in a dose of 50 mg/kg, i.p. (Group III) or 100 mg/kg, i.p. (Group II) was given 1 hour prior to the ethanol administration. Mice of the control group (Group I) got 0.2 ml of saline. Animals were euthanized by CO2 inhalation 1 hour after the ethanol administration. The stomachs were immediately removed, opened along the great curvature, rinsed consecutively with water and 10% formalin solution (which contains about 4% formaldehyde w/v), fixed on white EPS foam board, and digitally photographed. Ulcerative lesions were measured using ImageJ software and a macroscopic ulcer index (MUI) was calculated for each stomach according to the following scale: 0 - no lesions; 1 - single petechial lesions (n < 10); 2 - multiple (n ≥ 10) petechial or short linear (≤ 2 mm) haemorrhagic lesions; 3 - long (> 2 mm) linear haemorrhagic lesions; 4 - continuous linear haemorrhagic lesions along the entire length of the glandular part of the stomach. The efficacy of RDE, expressed as the percentage of ulcer inhibition (%I), was estimated on the basis of the MUI from MUIC and MUIT, the macroscopic ulcer indexes in the control and test groups, respectively.
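A minimal sketch of the scoring arithmetic, assuming the conventional inhibition formula %I = (MUIC - MUIT)/MUIC x 100; the per-animal scores below are hypothetical and serve only to illustrate the calculation on the 0-4 scale described above.

```python
from statistics import mean

def percent_inhibition(mui_control: float, mui_treated: float) -> float:
    """Conventional ulcer-inhibition index based on mean macroscopic ulcer indexes (assumed form)."""
    return 100.0 * (mui_control - mui_treated) / mui_control

# Hypothetical per-animal MUI scores (8 mice per group), not data from the study.
control_scores = [4, 3, 4, 3, 3, 4, 2, 3]
rde_treated_scores = [1, 2, 0, 1, 1, 2, 1, 1]

mui_c, mui_t = mean(control_scores), mean(rde_treated_scores)
print(round(percent_inhibition(mui_c, mui_t), 1))  # percentage of ulcer inhibition
```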
Leucopoietic activity (cyclophosphamide-induced leukopenia in mice)
32 mice were randomly divided into four groups (8 animals in each): group I - intact control; group II - negative control (only cyclophosphamide); group III (cyclophosphamide + RDE 20 mg/kg); group IV (cyclophosphamide + RDE 50 mg/kg). Acute leukopenia was induced in mice of groups II-IV by a single intraperitoneal injection of cyclophosphamide at a dose of 350 mg/kg. Starting from day 2 after the administration of cyclophosphamide, groups III and IV were given RDE orally for 5 days. Blood sampling was performed on day 1 (basal level), day 2 (to estimate the rate of leukopenia) and after the completion of treatment (day 8 from the beginning of the experiment). Blood samples were collected under anaesthesia with pentobarbital sodium (45 mg/kg, i.p.) from the abdominal vena cava in accordance with the sampling protocol [21], and then mice were sacrificed by decapitation. Total white blood cell (WBC) counts were performed manually for each sample using a Neubauer chamber and microscopic examination of Romanowsky-stained smears with a 70X objective [22].
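The bookkeeping implied by this sampling schedule reduces to simple percentages of the basal count, as in the sketch below. The counts are hypothetical; the printed values only mirror the magnitudes reported in the results (about 80% reduction at the nadir and 64-75% recovery).

```python
def percent_of_basal(count: float, basal: float) -> float:
    """Express a WBC count as a percentage of the basal (day 1) count."""
    return 100.0 * count / basal

# Hypothetical total WBC counts (10^3 cells/uL) for one treated group:
basal, day2_nadir, day8_post = 8.0, 1.6, 6.0

print(round(100 - percent_of_basal(day2_nadir, basal)))  # % reduction at the day-2 nadir
print(round(percent_of_basal(day8_post, basal)))         # recovery on day 8 as % of basal
```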
Statistical analysis
All values were expressed as mean ± SEM. Statistical analysis of the experimental data was performed using Student's t-test [23]. Differences were considered significant at p < 0.05.
Acute toxicity studies
In the first phase, mice were divided into three groups of 3 mice each and treated with the RDE at doses of 10, 500 and 5000 mg/kg body weight intraperitoneally. Animals were observed for 24 h for signs of toxicity, mortality and general behaviour. In the second phase, twenty-one mice were divided into 7 groups of three animals each and administered the RDE at doses ranging between 10 and 650 mg/kg i.p., and the median lethal dose (LD50) was calculated. The LD50 of RDE was found to be 350 mg/kg.
Since ethanol-induced hemorrhagic damage to the gastric mucosa is commonly associated with oxidative stress [24] [25], it is natural to assume that natural compounds with antioxidant activity can prevent or reverse such damage. Among plant secondary metabolites, flavonoids are claimed to have various pharmacological activities in the gastroprotective domain, including anti-secretory, cytoprotective, antihistaminic and antioxidant characteristics [26][27][28][29][30]. On the other hand, it should be taken into consideration that the predominant constituents of the RDE are flavonoids, mainly quercetin glycosides. Hence it is likely that the observed gastroprotective effect of RDE may be attributed to the antioxidative activity of its flavonoid content [24] [26] [31].
CCl4-induced prolongation of pentobarbital sleeping time
In in vivo investigations, liver injury caused by CCl4, the most well-studied system of xenobiotic-induced hepatotoxicity [32] [33] [34], is a common model for evaluating pharmacological anti-hepatotoxic/hepatoprotective activities [35] [36] [37]. In particular, liver damage causes an increase in pentobarbital-induced sleeping time after carbon tetrachloride poisoning. As pentobarbital is metabolized solely in the liver [38], sleep duration indicates the intensity of hepatic metabolism. Sleeping time is entirely or partially restored in the presence of a hepatoprotective medication [39] [40] [41] [42]. In our experiment CCl4, expectedly, more than doubled the duration of pentobarbital-induced sleeping time, whereas RDE treatment at a dose of 50 mg/kg reduced the hepatotoxic impact of CCl4, resulting in a 63% reduction in sleep duration (Fig. 2).
Most likely, the observed hepatoprotective effects of RDE may be related to the flavonoids present in the extract, since these phytochemicals have been implicated as hepatoprotectors against CCl4-induced toxicity [43] [44] [45].
Leucopoietic activity (cyclophosphamide-induced leukopenia in mice)
The basal count of total WBC was within the normal range for mice in all groups. On day 2 after the cyclophosphamide administration, an approximately 80% reduction in total WBC was observed in groups II-IV (Table 1). The 5-day RDE treatment led to the recovery of total WBC up to 64% and 75% of the basal level in RDE-treated groups III and IV, respectively (Fig. 3).
Conclusion
The data of the present study indicate some beneficial effects of the RDE. In particular, it significantly protected against mucosal damage induced by absolute ethanol, alleviated carbon tetrachloride hepatotoxicity and contributed to the recovery of white blood cells after cyclophosphamide treatment. It is notable that all of the aforesaid disorders are closely linked with oxidative damage, and the observed effects may be attributed to the predominant flavonoidal constituents of the RDE, including, but not limited to, quercetin glycosides. These results clearly indicate that further experimentation is needed to determine the active principles of RDE responsible for the observed pharmacological activity, and to elucidate the exact mechanisms of their action.
Figure 1
Figure 1. Gastroprotective effect of RDE. A - Macroscopic view of ethanol-induced ulcer lesions in control (I) and RDE-treated (II and III) mice; B - Macroscopic ulcer index (MUI); C - Efficacy of RDE (%I). Each value represents mean ± SEM of 6 animals; * - p < 0.05 vs. negative control.
Figure 3
Figure 3. Rate of leucopenia (% reduction of total WBC) in CP- and RDE-treated animals. Each value represents mean ± SEM of 6 animals.
Presumably, the observed effect is associated with the presence of flavonoids in RDE, as complex flavonoid-containing preparations are reported to reverse leucopenia caused by cyclophosphamide when given in combination with it [47] [48] [49] [50]. On the other hand, a limitation of studying such preparations is the difficulty of relating the effect to a particular molecule within the complex mixture. Therefore, further study is needed to determine the constituents of the RDE responsible for the leukopoietic activity. | 2,734.2 | 2021-07-30T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry"
] |
A dimensionless approach for the runoff peak assessment: effects of the rainfall event structure
The present paper proposes a dimensionless analytical framework to investigate the impact of the rainfall event structure on the hydrograph peak. To this aim, a methodology to describe the rainfall event structure is proposed based on the similarity with the Depth-Duration-Frequency (DDF) curves. The rainfall input consists of a constant hyetograph in which all the possible outcomes in the sample space of the rainfall structures can be condensed. Soil abstractions are modelled using the Soil Conservation Service method, and the Instantaneous Unit Hydrograph theory is adopted to determine the dimensionless form of the hydrograph; the two-parameter gamma distribution is selected to test the proposed methodology. The dimensionless approach is introduced in order to apply the analytical framework to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. a linear, causative and time-invariant system). A set of analytical expressions is derived in the case of a constant-intensity hyetograph to assess the maximum runoff peak with respect to a given rainfall event structure, irrespective of the specific catchment (such as the return period associated with the reference rainfall event). Looking at the results, the curve of the maximum values of the runoff peak reveals a local minimum point corresponding to the design hyetograph derived according to the statistical DDF curve. A specific catchment application is discussed in order to point out the implications of the dimensionless procedure and to provide some numerical examples of the rainfall structures with respect to observed rainfall events; finally, their effects on the hydrograph peak are examined.
Introduction
The ability to predict the hydrologic response of a river basin is a central feature in hydrology. For a given rainfall event, estimating rainfall excess and transforming it to a runoff hydrograph is an important task for planning, design and operation of water resources systems. For these purposes, design storms based on the statistical analysis of the annual maximum series of rainfall depth are used in practice as input data to evaluate the corresponding hydrograph for a given catchment. Several models are documented in the literature to describe the hydrologic response (e.g. Chow et al., 1988; Beven, 2012): the simplest and most successful is the unit hydrograph concept first proposed by Sherman (1932). Due to the limited availability of observed streamflow data, mainly in small catchments, attempts at improving peak flow predictions have been documented in the literature since the last century (e.g. Henderson, 1963; Meynink and Cordery, 1976) to date. Recently, Rigon et al. (2011) investigated the dependence of peak flows on the geomorphic properties of river basins. In the framework of flood frequency analysis, Robinson and Sivapalan (1997) presented an analytical description of the peak discharge irrespective of the functional form assumed to describe the hydrologic response. Goel et al. (2000) combined a stochastic rainfall model with a deterministic rainfall-runoff model to obtain a physically based probability distribution of flood discharges; results demonstrate that the positive correlation between rainfall intensity and duration impacts the flood flow quantiles. Vogel et al. (2011) developed a simple statistical model in order to simulate observed flood trends as well as the frequency of floods in a nonstationary context including changes in land use, climate and water uses. Iacobellis and Fiorentino (2000) proposed a derived distribution of flood frequency, identifying
the combined role played by climatic and physical factors on the catchment scale. Bocchiola and Rosso (2009) developed a derived distribution approach for flood prediction in poorly gauged catchments to shift the statistical variability of a rainfall process into its counterpart in terms of statistical flood distribution. Baiamonte and Singh (2017) investigated the role of the antecedent soil moisture condition in the probability distribution of peak discharge and proposed a modification of the rational method in terms of an a priori modification of the rational runoff coefficients.
In this framework, the present research study takes a different approach by exploring the role of the rainfall event features on the peak flow rate values.Therefore the main objective is to implement a dimensionless analytical framework that can be applied to any study case (i.e.natural catchment) in order to investigate the impact of the rainfall event structure on hydrograph peak.Since the catchment hydrologic response and in particular the hydrograph peak is subjected to a very broad range of climatic, physical, geomorphic and anthropogenic factors, the focus is posed on catchments where lumped rainfall-runoff models are suitable for deterministic event-based analysis.In the proposed approach, the rainfall event structure is described by investigating the maximum rainfall depths for a given duration d in the range of durations [d/2; 2d] within that specific rainfall event, differently from the statistical analysis of the extreme rainfall events.Other authors (e.g.Alfieri et al., 2008) have previously discussed the accuracy of literature design hyetographs (such as the Chicago hyetograph) for the evaluation of peak discharges during flood events; conversely the proposed methodology allows the investigation of the impact of the above-mentioned rainfall event structure on the magnification of the runoff peak neglecting the expected rainfall event features condensed in the depth-duration-frequency (DDF) curves.
The first specific objective is to define a structure relationship of the rainfall event able to describe the sample space of the rainfall event structures by means of a simple power function.The second specific objective is to implement a dimensionless approach that allows the generalization of the assessment of the hydrograph peak irrespective of the specific catchment characteristic (such as the hydrologic response time, the variability of the infiltration process, etc.), thus focusing on the impact of the rainfall event structure.
Finally a specific catchment application is discussed in order to point out the dimensionless procedure implications and to provide some numerical examples of the rainfall structures with respect to observed rainfall events; furthermore their effects on the hydrograph peak are examined.
Methodology
A dimensionless approach is proposed in order to define an analytical framework that can be applied to any study case (i.e. natural catchment). It follows that both the rainfall depth and the rainfall-runoff relationship, which are strongly related to the climatic and morphologic characteristics of the catchment, are expressed through dimensionless forms. In this paper, [L] refers to length and [T] refers to time.
The rainfall event is then described as a constant hyetograph of a given duration; this simplification is consistent with the use of deterministic lumped models based on the linear system theory (e.g. Bras, 1990). The proposed approach is therefore valid within a framework that assumes that the watershed is a linear, causative and time-invariant system, where only the rainfall excess produces runoff. In detail, the rainfall-runoff processes are modelled using the Soil Conservation Service (SCS) method for soil abstractions and the instantaneous unit hydrograph (IUH) theory. Consistently with the assumptions of the UH theory, the proposed approach is strictly valid when the following conditions are maintained: a known excess rainfall and a uniform distribution of the rainfall over the whole catchment area.
The dimensionless form of the rainfall event structure function
Rainfall DDF curves are commonly used to describe the maximum rainfall depth as a function of duration for given return periods. In particular, for short durations, rainfall intensity has often been considered rather than rainfall depth, leading to intensity-duration-frequency (IDF) curves (Borga et al., 2005). Power laws are commonly used to describe DDF curves in Italy (e.g. Burlando and Rosso, 1996) and elsewhere (e.g. Koutsoyiannis et al., 1998). The proposed approach describes the internal structure of rainfall events based on the similarity with the DDF curves. Referring to a rainfall event, the maximum rainfall depth observed for a given duration d is described in terms of a power function of d, with coefficient a' and structure exponent n, similarly to the DDF curve (Eq. 1); the event is comparable to a constant-intensity rainfall for n close to 1. As an example, Fig. 1 describes the rainfall event structure according to the approach illustrated above. In Fig. 1, the observed rainfall depth (at the top), the observed maximum rainfall depths (at the centre) and the corresponding rainfall structure exponent (at the bottom) are reported on an hourly basis.
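A minimal sketch of how such a structure exponent could be estimated from a recorded hyetograph: accumulate the maximum depth over a set of durations within [d/2; 2d] and fit the power-law exponent by log-log least squares. The time step, the set of window factors and the synthetic series are illustrative assumptions of this example, not part of the original methodology.

```python
import numpy as np

def max_depth(rain_mm: np.ndarray, window: int) -> float:
    """Maximum rainfall depth accumulated over any contiguous window of `window` steps."""
    cum = np.concatenate(([0.0], np.cumsum(rain_mm)))
    return float(np.max(cum[window:] - cum[:-window]))

def structure_exponent(rain_mm: np.ndarray, dt_h: float, d_h: float) -> float:
    """Fit h(d) = a' * d**n over durations in [d/2, 2d] by a log-log least-squares fit."""
    windows = [max(1, int(round(f * d_h / dt_h))) for f in (0.5, 0.75, 1.0, 1.5, 2.0)]
    durations = np.array([w * dt_h for w in windows])
    depths = np.array([max_depth(rain_mm, w) for w in windows])
    n, _log_a = np.polyfit(np.log(durations), np.log(depths), 1)
    return float(n)

# Illustrative 5-min hyetograph (mm per step), not an observed record.
rng = np.random.default_rng(1)
event = rng.gamma(shape=0.6, scale=1.2, size=48)
print(round(structure_exponent(event, dt_h=5 / 60, d_h=1.0), 2))
```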
In order to correlate the rainfall event structure function to the DDF curve, a reference rainfall event has to be defined in terms of the maximum rainfall depth, h_r, occurring for the reference duration, t_r. Focusing on a given catchment, the reference duration, t_r, is assumed to be equal to the hydrologic response time of the catchment; thus, assuming a specific return period T_r [T], the reference value of the maximum rainfall depth, h_r [L], is defined according to the corresponding DDF curve, h_r = a(T_r) t_r^b (Eq. 2), where a(T_r) [LT^-b] and b [-] are respectively the coefficient and the scaling exponent of the DDF curve.
Referring to a rainfall duration corresponding to t_r, the rainfall depth is assumed to be equal to the reference value of the maximum rainfall depth. Based on this assumption, a relationship between the parameters of the DDF curve and the rainfall event structure function can be derived by equating the two expressions at d = t_r (Eq. 3). From Eq. (3) it is possible to derive the coefficient of the rainfall event structure function, a', for a given reference duration, t_r. Note that the a' coefficient is assumed to be valid in the range [t_r/2; 2t_r], similarly to the n-structure exponent. The dimensionless approach is then introduced since it allows an analytical framework to be defined which can be applied to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. a linear, causative and time-invariant system). The reference values h_r and t_r are directly linked to the climatic and morphologic characteristics of the specific catchment, and therefore the dimensionless approach based on the h_r and t_r values allows the generalization of the results irrespective of the specific catchment characteristic (such as the return period associated with the reference rainfall event).
Based on the proposed approach, the dimensionless form of the rainfall depth, h*, is defined by the ratio of the rainfall depth, h, to the reference value of the maximum rainfall depth, h_r; similarly, the dimensionless duration, d*, is expressed by the ratio of the duration, d, to the reference time, t_r. Therefore, the dimensionless form of the rainfall structure relationship (Eq. 4) may be expressed by combining Eqs. (1), (2) and (3), as sketched below.
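Written out under the power-law assumptions above, the chain of relations reads as follows; this is a compact restatement of what the text describes, not the source's numbered equations.

```latex
% Event structure, DDF curve, and their link at the reference duration t_r
h(d) = a' \, d^{\,n}, \qquad
h_r = a(T_r)\, t_r^{\,b}, \qquad
h(t_r) = h_r \;\Rightarrow\; a' = a(T_r)\, t_r^{\,b-n},
\\[4pt]
h_* = \frac{h}{h_r} = \frac{a'\,d^{\,n}}{a(T_r)\,t_r^{\,b}}
    = \left(\frac{d}{t_r}\right)^{\!n} = d_*^{\,n}.
```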
The dimensionless form of the unit hydrograph
The hydrologic response of a river basin is here predicted through a deterministic lumped model: the interaction between rainfall and runoff is analysed by viewing the catchment as a lumped linear system (Bras, 1990). The response of a linear system is uniquely characterized by its impulse response function, called the instantaneous unit hydrograph.
For the IUH, the excess rainfall of unit amount is applied to the drainage area in zero time (Chow et al., 1988).
To determine the dimensionless form of the unit hydrograph, a functional form for the IUH and thus the S-hydrograph has to be assumed. In this paper the IUH shape is described with the two-parameter gamma distribution (Nash, 1957), f(t) = (1/(k Γ(α))) (t/k)^(α-1) e^(-t/k) (Eq. 5), where α [-] is the shape parameter and k [T] is the scale parameter. In the well-known two-parameter Nash model, the parameters α and k represent the number of linear reservoirs added in series and the time constant of each reservoir, respectively. The product αk is the first-order moment, thus corresponding to the mean lag time of the IUH. Note that the IUH parameters can be related to the watershed geomorphology; in these terms the geomorphologic unit hydrograph (GIUH) theory attempts to relate the IUH of a catchment to the geometry of the stream network (e.g. Rodriguez-Iturbe and Valdes, 1979; Rosso, 1984). The use of the Nash IUH allows an analytical framework to be defined which assesses the relationship between the maximum dimensionless peak and the n-structure exponent for a given dimensionless duration, and a similar analytical derivation can be carried out for simple synthetic IUHs. The dimensionless form of the IUH is obtained by using the dimensionless time, t* = t/(αk) (Eq. 6). The proposed dimensionless approach is based on the use of the IUH first-order moment as the reference time of the hydrologic response (i.e. t_r = αk). Using the first-order moment in the dimensionless procedure, the proposed approach can be applied to any IUH form even if, for experimentally derived IUHs, the analytical solution of the problem is not feasible. By applying the change of variable t = αk t*, the IUH may be expressed in terms of t* (Eq. 7). The dimensionless form of the IUH, f(t*), is defined and derived from Eq. (7) (Eq. 8). Note that for the dimensionless IUH the first-order moment is equal to 1, and the time to peak, t_I*, can be expressed accordingly (Eq. 9). The dimensionless unit hydrograph (UH) is derived by integrating the dimensionless IUH (Eq. 10), where S(t*) is the dimensionless S-curve (e.g. Henderson, 1963). For a dimensionless unit of rainfall of a given dimensionless duration, d*, the dimensionless UH, U(t*), is obtained by subtracting the two consecutive S-curves that are lagged d* (Eq. 11). The time to peak of the dimensionless UH, t_p*, is derived by solving dU(t*)/dt* = 0. Using Eqs. (8) and (11) and recognizing that t_p* ≥ d* gives the implicit equation for t_p* (Eq. 12). Similar expressions for the time to peak are available in the literature (e.g. Rigon et al., 2011; Robinson and Sivapalan, 1997). Consequently, the peak value of the dimensionless UH can be expressed as a function of d* (Eq. 13); a numerical sketch of these relations is given below.
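The relations above can be checked numerically with a short sketch. The gamma form of the dimensionless IUH and the use of the regularized incomplete gamma function for the S-curve follow from the change of variable t = αk t*, while the grid resolution and the value α = 3 (adopted later in the Results) are assumptions of this example.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc  # regularized lower incomplete gamma

ALPHA = 3.0  # shape parameter assumed here, as in the Results section

def iuh(t_star, alpha=ALPHA):
    """Dimensionless Nash IUH with unit first-order moment: f(t*) = a^a t*^(a-1) exp(-a t*) / Gamma(a)."""
    t_star = np.asarray(t_star, dtype=float)
    out = np.zeros_like(t_star)
    pos = t_star > 0
    out[pos] = alpha**alpha * t_star[pos]**(alpha - 1) * np.exp(-alpha * t_star[pos]) / gamma_fn(alpha)
    return out

def s_curve(t_star, alpha=ALPHA):
    """Dimensionless S-curve, the integral of the IUH: S(t*) = P(alpha, alpha t*)."""
    t_star = np.asarray(t_star, dtype=float)
    return gammainc(alpha, alpha * np.clip(t_star, 0.0, None))

def unit_hydrograph(t_star, d_star):
    """Dimensionless UH for a unit rainfall of duration d*: U(t*) = [S(t*) - S(t* - d*)] / d*."""
    return (s_curve(t_star) - s_curve(np.asarray(t_star) - d_star)) / d_star

t = np.linspace(0.0, 8.0, 4001)
d_star = 1.0
u = unit_hydrograph(t, d_star)
k = int(np.argmax(u))
print(round(t[k], 2), round(u[k], 3))   # time to peak and peak value of the UH for d* = 1
print(round((ALPHA - 1) / ALPHA, 2))    # mode of the dimensionless IUH, (alpha - 1)/alpha
```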
The dimensionless runoff peak analysis
Based on the unit hydrograph theory and assuming a rectangular hyetograph of duration d*, the dimensionless convolution equation for a given catchment becomes Q(t*) = i_e(d*) [S(t*) - S(t* - d*)] (Eq. 14), where Q(t*) is the dimensionless hydrograph and i_e(d*) is the dimensionless excess rainfall intensity.
Note that the hypothesis of the rectangular hyetograph is not introduced in order to simplify the methodology but in order to describe the rainfall event structure. Based on such an approach, the rainfall event structure at a given duration is represented through the n-structure exponent, and it follows that the rainfall event is described by a simple rectangular hyetograph. It has to be noticed that the constant hyetograph derived from a given n-structure is assumed to be valid in the same range of durations from which it is derived, [d_i/2; 2d_i].
In the following sections the dimensionless hydrograph and the corresponding peak are examined in the case of constant and variable runoff coefficients.
The analysis in the case of a constant runoff coefficient
By considering a constant runoff coefficient, ϕ0 [-], the dimensionless excess rainfall depth h_e* is defined similarly to the dimensionless rainfall depth h* (Eq. 15). The corresponding dimensionless excess rainfall intensity follows (Eq. 16). From Eqs. (13), (14) and (16), the dimensionless hydrograph and the corresponding peak may be expressed (Eqs. 17 and 18). In order to investigate the critical condition for a given catchment which maximizes the runoff peak, the partial derivative of Eq. (18) with respect to the variable d* is calculated.
The analytical expression for estimating the critical duration of rainfall that maximizes the peak flow was first derived by Meynink and Cordery (1976). Similarly, from Eq. (19) it is possible to analytically derive the n-structure value that maximizes the dimensionless runoff peak for a specific duration d* referring to a given catchment (Eq. 20); a numerical sketch of this condition is given below.
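A numerical sketch of this critical condition, assuming the peak is normalized by the reference excess rainfall intensity so that the constant coefficient ϕ0 cancels (an assumption consistent with the asymptotic peak value of 1 quoted in the Results). The expression coded in critical_n() is a reconstruction obtained by differentiating the peak with respect to d*, not a quotation of Eq. (20).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as gamma_fn, gammainc

ALPHA = 3.0  # shape parameter of the Nash IUH, as in the Results section

def f(t):
    """Dimensionless Nash IUH (unit mean)."""
    return ALPHA**ALPHA * t**(ALPHA - 1) * np.exp(-ALPHA * t) / gamma_fn(ALPHA) if t > 0 else 0.0

def S(t):
    """Dimensionless S-curve: regularized lower incomplete gamma P(ALPHA, ALPHA * t)."""
    return float(gammainc(ALPHA, ALPHA * t)) if t > 0 else 0.0

def time_to_peak(d_star):
    """Solve f(t_p*) = f(t_p* - d*) for t_p* >= d* (the Eq. 12-type condition)."""
    return brentq(lambda t: f(t) - f(t - d_star), d_star + 1e-9, d_star + 10.0)

def peak(d_star, n):
    """Dimensionless peak normalized by the reference excess intensity (the phi_0 factor cancels)."""
    tp = time_to_peak(d_star)
    return d_star**(n - 1.0) * (S(tp) - S(tp - d_star))

def critical_n(d_star):
    """n for which the derivative of the peak with respect to d* vanishes."""
    tp = time_to_peak(d_star)
    return 1.0 - d_star * f(tp) / (S(tp) - S(tp - d_star))

for d in (0.5, 1.0, 1.5, 2.0):
    n_c = critical_n(d)
    print(d, round(n_c, 2), round(peak(d, n_c), 3))  # d* = 1 gives n close to 0.31, consistent with the saddle in Fig. 4
```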
The analysis in the case of a variable runoff coefficient
The variability of the infiltration process across the rainfall event, as well as the initial soil moisture conditions, significantly affects the hydrological response of the catchment. In order to take these elements into account, a variable runoff coefficient, ϕ, is introduced. The variable runoff coefficient is estimated based on the SCS method for computing soil abstractions (SCS, 1985). Since the analysis deals with high-rainfall-intensity events, it is reasonable to force the SCS method to always produce runoff (Boni et al., 2007). The assumption that the rainfall depth always exceeds the initial abstraction is implemented in the model by supposing that a previous rainfall depth at least equal to the initial abstraction occurred; therefore, the excess rainfall depth h_e is evaluated from the rainfall depth and the soil abstraction S [L] (Eq. 21). The variable runoff coefficient is therefore described as a monotonically increasing function of the rainfall depth. It follows that the runoff component is affected by the variability of the infiltration process: the runoff is reduced in the case of small rainfall events and is enhanced in the case of heavy events.
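Under the stated assumption that the initial abstraction has already been satisfied by antecedent rainfall, the SCS relations take the familiar form below. This is a reconstruction consistent with the reference runoff coefficients quoted later (e.g. ϕ_r = 0.8 for S* = 0.25), not a verbatim copy of Eqs. (21) and (24).

```latex
% Reconstructed SCS relations, assuming the initial abstraction is already satisfied
h_e = \frac{h^{2}}{h + S}, \qquad
\varphi = \frac{h_e}{h} = \frac{h}{h + S}, \qquad
\varphi_r = \frac{h_r}{h_r + S} = \frac{1}{1 + S_*}, \qquad
\frac{\varphi}{\varphi_r} = \frac{h_*\,(1 + S_*)}{h_* + S_*}.
```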
The dimensionless excess rainfall depth, h_e*, is defined as the ratio of h_e to the reference excess rainfall depth, h_e_r [L] (Eq. 22), where ϕ_r [-] is the corresponding reference runoff coefficient.
The corresponding dimensionless excess rainfall intensity follows (Eq. 23). From Eq. (21), the ratio ϕ/ϕ_r may be determined in terms of h* (Eq. 24), where S* is the dimensionless soil abstraction defined by the ratio of S to h_r.
According to the dimensionless approach proposed in the present paper, different initial moisture conditions can be analysed by considering different S* values associated with different CN conditions (i.e. CN I or CN III, or different soil characteristics) for the same reference rainfall depth.
The ratio ϕ/ϕ_r is lower than 1 when the dimensionless rainfall depth is lower than 1, and vice versa. In the domain of h* < 1 (i.e. d* < 1), the variable runoff coefficient implies that the runoff component is reduced with respect to the reference case, and vice versa. The impact of the ratio ϕ/ϕ_r on the runoff production is enhanced if S* increases, thus causing a wider range of runoff coefficients.
From Eqs. (13), (14) and (23), the dimensionless hydrograph and the corresponding peak may be expressed (Eqs. 25 and 26). Similarly to the runoff peak analysis carried out in the case of the constant runoff coefficient, the partial derivative of Eq. (26) with respect to the variable d* is calculated (Eq. 27). From Eq. (27) it is possible to implicitly derive the n-structure value that maximizes the dimensionless runoff peak for a specific duration d* referring to a given catchment.
Results and discussion
The proposed dimensionless approach is derived using the two-parameter gamma distribution with the shape parameter equal to 3. Such an assumption derives from the Nash model relation proposed by Rosso (1984) to estimate the shape parameter based on Horton order ratios, according to which the α parameter is generally in the neighbourhood of 3 (La Barbera and Rosso, 1989; Rosso et al., 1991). In Fig. 2, the dimensionless rainfall duration is plotted vs. the dimensionless time to peak, together with the dimensionless IUH and the corresponding dimensionless UH for d* = 1.0. Note that the dotted grey line indicates the UH peak, while the dashed grey lines show t_p*, f(t_p*) and f(t_p* - d*), respectively.
The dimensionless UH is evaluated by varying the dimensionless rainfall duration in the range between 0.5 and 2, in accordance with the n-structure definition in the range of durations [d_i/2; 2d_i]; then the runoff peak analysis is carried out in the case of constant and variable runoff coefficients.
The achieved results are presented with respect to the abovementioned dimensionless duration range [0.5; 2] that is wide enough to include the duration of the rainfall able to generate the maximum peak flow for a given catchment (Robinson and Sivapalan, 1997).
Finally, the dimensionless procedure is applied to a small Mediterranean catchment. In the catchment application the dimensionless procedure is fully specified, from the evaluation of the rainfall structures associated with three observed rainfall events to the determination of the reference peak flow and, consequently, of the dimensionless hydrograph peaks for the three observed rainfall structures.
Maximum dimensionless runoff peak with constant runoff coefficient
The dimensionless form of the hydrograph is shown in Fig. 3 with variation of the rainfall structure exponents, n, for the selected dimensionless rainfall duration.The hydrographs are obtained for excess rainfall intensities characterized by a constant runoff coefficient and rainfall structure exponents of 0.2, 0.3, 0.5 and 0.8.The impact of the rainfall structure exponents on the hydrograph form depends on the rainfall duration: for d * lower than 1, the higher n the lower is the peak flow rate and vice versa.Figure 4 illustrates the 3-D mesh plot and the contour plot of the dimensionless runoff peak as a function of the rainfall structure exponent and the dimensionless rainfall duration.In the 3-D mesh plot as well as in the contour plot, it is possible to observe a saddle point located in the neighbourhood of d * and n values equal to 1 and 0.3, respectively.Note that the intersection line (reported as bold line in Fig. 4) between the saddle surface and the plane of the principal curvatures where the saddle point is a minimum indicates the highest values of the runoff peak for a given n-structure exponent.
In Fig. 5, the maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent are plotted vs. the dimensionless time to peak. Further, the dimensionless IUH and the corresponding dimensionless UH for d* = 1.0 are reported as an example. The reference line (indicated as a short-short-short dashed grey line in Fig. 5) illustrates the lower control line corresponding to an infinitesimally small rainfall duration. Note that the rainfall structure exponent that maximizes the runoff peak for a given duration can be simply derived as a function of the dimensionless time to peak (see Eq. 20). The maximum dimensionless hydrograph peak curve tends to one for long dimensionless rainfall durations (d* > 3), when the corresponding n-structure exponent tends to one (see Eq. 18): for high values of the n-structure, the critical conditions occur for long durations that correspond to paroxysmal events for which the rainfall intensity remains fairly constant. The local minimum of the maximum dimensionless runoff peak curve (see Fig. 5) occurs at a t_p* of 1.29, corresponding to an n-structure value of 0.31 and a d* of 1.
Maximum dimensionless runoff peak with variable runoff coefficient
The excess rainfall depth, in the case of a variable runoff coefficient, is evaluated by assigning a value to the reference runoff coefficient. In particular, the reference runoff coefficient is defined utilizing Eq. (21) (Eq. 28). In order to provide an example of the proposed approach, the presented results are obtained assuming a dimensionless soil abstraction S* of 0.25. It follows that the reference runoff coefficient ϕ_r is equal to 0.8. Similarly to the results presented for the case of a constant runoff coefficient, Fig. 6 illustrates the dimensionless hydrographs obtained for excess rainfall intensities characterized by a variable runoff coefficient and n-structure exponents of 0.2, 0.3, 0.5 and 0.8 at assigned dimensionless rainfall durations (d* = 0.5, 1.0, 1.5 and 2.0). The dimensionless hydrographs obtained for the variable runoff coefficient show the same behaviour as those derived for the constant runoff coefficient (see Figs. 3 and 6), even if they differ in magnitude, thus confirming the role of the variable runoff coefficient on the runoff peak. In particular, due to the variability of the infiltration process, the runoff peaks slightly decrease for rainfall durations lower than 1 (i.e. d* = 0.5) when compared with those observed in the case of a constant runoff coefficient, while they rise for durations larger than 1 (i.e. d* = 1.5 and 2).
Figure 7 shows the 3-D mesh plot and the contour plot of the dimensionless runoff peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a variable runoff coefficient. By comparing Figs. 7 and 4, it emerges that the contour lines observed in the case of a variable runoff coefficient reveal a steeper trend with respect to the constant-runoff-coefficient trends; indeed, the impact of the n-structure exponent on the hydrograph peak is enhanced when the runoff coefficient is assumed to be variable. The saddle point is again located in the neighbourhood of d* and n values equal to 1 and 0.3, respectively, while the curve of the maximum values of the runoff peak (reported as a bold line in Fig. 7) is moved to the left.
In Fig. 8, the maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent are plotted vs. the dimensionless time to peak in the case of a variable runoff coefficient. Results plotted in Fig. 8 confirm that the maximum runoff peak curve reveals the local minimum point at a t_p* of 1.29, corresponding to an n of 0.26 and a d* of 1. Referring to an S* of 0.25, the maximum dimensionless runoff peak tends to 1.25 for long dimensionless rainfall durations (d* > 3), when consequently the n-structure exponent tends to 1 (see Eqs. 24 and 26). Figure 9 illustrates the influence of different variable runoff coefficients (i.e., for instance, different initial moisture conditions or different soil characteristics) on the maximum dimensionless runoff peak. Similarly to Fig. 8, the maximum dimensionless hydrograph peak (see the top graph) and the corresponding rainfall structure exponent (see the centre graph) are plotted vs. the dimensionless time to peak in the case of a variable runoff coefficient (for S* values of 0.25 and 0.67), together with the comparison to the case of a constant runoff coefficient. The maximum dimensionless runoff peak is similar for short rainfall durations (i.e. t_p* lower than 1.5), when the variable runoff coefficient reduces the runoff component with respect to the reference runoff case (which is also the constant runoff case, i.e. S* = 0). On the contrary, the maximum dimensionless runoff peak increases with increasing dimensionless soil abstraction for long rainfall durations. Such behaviour is due to the rate of change in the runoff production with respect to the rainfall duration: with increasing rainfall volume, the relevance of runoff with respect to the soil abstraction rises. In other words, the n-structure exponent that maximizes the runoff peak decreases when the dimensionless soil abstraction is increased (see Eq. 27).
Catchment application
To illustrate the implications of the dimensionless procedure and to provide some numerical examples of rainfall event structures, the proposed methodology has been implemented for the Bisagno catchment at the La Presa station, located in the central part of the Liguria region (Genoa, Italy).
The Bisagno-La Presa catchment has a drainage area of 34 km² and an index flood of about 95 m³ s⁻¹. The upstream river network is characterized by a main channel length of 8.36 km and a mean streamflow velocity of 2.4 m s⁻¹. Regarding the geomorphology of the catchment, the area (RA), bifurcation (RB) and length (RL) ratios evaluated according to the Horton-Strahler ordering scheme are equal to 5.9, 5.6 and 2.5, respectively. Considering the altimetry, vegetation and limited anthropogenic exploitation of the territory, the Bisagno-La Presa is a mountain catchment characterized by an average slope of 33 %. The soil abstraction, S_II, is assumed to be equal to 41 mm; its evaluation is based on the land use analysis provided in the framework of the EU project CORINE (EEA, 2009). The mean value of the annual maximum rainfall depth for the unit (hourly) duration and the scaling exponent of the DDF curves are equal to 41.31 mm h⁻¹ and 0.39, respectively. A detailed hydrologic characterization of the Bisagno catchment can be found elsewhere (Bocchiola and Rosso, 2009; Rulli and Rosso, 2002; Rosso and Rulli, 2002). With regard to the rainfall-runoff process, the two parameters of the gamma distribution are evaluated based on the Horton order ratio relationship (Rosso, 1984). The shape and scale parameters are estimated to be 3.4 and 0.25 h, respectively, corresponding to a lag time of 0.85 h.
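As a point of reference, the lag time quoted above is simply the product of the gamma shape and scale parameters (3.4 × 0.25 h = 0.85 h). The short sketch below, a hedged illustration rather than the authors' code, evaluates a two-parameter gamma IUH with these values; the function name and the discretization step are illustrative choices only.

import numpy as np
from math import gamma

def gamma_iuh(t, shape=3.4, scale_h=0.25):
    """Two-parameter gamma instantaneous unit hydrograph u(t) [1/h]."""
    return (t / scale_h) ** (shape - 1) * np.exp(-t / scale_h) / (scale_h * gamma(shape))

t = np.arange(0.0, 6.0, 0.01)           # time axis in hours
u = gamma_iuh(t)
lag_time = np.trapz(t * u, t)           # first moment = shape * scale
print(f"lag time ~ {lag_time:.2f} h")   # ~0.85 h, as quoted for Bisagno-La Presa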
In this application, three rainfall events observed in the catchment area have been selected in order to analyse the different runoff peaks that occur for the three rainfall event structures. For comparison purposes, the selected events are characterized by an analogous magnitude of the maximum rainfall depth observed for the duration equal to the reference time (i.e. hr = 80 mm, tr = 0.85 h).
Figure 10 illustrates the rainfall event structure curves derived for the three selected rainfall events. The graphs at the top report the observed rainfall depths, while the central graphs show the estimated rainfall structure exponents. At the bottom of Fig. 10, considering the three structure exponents corresponding to the Bisagno-La Presa reference time (i.e. n = 0.55, 0.62, 0.71), the rainfall event structure curves are derived for rainfall durations ranging between 0.5·tr and 2·tr; for comparison purposes, the DDF curve is also reported. Based on each rainfall structure curve, four rectangular hyetographs with durations of 0.425, 0.85, 1.275 and 1.7 h in the range [tr/2; 2tr] are derived to evaluate the impact on the hydrograph peak of the Bisagno-La Presa catchment. Note that the analysis is performed in the case of a variable runoff coefficient whose reference value is equal to 0.66 (i.e. S* = 0.5; S = 41 mm). In Fig. 11, the excess rainfall hyetographs, the corresponding hydrographs and the reference value of the runoff peak flow are plotted for the three investigated rainfall structure exponents. The reference value of the runoff peak flow (dash-dot line) is evaluated by assuming a constant-intensity hyetograph of infinite duration with an excess rainfall intensity equal to that estimated for the reference time. The role of the rainfall structure exponent emerges in the different rates at which the excess rainfall intensity decreases with duration, which in turn determine the corresponding rates at which the peak flow values increase.
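To make the construction of the rectangular hyetographs concrete, the sketch below assumes the rainfall event structure follows the DDF-like power law h(d) = hr·(d/tr)^n, which is how the n-structure exponent is used throughout this analysis; the loop and variable names are illustrative, not the authors' implementation.

# Hedged sketch: rectangular hyetograph intensities for the Bisagno-La Presa example,
# assuming a power-law rainfall structure h(d) = h_r * (d / t_r) ** n.
h_r, t_r = 80.0, 0.85            # reference depth [mm] and reference time [h] quoted in the text
exponents = [0.55, 0.62, 0.71]   # rainfall structure exponents of the three observed events
durations = [0.5 * t_r, 1.0 * t_r, 1.5 * t_r, 2.0 * t_r]   # 0.425, 0.85, 1.275, 1.7 h

for n in exponents:
    for d in durations:
        depth = h_r * (d / t_r) ** n       # rainfall depth accumulated over duration d [mm]
        intensity = depth / d              # constant intensity of the rectangular hyetograph [mm/h]
        print(f"n={n:.2f}  d={d:.3f} h  depth={depth:6.1f} mm  intensity={intensity:6.1f} mm/h")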
Figure 12 shows the contour plot of the dimensionless hydrograph peak in the case of a variable runoff coefficient (S* = 0.5). The maximum runoff peak curve is also reported (bold line), together with the dimensionless hydrograph peaks (grey-filled stars) for the selected rainfall structure exponents (n = 0.55, 0.62, 0.71) and durations (d* = 0.5, 1.0, 1.5 and 2.0). Note that these selected rainfall structures represent only three of the possible outcomes in the sample space of the rainfall structures described in the contour plot. Similarly to Fig. 7, the Bisagno-La Presa catchment application shows a curve of the highest values of the runoff peak characterized by a local minimum (saddle point) in the neighbourhood of d* and n values equal to 1 and 0.3, respectively.
Conclusions
The proposed analytical dimensionless approach allows the impact of the rainfall event structure on the hydrograph peak to be investigated. To this end, a methodology to describe the rainfall event structure is proposed based on the similarity with the depth-duration-frequency curves. The rainfall input consists of a constant hyetograph in which all the possible outcomes in the sample space of the rainfall structures can be condensed through the n-structure exponent. The rainfall-runoff processes are modelled using the Soil Conservation Service method for soil abstractions and the instantaneous unit hydrograph theory. In the present paper the two-parameter gamma distribution is adopted as the IUH form; however, the analysis can be repeated using other synthetic IUH forms, obtaining similar results. The proposed dimensionless approach defines an analytical framework that can be applied to any study case for which the model assumptions are valid; the site-specific characteristics (such as the morphologic and climatic characteristics of the catchment) are no longer relevant, as they are included within the parameters of the dimensionless procedure (i.e. hr(Tr) and tr), thus allowing the implications for the hydrograph peak to be determined irrespective of the absolute value of the rainfall depth (i.e. the corresponding return period). A set of analytical expressions has been derived to estimate the maximum peak for a given n-structure exponent. The results reveal the impact of the rainfall event structure on the runoff peak, pointing out the following features:
- The curve of the maximum values of the runoff peak reveals a local minimum point (saddle point).
- Different combinations of the n-structure exponent and rainfall duration may determine similar conditions in terms of runoff peak.
- Analogous behaviour of the maximum dimensionless runoff peak curve is observed for different runoff coefficients, although a wider range of variation is observed with increasing soil abstraction values.
Referring to the Bisagno-La Presa catchment application (hr = 80 mm; tr = 0.85 h and S* = 0.5), the saddle point of the runoff peaks is located in the neighbourhood of an n value equal to 0.3 and a rainfall duration corresponding to the reference time (d* = 1). Furthermore, it emerges that the maximum runoff peak value, corresponding to the scaling exponent of the DDF curve, is comparable to the less critical value (saddle point). The findings of the present research suggest that the derived flood distribution approaches that couple the information on precipitation via DDF curves with the catchment response under the iso-frequency hypothesis need to be revisited. Future research on the structure of extreme rainfall events is needed; in particular, the analysis of several rainfall data series belonging to a homogeneous climatic region is required in order to investigate the frequency distribution of specific rainfall structures.
The developed approach, besides pointing to relevant issues for further research and going beyond a merely analytical exercise, succeeds in highlighting once more the complexity involved in assessing the maximum runoff peak.
Figure 1. Rainfall event structure: the observed rainfall depth (a), the observed maximum rainfall depths (b) and the corresponding rainfall structure exponent (c) are reported.
Figure 2. Dimensionless rainfall duration vs. dimensionless time to peak; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrographs for d* = 1.0. Note that the shape parameter α is equal to 3.
Figure 4. 3-D mesh (a) and contour plot (b) of the dimensionless hydrograph peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a constant runoff coefficient. The maximum dimensionless hydrograph peak curve is also reported (bold line).
Figure 5. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of a constant runoff coefficient; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrographs for d* = 1.0. Note that the shape parameter α is equal to 3.
Figure 7. 3-D mesh plot (a) and contour plot (b) of the dimensionless hydrograph peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a variable runoff coefficient. The maximum dimensionless hydrograph peak curve is also reported (bold line).
Figure 8. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of a variable runoff coefficient; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrographs for d* = 1.0. Note that the shape parameter α is equal to 3.
Figure 9. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of variable runoff coefficients with respect to dimensionless maximum retention S* of 0.25 and 0.67. The comparison to the case of a constant runoff coefficient is also reported.
Figure 10. Rainfall event structure of three events observed in Genoa (Italy): the observed rainfall depths (a) and the estimated rainfall structure exponents (b) are reported. At the bottom, the rainfall structure and depth-duration-frequency curves, evaluated for the reference time of the Bisagno-La Presa catchment, are reported.
Figure 11. The excess rainfall hyetographs, the corresponding hydrographs and the reference value of the hydrograph peak flow evaluated for three rainfall structure exponents applied to the Bisagno-La Presa catchment. Note that each graph includes four rainfall durations (i.e. 0.5, 1.0, 1.5 and 2.0 times the reference time).
Figure 12. Contour plot of the dimensionless hydrograph peak evaluated for the Bisagno-La Presa catchment in the case of a variable runoff coefficient (S* = 0.5). The maximum dimensionless runoff peak curve is also reported (bold line) together with the dimensionless hydrograph peaks (grey-filled stars) for the selected rainfall structure exponents (n = 0.55, 0.62, 0.71) and durations (d* = 0.5, 1.0, 1.5 and 2.0).
| 8,464.4 | 2017-06-06T00:00:00.000 | ["Engineering"] |
FeCl3-Modified Carbonaceous Catalysts from Orange Peel for Solvent-Free Alpha-Pinene Oxidation
The work presents the synthesis of FeCl3-modified carbonaceous catalysts obtained from waste orange peel and their application in the oxidation of alpha-pinene under solvent-free reaction conditions. The use of waste orange peel as presented here, which has not been described in the literature, is an effective and cheap way of managing this valuable and renewable biomass. The FeCl3-modified carbonaceous materials were obtained by a two-stage method: in the first stage, activated carbon was obtained, and in the second stage, it was modified with FeCl3 in the presence of H3PO4 (three different molar ratios of these two compounds were used in the studies). The obtained FeCl3-modified carbon materials were characterized in detail by FT-IR (Fourier-transform Infrared Spectroscopy), XRD (X-ray Diffraction), SEM (Scanning Electron Microscopy), EDXRF (Energy Dispersive X-ray Fluorescence) and XPS (X-ray Photoelectron Spectroscopy), and their textural properties, such as the specific surface area and total pore volume, were also studied. Catalytic tests with the three modified activated carbons showed that the catalyst obtained with 6 M FeCl3 and 3 M aqueous solutions of H3PO4 was the most active in the oxidation of alpha-pinene. Further tests with this catalyst (influence of temperature, amount of catalyst, and reaction time) made it possible to determine the most favorable conditions for conducting the oxidation on this type of catalyst and allowed the kinetics of the process to be studied. The most favorable conditions were: a temperature of 100 °C, a catalyst content of 0.5 wt% and a reaction time of 120 min (very mild process conditions). The conversion of the organic raw material obtained under these conditions was 40 mol%, and the selectivity of the transformation to alpha-pinene oxide reached 35 mol%. In addition to the epoxy compound, other valuable products, such as verbenone and verbenol, were formed during the process.
Introduction
In recent years, there has been growing interest in research on the use of industrial and agricultural waste to produce activated carbons [1]. The availability and ease of obtaining this waste make it a good source of raw materials for the production of carbonaceous materials [2]. The main advantage of activated carbons obtained from biomass is their low production cost compared to commercial activated carbons. In addition, obtaining carbonaceous materials from biomass allows the utilization of raw materials that have not been used so far and is a new way to utilize biomass waste [3].
The Food and Agriculture Organization (FAO) estimates that the world's citrus fruit production is close to 88 million tons per year, of which 80% is oranges [4]. The most important orange-producing countries are Brazil, the USA and China. Brazil is the major citrus-processing country (it processes 47% of the world's citrus fruits). These fruits are mainly processed into juices; the other main food products produced on a global scale are jams and marmalades [5]. The citrus fruit processing industry generates huge amounts of waste, with citrus peel accounting for as much as 60 to 65% [6]. Orange peel contains, among others, sugars, starch, cellulose, hemicellulose, lignin and pectin, which, if inadequately stored, may pollute the environment [7]. It is worth noting that the pH of orange peel is close to 4. Orange peel, which is generated as waste from the orange juice production process, must be properly disposed of, as it poses a threat to local water courses and leads to uncontrolled methane emissions through decomposition [8]. A solution to this problem may be the production of useful materials from orange peel that can be used in many areas of industry.
The synthesis of activated carbons from biomass on a laboratory scale, especially from orange peel, is very interesting for scientists working on obtaining new materials [9]. Activated carbons made from this raw material are characterized by a relatively well-developed specific surface area [10,11] and a large total pore volume [12], as well as the presence of micropores [13]. Moreover, the materials obtained contain very small proportions of other elements in their structure [14]. Activated carbons from waste orange peel have been successfully used in many sectors of electrical engineering [12,13,15,16], in chemical reactions [12,13,16] and as adsorbents in the adsorption of compounds from aqueous solutions [17]. Activated carbons can also be carriers of nanostructures in catalysts, which in turn can be used in chemical processes such as the adsorption of SO2 [18], the electrochemical detection of toxic metal ions [19] or the removal and recovery of cadmium from aqueous solutions [20].
The oxidation of alpha-pinene in the presence of catalysts is also of interest to many researchers. Alpha-pinene is obtained from the liquid resin (turpentine) of coniferous trees. The main source of alpha-pinene is the wood-processing industry, which produces waste with a high content of alpha-pinene [21]. Significant amounts of alpha-pinene are also present in the essential oils of plants [22-25]. The healing properties of alpha-pinene are well understood and described in the literature, and alpha-pinene is used as a therapeutic substance in many diseases [26-28].
Alpha-pinene is a cheap raw material for the synthesis of many valuable compounds used as fragrances, food additives [29], pharmaceuticals [30] and solvents [7]. The oxidized derivatives formed in the oxidation reaction, such as verbenol, verbenone and alpha-pinene oxide, are of the greatest practical importance. These compounds are used primarily as flavor compounds and as raw materials for the production of fine chemicals (menthol, sandalol and taxol) [31-33]. Currently, the oxidation of alpha-pinene is carried out in the presence of various catalysts, including catalysts containing metals in their structure. The use of metal-containing catalysts and the modification of the reaction conditions are aimed at obtaining the highest possible conversion and the highest selectivity of transformation to alpha-pinene oxide. Table 1 shows selected catalysts for alpha-pinene oxidation described in the literature and the parameters that allow them to be compared in terms of activity in this reaction (selectivity to alpha-pinene oxide, alpha-pinene conversion), as well as the type of oxidant and solvent used. Methods for the synthesis of FePO4 nanostructures have been described in the literature; these include the hydrothermal method [41], the sol-gel method [42], the surfactant-template method [43] and the biological template method [44]. Methods for the synthesis of FePO4 nanostructures using FeCl3 and H3PO4 have also been described. Wu et al. [45] obtained multi-wall carbon nanotubes supported by hydrated iron phosphate (FePO4). Wang [46] synthesized FePO4 microstructures (various crystalline forms). However, these methods require many reagents or are difficult to perform. The use of FePO4 nanoplates deposited on activated carbon as a catalyst for the oxidation of alpha-pinene has not been described in the literature so far. The presented method of synthesizing the nanoparticles with their simultaneous deposition on activated carbon, compared to the methods presented by other researchers, is simple to perform and requires a small amount of chemical reagents.
The aim of this work was to obtain active carbon catalysts from orange peels, which are a bio-waste from the fruit juice industry. In order to increase the activity of the activated carbons obtained in the process of the carbonization of orange peels, their surface was modified by treatment with FeCl3 in the presence of H3PO4. The aim of the second stage of the research was to characterize the obtained modified carbon materials using selected instrumental methods: DFT (Density Functional Theory), FT-IR (Fourier-transform Infrared Spectroscopy), XRD (X-ray Diffraction), SEM (Scanning Electron Microscopy), EDXRF (Energy Dispersive X-ray Fluorescence) and XPS (X-ray Photoelectron Spectroscopy), as well as establishing the textural properties of these materials, such as the specific surface area and total pore volume. The aim of the third stage was to conduct catalytic tests of the obtained modified carbon materials. The studies on catalytic activity focused on the process of alpha-pinene oxidation with oxygen. To our knowledge, catalysts based on activated carbons obtained from waste biomass from food processing and modified with FeCl3 in the presence of H3PO4 have not been used to carry out this process. First, in the catalytic tests, it was necessary to select the catalyst sample with the highest activity, and then to conduct full catalytic tests with it (studies on the influence of temperature, amount of catalyst and reaction time), including studies on the kinetics of the oxidation process. The aim of this step was to determine the most favorable conditions for alpha-pinene oxidation using this type of catalyst. In order to determine the most favorable conditions, the values of the conversion of alpha-pinene and the selectivity of transformation to alpha-pinene oxide were mainly taken into account.
Preparation of Activated Carbon (AC)
The raw material used for the activated carbon (AC) production was orange peel (Valencia, Spain). A saturated solution of potassium hydroxide (Sigma-Aldrich, Burlington, MA, USA) was used for the chemical activation of this bio-waste. In our preparation method, fresh orange peel was dried in air, and then in an oven (Alpina, Konin, Poland) at 50 °C for 24 h. After drying, the orange peel was ground. Next, 90 g of ground orange peel was mixed with 117 mL of KOH solution and subjected to intensive mixing. The mass ratio of dry biomass to KOH was 1:1. The obtained mixture was left for 3 h at room temperature.
The impregnated carbon substrate was dried at 200 °C for 19 h. After 19 h, the carbon substrate was subjected to carbonization under flowing nitrogen (18 L/h). The carbon substrate was carbonized at 800 °C and kept at this temperature for 1 h. After the carbonization process was completed, the sample was cooled down to room temperature under an inert gas atmosphere. The activator (potassium hydroxide) was removed by washing the sample with deionized water and a 1 M aqueous solution of HCl (Sigma-Aldrich, Burlington, MA, USA) for 19 h, and then again with deionized water until a neutral pH was reached. The washed AC was dried for 19 h at 200 °C. The obtained material was ground to a powder. After the drying process was completed, the carbon sample was weighed and further analyzed. This material was identified as O_AC. Next, the materials were rinsed several times with deionized water until the pH of the filtrate reached 7. In the next stage, the catalysts were dried in an oven at 100 °C for 24 h. These samples were identified as O_Fe3_H3PO4, O_Fe6_H3PO4 and O_Fe9_H3PO4.
Characterizing the Catalysts Obtained from Biomass
A Sorption Surface Area and Pore Size Analyzer (ASAP 2460, Micrometrics, Novcross, GA, USA, 2018) was used to characterize the textural properties of the obtained materials. Before the measurement of the adsorption isotherms at the temperature of liquid nitrogen (−196 °C), all samples were degassed at 250 °C for 19 h. The specific surface area was determined with the Brunauer-Emmett-Teller (SBET) equation applied to the obtained N2 adsorption-desorption isotherms. The total pore volume (Vtot) was determined from the volume of nitrogen adsorbed at a relative pressure of ~0.98. The DFT method based on nitrogen adsorption was used to calculate the volume of micropores. The pore size distribution was determined using the DFT model (ASAP 2460 software version 3.01, 2018, Micrometrics, Novcross, GA, USA) based on the N2 sorption isotherm. The DFT model applied was N2 at 77 K on carbon (slit-pore N2-DFT adsorption model).
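For reference, the linearized BET relation used to extract SBET from N2 isotherms has the standard textbook form given below; it is quoted here as general background rather than transcribed from the paper itself.

% Standard linearized BET equation (general form, not taken from the source text):
% v = adsorbed gas quantity, v_m = monolayer capacity, c = BET constant,
% p/p_0 = relative pressure; S_BET follows from v_m and the N2 cross-sectional area.
\frac{1}{v\left[(p_0/p) - 1\right]}
  \;=\; \frac{c - 1}{v_m\, c}\,\frac{p}{p_0} \;+\; \frac{1}{v_m\, c}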
Photographs were also taken with a scanning electron microscope (Neon40 Crossbeam, Carl Zeiss SMT GmbH, Oberchoken, Germany, 2009) in order to visualize the surface structures of the obtained materials.
Infrared spectra were acquired at room temperature with a Nicolet 380 ATR (Attenuated Total Reflectance) FT-IR spectrometer (Thermo Fisher Scientific Inc., Waltham, MA, USA, 2003). Sixteen scans were averaged for each sample in the range 4000-400 cm⁻¹.
X-ray diffraction (XRD) patterns of the catalysts were recorded by an X-ray diffractometer (X'Pert-PRO, Panalytical, Almelo, The Netherlands, 2012) using Cu Kα (λ = 0.154 nm) as the radiation source in the 2θ range 10-80°, with a step size of 0.026°.
The composition of each catalyst was calculated using an energy-dispersive X-ray fluorescence (EDXRF) spectrometer (Panalytical, Almelo, The Netherlands, 2011).
The X-ray photoelectron spectroscopy measurements were performed in a commercial multipurpose (XPS, LEED (Low Energy Electron Diffraction), UPS (Ultraviolet Photoelectron Spectroscopy), AES (Auger Electron Spectroscopy)) UHV (Ultra High Vacuum) surface analysis system (PREVAC), which operates at a base pressure in the low 10⁻¹⁰ mbar range. The analysis chamber of the UHV system was equipped with a non-monochromatic X-ray photoelectron spectroscopy source (XPS, PREVAC, Rogów, Poland, 2007) and a kinetic electron energy analyzer (SES-2002, Scienta Scientific AB, Uppsala, Sweden, 2002). The calibration of the spectrometer was performed using the Ag 3d5/2 transition. Samples in the form of fine powder were thoroughly degassed prior to measurement, so that during the XPS measurements the vacuum remained in the low 10⁻⁹ mbar range. The X-ray photoelectron spectroscopy was performed using Mg Kα (hν = 1253.7 eV) radiation. Charging effects were observed, and the binding energy scale was corrected using the C 1s peak at 284.6 eV.
Alpha-Pinene Oxidation Method
The reaction of alpha-pinene oxidation was carried out in a three-necked flask placed in an oil bath, equipped with a bubbler and a reflux condenser (CHEMLAND, Stargard, Poland). Oxygen with a purity of 99.99% was fed from a cylinder through a mass flow meter at a flow rate of 40 mL/min. For the oxidation studies, 8 g of alpha-pinene (98%, Sigma-Aldrich, Burlington, MA, USA) and the appropriate amount of catalyst were used. The activity of the catalysts was tested under the following conditions: reaction temperature 100 °C, catalyst amount 2.5 wt%, reaction time 3 h and mixing speed 400 rpm. The most active catalyst was used to determine the most favorable reaction conditions. For this purpose, the influence of the following parameters was studied: temperature in the range 80-120 °C, catalyst content in the range 0.1-2.5 wt% and reaction time from 20 to 280 min.
Quantitative analyses of the post-reaction mixtures were performed by gas chromatography using a Thermo Electron FOCUS chromatograph (FOCUS GC, Waltham, MA, USA, 2010) equipped with an FID (flame ionization detector) and a ZB-1701 column (30 m × 0.53 mm × 1 µm, 14% cyanopropylphenyl, 86% dimethylpolysiloxane). The operating parameters of the chromatograph were as follows: helium flow 1.2 mL/min, injector temperature 220 °C, detector temperature 250 °C, oven temperature held isothermally for 2 min at 50 °C, then increased at a rate of 6 °C/min to 120 °C, and then at 15 °C/min to 240 °C. The normalization method was used for the quantitative analyses of the post-reaction mixtures. Qualitative analyses were performed by GC-MS (gas chromatography-mass spectrometry) using a ThermoQuest apparatus (Waltham, MA, USA, 2000) equipped with a Voyager detector and a DB-5 column. The results of the analysis were compared with spectral libraries, and the identified products were then confirmed with the use of commercial standards.
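As an illustration of how conversion and selectivity figures of the kind reported below can be derived from normalized GC areas, the sketch underneath uses the usual definitions (conversion = reacted fraction of alpha-pinene; selectivity = share of each product among all products). The function and variable names are illustrative, the numbers are placeholders rather than measured data, and the assumption that normalized areas approximate mole fractions is ours, not a statement of the authors' calibration.

def conversion_and_selectivity(area_pinene, product_areas):
    """Normalized-area estimate of alpha-pinene conversion and product selectivities.

    area_pinene   : GC peak area of unreacted alpha-pinene
    product_areas : dict {product name: GC peak area}
    Returns (conversion [mol%], {product: selectivity [mol%]}).
    """
    total = area_pinene + sum(product_areas.values())   # normalization basis
    reacted = total - area_pinene
    conversion = 100.0 * reacted / total
    selectivity = {p: 100.0 * a / reacted for p, a in product_areas.items()}
    return conversion, selectivity

# Placeholder areas, for illustration only (not data from the paper):
conv, sel = conversion_and_selectivity(48.0, {"alpha-pinene oxide": 17.0,
                                              "verbenone": 13.0,
                                              "verbenol": 8.0,
                                              "others": 14.0})
print(f"conversion ~ {conv:.0f} mol%", {k: round(v) for k, v in sel.items()})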
Characterization of the Obtained Catalysts
The porous structure of the obtained catalysts was confirmed by N2 adsorption-desorption measurements. Figure 1a presents the nitrogen adsorption-desorption isotherms and Figure 1b shows the pore volume distribution by size in the range of micropores and narrow mesopores for the obtained modified carbonaceous catalysts. The isotherms of the tested materials demonstrated a high adsorption of N2 at low relative pressure, which is characteristic of microporous materials. According to the IUPAC classification, the nitrogen sorption isotherms correspond to type I(b). Type I(b) isotherms are characteristic of materials with a pore distribution in the range of micropores and possibly narrow mesopores (< ~2.5 nm) [47].
The curves presented in Figure 1b were determined by analysis of the N2 adsorption isotherms at −196 °C, using the non-linear DFT (density functional theory) method. Based on these data, it was noticed that the analyzed catalysts, apart from smaller pores with diameters of approximately 1-2 nm, also contained a small share of narrow mesopores with sizes of ~2.5 nm. Table 2 shows the textural properties and the metal content measured using XRF spectroscopy for the modified carbonaceous catalysts obtained from biomass (waste orange peel). The modified carbonaceous catalysts obtained from orange peel have surface area values in the range of 221-1300 m²/g and total pore volumes of 0.132-0.608 cm³/g. Sample O_Fe3_H3PO4 was characterized by the highest iron content in its structure (25.01 wt%), while the lowest iron content was recorded for the O_Fe9_H3PO4 sample (6.12 wt%).
The modifications in the use of iron precursors influenced the textural properties of the modified materials. A decrease in the specific surface area and total pore volume with a simultaneous increase in the Fe content (wt%) of the material was also noted by Braun [48], Jiang [49] and Yuan [50].
The FT-IR spectra of the obtained modified carbonaceous catalysts are shown in Figure 2. The characteristic band at 1628 cm⁻¹ and the wide double band between 2900 and 3750 cm⁻¹ are attributed to adsorbed water. The internal vibrations of FePO4, originating from the intramolecular vibrations of the PO4 tetrahedron, are universally known to be located in the range of 400-1220 cm⁻¹ [51,52]. Bands at 700, 980 and 1250 cm⁻¹ are associated with C-P stretching vibrations. The absorption band between 2800 and 2900 cm⁻¹ can be attributed to the aliphatic character of the C-H groups [53]. The bands below 600 cm⁻¹ are related to the different Fe-O and P-O bending and stretching modes [54]. The bands around 1000 and 700 cm⁻¹ can be assigned to the stretching mode of Fe-O [55].
To conclude the FT-IR characterization, the most important changes introduced by the increase in acid concentration were the development of C-H vibrations (probably because of the loss of oxygen at the surface of the carbon material) and the increase in the content of phosphorus groups (~1100 cm⁻¹). Benaddi et al. [56] suggested that the dehydration of the biomass material by H3PO4 is similar to the dehydration of alcohols, and that at higher temperatures the phosphorus oxides act as Lewis acids and can form C-O-P bonds. Figure 3 shows the diffractograms of the obtained modified carbonaceous catalysts. The XRD patterns of O_Fe6_H3PO4 and O_Fe3_H3PO4 showed characteristic peaks of iron phosphate hydrate. Although both the O_Fe3_H3PO4 and O_Fe6_H3PO4 samples contained the FePO4·2H2O compound, the crystallographic systems were different, namely orthorhombic (PDF4+ 04-014-3291) and monoclinic (PDF4+ 04-012-6194), respectively. The diffraction pattern of the O_Fe9_H3PO4 sample shows two peaks at positions of 23° and 44° 2θ. They can be assigned to reflections from the corresponding graphite planes, (002) and (101) [57]. The O_Fe9_H3PO4 material has an amorphous structure, while the O_Fe6_H3PO4 and O_Fe3_H3PO4 materials were crystalline. Similar results were presented by Wang [58] and Masquelier [59]. The well-developed crystals are confirmed by the SEM micrographs.
The concentrations of the different oxygen groups on the sample surfaces were determined using XPS. The results are presented in Table 3. These results were obtained by careful deconvolution of the C 1s signals presented in Figure 4a. The detailed deconvolution is presented elsewhere [60]. The oxygen atom, as an element more electronegative than carbon, causes a shift of valence electrons from carbon to oxygen. As a result, the electrons occupying the 1s orbital exhibit an increased binding energy. This effect is strongest for the COOH group, where two oxygen atoms participate in this phenomenon. Consequently, in Figure 4a, one can notice a dominant signal from elemental carbon and a shoulder located at higher binding energies corresponding to the different carbon-oxygen groups.
The quantitative analysis of the carbon-oxygen groups reveals that samples O_Fe3_H3PO4 and O_Fe6_H3PO4 were comparable in this respect. The sample O_Fe9_H3PO4 exhibited a lower content of C-O, C=O and COOH groups and a higher content of keto-enolic groups.
The intensity of the C 1s signals indicates a screening effect of the carbon surface by iron phosphate species for the O_Fe3_H3PO4 and O_Fe6_H3PO4 samples.
The X-ray photoelectron spectroscopy survey analysis (Figure 4b) enables determination of the elemental composition of the surface. The elemental content of the surface, expressed as atomic concentration, is presented in Table 4. The sample O_Fe9_H3PO4 contains no phosphorus and the lowest amount of iron. This explains the screening effect observed in the case of the C 1s signal (cf. Figure 4a). The significant amount of oxygen in the O_Fe6_H3PO4 and O_Fe9_H3PO4 samples is predominantly due to the presence of iron phosphate species. These species are located on the carbon surface, resulting in a screening effect. Consequently, the C 1s signal is lowest for the sample containing the highest amount of phosphorus. One should be aware that the XPS signals originate from a depth of about 1 nm and that their contribution to the signal decreases exponentially with depth. The Auger signals such as Fe LMM are much more surface-sensitive; i.e., the signal originates from about 0.1 nm. Strong iron Auger signals can be observed in the case of the O_Fe3_H3PO4 and O_Fe6_H3PO4 samples. This confirms that the iron phosphates are located on the carbon surface. Figure 5 shows SEM images of the obtained modified carbonaceous catalysts. The morphology of the catalysts' surface elements was characterized by means of these micrographs. The SEM images of O_Fe9_H3PO4 show that the surface of this carbonaceous catalyst has cracks, crevices and holes of different diameters. From the micrographs, it can be concluded that this material has a porous structure and that the O_Fe9_H3PO4 catalyst has a typically carbonaceous structure. The SEM images of O_Fe6_H3PO4 show nanoplates present on the surface of the carbonaceous catalyst. These structures are characterized by a rectangular shape. The nanoplates can be identified as crystalline FePO4·2H2O. The micrograph of the O_Fe3_H3PO4 catalyst also shows FePO4·2H2O structures. The structures on the surface of this catalyst showed an irregular shape resembling a square. Similar nanostructures were synthesized by Masquelier [59], Pramanik [61] and Wang [58].
The careful reader might notice that a higher concentration of Fe(III) during the preparation process leads to a lower iron content detected by the XRF and XPS methods. The most likely explanation for this phenomenon is that the Fe(III) concentration strongly affects the crystallization process. The XRD data (Figure 3) indicate orthorhombic and monoclinic crystallographic systems for the FePO4·2H2O compound in the O_Fe3_H3PO4 and O_Fe6_H3PO4 samples, respectively. However, the sample O_Fe9_H3PO4, obtained at the highest concentration of Fe(III), gives no XRD signals originating from the FePO4·2H2O compound. The XRF data (Table 2) indicate a significant amount of iron, which should be easily detected by the XRD method in the case of a coarse-grained material. Therefore, one can conclude that in the case of the O_Fe9_H3PO4 sample the material was amorphous and characterized by very fine crystallites or particles. The amorphous material obtained during the preparation process can be easily washed out. This explains the decrease in iron content in the sample with increasing Fe(III) concentration during the preparation of our samples. The SEM data (Figure 5) support this observation.
Activity of the Obtained Modified Carbonaceous Catalysts
In the first stage of the research, the activity of the obtained catalysts in the oxidation of alpha-pinene was checked. The activity of the catalysts was tested under the following conditions: reaction temperature 100 °C, catalyst amount 2.5 wt%, reaction time 3 h and mixing speed 400 rpm. Figure 6 shows the main products of alpha-pinene oxidation; the compounds obtained with the highest selectivity are marked in the green box. Figure 7 shows a comparison of the selectivities for the main products and the conversion of alpha-pinene for the tested catalysts. The O_Fe6_H3PO4 catalyst was characterized by the highest catalytic activity at this stage of the study, because the highest values of alpha-pinene conversion (52 mol%) and of selectivity to alpha-pinene oxide, one of the main reaction products (24 mol%), were achieved with this catalyst. Other products obtained with high selectivity in the reaction carried out with the O_Fe6_H3PO4 catalyst were verbenone (25 mol%) and verbenol (16 mol%).
With the other catalysts, the selectivity values for alpha-pinene oxide were similar to that obtained for the O_Fe6_H3PO4 catalyst and amounted to 23 mol% for the O_Fe3_H3PO4 catalyst and 22 mol% for the O_Fe9_H3PO4 catalyst.
For the O_Fe3_H3PO4 and O_Fe9_H3PO4 catalysts, lower conversion values (41 mol% and 33 mol%, respectively) were obtained compared to the conversion achieved with the O_Fe6_H3PO4 catalyst. These differences are probably related to the presence of different functional groups on the catalyst surface and to the different iron content in the catalyst support. However, judging from the results in Tables 2 and 3, the difference in the content of functional groups on the catalyst surface seems to be more important here. Comparing the content of C-O, C=O and COOH groups in the three modified carbon catalysts, the O_Fe3_H3PO4 and O_Fe6_H3PO4 catalysts have the highest content of these groups, with the O_Fe6_H3PO4 catalyst showing a slightly higher content than O_Fe3_H3PO4. The C-O, C=O and COOH groups can form peroxy groups during the oxidation process, which is why the corresponding oxidation reactions are observed in the tested process. Therefore, the O_Fe9_H3PO4 catalyst should be the least active in the studied process of alpha-pinene oxidation, as is confirmed by the results presented in Figure 7.
The second stage of the catalytic research focused on determining the most favorable conditions for the alpha-pinene oxidation process using the most active catalyst. Taking into account the results obtained in the preliminary studies on the catalytic activity of all three modified carbonaceous catalysts, the O_Fe6_H3PO4 catalyst was selected for these tests. The following parameters were tested: catalyst content in the range 0.1-1 wt%, temperature in the range 80-120 °C and reaction time from 20 to 280 min. The first parameter tested was the catalyst content in relation to the amount of alpha-pinene. The reaction was carried out at a temperature of 100 °C, and samples were taken after 3 h. The obtained results are shown in Figure 8. Figure 8 shows that increasing the catalyst content in the range of 0.1 wt% to 1 wt% increases the alpha-pinene conversion, which reaches a maximum value (54 mol%) for a catalyst content of 0.5 wt%. The highest selectivity of transformation to alpha-pinene oxide (33 mol%) was also noted for this catalyst content. For a catalyst content of 1 wt%, the values of alpha-pinene conversion (44 mol%) and selectivity to alpha-pinene oxide (22 mol%) decreased significantly. At this stage of the research, a catalyst amount of 0.5 wt% was considered the most advantageous. Figure 9 shows the influence of temperature on the oxidation of alpha-pinene in the presence of the O_Fe6_H3PO4 catalyst. The oxidation of alpha-pinene was carried out for 3 h in the presence of 0.5 wt% catalyst. With increasing temperature, the conversion of alpha-pinene increases, reaching a maximum value of 54 mol% at 110 °C, while at 120 °C it decreases to 47 mol%. A similar value of alpha-pinene conversion (52 mol%) was also recorded at 100 °C. The selectivity of transformation to alpha-pinene oxide is highest at 90 °C (35 mol%), while for oxidation carried out at temperatures above 90 °C this selectivity decreases. This may be due to the transformation of alpha-pinene oxide to derivative compounds and compounds with higher molecular weights (dimers and polymers). Considering mainly the alpha-pinene conversion values, at this stage of the research the temperature of 100 °C was taken as the most favorable. Figure 10 shows the influence of the reaction time on the oxidation of alpha-pinene. For the studies on the influence of reaction time, 8 g of alpha-pinene and 0.04 g of catalyst (0.5 wt%) were used, and the oxidation temperature was 100 °C. Samples of the reaction mixtures for GC studies were taken in the range 20-280 min at 20 min intervals. The maximum value of selectivity to alpha-pinene oxide (35 mol%) was reached after 120 min of the oxidation process. After this time, the selectivity of the transformation to the epoxide decreases: after 140 min it amounts to 31 mol%, and after 280 min to only 1 mol%. The selectivity to verbenone increases with the prolongation of the reaction time and reaches its maximum value (31 mol%) for a reaction time of 280 min. The conversion of alpha-pinene also increases with the prolongation of the reaction time, reaching its maximum value (66 mol%) for a reaction time of 280 min. The increase in the alpha-pinene conversion values with a simultaneous decrease in the selectivity to alpha-pinene oxide indicates further reactions in which alpha-pinene oxide undergoes isomerization, dimerization and polymerization.
Determination of the Kinetic Parameters
The comprehensive kinetic modeling of the alpha-pinene oxidation over the FeCl3-modified carbonaceous catalysts obtained from orange peel was performed based on a series of experiments in which the effect of temperature was checked, considering a constant oxygen uptake (expressed in mol/L). For each experiment (80, 90 and 100 °C; P_O2 = 1 bar), the reaction mixture composition was determined at varied points of the oxygen uptake/alpha-pinene ratio (defined in mol O2/mol alpha-pinene). The alpha-pinene oxidation rates were calculated by differentiation of the kinetic curves. The turbulence created around the catalyst particles by vigorous stirring of the reaction mixture helps to eliminate the external diffusion resistance between the bulk liquid and the surface of the catalyst. Internal diffusion resistance was also negligible because of the small size of the catalyst particles (0.07 mm ≤ dp ≤ 0.1 mm) used in the runs. It was observed that the product content depends only slightly on the oxygen uptake/alpha-pinene ratio. Moreover, an increase in the reaction temperature results in a growth of the alpha-pinene oxidation rate. The activation energy estimated from the Arrhenius dependence was 92.7 ± 3.4 kJ/mol and the effective kinetic constant was k_eff = 1.0 × 10¹⁰ mol^0.5·L^−0.5·min^−1. The model fits the experimental data quite well (the regression coefficient equals 0.9819); thus, the calculated activation energy can be regarded as the true activation energy. Therefore, under the reaction conditions, the rate of alpha-pinene oxidation by molecular oxygen can be expressed through an effective rate law (see the sketch after this paragraph). The calculated activation energy matches typical values of the activation energy reported for alpha-pinene oxidation by molecular oxygen (81.3 kJ/mol) [62], as well as for cis-pinene oxidation (79.5 kJ/mol) [63] or dibenzyl ester oxidation initiated by azoisobutyronitrile (93.66 kJ/mol) [64].
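The explicit rate expression is not reproduced above; the form sketched here is an assumption inferred from the units of k_eff (mol^0.5·L^−0.5·min^−1), which point to an overall order of 0.5 in the alpha-pinene concentration, and should not be read as the authors' exact equation. The concentration used in the example is likewise an illustrative value.

import math

R = 8.314e-3     # gas constant [kJ/(mol*K)]
EA = 92.7        # activation energy from the Arrhenius fit [kJ/mol]
K_EFF = 1.0e10   # effective pre-exponential constant [mol^0.5 L^-0.5 min^-1]

def oxidation_rate(c_pinene, temp_c):
    """Assumed half-order rate law r = k_eff * exp(-Ea/RT) * C**0.5 [mol/(L*min)]."""
    temp_k = temp_c + 273.15
    return K_EFF * math.exp(-EA / (R * temp_k)) * c_pinene ** 0.5

# Illustrative concentration (roughly neat alpha-pinene) at the three studied temperatures:
for t in (80, 90, 100):
    print(f"{t} C -> r ~ {oxidation_rate(6.3, t):.3e} mol/(L*min)")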
Conclusions
Summarizing the research presented in this work, we were able to obtain active catalysts for alpha-pinene oxidation with oxygen. Among the tested catalysts, the O_Fe6_H3PO4 catalyst was the most active, because the highest values of alpha-pinene conversion (52 mol%) and of selectivity of transformation to alpha-pinene oxide, one of the main reaction products (24 mol%), were achieved with this catalyst. Other products obtained with high selectivity in the reaction carried out with the O_Fe6_H3PO4 catalyst were verbenone (25 mol%) and verbenol (16 mol%). The studies on the influence of the amount of this catalyst on the course of the oxidation process showed that the amount of catalyst is an important parameter for this process. Increasing the catalyst content in the range of 0.1 wt% to 1 wt% increased the alpha-pinene conversion (to a maximum value of 54 mol% for a catalyst content of 0.5 wt%). The highest selectivity of transformation to alpha-pinene oxide (33 mol%) was also noted for this catalyst content. Temperature, as a process parameter, also had a considerable influence on the process. With increasing temperature, the conversion of alpha-pinene increased, reaching a maximum value of 54 mol% at 110 °C. The selectivity of transformation to alpha-pinene oxide was highest at 90 °C (35 mol%), while for oxidation carried out at temperatures above 90 °C the selectivity decreased and the main product was verbenone. The last parameter studied was the reaction time, which was very important for the course of the oxidation of alpha-pinene. The maximum value of the selectivity of transformation to alpha-pinene oxide (35 mol%) was reached at 120 min. After 160 min, the main product was verbenone. The selectivity of transformation to verbenone increased with the prolongation of the reaction time and reached its maximum value (31 mol%) at a reaction time of 280 min. The conversion of alpha-pinene also increased with the prolongation of the reaction time, reaching its maximum value (66 mol%) at 280 min. The increase in alpha-pinene conversion with the simultaneous decrease in the selectivity to alpha-pinene oxide indicated further reactions of alpha-pinene oxide (isomerization, dimerization and polymerization).
The studies presented in this paper show that waste biomass in the form of fresh orange peel can be an excellent raw material for obtaining catalysts active in the oxidation of alpha-pinene. Catalysts obtained from orange peel and modified with FeCl3 may in the future become an alternative to the synthetic catalysts used in this process. This direction of technology development for the olefin oxidation process will ensure effective management of waste biomass and will allow the use of relatively cheap catalysts based on raw materials of natural origin in these processes. The manner of carrying out this process of alpha-pinene oxidation also needs to be emphasized, as it does not use any solvents, which reduces the environmental burden and lowers the costs associated with the recovery and recycling of the solvent. Alpha-pinene, which is oxidized in this process, is likewise obtained from raw materials of natural origin, which makes it a renewable resource. The oxidation process uses oxygen supplied from a cylinder as the oxidizing agent and, under the most favorable conditions, the reaction is carried out at a temperature of 100 °C, at atmospheric pressure, with a catalyst content of 0.5 wt% and for 120 min. These are mild process conditions. The conversion of the organic raw material obtained under these conditions is 40 mol%, and the selectivity of the transformation to alpha-pinene oxide reaches 35 mol%. In addition to the epoxy compound, other valuable products, such as verbenone and verbenol, can also be obtained in this process. Therefore, research on this process should be continued. This research should move towards modifications of the carbon catalysts that make it possible to obtain a higher selectivity of the main product or of one of the most valuable by-products.
| 8,243.2 | 2021-12-01T00:00:00.000 | ["Chemistry", "Materials Science"] |
FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms.
Introduction
In recent decades, there has been a rapid advance towards the use of digital equipment in radiology. The introduction of digital detectors, together with more flexible movement of the X-ray source and detector, makes it possible to obtain 3D information from conventional X-ray systems. This new approach differs substantially from CT systems in that it involves the acquisition of a limited number of projections using non-standard scanning geometries, which demands new acquisition protocols for existing systems or the design of new systems with a wider range of movements. Research on new configurations for X-ray systems, new acquisition protocols, and advanced reconstruction algorithms to obtain tomographic images from a limited number of projections can benefit from simulation tools, which enable evaluation of possibilities before their actual implementation in real systems. The development of simulation/reconstruction algorithms in this context poses three main challenges. First, reconstruction algorithms for non-standard acquisition protocols are generally based on computationally expensive iterative approaches with large datasets that require both hardware and software optimizations. Second, possible optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of various computing architectures increases the effort required to maintain and adapt the implementations for current and future programming models.
The literature provides solutions that allow us to simulate the acquisition and/or reconstruction of tomographic studies. However, these solutions generally offer restricted possibilities for positioning the source and the detector, thus reducing their ability to simulate new acquisition protocols based on non-standard setups. For instance, CT Sim [1] is an open source CT simulator that enables the projection of various phantoms, although it is limited to 2D circular scans with ideal parallel-beam and fan-beam geometries. It provides analytical reconstruction methods (FBP and Direct Fourier), without supporting iterative reconstruction algorithms. A more flexible alternative is IRT, an open-source image reconstruction toolbox [2], which provides a number of iterative algorithms, together with tools to build new ones. The main drawback of this approach is that it focuses only on standard cone-beam CT systems and does not provide enough flexibility for the more sophisticated scanning geometries achievable with radiology systems. TomoPy [3] provides projection, reconstruction methods, and pre-processing and post-processing tools, such as filters and artifact removal algorithms. However, the geometries offered are again rather simple, with the possibility of only changing the center of rotation for projection and reconstruction.
Another drawback common to the abovementioned approaches is that they are all limited to CPU implementations. Given the high computational burden of some of the algorithms used in simulation and reconstruction, it is widely accepted that parallel implementations are needed to achieve reasonable execution times. Along these lines, more recent works have opted for graphics processing units (GPUs), with CUDA and OpenCL being the most widely used programming models [4]. X-ray Sim [5], which has a basic open-source version for the CPU, lacks flexibility in the available system geometries and is based on the projection of digital computer-aided design (CAD) models, thus hindering the direct use of real acquired images. A similar drawback is found in ImaSim [6], where objects are based on specific geometrical shapes and not voxels, thus precluding the handling of voxelized objects such as actual CT datasets. CONRAD [7] is a Java-based software framework that uses GPU devices for hardware acceleration. It provides tools for simulating 4D studies, analytical reconstructions, and artifact correction. Flexible scanning geometries are supported, although not in a straightforward manner, since they are based on a projection matrix that needs to be obtained beforehand. Finally, the ASTRA toolkit [8] offers a solution based on CUDA that can be used to develop advanced reconstruction algorithms and allows the user to experiment with customized geometries. However, it is limited to datasets that fit completely in the memory space of the GPU and to circular orbits, thus precluding simulation of new acquisition geometries such as those used in tomosynthesis.
With respect to programming models for acceleration and optimization, previous works [9][10][11][12][13][14][15] conclude that the use of GPUs in these types of algorithms is remarkably faster than the single-thread version in CPU or even OpenMP implementations. The decision on the use of CUDA, OpenCL, or other programming models for the implementation of the algorithms is normally based on performance and portability. Direct comparisons have shown slightly better performance (10% of speedup) for CUDA implementations [10,12].
In previous works, the authors implemented projection and backprojection algorithms that focused mainly on performance optimizations and support for large data volumes. However, these algorithms lack support for flexible geometries or are not compatible with multiple programming models and platforms.
In this work, we present FUX-Sim, a geometrical X-ray simulation/reconstruction framework, which was designed to overcome the drawbacks set out above by providing fast support for flexible scanning geometries. Optimized implementation of programming models for GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms in both the GPU and the CPU. We provide a general description of the layers, from the kernel layer and support layer at the bottom, with the basic algorithms, to the architecture layer at the top, with the various system configurations. We detail the optimizations carried out at each layer in terms of computation and memory management and evaluate different system setups by comparing three programming models.
General description of the FUX-Sim framework
FUX-Sim was designed with three main goals: (1) flexibility, enabling multiple geometries, with flexible positioning of source and detector; (2) easy compatibility with multiple current programming models and platforms; and (3) performance based on parallel programming models that take advantage of the underlying hardware, including multi-core CPUs and two families of GPUs (NVidia and AMD). To this end, the tool is organized as a framework with a layered software architecture that provides support for different hardware and programming models, as shown in Fig 1. The configuration layer implements various system configurations including circular scan, arbitrary position, wide field of view, tomosynthesis, and helical scan. The architecture layer enables the execution of the simulator on different hardware platforms. For this purpose, all the algorithms are implemented in three programming models, namely, OpenMP, CUDA, and OpenCL, all of which are identical in terms of functionality and results. The kernel layer represents the execution core of the simulator and provides the main building blocks for the upper layers. At the same level, the support layer contains the processing operations and platform management modules to handle memory for different GPUs and CPUs.
The architecture layer acts as a wrapper of optimized kernels and algorithms in lower layers. The execution of the simulator passes through the architecture layer to automatically reach the corresponding functionality in the kernel layer or support layer, depending on the availability of the GPU and the programming model chosen.
A detailed description of each layer can be found in the following sections.
Kernel layer
The kernel layer constitutes the simulator core and contains the projection and backprojection kernels, which are implemented based on cone-beam geometry (Fig 2). It is possible to set all the system geometrical parameters (projection angle, source-object distance, detector-object distance, matrix and pixel size of the detector, matrix and voxel size of the volume), as well as the deviations from the ideal position of the detector (shifts, skew, roll, and tilt in Fig 2). The adjustment of these parameters enables the study of the effects of misalignments and the simulation of non-regular geometries at arbitrary angular positions for other X-ray equipment such as a C-arm or tomosynthesis systems.
Linear shifts (x_shift, y_shift) and the skew angle (ϕ) are applied by simple geometrical operations (shift or rotation of the pixel coordinates).
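As an illustration of these detector-plane corrections, the following sketch (Python/NumPy, not part of FUX-Sim, and assuming a rotation-about-the-detector-center convention for the skew angle) applies an in-plane skew rotation followed by the linear shifts to pixel coordinates.

import numpy as np

def apply_shift_and_skew(x, y, x_shift, y_shift, phi):
    """Map ideal detector pixel coordinates (x, y) to shifted/skewed ones.

    x, y    : pixel coordinates relative to the detector center (mm)
    x_shift : linear detector shift along x (mm)
    y_shift : linear detector shift along y (mm)
    phi     : skew angle (radians), rotation in the detector plane
    """
    # In-plane rotation (skew) about the detector center
    x_rot = np.cos(phi) * x - np.sin(phi) * y
    y_rot = np.sin(phi) * x + np.cos(phi) * y
    # Linear shifts of the detector
    return x_rot + x_shift, y_rot + y_shift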
The effect of detector inclination (roll and tilt) is shown in Fig 3, where ε is the inclination angle of the detector, A' is a pixel in the real detector, and A is the corresponding pixel in the ideal detector. For each point in the ideal detector, the corresponding point in the real detector can be calculated from the inclination angle ε. Projection and backprojection kernels are the main building blocks for the upper layers. FUX-Sim implements ray-driven, voxel-driven, and distance-driven interpolation approaches. Ray-driven methods tend to introduce artifacts (Moiré patterns) in the backprojection, whereas voxel-driven projection introduces grid artifacts into the projections [16]. With more accurate geometric modeling, distance-driven methods often lead to better image quality than ray-driven projection and voxel-driven backprojection [17]. The distance-driven approach projects voxel and detector boundaries onto the same axis and calculates the overlap between them (Fig 4), both for projection and for backprojection. Ray-driven and voxel-driven approaches rely on the computation of the trajectory corresponding to the center point of the voxel/pixel (black dot in Fig 4 for the case of voxel-driven backprojection), whereas the distance-driven mode aims to obtain a more accurate representation of the contribution to the voxel/pixel by computing trajectories for its limits (u_1 and u_2 in Fig 4).
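The overlap calculation at the heart of the distance-driven approach can be pictured with the following one-dimensional sketch (illustrative Python, not the FUX-Sim implementation): voxel and pixel boundaries are projected onto a common axis and the length of their intersection determines the interpolation weight.

def overlap_weight(p1, p2, u1, u2):
    """Normalized 1-D overlap between a projected detector pixel [p1, p2]
    and a projected voxel [u1, u2] on the common axis."""
    lo, hi = max(p1, u1), min(p2, u2)
    overlap = max(0.0, hi - lo)
    return overlap / (p2 - p1)  # weight of this voxel for the pixel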
Given that the kernels are the most time-consuming components, this layer is where most of the optimizations were made, including the full parallelization of the ray trajectories. We implemented two alternatives for projection and backprojection based on ray-/voxel-driven and distance-driven methods. Since each interpolation method needs a specific parallelization approach, we decided to implement two versions of each kernel in order to optimize performance.
Projection kernel
The projection kernel emulates data acquisition in an X-ray system: the line integral is computed as the sum of Nstep values along the X-ray beam, which updates the contribution to the detector pixel. Here, rad is the maximum radius of the FOV (in mm), f(u, v, z) is the voxel value of the sample at coordinates (u, v, z), p_θ(x, y) is the projection value for position (x, y) in the detector at angle θ, α is the angle of the ray with respect to the central ray of the beam, and Mag is the magnification due to the cone angle, Mag = (DSO + DDO) / DSO, where DSO and DDO are the distances from the center of the field of view (FOV) to the source and to the detector, respectively (see Fig 2). Sampling is performed along the v-axis with a step (in mm), set by default to the minimum dimension of the pixel, covering 2×rad. The term 1/cos α compensates for the higher sampling density in rays that are distant from the central ray, as shown in Fig 5 for the ray corresponding to y_1. Pseudocode 1 shows the projection kernel for both the ray-driven algorithm (italic font) and the distance-driven algorithm (bold font).
Pseudocode 1: Projection algorithm. Lines in italic font correspond to the ray-driven algorithm. Lines in bold font correspond to the distance-driven algorithm.
Data: volume, geometric parameters (tilt, skew, ...)
Result: projection data
for θ in projections:
    for x in x_proj:
        for y in y_proj:
            Compute centered x coordinate in projection
            Compute centered y coordinate in projection
            Compute centered x1 and x2 coordinate boundaries in projection
            Compute centered y1 and y2 coordinate boundaries in projection
            if skew:
                Apply skew to (x, y) coordinates
                Apply skew to (x1, y1) and (x2, y2) coordinates
            end
            if tilt or roll
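To make the ray-driven line integral of the previous section concrete, a minimal sketch follows (Python; the names sample_volume and ray_points are hypothetical abstractions introduced here, and the geometry is reduced to a single ray, so this is not the FUX-Sim kernel itself).

import numpy as np

def ray_integral(sample_volume, ray_points, step, alpha):
    """Sum of volume samples along one ray, compensated by 1/cos(alpha).

    sample_volume : callable (u, v, z) -> interpolated voxel value
    ray_points    : iterable of (u, v, z) sampling positions along the ray
    step          : sampling step along the v-axis (mm)
    alpha         : angle of the ray with respect to the central ray (rad)
    """
    total = sum(sample_volume(u, v, z) for (u, v, z) in ray_points)
    return total * step / np.cos(alpha)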
Backprojection kernel
The backprojection kernel implements the integral, over all the projection angles, of the result of spreading back the projection values (sometimes after filtering or other pre-processing steps) along each ray, assuming that all the geometrical deviation parameters are zero. Here, ini is the initial projection angle, nproj is the total number of projections, f(u, v, z) is the value in the back-projected volume at coordinates (u, v, z), p_θ(x, y) is the projection value for position (x, y) in the detector at angle θ, Δθ is the step angle in radians, and Mag is the magnification due to the cone shape of the beam, with DSO and DDO the distances from the center of the FOV to the source and to the detector, respectively (see Fig 2). The implementation of the backprojection kernel is shown in Pseudocode 2 for the ray-driven algorithm (italic font) and the distance-driven algorithm (bold font).
Pseudocode 2: Backprojection algorithm. Lines in italic font correspond to the ray-driven algorithm. Lines in bold font correspond to the distance-driven algorithm.
Optimizations
The performance of the framework was optimized by applying different techniques, some of which depend on the hardware platform, while others can be applied indistinctly to the GPU and the CPU.
Data interpolation.
For the GPU version, FUX-Sim takes advantage of the texture memory in NVidia GPUs and in OpenCL-aware GPUs to reduce memory latencies and generate automatic bilinear or trilinear interpolations. The projections and volumes are uploaded to this memory space before kernel execution.
For the CPU-based version, projection data are stored in the main memory and the bilinear or trilinear interpolation is implemented explicitly, which reduces the overall performance; the interpolation alone consumes up to 25% of the total execution time.
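On the CPU, where no texture units are available, the interpolation must be coded explicitly. A straightforward trilinear interpolation (illustrative Python/NumPy, not the optimized FUX-Sim code; bounds checking is omitted) reads:

import numpy as np

def trilinear(volume, u, v, z):
    """Trilinearly interpolate a NumPy volume at continuous coordinates (u, v, z)."""
    u0, v0, z0 = int(np.floor(u)), int(np.floor(v)), int(np.floor(z))
    du, dv, dz = u - u0, v - v0, z - z0
    val = 0.0
    # Accumulate the eight neighboring voxels weighted by their distances
    for (i, wu) in ((u0, 1 - du), (u0 + 1, du)):
        for (j, wv) in ((v0, 1 - dv), (v0 + 1, dv)):
            for (k, wz) in ((z0, 1 - dz), (z0 + 1, dz)):
                val += wu * wv * wz * volume[i, j, k]
    return val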
GPU memory transfer pattern.
The pattern for the memory transfers from the CPU to the GPU can dramatically affect execution time. Transferring bigger datasets results in a more efficient exploitation of the bus capacity between the host and the GPU by taking advantage of the full memory bandwidth. Additionally, this approach enables simultaneous processing of various data and, therefore, optimal use of the available computational power of the GPU.
The transfer of projection data to the GPU memory in the backprojection algorithm is one of the bottlenecks of kernel execution. Although the kernel is applied in each projection independently, if the GPU memory can hold one or more projections simultaneously, data are transferred in groups of projections. Projections belonging to each group are stored in the same array object (i.e., slot) concatenated vertically and separated with a padding zone, thus avoiding the use of values from the end of previous projections at the beginning of the current processed projection. The slot size is a configurable parameter selected by the user after taking into consideration the size of the projections and the underlying hardware. As demonstrated in our previous work [17], there is a tradeoff between dataset size and performance for the case of the backprojection kernel. A huge dataset can be disadvantageous owing to the overhead in kernel execution, since the number of projections present in the GPU affects the complexity of the kernel (third line of Pseudocode 2).
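The grouping of projections into a single padded array (slot) before transfer can be sketched as follows (illustrative Python/NumPy; the padding value and the number of padding rows are assumptions, not the actual FUX-Sim layout):

import numpy as np

def pack_slot(projections, pad_rows=8):
    """Concatenate projections vertically, separated by zero padding,
    so the whole group can be transferred to the GPU in one operation.

    projections : list of 2-D arrays with identical shape (rows, cols)
    pad_rows    : number of zero rows inserted between projections
    """
    rows, cols = projections[0].shape
    pad = np.zeros((pad_rows, cols), dtype=projections[0].dtype)
    pieces = []
    for p in projections:
        pieces.extend([p, pad])    # padding avoids bleeding between projections
    return np.vstack(pieces[:-1])  # drop the trailing padding block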
In the case of the projection kernel, the subvolumes transferred to the GPU memory are formed directly by a group of contiguous axial slices (used for the 3D interpolation). In this case, the abovementioned tradeoff does not hold: since the large number of axial slices does not affect the complexity of the GPU kernel (the kernel does not iterate over the z-axis), it does not imply an overhead in kernel execution.
After execution, output data are transferred to the host memory for further processing or final storage.
Parallelism strategy.
Parallelism represents the fundamental optimization implemented in the kernel layer. The strategy consists of dividing workload among different computational threads executed in parallel on either the CPU or the GPU. This work division differs depending on the interpolation method used. However, in both cases, parallelism exploits the data independence of the processing of each voxel or pixel, as described in [18].
To optimize memory access, the minimal computational thread in our parallel implementation is the iteration over the v-axis (black-delineated voxels in Fig 4 are computed by the same computational thread). Each of the parallel executions is identified by u and z in the case of the projection kernel, and by x and y in the case of the backprojection kernel (see first two loops in Pseudocodes 1 and 2, respectively).
The number of threads that can be scheduled is optimized by taking into account the number of required GPU registers. As we increase the number of threads available for execution, we increase the occupancy of the GPU, thus reducing the perceived memory latency [19]. The calculation of the center-point trajectories for the ray-driven and voxel-driven methods corresponds to the lines highlighted in italic font in Pseudocodes 1 and 2.
Parallelization of the distance-driven algorithm is highly limited by the intensive calculation of overlapping areas for each ray (shown in Fig 5). The computation of the boundaries, either on the volume or in the detector, adds four operations at each iteration. These boundaries are the limits of the voxels/pixels projected on each u-z plane, as shown in Fig 4. Although independent, these boundaries have the same v-coordinate and access contiguous positions of the input data, thus increasing data locality when retrieving the values thanks to the memory layout. This loop is highlighted in bold font in Pseudocodes 1 and 2.
Support layer
The support layer contains two modules: processing operations, such as derivatives and filters, and platform management.
Processing operations
The support layer provides basic processing operations for the customization of the simulation and auxiliary kernels needed for reconstruction algorithms.
Customization includes functions for geometry computation and calculation of offsets for the definition of the volume/region of interest (VOI/ROI). These functions are always executed in the CPU owing to their low computational cost.
The support layer also includes auxiliary kernels responsible for matrix and element-wise operations such as arithmetic operations, derivatives, and computation of norms. Two important operations included here are the computation of the weighting factors W_1 and W_2, a necessary step for backprojection, and the application of a ramp filter to enhance high frequencies, an essential step in FDK-based methods that can also be used in other reconstruction methods.
Factors W_1 and W_2 are functions of DSO, the distance from the center of the FOV to the source (in mm), x and y, the coordinates in the projection, v, the coordinate in the reconstructed volume (as shown in Fig 2), and size_x, size_y and size_v, the pixel/voxel sizes in mm along the x-, y- and v-axes, respectively. The filtering operation involves Fourier transform and inverse Fourier transform steps, which are performed by means of the cuFFT library (https://developer.nvidia.com/cuFFT) in CUDA and the clFFT library (http://clmathlibraries.github.io/clFFT) in OpenCL. For the CPU, the filter is applied in the spatial domain through a convolution.
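For reference, a basic frequency-domain ramp filter of the kind used in FDK-type reconstruction can be sketched as follows (illustrative Python/NumPy; FUX-Sim itself uses cuFFT/clFFT on the GPU and a spatial-domain convolution on the CPU, and any apodization window is omitted here):

import numpy as np

def ramp_filter_rows(projection):
    """Apply a |f| (ramp) filter along the detector rows of a 2-D projection."""
    n = projection.shape[1]
    freqs = np.fft.fftfreq(n)              # normalized frequencies
    ramp = np.abs(freqs)                   # ramp response, no apodization window
    spec = np.fft.fft(projection, axis=1)  # row-wise Fourier transform
    return np.real(np.fft.ifft(spec * ramp, axis=1))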
Platform management kernels
The platform management kernels are dedicated to operations such as memory allocation and deallocation in the GPU and the CPU, input/output operations, memory transfers between the GPU and host memory, and resource management.
We designed two partitioning strategies to address memory limitations in both the CPU and the GPU. The first consists of the division of the volume into multiple sub-volumes called chunks along the z-axis. The second consists of the division of the projections into sets (covering different angles). The decision on the number of projections included in one set fixes an upper threshold for the slot size, which is described in Section 3.3.2 (maximum number of projections transferred to the GPU).
These partitioning strategies, which can be combined, enable the execution of the kernel with partial volumes or projections in both the GPU and the CPU. They also provide the possibility of a speedup using multiple GPUs, where each GPU is in charge of the backprojection of a chunk or the projection of a projection set.
The chunk-partitioning strategy (Fig 6, left) is used for both projection and backprojection kernel executions. In the case of the backprojection kernel, each chunk is computed and stored to disk independently. In the case of the projection kernel, each chunk is read and computed for all the projection angles independently. The projections that result from each chunk are added and stored to disk.
The set-partitioning strategy (Fig 6, right) follows a similar logic. For the backprojection kernel, each set of projections is read and processed independently. The results are added in a final volume that is stored when all projections have been processed. In the case of the projection kernel, each set of projections is created from the volume and stored independently on disk.
The parameters chunk size and set size are calculated automatically by FUX-Sim at the beginning of the execution based on the hardware characteristics and current usage of the available resources.
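The automatic choice of chunk and set sizes can be illustrated with a simple memory-budget heuristic (Python sketch; the actual FUX-Sim policy and its safety margins are not described in this text, so the values below are placeholders):

def partition_sizes(n_slices, n_projections, slice_bytes, proj_bytes,
                    gpu_free_bytes, safety=0.8):
    """Return (chunk_size, set_size): how many axial slices and how many
    projections fit simultaneously in the usable GPU memory."""
    usable = gpu_free_bytes * safety
    chunk_size = max(1, min(n_slices, int(usable // slice_bytes)))
    set_size = max(1, min(n_projections, int(usable // proj_bytes)))
    return chunk_size, set_size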
Architecture layer
The abstraction of the architecture layer makes it possible to create new configurations on several platforms (GPU, x86 CPU-based) and in different operating systems (Linux, Windows, and MacOS) without requiring a deep knowledge of accelerator architectures (Fig 7). For this purpose, all algorithms and kernels were implemented according to three programming models: CUDA (for NVidia GPUs), OpenCL (for GPUs and ARM architectures), and OpenMP (for CPUs), thus enabling execution of the same algorithm in a parallel manner.
The architecture layer provides a wrapper for the specific version of the algorithms, which is configurable by the user depending on the available resources. The execution flow of the simulator passes through the architecture layer to automatically reach the corresponding functionality in the kernel layer or support layer, depending on the availability of the GPU and the programming model chosen. In the example shown in Fig 6, the allocate memory function in the architecture layer is translated into cudamalloc, clcreatebuffer, or malloc in the kernel layer and support layers, depending on the devices and the available programming models.
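The wrapper behavior of the architecture layer can be pictured as a simple dispatch table (illustrative Python; the real framework is written for native programming models and would call the cudamalloc, clcreatebuffer, or malloc routines mentioned above, which are only represented by placeholders here):

def make_allocator(backend):
    """Return the allocation routine matching the selected programming model.
    The CUDA/OpenCL callables are placeholders standing in for the native
    cudamalloc / clcreatebuffer calls."""
    def cuda_alloc(nbytes): ...                       # would wrap cudamalloc
    def opencl_alloc(nbytes): ...                     # would wrap clcreatebuffer
    def cpu_alloc(nbytes): return bytearray(nbytes)   # plain host allocation

    table = {"cuda": cuda_alloc, "opencl": opencl_alloc, "openmp": cpu_alloc}
    return table[backend]

alloc = make_allocator("openmp")
buffer = alloc(1024)  # the caller never needs to know which backend was used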
Configuration layer
The configuration layer translates the parameters of the scanning geometry obtained from the command line or through the calibration file into a specific parameter set for the various system configurations.
Cone-beam with circular trajectory
The most standard configuration is a cone-beam system with the detector placed orthogonally to the line that passes through the source and the origin with the piercing point at its center, as shown in Fig 8-left, and with the source-detector pair following a circular trajectory.
The implementation of this geometry is based on several calls to the projector/backprojector kernels for each view angle. The view angle is either calculated from the span angle and number of evenly spaced projections or read from the calibration file.
Helical scan
The helical configuration is implemented based on the circular cone-beam geometry described above, with the position of the volume changed for each projection to simulate the movement of the bed (Fig 7-right). For each angular position θ, the shift of the voxels in the z direction is calculated from the pitch (the displacement of the bed in one rotation), n (the number of projections per rotation), span (the total angle span covered during the acquisition), and thick (the slice thickness in the volume).
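Under these definitions, the per-projection bed displacement can be sketched as follows (illustrative Python; the exact FUX-Sim expression is not reproduced in this text, so this is one plausible formulation rather than the framework's own):

def z_shift_voxels(i, pitch, n, span, thick):
    """Bed shift (in voxel units) for projection index i.

    pitch : bed displacement per full rotation (mm)
    n     : number of projections per rotation
    span  : total angular span of the acquisition (degrees)
    thick : slice thickness of the volume (mm)
    """
    n_total = int(round(n * span / 360.0))  # total number of projections
    theta_deg = i * span / n_total          # cumulative angle at projection i
    shift_mm = pitch * theta_deg / 360.0    # displacement accumulated so far
    return shift_mm / thick                 # expressed in slices/voxels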
Arbitrary positioning
The arbitrary positioning configuration allows us to define an arbitrary trajectory for source and detector. Each position is translated into a set of linear displacements and angular inclinations from the ideal position (circular scan geometry), as shown in Fig 9. The translation is carried out in two steps: (1) u-and v-shifts are calculated so that the source-object line passes through the center of a virtual detector; and (2) inclinations (tilt and roll) are calculated as the angles formed between the real and virtual detectors around z and u axis, respectively.
Tomosynthesis
As shown in Fig 10, the simulator implements two system configurations for tomosynthesis: linear tomosynthesis, where the source follows a linear trajectory while the detector moves in the opposite direction, as in conventional tomography, and arc tomosynthesis, where the detector is static and the source follows a circular trajectory. In both cases, the structures contained in the focal plane are projected into the same position of the detector, while structures in other planes appear at different locations in the projections.
The implementation of these configurations is based on the use of a virtual detector that is larger than the real detector, as shown in Fig 11. In the case of linear tomosynthesis, the virtual detector size D_large is calculated from D_x and S_x, the displacements of the detector and the source respectively, D_real, the actual detector size, DDO, the object-detector distance, and DSO, the source-object distance.
For each projection, an ROI of the size of the real detector, centered at D_x + S_x, is computed in the virtual detector. For the case of arc tomosynthesis, S_x and DSO are calculated for each projection as a function of S_β, the angle rotated by the source.
Wide field of view
FUX-Sim enables the possibility of simulating an increased FOV, which is useful in scenarios where the detector is smaller than the scanning area. In these cases, two or more projections can be obtained and stitched together using a post-processing algorithm to build a larger image.
Depending on the movement of the source, FUX-Sim provides two models: linear displacement and tilting, as shown in Fig 12. Linear displacement is based on the same idea as the helical scan: a shift of the whole volume in the z-direction. The tilting configuration is based on defining a larger virtual detector, as in linear tomosynthesis; the virtual detector size D_large is calculated from D_real, the real detector size, and N, the total number of projections. For each projection at position n, an ROI of the size of the real detector, centered at D_x, is calculated on the virtual detector, where D_x depends on the Overlap between two consecutive positions of the detector.
Evaluation
The performance of FUX-Sim was evaluated on two hardware architectures, namely, a high-performance workstation and a low-performance workstation, whose characteristics are listed in Table 1. The last two rows of the table show the configurable parameters set size and chunk size, which are calculated automatically during execution depending on data size. Table 2 presents the results of the circular scans for standard and high resolution, and Table 3 the results of helical, linear, and arc tomosynthesis. Both tables show the processing time in seconds for the kernel including memory transfers (kernel execution) and for the whole process including I/O operations (overall execution).
The poorest performance was obtained with the CPU versions, which use OpenMP for parallelization of the core algorithms. Although OpenCL and CUDA used the same GPU and their performance was similar for high-resolution studies, OpenCL performed worse than CUDA for small volumes. In the case of the circular scan, the execution time of the projection kernel increased linearly with both the number of projections (360 projections being 2× faster than 720 projections) and the resolution (standard resolution being 32× faster than high resolution). Backprojection showed a different dependency on resolution, with the standard-resolution study only 8× faster than the high-resolution study. The reason for this is that we set the slot size to 1 to evaluate the most limited case; better results could be obtained by optimizing the slot size, as explained in [18].
Finally, we evaluated the programming model that showed the best results, CUDA, in the low-performance workstation. We applied the most demanding study, namely, backprojection of the high-resolution Digimouse with a circular trajectory. The configuration enabled a total execution time of 376 seconds, which is 5× slower than on a high-performance computer. Chunk size and set size in this case are 1024×1024×286 (resulting in 4 chunks, the last one being slightly smaller) and 360, respectively.
Discussion
FUX-Sim was designed to address three key difficulties in the development of simulation/reconstruction algorithms: (1) the algorithms manage large data volumes and are computationally expensive, thus necessitating hardware and software optimizations; (2) optimizations are limited by the high flexibility required to explore flexible scanning geometries, including fully configurable positioning of source and detector elements; and (3) the fast evolution of different hardware setups increases the effort required to maintain and adapt implementations to current and future programming models.
Simulation and reconstruction require large memory capacity because of the need to allocate both projections and volumes in memory to ensure efficient computation. We addressed memory limitations by including two efficient partitioning strategies that allow the processing of small partitions of the input data. These strategies made it possible to run FUX-Sim on standard workstations with commodity hardware and low-memory GPUs, even for simulating or reconstructing large studies.
The optimized implementation for the different systems, i.e., programming models for the GPU (CUDA and OpenCL) and the CPU, is achieved thanks to a modularized approach based on a layered architecture and the parallel implementation of the algorithms in both the GPU and the CPU. The modular approach enables flexible and easy creation of new system configurations using existing kernels and utilities. This flexibility implies a trade-off with performance, as it prevents the application of very specific optimizations. An example of this type of optimization would be the overlap of input/output operations and kernel execution, which would require tighter coupling between the support layer and the kernel layer, thus leading to a loss of modularity. Another example is the reduction of the geometrical parameters used in the projection and backprojection kernels, such as detector shifts and rotations, in the case of simple geometries (e.g., ideal circular cone-beam scans). This simplification would require customized kernels for each geometry, thus hindering the creation of new system configurations. However, our evaluation showed that performance was similar to that of previous works thanks to the other optimizations included in the different layers. As expected, the worst performance was observed with the CPU version of FUX-Sim, even with the parallelization of the core algorithms using OpenMP. We evaluated the GPU version of FUX-Sim on both a laptop and a high-performance computer; the possibility of using a wide range of underlying hardware is an advantage over the simulation/reconstruction platforms presented in previous works. In line with those works, and despite using the same acceleration device, execution with CUDA was 10% faster than with OpenCL when backprojecting high-resolution studies [10,12]. However, we found a much larger difference in performance between CUDA and OpenCL when projecting smaller volumes: CUDA was 2× faster than OpenCL because the hardware is used more efficiently with CUDA, a difference that is compensated for when there is enough load to use the maximum computational capacity of the device.
Differences in hardware and software platforms make it difficult to compare execution times between studies. Nevertheless, an approximate comparison shows, for example, that FUX-Sim was around 4× faster when projecting and around 5× faster when backprojecting than in the TIGRE study [20]. We also obtained good results, even with our layered architecture, with respect to state-of-the-art implementations of the algorithms. Backprojection of similar volume sizes with FUX-Sim was more than 2× faster than the CUDA/C implementation in [10]. Finally, we showed that it was possible to simulate high-resolution studies in commodity computers, even when there is not enough memory to allocate the whole dataset.
The three configurable parameters that affect the overall performance of FUX-Sim are chunk size, set size, and slot size. Chunk size and set size are used for the optimization of memory transfers between the CPU and the GPU. Their values are automatically calculated based on the available resources of the computer (GPU global memory and CPU memory capacity). A low value for these parameters would increase the number of memory transfers and result in a low GPU utilization factor. The relationship between performance and slot size was studied in a previous work [18]. The value of this parameter is defined by the user after taking the texture memory capacity and the GPU model into consideration. In the future, we plan to find a mechanism to automate this setup.
The simulator can deal with a wide variety of scanning geometries but does not include the source model (heel effect, polychromatic nature, focal spot) or detector model (noise model, intensity response), both of which could easily be included in the future as new modules of FUX-Sim in the support layer.
The architecture we propose is significantly more flexible than that of previous simulators (CT Sim [1], IRT [2], TomoPy [3], X-ray Sim [5]), which do not allow the simulation of new acquisition protocols based on non-standard setups. The CONRAD [7] and ASTRA [8] toolkits allow flexible scanning geometries but present limitations. The simulation of nonstandard geometries with CONRAD is less straightforward, as it is based on a projection matrix that needs to be previously obtained, and the ASTRA toolkit is limited to datasets that fit completely in the memory space of the GPU and to circular orbits, thus preventing simulation of new acquisition geometries such as those used in tomosynthesis.
In conclusion, we present a new, highly flexible X-ray simulation/reconstruction framework that enables fully configurable positioning of source and detector elements. The implementation is optimized for two GPU programming models (CUDA and OpenCL) and for multi-core CPUs using a modularized approach based on a layered architecture and the parallel implementation of the algorithms on both types of device. Consequently, FUX-Sim can be executed on most current hardware platforms, since OpenCL is supported by AMD and NVidia GPUs and by Intel and ARM processors, while CUDA is the most widely applied programming model for GPUs [4]. The modular architecture also facilitates maintenance and adaptation to current and future programming models. The execution times we measured were faster than other state-of-the-art implementations for different system configurations and hardware platforms. FUX-Sim can prove valuable for research on new configurations for X-ray systems with non-standard scanning orbits, new acquisition protocols, and advanced reconstruction algorithms. In addition, our framework will make it possible to obtain tomographic images from very few projections, thus enabling easy and inexpensive assessment before implementation in real systems.
"Computer Science"
] |
Seismic Assessment of Six Typologies of Existing RC Bridges
Over the last few decades, attention to the safety of existing reinforced concrete (RC) structures has significantly increased. RC bridges, in particular, are highly relevant because of their strategic importance. In the Italian context, several of these bridges were built around 1960, when engineering practice commonly ignored or underestimated seismic actions. It is therefore fundamental to quantify their seismic safety level as accurately as possible with state-of-the-art analysis techniques. In this paper, an efficient procedure based on the multi-modal pushover analysis approach is proposed for the risk evaluation of several bridges of the Italian highway network. This procedure, tailored for portfolio-level assessment, takes into account the non-linear behavior and the complex dynamic response of this type of structure with limited computational effort. Three fundamental aspects are defined for the structural modelling of bridges, i.e., the materials' constitutive laws, the finite element type, and the nonlinear hinge models. Flexural and shear nonlinearities of the piers are included to account for ductile and brittle damage potential. The standardized procedure guarantees consistent comparisons among different bridges of the same network in the form of risk indexes.
Introduction
In recent years, several collapses have affected existing reinforced concrete (RC) bridges, which has led to considerable interest in evaluating the residual capacity of these structures under static and dynamic loads [1,2]. For this reason, several studies have presented different approaches to evaluate the safety level of existing bridges or of other types of strategic infrastructure [3][4][5][6][7][8][9][10]. In the Italian Highway Network, the majority of these bridges were built in the 1960s and 1970s, and they nowadays require numerous maintenance operations to ensure standard safety levels. Moreover, according to the new design codes, particular attention is given to the assessment of the seismic capacity of these structures. For instance, the Italian Civil Protection requires highway operators to collect relevant data for each asset in their portfolios and to conduct nonlinear seismic assessments for emergency planning and investment prioritization purposes [11].
Several issues arise when evaluating the seismic response of existing reinforced concrete (RC) bridges with nonlinear techniques [3]. On the one hand, standard nonlinear pushover analysis, nowadays widely used in structural engineering firms, e.g., [12], fails to address the complex dynamic response of bridges that are not characterized by a predominant vibration mode. On the other hand, nonlinear time history analyses involve several challenges for professional engineers, such as ground motion selection, modeling of strength/stiffness degradation, and high computational cost [13]. An alternative solution presented in the literature is the Modal Pushover Analysis (MPA), initially developed by Chopra and Goel [14,15] to assess the seismic response of unsymmetrical-plan buildings. The MPA is an extension of the Response Spectrum Analysis (RSA) that is particularly effective for irregular structures that do not exhibit a principal mode shape with high participating mass. The methodology was further extended to the case of bridges through the work of Kappos et al. [16,17].
In this paper, an efficient procedure is proposed to evaluate the seismic vulnerability of bridges taking the above-mentioned aspects into consideration. The procedure is based on Finite Element Models (FEM) in which the non-linear behavior of the piers is represented with concentrated plastic hinges, which considerably reduces the computational effort and allows the execution of MPA analyses. The result of the assessment is expressed in terms of a Risk Index, i.e., the ratio between the maximum Peak Ground Acceleration (PGA) that the bridge can withstand (capacity) and the PGA expected at the site of the asset (demand). The procedure is applied to six representative case studies of a bridge portfolio characterized by cantilever and frame-type piers. The results are discussed highlighting the critical aspects of each typology with respect to their seismic behavior.
Multi-Modal Pushover Approach
Pushover analysis is commonly used to evaluate the non-linear behavior of an existing structure or infrastructure subjected to an incremental horizontal load. Three basic concepts govern the application of pushover analysis [18]: (a) the capacity curve, (b) the demand spectrum and (c) the performance point.
The capacity curve defines the nonlinear response of a structure subjected to a predefined lateral load distribution. The curve usually consists of a top displacement versus base shear diagram. The shape of the lateral load profile is usually proportional to a mode shape, s_n* = M φ_n, where M is the mass matrix of the structure, φ_n is the n-th eigenvector and s_n* is the loading vector applied to the structure during the analysis. The obtained curve can be converted to the spectral displacement (S_d) versus spectral acceleration (S_e) plane through the fundamental relations S_e = V_bn / M_n* and S_d = u_rn / (Γ_n φ_rn), where V_bn is the base shear for the n-th vibration mode, u_rn is the top displacement of the control point for the n-th vibration mode, M_n* is the modal mass of the n-th mode, Γ_n is the modal participation factor and φ_rn is the control-point component of the n-th eigenvector. The seismic demand can be represented in the Acceleration Displacement Response Spectrum (ADRS) format, obtained from the horizontal acceleration response spectrum by plotting the spectral acceleration against the corresponding spectral displacement S_d = S_e T² / (4π²). The performance point is obtained by intersecting the capacity curve with the demand curve and represents how the structure would behave under the specific seismic action (Figure 1a).
Several techniques have been proposed to evaluate the performance point; an extended state-of-the-art review is reported by Causevic and Mitrovic [19]. Through a bilinearization of the capacity curve (Figure 1b), a demand reduction coefficient is obtained. This coefficient can be expressed in terms of a ductility factor or an equivalent damping and takes into account the energy dissipation of the post-elastic phase. The intersection between the reduced demand spectrum and the capacity curve identifies the performance point. In this work, the Capacity Spectrum Method (CSM) [18,20,21] is adopted, with the following fundamental steps:
1. Definition of the seismic demand in the ADRS form;
2. Selection of the first iteration point (a_pi, d_pi) on the capacity curve;
3. Bilinearization of the capacity curve with K_I as the elastic stiffness followed by a hardening branch; the hardening branch is defined by applying the equal-energy rule between the capacity curve and its bilinear idealization (Figure 1b);
4. Scaling of the ADRS according to the effective damping coefficient, which takes into consideration both the hysteretic damping (related to the cyclic plastic deformations) and the inherent damping (equal to 5% in the case of concrete structures) (Figure 1c);
5. Evaluation of the performance point by intersecting the capacity curve and the scaled demand spectrum through an iterative process.
The selection of the horizontal load profile for the pushover analysis is not univocal and can decisively influence the results. As discussed in the Introduction, there has been consistent research on the topic, e.g., [13]. In this work, mode-shape load profiles are adopted as for Chopra and Goel [14]. Operationally, N capacity curves are determined, one for each significant vibration mode. For each capacity curve, the performance point is evaluated with reference to the same seismic demand spectrum. Lastly, relevant internal-forces/displacements at the performance configuration are extracted and combined with the classical modal combination rules (e.g., CQC).
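To make the above procedure concrete, the conversion of a pushover curve to the ADRS plane and the search for the performance point can be sketched as follows (illustrative Python/NumPy; the bilinearization and damping-based scaling of steps 3 and 4 are collapsed into a single fixed reduction factor, which is an assumption rather than the full CSM rule):

import numpy as np

def to_adrs_capacity(V_b, u_r, M_star, gamma, phi_r):
    """Convert base shear / top displacement (arrays) to spectral acceleration /
    displacement for mode n: S_a = V_b / M*, S_d = u_r / (Gamma * phi_r)."""
    return V_b / M_star, u_r / (gamma * phi_r)

def performance_point(Sd_cap, Sa_cap, periods, Se, eta=0.7):
    """Rough intersection of a capacity curve with a reduced demand spectrum.

    Sd_cap, Sa_cap : capacity curve in the ADRS plane (NumPy arrays, Sd increasing)
    periods, Se    : elastic response spectrum ordinates (periods ascending)
    eta            : global demand reduction factor standing in for the
                     damping-based scaling of steps 3-4 (assumption)
    """
    Sd_dem = eta * Se * periods**2 / (4 * np.pi**2)   # demand in ADRS form
    Sa_dem = eta * Se
    # interpolate demand Sa at the capacity Sd values and pick the closest crossing
    Sa_dem_at_cap = np.interp(Sd_cap, Sd_dem, Sa_dem)
    idx = np.argmin(np.abs(Sa_dem_at_cap - Sa_cap))
    return Sd_cap[idx], Sa_cap[idx]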
Structural Modelling
A fundamental step in assessing the seismic vulnerability of existing structures is the determination of the actual materials' characteristics through the execution of laboratory/in-situ tests. It is worth mentioning that, within the same structure, the variability of mechanical properties can be high. Therefore, the Italian Building Code [11,22] requires the use of an appropriate confidence factor that is related to the level of knowledge obtained through the survey campaign. Consequently, the materials' strength is reduced for structural verifications. In the absence of specific laboratory investigations, concrete and steel mechanical properties are taken from technical-scientific studies [23,24].
The materials' constitutive laws should take into account the mechanical phenomena that occur at both element and cross section levels. The concrete behavior is significantly influenced by the confining effect of transverse reinforcement. In this work, the concrete model developed by Kent and Park [25] was chosen, considering only the compressive behavior. The Kent and Park concrete model takes into account the confining effect of stirrups through the confinement parameter K. The coefficient Z defines the post-peak (softening) response of the material (Figure 2a). For the steel reinforcements, the Park Strain Hardening [26] constitutive law was adopted (Figure 2b).
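For reference, the shape of the confined-concrete law described above can be sketched as follows (illustrative Python; the expressions follow the widely used modified Kent and Park form, which may differ in detail from the exact variant adopted in the paper):

def kent_park_stress(eps, fc, K, Z):
    """Compressive stress (MPa) of confined concrete, modified Kent-Park form.

    eps : compressive strain (positive)
    fc  : unconfined concrete cylinder strength (MPa)
    K   : confinement factor due to transverse reinforcement
    Z   : slope parameter of the post-peak (softening) branch
    """
    eps0 = 0.002 * K                      # strain at peak stress
    if eps <= eps0:                       # ascending parabolic branch
        x = eps / eps0
        return K * fc * (2.0 * x - x**2)
    # linear softening branch with a residual strength floor of 0.2*K*fc
    return max(K * fc * (1.0 - Z * (eps - eps0)), 0.2 * K * fc)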
(a) (b) In this work, the FEM models of the reinforced concrete bridges were developed using the software MIDAS Civil [26]. A simplified approach has been adopted where (i) the deck, the piers and the pier cups are schematized with beam elements (ii) the bearings are modeled using general links with stiffness values calculated as in EN 1337-3:2005 [27]. The connection between beam elements and general links is guaranteed thanks to rigid links. The abutments are represented as restrains located at deck-abutments interface bearings base. Lastly, the piers are assumed fixed into rigid foundations. Stiffness reduction due to cracking is taken into account when assessing the natural frequencies of RC structures. This can be done with a specific reduction coefficient of the cross-section elastic stiffness obtained from the moment-curvature (M-χ) diagram, as for EN 1998-2:2005 [28]. Structural and non-structural masses are considered for eigenvalue analysis while traffic loads are neglected [11].
For the nonlinear response of elements two types of mechanisms that characterize piers have been considered: (i) the flexural-ductile mechanism and (ii) the shear-brittle mechanism. The ductile mechanism refers to the rotational capacity of the plastic hinges while the brittle mechanism depends on the shear strength. These two collapse mechanisms interact and affect simultaneously different structural elements. As a result, it is quite complex to obtain a reliable estimate of the nonlinear dynamic response. In this work, the bridges' capacity has been assessed by investigating these failure mechanisms separately.
The ductile response is modeled with concentrated plastic hinges. Figure 3 shows the example related to piers characterized by a cantilever behavior. The moment-curvature (M-χ) response of the pier base section is evaluated up to the ultimate capacity of the member. The curvature ductility μ_ϑ is the ratio between the ultimate and the yield curvatures. The data extracted from the M-χ curve are incorporated in the moment-rotation diagram by integrating the curvatures over the plastic hinge length (L_pl); for a constant distribution of the bending moment over L_pl, the yield and ultimate rotations are obtained directly from the corresponding curvatures. The plastic hinge length L_pl is calculated according to EC8 [29] as a function of d_bl, the diameter of the longitudinal bars, f_y, the yield stress of the steel rebars, and f_c, the concrete compressive strength.
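Under the constant-moment assumption stated above, the moment-rotation data for the hinge can be obtained from the moment-curvature results as in the following sketch (illustrative Python; the EC8 expression for L_pl is not reproduced in this text, so L_pl is taken as an input):

def hinge_rotations(chi_y, chi_u, L_pl):
    """Yield and ultimate rotations of a concentrated plastic hinge obtained
    by integrating curvatures over the plastic hinge length L_pl (constant
    bending moment assumed over L_pl).

    chi_y : yield curvature (1/m)
    chi_u : ultimate curvature (1/m)
    L_pl  : plastic hinge length (m)
    """
    theta_y = chi_y * L_pl
    theta_u = chi_u * L_pl
    mu = chi_u / chi_y          # curvature ductility
    return theta_y, theta_u, mu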
FEMA 356 [20] reports the definition of nonlinear hinge relationships for pushover analysis. The verification criterion corresponding to the Life Safety limit state is assumed equal to 3/4 of the ultimate rotation ϑ_u. The brittle collapse mechanism depends on the shear capacity of the piers. It is worth mentioning that these RC bridges were generally designed to resist small lateral loads; their horizontal bearing capacity (e.g., seismic resistance) is therefore quite low.
In the present work, the shear strength of the piers is assumed according to EC8 [30], where the cyclic shear resistance V R in the plastic hinge region accounts for the contribution of three factors: (i) the axial load, (ii) the concrete strength and (iii) the transversal reinforcement. An elastic-brittle force-displacement constitutive law is considered in this work [20]. The verification criterion for the Life Safety limit state is assumed equal to the achievement of V R .
The Italian Building Code [11] requires quantification of the seismic response of the bridge at a given location in terms of a risk index. The risk index is the ratio between the capacity (C) of the bridge and the seismic demand (D), expressed in Peak Ground Acceleration, PGA (or return period). The PGA_D is directly taken from the seismic hazard map for the given location. The estimation of the PGA_C requires an iterative process: the N pushover curves are intersected with an increasing spectrum until the safety limit of at least one structural member is exceeded. According to the "second level vulnerability assessment form" by the Italian Civil Protection, the risk indices are defined as follows [31]:
• Risk index in acceleration (RI_PGA): the ratio between capacity (PGA_C) and demand (PGA_D) in terms of peak ground acceleration;
• Risk index in return period (RI_TR): the ratio between capacity (T_RC) and demand (T_RD) in terms of return period of the earthquake, raised to the power of 0.41 [11,31].
Values close to or larger than one characterize cases where the risk level is acceptable; on the contrary, values close to zero characterize high-risk cases.
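The two risk indexes can be computed directly from the capacity and demand quantities, as in the short sketch below (illustrative Python, following the definitions given above):

def risk_indexes(pga_c, pga_d, tr_c, tr_d):
    """Risk indexes per the second-level vulnerability assessment form.

    pga_c, pga_d : capacity and demand peak ground accelerations (g)
    tr_c, tr_d   : capacity and demand return periods (years)
    """
    ri_pga = pga_c / pga_d                 # risk index in acceleration
    ri_tr = (tr_c / tr_d) ** 0.41          # risk index in return period
    return ri_pga, ri_tr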
Case Studies
The procedure, described in Section 3, has been applied to six different bridges, representative of the Italian Highway Network. In all the models, the longitudinal axis of the bridge is represented by the X axis, while the transversal direction is oriented with Y axis of the coordinate system.
The first bridge (Figure 5) is characterized by two adjacent and independent carriageways, each consisting of a sequence of seventeen simply supported 36 m spans (except for the central span, which is 60 m long). The planimetric and altimetric layout is rectilinear. The overall width of the roadway is about 11 m and each span is realized with a precast deck of six prestressed U girders. The bridge deck consists of a 20 cm thick concrete slab. Each span is supported by 2 × 6 elastomeric bearings placed at the ends of each longitudinal beam.
Infrastructures 2020, 5, 52 6 of 15 • Risk index in acceleration (RIPGA): is the ratio between capacity (PGAC) and demand (PGAD) in terms of peak ground acceleration; • Risk index in return period (RITR): is the ratio between capacity (TRC) and demand (TRD) in terms of return period of the earthquake, raised to 0.41 [11,30]. Values close to one or larger than one characterizes cases where the risk level is acceptable. On the contrary, values close to zero characterizes high-risk cases.
Case Studies
The procedure, described in Section 3, has been applied to six different bridges, representative of the Italian Highway Network. In all the models, the longitudinal axis of the bridge is represented by the X axis, while the transversal direction is oriented with Y axis of the coordinate system.
The first bridge ( Figure 5) is characterized by the presence of two adjacent and independent carriageways, consisting in a sequence of seventeen simply supported 36 m spans (except for the central span which is 60 m long). The planimetric and altimetric layout is rectilinear. The overall width of the roadway is about 11 m and each span of the bridge is realized by a precast concrete slab of six prestressed U girders. The bridge deck consists in a 20 cm thick concrete slab. Each span of the bridge is supported by 2 × 6 elastomeric bearings placed at the ends of each longitudinal beam. Each pier is structurally independent from the adjacent one and it has a rectangular tapered section where the base dimension is equal to 7 × 2 m while the top dimension is 8 × 2 m. At the top of every RC pier, there is a hammerhead cap where the elastomeric bearings are located. The piers are made of C25/30 concrete with 74Ø22 longitudinal AQ50 steel rebars confined by Ø10/30 cm stirrups.
The second viaduct ( Figure 6) is also made by two adjacent and independent carriageways. It is constituted by a sequence of five simply supported 22 m length spans, realizing a rectilinear planimetric and altimetric layout. The overall width of the roadway is 9.85 m and each span is realized with a precast concrete girder of four I longitudinal beams and four transverse beams. The viaduct deck consists in a 25 cm thick concrete slab. Each span of the bridge is supported by 2 × 4 elastomeric bearings. In this case, the piers are composed by two independent frames. Each frame has two cylindrical columns (diameter equal to 1 m). The two columns are connected at the top by a trapezoidal beam where the elastomeric bearings are located. The piers' characteristics are: C25/30 concrete, 16Ø20 longitudinal AQ50 steel rebars, Ø8/20 cm spiral stirrups.
The third case study (Figure 7), is a long span bridge characterized by a total length equal to 77 m. The overall width of the roadway is about 10 m and the long span is realized by a spiroll prestressed precast concrete slab while the deck consists in a 20 cm thick concrete slab. The long span is supported by 20 elastomeric bearings divided between the two piers and the abutments. Each pier is characterized by a rectangular tapered section which presents a base dimension equal to 5 × 0.9 m and a top dimension equal to 5.6 × 0.9 m. The elastomeric bearings are placed in correspondence to the top of the pier. The two piers are made of C20/25 concrete with 38Ø18 longitudinal AQ50 steel rebars confined by Ø10 stirrups having a spacing of 25 cm.
The fourth bridge (Figure 8) is characterized by the presence of two adjacent and independent carriageways. It consists in a sequence of eighteen simply supported 29 m length spans. The layout presents a slight curvature. The overall width of the roadway is about 12 m and each span of the bridge is realized by a precast concrete lattice girder formed by four longitudinal beams (three characterized by an I section and one by a U section) and five transverse beams, while the deck consists in a 22 cm thick concrete slab. Each span of the viaduct is supported by 2 × 4 elastomeric bearings placed at the end of each longitudinal beam.
The piers are characterized by a frame system consisting of four columns with a rectangular section 0.8 × 2 m. The fourth pier presents two independent columns characterized by a triangular section and by a rectangular section 3 × 2.4 m. The fifth pier is composed by a rectangular 6 × 2.4 m column. The piers are made of C32/40 concrete with 22Ø20 longitudinal FeB44K steel rebars confined by Ø12/20 cm stirrups. The fifth case study is a long span bridge with a total length equal to 65 m (Figure 9). The overall width of the roadway is 11.25 m and the long span is realized by a precast concrete lattice girder with five longitudinal I beams, seven transverse beams and a 20 cm thick concrete deck. The long span is supported by twenty elastomeric bearings divided between the two frame piers and the abutments.
Each pier is composed by a reticular concrete frame where the columns are characterized by a rectangular tapered section that varies from 1.06 × 1.61 m at the base to 1.06 × 1.36 m. Concrete material is C40/50 with 30Ø20 longitudinal FeB44K steel rebars confined by Ø12/30 cm stirrups. Lastly, the sixth case study consists in a multi span bridge characterized by two simply supported 39 m spans ( Figure 10). The overall width of the roadway is about 12 m and each span is realized by a precast concrete lattice girder formed by four longitudinal I beams, four transverse beams and a 20 cm thick deck. Each span of the bridge is supported by 2 × 4 elastomeric bearings placed at the end of each longitudinal beam.
The pier is characterized by a spatial frame where four columns present a C section and are made of C28/35 concrete with 50Ø16 longitudinal AQ50 steel rebars and Ø8/20 cm stirrups. The piers of the analyzed case studies are characterized by quite different structural behaviors. Piers of bridges 1 and 3 present a cantilever boundary configuration. The other bridges' piers are characterized by a double-clamped (frame) configuration. Figure 11 shows six representative moment-curvature diagrams, one for each of the analyzed bridges.
The idealized moment-rotation relationships of the corresponding plastic hinges are summarized in Table 1. As previously discussed, the brittle failure is governed by the shear response. The idealized shear-displacement curves of the considered piers are reported in Table 2. Eigenvalue analysis has been performed for each case-study bridge. The most significant vibration modes have been used as modal-horizontal load profiles of the pushover analysis. Natural periods (T j ) and corresponding modal participation masses (m j ) of the vibration modes involving at least 5% of the total mass are listed in Table 3 (longitudinal direction) and Table 4 (transversal direction). These vibration modes are characterized by a prevalent value of participant mass in the longitudinal or transversal direction. Figure 12 shows one relevant pushover curve for each of the analyzed bridges, where only the nonlinear bending response is considered (ductile mechanism). Given a specific seismic input (ADRS spectrum), the calculation of the performance point is carried out with the CSM for each relevant capacity curve as in [21]. Subsequently, the corresponding internal forces are combined with the CQC technique and compared to the limit state's maximum capacity.
If the verification is satisfied, the PGA of the selected spectrum is lower than PGA C . Therefore, the procedure has to be repeated with an increased spectrum until PGA C is detected. This iterative process leads to the calculation of the risk indexes in terms of PGA or return period T R (RI PGA or RI TR , respectively), i.e., the maximum bearable PGA (or T R ) over the corresponding site design values. Table 5 reports the results of the six bridges for the ductile mechanism.
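A minimal sketch of this iterative capacity search is given below; the verification routine is_safe stands in for the full CSM/CQC member checks of the procedure, and the numerical values are illustrative only:

```python
# Sketch of the iterative search for PGA_C (assumes a user-supplied routine `is_safe(pga)`
# that runs the structural verification for a spectrum anchored at the given PGA).
def find_pga_capacity(is_safe, pga_start=0.05, step=0.05, tol=0.005, pga_max=2.0):
    """Increase the spectrum anchor PGA until a member check fails, then bisect."""
    pga = pga_start
    while pga <= pga_max and is_safe(pga):
        pga += step
    lo, hi = max(pga - step, 0.0), pga      # capacity lies in (lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_safe(mid):
            lo = mid
        else:
            hi = mid
    return lo                               # largest verified PGA, an estimate of PGA_C

# Toy example: a fictitious bridge that remains safe up to 0.37 g.
pga_c = find_pga_capacity(lambda pga: pga < 0.37)
print(round(pga_c, 3))                      # ~0.37 g
```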
Analogously, pushover analyses of relevant vibration modes are performed for the brittle mechanism. Figure 13 shows one relevant pushover curve for each of the analyzed bridges.
The corresponding risk indexes in terms of PGA and T R (RI PGA or RI TR ) for both the longitudinal and transversal directions are listed in Table 6.
Discussion
The risk indexes estimated for the six case studies reflect the well-known seismic deficiencies of existing RC bridges. Looking at the bending (ductile) capacity, all considered viaducts are compliant with the code-prescribed seismic safety level. The piers, properly designed to resist high vertical loads, have a sufficient amount of longitudinal reinforcement to withstand the bending actions generated by the earthquake shaking. Most of the bridges have PGA C larger than 0.2 g. The average value is 0.36 g while the coefficient of variation is 0.39. In general, the higher values of PGA C refer to short viaducts or to viaducts characterized by wall piers. Each analyzed viaduct has a longitudinal and transversal risk index larger than one. On the contrary, the shear (brittle) capacity is quite limited. The corresponding PGA C has an average equal to 0.15 g and a consistently high scatter (coefficient of variation of 1.19). In most cases the PGA C is lower than 0.1 g for both the longitudinal and transversal directions. Only bridges 4 and 6 present risk indexes larger than one, while the other viaducts are affected by the poor construction details of the piers in terms of transversal reinforcement.
Conclusions
In this paper, an efficient procedure to evaluate the seismic vulnerability of existing RC bridges has been described with reference to six typical bridges of the Italian Highway Network. The procedure, based on the modal pushover analysis approach, guarantees a low computational cost, resulting in a balanced solution for the assessment of large portfolios of bridges. Risk indexes, expressed in terms of peak ground acceleration or return period, have been calculated (i) considering bending-ductile and shear-brittle collapse mechanisms and (ii) for the two principal directions of the structure [32]. The results of the analyses have shown that these bridges are not affected by bending failure of the piers (i.e., risk indexes larger than one) but are quite vulnerable with respect to shear-brittle damage. This result reflects the lack of construction details of these types of bridges, which were constructed in the post-WWII period. These results are not only useful to define the correct seismic retrofitting interventions to be implemented, but are also important decision-making parameters for bridge management, investment prioritization and loss assessment at regional scale. | 10,609.2 | 2020-06-26T00:00:00.000 | [
"Engineering"
] |
Classification of SARS-CoV-2 sequences as recombinants via a pre-trained CNN and identification of a mathematical signature relative to recombinant feature at Spike, via interpretability
The global impact of the SARS-CoV-2 pandemic has underscored the need for a deeper understanding of viral evolution to anticipate new viruses or variants. Genetic recombination is a fundamental mechanism in viral evolution, yet it remains poorly understood. In this study, we conducted comprehensive research on the genetic regions associated with genetic recombination features in SARS-CoV-2. With this aim, we implemented a two-phase transfer learning approach using genomic spectrograms of complete SARS-CoV-2 sequences. In the first phase, we utilized a pre-trained VGG-16 model with genomic spectrograms of HIV-1, and in the second phase, we applied the HIV-1 VGG-16 model to SARS-CoV-2 spectrograms. The identification of key recombination hot zones was achieved using the Grad-CAM interpretability tool, and the results were analyzed by mathematical and image processing techniques. Our findings unequivocally identify the SARS-CoV-2 Spike protein (S protein) as the pivotal region in the genetic recombination feature. For non-recombinant sequences, the relevant frequencies clustered around 1/6 and 1/12. In recombinant sequences, the sharp prominence of the main hot zone in the Spike protein prominently indicated a frequency of 1/6. These findings suggest that in the arithmetic series, every 6 nucleotides (two triplets) in S may encode crucial information, potentially concealing essential details about viral characteristics, in this case, the recombinant feature of a SARS-CoV-2 genetic sequence. This insight further underscores the potential presence of multifaceted information within the genome, including mathematical signatures that define an organism’s unique attributes.
Introduction
The evolution of viruses poses a significant challenge in pandemic control. Two main mechanisms are responsible for the high rate of viral evolution: mutation and genetic recombination.
Both mechanisms occur with great frequency during viral replication [1,2]. Mutations introduce random errors into the genetic material of a virus, resulting in genetic variants of the same virus [3]. Genetic recombination is the exchange of genetic information between two viral genomes of the same or different viruses, resulting in a new hybrid genome [4,5]. Both processes can generate variants that hinder the prevention and treatment of infectious diseases. Recombination can occur in many RNA viruses, having been detected with high frequency in picornaviruses [6], coronaviruses [7,8], and retroviruses [9]. Recombinant viruses represent an enigma in the realm of infectious diseases, as their consequences can vary widely. In some instances, genetic recombination may result in viruses showing no significant changes in their behavior or pathogenicity, or remaining relatively harmless. However, in other scenarios, this recombination can lead to the emergence of new viral strains with unique characteristics, such as increased transmission capacity or virulence [10-16]. Even on rare occasions, genetic recombination has been the cause of failure of attenuated virus vaccines [17,18]. The genetic recombination that occurs among different viruses, as observed in SARS [19] and MERS [20], can have significant implications for viral evolution and its impact on public health [21].
SARS-CoV-2, short for Severe Acute Respiratory Syndrome Coronavirus 2, is a novel coronavirus that emerged in late 2019 and quickly spread to become a global pandemic [22]. This highly contagious virus is responsible for the Coronavirus Disease 2019 (COVID-19), characterized by a range of symptoms from mild respiratory issues to severe pneumonia and, in some cases, death [23]. According to the World Health Organization (WHO), as of September 1st, 2023, COVID-19 has caused 770,437,327 confirmed cases, including 6,956,900 deaths [24].
The recombination events detected in SARS-CoV-2 have occurred solely between different lineages of the virus, with no substantial modifications in terms of morbidity, mortality, etc. [31]. Most of these recombinants have occurred between co-circulating Omicron sublineages, such as BA.1 (or BA.1.1), and the Delta variant or BA.2 [32]. However, recombination between Omicron and Delta could potentially result in a virus with Omicron's transmissibility and Delta's potentially increased risk of severe illness, leading to a new and more concerning scenario for public health [33].
Understanding genetic recombination phenomena holds paramount importance in anticipating and addressing pandemics, as well as in the surveillance and control of the emergence of new viruses or variants [34].
Deep Learning tools can provide new insights into the study of genetic recombination [35,36] and the anticipation of new pandemics or emerging viruses [37]. By analyzing large genomic datasets and identifying patterns, Deep Learning tools can detect signs of genetic recombination in viruses more quickly than traditional methods. In this way, we can not only reduce response times to emerging viruses but also anticipate their emergence. Likewise, we can unravel the mysteries of the genetic code from new perspectives, such as the search for mathematical patterns within the genome itself.
SARS-CoV-2 complete genomic sequences compendium
We downloaded the complete collections of SARS-CoV-2 sequences by variant from the NCBI Virus Database (National Center for Biotechnology Information, Virus Database), in March 2023. Out of a total of 1,541,293 sequences, 1,539,728 were assigned as non-recombinant and 1,565 as recombinant, and their variant distributions are detailed in Tables 1 and 2, respectively.
The Variants of Concern (VOCs) began to emerge around November 2020, with the Alpha variant being the most prominent at that time [38]. For classification purposes, we refer to variants identified between January 2020 and November 2020 as pre-VOC variants. Table 1 shows the compilation of non-recombinant sequences by variants (pre-VOC and VOC).
The prevalent variants are, first and foremost, the collection known as pre-VOC, with 218,198 sequences, followed by Alpha, with 198,722 sequences, Delta, with 325,285 sequences, and, above all, Omicron, along with all its sub-variants, totaling 748,635 variant sequences. The impact of the remaining variants has been more limited, due to the dominance of those prevalent ones that gained an evolutionary advantage. Therefore, the rest of the variants represent only 48,888 sequences.
Table 2 shows the SARS-CoV-2 compilation of recombinant sequences by variants. The compilation of recombinant sequences is more balanced than in the case of non-recombinants, with a slight prevalence of XBB sub-variants over the others. (Notes to Table 1 [39]: the total of 1,539,728 sequences corresponds to an approximate date of March 2023, the Release Date of the NCBI Virus Database. The variants are sorted by the approximate date of appearance, with data obtained from the GISAID Initiative's tracking of hCoV-19 variants [40]. The column "Variant" indicates the WHO name of the SARS-CoV-2 variant, the column "No." indicates the total number of downloaded complete sequences, and the "Percentage" column indicates the percentage that this number represents out of the total downloaded sequences.)
Dataset design
The prevalent variants throughout the SARS-CoV-2 pandemic (and its worst moments) were non-recombinants [42]. Primarily for this reason, the number of non-recombinant variant sequences is substantially greater than that of recombinants. In response to this disparity, we opted to implement a subsampling technique in the larger non-recombinant dataset. This strategy involves selecting a random, representative subsample from the larger category, thereby equalizing the number of data points between both categories. This, in turn, helps mitigate potential biases in our analysis and enhances the validity of our results [43]. We randomly selected 1,565 non-recombinant sequences to work with a balanced dataset. To ensure the generalization of our results, we performed a significant and sufficient number of different subsamplings. In this case, we performed 10 subsamplings, labeled with sequential numbers from 01 to 10 (SUB_01-SUB_10).
Once the subsampling of non-recombinant sequences was completed, they were randomly distributed among the Training, Validation, and Test sets [44], as detailed in Table 3, which illustrates the structure of each dataset generated by subsampling.
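A minimal Python sketch of this balanced subsampling and the 60/20/20 split described for Table 3 is given below; the sequence identifiers and helper names are hypothetical, and only the counts (1,565 sequences per category, ten subsamplings) follow the text:

```python
# Sketch of balanced subsampling (non-recombinant pool downsampled to the recombinant count)
# followed by a random 60/20/20 split into Training, Validation, and Test sets.
import random

def make_subsampling(non_recomb_ids, recomb_ids, seed):
    random.seed(seed)
    sampled_non = random.sample(non_recomb_ids, k=len(recomb_ids))  # 1,565 of the full pool
    return sampled_non, list(recomb_ids)

def split_60_20_20(ids, seed):
    random.seed(seed)
    ids = ids[:]                      # copy before shuffling
    random.shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

if __name__ == "__main__":
    non_recomb_ids = [f"NR_{i}" for i in range(10000)]   # stand-ins for sequence accessions
    recomb_ids = [f"R_{i}" for i in range(1565)]
    for sub in range(1, 11):                             # SUB_01 .. SUB_10
        nr, r = make_subsampling(non_recomb_ids, recomb_ids, seed=sub)
        train, val, test = split_60_20_20(nr + r, seed=sub)
        print(sub, len(train), len(val), len(test))      # 1878 / 626 / 626 sequences
```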
Generation of genomic spectrograms
The generation of spectrograms follows the procedure corresponding to the superposed spectrograms in our previous work [45], where we applied transfer learning to a pre-trained VGG-16. (Notes to Table 2 [39]: the total of 1,565 sequences corresponds to an approximate date of March 2023, the Release Date of the NCBI Virus Database. The nomenclature for recombinant variants begins with an "X" in the Pango nomenclature [41]. The column "Variant" indicates the WHO name of the SARS-CoV-2 variant, the column "No." indicates the total number of downloaded complete sequences, and the "Percentage" column indicates the percentage that this number represents out of the total downloaded sequences.)
In this spectrogram representation, the z-axis represents the summation of the values from each of the four nucleotide types. In both cases, the y-axis represents a frequency range from 0 to 0.5 Hz. The z-axis represents the spectrogram computation with an applied jet colormap scale. Lower values are represented in blue with a progressive scaling towards the color red, which represents higher values, transitioning through intermediate colors in the range of greens, yellows, and oranges.
We generated spectrograms of both datasets using Python, the Scipy library, scipy.signal.spectrogram. The length of the SARS-CoV-2 genome is approximately three times greater than that of the HIV-1 genome, and this contrast is evident in the spectrogram due to the fixed value of 256 used as the length of each time segment for FFT calculation (nperseg). In the spectrogram of SARS-CoV-2, the x-axis is three times longer than in the case of HIV-1, resulting in smaller color points along the z-axis [48].
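The sketch below illustrates how such a superposed genomic spectrogram could be computed with scipy.signal.spectrogram and nperseg = 256; the binary indicator (Voss-type) encoding of the four nucleotides and the summation of the four resulting spectrograms are assumptions consistent with the description above, not the authors' exact code:

```python
# Hedged sketch of a superposed genomic spectrogram (encoding scheme assumed).
import numpy as np
from scipy.signal import spectrogram

def genomic_spectrogram(seq: str, nperseg: int = 256):
    seq = seq.upper()
    total = None
    for base in "ACGT":
        indicator = np.array([1.0 if s == base else 0.0 for s in seq])
        f, t, sxx = spectrogram(indicator, fs=1.0, nperseg=nperseg)  # f spans 0-0.5
        total = sxx if total is None else total + sxx                # superposition by summation
    return f, t, total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_seq = "".join(rng.choice(list("ACGT"), size=30000))  # roughly SARS-CoV-2 genome length
    f, t, sxx = genomic_spectrogram(toy_seq)
    print(sxx.shape, f.min(), f.max())   # frequency axis runs from 0 to 0.5 cycles/nucleotide
```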
In both cases, the horizontal line at f = 1/3 is clearly visible. This observation may be related to the conversion of three nucleotides into a single amino acid in the coding regions of the genome [49]. It is expected that this line will be sharp in these coding regions but less pronounced in non-coding regions. In the case of viruses, a significant portion of the genome is coding [50]. Hence, this line is perfectly visible throughout nearly the entire genomic spectrogram. The appearance of this line at f = 1/3 is indicative of a correct generation of genomic spectrograms.
This graphical representation of the genome in the frequency spectrum allows for a more accurate identification of the genome regions crucial for the recombinant feature.
Two-stage transfer learning
We performed a two-stage transfer learning process as indicated in Fig 3. The first of these stages started from the network derived from [45] for superposed spectrograms.
All the experiments were performed using the MATLAB2021b App Deep Learning Designer.
Test bench
We conducted a Test Bench on each of the 10 subsamplings, evaluating hyperparameter values with a fixed Learning Rate of 0.0001, a fixed Batchsize of 52, and varying the number of Epochs at 10, 25, 50, 75, 100, 150, 200, and 250.
Consequently, we conducted a total of 80 experiments, resulting in 80 VGG-16 networks trained through Two-Stage Transfer Learning.
We conducted preliminary tests with a Batchsize of 128 and a Learning Rate of 0.01, which yielded suboptimal results. Consequently, we found that the optimal values for Batchsize and Learning Rate in this second stage of transfer learning align with those used in the first stage (HIV-1).
Performance measurement
As performance metrics, alongside Validation Accuracy and Test Accuracy, we calculated the Area Under the Curve (AUC) and the Confusion Matrix on the Test Set for all experiments.
Our criterion for determining the optimal configurations was based on selecting those that, with remarkable values of AUC and Validation Accuracy, not only achieve the highest Test Accuracy but also maintain a balance in both categories (recombinant and non-recombinant) [51].
We calculated all performance measurements using MATLAB2021b.
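The metrics themselves were computed in MATLAB; an equivalent hedged sketch in Python with scikit-learn, using hypothetical network scores, would look like this:

```python
# Illustrative computation of AUC and the confusion matrix for the two-class problem
# (1 = recombinant, 0 = non-recombinant); the scores below are made up for the example.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])  # output for class "recombinant"
y_pred = (y_score >= 0.5).astype(int)

print("AUC:", roc_auc_score(y_true, y_score))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))  # rows: true class, cols: predicted
```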
Interpretability analysis
We applied interpretability techniques to discern, via heatmaps, the critical influences on the outputs of each of the generated models. Our tool of choice for pinpointing the regions of the genome where the network looks to make decisions is Grad-CAM [52]. It offers greater visual clarity compared to other tools like LIME [53] or Gradient Attribution [54], albeit at the cost of some precision. The color scale applied to these heatmaps is a jet map, whose color distribution based on the value of the scoremap at each point is shown in Fig 4. To process the Grad-CAM results, we performed a three-step image processing to progressively determine the relevant Total Hot Zones in the recombinant feature. In the first step, we obtained the scoremaps for each sequence, in each subsampling, and for each hyperparameter configuration. In the second step, we calculated the total hot zones per category in each subsampling and for each hyperparameter configuration. In the third step, we calculated the total scoremap image per category for each hyperparameter configuration, considering the ten subsamplings. The result of this third step represents the relevant hot zones across the set of subsamplings by category.
For the calculation of the images resulting from Steps Two and Three, we applied two different techniques, which were determined by their input data.For the calculation of the Total Hot Zones in Step Two, we processed the scoremaps (numerical matrices corresponding to each of the sequences) and summed the numerical values at each position in the matrix.
The scalar summation of Grad-CAM (Class Activation Maps) and the generation of an average heatmap are two different approaches to summarize and visualize the importance of regions of interest in the images. Each approach has specific advantages.
The arithmetic summation (Step Two) allowed us, in a straightforward and computationally efficient manner, to calculate the total hot zone for each of the 80 experiments conducted, without any inherent loss of image processing accuracy. Subsequently, applying the same color map to this resulting numerical matrix allowed us to obtain the total hot zone image per category for each subsampling and each experiment in the test bench.
This way, we obtained a clear and representative heatmap of the hot zones in the dataset. For example, the total hot zones per category in subsampling 05 for 200 Epochs are shown in Figs 6 and 7. In Step Three, we generated composite hot zone images for each hyperparameter configuration across the 10 subsamplings. These images represent the arithmetic mean of the hot zones obtained from each subsampling, i.e., the element-wise sum of the per-subsampling total hot zones divided by n, where n represents the number of subsamplings, in this case, 10.
Step Three visually represents the common and relevant hot zones for the recombinant feature considering the 10 subsamplings. Since we only had 10 images per category (non-recombinant and recombinant) in the initial data, we implemented image processing techniques to calculate a weighted average of each pixel in the image set, creating a comprehensive total image of the hot zones across the 10 subsamplings. This technique allowed us to diminish the significance of noisy or atypical regions in the individual maps, achieving a generalized view of the important areas in each category.
The application of both image processing techniques enabled us to attain a more comprehensive view of the total hot zones throughout the whole process.
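A minimal numpy sketch of the two aggregation steps (Step Two: element-wise sum of the scoremaps of one category; Step Three: mean over the ten per-subsampling totals) is given below; the array sizes are illustrative only:

```python
# Sketch of the hot-zone aggregation; real scoremaps would come from Grad-CAM, one per test sequence.
import numpy as np

def total_hot_zone(scoremaps):
    """Step Two: element-wise sum of the per-sequence scoremaps of one category."""
    return np.sum(np.stack(scoremaps, axis=0), axis=0)

def average_hot_zone(per_subsampling_totals):
    """Step Three: arithmetic mean of the per-subsampling total hot zones (n = 10)."""
    return np.mean(np.stack(per_subsampling_totals, axis=0), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    maps = [rng.random((224, 224)) for _ in range(313)]   # one scoremap per test sequence (illustrative size)
    # In the real pipeline each subsampling has its own set of maps; here the same toy set is reused.
    totals = [total_hot_zone(maps) for _ in range(10)]
    print(average_hot_zone(totals).shape)                 # (224, 224)
```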
After obtaining the total hot zone in Step Three, we modified the color scale to visually enhance the hot zones. We achieved this by normalizing the average matrix so that the lowest value equals 0 and the highest equals 255, as follows [55]: Normalized Matrix = 255 × (Avg.Matrix − Avg.Matrix Min.) / (Avg.Matrix Max. − Avg.Matrix Min.), where Avg.Matrix represents the resulting average matrix from Step Three, Avg.Matrix Min. represents the minimum value contained in Avg.Matrix, and Avg.Matrix Max. represents the maximum value. Finally, in one last adjustment, we generated the negative of the resulting matrix so that the minimum value appears as a light color and the maximum as a dark color. All of this was done with the aim of enhancing visualization and highlighting the location of the total hot zones per number of epochs [56].
All of these processes were conducted using MATLAB 2021b along with Python library functions, utilizing cv2 for image processing and numpy for multidimensional array manipulation and algebraic operations [57].
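A short sketch of the enhancement described above, using numpy and cv2 as mentioned in the text (the output file name is illustrative), could read:

```python
# Min-max normalization of the average hot-zone matrix to 0-255 followed by inversion (negative).
import numpy as np
import cv2

def enhance(avg_matrix: np.ndarray) -> np.ndarray:
    # Assumes the matrix is not constant (min != max).
    norm = 255.0 * (avg_matrix - avg_matrix.min()) / (avg_matrix.max() - avg_matrix.min())
    negative = 255.0 - norm                  # low values become light, high values dark
    return negative.astype(np.uint8)

if __name__ == "__main__":
    avg = np.random.default_rng(2).random((224, 224))
    cv2.imwrite("enhanced_hot_zone.png", enhance(avg))
```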
Results per subsampling
S1 Appendix includes the complete set of results per subsampling in terms of performance. The performance metrics are detailed in Section 2.6. Those configurations (specified by the number of epochs) that yielded best results, meaning highest test accuracy values and a more balanced distribution of hit rates between the two categories, are highlighted in green.
We evaluated the balance between the two categories by computing the Standard Deviation (SD) between the test accuracy values for recombinants and non-recombinants.
Therefore, we considered the optimal configurations to be those with the highest hit rate in the test set and the most balanced distribution (lower SD between the test accuracies of both categories). Table 4 summarizes the best configurations for each of the 10 generated subsamplings. These are the ones that exhibit the highest test accuracy values with a greater balance between both categories.
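The selection rule can be summarized in a few lines of Python; the accuracy values below are hypothetical and only illustrate the criterion (highest mean test accuracy, lowest inter-category SD):

```python
# Sketch of the configuration-selection criterion (illustrative accuracy values).
import numpy as np

configs = {
    150: (0.958, 0.934),   # (non-recombinant accuracy, recombinant accuracy), hypothetical
    200: (0.949, 0.943),
    250: (0.961, 0.922),
}

def score(acc_pair):
    mean_acc = np.mean(acc_pair)
    inter_category_sd = np.std(acc_pair)   # balance between the two categories
    return mean_acc, -inter_category_sd    # maximize accuracy, then minimize SD

best_epochs = max(configs, key=lambda e: score(configs[e]))
print(best_epochs)   # 200: similar mean accuracy but the most balanced categories
```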
Results per number of epochs
Complete results are provided in S2 Appendix. Table 5 summarizes the most relevant data. The configurations corresponding to 10 and 25 Epochs yielded deficient results in terms of test accuracy in both categories, and the results are unbalanced, making them inappropriate configurations due to the insufficient training with such low values of the number of epochs. The qualitative advantage of 200 Epochs over 150 or 250 is its higher degree of balance between the test accuracy of both categories. Therefore, although these three configurations exhibit high hit rates on the test set, in the case of 200 Epochs, the minimum value of Inter-Category SD was achieved. In the case of non-recombinants, the main hot zones are more diffuse, as opposed to the greater sharpness observed in recombinants. For the latter, all configurations clearly converge towards a single area.
We processed 17,215 complete sequences of SARS-CoV-2, utilizing virtually all available complete recombinant sequences at the beginning of the experimentation. We are aware that handling 10 subsamplings of the total non-recombinant pool involved processing only approximately 1% of the available non-recombinant sequences. Nevertheless, the results obtained are significant, especially in the recombinant category, indicating that it is a representative sample. Our results confirm this point.
Optimal configuration selection
The configurations with the highest number of correct predictions in the test set are 150 epochs and 200 epochs, with a total test accuracy in both cases of 94.60%. At similar test accuracy, as we discussed in previous sections, the decision criteria cannot be based solely on the mere measure of total accuracy in a single category or in both categories combined. We require balanced results, hence the need to include the inter-category SD in the decision criteria. In this case, the 200 Epochs configuration achieves remarkable accuracy rates in both categories (see Fig 11), and the relative difference in absolute terms is minimal. Indeed, the inter-category SD is the lowest (see Fig 12).
Reference sequence
To identify the genomic regions where the hot zones are located, we relied on the Severe Acute Respiratory Syndrome Coronavirus 2 isolate Wuhan-Hu-1, complete genome, NCBI Reference Sequence: NC_045512.2 [58]. The location of each structural, non-structural, and accessory protein is indicated in Table 6. Based on the consensus reference sequence, we constructed a scaled graphical representation of the SARS-CoV-2 genome, which will serve as a pivotal tool for the precise identification of regions involved in the recombinant feature.
Analysis of non-recombinant results
Once it is established that the optimal configuration corresponds to 200 Epochs, the next step is to identify the high-impact hot zones for classifying a sequence as non-recombinant. To do this, we calculate the overall average image (Step 3), the enhanced image, and the localization of the epicenters of the main hot zones on the x-axis (indicating genomic region involvement) and the y-axis (frequency range identification), all in accordance with the guidelines outlined in Section 2.7. In a comparison between the main hot zones using a representative scale of the SARS-CoV-2 genome, it becomes evident that the two primary hotspots are located in the vicinity of the Spike protein. See Fig 17. Therefore, even though the areas are subtle, we can observe that the main decision regions are at the S protein for the frequencies f = 1/12 and f = 1/6.
Analysis of recombinant results
In the case of recombinant SARS-CoV-2 sequences, the main hot zone, where the CNN looks to detect the recombinant feature, is clearly delineated. Regarding the vertical axis, whose total range is 0.5 Hz, as interpreted from the result in Fig 21, the preliminary identification of the epicenter of these total hot zones is located around 0.183 Hz, that is, around f = 1/6.
Despite Grad-CAM's imprecise interpretability, and considering that the location of each protein may vary depending on the variant and inherent sequence variability, the hot zone aligns closely enough to infer that the neural network is focusing on the S protein to identify the recombinant feature within the sequence. See Fig 22.
Using our methodology, we determined that the main hot zone is clearly located in the S protein. The use of this methodology allowed us to pinpoint the areas of the genomic spectrogram image where the pre-trained CNN "looks" to classify a sequence as recombinant or non-recombinant.
At this point, we must make a distinction between the results obtained in our research and the fact that genetic recombination occurs in the S protein. In the research conducted on HIV-1 [45], the mathematical signature embedded in the genome that caused the pre-trained CNN to classify a sequence as recombinant or non-recombinant was predominantly located in areas near the LTRs at a frequency of f = 1/3, regardless of the genomic regions where genetic recombination actually occurs between the different pure subtypes of HIV-1.
In the case of SARS-CoV-2, the location of this mathematical signature was detected in the same region where multiple genetic recombination events occurred. That is, the S protein.
The coincidence in location between the mathematical signature detected by the pre-trained CNN and the fact that the S protein is where abundant genetic recombination events occurred in SARS-CoV-2 should be studied in future investigations to determine whether there is any relationship between these factors, as well as to unravel the significance of this phenomenon from a biological perspective. Considering that the y-axis range is 0-0.5 Hz, the epicenter of the hot zone equates to a frequency of about 0.18 Hz. Given the limited accuracy of Grad-CAM and the additive errors in the successive mathematical transformations performed in calculating the total hot zones, it is not unreasonable to consider the vertical epicenter at frequencies close to f = 1/6. Even in the most reliable verification (the sharpest case across the test bench, shown in Fig 23), the hot zone is located at frequencies close to f = 1/6, so it is a plausible hypothesis to consider this frequency as influential in determining the recombinant feature.
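A worked check of this frequency reading, using the axis length and epicenter position reported for Fig 23, is:

```python
# Converting the vertical epicenter position of the hot zone (in image points) to a frequency.
y_axis_points = 534.5        # total length of the y-axis in the scoremap image
freq_range_hz = 0.5          # the y-axis spans 0-0.5 Hz
epicenter_points = 178       # vertical position of the hot-zone epicenter

frequency = epicenter_points / y_axis_points * freq_range_hz
print(round(frequency, 3))   # 0.167, i.e., approximately f = 1/6
```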
Conclusions
Using genomic spectrograms with 10 random subsamplings to address the disparity in size between non-recombinants and recombinants, we designed a test bench to elucidate the optimal hyperparameter configurations. We applied transfer learning in 2 phases using a pre-trained VGG-16 model on the ImageNet dataset. Phase 1 was focused on HIV-1 genomic spectrograms, and Phase 2 on those of SARS-CoV-2. All of this with the goal of detecting the recombinant feature of a SARS-CoV-2 genomic sequence. Subsequently, we applied the Grad-CAM interpretability tool in 3 steps to identify the hot zones (where the CNN looks for classification) in each sequence, in each subsample for every configuration, and in total in each configuration. We applied image processing techniques to enhance the localization of the hot zones. These 3 steps involve not only the mere application of Grad-CAM but also the mathematical processing of its results to extrapolate the obtained outcomes. The image processing techniques used allowed us to delineate the relevant areas for the recombinant feature as clearly as possible.
We obtained consistent and well-defined results in each category.In the case of SARS-CoV-2, the spike protein emerges as a determinant in both recombinant and non-recombinant categories.
The evident significance of the S protein in identifying the recombinant feature in SARS-CoV-2 aligns with the excellent research conducted by Nikolaidis et al. [59]. They uncovered multiple instances of double crossover genetic recombination events across various CoVs, and interestingly, the majority of these events are precisely located within this protein. Therefore, our work in a way reinforces their results by means of a different approach.
In the case of the non-recombinants, the hot zones (Step 3) are more diffuse, although they appear to pivot around the area of the spike protein within the frequency range of f = 1/12 and f = 1/6.
Nevertheless, the clarity of the main hot zones in Steps 2 and 3 is particularly striking in the case of recombinant sequences. A region corresponding to the Spike protein is clearly elucidated, at an approximate frequency of f = 1/6.
By utilizing Deep Learning tools, with their high potential in pattern recognition in images [62,63], we were able to identify the determinant regions in the recombinant feature of genomic spectrograms of SARS-CoV-2, achieving high test accuracy and robust, distinguishable hot zones in both categories.
Future research
In summary, we detected a mathematical signature that characterizes a genomic sequence of SARS-CoV-2 as recombinant. This signature is located in the S protein, with its epicenter at a frequency of f = 1/6. Consequently, the location of this mathematical signature is related to a nucleotide periodicity of 6, meaning that in the arithmetic series, every 6 nucleotides (two triplets) in S may encode crucial information related to the recombinant feature in SARS-CoV-2.
We know where the CNN looks to classify a SARS-CoV-2 sequence as recombinant. Now, we want to understand what it sees. What is the mathematical pattern embedded in the frequency spectrum of the genome of the Spike protein that causes a sequence to be classified as recombinant?
Our future research should focus on determining not only the formulation of this mathematical signature embedded in the genome but also its biological significance.
Another interesting line of research would be to determine the relationships between the dispersion detected in the hot zones in the non-recombinant category with the abundance and phylogenetic diversity of the set of non-recombinant variants in SARS-CoV-2.
In light of the results obtained, the identification of mathematical signatures in the virus genome through genomic spectrogram analysis opens up new avenues to investigate potential functions associated with these mathematical patterns.
Fig 1
Fig 1 shows an example of the genomic spectrogram of HIV-1, and Fig 2 shows that of SARS-CoV-2. The length of the x-axis matches the genome length. In both cases, the y-axis represents a frequency range from 0 to 0.5 Hz. The z-axis represents the spectrogram computation with an applied jet colormap scale. Lower values are represented in blue with a progressive scaling towards the color red, which represents higher values, transitioning through intermediate colors in the range of greens, yellows, and oranges. We generated spectrograms of both datasets using Python, the Scipy library, scipy.signal.spectrogram. The length of the SARS-CoV-2 genome is approximately three times greater than that of the HIV-1 genome, and this contrast is evident in the spectrogram due to the fixed value of 256 used as the length of each time segment for FFT calculation (nperseg).
Fig 3 .Fig 4 .
Fig 3. Two-stage transfer learning methodology. We started with a pre-trained VGG-16 using the ImageNet dataset. In Phase 1, we applied transfer learning to the genomic spectrogram dataset of complete HIV-1 sequences to detect the recombinant feature. In Phase 2, we applied transfer learning once again to the resulting network from Phase 1 (VGG-16 HIV-1) using a genomic spectrogram dataset of complete SARS-CoV-2 sequences to also detect the recombinant feature (VGG-16 SARS-CoV-2). https://doi.org/10.1371/journal.pone.0309391.g003 Fig 6 shows the total hot zones in the case of non-recombinants and Fig 7 in the case of recombinants.
Fig 5 .
Fig 5. Three-step interpretability. Considering that we conducted 80 experiments (test bench applied to 10 subsamplings), the first step involved a total of 50,080 images, taking into account that each complete test set contains 626 sequences. The second step involves 160 images (80 experiments and 2 categories). And the third step involves a total of 16 images per category and number of Epochs. https://doi.org/10.1371/journal.pone.0309391.g005
Fig 8
graphically represents the confusion matrix scheme outlined in Table
Fig 9
displays the summary of the total hot zones per category for each subsampling by number of epochs (Step Two). From the images shown in Fig 9, we generated the corresponding images for Step Three, that is, the weighted average hot zones for each configuration. We omitted 10 and 25 Epochs as their performance ratios were not suitable, possibly due to insufficient training. Fig 10 displays the weighted average hot zones for each configuration (Step Three) and their enhanced counterparts.
Fig 8 .
Fig 8. Confusion matrix scheme. The top row corresponds to the Non-recombinant Category, and the bottom row to the Recombinant Category. https://doi.org/10.1371/journal.pone.0309391.g008
Fig 10 .Fig 11 .
Fig 10. Total hot zones per configuration. The average hot zones represent the hot zones for each number of epochs across the 10 subsamplings. The enhanced figures are the average hot zones with color scale modifications to clarify the relevant hot zones in each category. https://doi.org/10.1371/journal.pone.0309391.g010
Figs 13 -
Figs 13-16 show the graphical analysis of the Total Hot Zones (Step 3) in Non-recombinants for 200 Epochs. As seen in Fig 13, non-recombinant sequences do not exhibit a distinct hot zone, and the boundaries of hot zones are somewhat blurred. This phenomenon could be attributed to the greater diversity of sub-lineages and strains among non-recombinant variants, resulting in increased variability due to subsampling. As can be seen more clearly in Fig 14, subtle hotspots are hinted at around the Spike protein region. A potential third hotspot may exist towards the end of the genome, although its relevance appears to be less pronounced. By direct extrapolation to the calculations shown in Fig 15, the epicenters of the main hot zones are situated at nucleotide positions between 23,341 and 23,362. The central position of the Spike protein (S) corresponds to nucleotide 23,473. Therefore, we can place the epicenters of both zones in the central region of the S protein. By directly extrapolating from the calculations shown in Fig 16, regarding the vertical axis, considering its range is 0-0.5 Hz, the critical frequencies fall approximately at f = 1/12 and f = 1/6 respectively. In a comparison between the main hot zones using a representative scale of the SARS-CoV-2 genome, it becomes evident that the two primary hotspots are located in the vicinity of the Spike protein. See Fig 17. Therefore, even though the areas are subtle, we can observe that the main decision regions are at S protein for the frequencies f = 1/12 and f = 1/6.
Figs 18-21 compile the graphical analysis of the Total Hot Zones (Step 3) in Recombinants for 200 Epochs. The sharpness of the total hot zone shown in Figs 18 and 19 denotes a prevalence of this hot spot across all subsamplings for 200 epochs.
Fig 17 .Fig 18 .
Fig 17. Localization of hot zones in non-recombinant sequences. Positioned in relation to a true-to-scale schematic representation of the composition of the SARS-CoV-2 genome. All notable proteins are appropriately marked. https://doi.org/10.1371/journal.pone.0309391.g017
Fig 23 .
Fig 23. Vertical positioning at the 200 Epochs configuration, subsampling 06. Indeed, given that the total length of the y-axis is 534.5 points, with a range of 0-0.5 Hz, the epicenter's position at 178 points precisely corresponds to 0.167, a value aligning with f = 1/6 in the sharpest case across the entire test bench. https://doi.org/10.1371/journal.pone.0309391.g023
Table 1 . Non-recombinant SARS-CoV-2 sequence compendium.
Downloaded from the NCBI Virus Database (National Center for Biotechnology Information, Virus Database)
Table 2 . Recombinant SARS-CoV-2 sequence compendium.
Downloaded from the NCBI Virus Database (National Center for Biotechnology Information, Virus Database)
Table 3 .
Dataset structure. We conducted all experiments using three balanced datasets between both categories, allocating 60% to the Training Set, 20% to the Validation Set, and 20% to the Test Set. https://doi.org/10.1371/journal.pone.0309391.t003
Table 4 .
Best results per subsampling. The best configurations feature 200 Epochs in 80% of the subsamplings.
Table 5 . Best results per number of epochs.
We include the mean value and standard deviation of AUC, the mean Test Accuracy for non-recombinant and recombinant sequences, and the standard deviation between the mean test accuracy values of recombinants and non-recombinants. For reference when reading the accuracy figures, the total number of sequences in each category within the test set is 313.
Fig 9. Summary table of total hot zones (Step Two). N stands for the non-recombinant category, and R stands for the recombinant one. https://doi.org/10.1371/journal.pone.0309391.g009
Table 6 . Location of protein coding regions in the SARS-CoV2 Wuhan-Hu-1 reference genome sequence (NC_045512.2) [58]. The
"Beginning" column specifies the first nucleotide of the corresponding protein, while the "End" column indicates the last nucleotide. | 7,271.6 | 2024-08-26T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Numerical Investigation of the Coastal Atmosphere and Ocean at Meaípe
The monazite sands of Meaípe attract great interest because of their ionizing radiation. This radiation is inert, but its intensity varies in time and space and may be influenced by the atmospheric conditions. The present study performed numerical simulations with the atmospheric model Weather Research and Forecasting to investigate the atmospheric dynamics of this coastal region. The simulations refer to the period from 01 to 07 January 2019 and include the emission of an inert tracer at a constant rate to represent the dispersion and transport of the ionizing radiation by the atmospheric circulations. During the simulated period, the predominant wind direction was from north-northeast. Topography has a substantial influence on the local atmospheric circulation, and the influence of the sea breeze circulation was also noticed. The ionizing radiation is not horizontally transported over a large area, staying confined to a region near its emission. Its vertical dispersion is usually limited by the top of the planetary boundary layer.
Introduction
Meaípe, a beach in the city of Guarapari, Espírito Santo state, Brazil, is of great interest because of its monazite sand, which emits ionizing radiation. Once the radiation is present in the air, it can be transported and dispersed by atmospheric circulations; it is therefore necessary to investigate the spatial and temporal evolution of these circulations, particularly the evolution of the planetary boundary layer (PBL).
A meteorological tower was recently installed at Meaípe to record meteorological and soil variables (http://www.iag.usp.br/meteo/liam/data/MEA01/MEA01_RT.html). However, these measurements are taken at a single point. A numerical simulation is able to represent the three-dimensional flow within a given volume of interest and is a valuable tool to help understand the interaction among different scales of atmospheric flow, as well as the interaction between the atmosphere and the surface.
The Weather Research and Forecasting (WRF) model has been successfully used to simulate atmospheric circulations in coastal areas [1]. It is an open-source model designed for realistic simulations. The model also allows the investigation of the transport and dispersion of inert tracers. Therefore, it is a suitable tool to investigate the influence of atmospheric circulations on the spatial and temporal variability of the monazite sand ionizing radiation at Meaípe.
Methodology
The simulations were performed using the computational numerical atmospheric mesoscale model WRF version 3.8.1. WRF is a three-dimensional Eulerian model, fully compressible, nonhydrostatic, with sigma (terrain-following) coordinates in the vertical direction [2]. At the atmospheric surface layer, the model uses the Monin-Obukhov similarity theory. It also uses the Noah Land Surface Model [3] with 4 soil layers.
A series of combinations of parameterization schemes for different processes is possible. The options used in this work are: the Yonsei University PBL scheme, Grell 3D cumulus scheme, RRTM longwave scheme, and Dudhia shortwave scheme.
The necessary input data are: topography of the area (Fig. 1), obtained from satellite pictures; land/vegetation type or land use (Fig. 2), also obtained from satellite pictures; and initial and boundary meteorological data, obtained from the ERA-Interim global reanalysis [4], including sea surface temperature every 6 hours.
Figure 1: Topography of the innermost domain.
It is also necessary to define the domains: 4 nested domains were used, all with 100 x 100 grid points in both horizontal directions, with grid spacing of 62.5 km, 12.5 km, 2.5 km, and 500 m from the outermost to the innermost domain. Vertical levels have the smallest grid spacing near the surface, increasing towards the top of the atmosphere; 38 levels were used here. To represent the dispersion and transport of the ionizing radiation, a tracer emission was set up in the WRF model. Some grid cells at the Meaípe coast emitted a tracer at a constant rate, and this tracer was dispersed and transported by the atmospheric circulation.
Figure 3 shows the time series of near-surface variables: air temperature and relative humidity at 2 m height and wind speed and direction at 10 m. Temperature (Fig. 3a) and humidity (Fig. 3b) show a diurnal pattern compatible with coastal areas: temperature (humidity) increases (decreases) rapidly after sunrise and then decreases (increases) until the following day, probably because of the influence of colder and moister marine air brought by the sea breeze. Wind direction (Fig. 3c) also shows a sea-breeze pattern, with a stronger easterly component during the late morning and the afternoon. During the simulated period, wind direction was almost always from NNE. Wind speed (Fig. 3d) is relatively high; however, the model tends to overestimate wind speed. The evolution of the spatial pattern of the surface variables during the day for the innermost domain is presented in Figure 4. Air temperature increases more rapidly over the continent, creating the thermal gradient that causes the sea breeze. In fact, there is a change in wind direction along the coast from 0900 LT (local time, Fig. 4a) to 1200 LT (Fig. 4b) that is the surface flow of the sea breeze circulation. From 1200 LT to 1500 LT (Fig. 4c) and to 1800 LT (Fig. 4d), the inland propagation of colder marine air, caused by the sea breeze, may be noticed along the coast.
Despite the sea breeze influence, wind direction is predominantly from NNE. At 1800 LT, the sea breeze is no longer noticeable. Topography at the northwest corner of the domain also influences the wind flow, creating a convergence over the continent near the coast, where the altitude is lower.
Figure 4: Near-surface air temperature (color scale) and wind velocity (arrows) in the innermost domain at (a) 0900 LT, (b) 1200 LT, (c) 1500 LT, and (d) 1800 LT on Jan 01.
At night (Fig. 5), from 2100 LT of the previous day to 0600 LT, the temperature slowly decreases and the wind speed over the continent also decreases, except over the northwestern corner of the domain, where the topography greatly influences the wind speed and direction, suggesting the presence of a katabatic flow, which changes from N at 2100 LT to NW at 0600 LT of the following day. However, the predominant wind direction is from NE over most of the domain, and a land breeze circulation is not noticeable.
The influence of the predominant wind direction can be noticed in Fig. 6a, which shows the average normalized tracer concentration and its transport for the whole simulation period. The ionizing radiation stays near its source, only dispersing a low-concentration plume towards the SW. In the vertical direction (Fig. 6b), the plume is usually confined within the PBL, whose height varies from 200 m to 1.25 km. Since it is a coastal area, the PBL does not reach heights at the coast as large as it may over the continent. These results indicate that the ionizing radiation is mostly confined to small areas, near its source and near the surface.
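A minimal post-processing sketch of how such a time-averaged, normalized tracer field could be computed from WRF output is shown below; the file name and the tracer variable name ("tr17_1" is WRF's conventional first passive tracer, but the actual name depends on the model configuration) are assumptions.

```python
import numpy as np
from netCDF4 import Dataset

# Assumed output file name and tracer variable; adjust to the actual WRF run.
nc = Dataset("wrfout_d04_2019-01-01_00:00:00")
tracer = nc.variables["tr17_1"][:]           # dimensions: (time, level, south_north, west_east)

# Time-average the lowest model level and normalize by the domain maximum,
# giving a field comparable to an "average normalized tracer concentration".
surface_mean = tracer[:, 0, :, :].mean(axis=0)
normalized = surface_mean / surface_mean.max()

print(normalized.shape, normalized.max())    # sanity check: max should be 1.0
```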
Discussion
Meteorological conditions during the simulated period showed a predominant NE wind flow, with local influence of the sea breeze circulation. Topography also influences the local circulation, particularly at night. The ionizing radiation concentration is significantly larger near its source and is usually confined inside the PBL; however, it was transported towards the SW by the predominant wind. The PBL height is low near the coast, influenced by the sea breeze, preventing vertical dispersion of the radiation to higher altitudes. In future work, the simulations will be compared to the measurements and may help explain the temporal and spatial variability of the ionizing radiation at Meaípe.
"Environmental Science",
"Physics"
] |
Geodesy and metrology with a transportable optical clock
The advent of novel measurement instrumentation can lead to paradigm shifts in scientific research. Optical atomic clocks, due to their unprecedented stability and uncertainty, are already being used to test physical theories and herald a revision of the International System of units (SI). However, to unlock their potential for cross-disciplinary applications such as relativistic geodesy, a major challenge remains. This is their transformation from highly specialized instruments restricted to national metrology laboratories into flexible devices deployable in different locations. Here we report the first field measurement campaign performed with a ubiquitously applicable $^{87}$Sr optical lattice clock. We use it to determine the gravity potential difference between the middle of a mountain and a location 90 km apart, exploiting both local and remote clock comparisons to eliminate potential clock errors. A local comparison with a $^{171}$Yb lattice clock also serves as an important check on the international consistency of independently developed optical clocks. This campaign demonstrates the exciting prospects for transportable optical clocks.
and secondly the area exhibits long-term land uplift (Alpine orogeny) accompanied by a secular gravity potential variation. Furthermore, LSM lacks the metrological infrastructure and environmental control on which the operation of optical clocks usually relies. The selected location thus constitutes a challenging but realistic testbed with practical relevance.
The transportable 87 Sr lattice clock was operated in both locations, LSM and INRIM, to eliminate the need for a priori knowledge of the clock's frequency. The schematic outline of the experiment is given in Fig. 1. LSM and INRIM were connected by a 150 km noise-compensated optical fibre link (see Supplement). At LSM, a transportable frequency comb measured the optical frequency ratio between a laser resonant with the Sr clock transition at 698 nm and 1.542 µm radiation from an ultrastable link laser transmitted from INRIM. In this way, the frequency of the optical clock at LSM could be directly related to the frequency of the link laser even without a highly accurate absolute frequency reference. In addition to the optical carrier, the fibre link was used to disseminate a 100 MHz radio frequency reference signal from INRIM for the frequency comb, frequency counters, and acousto-optic modulators at LSM (see Supplement). At INRIM, a cryogenic Cs fountain clock 22 and a 171 Yb optical lattice clock 16 served as references. The connection between the clocks at INRIM and the link laser is provided by a second frequency comb.
Fig. 1. (a) The transportable 87 Sr lattice clock was placed in the LSM underground laboratory close to the France-Italy border in the Fréjus tunnel (top left). The clock was connected by a noise-compensated fibre link to the Italian national metrology institute INRIM in Torino (red line). There, a primary Cs fountain clock and a 171 Yb optical lattice clock were operated (right). At both sites, frequency combs were used to relate the frequencies of the 1 S0 - 3 P0 optical clock transitions and the 1.5 µm laser radiation transmitted through the link. After the remote frequency comparison, the transportable clock was moved to INRIM for a side-by-side frequency ratio measurement. (b) Frequency of the transportable Sr clock as seen by the INRIM Cs fountain clock (black circles, uncertainties are 1σ). The potential difference ΔU is based on the geodetic measurement. The red line shows the expected variation of the Sr clock transition frequency due to the relativistic redshift. (c) The potential difference between LSM and INRIM was also determined independently by a combination of GNSS, spirit levelling and gravimetric geoid modelling (see Methods).
Ten days after arriving at LSM in early February 2016, the first spectra of motional sidebands on the 1 S0 - 3 P0 clock transition were recorded from the 87 Sr transportable clock. The operation of the lattice clock (see Methods) was similar to the procedure described in previous works. All laser systems required for laser cooling, state preparation and trapping of the Sr atoms in the optical lattice were operated, together with the vacuum system and the control electronics, in an air-conditioned car trailer. The ultrastable interrogation laser for the Sr clock transition was placed next to the trailer in the underground laboratory to avoid its performance being degraded by vibrations induced in the trailer by its air conditioning system. The frequency comb was also operated next to the trailer.
During this first run of the apparatus in particularly challenging environmental conditions at LSM, the transportable clock operated less reliably than in initial tests before transport. 13 Interruptions were mainly caused by degradation of the light delivery setup for the first cooling stage of the magneto-optical trap (MOT) on the 461 nm 1 S0 - 1 P1 transition, which was based on a commercial semi-monolithic fibre-based light distribution system. In the first campaign, the frequency of the Sr lattice clock at LSM was measured by the Cs fountain clock at INRIM with an uncertainty of 18 × 10^-16 (see Fig. 1). With subsequent changes to the apparatus, the availability of the Sr clock was improved significantly, allowing for several hours of data taking per day after the initial setup phase was completed. With systematic uncertainties comparable to the first campaign and a fountain instability of 3.6 × 10^-13 τ^(-1/2), the total uncertainty was reduced by a factor of two to 9 × 10^-16 (see Supplement for the Sr transition frequencies). In this chronometric levelling demonstration, we resolved a relativistic redshift of the Sr lattice clock of 47.92(83) Hz* (Fig. 1), from which we infer a potential difference of 10 034(174) m²/s². This is in excellent agreement with the value of 10 032.1(16) m²/s² determined independently by geodetic means (Methods). Though our result does not yet challenge the classical approach in accuracy, it is a strong first demonstration of chronometric levelling using a transportable optical clock.
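As a plausibility check of the numbers above, the relativistic redshift relation Δν/ν = ΔU/c² can be evaluated directly; the sketch below assumes the recommended 87Sr clock transition frequency and a nominal value of g, and is not the authors' analysis code.

```python
# Back-of-the-envelope check of the chronometric levelling result quoted above.
C = 299_792_458.0            # speed of light, m/s
NU_SR = 429_228_004_229_873  # 87Sr 1S0-3P0 transition frequency, Hz (recommended value)
G = 9.81                     # nominal gravitational acceleration, m/s^2

delta_nu = 47.92             # measured redshift between LSM and INRIM, Hz
delta_u = C**2 * delta_nu / NU_SR    # gravity potential difference, m^2/s^2

print(f"potential difference ~ {delta_u:.0f} m^2/s^2")   # ~10 033, cf. 10 034(174)
print(f"equivalent height    ~ {delta_u / G:.0f} m")     # ~1 023 m
```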
With the increased reliability of the transportable Sr clock, we were also able to measure its optical frequency ratio with the Yb lattice clock 16 operated on the 1 S0 -3 P0 transition at 578 nm. In total, 31 000 s of common operation of the two optical clocks and the frequency comb were achieved over a period of 7 days. This optical-optical comparison (Fig. 2) shows much higher stability than the optical-microwave one. Consequently, the optical frequency ratio measurement is limited by the systematic uncertainty of the clocks (Tab. 1), rather than by their instability. This demonstrates the key advantage of optical frequency standards: they are able to achieve excellent uncertainties in short averaging times even though they may operate less reliably than their microwave counterparts.
The 171 Yb/ 87 Sr frequency ratios measured on different days are summarized in Fig. 3, which also shows previous measurements of this ratio. After averaging (Supplement), we determine the ratio to be R = νYb / νSr = […](34)*, which is within two standard deviations of the most accurate previous measurement 24 (Fig. 3). To our knowledge, this is the only optical frequency ratio that has been measured directly by three independent groups. 24,27,28 It therefore constitutes an important contribution to verifying the consistency of optical clocks worldwide. 25 Such measurements are key to establishing more accurate secondary representations of the second 26 as provided by the International Committee for Weights and Measures (CIPM) as a step towards a future redefinition of the SI second.
* The number in parentheses is the uncertainty referred to the corresponding last digits of the quoted result.
Note that even with the only slightly improved transportable Sr apparatus as used at INRIM, chronometric levelling against the Yb lattice clock with considerably improved resolution would be possible. We expect that the transportable clock will be able to achieve an uncertainty of 1 × 10^-17 or better after a revised evaluation. This uncertainty will enable height differences of 10 cm to be resolved, which is a relevant magnitude for geodesy in regions such as islands, which are hard to access using conventional approaches. As metrological fibre links become more common, chronometric levelling along their paths 29 will become a realistic prospect.
Operation of lattice clocks
The realization and operation of the 171 Yb (I = 1/2) and 87 Sr (I = 9/2) clocks are very similar and have been presented in detail. 16,13,30 Ytterbium and strontium atoms are cooled to microkelvin temperatures in two-stage magneto-optical traps (MOTs), exploiting the strong 1 S0 - 1 P1 and weaker 1 S0 - 3 P1 transitions (at 399 nm and 556 nm for Yb and 461 nm and 689 nm for Sr, respectively). The atoms are then trapped in one-dimensional optical lattices operating at the magic wavelengths, 31 approximately 759 nm for Yb and 813 nm for Sr.
Finally, the atoms are prepared for spectroscopy in a single magnetic sublevel mF by optical pumping. As a result, shifts due to cold collisions and line pulling are reduced. The two π-transitions from the mF = ±1/2 sublevels in Yb (mF = ±9/2 in Sr) are probed alternately at approximately half-width detunings so that the interrogation laser is locked to their average transition frequency. This effectively removes the linear Zeeman shift.
Uncertainties of lattice clocks
Here, we discuss the most important uncertainty contributions listed in Tab. 1. Lattice light shift: We measured the linear shift near the magic wavelength, while the nonlinearly induced lattice light shift can be calculated using data from Nemitz et al. 24 For the Sr lattice clock, the typical lattice depth was about 100 Er, as measured from sideband spectra. These also yielded an atomic temperature of about 3.5 µK. The light shift cancellation frequency was determined earlier; a reference resonator served as a wavelength reference during the experiments discussed here. The uncertainty of the linear lattice light shift allows for a resonator drift of 50 MHz and changes due to variations of the scalar and tensor light shift. 32 Higher-order light shifts were calculated using the coefficients in the same reference. As a check, three of the measurements in Fig. 2 were performed with a deeper lattice of about 160 Er, which resulted in uncertainties for the linear lattice light shift and higher-order shifts of 29 × 10^-17 and 1 × 10^-17, respectively. No significant variation of the measured frequency ratio R was observed.
Density shift: The density shift was evaluated in both lattice clocks by varying the interrogated atom number.
Corrections for changes of the atomic temperature have been applied for the Sr clock.
Blackbody radiation (BBR) shift:
The influence of BBR on the clock frequency has been discussed elsewhere. 33,2,34 Temperatures of the atomic environment were measured with calibrated platinum resistance thermometers. The uncertainty of the BBR shift is mostly related to temperature inhomogeneity.
H-maser as flywheel
A flywheel oscillator with good stability and high reliability, such as a H-maser, can be used to extend the averaging time between a less reliable system, such as our Sr lattice clock, and a Cs primary clock with lower stability. 23 The frequency ratio νSr/νCs was thus determined from the frequency ratios νSr/νH and νH/νCs using datasets of different lengths. Because of the noise of the flywheel, it had different average frequencies over these two intervals, but the additional uncertainty can be calculated 23 if the noise is well characterized, as it often is for masers. We modelled the maser noise by a superposition of flicker phase noise of 6 × 10^-14 τ^-1 (1 × 10^-13 τ^-1), white frequency noise of 5 × 10^-14 τ^(-1/2) (4.5 × 10^-14 τ^(-1/2)), and a flicker frequency floor of 1.7 × 10^-15 (1 × 10^-15) in March and May 2016, respectively (values in parentheses refer to May).
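To make the flywheel extrapolation concrete, the sketch below evaluates the Allan deviation implied by the March noise model quoted above at a few averaging times; it is a simple illustration, not the uncertainty propagation actually used for the νSr/νCs extrapolation.

```python
import math

def maser_adev(tau, flicker_pm=6e-14, white_fm=5e-14, flicker_fm=1.7e-15):
    """Allan deviation of the H-maser model: flicker phase noise ~ tau^-1,
    white frequency noise ~ tau^-1/2, and a flicker frequency floor."""
    return math.sqrt((flicker_pm / tau) ** 2
                     + (white_fm / math.sqrt(tau)) ** 2
                     + flicker_fm ** 2)

for tau in (1, 100, 10_000, 100_000):          # averaging times in seconds
    print(f"tau = {tau:>6d} s  ->  sigma_y ~ {maser_adev(tau):.2e}")
```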
Gravity potential determination
To provide an accurate reference for the chronometric levelling, we performed a state-of-the-art determination of the gravity (gravitational plus centrifugal) potential with the best possible uncertainty at each clock site. Here, besides the global long-wavelength and eventually the temporal variations 35 of the Earth's gravity potential, the local spatial influence of the gravity potential on the clock frequency needs to be considered. To refine the gravity field modelling around the clock sites and to improve the reliability and uncertainty of the derived geoid model, additional local measurements were carried out; the larger uncertainty for the clocks is due to the simple method used to determine the local height differences between the clocks and the reference markers.
Supplement: Frequency transfer INRIM -LSM
The remote clock comparison was performed by comparing the frequency of a link laser at 1542.14 nm, sent from INRIM to LSM through a telecom optical fibre, to the frequencies of the clocks operated in the two locations. Two fibre frequency combs spanned the spectral gaps between the link laser and the clock interrogation lasers. The combs employed the transfer oscillator principle, 36 making the measurements of the optical frequency ratios immune to the frequency noise of the combs. The frequency of the link laser was stabilized using a high-finesse cavity, whose long-term drift was removed by a loose phase lock to a H-maser via a fibre frequency comb. As a result, the beat notes with the combs remained within a small frequency interval, facilitating long-term operation and reducing potential errors arising from any counter de-synchronization between INRIM and LSM. 18 The link laser used a multiplexed channel in the telecom fibre. Its path was equipped with two dedicated bidirectional Erbium-doped fibre amplifiers that allowed a phase-stable signal to be generated at LSM through the Doppler noise cancellation technique. 18,19 The contribution of the fibre frequency transfer to the total fractional uncertainty was assessed to be 3 × 10^-19 by looping back the signal from LSM using a parallel fibre. The occasional occurrence of cycle slips was detected by redundant counting of the beat note at INRIM. At LSM, the signal was regenerated by a diode laser phase-locked to the incoming radiation with a signal-to-noise ratio >30 dB in a 100 kHz bandwidth; this ensured robust and cycle-slip-free operation.
In addition to the optical reference, a high-quality radio frequency (RF) signal was needed at LSM to operate the Sr clock apparatus (frequency shifters and counters) and the frequency comb. Given the impossibility of having a GNSS-disseminated signal in the underground laboratory, a 100 MHz RF signal was delivered there by amplitude modulation of a second 1.5 µm laser that was transmitted through an optical fibre parallel to the first. At LSM, the amplitude modulation was detected on a fast photodiode, amplified, and regenerated by an oven-controlled quartz oscillator (OCXO) at 10 MHz to improve the signal-to-noise ratio. The inherent stability of the free-running fibre link is in this case sufficient to deliver the RF signal with a long-term instability and uncertainty smaller than 10^-13. The resulting uncertainty contribution to the optical frequency ratio measurement is below 1 × 10^-19.
Averaging of the optical frequency ratio data
We made eight different optical frequency ratio measurements with a total measurement time of 15 h over a period of one week in May 2016 (Fig. 3). The data acquired on different days have different statistical and systematic uncertainties. We applied a statistical analysis that considers the correlations between the measurements arising from the different systematic shifts, in which the covariance matrix of the eight daily measurements is used to calculate a generalized least-squares fit for the average. 25,37 We regarded the systematic uncertainties of the clocks (Tab. 1) as fully correlated, while the statistics related to the measurement duration were treated as uncorrelated.
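A minimal numerical sketch of such a correlated average is shown below: the daily values and the uncertainty split are invented placeholders, but the weighting follows the standard generalized-least-squares formula w = C⁻¹1 / (1ᵀC⁻¹1).

```python
import numpy as np

# Placeholder daily results (fractional offsets) with statistical and systematic parts;
# the real campaign used eight daily ratio measurements with the clocks' Tab. 1 budgets.
values = np.array([1.2e-16, 0.8e-16, 1.5e-16, 1.0e-16])
stat   = np.array([5e-17, 6e-17, 4e-17, 5e-17])   # uncorrelated per-day statistics
syst   = 3e-17                                     # fully correlated systematic part

# Covariance: diagonal statistical variances plus a common systematic block.
cov = np.diag(stat**2) + syst**2 * np.ones((len(values), len(values)))

# Generalized least-squares average and its uncertainty.
ones = np.ones(len(values))
w = np.linalg.solve(cov, ones)
w /= ones @ w
mean = w @ values
sigma = np.sqrt(w @ cov @ w)
print(mean, sigma)
```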
Absolute Frequency of the Sr lattice clock
The chronometric levelling can be viewed from an alternative perspective: if we assume the conventional measurement of the gravity potential difference is correct, then we can deduce an average absolute frequency of the Sr clock transition. The open diamonds at the bottom of the corresponding graph show the results from the campaign discussed here. For the LSM data, a correction for the gravitational redshift of -48.078 Hz, as derived from the geodetic data, has been applied. The other data have been compiled from various references (38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 23, 51, 52, 53). The vertical line indicates the frequency recommended for the secondary representation of the second by Sr lattice clocks 26 and its uncertainty (dashed lines).
"Physics",
"Engineering"
] |
An efficient broadcasting routing protocol WAODV in mobile ad hoc networks
Information broadcasting in wireless networks is a necessary building block for cooperative operations. However, broadcasting increases the routing overhead. This paper brings together the tools of our adaptive protocol for information broadcasting in MANETs. The protocol proposed in this paper is named WAODV (WAIT-AODV). This new adaptive route discovery protocol for MANETs lets nodes choose an appropriate action for a received route request (RREQ) message: retransmit it, discard it, or wait before making any decision, thereby dynamically configuring the route discovery function on the basis of neighbors' knowledge. Simulations were conducted to show the effectiveness of this adaptive RREQ broadcasting scheme when integrated into the ad hoc on-demand distance vector (AODV) routing protocol for MANETs (which is based on simple flooding).
subject to a number of constraints that make such a deployment very complex. These include radio medium constraints, the highly dynamic nature of the topology, and the lack of centralized administration. For some applications, such as multimedia or real-time applications, best-effort service is not sufficient. Such applications require guarantees in terms of certain quality of service criteria (a minimum bandwidth and a maximum delay not to be exceeded). Indeed, it seems important to adapt MANETs to support a certain level of QoS in order to deploy demanding applications. In this mode of communication, each mobile node of the network can communicate directly with all its neighbors, i.e., all the nodes capable of receiving and understanding the transmitted signal. Each node can move or disconnect from the network at any time. There is no infrastructure. We then speak of ad hoc networks or mobile ad hoc networks (MANETs).
As an extension to this work, the authors proposed adaptive broadcasting schemes in [7]-[9] to deal with these problems with respect to packet delivery ratio (PDR) [10], normalized routing load, and average end-to-end delay. In recent work [11]-[14], the authors included various mobility parameters to find an optimal route discovery with minimal transmit energy. Simulations were carried out, and the results show that the proposed scheme achieves a significantly higher PDR and a lower average end-to-end delay [15] while the normalized routing load is maintained, compared to the AODV protocol (which is based purely on simple flooding). The rest of this paper is structured as follows: Section 2 reviews the basic AODV protocol proposed in the literature for wireless ad hoc networks. In Section 3, we present the proposed broadcasting scheme WAODV. Section 4 provides the simulation parameters, the performance metrics, and the results. Conclusions and future work are presented in Section 5.
THE BASIC AODV PROTOCOL
2.1. Description of the AODV protocol
The AODV routing protocol is a reactive routing protocol based on the principle of distance vectors, capable of both unicast and multicast routing [16]. It essentially represents an improvement of the proactive DSDV algorithm [17]. This protocol uses both "route discovery" and "route maintenance" mechanisms; it builds routes using a node-by-node "route request/route reply" query cycle. AODV uses the principle of sequence numbers to maintain the consistency of recent routing information. In mobile ad hoc networks, routes change frequently because of the mobility of nodes, so the routes maintained by certain nodes become invalid. To use fresh routes, the nodes rely on the sequence numbers. A node updates them whenever new information comes from a route request (RREQ), route reply (RREP), or route error (RERR) message, and it increments its own sequence number in the following circumstances: i) it is itself the destination node and offers a new route to reach it; ii) it receives an AODV message (RREQ, RREP, or RERR) containing new information on the sequence number of a destination node; iii) the path to a destination is no longer valid. AODV maintains the paths in a distributed manner by keeping a routing table at every transit node belonging to the searched path.
Route request mechanism (RREQ)
AODV uses the principles of sequence numbers to maintain the consistency of routing information. Because of the mobility of nodes in ad hoc networks, routes change frequently, so that routes maintained by some nodes become invalid. Sequence numbers allow the use of the newest or freshest routes. AODV uses a route request to create a path to a certain destination.
However, AODV maintains paths in a distributed fashion by keeping a routing table at each transit node belonging to the path being searched. A node broadcasts a route request (RREQ) discovery packet when it needs a route to a certain destination and no such route is available, as shown in Figure 1. This can occur if the destination is not known beforehand, or if the current route to the destination has exceeded its lifetime or has become faulty (i.e., the metric associated with it is infinite). The destination sequence number field of the RREQ packet contains the last known value of the sequence number associated with the destination node.
After broadcasting the route request discovery packet RREQ, the source waits for the route reply packet (RREP). If the latter is not received within a certain period, the source can rebroadcast a new RREQ request. The information recorded during the RREQ broadcast is used to construct the reverse path, as shown in Figure 2, which will be traversed by the unicast route reply packet.
Since the route reply RREP packet will be sent back to the source, the nodes belonging to the return path will update their routing tables according to the route contained in the reply packet. In order to limit the cost to the network, AODV proposes to extend the search progressively. Initially, the query is broadcast to a limited number of hops. If the source receives no response after a specified timeout, it retransmits another search message with an increased maximum number of hops. If there is still no answer, this procedure is repeated a maximum number of times before the destination is declared unreachable. With each new broadcast, the Broadcast ID field of the RREQ packet is incremented to identify a particular route request associated with a source address. If the RREQ request packet is rebroadcast a number of times (RREQ_RETRIES) without receiving a response, an error message is issued to the application. The destination returns an RREP message, which can then be routed back to the source. Each traversed node increments the hop count and adds an entry for the destination to its table. A node located between the source and the destination that already holds a valid route can also give an adequate response. In this case, obtaining bidirectional routes is nevertheless possible thanks to the "Gratuitous RREP" flag. The intermediate node will then send an RREP to the destination. The nodes between the intermediate node and the destination will therefore add to their tables an entry towards the source of the RREQ. This arrangement allows the destination to send packets directly to the source without having to search for a route. This is useful when establishing TCP communications for sending the first ACK [18].
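The expanding ring search described above can be sketched as a simple retry loop; the callback and the constants (chosen to resemble typical AODV defaults) are illustrative assumptions, not values taken from this paper.

```python
# Illustrative constants; the exact values used in a given AODV deployment may differ.
TTL_START, TTL_INCREMENT, TTL_THRESHOLD = 1, 2, 7
NET_DIAMETER, RREQ_RETRIES, RING_TRAVERSAL_TIME = 35, 2, 0.04  # hops, retries, seconds

def discover_route(broadcast_rreq, destination):
    """Broadcast RREQs with an expanding TTL until a reply arrives or retries run out.
    'broadcast_rreq' is a hypothetical callback returning an RREP or None on timeout."""
    ttl, retries = TTL_START, 0
    while True:
        reply = broadcast_rreq(destination, ttl=ttl, timeout=2 * ttl * RING_TRAVERSAL_TIME)
        if reply is not None:
            return reply                      # route found
        if ttl < TTL_THRESHOLD:
            ttl += TTL_INCREMENT              # widen the search ring
        else:
            ttl = NET_DIAMETER                # network-wide search
            retries += 1
            if retries > RREQ_RETRIES:
                return None                   # destination declared unreachable
```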
WAODV SCHEME
Since the WAODV [19] protocol is inspired by the AODV protocol, it retains most of its operating mechanisms, with modifications during the broadcast of the route discovery request. In addition, to facilitate the admission control performed during the RREQ broadcast, a dynamic approach is used to decide whether an RREQ request should be rebroadcast. Using the broadcast RREQ messages it observes, each node then makes its own rebroadcast decision. A random assessment delay (RAD) value is also calculated periodically for each node; when an RREQ is received, it is used to estimate the time the RREQ is likely to spend in the node's queue, for use in the admission control performed by the WAODV protocol.
If a source node wants to communicate with another node and does not have a valid route in its routing table for that destination, a route discovery procedure is initiated. When an intermediate node receives an RREQ, it first checks whether there is a valid route in its routing table; if there is, it sends an RREP back to the source. If not, before retransmitting the message, it performs an admission check on the delay, which consists of comparing the value of the RAD field of the RREQ request with the calculated delay.
The protocol proposed in this study, called WAODV (WAIT-AODV) [19], allows a node to choose an appropriate action for a received RREQ packet: retransmit it, ignore it, or wait before any selection is made (when not enough information is available so far to make a proper decision). The protocol relies on a learning model that collects neighbor statistics, such as the number of RREQ packets received and their arrival times t (s), to prevent some nodes from rebroadcasting by selecting the appropriate action (rebroadcast or discard).
Unlike the basic AODV or the modified AODV protocols proposed in the literature, which choose between two actions [20], [21], the proposed protocol adds a third procedure, waiting, in addition to ignoring and rebroadcasting. In other words, when a node receives a broadcast RREQ packet, it exploits its neighbors' knowledge, with the help of the additional wait procedure, to determine the appropriate action: rebroadcast or discard the request message.
For example, a rebroadcast should be selected when neighboring nodes have not received the request, while a wait action is chosen when the node cannot make any choice due to lack of information. A discard should be selected when a node receives the same packet several times, since this indicates that neighboring nodes have already received the same request packet; that is, the more copies the node receives, the less useful a rebroadcast is. The principle of this model is described as follows. When the node receives an RREQ route request discovery packet for the first time at instant t0, the host starts a counter S. A random assessment delay RAD (τ) is also generated for the RREQ request broadcast. During τ, if the RREQ is heard again at instant ti, the node updates the table [Δti] by recording Δti = (ti − t0) and increases the counter S. When τ has expired, if the RREQ was received only once, the node rebroadcasts the RREQ request packet and exits the procedure. Otherwise, the node takes the Δti values stored in the table and compares them with τ: when Δti is less than one third of τ, it decreases the counter S; otherwise, if Δti is less than two thirds of τ, it increases the counter S. Finally, if the counter is positive, the node rebroadcasts the RREQ request packet and exits the procedure; otherwise, the RREQ is dropped directly by this intermediate node and the procedure ends. The rebroadcast selection algorithm is summarized as follows (WAODV Protocol): S1. The node receives a route request discovery packet RREQ for the first time at instant t0 and starts a counter S. S2. A random assessment delay (τ) is generated for broadcasting the RREQ request.
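The following is a minimal sketch of the per-node decision rule described above, written from the description in this section; the class interface and the RAD upper bound are illustrative choices, not the authors' implementation.

```python
import random

class WaodvRreqState:
    """Tracks duplicate RREQ receptions for one (source, broadcast ID) pair."""

    def __init__(self, t0, rad_max=0.01):
        self.t0 = t0                           # arrival time of the first copy
        self.tau = random.uniform(0, rad_max)  # random assessment delay (RAD), assumed bound
        self.counter_s = 0                     # counter S from the algorithm
        self.deltas = []                       # recorded delta_t of duplicate copies

    def on_duplicate(self, ti):
        """Called while waiting: record delta_t = ti - t0 and increase S."""
        self.deltas.append(ti - self.t0)
        self.counter_s += 1

    def decide(self):
        """Called when tau expires: return 'rebroadcast' or 'discard'."""
        if not self.deltas:                    # RREQ heard only once during tau
            return "rebroadcast"
        for dt in self.deltas:
            if dt < self.tau / 3:
                self.counter_s -= 1            # early duplicates: neighbors already covered
            elif dt < 2 * self.tau / 3:
                self.counter_s += 1            # later duplicates: coverage less certain
        return "rebroadcast" if self.counter_s > 0 else "discard"
```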
PERFORMANCE EVALUATION
This section describes the simulation parameters, the performance metrics used in our comparison, and finally the simulation results obtained using the NS2 network simulator. We mainly focus on evaluating and comparing the proposed model against the basic AODV protocol. The simulation parameters and performance metrics are presented first to describe the testbed environment. Simulation results are then used to study the influence of network density and traffic load on the considered performance metrics.
Simulation parameters
Since the goal of our simulations is to analyze the properties of the AODV protocol extension, we chose constant bit rate (CBR) traffic sources. Traffic between nodes is generated by initializing CBR traffic connections that start at fixed times via the simulation script, as shown in Table 1. The size of the data packets is 1000 bytes. We did not employ TCP traffic sources because TCP imposes a load that adapts to the network state, i.e., TCP traffic changes the times at which it sends packets based on its perception of the network's ability to deliver these packets [17]. In this study, we used such a network configuration to limit the possibility of network partitioning occurring during the simulation. In addition, these values were chosen because of the computing and time resources required to run most scenarios. Node mobility scenarios were generated using the random-waypoint model [22], with node speeds varying from a minimum of one meter per second up to a maximum of five meters per second. At the beginning of the simulation, the nodes remain motionless during a pause time; each node then chooses a random destination and starts moving towards it at a speed varying from 1 to 5 m/s. This cycle repeats until the simulation terminates. The behavior of the proposed protocol is investigated while varying the traffic load using constant bit rate (CBR) sources [23]-[25]. Table 1 lists the parameters used in our simulations.
In the rest of this section, we present the results of our investigation into the effect of node density, within the range from 50 to 300 nodes, for network traffic loads varying from 10 to 50 connections. We measured and compared the performance of the WAODV scheme against the basic AODV protocol under different network densities and traffic loads. The performance metrics evaluated are as follows: packet delivery ratio (PDR), normalized routing load (NRL), and average end-to-end delay. These metrics allow the proposed protocol to be assessed against the basic protocol.
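For clarity, these three metrics can be computed from simple packet counters as sketched below; this is a generic illustration of the standard definitions, not the NS-2 trace-analysis scripts used in the paper.

```python
def pdr(data_received, data_sent):
    """Packet delivery ratio: fraction of CBR data packets delivered to destinations."""
    return data_received / data_sent

def nrl(routing_packets, data_received):
    """Normalized routing load: routing packets transmitted per data packet delivered."""
    return routing_packets / data_received

def average_end_to_end_delay(delays):
    """Mean latency over all delivered data packets (seconds)."""
    return sum(delays) / len(delays)

# Hypothetical counts from one simulation run.
print(pdr(940, 1000), nrl(2500, 940), average_end_to_end_delay([0.02, 0.05, 0.03]))
```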
Simulation results and discussion
The WAODV scheme was implemented and integrated into the network simulator-2 (NS-2), version 2.34 [26], and compared to the basic AODV protocol. We first investigated a network traffic load of 10 CBR connections while varying the number of nodes from 50 to 150, in a dynamic topology where node speeds are set to 1 and 5 m/s. We performed thirty simulation trials for each scenario and computed the average NRL, PDR, and average end-to-end delay obtained with each scheme. The simulation model consists of two sets of scenario files: topology scenario files and traffic generation files. Figure 3 presents results that illustrate the effect of network density on the PDR for all schemes. We can see that the PDR decreases slightly with the increase in network density for all schemes.
The WAODV protocol has a considerably higher PDR than the basic AODV, and this holds under different network densities. More precisely, the results confirm that in WAODV the data obtained from neighbors is effectively used to improve the learning process. The higher PDR of WAODV implies that some nodes do not need to rebroadcast the message, i.e., it inhibits more nodes from rebroadcasting compared to the basic AODV protocol. Figure 4 shows the NRL comparison of the WAODV protocol against the basic AODV. We can see that all the protocols show a strong increase in NRL as the network density grows; in the AODV routing protocol, the NRL increases with the number of nodes in the network. The NRL of our AODV improvement is the highest compared to the AODV base protocol as node density increases; AODV demonstrates a significantly lower and fairly stable routing load compared to WAODV with an increasing number of sources. Figure 5 shows the average end-to-end delay under different network densities: the higher the node density, the greater the end-to-end delay incurred. The AODV protocol has the highest delay. This is due to the fact that, in a flooding protocol, the number of retransmissions is very high and messages are queued for a long time. In the WAODV protocol, the average delay is low compared to the AODV protocol because the number of retransmissions is also low, and thus the queuing time at every node is short. Moreover, this figure shows that basic AODV has a higher and more unstable end-to-end delay as a result of the greater number of redundant rebroadcasts of RREQ packets; this can be explained by contentions and collisions in which many RREQ packets fail to reach the destination, resulting in growing delay. Figure 6 depicts the packet delivery ratio as a function of the network traffic load for WAODV and basic AODV. This figure shows that our proposed protocol has a higher PDR compared with basic AODV. The packet delivery ratio increases as the number of connections increases. This means that there are more connections joining pairs of nodes, facilitating transmission in every area; thus, there is a greater chance that the broadcast will be correctly retransmitted, increasing the delivery ratio. Figure 7 shows the NRL comparison of WAODV with the basic AODV protocol as a function of traffic load. We can see that the NRL increases as the network traffic load increases, due to the increase in packet retransmissions. We can also see that our proposed approach exhibits the lowest routing load percentage compared to the other approach. According to the results shown in Figures 6-8, WAODV maintains the NRL while achieving a good PDR and lower delay compared to the basic AODV protocol. It is the most efficient proposal with respect to the considered metrics in terms of both network density and traffic load.
Figure 8 shows the average delay experienced by all packets from the moment they are sent by the source nodes until they reach all the nodes; it depicts the average end-to-end delay at various network traffic loads. WAODV is shown to be the most efficient protocol because packets traverse fewer hops, owing to its adaptive behavior with respect to the traffic load.
CONCLUSION
This paper describes the specifications of a solution that extends the AODV protocol to control RREQ packet rebroadcasting. It also details the working principle of the new protocol and the method used to decide whether an RREQ packet should be rebroadcast. This article was dedicated to the design of WAODV; in it, we discussed the main phases that show the relationships between the different classes of the protocol. We presented simulations to analyze the properties of the WAODV protocol extension against basic AODV using the NS2 network simulator. The simulation results were plotted on graphs and interpreted. These simulations allowed us to understand how the WAODV protocol behaves in the face of varying network density and traffic load, as well as to validate the variation of the connection acceptance rate in the presence of admission control based on bandwidth availability at the routing nodes.
"Computer Science",
"Engineering"
] |
Control of Sunroof Buffeting Noise by Optimizing the Flow Field Characteristics of a Commercial Vehicle
When a commercial vehicle is driven with the sunroof open, the problem of sunroof buffeting noise easily occurs. This paper establishes the basis for the design of a commercial vehicle model that solves the problem of sunroof buffeting noise, based on computational fluid dynamics (CFD) numerical simulation technology. The large eddy simulation (LES) method was used to analyze the characteristics of the buffeting noise at different speeds while the sunroof was open. The simulation results showed that the small vortices generated at the cab forehead merge into a large vortex during their backward movement, and the turbulent vortex causes a resonance response in the cab cavity as it moves above the sunroof and falls into the cab. Improving the flow field characteristics above the cab can reduce the sunroof buffeting noise. Focusing on the buffeting noise of commercial vehicles, it is proposed that existing accessories, including sun visors and roof domes, be optimized to deal with the problem of sunroof buffeting noise. The sound pressure level of the sunroof buffeting noise was reduced by 6.7 dB after optimization. At the same time, the local pressure drag of the commercial vehicle was reduced, and the wind resistance coefficient was reduced by 1.55% compared to the original commercial vehicle. These results can be considered relevant, with high potential applicability, within this field of research.
Introduction
Buffeting noise is the aerodynamic acoustic response when a vehicle is moving with the sunroof or side window opened, and it occurs due to the differences in the air inside the vehicle and the external transient airflow. The buffeting noise has a low frequency (less than 20 Hz) and a high sound pressure level (greater than 110 dB) [1]. The low-frequency pressure pulsation of the buffeting noise causes a strong sense of pressure on the ears, which can result in fatigue and unpleasant feelings for drivers and passengers in a short period. Staying in such an environment for a long-time could even cause damage to hearing [2,3]. Therefore, further research on the optimization of buffeting noise is needed to improve the acoustic comfort of vehicles.
In terms of research on the buffeting noise of vehicle side windows, He [4] used vehicle aero-acoustic wind tunnel tests to analyze the influence of different factors, including the spatial position of the car, wind speed, side window opening area, yaw angle, and the different ways of opening the side windows, on the sound pressure level and frequency of the side window buffeting noise. Yang [5] also studied the side window buffeting noise of a car, pointing out that when a single window is opened, the rear window buffeting noise is higher than that of the front window, and that different ways of opening the side windows can reduce the buffeting noise.
In the research on the sunroof buffeting noise, reducing the buffeting noise has always been the research focus of scholars. Oettle [6] used the Lattice Boltzmann method to evaluate the sunroof buffeting characteristics of a specific model of a vehicle, and the suppressing effect of the spoiler with or without the grid on the sunroof buffeting characteristics was analyzed. Wang [7] studied the mechanism of the sunroof buffeting noise at the speed of low Mach numbers based on a three-dimensional cavity model. The results showed that airflow separation, vortex shedding, vortex impact, periodic pressure wave feedback, and Helmholtz resonance are responsible for sunroof buffeting. Wang [8] studied the suppression effect of a serrated sunroof trailing edge on the sunroof buffeting noise, and pointed out that this strategy can break down the strong vortex into smaller eddies and effectively reduce the sound pressure level in the car.
With the development of CFD technology and convenient computing resources, numerical simulation methods have been widely used in the research on sunroof buffeting. Gu [9] used the LES method to calculate the side window buffeting noise near the driver while the side window was partially opened, and verified the correctness of the analysis results by a road test. Gong [10] analyzed the LES transient simulation model of an SUV-type vehicle to obtain the buffeting frequency and the sound pressure level on the passenger's left ear while the sunroof was opened. At the same time, the sound pressure level of the passenger's left ears was reduced by optimizing the skylight spoiler. He [11,12] studied the sunroof buffeting noise with different vehicle speeds based on the LES method, and designed a new type of baffle plate to reduce the sound pressure level of the sunroof buffeting noise. The existing wind buffeting noise control strategies include: adding aerodynamic accessories such as spoilers, grooves, and setting columns in the skylight or side window; adjusting the structure of the cab to stagger the resonance frequency; reducing the size of the turbulence vortex; and using sound wave superposition technology and jet flow technology to suppress the wind buffeting noise [13][14][15]. The cab of a commercial vehicle is a closed space, and it is easy for turbid air and uncomfortable odors in the cab to be produced. Installing skylights allows the air to circulate inside and outside the cab which can improve the air quality. As with passenger cars, there is still a need to install sunroofs in commercial vehicles. The current sunroof buffeting noise research has mainly focused on passenger vehicles, and little work has been conducted on heavy commercial vehicles. In addition, adding aerodynamic accessories will increase the cost and even affect the appearance of the vehicles. Due to the complexity of the wind buffeting noise, it is difficult to apply sound wave superposition technology and jet flow technology based on an active control strategy.
With the above information in mind, the proposed study uses the LES method to investigate the phenomenon and the formation mechanism of the sunroof buffeting noise while the sunroof is opened in a heavy commercial vehicle. A scheme of optimizing the existing accessories including a sun visor and a roof dome is proposed to improve the flow field above the roof and to reduce the sunroof buffeting noise. The results of this study could contribute to the optimization of the sunroof buffeting noise in commercial vehicles, which can be considered of relevance in this field of research. The remaining chapters are arranged as follows. In Section 2, some basic mathematical theories used in this paper are introduced. In Section 3, a simulation model for the sunroof buffeting noise of a commercial vehicle is established. Then, in Section 4, the sunroof buffeting noise characteristics at different driving conditions are analyzed. In Section 5, the optimization of the sunroof buffeting noise is discussed, and the wind resistance of the vehicle is analyzed. The last section is a summary of this paper.
Large Eddy Simulation
The commercial vehicle cab is equivalent to a Helmholtz cavity (a cavity where air resonance takes place) while the sunroof is opened. The buffeting noise generated in this type of cavity is mainly a low-frequency discrete noise [16]. While the skylight is opened, the air flow is complex and irregular, making it highly nonlinear. Usually, large eddy simulations are used to analyze the transient flow field of such problems, as they are based on mathematical modelling for turbulence used in computational fluid dynamics. The basic governing equations for the turbulence calculation are as follows [17]: the continuity equation
$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v_i)}{\partial x_i} = 0$$
and the momentum equation
$$\frac{\partial (\rho v_i)}{\partial t} + \frac{\partial (\rho v_i v_j)}{\partial x_j} = -\frac{\partial P}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\mu \frac{\partial v_i}{\partial x_j}\right) - \frac{\partial \tau_{ij}}{\partial x_j}$$
where t is the time, x_i and x_j are the coordinate axis components, ρ is the fluid density, v_i and v_j are the filtered (time-averaged) velocity components, P is the air pressure, µ is the turbulent viscosity coefficient, and τ_ij is the sub-grid scale stress, all of which are expressed in the appropriate units.
Taking into account the fact that vortex identifiers can be used to build eddy-viscosity sub-grid scale models for large eddy simulation, the eddy-viscosity sub-grid model used in the current study is
$$\tau_{ij} - \frac{1}{3}\tau_{kk}\,\delta_{ij} = -2\mu_t \overline{S}_{ij}, \qquad \overline{S}_{ij} = \frac{1}{2}\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right)$$
where δ_ij is the Kronecker delta, µ_t is the sub-grid scale turbulent viscosity, τ_kk is the isotropic part of the sub-grid scale stress, and S_ij is the strain rate tensor. Once again, all of these values are expressed in the appropriate units.
Ffowcs Williams and Hawkings (FW-H) Equation
Based on the acoustic analogy theory proposed by Lighthill, Ffowcs Williams and Hawkings developed the FW-H equation, which is suitable for moving solid boundaries; its differential form is [18]
$$\frac{1}{c^2}\frac{\partial^2 p'}{\partial t^2} - \nabla^2 p' = \frac{\partial^2}{\partial x_i \partial x_j}\left[T_{ij}H(f)\right] - \frac{\partial}{\partial x_i}\left[p\,n_i\,\delta(f)\right] + \frac{\partial}{\partial t}\left[\rho_0 v_n\,\delta(f)\right]$$
where p is the sound pressure, n_i is the surface normal vector, v_n is the normal velocity of the surface, c is the sound velocity, and T_ij is the Lighthill stress tensor; H(f) and δ(f) are the Heaviside and Dirac functions of the surface function f = 0. As always, all of the variables are expressed in the appropriate units. The three terms on the right side of the equation represent quadrupole, dipole, and monopole generating waves, respectively. This equation allows the calculation of the sound pressure in space.
Acoustic Post Processing
The sound pressure obtained at the monitoring point is a pressure fluctuation signal that changes with time. The time-domain sound pressure is converted into a frequency-domain sound pressure by the Fast Fourier Transform (FFT):
$$P(f) = \int p(t)\,e^{-j2\pi f t}\,dt$$
Through a logarithmic operation on the sound pressure after the FFT, the sound pressure level at the monitoring point can be obtained:
$$L_p = 20\log_{10}\left(\frac{P(f)}{P_{ref}}\right)$$
where P_ref is the reference sound pressure, related to the minimum sound pressure amplitude that can be heard by a human, and its value is 2 × 10^-5 Pa.
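A minimal post-processing sketch along these lines is shown below; the sampling rate and the input signal are placeholders, not the simulation data from this study.

```python
import numpy as np

def spl_spectrum(p, fs, p_ref=2e-5):
    """Convert a pressure time series p (Pa), sampled at fs (Hz), to SPL (dB) per frequency bin."""
    p = np.asarray(p, dtype=float) - np.mean(p)        # keep only the fluctuating part
    n = len(p)
    amplitude = 2.0 * np.abs(np.fft.rfft(p)) / n       # one-sided amplitude spectrum, Pa
    rms = amplitude / np.sqrt(2.0)                     # RMS pressure per frequency bin
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spl = 20.0 * np.log10(np.maximum(rms, 1e-300) / p_ref)
    return freqs, spl

# Placeholder input: a 90 Pa oscillation at 18.4 Hz sampled at 2 kHz for 1 s.
t = np.arange(0, 1.0, 1.0 / 2000.0)
freqs, spl = spl_spectrum(90.0 * np.sin(2 * np.pi * 18.4 * t), fs=2000.0)
print(freqs[np.argmax(spl)], spl.max())                # peak close to 18.4 Hz
```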
Geometric Modeling
In this work, a heavy commercial vehicle was taken as the research object to study the sunroof buffeting noise. The size of the sunroof was 470 mm × 670 mm. To better simulate the actual characteristics of the cab, a geometric model with a proportion of 1:1 to the actual vehicle was established, as shown in Figure 1. On the premise of not affecting the simulation accuracy, the door handles and the lamps of the commercial vehicle were simplified, while important interior pieces such as berths, seats, and other relevant parts were retained. Assuming that the length, width, and height of the cab are L, W, and H, respectively, the virtual wind tunnel is a box that surrounds the cab model with size parameters of 20L, 5W, and 6H, respectively. The distances from the inlet of the virtual wind tunnel to the front end of the cab, from the side of the virtual wind tunnel to the side of the cab, and to the top of the cab were 3L, 2W, and 4H, respectively, as shown in Figure 2. The pre-processing module of the STAR-CCM+ software (Version 12.06, Siemens Digital Industries, Berlin, Germany) was used to mesh the cab and the virtual wind tunnel. The cab and the wall were divided into surface elements of different sizes, ranging from 10 to 250 mm, and solid elements were generated according to the surface elements. At the same time, six boundary-layer elements were stretched on the surface of the cab to simulate the flow characteristics near the cab surface, with the innermost element size being 0.25 mm. A monitoring point was set at the driver's right ear to record the pressure pulsation of the sunroof buffeting. Figure 3 shows a partial mesh of the middle section of the cab and the virtual wind tunnel.
Boundary Condition Setting
Due to the fact that the established wind tunnel model was a limited simulation area, it was necessary to set the model boundary conditions to make the simulation conform to the actual physical conditions. The model boundary conditions in this paper were set as follows.
(1) The virtual wind tunnel inlet speed was set according to the different working conditions. (2) The outlet pressure of the virtual wind tunnel was 0 Pa (gauge), i.e. equal to the atmospheric pressure. (3) The cab surface and the ground of the computational domain were no-slip walls.
(4) The upper wall and the side walls of the virtual wind tunnel were free-slip walls.
In the transient simulation of the sunroof buffeting noise, a steady-state solution of the model was first calculated with the turbulence model and then used as the initial condition for the transient simulation. In this paper, the SST k-ω turbulence model (a two-equation eddy-viscosity model) was used for the steady-state calculation, the pressure–velocity coupling was handled with the SIMPLE algorithm (a numerical procedure frequently used to solve the Navier–Stokes equations), and the spatial discretization was second-order upwind. The transient calculation was based on Detached-Eddy Simulation (DES). The simulated time was 1 s with a time step of 0.0005 s, and five inner iterations were performed at each time step.
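For reference, the numerical settings described above can be summarized in a small configuration sketch; the variable names and structure below are hypothetical and only the values are taken from the text:

```python
# Hypothetical summary of the solver settings described in the text.
solver_settings = {
    "steady_state": {
        "turbulence_model": "SST k-omega",
        "pressure_velocity_coupling": "SIMPLE",
        "discretization": "second-order upwind",
    },
    "transient": {
        "method": "Detached-Eddy Simulation (DES)",
        "simulated_time_s": 1.0,
        "time_step_s": 0.0005,
        "inner_iterations_per_step": 5,
    },
}

transient = solver_settings["transient"]
n_steps = int(transient["simulated_time_s"] / transient["time_step_s"])
n_inner = n_steps * transient["inner_iterations_per_step"]
print(n_steps, n_inner)   # 2000 time steps, 10000 inner iterations in total
```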
Simulation Results and Analysis of Sunroof Buffeting Noise
The commercial vehicle cab with the sunroof open can be regarded as a Helmholtz resonance cavity, so the resonance frequency can be estimated from the Helmholtz resonator formula [19], f = (c/2π)·√(A/(V·Leff)), where c is the speed of sound, A is the area of the skylight opening, V is the cavity volume, and the effective neck length Leff is the skylight thickness h plus an end correction related to the hydraulic diameter Dh of the opening, all expressed in consistent units. According to the geometric dimensions of the cab, the resonance frequency of the cab cavity was found to be 18.4 Hz. Driving feedback from the commercial vehicle showed a strong sunroof buffeting noise at a speed of 70 km/h when the sunroof was opened. Using the established model, the sunroof buffeting noise was therefore analyzed at different inlet wind speeds, and its formation mechanism was studied to provide guidelines for reducing the sunroof buffeting noise of commercial vehicles.
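A minimal sketch of this estimate is given below; the paper quotes only the resulting 18.4 Hz, so the cab volume, panel thickness and end-correction coefficient used here are assumed values for illustration:

```python
import math

def helmholtz_frequency(c, area, volume, thickness, d_h, end_corr=0.85):
    """Estimate the Helmholtz resonance frequency of a cavity with a
    rectangular opening, using an effective neck length that adds an
    end correction proportional to the hydraulic diameter (assumed form)."""
    l_eff = thickness + end_corr * d_h           # effective neck length (m)
    return c / (2.0 * math.pi) * math.sqrt(area / (volume * l_eff))

# Illustrative values: sunroof 0.47 m x 0.67 m; cab volume and panel thickness assumed.
a_open = 0.47 * 0.67                             # skylight opening area (m^2)
d_h = 4.0 * a_open / (2.0 * (0.47 + 0.67))       # hydraulic diameter of the opening (m)
f = helmholtz_frequency(c=340.0, area=a_open, volume=8.0, thickness=0.02, d_h=d_h)
print(round(f, 1), "Hz")                         # same order as the reported 18.4 Hz
```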
Working Condition One
The buffeting noise of the sunroof with an inlet wind speed of 70 km/h was studied first. Figure 4 shows the pressure pulsation at the monitoring point. The amplitude of the pressure pulsation kept increasing until about 0.6 s; although the sound pressure at the monitoring point was already periodic, the flow field inside and outside the cab had not yet fully developed. The pressure pulsation stabilized after 0.6 s, and according to the results in Figure 4 the lowest pressure in the cab was around −240 Pa. Applying the FFT to the pressure pulsation of the monitoring point yields the spectrum shown in Figure 5. The maximum sound pressure level at the monitoring point was 111.5 dB at a frequency of 17.8 Hz with the sunroof open and an inlet speed of 70 km/h. The simulation results are consistent with the characteristics of buffeting noise, namely low frequency and high sound pressure level. Since the buffeting frequency is close to the cavity resonance frequency, it can be inferred that the cab exhibited a resonance response at the inlet speed of 70 km/h.
The formation process of the buffeting noise of the sunroof at a speed of 70 km/h was then studied. Figure 6 shows the color diagram of the transient pressure in the cab. In a commercial vehicle, the airflow separates at the forehead of the cab and produces turbulent vortices, which differs from the behaviour in a sedan. These vortices gradually grow during the backward movement and shed into the cab at the sunroof. The pressure wave generated when a turbulent vortex breaks causes the pressure in the cab to drop sharply.

The influence of the turbulent vortex on the sound pressure in the cab was analyzed in detail as follows. At t = 0, the low-pressure turbulent vortex was at the edge of the skylight and began to fall off into the cab. At this time, the pressure at the skylight area was lower than that in other areas.
At t = 1/6T, the scale of the low-pressure turbulent vortex became larger. The turbulent vortex entered and broke in the cab, and the pressure wave generated by the breaking of the turbulent vortex in the cab made the pressure level in the cab drop significantly. The pressure wave spread through the entire cab, so the pressure gradient was very small. At t = 1/3T, the scale of the turbulent vortex continued to increase, and the turbulent vortex sustained shedding and breaking, reducing the pressure in the cab. At t = 1/2T, the impact of the shedding turbulent vortex on the indoor pressure reached the maximum, and the overall pressure level in the cab was less than −180 Pa. At t = 2/3T, it can be seen that the turbulent vortex moved to the rear edge of the skylight, the shedding effect of the turbulent vortex was weakened and the pressure in the cab rose. At the same time, a new small turbulent vortex was formed in front of the skylight. At t = 5/6T, the turbulent vortex moved away from the skylight with the airflow, and the pressure in the cab rose to a higher level. The scale of the new turbulent vortex expanded. Then the pressure state of the cab returned to the initial level, and at the same time the new turbulent vortex started a new process that affected the pressure in the cab.
Working Condition Two
To study the influence of different inlet wind speeds on the buffeting noise of the skylight, the inlet speed of the virtual wind tunnel was set to 60 km/h and the pressure fluctuation at the monitoring point was analyzed. Figure 7 shows the results of pressure pulsation at the monitoring point. It can be seen that, similar to the pressure pulsation at a speed of 70 km/h, the pressure pulsation was periodic. At the beginning of the simulation, the amplitude of the pressure fluctuation of the monitoring point increased with time, and the pressure fluctuation tended to be stable after 0.75 s. The pressure pulsation of the monitoring point in condition two was transformed by FFT and compared with the frequency spectrum results of condition one. Figure 8 is the comparison result of the buffeting noise spectra. It can be seen that the maximum sound pressure level of condition two was 108.8 dB, which is lower than that of condition one by 2.7 dB. The corresponding frequency was 17.0 Hz, which is lower than that of condition one by 0.8 Hz. This means that reducing the inlet wind speed can improve the characteristics of the sunroof buffeting noise.
To study the impact of different inlet wind speeds on the sunroof buffeting noise in more detail, the pressure contours at the cab forehead were compared for conditions one and two. Figure 9 shows the forehead pressure contours for inlet speeds of 70 km/h and 60 km/h. At 70 km/h, the negative pressure area above the roof was larger than at 60 km/h. Reducing the inlet speed also shrinks the low-pressure area at the forehead corner, which helps to weaken the turbulent vortex generated there. In addition, at 70 km/h some small low-pressure turbulent vortices appeared in the negative pressure area; these small vortices can merge with the large turbulent vortex while moving with the airflow, enhancing the influence of the large vortex on the cab pressure.

Figure 10 shows the pressure contour of the cab when the inlet speed was 60 km/h. The turbulent vortex did not fall into the cab while passing over the sunroof but continued to move backward with the airflow, which explains why the pressure fluctuation decreased at 60 km/h. The simulation results show that improving the flow field characteristics above the cab is effective in reducing the sunroof buffeting noise.
Sunroof Buffeting Noise Control
There are two ways to reduce sunroof buffeting noise, including active control [20] and passive control [21]. The active control method requires the development of an active sound control device, which is more expensive to implement in practice. The passive control method is used to reduce the buffeting noise by designing the control structure. The passive control method is easy to implement and widely used in practice. After comprehensive consideration, the passive control alternatives were adopted in the current study to reduce the buffeting noise.
Optimization Scheme
From the comparison of the sunroof buffeting noise at different inlet speeds, it can be seen that the flow field in the cab forehead area is complex at an inlet speed of 70 km/h, where additional small turbulent vortices are generated. Since the commercial vehicle studied is already in mass production, reshaping the forehead of the cab would vastly increase the cost. A common alternative is to install a small spoiler in front of the sunroof, whose principle is to change the flow field so as to prevent the turbulent vortex from falling into the cab; however, a spoiler adds parts and cost. It is therefore proposed to optimize the existing accessories on the cab, namely the sun visor and the roof dome, to suppress the sunroof buffeting noise. The optimization covers the installation angle of the sun visor and the shape of the roof dome. Adjusting the flow field at the forehead by optimizing the sun visor reduces the turbulent vortex that is generated, while optimizing the roof dome improves its lifting effect on the airflow, guiding the turbulent vortex away from the cab and preventing it from shedding at the skylight. The specific design scheme is shown in Figure 11. Figure 11a is a schematic diagram of the installation angle of the sun visor; the installation angle is increased by 8° with respect to the original sun visor. Figure 11b is the optimized schematic diagram of the roof dome. The roof dome is symmetrical about the center plane, and the shape optimization is explained using half of the dome. The optimization includes the following steps: removing the upward edge of the original sun visor, adjusting the front end of the roof dome to a circular arc, and reducing the distance between the roof dome and the sunroof. The upper surface of the roof dome is changed to a convex arc surface. The height of the roof dome at the symmetry plane is the same as that of the original roof dome, while the height at the A-A section is reduced by one third with respect to the original, giving an optimized height of 110 mm. The roof dome between the symmetry section and the A-A section is connected by a circular arc surface.
Buffeting Noise Simulation of the Optimized Scheme
The sunroof buffeting noise of the cab after the optimization of the sun visor and roof dome was analyzed at an inlet speed of 70 km/h. Figure 12 shows the pressure contours of the forehead before and after optimization. The extent of the negative pressure area above the roof and at the forehead corner decreased, which suppresses the small vortices generated at the forehead; the position of the turbulent vortex above the roof is also improved. Figure 13 shows the sound pressure level spectra at the monitoring point before and after optimization. The optimization scheme effectively improves the flow field characteristics above the roof: the maximum sound pressure level at the monitoring point decreased by 6.7 dB (about 6%), from 111.5 dB to 104.8 dB, meaning that the sunroof buffeting noise has been reduced.
Wind Resistance Analysis
The wind resistance of the commercial vehicle after the optimization of the sun visor and roof dome accessories was analyzed. The analysis model added the main structural components of commercial vehicles including a trailer, wheels, and other parts. Table 1 shows the wind resistance coefficients of the commercial vehicle. Figure 14 is the color diagram of the surface pressure of the commercial vehicle before and after optimization when the inlet wind speed was 70 km/h.
It can be seen from Table 1 that after the optimization of the installation angle of the sun visor and the shape of the roof dome, the wind resistance coefficient of the commercial vehicle was reduced by 1.55% compared with the original vehicle. In the original car, there was a negative pressure area in the middle of the roof dome, which was caused by the airflow above the roof. Affected by the tail vortex behind the cab, there was also a large negative pressure area at the front end of the trailer. After the optimization of the sun visor and roof dome, the surface pressure of the roof dome and the trailer increased due to the improvement of the airflow characteristics above the roof and the trailing vortex behind the cab, which achieves the target of reducing the pressure resistance of the commercial vehicle.

Sunroof Buffeting Noise Test
A road test was carried out on the sunroof buffeting noise of a commercial vehicle with the sunroof fully opened. The test was carried out on a highway with asphalt pavement, using the Test.Lab noise test equipment, and the vehicle speed was 70 km/h. The weather was fine, the wind speed was less than 3 m/s, and the environmental noise was less than 40 dB. Figure 15 shows the test equipment and the opening skylight. The sound pressure level near the driver's right ear was measured, and the results are shown in Table 2. It can be seen that after the optimization of the sun visor and roof dome accessories, the sunroof buffeting noise of the commercial vehicle was reduced from 116.3 dB to 109.2 dB, a decrease of 7.1 dB. The results of the simulation and the experiment are relatively close, indicating the effectiveness of the optimization scheme.
Conclusions
This work focused on a heavy commercial vehicle to study the sunroof buffeting noise. The existing accessories, including the sun visor and the roof dome, were optimized to improve the flow field characteristics of the vehicle and thereby reduce the sunroof buffeting noise and the wind resistance coefficient. The main conclusions are: (1) Based on the numerical simulations, the airflow separates at the forehead of the cab and produces turbulent vortices. These vortices gradually grow during the backward movement and shed into the cab at the sunroof; the pressure wave generated when the turbulent vortices break causes the pressure in the cab to drop sharply, and the vortex shedding has periodic characteristics. (2) Analysis of the sunroof buffeting noise for two speed conditions showed that reducing the speed improves the flow field characteristics above the roof and reduces the number of small turbulent vortices. (3) Optimizing the sun visor and roof dome accessories of the cab can reduce the sunroof buffeting noise. The validity of the simulation results was verified through experiments.
(4) The optimization scheme of the sun visor and roof dome improves the flow field characteristics of the commercial vehicles. This scheme reduces the impact of airflow on the roof dome and the local pressure drag and reduces the aerodynamic drag coefficient of the commercial vehicle, improving fuel economy, and the design cost is also reduced.
All of the above can be considered of relevance in this field of research and should be further explored.
| 10,638.6 | 2021-06-16T00:00:00.000 | ["Physics", "Engineering"] |
The visible and near infrared module of EChO
The Visible and Near Infrared (VNIR) spectrometer is one of the modules of EChO, the Exoplanet Characterization Observatory proposed to ESA as an M-class mission. EChO is intended to observe planets as they transit their host stars, so the instrument has to be designed to ensure high efficiency over the whole spectral range. In particular, it must be able to observe stars with an apparent magnitude Mv = 9–12 and to detect contrasts of the order of 10−4–10−5, as required to reveal the characteristics of the atmospheres of the exoplanets under investigation. VNIR is a spectrometer in a cross-dispersed configuration covering the 0.4–2.5 μm spectral range with a resolving power of about 330 and a field of view of 2 arcsec. It is functionally split into two channels working in the 0.4–1.0 μm and 1.0–2.5 μm spectral ranges, respectively. Such a solution is imposed by the fact that the light at short wavelengths has to be shared with the EChO Fine Guidance System (FGS) devoted to the pointing of the stars under observation. The spectrometer uses a HgCdTe detector with 512 by 512 pixels and an 18 μm pitch, operating at a temperature of 45 K, as does the entire VNIR optical bench. The instrument is interfaced to the telescope optics by two optical fibers, one per channel, to ease the coupling and the accommodation of the instrument on the EChO optical bench.
Introduction
The discovery of over a thousand exoplanets has revealed an unexpectedly diverse planet population. We see gas giants in few-day orbits, whole multi-planet systems within the orbit of Mercury, and new populations of planets with masses between that of the Earth and Neptune-all unknown in the Solar System. Observations to date have shown that our Solar System is certainly not representative of the general population of planets in our Milky Way [1]. The key science questions that urgently need addressing by EChO are therefore: What are exoplanets made of? Why are planets as they are? How do planetary systems work and what causes the exceptional diversity observed as compared to the Solar System? The EChO mission [2] will take up the challenge to explain this diversity in terms of formation, evolution, internal structure and planet and atmospheric composition. This requires in-depth spectroscopic knowledge of the atmospheres of a large and well-defined planet sample for which precise physical, chemical and dynamical information can be obtained.
In order to fulfill this ambitious scientific programme, EChO is designed as a dedicated survey mission for transit and eclipse spectroscopy capable of observing a large, diverse and well-defined planet sample within its 4-year mission lifetime. The transit and eclipse spectroscopy method, whereby the signal from the star and planet are differentiated using knowledge of the planetary ephemerides, allows us to measure atmospheric signals from the planet at flux levels of at least 10−4 relative to the star. This can only be achieved in conjunction with a carefully designed stable payload and satellite platform. It is also necessary to provide an instantaneous broad-wavelength coverage to detect as many molecular species as possible, to probe the thermal structure of the planetary atmospheres and to correct for the contaminating effects of the stellar photosphere. This requires wavelength coverage of at least 0.55 to 11 μm with a goal of covering from 0.4 to 16 μm. Only modest spectral resolving power is needed, with R~300 for wavelengths less than 5 μm and R~30 for wavelengths greater than this. The transit spectroscopy technique means that no spatial resolution is required. A telescope collecting area of about 1 m² is sufficiently large to achieve the necessary spectrophotometric precision: in practice the telescope will be 1.13 m², diffraction limited at 3 μm. Placing the satellite at L2 provides a cold and stable thermal environment as well as a large field of regard to allow efficient time-critical observation of targets randomly distributed over the sky. EChO is designed, without compromise, to achieve a single goal: exoplanet spectroscopy. The spectral coverage and signal-to-noise ratio to be achieved by EChO, thanks to its high stability and dedicated design, will be a game changer by allowing atmospheric compositions to be measured with unparalleled exactness: at least a factor 10 more precise and a factor 10 to 1,000 more accurate than current observations. This will enable the detection of molecular abundances three orders of magnitude lower than currently possible. Combining these data with estimates of planetary bulk compositions from accurate measurements of their radii and masses will allow degeneracies associated with planetary interior modeling to be broken, giving unique insight into the interior structure and elemental abundances of these alien worlds.
EChO will carry a single, high stability, spectrometer instrument. The baseline instrument for EChO is a modular, three-channel, highly integrated, common field of view, spectrometer that covers the full EChO required wavelength range of 0.55 μm to 11.0 μm. The baseline design includes the goal wavelength extension to 0.4 μm while an optional LWIR channel extends the range to the goal wavelength of 16.0 μm. Also included in the payload instrument is the Fine Guidance System (FGS), necessary to provide closed-loop feedback to the high stability spacecraft pointing. The required spectral resolving powers of 300 or 30 are achieved or exceeded throughout the band. The baseline design largely uses technologies with a high degree of technical maturity.
The spectrometer channels share a common field of view, with the spectral division achieved using a dichroic chain operating in long-pass mode. The core science channels are a cross-dispersed spectrometer VNIR module covering from 0.4 to ~2.5 μm, a grism spectrometer SWIR module covering from 2.5 to 5.3 μm, and a prism spectrometer MWIR module covering from 5.3 to 11 μm. All science modules and the FGS are accommodated on a common Instrument Optical Bench. The payload instrumentation operates passively cooled at ~45 K with a dedicated instrument radiator for cooling the FGS, VNIR and SWIR detectors to 40 K. An Active Cooler System based on a Neon Joule-Thomson Cooler provides the additional cooling to ~28 K which is required for the longer wavelength channels.
In the following, the characteristics of the VNIR module are described in detail.
Scientific and technical requirements
The VNIR design must fulfill both the scientific and the technical requirements imposed by the EChO mission. Spectroscopy of planetary transits for a large variety of exoplanets requires the use of a multichannel spectrometer to cover the wide wavelength range. The EChO payload is therefore constituted by 3 modules (VNIR, SWIR and MWIR); moreover, a quite low operational temperature is needed to operate both the SWIR and MWIR detectors and to reduce the background noise. Photometric stability and SNR are also crucial parameters in order to assure the scientific objectives of the mission. Table 1 summarizes the most important VNIR requirements. The complete list can be found in the EChO Mission Requirements Document [3].
Module design
3.1 Optical layout

The system covers the spectral range between 0.4 and 2.5 μm without gaps, and the resulting resolving power is nearly constant, R≈330. The wide spectral range is achieved through the combined use of a grating with a ruling of 14.3 grooves/mm and a blaze angle of 3.3° for wavelength dispersion in the horizontal direction, and an order-sorting calcium fluoride prism (angle 22°), which separates the orders along the vertical direction. The collimator (M1) and the prism are used in double pass (see Fig. 1). The prism is the only optical element used in transmission; all other optics are reflecting surfaces: 2 off-axis conic mirrors, 1 spherical mirror, 1 flat mirror and 1 grating. All reflecting elements will be made of the same aluminium alloy as the optical bench, which simplifies the mechanical mounting and alignment of the system. The light is fed to the spectrometer via two fibres positioned at the side of the M2 mirror. The fibres are commercial, radiation-resistant, space-qualified fused silica with ultra-low OH content and a core diameter of 50 μm. Their internal absorption is lower than 1 dB/m up to 2.4 μm and reaches 2 dB/m at 2.5 μm; therefore, by limiting their length to 0.2 m, one can achieve an internal transmission >90 % over the full wavelength range. The fibres are separately fed by two identical off-axis parabolic mirrors (M0), which intercept the collimated light transmitted by the first dichroic (D1b), IR, and reflected by the beam-splitter, VIS. The use of an optical fibre coupling gives a larger flexibility in the location of the VNIR spectrometer within the EChO payload module. The VNIR characteristics are summarized in Table 2.
A Mercury Cadmium Telluride (MCT, HgCdTe) detector has been considered for VNIR (its technical characteristics are detailed later in section 4). Figure 2 shows the observable spectral orders, m, projected on the MCT array, starting from m=3 at the bottom (near infrared spectral range) to m=20 on the top (visual spectral range).
Namely, the figure shows the distribution of the light on the array between 2,500 nm (m=3) and 400 nm (m=20). The central wavelength in each order m, positioned at the blaze angle of the grating, is given by the relationship λ = 8.1/m μm. The VIS and IR spectral ranges are separated on the detector because the fibres placed at the spectrometer entrance are separated by 1 mm. In general, most wavelengths are sampled twice in different orders, i.e. in different areas of the detector, as shown in Fig. 2. The spectrum in each order is spread across several pixels in the vertical direction; thus, a sum over 5 pixels will be performed to increase the sensitivity of the system, providing a so-called spectral channel. These last two instrumental features, related to wavelength sampling, also have the advantage of reducing systematic errors in the measurements once properly exploited.
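A short sketch of this order layout, using only the λ = 8.1/m relation quoted above (the free-spectral-range expression λ/m is a standard grating relation and an addition here), is:

```python
# Central wavelength of each cross-dispersed order from lambda = 8.1/m (micrometers),
# together with the approximate free spectral range lambda/m of each order.
for m in range(3, 21):
    lam_centre = 8.1 / m
    fsr = lam_centre / m
    print(f"order m={m:2d}: centre = {lam_centre:5.2f} um, width ~ {fsr:4.2f} um")
```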
As previously said, the coupling of the VNIR module to the telescope will be done through a dichroic element that selects and directs the visible and near infrared light towards the combined VNIR and FGS system. A beam-splitter is foreseen to further divide the light beam between the FGS and VNIR. The balance of this beam-splitter will need to be studied together with the FGS team during the assessment phase to maximize the science return while maintaining sufficient signal for the guider system. As the performance of the module optics must be very good to assure the observation of planets transiting or being occulted by their star, the detector is going to be a key element of the system. In order to meet the EChO visible channel performance requirements, different approaches can be pursued, based on different detectors and readout electronics as well as on the optical design characteristics of the spectrometer.
Internal calibration unit
The instrument calibration is going to be performed by looking at a known reference star before and after any target observation. The star calibration is meant to verify mainly the position of the spectral lines but also the radiometric response. A very high level of radiometric accuracy, better than 10−4, is assured by the continuous monitoring of the host star during the transit observations. The observation session is expected to last from minutes to about 10 h depending on the characteristics of the target itself. However, as a standalone procedure, regardless of any request for star pointing, it is also important to monitor the stability of the instrument and, in particular, of the detector along the mission. For this purpose, a less demanding accuracy and stability is needed, of the order of a few percent. The calibration unit will be equipped with two Halogen-Tungsten lamps for redundancy. These kinds of lamps are commonly used as spectral calibration sources for optical systems (see [4, 5]) and they are the baseline for the development of the VNIR calibration unit too. The calibration lamps will be equipped with a closed-loop control system to assure the requested stability over the observation time. The lamps will have a color temperature higher than 3,000 K and will be operated for very short times during the observation sessions. The signal coming from one of the two lamps can be used to perform several instrumental checks during the mission: to verify the in-flight stability of the instrumental spectral response and registration; to check the relative radiometric response of the instrument; and to monitor the evolution of possible defective pixels. The lamps inject their light into an integrating sphere, which has two output fibers feeding the two input fibers of the spectrometer (ranges 0.4-1.0 μm and 1.0-2.5 μm, respectively). Figure 3 gives the spectrum at the input of the fibers. The feeding of the main fibers will be done using 2-in/1-out fiber connectors. The two fibers will be illuminated at the same time.
The calibration unit will be located in a separate box on a side of the service box where the mirrors collect the light from the VNIR feeding optics and focus it on the optical fibers inputs. Figure 4 shows the calibration unit and its arrangement on the service side of the VNIR optical bench.
Mechanical and thermal design
The VNIR instrument is housed in a mechanical structure that will be flat-mounted on the spacecraft interface (an alternative isostatic mounting could be evaluated if needed to reduce optical bench distortions). The optical elements (mirrors and prism) are shown in the right panel of Fig. 5. The figure shows the box without the calibration unit mounted below the spectrometer optical bench; a view inside the box is given in the right panels of the figure, where the location of the optical elements of the spectrometer is shown. The lower part of the VNIR optical bench will be dedicated to the spectrometer services: the input box, where the mirrors concentrate the light onto the optical fibers, and the calibration unit, each in a separate box in order to minimize light and thermal contamination of the rest of the instrument. The switching on/off and overall control of the VNIR calibration unit will be performed by the EChO Instrument Control Unit (ICU) [6]. The mass of the instrument is estimated to be about 6.62 kg (20 % margin included). The overall dimensions are 342×325×190 mm, as depicted in the lower left panel of Fig. 5. The VNIR first resonant frequency is planned to be larger than 150 Hz. The VNIR CFEE (Cold Front End Electronics, SIDECAR ASIC as baseline) will be located on the telescope optical bench and is supposed to be at temperatures lower than 50 K; the detector is planned to work at a temperature in the range of 40-45 K, dissipating about 30 mW. In order to minimize the thermoelastic deformations and assure good performance also at low temperatures, the instrument (optical bench, optical supports and mirror substrates) will be made of the same material as the payload optical bench (aluminium), and the box will be thermally linked to it through its feet.
Instrument performances
The grating diffraction orders on the detector, shown in Fig. 2, would not be equally illuminated even if the input light had a constant intensity over the entire spectrum, because the grating efficiency changes along each order; the maximum efficiency is around the center of the blue curves in Fig. 2. In this spectrometer configuration some wavelengths can be observed in two adjacent diffraction orders. To completely recover the light at those wavelengths, the signal coming from the adjacent order has to be summed. The sum has to be done so as to maximize the result and keep the highest feasible signal-to-noise ratio. A reasonable compromise has been found in summing the adjacent orders when the grating efficiency is higher than 80 % with respect to the maximum. The result is a component of the Instrument Transfer Function (ITF) that will be obtained from the on-ground instrumental calibrations by measuring and combining the optical efficiency of the spectrometer and the detector performances. Figure 6 shows the spectrometer efficiency calculated with the 80 % criterion. The present calculation has been done by considering aluminum mirrors without any coating to improve the performance at wavelengths lower than 1.0 μm. The expected behavior obtained with coated aluminum or protected silver mirrors is also shown in the figure for comparison.
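The following sketch illustrates the 80 % order-summing criterion; the sinc²-shaped blaze efficiency used here is a generic textbook approximation, not the measured VNIR ITF:

```python
import numpy as np

def blaze_efficiency(lam_um, m, lam_blaze_product=8.1):
    """Rough scalar blaze-efficiency model for order m: a sinc^2 profile
    peaking at the order's blaze wavelength lam_blaze_product/m (um).
    This functional form is an illustrative assumption."""
    lam_blaze = lam_blaze_product / m
    x = m * (lam_blaze / lam_um - 1.0)
    return np.sinc(x) ** 2

def orders_to_sum(lam_um, orders=range(3, 21), threshold=0.8):
    """Return the orders whose efficiency at lam_um exceeds the given
    fraction of the best order's efficiency (the 80 % criterion)."""
    eff = {m: blaze_efficiency(lam_um, m) for m in orders}
    best = max(eff.values())
    return [m for m, e in eff.items() if e >= threshold * best]

print(orders_to_sum(1.35))   # near an order centre: a single order
print(orders_to_sum(1.47))   # near an order boundary: two adjacent orders are summed
```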
The photometric stability is a key factor in the noise budget of the observations. The photometric stability of the instrument throughout consecutive observations lasting up to tens of hours (to cover the goal of phase curve observations) is mainly governed by the following factors: a) Pointing stability of the telescope, quantified in terms of Mean Performance Error (MPE), Pointing Drift Error (PDE) and Relative Performance Error (RPE), see below for details; b) Thermal stability of the optical bench and mirrors: the thermal emission of the instrument can be regarded as negligible for most wavelengths, but becomes observable at wavelengths beyond 12 μm. The stability of the payload module (instrument and telescope) is therefore an important factor for the photometric stability of the MWIR and LWIR channels. c) Stellar noise and other temporal noise sources: whilst beyond the control of the instrument design, stellar noise is an important source of temporal instability in exoplanetary time series measurements. This is particularly true for M dwarf host stars as well as many non-main sequence stars. Correction mechanisms for these fluctuations must and will be an integral part of the data analysis of EChO [7].
As mentioned above, the pointing stability is affected by the following jitter types: the Relative Performance Error (RPE), defining the high-frequency (>1 Hz), unresolved jitter component; the Performance Reproducibility Error (PRE), defining the low-frequency (<1 Hz), resolved PSF drift due to pointing jitter; and the Mean Performance Error (MPE), which is the overall offset (in time series, the flux offset) between two or more observation windows. The effect of the relative performance error (RPE) is a photometric error within an observation, while the effect of the mean performance error (MPE) is a loss of efficiency from observation to observation. To quantify the effects of jitter on the observations, a simulation has been performed at two representative wavelengths (0.8 and 2.5 μm). The illumination pattern of the telescope is obtained from optical modeling, and the energy collected by the fiber is then studied as a function of MPE, RPE and PRE. The MPE is varied in accordance with EIDA-R-0470. The impact of three different RPEs is studied: i) RPE1 = 30 mas rms from 1 to 10 Hz; ii) RPE2 = 50 mas rms from 1 to 300 Hz; iii) RPE3 = 130 mas rms from 1 to 300 Hz. These three cases correspond to three different AOCS (Attitude and Orbit Control System) solutions. A fixed PRE = 20 mas rms from 0.020 to 4 mHz is used in this simulation.
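A highly simplified sketch of how such a jitter simulation can be set up is shown below; the Gaussian PSF, fiber aperture and jitter values are placeholder assumptions and do not reproduce the optical model of [8]:

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_fraction(offset_x, offset_y, fwhm=0.45, fiber_radius=1.0, n=40000):
    """Fraction of a Gaussian PSF (FWHM in arcsec) falling within a circular
    fiber aperture (radius in arcsec) for a given pointing offset.
    The Gaussian PSF is a simplistic stand-in for the real illumination pattern."""
    sigma = fwhm / 2.3548
    x = rng.normal(offset_x, sigma, n)
    y = rng.normal(offset_y, sigma, n)
    return np.mean(x**2 + y**2 <= fiber_radius**2)

# Photometric scatter induced by high-frequency jitter (RPE-like value, illustrative)
rpe_rms = 0.13   # arcsec rms, of the order of the RPE3 case
samples = [coupled_fraction(*rng.normal(0.0, rpe_rms, 2)) for _ in range(200)]
print(np.mean(samples), np.std(samples) / np.mean(samples))  # mean coupling, relative scatter
```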
The results of the simulations are discussed in [8] and are here briefly summarized. The effect of the MPE on the normalized transmitted energy is shown in Fig. 7. The combined effect of the RPE and PRE on the photometric error is shown in Fig. 8. The worst case photometric error is obtained when observing a bright target (a star with visual magnitude Mv=4) with the RPE3 option and results in 10 % of the total allowed system noise variance in 1 s of integration for this channel.
The analysed optical system consists of the EChO telescope and the concentrating system (f/4) feeding the fibre. The optimized configuration has a primary-to-secondary mirror distance MT1–MT2 = 1.500 mm; the defocused configuration gives a wavefront error (WFE) of 250 rms for a shift of the M1–M2 position of 87 μm (WFE calculated at a wavelength of 1 μm). The fibre, with a 50 μm core diameter, corresponds to a field of view (FOV) of 2 arcsec. Figure 9 shows that the spot diagram of the aberrated beam after defocusing is collected inside the fibre diameter. Table 3 summarizes the obtained results: the spot diagram and the Encircled Energy collected at the entrance fibre of the VNIR channel.

The spot diagram contained within the fibre diameter and the collected Encircled Energy (96.75 %) demonstrate that introducing a defocus of 250 WFE rms in the beam entering the fibre does not compromise the coupling.
The efficiency of a fiber is the product of three effects, namely the internal transmission (which is at most 95 % in our case), the reflection losses at the entrance/exit (which amount to 6 %) and the focal ratio degradation (FRD), which measures the fraction of light exiting the fiber within a given solid angle. The value of the FRD depends on the aperture angle (i.e. the focal aperture F/#) at which the fiber is fed and on the focal aperture accepted by the spectrometer. The VNIR fiber receives an F/4 input beam and feeds the spectrometer with an F/3.5 output beam. Therefore, the FRD losses are about 5 % and the total efficiency is about 85 %.
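As a quick consistency check of this throughput budget, using the values quoted in the text:

```python
internal_transmission = 0.95        # internal fiber transmission over the full band
fresnel_throughput = 1.0 - 0.06     # entrance/exit reflection losses
frd_throughput = 1.0 - 0.05         # focal ratio degradation losses
total = internal_transmission * fresnel_throughput * frd_throughput
print(round(total, 3))              # ~0.848, consistent with the quoted ~85 %
```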
The light from the telescope can be fed to the fiber at the image plane or at the pupil plane. The former solution is used in HARPS, the ultra-high precision astronomical spectrometer which has reached the highest accuracy in the detection of extrasolar planets; on the other hand, pupil feeding is often used in fiber-fed astronomical instruments. In the case of VNIR both solutions can be used, the only difference being the curvature of the input surface of the fiber, which is flat in the case of image feeding. For pupil feeding, instead, the curvature is such that the first part of the fiber acts as a micro-lens adapter. We plan to test both solutions and select the one providing the best performance in terms of total efficiency and scrambling gain.
The detector choice
For the visible and near infrared channel, two detector options have been considered for VNIR in order to cover the 90 μm spectral resolution element on the focal plane: a 512×512 matrix with 18 μm square pixels (binning 5×5) and a 256×256 format with 30 μm square pixels (binning 3×3), both Mercury Cadmium Telluride (MCT) arrays operating at a high frame rate (of the order of 10 Hz). The first option is considered as the baseline in this paper. MCTs have a good efficiency in the VNIR spectral range while keeping a very low readout noise. As for the other spectrometric EChO channels working in the infrared, the choice of an MCT allows the detector to work at a temperature around 40-45 K, matching that of the optical bench of the modules, which gives the instrument a very low thermal noise. From the performance point of view, readout noise, pixel size and dark current are the most critical parameters that have been taken into consideration for the selection, because the VNIR signal-to-noise ratio drops below 1 μm and the detector noise performance is crucial to meet the requirements. As far as the 512×512 format is concerned, Selex and the US manufacturers (Teledyne and Raytheon) offer comparable performances. While the US detectors appear to be in a mature state, Selex has a series of technical activities ongoing and planned to improve the performance of the VNIR detector, bringing one of these devices to TRL 5 at the beginning of 2015. Based on the technical and programmatic information received from the manufacturers, we assume Teledyne as the baseline and Selex as a backup. Teledyne detectors can also be connected directly to the SIDECAR ASIC, chosen as the baseline for the VNIR CFEE (as well as for the SWIR and FGS modules). This solution is better in terms of power consumption and thermal coupling, and simplifies the overall harnessing between the detector and the CFEE and between the CFEE and the WFEE/ICU.
Noise effects studies
A study has been carried out to evaluate the best readout mode to adopt with the selected detector, taking into account the following main aspects: the need to minimise the equivalent noise in both bright- and faint-star observations, the need to detect and correct for the effects of cosmic ray hits and, finally, the need to simplify the on-board data processing procedure in order to reduce the data rate and volume.
The MCT detectors allow for non-destructive readout modes, such that multiple readouts are possible without disturbing ongoing integration.
In Fig. 10 a non-destructive readout sampling scheme is shown for a single MCT pixel, in which the detector integrating ramps are indicated in blue. In the sample up-the-ramp readout mode, the detector readouts are equally spaced in time, sampling the ramp uniformly. By collecting all samples it is possible to fit the ramp slope. Provided that the number of samples is statistically significant, in the case of cosmic ray hits a jump, or even a smooth modification of the slope, can be detected and the corresponding samples rejected. This method is accurate but quite demanding in terms of real-time processing power. In the multi-accumulate readout mode, only contiguous groups of samples are considered: the groups are equally spaced in time, but the samples between the groups are discarded. In Fig. 10 the sample groups are highlighted in red.
With reference to Fig. 2, where the expected location of the observed spectrum on the focal plane is reported, it can be seen that different pixels will be interested by different spectrum wavelengths and orders and, consequently, by different input flux levels. Considering the spectral types of the sources that will be observed by EChO, and convolving their flux in the various spectral channels with the channel bandpass and all other instrumental effects, included the detectors quantum efficiency, the obtained focal plane intensities in the range 0.6 μm-2.5 μm show a regular behaviour with similar values in all considered spaxels (where spaxels are the equivalent of the spectral channels defined in section 3.1 in which pixels are binned in both the spectral and spatial directions) while in the range 0.4 μm-0.6 μm the expected flux is considerably lower. Given the early phase of the mission and the status of the design of the overall detectors data acquisition chain, in our present work we have used the same readout mode for all wavelengths. This assumption does not allow to optimise the results for the shorter wavelengths, but shall be considered as the first step of a more detailed investigation that will be performed in the next phases of the work.
The general expression for the total noise variance of an electronically shuttered instrument using non-destructive readout can be computed using well-known relations based on fundamental principles. It has been presented for the first time in its complete form by [9,10] and, in the notation used here, reads:

\[ \sigma_{\mathrm{tot}}^{2} = \frac{12\,(n-1)}{k\,n\,(n+1)}\,R^{2} + \frac{6\,(n^{2}+1)}{5\,n\,(n+1)}\,(n-1)\,t_{g}\,f - \frac{2\,(k^{2}-1)(n-1)}{k\,n\,(n+1)}\,t_{f}\,f \]

where R is the readout noise and f is the flux, including the photonic flux and dark currents. R is in units of e− rms and f is in units of e− s−1 spaxel−1; k is the number of samples per group and n is the number of groups per exposure. The frame time t_f is the time interval between the acquisition of two consecutive frames (frame sampling time). The group time t_g is the time interval between the acquisition time of the first frame of one group and the first frame of the next group.
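As a hedged illustration, the expression above can be evaluated with a small helper function; the numerical values in the example are placeholders, not requirement values.

```python
def multiaccum_variance(R, f, k, n, t_f, t_g):
    """Total noise variance (e-^2) for n groups of k non-destructive samples.

    Follows the standard expression referenced above ([9,10] in the text):
    R: readout noise (e- rms), f: flux (e-/s/spaxel),
    t_f: frame time (s), t_g: group time (s).
    """
    read_term = 12.0 * (n - 1) / (k * n * (n + 1)) * R**2
    shot_term = 6.0 * (n**2 + 1) / (5.0 * n * (n + 1)) * (n - 1) * t_g * f
    corr_term = 2.0 * (k**2 - 1) * (n - 1) / (k * n * (n + 1)) * t_f * f
    return read_term + shot_term - corr_term

# Illustrative values only (not the instrument requirements):
sigma = multiaccum_variance(R=15.0, f=100.0, k=4, n=10, t_f=0.125, t_g=0.5) ** 0.5
print(f"total noise ~ {sigma:.1f} e- rms")
```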
We used this relation to evaluate the noise expected for the VNIR detectors when read out using the sample up-the-ramp method. The result has then been compared with the system requirements for the two different detector arrays under study for the EChO mission, see [9]. The aim of the work has been to provide indications on how to optimize the EChO focal plane arrays sampling rate and data processing procedures in order to achieve the best signal to noise ratio and to identify and remove cosmic rays effects. The results of this activity will also be used to dimension the on-board data processing unit hardware and to define the architecture of the on-board data processing software.
Sampling rates of 8 Hz for bright sources and 1/16 Hz for faint sources have been considered in order to limit the overall data volume and processing resources (see [6]). The adopted integration times are 3 s for bright sources and 600 s for faint sources. Given the estimated input fluxes for the two types of sources, these times allow the accumulated signal to match the maximum detector well capacity in both cases. All comparisons have been made assuming an operating temperature of 40 K.
For bright sources, it was possible to obtain an optimized set of readout-mode parameters only for the Teledyne detectors, which provided an expected total noise below the scientific requirement. The estimated noise for the Selex detectors was always well above the noise requirement, and the trend obtained did not decrease with increasing integration time.
In particular, in the case of the Teledyne detectors, for k ≥ 2 the minimum n needed to satisfy the requirement was always very low. This will allow the overall measurement duration (maximum integration time) to be tuned based only on the performance of the deglitching procedure, keeping it as short as possible and thus minimizing the expected number of cosmic hits.
In the case of faint sources the results obtained for the 1/16 Hz sampling rate were similar to the previous ones, although in this case the Selex detectors were able to meet the noise requirements in at least one configuration, with k = 3 and a minimum n of 7.
These preliminary results show that with the Teledyne sensors it is possible to better combine the bright- and faint-source results, while the Selex detectors' performance in terms of the overall noise obtainable with different readout strategies needs to be investigated further. In particular, the main conclusion of our analysis is that two different readout rates and sampling methods are needed for bright and faint sources. With the noise performance considered for the Teledyne MCT detectors, it is possible to meet the noise requirements well within the maximum allowed integration times in both cases.
Future investigations are planned to improve the overall performance of the detector readout chain. The possibility of applying hardware-coded ramp co-addition and of modifying the detector sampling rates will allow a wider parameter space to be explored for the optimization of the readout-mode procedures.
With respect to the cosmic-hit effects, assumptions based on studies made for the JWST telescope (see [11,12]) give an expected rate of cosmic events with impact on the detector of between 5 and 30 events/s/cm². The estimated hit rates obtained for the whole VNIR focal plane array are reported in Table 4. It can be seen that, in the case of bright sources, the percentage of pixels affected by glitches will be very low, and therefore it will not be necessary to correct the ramps for cosmic-hit effects; it will be sufficient to identify and discard the affected readouts (at most 0.25% of the overall array will be affected by cosmic hits in a 3 s exposure). In the case of faint sources a more detailed evaluation is needed to confirm whether a deglitching procedure needs to be implemented on board.
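A rough order-of-magnitude check of the bright-source figure can be made with the assumed geometry below (512×512 pixels of 18 μm, the upper-bound hit rate quoted above, and an assumed number of pixels disturbed per event); it is only an illustration, not a reproduction of Table 4.

```python
# Fraction of a 512x512, 18 um pixel array affected by cosmic hits in a
# 3 s exposure (assumed geometry and pixels-per-hit, illustrative only).
n_side, pixel_cm = 512, 18e-4
area_cm2 = (n_side * pixel_cm) ** 2            # ~0.85 cm^2 of sensitive area
rate, t_exp = 30.0, 3.0                        # events/s/cm^2 (upper bound), s
pixels_per_hit = 4                             # assumed pixels disturbed per event

hits = rate * t_exp * area_cm2
frac = hits * pixels_per_hit / n_side**2
print(f"{hits:.0f} hits, ~{100 * frac:.2f}% of pixels affected")  # of order 0.1-0.3 %
```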
The detector's electronics
The MCT-based detector will be coupled with a ROIC (Read-Out Integrated Circuit) bump-bonded to the device's sensitive area. The ROIC will act as proximity electronics, extracting the low-level, low-noise analogue signal from the detector while addressing the very low power-dissipation requirements imposed by the thermal environment. The analogue signal will be amplified by the ROIC output OPAMP(s) (typically 4 or 8 for the two detector halves collecting, respectively, the VIS and NIR signals of the target spectrum) and fed to the cold front-end electronics (CFEE), where A/D conversion will take place.
The gains of the OPAMPs of the two detector halves shall be set properly in order to reach an adequate signal level and maximize the S/N ratio of the VIS and NIR spectra, taking into account the number of up-the-ramp samples collected, the detector's QEs and the overall instrumental efficiency in the VIS and NIR spectral bands. Another option to be explored and verified is the possibility of setting different ramp durations and numbers of samples for the two detector halves in order to reach the desired S/N ratio. The latter solution could, however, complicate the clock sequencing, the digital data-acquisition timing and the overall detector management.
The payload's warm-section electronics essentially consists of the warm front-end electronics (WFEE), generating the driving signals for the detector ROIC/CFEE, and the Instrument Control Unit (ICU), acting as the main payload processing electronics and collecting the digitized signals from all scientific channels. The WFEEs will reside in a dedicated box located near the ICU, which will be kept at a temperature in the range 0-40°C.
The detector is expected to integrate easily and operate well with a range of electronics solutions. The distances between the Detector Sub-Assembly and the CFEE, and between the CFEE and the WFEE, appear unavoidable in this configuration and introduce the technical challenges associated with a distributed signal chain: driving load capacitance, achieving settling, minimizing cross-talk, ensuring stability and reducing noise.
The selected detector can be easily interfaced with the SIDECAR electronics solution, which helps to mitigate a number of electronics design challenges in implementing a fully functional solution. The key benefit is the closer integration of the ADCs to the detector, which is expected to simplify the interface design, safeguard the SNR and mitigate cross-talk and some noise sources.
The baseline SIDECAR CFEE will receive a master clock and sync signals in order to be properly operated and to generate the detector clocks and control signals. The WFEE could also include an FPGA providing a serial interface to both the SIDECAR and the ICU and performing pre-processing on the scientific digital data (e.g. masking) and on the housekeeping data.
The WFEE, if definitively adopted, shall implement stabilized voltage regulators and bias generators for the CFEE and the detector, and shall interface the CFEE using a suitable cryo-harness design [6,13]. This critical subsystem is designed as part of the signal interface between the detector, the CFEE and the WFEE, in order to ensure that the best subsystem trades and the required signal performance are achieved by design.
Summary
In the present paper the scientific objectives of the EChO mission have been presented. The VNIR module has been designed to fulfill both the technical and scientific requirements of the proposed mission, and some of the adopted technical solutions have been shown.
| 8,194.2 | 2014-06-14T00:00:00.000 | [ "Physics" ] |
Crystal structure of a new molecular salt: 4-aminobenzenaminium 5-carboxypentanoate
The asymmetric unit of the title molecular salt consists of half a 4-aminobenzenaminium cation and half a 5-carboxypentanoate anion. Each ion lies about an inversion centre, the other half being generated by inversion symmetry. In the crystal, charge-assisted O—H⋯O, N—H⋯O and N—H⋯N hydrogen bonds together lead to the formation of a three-dimensional supramolecular framework.
Chemical context
p-Phenylenediamine (PPDA) has been widely used to synthesize hair dyes, engineering polymers and composites. The coordination chemistry of PPDA is well documented (Adams et al., 2011; Bourne & Mangombo, 2004). Adipic acid (AA) is an industrial chemical used to manufacture nylon and is also used in many drugs and food additives (Rowe et al., 2009). A number of salts and co-crystals involving p-phenylenediamine have been reported (Thakuria et al., 2007; Delori et al., 2016), and adipic acid is also widely known as a coformer in co-crystal formation (Swinton Darious et al., 2016; Lemmerer et al., 2012; Lin et al., 2012; Matulková et al., 2014; Thanigaimani et al., 2012). A 2:1 salt of 4-aminoanilinium (PPDAH) and sebacate, and a 1:1 salt of PPDAH and dihydrogen trimesate, have been reported recently (Delori et al., 2016). We have previously reported various salts of o-phenylenediamine with aromatic carboxylic acids (Mishra & Pallepogu, 2018). Herein, we report on the synthesis and crystal structure of the 1:1 salt formed between p-phenylenediamine and adipic acid, (I).
Structural commentary
The asymmetric unit of the title salt (I), illustrated in Fig. 1, consists of half each of a 4-aminobenzenaminium cation (4-ABA) and a 5-carboxypentanoate anion (5-CP); both ions (space group P-1) lie about inversion centres. Partial protonation (50%) has occurred at atom N1 of the cation, resulting in the formation of a salt with the formula unit C6H9O4−·C6H9N2+. One of the two adipic acid H atoms binds to atom N1 with a site-occupancy factor (SOF) of 0.5 (for atom H1NC), and is thereby positioned at two sites of the cation (because of inversion symmetry). The other acid H atom (H2O) is located on an inversion centre and is therefore shared equally by two O2 atoms of inversion-related anions. The C1—N1 bond length [1.4361 (13) Å] in the 4-ABA cation is longer than literature values for a non-protonated amine (C—NH2) group [cf. 1.418 (2) Å; Czapik et al., 2010], and this can be attributed to the partial protonation with SOF = 0.50 at each site. In the 5-CP anion, the C6=O1 and C6—O2 bond lengths [1.2379 (12) and 1.2802 (11) Å, respectively] are similar to the values reported for 2-methylimidazolium hydrogen adipate monohydrate [1.244 (2) and 1.264 (2) Å, respectively; Meng et al., 2009], in which a carboxylic acid H atom is also statistically distributed between the two carboxy groups and a hydrogen-bonded chain is formed. In (I), the position of this H atom (H2O) was located in a difference-Fourier map and found to be situated on an inversion centre (½, ½, ½). It is positioned symmetrically between two O2 atoms of two inversion-related 5-CP ions, which accounts for the long O—H bond length of 1.22 Å (see Table 1). The C4ii—C4—C5—C6 torsion angle of −179.82 (9)° indicates that the carbon chain in the anion is fully extended [see Fig. 1 for symmetry code (ii)].
Figure 2
A view along the c axis of the O—H⋯O hydrogen-bonded chain of 5-CP anions (see Table 1). The H atoms (H2O; shown as grey balls) are shared between O2 atoms of inversion-related anions. The C-bound H atoms and the cations have been omitted.
A search of the CSD for salts of adipic acid (AA) with different amines yielded 67 hits. One structure of particular interest, viz. 2-methylimidazolium hydrogen adipate monohydrate, has been reported twice: once at room temperature (BOTTOU; Meng et al., 2009), where the same type of partial disorder is observed, with the carboxylic acid H atom statistically distributed between the two carboxy groups and a hydrogen-bonded chain formed. The low-temperature analysis at 120 K using synchrotron radiation (BOTTOU01; Callear et al., 2010), however, describes the structure as bis(2-methylimidazolium) adipate adipic acid dihydrate. In that crystal, the adipate and adipic acid molecules also form a hydrogen-bonded chain. A second structure, tetrakis(cytosinium) dihydrogen bis(adipate), also exhibits the same type of disorder of the carboxylic acid H atom (OYEREQ; Das & Baruah, 2011), and in the crystal it forms a hydrogen-bonded chain.
Synthesis and crystallization
The title molecular salt (I) was synthesized by mixing a 5 ml methanolic solution of adipic acid (AA: 0.5 mmol, 73 mg) and 3 ml of an acetonitrile solution of p-phenylenediamine (PPDA: 0.5 mmol, 54 mg). The reaction mixture was heated to 323 K with magnetic stirring for ca 30 min, and then filtered and allowed to evaporate slowly at room temperature. Purple block-like crystals of (I) were obtained after 5 d (m.p. 438 K). FTIR (KBr pellet, cm−1): 3337, 3180, 2946, 2383, 1706, 1515, 1255, 821, 743, 501, 475.
Figure 5: The PXRD pattern obtained from the product of the LAG experiment, and the simulated PXRD pattern of the crystal structure of the title molecular salt. The PXRD patterns of the reactants used for the co-crystallization and LAG syntheses are also shown.
Figure 3
A partial view, normal to the ab plane, of the crystal packing of the title molecular salt (I). Hydrogen bonds are shown as dashed lines (see Table 1), and C-bound H atoms have been omitted for clarity.
Figure 4
A view along the b axis of the crystal packing of the title molecular salt (I), showing the three-dimensional supramolecular framework. Hydrogen bonds are shown as dashed lines (see Table 1), and C-bound H atoms have been omitted for clarity.
The title compound was also synthesized by liquid-assisted grinding (LAG). For this mechanochemical synthesis, equimolar amounts of AA (1 mmol, 146 mg) and PPDA (1 mmol, 108 mg) were ground for 20 min. in a mortar and pestle using 3 to 4 drops of acetonitrile. The powdered sample was collected for PXRD and the resultant pattern was scrutinized for new peaks, as evidence for the formation of the title molecular salt (I), by comparing this pattern with the simulated pattern obtained from the CIF file of salt (I). The PXRD pattern of the compound obtained from the LAG experiment matches the simulated pattern obtained for (I), formed by co-crystallization (Fig. 5).
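As an illustration of this kind of pattern comparison (not the software actually used), a simple normalized-correlation check between the experimental LAG pattern and the pattern simulated from the CIF could look like the sketch below; the intensity arrays are placeholders standing in for real measured and simulated data.

```python
import numpy as np

# Placeholder PXRD intensities on a common 2-theta grid; in practice these
# would be the measured LAG pattern and the pattern simulated from the CIF.
two_theta = np.linspace(5, 50, 2251)
i_lag = np.random.default_rng(0).random(two_theta.size)
i_sim = i_lag + 0.05 * np.random.default_rng(1).random(two_theta.size)

def normalize(y):
    """Scale a pattern to the 0-1 range so intensities are comparable."""
    y = y - y.min()
    return y / y.max()

r = np.corrcoef(normalize(i_lag), normalize(i_sim))[0, 1]
print(f"correlation between LAG and simulated patterns: {r:.3f}")
# A high correlation, together with the absence of unmatched reactant peaks,
# supports formation of the same phase as the single crystals.
```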
Refinement
Crystal data, data collection and structure refinement details are summarized in Table 2. C-bound H atoms were placed in calculated positions and refined using a riding-model approximation, with C—H = 0.95–0.99 Å and Uiso(H) = 1.2Ueq(C). The ammonium and carboxyl H atoms were located in difference-Fourier maps and were freely refined. The H atom (H1NC) bound to N1 of the 4-ABA cation, with an occupancy factor of 0.5, is positioned at two sites of the cation due to inversion symmetry, giving rise to a monoprotonated species. The carboxylic acid H atom (H2O) is positioned symmetrically between the two O2 atoms of inversion-related 5-CP ions (O—H = 1.22 Å). This H atom (H2O) is located on an inversion centre (½, ½, ½) with an occupancy factor of 0.5, and hence gives rise to a mono-deprotonated species.
| 1,712.2 | 2018-01-26T00:00:00.000 | [ "Chemistry", "Materials Science" ] |
Impulse Noise Induced Hidden Hearing Loss, Hair Cell Ciliary Changes and Oxidative Stress in Mice
Recent studies have demonstrated that reversible continuous noise exposure may induce a temporary threshold shift (TTS) with permanent degeneration of auditory nerve fibers, although hair cells remain intact. To probe the impact of TTS-inducing impulse noise exposure on hearing, CBA/J mice were exposed to noise impulses with peak pressures of 145 dB SPL. We found that 30 min after exposure, the noise caused a mean elevation of ABR thresholds of ~30 dB and a reduction in DPOAE amplitude. Four weeks later, ABR thresholds and DPOAE amplitude were back to normal in the higher frequency region (8–32 kHz). At lower frequencies, a small degree of PTS remained. Morphological evaluations revealed a disturbance of the stereociliary bundles of outer hair cells, mainly located in the apical regions. On the other hand, the reduced suprathreshold ABR amplitudes remained 4 weeks later. A loss of synapses was observed 24 h after exposure, with full recovery two weeks later. Transmission electron microscopy revealed morphological changes at the ribbon synapses two weeks post exposure. In addition, increased levels of oxidative stress were observed immediately after exposure, and were maintained for a further 2 weeks. These results clarify the pathology underlying impulse-noise-induced sensory dysfunction, and suggest possible links between impulse-noise injury, cochlear cell morphology, metabolic changes, and hidden hearing loss.
Introduction
Noise-induced hearing loss (NIHL) acquired in leisure or occupational settings is a common cause of hearing impairment in industrialized countries, with a prevalence second only to age-related hearing loss (ARHL) [1,2]. It is well known that after high-intensity exposure, an irreversible increase in hearing thresholds can occur, leading to a permanent threshold shift (PTS) due to the loss of cochlear sensory hair cells [3]. Hearing loss associated with mild acoustic overexposure is reversible, and hearing recovers within 2-3 weeks [4]. This temporary loss is known as a temporary threshold shift (TTS), and is probably due to reversible damage to the stereocilia of hair cells [5] and/or swelling, followed by recovery, of cochlear nerve terminals [6,7].
Some recent studies demonstrated that TTS exposure may cause the loss of more than 50% of the synapses that lie between the cochlear nerve fibers and inner hair cells (IHCs), without hair-cell damage and without alteration in hearing thresholds [8,9]. This selective synaptopathy occurring after noise exposure was thus named "hidden hearing loss" [10]. The threshold recovery to normal levels was attributed to the recovery of OHC function, together with the unexpected resilience of the high-spontaneous-rate auditory fibers encoding the best thresholds. Nevertheless, the fragility of the low-spontaneous rate fibers is not yet understood. Cochlear synaptopathy might contribute to impairment of the ability to understand speech in loud background noise [11], and also to hyperacusis and/or tinnitus [12,13]. Finally, the synaptopathy induced by noise exposure would contribute to the early onset of neural age-related hearing loss in mice [14,15].
However, to date most findings from studies in animal models have demonstrated cochlear synaptopathy and neurodegenerative processes apparently linked to continuous octave-band noise exposure at sound pressure levels of ~100 dB SPL for ~2 h [10,11,14,16]. Only a recent study in blast-noise-exposed chinchillas [17] showed that the synapses between inner hair cells and the dendrites of spiral ganglion neurons are most vulnerable to blast exposure of 165 dB SPL peak.
Impulse noises, resulting from the sudden release of energy into the atmosphere (explosions, gunshots) or from impacts between objects (machine tools, hammering), are common in industry, construction and the military [18][19][20]. Perceptual anomalies such as tinnitus or hyperacusis, and difficulty hearing in noise, often occur and persist after blast-induced damage, even in cases where threshold sensitivity has returned to normal [21][22][23]. Surprisingly, although there is a growing body of knowledge documenting the effects of continuous noise exposure on hearing PTS or TTS [10,11,14,16,[24][25][26][27][28], there is very little literature on the effects of blast-wave and impulse-noise exposure on hearing function.
The purpose of this study was to probe cochlear function, morphology and metabolic state after a moderate impulse-noise exposure, to determine whether, as for continuous-noise exposures, the synaptic connections between hair cells and cochlear nerve fibers are more vulnerable than the hair cells themselves to impulse-noise exposures. To do so, awake mice were exposed to an impulse noise with peak pressures of 146 dB SPL. The effects of this exposure were assessed using complementary approaches combining morpho-physiology, biochemistry and molecular biology.
Animals
Male CBA/J mice were purchased from Janvier Laboratories (Le Genest-Saint-Isle, France) and were housed in facilities accredited by the French Ministry of Agriculture and Food (D-34-172-36; 20 May 2021). Experiments were carried out in accordance with French Ethical Committee stipulations regarding the care and use of animals for experimental procedures (agreements C75-05-18 and 01476.02, license #6711). All experimental procedures were conducted with 10-14-week-old male mice. All efforts were made to minimize the number of animals used and their suffering.
Impulse Exposures
Awake mice were placed, singly and unrestrained, in a small wire-mesh cage suspended directly below the acoustic horn of a loudspeaker that extended into an exposure chamber lined with acoustic foam to reduce sound reverberation. The explosion-like impulses were generated by a customized system. Briefly, the noise was generated by a PCI 4461 card (National Instruments, Austin, TX, USA) using the Friedlander equation in LabVIEW, as described by Qin et al., 2015 [29]. To measure the impulse noise generated, a ¼″ high-sensitivity condenser microphone set (GRAS 46BF) was used. The microphone was aligned at the center of the horn and the impulse noise generated at different output voltages (from 0.3 to 8 V). A pilot study was performed to obtain a temporary threshold shift (TTS) without visible eardrum rupture by varying the intensity and the rate of presentation. The best compromise was a presentation of 700 broad-spectrum (0.25-24 kHz) impulses with a peak SPL of 145 ± 0.5 dB at a 1 Hz pulse repetition rate (total duration: 11 min 40 s) (Figure 1).
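For illustration, a Friedlander-type impulse of the kind described above can be sketched as follows; the sample rate and positive-phase duration are assumed values, not those of the LabVIEW system, while the 700 impulses at 1 Hz reproduce the quoted 11 min 40 s exposure.

```python
import numpy as np

# Minimal sketch of a Friedlander-type impulse (illustrative parameters only;
# the actual stimuli were generated in LabVIEW on a PCI 4461 card).
fs = 96_000          # sample rate (Hz), assumed
t_dur = 0.5e-3       # positive-phase duration (s), assumed
p_peak = 1.0         # normalized peak pressure

t = np.arange(0, 10 * t_dur, 1.0 / fs)
p = p_peak * np.exp(-t / t_dur) * (1.0 - t / t_dur)  # Friedlander waveform
print(f"{p.size} samples per impulse, peak {p.max():.2f}")

# One impulse per second, repeated 700 times, gives the stated exposure time.
n_impulses, rate_hz = 700, 1.0
total_s = n_impulses / rate_hz
print(f"total exposure duration: {total_s // 60:.0f} min {total_s % 60:.0f} s")
```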
Functional Hearing Assessments
Functional evaluation of the ears was performed by recording auditory brainstem responses (ABRs) and distortion-product otoacoustic emissions (DPOAEs) in anesthetized mice before, 30 min, 1 day, and 2 and 4 weeks after noise exposure (n = 11 animals, 22 cochleae for each time point). All functional evaluations were carried out in a Faraday-shielded, anechoic, soundproof cage. Rectal temperature was measured with a thermistor probe, and maintained at 38.5 ± 1 °C using an underlying heated blanket. For evaluation of age-related hearing loss, 4 age-matched additional mice (n = 8 cochleae) were recorded at 14 weeks of age and at 18 weeks of age (see Supplementary Figure S1).
Distortion-Product Otoacoustic Emission (DPOAEs)
DPOAEs were recorded in the external auditory canal using an ER-10C S/N 2528 probe (Etymotic research Inc. Elk Grove Village, IL, USA.). Stimuli were two equi-level (65 dB SPL) primary tones, f1 and f2, with a constant f2/f1 ratio of 1.2. The DPOAE 2f1-f2 was extracted from the ear canal sound pressure and processed by a HearID auditory diagnostic system (Mimosa Acoustic, Champaign, IL USA) on a computer. The probe was self-calibrated for the two stimulating tones before each recording. f1 and f2 were presented simultaneously, stepping f2 from 20 to 20 kHz in quarter-octave steps. For each frequency, the distortion product 2f1-f2 and the neighboring noise amplitude levels were measured and expressed as a function of f2.
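The relation between the primaries and the measured distortion product can be illustrated with a few lines of Python; the f2 values below are only examples, not the actual test frequencies.

```python
# For each primary-tone pair with f2/f1 = 1.2, the measured distortion product
# is the cubic difference tone 2f1 - f2 (frequencies in kHz; values illustrative).
f2_values = [5.0, 10.0, 20.0]
for f2 in f2_values:
    f1 = f2 / 1.2          # lower primary, fixed f2/f1 ratio of 1.2
    dp = 2 * f1 - f2       # distortion product extracted from the ear canal
    print(f"f2 = {f2:5.2f} kHz, f1 = {f1:5.2f} kHz, 2f1-f2 = {dp:5.2f} kHz")
```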
Auditory Brainstem Response (ABRs)
ABRs were recorded using three subcutaneous needle electrodes placed on the vertex (active), on the pinna of the tested ear (reference) and in the hind leg (ground). Strong correlations were observed between click-evoked ABR thresholds and pure-tone thresholds at 2 and 4 kHz [30]. To obtain more frequency-specific estimates of hearing sensitivity in the high-frequency range, we chose to use tone-burst stimulation for ABR recording. Sound stimuli were generated by a NI PXI-4461 signal generator (National Instruments) and consisted of 9 ms tone bursts, with a 7 ms plateau and 1 ms rise/fall times, delivered at a rate of 11/s with alternate polarity by a JBL 2426H loudspeaker in a calibrated free field. Stimuli were presented to the ear by varying levels from 100 to 0 dB SPL, in 5 dB steps. Stimuli were generated and data acquired using Matlab (MathWorks, Natick, MA, USA) and LabView (National Instruments) software. The difference potential between vertex and mastoid intradermal needles was amplified (20,000 times, Grass P511 differential amplifier), sampled (at a rate of 50 kHz), filtered (bandwidth of 0.3-3 kHz) and averaged (700 times). Data were displayed using LabView software and stored on a computer (Dell precision 3630). ABR thresholds were defined as the lowest sound level that elicited a clearly distinguishable wave II. Recordings and analysis were performed blindly.
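A minimal sketch of this acquisition chain on synthetic data (the toy evoked waveform and noise levels are assumptions) is given below, using the band-pass and averaging parameters quoted above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Band-pass 0.3-3 kHz at a 50 kHz sampling rate, then average 700 sweeps.
fs = 50_000
n_sweeps, sweep_ms = 700, 10
n_samp = int(fs * sweep_ms / 1000)

rng = np.random.default_rng(2)
t = np.arange(n_samp) / fs
evoked = 0.5e-6 * np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 2e-3)   # toy ABR-like waveform (V)
sweeps = evoked + rng.normal(0, 5e-6, (n_sweeps, n_samp))             # noisy single sweeps

b, a = butter(2, [300, 3000], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, sweeps, axis=1)
abr = filtered.mean(axis=0)   # averaging improves SNR by ~sqrt(700) ~ 26x
print(abr.shape)
```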
Morphological Assessments
Eighteen additional mice were used for the assessment of noise-induced ultrastructural changes in sensory neural cells of the cochlea using scanning (SEM, Hitachi S4000) and transmission electron microscopy (TEM, Tecnai F20 FEI 200KV). In addition, 24 additional mice were required for counting of ribbon synapses of IHCs and immunocytochemistry using confocal microscopy.
Counting of Sensory Hair Cells
Sensory hair-cell loss was evaluated using SEM. The cochleae were processed and evaluated using previously reported standard techniques [31]. Counting of inner (IHC) and outer (OHC) hair cells was performed in three different 300 µm-long segments of the organ of Corti located 0.5 to 1, 1.1 to 2.5, and 2.6 to 3.7 mm from the apex tip, corresponding to the 6 to 8, 8 to 16 and 16 to 25 kHz regions, respectively, in both control and noise-exposed cochleae at different post-exposure times (n = 4 to 6 cochleae per time point). Hair cells were considered to be absent if the stereociliary bundles and cuticular plates were missing [31,32]. Disruption of the OHC stereociliary bundle was defined as bending, at its base, of the outermost row of stereocilia toward the lateral side, while the other bundle rows remained straight. To minimize bias, two different experimenters performed the counts.
Ultrastructural Analysis
Morphological damage related to noise exposure was investigated using TEM of the basal cochlear region. Animals were decapitated under deep anesthesia, and their cochleae were prepared according to a standard protocol for fixation and plastic embedding. Semithin sections were observed under a Zeiss Axioscope light microscope, and ultrathin radial sections of the organ of Corti were analyzed using TEM (n = 5-6 cochleae per time point).
Counting of the Auditory Nerve-Fiber Terminals
The density of the auditory nerve-fiber terminals was measured in 3 to 4 habenular openings per cochlear semi-thin section of the osseous spiral lamina, from the cochlear regions coding between 16 and 25 kHz, in control mice and in noise-exposed mice 2 weeks after exposure. The mean value of each section was then averaged for each animal and each group (n = 3 sections per animal, 4-5 cochleae per group).
Enzymatic Activities and Lipid Peroxidation
Cochlear homogenates were prepared as described by Casas [33] and the protein concentration measured using the Bradford method. Lipid peroxidation was assessed using the thiobarbituric acid-reactive substances method, and expressed in nmol/mg malondialdehyde (MDA) [33]. Catalase and SOD activities were measured as described, respectively, by Beers and Sizer [34] and Marklund [35]. Complex I, complex II and COX activities were measured as described previously [36][37][38] and expressed in mU/mg protein. Enzymatic activities and lipid-peroxidation analysis required 8 additional animals (16 cochleae) per time point. All experiments were performed in triplicate.
Statistics
Data are expressed as the mean ± SEM; statistical analyses were carried out using GraphPad Prism 8 (GraphPad, San Diego, CA, USA). Normality of the variables was assessed using the Shapiro–Wilk test. The significance of group differences for normally distributed data was assessed with a two-way ANOVA followed by Dunnett's multiple comparisons test. If the data failed to pass the normality test, a Friedman ANOVA followed by Dunn's multiple comparison test was used. Comparisons were made with the control condition, or between before the noise exposure and multiple time points after noise exposure. The level of statistical significance was set at p ≤ 0.05.
Based on data from our previous reports [39] or from preliminary experiments, we calculated the sample size using G*Power 3.1.9.2 to ensure adequate power of key experiments for detecting prespecified effect sizes.
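A minimal sketch of this decision logic on synthetic data is shown below; for brevity it uses a one-way ANOVA where the study used a two-way ANOVA with Dunnett's post hoc, and the threshold values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Normally distributed data -> parametric ANOVA; otherwise -> Friedman test.
rng = np.random.default_rng(1)
before   = rng.normal(35, 5, 22)   # e.g. ABR thresholds before exposure (dB SPL), invented
after_30 = rng.normal(65, 5, 22)   # 30 min after exposure, invented
after_4w = rng.normal(38, 5, 22)   # 4 weeks after exposure, invented

groups = [before, after_30, after_4w]
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)

if normal:
    stat, p = stats.f_oneway(*groups)            # parametric ANOVA (post hoc test would follow)
else:
    stat, p = stats.friedmanchisquare(*groups)   # repeated-measures, non-parametric alternative
print(f"normal={normal}, statistic={stat:.2f}, p={p:.3g}")
```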
Impulse Noise Induced Reversible Threshold Shifts at the Higher Frequencies
We first evaluated the impact of impulse noise exposures on hearing function. Our results showed that 30 min after exposure, mice displayed an elevation of ABR thresholds of ≥30 dB for all frequencies (Figure 2A). During the first 24 h following the noise exposure, there was a partial recovery of ABR thresholds of around 20 dB. Four weeks after, complete recovery of ABR thresholds was observed at the higher frequencies (8 to 32 kHz), but not the lower frequencies (4 and 6 kHz), where <5 dB PTS still remained (Figure 2A). These results were confirmed by the mean ABR thresholds at 6 and 25 kHz, that showed a significant increase in the ABR thresholds 30 min after impulse-noise exposure. Complete recovery was observed in the 25 kHz region from 2 weeks, but not at 6 kHz, where a slight but significant elevation in ABR thresholds was still observed 4 weeks after exposure (p < 0.05, Figure 2B,C).
Alteration of Distortion-Product Otoacoustic Emissions
OHCs act as nonlinear feedback amplifiers that enhance the sensitivity and the frequency selectivity of the hearing organ. DPOAEs are a by-product of this nonlinear amplification process and hence can serve as a measure for evaluating the integrity of OHCs. Thirty minutes following noise exposure, a moderate decrease in the amplitude of DPOAEs in the frequency range from 6 to 20 kHz was observed, returning to nearly pre-exposure levels at the higher frequencies two weeks later (Figure 2D). These results were confirmed by the mean DPOAE amplitudes evoked by frequencies of 7 and 20 kHz. A significantly (p < 0.01) reduced amplitude of DPOAEs was observed at 7 kHz up to 4 weeks after exposure, while complete recovery of DPOAE amplitude was observed at 20 kHz from 2 weeks after exposure (Figure 2E,F). These results suggest that impulse noise impaired sound processing at the lower frequencies in the cochlea.
Reduced ABR Wave-I Amplitude and Elevated Central Gain following Exposure
Continuous noise exposures produce robust TTS and permanently reduced ABR wave-I amplitudes at supra-threshold levels in mice, together with degeneration of low-spontaneous-rate auditory nerve fibers [14]. Here, 30 min after impulse-noise exposure, ABR wave-I amplitudes elicited by 4, 8, 16 and 32 kHz tone-burst stimulation were dramatically reduced at all sound levels tested, and most markedly in the 32 kHz region. These results suggest that impulse noise induced acute dysfunction of OHCs and cochlear synaptopathy. A progressive, partial recovery of ABR wave-I amplitudes was observed from 24 h to 4 weeks following noise exposure at all sound levels and for all frequencies. Four weeks after noise exposure, although ABR thresholds had completely recovered to pre-exposure levels at the higher frequencies (8 to 32 kHz, Figure 2A-C), the amplitudes of wave I of the ABR remained lower than pre-exposure values at all sound levels tested and from low to high frequencies.
The average ABR waveforms elicited by all tone-burst frequencies at 80 dB SPL (Figure 3A) showed a strong reduction in the amplitude of all waves at 30 min following impulse noise exposure, except for wave V elicited by 16 kHz. Four weeks after exposure, while the amplitude of wave V had almost completely recovered, peaks I and II still remained smaller than pre-exposure (Figure 3A), suggesting that a compensatory mechanism might have affected central processing. Mean ABR amplitudes elicited by 4, 8, 16 and 32 kHz tone-bursts at 60 dB SPL (Figure 3B, red plots) and 80 dB SPL (Figure 3B, black plots) showed a significant reduction in wave-I amplitudes up to 4 weeks after exposure (Figure 3B), except for the 16 kHz, 80 dB SPL stimulus, where a complete recovery was seen after 24 h (Figure 3B).
In addition, a significant increase in the V/I wave ratio was observed for all tone-burst frequencies at 60 or 80 dB SPL stimulus by 24 h, and maintained to 4 weeks after exposure for 8, 16 and 32 kHz ( Figure 3C). The differences between before and 4 weeks after exposure at 60 or 80 dB SPL stimulus for 4 kHz, and for 16 kHz at 60 dB SPL stimulus, however, were not significant (p > 0.05, Figure 3C).
Together, these results suggest that impulse noise induced permanent, moderate OHC dysfunction, together with reduced ABR wave-I amplitudes at supra-threshold levels. In addition, there was evidence of compensatory mechanisms in central processing.
Disturbances in Stereociliary Bundle Morphology of the Outer Hair Cells
To assess the effects of impulse-noise exposure on the hair cells, we performed scanning electron microscopy (SEM), which allows the visualization of the surface of the organ of Corti. Twenty-four hours after exposure, disturbance of stereocilial morphology was observed mainly in outer hair cells located in the regions coding the frequencies from 4 to 16 kHz that matched the changes in the DPOAE ( Figure 4C,D), compared to a normal appearance of the hair cell bundles in control unexposed cochleae ( Figure 4A,B). The bundle disruption did not recover until 15 days after exposure ( Figure 4E,F). In addition, a few IHCs also showed fused stereocilia ( Figure 4D,F).
Quantification analysis revealed that a significantly higher number of OHCs had disrupted or fused hair bundles in the cochlear region coding 6 to 16 kHz in noise-exposed mice by 30 min, and this was maintained to 2 weeks after exposure (p < 0.05 vs. before, Figure 4H). A slight but significant increase in IHCs with fused hair bundles was also observed in the 8-16 kHz coding region of the exposed mice at 2 weeks after exposure (p < 0.05 vs. before, Figure 4G). By contrast, no significant loss of IHCs or OHCs was observed in either control unexposed or noise-exposed mice (Figure 4I,J).
Figure 3 (partial caption): ABR amplitudes recorded before, 30 min, 24 h or 4 weeks after impulse-noise exposure; two-way ANOVA followed by Dunnett's multiple comparison, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001, time after noise exposure vs. before. (C): Mean ABR wave V/I amplitude ratios recorded before, 24 h, or 4 weeks after impulse-noise exposure at 60 dB SPL (red) or 80 dB SPL (black); as the V/I ratios failed the normality test, a Friedman ANOVA followed by Dunn's multiple comparison test was used, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001. All data are expressed as mean ± SEM (n = 22 cochleae per time point).
Reversible and Moderate Loss of IHC Ribbon Synapses
To determine the contribution of the loss of ribbon synapses to the impulse-noise-induced reduction of ABR wave-I amplitudes at suprathreshold levels, we examined IHCs by double-labeling presynaptic and postsynaptic structures and 3D confocal imaging analysis [40] in the cochlear regions coding between 6 and 45 kHz. Synapses were identified as juxtaposed presynaptic ribbons and postsynaptic AMPA receptor clusters, characterized by staining with antibodies against CtBP2 and Homer 1, respectively (Figure 5A-C). Control, unexposed ears displayed a broad peak of roughly 15 to 19 ribbons and paired synaptic puncta (synapses) per IHC at cochlear regions tuned to frequencies between 8 and 45 kHz (Figure 5D,E).
Figure 5 (partial caption): (D) counts of total ribbons (all CtBP2 puncta) and orphan ribbons (unpaired CtBP2 puncta) per IHC along the tonotopic axis of cochleae from control mice (blue) and impulse-noise-exposed mice at 24 h (orange) and 2 weeks (green) after exposure; (E) quantification of synapses (paired CtBP2–Homer1 puncta). Two-way ANOVA followed by Dunnett's multiple comparison, * p ≤ 0.05, *** p ≤ 0.001, time after noise exposure vs. before; data are mean ± SEM (n = 7-8 cochleae per time point).
One day post exposure, there was a moderate loss of ribbons (Figure 5D) and a more severe loss of synapses (Figure 5E), spanning the cochlear regions from 8 to 45 kHz. The loss reached significance in the 32 and 45 kHz regions for synapse counts (32 kHz: p < 0.001, 45 kHz: p < 0.05; Figure 5E), but only at 32 kHz for ribbon counts (p < 0.05, Figure 5D). In addition, one day after exposure, a significantly increased number of orphan ribbons was observed at 45 kHz (Figure 5D). Two weeks after exposure, the loss of ribbons and synapses had nearly completely recovered, the synaptic counts for the control vs. 2 weeks post exposure being statistically indistinguishable (Figure 5D,E). These results suggest that repair of damaged synapses had taken place, as shown in previous studies [41][42][43].
To examine the ultrastructure of the ribbon synapses, we used TEM (Figure 5F,G). In control unexposed IHCs, we often found ribbon synapses that were regular in shape and size; the ribbon bodies were surrounded by a well-organized halo of synaptic vesicles (Figure 5F). Two weeks after exposure, most of the ribbon synapses found presented electron-dense cores (Figure 5G). In addition, some IHC synapses had an immature morphology with double ribbons (left in Figure 5G), which is typically a developmental trait occurring during cochlear development and during synaptic repair in the post-traumatic period [43][44][45]. In most IHCs from noise-exposed mice 2 weeks after exposure, the postsynaptic density was still clearly visible (blue arrows), as were docked vesicles. Together, these results suggest incomplete regeneration or repair of the IHC ribbon synapses, which may explain the only partial recovery of ABR wave-I amplitudes.
Finally, the density of the auditory nerve-fiber terminals in the habenular openings (Figure 5H-J) was similar in control mice and in noise-exposed mice 2 weeks after exposure (90.7 ± 1.9 versus 92.9 ± 2.6 fibers per 1000 µm² for control and exposed mice, respectively, Figure 5J).
Oxidative Stress
Mitochondria play a key role in cochlear homeostasis and in maintaining cell function during exposure to sound. To assess the effects of impulse-noise exposure on mitochondrial activity, we measured citrate synthase (CS) activity (a marker of mitochondrial density) and cytochrome c oxidase activity (COX) (complex IV of mitochondrial respiratory chain) in unexposed cochleae and in others at different times after noise exposure. Here, we report no significant differences between the groups ( Figure 6A,B).
One of the commonly recognized mechanisms mediating noise-induced cochlear damage is oxidative stress [46]. To test the occurrence of oxidative stress in cochlear tissues after impulse-noise exposure, we assessed the activity of some first-line anti-oxidant enzymes such as catalase (Cat), superoxide dismutase (SOD), and the glutathione peroxidase (GPx), which are recruited to counteract free-radical damage. In addition, lipid peroxidation and protein oxidation were analyzed by measuring the levels of malondialdehyde (MDA) and thiols (SH). Whereas catalase activity was not influenced by noise exposure (Figure 6C), we found that SOD activity was significantly reduced 2 weeks after (p < 0.05, Figure 6D). By contrast, the activity of glutathione peroxidase was significantly increased by 1 week after exposure and maintained to 4 weeks after (vs. control: p < 0.001, Figure 6E). We also observed that MDA levels were strongly reduced 30 min after sound exposure (vs. control: p < 0.001, Figure 6E), whereas no significant difference in the level of SH was found between groups ( Figure 6G).
Confocal microscopy observations revealed very strong SOD2 expression in the cytoplasm of the OHCs and their supporting Deiters' cells, the SGNs, and the cells of the stria vascularis of the cochleae 24 h after exposure (Figure 6J), compared with control unexposed cochleae (Figure 6H) and 30 min after exposure (Figure 6I). Finally, a drastically reduced SOD2 level was observed in the sensory hair cells, SGNs, and strial cells by 2 weeks after exposure (Figure 6K). Together, these results show that noise exposure elicits oxidative stress.
Figure 6 (partial caption): (G) thiols (SH) measured by spectrofluorochemistry in whole cochlear extracts from control and impulse-noise-exposed cochleae at different times after exposure (n = 16 cochleae per condition); data are mean ± SEM, one-way ANOVA followed by Dunn's test, * p ≤ 0.05, *** p ≤ 0.001 vs. control. (H-K): Confocal images of transverse cryostat sections of the organ of Corti, SGNs and stria vascularis from control (H) and impulse-noise-exposed mice at 30 min (I), 24 h (J) and 2 weeks (K) after exposure; sections were immunolabeled with antibodies against SOD2 (green) and NF 200 (red), with phalloidin-rhodamine and DAPI labeling actin and nuclei. SOD2 immunoreactivity is increased at 24 h and reduced at 2 weeks after exposure, mainly in OHCs, Deiters' cells, SGNs and stria vascularis. tC: tunnel of Corti, sgn: spiral ganglion neuron, sv: stria vascularis, sl: spiral ligament. Scale bars = 15 µm.
Discussion
Firearms and some industrial equipment can generate high levels of impulse noise that may cause traumatic TTS or PTS and/or tinnitus through mechanical injuries of the structures of the middle (e.g., eardrum rupture, disruption of the ossicular chain) [47] and inner ear (e.g., mainly OHC damage) [48,49], as well as metabolic disturbances of the cochlea (e.g., ischemia/reperfusion injury, oxidative stress) [50,51].
Impulse Noise Did Not Induce Eardrum Rupture
It has been suggested that the "threshold" for eardrum rupture in humans is about 185 dB SPL peak [52]. A mouse study demonstrated that exposure to 199 dB SPL peak blasts caused rupture of the tympanic membrane and widespread loss of OHCs in all animals [53]. To assess the effects of impulse noise on cochlear function and morphology, we used an impulse noise with 145 dB SPL peak, 1 impulse/s for 700 impulses. Our results showed no visible eardrum rupture. These results are consistent with DPOAE changes, showing a smaller reduction in the amplitude of DPOAEs than in ABR threshold shifts, since a larger reduction in DPOAEs than ABR threshold shifts would be expected in the case of middle ear damage (middle ear damage affects DPOAE twice).
Reversible Shifts of ABR Thresholds and Reduction of DPOAE Amplitude at Higher, but Not at Lower, Frequencies
To date, relatively little is known about the effects of impulse-noise exposure on the cochlea. A recent study showed that exposure to impulse noise with peak pressures from 160 to 175 dB SPL caused >40 dB TTS, with minimal PTS or hair-cell loss, although often causing a synapse loss of 20-45% in chinchillas [17]. Here, we showed that exposure to impulse noise with peak levels of 146 dB SPL induced ~30 dB TTS with <5 dB PTS, only at the lower frequencies (4 and 6 kHz), and not at the higher frequencies (8 to 32 kHz). These ABR results are consistent with the DPOAE assessments showing a very small reduction in DPOAE amplitude, and only at lower frequencies, reflecting impaired function of the OHCs located in the apical part of the cochlea. SEM evaluation revealed noise-induced, persistent disturbance, seen as fragmented or fused stereocilia of the OHCs, mainly those located in the cochlear region coding the frequencies from 4 to 16 kHz. Our data are consistent with previous findings in rats, showing that the elevation of ABR thresholds after blast exposure was primarily caused by outer hair cell dysfunction induced by stereociliary bundle disruption [54]. Henderson et al. also demonstrated that exposure to 50 impulses of 166 dB peak SPL induced a median 5 to 15 dB PTS, with all chinchillas having substantial hair-cell lesions [55]. In the present study, the complete recovery of the ABR thresholds and DPOAE amplitude at the higher frequencies was consistent with the morphological assessments showing no significant hair-cell lesions in the basal part of the cochlea. These data, showing the influence of impulse exposure on lower-frequency hearing, suggest that the OHCs located in the apical part of the cochlea are more vulnerable to impulse-noise-induced injuries.
Persistent Reduction of ABR Wave-I Amplitude and Elevated Central Gain
Some recent studies in guinea pigs and mice have shown that, for continuous noise exposures, IHC synapses are the most vulnerable elements in the inner ear [11]. A permanent loss of ≥50% of the IHC synapses causes a reduction in ABR wave I amplitude without any elevation of ABR thresholds [11,40]. Here, significant reductions in wave I amplitudes were observed at all frequencies during the acute injury and recovery phases, with little PTS (<5 dB), and that only at lower frequencies (4 and 6 kHz). This small PTS is due to disruption of the OHC stereociliary bundles, mainly in the apical part of the cochlea, and might at least in part explain the reduction in ABR wave I amplitudes in the lower-frequency region. Surprisingly, our impulse-noise paradigm induced ∼30% of IHC synapse loss during acute injury, which then almost recovered by 2 weeks after exposure in all cochlear regions, despite a reduction in ABR wave I amplitude. This mismatch between reduced ABR wave I amplitude and recovery of IHC synapses might be explained by aberrant synaptic reconnection and ribbon morphology, as illustrated by our TEM evaluation of the ribbon synapses. These results are consistent with the data of Song et al., [56] showing coding deficits in hidden hearing loss induced by noise. Finally, we observed that decreased ABR wave I amplitude is associated with an increased Wave V amplitude, suggesting that a decrease in the input to the auditory central nervous system induced a compensatory increase in central gain. This latter might be a causal factor of noise-induced tinnitus and hyperacusis.
Oxidative Stress
Numerous authors have reported that noise exposures may cause subsequent secondary cochlear lesions through oxidative stress and an inflammatory process [26]. Cells are armed against oxidative stress and are endowed with robust anti-oxidant defenses to counteract excessive ROS/RNS production via the activity of anti-oxidant enzymes such as manganese superoxide dismutase (MnSOD/SOD2), copper/zinc superoxide dismutase (Cu/Zn SOD/SOD1), catalase and glutathione peroxidase (GPx) [57].
It has been reported that blast exposure, which is known to cause ear and lung injuries, can induce oxidative stress in the lung, characterized by anti-oxidant depletion, lipid peroxidation and hemoglobin oxidation in rats [58]. Anti-oxidant treatments reduced hemoglobin oxidation and lipid peroxidation [58], as well as impulse-noise-induced hearing loss in rats [59]. In addition, administration of 4-[2-aminoethyl]benzenesulfonyl fluoride, an inhibitor of NADPH oxidase activation, onto the round window membrane of impulse-noise-exposed chinchillas reduced the noise-induced permanent threshold shift [60]. Overall, these results suggest a link between impulse-noise-induced oxidative stress and cochlear cell damage.
In this study, we demonstrate that oxidative stress was induced in the cochlea after impulse-noise exposure, as shown by the early increase in SOD2 expression together with a significantly reduced MDA level, suggesting the activation of anti-oxidative defense mechanisms in the cochlea under stressful conditions. At a later stage (from 1 to 2 weeks after exposure), the appearance of oxidative stress in the cochlea was characterized by markedly reduced anti-oxidant SOD activity and SOD2 expression, together with an overproduction of lipid peroxidation products (MDA). These results indicate ROS-induced, progressive oxidative damage. On the other hand, a persistent and significant increase in the activity of GPXs, which are widely accepted as stress "enzymes" and have even been proposed as biomarkers for sublethal metal toxicity in plants [61], was observed from 1 to 4 weeks. Altogether, these results indicate that even though our impulse-noise paradigm caused only a very small PTS at lower frequencies, it induced persistent oxidative stress in the cochlear cells.
Conclusions
In the present study, we have shown that a moderate impulse-noise exposure caused an elevation of ABR thresholds and a reduction in DPOAE amplitude immediately after exposure, which returned to normal at the higher frequencies 2 weeks later. Only a very small PTS at the lower frequencies (4 and 6 kHz) remained 4 weeks later. This small PTS was due to a permanent disturbance of the stereociliary bundles of OHCs located in the apical part of the cochlea. Even though the ABR threshold shifts recovered completely or almost completely, a permanently reduced amplitude of ABR wave I was observed at all frequencies tested, even four weeks after exposure. This permanent reduction in wave I amplitude persists despite a complete recovery of the number of synapses; it could be explained by a morphological modification of the regenerated synapses revealed by TEM evaluation. Finally, we observed a persistent increase in the levels of oxidative stress up to 2 weeks after noise exposure. These results highlight the potential roles of oxidative stress in impulse-noise-induced damage to cochlear sensorineural cells resulting in hidden hearing loss.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/antiox10121880/s1. Figure S1. ABR audiograms tracking changes of thresholds for a period of 4 weeks.
Author Contributions: J.W. and J.-L.P. designed the experiments. J.-C.C. built the impulse-noise generation and acoustic measurement system. P.G. and J.N. performed electrophysiological assessments. J.N., P.G. and C.A. acquired the histological data. P.G., J.N., F.F. and F.C. performed molecular biological assessments. P.G., J.N. and C.A. carried out quantitative analysis. J.W., J.-L.P. and P.G. wrote the manuscript. J.W., P.G., J.-L.P., R.P., F.C. and S.P. reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.
| 9,555 | 2021-11-25T00:00:00.000 | ["Biology"] |
A Parametric Investigation on Energy-Saving Effect of Solar Building Based on Double Phase Change Material Layer Wallboard
In order to further understand the thermal performance of the double phase change material (PCM) layer wallboard, a wallboard model was established and a comprehensive numerical parametric investigation was carried out. The variation laws of the inner wall temperature rise and of the heat flux transferred under different phase transition temperatures and thermal conductivities are presented in detail. The main results show that the temperature of the inside wall for case 2 can be reduced by about 1.5 K further compared to that for case 1. About 83% of the heat transferred from the outside is absorbed by the PCM layer in case 2. Reducing the phase transition temperature of the PCM layer can decrease the inside wall temperature to a certain extent in the period of high temperature. The double PCM layer configuration shows much better performance compared to the single PCM layer case, and the temperature of the inside wall can be reduced by 2 K further.
Introduction
Energy demand has been increasing quickly with the development of the economy, while the conventional fossil energy sources such as oil, coal, and gas are limited. Their use leads to climate change and environmental pollution [1]. Building energy consumption has become a serious problem due to the large amount of energy consumed by the heating, ventilation, and air conditioning systems of buildings every day. According to [2], about 40% of the world's total energy is used for buildings, and more than 30% of the primary energy consumed in buildings goes to heating and air conditioning. Therefore, some energy-saving and environment-friendly techniques have been investigated in recent years. Thermal energy storage techniques used in buildings to decrease energy consumption are considered an effective approach [3,4]. Thermal energy storage can be divided into sensible heat storage, latent heat storage, and chemical energy storage. Among them, latent heat storage has received considerable attention in comparison with the other two methods, attributed to the obvious advantages of latent heat storage using phase change material (PCM), such as high energy storage density and a narrow operating temperature range [5,6]. Furthermore, PCM can store and release a large amount of latent heat during the processes of melting and solidifying within its narrow phase transition range [7,8].
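To make the energy-density advantage concrete, the following back-of-the-envelope Python sketch compares sensible storage in concrete with sensible-plus-latent storage in a paraffin-type PCM over a narrow temperature swing; all property values are assumed typical figures, not data from this paper.

    # Sensible vs latent heat storage per kilogram (illustrative values only).
    cp_concrete = 920.0   # J/(kg K), typical concrete (assumed)
    cp_pcm = 2000.0       # J/(kg K), typical paraffin (assumed)
    latent_pcm = 180e3    # J/kg, typical paraffin latent heat (assumed)
    dT = 3.0              # K, a narrow operating temperature swing

    q_concrete = cp_concrete * dT       # sensible storage only
    q_pcm = cp_pcm * dT + latent_pcm    # sensible + latent across melting
    print(f"concrete: {q_concrete / 1e3:.1f} kJ/kg, "
          f"PCM: {q_pcm / 1e3:.1f} kJ/kg ({q_pcm / q_concrete:.0f}x)")

For this swing the PCM stores roughly two orders of magnitude more heat per kilogram, which is the point behind the "high energy storage density" claim.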
In the past two decades, research on the application of PCM in building energy conservation can be divided into two categories. One is combining the PCM with the active air-conditioning system, where the PCM system serves as the heat source or cold source of the air conditioning system to increase the refrigerating efficiency or the heating efficiency [9,10]. For instance, Tyagi et al. [11] designed and experimentally studied the thermal performance of a PCM-based building thermal management system for cool energy storage. The other is its usage in passive heat insulation and preservation systems, comprising the combination of PCM and building materials to obtain novel energy-conserving building materials, and directly inserting shape-stabilized PCM into the enclosure structure of the building [12][13][14]. The first category of PCM application needs to be considered early in the design process of the air conditioning system and also needs later maintenance. The second, however, is concentrated on the design of new building materials, such as coating materials with a phase change function, insulation wallboards, and bricks with PCM encapsulated in them, and is a way to enhance the ability of the building itself to adapt to the climate. Therefore, it has attracted wide attention from scholars. Li et al. [15] compared the thermal performance of lightweight buildings with and without a PCM layer attached to the inside wallboard and found that the energy consumption needed to maintain a comfortable temperature can be reduced by 40-70%. Ramakrishnan et al. [16] numerically investigated the thermal control effect of building fabrics integrated with PCM under extreme heatwave periods. The results show that indoor heat stress risks can be reduced effectively without the operation of an air conditioner. Meanwhile, Thiele et al. [17] constructed a numerical model based on a modified admittance model to evaluate the thermal performance of building envelopes integrated with PCM, whose results agree well with those of existing finite element simulations. Zhu et al. [18,19] put forward a new structure of wallboards with double shape-stabilized PCM layers, proposed a related simplified dynamic model, and then used it to analyze the energy performance of an office building under different conditions. However, that model and the related analysis are concentrated on the whole system and overall efficiency. The heat transfer process and the influence of the PCM parameters on the heat transfer law are also quite important for the actual design and need to be further understood.
In this paper, in order to further explore the heat transfer law and thermal performance of the double PCM layer wallboard put forward by Zhu et al. [18] under different conditions, the wallboard model was established and a comprehensive parametric numerical investigation was carried out. The variation laws of the temperature rise at the inner side of the wall and of the heat flux transferred under different phase transition temperatures, thermal conductivities, and arrangements of PCMs are presented and discussed in detail in the following sections.
Model and Methodology
2.1. Model Description. Figure 1 shows the schematics of the resident house and an enlargement of the analysis region with the structured mesh. The performance of the wall determines the economic and energy-saving efficiency of the whole building to a great extent. Therefore, the wallboard is the key research object. The structure of the wallboard is presented clearly in the enlarged view. As shown in Figure 2, three cases of wallboard with different layer combinations were designed and compared. Case 1 represents the conventional wallboard with insulation material only. The outside insulation layer is replaced by a PCM layer in case 2, and both insulation layers are replaced by PCM layers in case 3. The dimensions and thermo-physical properties of these two layers and the concrete are given in Table 1. The computational fluid dynamics package FLUENT 14.0 was utilized. The mesh density and the computational parameters, such as the time step and the number of iterations per time step, were evaluated by checking the dependency of the total heat transfer flux on models with various mesh quantities and different computational parameters. The time step was set to 10 seconds, and the number of iterations per time step was 50. The pressure-based, first-order implicit algorithm for this unsteady problem was adopted. Some assumptions were made in the following simulation work. The specific heat, the phase transition temperature, and the thermal conductivity of the PCMs were constant. Also, the PCMs utilized were isotropic and homogeneous. The volume change of the PCM during phase transition was ignored.
The energy conservation equation for the concrete region can be written as

    ρ_c c_pc ∂T/∂t = λ_c ∇²T,

where ρ_c, c_pc, and λ_c are the density, heat capacity, and thermal conductivity of the concrete, respectively. The enthalpy-porosity model was adopted to model the phase change process in this work. The liquid fraction is computed at each iteration, based on an enthalpy balance. The energy equation of the PCM can be expressed as follows [20]:

    H = H_0 + ΔH,    ΔH = β L,

where H represents the total enthalpy of the PCM, H_0 is the sensible enthalpy, ΔH is the latent-heat contribution with L the latent heat of fusion, β is the liquid fraction (β = 0 below and β = 1 above the melting range), and T_m is the phase transition temperature.
The external boundary condition is based on the total solar radiation of the Xuzhou area, which can be seen in Figure 3 [21]. The incident flux follows a half-sine day/night profile:

    q″(t) = q″_max sin(π(t − 6)/12),  6:00 am ≤ t ≤ 6:00 pm;
    q″(t) = 0,                        6:00 pm < t ≤ 6:00 am,    (3)

where t is the local time in hours. The average solar radiation q″_ave of the Xuzhou area in June is about 385.8 W/m², and the maximum daily solar radiation q″_max can be obtained from it through (4). Boundary conditions at the top and bottom of the model are thermal insulation. The solar radiation reaches the left side and heats the wall, while convection heat transfer simultaneously cools it. However, a heat flux and radiation cannot both appear in the boundary condition at the same time for the numerical solution to proceed. Therefore, after simplification, the left boundary condition is a time-dependent temperature boundary. The right-side boundary conditions are mixed: the right side of the wall heats the indoor air by radiation and convection simultaneously. A parametric study was undertaken to investigate the influence of the phase transition temperature of the PCM, the thermal conductivity of the PCM, and the thermal control effect of the double PCM layers compared to the other two cases.
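As a minimal, self-contained illustration of the enthalpy-method setup described above, the following Python sketch solves a 1-D explicit finite-difference model of a PCM-plus-concrete wall driven by a half-sine solar flux outside and convection to the room inside. It is only a sketch under assumed property values and a simplified (melting-only) liquid-fraction update; it is not the paper's FLUENT 14.0 model, and the geometry, material data, and indoor film coefficient are all illustrative assumptions.

    import numpy as np

    # 1-D explicit enthalpy-method sketch of a PCM + concrete wall.
    dx, nx = 0.01, 22                      # 2 cm PCM + 20 cm concrete
    xc = (np.arange(nx) + 0.5) * dx        # cell centres
    pcm = xc < 0.02                        # PCM occupies the first 2 cm

    rho = np.where(pcm, 900.0, 2300.0)     # density, kg/m^3 (assumed)
    cp = np.where(pcm, 2000.0, 920.0)      # heat capacity, J/(kg K) (assumed)
    k = np.where(pcm, 0.2, 1.6)            # conductivity, W/(m K) (assumed)
    Lf, Tm = 180e3, 300.15                 # latent heat (J/kg), transition T (K)

    T = np.full(nx, 299.0)                 # initial wall temperature, K
    beta = np.zeros(nx)                    # liquid fraction per cell
    h_in, T_room = 8.0, 299.0              # indoor film coefficient, room temp
    q_ave = 385.8                          # June average radiation, W/m^2 (text)
    q_max = np.pi * q_ave                  # half-sine peak consistent with q_ave

    dt = 10.0                              # time step, s (as in the paper)
    for step in range(int(12 * 3600 / dt)):           # 6:00 am -> 6:00 pm
        t_h = step * dt / 3600.0
        q_sun = q_max * np.sin(np.pi * t_h / 12.0)    # eq. (3)-type profile
        lam = 0.5 * (k[1:] + k[:-1])                  # face conductivities
        flux = -lam * np.diff(T) / dx                 # face heat fluxes, W/m^2
        dT = np.zeros(nx)
        dT[1:-1] = (flux[:-1] - flux[1:]) * dt / (rho[1:-1] * cp[1:-1] * dx)
        dT[0] = (q_sun - flux[0]) * dt / (rho[0] * cp[0] * dx)
        dT[-1] = (flux[-1] - h_in * (T[-1] - T_room)) * dt / (rho[-1] * cp[-1] * dx)
        T += dT
        # Enthalpy-porosity step (melting only, for brevity): any sensible
        # overshoot above Tm in PCM cells is converted into latent heat,
        # which pins the PCM temperature near Tm while it melts.
        m = pcm & (T > Tm) & (beta < 1.0)
        db = np.minimum((T[m] - Tm) * cp[m] / Lf, 1.0 - beta[m])
        beta[m] += db
        T[m] -= db * Lf / cp[m]

    print("inside-wall temperature at 6:00 pm: %.2f K" % T[-1])
    print("mean PCM liquid fraction: %.2f" % beta[pcm].mean())

Swapping the PCM cells' properties for insulation values reproduces a case-1-like configuration, and adding a second PCM mask on the indoor side mimics case 3, so the three arrangements of Figure 2 can be compared within one script.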
Result and Discussion
The aim of this work was to investigate a special kind of wallboard with two PCM layers attached to both sides of the concrete wall for heat insulation and energy saving. The temperature variation of the outside wall with time in summer is exhibited in Figure 4. As the figure shows, the temperature of the outside wall increases to 325 K almost linearly, with a relatively high rate of rise, before 10:00 am, and the rate of temperature rise gradually decreases from 10:00 am to 02:00 pm. The highest temperature of the outside wall in one solar day reaches about 338 K at 02:00 pm. After 02:00 pm, the temperature of the outside wall gradually decreases to 330 K (at 06:00 pm). This variation trend of the outside wall follows the solar radiation input. Figure 5 presents the temperature variation and the heat flux of the inside wall for the different cases. As shown in Figure 5(a), the temperature for case 1 and case 2 changes little before 10:00 am, remaining at 299 K; after 10:00 am, the temperature for case 1 gradually increases to 300 K, while that for case 2 shows only little variation until 02:00 pm. After 02:00 pm, the temperature rise of case 1 accelerates significantly, and case 2 starts a temperature rise with a similar tendency; the temperatures of case 1 and case 2 reach, respectively, 302 K and 300 K. When the wall is without any heat insulation layer or PCM layer, the temperature of the inside wall rises promptly after 09:00 am and finally reaches 312 K at 06:00 pm. In summary, the insulation layer and the PCM layer can both greatly retard the speed of temperature diffusion, and the inside wall temperature can be decreased by more than 10 K. Case 2 has a certain advantage over case 1, which is attributed to the phase change endothermic behavior of the PCM layer. When the outside insulation layer is replaced by the PCM layer, the temperature of the inside wall can be reduced by about 1.5 K further. As shown in Figure 5(b), the heat flux is positive in the morning, indicating that the wall absorbs heat from the air in the room, because the temperature of the external air is set higher than the initial temperature of the wall in the simulation. The heat flux of case 2 turns negative, with heat passing completely through the wall, only at about 04:00 pm. Comparing area C and area A in Figure 5(b), the heat transferred into the indoor space can be reduced by about 98% by the combined function of the PCM layer and insulation layer. It can also be deduced, by comparing area B and area C, that about 83% of the heat transferred from the outside is absorbed by the PCM layer.
Figure 6 shows the temperature contours of the wallboard at 06:00 pm for the three different cases. The temperature distribution exhibits a uniform gradient in the plain concrete wall case, in which both layers are replaced by concrete. Comparing case 1 and case 2, the overall average temperature of the concrete wall in case 2 is clearly lower than that in case 1, because the PCM can absorb a large amount of latent heat within a lower and stable temperature region; the heat transfer driving force and temperature difference are thus relatively weak, and less heat crosses the border into the concrete wall.
3.1. The Effect of Phase Transition Temperature of the PCM Layer
Figure 7 presents the temperature variation and heat flux of the inside wall for case 2 under different phase transition temperatures (from 299.15 K to 302.15 K). As shown in Figure 7(a), the temperature under the different phase transition temperatures is identical before 10:00 am and after 04:00 pm. In the middle range of the solar day, the temperature difference under different phase transition temperatures first increases and then decreases; after 02:00 pm, the difference decreases. The temperature of the inside wall under the different phase transition temperatures is identical again at 04:00 pm, and the final identical temperature is about 301 K, an increase of about 2.7 K. As shown in Figure 7(b), the heat flux of the inside wall is likewise nearly identical under the different phase transition temperatures.
3.2. The Effect of Thermal Conductivity of PCM
Figure 8 exhibits the temperature variation and heat flux of the inside wall for case 2 under different thermal conductivities of the PCM. As shown in Figure 8(a), the temperature of the inside wall changes little before 10:00 am, remaining at 299 K, regardless of the thermal conductivity of the PCM. After 10:00 am, the temperature of the inside wall gradually increases, and the final temperatures are 303.2 K, 305.4 K, and 307.4 K for PCM thermal conductivities of 0.4 W/(m·K), 0.8 W/(m·K), and 2 W/(m·K), respectively. When the thermal conductivity of the PCM is 0.2 W/(m·K), the rising trend of the inside wall temperature becomes noticeable only after 02:00 pm, and the final temperature rises to 301 K. However, the temperature of the inside wall changes little during the whole solar day when the thermal conductivity of the PCM is as low as 0.1 W/(m·K). It is obvious that decreasing the thermal conductivity of the PCM layer is beneficial to heat insulation and energy saving: less heat can be transferred to the indoor space. As shown in Figure 8(b), the heat flux is almost stable at around 5 W/m² and flows toward the outside before 10:00 am. After 10:00 am, the heat flux gradually reverses its direction and reaches about 22.5 W/m², 37.9 W/m², and 52.3 W/m² at 06:00 pm for PCM thermal conductivities of 0.4 W/(m·K), 0.8 W/(m·K), and 2 W/(m·K), respectively. Figure 9 shows the phase change ratio of the PCM layer for case 2 under different thermal conductivities. It can be found that when the thermal conductivity is 2 W/(m·K), the PCM melts entirely in almost 7200 s, while it takes 4.5 times longer to melt the PCM layer with a thermal conductivity of 0.1 W/(m·K).
3.3. The Effect of Double PCM Layers. In order to further increase the energy-saving capacity of the wallboard, the right insulation layer is also replaced by a PCM layer, and this PCM has the same thermophysical properties as the left layer. Figure 10 presents the temperature variation and heat flux of the inside wall for case 3. It can be observed that the temperature curve for case 3 first increases slightly and then remains stable for almost the whole day. The temperature stabilizes at about 299.15 K. The wallboard with double PCM layers shows much better thermal performance compared to the single PCM layer case, and the temperature of the inside wall can be reduced by 2 K further. The inner wall can almost exclude the interference from the external environment. As shown in Figure 10(b), the heat flux for case 3 is positive during the whole daytime. The heat outside cannot be transferred to the indoor space, which is the reason why the temperature of the inside wall can remain stable. Figure 11 illustrates the phase change ratio of each PCM layer for case 3. The phase change ratio of PCM 1 is on the rise before 12:00 pm, but that of PCM 2 remains constant until 02:00 pm. It can be seen that PCM 1 takes just 21600 s to melt totally, whereas the phase change of PCM 2 begins much later and proceeds more slowly.
Conclusion
In order to further understand the heat transfer law and thermal performance of the double PCM layer wallboard under different conditions, a comprehensive parametric numerical investigation was carried out. The variation laws of the temperature rise at the inner side of the wall and of the heat flux transferred under different phase transition temperatures, thermal conductivities, and arrangements of PCMs were presented and discussed in detail. The main conclusions can be summarized as follows: (1) The insulation layer and the PCM layer can both greatly retard the speed of temperature diffusion, and the inside wall temperature can be decreased by more than 10 K. About 83% of the heat transferred from the outside is absorbed by the PCM layer in case 2.
(2) Reducing the phase transition temperature of the PCM layer can decrease the inside wall temperature to a certain degree in the period of high temperature.
Increasing the thermal conductivity of the PCM layer is detrimental to heat insulation and energy saving, since more heat is then transferred to the indoor space.
(3) The double PCM layer configuration shows much better performance compared to the single PCM layer case, and the temperature of the inside wall can be reduced by 2 K further.
Figure 1: Schematics of the resident house and enlargement of the analysis region with structure mesh.
Figure 2: Schematics of the building wallboard with/without PCM.
Figure 3: Total solar radiation of the Xuzhou area in different months.
Figure 4: Temperature variation of the outside wall at different moments in summer.
Figure 5: Temperature variation and heat flux of the inside wall under different cases.
Figure 6: Temperature contour of the wallboard at 06:00 pm under different cases.
Figure 7: Temperature variation and heat flux of the inside wall for case 2 under different phase transition temperatures.
Figure 8: Temperature variation and heat flux of the inside wall for case 2 under different thermal conductivities of PCM.
Figure 9: Phase change ratio of the PCM layer for case 2 under different thermal conductivities.
Figure 10: Temperature variation and heat flux of the inside wall for case 3.
Figure 11: Phase change ratio of each PCM layer for case 3.
| 4,254.6 | 2018-05-27T00:00:00.000 | ["Engineering", "Physics"] |
ELLIPTIC BOUNDARY VALUE PROBLEMS IN SPACES OF CONTINUOUS FUNCTIONS
In these notes we consider second order linear elliptic boundary value problems in the framework of different spaces of continuous functions. We appeal to a general formulation which contains some interesting particular cases, for instance a new class of functional spaces, called here Hölog spaces and denoted by the symbol C^{0,λ}_α(Ω), 0 ≤ λ < 1, α ∈ R. Natural inclusions hold between these spaces, as described below.
1. Introduction and main results. To fix ideas, simply consider the Poisson equation −Δu = f under the homogeneous boundary condition u = 0. It is well known that f ∈ C(Ω) does not guarantee ∇²u ∈ C(Ω). This led us to look for "minimal assumptions" on f which guarantee continuity of the second order derivatives of u. By assuming that f belongs to a suitable functional space C*(Ω), characterized by a Dini continuity condition, continuity of ∇²u up to the boundary follows, but without any further interesting additional property; see Theorem 2.1 below. Roughly speaking, we say here that ∇²u "totally forgets" its C*(Ω) origin. On the contrary, a full regularity result holds for data in Hölder spaces C^{0,λ}(Ω), 0 < λ < 1, since f and ∇²u have precisely the same regularity. In this situation we say that ∇²u "fully remembers" its C^{0,λ}(Ω) origin.
This regularity result is optimal in the sense that ∇²u ∈ D^{0,β}(Ω), for β > α − 1, is false in general. Actually, optimality is proved in a sharper form, quite significant when, as for Log spaces, full regularity does not occur.
In these notes we set distinct situations in a unique framework by considering a more general family of data spaces D_ω(Ω) satisfying the inclusions C^{0,1}(Ω) ⊂ D_ω(Ω) ⊂ C(Ω). Hölder spaces and Log spaces turn out to be particular cases. Furthermore, we introduce a new family of functional spaces, called here Hölog spaces and denoted by the symbol C^{0,λ}_α(Ω), for which ∇²u and f enjoy the same regularity (full regularity) if λ > 0. For fixed λ, the family C^{0,λ}_α(Ω) is a refinement of the single classical Hölder space C^{0,λ}(Ω) = C^{0,λ}_0(Ω). For λ = 0, C^{0,0}_α(Ω) = D^{0,α}(Ω) is a Log space. Proofs will be shown in a forthcoming paper. Another interesting research field is the extension of Theorem 2.1 to data spaces larger than C*(Ω). In fact, there may be other significant functional spaces, possibly larger than C*(Ω), satisfying the required properties. An attempt in this direction was made in the preparatory manuscript of reference [1], where a functional space B*(Ω) was defined and studied. For some information and results, see section 7.
2. Some preliminaries. In the following, Ω is an open, bounded, connected set in R^n, locally situated on one side of its boundary Γ. The boundary Γ is of class C^{2,λ}, for some λ > 0. By C(Ω) we denote the Banach space of all real continuous functions f defined in Ω. The "sup" norm is denoted by ‖f‖. We also appeal to the classical spaces C^k(Ω), endowed with their usual norms ‖u‖_k, and to the Hölder spaces C^{0,λ}(Ω), endowed with the standard semi-norms and norms. C^{0,1}(Ω) is sometimes denoted by Lip(Ω), the space of Lipschitz continuous functions in Ω.
Symbols c and C denote generic positive constants. We may use the same symbol to denote different constants.
In these notes we consider linear elliptic boundary value problems with data and solutions belonging to suitable spaces of continuous functions, which play the main role here. For simplicity, consider the very basic case of constant coefficient, second order elliptic operators under the homogeneous Dirichlet boundary condition; this is problem (2) below. The main lines of the proofs apply to more general situations, at the cost of additional technicalities. The starting point of these notes was reference [1], where the main goal was to look for minimal assumptions on the data which guarantee classical solutions to the 2-D Euler equations in a bounded domain. For a brief, clear exposition of the links between the Euler equations and the problems treated in these notes, the reader is invited to have a look at reference [3]. The study of the above problem led to the consideration of a Banach space of Dini type, denoted by the symbol C*(Ω). Let us recall here the definition and some properties of C*(Ω) (see [1] and, for complete proofs, [2]).
Some of the main properties of this space are recalled in [1] and [2]. The following result holds (see Theorem 4.5 in [1]).
Theorem 2.1. Let f ∈ C*(Ω) and let u be the solution of problem (2). Then u ∈ C²(Ω), and a corresponding estimate holds. The regularity results proved for data in C*(Ω), like Theorem 2.1, led us to look for data spaces, between Hölder spaces and C*(Ω), for which solutions "remember", at least partially, their origin; see section 1. A significant example of a functional space of "intermediate type" is based on the well known elementary formula (7) below, valid for 0 < α < +∞ (for α = 1, the right hand side should be replaced by −log(−log r)). We assume that 0 < r < 1. Equation (7) shows that the C*(Ω) semi-norm (4) is finite if the oscillation of f is bounded by (−log r)^{−α} for some α > 1. This led us to define, for each fixed α > 0, a semi-norm and a related functional space D^{0,α}(Ω), as follows.
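The display formulas dropped from this passage can plausibly be reconstructed from the surrounding definitions as follows; this is a reconstruction under stated assumptions, not a verbatim restoration of equation (7) or of the semi-norm:

    % elementary integral behind (7), for 0 < r < 1:
    \int_0^r \frac{dt}{t\,(-\log t)^{\alpha}}
      \;=\; \frac{1}{\alpha-1}\,(-\log r)^{-(\alpha-1)}, \qquad \alpha > 1
    % (for \alpha = 1 the antiderivative is -\log(-\log r)),
    % and the Log semi-norm, obtained from the Hölder one by replacing
    % |x-y|^{\lambda} with (-\log |x-y|)^{-\alpha}:
    [f]_{0,\alpha} \;=\; \sup_{0<|x-y|<R}
      |f(x)-f(y)|\,\bigl(-\log|x-y|\bigr)^{\alpha}.

The integral converges near the origin precisely for α > 1, which is why the Dini-type semi-norm is finite under the stated bound on the oscillation.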
Note that we have merely replaced, in the definition of Hölder spaces, the quantity |x − y|^λ by (−log |x − y|)^{−α}, and we allow α to be arbitrarily large. This similitude led us to call these spaces, in reference [5], H-log spaces. Below we call these spaces simply Log spaces. The spaces D^{0,α}(Ω) are Banach spaces. Furthermore, natural (compact) embeddings hold between them.
In reference [5] we claimed, and left the proof to the reader, that C^∞(Ω) is dense in D^{0,α}(Ω). Actually, as shown below in theorem 4, this result is false.
In reference [5] we proved the following result.
The above result is optimal.
Concerning the optimality claimed above, it is worth noting that it is not confined to the particular family of spaces under consideration, but is something stronger. Let us illustrate this distinction. Let α > 1 be given, and let u be the solution of problem (2), where f ∈ D^{0,α}(Ω). The theorem claims that ∇²u ∈ D^{0,α−1}(Ω). Optimality restricted to the Log spaces framework means that, given β > α − 1, there is at least one datum f as above for which ∇²u does not belong to D^{0,β}(Ω). This situation does not exclude that (for instance, and to fix ideas) for all f ∈ D^{0,α}(Ω) the oscillations ω(r) of ∇²u satisfy an estimate of the form ω(r) ≤ C(−log r)^{−β}, for all β > α − 1.
Our optimality proof, reported below in section 6, excludes the above possibility. This fact is significant in all cases in which full regularity is not reached, as in the above example. In fact, full regularity implies the above sharp optimality.
In section 6 we prove the sharp optimality result, as an opportunity to show a proof in these notes.
3. The spaces D_ω(Ω). Our next aim has been to extend Theorem 2.3 to more general data spaces, denoted here by the symbol D_ω(Ω). These functional spaces satisfy the inclusions C^{0,1}(Ω) ⊂ D_ω(Ω) ⊂ C(Ω). The basic results proved for data in D^{0,α}(Ω) and in C^{0,λ}(Ω) become particular cases. Clearly, specific proofs in particular cases could be more stringent (dependence of constants, for instance). We start by defining these spaces and showing their main properties. Consider real, continuous, non-decreasing functions ω(r), defined for 0 ≤ r < R, with ω(0) = 0 and ω(r) > 0 for r > 0. We call these functions oscillation functions.
We set [f]_ω = sup_{0<|x−y|<R} |f(x) − f(y)| / ω(|x − y|). Further, we define the linear space D_ω(Ω) of all f ∈ C(Ω) such that [f]_ω < ∞. One easily shows that [f]_ω is a semi-norm on D_ω(Ω). We define a norm by setting ‖f‖_ω = ‖f‖ + [f]_ω. Two norms, with distinct values of the parameter R, are equivalent. Next we establish some useful properties of the above functional spaces.
If c ω₀(r) ≤ ω(r) ≤ C ω₀(r) for r in some neighborhood of the origin, then D_ω(Ω) = D_{ω₀}(Ω), with equivalent norms.
4. Spaces D_ω̄(Ω) and regularity. We start by putting each oscillation function ω(r), satisfying the Dini-type assumption (20) below (integrability of ω(t)/t near the origin, with a constant C_R), in correspondence with a unique, related oscillation function ω̄(r). Hence, to a functional space D_ω(Ω) there corresponds a well defined functional space D_ω̄(Ω). Note that assumption (20) is equivalent to the inclusion D_ω(Ω) ⊂ C*(Ω).
Define ω̄(r) by setting ω̄(r) = ∫₀^r ω(t) t^{-1} dt, for 0 < r ≤ R, and ω̄(0) = 0. Obviously, ω̄ satisfies all the properties described in section 3 for generic oscillation functions. In particular, the Banach spaces D_ω̄(Ω) turn out to be well defined.
We extend Theorem 2.3 to data in D_ω(Ω) spaces. For clarity, and for the reader's convenience, we impose simple conditions on the oscillation functions ω(r), which hold in the more interesting cases. Here we do not discuss more general assumptions.
Consider the linear elliptic boundary value problem (2). We have excluded, in advance, data spaces whose elements are merely characterized by boundedness or continuity of f, since these limiting cases have been largely investigated in the past. Hence we imposed suitable limitations on the data spaces D_ω(Ω). Exclusion of Lip(Ω) means that ω(r) does not verify ω(r) ≤ c r for any positive constant c. Hence lim sup ω(r)/r = +∞ as r → 0. We simplify by assuming that lim_{r→0} ω(r)/r = +∞. In particular, the graph of ω(r) is tangent to the vertical axis at the origin (as for Hölder and Log spaces). This picture also shows that concavity of the graph is a quite natural assumption. Concavity implies that left and right derivatives are well defined for r > 0. By also taking into account that ω(r) is non-decreasing, we realize that pointwise differentiability of ω(r), for r > 0, is not a particularly restrictive assumption. This claim is reinforced by the equivalence result for norms, under condition (18), which allows regularization of oscillation functions ω(r) while staying inside the same original functional space D_ω(Ω). Summarizing, differentiability for r > 0 and concavity, both in a neighborhood of the origin, are natural assumptions here. In the sequel, "differentiability" and "concavity" have this localized meaning. Furthermore, if ω(r) is concave, not flat, and differentiable for r > 0, then necessarily ω(r) ≥ r ω′(r) for all r > 0. This led us to the condition lim_{r→0} ω(r)/(r ω′(r)) = C₁, where C₁ = +∞ is admissible. Furthermore, "limit" could be replaced by "lower limit". The significance of assumption (26) is reinforced by the particular situation in the Lipschitz, Hölder, and Log cases, in which the limit exists and is given by, respectively, 1, 1/λ, and +∞. As expected, the Lipschitz case stays outside the admissible range. Note that, basically, the larger the space, the larger the limit.
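As a worked check of the three reference values just quoted (a routine computation, added here for the reader's convenience):

    \frac{\omega(r)}{r\,\omega'(r)} =
    \begin{cases}
      1, & \omega(r) = r \quad \text{(Lipschitz)},\\[2pt]
      1/\lambda, & \omega(r) = r^{\lambda} \quad \text{(H\"older)},\\[2pt]
      (-\log r)/\alpha \;\to\; +\infty, & \omega(r) = (-\log r)^{-\alpha} \quad \text{(Log)},
    \end{cases}

since in the Log case one has r\,\omega'(r) = \alpha\,(-\log r)^{-\alpha-1}.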
On the other hand, since we look for classical solutions, we have to impose assumption (20). Note that, due to a possible loss of regularity, it could happen that a "regularity space" D_ω̄(Ω), necessarily contained in C(Ω), is not contained in C*(Ω).
Clearly, we must have D_ω(Ω) ⊂ D_ω̄(Ω). By appealing to a de l'Hôpital rule one shows that lim_{r→0} ω̄(r)/ω(r) = C₁. Note that if 0 < C₁ < ∞, proposition 1 shows equivalence of norms.
The following result holds. Theorem 4.1. Let f ∈ D_ω(Ω), with ω as above, and let u be the solution of problem (2). Then ∇²u ∈ D_ω̄(Ω), where ω̄(r) is defined by (21). A corresponding estimate holds, with some positive constant C. If in equation (26) the constant C₁ is finite, then full regularity holds, namely D_ω̄(Ω) = D_ω(Ω). The above regularity result is optimal, in the sharp sense (see below).
The above theorem holds under more general assumptions. The proof of theorem 4.1 follows that developed in Hölder spaces in [6], part II, section 5.
For previous related results we refer to [7] and [10]. The author is grateful to Piero Marcati who, after an exposition of our results, found the above related references.
Concerning other references, not related to our regularity results but merely to Log spaces (mostly for n = 1, or α = 1), the author is grateful to Francesca Crispo for calling our attention to the treatise [8], to which the reader is referred. In particular, as claimed in the introduction of this volume, the space D^{0,1}(Ω) was considered in reference [11]. See also definition 2.2 in reference [8].
5. Hölog spaces C^{λ,α}(Ω) and full regularity. Assume that, for some λ > 0, ω(r) behaves like r^λ in a neighborhood of the origin. Then there is a k > 0 such that ω̄(r) ≤ k ω(r). This fact could suggest that Hölder spaces are the unique full regularity class inside our framework. However, full regularity is also enjoyed by other spaces. The following is a quite challenging example. Consider oscillation functions of the form ω(r) = r^λ (−log r)^{−α}, where 0 < λ < 1 and α ∈ R. For λ = 0 and α > 0 we re-obtain D^{0,α}(Ω), and for α = 0 and λ > 0 we re-obtain C^{0,λ}(Ω). Compact inclusions hold between these spaces, where ω(r) is given by (31). The following result follows from Theorem 4.1.
Theorem 5.1. Let f ∈ C^{λ,α}(Ω) for some λ ∈ (0, 1) and some α ∈ R. Let u be the solution of problem (2). Then ∇²u ∈ C^{λ,α}(Ω), and a corresponding estimate holds.
6. On the optimality of the regularity result. In this section we discuss and prove the sharp optimality of the regularity result claimed in Theorem 4.1.
The "singular point" is the origin. It would be more elegant summation for all indexes i = j however the conclusion is the same. To fix ideas, assume that n = 3 .
The point here is that, due to the term x₁x₂ in (33), the second order derivative ∂₁∂₂u(x) leaves unchanged the "bad term" ω(|x|). This does not occur for the square derivatives ∂²ᵢu(x), hence for Δu(x). Straightforward calculations show that in I(0, 1/4) one has the expressions (34), where f(0) = 0, together with a companion formula for x ≠ 0. The functions f(x) and ∂₁∂₂u(x) are continuous, and vanish for x = 0. In the above equations, the specific expressions of the coefficients of ω(|x|) and |x| ω′(|x|) are essentially secondary (up to some remarks). The point is that they are homogeneous of degree zero; hence they have no effect on the minimal regularity. It readily follows from the above expressions that f ∈ D_ω(I) and ∂₁∂₂u ∈ D_ω(I). Due to the explicit term ω(|x|), the regularity claimed for the mixed second order derivative is optimal. For instance, the presence of the term ω(|x|) in (34) does not allow the estimate (13), since in this example ω(r) = (−log r)^{−(α−1)}.
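To see concretely why the mixed derivative retains the bad term, one may take a counterexample of the form u(x) = x₁x₂ g(|x|), with g an ω-type function; the precise form of (33) is not displayed in the source, so this is an assumed representative. A direct computation gives:

    \partial_1 \partial_2 \bigl( x_1 x_2\, g(r) \bigr)
      \;=\; g(r)
      \;+\; \frac{x_1^2 + x_2^2}{r}\, g'(r)
      \;+\; \frac{x_1^2\, x_2^2}{r^2}\,\Bigl( g''(r) - \frac{g'(r)}{r} \Bigr),
    \qquad r = |x|,

so the term g(r) survives with coefficient one, while the remaining coefficients are homogeneous of degree zero, exactly as stated above.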
To conclude, note that, in accordance with the regularity result claimed in Theorem 4.1, the second and third terms on the right hand side of (34) cannot be less regular than ω(|x|).
Clearly, the above argument is fruitful only if a possible elimination of the term ω(|x|) in (34) by means of the other two terms is excluded; such a cancellation would make the counterexample fruitless. In particular, this elimination is not possible here, since the coefficients are positive.
Let us briefly present a more "compact" argument. Denote by H_k(x), k an integer, generic homogeneous functions of degree k, and recall the differentiation rules for homogeneous functions: ∂ᵢ H_k = H_{k−1}. Hence the square derivatives ∂²ᵢ produce terms in which the factor ω(|x|) is differentiated away. However, if i ≠ j, the term ω(|x|) is still present in the right hand side of (35).
7. On elliptic problems with more general data. Uniform boundedness of ∇²u. In the context of [1], Theorem 2.1 was marginal, so the proof, written in a still existing manuscript, remained unpublished. Actually, at that time, we had proved the above result for more general elliptic boundary value problems. The proof depends only on the behavior of the related Green's functions. Recently, by following the same ideas, we have shown the following result for the Stokes system (see Theorem 1.1 in [2]): the solution of the Stokes problem with data in C*(Ω) belongs to C²(Ω) × C¹(Ω); moreover, there is a constant c₀, depending only on Ω, such that a corresponding estimate holds.
The proof of the above theorem, as that of Theorem 2.1, is quite different from that of Theorem 4.1. Both are based on estimates for Green's functions like those shown in the classical treatise of Olga Ladyzhenskaya, see [9]; they may also be found in Solonnikov's paper [12]. In the manuscript quoted above we also tried to extend the result claimed in Theorem 2.1 to data belonging to functional spaces larger than C*(Ω). Together with C*(Ω), we considered a functional space B*(Ω), obtained by commuting the integral and sup operators in the right hand side of definition (4). For each f ∈ C(Ω), we defined a semi-norm [f]_{B*} and the related functional space, endowed with the norm ‖f‖_{B*} ≡ [f]_{B*} + ‖f‖. B*(Ω) is a Banach space. We have shown that the inclusion C*(Ω) ⊂ B*(Ω) is proper, by constructing strongly oscillating functions which belong to B*(Ω) but not to C*(Ω). This construction was recently published in reference [3], Proposition 1.7.1. Furthermore, we have shown that Theorem 2.1, and similar results, hold in a weaker form for data f ∈ B*(Ω): the first order derivatives of the solution u are Lipschitz continuous in Ω, and a corresponding estimate holds. The proof is published in reference [3], actually for data in a functional space D*(Ω) containing B*(Ω); see Theorem 1.3.1 in [3]. A similar extension holds for the Stokes problem, as shown in reference [4], Theorem 6.1, where we have proved that if f ∈ D*(Ω), then the solution (u, p) of problem (36) satisfies an analogous estimate.
| 4,594.8 | 2015-12-01T00:00:00.000 | ["Mathematics", "Computer Science"] |
INSIGHTS FROM EU POLICY FRAMEWORK IN ALIGNING SUSTAINABLE FINANCE FOR SUSTAINABLE DEVELOPMENT IN AFRICA AND ASIA
It is conspicuous that the mainstream financial system in the EU is being transformed into a sustainable financial system by a supra/national policy and institutional framework for meeting the goals of the SDGs and the targets of the Paris Agreement on climate change, together with the Nationally Determined Contributions. However, neither Botswana nor Sri Lanka has such a framework. Hence, the need of the hour is to evaluate the sustainable finance policies in Botswana and Sri Lanka, together with the EU, seeking insights from the EU's policy framework. Since sustainable finance is not a well-grown branch of the conventional mainstream financial system, the knowledge here is produced by social constructivism based on grounded theory, and the theory is inductively developed for achieving the purpose of the research. The study found, among other things, that incorporating existing policies into multiple ministries and affiliated institutions, together with the current industry-led policy initiatives to manage ESG risks, is not adequate. Hence, various insights are recommended for consideration by policymakers in formulating a national framework for mobilizing public and private capital to meet the goals of sustainability.
When investigating the reason behind this, the EU has been able to attract sustainable investments because of its systematic and gradual transition from the conventional mainstream financial system to a sustainable finance system developed on a legal, policy, and institutional regulatory framework (from now on referred to as a national policy framework). As a result, the EU has been able to mobilize not only public capital but also private capital for sustainable investment, from both institutional investors, such as pension funds, insurers, universities, foundations, banks, mutual funds, private equity funds, and hedge funds, and retail investors, i.e., individuals investing directly or through professionally managed funds in banks or other investment platforms. Their contributions in 2018 were 75% and 25%, respectively (GSIA, 2019). The contribution of both types of investors is crucial, because it has been estimated that, to keep the average global temperature rise to 2°C, the energy supply and energy-efficiency investments needed for the decarbonization of the economies over the next 20 years would be 50 trillion US$, which is roughly equal to the GDP of the entire OECD countries (Kaminkar and Youngman, 2015).
Problem Statement
Even though there appear to be corporate-level/industry-level policies for ESG risks in Botswana and Sri Lanka, for example, the King IV Report: Code of Corporate Governance, Institute of Directors (2016), the Global Reporting Initiative (GRI), the International Integrated Reporting Council (IIRC), and the Code of Best Practice on Corporate Governance of Sri Lanka (ICASL, 2017), the sustainable finance market in Botswana and Sri Lanka is less developed and has not been able to attract sustainable investments from either type of investor, institutional or retail. In these circumstances, the objective of this research is to investigate the policy gap in Botswana and Sri Lanka in comparison with the European Union. The comparative study of sustainable finance policies enables ascertaining whether the current state of the national policy framework related to principles of sustainable finance in Botswana and Sri Lanka is satisfactory or not. If the current national policy framework is not adequate, the objective of this investigation is to identify insights from the EU policy framework. The insights drawn from the inquiry can bridge the gap in the conventional finance systems in Botswana and Sri Lanka. Therefore, the purpose of the research is to recommend factors to be considered in formulating a national framework for the transition of the mainstream financial system to a sustainable financial system.
To achieve the objectives and the purpose of the research, the following research questions (RQ) guide the study: RQ 01 - Why is the sustainable finance system in Botswana and Sri Lanka less developed? RQ 02 - What are the insights of the EU (supra)national policy framework of the sustainable finance system? RQ 03 - What factors could be recommended for a national policy framework to transform the mainstream conventional finance system into a sustainable financial system in Botswana and Sri Lanka?
Significance of the Study and Limitation
The findings of the study are significant for policymakers in making a sound national policy framework that enables transforming the current conventional mainstream financial system into a sustainable financial system to achieve the goals of the SDGs and the Paris climate agreement together with the Nationally Determined Contributions. Further, these findings apply to many other African and Asian countries with similar characteristics in their conventional finance systems, which are not conducive to achieving the goals of sustainable development and the Paris Agreement together with the Nationally Determined Contributions.
However, the scope of this research is limited to the four pillars of the EU policy framework, which aims to transform the conventional mainstream finance system into a sustainable finance system. The EU intends to top up this four-pillar policy with further additions to its existing policy framework later, which is not within the scope of this study.
Following the introduction above, the remainder of the research paper is organized in the following manner. The methodology is explained in the next section. After that, the literature review is provided. The research findings are summarized before the discussion. After the discussion, the conclusions and recommendations, together with policy implications, are stated.
METHODOLOGY
For the purpose mentioned above, the underlying research philosophy of this study is interpretivism, because a national policy framework for the inclusion of sustainable finance principles into the mainstream financial system of a country is a new phenomenon that appeared during the last decade. Since sustainable finance is not a well-grown branch of sustainable development, the knowledge is produced by social constructivism, because the knowledge is subjective rather than objective. Hence, value questions of a qualitative nature are used to collect data through semi-structured interviews with financial institutions and through analysis of various policy documents. Data so collected are qualitatively analyzed as they are gathered, until the saturation point is reached. The qualitative study is based on grounded theory, with constant comparison within the same source and across different sources for triangulation, namely interviews and document analysis: a multi-method qualitative methodology underpinned by interpretivism. Mills et al. (2006) argue that constructivist grounded theorists design their research for the mutual construction of meanings and the meaningful reconstruction of findings on grounded theory. The qualitative process began with identifying codes; in vivo codes were used in this respect. Codes were later categorized into themes. After that, concepts were developed by identifying relationships among themes. Finally, the theory was inductively constructed through the constructivist approach, subject to reconstruction based on grounded theory. For example, the responses of the interviewees that the ESG framework is not effectively applied in the absence of a national policy framework were reconstructed, by constant comparison, into the proposition that a national framework for sustainable development must be established first, followed by an ESG policy framework. In the same way, the researchers, who are conversant with business matters, tested wherever possible the perceptual map introduced by Mojtahed et al. (2014), identifying different perspectives when developing the theory under the constructivist approach.
LITERATURE REVIEW
The success of the transition of the current mainstream financial system to a sustainable financial system depends on the finance system embracing ethics even more than a legal framework of sustainable development. Morality for sustainable development means the principles that govern the behavior of individuals, organizations, societies, or economies. Ethics is the collection of morals for sustainable development, in particular behavior for the creation of ethical capital in the context of this study. Ethical capital is one of six types of capital of an organization, in addition to (i) physical capital, which mobilizes natural resources; (ii) economic capital, which mobilizes financial capital; (iii) human capital, which mobilizes labor resources; (iv) intellectual capital, which mobilizes intellectual resources; and (v) social capital, which mobilizes civil society resources. Bull et al. (2010) argue that organizations may have all six types of capital, but the mixture may differ among organizations.
Wagner-Tsukamoto (2007) identifies three levels of ethical capital, namely passive unintended moral agency, passive intended moral agency, and active intended moral agency. Players in the finance system (people or organizations) who maximize shareholders' wealth while complying with a minimum standard of the law accumulate the passive unintended moral agency form of ethical capital. For example, these organizations follow the rule of business of paying the minimum salary and wages while maximizing shareholders' wealth. Players in the finance system who maximize shareholders' wealth while accepting the fact that they operate in a community, and therefore acknowledge the importance of other stakeholders, accumulate the passive intended moral agency form of ethical capital. For example, these organizations pay salaries and wages above the minimum while maximizing shareholders' wealth. Players in the finance system who maximize stakeholders' wealth through corporate social responsibility, while accepting the interests of other stakeholders, accumulate the active intended moral agency form of ethical capital. For example, these organizations internalize CO2 emissions with the use of renewable energy and create value for the environment/planet as a stakeholder. The capital utilized for this purpose is ethical. The absence of a national policy framework contributes to unsustainable economic activities, which encroach on the nine planetary boundaries (Rockström et al., 2009), making the planet uninhabitable. This is because the earth is unable to remain resilient to the distortions caused by the unsustainable developmental activities that have been taking place since the industrial revolution. As a result, the planet is now unable to provide its essential services to all beings. For example, sea levels are rising, glaciers are melting, the longer summers are hotter than before, the shorter winters are colder than before, the gravity of water shortages has intensified, changes in precipitation have caused variations in the seasons, decreasing the productivity of agricultural harvests, and species are going extinct.
When considering the context of Botswana, it is the most vulnerable country, other than Namibia, to the adverse impacts of global warming. Botswana is a semi-arid country with characteristics such as unreliable rain, low rainfall, constant drought, and a high rate of evaporation. New (2018) elaborates on what global warming of 1.5°C or higher means for Botswana. Accordingly, a global 1.5°C means 2.2°C locally, and 2°C means 2.8°C. As a result, under these two scenarios, annual rainfall will drop by 5% and 9%, respectively; dry days will increase by 10 and 17 days, respectively; extreme weather events and heat waves will increase by 50 and 75 days, respectively; and maize yields could drop by 20% and 35%, respectively. The hotter and drier future means less water for agriculture and poorer health. Therefore, urgent climate mitigation and climate adaptation actions are needed. In this respect, the biggest challenge is to align the current finance system with a sustainable finance system.
These adverse effects have not only increased the economic cost to human beings but also threaten the survival of all beings. As a result, sustainable investments have become the need of the hour for the planet. Hence, a national policy framework is needed. It enables translating the sustainable development goals and the climate change targets of the Paris Agreement into tools in the respective country, by which the investors in that country, institutional and retail, are directed to adopt the investment practices required for its sustainable development.
However, it is worth noticing that a national policy framework does not operate effectively in isolation. There must be an underlying layer of corporate-level/industry-level policy framework, called the ESG (Environment, Social, and Governance) policy framework. These two layers of policy frameworks go hand in hand.
The ESG policy framework paves the way for six approaches that can be used by investors in sustainable investment decision making. Even though there are overlaps among them, they can be differentiated. The first is negative screening, which means the exclusion of specific types of companies or industries, such as gambling, alcohol, tobacco, or the burning of fossil fuels. The second approach is positive screening, which means the inclusion of companies that are environmentally friendly and socially responsible, for example companies concerned with pollution, diversity, and product safety. The third approach is thematic investment, meaning investments directly related to sustainability, such as investments in climate mitigation and climate adaptation. Another approach is Environment, Social, and Governance (ESG) integration, which is used to understand risks and opportunities in a better way. Environment means that the operations of the company do not harm the environment, through activities related to renewable energy, water management, pollution control, and lower carbon emissions. Social means that the company is concerned with community- and people-related practices such as fair labor practices, data protection, no forced labor, no child labor, health standards, and freedom of association. Governance means the quality of governance, such as the constitution of the board, the anti-corruption policy, and the audit policy. The fifth approach is active ownership, which means that investors, as shareholders, are actively involved and engage management in decision making to create long-term value together with sustainability. The last but not least approach is impact investing, which means investments are considered if they can make both a profit and a social/sustainability impact as well.
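To illustrate how the first two screening approaches differ in practice, the following Python sketch filters a hypothetical universe of companies first by an exclusion list and then by an ESG score threshold; the company names, sectors, scores, and threshold are all invented for illustration.

    # Hypothetical investment universe; all figures are illustrative.
    universe = [
        {"name": "AlphaTobacco", "sector": "tobacco",    "esg": 41},
        {"name": "BetaSolar",    "sector": "renewables", "esg": 78},
        {"name": "GammaBank",    "sector": "banking",    "esg": 63},
        {"name": "DeltaCoal",    "sector": "coal",       "esg": 35},
    ]
    EXCLUDED_SECTORS = {"tobacco", "coal", "gambling", "alcohol"}  # negative screen
    ESG_THRESHOLD = 60                                             # positive screen

    negative_screened = [c for c in universe
                         if c["sector"] not in EXCLUDED_SECTORS]
    positive_screened = [c for c in negative_screened
                         if c["esg"] >= ESG_THRESHOLD]

    print("after negative screening:", [c["name"] for c in negative_screened])
    print("after positive screening:", [c["name"] for c in positive_screened])

Negative screening only removes excluded sectors, whereas the positive screen additionally demands an affirmative ESG standard, so the second list is always a subset of the first.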
To conclude, it is remarked here that corporate-level ESG frameworks such as the King IV Report: Code of Corporate Governance, Institute of Directors (2016), the Global Reporting Initiative (GRI), the International Integrated Reporting Council (IIRC), and the Code of Best Practice on Corporate Governance of Sri Lanka (ICASL, 2017) cannot alone achieve the goals of sustainable development.
The national policy framework is more powerful and authoritative, and is superior in guiding and developing the corporate/industry-level policy framework so that the two work hand in hand for the common goals of sustainable development.
FINDINGS
Public capital alone is not adequate; private capital as well is imperative to achieve the goals of the SDGs and the Paris climate agreement together with the Nationally Determined Contributions. Botswana needs $18.4 billion for climate mitigation and climate adaptation programs to reduce CO2 emissions by 15% from the 2010 baseline by 2030. The commitment of Sri Lanka in its NDC is 5-10% voluntarily and a further 15-24% on a conditional basis (Haque et al., 2019). The conventional mainstream financial system, guided only by the ESG policy framework, cannot bridge the financial gap in the absence of a national framework for transforming the conventional finance system into a sustainable finance system.
Hence, there is a need to transform the conventional financial system into a sustainable financial system. In this regard, what is required is to have a national policy framework in place. Recently introduced, the national policy framework of the EU provides many insights for the purpose.
One of the insights of the (supra)national policy framework of the EU is that the objectives of the EU policies required for sustainable development have been codified under four pillars, one of which is sustainable finance policies. The second insight is that the first three pillars have been aligned with the sustainable financial system. The next insight is that there is a coherent action plan, implementation, and supervision process tied to energy resilience, introducing supranational/national frameworks to implant the principles of sustainable finance for energy vulnerability, security, poverty, and justice (Gatto and Drago, 2020). Another insight is the EU's strategy for energy research and innovation activities within the EU. Further, a comply-or-explain disclosure strategy, or even regulatory pressure, is preferred to giving an option of not reporting. The last but not least insight is that the EU is transforming the conventional mainstream finance system into a full-fledged sustainable finance system to meet sustainable development goals on a fast track, probably before others.
DISCUSSION
As discussed in the literature review, there are two layers of policy frameworks: the national-level policy framework and the corporate-level policy framework. They operate hand in hand in the EU, attracting ethical capital from institutional and retail investors for achieving the goals of sustainable development. Hence the EU is able to achieve the goals of sustainable development (Mikova et al., 2019). However, the absence of a national policy framework for aligning sustainable finance with the mainstream financial system in Botswana demonstrates that industry-led compliance with the ESG approach has detached the financial and capital markets from a sustainable finance system. Further, such an ESG approach alone is unable to meet the national and global goals of sustainable development. This is evident from the following: "Nonetheless, many companies do not have specific sustainability policies because they are still not fully conversant with the issues as well as the global agenda on the SDGs, and thus have adopted mostly isolated practices that are not entirely integrated into their business operations. Conversely, other companies, which have recognized sustainability as key to their business, have adopted some practices but have not attained the level of reporting and accounting for sustainability. Indeed, some companies are well advanced and have adopted some global reporting systems, which have earned them international recognition in the international frontiers" (UNDP and BSE, 2018).
Hence, to understand the nature and quality of the national policy framework needed to meet the goals of sustainable development by aligning sustainable finance with the mainstream financial system, the four-pillar policy framework of the EU is instructive.
Pillar 01: Climate and Energy
Accordingly, one of the pillars of EU policies relates to climate and energy. One of these policies is the 2030 Climate and Energy Framework (European Commission, 2014a), which aims, among other things, to reduce greenhouse gas emissions by 40% below the 1990 level by 2030 and by 85-95% below the 1990 level by 2050. A Framework Strategy for a Resilient Energy Union with a Forward-Looking Climate Change Policy (European Commission, 2015), also called the Energy Union Package, aims to achieve several objectives: to take action to form a single energy market, reduce dependency on third countries for the supply of energy, increase energy efficiency, and increase renewable energy use. The third important policy is the EU Strategy on Adaptation to Climate Change (European Commission, 2013), which aims, among other things, to promote climate adaptation strategies and funding in critical areas such as coastal and marine environments, health, infrastructure, and rural development.
When considering the situation in Botswana, the country is endowed with a natural environment conducive to the production of solar electricity. The sun-drenched country receives 320 sunny days with 3,200 sunshine hours per annum and an average insolation of 2,200 kWh/m² (6-6.5 kWh/m²/day), one of the highest levels of irradiation in the world (Mooiman and Edwin, 2016). The next most available renewable energy is bioenergy from cow dung, given a cattle population of 2.2 million. As with solar power plants, only a few biogas digesters have been installed in the country. Wind resources are insufficient for large-scale wind power projects, and hydropower is not feasible either.
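As a quick consistency check on the figures above, the quoted annual insolation converts to the quoted daily range:

$$\frac{2200\ \text{kWh/m}^2\,\text{yr}^{-1}}{365\ \text{days}} \approx 6.0\ \text{kWh/m}^2\,\text{day}^{-1}$$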
Botswana has agreed, through its Nationally Determined Contribution (NDC), to meet the Paris climate agreement by reducing CO2 emissions by 15% from the 2010 baseline, and has estimated the cost required at $18.4 billion. Botswana imports fossil fuels (diesel, petrol, petroleum gas, aviation gas, and paraffin) for transport and to produce part of its electricity (Sekantsi and Timuno, 2017). Botswana's Vision 2036 provides for the importance of energy security, clean energy, and becoming a net exporter of energy (Government of Botswana, 2016). Sri Lanka also imports fossil fuels, including coal, to produce part of its electricity and for transport. Sarangi et al. (2019) point out that energy insecurity is intense in countries that import and subsidize fossil fuels for the public, do not generate electricity with renewable energy, and emit CO2 excessively in producing electricity. Sri Lanka's Vision 2025 likewise provides for energy security and clean energy (Integrated Research and Action for Development, 2018). However, neither country has a pillar of policies relating to climate and energy.
In 2007, the Electricity Supply Act was amended to give the minister authority to issue and control licenses for generating electricity, under which the Botswana government issued a permit for a grid-tied 1 MW solar PV project in Tobela village in the Shoshong constituency. However, the only power purchase agreement with an independent power producer reported so far is not available to the public. A few small and medium-size grid-tied and off-grid solar PV projects have been installed for internal consumption. The amendment is not adequate for de-risking investment and deregulating the (solar) energy market. The situation in Sri Lanka in this respect is better than in Botswana: 50% of total electricity is produced from renewable energy sources, with large- and small-scale hydro projects (installed a long time ago) representing 44.5%, wind 3.5%, biomass 1%, and solar PV 1%. The remaining 50% is produced with coal and oil (Integrated Research and Action for Development, 2018). There is no specially designed legal, policy, or institutional framework for renewable energy in Botswana (Mooiman and Edwin, 2016; Motsholapheko et al., 2018; Sekantsi and Timuno, 2017) or in Sri Lanka (Haque et al., 2019; Mohamed Nijam and Abdul Nazar, 2017).
Botswana and Sri Lanka, both Sunbelt countries, have not yet started harvesting solar energy at scale. Access to electricity in Botswana has increased: national access was 55.6% in 2009, 63% in 2013, and 72% in 2016 (Motsholapheko et al., 2018). Access to electricity in rural areas lags behind urban areas, and consistent supply without breakdowns is a challenge; renewable energy is therefore advocated as an alternative. The renewable energy transition is a promising strategy for the economic development of rural areas (Clausen and Rudolf, 2020), where energy poverty is greater than in urban areas. Energy poverty can be defined as "the lack of access to modern energy services and products" (World Economic Forum, 2010, as cited by Kumar, 2020). Kumar further points out that energy poverty takes different forms, such as lack of access to modern energy services, non-availability of reliable services, and non-affordability. Access to electricity in Sri Lanka, by contrast, is 98.7% (Integrated Research and Action for Development, 2018).
When deregulating the fossil fuel energy market to include the solar energy market, small, medium, and large scale solar installations should each be treated with their inherent characteristics in mind, because all types of systems are essential. For example, Best and Truck (2020) point out that small scale systems below 100 kWp represent 70% of total PV capacity in Australia. They further explain that a large number of small scale solar plants can displace a large scale solar power plant, and even fossil fuel energy plants, which require a large upfront investment. De-risking the investment is the golden rule applied when deregulating the energy market. In this respect, various policies are already in place in the global energy market, such as Renewable Energy Certificates (RECs), feed-in tariff (FIT) systems, solar renewable energy credits, and renewable energy portfolio standards (Ndebele, 2020). Freire-Gonzalez and Ho (2018) assert the importance of Environmental Fiscal Reforms (EFR) through Environmental Tax Reform (ETR) in discouraging pollutant emissions and encouraging clean energy. Dissanayake et al. (2020) point out that, of three carbon mitigation strategies (carbon tax, fuel tax, and carbon emission trading), the fuel tax reduces CO2 emissions more than the other two. They further point out that a policy mix of fuel tax and carbon tax enables carbon emission reductions sufficient to meet the Nationally Determined Contribution (NDC).
A deregulated market is essential because it offers customers choices from competitive suppliers who provide renewable/solar energy at competitive prices (Ndebele, 2020). Ndebele further points out that deregulation to promote renewable energy takes place by providing premiums or support for all renewable energy or for specific sources such as solar or wind. In the deregulated energy market, energy innovation scenarios are a modern tool used by EU countries to forecast the carbon emission targets committed under Nationally Determined Contributions (Paltsev, 2016). Kim and Wilson (2019b) point out that, since an innovation system involves uncertain variables, scenario analysis is a better way of exploring uncertainties: it enables identifying potential risks by understanding salient uncertainties and taking informed decisions from near-term actions to long-term outcomes. Mikova et al. (2019), who analyzed the low carbon energy scenarios of six EU countries (UK, Germany, France, the Netherlands, Denmark, and Belgium), found ten common features of low carbon policy settings. Nine of the ten characteristics are relevant for all, such as a modeling framework for diverse pathways, the ambitiousness of the targets, stakeholder involvement (in particular public involvement), transparent technology options, non-technological aspects such as social acceptance, an economic component such as cost-benefit analysis, the degree of usage of scenarios in policy design, intermediate indicators of targets for achievement, and revision of scenarios. They concluded that these countries are able to achieve their carbon emission reduction targets as modeled by their scenarios.
The facts discussed above show that what Botswana and Sri Lanka require is a legal, policy, and institutional framework to de-risk investment in renewable energy. Solar energy, the most abundant renewable resource, can be the focus of both countries in meeting the targets of the Paris climate agreement together with their Nationally Determined Contributions.
Pillar 02: Policies Relating to Other Environmental Aspects
The second pillar comprises the policies relating to other environmental aspects. One of these is the Circular Economy Package, "Closing the Loop: An Ambitious EU Circular Economy Package" (European Commission, 2019), which aims to stimulate the transition towards a circular economy covering the whole cycle: production, consumption, waste management, and the market for secondary raw materials. The Clean Air Policy Package, published in 2013 (European Council, 2020), aims to substantially reduce air pollution, and thereby its health and environmental impacts, by 2030, with estimates of avoiding 58,000 premature deaths, saving 123,000 km² of ecosystems from nitrogen pollution, and saving 19,000 km² of forest ecosystems from acidification.
When considering the policies relating to other environmental aspects in Botswana, there are regulations, guidelines, ratified conventions, and multilateral environmental agreements for waste management and clean air policy (Mmereki, 2018; Wiston, 2017). However, this regulatory framework has not specifically addressed recent developments in these areas, such as the circular economy, waste-to-energy, and zero waste management. The circular economy refers to an economic system that aims to minimize waste through the use and reuse of products, materials, and resources for as long a period as possible by creating a secondary market. Waste-to-energy refers to processes used to generate energy, such as electricity and heat, from waste. Zero waste management refers to processes that prevent waste so that no trash is sent to landfills or incineration. Hence, what is required in Botswana and Sri Lanka is to incorporate the insights of the Circular Economy Package, especially the creation of a secondary raw material market connected with the finance system.
The sources of air pollution in Botswana include industrial operations such as coal-fired power stations, mining and smelting activities, metal processing, traffic emissions, household fires for cooking, heating, and lighting by burning fossil fuels, wood, and biomass, and natural sources such as desert dust, wildfire eruptions, windblown soil erosion, and mineral dust. Wiston (2017) points out that Botswana has been ranked among the most polluted countries, with serious air pollution owing to the non-application of the standards set by the regulatory framework as well as the inadequacy of those standards. He further points out that there is a need to link air pollution with the prevention of health effects. Hence, what is required in Botswana is to assess that link and introduce an air policy to avoid health hazards, similar to the clean air policy of the EU. When considering air pollution in Sri Lanka, the situation is not different from Botswana. In addition to industrial air pollution, it is reported that deaths caused by indoor and outdoor pollution number 4,200 and 1,000, respectively (Nandasena et al., 2012). They further argue that air pollution mitigation policies are not adequate and that the policies related to air quality and the air quality monitoring system need revision (Manawadu and Ranagalage, 2013).
Pillar 03: Investment and Growth
The third pillar of the policies relates to investment and growth.
Pillar 04: Sustainable Finance
The fourth pillar of policies is sustainable finance. The insight to be drawn here is that the first three pillars have been aligned with the sustainable finance system. Niculescu (2017) elaborates that $5-7 trillion of investment is needed for achieving the goals of sustainable development, with a $2.5 trillion gap in developing countries, and further points out that the World Bank estimates that 50-80% will come from domestic sources, including great potential from private funding and private capital, although the private sector currently contributes only 10% of infrastructure investment.
One of the pillars of sustainable development policies is committed to transforming the conventional mainstream financial system into a sustainable financial system, which is crucial for sustainable development and strengthens the other three pillars. This mission, connecting with other policies and procedures, began with the appointment of a High-Level Expert Group (HLEG) in December 2016 to collaborate with the European Union and investors. The HLEG consists of 20 experts from civil society, the financial sector, and academia, plus observers from the EU and international organizations. Its main objective was to ascertain which areas of reform are necessary to align the financial services industry with a sustainable finance stream. It published its interim report in July 2017, and within a few weeks two recommendations were implemented (University of Cambridge, 2017), demonstrating that prompt action for sustainable development is possible. The final report was published in January 2018 (High-Level Expert Group on Sustainable Finance, 2018) and recommended, among other things, a classification system/taxonomy, clarification of the duties of investors, improved disclosure, green funds, and green bonds. In response to the recommendations, an action plan was published in March 2018 providing ten actions (Appendix 01) clustered under three areas: reorientation of capital flows towards sustainable investments, mainstreaming sustainability into risk management, and fostering transparency and long-termism in financial and economic activity (Principles for Responsible Investment, 2018). In May 2018, four legislative proposals (Taxonomy, Disclosure and Duties, Benchmarks, and Sustainability Preferences) were published. The remarkable insight of the EU taxonomy is that it integrates the ESG policy framework for disclosure by creating low carbon benchmarks.
In July 2018, the Technical Expert Group (TEG), with 35 members representing civil society, academia, business, and the finance sector, plus observers and international public bodies, was established to advise the EU on various technical aspects required for implementing the action plan. These include technical screening criteria to determine whether an economic activity is sustainable, principles and standards applicable to issuing EU-wide green bonds, the creation of low carbon and positive carbon impact benchmarks, and recommendations for non-binding guidelines under the non-financial reporting directives, which cover corporate disclosures and ESG issues, taking into consideration the findings of the Task Force on Climate-related Financial Disclosures.
In January 2019, the EU published draft amendments to the Insurance Distribution Directive (IDD) and the Markets in Financial Instruments Directive (MiFID II) requiring investment firms and insurance intermediaries to comply with ESG considerations. These amendments address the investment advising process, the portfolio management process, and the disclosure requirements.
In June 2019, the TEG published three reports, on Taxonomy, Green Bonds, and Benchmarks, which are under public consultation for feedback; the delegated acts are expected to be adopted by the EU in early 2020. The insight to be drawn from the fourth pillar is that there is a coherent action plan, implementation, and supervision process tied to the goals of sustainable development, introducing (supra)national frameworks to implant the principles of sustainable finance for energy vulnerability, security, poverty, and justice (Gatto and Drago, 2020) in the fabric of the conventional mainstream financial system. Further, Gatto and Busato (2019) explain that resilience enables adaptiveness, improving performance through learning and adaptation and through informed but continuous change in economic, societal, and ecological governance. The final insight is that the EU is transforming the conventional mainstream finance system into a sustainable finance system to meet the sustainable development goals before long.
The transition from a conventional mainstream financial system to a sustainable financial system in Botswana and Sri Lanka is imperative. The investments available for low carbon infrastructure and sustainable development are not adequate, and additional sources of finance are needed from institutional and retail investors. These investors do not invest adequately in renewable energy infrastructure and other sustainable activities owing to various factors, such as lack of confidence, technological risk, inadequate policies, highly capital-intensive investment, and unsatisfactory experiences. Hence, a legal, policy, and institutional framework for deregulating the energy market and de-risking investment, with the objective of mobilizing capital from institutional and retail investors for renewable energy, is imperative (Hafner et al., 2020).
The energy innovation portfolio is the primary strategy used for achieving public goals. The Strategic Energy Technology (SET) Plan was established to coordinate energy research and innovation activities for achieving the climate change policy objectives of the EU, such as renewable energy, energy efficiency, energy security, the energy union, economic growth, employment creation, and global competitiveness (Kim and Wilson, 2019a). Energy research and innovation is another insightful area of the EU policy framework. Many energy innovation portfolio schemes have been in place during the last decades, such as the EU SET-Plan, which comprises renewable energy, energy efficiency, carbon capture and storage, smart grids, sustainable transport, and nuclear power, as well as
ARPA-E in the US and Mission Innovation (Kim and Wilson, 2019b). A regulatory framework that focuses on the energy innovation portfolio must be continuously subject to scrutiny through measurement, verification, and enforcement of policies so as to remain a resilient energy framework for achieving the goals of low carbon economies (Thomas and Rosenow, 2020). In this respect, Galeotti et al. (2020) point out various environmental policy indicators used in many countries, such as pollution abatement and control expenditures over GDP in Australia, government R&D expenditures over GDP in Austria, the implicit tax rate on energy in Belgium, total revenue from energy and environmental taxes over GDP in Canada, the OECD environmental policy stringency indicator (all instruments) in Denmark, the OECD environmental policy stringency indicator (market-based instruments) in Finland, and the OECD environmental policy stringency indicator (non-market-based instruments) in France.
The energy innovation portfolio is a vital strategy that Botswana and Sri Lanka can adopt. Renewable energy, energy efficiency, carbon capture and storage, smart grids, and sustainable transport are possible components of such a portfolio. In this respect, solar energy is the priority for these sun-drenched solar belt countries, where the business fundamentals can be connected with the goals of a low carbon economy only through a sustainable finance system.
When considering the sustainable financial system in Sri Lanka, the country, an emerging economy, faces the adverse impacts of climate change as one of the 40 countries most vulnerable to it. The increasing environmental risks include frequent floods, earth slips, erosion of coastal and marine ecosystems, air and water pollution, and loss of biodiversity in one of the world's biodiversity hotspots. Unless the climate risk is managed, there is a possibility of losing 1.2% of Gross Domestic Product (GDP) by 2050 (CBSL, 2019). The road map explicitly provides six strategies as core pillars, to be addressed by policies and implemented through an action plan over three time frames: short-term (2019-2020), medium-term (2021-2025), and long-term (2025-2030). These six core pillars are financial vision 2030, ESG integration into financial markets, financial inclusion, capacity building, international cooperation, and measurement and reporting. The situation in Botswana is similar to Sri Lanka in some respects: multiple ministries and related institutions have been assigned the SDGs, the Paris Agreement, and the NDC within existing policies, but there is not even a road map. The financial system in Botswana also consists of both domestic and international institutions, such as retail banks, development banks, investment banks, insurance companies, investment funds, sovereign wealth funds, pension funds, the stock exchange, and non-banking financial institutions. The alignment of the mainstream financial system with a sustainable financial system is critically important for achieving the national and global goals of sustainable development. In this respect, a UNEP inquiry into the design of a sustainable financial system in African countries concluded that innovative financial and capital market policies, regulations, and standards are required in high-potential areas such as disclosures, credit risk management, fiduciary duties, lender and investor liability, and bond markets (UNEP, 2015). Camilleri (2015) adds the insight on disclosures that most EU countries use a comply-or-explain strategy instead of giving an option not to report, and further explains that organizations respond to stronger regulatory pressures on reporting, which provides more benefits to stakeholders.
Accordingly, there is no national framework for transforming the existing financial system into a sustainable financial system in either Botswana or Sri Lanka.
CONCLUSION AND POLICY IMPLICATIONS
Climate finance is the finance required for meeting the targets of the Paris Agreement together with the Nationally Determined Contribution (NDC). Sustainable finance is the finance needed for sustainable development, including climate finance. There is a global trend of transforming the conventional mainstream finance system into a sustainable finance system. In this regard, two policy frameworks are in operation: the national policy framework (for example, the EU policy framework discussed above) and the industry-level ESG policy framework (for example, King IV: Code of Corporate Governance, Southern Africa, and the Code of Best Practice on Corporate Governance of Sri Lanka). The striking feature of these two policy frameworks in the EU is their coherence, within each and between them, for the common objective of sustainable development through the sustainable finance system.
In Botswana, there have been several joint programs between the government and UN agencies, academia, and Parliament on the 2030 Agenda for Sustainable Development (Republic of Botswana and United Nations, 2017). Only one high-level symposium on sustainable finance was held, in 2016, after which a follow-up process was conducted with selected players in the field, the banking sector, and the BSE (2017).
The existing industry-led compliance with Environmental, Social, and Governance standards prescribed by King IV, the Global Reporting Initiative (GRI), the International Integrated Reporting Council (IIRC), the Code of Best Practice on Corporate Governance of Sri Lanka, or any other professional organization is simply not enough to meet the national and global goals of sustainable development and the Paris climate agreement together with the Nationally Determined Contribution of Botswana.
Policy Implications
Therefore, to introduce a national framework to transform the current financial system into a sustainable financial system in Botswana, it is recommended:
i. To have separate pillars of sustainable development policies, including policies for sustainable finance with their own identity, adequate for a transition similar to the EU sustainable finance system.
ii. To strengthen the policy framework for sustainable finance by including a classification system/taxonomy, green bond standards, benchmarks, and financial and non-financial disclosure for ESG risk compliance.
iii. To align the sustainable finance system to enable the provision of the investment required for sustainable development, which includes $18.4 billion for the Nationally Determined Contribution of Botswana (Republic of Botswana and United Nations, 2017) and that of Sri Lanka; the corresponding financial commitment of the EU is EUR 177 billion per annum (European Environment Agency, 2017).
iv. To strengthen the national policy framework with an action plan, implementation, and supervision.
v. To maintain coherence within each pillar and between pillars for sustainable development through the sustainable finance system.
vi. To obtain technical and financial support from international organizations such as the Sustainable Banking Network and the International Finance Corporation (IFC, 2019).
vii. To collaborate and share experiences of the sustainable finance system with international organizations such as the Equator Principles Association, the Network for Greening the Financial System, and Climate Action in Financial Institutions (2017).
viii. To raise awareness of the sustainable financial system among all players, such as institutional investors, retail investors, intermediaries, all institutions directly and indirectly engaged with the finance system, and the public.
ix. To make available all the relevant data required for evaluating the progress of the transition.
x. To secure political will on a fast track for the above.
Conclusion
Aligning the mainstream financial system with sustainable development by incorporating a sustainable finance system does not happen automatically. A (supra)national policy framework for sustainable development through the sustainable financial system is necessary. Hence, what is essential is a national policy framework with legislative and non-legislative elements for transforming the existing conventional financial system into a sustainable finance system. This enables the creation of innovative and profitable opportunities for institutional and retail investors so that they can make their fair contribution to sustainable development.
Zeppini and van den Bergh (2020) elaborate on the importance of such a framework, stating that an increase in oil prices will induce a shift from oil to gas, worsening climate change in the absence of a regulatory framework, because investment in renewable energy remains unattractive under fuel subsidies. Hence, a proper regulatory framework is required to motivate investment in renewable energy.
Such a policy framework should be strengthened by an action plan followed by implementation and supervision to meet the national and global goals of sustainable development across the whole financing and investment chain, similar to the EU Action Plan: Financing Sustainable Growth. In this respect, the strength of political intervention and a fast-track approach are the factors that determine success. If the formulation of policies necessary for the sustainable development goals is delayed, then mandatory compliance (for example, a carbon tax rather than voluntary compliance), urgent actions, and disruptive actions will be inevitable in the future.
"Economics"
] |
Refractive index and formaldehyde sensing with silver nanocubes
We report the synthesis of Ag nanocubes using a sodium sulfide assisted solvothermal method. Small edge-length nanocubes (32 and 44 nm) were obtained at reaction temperatures of 145 and 155 °C. The refractive index sensitivity of the synthesized nanocubes was investigated with an aqueous solution of glucose. A refractive index sensitivity of 161 nm per RIU was found for the colloidal dispersion of nanocubes. On the LSPR chip made by immobilizing the nanocubes on a (3-aminopropyl)trimethoxysilane-modified glass coverslip, the obtained sensitivity was 116 nm per RIU. Detection of formaldehyde in water and milk samples was also performed with nanocubes of 44 nm edge length. Formaldehyde detection exploits the interaction between the aryl amine of 4-aminothiophenol immobilized on the nanocubes and the electrophilic carbon atom of formaldehyde. In water and in diluted milk, formaldehyde sensitivities of 0.62 and 0.29 nm μM⁻¹ were obtained, respectively.
Introduction
Noble metal nanoparticles exhibit unique optical properties due to the resonant oscillation of conduction electrons present at the surface, a phenomenon known as localized surface plasmon resonance (LSPR). 1 The LSPR peak position of plasmonic nanostructures is sensitive to changes in the refractive index of the surrounding medium. 2 Anisotropic plasmonic nanoparticles exhibit higher refractive index sensitivity due to large surface charge polarizability and local field enhancement. 3,4 The strong response of the LSPR peak position of anisotropic plasmonic nanoparticles enables the sensing of analytes from small molecules 5 to large biomolecules. 6 These nanoparticles can be utilized to construct LSPR-based sensors both in solution and on a substrate. Owing to difficulties with colloidal stability, immobilization of the particles on a substrate is gaining interest. 7 Silver nanocubes (AgNCs) are anisotropic nanoparticles widely used in surface enhanced Raman scattering (SERS), [8][9][10] fluorescence enhancement, [11][12][13] and refractive index based sensing [14][15][16] due to their strong plasmonic properties. The synthesis of nanocubes of different edge lengths strongly depends on the reaction atmosphere. 17 Various routes have been followed to synthesize silver nanocubes, including polyol synthesis, 17,18 the solvothermal method, [19][20][21][22] hydrothermal synthesis, 23 the wet chemical method 24 and microwave assisted methods. [25][26][27] In the synthesis process using ethylene glycol, silver nanocubes are synthesized by reducing silver ions in ethylene glycol in the presence of polyvinylpyrrolidone (PVP) at higher temperatures. To promote the formation of perfect silver nanocubes, chemical reagents such as HCl, 28,29 Na2S, 30 NaHS 31 and FeCl3 32 are used in the polyol process. Formalin (37% aqueous solution of formaldehyde) is widely used to increase the shelf-life of food products such as fish, meat and milk. Although formaldehyde is also a metabolic product in animals, it is classified as a human carcinogen, and higher doses can cause eye and nose irritation, damage to the central nervous system, immune system disorders, nasopharyngeal cancer and leukemia. 33 The carcinogenic nature of formaldehyde makes its sensitive detection in food products of utmost importance. In the recent past, several analytical methods such as high performance liquid chromatography 34 (HPLC), gas chromatography 35,36 (GC), gas chromatography/mass spectrometry 37,38 (GC/MS), chemiluminescence 39,40 and fluorimetry 41 have been utilized to detect formaldehyde in trace amounts. Although these methods show very high sensitivity for formaldehyde detection, the equipment used is bulky and costly, which makes it unsuitable for on-site, real-time analysis of samples. Apart from these methods, among the optical detection modes, spectrophotometry 41-43 and surface enhanced Raman scattering (SERS) [44][45][46][47] have also been utilized for formaldehyde detection. While the sensitivity of the spectrophotometric method is limited, the equipment used in SERS is expensive; SERS detection of formaldehyde also requires derivatization of the metal nanoparticles. Apart from these studies, silver nanoparticle-sensitized titanium dioxide 48 and Ag nanoparticle-decorated carbon nanotubes 49 have also been used for formaldehyde sensing. Martínez-Aquino et al. have used resorcinol-functionalized gold nanoparticles for the colorimetric detection of formaldehyde. 50
Gold spherical nanoparticles and nanorods have also been utilized for formaldehyde detection through refractive index sensing. 51 Although scattered reports on Ag and Au nanoparticle based formaldehyde sensing are available in the literature, further investigation of metal nanoparticles as formaldehyde sensors is needed, as these refractive index based sensors have the potential to be developed into miniaturized and multiplexed platforms. 52 In this work, we present the synthesis of silver nanocubes (AgNCs) using PVP and sodium sulfide nonahydrate. The synthesis route used is the solvothermal method, as it provides excellent control over the reaction atmosphere, which is necessary for silver nanocube synthesis. The synthesized AgNCs of edge lengths 32 and 44 nm were investigated for their refractive index sensing capability utilizing an aqueous solution of glucose. The LSPR sensor chip was prepared by immobilizing the AgNCs on (3-aminopropyl)trimethoxysilane-modified glass coverslips, and its sensing capability is also demonstrated with aqueous glucose solutions. In the final section of the work, the sensitive detection of formaldehyde in water and diluted milk is demonstrated with 4-aminothiophenol-functionalized Ag nanocubes.
Synthesis of Ag nanocubes
Silver nanocubes were synthesized by the solvothermal method following the procedure reported previously. 19 For the synthesis, a solution of 100 mM sodium sulfide (Na2S·9H2O) and 0.15 M PVP was prepared in 20 mL of ethylene glycol. This solution was mixed with 20 mL of 0.1 M AgNO3 in ethylene glycol under constant stirring. The mixture was transferred to a 50 mL Teflon-lined autoclave and heated for three hours. For the three sets of samples, the heating temperature was fixed at 145, 155, and 165 °C. After completion of the reaction, the autoclave was allowed to cool to room temperature naturally. The products were washed with acetone and water and centrifuged at 6000 rpm for 20 minutes. The particles settled at the bottom were re-dispersed in water and used for further characterization and applications.
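One plausible reading of the two sulfide concentrations quoted in this paper (100 mM in this recipe, 50 mM in the results section) is that the latter is the post-mixing value, since combining equal 20 mL volumes halves the nominal reagent concentration:

$$c_{\text{final}} = c_0\,\frac{V_0}{V_0 + V_{\text{AgNO}_3}} = 100\ \text{mM} \times \frac{20\ \text{mL}}{20\ \text{mL} + 20\ \text{mL}} = 50\ \text{mM}$$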
Spectral and morphological characterization
The spectral characterization of the synthesized silver nanocubes was performed using the lab-built UV-Vis spectroscopy setup explained in our earlier work. 53,54 All the LSPR experiments were also performed with this lab-built setup. The morphological characterization of the synthesized silver nanocubes was performed with FESEM. To record the FESEM images, the colloidal solution of silver nanocubes was drop-cast on a gold-coated glass coverslip. The images were acquired with a TESCAN-MIRA 3 FESEM instrument.
Immobilization of Ag nanocubes on glass surface
The immobilization of AgNCs was performed on silanized glass coverslips. Prior to silanization, the glass coverslips were cleaned in freshly prepared piranha solution (a 3 : 1 mixture of concentrated sulfuric acid and 30% hydrogen peroxide solution). Following the cleaning, the coverslips were rinsed vigorously with deionized water and dried. The dried coverslips were immersed in a 10% APTMS ethanolic solution for fifteen minutes. The coverslips with APTMS as the surface layer were then rinsed twice with ethanol and mildly sonicated in ethanol for one minute. They were further rinsed with deionized water, followed by drying at 120 °C for three hours. The silanized coverslips were cooled to room temperature and immersed in the silver nanocube colloidal solution for different time spans. The fabricated LSPR chips were stored in deionized water.
Functionalization of Ag nanocubes with 4-ATP
Functionalization of the Ag nanocubes with 4-aminothiophenol (4-ATP) was performed for the detection of formaldehyde. For the functionalization, 0.5 mL of 10 mM 4-aminothiophenol solution in ethanol was added to 9.5 mL of Ag nanocube colloidal solution and stored in the dark. To confirm the functionalization, the solution mixture was centrifuged and the nanocubes were collected and re-dispersed in deionized water.
Synthesis and characterization of Ag nanocubes
Silver nanocubes were synthesized by the solvothermal method. In the synthesis, sodium sulfide was used at a fixed concentration of 50 mM. It has already been reported that a higher concentration of sodium sulfide (100 mM) leads to the formation of silver nanowires and a lower concentration (12.5 mM) leads to a mixture of nanocubes and regular tetrahedrons. 19 The reaction temperature was varied to tune the size of the nanocubes. The synthesized AgNCs were characterized by UV-Vis absorption spectroscopy with the lab-built setup. 53,54 Fig. 1(a) shows the UV-Vis spectra of silver nanocubes synthesized at 145, 155, and 165 °C, respectively. For the AgNCs obtained at 145 and 155 °C, the spectrum shows two distinct bands, whereas a broad spectrum was observed for the product obtained at 165 °C. The broadness of this spectrum could be due to aggregation of nanoparticles. The UV-Vis spectrum of AgNCs obtained at 145 °C shows a prominent band at ≈419 nm along with a shoulder band at ≈358 nm. For the AgNCs obtained at a reaction temperature of 155 °C, the prominent band was observed at 436 nm with a clear small band at 350 nm. The prominent band in the two spectra is due to the dipole resonance, and the small band on the lower-wavelength side is due to the octupole resonance. 55 The morphological characterization of the synthesized AgNCs was performed with FESEM. Fig. 1(b) shows an FESEM image of AgNCs synthesized at a 155 °C reaction temperature. Although some nanoparticles of other shapes can also be seen in the image, the average edge length of the silver nanocubes was found to be ≈44 nm. The nanocubes synthesized at 145 °C show an ≈32 nm edge length.
Immobilization of silver nanocubes on glass substrate
The synthesized nanocubes were immobilized on glass coverslips to make an LSPR sensor chip. For the immobilization, the glass coverslips were silanized with APTMS and the silanized coverslips were incubated in the silver nanocube colloidal solution. The density of nanocubes on the coverslip surface was controlled via the incubation time and investigated by recording UV-Vis spectra. Fig. 2(a) shows the UV-Vis spectra of nanocubes on coverslips for different incubation times. As can be seen, the spectra show both characteristic bands of the nanocubes. Also, the spectra do not show any change in intensity after 40 hours of incubation, indicating saturation of the coverslip surface. It is also evident from the spectra that the full width at half maximum (FWHM) is smaller compared with the nanocube spectrum in the colloidal solution.
For the colloidal solution, the FWHM was 100 nm, which is reduced to ≈70 nm in the case of the immobilized nanocubes. The decrease in the FWHM could be due to the fact that, in the immobilization process, the nanocubes are aligned in the same plane with a preferred orientation. 56 It should be noted that the absorption maximum does not show any red shift during the immobilization process, which indicates that, although the nanocubes are immobilized on the APTMS-modified glass surface, the inter-particle separation is larger than the edge length of the nanocubes 57 and therefore there is no plasmonic interaction between nanoparticles.
To verify the sensitivity of the LSPR chip towards changes in refractive index, the chip was dried in air and then immersed in water. Fig. 2(b) shows the spectrum of the dry LSPR chip and the effect of the refractive index change from air to water (RI: 1.33). As can be seen in the figure, the dried sensor chip showed bands at 416.4, ≈380, and 348 nm. The band at 416.4 nm corresponds to the dipolar mode, whereas the 348 nm band is due to the octupolar resonance. 56 It has been observed in earlier reports on AgNCs that the quadrupole resonance is a dark mode and is absent in the colloidal nanocube solution. However, when nanocubes are immobilized and interaction occurs between the nanocubes and the substrate, this dark mode becomes active due to mode hybridization and makes its appearance on the blue side of the dipolar band. [58][59][60][61][62] The band that appeared at ≈380 nm could thus be the quadrupolar mode arising from the interaction of the nanocube surface with the substrate. When the LSPR chip is immersed in water, the quadrupole resonance band disappears completely and the dipolar resonance band shows a red shift to 430.7 nm. It has also been observed that, for small AgNCs, both dipole and quadrupole modes are sensitive to the RI change. 56 So, it is possible that, after immersion of the LSPR chip in water, the band representing the quadrupole mode merged with the dipolar band. In any case, a shift of 14.3 nm for the dipolar resonance band confirms the sensitivity of the nanocubes immobilized on the APTMS-modified glass substrate.
Refractive index sensitivity of Ag nanocubes towards glucose
The refractive index sensitivity of the synthesized AgNCs in colloidal solution was examined with aqueous solutions of glucose. Although a substantial amount of work has been reported on the detection of glucose, in the present work we chose this molecule because the refractive index of the solution can be controlled very finely and accurately using this easily available, water-soluble small molecule. Therefore, rather than reporting the sensitivity of the nanocubes towards glucose as such, we intend to evaluate the sensitivity of the synthesized nanocubes in solution and on the LSPR chip. For the experiment, the glucose concentration was varied from 0 to 20% in steps of 5%. Fig. 3(a) shows the extinction spectra of AgNCs synthesized at 155 °C with varying glucose concentration. The accurate positions of the LSPR bands were obtained by calculating the first-order derivative of the spectra near the LSPR band maxima; the numerical root of a linear fit to the derivative data is taken as the LSPR band maximum position. Fig. 3(b) shows the variation of the LSPR band maximum with the refractive indices of the glucose solutions. The solid line in the figure represents the linear fit, the slope of which provides an estimate of the RI sensitivity. For the AgNCs obtained at a reaction temperature of 155 °C, the sensitivity was 161 nm per RIU, whereas the sensitivity of the AgNCs obtained at 145 °C was 113.7 nm per RIU. The UV-Vis spectra and RI sensitivity plot for the AgNCs obtained at 145 °C are shown in ESI Fig. S1. The RI sensitivity of the nanocubes was also measured on the LSPR chip prepared by immobilizing the nanocubes on a glass coverslip. Fig. 4(a) shows the effect of various glucose concentrations on the extinction spectrum of the nanocubes. The analysis (Fig. 4(b)) shows the bulk refractive index sensitivity of the LSPR chip to be 116 nm per RIU. The observed RI sensitivity of nanocubes on the substrate is lower compared with the sensitivity in colloidal solution; this reduction could be due to the substrate. Several studies have reported on the effect of substrate refractive index on the dipolar and quadrupolar plasmon resonances of silver nanocubes. 56,59-64 Mahmoud et al. observed a refractive index sensitivity of 113 nm per RIU on a quartz (RI: 1.458) surface for 65 nm edge-length Ag nanocubes. 60 Ahamad et al. observed that the refractive index sensitivity is reduced by ≈50% when Ag nanocubes of 40 nm side length were immobilized on a glass substrate, with a greater reduction when the nanocubes were immobilized on a thin TiO2 layer. 56 In their work, the refractive index sensitivity in solution was 176 nm per RIU, reduced to 93 nm per RIU on glass and 57 nm per RIU on TiO2. In a similar study, Martinsson et al. observed a smaller reduction in refractive index sensitivity when Ag nanocubes were immobilized on APTES- or polyelectrolyte-modified glass substrates: 63 for 40 nm edge-length Ag nanocubes, the sensitivity in solution was 158 nm per RIU, reduced to 124 nm per RIU on an APTES (RI: 1.420)-modified glass substrate and to 137 nm per RIU for a polyelectrolyte film (RI: 1.46)-modified glass substrate. All these studies indicate a strong dependence of the refractive index sensitivity on the substrate. In the present work, the nanocubes were immobilized on an APTMS (RI: 1.424)-modified glass surface, which is similar to AgNCs immobilized on an APTES-modified surface, and the sensitivity reduction is therefore consistent with the earlier report. 63
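The peak-location procedure described above (root of a linear fit to the first-order derivative of the spectrum) and the sensitivity estimate (slope of peak position versus refractive index) are straightforward to script. The sketch below is not the authors' analysis code; the function names, window size, and numerical values are illustrative assumptions only.

```python
import numpy as np

def lspr_peak(wavelength, extinction, window=15):
    # Locate the apparent maximum, then fit a line to the first-order
    # derivative of the spectrum in a window around it; the root of that
    # line (where the derivative crosses zero) is the band maximum.
    i_max = int(np.argmax(extinction))
    lo = max(i_max - window, 0)
    hi = min(i_max + window, len(wavelength))
    wl = wavelength[lo:hi]
    deriv = np.gradient(extinction, wavelength)[lo:hi]
    slope, intercept = np.polyfit(wl, deriv, 1)
    return -intercept / slope

def ri_sensitivity(refractive_index, peak_position):
    # Bulk RI sensitivity (nm per RIU) = slope of peak position vs. RI.
    slope, _ = np.polyfit(refractive_index, peak_position, 1)
    return slope

# Illustrative numbers only (not the paper's raw data): peak positions for
# glucose solutions of increasing refractive index, giving ~161 nm per RIU.
n = np.array([1.333, 1.340, 1.347, 1.354, 1.361])
peaks = np.array([436.0, 437.1, 438.3, 439.4, 440.5])
print(f"sensitivity = {ri_sensitivity(n, peaks):.0f} nm per RIU")
```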
3.4 Formaldehyde detection using Ag nanocubes
3.4.1 Functionalization of Ag nanocubes with 4-ATP. Among optical sensing techniques, colorimetric detection has been widely utilized for the detection of formaldehyde. These colorimetry-based detections exploit the basic reaction between a nucleophilic aryl or alkyl amine and the electrophilic carbon of formaldehyde. 65,66 Following the same principle, in the present work the synthesized nanocubes were functionalized with 4-aminothiophenol (4-ATP). For the functionalization, an ethanolic solution of 4-ATP was mixed with the colloidal silver nanocube solution. The UV-Vis spectrum of silver nanocubes with 4-ATP was recorded and is shown in Fig. 5 along with the spectrum of AgNCs without 4-ATP. As can be seen in the figure, the dipolar resonance shifted from 437.5 to 451.4 nm. It has already been reported that the thiol group of 4-ATP strongly adsorbs to the surface of silver nanoparticles, leaving the amine group free, 67 and the observed shift in the spectrum could be a result of this binding. To ascertain that the observed spectral shift is a result of the immobilization of 4-ATP on the AgNC surface and not due to refractive index variation in the AgNC colloidal solution, the 4-ATP-functionalized silver nanocube solution was centrifuged, and the nanoparticles were collected and re-dispersed in deionized water. The spectrum of the re-dispersed nanocubes is also shown in Fig. 5. Compared with the spectrum of AgNCs with 4-ATP, this spectrum does not show a spectral shift, indicating the immobilization of 4-ATP molecules on the surface of the AgNCs.
It is important to mention that various concentrations of 4-ATP, from 0.5 mM to 2.0 mM, were investigated for the functionalization of the nanocubes. It was observed, however, that concentrations above 0.5 mM result in a broader extinction spectrum. Such broadening leads to poor resolution and is therefore not adequate for sensing experiments. In view of this, the 4-ATP concentration for the functionalization of the Ag nanocubes was kept constant at 0.5 mM.
3.4.2 Formaldehyde sensing in water and milk. Formaldehyde-contaminated water and food products such as milk can cause serious health risks, including cancer, in human beings. A tolerable concentration limit of 2.6 mg L⁻¹ in ingested products has been recommended by the World Health Organization (WHO). 68 The severity of formaldehyde consumption through water or milk makes its detection in these two media very important. The 4-ATP-functionalized silver nanocubes were utilized to sense formaldehyde in water and milk. For the sensing experiment in water, the formaldehyde concentration was varied from 127 μM to 1270 μM (0.001% to 0.01% (v/v) of formalin in water). From the diluted formaldehyde solutions, 0.2 mL was mixed with 1.8 mL of 4-ATP-functionalized silver nanocubes, thus reducing the concentration of formaldehyde by a factor of ten. The mixture was incubated in the dark for an hour, after which UV-Vis spectra were recorded. Fig. 6(a) shows the UV-Vis spectra of the incubated samples. A progressive change in the LSPR band maximum can be seen in the figure. The band maximum positions were plotted against the formaldehyde concentration, as shown in Fig. 6(b). The solid line in the figure represents the linear fit to the data. It is evident that the response is linear below a formaldehyde concentration of 76.2 μM, with a slope of 0.62 nm μM⁻¹. Above this concentration, saturation of the sensitivity was observed, with a slope of 0.047 nm μM⁻¹.
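For reference, the ten-fold dilution quoted above follows directly from the mixing volumes; for the lowest formalin dilution, for example,

$$C_{\text{final}} = 127\ \mu\text{M} \times \frac{0.2\ \text{mL}}{0.2\ \text{mL} + 1.8\ \text{mL}} = 12.7\ \mu\text{M},$$

which is the lowest concentration reported as successfully detected later in this work.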
The formaldehyde sensing experiment was also performed with a commercially available milk sample. For the experiment, the milk was diluted and mixed with the 4-ATP-functionalized silver nanocubes. A refractive index change due to the milk was observed, with the LSPR peak shifting from 451.4 nm to 457.7 nm. For the formaldehyde sensing, the diluted milk was contaminated with formaldehyde (final concentration: 127-1270 μM) and mixed with the 4-ATP-functionalized silver nanocubes, followed by incubation for one hour in the dark. The UV-Vis spectra of the formaldehyde-contaminated milk samples with 4-ATP-immobilized nanocubes are shown in Fig. 7(a). As is evident, a successive change in the LSPR peak position occurs as different concentrations of formaldehyde are used in the mixture. Fig. 7(b) shows the plot of the LSPR band maxima against the concentration of formaldehyde in milk. Although a successive increase in the LSPR band position is also observed for formaldehyde in milk, the extent of the red shift is nearly half that for formaldehyde in water. This is also evident from the slope of the linear fit in the lower-concentration region: 0.29 nm μM⁻¹ for milk compared with 0.62 nm μM⁻¹ for water. For water, the observed red shift was 51.3 nm over the 0 to 1270 μM concentration range, whereas for the milk samples the observed shift was 26.8 nm.
The limit of detection, LOD (3.3 × standard deviation of the y-intercept/slope of the regression line), and the limit of quantification, LOQ (10 × standard deviation of the y-intercept/slope of the regression line), were calculated for formaldehyde in water and milk samples. The LOD and LOQ for formaldehyde in water were found to be 1.05 and 3.18 mg L⁻¹, whereas for the milk sample the LOD and LOQ were 1.14 and 3.45 mg L⁻¹, respectively.
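Written out, the definitions used above are the standard regression-based ones, with σ the standard deviation of the calibration line's y-intercept and S its slope:

$$\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}$$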
The selectivity of the 4-ATP-functionalized AgNCs towards formaldehyde versus acetaldehyde, benzaldehyde, acetone, glucose, and sucrose was investigated following the same method. The concentration of each analyte was 1 mM, and the corresponding absorption spectra are shown in Fig. 8(a). As is evident, the wavelength shift in the presence of formaldehyde is very large compared with the other analytes (Fig. 8(b)), which demonstrates the selectivity of the 4-ATP-functionalized AgNCs towards formaldehyde.
The mechanism of formaldehyde sensing with the 4-ATP-functionalized nanocubes can be understood from Scheme 1. The Ag nanocube adsorbs the 4-ATP molecule on its surface through the thiol group. When this functionalized nanoparticle interacts with a formaldehyde molecule, the amine terminal of the 4-ATP molecule undergoes chemical modification to an imine with the release of one water molecule. The chemical process appears to be irreversible in nature.
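In condensed form, the condensation described above (aryl amine plus formaldehyde giving an imine and water) can be written as

$$\text{Ar–NH}_2 + \text{HCHO} \longrightarrow \text{Ar–N{=}CH}_2 + \text{H}_2\text{O},$$

where Ar denotes the thiol-anchored aromatic ring of 4-ATP.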
In some recent work, formaldehyde sensing has been performed with a fiber-optic sensor 69 and with silver nanocluster-modified Tollens' reagent. 70 With the fiber-optic sensor, 69 detection down to 0.2 mg L⁻¹ was achieved, whereas the lower detection limit obtained with the silver nanocluster method is 27.99 μM (≈0.84 mg L⁻¹). 70 In the present work, formaldehyde at a concentration of 12.7 μM (≈0.38 mg L⁻¹) has been detected successfully. Like the earlier reported values, the sensitivity obtained in the present work is well below the tolerable limit (2.6 mg L⁻¹) of formaldehyde in ingested food products. This establishes that Ag nanocubes functionalized with 4-aminothiophenol can be used for the sensitive detection of formaldehyde in water and milk. Establishing 4-ATP-functionalized silver nanocubes as a formaldehyde sensor will also pave the way for using other Ag/Au nanoparticles for formaldehyde detection with better sensitivity, depending on their shape and size. Further, compared with other analytical methods, it will be easier to prepare LSPR sensor chips with these nanoparticles, which can be extremely useful for on-site field applications.
Conclusions
Silver nanocubes of edge lengths 44 and 32 nm were synthesized by the solvothermal method using PVP as a capping agent in the presence of sodium sulfide. The colloidal suspension of silver nanocubes showed a refractive index sensitivity of 161 nm per RIU for glucose solutions in water. The silver nanocubes were immobilized on a glass substrate using (3-aminopropyl)trimethoxysilane to construct an LSPR chip, whose refractive index sensitivity towards glucose was found to be 116 nm per RIU. The lower RI sensitivity could be due to the interaction of the nanocubes with the substrate. The silver nanocubes were functionalized with 4-aminothiophenol for the sensing of formaldehyde in water and milk. The sensitivities obtained for formaldehyde in water and in diluted milk are 0.62 nm μM⁻¹ and 0.29 nm μM⁻¹, respectively. The bulk refractive index sensitivity of the silver nanocubes in solution and on the substrate demonstrates their potential for LSPR-based sensing applications, and the formaldehyde sensing capability of the 4-aminothiophenol-functionalized silver nanocubes demonstrates their usefulness in detecting adulterants in milk.
Conflicts of interest
The authors declare no conflict of interest.
Acknowledgements
Financial support from the Department of Science and Technology (DST), India under project grant no. IDP/BDTD/11/2019 is gratefully acknowledged.
"Physics"
] |
HSP90β Impedes STUB1‐Induced Ubiquitination of YTHDF2 to Drive Sorafenib Resistance in Hepatocellular Carcinoma
Abstract YTH domain family 2 (YTHDF2) is the first identified N6-methyladenosine (m6A) reader that regulates the status of mRNA. It has been reported that overexpressed YTHDF2 promotes carcinogenesis, yet its role in hepatocellular carcinoma (HCC) is elusive. Herein, it is demonstrated that YTHDF2 is upregulated and can predict poor outcomes in HCC. Decreased ubiquitination of YTHDF2 contributes to its upregulation. Furthermore, heat shock protein 90 beta (HSP90β) and STIP1 homology and U-box-containing protein 1 (STUB1) physically interact with YTHDF2 in the cytoplasm. Mechanistically, the large and small middle domain of HSP90β is required for its interaction with STUB1 and YTHDF2. HSP90β inhibits the STUB1-induced degradation of YTHDF2, elevating YTHDF2 expression and further boosting the proliferation and sorafenib resistance of HCC. Moreover, HSP90β and YTHDF2 are upregulated, while STUB1 is downregulated, in HCC tissues. The expression of HSP90β is positively correlated with the YTHDF2 protein level, whereas the expression of STUB1 is negatively correlated with the protein levels of YTHDF2 and HSP90β. These findings deepen the understanding of how YTHDF2 is regulated to drive HCC progression and provide potential targets for treating HCC.
Introduction
Hepatocellular carcinoma (HCC) is a challenging disease with high incidence and fatality [1] and extremely poor survival (less than 6%), strongly associated with late tumor diagnosis. [1] Additionally, due to its high heterogeneity, [2] patients with HCC can hardly benefit from a single specific therapy. [1a,3] Therefore, developing key therapeutic targets for the effective treatment of HCC is an urgent task for current medical studies.
As the core machinery maintaining protein homeostasis and cellular functions, the ubiquitin-proteasome system (UPS) controls the elimination of most proteins and participates in biological reactions at diverse levels, such as the stress response, the DNA damage response, and cell proliferation. [4] Alteration of the UPS is related to many diseases, including various cancers. [5] Ubiquitination is a cascade reaction that requires E1, E2, and E3 enzymes. [5a,b,6] Epigenetic modifications are critical in the pathogenesis of many kinds of tumors, and over the years post-transcriptional modification has attracted extensive attention in biomedical research. For example, N6-methyladenosine (m6A) methylation, a prevalent mRNA modification in eukaryotic cells, controls the status of mRNA, including RNA processing, translocation, stability, and translation, thereby regulating multiple biological processes. [7,7a,8] m6A methylation is installed by methyltransferases (termed "writers"), such as METTL3/14, WTAP, etc., [8c] and recognized by m6A-binding proteins (termed "readers"), such as YTHDC1/2, YTHDF1/2/3, etc. [8c] Like many other epigenetic modifications, m6A methylation is a reversible process. [8a,9] However, the role and modification of YTHDF2 in HCC are still not fully understood.
This study showed that the ubiquitination level of the m6A reader YTHDF2 is significantly decreased in HCC. Mechanistically, heat shock protein 90 beta (HSP90β) interacts with YTHDF2 and STIP1 homology and U-box-containing protein 1 (STUB1), a well-characterized E3 ligase, in the cytoplasm through its large and small middle domain. STUB1 triggers ubiquitination and degradation of YTHDF2 via the 26S proteasome, whereas HSP90 blocks this biological process. Consequently, HSP90 boosts growth and sorafenib insensitivity by suppressing the ubiquitination and promoting the stabilization of YTHDF2. Moreover, our clinical observations showed that the expression of HSP90 or STUB1 is correlated with the protein expression of YTHDF2. In summary, this study furthers the understanding of the regulatory network of YTHDF2 in HCC progression.
The Ubiquitination Level of YTHDF2 is Downregulated in HCC
To explore whether YTHDF2 is critical for HCC progression, the mRNA expression of YTHDF2 in various stages/grades of HCC was analyzed using the public TCGA database via the UALCAN website. YTHDF2 was notably increased in stage 1-3 and grade 1-4; moreover, HCC tissues with higher grades showed higher expression of YTHDF2 (Figure 1A). The relationship between YTHDF2 expression and the survival of patients with HCC was further analyzed using the public TCGA database via the Kaplan-Meier curves website. We found that upregulation of YTHDF2 was associated with poor outcomes, including overall survival and relapse-free survival (Figure 1B). Next, the protein expression of YTHDF2 was determined in HCC samples (n = 31). We showed that tumor tissues had higher expression of YTHDF2 versus adjacent normal tissues (Figure 1C,D). We next assessed whether the upregulation of YTHDF2 protein expression might result from abnormal ubiquitination of YTHDF2. Co-immunoprecipitation (co-IP) analysis was performed in 11 pairs of tumor or adjacent normal tissues among the samples with higher protein expression of YTHDF2. The ubiquitination level of YTHDF2 was determined by the ratio of ubiquitin density/YTHDF2 density in the co-IP results. The case N22/T22 was finally excluded because the ubiquitination level of YTHDF2 was undetectable in these paired samples. As shown, the ubiquitination level of YTHDF2 was notably decreased in HCC tissues compared with adjacent normal tissues (Figure 1E,F). Together, these findings demonstrate that a deficiency of ubiquitination contributes to the overexpression of YTHDF2 and drives the malignant progression of HCC.
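The quantification described above reduces to simple ratio arithmetic followed by a paired comparison. The sketch below illustrates it with hypothetical densitometry values; the variable names and numbers are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry readouts (arbitrary units) for paired samples:
# ubiquitin signal and YTHDF2 signal measured from the same co-IP lane.
ub_tumor      = np.array([0.8, 1.1, 0.9, 0.7, 1.0, 0.6, 0.9, 1.2, 0.8, 0.7])
ythdf2_tumor  = np.array([2.0, 2.4, 1.9, 1.8, 2.2, 1.7, 2.1, 2.5, 2.0, 1.9])
ub_normal     = np.array([1.5, 1.7, 1.4, 1.6, 1.8, 1.3, 1.5, 1.9, 1.6, 1.4])
ythdf2_normal = np.array([1.1, 1.2, 1.0, 1.1, 1.3, 0.9, 1.0, 1.3, 1.1, 1.0])

# Ubiquitination level = ubiquitin density / YTHDF2 density for each sample.
ratio_tumor  = ub_tumor / ythdf2_tumor
ratio_normal = ub_normal / ythdf2_normal

# Paired t-test across tumor/adjacent-normal pairs, as in the figure legend.
t_stat, p_val = stats.ttest_rel(ratio_tumor, ratio_normal)
print(f"tumor {ratio_tumor.mean():.2f} vs normal {ratio_normal.mean():.2f}, p = {p_val:.3g}")
```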
YTHDF2 Interacts with HSP90β and STUB1
It has been observed that YTHDF2 can be modified by ubiquitin in HCC samples. Thus, we subsequently examined whether the proteasome may degrade YTHDF2. Our co-IP results showed that the ubiquitination level of YTHDF2 was notably upregulated after short exposure to MG132, a potent proteasome inhibitor, in HepG2 and Hep3B cells (Figure 2A), indicating that the canonical ubiquitin-proteasome pathway degrades YTHDF2. We previously reported that heat-shock proteins (HSPs/chaperones) have a critical role in controlling the degradation of specific proteins. [10,11] In this study, we further investigated whether HSPs may regulate the ubiquitination of YTHDF2 and lead to its abnormal expression. Co-IP assay combined with liquid chromatography with tandem mass spectrometry (LC-MS/MS) analysis showed that HSP90 emerged as the most important YTHDF2-interacting HSP in HCC cells (Figure 2B,C and Figure S1, Supporting Information). As reported previously, [12] STUB1 is an HSP70/90-interacting E3 ligase. Thus, we performed co-IP and Western blot to detect their protein interactions using anti-YTHDF2, anti-STUB1, and anti-HSP90 antibodies, respectively. Our results showed that YTHDF2, STUB1, and HSP90 can interact with each other (Figure 2D-F). Next, exogenous immunofluorescence (IF), endogenous IF, proximity ligation, and confocal microscopy assays were performed in HCC cells to further clarify the subcellular location of their interactions. These results consistently showed that their interactions were mainly localized in the cytoplasm of HCC cells (Figure 2G-I and Figure S2A, Supporting Information). Thus, we further aimed to investigate whether there is a specific binding domain of HSP90 for STUB1. Truncated mutants and full-length HSP90 were engineered in plasmids with a FLAG tag at their C-termini (Figure 2J). These plasmids were transfected with HA-STUB1, respectively, in HEK293T cells. Co-IP results showed that the large and small middle domain (276-602 aa) of HSP90 was critical for its binding to STUB1 and YTHDF2 (Figure 2K). In addition, we found that the N-terminus (1-384 aa) of YTHDF2 is required for its binding to HSP90 (Figure 2L). The co-IP results in HepG2 cells were also consistent with the findings in HEK293T cells (Figure S2B, Supporting Information). Thus, the above results indicated that HSP90 interacts with the N-terminus of YTHDF2 (1-384 aa) and with STUB1 through its large and small middle domains in the cytoplasm.
HSP90β and STUB1 Regulate the Stability of YTHDF2
We further determined the role of STUB1 or HSP90 in YTHDF2 expression. As shown by Western blot, knockdown (KD) of STUB1 upregulated the protein level of YTHDF2 (Figure 3A), whereas its overexpression reversed this process (Figure 3B). Additionally, inhibition of HSP90 with NVP-AUY922 or HSP90-KD resulted in the downregulation of YTHDF2 in HCC cells (Figure 3C and Figure S3, Supporting Information). According to a previous report, [13] Bcl-2 was used as a marker to indicate the effect of NVP-AUY922 in this study. Next, our CHX-tracking analysis revealed that STUB1-KD significantly slowed the rate of YTHDF2 degradation (Figure 3D), whereas the inhibition of HSP90 notably accelerated the degradation of YTHDF2 (Figure 3E). Meanwhile, neither STUB1-KD nor HSP90 inhibition altered the mRNA level of YTHDF2 (Figure 3F,G). Furthermore, the downregulation of YTHDF2 caused by HSP90 inhibition or STUB1 overexpression was significantly reversed by bortezomib, a specific proteasome inhibitor (Figure 3H,I). Together, these findings indicated that STUB1 reduces the protein stability of YTHDF2, whereas HSP90 increases the protein stability of YTHDF2 in HCC.
HSP90β Inhibits the STUB1-Induced Ubiquitination of YTHDF2
To further determine whether HSP90 and STUB1 may alter the ubiquitination of YTHDF2, co-IP assays were performed in HepG2 cells treated with si-STUB1, NVP-AUY922, or FLAG-HSP90 plasmids. Our co-IP analysis showed that the K48-linked ubiquitination and pan-ubiquitination levels of YTHDF2 were notably downregulated by STUB1-KD in HCC cells, while they were upregulated by HSP90 inhibition (Figure 4A,B). In addition, overexpression of HSP90 reduced the levels of K48-linked ubiquitination and pan-ubiquitination of YTHDF2, and decreased the interaction between STUB1 and YTHDF2 (Figure 4C). To explore whether the regulation of YTHDF2 by HSP90 is indeed mediated by STUB1, co-IP assays were performed in HEK293T cells transfected with Myc-Ub, 6×His-YTHDF2, HA-STUB1, or FLAG-HSP90 plasmids. The results showed that overexpression of STUB1 notably increased the ubiquitination of YTHDF2, while further overexpression of HSP90 reduced the level of STUB1-induced ubiquitination of YTHDF2 (Figure 4D).
Ubiquitination mostly occurs at lysine (Lys) residues. To investigate the ubiquitination site on YTHDF2, six plasmids encoding Lys-mutant forms of YTHDF2 were constructed according to GPS-Uber, a web tool for ubiquitination-site prediction. These plasmids were transfected into HEK293T cells, respectively. Our co-IP results showed that the ubiquitination level of YTHDF2 (K245A), but not of the other mutant forms of YTHDF2, was downregulated, indicating that K245 is a critical ubiquitination site on YTHDF2 (Figure 4E). Moreover, our in vitro ubiquitination assay showed that STUB1 directly triggered ubiquitination of YTHDF2, whereas HSP90 blocked the STUB1-induced ubiquitination (Figure 4F). Together, our findings indicated that HSP90 blocks the STUB1-induced ubiquitination of YTHDF2, thereby maintaining the protein level of YTHDF2 in HCC cells.
HSP90β/STUB1 Regulates the Proliferation of HCC in a YTHDF2-Dependent Manner
Next, we assessed whether HSP90, YTHDF2, and STUB1 might be functional in regulating malignant phenotypes of HCC. Cell viability assays were conducted on 5 consecutive days after HSP90/YTHDF2/STUB1-KD to observe cell proliferation. STUB1-KD promoted the proliferation of HCC cells (Figure 5A,B), whereas HSP90-KD or YTHDF2-KD suppressed HCC proliferation (Figure 5C,D). We further determined whether HSP90/STUB1 may regulate the proliferation of HCC in a YTHDF2-dependent manner. Our cell viability assay showed that overexpression of YTHDF2 significantly reversed the growth inhibition induced by HSP90-KD or by overexpression of STUB1 in HepG2 and Hep3B cells (Figure 5E,F). In addition, an in vivo assay showed that overexpression of YTHDF2 rescued the tumor suppression induced by HSP90-KD or by overexpression of STUB1 in HepG2 xenografts (Figure 5G-I). Next, we aimed to determine which mRNA might be regulated by YTHDF2 to drive HCC progression. It has been reported that OCT4 is a downstream effector of YTHDF2 regulating the liver cancer stem cell phenotype via m6A RNA methylation. YTHDF2 upregulates the m6A level in the 5′-untranslated region of OCT4 mRNA to elevate the translation and expression of OCT4. [14] Thus, we next assessed whether OCT4 mediates the HSP90/STUB1-regulated cell proliferation in HCC. The results showed that OCT4-KD significantly reversed the growth promotion induced by STUB1-KD or by overexpression of HSP90 (Figure 5J). Together, these findings illuminate that HSP90 and STUB1 have opposite roles in HCC cells, which is largely associated with their opposite functions in regulating the ubiquitination of YTHDF2.
HSP90β Blockade Restores the Responsiveness of HCC to Targeted Therapy
Sorafenib, a multi-kinase inhibitor, has become one of the most prevalent targeted therapies for advanced HCC. However, its effectiveness in prolonging the overall survival of HCC patients remains limited. Thus, we attempted to determine whether inducing the degradation of YTHDF2 by inhibition of HSP90 may enhance the sensitivity of HCC cells to targeted therapy with sorafenib. First, we explored the effect of NVP-AUY922 on the proliferation of HCC cells. NVP-AUY922 notably reduced cell viability and colony formation (Figure 6A,B). Next, we explored the effect of NVP-AUY922 combined with sorafenib on proliferation in HCC cells. We found that the combination remarkably reduced cell viability and colony formation compared with treatment with NVP-AUY922 or sorafenib alone (Figure 6C,D). In addition, this combination induced apoptosis more markedly than either treatment alone in HCC cells (Figure 6E).
In order to explore the in vivo effects of the combination, xenograft models were established in nude mice. The results showed that the tumor size and tumor weight of HCC xenografts, but not the body weight, were remarkably decreased by the combination treatment, i.e., NVP+sorafenib (Figure 6F-I). Additionally, we further confirmed that HSP90-KD or YTHDF2-KD also restored the sensitivity of both HepG2 and Hep3B cells to sorafenib (Figure 6J). More importantly, inhibition of HSP90 with NVP-AUY922 increased the interaction of STUB1 and YTHDF2 in HCC cells (Figure 6K), suggesting that NVP targets HSP90, but not YTHDF2. Together, we demonstrated that the inhibition of HSP90 can enhance the sensitivity of HCC cells to targeted therapy by inducing the interaction between STUB1 and YTHDF2.
Clinical Relationship of HSP90β/STUB1 and YTHDF2 in HCC
We explored the relationship between HSP90/STUB1 and YTHDF2 in clinical samples derived from 40 HCC cases to further validate our findings in vitro and in vivo. The immunohistochemistry (IHC) assay showed that the protein expression of YTHDF2 and HSP90 was upregulated, while STUB1 was reduced, in HCC tissues compared with normal adjacent tissues (Figure 7A-D). Additionally, the protein expression of YTHDF2 was positively correlated with HSP90 expression, while it was negatively correlated with STUB1 expression (Figure 7E,F). Meanwhile, the protein expression of STUB1 was negatively correlated with HSP90 expression (Figure 7G). Analysis of the TCGA database via UALCAN showed that HSP90 had higher mRNA levels in various stages and tumor grades of HCC (Figure 8A,B). Moreover, the overall survival analysis with Kaplan-Meier curves showed that higher expression of HSP90 was associated with poor survival in HCC patients, including all stages (Figure 8C). In contrast, higher expression of STUB1 indicated better outcomes in HCC patients, including stages 2-4 (Figure 8D-F). Collectively, our findings in clinical tissues from HCC were highly consistent with the molecular and cellular biology results, further supporting the hypothesis that HSP90 impedes STUB1-induced ubiquitination of YTHDF2 to drive the growth and sorafenib insensitivity of HCC (Figure 8G).
Discussion
HCC is a challenging and hazardous type of solid tumor. High heterogeneity, drug resistance, postoperative recurrence, and a high risk of metastasis are the leading causes of poor outcomes for patients with HCC. Over the years, sorafenib has been widely used as a first-line targeted therapy for advanced HCC; ≈30% of patients may benefit from this treatment. [1a,3a] Therefore, there is still an urgent need to elucidate the molecular and cellular mechanisms of HCC development and progression in order to identify more effective interventions for the treatment of HCC.
m6A is one of the most abundant mRNA modifications. Like many other modifications, m6A is characterized as a dynamic and reversible process. [7a] The expression levels of YTHDF2 differ among malignant tumors, and its exact function is still debatable. [8a] Mechanistically, in prostate cancer, YTHDF2 mediates the mRNA degradation of tumor suppressors, including LHPP and NKX3-1, to boost AKT phosphorylation-induced tumor proliferation and migration. [16] In addition, YTHDF2 may stabilize the transcripts of MYC and vascular endothelial growth factor A to facilitate tumor progression in glioblastoma stem cells in a potentially indirect manner. [17,8a,18] This study identified the cancer-promoting role of YTHDF2 in HCC, because the loss of YTHDF2 significantly inhibits tumor growth and sorafenib insensitivity. Our clinical observations showed that YTHDF2 is overexpressed and predicts poor prognosis in patients with HCC.
Previous studies on YTHDF2 mainly focused on its function as an m6A-binding protein, whereas the molecular mechanisms of how YTHDF2 is regulated at various levels are still unclear. The UPS, the selective elimination pathway of proteins that maintains homeostasis, regulates various biological processes. [4a,b,6] According to the existing reports, the post-translational modification mechanisms of YTHDF2 include ubiquitination, [19] SUMOylation, [20] and O-GlcNAcylation. [21] The SUMOylation of YTHDF2 increases its m6A modification function and subsequently changes the gene expression profile, thereby promoting the malignant progression of lung cancer. [20] In addition, a significant increase in O-GlcNAcylation of YTHDF2 was observed during hepatitis B virus infection, which may further inhibit its ubiquitination and enhance its protein stability and carcinogenic activity. [21] Furthermore, FBW7, a component of the SCF E3-ubiquitin ligase, may induce ubiquitination of YTHDF2 to suppress ovarian cancer development. [19] This study revealed that the ubiquitination level of YTHDF2 was downregulated in tumor tissues of HCC patients compared to normal tissues. Additionally, we identified that the UPS degrades YTHDF2 in HCC cells, because inhibition of the proteasome with MG132 led to the accumulation of ubiquitinated YTHDF2.

Figure 5. HSP90 and STUB1 regulate the proliferation of HCC in a YTHDF2-dependent manner. A-D) Cell viability analyses were performed in HepG2 and Hep3B cells treated with STUB1/YTHDF2/HSP90 siRNAs or control siRNAs for 5 days. The OD values were measured every day. E) Cell viability analyses were performed in HepG2 and Hep3B cells stably expressing 6×His-YTHDF2 or control plasmids, and subjected to the treatment with HSP90 siRNAs or control siRNAs for 72 h. F) Cell viability analyses were performed in HepG2 and Hep3B cells stably expressing 6×His-YTHDF2 or control plasmids, and subjected to the transfection with HA-STUB1 or control plasmids for 72 h. G-I) HepG2 cells stably expressing 6×His-YTHDF2 or control plasmids, with or without HSP90 shRNAs or HA-STUB1 plasmids, were transplanted on BALB/c nude mice for 3 weeks. Tumor volume was recorded every 3 days. Tumor image, tumor size, and tumor weight were shown. J) Cell viability analyses were performed in HepG2 and Hep3B cells treated with STUB1 siRNAs or FLAG-HSP90, with or without the transfection of OCT4 siRNAs for 72 h. *p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.
In order to further explore the potential mechanism of YTHDF2 regulation by the UPS, we identified the protein interaction between YTHDF2 and the molecular chaperone HSP90 using biological mass spectrometry (LC-MS/MS analysis). [22] For example, our previous studies have revealed that the molecular chaperone GRP78 binds to the E3 ligase SIAH2 and forms a GRP78-SIAH2-AR-V7 degradation complex to trigger the canonical degradation of AR-V7. [10] Additionally, the mitochondria-associated molecular chaperone GRP75 recruits the deubiquitinating enzyme USP1 to form a GRP75-USP1-SIX1 complex, thereby mediating the deubiquitination and stabilization of SIX1. [11] This study identified the protein-protein interactions among HSP90, YTHDF2, and the E3 ligase STUB1 in HCC cells via co-IP and exogenous/endogenous IF assays. Moreover, we revealed that the large and small middle domain (276-602 aa) of HSP90 is required for its binding to STUB1 and YTHDF2 in the cytoplasm. At the same time, the N-terminus (1-384 aa) of YTHDF2 is required for its binding to HSP90.
Next, the following evidence confirmed that STUB1 promotes the ubiquitination and degradation of YTHDF2 in HCC: first, the knockdown of STUB1 did not affect the transcription level of YTHDF2, but prolonged its half-life, upregulated its protein level, and inhibited its ubiquitination in HCC cells. In contrast, the overexpression of STUB1 reduced the YTHDF2 protein level, which could be reversed by bortezomib, a proteasome inhibitor. More importantly, in vivo and in vitro ubiquitination assays identified HSP90 as a functional inhibitor of STUB1, as it potently suppresses the ubiquitination and degradation of YTHDF2 by binding STUB1. Further co-IP analysis revealed that the K245 residue is the critical ubiquitination site on YTHDF2. Functionally, the knockdown of STUB1 promoted cell proliferation, while the knockdown or inhibition of HSP90 significantly limited cancer progression in HCC, similar to the knockdown of YTHDF2. In addition, we revealed that STUB1/HSP90 regulated proliferation in a YTHDF2-OCT4-dependent manner. Furthermore, inhibition of HSP90 with NVP-AUY922 can significantly enhance the sorafenib sensitivity of HCC in both cell lines and xenografts. These findings are consistent with previous studies showing that NVP-AUY922 can attenuate drug resistance in diverse models. [23] Moreover, the protein expression and correlation of HSP90, YTHDF2, and STUB1 were also verified in clinical samples derived from HCC patients.
Nevertheless, this study has several limitations. First, tumors and normal adjacent tissues contain multiple cell types, especially stromal and other noncancerous cells; noise from these nontumorigenic cells cannot be ruled out in the co-IP assays performed on fresh HCC tissues. Second, the antibody-based co-IP conditions lack the stringency needed to exclude ubiquitylation signals from YTHDF2-binding proteins as contaminants. Third, the effects of sorafenib in combination with NVP-AUY922 on sorafenib-resistant models derived from clinical HCC patients (such as patient-derived tumor xenografts) need to be further explored in the future.
In summary, this study examined the post-translational modification of the m6A reader YTHDF2 from the perspective of ubiquitination. Our data suggest that HSP90 promotes the growth and sorafenib resistance of HCC cells by suppressing STUB1-induced YTHDF2 ubiquitination and degradation, which could open a novel intervention strategy for the clinical treatment of HCC.
Cell Culture: The embryonic kidney cell line HEK293T and the HCC cell lines HepG2/Hep3B were obtained from ATCC. The identities of these cell lines were validated by short tandem repeat profiling. HCC cells were cultured in Roswell Park Memorial Institute (RPMI)-1640 medium containing 10% fetal bovine serum (FBS), while HEK293T cells were cultured in Dulbecco's modified Eagle medium (DMEM) containing 10% FBS, in a humidified atmosphere containing 5% CO2/95% air at 37 °C.
Fresh HCC Samples: The fresh HCC samples, including malignant tumors/adjacent normal tissues, were obtained from the discarded material utilized for routine laboratory tests at the Department of Hepatopancreatic Surgery, First People's Hospital of Foshan (Foshan, China). All procedures were performed with the approval of the Medical Ethics Committee of the First People's Hospital of Foshan (ethics approval number: L[2023] No. 2) and with the full, informed consent of the subjects. The protein extraction steps were performed as previously described. [24]

Co-IP and Immunoblotting Assays: Protein interaction was detected by co-IP analysis with an Antibody Coupling Kit (Invitrogen). Dynabeads were used to couple the specific antibodies, including STUB1, HSP90, and YTHDF2, with incubation for 16-24 h. Cell lysates isolated from HCC or HEK293T cells were incubated with the Dynabeads-coupled antibodies. Next, SDS buffer was added to the mixture containing protein-Dynabeads-antibodies, followed by incubation at 70 °C for 10 min. Finally, the targeted/combined proteins were isolated from the mixtures via centrifugation. The supernatant was used for further LC-MS/MS analysis or Western blotting, a previously described routine assay. [10]

Figure 6. Inhibition of HSP90 sensitizes HCC cells to the treatment of sorafenib. A) Cell viability analyses were performed using MTS assay in HepG2 and Hep3B cells exposed to NVP for 24 and 48 h. B) HepG2 and Hep3B cells were treated with NVP or vehicle for 24 h. Plate colony formation assay was performed post-treatment. Images were shown on the left, while the quantitative data were shown on the right. C) Cell viability analyses were performed using MTS assay in HepG2 and Hep3B cells exposed to sorafenib with or without NVP for 24 h. D) HepG2 and Hep3B cells were treated with sorafenib with or without NVP for 24 h. Plate colony formation assay was performed post-treatment. Images were shown on the upper side, while the quantitative data were shown on the lower side. E) Annexin V-FITC/PI staining assays were performed in HepG2 cells treated with sorafenib with or without NVP for 24 h. Green indicates FITC positive, and red indicates PI positive. Scale bar, 100 μm. Quantification was shown below the images. F) HepG2 xenografts were established and grown in BALB/c nude mice. The mice were divided into four groups and treated with NVP (i.p. 25 mg kg⁻¹/2 days), sorafenib (p.o. 20 mg kg⁻¹/2 days), the combination of NVP and sorafenib, or vehicle for 3 weeks. Images of the xenografts. G) Tumor volume was recorded every 3 days. The curves of tumor volume. H) Tumor weight and I) body weight of mice. J) Cell viability analyses were performed in HepG2 and Hep3B cells exposed to sorafenib and treated with two pairs of YTHDF2/HSP90 siRNAs or control siRNA for 48 h. K) Co-IP assay was performed using YTHDF2 antibodies in lysates from HepG2 cells exposed to NVP for 24 h, subjected to the immunoblotting for YTHDF2 and STUB1. *p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.

In Vitro Ubiquitination Assay: The in vitro ubiquitination of YTHDF2 was determined using the ubiquitinylation kit (BML-UW9920-0001, Enzo Life Sciences, Switzerland) and specific purified proteins. According to the kit instructions, Ubiquitinylation Buffer, E1, E2 (UbcH5a and UbcH5b), Mg-ATP Solution, Biotinylated Ubiquitin Solution, and human recombinant purified proteins including YTHDF2 (0.5 × 10⁻⁶ m) (H00051441-P01, Abnova), STUB1 (100 × 10⁻⁹ m) (HY-P71340, MCE), and HSP90 (100 × 10⁻⁹ m) (ab80033, Abcam) were mixed into a 50 μL ubiquitination reaction system. The reaction mixtures were incubated at 37 °C for 4 h, then boiled with the nonreducing gel loading buffer for 5 min and analyzed by Western blotting.
LC-MS/MS Assay: The above co-IP products were subjected to LC-MS/MS assay to screen YTHDF2-interacting proteins. Co-IP products were first subjected to gel electrophoresis. Next, the protein bands were developed by silver staining, then acquired, washed with double-distilled water three times, and subjected to a decoloring reaction. After digestion with trypsin, the samples were centrifuged and dried. An Easy-nLC 1200 liquid chromatography (LC) system (ThermoFisher, USA) was applied to fractionate each tryptic peptide mixture. The trapping and desalting procedure was carried out with a volume of 20 μL of 0.1% formic acid. Next, an elution gradient of 80% acetonitrile, 0.1% formic acid was used on an analytical column. Data-dependent acquisition (DDA) mass spectrum techniques were applied to acquire tandem MS data on a ThermoFisher Q Exactive mass spectrometer (ThermoFisher, USA) fitted with a Nano Flex ion source. Data were acquired using an ion spray voltage of 1.9 kV and an interface heater temperature of 275 °C. For a full mass spectrometry survey scan, the target value was 3 × 10⁶ and the scan ranged from 350 to 2000 m/z at a resolution of 70 000 and a maximum injection time of 100 ms. For the MS2 scan, only spectra with a charge state of 2-5 were selected for fragmentation by higher-energy collision dissociation with a normalized collision energy of 28. The MS2 spectra were acquired in the ion trap in rapid mode with an AGC target of 8000 and a maximum injection time of 50 ms. Dynamic exclusion was set for 25 s. The MS/MS data were analyzed for protein identification and quantification using PEAKS Studio 8.5.
For transfection, HCC cells were seeded in a 6-well plate and cultured to 50% confluence. The supernatant was replaced with medium containing lentiviruses and polybrene (5 μg mL⁻¹) at a multiplicity of infection of 10. After incubation for 12 h, the supernatant was replaced with medium containing 10% FBS and the cells were cultured for 48 h. Puromycin and/or neomycin were used to select stably transfected cells.
Reverse Transcription Polymerase Chain Reaction (RT-PCR) Assay: Total RNAs were isolated from the cultured cells and subjected to real-time PCR analysis using specific primers for YTHDF2 and STUB1 (sequences listed in Table S3, Supporting Information). This assay was performed with at least three independent repeats, as described before. [25]

Immunofluorescence Assay: Cells were seeded in a chamber slide and transfected with HA-STUB1 plasmids for 48 h. Next, they were washed, fixed, permeabilized, and blocked, as previously reported. [26] The primary antibodies anti-HA tag, anti-YTHDF2, and anti-HSP90 were used to bind the specific proteins. Secondary antibodies were used to link the primary antibodies. 4′,6-diamidino-2-phenylindole (DAPI, Abcam, #ab104139)-containing resin was used for mounting and nuclear visualization. A confocal microscope (Leica TCS SP8) was used to capture the fluorescent images.
Proximity Ligation Assay (PLA): The PLA assay was performed using the Duolink In Situ Orange Starter Kit Mouse/Rabbit (DUO92102, Sigma-Aldrich) in HCC cells according to the standard technique. In brief, HCC cells cultured in glass-bottom culture dishes were washed with phosphate-buffered saline (PBS) solution, fixed with paraformaldehyde for 15 min, permeabilized with 0.5% Triton X-100 for 10 min, and then subjected to blocking for 1 h, primary antibody incubation at 4 °C overnight, Duolink PLA probe (PLUS and MINUS) incubation for 1 h, ligation reaction for 30 min, and PCR amplification for 100 min, and finally imaged under a confocal microscope after the final wash by adding Duolink in situ mounting medium containing DAPI. The primary antibodies applied in this assay included anti-HSP90 (YM0342, Immunoway; ab203085, Abcam), anti-STUB1 (sc-133066, Santa Cruz), and anti-YTHDF2 (ab246514, Abcam).
Cell Proliferation Assays: HCC cell proliferation was assessed by cell viability and clonogenic assays as previously described. [27] The MTS Kit (Promega, Peking, China, #G5421) was used for the viability assay. After reaching an exponentially growing phase, HCC cells were trypsinized, counted, and plated at 2000-2500 cells per well in a 96-well plate for 24, 48, 72, 96, and 120 h. At each time point, MTS reagent (20 μL per well) was added directly to each well in the dark and incubated for another 2 h at 37 °C. The absorbance at 490 nm was determined using a microplate reader.
For the clonogenic assay, HCC cells were plated in a 6-well plate (inner diameter, 35 mm) after 48 h of treatment and cultured for 2 more weeks. After being washed with PBS, the cells were fixed with 4% paraformaldehyde and stained with 1% crystal violet. The images were captured after drying. Colonies with a diameter >60 μm under the microscope were included in the analysis.
IHC Assay: 40 cases of paraffin-embedded HCC and adjacent normal tissues were obtained from the discarded material that was utilized for routine laboratory tests at the Department of Pathology, First People's Hospital of Foshan (Foshan, China). The embedded tissues were sectioned according to standard steps. A MaxVision Kit (Maixin Biol) was used for IHC according to the manufacturer's instructions. The primary antibodies included anti-YTHDF2, anti-HSP90, and anti-STUB1. All images were captured and quantified as described previously. [24]

Animal Study: 32 male BALB/c nude mice (5 weeks old) were obtained from Charles River Laboratories (Beijing, China). All animals were housed in a specific pathogen-free environment with a temperature of 22 ± 1 °C, a relative humidity of 50 ± 1%, and a light/dark cycle of 12/12 h. All animal studies (including the mouse euthanasia procedure) were performed after approval by the Guangzhou Medical University institutional animal care and use committee (ethics approval number: GY2018-043), in compliance with the regulations and guidelines of the committee, and conducted according to the ARRIVE guidelines.
For Figure 5, HepG2 cells (5 × 10⁶ cells in 100 μL PBS/mouse) stably expressing 6×His-YTHDF2 or control plasmids, in the presence or absence of HSP90 shRNAs or HA-STUB1 plasmids, were subcutaneously inoculated into BALB/c nude mice for 3 weeks (n = 8 per group). For Figure 6, mice were randomly divided into four groups (n = 8 per group) after being subcutaneously inoculated with HepG2 cells (5 × 10⁶ cells in 100 μL PBS/mouse): NVP, sorafenib, NVP+sorafenib, and vehicle groups. Mice treated with NVP received i.p. 25 mg kg⁻¹/2 days; mice treated with sorafenib received p.o. 20 mg kg⁻¹/2 days. The NVP+sorafenib group was first treated with NVP and then with sorafenib. All mice were treated for 3 weeks, after which they were sacrificed by cervical dislocation. Tumor size/weight and body weight were calculated as reported previously. [24,28]

Statistical Analysis: Data were presented as mean and standard deviation (SD) from three independent repeats. Paired/unpaired Student's t-tests or one-way analysis of variance were conducted to determine statistical probabilities where appropriate. SPSS 16.0 and GraphPad Prism 7.0 were used to perform the statistical analysis. p < 0.05 indicated a statistically significant difference.
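As a minimal illustration of how such comparisons can be carried out (the numbers below are hypothetical placeholders, not data from this study), a one-way ANOVA across the four xenograft treatment groups and an unpaired t-test for a single pairwise contrast can be run as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical end-point tumor weights (g) for the four treatment groups
# (n = 8 per group); values are illustrative only.
vehicle   = np.array([1.10, 1.25, 0.98, 1.30, 1.15, 1.05, 1.22, 1.18])
nvp       = np.array([0.80, 0.75, 0.92, 0.70, 0.85, 0.78, 0.88, 0.74])
sorafenib = np.array([0.70, 0.82, 0.65, 0.77, 0.72, 0.69, 0.80, 0.75])
combo     = np.array([0.40, 0.35, 0.48, 0.42, 0.38, 0.45, 0.36, 0.41])

# One-way ANOVA across all groups, then an unpaired t-test for the pairwise
# comparison of interest (combination vs sorafenib alone); alpha = 0.05.
f_stat, p_anova = stats.f_oneway(vehicle, nvp, sorafenib, combo)
t_stat, p_pair = stats.ttest_ind(combo, sorafenib)
print(f"ANOVA p = {p_anova:.3g}; combo vs sorafenib p = {p_pair:.3g}")
```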
Figure 1.
Figure 1. The ubiquitination level of YTHDF2 is downregulated and predicts poor outcomes in HCC. A) Analysis of YTHDF2 mRNA expression in HCC tissues based on cancer stages and tumor grades by analyzing the TCGA and UALCAN databases. *p < 0.05, *** p < 0.001, **** p < 0.0001. B) Kaplan-Meier curves from HCC patients expressing low and high YTHDF2 from the tissue microarray. Overall survival and relapse-free survival data are shown. C) YTHDF2 in lysates from the fresh HCC and adjacent normal tissues analyzed by Western blot. GAPDH was used as an internal control. D) Quantification of YTHDF2 in (C). Data were analyzed with paired t-tests. E) Co-IP/Western blot assays in lysates from the fresh HCC tissues and adjacent normal tissues were performed to determine the levels of ubiquitinated-YTHDF2 using YTHDF2 antibodies. F) Quantification of ubiquitinated-YTHDF2 levels in (E). Ubiquitinated-YTHDF2 was calculated with (ubiquitin density)/(YTHDF2 density) from the Co-IP/Western blot assays. Data were analyzed with paired t-tests.
Figure 2.
Figure 2. YTHDF2 physically interacts with HSP90 and STUB1 in HCC. A) Co-IP/Western blot assays were performed using YTHDF2 antibodies in lysates from HCC cells treated with MG132 (10 × 10⁻⁶ m) for 8 h, subjected to the immunoblotting for ubiquitin (Ub) and YTHDF2. B) Co-IP assay was performed in HepG2 and Hep3B cell lysates using YTHDF2 or IgG control antibodies. The Co-IP products were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) separation, silver staining, and biological mass spectrometry (LC-MS/MS analysis). C) The peptide numbers and coverage of YTHDF2 and HSP90 from the LC-MS/MS analysis. D-F) Co-IP assay was performed in HepG2 and Hep3B cell lysates using YTHDF2, STUB1, HSP90, or IgG control antibodies, followed by immunoblotting for YTHDF2, STUB1, and HSP90. G,H) The HA-labeled STUB1 plasmids were transfected in HepG2 and Hep3B for 48 h. IF assay/confocal microscopy was further performed to observe the subcellular location of YTHDF2, STUB1, and HSP90. Scale bar, 10 μm. I) PLA assay was performed in HepG2 and Hep3B cells using STUB1, HSP90, and YTHDF2 antibodies. The orange point represents positive interaction. Scale bar, 25 μm. J) The full length and diverse truncated mutants of HSP90 with FLAG-tag were constructed. Linear models were shown. K) Diverse truncated mutants of HSP90 were transfected in HEK293T cells with HA-STUB1 and 6×His-YTHDF2 plasmids for 48 h. Co-IP assay was performed using HA antibodies, followed by immunoblotting for FLAG and HA. L) Truncated mutants of 6×His-YTHDF2 were transfected in HEK293T cells with FLAG-HSP90 plasmids for 48 h. Co-IP assay was performed using FLAG antibodies, followed by immunoblotting for FLAG and His.
Figure 3.
Figure 3. Protein level of YTHDF2 is regulated by HSP90 and STUB1 in HCC. A) Western blot assay for YTHDF2 and STUB1 in HepG2 and Hep3B cells exposed to STUB1 siRNAs or control siRNAs for 72 h. Quantification was shown below the images. B) Western blot assay for YTHDF2 and STUB1 in HepG2 cells exposed to HA-STUB1 or control plasmids for 48 h. Quantification was shown below the images. C) Western blot assay for YTHDF2 in HepG2 and Hep3B cells exposed to NVP-AUY922 (NVP) for 48 h. Quantification was shown below the images. D) Western blot assay for YTHDF2 and STUB1 in HepG2 cells exposed to STUB1 or control siRNAs for 48 h, followed by cycloheximide treatment (CHX, 100 μg mL⁻¹) for 12, 24, and 36 h. Quantification was shown on the right. E) Western blot assay for YTHDF2 was performed in HepG2 cells exposed to NVP-AUY922 (0.5 × 10⁻⁶ m) for 24 h, followed by the treatment of CHX for 12, 24, and 36 h. Quantification was shown on the right. F) RT-qPCR assays for YTHDF2 and STUB1 were performed in HepG2 cells exposed to STUB1 siRNAs or control siRNAs for 36 h. G) RT-qPCR assays for YTHDF2 were performed in HepG2 cells exposed to NVP for 12 h. H) Western blot assay for YTHDF2 in HepG2 cells exposed to NVP for 24 h, followed by bortezomib (BTZ) treatment for 24 h. Quantification was shown on the lower side. I) Western blot assay for YTHDF2 and STUB1 in HepG2 cells exposed to HA-STUB1 or control plasmids for 24 h, followed by the treatment of bortezomib (BTZ) for 24 h. Quantification was shown on the lower side. *p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001, ns represents not significant.
Figure 4.
Figure 4. Ubiquitination level of YTHDF2 is controlled by the balance of HSP90 and STUB1. A) Co-IP assays were performed using YTHDF2 antibodies in lysates from HepG2 cells exposed to STUB1 siRNAs or control siRNAs for 48 h, subjected to the immunoblotting for ubiquitin (Ub), K48-linked ubiquitin (K48-Ub), STUB1, and YTHDF2. MG132 was used to treat the cells for 8 h before harvest. B) Co-IP assays were performed using YTHDF2 antibodies in lysates from HepG2 cells exposed to NVP or vehicle control in the presence of MG132 for 8 h, subjected to the immunoblotting for Ub, K48-Ub, and YTHDF2. C) Co-IP assays were performed using YTHDF2 antibodies in lysates from HepG2 cells transfected with FLAG-HSP90 or control plasmids, subjected to the immunoblotting for Ub, K48-Ub, YTHDF2, HSP90, and STUB1. MG132 was used to treat the cells for 8 h before harvest. D) Co-IP assays were performed using His-tag antibodies in lysates from HEK293T cells transfected with 6×His-YTHDF2 and Myc-Ub plasmids, with or without the transfection of FLAG-HSP90 or HA-STUB1 plasmids for 48 h, subjected to the immunoblotting for Myc-tag and His-tag. MG132 was used to treat the cells for 8 h before harvest. E) Co-IP assays were performed using His-tag antibodies in lysates from HEK293T cells transfected with various Lys-mutant types of 6×His-YTHDF2 and Myc-Ub plasmids for 48 h, subjected to the immunoblotting for Myc-tag and His-tag. MG132 was used to treat the cells for 8 h before harvest. F) In vitro ubiquitination assay was performed using the ubiquitinylation kit and specific purified proteins as indicated.
Figure 7.
Figure 7. Clinical relationship of YTHDF2, HSP90, and STUB1 in HCC tissues. A) IHC assay was performed in paraffin-embedded HCC tissues and adjacent normal tissues using YTHDF2, HSP90, or STUB1 antibodies. Representative images were shown at 400×. Scale bar, 50 μm. B-D) Quantification of YTHDF2, HSP90, and STUB1 in (A) was shown. Data were analyzed with paired t-tests. E-G) Correlation analysis of YTHDF2 with HSP90 or STUB1 protein levels, and STUB1 with HSP90 protein levels, based on (B) using the Pearson r assay. Tumor tissues and adjacent normal tissues were included in the statistics.
Figure 8.
Figure 8. Overall survival analysis of HCC patients using Kaplan-Meier curves. A,B) Analysis of HSP90 mRNA expression in HCC tissues based on cancer stages and tumor grades by analyzing the TCGA and UALCAN databases. *p < 0.05, & p < 0.01, &&& p < 0.0001. C,D) Kaplan-Meier curves from HCC patients expressing low and high HSP90/STUB1 from the tissue microarray. Overall survival data were shown. E,F) Kaplan-Meier curves from HCC patients (in stage 2+3 or stage 3+4) expressing low and high STUB1 from the tissue microarray. G) A proposed model of HSP90/STUB1 in the regulation of YTHDF2 in HCC. *p < 0.05, ** p < 0.01, **** p < 0.0001. | 9,601.2 | 2023-07-28T00:00:00.000 | [
"Biology"
] |
Relativistic finite-difference time-domain analysis of high-speed moving metamaterials
In this paper, we apply a relativistic finite-difference time-domain (FDTD) method by using the Lorentz transformation to analyze metamaterials moving at a high speed. As an example, we consider a slab of left-handed metamaterial (LHM) with both relative permittivity and permeability equal to −1. Simulation results show that when the LHM slab moves at a high speed, its electromagnetic responses are drastically different from the static case. Specifically, when the LHM slab moves toward the source, for the case of normal incidence, there exists a special velocity at which fields experience a zero spatial phase delay through the LHM slab; while for oblique incidence, above a certain velocity the fields inside the LHM become evanescent. On the other hand, when the LHM slab moves away from the source, for the case of normal incidence, at the same special velocity the magnitudes of both the electric and magnetic fields inside the LHM slab reach their minimum values; for oblique incidence, the slab functions as a field converter. In addition, the transmitted waves through the LHM slab experience a red shift (to a lower frequency), and the shift is proportional to the velocity of the LHM slab regardless of the moving direction.
Metamaterials are defined as artificial periodic structures which possess extraordinary and desirable electromagnetic properties that have not been found in naturally occurring materials 1 . Typical applications of metamaterials include superresolution imaging 2,3 , cloaking 4 , perfect wave absorption 5 , and subwavelength image magnification 6,7 etc. The early development of metamaterials has been focused on the realization of three-dimensional (3-D) structures 8 . However, due to their nature of being lossy, having a narrow bandwidth, and difficulty in fabrication, practical applications of 3-D metamaterials are very limited. For these reasons, more recently, extensive efforts have been put into the analysis and design of metamaterial structures with reduced dimensions, i.e. two-dimensional (2-D) metamaterials, referred to as metasurfaces 9,10 . These thin structures have been shown to possess exceptional abilities to control electromagnetic waves, allow cost-effective fabrications, and may hold promises for future applications in imaging, sensing, and quantum information processing etc.
To date, most research has been conducted on static metamaterial structures, whose material parameters are time invariant and whose spatial locations remain unchanged. For dynamic metamaterials, initial analyses of time-gradient metasurfaces have revealed interesting properties such as nonreciprocity and frequency tunability 11,12. Nonetheless, the second aspect of dynamic metamaterials, namely structures that move in space, has so far not been addressed in the literature.
In the study of metamaterials, both analytical methods and numerical techniques have been extensively used. The finite-difference time-domain (FDTD) method is especially popular due to its simplicity and capability of handling inhomogeneous, anisotropic, nonlinear, and frequency dispersive materials 13 . In this paper, we develop a numerical model by combining the FDTD method and the Lorentz transformation, for modeling metamaterials moving at a high speed. As an example, we consider a special type of metamaterial-the left-handed metamaterial (LHM) with both relative permittivity and permeability equal to −1. Simulation results show that electromagnetic responses of fast moving LHMs are drastically different from the static case, and many interesting properties can be obtained when the LHM moves at a high speed.
Methods
In general there are two types of relativistic FDTD methods: applying the relativistic boundary condition (RBC) [14][15][16], and applying the Lorentz transformation [17][18][19]. Each method has its advantages and drawbacks; in this work the Lorentz-transformation approach is adopted. In the field transformation equations (1)-(4), γ = 1/√(1 − v²/c²) is the Lorentz factor, c is the speed of light in free space, and v̂ is the velocity unit vector. These transformations allow the calculation of the field components in the rest frame, K′. However, in FDTD, in order to analyze field scattering from moving objects in the laboratory frame K, it is necessary to solve these equations for the B, E, D, and H components using their counterparts in the rest frame K′. The relevant equations are provided in the next section for the 1-D and 2-D cases. The FDTD method was first proposed by Yee in 1966 21. The method is based on an iterative process which obtains the electromagnetic fields throughout the computational domain at a given time step in terms of the fields at previous time steps, using a set of updating equations 13. The typical discretization scheme forms a dual electric-magnetic field grid, with electric and magnetic cells spatially and temporally offset from each other. The main advantages of the FDTD method are its simplicity, effectiveness, and accuracy, as well as its capability of handling frequency-dispersive and anisotropic materials.

Figure 1. The FDTD simulation domain for modelling metamaterials moving at a velocity v = +v ŷ, containing a laboratory frame, K, which is assumed to be static, and a rest frame, K′, which moves in the same direction and at the same velocity as the metamaterial.

Although the FDTD method has been extensively used to model various types of materials including metamaterials [22][23][24], its applications to analyzing electromagnetic responses from moving objects are still limited [14][15][16][17][18][19]25. To the best of the authors' knowledge, the FDTD method has not been used to analyze high-speed moving metamaterials, which behave considerably differently from the static case, as will be shown later in this paper. Besides, based on our proposed method, other types of metamaterials, such as metasurfaces 9,10, can be conveniently analyzed in a straightforward manner when moving at a high speed.
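For reference, under the usual conventions the Lorentz transformations of the electromagnetic field quantities between the laboratory frame K and the rest frame K′ (the standard relations to which the field transformation equations (1)-(4) correspond; the exact notation in the original may differ) can be written as

$$\mathbf{E}' = \gamma\,(\mathbf{E} + \mathbf{v}\times\mathbf{B}) - (\gamma - 1)(\mathbf{E}\cdot\hat{\mathbf{v}})\hat{\mathbf{v}}, \qquad \mathbf{B}' = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right) - (\gamma - 1)(\mathbf{B}\cdot\hat{\mathbf{v}})\hat{\mathbf{v}},$$

$$\mathbf{D}' = \gamma\left(\mathbf{D} + \frac{\mathbf{v}\times\mathbf{H}}{c^{2}}\right) - (\gamma - 1)(\mathbf{D}\cdot\hat{\mathbf{v}})\hat{\mathbf{v}}, \qquad \mathbf{H}' = \gamma\,(\mathbf{H} - \mathbf{v}\times\mathbf{D}) - (\gamma - 1)(\mathbf{H}\cdot\hat{\mathbf{v}})\hat{\mathbf{v}},$$

with $\gamma = 1/\sqrt{1 - v^{2}/c^{2}}$.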
Due to their frequency-dispersive nature, metamaterials can be modelled as an effective medium and are usually characterized by the Drude dispersion model, Eq. (5), where ω′ is the angular frequency in the rest frame, and ωp and γ are the plasma and collision frequencies, respectively. Here we assume both the permittivity and permeability to have the same dispersion form, which is suited to modeling the isotropic LHM with εr = μr = −1. In order to model frequency-dispersive materials using the FDTD method, the auxiliary differential equation (ADE) method is applied, which is based on Faraday's and Ampere's laws, Eqs (6) and (7), as well as the constitutive relations D′ = ε0εr E′ and B′ = μ0μr H′, where ε0 and μ0 are the free-space permittivity and permeability, respectively, and εr and μr are expressed by (5). Note that in the above and following equations in this section, all field components, the FDTD cell size, and the discrete time step refer to those in the rest frame, since the LHM remains static in that frame and the ADE dispersive FDTD method is applied there. Equations (6) and (7) can be discretized following a standard procedure 13,21, which leads to the conventional FDTD updating equations (8) and (9), where ∇̃ is the discrete curl operator, Δt′ is the discrete FDTD time step, and n is the number of time steps. In addition, the auxiliary differential equations have to be taken into account, and they can be discretised through the following steps. Starting from the constitutive relation between D′ and E′, Eq. (10), and applying the inverse Fourier transform with the standard frequency-to-time substitution rules, Eq. (10) can be rewritten in the time domain as Eq. (11). The FDTD simulation domain is represented by an equally spaced 3-D grid with periods Δx′, Δy′ and Δz′ along the x′-, y′- and z′-directions, respectively. Following a standard discretization procedure 13, Eq. (11) can be approximated in discrete form, from which the updating equation for E′ in terms of E′ and D′ at previous time steps follows as Eq. (13). The updating equation for H′, Eq. (14), has the same form as (13) with E′, D′ and ε0 replaced by H′, B′ and μ0, respectively. Equations (8), (9), (13) and (14) form the FDTD updating equation set for LHMs in the rest frame. The field components in the laboratory frame can then be calculated by solving Eqs (1) and (2) for B and E, and Eqs (3) and (4) for D and H, respectively, at every time iteration.
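As a concrete illustration of a dispersive FDTD update in the rest frame, the following is a minimal 1-D sketch for a Drude slab with identical electric and magnetic dispersion. It is only a sketch under assumed conventions (an Ez/Hy field pair, current-based ADE with polarization current Jz and magnetization current Ky, reflecting domain ends, and placeholder grid and source parameters); it models the same Drude dispersion as Eq. (5) but is not the authors' D′/B′-based discretization.

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7 * np.pi

nx, n_steps = 400, 1200
dx = 5.0e-3                              # rest-frame cell size (m), placeholder
dt = 0.99 * dx / c0                      # time step within the 1-D CFL limit
f0 = 1e9                                 # source frequency (Hz)
wp = np.sqrt(2.0) * 2.0 * np.pi * f0     # plasma frequency giving eps_r = mu_r = -1 at f0
gamma_c = 0.0                            # collision frequency (lossless sketch)

slab = np.zeros(nx, dtype=bool)
slab[200:300] = True                     # cells occupied by the LHM slab

Ez = np.zeros(nx); Hy = np.zeros(nx)
Jz = np.zeros(nx); Ky = np.zeros(nx)     # ADE polarization / magnetization currents

a = (1.0 - gamma_c * dt / 2.0) / (1.0 + gamma_c * dt / 2.0)
b = dt / (1.0 + gamma_c * dt / 2.0)

for n in range(n_steps):
    # Magnetization current and H update (Drude mu applied inside the slab only).
    Ky[slab] = a * Ky[slab] + b * mu0 * wp**2 * Hy[slab]
    Hy[:-1] += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1]) - dt / mu0 * Ky[:-1]
    # Polarization current and E update (Drude eps applied inside the slab only).
    Jz[slab] = a * Jz[slab] + b * eps0 * wp**2 * Ez[slab]
    Ez[1:-1] += dt / (eps0 * dx) * (Hy[1:-1] - Hy[:-2]) - dt / eps0 * Jz[1:-1]
    # Soft sinusoidal source near the left end of the domain (no ABCs for brevity).
    Ez[20] += np.sin(2.0 * np.pi * f0 * n * dt)

print("peak |Ez| inside the slab:", float(np.abs(Ez[slab]).max()))
```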
Results and Discussion
In our analysis, we model an LHM slab with thickness equal to L. Its front and back interfaces are parallel to each other and perpendicular to the y-direction, as shown in Fig. 1. Assume that the slab is moving at a constant speed along the y-direction, i.e. v = v ŷ. To allow the LHM slab to be relatively static, the rest frame also moves at the same speed v along the y-direction. The reason for aligning the direction of movement with the coordinate axis is that the Lorentz transformation can be significantly simplified. For more general cases with arbitrary moving directions, the equations with additional terms can be derived in a similar manner.
Implementation in one dimension and validation of the Lorentz-FDTD method.
For the 1-D implementation of the Lorentz-FDTD method, plane-wave incidence is assumed, with only the E′z, D′z, H′x, and B′x components being non-zero. In the FDTD domain, the slab is infinite in both the x- and z-directions. Under the 1-D assumption and v = v ŷ, the Lorentz transformation equations (1)-(4) reduce to Eqs (15)-(18). Solving for the field components in the laboratory frame gives Eqs (19)-(22). Notice that these equations can also be obtained simply by replacing the primed field components by their unprimed counterparts and vice versa, and letting v → −v, since the Lorentz transformation and its inverse transformation have the same form. Figure 2 shows the 1-D FDTD simulation domains for the rest and laboratory frames, as well as an incident-field domain for the implementation of the total-field scattered-field (TFSF) method. The TFSF method is used to introduce the incident wave at the boundary between the scattered-field and total-field regions (see Fig. 2), and only allows the wave to propagate toward the right side of the domain. The excitation source is introduced in the incident domain, and the arrows with dotted lines indicate that the incident-field components E_z^inc and H_x^inc (calculated from FDTD simulations) are then introduced into the rest frame. Absorbing boundary conditions (ABCs) are applied to terminate the FDTD domain for both the incident domain and the rest frame. However, for the laboratory frame, no ABCs are required and all field values are directly calculated by Eqs (19)-(22) at every FDTD time step. Two observation points are located in the laboratory frame and are assumed to be static. The distances from the left observation point to the TFSF boundary, and from the right boundary of the slab to the right observation point, are both sufficiently long, considering the cases of the slab moving in both directions, to ensure that only the scattered (reflected) and/or transmitted fields are recorded. The incident domain and the rest frame move at the same constant velocity, and due to such movement, a mapping of FDTD grids between the rest and laboratory frames is necessary when applying Eqs (19)-(22). Figure 3 shows the grid offset caused by the movement. In order to ensure accuracy in the FDTD simulations, the field components in the laboratory frame are calculated using a weighted average of the adjacent components in the rest frame. For example, when calculating the Ez component, a weighted average is used in which d1, d2, d3, and d4 are the offset distances between the Ez component at location i in the laboratory frame and the two nearest E′z components at locations i′ and i′ − 1 in the rest frame, and the two nearest B′x components at locations i′ and i′ − 1 in the rest frame, as defined in Fig. 3.
The equations to calculate other field components in the laboratory frame can be derived similarly.
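A compact way to see what these per-time-step conversions involve is sketched below. The signs follow from the standard Lorentz field transformations specialized to the transverse components (Ez, Bx, Dz, Hx) with a boost v = v ŷ, and the interpolation is a simplified linear reading of the offset-weighted averaging described above; the paper's Eqs (15)-(22) and its exact weighting may differ in notation.

```python
import numpy as np

c0 = 299792458.0

def rest_to_lab(Ez_p, Bx_p, Dz_p, Hx_p, v):
    """Rest-frame -> laboratory-frame conversion of the transverse components for a
    boost v = v*yhat (standard-form relations assumed, applied point by point)."""
    g = 1.0 / np.sqrt(1.0 - (v / c0) ** 2)
    Ez = g * (Ez_p + v * Bx_p)
    Bx = g * (Bx_p + v * Ez_p / c0 ** 2)
    Dz = g * (Dz_p + v * Hx_p / c0 ** 2)
    Hx = g * (Hx_p + v * Dz_p)
    return Ez, Bx, Dz, Hx

def sample_rest_grid(F_rest, y_lab, y0_rest, dy):
    """Linearly interpolate a rest-frame array F_rest at the laboratory-frame position
    y_lab, given the current origin y0_rest of the moving rest-frame grid and its
    spacing dy; a simplified stand-in for the distance-weighted average in the text.
    No bounds checking is done at the array edges."""
    s = (y_lab - y0_rest) / dy          # fractional rest-frame index
    i = int(np.floor(s))
    d = s - i
    return (1.0 - d) * F_rest[i] + d * F_rest[i + 1]

# Example usage with a single hypothetical sample and v = 0.2c:
Ez, Bx, Dz, Hx = rest_to_lab(1.0, 1.0 / c0, 8.85e-12, 2.65e-3, 0.2 * c0)
```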
In addition to the field updating equations in both the rest and laboratory frames introduced in the previous section, the source excitation also needs special treatment. In our analysis, we assume that the metamaterial is moving while the source remains static with respect to the laboratory frame; thus the Lorentz transformation is applied to the excitation function to obtain its form in the rest frame (assuming a sinusoidal excitation applied to the Ez component), where β = v/c and f is the frequency of the sinusoidal wave. It is evident from these equations that the incident wave in the rest frame undergoes a Doppler shift in both amplitude and frequency due to the movement of the rest frame. In order to validate the Lorentz-FDTD method, we first model the slab as a perfect electric conductor (PEC), and compare the frequency shift and amplitude variation due to the movement of the slab calculated from FDTD simulations with the theoretical ones. Both cases of the slab moving away from the source (v = +v ŷ) and moving toward the source (v = −v ŷ) are considered. The source is located at the origin of the rest frame, O′, and the observation point is located on the −y axis of the laboratory frame to allow only the reflected fields from the PEC slab to be recorded. The operating frequency of the incident wave is f = 1 GHz. The FDTD cell size in the rest frame is Δy′ = 5.0 × 10⁻³ m, equivalent to λ/60, where λ is the wavelength at the operating frequency. According to the Courant-Friedrichs-Lewy (CFL) condition 13, the discretized time step is chosen as Δt′ = Δy′/c, where c is the speed of light in free space. Figure 4 shows the reflected waveforms of the electric field from the PEC when it is moving at various velocities, both toward and away from the source.
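The expected reflection from the moving PEC can be estimated from the textbook relativistic mirror result, as in the short check below; the (1 − β)/(1 + β) factor and the sign convention (v < 0 meaning motion toward the source, as in the text) are assumptions of this sketch rather than a quotation of the paper's equations.

```python
def reflected_from_moving_pec(f_inc_hz, beta):
    """Reflected frequency and relative E-field amplitude for normal incidence on a
    perfectly conducting slab moving along the propagation axis.
    beta = v/c, with v < 0 meaning the slab moves toward the source.
    Standard relativistic mirror result assumed: both quantities scale by (1-beta)/(1+beta)."""
    factor = (1.0 - beta) / (1.0 + beta)
    return f_inc_hz * factor, factor

for beta in (-0.2, -0.1, 0.1, 0.2):
    f_r, amp = reflected_from_moving_pec(1.0e9, beta)
    print(f"v = {beta:+.1f}c -> reflected f = {f_r / 1e9:.3f} GHz, amplitude factor = {amp:.3f}")
```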
It can be seen that both the amplitude and frequency of the reflected wave increase when the PEC slab moves toward the source (v < 0), and decrease when the slab moves away (v > 0). The theoretical Doppler shifts in amplitude and frequency can be calculated from the standard moving-mirror relations [20]. The slab is then modelled as an LHM with a thickness of 1.5 m. The spatial distributions of the electric field in both the rest and laboratory frames are plotted in Fig. 5 for the two cases v = −0.2c and v = +0.2c.
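For reference, the theoretical comparison described here can be sketched with the textbook moving-mirror (double Doppler) result; the helper below is illustrative and not the paper's equation:

```python
def pec_reflection_doppler(f, v, c=299792458.0):
    """Theoretical frequency and relative amplitude of a plane wave reflected
    from a PEC mirror moving at velocity v along the propagation direction
    (v > 0: receding).  Textbook moving-mirror result, used here only as a
    sketch of the comparison described in the text."""
    beta = v / c
    factor = (1.0 - beta) / (1.0 + beta)    # double Doppler factor
    return factor * f, factor

# Mirror approaching at 0.2c: both frequency and amplitude grow by 1.5
print(pec_reflection_doppler(1e9, -0.2 * 299792458.0))
```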
The electric field distributions directly observed in the rest frame show that the amplitude remains the same after the wave enters the LHM slab, independent of the slab's moving direction, while the spatial frequency decreases or increases when the slab moves toward or away from the source, respectively. In the laboratory frame, on the other hand, inside the LHM slab the spatial frequency remains the same as that in the rest frame, while the wave amplitude increases for the case of v < 0 and decreases when v > 0. By varying the velocity in simulations, some interesting wave behaviours can be observed. Particularly, when the slab moves toward the source and the velocity gradually increases, backward waves are generated inside the LHM slab. When the velocity is increased further, past a critical value, the backward waves change to forward waves, which cannot be observed inside a static LHM slab. In simulations, this critical velocity is found to be v ≈ −0.3322c. In addition, the spatial frequency of the wave decreases as the velocity gradually increases, and increases again beyond this critical velocity. At the exact critical velocity, we can observe a wave with spatially constant amplitude inside the LHM slab. In other words, waves experience zero spatial phase delay through the LHM slab, as shown in Fig. 6(a). The constant wave amplitude and zero phase delay inside the LHM slab can be explained by the effective permittivity and permeability of the slab as follows. According to the Lorentz transformation for the angular frequency, ω′ = γ(ω − vk) [20], where ω and k are the angular frequency and wave vector in the laboratory frame, respectively, when the LHM slab is moving at various velocities the effective permittivity and permeability in Eq. (5) vary accordingly. Theoretically, when the slab is moving at v = −c/3, both the effective permittivity and permeability are equal to zero, resulting in a slab of zero-index material which introduces zero phase delay to the incident wave [26]. Nonetheless, the slight discrepancy (an error of 0.3%) between the theoretical velocity (v = −c/3) and the numerical value v = −0.3322c obtained from FDTD simulations is mainly due to the numerical dispersion effect, and can be further reduced when a finer FDTD mesh is used. However, this may result in an excessive requirement for computational time and computer memory. Note that the plasma frequency remains unchanged regardless of the moving velocity of the LHM slab [27].
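The zero-index argument can be illustrated numerically. The sketch below assumes a lossless Drude dispersion ε(ω) = μ(ω) = 1 − ωp²/ω², tuned so that ε = μ = −1 at 1 GHz, as a stand-in for the paper's Eq. (5); it evaluates the dispersion at the Doppler-shifted rest-frame frequency and scans for the velocity at which it crosses zero.

```python
import numpy as np

c = 299792458.0
f0 = 1e9                      # operating frequency (1 GHz)
w0 = 2.0 * np.pi * f0
wp = np.sqrt(2.0) * w0        # assumed lossless Drude model: eps(w0) = mu(w0) = -1

def eps_seen_by_slab(v):
    """Drude permittivity evaluated at the Doppler-shifted frequency seen in
    the slab's rest frame when the slab moves at velocity v (v < 0: toward
    the source).  The Drude form eps(w) = 1 - wp^2/w^2 is an assumption
    standing in for the paper's Eq. (5)."""
    beta = v / c
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    w_rest = gamma * (1.0 - beta) * w0
    return 1.0 - (wp / w_rest) ** 2

# Scan velocities toward the source and locate the zero-index condition
vs = np.linspace(-0.4 * c, -0.01 * c, 20001)
v_crit = vs[np.argmin(np.abs([eps_seen_by_slab(v) for v in vs]))]
print(v_crit / c)   # approaches -1/3, the zero-phase-delay velocity
```

Under these assumptions the scan returns approximately −c/3, consistent with the velocity at which the simulated slab shows zero spatial phase delay.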
If the LHM slab moves away from the source, it is observed in simulations that the waves inside the LHM are always backward waves. However, in the laboratory frame the amplitude of the wave inside the LHM varies with the velocity. It is found that there also exists a critical velocity with the same absolute value, i.e., v ≈ +0.3322c, below which the wave amplitude decreases as the velocity increases, and above which the amplitude increases again. At this critical velocity, the amplitude of the waves inside the LHM slab reaches its minimum value, as shown in Fig. 6(b). Similar to the analysis above for v < 0, the minimum amplitude inside the LHM slab in the laboratory frame can also be explained by the effective permittivity and permeability. Applying Maxwell's equations inside the LHM slab in the rest frame to relate the field components, the 1-D Lorentz transformation for the E_z component, Eq. (20), can be rewritten as Eq. (28), with the sign determined by the backward waves inside the LHM. At the exact theoretical velocity, v = +c/3, according to Eqs (5) and (27), the effective material parameters due to the movement of the LHM slab can be calculated as ε_r = μ_r = −3. Substituting these values into Eq. (28) results in E_z = 0, i.e., zero field amplitude inside the LHM slab moving at v = +c/3. Again, the slight discrepancy between the theoretical value of v = +c/3 and the value v = +0.3322c obtained in FDTD simulations is mainly due to the numerical dispersion effect. Note that the above results are obtained when a small amount of loss is used for the LHM (ε_r = μ_r = −1 − 0.001j). When the loss increases, the behaviour of the waves remains the same, while the wave amplitude decreases with propagation distance.
To investigate the frequency-domain characteristics of the transmitted waves, we record the fields both inside the LHM slab in the rest frame and in the free space behind the LHM slab in the laboratory frame. As shown in Fig. 7, due to the movement of the slab, the spectra of the waves observed inside the LHM slab shift to different frequencies, a behaviour similar to that of dielectric slabs.
However, behind the LHM slab the transmitted waves experience a red-shift of frequency when the slab moves in either direction, as shown in Fig. 7(b).
The above results show significantly different behaviours of the transmitted waves and of the waves inside a moving LHM slab for the case of normal incidence. For oblique incidence, it is well known that negative refraction occurs at the interfaces between free space and an LHM slab. Using our proposed Lorentz-FDTD method implemented in 2-D, we also investigate how this behaviour changes when the LHM moves at a high speed.
Two-dimensional implementation. In the 2-D implementation, we consider the transverse electric (TEz) case, with only the in-plane electric field components and the H′_z and B′_z components being non-zero. As in the 1-D case, the slab moves along the y-direction, i.e., v = ±v ŷ. The Lorentz transformation equations from the rest to the laboratory frame can be derived as Eqs (29)–(34). Absorbing boundaries [28] are used to terminate the domain in the y-direction, and periodic boundary conditions (PBCs) are applied in the x-direction to model an infinitely long LHM slab. The thickness of the LHM slab is L = 0.5 m, and the source plane is located at a distance equal to the thickness of the LHM slab, as shown in Fig. 8(a). The line source is defined in Eq. (36), with a phase variation along x′ proportional to k_0 sinθ_i, where k_0 is the free-space wave number and θ_i is the angle of incidence. Note that a correction term, (v/c)cosθ_i, is included in Eq. (36) due to the movement of the slab. The operating frequency is 1 GHz, the same as in the 1-D FDTD simulations in the previous subsection. The FDTD cell size also remains the same, i.e., Δx′ = Δy′ = 5.0 × 10⁻³ m (λ/60). The time step is chosen according to the CFL condition, Δt′ = Δx′/(√2 c) [13]. The TFSF method is implemented for the 2-D case in the rest frame. For the laboratory frame, the fields are directly calculated by Eqs (29)–(34) and no boundary conditions are required. In all cases of oblique incidence, we restrict the angle of incidence to 30 degrees. As clearly shown in Fig. 8(b), when the LHM slab is static, negative refractions occur at the interfaces between the free space and the LHM slab.
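A short sketch of the 2-D numerical choices described above (cell size, CFL time step, and the velocity-corrected oblique source); the functional form of the source is an assumption, since Eq. (36) is not reproduced here.

```python
import numpy as np

c = 299792458.0
f0 = 1e9
dx = 5.0e-3                          # lambda/60 at 1 GHz, as in the paper
dt = dx / (np.sqrt(2.0) * c)         # 2-D CFL limit for the time step

def line_source(x, t, theta_i_deg, v, E0=1.0):
    """Obliquely incident line source sampled along x in the rest frame.
    The (v/c)*cos(theta_i) correction factor mirrors the text's description
    of Eq. (36); the overall functional form here is an assumption."""
    theta = np.radians(theta_i_deg)
    k0 = 2.0 * np.pi * f0 / c
    phase = k0 * np.sin(theta) * x * (1.0 + (v / c) * np.cos(theta))
    return E0 * np.sin(2.0 * np.pi * f0 * t - phase)

# Example: 30-degree incidence with the slab moving away at 0.1c
x = np.arange(0, 200) * dx
print(line_source(x, 0.0, 30.0, 0.1 * c)[:5])
```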
However, when the slab is non-static, the field distributions appear very different, especially when the moving velocity is high. Figure 9 shows the distributions of the magnetic field component H_z in the laboratory frame when the LHM slab is moving toward the line source at two different speeds, v = −0.1c and v = −0.2c. It can be observed from Fig. 9(a) that when v < 0, the LHM slab is no longer matched to free space and reflections occur at the interfaces. The angle of refraction also becomes larger than the angle of incidence (in the case of a static LHM, these two angles are equal), and the fields inside the LHM slab have a larger wavelength compared with the free-space one. When the velocity is increased further from v = −0.1c, the LHM slab becomes more reflective, and more wave components inside the LHM slab become evanescent, so that little or no field can be transmitted through the slab when the velocity is high, as shown in Fig. 9(b). Note that the field distributions in the rest frame are very similar to those shown in Fig. 9, so only the distributions in the laboratory frame are shown.
When the LHM slab is moving away from the line source, we observe a smaller angle of refraction and a smaller wavelength inside the slab. Moreover, the LHM slab remains matched to free space and no apparent reflections occur at the interfaces, as shown in Fig. 10(a) for the case of v = +0.1c. When the velocity increases, both the angle of refraction and the wavelength of the fields inside the LHM slab become even smaller. It can also be observed that as the velocity gradually increases, the magnitudes of the H_z and E_y components inside the LHM slab decrease, while the magnitude of the E_x component increases. When the LHM slab moves at the critical velocity of v = +0.3322c, the H_z component inside the slab retains a very small magnitude, while the magnitude of E_x becomes very high, as shown in Fig. 10(b). After passing through the LHM slab, the magnitudes of H_z, E_x and E_y are restored to their values before the LHM slab, with slight decay due to the loss in the slab; the magnitudes decrease as the amount of loss increases. Thus we can conclude that when the LHM slab moves away from the source at the critical velocity, it functions as a field converter.
Conclusion
In conclusion, we have applied a relativistic FDTD method, combining the Lorentz transformation with an ADE dispersive FDTD method, for analysing metamaterials moving at high speed. As an example, we consider the left-handed metamaterial (LHM) with both relative permittivity and permeability equal to −1. Our results show that when the LHM slab is moving toward a source, the spatial frequency inside the LHM slab decreases, and the spatial frequency increases when the slab is moving away from the source. For the case of normal incidence, it is also found that there exists a special velocity at which fields inside the LHM slab experience zero spatial phase delay when the slab is moving toward the source; when the slab is moving away, the magnitudes of both the electric and magnetic fields inside the slab reach their minimum values. For the case of oblique incidence, the angle of refraction is inversely proportional to the moving velocity, and the fields inside the LHM slab become evanescent when the slab moves toward the source above a certain velocity; when the slab moves away, it functions as a field converter, and the maximum conversion is achieved when the slab moves at the critical velocity. In the present work we have considered the LHM as an example, and the results show considerably different wave behaviours compared with the static case. Using our proposed Lorentz-FDTD method, other types of metamaterials can be readily analysed, such as mu-negative materials (MNG), zero-index materials, and metasurfaces. We anticipate that when these metamaterials move at high velocities, further interesting properties can be discovered. | 5,894 | 2018-05-16T00:00:00.000 | [
"Engineering",
"Physics"
] |
Saturated Fatty Acid Blood Levels and Cardiometabolic Phenotype in Patients with HFpEF: A Secondary Analysis of the Aldo-DHF Trial
Background: Circulating long-chain saturated fatty acids (LCSFAs) and very long-chain saturated fatty acids (VLSFAs) have been differentially linked to the risk of incident heart failure (HF). In patients with heart failure with preserved ejection fraction (HFpEF), associations of blood SFA levels with patient characteristics are unknown. Methods: From the Aldo-DHF-RCT, whole blood SFAs were analyzed at baseline in n = 404 patients using the HS-Omega-3-Index® methodology. Patient characteristics were: age 67 ± 8 years, 53% female, NYHA II/III (87%/13%), ejection fraction ≥50%, E/e′ 7.1 ± 1.5, and median NT-proBNP 158 ng/L (IQR 82–298). Spearman's correlation coefficients and linear regression analyses, using sex and age as covariates, were used to describe associations of blood SFAs with metabolic phenotype, functional capacity, cardiac function, and neurohumoral activation at baseline and after 12-month follow-up (12 mFU). Results: In line with prior data supporting a potential role of de novo lipogenesis-related LCSFAs in the development of HF, we show that baseline blood levels of C14:0 and C16:0 were associated with cardiovascular risk factors and/or lower exercise capacity in patients with HFpEF at baseline/12 mFU. Conversely, the three major circulating VLSFAs, lignoceric acid (C24:0), behenic acid (C22:0), and arachidic acid (C20:0), as well as the LCSFA C18:0, were broadly associated with a lower-risk phenotype, particularly a lower-risk lipid profile. No associations were found between cardiac function and blood SFAs. Conclusions: Blood SFAs were differentially linked to biomarkers and anthropometric markers indicative of a higher-/lower-risk cardiometabolic phenotype in HFpEF patients. Blood SFAs warrant further investigation as prognostic markers in HFpEF. One Sentence Summary: In patients with HFpEF, individual circulating blood SFAs were differentially associated with cardiometabolic phenotype and aerobic capacity.
Introduction
Heart failure (HF) with preserved ejection fraction (HFpEF) is a heterogeneous clinical condition with a number of underlying etiologies [1]. Its prevalence continues to rise in parallel with the aging of the population and/or risk factors such as (central) obesity, type 2 diabetes mellitus (T2D), and hypertension [2]. Obesity-related HFpEF is an important phenotype present in the subgroup of the population with metabolic disorders such as T2D [3], and prognostic outcome in these individuals is largely determined by comprehensive treatment of comorbidities such as low aerobic capacity and/or cardiovascular risk factors [3,4].
Saturated fatty acids (SFAs) are a heterogeneous group of fatty acids that occur in a variety of foods including whole-fat dairy, meat, cocoa, and industrially processed foods [5,6]. They are, furthermore, endogenously synthesized by de novo lipogenesis (DNL) in the liver, a metabolic pathway that converts dietary starch, sugar, protein, and alcohol into fatty acids (FAs), in the presence of nutrient overabundance and/or (hepatic) insulin resistance [7][8][9].
Individual SFAs, as assessed in different lipid compartments in the body such as plasma or erythrocytes, have been differentially linked to incident HF [10][11][12], clinical traits associated with the obesity-related HFpEF phenotype such as T2D and/or atrial fibrillation [13,14], and cardiovascular endpoints [15] in previous analyses. In this regard, higher circulating levels of some SFAs such as palmitic acid (PA, C16:0) have been linked to increased risk of developing T2D [16], to higher risk of incident HF [17], and to higher risk of mortality in patients referred for coronary angiography [15]. Conversely, higher levels of other SFAs, such as the three very long-chain saturated fatty acids (VLSFAs) arachidic acid (AA, C20:0), behenic acid (BA, C22:0), and lignoceric acid (LA, C24:0), have been broadly linked to lower risk of these outcomes [10,13] (all in comparison to lower levels), which overall supports the notion that SFAs are a biologically heterogeneous group of fatty acids.
SFA biomarkers such as red blood cell (RBC) SFAs, and by extension whole blood SFAs, reliably reflect cardiac and other tissue SFA levels over the preceding three months [18,19] (i.e., the balance of intake, endogenous production, distribution volume, and catabolism), independent of subjective memory-based assessment methods such as food-frequency questionnaires [20]. In the latter regard, it is important to acknowledge that overabundance of SFAs such as C16:0, which have been linked to adverse health effects, seems to be related more to endogenous overproduction due to energy excess and/or metabolic conditions such as non-alcoholic fatty liver disease (NAFLD), T2D, and MetS than to dietary uptake [8,16,17].
In patients with HFpEF, the association of blood SFA levels and phenotypic traits is not known. To fill this gap, we report individual SFA whole blood levels in a large cohort comprised of 404 HFpEF patients and associations with cardiometabolic phenotype, functional capacity, echocardiographic markers indicative of left ventricular diastolic function (LVDF), and neurohumoral activation in the framework of the Aldosterone in Diastolic Heart Failure (Aldo-DHF) trial.
Study Design
This is a post hoc analysis of the Aldo-DHF trial (ISRCTN 94726526). We analyzed associations of whole blood SFA levels at baseline with patient characteristics at baseline and after 12 months (12 mFU). From a total of 422 patients enrolled in the Aldo-DHF trial, 18 whole blood aliquots were not available due to loss during storage/transfer or missing blood sampling at baseline.
Aldo-DHF Trial
The Aldo-DHF trial was a multicenter, prospective, randomized, double-blind, and placebo-controlled trial that evaluated the effect of a 12-month aldosterone receptor blockade on diastolic function (E/e') and maximal exercise capacity (VO2peak) in patients with HFpEF. Participants were eligible if they were men and women aged 50 years or older with current HF symptoms consistent with New York Heart Association (NYHA) class II or III, had left ventricular ejection fraction (LVEF) of 50% or greater, had echocardiographic evidence of diastolic dysfunction (grade I) or atrial fibrillation at presentation, and had maximum exercise capacity (VO2peak) of 25 mL/kg/min or less [21]. Exclusion criteria have been published before [21]. In total, 422 patients (mean age 67 (SD, 8) years; 52% female) with evidence of diastolic dysfunction were included. Data acquisition took place between March 2007 and April 2012 at 10 sites in Germany and Austria [21].
Laboratory Measurements
In the Aldo-DHF Trial, venous blood samples were drawn after 20 min of rest in the supine position under standardized conditions. Samples were immediately cooled and processed for storage at −80 °C (−112 °F). N-terminal pro-brain-type natriuretic peptide (NT-proBNP) was analyzed with the Elecsys NT-proBNP immunoassay (Roche Diagnostics) [21].
The process of immediate freezing and storage at −80 °C of the blood samples from the Aldo-DHF trial resulted in stable fatty acid levels [22]. For gas chromatographic analysis of fatty acid composition, 2.0 mL aliquots of frozen (−80 °C) EDTA-blood were shipped to a reference laboratory for fatty acid analyses (Omegametrix, Martinsried, Germany). At Omegametrix, whole blood fatty acid composition was analyzed according to the HS-Omega-3 Index® methodology, as previously described [23]. Fatty acid methyl esters were generated by acid transesterification and were analyzed by gas chromatography using a GC2010 Gas Chromatograph (Shimadzu, Duisburg, Germany) equipped with a SP2560, 100 m column (Supelco, Bellefonte, PA, USA) using hydrogen as carrier gas. Fatty acids were identified by comparison with a standard mixture of fatty acids characteristic of erythrocytes. Individual fatty acid results are given as relative amounts of myristic acid (C14:0), palmitic acid (C16:0), stearic acid (C18:0), arachidic acid (C20:0), behenic acid (C22:0), and lignoceric acid (C24:0), expressed as a percentage of a total of 26 identified fatty acids in whole blood. Analyses were quality-controlled according to DIN ISO 15189.
Echocardiography and Other Variables
In the Aldo-DHF Trial, clinical data were obtained and diagnostic procedures were completed according to predefined standard operating procedures based on international guidelines [21]. Diastolic function on echocardiography was assessed in accordance with American Society of Echocardiography guidelines [24].
Ethics
The Aldo-DHF Trial complied with the Declaration of Helsinki and principles of good clinical practice. The study protocol was approved by the responsible ethics committees (approval code 6/12/06; date 25 February 2007). All participants gave written informed consent prior to any study-related procedures.
Statistical Analysis
Continuous variables are reported as mean ± standard deviation (SD) or median (interquartile range (IQR)), according to their scale and distribution. Categorical variables are presented as absolute and relative frequencies. Spearman's correlation coefficients and multiple linear regression analyses, using sex and age as covariates, were used to describe the association of individual SFAs (C14:0 (myristic acid), C16:0 (palmitic acid), C18:0 (stearic acid), C20:0 (arachidic acid), C22:0 (behenic acid), C24:0 (lignoceric acid)) with cardiometabolic risk markers, echocardiographic markers of left ventricular diastolic function, and neurohumoral activation at baseline and at 12 mFU. To account for the randomization group, all analyses were repeated as a sensitivity analysis with group as a covariate. Further, a principal component analysis (PCA) was conducted to consolidate variables due to the limited sample size. A significance level of α = 5% was used for all tests. All tests were hypothesis-generating without confirmatory interpretation; therefore, no correction was applied to counteract the problem of multiple comparisons. All statistical analyses were performed using IBM SPSS Statistics for Windows, version 25 (IBM Corp., Armonk, NY, USA) as well as R and RStudio, version R-4.2.1 (R Foundation for Statistical Computing, Vienna, Austria).
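For illustration, the baseline associations described here can be computed along the following lines in Python; the file and column names are hypothetical placeholders, not the actual Aldo-DHF variables.

```python
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.formula.api as smf

# Hypothetical per-patient data frame; the column names are illustrative and
# not the actual Aldo-DHF variable names.
df = pd.read_csv("aldo_dhf_baseline.csv")   # columns: c16_0, triglycerides, age, sex

# Unadjusted association: Spearman rank correlation
rho, p = spearmanr(df["c16_0"], df["triglycerides"])

# Age- and sex-adjusted association: multiple linear regression
fit = smf.ols("triglycerides ~ c16_0 + age + C(sex)", data=df).fit()

print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
print(fit.summary().tables[1])   # coefficient for c16_0 with CI and p-value
```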
Very Long Chain Fatty Acids
Higher blood levels of the three very long-chain SFAs, arachidic acid (AA, C20:0), behenic acid (BA, C22:0), and lignoceric acid (LA, C24:0), were all associated with a lower triglycerides-to-HDL-C ratio, lower triglycerides, lower non-HDL-C, and lower LDL-C (the latter only in the linear regression models) at baseline and after 12 mFU.
Sensitivity Analyses
Sensitivity analyses with group as a covariate showed significant effects of group allocation (spironolactone vs. placebo) on the 12 mFU outcomes of systolic/diastolic blood pressure (p < 0.001), heart rate (only for C16:0), E/e′, and HbA1c, but not on the markers reported above, such as blood lipids, liver enzymes, BMI/central adiposity, or aerobic capacity.
Sex-Specific Analyses
All models were adjusted for sex as a covariate; overall, sex had a significant influence in several models but did not shift the results in a specific direction for any fatty acid. In addition, sex-specific analyses within the sex subgroups were completed for all outcomes.
Principal Component Analysis
A PCA was conducted, but the variance explained by the first two principal components (PCs) was relatively low (49.2%). Therefore, these PCs were not used for any further analysis. However, the first and second PCs are shown as a biplot in Figure 4 to visualize the similarities and differences between the predictors used. The PCA shows that there are two variable groups, C14:0 and C16:0 on the one hand, and C18:0, C20:0, C22:0, and C24:0 on the other, each with similar information content within the group. This means that C14:0 and C16:0 contain similar information in the data set, and likewise C18:0, C20:0, C22:0, and C24:0 are similar regarding their information content. The same holds for age and sex.
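A minimal sketch of such a PCA and biplot input, again with illustrative variable names:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical predictor matrix (illustrative column names): the six SFAs
# plus age and sex coded 0/1.
X = pd.read_csv("aldo_dhf_baseline.csv")[
    ["c14_0", "c16_0", "c18_0", "c20_0", "c22_0", "c24_0", "age", "sex"]
]

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))

# Variance explained by PC1 + PC2 (reported as 49.2% in this analysis)
print(pca.explained_variance_ratio_.sum())

# Loadings drawn as arrows in the biplot: variables pointing in similar
# directions (e.g. C14:0/C16:0 vs. C18:0-C24:0) carry similar information.
loadings = pd.DataFrame(pca.components_.T, index=X.columns, columns=["PC1", "PC2"])
print(loadings)
```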
Main Findings
We analyzed the associations of individual whole blood SFA proportions with cardiometabolic risk factors, exercise capacity, echocardiographic markers of left ventricular diastolic function, and neurohumoral activation in the well-phenotyped Aldo-DHF cohort, comprising 404 patients with HFpEF.
Our main finding is that individual blood SFAs, a standardized analytical biomarker for SFA levels with low analytical variability, are differentially related to cardiometabolic phenotype and aerobic capacity, but not to echocardiographic markers of left ventricular diastolic function, in patients with HFpEF. We observed that individual blood SFAs had opposing associations with cardiometabolic phenotype in HFpEF patients: conglomeration 1, comprising C14:0 and C16:0, which are both markers of endogenous fatty acid synthesis in the context of nutrient overabundance, was associated with a higher-risk cardiometabolic phenotype. Conversely, conglomeration 2, comprising the three very long-chain saturated fatty acids (C20:0, C22:0, and C24:0) and the LCSFA C18:0, was associated with a less pronounced risk profile in HFpEF patients. These findings question the use of generalizing umbrella terms such as "saturated fatty acids".
Blood long-chain saturated fatty acids (LCSFAs) reliably reflect cardiac and other tissue SFA levels over the preceding three months (i.e., the balance of intake, endogenous production, distribution volume, and catabolism). Mean whole blood percentages of C14:0 (myristic acid) and C16:0 (palmitic acid), expressed as a percentage of the total of 26 identified FAs in whole blood in the Aldo-DHF cohort, were 0.69% and 24.89%, respectively. According to our literature search, there are no comparable data available on whole blood SFA concentrations in HFpEF patients. In a cohort of patients with HF, Berliner et al. reported whole blood concentrations of C16:0 (22.56 ± 1.76% of total FAs in whole blood) comparable to the levels observed in our HFpEF population [25]. C14:0 was not reported in this analysis [25].
We observed a positive association of the LCSFAs C14:0 and C16:0 with established risk factors for cardiovascular disease and/or HF such as triglycerides and non-HDL-C [26], with HbA1c [3], with the triglycerides-to-HDL-C ratio, a metabolic marker for plaque phenotype (i.e., thin-cap fibroatheromas) [27][28][29], and with anthropometric markers indicative of high cardiometabolic risk such as waist-to-height ratio [30]. The liver enzymes alanine aminotransaminase and γ-glutamyltransferase, which are surrogate markers of NAFLD and MetS, were directly associated with C16:0 [31]. These results are in line with biological plausibility. C16:0 is the most abundant SFA in the human body, accounting for 20%-30% of total FAs [8]. It serves physiological functions related to membrane physical properties, protein palmitoylation, palmitoylethanolamide (PEA) biosynthesis, and efficient pulmonary surfactant activity [8]. C16:0 comes from the diet and is synthesized endogenously via DNL [8]. Its homeostasis is physiologically tightly controlled. However, certain conditions such as the presence of a positive energy balance, excessive intake of carbohydrates (in particular, fructose [32]), a sedentary lifestyle, and medical conditions related to these risk factors such as NAFLD, MetS, and T2DM may strongly induce DNL, resulting in increased tissue content of C16:0 [7,8]. In line with these physiological considerations, sugar-sweetened beverage consumption (i.e., dietary fructose intake) was consistently positively associated with higher concentrations of C16:0 ceramides in the Framingham Offspring Cohort [33].
High levels of C16:0 can disrupt tissue membrane phospholipid balance, which may depend on an optimal ratio of C16:0 with unsaturated fatty acids, especially n-3 and n-6 polyunsaturated fatty acids (PUFAs) [8]. Therefore, aligning with our findings, overaccumulation of C16:0 in tissues [8] and enrichment of C16:0 in very low-density lipoprotein particles (VLDL-P) but not in plasma [34] has been linked to dyslipidemia, pancreatic β-cell dysfunction/hyperglycemia, increased ectopic fat accumulation, and re-emergence of T2D in relapsers in a vicious cycle. Furthermore, C16:0 contributes to systemic and vascular inflammation through dimerization and activation of toll-like receptor (TLR) 2/4 [35]. In our analysis, higher C16:0 was, furthermore, predictive of lower submaximal aerobic capacity, a risk indicator for adverse outcomes in HFpEF, but there are no previous data on C16:0 and functional capacity to the best of our knowledge. C14:0 is another marker of DNL. Aligning with our observation of a positive association of C14:0 and triglycerides, increasing concentrations of C14:0 have been linked to progressive increases in triglycerides and ApoCIII concentrations, a hallmark of inflammatory potential of low-density lipoproteins [36], independently of coronary artery disease (CAD) diagnosis and gender in plasma samples from 1370 subjects with or without angiographically demonstrated CAD [37].
Individual SFAs and Patient
Overall, in line with physiological considerations (i.e., C16:0 as a marker of DNL in the context of nutrient excess) [8] and previous analyses in HF [17] and in ASCVD [15], C16:0, compared to other SFAs, was most consistently associated with a higher-risk cardiometabolic phenotype. Interestingly, in 424 subjects from the PREDIMED randomized dietary trial, participants in the virgin olive oil and nut group showed increased plasma concentrations of C16:0 after completing a 1-year intervention program in the context of stable weight [38]. However, the methodology used (plasma concentrations) does not reflect long-term dietary intake [38] and is, therefore, not directly comparable to the methodology used in our analysis.
SFA Conglomeration 2 (C18:0 and Very Long-Chain Saturated Fatty Acids)
Whole blood very long-chain saturated fatty acids (VLSFAs) are biomarkers of metabolism (i.e., elongation of LCSFAs) and, to a lesser extent, dietary intake. The mean whole blood percentages of C20:0 (arachidic acid), C22:0 (behenic acid), and C24:0 (lignoceric acid), expressed as a percentage of a total of the 26 identified FAs in whole blood in this cohort, were 0.18%, 0.5%, and 0.75%, respectively. The mean whole blood percentage of C18:0 (stearic acid) was 13.87%. According to our literature search, no comparable data are available on whole blood SFA concentrations in HF patients. Berliner et al. reported slightly lower whole blood concentrations of C18:0 (10.80 ± 2.08% of total FAs in whole blood) and C24:0 (0.22 ± 0.10% of total FAs in whole blood) in a cohort of HFrEF patients than the levels observed in our HFpEF population [25]. Notably, C20:0 and C22:0 were not reported by Berliner et al. [25].
Stearic acid (C18:0), the second-most-abundant SFA after C16:0, is derived from the diet (e.g., from full-fat dairy such as cheese and from meat) [5], through elongation from C16:0, and serves as a substrate for the synthesis of VLSFAs that are produced from C18:0 by elongases [13]. Contrary to C14:0 and C16:0, we observed that higher blood levels of the long-chain SFA C18:0 were broadly associated with a more favorable lipid profile (lower triglycerides-to-HDL-C ratio, triglycerides, non-HDL-C, and LDL-C) at baseline and at 12 mFU. Higher whole blood C18:0 was, furthermore, associated with lower liver enzymes (aspartate aminotransaminase and γ-glutamyltransferase) at 12 mFU in the entire cohort, suggesting an overall beneficial association of whole blood C18:0 with biomarkers of the DNL pathway.
In our cohort, the most consistent inverse association of whole blood VLSFAs was observed with lipid risk markers, in particular triglycerides. In line with this finding, Zhao et al. reported that plasma levels of the VLSFAs C20:0, C22:0, and C24:0 were significantly and inversely associated with risk of MetS and individual components of MetS, in particular triglycerides, in 1729 Chinese adults aged 35-59 years [39]. Regarding individual VLSFAs, C20:0 was inversely associated with the triglycerides-to-HDL-C ratio, triglycerides, non-HDL-C, LDL-C, anthropometric markers indicative of (central) adiposity, blood pressure, alanine aminotransaminase, and γ-glutamyltransferase at baseline and 12 mFU and, furthermore, was predictive of higher maximal aerobic capacity (VO2peak) at 12 mFU. The VLSFA C22:0 was inversely associated with the triglycerides-to-HDL-C ratio, triglycerides, and non-HDL-C. The VLSFA C24:0 was inversely associated with the triglycerides-to-HDL-C ratio, triglycerides, non-HDL-C, and γ-glutamyltransferase at baseline and 12 mFU and was predictive of a greater distance covered in the 6MWT at 12 mFU.
No consistent significant associations and/or specific clusters were found between cardiac function and/or neurohumoral activation and blood SFA concentrations in this analysis, consistent with our previous analysis investigating the associations of individual Omega-3 FAs and echocardiographic markers of left ventricular diastolic function [23].
Strengths and Limitations
This analysis has limitations. First, all Aldo-DHF participants were of Caucasian origin, and our findings may, therefore, not be representative of a random population sample or applicable to other ethnicities. Second, FA measurements were done only once, in baseline samples. Proportions of whole blood SFAs may vary over time due to dietary changes, lifestyle changes, or diseases; in other words, they reflect a balance of influx (i.e., dietary factors or metabolic factors such as the DNL pathway) and efflux (i.e., fasting), and nothing is known about the relative determinants driving the blood SFA concentrations in this population. Third, we do not have data on prognosis, i.e., clinical endpoints. Finally, we do not have data on odd-chain SFAs (C15:0 and C17:0), which are markers for, e.g., dairy-fat intake, and which have been linked to improved mitochondrial function [6] and lower risk of T2D [40].
A major strength of our analysis is the precise clinical and metabolic characterization of the Aldo-DHF cohort comprising 404 HFpEF patients (with follow-up data after one year). Furthermore, with a proportion of 53% of the patients included in Aldo-DHF being female, this analysis adequately reflects the gender distribution in HFpEF [1]. Finally, this is the first analysis of whole blood SFA in patients with HFpEF. Whole blood FAs, as a biomarker of FA intake and endogenous production, offer a number of advantages over the assessment of FA intake via subjective methods such as food-frequency questionnaires and measurement of SFA in other lipid compartments such as plasma; regarding the latter, blood is easily accessible, whole blood levels have low biological variability, and the use of the HS-Omega-3 Index ® methodology provides low analytical variability [19]. Furthermore, compared to assessing the relations between FAs and surrogate markers for risk and/or clinical endpoints by subjective memory-based methods, a concept that has been questioned due to the high percentage of implausible data generated [20], measuring biomarkers is more biologically accurate.
Translational Outlook
Prognosis in HFpEF is determined by optimal risk factor control and treatment of comorbidities [4]. We found that individual blood SFAs could be broadly categorized into two main SFA conglomerations, which are differentially linked to biomarkers and anthropometric markers indicative of a higher-/lower-risk cardiometabolic phenotype in HFpEF patients. In line with prior data supporting a potential role of DNL and/or DNL-related LCSFAs in the development of HF [17], we showed that baseline blood levels of C14:0 and C16:0, which are both markers for DNL [8,16], were associated with a more pronounced risk profile in the Aldo-DHF cohort at baseline and after one year. Contrarily, the three major circulating VLSFAs, arachidic acid (C20:0), behenic acid (C22:0), and lignoceric acid (C24:0), as well as the LCSFA C18:0, were broadly associated with a lower-risk HFpEF phenotype. In particular, the association of C16:0 with the higher-risk phenotype and the association of VLSFAs with the lower-risk phenotype warrant further research to explore potentially relevant new risk pathways in HFpEF.
Conclusions
In HFpEF patients, higher blood levels of C14:0 and C16:0 were associated with cardiometabolic risk factors. Contrarily, C18:0 and the three VLSFAs were associated with a lower cardiometabolic risk profile. FA-based biomarkers warrant further investigation as prognostic markers in HFpEF.
Author Contributions: All authors contributed to this manuscript. K.L. and A.D. conceptualized and designed this analysis. K.L. did the literature search and drafted the manuscript. J.S., E.L., M.B., B.L., M.H., R.W., and A.D. critically revised and edited the manuscript. C.v.S. and F.E. supervised the writing process and made major contributions. A.K. and B.H. assisted with the statistical analysis of the data. All authors gave final approval and agree to be accountable for all aspects of their work, ensuring integrity and accuracy. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. | 5,182.8 | 2022-09-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Close limit of grazing black hole collisions: non-spinning holes
Using approximate techniques we study the final moments of the collision of two (individually non-spinning) black holes which inspiral into each other. The approximation is based on treating the whole space-time as a single distorted black hole. We obtain estimates for the radiated energy, angular momentum, and waveforms of the gravitational waves produced in such a collision. The results can be of interest for analyzing the data that will be forthcoming from gravitational wave interferometric detectors, like the LIGO, GEO, LISA, VIRGO and TAMA projects.
I. INTRODUCTION
The phrase "collision of black holes" has an aura of a mysterious and exotic happening that is not far from the reality of such an event. A black hole is not an ordinary object defined by the amount and properties of the material of which it is made. Rather it is a region from which no signal can escape. The surface, the black hole horizon, bounding this region is defined by the formal "no escape" property. Unlike the surface of an ordinary object the horizon has no local properties that would be sensed by an observer with the bad fortune to fall inward through it. A collision of two holes is the process in which two no-escape regions merge to become a single, larger, region of no escape. In the last few years such mergers have become the focus of much research attention, for two not entirely independent reasons.
The first reason is the development of numerical relativity [1]. General relativity, Einstein's theory of gravity, sets the dynamics of space-time via a set of nonlinear partial differential equations of such complexity that analytic solutions have been limited to two classes: solutions of high symmetry, or solutions based on approximation techniques, such as linearized weak field theory. The study of Einstein's equations on computers has been viewed as the key to finding more general asymmetric strong field solutions, and it was natural for this key to be applied to black hole collisions. Black holes are incontrovertibly strong field regions, but single isolated black holes are stationary solutions of Einstein's theory, and the simplifying symmetry of time independence allows for closed-form, well-understood solutions [2]. Collisions of black holes, on the other hand, are necessarily nonstationary as well as being crucially strong-field events. It is known that the collision will result in a single final black hole and in the generation of gravitational waves carrying off some of the mass energy originally associated with the holes. But this is all that is known with certainty. The nature of the merging of the horizons, in the general collision, is not even qualitatively understood.
A reasonably complete understanding awaits progress in numerical relativity, and the wait has been longer than anticipated. The solution of general black hole collisions on computers has proved to be remarkably difficult. There is, however, a class of cases in which reliable answers are available. If the collision is a 'head-on' collision along a straight line, then there is rotational symmetry about the line of the collision. Though the collision is still highly dynamic and nonlinear, the simplifications afforded by this symmetry reduce the computational demands sufficiently that the collision could successfully be simulated even in the mid 1970's, and run with good reliability in the mid 1990's [3]. The simplification of head-on collisions, however, masks some of the physics of the most interesting types of collisions, the fully three dimensional collisions at the end point of the inspiral of a mutually orbiting pair of black holes.
The second development that directed attention to black hole collisions is the advent of sensitive gravitational wave detectors. In the next few years, several interferometric gravitational wave observatories (the LIGO project in the US, the VIRGO and GEO projects in Europe and the TAMA project in Japan [4]) may be capable of detecting gravitational waves. Whether near term searches are successful will depend more than anything else on the strength of astrophysical sources. Attributes of a good generator of gravitational waves include strong gravitational fields and high velocities, so black hole processes are a natural source to consider. It is astrophysically plausible that black holes form binary associations with other objects, including other black holes [5]. Due to the loss of energy by the emission of gravitational radiation, the separation and period of the binary orbits would decrease. If the binary consists of two black holes, the inspiral would end with a rapid strong field merger that has the potential to be a powerful source of detectable gravitational waves [6].
The whole process of inspiral generates gravitational radiation, but in the early large-separation stages the radiation is relatively weak and is reasonably well described by Newtonian gravity theory and Post-Newtonian extensions of it [7]. It is only the final strong field merger that could in principle produce a powerful burst of gravitational waves, but at this point only one parameter of the burst is reliably known. The characteristic frequency of the waves is inversely proportional to the mass of the final black hole formed, and works out to be on the order of 10³ Hz for a 10 M⊙ hole, a typical expected mass of a "stellar" sized hole. For supermassive holes of mass ≥ 10⁶ M⊙ typical of galactic nuclei, the waves would be below 1 Hz. The maximum sensitivity of the next generation of gravitational wave detectors occurs at frequencies around 100 Hz, and the detectors will be ideally suited to waves from a black hole with a mass of several hundred M⊙. Some recent observations [8] offer indirect evidence that black holes in this range may exist. If they do not, then the detection of the collision of black holes may require the deployment of space-based detectors [9] sensitive to the low frequency waves produced by supermassive holes.
The ratio of the masses in a binary determines both how difficult it is to analyze, and how exciting it is as a potential source. If the mass of a black hole M₁ is much larger than the mass of its binary companion M₂, then the smaller mass object can be treated as a perturbation to the well understood spacetime of the larger mass black hole. The equations that describe perturbations are linear, and hence relatively easily dealt with in general. In the specific case of perturbations to black hole spacetimes, the techniques of calculation were worked out in the 1970s and resulted in the Regge-Wheeler and Zerilli equations [10,11] for perturbations of Schwarzschild (nonrotating) black holes, and in the Teukolsky [12] equation for perturbations of Kerr (rotating) black holes. The relatively easily analyzed [13] "particle limit" case M₂ ≪ M₁ may be of interest in connection, say, with neutron stars merging with supermassive black holes, but this process cannot give the hoped-for high power. It is easy to show that the gravitational wave power generated scales in the masses as (M₂/M₁)². High power requires roughly equal masses, and this means the simplifications of the particle limit do not apply to the most interesting sources.
If not directly applicable to equal mass inspiral, the clarity of the particle limit can, at least, help us to formulate questions about the nature of the endpoint of inspiral, like the existence of a last stable circular orbit. As a particle slowly spirals inward around a black hole, it reaches a radius at which it can no longer orbit stably, and it begins a rapid inward plunge. For the inspiral of two roughly equal mass holes it can be imagined that the binary gradually spirals inward or that it reaches a point at which a discontinuous plunge begins. If the late orbits are being degraded rapidly enough by the emission of gravitational radiation, there might not even be any meaning to late stage "stability." This uncertainty about even the qualitative nature of the late stage of the inspiral is related to an important, but totally unresolved, question: How does the inspiraling binary shed enough angular momentum to form a black hole? In a relativist's units in which c = G = 1, a black hole must have a total angular momentum J that is limited by the maximum angular momentum J = M² that a rotating (Kerr) black hole can possess. Until the binary pair is close, its angular momentum will be above this limit, but technical considerations [14] limit the rate at which angular momentum can be shed in gravitational waves at very late stages. If both black holes of the pair are rapidly rotating with angular momentum in the same direction, the shedding appears to present a barrier to the formation of the final single black hole. It is possible that even the qualitative details of the late stage inspiral depend on the angular momentum of the inspiraling binary.
The set of possibilities is considerable and the answers are important both to our understanding of nonlinear gravitational interactions and to an understanding of gravitational wave sources. Real answers will require advances in numerical relativity that will be several years in coming, but interest in the questions justifies approximation methods that can help, even slightly, to close some of the wide open questions. We take such an approach here. We offer an estimate of the gravitational radiation generated during the late stage of inspiral of two black holes. Our method involves a number of assumptions and limitations that constrain its applicability and reliability, but for all its shortcomings it is one step towards a complete understanding.
The approximation method we use, the "close limit," [15] takes advantage of the property of a black hole horizon. Late in the merger of the binary the single horizon of the final black hole engulfs the entire binary. All the complex structure of the binary will be inside that final horizon, and cannot influence spacetime outside the horizon. It is only what is outside the horizon that can generate gravitational waves that can be detected by distant observers. Since the ultimate fate of the merger is a stationary black hole, it follows that sufficiently late in the merger what is outside the hole will be a perturbation of the final stationary hole. Thus the gravitational waves generated during the latest stage of inspiral can be computed using the techniques of perturbations of black hole spacetimes, with the Zerilli, Regge-Wheeler, and Teukolsky equations.
To understand how the close limit method is to be used, it is necessary to consider the general problem addressed by numerical relativity. Einstein's field equations are divided into "initial value" equations and equations of time evolution [16]. The initial value equations determine the nature of spacetime at a chosen initial moment. The solutions of these initial value equations are the initial values for the remaining differential equations of Einstein's theory, the equations that determine the spacetime (including its gravitational wave content) to the future of the initial time. The two tasks of numerical relativity are first to find an initial value solution representing a moment in the life of the colliding holes, and second to find the future spacetime for those initial values. The more computationally difficult task is that of evolving to the future and the codes that accomplish this task tend to be unstable for long time evolutions. For long evolutions to be avoided, the initial value solutions must be chosen to be a moment late in the life of the inspiral. If that moment is late enough, the close limit method can be brought to bear and evolution can be carried out with the stable linearized equations of perturbation theory. But choosing too late a starting moment for evolution creates a new difficulty.
The connection of an initial value solution to a "sensible" physical configuration for the binary is reasonably secure only if the binary pair is well separated. At close separations, the gravitational field of each of the binary holes strongly affects the other hole, and the individual mass, individual angular momentum, and physical separation of the holes lose clear meaning. The problem then requires navigating between the Scylla of numerical instabilities for evolution, and the Charybdis of uncertain initial conditions. By using a very late initial moment and linearized evolution, the close limit method completely avoids the former hazard.
There are reasons beyond speculation to believe that close limit evolutions give useful answers. Numerical relativity results are available for axisymmetric head-on collisions [3]. These represent evolution of a number of initial value solutions, in particular the closed form solution due to Misner [17], containing a single parameter representing the initial separation of equal mass holes in units of the mass of the spacetime (in c = G = 1 units). This separation index defines a parameterized family of initial value solutions, and choices of this parameter can be made corresponding to large or small initial separation. When numerical relativity and close limit results are compared it is seen that agreement is excellent for small initial separations, and is surprisingly good even when the initial configuration is not close enough for a horizon to engulf the entire binary [18,19]. Arguments can also be made that the gravitational waves calculated in the late stage of inspiral are not highly sensitive to details of initial data. Particularly interesting in this regard is work by Abrahams and Cook [20].
In the past several years the close limit method has been extensively studied for head-on collisions of boosted and spinning holes and compared with the results of numerical relativity. Most notably, second order perturbation theory has been developed for the close limit method [21]. In this process of comparison much has been learned about the strengths and limitations of the close limit method, with the goal of applying the method to problems that cannot yet be handled with numerical relativity. The present work represents the first example of this. We report here the results of the application of the close limit method for the three dimensional problem of the late stage inspiral of two black holes.
We will use the close limit method for the initial data families constructed by the Bowen and York [22] method and the associated "punctures" families [23]. It is known that these families possess an artificial radiation content when one considers black holes that are close, but such content is also known to be moderate [24]. An important advantage of these methods is that they are typically the starting point for numerical relativity, and thus close limit evolution of these starting points can be compared with the numerical evolution of the same initial data when such evolutions become available. The most important disadvantage, for our purposes, is that the Bowen-York family does not include the Kerr solution, the solution for a rotating hole. This precludes finding a family of initial value solutions that goes, in the limit of small initial separation, to a Kerr black hole. With Bowen-York initial value solutions, then, we cannot consider a collision that will result in a rapidly rotating hole. Rather, we limit our attention to collisions involving a modest amount of total angular momentum and consider the angular momentum, as well as the initial separation, to be a perturbation of a nonrotating final hole. It should also be mentioned that currently fashionable astrophysical scenarios suggest that the individual holes might not carry a significant amount of spin [5] in realistic black hole collisions.
The organization of this paper is as follows: in the next section we review the method for obtaining the initial data and describe the approximations involved. In the following two sections we discuss how to set up the perturbative formalism geared towards evolution. Since the collisions have net angular momentum we will evolve them both as a perturbation of a rotating and a non-rotating black hole. The comparison of both approaches is given in the subsequent section and we will see that insight is gained by treating the problem in two different ways. We end with a discussion of the results in terms of waveforms and radiated energies and we describe a puzzle in the calculation of the angular momentum radiated.
For the reader who wishes to be spared all the details, we summarize our results in a brief punchline: the final ringdown of the inspiraling collision of two non-spinning black holes is unlikely to radiate more than 1% of the mass of the system or more than 0.1% of its angular momentum in gravitational waves.
II. INITIAL DATA
To evolve a spacetime in general relativity, one needs to provide initial data, a 3-geometry g ab and an extrinsic curvature K ab , that solve Einstein's equations on some starting hypersurface (i.e., at some starting time). For two black holes, this is an easy task if the holes are far apart, since one can superpose the solutions for two individual holes ignoring their interactions. When the black holes are close on the initial hypersurface, the astrophysically correct initial data is the solution corresponding to what would have evolved during the binary inspiral, but such an evolution cannot be computed with present day techniques. One must therefore use a somewhat artificial initial data solution that is a best guess at a representation of close black holes. The need for such a guess is one of the sources of uncertainty in our result.
A. Summary of the Bowen-York construction
The initial value equations for general relativity are
∇_b (K^ab − g^ab K^c_c) = 0,   (1)
³R + (K^a_a)² − K_ab K^ab = 0,   (2)
where g_ab is the spatial metric, K_ab is the extrinsic curvature, and ³R is the scalar curvature of the three-metric. If we propose a 3-metric that is conformally flat, g_ab = φ⁴ ĝ_ab, with ĝ_ab a flat metric and φ⁴ the conformal factor, use a decomposition of the extrinsic curvature K_ab = φ⁻² K̂_ab, and assume maximal slicing, K^a_a = 0, the constraints become
∇̂_b K̂^ab = 0,   (3)
∇̂²φ = −(1/8) φ⁻⁷ K̂_ab K̂^ab,   (4)
where ∇̂ is a flat-space covariant derivative.
To solve the momentum constraint, we start with a solution that represents a single hole with linear momentum P [25],
K̂^one_ab = (3 / (2r²)) [ 2P_(a n_b) − (δ_ab − n_a n_b) P^c n_c ].   (5)
In this expression for the conformally related extrinsic curvature at some point x^a, the quantity n^b is a unit vector, in the "base" flat space with metric ĝ_ab, directed from the point representing the location of the hole to the point x^a. The symbol r represents the distance, in the flat base space, from the location of the hole to x^a. It is straightforward to show that the solution of the Hamiltonian constraint corresponding to Eq. (5) corresponds to a spacetime with ADM momentum P^a. The next step is to modify this to represent holes centered at x = ±L/2 in the conformally flat metric. Since the momentum constraint is linear, we can simply add two expressions of the above form, one centered on each hole, K̂_ab = K̂⁺_ab + K̂⁻_ab. In further expressions we will use a polar coordinate system in the flat space determined by ĝ_ab, centered at the midpoint between the two holes, and label the polar coordinates as (R, θ, φ); R is thus the distance in the flat space from the midpoint between the holes. To solve the Hamiltonian constraint, Eq. (4), we introduce an approximation (the slow approximation), which we will show is sufficient for our purposes. In fact, in this approximation the solution for the conformal factor turns out to be the familiar Misner [17] solution if one chooses the topology of the slice to have a single asymptotically flat region, or the Brill-Lindquist [26] solution if there are three asymptotically flat regions.
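A minimal numerical sketch of this superposition, transcribing Eq. (5) for each hole and adding the two contributions (the variable names and the particular choice of momenta are illustrative only):

```python
import numpy as np

def bowen_york_K(x, center, P):
    """Conformal extrinsic curvature K^hat_ab of a single Bowen-York hole with
    linear momentum P, evaluated at the field point x in the flat conformal
    space (a direct transcription of Eq. (5); names are illustrative)."""
    d = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    r = np.linalg.norm(d)
    n = d / r
    P = np.asarray(P, dtype=float)
    Pn = np.dot(P, n)
    delta = np.eye(3)
    return (3.0 / (2.0 * r**2)) * (
        np.outer(P, n) + np.outer(n, P) - (delta - np.outer(n, n)) * Pn
    )

# Two holes at x = +/- L/2 with opposite momenta along y, superposed by
# linearity of the momentum constraint (a grazing-type configuration).
L = 1.0
P = np.array([0.0, 0.2, 0.0])
x_field = np.array([0.0, 0.0, 3.0])
K_hat = bowen_york_K(x_field, [+L / 2, 0.0, 0.0], +P) \
      + bowen_york_K(x_field, [-L / 2, 0.0, 0.0], -P)
print(K_hat)
```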
B. The slow approximation
We assume that the black holes are initially close, and that the initial momentum P is small. We denote by n₊ and n₋ the normal vectors corresponding, respectively, to the one-hole solutions at x = +L/2 and at x = −L/2, and we define R to be the distance to a field point, in the flat conformal space, from the point midway between the holes. For large R, the normal vectors n₊ and n₋ almost cancel; more specifically, n₊ = −n₋ + O(L/R). A consequence of this is that the total initial K̂_ab is first order in L/R, and its components in the (R, θ, ϕ) coordinate basis can be written out accordingly. This solution for K̂_ab is first order both in P and in L. Thus the source term in the Hamiltonian constraint is quadratic in P. If we choose to find a solution for the conformal factor to first order in P (which should give us a good approximation in the case of slowly moving holes), we can ignore this quadratic source term. So now the Hamiltonian constraint looks like the one for zero momentum, which is simply the Laplace equation. A well known solution to this is the Misner solution [17]. This solution is characterized by a parameter µ₀ which describes the separation of the two throats. We can relate this parameter to the conformal distance L through the standard Misner relations [3] (a numerical sketch is given below). To clarify: in the slow approximation we are considering, the data we use in our simulations consist of the extrinsic curvature proposed by Bowen and York and the conformal factor due to Misner. This might appear odd, since the conformal factor of Misner is "symmetrized" through the throats and the extrinsic curvature due to Bowen and York is not. What we do is not inconsistent; it is just a different (and perhaps, from a certain point of view, less natural) choice of boundary conditions for the fields. In practice, in the close limit and to first order in perturbation theory, the conformal factor of Misner differs from that of Brill and Lindquist by a numerical factor that can be absorbed in the definition of the separation of the holes [27].
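The sketch below evaluates this mapping using the standard Misner expressions for the throat separation and the ADM mass (in units of the Misner scale a); the paper's exact relation from [3] is not quoted, so treat these expressions as an assumption.

```python
import numpy as np

def misner_L_over_M(mu0, nmax=100):
    """Ratio of the conformal separation L to the ADM mass M for Misner data,
    using the standard Misner relations L = 2*a*coth(mu0) and
    M = 4*a*sum_{n>=1} 1/sinh(n*mu0) (in units of the scale a).  This is a
    sketch of the mapping described in the text, not the paper's equation."""
    n = np.arange(1, nmax + 1)
    mass = 4.0 * np.sum(1.0 / np.sinh(n * mu0))
    sep = 2.0 / np.tanh(mu0)
    return sep / mass

# Example: the separation parameter mu0 = 2 gives L/M of roughly 1.6
print(misner_L_over_M(2.0))
```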
Some readers may be disturbed by the slow approximation, since in the computation of certain quantities, for instance the ADM mass, the higher order terms in the expansion in terms of the momentum are crucial. We have already discussed this in detail in previous head-on simulations [28]. The bottom line is that to get an accurate estimate of the ADM mass for high values of the momentum one indeed needs a full solution of the Hamiltonian constraint and not a "slow approximation" solution. For the values of the separations and the momenta we will consider in this paper (a < 0.5), the ADM mass computed with the slow approximation and the one computed with the full solution differ by less than 10%, so we will ignore this difference.
We must now map the coordinates of the initial value solution to the coordinates for the Schwarzschild/Kerr (in the vanishing spin limit) background. To do this, we interpret R as the isotropic radial coordinate of a Schwarzschild spacetime, and we relate it to the usual Schwarzschild radial coordinate r by $R = (\sqrt{r} + \sqrt{r - 2M})^2/4$. From this we arrive at the corresponding expression for the components of the extrinsic curvature in Schwarzschild coordinates.
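The coordinate map above has a simple closed-form inverse, r = R(1 + M/(2R))², which is convenient when translating quantities back and forth. The short sketch below implements both directions with a numerical round-trip check; the sample radii are placeholders.

```python
import numpy as np

def isotropic_from_schwarzschild(r, M=1.0):
    """R = (sqrt(r) + sqrt(r - 2M))^2 / 4, valid for r >= 2M."""
    return (np.sqrt(r) + np.sqrt(r - 2.0 * M)) ** 2 / 4.0

def schwarzschild_from_isotropic(R, M=1.0):
    """Closed-form inverse: r = R (1 + M/(2R))^2."""
    return R * (1.0 + M / (2.0 * R)) ** 2

r = np.linspace(2.1, 50.0, 5)          # placeholder sample of Schwarzschild radii
R = isotropic_from_schwarzschild(r)
print(np.allclose(schwarzschild_from_isotropic(R), r))   # round-trip check: True
```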
III. THE CLOSE LIMIT AS A PERTURBATION OF A SCHWARZSCHILD HOLE
In this paper we will evolve the initial data we just constructed using the perturbative evolution equations for linearized first order perturbations: the Zerilli equation in the case of a Schwarzschild background and the Teukolsky equation in the case of a Kerr background. We need to construct the initial data for these equations in terms of the metric and extrinsic curvature we discussed above. In this section we discuss the setup of initial data and evolution of the problem as a perturbation of a Schwarzschild black hole, using the Zerilli-Regge-Wheeler formalism.
A. Setting up the initial data for the Zerilli function

Given the three-metric and the extrinsic curvature, one can explicitly construct the zeroth and first order terms of a power series expansion in a fiducial time variable t of the space-time metric. From this expression one can read off the appropriate coefficients of the multipolar expansion of the metric in the Regge-Wheeler [10] notation; only a few of these perturbations are nonvanishing at t = 0. We compute the time derivatives of these quantities using the extrinsic curvature $K_{ij}$ obtained in the last section; the nonvanishing ones involve the imaginary unit i. (Here we are using the standard conventions for the spherical harmonics. Notice that the m = 2 and m = −2 perturbations are individually complex, but when they are added to give the total perturbation the resulting function of t, r, θ and φ is real, as of course it must be.) We also have an odd parity contribution. This perturbation represents the difference between the Kerr solution that represents the rotating space-time and the Schwarzschild background used in the perturbative approach. To first order it decouples from all other perturbations, and in fact is unchanging in time, corresponding to the conservation of angular momentum to first order in the perturbations. The change over time in this quantity induced by second order perturbations will be discussed below in connection with the radiation of angular momentum. The Zerilli function is defined in the usual way (see for instance [30]); for t = 0 its time derivative takes the form

$$\dot\psi^{(2,m)}(0,r) = \frac{r(r-2M)}{3(2r+3M)}\left[\dot H_2^{(2,m)}(0,r) - r\,\frac{\partial \dot K^{(2,m)}(0,r)}{\partial r} + 3r\,\frac{\partial \dot G^{(2,m)}}{\partial r}\right].$$

After some simplifications we obtain the initial data $\psi^{(2,2)}$ and $\dot\psi^{(2,2)}$ (eq. (18)); the Zerilli function for (ℓ = 2, m = −2) is the complex conjugate of $\psi^{(2,2)}(t, r)$, and the initial data for the ℓ = 2, m = 0 Zerilli function is obtained in the same way.
B. Evolution of the Zerilli function and computation of physical quantities
Given the Cauchy data from the last section, the time evolution is obtained from the Zerilli equation [30],

$$\frac{\partial^2 \psi^{(\ell,m)}}{\partial t^2} - \frac{\partial^2 \psi^{(\ell,m)}}{\partial r_*^2} + V(r_*)\,\psi^{(\ell,m)} = 0,$$

where $V(r_*)$ is the (m-independent) Zerilli potential,

$$V(r) = \frac{2(r-2M)}{r^4(\lambda r + 3M)^2}\left[\lambda^2(\lambda+1)r^3 + 3\lambda^2 M r^2 + 9\lambda M^2 r + 9M^3\right],$$

with $\lambda = (\ell-1)(\ell+2)/2$ and $r_* = r + 2M\ln(r/2M - 1)$. We need to establish a convenient formula for the radiated energy, similar to that presented in [31] but applied to the non-axisymmetric case. We start from the expression for the radiated energy computed via the Landau-Lifshitz pseudo-tensor, following the notation and derivations of [31]; translating to the Regge-Wheeler notation and integrating over solid angles gives the radiated power, and one obtains the radiated energy by integrating over time. The power naturally comes out in units of the mass of the background spacetime.
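As a concrete illustration of the evolution step, the toy integrator below advances the Zerilli wave equation for ℓ = 2 with a second-order leapfrog scheme on a uniform r* grid, inverting r*(r) with the Lambert-W function. It is only a sketch: the Gaussian pulse, grid extent, and crude fixed boundaries are placeholder assumptions, not the Bowen-York/Misner data constructed above and not the production codes referenced later in the paper.

```python
import numpy as np
from scipy.special import lambertw

M, ell = 1.0, 2
lam = (ell - 1) * (ell + 2) / 2.0

def r_of_rstar(rstar):
    """Invert r* = r + 2M ln(r/2M - 1) via x = W(exp(r*/2M - 1)), r = 2M(1+x)."""
    return 2.0 * M * (1.0 + np.real(lambertw(np.exp(rstar / (2.0 * M) - 1.0))))

def zerilli_potential(r):
    """Standard ell = 2 Zerilli potential (m-independent)."""
    num = (lam**2 * (lam + 1) * r**3 + 3 * lam**2 * M * r**2
           + 9 * lam * M**2 * r + 9 * M**3)
    return 2.0 * (r - 2.0 * M) * num / (r**4 * (lam * r + 3.0 * M) ** 2)

rstar = np.linspace(-60.0, 300.0, 3601)      # uniform tortoise-coordinate grid
dx = rstar[1] - rstar[0]
dt = 0.5 * dx                                 # safely below the CFL limit
V = zerilli_potential(r_of_rstar(rstar))

psi = np.exp(-((rstar - 20.0) ** 2) / 4.0)    # placeholder Gaussian initial data
psi_old = psi.copy()                          # time-symmetric start (psi_dot = 0)

def rhs(p):
    d2 = np.zeros_like(p)
    d2[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    return d2 - V * p

for _ in range(4000):                         # leapfrog; crude fixed boundaries,
    psi_new = 2.0 * psi - psi_old + dt**2 * rhs(psi)   # reflections eventually contaminate
    psi_old, psi = psi, psi_new

print("max |psi| after evolution:", np.abs(psi).max())
```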
To compute the radiated angular momentum one could also start by considering the Landau-Lifshitz pseudo-tensor and construct an asymptotic expression for the angular momentum flux. This approach was pursued, for instance, in [32] to compute expressions for the radiation of angular momentum in terms of multipoles. An alternative approach is to simply compute the change in the angular momentum of the spacetime, which we characterize to linear order in perturbation theory through a function built from the odd-parity metric perturbations ${}^{\rm odd}h_0$ and ${}^{\rm odd}h_1$. This combination is a first order gauge invariant if ${}^{\rm odd}h_0$ and ${}^{\rm odd}h_1$ are first order perturbations. Moreover, for ℓ = 1, m = 0, this gauge invariant is constant, equal to $4\sqrt{3\pi}\,J$, where J is the total angular momentum, if the perturbations are axially symmetric.
If we look at second order perturbations we find

$$\frac{\partial}{\partial t}\left[r^2\,{}^{\rm odd}h_{0,r}(r,t) - 2r\,{}^{\rm odd}h_0(r,t) - r^2\,{}^{\rm odd}h_{1,t}(r,t)\right] = S_{\dot J}, \qquad (26)$$

where $S_{\dot J}$ is a 'source', quadratic in first order perturbations. Therefore the change in angular momentum due to radiation may be obtained by integrating $S_{\dot J}$ over all t (or from t = 0 to t = ∞, it makes no difference), in the limit r → ∞. After several simplifications and cancelling terms that result from integration by parts, we end up with an expression for the radiated angular momentum in terms of the first order Zerilli functions. We have checked by explicit substitution that this form coincides with the results from the flux formulas of Thorne [32]. It reassures our confidence in the consistency of the Regge-Wheeler-Zerilli perturbative formalism to notice that the changes to second order are in accordance with the first order flux.
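To make the "integrate the power over time" step concrete, the sketch below takes a sampled waveform at a large fixed radius and accumulates the radiated energy. The damped-sinusoid waveform is purely illustrative, and the overall prefactor C, which encodes the multipole-dependent normalization of the Zerilli function, is left symbolic because conventions differ between references.

```python
import numpy as np

# Toy ringdown-like waveform sampled at a large fixed radius (illustrative only).
t = np.linspace(0.0, 200.0, 4001)
psi = np.exp(-t / 30.0) * np.sin(0.37 * t)     # stand-in for psi^{(2,2)}(t, r_obs)

dt = t[1] - t[0]
psi_dot = np.gradient(psi, dt)

# dE/dt = C * |psi_dot|^2 summed over m; the constant C depends on the chosen
# normalization of the Zerilli function and is deliberately left unspecified here.
C = 1.0
power = C * np.abs(psi_dot) ** 2
E_rad = np.trapz(power, t)
print("radiated energy (in units fixed by C):", E_rad)
```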
IV. EVOLUTION AS A PERTURBATION OF A KERR BLACK HOLE
To treat the problem as a perturbation of a Kerr black hole we need to set up initial data and evolve the Teukolsky equation. The formalism for setting up initial data in terms of Cauchy metric data was developed in [33]; we only give a brief sketch here and refer the reader to that paper for further details.
The relevant Weyl scalar for gravitational radiation is $\psi_4$, since it is directly related to outgoing gravitational waves. It can be rewritten in terms of hypersurface quantities $g_{ij}$ and $K_{ij}$. For the last term in this expression, we can use the vacuum Einstein equations to eliminate terms that involve time derivatives of $K_{ij}$. Also, we are interested merely in the first order perturbations of this scalar. Putting all this together, the final result for the first order expansion of the Weyl scalar is given in [33], where $N^{(0)} = (g^{tt}_{\rm Kerr})^{-1/2}$ is the zeroth order lapse, $n^i$, $m^j$ are two of the null vectors of the (zeroth order) tetrad, Latin indices run from 1 to 3, and the brackets are computed only to first order (zeroth order excluded).
This expression can also be used to obtain the time derivative of the Weyl scalar: we simply replace the first order quantities above by their time derivatives (which can be obtained via the Einstein equations).
In our treatment, the extrinsic curvature and the metric from the last section are treated as perturbations of the corresponding Kerr hypersurface quantities. Since we attempt calculations only to first order in PL (which we identify with Ma, where M is the mass of the background Kerr black hole and a its angular momentum parameter), the Kerr 3-metric is (in this approximation) conformally flat. Hence we are justified in using the Bowen-York prescription for constructing initial data for the inspiral problem.
A. Initial Data for the Teukolsky function

Using the methodology and expressions we just discussed, the initial data for the Teukolsky function, $\Psi = \rho^{-4}\psi_4$, where $\rho = -1/(r - ia\cos\theta)$, can be written down explicitly for the azimuthal modes m = ±2 and for the azimuthal mode m = 0. Here, R is the Schwarzschild isotropic radial coordinate.
B. Evolution of the Data using the Teukolsky equation
Given the Cauchy data from the last section, the time evolution is obtained from the Teukolsky equation [12], where M is the mass of the black hole, a its angular momentum per unit mass, $\Sigma \equiv r^2 + a^2\cos^2\theta$, and $\Delta \equiv r^2 - 2Mr + a^2$. The radiated energy is given in [34], and the angular momentum carried away by the waves can also be obtained from [34].

FIG. 1. The gravitational wave one obtains from the close limit non-head-on collision of two black holes. Depicted is the "strain amplitude" of the "+" polarization mode in the equatorial plane (assuming the collision has initial angular momentum aligned with the z axis). We chose to depict it in "realistic" units assuming that the binary has a mass of 10 M⊙ and we are observing the wave at a distance of 100 Mpc. The angle of observation is θ = π/2, φ = 0.
V. RESULTS OF THE EVOLUTIONS
We have evolved the Zerilli and Teukolsky equations using codes that have already been tested in other situations [28,35]. Figure 1 shows the amplitude of the waves, depicting the "+" component of the polarization, defined in terms of the Zerilli function. In figure 2 we give a spatial visualization of the waves, by plotting the "×" polarization. The figure suggests a rotation pattern, but as can be seen in the accompanying movie, the shown patterns just propagate outward. Let us turn now to the evaluation of the radiated energies and angular momentum. Figure 3 shows the radiated energy as a function of the initial angular momentum, for a fixed separation of the holes. The figure compares the Regge-Wheeler-Zerilli (Z) and Teukolsky (T) calculations. As expected, they differ for large values of the angular momentum, since the Teukolsky calculation contains terms higher than linear in the angular momentum. As we explained before, one is not keeping these higher order terms consistently, so one cannot argue that the Teukolsky result is "better". A conservative view is that both results disagree when higher order terms start to be important, and this gives us a rough measure of the error in the Zerilli calculation. We therefore conclude that for the separation in question, one should not trust first order perturbation theory beyond a = 0.5. One should stress that this view may be somewhat overconservative; our experience with explicit second order calculations for the head-on collisions [36,37,30] shows that one should include all second order terms to have a consistent formulation and a reliable set of "error bars". This is not accomplished by the first order Teukolsky formalism in this context. In this respect, second order Teukolsky results for this problem will be quite welcome [34]. The second order Zerilli calculations appear quite prohibitive in complexity.

FIG. 2. For large values of r this is proportional to the "×" strain of the gravitational wave that a detector would measure (see equation (24)). The proportionality factor is 100 Mpc/r. The overall vertical scaling is the same as in the figure of the quasi-normal ringing if r = 100 Mpc. In the attached movie one can see the time evolution leading to these pictures. The viewer should keep in mind that in order to visualize "strain" a factor of 100 Mpc/r should be included, and the strain is only a well defined concept in the far zone where the metric is approximately flat. The plot is for a given fixed colatitude θ = π/4, as a function of x, y. The left picture is a side view, the right picture a top view. The picture corresponds to a snapshot at t = 80M. The spatial scale is ±100M in each direction. The vertical scale is the same as in figure 1.

FIG. 3. The radiated energy in a non-head-on collision of two non-spinning black holes as a function of the total initial angular momentum, for a fixed separation of 3.64 radii (see text for details). We depict the results of treating the problem as a perturbation of a non-rotating hole (Z) and a rotating hole (T). The agreement of both curves up to angular momenta of a = 0.4−0.5 gives confidence in the linear perturbative results. The "real data" very likely lies in a curve below the Zerilli (Z) curve, which allows us to roughly extrapolate the results to the extremal a = 1 case, where we see that still less than 1% of the mass of the system is radiated in the close limit.

FIG. 4. As can be seen, the Regge-Wheeler-Zerilli (Z) calculation of perturbations of a non-rotating hole disagrees with the Teukolsky (T) rotating black hole calculation. The radiated angular momentum is a more delicate quantity to compute than the energy, and it appears that the potentially inconsistent higher order terms included in the rotating perturbation approach change its value significantly. Overall we see the radiation is small. The Regge-Wheeler-Zerilli curve predicts that less than 0.1% of the total angular momentum will be radiated, even in the extreme rotating case.

The separation of the holes quoted in figure 3 requires some explanation. The simulations start with the construction of the initial data by the Bowen-York procedure we described in section II A. As discussed there, the construction starts with the introduction of a fiducial conformal space. In such a space the separation is 0.91M, where M is the ADM mass of the spacetime. The radius of each hole (if they were non-moving; the momentum slightly changes the shape of the horizon and the radius, see [25]) is approximately M/4, hence the separation of 3.64 radii quoted in the caption. To translate to more commonly used terms, one could convert the number to the µ0 parameter in the Misner solution, which for our case is µ0 = 1.5. Finally, another commonly used measure of the separation is the length of the geodesic threading the throat in the Misner geometry. In terms of such a parameter, the separation is equivalent to 2.75 times the ADM mass of the spacetime, or approximately 5.5 times the mass of each individual hole.
A remarkable aspect of figure 3 is that linear perturbation theory has a tendency to overestimate the radiated energies for large values of the perturbative parameter, at least from our experience with head-on collisions of non-boosted [18], boosted [28] and some preliminary unpublished results we have for spinning holes. This would suggest that, if the same behavior takes place for the inspiralling collisions, "reality" should lie below the curve corresponding to the Regge-Wheeler-Zerilli formalism (Z). This would indicate that the estimation obtained using the Teukolsky formalism is actually worse for the particular kind of collision under consideration. This is what we were alluding to when we warned in the introduction that it was not obvious that representing the spacetime as a perturbation of a non-rotating hole was a worse choice than that of a Kerr hole.
We now turn to the evaluation of the radiated angular momentum. This is depicted in figure 4. The two curves shown in figure 4 disagree significantly. They do not even agree for very small values of the angular momentum. We have checked that there is no numerical error: if in the Teukolsky evolution one keeps the initial data intact but "turns off" the a-dependent terms in the evolution equation, the RWZ straight line is reproduced. It should be noticed that the computation of the radiated angular momentum is qualitatively different from that of the energy. The energy is roughly obtained by squaring and integrating the waveforms; the angular momentum depends on subtle phase differences. It is much easier to disturb the calculation of the radiated angular momentum than that of the radiated energy. This, in particular, points to the potential difficulty of estimating this kind of quantity in full numerical simulations, where phase lags in the waveforms due to grid stretching and other problems are well known. In our approach it appears that the potentially inconsistent higher order terms in the angular momentum we introduce when considering a rotating background are causing problems in the computation of radiated angular momentum. If one wishes to be ultra-conservative, one could simply conclude that both calculations only predict the correct result for zero angular momentum. Otherwise, one could conclude that for this family of initial data the Teukolsky approach really only works for non-rotating black holes, something suggested by the fact that the background spacetime is only recovered in the close limit with vanishing angular momentum. At the moment we can only say that the accurate computation of the radiated angular momentum for this problem is an open problem. It is likely that the RWZ estimate is correct, but we do not have "error bars" (even rough ones) to validate this prediction.
VI. CONCLUSIONS
We have used the "close limit" to estimate the radiation in the collision at the end of the inspiral of two equal mass nonrotating black holes. The assumptions and restrictions were: (i) only the "ringdown" radiation was computed; (ii) we assumed that a simple initial data set gave an adequate representation of appropriate astrophysical conditions; (iii) we assumed that the final hole is not near the extreme Kerr limit; (iv) we used close limit estimates of the evolution. Our main conclusion is that the energy radiated in ringdown is probably not more than 1% of the total mass of the system, and the angular momentum radiated is not more than 0.1% of the initial angular momentum. The most serious uncertainty in this result is the possibility that the radiation from the early merger stage of coalescence is very much larger than the ringdown radiation. With our 1% Mc² estimate, collisions of black holes of 100 M⊙ would be detectable with a signal-to-noise ratio of 6 out to distances on the order of 200 Mpc by the initial LIGO configuration, and to distances of 4 Gpc with the advanced LIGO detector.
"Physics"
] |
A Multi-Layered Study on Harmonic Oscillations in Mammalian Genomics and Proteomics
Cellular, organ, and whole animal physiology show temporal variation predominantly featuring 24-h (circadian) periodicity. Time-course mRNA gene expression profiling in mouse liver showed two subsets of genes oscillating at the second (12-h) and third (8-h) harmonic of the prime (24-h) frequency. The aim of our study was to identify specific genomic, proteomic, and functional properties of ultradian and circadian subsets. We found hallmarks of the three oscillating gene subsets, including different (i) functional annotation, (ii) proteomic and electrochemical features, and (iii) transcription factor binding motifs in upstream regions of 8-h and 12-h oscillating genes that seemingly allow the link of the ultradian gene sets to a known circadian network. Our multifaceted bioinformatics analysis of circadian and ultradian genes suggests that the different rhythmicity of gene expression impacts physiological outcomes and may be related to transcriptional, translational and post-translational dynamics, as well as to phylogenetic and evolutionary components.
Introduction
Tissue and cellular functions underlying physiology of living organisms show time-dependent variations predominantly featured by 24-h (circadian) periodicity and driven by molecular clockworks operated by rhythmically expressed genes and proteins hardwiring transcriptional/translational feedback loops (TTFL) [1][2][3][4]. Circadian expressed genes include a handful of core-clock genes that in turn drive thousands of downstream clock-controlled genes [5,6]. Experiments performed in animal models showed that approximately half of the transcriptome shows 24-h oscillations that manage crucial biological processes such as the cell cycle, proliferation, metabolism, DNA damage repair, apoptosis and autophagy [7][8][9][10].
The interacting positive and negative limbs of the TTFL regulate gene transcription through sequential cycles of transcriptional activation of the expression of clock genes followed by transcriptional suppression by their protein products [11,12]. The positive limb is operated by CLOCK and BMAL1 that heterodimerize and activate the transcription of cryptochrome genes (Cry1 and Cry2) and period genes (Per1, Per2 and Per3), which operate the negative limb encoding repressors hindering gene transcription. Conversely, Bmal1 rhythmic expression is driven by the nuclear receptors REV-ERBα and RORα through competitive binding at its promoter region [13,14].
Gene expression profiling performed by means of high-throughput measurements with DNA microarrays and quantitative PCR in mouse liver specimens collected at regular time intervals showed that two groups of genes oscillate at the second (12-h) and third (8-h) harmonic of the fundamental (24-h) frequency [15].
The aim of our study was to characterize genomic and proteomic features of the clusters of genes oscillating with harmonics of circadian periodicity. We exploited bioinformatics tools for functional prediction to identify the biological functions and enriched signalling pathways and to perform comparative qualitative proteomic analysis. Finally, we implemented several computational strategies in order to detect the presence of significant de-novo regulatory motifs and known transcription factor binding sites in the promoter region of clock genes, with respect to the whole mouse genome.
We investigated the following working hypotheses: (i) Circadian genes and genes oscillating with harmonic frequencies show dissimilar biological facets and encode different proteome profiles; (ii) canonical and non-canonical DNA structures are found within the upstream regions of the oscillating gene subsets; (iii) ultradian genes connect to an identified circadian network through distinctive upstream short nucleotide sequences and DNA binding sites. Our results show that the three subsets of oscillating genes are hallmarked by very different functional annotation and proteomic features, as well as peculiar transcription factor binding motifs in addition to canonical binding sites. These are found within the upstream regions of rhythmically expressed target genes and seemingly allow for the link of the ultradian gene sets to a known circadian network.
Results
To characterize particular features of the gene sets with ultradian and circadian periodicity (8-h, 12-h, 24-h gene sets), we used a variety of computational and bioinformatics methods including a comprehensive analysis at the gene expression level namely: a sequence analysis for known transcription factor binding sites, multiple sequence alignment and phylogenetic analysis, enrichment analysis of the three gene sets, as well as the analysis of epigenetic and non-epigenetic regulation of oscillating gene expression. We further carried out an analysis at the protein level and investigated the electrochemical properties of oscillating proteins and completed our analysis by generating chromosomal co-localization networks created upon homology mapping of oscillating genes.
Known Transcription Factor Binding Sites Are Enriched in the Promoter Regions of the 8-h, 12-h, and 24-h Rhythmically Expressed Genes
To characterize a putative differential functionality of ultradian genes, we searched for enriched transcription factor (TF) binding sites in the promoter regions of the 8-h and 12-h gene sets as compared to the 24-h rhythmically expressed gene set, using the MEME SUITE AME tool [23]. We found several significantly enriched binding sites (p < 0.05; Tables S1-S6). The top 5 enriched TF binding sites for each set are reported in the corresponding supplementary tables (e.g., Table S1 for the upstream region of the 8-h gene set).
In addition, using the AME tool of the MEME suite we searched for E-boxes and D-boxes, conserved motifs known to be present in the promoter region of clock-controlled genes and bound by core-clock elements. Both E-boxes and D-boxes were detected in the upstream promoter region of the 8-h gene set and of the 12-h gene set (p < 0.05). However, other motifs are more significantly enriched (Figure 1). We detected E-boxes in the 8-h gene set in 38.3% of the upstream promoter sequences (adj. p = 4.99e-2), and in the 12-h gene set in 6.9% of the upstream promoter sequences (adj. p = 3.75e-8). In particular, we detected CLOCK (adj. p = 1.25e-16) and BMAL1 (adj. p = 5.83e-06) binding motifs in the upstream promoter regions of the 12-h gene set (Table S3). For the 24-h gene set we detected E-boxes in 26.7% of the upstream promoter sequences (adj. p = 3.04e-48) (Table S6). Interestingly, CLOCK (adj. p = 0.02) and BMAL1 (adj. p = 3.28e-05) binding motifs are also present in the downstream promoter region of the 24-h gene set (Table S5).
Phylogenetic Analysis Shows Similarity within the Promoter Regions of the 8 h and 12 h Gene Sets
The significantly enriched motifs found point to a common regulatory system for the 8-h and 12-h gene sets, hence we hypothesized the existence of an evolutionary connection between the promoter regions of both gene sets, as the key mechanism of activation is most likely evolutionarily ancient and well conserved (as the clock itself). To further investigate this hypothesis we generated phylogenetic trees, as an output visualization of the multiple sequence alignments of the 3500 bp upstream promoter region of the 8-h and 12-h gene sets (Figure 2). First, we produced a multiple sequence alignment of the promoter region sequences together with 10 control sequences of non-oscillating genes. Second, we created phylogenetic trees from the resulting alignment using the Felsenstein (F84) (Figure 2A) and Jukes-Cantor (JC) (Figure 2B) nucleotide substitution models [24]. We tagged the 8-h oscillating genes with red markers in the resulting trees and the 10 control genes with green markers. When utilizing 10 non-oscillating genes as control sequences, the 8-h and 12-h gene sets do not show a clustering pointing at strong evolutionary conservation of the entire sequence, regardless of the substitution model used. Hence, while individual binding sites for TFs are highly enriched and conserved, the promoter sequence itself varies greatly and may allow for the fine-tuning of expression for individual genes.
To search for specific functions of the individual gene sets, we investigated a possible enrichment of Gene Ontology and Reactome Pathways terms for the 8-h, 12-h and 24-h rhythmically expressed genes. In all cases, significant enrichments, generated with ConsensusPathDB, are present (p < 0.01). While the 8-h gene set showed an enrichment of terms related to metabolism, the 12-h set showed an enrichment of terms related to endoplasmic reticulum (ER)-related processes, splicing, translation and gene expression regulation. The 24-h rhythmically expressed genes showed an enrichment of terms related to meiosis and splicing (Figure S3), which is in line with our previous findings [25][26][27][28].
We further explored the putative connection of the ultradian gene sets to a known circadian (approximately 24-h rhythmically expressed elements) network (NCRG, network of circadian regulated genes [10]). For that, we performed a series of simulations based on randomized protein-protein interaction networks. The random network generation is based on the IntAct database contained in iRefIndex. We quantified the number of interactions between the elements of the 8-h gene set and 100 random networks of the same size as the NCRG. In addition, we also quantified the number of interactions between the 8-h gene set and the NCRG. While the average number of interactions between the 8-h gene set and the random networks was 3.99 ± 2.91 (mean and SD), the number of interactions between the same gene set and the NCRG was 12. We applied the same procedure to the 12-h gene set and obtained 19 ± 9.25 (mean and SD) connections to the random networks, while the number of connections between the NCRG and the 12-h gene set was 87. Both sets therefore exhibit a connectivity to the NCRG that is higher than the connectivity displayed by the random gene sets. Thus, the randomized network analysis points to a connection between the ultradian rhythmically expressed genes, the core-clock and clock-controlled genes.
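The following is a minimal sketch of the randomization test just described: it counts interactions between a gene set and a reference node set, then compares that count against random gene sets of the same size drawn from a background interactome. The 100 draws follow the text; the data structures, identifiers and file handling are assumptions about how the iRefIndex interaction table would be loaded locally.

```python
import random

def count_links(set_a, set_b, interactions):
    """Number of interactions with one endpoint in set_a and the other in set_b."""
    return sum(1 for u, v in interactions
               if (u in set_a and v in set_b) or (u in set_b and v in set_a))

def randomization_test(gene_set, ncrg, interactions, background, n_random=100, seed=0):
    """Observed connectivity vs. mean/SD over random node sets of the NCRG's size."""
    rng = random.Random(seed)
    observed = count_links(gene_set, ncrg, interactions)
    null = [count_links(gene_set, set(rng.sample(background, len(ncrg))), interactions)
            for _ in range(n_random)]
    mean = sum(null) / n_random
    sd = (sum((x - mean) ** 2 for x in null) / n_random) ** 0.5
    return observed, mean, sd

# interactions: iterable of (protein_a, protein_b) pairs, e.g. parsed from iRefIndex;
# gene_set: the 8-h or 12-h set; ncrg: nodes of the circadian reference network;
# background: list of all proteins in the interactome (all placeholders here).
```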
Epigenetic and Non-epigenetic Regulation of Oscillating Gene Expression
We performed an enrichment analysis for histone modifications associated with the 8-h, 12-h and 24-h gene sets available in the public Encode 2015 project data [29]. The enriched histone methylation pattern of H3K79me2 associated with the 12-h and 24-h gene sets is tissue-specific for the liver in agreement with the original data, generated from liver cells. This points to a tissue specificity of the methylation pattern and the corresponding expression of rhythmically expressed genes. The methylation pattern of the 8-h gene set is associated with a different cell line (M. Musculus MEL, p = 3.555e-02).
The H3K79me2 histone modification is associated with the function of the RNA polymerase II ( Figure S4). RNA Polymerase II plays a major role in the transcription regulation of the 12-h and 24-h rhythmically expressed genes based on ChIPSeq data from the ENCODE project and as suggested by the previous histone modification data. However, GABPA (GA Binding Protein Transcription Factor Subunit Alpha, p = 4.315e-08) TF scores higher in the 12-h oscillating gene set than RNA Polymerase II. GABPA is related to the mitochondrial gene expression pathway, thus pointing again at the potential metabolic role of the genes with ultradian oscillations. For the 8-h gene set we detected an enrichment of TCF12 (Transcription Factor 12) binding. This transcription factor recognizes E-boxes and is involved in the formation of lineage-specific gene expression. This enrichment illustrates the important role of the E-boxes, which even though not being the most enriched motif seem to attract the strongest TF activity ( Figure S4).
Moreover, we investigated the protein-protein interactions of the transcription factors potentially influencing the gene sets according to the ENRICHR database [30]. This enrichment showed interesting candidates: POLE (DNA Polymerase Epsilon, Catalytic Subunit; p = 1.026e-02) for the 8-h gene set and ESR1 (Estrogen Receptor 1) for the 12-h (p = 1.137e-07) and 24-h (p = 7.235e-13) gene sets (Figure S4). We further investigated the role of POLE in a potential cancer context. According to publicly available data, high expression of POLE is an unfavourable marker in renal cancer and melanoma [31] (Figure S5).
Altogether, the enrichment information points towards very specific processes that govern the regulation and output of the ultradian oscillating genes. Often a single microRNA (such as miRNA-1295 for the 8-h gene set) or a single gene such as POLR2A in the 24-h gene set are predicted to have the most significant results in terms of interaction with other genes, regulation of transcription or computationally predicted targets in the genomic sequence.
Electrochemical Properties of Oscillating Proteins
Next, we analyzed the electrochemical features of the proteins encoded by circadian and ultradian genes as compared to a randomly sorted set of proteins encoded by non-oscillating genes (Table 1, Table S7). The overall stability as predicted by the FoldX algorithm [32] was higher in oscillating proteins when compared to non-oscillating proteins (although the statistically significant difference was barely detectable), whereas the terms represented by inter-residue van der Waals clashes and electrostatic interactions (Table S8) between molecules in the precomplex were significantly different. A correlation analysis showed negative correlation of these terms with free energy values in oscillating proteins and positive correlation in non-oscillating proteins, suggesting a different contribution to the overall protein stability.
Among the oscillating proteins subsets, 8-h oscillating proteins showed statistically significant differences in respect to 12-h and 24-h oscillating proteins, with lower average number of residues and with higher free energy (lower energy of unfolding), suggesting lower overall stability. In this regard, the components that were different in a statistically significant way were represented by solvation of polar and hydrophobic atoms, water binding, Van der Waals energy, steric clashes, hydrogen bonds, electrostatic interactions (Table 1). Gibbs free energy was negatively correlated with these statistically significant variables in the 8-h oscillating proteins, while an inverse correlation was found in the set of 12-h oscillating proteins, hinting at a diverse involvement in the net equilibrium of forces settling on unfolded or folded protein state. On the other hand, similar correlations were found for 8-h and 24-h oscillating proteins, except for the contribution of hydrophobic groups to free energy difference (Figure 3 and Figure S6).
Chromosome Mapping of Oscillating Genes
All genes of the 8-h subset (56) and nearly all of the 12-h subset of mouse genes (202 out of 205) were mapped to human homologs, while only 1826 out of 2054 mouse circadian genes were suitably mapped. The genes of the three classes were distributed along all chromosomes, with no chromosome left uncovered and no homolog and paralog genes localized on the same chromosome in both human and mouse (Figure 4). Only a few oscillating genes mapped to chromosome Y, precisely one circadian gene in the mouse set, and one ultradian and one circadian gene in the human set (Table S9). The intersection of mouse and human co-localization networks created upon homology mapping of oscillating genes revealed high localization conservation for the 8-h gene sets between both species (65%), a moderate conservation for the 12-h gene sets (23%) and poor conservation of chromosomal localization for circadian genes (6%) (Table 2).
Discussion
Frequency multiplication is a common occurrence in rhythmic phenomena observed in multifaceted systems of interest for a variety of scientific disciplines, for instance physics, chemistry, biology, astronomy. In natural and life sciences, harmonics of circadian frequency have been initially reported prior to the foundation of chronobiology as a separate area of scientific research addressing rhythmic phenomena in living beings. Nonetheless, the scientific literature on the multiplication of circadian periodicity in biological processes remains limited at the present time.
The comprehensive bioinformatics analyses performed on transcriptomics and proteomics data in mammalian genes expressed with 24-h periodicity and with harmonics of circadian rhythmicity allowed us to highlight a number of interesting differences among the subsets of oscillating genes: (i) circadian genes and genes oscillating at the second and third harmonic of 24-h periodicity show divergent functional annotation and proteomic characteristics; (ii) within their upstream regions unusual transcription factor binding motifs other than canonical binding sites are found; (iii) genes oscillating at the second and third harmonics are connected by specific regulatory motifs and transcription factor binding sites to a recognized circadian network.
In particular, the shared enriched transcription factor binding sites in the promoter regions of the circadian and ultradian genes suggest equivalent transcriptional control of time-dependent gene expression. In the upstream promoter region of ultradian genes, in addition to other motifs more significantly enriched, we identified E-boxes and D-boxes, which were not found in their downstream promoter regions. Moreover, the phylogenetic analysis of the promoter regions of the ultradian gene sets showed variability of the entire promoter sequence, which could eventually allow the accurate regulation of expression of the different genes. Furthermore, a randomized network analysis suggested a possible connection between the ultradian gene subsets and the circadian clock circuitry. The subsequent enrichment analysis showed that the 8-h oscillating genes were enriched in terms related to metabolism, the 12-h oscillating genes in terms related to ER-related processes, splicing, translation and gene expression regulation, and the 24-h oscillating genes in terms related to meiosis and splicing. This is in agreement with previous results [25,26,33].
Oscillating Proteins Are Hallmarked by Higher Overall Stability when Compared to Non-Oscillating Proteins
Bioinformatics analysis of the electrochemical properties of non-oscillating and oscillating proteins showed that the oscillating proteins are hallmarked by higher overall stability when compared to non-oscillating proteins, mainly in relation to significant dissimilarity of two components of free energy calculation in the FoldX protein design algorithm, one related to inter-residue close contacts and the other represented by electrostatic contribution of interactions at interfaces, which differently contributed to the free energy value in the two subsets. Considering the three oscillating proteins subsets, 8-h oscillating proteins showed lower mean residue number and lower overall stability, mainly in relation to different polar and hydrophobic desolvation, water binding, Van der Waals energy, steric clashes, hydrogen bonds, electrostatic interactions, interestingly with opposite correlations when matched up to the other ultradian subset of proteins. Protein folding allows free volume to decrease and considerably impacts protein conformational/binding equilibrium and ultimately physiological function in conditions of macromolecular crowding, such as those hallmarking cellular and sub-cellular volume-restricted compartments [34,35]. The spatio-temporal gathering of oscillating proteins may impact the effects of macromolecular crowding on equilibrium stability of proteins with different folds, cofactors and mechanisms. Protein folding and unfolding kinetics are influenced by crowding, with stabilizing effects whose degree will hinge on intrinsic stability and protein fold [34,35]. In this context, the molecular clockwork could manage the phase relation between subcellular oscillation patterns of folded, intermediate, and unfolded proteins, as well as of molecular chaperones that assist these transitions, especially considering that macromolecular crowding accelerates folding, but over a given limit the folding process will be hindered [34,35].
Specific Enriched Processes Govern the Regulation and Output of the Ultradian Oscillating Genes
The enrichment analysis for histone modifications showed association of H3K79me2, involved in RNA polymerase II function, with the 12-h and 24-h oscillating genes, whereas 8-h oscillating genes showed binding enrichment for TCF12, a transcription factor capable of binding to E-boxes. Altogether, the enrichment information points towards very specific processes that govern the regulation and output of the ultradian oscillating genes. Often a single microRNA (such as miRNA-1295 for the 8-h gene set) or a single gene such as POLR2A in the 24-h gene set is predicted to have the most significant result in terms of targets. The enrichment analysis for computationally predicted miRNA targets pinpointed the 8-h gene set as a target for miRNA-1295 and the 12-h gene set as a target for miRNA-344, miRNA-344c, miRNA-1244 and miRNA-499, whereas the 24-h oscillating genes appeared as targets for miRNA-4637. Furthermore, the analysis of protein-protein interactions of the transcription factors potentially influencing the oscillating gene sets identified POLE as the major candidate for the 8-h gene set and ESR1 for the 12-h and 24-h gene sets. Interestingly, elevated POLE expression is a marker of poorer outcome in renal cancer and melanoma patients.
Homology Mapping of Oscillating Genes Revealed Different Degree of Localization Conservation for the Three Gene Sets
Mapping of the 8-h, 12-h and 24-h oscillating genes in mouse and human chromosomes revealed scattering of the three classes along all chromosomes, with no chromosome left uncovered and homologs and paralogs of core-clock genes and clock-controlled genes never localized on the same chromosome. Nevertheless, in both species only a few oscillating genes mapped to chromosome Y, probably in relation to the peculiar role played by this allosome in male fertility and sex determination in mammals. In addition, we found high localization conservation for the 8-h genes (65%) between both species, a moderate conservation for 12-h genes (23%) and a poor conservation of localization for circadian genes (6%).
Primary Dataset
Bioinformatics analyses were performed on publicly available genomic data (GSE11923). Briefly, liver samples were collected every hour for 48 h from n = 3-5, 6-week-old male C57BL/6J mice (Jackson) per time point, the specimens were pooled, and high-temporal resolution profiling was performed using Affymetrix arrays to detect cycling genes. Fisher's G-test at a false-discovery rate of < 0.05 and COSOPT were jointly exploited to recognize rhythmic transcripts, which were classified, depending on the length of the oscillation period, as circadian (24 ± 4 h) and ultradian (12 ± 2 h and 8 ± 1 h) [15]. Array probe IDs/nucleotide sequences of 8-h, 12-h and 24-h oscillating genes were registered and by using BioDBnet (https://biodbnet-abcc.ncifcrf.gov/db/db2db.php) 56, 202 and 2396 Ensembl Transcript IDs were recovered from the primary dataset, respectively.
Sequence Analysis for Known Transcription Factor Binding Sites
For the initial data acquisition, we performed an analysis on pre-selected data sets corresponding to the above-mentioned gene-probes with 8-h, 12-h and 24-h rhythmic oscillations. To perform the sequence analysis, we extracted and analyzed the 3500 bp flanking sequences upstream and the 300 bp flanking sequences downstream of the complete corresponding genes. The mapping and sequence selection were carried out with Ensembl biomaRt (Ensembl revision 84). We searched for enriched known motifs and specific acceptance for gapped motifs with the MEME SUITE software (http://meme-suite.org/) [36]. The length of the motif correlates with its statistical significance. MEME defines the most statistically significant motif based on its E-value (low E-value). The E-value of a motif is based on its log likelihood ratio, width, sites, the background letter frequencies, and the size of the training set. The E-value is an estimate of the expected number of motifs with the given log likelihood ratio (or higher), and with the same width and site count, as found in a similarly sized set of random sequences. We used the AME tool and the HOCOMOCOv11 [37,38] database as motif sources. The AME tool specifically searches for enrichments of known motifs from the database selected. The 3500 bp upstream promoter region was scanned, as well as the 300 bp downstream motif region.
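As an illustration of how this scan could be scripted, the sketch below wraps an AME call from Python. The file names are placeholders, and the flag spelling (--oc, --control, --scoring, --method) is quoted from memory of the MEME Suite AME command-line interface, so it should be verified against `ame --help` for the installed release.

```python
import subprocess

def run_ame(fasta, motif_db, outdir, control=None):
    """Run the MEME Suite AME tool on a set of promoter sequences.
    Flag names reflect common AME usage and may need adjusting for a
    specific MEME Suite version (this is an assumption, not a spec)."""
    cmd = ["ame", "--oc", outdir, "--scoring", "avg", "--method", "fisher"]
    if control is not None:
        cmd += ["--control", control]
    cmd += [fasta, motif_db]
    subprocess.run(cmd, check=True)

# Example (paths are placeholders): scan 3500 bp upstream sequences of the 12-h set
# against HOCOMOCO motifs, using the 24-h set promoters as control sequences.
# run_ame("promoters_12h_up3500.fa", "HOCOMOCOv11_core_MOUSE.meme",
#         "ame_12h_vs_24h", control="promoters_24h_up3500.fa")
```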
Multiple Sequence Alignment and Phylogenetic Analysis
Multiple sequence alignments of the promoter regions of the 8-h and 12-h gene sets were created with MUSCLE (http://www.drive5.com/muscle/). The resulting alignments were used for further phylogenetic analysis of the promoter regions of the 8-h and 12-h gene sets. Phylogenetic tree creation was performed with PHYLIP's neighbor joining method with F84 and Jukes-Cantor substitution models [24] (http://evolution.genetics.washington.edu/phylip.html).
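To make the substitution-model step concrete, the sketch below computes pairwise Jukes-Cantor distances from an already-aligned set of sequences; the toy sequences are placeholders for a MUSCLE alignment, and the resulting distance matrix is the kind of input a neighbor-joining program such as PHYLIP's `neighbor` consumes (the F84 correction used for Figure 2A is not reproduced here).

```python
import math

def p_distance(a, b):
    """Fraction of differing, non-gap sites between two aligned sequences."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

def jukes_cantor(a, b):
    """Jukes-Cantor corrected distance d = -3/4 ln(1 - 4p/3)."""
    p = p_distance(a, b)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

aligned = {                      # toy aligned promoter fragments (placeholders)
    "gene1": "ACGTACGTAC",
    "gene2": "ACGTACGAAC",
    "ctrl1": "ACGAACGTTC",
}
names = list(aligned)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, round(jukes_cantor(aligned[a], aligned[b]), 4))
```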
Enrichment Analysis of the 8-h, 12-h and 24-h Gene Sets
The enrichment analysis for the mouse gene sets was performed with ConsensusPathDB (http://consensuspathdb.org) [39]. An analysis for Enriched Reactome and GO terms was performed. The p-value cut-off was set to 0.01 and the GO terms were set to level 4. Each of the 8-h, 12-h and 24-h gene sets was analyzed individually.
Randomized Network Analysis on Ultradian and Circadian Genes
A network analysis to explore the connection between the ultradian genes, and the core-clock and clock-controlled genes was performed through a series of simulations based on randomized protein-protein interaction networks. For the network creation the probes were mapped to Uniprot and Entrez ID. The network was generated from IntAct data contained in the iRefIndex database (snapshot from 2015) (http://irefindex.org) which summarizes protein-protein interaction data from different sources. The computations were performed using the iRefR R package with 100 random sets of genes of matching size.
Impact of Protein Expression on Survival
Survival data associated with protein expression was retrieved from the Protein Atlas database [31].
Electrochemical Properties of Oscillating Proteins
To predict the electrochemical properties of proteins encoded by ultradian and circadian genes we used the corresponding three-dimensional structural data stored in the Protein Data Bank (PDB; http://www.rcsb.org/pdb/) [42]. All complex analyses were performed with FoldX, which is one of the best stability predictors and is easily implementable in a pipeline [32]. FoldX is an empirical force field that was developed for the rapid evaluation of the stability and folding of proteins and nucleic acids. It is composed of a solvation term, a van der Waals term, hydrogen-bond and electrostatic terms, and entropic terms for the backbone and side chains. In the case of protein complexes, an extra term related to the electrostatic contribution is also considered. The software package FoldX includes subroutines, e.g., RepairPDB. The way it operates is the following: first it looks for all Asparagine, Glutamine and Histidine residues and flips them by 180 degrees. This is done to prevent incorrect rotamer assignment in the structure, due to the fact that the electron density of Asparagine and Glutamine carboxamide groups is almost symmetrical and the correct placement can only be discerned by calculating the interactions with the surrounding atoms; the same applies to Histidine. It then performs a small optimization of the side chains to eliminate small van der Waals clashes, which prevents moving side chains in the final step. "RepairPDB" identifies the residues that have very bad energies and mutates them and their neighbors to themselves, exploring different rotamer combinations to find new energy minima. Correlations between the parameters were investigated by pairwise correlation analysis (Spearman correlation; R package PerformanceAnalytics). The statistical analysis of the electrochemical features of the ultradian and circadian gene sets was conducted using the energy values from the FoldX [32] energy function and performing a Kruskal-Wallis one-way analysis of variance with Residue Number as covariate and Dunn's post hoc test with false discovery rate (FDR) correction.
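Here is a minimal sketch of the statistical step on FoldX output: a Kruskal-Wallis test across the three subsets and a Spearman correlation between energy terms. The arrays are randomly generated placeholders for the per-protein FoldX values, and the commented scikit-posthocs call for Dunn's test assumes that optional package is installed.

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

# Placeholder free-energy values per protein; in practice these come from the
# FoldX output for the 8-h, 12-h and 24-h protein subsets.
rng = np.random.default_rng(1)
dg_8h, dg_12h, dg_24h = rng.normal(0.0, 1.0, 56), rng.normal(0.3, 1.0, 202), rng.normal(0.2, 1.0, 500)

H, p = kruskal(dg_8h, dg_12h, dg_24h)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.3g}")

# Spearman correlation between total dG and one energy component (placeholder values):
vdw_8h = rng.normal(0.0, 1.0, 56)
rho, p_rho = spearmanr(dg_8h, vdw_8h)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3g}")

# Dunn's post hoc test with FDR correction, if scikit-posthocs is available:
# import scikit_posthocs as sp
# sp.posthoc_dunn([dg_8h, dg_12h, dg_24h], p_adjust="fdr_bh")
```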
Chromosome Mapping of Oscillating Genes
H. sapiens homologs for 8-h, 12-h and 24-h M. musculus oscillating genes (GSE11923) were retrieved using biomaRt. Genes not matching between mouse and human were sought manually using the latest version of the Ensembl web portal. In the case of multiple homologs (one-to-many or many-to-many relationships), the following scores were considered, in this order of priority: confidence score; gene order conservation (GOC) score; target %ID, which refers to the percentage of the sequence in the target species (human) that matches the query sequence (mouse); query %ID, which refers to the percentage of the sequence in the query species that matches the homologue; dN/dS ratio. The number of oscillating genes divided by the total number of genes in each chromosome was represented by bar plots. The extent of gene co-localization overlap was assessed by using networks. Genes of the three subsets, i.e., 8-h, 12-h and 24-h oscillating genes, were represented as networks, where the genes, symbolized as nodes, were linked by edges if they were located on the same chromosomes. For each mouse and human subset of genes, networks were built and then intersected by means of Pyntacle (http://pyntacle.css-mendel.it/). An intersection network was built considering only nodes and edges in common between the two original networks. Pairs of intersecting genes were considered to be on the same chromosome in mouse and human, even if the chromosomes were not the same between the two species.
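A minimal sketch of the co-localization network construction and intersection described above follows; the gene-to-chromosome dictionaries are placeholders keyed by a shared homolog label, and the simple edge-set intersection stands in for the Pyntacle step.

```python
from itertools import combinations

def colocalization_edges(gene_to_chrom):
    """Edge set linking genes that sit on the same chromosome."""
    edges = set()
    for a, b in combinations(sorted(gene_to_chrom), 2):
        if gene_to_chrom[a] == gene_to_chrom[b]:
            edges.add((a, b))
    return edges

# Placeholder mappings: mouse genes and their human homologs, keyed by a common
# label so the two networks share node names (chromosomes need not match).
mouse = {"g1": "chr1", "g2": "chr1", "g3": "chr5"}
human = {"g1": "chr3", "g2": "chr3", "g3": "chr5"}

shared = colocalization_edges(mouse) & colocalization_edges(human)
conservation = len(shared) / max(len(colocalization_edges(mouse)), 1)
print(f"conserved co-localized pairs: {shared}, fraction = {conservation:.2f}")
```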
Conclusions
High-throughput analysis over time-series microarray expression data unveils harmonics in oscillation patterns of omics that, intermingling with spatial hierarchical branching networks lapsing in size-invariant units, could endow a fourth temporal dimension at least complementary to the fourth spatial dimension blueprinted by fractal-like networks broadly pervasive in nature. Our wide-ranging characterization of genomic, proteomic and functional properties of oscillating genes and proteins suggests that ultradian and circadian rhythmicity in omics could subtend or alternatively be related to specific mechanisms underlying the functioning of various and complex biological phenomena crucial to make life possible.
Supplementary Materials: Supplementary materials can be found at http://www.mdpi.com/1422-0067/20/18/4585/s1. Figure S1. Enrichment analysis for Gene Ontology and Reactome Pathways terms in the transcription factors for which enriched binding sites were detected in the promoter regions of the 8-h, 12-h and 24-h gene sets. Figure S2. High-resolution format of Figure 2. Figure S3. Pathways and GO terms enrichment analysis points at the specific functions for each gene set. Figure S4. Enrichment analysis of histone modifications. Figure S5. High POLE expression is associated with negative outcome for renal cancer and melanoma. Figure S6. Correlation matrix for parameters of the 8-hours, 12-hours and 24-hours gene sets. Figure S7. Cytoscape files for the network in Figure 4. Table S1. Enriched binding sites for known transcription factors in the upstream regions of the promoters of the 8-h gene set. Table S2. Enriched binding sites for known transcription factors in the downstream regions of the promoters of the 8-h gene set. Table S3. Enriched binding sites for known transcription factors in the upstream regions of the promoters of the 12-h gene set. Table S4. Enriched binding sites for known transcription factors in the downstream regions of the promoters of the 12-h gene set. Table S5. Enriched binding sites for known transcription factors in the downstream regions of the promoters of the 24-h gene set. Table S6. Enriched binding sites for known transcription factors in the upstream regions of the promoters of the 24-h gene set. Table S7. Descriptive statistics for the data in Table 1. Table S8. Free energy (∆G) terms included in the core function of FoldX, the empirical force field algorithm aiming to calculate the change of ∆G in kcal·mol−1. Table S9.
"Biology"
] |
Experimental Study on Force Sensitivity of the Conductivity of Carbon Nanotubes-Modified Epoxy Resins
The addition of a conductive material into polymer improves its mechanical properties, electrical properties and thermal conductivity and bestows it with good self-sensing and self-adjusting properties. In this study, carbon nanotubes-modified epoxy resins (CNTs-EP) were successfully prepared with good dispersion through the combined methods of three roller rolling, ultrasonic processing and adding surfactant. Tests were conducted to evaluate the resistivity of unloaded modified epoxy resins with different mixing amounts of carbon nanotubes (CNTs), to determine the conductive percolation threshold. On the basis of the test results, a series of monotonic and cyclic uniaxial tensile tests were then conducted to investigate the force sensitivity of the conductivity of epoxy resins modified with different mixing amounts of CNTs. The relationship between the stress and the resistivity under various mixing amounts was studied, indicating that the resistance response could play a good warning role on the damage of the modified polymer material.
Introduction
The application scope of polymer matrix materials has expanded from the aerospace and military industries to commercial airplanes, automobiles, civil structures and leisure/sport equipment by using them to replace traditional ceramic and metal-based materials [1][2][3]. The addition of a conductive material into a polymer can improve its mechanical properties, electrical properties and thermal conductivity and endows it with good self-sensing and self-adjusting properties [4]. Under external forces, the conductive pathways within the modified polymer are reconfigured, giving the material force-sensitive behavior. Due to their good flexibility and processability, polymer conductive composites are often used as sensing materials in force-sensitive sensors. The modified polymer materials have broad application prospects and important research value in the field of structural health monitoring [5].
The discovery of nanomaterials has opened a new prelude for the study of nanosized sensing materials. Among them, carbon nanotubes (CNTs) have a large aspect ratio, a large specific surface area and good conductivity even at a low mixing amount [6,7], so that CNTs improve the conductivity while enhancing the mechanical properties of the polymer [8,9]. The modified polymer materials composed of CNTs present excellent sensitivity to force [10,11]. Since the size of CNTs is on the nano-scale, there is an obvious tunnel effect between particles, which can be used to improve the sensitivity. Therefore, there exist broad research prospects on the use of CNTs in force-sensitive composites [12]. Producing conductive polymer nanocomposites with a small amount of CNTs dispersed in insulating polymers then becomes possible, and such electrically conductive CNT/polymer nanocomposites can be applied to various fields, including piezoresistive and highly sensitive resistance-type strain sensors [13][14][15].
The dispersion of CNTs in epoxy resin (EP) was studied by researchers and a variety of physical and chemical methods of CNTs dispersion have been put forward [16][17][18][19][20]. Regarding the electrical properties of CNTs-EP, a lot of studies have been done to achieve a lower conductive percolation threshold in the matrix and verified that a low mixing amount of CNTs can make the epoxy resin more conductive [21][22][23][24]. Mechanical tests have also been conducted on the resin matrices modified with different mixing amounts of CNTs. It was found that a certain amount of CNTs can effectively enhance the mechanical properties of the matrix material such as the elastic modulus and the tensile strength [25,26]. However, research on the force sensitivity of CNTs-EP is still limited.
In this study, a series of experimental studies were conducted on CNTs-EP to investigate the sensitivity of their conductivity on force. Uniaxial tensile tests and cyclic loading tests were performed on the specimens with various mixing amounts of CNTs to reveal the relationship between the mixing amount and the force sensitivity of the conductivity of the modified epoxy resins.
Material Selection and Preparation Process
The materials required in specimen preparation mainly included epoxy resin, CNTs, dispersant and short carbon fiber powder. The descriptions of the materials are summarized in Table 1. Carbon nanotubes are nanoscale materials with a large length-to-diameter ratio and high surface energy. During their dispersion into the resin matrix, aggregation usually occurs, which hinders the formation of a conductive network and then greatly affects the conductivity. Therefore, finding an optimal dispersion method for CNTs is the key to the successful preparation of CNTs-EP.
The dispersion methods of CNTs can be divided into two categories: physical and chemical dispersion methods. The physical dispersion methods mainly include high-energy ball grinding, ultrasonic processing, three-roller rolling and centrifugal dispersion. The chemical dispersion methods mainly comprise adding surfactant and strong acid and alkali washing. In this study, the resin matrix used was a type of structural adhesive with high viscosity, so it is difficult to achieve good dispersion with a simple grinding or ultrasonic processing method. On the other hand, the introduction of other solvents such as acetone may affect the mechanical properties of the structural adhesive, and the added solvents are difficult to remove completely afterward. In this experimental study, a dispersant activator was firstly added into the CNTs and the epoxy resin (i.e., part A) and the mixture was ground with a three-roll machine. Secondly, a curing agent (i.e., part B) and short carbon fiber powder were added. The uniformly stirred short carbon fiber, with a length of 1 mm and a diameter of 7 µm, was used to optimize the conductivity of the modified matrix.
The mixture was then treated with ultrasonic processing at room temperature to fully disperse the CNTs. Finally, vacuum degassing was used to remove air bubbles. The preparation process is schematically illustrated in Figure 1.
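As a quick illustration of the batching arithmetic implied by the mixing amounts used later, the following minimal sketch (not taken from the paper) computes component masses for a chosen CNT loading. It assumes the wt % values are fractions of the total batch mass, which the text does not state explicitly; the function name and the 500 g batch size are hypothetical.

```python
def batch_masses(total_g, cnt_wt_pct, cf_wt_pct=0.2):
    """Split a batch of total_g grams into CNTs, short carbon fiber powder
    and epoxy resin (parts A + B combined). Assumes the wt % values refer
    to the total batch mass (an assumption, not stated in the text)."""
    cnt_g = total_g * cnt_wt_pct / 100.0
    cf_g = total_g * cf_wt_pct / 100.0
    resin_g = total_g - cnt_g - cf_g
    return {"CNTs_g": cnt_g, "carbon_fiber_g": cf_g, "epoxy_resin_g": resin_g}

# Example: a hypothetical 500 g batch at a 1.2 wt % CNT loading.
print(batch_masses(500.0, 1.2))
```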
Specimen Fabrication
The tensile specimens of the carbon nanotube-modified epoxy resin were dumbbell-shaped with a total length of 220 mm. In the central part, 60 mm long, the cross section was rectangular with dimensions of 10 mm × 4 mm; at the two ends, each 53 mm long, the cross section was also rectangular but with dimensions of 20 mm × 4 mm. The transition parts in between were 26.9 mm long with a chamfer radius of 75 mm.
Firstly, the modified epoxy resin was cast into a silica gel mold and a silica gel plate was used to seal the surface of the mold. A hard plastic sheet was then placed on the silica gel plate and weighted down to ensure close contact between the plate and the mold. The mixtures were cured for 7 days at a room temperature of 20 ± 0.5 °C, after which the specimens were cut and polished. All specimens were measured with a Vernier caliper. Figure 2 shows the detailed fabrication process.
Test Method
The conductivity of CNTs-EP is expected to be improved in two ways: jointed CNTs in the matrix may form a conductive network, while adjacent or aggregated CNT particles can form current channels under the excitation of an electric field. These two electron transfer mechanisms improve the conductive property of the modified epoxy resin matrix.
In the resistivity test of the modified epoxy resin matrix, a two-point measurement was adopted. A copper sheet 2 mm wide was wound around the surface of the specimen, and a double-sided conductive copper tape 6 mm wide was then attached onto the copper sheet. In this way, two electrodes were made and attached to the central part of the specimen with an interval of 50 mm; Figure 3 presents their locations along the test coupon. The test equipment was an electro-mechanical universal testing machine AG-Xplus (Shimadzu Co. Ltd., Kyoto, Japan), and the electrical resistance was measured with an insulation resistance tester CHT3530 (Hopetech Co. Ltd., Shenzhen, China). Monotonic and cyclic uniaxial tensile tests were conducted at a fixed loading rate of 2 mm/min. The monotonic tests yielded the elastic modulus, elongation and tensile strength, while in the cyclic tests the maximum load was set at approximately 2/3 of the tensile failure load obtained in the monotonic tests. During testing, the time-dependent stress, strain and electrical resistance were all recorded automatically. To avoid interference from the metal clips of the testing machine with the electrical measurements, the gripping parts of the specimens were wrapped with coarse sandpaper (whose resistance is on the order of TΩ), preventing electrical connection between the specimen ends and the clips. The test setup is shown in Figure 4.
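For readers who wish to reproduce the data reduction, the sketch below converts a two-point resistance reading into a volume resistivity using the specimen geometry given above (10 mm × 4 mm cross section, 50 mm electrode spacing). It is a simplified sketch that assumes uniform current flow through the full cross section, which the surface-mounted electrodes only approximate; it is not the paper's own processing script.

```python
def resistivity_ohm_cm(resistance_ohm, width_mm=10.0, thickness_mm=4.0, gauge_mm=50.0):
    """Volume resistivity rho = R * A / L (ohm*cm), assuming uniform current
    flow through the full rectangular cross section between the electrodes."""
    area_cm2 = (width_mm / 10.0) * (thickness_mm / 10.0)  # 1.0 cm x 0.4 cm
    length_cm = gauge_mm / 10.0                            # 5.0 cm
    return resistance_ohm * area_cm2 / length_cm

# Example: a nearly insulating specimen reading 1e12 ohm on the tester.
print(f"{resistivity_ohm_cm(1e12):.2e} ohm*cm")
```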
Resistivity of Modified Epoxy Resin
The resistivity tests were divided into seven groups with different CNT mixing amounts: 0.0 wt %, 0.4 wt %, 0.8 wt %, 1.2 wt %, 1.6 wt %, 2.0 wt % and 3.0 wt %, while the mixing amount of carbon fiber powder was fixed at 0.2 wt % for all specimens. Five identical specimens were tested for each combination of parameters, giving 35 specimens in total. Figure 5 shows the resistivity test results.
The test results showed that, without CNTs, the addition of carbon fiber powder had little influence on the resistivity of the specimens compared to the pure resin. As the CNT mixing amount increased from 0.0 wt % to 0.8 wt %, the resistivity dropped slowly: the conductive particles were still isolated at this stage and the resistivity of the composites remained on the order of TΩ·cm. When the mixing amount increased further from 0.8 wt % to 1.2 wt %, the resistivity fell sharply by about eight orders of magnitude. In this range the conductive particles adjoined and the spacing between particles decreased, strengthening the tunnel effect among the particles and shifting the modified resin matrix from an insulator to a conductor; 1.2 wt % may therefore be regarded as the conductive percolation threshold of the composite material. With a further increase from 1.2 wt % to 1.6 wt % the decrease of the resistivity became less sharp, and between 1.6 wt % and 2.0 wt % the change of resistivity remained marginal, indicating that the conductive network in the modified matrix was sufficiently connected. When the mixing amount exceeded 2.0 wt %, a slight increase of the resistivity was even observed, probably due to aggregation of the CNTs. In other words, a very high mixing amount may lead to worse dispersion of the CNTs and thus weaken the electrical conductivity of the modified matrix.
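A simple way to locate the percolation threshold from data like Figure 5 is to look for the largest drop in log-resistivity between neighbouring mixing amounts. The sketch below does this on placeholder values that only mimic the reported trend (TΩ·cm below 0.8 wt %, a drop of roughly eight orders of magnitude between 0.8 and 1.2 wt %); the actual Figure 5 values are not reproduced here.

```python
import numpy as np

# Placeholder resistivities (ohm*cm) mimicking the reported trend only.
wt_pct = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 3.0])
resistivity = np.array([5e12, 2e12, 1e12, 1e4, 5e2, 3e2, 6e2])

log_rho = np.log10(resistivity)
drops = -np.diff(log_rho)          # positive where resistivity decreases
i = int(np.argmax(drops))
print(f"Largest drop: {drops[i]:.1f} decades between {wt_pct[i]} and "
      f"{wt_pct[i + 1]} wt %, so the threshold is near {wt_pct[i + 1]} wt %")
```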
For further explanation of the test results, scanning electron microscopy (SEM) was performed to investigate the morphology, as shown in Figure 6a-g. With increasing CNT mixing amount, the white particles and filaments representing the added CNTs gradually increase. At a mixing amount of 2.0 wt % the CNTs are dispersed uniformly in the epoxy resin matrix in the form of a network, indicating good dispersion. In Figure 6f, aggregation occurred at the mixing amount of 3.0 wt %. With increasing CNT content, the viscosity of the resin matrix increased and the fluidity decreased, which is consistent with many previous investigations [27,28].
At the mixing amounts of 0.8 wt % and 1.2 wt %, the specimens were polished and SEM was conducted for further morphological investigation; the corresponding results are shown in Figures 7 and 8. As illustrated in Figure 7, at the mixing amount of 0.8 wt % the specimen surface showed many short carbon fibers, which were dispersed homogeneously in the modified matrix in conjunction with the CNTs. This benefited the formation of the conductive network. Junction regions between carbon fibers and CNTs were also observed at the mixing amount of 1.2 wt %, as shown in Figure 8.
Force Sensitivity in Monotonic Uniaxial Tensile Tests
The improvement of the conductivity of the modified epoxy resin matrix by the addition of CNTs arises mainly from two mechanisms: (1) Ohmic contact through the conductive network formed by the CNTs in the matrix, which can be explained by seepage (percolation) theory [29] and characterized by the segment resistance Rc of the conductive network; and (2) the conductive paths formed between other adjacent CNTs, which can be explained by the tunnel effect theory and characterized by the tunneling junction resistance Rj between adjacent CNTs.
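To see why the tunneling path makes the composite sensitive to strain, one can use the commonly assumed exponential dependence of the junction resistance on the inter-particle gap. The sketch below is illustrative only: the prefactor and the tunneling decay length are hypothetical values, not parameters reported in this study.

```python
import math

def junction_resistance(gap_nm, r0_ohm=1e5, decay_nm=0.35):
    """Simplified tunneling-junction model: Rj grows exponentially with the
    gap between adjacent CNTs. r0_ohm and decay_nm are illustrative only."""
    return r0_ohm * math.exp(gap_nm / decay_nm)

# Widening a 1.0 nm gap by only 10% already raises Rj by roughly a third,
# which is why small strains produce a measurable resistance change.
ratio = junction_resistance(1.1) / junction_resistance(1.0)
print(f"Rj ratio for a 10% wider gap: {ratio:.2f}")
```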
Figure 9 shows the tensile stress-strain curves and the relative resistance-strain curves of specimens with various mixing amounts of CNTs (0.4 wt %, 0.8 wt %, 1.2 wt %, 1.6 wt %, 2.0 wt %), obtained from the monotonic uniaxial tensile tests. Here the relative resistance ∆R/R is defined as the ratio of the resistance change to the initial resistance. It should be noted in Figure 9 that some piezoresistive instability existed at the initial stages of the curves; such instability was removed during the regression analysis of the curves.
As displayed in Figure 9, for the mixing amounts of 0.4 wt % and 0.8 wt % the relative resistance-strain curves have a convex shape and the gauge factors are high (Figure 9a,b). Here, the slope of the relative resistance-strain curve in the elastic stage is used to define the gauge factor, and the gauge factors for the various mixing amounts are summarized in Table 2. When the tensile strain is small, the modified matrix is in the elastic stage. Because of the small mixing amount of CNTs, few CNTs were jointed and no conductive network was formed, so the conductivity of the modified matrix depended mainly on the conductive paths formed by the tunnel effect between adjacent CNTs.
The junction resistance Rj between adjacent CNTs therefore played the leading role. At the micro level, the spacing of the CNT particles has a great influence on Rj, so at the macro level the resistance was very sensitive to changes in strain: the relative resistance increased markedly with increasing strain and the gauge factors took high values. As the strain continued to increase, a certain level of damage occurred in the modified matrix; at the micro level the spacing of the CNT particles exceeded the tunneling distance, so at the macro level the relative resistance became less sensitive to strain and its increase slowed down. When the mixing amounts were 1.2 wt %, 1.6 wt % and 2.0 wt %, the relative resistance-strain curves have a concave shape; the gauge factors decrease and the linearity increases. Under small tensile strain the modified matrix was in the elastic stage, and because of the larger mixing amount of CNTs the connected internal conductive network was well developed and contributed significantly to the conductivity, so the segment resistance Rc played the leading role. The number of jointed CNT particles decreased linearly with increasing strain, so the relative resistance increased linearly with strain and the gauge factors are relatively low. With a continuous increase of the strain, microcracks occurred in the modified matrix, resulting in fluctuations of the relative resistance values.
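The gauge factor defined above (the slope of the relative resistance-strain curve in the elastic stage) can be extracted from test records with a simple linear fit. The sketch below uses synthetic data and an assumed elastic-stage cutoff; it is not the regression procedure used for Table 2.

```python
import numpy as np

def gauge_factor(strain, rel_resistance, elastic_limit=0.01):
    """Slope of the (delta R / R0) vs strain curve fitted over the elastic
    stage, here taken as strain <= elastic_limit (an assumed cutoff)."""
    mask = strain <= elastic_limit
    slope, _ = np.polyfit(strain[mask], rel_resistance[mask], 1)
    return slope

# Synthetic monotonic-test record for illustration only.
strain = np.linspace(0.0, 0.03, 61)
rel_res = 12.0 * strain + 150.0 * strain**2   # mildly nonlinear response
print(f"gauge factor ~ {gauge_factor(strain, rel_res):.1f}")
```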
Force Sensitivity in Cyclic Uniaxial Tensile Tests
In the monotonic uniaxial tensile tests, nonlinear deformation occurred beyond about one third of the peak tensile load and the relative resistance-strain curves became nonlinear. To investigate whether the relative resistance response could reflect internal damage of the modified epoxy resins, cyclic uniaxial loading tests were conducted. The maximum tensile load during these tests was about 1100 N, and the cyclic loading ranged between 100 N and 2/3 of the mean tensile failure load. Because of the poor conductivity of the modified resin at mixing amounts of 0.4 wt % and 0.8 wt % and the poor dispersion of CNTs at mixing amounts above 2.0 wt %, only the mixing amounts of 1.2 wt %, 1.6 wt % and 2.0 wt % were chosen for the cyclic loading tests.
The test results are shown in Figure 10. The stress-strain curves show that all the specimens accumulated plastic deformation during the cyclic loading. The relative resistance response recovered well in each unloading cycle, while the peak value increased with time (i.e., with the number of loading cycles). Nonlinearity appeared at large strains, modifying the piezoresistive properties within the same loading cycles; this phenomenon, regarded as being caused by CNTs losing overlapping contact with each other so that tunneling resistance becomes dominant, is similar to the results reported in [13][14][15]. With the accumulation of plastic deformation, the relative locations of the CNTs in the modified epoxy resin underwent irreversible changes, damaging the conductive network. The variation of the relative resistance spanned several orders of magnitude, and the resistance response became more pronounced with increasing number of loading cycles, whereas the stress-strain curves showed no abnormality during the whole cyclic loading. In summary, the resistance response curve can serve as a useful warning of damage in the CNT-modified epoxy resins.
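One way to turn the cyclic records into a damage indicator, as suggested by the growth of the resistance peaks, is to track the peak relative resistance of each loading cycle. The sketch below does this on a synthetic signal with a known cycle period; it is a minimal illustration, not the processing used for Figure 10.

```python
import numpy as np

def cycle_peaks(time_s, rel_resistance, period_s):
    """Peak relative resistance of each loading cycle, assuming a known,
    constant cycle period; a rising trend in these peaks is read as
    accumulating damage in the conductive network."""
    n_cycles = int(time_s[-1] // period_s)
    peaks = []
    for k in range(n_cycles):
        mask = (time_s >= k * period_s) & (time_s < (k + 1) * period_s)
        peaks.append(rel_resistance[mask].max())
    return np.array(peaks)

# Synthetic record: five cycles whose resistance peaks grow cycle by cycle.
t = np.linspace(0.0, 50.0, 5001)
signal = (0.5 + 0.02 * t) * np.clip(np.sin(2 * np.pi * t / 10.0), 0.0, None)
print(cycle_peaks(t, signal, period_s=10.0).round(2))
```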
Conclusions
In this study, CNTs-EP was successfully prepared with good dispersion through the combined methods of three-roll milling, ultrasonic processing and the addition of a surfactant. The modified epoxy resins were then used to make tensile specimens for force sensitivity tests. Static resistivity tests of unloaded modified epoxy resins were conducted, and it was found that a mixing amount of 1.2 wt % was the conductive percolation threshold of the modified material, which provided a reference for the subsequent loading tests.
Monotonic and cyclic uniaxial tensile loading tests were then conducted on epoxy resin specimens with various mixing amounts of CNTs. It was found that when the mixing amount was below the conductive percolation threshold, the junction resistance Rj between adjacent CNTs played the leading role, whereas when the mixing amount exceeded the threshold, the connected internal conductive network contributed significantly to the conductivity of the modified epoxy resins and the segment resistance Rc became more important instead. In the cyclic uniaxial tensile loading tests, the accumulation of plastic deformation caused irreversible changes in the relative locations of the CNTs in the modified epoxy resins, damaging the conductive network and producing dramatic changes of the resistance (spanning several orders of magnitude) during the cyclic loading, which demonstrates that the resistance response can serve as a warning of damage in the CNT-modified material. CNTs-EP, with good self-sensing and self-adjusting properties, could act as the main matrix material of fiber reinforced polymer (FRP) reinforcement for use in structures to facilitate damage detection, reinforcement and repair. | 8,359.4 | 2018-07-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Doing Harm: A Reply to Klocksiem
In a recent article in this journal, Justin Klocksiem proposes a novel response to the widely discussed failure to benefit problem for the counterfactual comparative account of harm (CCA). According to Klocksiem, proponents of CCA can deal with this problem by distinguishing between facts about there being harm and facts about an agent's having done harm. In this reply, we raise three sets of problems for Klocksiem's approach.
Introduction
In a recent article in this journal, Justin Klocksiem proposes a novel response to the widely discussed failure to benefit problem for the counterfactual comparative account of harm. 1 This account can be formulated as follows: The Counterfactual Comparative Account of Harm (CCA): An event e harms a person S if and only if S would have been better off had e not occurred. 2 The failure to benefit problem is that CCA classifies some actions that intuitively merely fail to benefit a person as harming her. Klocksiem focuses on the following case: benefitted, this does not mean he is harmed, and the claim that Batman has harmed him seems preposterous. (p. 430) However, Klocksiem argues that proponents of CCA can deal with this problem by distinguishing 'between facts about the existence of harm and facts about attributions of having done harm to persons or other responsible entities' (p. 432). According to Klocksiem, while CCA does imply that Robin is harmed, this is acceptable so long as we can avoid the result that Batman does harm to, or harms, Robin. By showing that Batman is neither causally nor morally responsible for the harm that Robin suffers, Klocksiem argues, we can indeed avoid the latter result.
We shall argue that Klocksiem's approach is unsuccessful. After presenting its main features (section 2), we shall criticize it on three grounds. To begin with, it arguably fails to properly address the problem it was supposed to solve, and it presupposes an implausible view of what it is to do harm (section 3). Moreover, while Klocksiem emphasizes that CCA does not have the unacceptable result that a person is harmed every time she is not benefitted, his own approach has precisely that result (section 4). Finally, Klocksiem's approach has implausible implications in cases where an agent makes someone well off but would otherwise have made her even better off (section 5).
Klocksiem's approach
According to Klocksiem, earlier discussions of the failure to benefit problem have suffered from a neglect of the distinction between what it takes for there to be harm and what it takes for an agent (or other object) to do harm. CCA, he contends, is solely concerned with the former issue, and what it says about it is, moreover, entirely correct. The correct view about the latter issue, by contrast, is the following: Harming is Responsibility (HiR): An agent (or other object), S 1 , does harm to S 2 if and only if there exists an event, e, such that e is a final harm for S 2 , and S 1 is causally or morally responsible for e. 4 (p. 436) A final harm, Klocksiem explains, is the same as a non-instrumental harm. Assuming CCA, he says (p. 429), an event e is an instrumental harm for a person just in case (i) she would have been better off had e not occurred, and (ii) it is because e causes or otherwise produces some other event e*, such that the person would have been better off had e* not occurred, that (i) holds. An event e is a final harm for a person just in case it satisfies (i) but not (ii). As these claims strongly suggest, Klocksiem contends that any instrumentally harmful event owes its harmfulness to its connection with some finally harmful event, that is, at least one relevant e* is a final harm. Klocksiem illustrates: 'For example, stubbing one's toe is an instrumental harm. Although I would have been better off had I not stubbed my toe, this is because stubbing my toe produced an episode of pain, and I would have been better off had I not suffered that episode of pain … The episodes of pain that are associated with or caused by toe stubbings, by contrast, are non-instrumental harms. Events such as these are, in some sense, directly harmful - I am better off if I do not suffer the episode of pain, and this is not because the episode of pain is connected to some further event such that I would have been better off had that event not occurred.' (p. 429) Despite what this passage may seem to suggest, however, it is not the case that an event is a final harm for a person in Klocksiem's sense just in case it is intrinsically (or finally) bad for that person. For example, Klocksiem suggests that a person's death can harm her in virtue of the fact that it results in 'deprivations of well-being or other goods' (p. 438, n. 26), with those deprivations being the relevant final harms. On all standard views of well-being, such deprivations are not themselves intrinsically (or finally) bad for the person who dies. 4 Though he does not say so explicitly, Klocksiem apparently treats the expressions 'S 1 does harm to S 2' and 'S 1 harms S 2' as equivalent. We shall follow him in doing so.
Klocksiem proposes the following principle of causal responsibility: Causal Responsibility (CR): An agent (or other object), S 1 , is causally responsible for a final harm, e, suffered by S 2 if and only if S 1 initiates or makes a sufficiently significant causal contribution to a causal chain that results in e. (p. 437) While Klocksiem leaves many key notions in CR unspecified, he stresses that making only a minor causal contribution to a final harm, as opposed to, for instance, causing someone pain by punching her in the face, is not enough to be causally responsible for that harm on CR.
Klocksiem's principle of moral responsibility is the following: Moral Responsibility (MR): An agent, S 1 , is morally responsible for a final harm, e, suffered by S 2 if either S 1 is causally responsible for e in a way that violates one or more of S 1 's moral obligations, or S 1 failed to satisfy a moral obligation to prevent e, or to reduce its severity. 5 (p. 438) According to Klocksiem, these three principles, HiR, CR, and MR, together allow proponents of CCA to avoid the result that Batman does harm to Robin in Golf Clubs. To begin with, CR is not satisfied, as 'Batman's role in bringing about the harm to Robin is passive; Batman does not make an active contribution to a causal chain that affects Robin's well-being' (p. 439). According to Klocksiem, even if Batman does play an active and substantial role in causing various events that harm Robin on CCA, such as Batman's decision to keep the clubs, and his keeping the clubs, those events are only instrumental harms for Robin. Due to CR's restriction to final harms, then, Batman's playing an active and substantial role in causing those events does not imply that CR is satisfied. Furthermore, Klocksiem suggests that on a natural understanding of Golf Clubs, MR is not satisfied: there is no particular reason to think that Batman violates any moral obligation. By HiR, then, since Batman is neither causally nor morally responsible for any event that is a final harm for Robin, Batman does not do harm to Robin. 5 We shall assume that although MR as formulated by Klocksiem provides only a sufficient condition, it should be taken to state also a necessary condition. Obviously, Batman's not satisfying a merely sufficient condition for being morally responsible for a final harm does not rule out his being morally responsible for it.
Klocksiem's solution to the failure to benefit problem
It is not entirely clear how Klocksiem's approach is supposed to solve the failure to benefit problem. Those who find CCA's verdict implausible in Golf Clubs are unlikely to object only or primarily to the conclusion that Batman, the agent, does harm to (or harms) Robin. The failure to benefit problem is usually framed in terms of an event harming a person. 6 Thus, what critics of CCA typically find implausible in Golf Clubs is the conclusion that Batman's decision harms Robin. This conclusion remains true on Klocksiem's approach, since his approach includes CCA. It may thus seem that his approach fails to address the main part of the failure to benefit problem.
Klocksiem could, however, be interpreted as offering a debunking explanation of the intuition that Batman's decision does not harm Robin. This intuition is, he might claim, the result of a failure to distinguish between facts about there being harm and facts about doing harm. He might argue that once we are made aware of this distinction, and realize that it can be true that Batman's decision harms Robin even though it is false that Batman does harm to Robin, the intuition is dispelled.
An immediate worry about this debunking strategy is that it rests on a questionable explanation of the intuition that Batman's decision is harmless. We suspect that many would still object to the conclusion that Batman's decision harms Robin, even if they became convinced that Batman does not do harm to Robin.
A more serious problem for Klocksiem's strategy, however, is that it assumes a highly problematic view of the relation between an event's harming a person and an agent's doing harm to (or harming) that person. The reason is that it is implausible that S 1 may do something which is a harm for S 2 , without thereby harming or doing harm to S 2 . 7 This can be brought out in two closely related ways.
First, for any term 'X' that is used both as a noun and as a verb, it seems obvious that if there is an X which S performs, then S thereby Xs. Thus if there is a kick (run, shout, etc.) which S performs, then S kicks (runs, shouts, etc.). And it would be very surprising if 'harm' turned out to be an exception to this rule. Admittedly, since the verb 'harm' is normally used transitively, the phrase 'S harms' is somewhat anomalous, without a specified object of the harm. But the same is true of, for example, 'S insults' and 'S praises', and presumably nobody would consider this as evidence that 'insult' or 'praise' are exceptions to the general rule stated. Note also that 'S does harm' seems quite regular, even without a specified object.
Second, the inference from 'S 1 does something which is a harm to S 2 ' to 'S 1 does a harm to S 2 ' seems clearly valid, regardless of whether the harm in question is final or instrumental. The same holds for the inference from 'S 1 does a harm to S 2 ' to 'S 1 does harm to S 2 '. From 'S 1 does something which is a harm to S 2 ', then, we can infer 'S 1 does harm to S 2 '.
The upshot of these considerations is that it cannot plausibly be maintained both that Batman's decision harms Robin in Golf Clubs, as CCA implies, and that Batman does not do harm to Robin, as Klocksiem takes HiR to imply. This combination of views is inconsistent with the validity of the rules just cited. Since this combination of views is essential to Klocksiem's solution to the failure to benefit problem, we conclude that this solution fails. 6 For example, Hanna's (2016) discussion of the problem focuses on whether the event of Batman's changing his mind harms Robin (Hanna 2016, p. 253), and Purves's (2019) discussion focuses on whether Batman's action of keeping the clubs harms Robin (Purves 2019, p. 2632). While Johansson and Risberg (2020) discuss slightly different examples, they also focus on CCA's implications about certain intuitively harmless actions. Bradley's (2012) presentation of the problem involves both claims about whether Batman harms Robin and claims about whether certain events harm Robin (Bradley 2012, p. 397). 7 Klocksiem mentions this objection (p. 442), but does not consider the support for it that we provide below.
Indeed, the foregoing also provides more direct reasons to think that CCA proponents are, after all, forced to accept that Batman does harm to (and harms) Robin. For, again, CCA implies that Batman does something, deciding against giving Robin the clubs, that is a harm to Robin. By the rules cited, Batman thereby does harm to (and harms) Robin.
Negative events
We have argued that Klocksiem's approach fails to provide a plausible solution to the failure to benefit problem in its standard form. In this section and the next, we shall argue that Klocksiem's approach, to a much greater extent than CCA, also leads to two even more serious versions of the failure to benefit problem.
According to Klocksiem (p. 434), it is important to note that CCA does not have the unacceptable implication that a person is harmed every time she is not benefitted. Clearly, any view that has that implication thereby faces an extreme version of the failure to benefit problem. Ironically, however, Klocksiem's own approach seems to have precisely this implication.
The point that CCA lacks this implication, and thereby avoids colossal overgeneration of harm, is a familiar one. 8 In order for CCA to imply that a person is harmed, there has to be some event such that the person would have been better off had it not occurred. In many cases in which a person is not benefitted, however, there seems to be no event that satisfies this condition. Indeed, even events such as decisions not to benefit a person often fail to satisfy this condition. Klocksiem offers the following illustration: Fifty Dollars: Clark has a $50 bill in his pocket, and for no particular reason, contemplates whether to give it to Jimmy. Clark decides not to give the $50 bill to Jimmy. (p. 434) Assuming that Jimmy would have been better off if Clark had given him the $50 bill, CCA may seem to obviously entail that Jimmy is harmed, in particular by Clark's decision not to give him the $50 bill. But it might well be false that if Clark had not made this decision, then he would have given Jimmy the $50 bill; perhaps he would instead have remained indecisive or forgotten about the whole thing. 9 Moreover, there is no particular reason to think that any other event in the case leaves Jimmy worse off than he would have been had it not occurred. As a result, Jimmy might well fail to be harmed on CCA.
In our view, this is a plausible line of reasoning. It is important to note, however, that it presupposes the non-existence of 'negative' events consisting in a person's not having a certain property, and such that the person would have had that property if the negative event in question had not occurred. (It may seem obvious that any putative negative event of the form S's not being F is such that S would have been F if that event had not occurred. But this is not entirely clear. For example, a possible view is that any negative event of the form S's not being F consists in some positive, perhaps very complex, event of the form S's being G, where the fact that S is G is what explains why S is not F. And it may not always be true that if S had not been G, then S would have been F. There are of course further questions about how the distinction between positive and negative events is to be drawn in a principled way, but for present purposes, these questions can be set aside.) Suppose, for example, that there is an event such as Jimmy's not receiving the $50 bill, and that Jimmy would have received the $50 bill (and hence would have been better off) if this event had not occurred. On this supposition, CCA entails that this event harms Jimmy. More generally, given the existence of negative events like this one, CCA does imply, after all, that a person is harmed every time she is not benefitted. In fact, given the existence of such events, CCA also has the even more implausible implication that a person is harmed virtually every time she is benefitted; for example, even if Jimmy had received the $50 bill, there would have been an event such as Jimmy's not receiving a $1000 bill, which would have harmed him according to CCA.
8 See e.g. Feit (2015, p. 385), Hanna (2016, p. 252). 9 See especially Hanna (2016, p. 252). Klocksiem himself says that '[i]n ordinary circumstances, the nearest possible worlds in which it is not the case that Clark decides not to give Jimmy the $50 bill are worlds in which Clark does not even consider giving it to him' (p. 435). This seems to presuppose objectionable backtracking.
The problem for Klocksiem is that negative events of this sort seem to be required in order for his approach to avoid another problem. As noted in section 2, Klocksiem recognizes that CCA implies that Batman's decision to keep the clubs, as well as various other events, harms Robin in Golf Clubs. As Klocksiem apparently also recognizes, Batman obviously makes a very significant causal contribution to a causal chain, in particular to his own process of deliberation, that results in his decision. Hence, if Batman's decision were a final harm for Robin, HiR and CR would jointly imply that Batman does harm to Robin, the verdict that Klocksiem's approach is primarily designed to avoid. Of course, Klocksiem contends that Batman's decision is not a final harm, but an instrumental harm (see section 2). In that case, Batman's significant role in causing his decision does not imply that he does harm to Robin on HiR and CR.
Here, however, it is crucial to return to Klocksiem's view of instrumental and final harm (see again section 2). To repeat, according to him, every instrumental harm owes its harmfulness to its causing or otherwise producing some final harm. Assuming CCA, moreover, an event e is an instrumental (final) harm for a person just in case e satisfies CCA's condition, where this is (is not) because e produces some other event e* which satisfies CCA's condition. Now what could be a relevant final harm for Robin in Golf Clubs? No 'positive' event seems to be a promising candidate. To begin with, there is no reason to suppose that Batman's decision produces any event that is intrinsically bad for Robin, such as a painful experience. If it did, Golf Clubs would not even initially appear to constitute a problem for CCA. What about an event such as Robin's continuing to use his old golf clubs? Even supposing that this event occurs, and that it is in some sense produced by Batman's decision (we might assume, at least, that it would not have occurred had Batman not decided to keep the clubs, as Robin would have used the new ones instead), there is no reason to suppose that Robin would have been better off had this event not occurred. In particular, it would presuppose objectionable backtracking to claim that if Robin had not continued to use his old clubs, then Batman would have given him the new clubs earlier. It is much more natural, and requires a considerably smaller divergence from the actual past, to suppose that in the nearest possible world in which Robin does not continue to use his own clubs, he uses some third set of clubs instead, or refrains from playing golf (and Batman keeps his clubs). Similar remarks apply to other positive events to which Klocksiem might try to appeal, such as, for example, Robin's being happy to degree 5 or Batman's playing golf with his new clubs (even assuming that they occur).
Given the existence of negative events of the sort described above, on the other hand, it does seem possible for Klocksiem to provide a relevant final harm, that is, one that the harmfulness of Batman's decision could derive from, on his approach. Suppose that Robin would have been happy to degree 10 had Batman not made his decision. Given the existence of negative events of the pertinent sort, one event that occurs is e* = Robin's not being happy to degree 10. Plausibly, while e* satisfies CCA's condition, this is not because it causes or otherwise produces some other event which satisfies CCA's condition. Moreover, it seems plausible to say that Batman's decision in some sense produces e* (at least, e* would not have occurred had Batman not made this decision), and also that this is why Batman's decision satisfies CCA's condition. In this way, Batman's decision avoids being a final harm: it owes its harmfulness to its connection with e*, which is a final harm. 10 Consequently, Batman's active and substantial role in causing his decision does not imply that he does harm to Robin on HiR and CR. Obviously, however, this line of reasoning presupposes the existence of negative events of the pertinent sort. And as already explained, given the existence of negative events of this sort, Klocksiem is, in the end, committed to the unacceptable claim that a person is harmed every time she is not benefitted, and, indeed, at virtually all times at which she is benefitted.
A further problematic case
Klocksiem's approach also faces yet another version of the failure to benefit problem, a version that, like the one in the previous section, seems even more serious than the standard one (involving cases like Golf Clubs), and more serious for Klocksiem's approach than for CCA. This version concerns the implications of Klocksiem's approach for a case that is very similar to Golf Clubs. Consider: More Golf: Batman has bought two sets of golf clubs: one that is of extremely high quality and one that is of slightly lower quality, but still very good. Now he has three options: (A1) to give Robin the slightly inferior clubs; (A2) to give Robin the better clubs; and (A3) to keep all the clubs for himself. Batman performs (A1), which makes Robin's well-being level increase significantly. If Batman had not done so, he would have given Robin the even better set of clubs, whereby Robin's well-being level would have been even higher. 11 Klocksiem's approach seems to imply that Batman does harm to Robin in More Golf. To begin with, CCA implies (counterintuitively) that Batman's action of giving Robin the slightly inferior clubs harms Robin, as Robin would have been better off if Batman had not performed that action. Moreover, Batman undeniably makes a very significant causal contribution to a causal chain that results in this action. Hence, if this action is a final harm for Robin, HiR and CR together imply that Batman does harm to Robin. 10 One might worry that Batman is causally responsible for e* itself, in which case HiR implies that he does harm to Robin after all. As we understand Klocksiem, however, he takes the causal connection that needs to hold between instrumental and final harms to be considerably weaker than the one that CR's condition requires to hold between agents and final harms. 11 This case is taken, with minor modifications, from Johansson and Risberg (2020, p. 1543).
Of course, Klocksiem might suggest that Batman's action in More Golf is only an instrumental harm for Robin. Given his view of instrumental and final harm, this requires the harmfulness of Batman's action to derive from some final harm which the action causes or otherwise produces. One candidate for such a final harm is e* = Robin's becoming the owner of the slightly inferior golf clubs. Plausibly, Robin would have been better off if e* had not occurred (as he would then have received the even better clubs instead). Moreover, this does not seem to be because e* causes or otherwise produces any further event that harms Robin according to CCA. 12 By taking e* to be a final harm for Robin, then, Klocksiem avoids having to say that Batman's action is a final harm for Robin. However, his view still implies that Batman does harm to Robin. Since Batman undeniably makes a very significant causal contribution to a causal chain that results in e*, HiR and CR together imply that Batman does harm to Robin.
The implication that Batman does harm to Robin in More Golf is problematic for Klocksiem's approach for several reasons. For one, this result is counterintuitive in its own right. Moreover, because Klocksiem's approach implies that Batman does harm to Robin in More Golf, it provides no help for proponents of CCA in dealing with the theory's problematic implication that Batman's action harms Robin in that case. In particular, the type of debunking explanation considered in section 3, namely, that this intuition rests on a confusion of what it is for an event to harm someone with what it is for an agent to do harm to someone, is unavailable, since Klocksiem's approach and CCA together imply both that Batman does harm to Robin and that his action harms Robin. Klocksiem's defence of CCA thus fails to generalize.
Last but not least, More Golf reveals that Klocksiem's approach is committed to several implausible combinations of verdicts. It implies that Batman does harm to Robin in More Golf but not in Golf Clubs, a pair of views which seems very hard to support on independent grounds. (Indeed, if it is at all justifiable to treat the two cases differently, it would surely be more plausible to claim that Batman harms Robin in Golf Clubs but not in More Golf.) It also implies that although Batman actually does harm to Robin in More Golf, by giving him the slightly inferior golf clubs, Batman would not have done harm to Robin if he had kept all the clubs for himself. This is absurd, especially from the perspective of a counterfactual comparative view of harm, since the former action leaves Robin much better off than the latter would have done. Proponents of CCA, on the other hand, easily avoid commitment to these combinations of views, unless, of course, they adopt Klocksiem's attempt to defend it. 13 Competing Interests. The authors declare there are no competing interests. 12 At least this is so provided that there are no relevant 'negative' events, such as Robin's not owning the better clubs, involved in the case. If there are such events involved, Klocksiem's view again faces the problems discussed in section 4. 13 Thanks to Ben Eggleston and two anonymous referees for very helpful comments. Erik Carlson's and Jens Johansson's work on this article was supported by Grant 2018-01361 from Vetenskapsrådet and Grant P21-0462 from Riksbankens Jubileumsfond. Olle Risberg's work on this article was supported by Grant 2020-01955 from Vetenskapsrådet. | 6,280.4 | 2023-05-29T00:00:00.000 | [
"Philosophy"
] |
Experimental study on the thermal characteristics of urban mockups with different paved streets
Pavements in urban areas absorb more sunlight because of the canyon-like geometry of cities and store more heat because of the large thermal bulk of concrete. Heat released from pavements warms the urban air, contributing to the urban heat island. Recently, the use of cool pavements to reduce pavement temperature as an urban heat island mitigation strategy has gained momentum. Understanding the temperature and solar insolation of a pavement in an urban area is important for adopting the right cool pavement option in the right place. This study measured the temperature of paved streets in an urban mockup for 4 days in summer. It is found that east-west (EW) streets are the hottest places in an urban area, followed by the intersection, and finally the south-north (SN) street, and that increasing the pavement's albedo reduces the pavement temperature effectively. The dark gray pavement in an open space is hotter than that in an urban canyon. Heat storage in the building blocks keeps the pavement more than 2 °C warmer at nighttime. The EW street is exposed to solar insolation for long hours, so it is the most suitable place for preferentially developing reflective cool pavements.
Introduction
Urbanization is replacing soils and grass surfaces with buildings, pavements, and other sealed surfaces. Buildings and the pavements between two adjacent buildings create a canyon-like geometry, which absorbs more sunlight than buildings and pavements in open areas (Aida 1982; Aida and Gotoh 1982; He et al. 2019). A great part of the heat stored in pavements and buildings is released as sensible heat that warms the air in urban areas (Anandakumar 1999; He 2019; Yang et al. 2020a, 2020b). As a result, in summer, the air in urban and metropolitan areas is significantly hotter than the air in the surrounding rural areas, a well-known phenomenon called the urban heat island effect (Phelan et al. 2015; Mohajerani et al. 2017). The urban heat island effect directly decreases pedestrian thermal comfort (He et al. 2021; Tan et al. 2021), reduces urban environmental quality (Yang et al. 2020a, 2020b; Yang et al. 2021), and increases urban energy usage (Santamouris 2013). As pavements typically cover 20-40% of the land in an urbanized area (Akbari and Rose 2001), the deployment of cool pavements in urban streets has been touted as a strategy for urban heat island mitigation (Santamouris et al. 2011; Akbari and Matthews 2012).
The science and technology of reducing pavement temperature (i.e., cool pavement) has been well documented. The temperature of a traditional pavement can be lowered by increasing the pavement reflectance (Taha 1997; Akagawa et al. 2008), by increasing the evaporative cooling of the pavement (Hendel et al. 2016; Wang et al. 2018), and by other techniques that decrease the pavement temperature (Hasebe et al. 2006; Chiarelli et al. 2015). The reflectance of a pavement can be increased by coating the pavement surface with a high-reflectance pigment (Feng et al. 2012), sealing the pavement with light-colored layers (Tran et al. 2009), and other methods (Levinson and Akbari 2002). Increasing the cooling capacity of a pavement can be achieved by developing water-retaining pavements that hold water at the surface layer for subsequent evaporative cooling (Wang et al. 2019). Pavement temperature can also be reduced by harvesting the heat of a pavement for sustainable uses and by embedding phase-change materials in a pavement to convert the absorbed heat to latent heat rather than sensible heat (Bo et al. 2011; Jiang et al. 2019). Details about techniques to reduce pavement temperature can be found in Santamouris (2013).
However, it remains unknown how to find the right cool pavement option for the right place. Takebayashi and Moriyama (2012) simulated the temperature and solar absorption of an urban street canyon using the Monte Carlo method; they found that reflective pavement should only be considered in a street canyon with an aspect ratio (street width to building height) greater than 1.5. In practice, the temperature of pavements in an urban canyon differs from site to site because of variations in the sky view factor, urban geometry, urban materials, solar radiation, city latitude, etc. (Anandakumar 1999). For instance, on a sunny day, the intersection is insolated longer than other places and might therefore be expected to be the hottest place in the urban canyon. The actual temperature distribution among the intersection, east-west (EW) street, and south-north (SN) street remains little known. Understanding the temperature distribution in an urban street is important for helping urban planners adopt the right cool pavement option for the right place.
The goal of this study is to measure the temperature distribution in a typical urban canyon and thus to identify the hottest place on a paved street in the urban canyon. An urban mockup with an aspect ratio of 1.0 (building height to street width) was built, and the temperature of the paved street in the urban mockup was measured. Another urban mockup with white streets was set up side by side to determine whether increasing the reflectivity of the paved streets can cool down the pavement effectively. How the heat released from the building blocks affects the temperature of the urban street at night is also studied.
Experiments
To measure the temperature of paved streets in an urban canyon, we prepared an urban mockup consisting of a group of cubic concrete blocks. Each concrete block was a hardened dense Portland cement concrete cube with a density of 2350 ± 30 kg/m³ and a length of 0.15 m on each side. The blocks were arranged as indicated in Fig. 1. The ratio of the building height to the street width was set as 1.0. The urban mockup, in the top view, was a square consisting of eight cubic blocks on each side. The mockup was placed on the rooftop of a five-floor, 18-m-tall building to minimize shading during the experiment. The building is located in Nanning, Guangxi (longitude 108.29°, latitude 22.84°). The roof was a new double-skin roof that has interlocked tiles as the top layer, an 8-cm-thick air layer below the tiles as the insulation layer, and a roof deck as the base. The tiles were hardened reinforced concrete slabs with a thickness of 3.0 cm. Details about this roofing structure can be found in Qin et al. (2017).
The temperature of typical street sections of the urban mockup was measured. Considering the symmetry of the mockup, we measured the temperature of an L-shaped street section that consists of the intersection, the north-south street, and the east-west street (Fig. 2). In this section, 42 thermocouples were mounted on the paved street surface to log the local temperature (Fig. 2). To obtain a representative temperature, each thermocouple was anchored to the upper surface of a 1 mm × 5 mm × 5 mm copper plate, the sensor was first attached to the paver surface with thermal grease, and the entire thermocouple was covered with aluminum foil. After all thermocouple-mounted plates were anchored, the paved street, the rooftop, and the building walls of the urban mockup were painted a uniform color to ensure that the pavement is heated evenly. The pigment was selected such that the urban facets have an albedo of 0.30-0.40, which represents the albedo of common concrete surfaces in a city. After testing the reflectance spectra of a series of pigments and estimating the albedo from the spectra, a gray pigment with an albedo of about 0.35 was selected and used to paint the paved street, the rooftop, and the building walls of the urban mockup (Fig. 1). This urban mockup is called the gray mockup. The thermocouples and their wires were also painted the same color as the rest of the gray mockup (Fig. 1).
Near the urban mockup, for comparison, we used the same pigment to paint an open square of the same size but without concrete blocks. A thermocouple was anchored at the middle of this open square to log the local temperature, representing the temperature of the same pavement in an open area. Above the middle of the open area, an albedometer was leveled at a height of 0.5 m to log the incoming and reflected solar irradiance. (Fig. 1 caption: Two urban mockups, each a 2.2 m × 2.2 m square, were set up side by side to compare the temperature of paved streets with different colors.) The lower pyranometer of the albedometer was fitted with a baffle such that the detector of the pyranometer sees only the underlying mockup. Similarly, above the urban mockup, another albedometer with the same baffle on the lower pyranometer was centered and leveled at 0.5-m height to read the incident solar irradiation and the radiation reflected from the mockup. The albedo of the urban mockup and of the slab was estimated according to the method proposed by Qin et al. (2018).
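The albedo itself is simply the ratio of the reflected to the incident irradiance logged by each albedometer. The snippet below is a minimal sketch of that calculation, assuming hypothetical file and column names for the logger output (they are not taken from the study).

```python
# A minimal sketch (not the authors' code) of deriving albedo from the logged
# albedometer readings; the CSV file and column names are hypothetical placeholders.
import pandas as pd

log = pd.read_csv("albedometer_log.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Keep near-noon readings only, where the solar elevation is high and the ratio is reliable.
noon = log.between_time("11:00", "13:00")

# Instantaneous albedo = reflected irradiance / incident irradiance (both in W/m^2).
albedo = (noon["reflected_W_m2"] / noon["incident_W_m2"]).clip(0, 1)

print(f"mean near-noon albedo: {albedo.mean():.2f}")
```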
Close to these two squares, another urban mockup and another open square were prepared for a comparison side by side (Fig. 1). They had the same geometry as the gray urban mockup and the same cubic blocks as the building, except that the street of this mockup was painted white. The goal was to examine if increasing the reflectivity of the paved street in an urban canyon can effectively cool the street. Only the temperature in the middle of the intersection was measured because of the limitation of the measurement capacity of the data logger. Similarly, the open square was painted white and the temperature in the middle of the square was logged.
Both the temperature and the radiation were logged simultaneously by three Campbell CR3000 loggers at 1-min intervals. To reduce measurement errors of the apparatus, the CR3000 was shaded and the cable length from the tip of each thermocouple to the CR3000 was the same. The measurement lasted from June 18 to June 22, 2019, a period of partially sunny days without rain. The global horizontal solar irradiance during the measurement is shown in Appendix 1 for reference.
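As a rough illustration of how such 1-min logs can be reduced to the daily mean values discussed in the following sections, the sketch below groups hypothetical thermocouple columns by street section; the file name and column naming scheme are assumptions, not the study's actual data layout.

```python
# Hypothetical sketch: resample 1-min thermocouple logs to daily means per street section.
import pandas as pd

temps = pd.read_csv("thermocouple_log.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Assumed column names such as "EW_01", "SN_07", "INT_03" encode the street section
# (EW street, SN street, intersection) of each of the 42 thermocouples.
section = temps.columns.str.split("_").str[0]

# Daily mean per sensor, then averaged over the sensors belonging to each section.
daily_mean = temps.resample("D").mean().T.groupby(section).mean().T
print(daily_mean.round(1))
```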
Temperature of an urban mockup over the course of a day
The instantaneous temperatures of the representative paved streets in the urban mockup differ from place to place (Fig. 3). At midday (12:00), the EW street is the hottest place, followed by the intersection, and finally the SN street (Fig. 3a). This order seems reasonable because the EW street is always exposed to sunlight while the SN street always has some parts under shade. Compared to other places, the intersection has the highest sky view factor. Because of this high sky view factor, the intersection drains the absorbed heat faster than both the EW and SN streets. As a result, although the intersection is exposed to sunlight just as the EW street is, it is not the hottest place on the paved street in the urban mockup. Unlike the EW street and the intersection, the SN street always has part of its area under shade because the sun rises in the east and sets in the west, making the SN street the coolest place on the paved street in the urban mockup at midday.
In the afternoon, the EW street is still the hottest place (Fig. 3b). At 15:00, half of the NS street has been shaded and the building wall close to the shadow has been shaded for hours. As a result, the west side of the SN street is the coolest place (Fig. 3b), about 3-5°C cooler than the EW street. The east walls of the buildings along the SN street face the sun, so the area close to those building walls is the hottest place on the SN street. At this time, most of the EW street is still exposed to sunlight, so it stays hot. At the intersection, the hottest spot is located in the south part because some of the north part of the intersection has been shaded. At midnight (24:00), the intersection is the coldest place, while the temperature of the NS street is close to that of the EW one. The reason for this is that the intersection drains the heat absorbed during the daytime fastest, because it has a greater sky view factor than both the NS and EW streets. In the urban mockup, the intersection can be about 0.1°C cooler than both the NS and EW streets. In a real urban setting, the buildings and the streets have greater thermal inertia, and this temperature difference can be larger.
The daily mean temperature of an urban mockup
The daily mean temperature of the paved street in the urban mockup further substantiates that the EW street is the hottest place, followed by the intersection, and finally the SN street (Fig. 4). The temperature difference is about 0.5-1.0°C. The coldest place is the east side of the SN street. This is reasonable because the east side of the SN street is shaded in the morning when the local air temperature is still cool. The intersection is not as hot as the EW street because the intersection has a larger sky view factor and receives less heat radiated and reflected from the building walls. The EW street has almost the same insolation time as the intersection but a lower sky view factor, making it the hottest place in the canyon of the urban mockup.
Gray pavements in open space are hotter than those in urban mockup
Pavements in the open area are hotter than paved streets in the urban mockup, especially during the daytime. During the daytime, the centers of the intersection, EW street, and NS street of the mockup with the gray pavement are about 3-5°C, 4-6°C, and 6-10°C cooler, respectively, than the center of the open area with the gray pavement (Fig. 5). This is surprising because the urban mockup absorbs more sunlight than the pavement in the open area due to the sunlight-trapping effect of the urban canyon (Appendix 2). A possible reason is that the pavement in an open area is directly exposed to sunlight without shade. Another reason is that the urban mockup has a greater thermal inertia and thus better resistance to temperature rise when exposed to sunlight. Because of this thermal inertia, at nighttime the pavement in an urban canyon is hotter than that in the open area, as the heat emitted from this pavement and from the nearby cubic blocks is partially trapped in the canyon. The difference, however, is much smaller than the difference during the daytime.
Increasing the albedo of paved street reduces temperature effectively
Increasing the albedo of pavements in the urban mockup greatly reduces their temperature. During the daytime, the center of the mockup with the white pavement (T_imw) is 5-10°C cooler than the center of the mockup with the gray pavement (T_img). This difference indicates that increasing the albedo of the paved street in an urban area effectively cools down the street. During the daytime, the temperature at the center of the intersection of the mockup with the white pavement (T_imw) shows little to no difference from the temperature at the center of an open area with the white pavement (T_ow) (Fig. 6). This minor difference between T_imw and T_ow means that the albedo of the pavement dominates the pavement temperature, while the urban geometry in this setup plays only a secondary role (Fig. 6). Although we did not measure the temperature at all places in the urban mockup, we can conclude that increasing the albedo of paved streets in an urban area like this mockup can decrease the street temperature to a degree comparable to that of the same pavements in open areas.
Heat storage in the building blocks warms the pavement at night
In Fig. 6, at nighttime, the center of the intersection (T_imw) is about 2°C warmer than the center of the pavement in the open area (T_ow). The difference, T_imw − T_ow, starts at 18:00 (sunset) and ends at 6:00 (sunrise) the next day. During this time span, there is no solar irradiation and both pavements have the same emissivity. As a result, the leading reason for the difference, T_imw − T_ow, is that the pavement in the mockup absorbs the heat emitted from the cubic blocks. From sunset to sunrise, the difference is almost constant, which indicates that the heat stored in the blocks is not exhausted during this time span. As the thickness of a real building wall is almost the same as the thickness of the cubic blocks in the urban mockup, we can conclude that the building wall can keep the pavement at the center of the intersection 2°C warmer. As the center of the intersection in the mockup has the largest sky view factor and thus the smallest view factor to the building walls, one can further infer that the heat released from building walls warms pavements elsewhere in an urban area by more than 2°C.
Discussion
The urban mockups used in this study differ from a real urban morphology; the experiments were carried out with an urban mockup of uniform building height to reach a general conclusion about the albedo and temperature of an urban canyon, although building heights in reality are not uniform and building shapes are not necessarily cubical. The ratio of building height to street width also differs and varies in space. All these differences affect the albedo of a real urban surface. Nevertheless, although the urban mockup of 2.2 × 2.2 m² used in the experiment is much smaller than a real city, it is much larger than the wavelength of the incident radiation, so diffraction can be ignored (Qin et al. 2016). The authors believe the proposed model is useful because the parameters that dominate the albedo and temperature of an urban canyon are researchable and controllable in the mockup.
The experiment above demonstrates the albedos and temperatures of paved streets and of pavement in open areas. The albedo varies with time and has a nadir near solar noon, an observation in accordance with those of Aida (1982) and Akbari et al. (2008). During sunny days, the albedo of the gray urban mockup is about 0.10-0.15 lower than that of the gray pavement slab in the open area; this is because the photons reflected from an urban mockup surface are partially intercepted by the other surfaces, resulting in multiple reflections, which increase the absorption and decrease the albedo. As expected, the pavement temperature in the center of the urban mockup is much lower than that of the open area. A crucial aspect is the multiple reflection of solar radiation in urban canyons. This finding is consistent with Garcia-Nevado et al. (2021), who attributed the effect to inter-reflections within the canyon that lead to a radiative trapping phenomenon. The findings of this study also confirm the importance of increasing the shaded area in urban areas to improve thermal comfort. As shown in the temperature nephograms in this study (Figs. 3 and 4), daily solar radiation on a paved street dominates the surface temperature. In the previous literature (Taleghani et al. 2015; Yuan et al. 2017), the mean radiant temperature is a scalar for determining thermal comfort in the urban region. Without shading, pedestrians are directly exposed to sunlight, which reduces their thermal comfort. When thermal comfort is considered, shading factors can dilute the importance of other variables such as urban albedo, greening rate, and orientation (Yang et al. 2011).
In this study, the experiment was carried out on June 18-22. During this period, and near the Tropic of Cancer, the sun is nearly directly above the experiment location. As a result, the EW street is exposed to sunlight for long hours during the daytime and shows a higher temperature than the NS street. On other dates, the solar position is different, so the sunlight falling on the paved street will be different. However, as the daytime temperature of the paved street is directly related to solar insolation and its duration, it is more reasonable to develop reflective pavements on a street that is exposed to sunlight for a longer time. The positive results of the current study indicate that reflective pavement is an attractive option for reducing the temperature of paved streets as an urban heat island mitigation strategy. In the next study, we will explore the impact of different canyon geometries, concrete block sizes, urban-block spacings, and pavement colors (i.e., albedos of paved streets) on the pavement temperature throughout an entire year.
Conclusion
This study measured, side by side, the temperature of paved streets in two urban mockups and the temperature of two paved slabs with the same colors as the paved streets. It is found that in an urban street near the Tropic of Cancer, the hottest place is the EW street, followed by the intersection, and finally the SN street. On a partially sunny day, the daily mean temperature at the center of the EW street can be 3-5°C hotter than that of the SN street. The reason is that the EW street has the longest duration of solar insolation. Therefore, the EW street is the most suitable place to develop reflective cool pavements as a strategy for urban heat island mitigation. Our measurements show that increasing the albedo of pavements in the urban canyon can effectively cool down the pavement (by about 5-10°C). In addition, it is found that after a partially sunny day, the heat released from the building blocks can keep paved streets about 2°C hotter than pavements in the open air at nighttime.
Although reflective pavements have been advocated as a possible solution for reducing the urban surface temperature, there is sparse information on the effect of solar reflective coatings on pedestrians. Future experiments are expected to assess the thermal impacts of albedo increases on pedestrians and to observe the long-term temperatures of paved streets in different regions, in order to reach a general conclusion on the use of reflective pavement as an urban heat island mitigation strategy.
Symbols
R: albedo
T: temperature, °C
T_img: temperature at the center of the intersection of the mockup with gray pavement, °C
T_emg: temperature at the center of the east-west pavement of the mockup with gray pavement, °C
T_smg: temperature at the center of the south-north pavement of the mockup with gray pavement, °C | 5,262.6 | 2021-03-01T00:00:00.000 | [
"Engineering"
] |
Stock Price Determinants: Empirical Evidence from Muscat Securities Market, Oman
Dharmendra
Stock price is one of the main indicators for measuring firm performance and also the only factor determining shareholders' wealth. Stock price changes are based on information related to the firm and the market as a whole. This paper focuses on the determinants of the share prices of the twenty-six non-financial companies listed on the Muscat Securities Market, Oman. In this study, the closing annual stock price from 2011 to 2016 is the dependent variable, and firm-specific variables such as firm size (logarithm of total assets), dividend payout, earnings per share (EPS), debt ratio, price-earnings (PE) ratio, and the first lag of the dependent variable (stock price) are the independent variables in a panel data regression using the random effects model. There are two categories of research hypotheses: the first is based on the semi-strong form of the Efficient Market Hypothesis (EMH) and the second on Arbitrage Pricing Theory (APT). To test the second set of hypotheses, the oil price, the growth rate in GDP, and the consumer price index are considered as independent variables, as they affect business performance and hence stock prices. EPS, debt ratio, and the first lag of stock prices are significant determinants of stock prices. Dividend payout, firm size, and PE ratio are insignificant variables.
Introduction
In today's world, the performance of businesses and corporations plays a very important role in a country's position as a world leader. The per capita income, employment rate, and other economic variables depend a lot upon the performance of business houses in that country. The stock price of a company fluctuates according to the performance of the business and the economy as a whole. The timing of, and decisions about, buying and selling stock depend upon the stock price level. When an investor decides to invest in a stock, he always looks for strong and growing companies; the value of a firm is reflected in its stock price, and that is how an investor without any finance knowledge selects stocks: by stock price movements.
One of the key sources of financing for listed firms is the stock issue, and for a successful stock issue, firms need to have a strong track record in the stock market. There are various stakeholders in a business, such as shareholders, creditors, customers, employees, and the government. A rising stock price is an indicator of good management and a source of satisfaction for all stakeholders. There are company-specific and market-related determinants of stock prices; in the literature, many theories are available that explain movements in stock prices.
One of the most significant theories is the Efficient Market Hypothesis (EMH), which is based on the assumption that rational investors in the market react to the available information, such as company fundamentals and other important declarations about the company, to decide on buying or selling a stock. If they perceive the information as positive, they retain the shares they already hold or buy those they did not purchase earlier, and vice versa. This buying and selling of stocks by investors is responsible for changes in the stock price. There are three forms of EMH (weak, semi-strong, and strong form), which vary in the information assumed to be available to the public and investors. Another theory, the 'random walk', states that stock prices are random and cannot be predicted by any means. This theory has been empirically tested and supported many times by researchers. A random walk is consistent with the EMH, as the flow of information is random, which leads investors to continually reassess the stock price.
The third theory, 'Behavioral Finance Theory', is very different from the random walk and EMH theories. This theory states that investors do not behave rationally; rather, they invest according to psychological and behavioral factors. For example, they will invest in a stock if its price is increasing, even if there are no significant changes in the company's fundamentals.
Gordon [1] revealed that the dividend payment and the growth rate of the company have an impact on the intrinsic value of shares. The model was based on the assumption of constant growth in dividends, which is one of its weaknesses, but it is still a widely used model for calculating the intrinsic value of a stock. This model claims that the expected dividend and the growth rate of the company are positive determinants of stock prices.
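For readers unfamiliar with the constant-growth model, the short sketch below shows the usual valuation formula P0 = D1 / (r − g); the numbers are invented for illustration and are not drawn from the sample companies.

```python
# Illustrative only: the constant-growth (Gordon) model with made-up inputs.
def gordon_price(next_dividend: float, required_return: float, growth_rate: float) -> float:
    """Intrinsic value of a stock under constant dividend growth (requires r > g)."""
    if required_return <= growth_rate:
        raise ValueError("The model is undefined unless the required return exceeds growth.")
    return next_dividend / (required_return - growth_rate)

# Example: expected dividend of 0.10 OMR, 10% required return, 4% dividend growth.
print(gordon_price(0.10, 0.10, 0.04))  # 0.10 / 0.06 ≈ 1.67 OMR per share
```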
A considerable amount of research has been done to find the internal determinants of companies' share price changes; some of the common factors found are dividend yield, total assets, earnings per share, capital structure, and book value per share. Apart from internal variables, macroeconomic variables also have an impact on share prices, as discussed by Roll and Ross [2] in their arbitrage pricing theory (APT), a framework for pricing securities for investors. According to Ross, common macroeconomic factors affecting share prices were unexpected changes in inflation, GDP, and changes in the yield curve. The APT model is flexible, as investors can also select other factors depending on the market; for example, for oil-exporting and oil-importing countries, the oil price can be an important factor affecting security prices. Mukherjee and Naka [3] supported the APT by confirming the impact of economic variables on stock returns; they argued that changes in economic variables affect dividend payments and discount rates and thus have an impact on share prices as well.
In the present study, an attempt has been made to study the impact of selected internal and macroeconomic determinants of the share prices of 26 listed nonfinancial companies in the Muscat Securities Market. A lot of work has been done on this topic, but most studies focus on establishing a relationship between dividend policy and stock prices. To the best of the researcher's knowledge, this is the pioneering study on the Oman capital market based on the stock price determinants of companies from the Muscat Securities Market. In previous studies from GCC countries [4][5][6] and studies from other countries, authors have not examined any specific sector for share price determinants. Another contribution of this study is that it is based exclusively on nonfinancial companies. The nature of the balance sheet of financial companies differs from that of nonfinancial companies in terms of leverage, current assets, and fixed asset composition. Therefore, to study the impact of company-specific determinants on share prices, separate samples of financial and nonfinancial companies would yield better results than studying a mix of all types of companies.
The whole chapter is organized into five sections, including the introduction. Section 2 describes the literature review. Section 3 discusses the methodology and data. Section 4 presents the empirical results and their discussion. Section 5 presents the conclusion with policy implications. Collins [7] was the pioneering work on the determinants of share prices based on the US market; its findings recognized the book value of equity, dividends, net profit, and operating cash flows as significant factors affecting share prices.
Review of literature
Nirmala et al. [8] used a fully modified least squares regression model on panel data for 37 Indian companies from 2000 to 2009. The study identified the price-earnings ratio, leverage, and dividend per share as the major determinants of share prices. In the Indian context, a similar study was conducted by Tandon and Malhotra [9]; they tried to identify the determinants of stock prices for 100 companies listed on the National Stock Exchange (NSE) using a linear regression model from 2007 to 2012. The results indicated that a firm's book value, earnings per share, and price-earnings ratio have a significant positive association with the firm's stock price, while dividend yield has a significant inverse association with the market price of the firm's stock.
Malhotra and Prakash [10] studied the determinants of stock prices of Indian companies during 1990-1999 with the help of correlation and regression analysis. Book value per share, dividend per share, market to book ratio and PE ratio emerged as the significant determinants of the share prices.
Oseni [11] studied the impact of earnings per share (EPS), oil price, dividend per share (DPS), GDP, foreign exchange rate and interest rates on share prices of 130 companies from the Nigerian stock exchange. The study revealed a strong positive correlation between stock prices and EPS, oil price, dividend per share and GDP. Gjerde and Saettem [12] studied the relationship between stock returns and macroeconomic variables like inflation, real economic activity and oil prices in Norway. The empirical study revealed that inflation is not a significant variable for changes in stock prices. However, there was a positive relationship between oil price and stock price.
Irfan et al. [13] attempted to explain the impact of six company variables (dividend yield, dividend payout ratio, leverage, size of the firm, earnings volatility, and asset growth rate) on the stock prices of Pakistani companies during the period 1981-2000, using a regression model. Al-Tamimi et al. [5] investigated the key determinants of the stock prices of 17 companies listed on the UAE stock market during 1990-2005. The regression results indicated EPS as a strong determinant with a positive impact on share prices; the consumer price index was found to be statistically significant with a negative coefficient. Money supply and GDP were found to have positive coefficients, but they were statistically insignificant.
Allen and Rachim [14] tested the effect of dividend policy on stock price volatility with control variables such as leverage, growth, earnings volatility, and firm size. Data on 173 companies listed on the Australian stock market from 1972 to 1985 were analyzed with the help of cross-sectional regression analysis. The results showed a significant positive relation between stock price volatility and leverage, size, and earnings volatility. It was also concluded that dividend policy does not influence stock price volatility. Apart from the studies mentioned above, a few more important studies from different markets are identified and mentioned in Table 1.
In the existing literature, there is a mixed opinion on the determinants of stock prices and their positive or negative impact. Very few studies are based on GCC countries and none of them from Muscat securities market, Oman. This study thus fills the gap by researching the impact of select firm-specific and economic variables on the stock prices of the nonfinancial sample companies listed in Muscat securities market, Oman.
Data and variables
Based on the available literature and data, the author has identified the dividend payout ratio, debt ratio, earnings per share (EPS), logarithm of total assets (a proxy for company size), and price-earnings ratio as the regressors of the stock price in this study. Roll and Ross [2], in their arbitrage pricing theory (APT), demonstrated the relevance of macroeconomic variables in stock pricing. Based on the literature, economic variables such as the growth rate in GDP, the consumer price index, and crude oil prices have also been considered as external variables affecting stock prices. The well-known study by Fama and Schwert [23] was also based on the relationship between stock prices and inflation. Oman, being a net exporter and depending mainly on oil and gas exports, is facing the impact of low oil prices. The economy of Oman, like those of other GCC countries, is driven by oil and gas, so considering the oil price as an independent variable makes sense.
The dividend payout ratio is the ratio of the dividend paid to total earnings; it represents the percentage of earnings distributed in the form of dividends to shareholders. The payout ratio is considered one of the important variables affecting the stock price, as the current stock value is the discounted value of future cash flows from that stock. The second variable, the debt ratio, is defined as the ratio of total debt to total assets, expressed as a decimal or percentage. It can be interpreted as the proportion of a company's assets that are financed by debt. It is a measure of the financial risk on the assets of a company, and higher financial risk will affect the returns and consequently the price of a stock. The third variable considered in the study is EPS, which measures the income generated per share. It is the ratio of net income to the number of shares outstanding. In most studies, EPS has emerged as a significant variable with a positive impact on share prices. In the literature, many studies have tried to measure the impact of the size of the company on stock prices. Some of them have used the logarithm of sales as the proxy for company size and, in some cases, the logarithm of total assets. Both sales and total assets are indicators of business size. Many investors make their investment decisions based on company size, as bigger companies are more stable in terms of profit and less prone to the business cycle. The price-earnings ratio, commonly known as the PE ratio, is one of the prime indicators used in stock selection by investors. The PE ratio is the ratio of the market price of a stock to its EPS. It is a measure of investors' confidence in a stock and reflects investors' anticipation of higher growth in the future. The Gordon growth model confirms the role of the growth rate of the company in the intrinsic value of the stock.
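As a hypothetical illustration of how these firm-specific regressors are typically computed from raw annual financial statements, the sketch below builds them with pandas; the column names and numbers are placeholders rather than data from the sampled companies.

```python
# Hypothetical sketch: constructing the firm-specific regressors from raw annual figures.
import numpy as np
import pandas as pd

fin = pd.DataFrame({
    "net_income":   [12.5, 14.0],
    "dividends":    [5.0, 5.5],
    "total_debt":   [40.0, 42.0],
    "total_assets": [150.0, 160.0],
    "shares_out":   [100.0, 100.0],
    "close_price":  [1.80, 2.05],
})

fin["eps"]             = fin["net_income"] / fin["shares_out"]    # earnings per share
fin["dividend_payout"] = fin["dividends"] / fin["net_income"]     # payout ratio
fin["debt_ratio"]      = fin["total_debt"] / fin["total_assets"]  # leverage
fin["log_assets"]      = np.log(fin["total_assets"])              # proxy for firm size
fin["pe_ratio"]        = fin["close_price"] / fin["eps"]          # price-earnings ratio
print(fin.round(3))
```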
Hypotheses
The following hypothesis statements were formulated on the basis of the available literature and theory, which provide the scope and depth of the study.
Hypotheses H01 to H06 are framed to test the reflection of publicly available information in stock prices, based on the semi-strong form of the EMH.
H01: There is no significant effect of the size of the company on its share price.
H02: There is no significant effect of the dividend payout ratio on the share price.
H03: There is no significant effect of EPS on the share price.
H04: There is no significant effect of leverage on the share price.
H05: There is no significant effect of the price-earnings ratio on the share price.
H06: There is no significant effect of the first lag of the share price on the current share price.
The following hypotheses are framed to confirm the impact of economic variables on stock returns, based on arbitrage pricing theory (APT):
H07: There is no significant effect of the crude oil price on the share price.
H08: There is no significant effect of inflation on the share price.
H09: There is no significant effect of growth in GDP on the share price.
Panel data analysis
Panel data analysis has been used to analyze the impact of firm-specific and macroeconomic determinants on the share prices of the listed nonfinancial companies in Oman. Panel data always have advantages over time-series and cross-sectional data. Panel data analysis weakens the interaction between the variables, which results in more reliable parameter estimates (Hsiao [24]). This technique is considered more efficient as it reduces the collinearity of the predictor variables and also offers gains in degrees of freedom. The research study uses both panel data methods, that is, the fixed effects method and the random effects method. The better method is then selected by applying the Hausman test. The fixed effects and random effects models are represented by Eqs. (1) and (2), respectively:

CP_jt = β_0j + β_1 CP_j,t−1 + β_2 Dividend_jt + β_3 EPS_jt + β_4 Leverage_jt + β_5 GDP_jt + β_6 Inflation_jt + β_7 Size_jt + β_8 Oil_jt + β_9 PE_jt + μ_jt   (1)

CP_jt = β_0 + β_1 CP_j,t−1 + β_2 Dividend_jt + β_3 EPS_jt + β_4 Leverage_jt + β_5 GDP_jt + β_6 Inflation_jt + β_7 Size_jt + β_8 Oil_jt + β_9 PE_jt + μ_jt   (2)

where CP_jt = annual closing price of firm j's stock in year t; β_0 = common intercept; β_1-β_9 are the coefficients of the corresponding explanatory variables; ε_jt = stochastic error term for firm j at time t; β_0j = firm j's intercept; and μ_jt = error term for firm j at time t.
Based on the literature on share price determinants, the following company-specific variables (dividend payout ratio, leverage, earnings per share, size of the company, and price-earnings ratio) and three economy-based variables (growth rate in GDP, inflation rate, and crude oil prices) were selected as the predictor variables in the regression analysis. Apart from these variables, the first lag of the yearly closing share price was also considered as a predictor variable.
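The fixed-effects versus random-effects comparison and the Hausman test described above can be sketched in a few lines of Python using the linearmodels package; the data file, index structure, and column names below are hypothetical placeholders, not the study's actual dataset, and the Hausman statistic is computed manually from the two sets of estimates.

```python
# Minimal sketch (assumed data layout) of the fixed-effects / random-effects comparison.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical panel: one row per firm-year, indexed by (firm, year).
df = pd.read_csv("msm_panel.csv").set_index(["firm", "year"])
df["lag_price"] = df.groupby(level="firm")["close_price"].shift(1)
df = df.dropna()

exog_cols = ["lag_price", "dividend_payout", "eps", "debt_ratio",
             "log_assets", "pe_ratio", "gdp_growth", "inflation", "oil_price"]
y, X = df["close_price"], df[exog_cols]

fe = PanelOLS(y, X, entity_effects=True).fit()   # fixed effects, Eq. (1)
re = RandomEffects(y, X).fit()                   # random effects, Eq. (2)

# Hausman test: H0 = the random effects estimator is consistent and efficient.
diff = fe.params - re.params
stat = float(diff.T @ np.linalg.pinv(fe.cov - re.cov) @ diff)
# Compare against a chi-squared distribution with len(exog_cols) degrees of freedom.
print(fe.summary, re.summary, f"Hausman chi2 = {stat:.3f}", sep="\n")
```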
Data analysis
This section presents the results of the panel data analysis, which are reported in Table 2. Both the fixed effects and random effects models were used to measure the impact of the selected independent variables on the stock prices of the sample companies. The Hausman specification test was then used to select the better of the two models. The null hypothesis in the Hausman test is that the preferred model is random effects, and the alternative hypothesis is that the preferred model is fixed effects.
According to the results of the fixed effects model, earnings per share, the log of total assets (a proxy for company size), and crude oil prices are significant determinants of the changes in stock prices. All three variables have a positive relationship with share prices. The macroeconomic variables growth rate in GDP and consumer price index are found to be insignificant in explaining the changes in share prices.
The results of the Hausman test are reported in Table 3, and according to these, the null hypothesis is not rejected. Therefore, the random effects model is taken to be the better model for analyzing this panel data. The R-squared value is also quite high, with 93.23% of the variation in stock prices explained by the regression model. In the random effects model, among the company-specific variables used in this study, the lag of share prices, earnings per share, and leverage are the statistically significant variables. The two variables earnings per share and the first lag of share prices are significant even at the 1% level of significance. The lag of the share price has a positive coefficient, which means that previous increases in share prices contribute to the increase in the next year's share price. Investors invest according to stock price movements; this result supports the behavioral theory of finance. Earnings per share (EPS) is one of the most dominant determinants of share prices, with the highest positive regression coefficient of 12.16, significant at 1%. Debt to total assets (leverage) is also significant and is positively related to the share prices of the sample companies. Dividend payout has proved to be an insignificant determinant of share prices, which supports the irrelevance of dividend policy to firm value. The logarithm of total assets (size of the company) and the PE ratio are also not significant determinants at the 5% level.
Of the three external variables, the inflation rate and the crude oil price are significant at the 10% level of significance. The result for the inflation rate is consistent with previous studies, showing a negative impact on share prices [21,23]. Since Oman is an exporter of crude oil, oil prices are significant determinants and have a positive impact on share prices. The growth rate in GDP is not found to be an important or significant variable for share prices in Oman.
Conclusion
The study aimed at investigating the effect of dividend payout, EPS, the log of total assets, debt ratio, PE ratio, and the previous year's stock price on the current stock price of 26 listed nonfinancial companies in Oman. Three economic variables (growth rate in GDP, crude oil prices, and the consumer price index) are also considered as independent variables in this study.
The empirical analysis is based on random effects regression analysis with the stock price as the dependent variable. Based on the data analysis, the study finds that EPS has a significant positive effect on the price of common stock. The value of the coefficient for EPS (12.16) is the highest among all the independent variables. In the majority of the existing studies, EPS has shown the same relationship with the stock price [5,6,15,19,20]. EPS is a direct measure of shareholders' earnings on one share, and stocks with high EPS are commonly selected by equity analysts. The debt ratio (leverage) is also a significant variable with a positive relationship with the stock price. Conceptually, higher debt capital is an indication of financial risk, and hence investors avoid such stocks. The reason for the positive relation between leverage and stock price could be the low percentage of debt capital in the sample companies, as up to a certain level debt capital is favorable for stockholders, which has been explained by the concept of 'trading on equity.' The first lag of the stock price is also significant and has a positive effect on the current stock price, consistent with Şebnem and Vuran [16]. This finding supports 'Behavioral Finance Theory', which explains the inconsistent behavior of investors toward theories and concepts. Dividend payout is an insignificant determinant of stock prices, and the results are consistent with previous studies [9,15,18]. Although the intrinsic value of a stock depends on future dividends, this result may be due to market anomalies or investors giving more weight to capital gains. Firm size is not significant; this result shows that investors do not give any preference to bigger, established firms.
Test Summary            Chi-Square Statistic    Probability
Cross-section random    0.000000                1.0000
Macroeconomic changes also influence stock prices; inflation is negatively related to stock prices, which supports the well-known study of Fama and Schwert [23]. They justified the negative relationship by arguing that 'an increase in inflation causes uncertainty and reduces future economic activity and thus future earnings of the firm which results in a reduction of stock price.' | 5,114.2 | 2018-08-01T00:00:00.000 | [
"Economics",
"Business"
] |
Nanoarchitectonics of Layered Metal Chalcogenides-Based Ternary Electrocatalyst for Water Splitting
The research on renewable energy is actively looking into electrocatalysts based on transition metal chalcogenides because nanostructured electrocatalysts support higher intrinsic activity for both the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER). Electrochemical water splitting is a major technique for facilitating the conversion of renewable and sustainable energy. The aim of this review is to discuss the revelations made when trying to alter the internal and external nanoarchitectures of chalcogenide-based electrocatalysts to enhance their performance. To begin, a general explanation of the water-splitting reaction is given to clarify the key factors determining the catalytic performance of nanostructured chalcogenide-based electrocatalysts. To delve into the many ways being employed to improve the HER's electrocatalytic performance, the general fabrication processes utilized to generate the chalcogenide-based materials are described. Similarly, to enhance the OER performance of chalcogenide-based electrocatalysts, the applied complementary techniques and the strategies involved in designing bifunctional water-splitting electrocatalysts (HER and OER) are explained. As a concluding remark, the challenges and future perspectives of chalcogenide-based electrocatalysts in the context of water splitting are summarized.
Introduction
The utilization of renewable energy sources is becoming increasingly popular in the development of next-generation energy devices. However, most of them are highly dependent on seasonal and regional factors; thus, a reliable method needs to be developed to convert this unstable and surplus energy into stable energy. In recent decades, fuel-cell technology has been booming in the energy sector; it can convert hydrogen into usable electric energy or combust it directly to provide thermal energy. It is well known that hydrogen (H2) is a powerful energy carrier with a high energy density, and there are numerous existing approaches to produce it. One of the key methods of producing H2 is water electrolysis, which offers the benefits of higher efficiency, remarkable adaptability, and almost no carbon emissions [1]. The cathodic hydrogen evolution reaction (HER) and anodic oxygen evolution reaction (OER) are the two common electrolysis reaction pathways, and several electrochemical reactions take place depending upon the electrolytes used. In actuality, the thermodynamics of water electrolysis to produce H2 and O2 at standard temperature and pressure (25 °C, 1 atm) is not favorable, since the operating system requires a theoretical potential of 1.23 V (see the short worked check after this paragraph). Furthermore, real-time water electrolysis involves complicated ionic and electron transport mechanisms leading to lower efficiency and sluggish kinetics, and it necessitates a higher applied potential. Due to the above-mentioned reasons, there will always be a significant surplus potential (or overpotential) applied compared to the theoretical potential [2]. Numerous studies have been performed to date on improving electrolyzer efficiency and researching the reaction mechanisms in order to consume less energy. Generally, reducing the kinetic barrier and increasing the reaction rate rely greatly on the development of effective electrocatalysts. It is commonly known that an acidic electrolytic medium is favored more in water splitting compared to an alkaline medium, as it offers compactness and also the potential to run in the reversible mode (such as fuel cells) [3]. To date, the benchmark catalysts in water-splitting activity are based on noble metals such as platinum (Pt), palladium (Pd), ruthenium (Ru), etc., which can exhibit nearly zero overpotential [4]. Nonetheless, the high cost and scarcity of these noble metals as electrocatalysts limit their commercial applicability. As a result, highly effective alternative electrocatalysts with high activity, abundance, and low cost are a crucial research direction in water splitting [3]. Consequently, several electrocatalysts, notably those based on transition metals and composed of elements widely distributed in the earth, have been thoroughly researched for water electrolysis (OER and HER). The developed electrocatalytic materials include single metals, alloys, oxides, hydroxides, nitrides, sulfides, selenides, phosphides, arsenides, tellurides, sulfoarsenides, etc., and some of these have become the most representative [5][6][7]. However, their performance is still insufficient for deployment at pilot scale. Thus, to boost the catalytic activity of the developed materials, research has been pursued along two general strategies. The first strategy is to deploy large-surface-area microstructured or nanostructured electrocatalysts, because large-surface-area electrocatalysts ease the necessity for high catalyst loading [8]. The second is to create composites or alloys by combining one metal with other metals, which can increase the number of active sites and allows reduced catalyst loading [9]. The non-homogeneous configuration of nanostructures in composites, on the other hand, limits the stability of the electrocatalytic activity [10,11]. As a result, the current demand for H2 production from water splitting is driving the search for electrolyzers (or catalytic materials) that offer a large-scale, sustainable, and affordable method.
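As a quick sanity check of the 1.23 V theoretical potential mentioned above, the snippet below applies the standard relation E = ΔG/(nF) with textbook values; the constants are standard reference data, not results from this review.

```python
# Quick check of the 1.23 V theoretical water-splitting potential from E = dG / (n * F),
# using standard textbook values (dG of liquid water formation ~ -237.1 kJ/mol at 25 C, 1 atm).
F = 96485.0     # Faraday constant, C/mol
dG = 237.1e3    # J/mol required to split one mole of liquid H2O
n = 2           # electrons transferred per H2 molecule
E_theoretical = dG / (n * F)
print(f"E = {E_theoretical:.2f} V")  # ~= 1.23 V
```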
In this context, two-dimensional (2D) layered materials such as transition metal chalcogenides (TMCs) have been actively investigated as efficient catalysts in several energy-related applications, including water splitting [7,12-15]. Non-noble TMC-based electrocatalysts such as MoS2 [16,17], WS2 [16,18], NiS2 [19,20], FeS2 [21], MoSe2 [22,23], WSe2 [22,24], WTe2 [25], and their composites are considered promising candidates due to the catalytically active sites exposed by the atomically thin nature of 2D materials. As mentioned, TMCs are layered materials in which a sheet of transition metal atoms is sandwiched between two sheets of chalcogen atoms [26]. The presence of van der Waals bonds (weak forces) between the layers of TMCs allows their exfoliation down to single layers, while the difference in oxidation state (metal +4 and chalcogen −2) leads to stabilization of the nanosheet structure through the strong ionic bonds formed between the metal and chalcogen atoms [15,27]. Additionally, a wide range of electrocatalysts as alternatives to noble metals can be achieved by developing binary, ternary, and quaternary TMCs by controlling the stoichiometry of the transition metals and chalcogens.
Among them, ternary TMCs have been evidenced by both theoretical and experimental results to be more amenable electrocatalysts, suggesting the possibility of optimizing the transition metals to fine-tune their electronic states [28,29]. Layered ternary TMCs may represent a new class of effective electrocatalysts that merits in-depth investigation, since they not only open new avenues for investigating advanced functional composites (catalysts) but also broaden and enhance the family of high-performance water-splitting (HER) catalysts. Even though several ternary TMCs have been developed, there have only been a few sporadic studies on their electrocatalytic (HER) activity to date [30]. Additionally, understanding of their electrocatalytic process, such as the ability to pinpoint the active sites, falls well short of what scientific investigation requires. For H2 production, identification of the active sites of the electrocatalysts is crucial. Thus, for designing catalytic materials, it is essential to identify the active site of the reaction together with the desired structural engineering and composition. Such activity supports catalyzing H2 generation at a much lower onset potential with remarkable durability [31,32]. Therefore, this review aims to emphasize the main theme of nanoarchitectonics and the electrochemistry of the different layered ternary TMCs and their electrocatalytic activity in water-splitting reactions. The review does this by starting with the basic principles and electrochemistry of water splitting and its governing factors, followed by the nanoarchitectonics of the layered TMCs. A special focus is provided for the different nanoarchitectonics describing the morphology and size-controlled structures of various TMCs and their associated electrocatalytic activities toward the production of H2. It is important to overview the prevailing trends in the electrocatalytic activity of layered ternary TMCs and the strategies to improve them for water-splitting reactions. Finally, the challenges and perspectives of ternary TMCs are presented to inform future research directions toward water-splitting electrocatalysts.
HER
The hydrogen evolution reaction (HER) is commonly recognized as a two-electron transfer involving three potential main steps, with an adsorption-desorption intermediate process, for the reduction of water molecules in alkaline media (Figure 1), or of protons in acidic media, for hydrogen (H2) production [10,33,34]. The Volmer reaction (Equations (1) and (2)) is the initial step, in which an electron and one proton adsorbed on the catalytic site (M) react to create an adsorbed hydrogen atom (H_ads) on the surface of the electrocatalyst. In alkaline and acidic electrolytes, water molecules and the hydronium cation (H3O+) serve as the proton sources, respectively. H2 production then occurs either by the Heyrovsky reaction at low H_ads coverage (Equations (3) and (4)) or by the Tafel reaction at high H_ads coverage (Equation (5)) on the surface of the catalyst. Another proton diffuses to the H_ads during the Heyrovsky step, where it combines with a second electron, thereby producing H2. In the Tafel step, H2 is created when two nearby H_ads atoms join on the electrode's surface. According to mechanistic research, H2 originates either via the Volmer-Heyrovsky or the Volmer-Tafel pathway. A series of simple steps can sum up the HER process [35,36]:
(1) Electrochemical hydrogen adsorption (Volmer reaction): M + H3O+ + e− → M−H_ads + H2O (acidic media) (1); M + H2O + e− → M−H_ads + OH− (alkaline media) (2)
(2) Electrochemical hydrogen desorption (Heyrovsky reaction): M−H_ads + H3O+ + e− → M + H2 + H2O (acidic media) (3); M−H_ads + H2O + e− → M + H2 + OH− (alkaline media) (4)
(3) Chemical hydrogen desorption (Tafel reaction): 2M−H_ads → 2M + H2 (both acidic and alkaline media) (5)
The formation of the H intermediate (H*) is a step in both the Volmer-Heyrovsky and Volmer-Tafel pathways. In order to evaluate the activities of HER catalysts, ΔGH* is considered a crucial criterion. H* should not be bound by an active catalyst either too weakly or too strongly. The adsorption (Volmer) step will restrict the overall reaction rate if the H* binds to the surface too weakly (ΔGH* > 0), whereas the desorption (Heyrovsky/Tafel) step will limit the reaction rate if the H* binds to the surface too strongly (ΔGH* < 0) (Figure 1). Thus, the ideal HER electrocatalyst should possess a ΔGH* of nearly zero [4,34,37]. It is crucial to create facile methods to obtain the ΔGH* value when assessing a catalyst's HER activity. Based on density functional theory (DFT) calculations, the recently developed computational quantum chemistry offers a suitable way to determine the ΔGH* value. The ΔGH* value can be calculated by simulating the potential intermediates that could occur on electrocatalyst surfaces during the HER process (ΔGH* = ΔEH* + ΔEZPE − TΔSH). The differential hydrogen adsorption energy can be calculated by using isolated H2 as a reference state, the zero-point energy change (ΔEZPE) for adsorbed H* and isolated H2 can be obtained by vibrational frequency calculation, and ΔSH is the difference in entropy between the adsorbed state and the possible intermediates formed on the surface of electrocatalysts during the HER process (T was set to 300 K) [32]. As mentioned, DFT calculations are becoming more important in determining ΔGH* values as computational science and theoretical calculations advance. As a result, the design of volcano relationships for different catalysts has been made possible.
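The bookkeeping in ΔGH* = ΔEH* + ΔEZPE − TΔSH is simple enough to sketch in a few lines; the energy values below are invented placeholders (in practice ΔEH* comes from DFT total energies and ΔEZPE/ΔSH from vibrational analyses), so the snippet is illustrative only.

```python
# Illustrative sketch of the hydrogen adsorption free energy, dG_H* = dE_H* + dE_ZPE - T*dS_H.
def delta_g_h(e_surf_h: float, e_surf: float, e_h2: float,
              d_ezpe: float, t_ds: float) -> float:
    """Hydrogen adsorption free energy (eV) for one H* on the catalyst surface."""
    d_eh = e_surf_h - e_surf - 0.5 * e_h2   # differential adsorption energy vs. 1/2 H2
    return d_eh + d_ezpe - t_ds

# Example with made-up values (eV); the result is ~0 eV, i.e. close to the ideal case.
print(delta_g_h(e_surf_h=-215.72, e_surf=-212.10, e_h2=-6.76,
                d_ezpe=0.04, t_ds=-0.20))
```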
OER
Similarly, the oxygen evolution reaction (OER) proceeds through a four-step electron-transfer process involving numerous intermediates (such as *OOH, *O, and *OH) under both alkaline and acidic conditions, following the mechanism shown in Equations (6)-(11) [38,39] (the generic elementary steps are written out below). The kinetics of the OER are significantly more sluggish than those of the HER, as it involves four-electron transfer and the formation of O-O bonds while serving as an indispensable half-reaction in the water-splitting process. In general, multiple electron transfers occurring simultaneously do not favor the reaction kinetics, and according to certain findings, the OER involves several phases with a single electron transfer each. For a thermodynamically ideal electrocatalyst, the total free-energy variation of the OER is 4.92 eV, which can be spread evenly among the four elemental stages with the various adsorbates (*OH, *O, *OOH, and O2) [40]. The difference between the adsorption free energies of the reaction intermediates *O and *OH is one of the universal descriptors for the OER. It is possible to reduce the free-energy variation by placing the *O binding energy at the desired position between those of *OH and *OOH [41].
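In their generic form (written here as an assumed, textbook-style reconstruction, with M denoting a surface active site; the exact grouping into Equations (6)-(11) follows the original source), the OER elementary steps are commonly expressed as:
Under alkaline conditions:
M + OH− → M−OH + e−
M−OH + OH− → M−O + H2O + e−
M−O + OH− → M−OOH + e−
M−OOH + OH− → M + O2 + H2O + e−
Under acidic conditions:
M + H2O → M−OH + H+ + e−
M−OH → M−O + H+ + e−
M−O + H2O → M−OOH + H+ + e−
M−OOH → M + O2 + H+ + e−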
Governance Index for Water Splitting (OER and HER)
Some crucial parameters can help to evaluate the catalytic activity of the developed materials and determine whether they are suitable as electrocatalysts for water splitting. The overpotential, Tafel plot, turnover frequency, stability, and Faradaic efficiency are primary among them [42].
Overpotential
The equilibrium potentials for catalyzing the HER and OER are 0 and 1.23 V (both vs. RHE), respectively. However, due to the intrinsic kinetic barrier, an additional potential beyond the equilibrium potential is required to drive the catalytic reactions of both the OER and HER. This additional potential is described as the overpotential. Electrochemical measurements using cyclic voltammetry (CV) or linear sweep voltammetry (LSV) carried out on the materials determine the total catalytic activity. To assess the overall electrode activity, the overpotential at a specific current density is often reported. A smaller value of the overpotential represents a higher electrochemical activity of the electrode [43][44][45].
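As a purely hypothetical worked example: an OER electrode that must be driven to 1.53 V vs. RHE to deliver 10 mA/cm2 exhibits an overpotential of η10 = 1.53 − 1.23 = 0.30 V (300 mV), whereas an HER electrode reaching −10 mA/cm2 at −0.15 V vs. RHE has η10 = 150 mV; these figures are illustrative only and do not refer to any of the catalysts reviewed here.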
Tafel Plot
The Tafel plot is derived from the LSV curve and illustrates how the electrochemical kinetics link the overpotential to the rate of an electrochemical reaction. The Tafel equation, η = a + b log j, can be used to fit the linear part of a Tafel plot. Here, η corresponds to the overpotential, b is the Tafel slope, and j is the current density. Another measure used to evaluate the intrinsic catalytic activity for water electrolysis is the exchange current density (jo), which is the value of j obtained from the Tafel equation when the applied overpotential is zero. For the water-splitting reactions, the ideal electrocatalysts should have a small b and a large jo [44,46,47].
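As a minimal sketch of how these two quantities could be extracted in practice, the following Python snippet fits the Tafel equation to a few points from the linear region of an LSV curve; the data values are assumptions for illustration, not measurements from the cited works.

  import numpy as np

  # Hypothetical points from the linear Tafel region of an LSV curve
  eta = np.array([0.10, 0.12, 0.14, 0.16, 0.18])   # overpotential, V
  j = np.array([1.0, 2.2, 4.8, 10.5, 23.0])        # current density, mA/cm^2

  # Fit eta = a + b*log10(j); the slope b is the Tafel slope (V/dec)
  b, a = np.polyfit(np.log10(j), eta, 1)
  j0 = 10 ** (-a / b)   # exchange current density: j where the fitted line gives eta = 0

  print(f"Tafel slope b = {b * 1000:.1f} mV/dec, exchange current density j0 = {j0:.3g} mA/cm^2")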
Turnover Frequency
The turnover frequency (TOF) measures the number of molecular reactions that take place at a catalytic site per unit of time under specific conditions and is used to assess the intrinsic activity of the active sites. The TOF is calculated by applying the following Equation (14): TOF = (jA)/(αFn) (14) where j is the current density taken from the LSV results at a specific overpotential; F is the Faradaic constant (96,485.3 C mol−1); n is the number of moles (mol) of covered metal atoms on the electrode, calculated from the mass (g) divided by the molar mass (g mol−1); A is the working electrode surface area; and α stands for the catalyst electron number (electrons mol−1) [44,48].
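A minimal Python sketch of Equation (14), using assumed (illustrative) loading and operating values rather than data from any of the cited works, would look as follows:

  F = 96485.3           # Faradaic constant, C mol^-1
  alpha = 2             # electrons transferred per H2 molecule (HER)
  j = 10e-3             # current density at the chosen overpotential, A cm^-2 (assumed)
  A = 1.0               # geometric electrode area, cm^2 (assumed)
  mass = 0.5e-3         # catalyst loading, g (assumed)
  molar_mass = 160.0    # molar mass of the active metal compound, g mol^-1 (assumed)

  n = mass / molar_mass             # moles of metal atoms on the electrode
  tof = (j * A) / (alpha * F * n)   # turnovers per site per second
  print(f"TOF = {tof:.3e} s^-1")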
Stability
Stability is a significant metric used to assess a catalyst's capacity to withstand specified or long-term working conditions. There are typically two ways to assess catalytic stability: (i) LSV measurement after performing 1000 CV cycles, and (ii) chronopotentiometry or chronoamperometry. A robust catalyst never exhibits significant changes in potential or current density over extended operation [46].
Faradaic Efficiency
The term "faradaic efficiency" refers to the effectiveness of electron transmission in a system that facilitates an electrochemical process such as OER and/or HER in watersplitting reactions.The faradaic efficiency can be obtained by comparing the experimental and pre-determined computational (theoretical) results of the produced gas (O 2 or H 2 ).The gas chromatography (GC) measurements or the water displacing method can be used to calculate the amount of generated gas (H 2 or O 2 ).Additionally, a rotating ring disc (RRD) electrode voltammetry will also be used to measure the amount of oxygen (O 2 ) evolved [42,48,49].
Single-Layered Ternary TMCs
The layered ternary TMCs have a great deal of promise in overwhelming the constraints associated with binary TMCs such as MoS 2 in H 2 production.To evidence, the performance of the ternary TMCs, Tiwari et al. demonstrated highly stable layered TMCs, which consist of Mo and Cu (two transition metals) sandwiched with S (one chalcogen) (i.e., Cu 2 MoS 4 ) through the facile solution-processing approach.Furthermore, the results also demonstrated a novel approach to exfoliating the layers of the developed materials (Cu 2 MoS 4 ) by means of anionic doping of larger-sized Se.The representation of the exfoliated ternary TMCs is provided in Figure 2a [50].Generally, in TMCs, the transition metal ions reduce the ∆GH of H ads on the edge site of chalcogen atoms and tend to boost their catalytic performance.For the adsorption of hydrogen in the instance of Cu 2 MoS 4 , there are three different types of edge sites: Cu, Mo, and S. Most of the catalytically active sites are Mo-edges, which are partially covered by sulfur atoms absorbed.Incorporating Cu ions causes the sulfur (S) edges to likewise become catalytically active.In this case, Se ions (negatively charged; −2) take the place of part of the S ions (negatively charged; −2) in Cu 2 MoS 4 to become Cu 2 Mo(S y Se 1-y ) 4 .The Mo-binding Se-edges become catalytically active when the Se ions are present at the S-edges.As a result, the single-layered ternary TMC achieved by controlled doping of Se (Cu 2 Mo(S y Se 1-y ) 4 ) resulted in the enhanced electrocatalytic activity of HER.The single-layered ternary TMCs exhibit robust electrocatalytic (HER) activity with an onset potential of 96 mV with the corresponding Tafel slope of 52 mV/dec compared with other multilayered ternary TMCs.Since there are more active sites in single-layered Se-doped materials than in multilayered samples, they are more effective against HER.This is further supported by the fact that single-layered TMCs have 68 times more electrochemical active area than bulk (multilayered) samples.Further evidence that the higher Se atom doping is not as effective for HER due to the inhibition of the active S vacancy, which is responsible for HER's hydrodesulfurization, comes from the fact that multilayered samples had a 23 times lower electrochemical active area than single-layered TMC.Moreover, the developed single-layered ternary TMCs were reported to be highly stable in electrolysis even after 1000 cycles for 15 h in acidic electrolytic conditions, suggesting a new method for the real-time applications of TMCs in H 2 production [21,36,50,51].
A related experiment demonstrated the decoration of molybdenum selenide (MoSe2) nanodots (NDs) over the basal planes of few-layered Cu2MoS4 nanosheets through an in situ solution-processing approach for the electrocatalytic HER activity in water splitting. The interaction of the chalcogen atoms in the ternary TMCs and the transition metals in the NDs induces highly catalytically active sites. As in the developed ternary TMCs, the layers are connected by van der Waals interactions between covalently bonded transition metal atoms and chalcogen atoms (Figure 2b) [52]. The active hydrodesulfurization catalysis in Cu2MoS4 is due to sulfur vacancies at the edge sites, whereas the basal planes are inactive due to less exposed transition metals and fewer S-atom vacancies [53]. Additionally, the Se and S atoms have similar electronegativity and tend to interact strongly enough between the NDs and the ternary TMCs to help address the stability problem during the electrolysis procedures [54,55]. It is also suggested that the most effective way to increase the catalytic activity of Cu2MoS4 is to use NDs with a controlled size, which can be regulated during the synthesis of the ND-decorated nanosheets. For instance, the ternary TMCs synthesized with 20 mL of MoSe2 NDs exhibit the lowest onset potential of 157 mV at a current density of 10 mA/cm2, with a corresponding Tafel slope of 72.6 mV/dec. Further, the ND-decorated ternary TMCs remain extremely stable in an acidic medium during the continuous electrolysis process [52].
Nanocrystals
Multinary metal chalcogenide nanocrystals possess strong potential to replace classic nanodot-structured TMCs in several applications, including water splitting. The nanocrystals offer unique photo-physical (UV and visible range) and electrochemical properties [56]. The crystalline structure and fundamental electronic properties of the nanocrystal-structured ternary TMCs provide ease of dopant-ion accommodation in the lattice with tunable bandgap energy [57].
Nanotubes
The one-dimensional (1D) TMC nanostructure is anticipated to provide 1D electron-transfer channels, facilitating electrolyte penetration and increasing the reaction area; however, investigations on 1D ternary TMCs for water splitting have been scarce until now. One of the often-described approaches to manufacturing nanomaterials with varied morphologies, advantageous crystal facets, and regulated sizes and thicknesses by adjusting experimental parameters such as temperature, solvent, and time is the hydrothermal/solvothermal technique. The synthesis procedure is referred to as the hydrothermal method if water is employed as the solvent, whereas the solvothermal method uses organic solvents. In closed steel containers, referred to as "autoclaves", a heterogeneous reaction takes place in the presence of an aqueous solvent or mineralizers at high pressure and low temperature in order to dissolve and recrystallize materials that are relatively intractable under normal circumstances [58][59][60]. For instance, the solvothermal synthesis of MoS2xSe2(1-x) nanotubes for enhanced electrocatalytic HER activity in water splitting was carried out by tuning the chemical (chalcogen) composition, which results in the expansion of the interlayer spacing of the ternary TMCs [61]. The SEM images of the synthesized ternary MoS2xSe2(1-x) nanotubes are shown in Figure 3. Figure 3a displays the morphology of the MoS2 nanotubes with lengths of ~3-4 µm, and the thermal post-treatment of the sample for the formation of MoS2xSe2(1-x) results in a drastic compression of the nanotube length. Specifically, selenization at higher temperatures and for longer periods results in substantially rough surfaces of the MoS2xSe2(1-x) nanotubes, which validates the hollow interiors and hierarchical architectures after being selenized at 800 °C for 1 h (Figure 3b,c). Furthermore, the expansion of the interlayer spacing of the hierarchically assembled nanotubes of ternary TMCs benefits the hydrogen adsorption energy during the electrocatalytic activity in water splitting [61,62].
Nanowires
Similarly, the 1D nanowires (NWs) and their composites based on NWs have recently been demonstrated to offer a technique for customizing the density of states surrounding the Fermi level, with decreased thermal conductivity, and with compatibility with flexible substrates [63][64][65].The NWs offer the potential benefit of low-cost manufacture for largescale applications in several fields of applications, including catalysis [66][67][68][69].For instance, the ultrathin ternary metal/Te/Se NWs have been developed with tunable composition and high aspect ratios via a solution-based hot-injection approach.By using the mentioned approach, we are able to add a variety of different metal species while preserving singlephase ternary TMCs with well-defined crystal and nanowire structures by using Te 1-x Se x templates, where the chemical reactivity between the Te and Se atoms will be balanced.The synthetic method entails three steps: (1) the synthesis of ultrathin Te nanowires as a precursor, (2) the Te nanowires are applied to create Te x Se 1-x nanowires with the adjustable aspect ratios, and (3) the transformation of the Te x Se 1-x nanowires into a series of ternary metal-Te x Se 1-x nanowires by the hot-injection of suitable metal precursors (Bi 2 (Te x Se 1-x ) 3 , Ag 2 Te x Se 1-x , Cu 1.75 Te x Se 1-x , CdTe x Se 1-x , and PbTe x Se 1-x ) under particular circumstances.The schematic representation of NWs synthesis is provided in Figure 4a [68].The rapid inclusion of metal atoms into the Te/Se NWs throughout the synthesis phase is the main reaction in the transformation.The reaction time may vary depending on the metal precursors; however, all the reactions taking place within 3 h are very stable and are able to successfully synthesize composition-adjusted metal/Te/Se ternary NWs [70].The demonstrated work on ternary ultrathin nanowire-structured TMCs opens the path for the scalable synthesis of the ternary TMCs [68].
Nanofibers
The fibrous structures of the ternary TMCs offer characteristics similar to those of nanowires, including rapid electronic transport and highly active catalytic surfaces. Considering this fact, Zong et al. fabricated various nanofiber-structured nickel cobalt-based ternary TMCs comprising NiCo2A4 (A = S and Se) by creating an anion vacancy. The scheme (Figure 4b) illustrates the various ternary TMCs for the hydrogen production process in water splitting, and Figure 4c-f present the obtained morphology [71]. The typical synthesis process exploits the intrinsically high volatility of sulfur and selenium to create vacancy-rich NiCo2S4 and NiCo2Se4, respectively, by anionic substitution and thermal treatment [72]. The developed ternary TMC architecture with the nanofiber templates induces the formation of anchored nanowired TMCs with direction-oriented growth on the nanofibers. Both ternary TMCs (NiCo2S4 and NiCo2Se4) offer the benefit of an enhanced number of active sites with large surface areas for electrochemical water splitting. The NiCo2Se4 nanofibers, which are rich in selenium vacancies, exhibit stronger electrocatalytic activity than the sulfide counterpart (NiCo2S4), with a Tafel slope of 49.8 mV/dec and a low overpotential of 168 mV at 10 mA/cm2, attributed to the anionic vacancy size [71,72].
Nanospheres and Nanospheroids
Benefiting from hollow and sphere-structured TMCs in water-splitting reactions, a controllable synthesis of a nickel sulfoselenide (NiS2(1-x)Se2x) electrocatalyst via the hydrothermal method was reported by Zeng et al. [73]. The schematic representation of the formation of the ternary TMCs (NiS2(1-x)Se2x) is provided in Figure 5a-c. The enhanced electrocatalytic activity of the ternary TMCs was achieved by regulating the degree of S/Se in NiS2(1-x)Se2x to boost its intrinsic activity by changing its electronic structure. The TMCs with the particular chemical composition Ni(S0.5Se0.5)2 exhibit significant bifunctional electrocatalytic activity (OER and HER) in a neutral medium, taking advantage of their distinctive structural characteristics and the anion doping effect. This research shows the potential for creating a novel kind of abundant electrocatalyst that is attractive and useful for neutral-pH water electrolysis. Furthermore, it also provides perspectives on creating advanced electrocatalysts and an elegant strategy for enhancing the efficiency of electrochemical catalysis [73].
Bose et al. demonstrated the fabrication of nanospheroid-structured ternary molybdenum sulphoselenide (MoSxSey) on carbon filter paper using appropriate precursors (Mo, S, and Se) by the hydrothermal approach. The typical hydrothermal approach involves the reaction of different precursor sources and reagents, such as molybdic acid (for Mo), thioacetamide (for S), and selenium dioxide (for Se), at a temperature maintained at 180 °C for 24 h. Further, the experiments report the electrocatalytic activity of different binary, ternary, and quaternary TMCs for H2 production. The theoretical growth and reaction mechanism derived from the reactants used is illustrated in Figure 5d-g. The results display good electrocatalytic activity of the ternary TMCs with the composition Mo37.3S46.9Se15.8, ascribed to its nanoarchitectonic (nanospheroid) structure with a synergistic effect of the large number of exposed active sites and intrinsic activity [74].
MOF Nanoarchitectures
The metal-organic framework (MOF) nanoarchitectures with extremely high surface areas, tunable nanostructures, and excellent porosities have made significant strides in engineering research and are now being considered as potential raw materials for the creation of highly effective electrochemical water-splitting catalysts.The creation of catalytic centers for electrocatalysts produced from MOF-based materials, as well as optimization and structural functionalization at the atomic and molecular levels, contribute to the rapidly expanding advances in active catalytic activities and fundamental processes.MOFs can be viewed as essential materials to introduce active guest species, such as metal complexes, nanoparticles, and polyoxometalates (POMs), through covalent or noncovalent bonds (electrostatic, π-π, and host-guest interactions), and support the formation of their well-defined pore structures by using a variety of organic linkers.In many instances, the catalytic activity could be enhanced by the synergistic interaction between the guests and MOFs [75].Due to its inherent benefits, including quick mass and electron transfer, adjustable architectures, and more exposed active sites, 2D MOFs have also been shown to be potential electrocatalytic materials in various studies [76,77].Greater exposure of the active surface sites might possibly be made possible by converting bulk MOF crystals into 2D nanosheets.Through a simple method of one-step chemical bath deposition, Duan et al. created ultrathin nanosheet arrays of 2D MOFs on a variety of substrates, demonstrating improved performance for the OER, HER, and overall water splitting.For instance, using a straightforward sonication-assisted solution technique, novel 2D Co-BDC/MoS 2 hybrid nanosheets were developed and created as effective electrocatalysts for alkaline HER.In the Co-BDC/MoS 2 system (BDC stands for 1,4-benzenedicarboxylate, C 8 H 4 O 4 ), the addition of Co-BDC caused a partial phase transfer from 2H-MoS 2 to 1T-MoS 2 , which greatly increases HER activity.More crucially, the alkaline HER would benefit greatly from a well-designed Co-BDC/MoS 2 interface.The alkaline HER's rate-determining water dissociation step is made easier by Co-BDC, while the subsequent H 2 -generation step benefits from modified MoS 2 [78].Due to kinetic restrictions and its critical significance for future energy harvesting, it is still difficult to demonstrate a highly effective non-noble bifunctional catalyst for complete water electrolysis.A simple hydrothermal approach was used to create a low-cost, integrated composite comprising a NiCo metal and organic framework that was then carbonized and phosphorized for the electrochemical oxygen and hydrogen evolution reactions.It had a lower overpotential of 184 mV for the oxygen evolution reaction (OER) and 84 mV for the HER in 1.0 M KOH and 0.5 M H 2 SO 4 electrolytes to reach a current density of 10 mA/cm 2 , with a slight Tafel slope of 63 mV dec −1 for the OER and 96 mV dec −1 for the HER, exhibiting exceptional performance.The research outcomes are significantly superior to those of the benchmark catalyst used in the industry [79].
Likewise, several other nanoarchitectured TMCs have been developed and reported for water-splitting applications. Besides the different nanostructures reported, the synthesis approach also plays an important role in determining the catalytic performance in water splitting. Thus, the advantages/disadvantages of the various synthesis approaches are comparatively provided in Table 1.
Table 1. Synthesis approaches for TMC electrocatalysts and their features.
- Microwave-assisted synthesis: requires less time/rapid process; size can be controlled [81,82]
- Electrodeposition method: rapid and single-step process; used to produce homogeneous and high-purity crystalline materials at the cathode of the electrochemical system during the coating process [83]
- Sulfidation and selenization: solution-phase conversion; facile and selectable synthesis method [73]
- Chemical vapor deposition (CVD) method: gas-phase aerosol process for producing high-purity nanoparticles; mainly used for large-scale thin-film production [84]
- Photoreduction: requires higher photon energy; can synthesize materials with a large surface area and many active sites [85]
- Refluxing method: large-scale synthesis method; facile and cost-effective [86]
- Sputtering: 0D, 1D, and 2D materials can be prepared; used for depositing materials with a high melting point; as electrons can be focused, very localized heating of the material to be evaporated can be obtained with a high density of evaporation power [87,88]
Layered Ternary TMCs as Bifunctional Electrocatalysts in Water Splitting
The electron-rich structure of the layered TMCs exhibits relatively low inherent electrical resistivity and quick charge carrier movement during OER and HER processes.Additionally, the TMCs could maintain chemical stability in hostile settings (strong acids or alkalis).As a result, they have recently attracted a lot of interest due to their ability to fully split the water molecules in a variety of electrolytes with varying pH levels [20,73,82,[89][90][91].Tao et al. developed nanoclustered nickel cobaltite telluride (NiCo 2 Te 4 ) by the surface modification approach for the bifunctional electrocatalysts in overall water splitting at neutral conditions.As a straightforward approach, surface modification (using surface ligands) can dramatically boost the catalytic activity toward an efficient hydrogen and oxygen evolution reaction at the same time.By utilizing this newly created nanocluster ternary TMC electrocatalyst, a bifunctional (two-electrode)-based water electrolysis cell operating at a low bias voltage of 1.55 V at a current density of 10 mA/cm 2 with exceptional stability for 30 h in a solution with a pH near neutral has been demonstrated [92].The hydrogen adsorption-free energy (∆GH) of the TMC electrocatalysts is shown in Figure 6a.Due to its weak 2D planar π-conjugation structure and anhydride terminal groups (O=C-O-C=O), the surface ligand has a higher ∆GH value (~6.5 eV) than others.The electronegativity of the ternary electrocatalyst decreases, and the ∆GH between the NiCo 2 Te 4 and ligands are both reduced by a strong ionic connection (O-Te) between the TMCs and the surface ligand.As a result, the NiCo 2 Te 4 surface may release atomic hydrogen with ease, thereby boosting the catalytic activities during the water-splitting process [93].Furthermore, the DFT simulations suggest that the surface ligand changed the NiCo 2 Te 4 electrons' energy distributions towards the favored catalytic processes [92].Likewise, the doping of metals (Co) in the binary TMCs to form the ternary Co-MoS 2 nanosheets supports achieving higher conductivity and lower hydrogen adsorption energy toward HER and produces more OER catalytic active centers [94].In both acidic and alkaline conditions, the ternary TMCs (Co-MoS 2 ) exhibited the best HER activity, comparable to that of Pt/C, suggesting that an appropriate Co doping quantity is crucial for the successful regulation of the electronic structure [95,96].The onset potentials and overpotentials of Co-MoS 2 were specifically 0.04/0.03V (vs.RHE) and 60/90 mV in 0.5 M H 2 SO 4 and 1.0 M KOH, respectively, at 10 mA/cm 2 .The reported results are consistent with the fact that the HER activity in the acidic media is higher than in the alkaline media [96,97].Likewise, with onset potentials of 1.68/1.33V (vs.RHE) in 0.5 M H 2 SO 4 and 1.0 M KOH, and overpotentials of 540/190 mV at 10 mA cm 2 , it has the highest OER activity in both acidic and alkaline conditions.Its excellent water-splitting durability is demonstrated at a constant voltage of 1.58 V in 1.0 M KOH, validating the viability of ternary Co-MoS 2 TMCs as a bifunctional electrocatalyst in both acidic and alkaline conditions [94].Following this work, a series of NiS 2(1-x) Se 2x hollow/porous spheres (x = 0, 0.25, 0.5, and 0.72) made through the hydrothermal process and the degree of the selenization tends to attain the satisfactory and the ideal bifunctional (OER and HER) behavior of η 10 overpotential of 501 and 124 mV of ternary electrocatalysts.The composition of Ni(S 0.5 Se 0.5 ) 2 in 1M 
phosphate-buffered saline solution was attributable to the optimal electronic state induced by the anion (Se) doping as well as its special structural characteristics (Figure 6b). Then, using the Ni(S0.5Se0.5)2 as a bifunctional electrode (anode and cathode), it was possible to achieve overall water splitting delivering a low cell voltage of 1.87 V at 10 mA/cm2 under neutral conditions (Figure 6b) [73].
It has also been investigated how to greatly increase the electrocatalytic OER and HER activity of undoped pentlandite-type cobalt sulfide (Co9S8) under neutral conditions, with the phosphorus-substituted product designated as Co9S4P4. Imparting phosphorus into the Co9S8 is beneficial for boosting the electrical conductivity, surface area, and charge transfer for water-splitting reactions. In contrast to the benchmark material, Pt/C-IrO2 (1.72 V), the Co9S4P4 material demonstrated an applied voltage of only about 1.67 V to obtain 10 mA/cm2, as well as a negligible change in applied voltage following a continuous 24 h galvanostatic electrolysis at 10 mA/cm2 in a neutral electrolyzer [98].
Tactics for Enhancing Electrocatalytic Activity
The nanoarchitectonic layered ternary TMCs have garnered interest as some of the most promising electrocatalysts for water splitting due to their distinctive structure. The basal plane of TMCs is catalytically inert; however, novel findings indicate that the stacked TMC edge sites are active for electrocatalysis because these edge sites possess a nearly optimal hydrogen adsorption free energy for water splitting [22,32,99]. There are three main ways to improve the electrocatalytic characteristics of layered TMCs: (i) edge engineering to boost the active sites, (ii) activating the basal planes by altering their chemical composition, and (iii) strain regulation through structural and chemical changes.
Edge Engineered Layered TMCs
The optimization of the basal plane-to-edge ratio creates a new opportunity for improving catalyst performance, which has been made possible by the observation of multilayered TMCs edge activity (for electrocatalytic activity).The density of the accessible edges at the surface of TMCs has been improved in recent years in several ways, including vertical alignment, porous structure, stepped surface structure, hollow structure (HS), and three-dimensional nanopatterning, nanoarchitectonics [100][101][102].The creation of MoO 3 and MoS 2 core-shell nanowires with a vertical orientation is one such method for achieving this goal, with the use of this technique exhibiting effective electrocatalytic activity for separating water molecules (H 2 and O 2 ) in an acidic medium by exposing the high surface electrodes with significant amounts of edge-exposed MoS 2 [102].Additionally, Cui et al. reported MoS 2 and MoSe 2 thin films with vertically aligned layers developed on flat Mo substrates to increase the exposure of edges.The edge-terminated TMCs demonstrate effective electrocatalytic activity for water splitting (Figure 7a) [23].Similarly, Wang et al. created MoSe 2 and WSe 2 on curved nanowires that were vertically orientated.The surface area for effective electrocatalytic activity is increased while the exposed edge sites are maximized in these reported structures [24].By stretching or compressing the molecular layers, altering the electronic structures of transition metal dichalcogenides, and resulting in an even greater improvement in the electrocatalytic activities of metal dichalcogenides, the formation of vertical structures of TMCs on the curved surfaces may result in strain.To boost the electrocatalytic activity, Kibsgaard et al. constructed MoS 2 with a double-gyroid (DG) shape that preferentially exposes the active edge sites (Figure 7b,c) [103].In the fabricated contiguous large-area thin films of a highly ordered double-gyroid MoS 2 bi-continuous network, a considerable portion of the edge sites have a high surface area and effectively split water through electrocatalysis.Theoretically, stepped-edge customized MoS 2 has been demonstrated to be a more effective technique to increase the catalytic activity than the flat edge site MoS 2 .Figure 7d,e show the vertical arrays of stepped-edge, surface-terminated MoS 2 nanosheets studied by Hu et al. shown to be an excellent and very stable electrocatalyst for water splitting when compared to the flat-edge surface-terminated MoS 2 [104].For effective electrocatalytic activity, the single-crystal MoS 2 nanobelts completely covered in edge sites on the top surface have also been described.These parallel-stacked atomic layers on the MoS 2 basal planes of these nanobelt structures maximize the exposure of active edge sites for efficient electrocatalytic activity for water splitting [105].In a similar way, to create highly electrocatalytic MoSe 2 films, Saadi et al. 
reported using an operando synthesis approach [106].This method exposes more edge sites and offers a large surface area for effective electrocatalytic activity.Additionally, the effective electrocatalytic activity for water-splitting has been reported to be obtained using flexible electrodes containing edge-oriented MoS 2 [107].The electrochemical anodization of molybdenum metal, followed by interaction with sulfur vapor, produces the edge-oriented flexible film of MoS 2 .The most electrocatalytic activity of TMCs can be utilized by combining a high surface area with a high percentage of exposed edge sites and tailored electronic architectures.A multiscale structural and electrical control of MoS 2 foam as an electrocatalyst for water splitting was reported by Deng et al. [96].To fine-tune the electronic structure of MoS 2 , which improves the electrocatalytic activity of the edges of MoS 2 , the three-dimensional, vertically aligned MoS 2 -foam is doped with Co atoms (Figure 7f,g).The commercial nickel (Ni) foam supports boosting the electrocatalytic activity of the TMCs by acting as a template; moreover, such a simple approach finds a way in a scalable method to expose the maximum edge-oriented TMCs.In this instance, the vertically aligned arrays of the MoS 2 nanosheets over Ni foam exhibit the enhanced H 2 production activity that may be applied in practice by maximizing exposure of atomistic-level MoS 2 edge sites [105,106].Even though the structural engineering of TMCs has demonstrated higher electrocatalytic activity, it has been anticipated that even better performance can be attained if the 3-dimensional (3D) template's effective feature size is reduced to the order of nanoscale [108].Following this strategy, an extremely stable and active electrocatalyst has been created for the massive production of H 2 using the 3D nanopatterning approach.Based on the existing technology, the TMCs were continuously nanopatterned (3D) using proximity-field nanopatterning (PnP) and electrode placement, then a solvent-assisted hydrothermal process.The electrocatalytic activity of such fabricated 3D nanopatterned TMCs was higher than that of TMCs grown on Ni foam with a low loading of active material for water splitting (Figure 7h,e,k) [17,106,108].
Doping/Vacancy in Layered TMCs
Despite considerable advancements in edge engineering, the layered TMCs are still limited in their electrode activities, since only the edge sites are responsible for the positive electrode activity. Generally, the chemical absorption at the active sites of the TMCs can be improved by the polarizing effect of orbital coupling, which tailors the electron density. In this regard, to improve the electrochemical performance of layered TMCs and increase the number of active sites for electrocatalytic activity, the introduction of heteroatoms or vacancies is suggested as one of the successful techniques [109,110]. The heteroatom introduction might enhance the adsorption energies of the reactive species during the water-splitting mechanism. It is possible to create a bifunctional catalyst by increasing the active sites, controlling the electronic structure, and creating defects and distortions in the lattice [94,111-113]. For example, the covalent doping of cobalt (Co) into MoS2 helps induce the bifunctional property of the ternary electrocatalysts in the water-splitting reactions (OER and HER). By optimizing the catalyst's electronic structure (covalently doped Co into MoS2), the ∆GH* could be decreased with an increase in intrinsic conductivity, leading toward improved HER performance. The production of high-valence-state Co species in an alkaline solution under anodic potentials is credited with the increase in OER activity. The developed optimal ternary TMCs displayed both OER and HER functions with overpotentials of 260 and 48 mV at 10 mA/cm2, respectively. Similarly, the developed ternary TMCs (by covalent doping) achieved nearly 100% Faradaic efficiency in an alkaline medium, producing 4.5 and 9.1 µmol min−1 of O2 and H2, respectively [114,115]. In a similar way, copper clusters incorporated into cobalt sulfide and deposited over copper foam (Cu@CoSx/CF) serve as bifunctional ternary electrocatalysts for water splitting in an alkaline medium, yielding a current density of 10 mA/cm2 at 1.5 V. Additionally, the Cu@CoSx/CF electrolyzer could maintain a current density of 100 mA/cm2 over 200 h of the catalytic water-splitting reaction at 1.8 V without obvious current reduction. According to the experimental and theoretical findings, the ternary Cu@CoSx/CF had good catalytic performance due to the synergistic interactions between Cu and CoSx. Furthermore, the adsorption of water molecules and the electronic transport on the catalytic surface can be sped up by facilitating the interfacial charge redistribution of Cu@CoSx. Additionally, the water-splitting reaction kinetics were also made easier by the favored dissociation of water [116].
Strain Regulation in Layered TMCs
The electronic structure of the ternary TMCs can be customized by altering the atomic configuration of the electrocatalyst through lattice or chemical strain. Numerous techniques, including doping, vacancy creation, core-shell structuring, and lattice mismatch, have been demonstrated to create inherent strain in TMCs [117][118][119][120]. The electrocatalytic activity of TMCs is improved by the facile induction of inherent strain by the core-shell structure, which enables an upshift of the d-band center. By tuning the stacking of MoS2 shell layers, Zhu et al. reported a controllable and accurate tensile strain for effective electrocatalytic activity in a Co9S8/MoS2 core-shell structure (Figure 8a,b) [117]. The stabilization of the HER intermediate for the capture and release of hydrogen can be achieved through the regulated charge transfer arising from the heteroepitaxial lattice strain in the core-shell structure, which modulates the hydrogen atom adsorption (∆EH) and transition-state (∆E2H) energy barriers, respectively. Similarly, a Mo2C/MoS2 core-shell configuration was reported by Tiwari et al. (Figure 8c) [118]; in order to activate the electrocatalytic activity, a molecular spin-coupled core (Mo2C) within MoS2 was created. The curved MoS2 shells provide the lattice strain, which enhances the kinetics of electrocatalysis, and the spin coupling ensures rapid ion diffusion. Additionally, lattice mismatch in core-shell-structured TMCs induces intrinsic strain for enhanced electrocatalytic activity in water splitting [121]. Likewise, the electrocatalytic activity of MoS2 was enhanced by creating intrinsic strain in the material through a lattice mismatch with an Au substrate (Figure 8d,e). It is demonstrated that the improved electrocatalytic activity of MoS2 is due to out-of-plane lattice strain, which controls the charge density distribution and atom migration while lowering the band gap and ∆GH* [119,122].
Chemical Modification
Based on the prospective outcomes derived from the binary TMCs as electrocatalysts, ternary TMCs with a more complicated composition and structure can be prepared through chemical modification. There are two ways to synthesize ternary TMCs from binary TMCs: (i) the doping or post-treatment approach, and (ii) the bottom-up synthesis approach. In the doping or post-treatment approach, the difference between the atomic radii of the doped and original atoms causes expansion or contraction of the lattice of the developed catalyst; moreover, it tends to change the bonding nature and electronic structure compared with the original system, which in turn modifies the adsorption/desorption behavior in the reaction process. On the other hand, the bottom-up approach is considered one of the simplest ways to achieve controlled morphology and growth of the ternary TMC-based electrocatalysts.
Double-Anion Ternary TMCs
Double-anion TMCs, which constitute the ternary blends, are characterized by their unique ability to activate their inner basal plane. Because non-metallic atoms have strong electronegativity, the introduction of non-metallic atoms can more effectively change the electronic structure and chemical properties of the original materials, which provides support for improving the catalytic activity of the catalyst [123]. Xie et al. were the first to demonstrate significant activity of double-anion TMCs for the HER. The oxygen concentration of MoS2 flakes was regulated by adjusting the synthesis temperature, with higher oxygen contents being produced under lower-temperature circumstances. With increasing oxygen content, the flakes became more disordered until a completely amorphous structure formed. The best sample (2.28 at.% O and 35-40% disordered) displayed a Tafel slope of 55 mV/dec, which was significantly better than the result for pure MoS2 (81 mV/dec). The results evidence the significance of double-anion TMCs and the value of creating disorder or flaws in the basal plane [124]. The report set a research direction focused on creating defects in and/or oxidizing TMCs to boost the electrocatalytic activity by activating their basal planes. Even though the proposed approach is successful in basal-plane activation, there are some hurdles associated with the charge-transport activity (from electrocatalyst to electrode), causing a lowering of electrical conductivity owing to the oxygen-rich or amorphous behavior of the TMCs. Hence, an alternative approach to the double-anion formation of TMCs can be achieved with S or Se doping or substitution. Xu et al. reported that the doping of sulfur (S) into MoSe2 by a simple hydrothermal approach provides more catalytically active reaction sites, thereby boosting the electrocatalytic performance toward H2 production. Figure 9a,b present the HRTEM images of the S-doped MoSe2 nanosheets with exposed edges and basal planes, together with a scheme demonstrating the unsaturated edges of nanodomains on the oriented (100) basal plane for proton adsorption, and the corresponding polarization curves are shown in Figure 9c [125]. Gong et al. [126] synthesized the ternary MoSSe electrocatalyst through the bottom-up approach by adjusting the ratio of S/Se in the MoCl5 precursor for the HER activity. The study revealed a 93% retention of the current density over 8000 cycles with an overpotential of 164 mV at 10 mA/cm2 and a Tafel slope of 48 mV/dec. In accordance with earlier theoretical research on S- and Se-based Mo systems, more negative H2 adsorption is expected at selenide-based Mo edges (∆GH = −140 meV), whereas sulfide-based Mo edges possess positive H2 adsorption energy (∆GH = 80 meV) [127,128], suggesting that the developed ternary TMCs (MoSSe) could bring the adsorption energy value to nearly thermoneutral.
Telluride (Te) is another chalcogen that can be used to make effective electrocatalysts for water splitting, even though the other two chalcogens (S and Se) have been investigated more widely for electrocatalytic activity. Kosmala et al. fabricated ternary MoSe2-xTex films by molecular beam epitaxy and explored the materials for electrocatalytic HER activity; the results showed an overpotential of 410 mV and a Tafel slope of 62 mV/dec at a current density of 10 mA/cm2. The developed ternary TMC electrocatalyst exhibits abundant metallic twin boundaries and thermodynamically stable defects that induce the electrocatalytic activity in water splitting. It was also reported that pristine MoTe2 exhibits more catalytically active sites than pristine MoSe2 fabricated under the same conditions, making the Te-rich ternary composition (MoSe0.12Te1.79) a noteworthy optimum [129]. However, the reported results contradict earlier experimental and theoretical findings on Te-based systems [25,130], which proposed that Te cannot act as an intrinsically active catalytic site in S- or Se-based systems, thereby suggesting a new avenue for further investigations of Te-based TMC electrocatalysts.
It is anticipated that research in this area will continue to advance, given the general availability of both dopant anions and synthesis techniques to produce double-anion TMCs. However, it is important to note that the reports published so far on double-anion TMCs have employed methods that are unable to create any kind of predictable ordering or arrangement of the replacing atoms. Nevertheless, periodic or ordered double-anion structures, such as the Janus TMCs proposed by recent theoretical investigations, might produce unique electrocatalytic characteristics [128,131,132].
In these formations, the transition metal (Mo or W) has selenide on one side and sulfide on the other; the sulfurization scheme and the corresponding HER reaction on Janus TMCs are provided in Figure 10a [130]. However, a single-step synthesis method for these structures is difficult; they require a multi-step procedure that would restrict further investigation into their potential as electrocatalysts in water splitting [131]. Additionally, it has been forecasted that, in the near future, tactics exploiting naturally high-surface-area starting materials, such as Ni foam (shown in Figure 10b), will see an increase in popularity [67,133].
Double-Cation Ternary TMCs
Like double-anion TMCs, substantial research activity has been carried out on double-cation TMCs for electrocatalysts. The primary objectives in both cases (anion and cation) are to induce the typically inert basal plane to become active and to produce additional active sites through defect or strain development. However, numerous investigations suggest that the electron density and local field are the primary advantages of double-cation TMCs because, for layered TMCs, the transition metal atoms do not provide a typical adsorption site. Li et al. fabricated Mo1-xWxS2 by incorporating W into MoS2 through the hydrothermal approach, varying the ratio of the different metal precursors and investigating the cationic effect on the electrocatalytic activity in water splitting. Due to the well-defined hierarchical structure and the subsequent production of densely stacked nanopetals, the ternary Mo0.85W0.15S2 composition possesses an abundant number of active sites for the HER activity, with a Tafel slope of 89 mV/dec. Theoretical (DFT) calculations simulate the band gap of MoWS2 (0.88 eV) to be smaller than that of MoS2 (1.14 eV), and the directional transfer involving the W atoms results in an "electron-rich" configuration. The electron-rich configuration is asserted to lower the charge-transfer resistance, thereby increasing its conductivity toward H2 production [134,135]. Similarly, sulfur-deficient, composition-graded MoWSx was fabricated using electrodeposition, as shown in Figure 11. Two distinct compositions of the ternary MoxW(1-x)Sx have been achieved depending on whether the deposition takes place at the cathode or the anode in the electrochemical deposition. Among the two distinct compositions, the ternary TMCs (MoxW(1-x)Sx) fabricated via anodic electrodeposition yield higher electrocatalytic performance, with an overpotential of 278 mV at 10 mA/cm2 and a Tafel slope of 50.5 mV/dec. The higher electrocatalytic performance of these ternary TMCs was attributed mostly to the even Mo/W ratio and the high exposed surface area of the tiny particles [136].
TMCs has employed methods that are unable to create any kind of predictable ordering or arrangement of the replacing atoms.However, the periodic or ordered double anion structures, such as the Janus TMCs proposed by recent theoretical investigations, might produce unique electrocatalytic characteristics [128,131,132].
In these formations, the transition metal (Mo or W) has selenide on one side and sulfurization and the corresponding HER reaction on Janus TMCs is provided in Figure 10a [130].However, the single-step synthesis method for these structures is difficult; they require a multi-step procedure that would restrict further investigation into their potential as electrocatalysts in water splitting [131].Additionally, it has been forecasted that in the near future, the tactics exploiting the naturally high surface area beginning materials, such as Ni foam (shown in Figure 10b), would see an increase in popularity [67,133].Like double-anion TMCs, substantial research activity has been carried out on the double-cation TMC for electrocatalysts.The primary objectives in both (anion and cation) are to induce the typical basal plane to be active and produce additional active sites by defect or strain development.However, numerous investigations suggest that the electron density and local field is the primary advantage of double-cation TMCs because, for layered TMCs, there will not be a typical adsorption site in the transition metal atoms.Li et al. fabricated the Mo1-xWxS2 by incorporating W into MoS2 through the hydrothermal approach by varying the ratio of different metal precursors and investigating its cationic effect in the electrocatalytic activity of water splitting.Due to the well-defined hierarchical structure and subsequent production of densely stacked nanopetals, the ternary Mo0.85W0.15S2composition possesses an abundant number of active sites for the HER activity with the Tafel slope of 89 mV/dec.The theoretical (DFT) calculations used to simulate the band gap of MoWS2 (0.88 eV), which is found to be less than that of MoS2 (1.14 eV), and the directional transfer of W atoms result in an "electron-rich" configuration.The electronic-rich configuration is asserted to lower the charge transfer resistance, thereby increasing its conductivity towards the H2 production [134,135].Similarly, sulfur-deficient composition-graded MoWSx is fabricated using electrodeposition, as shown in Figure 11.Two distinct compositions of the ternary MoxW(1-x)Sx has been achieved depending on whether the deposition takes place either in the cathode or anode in the electrochemical deposition.Among the two distinct compositions, the ternary TMCs (MoxW(1-x)Sx) fabricated via anodic electrodeposition yields higher electrocatalytic performance with an overpotential of 278 mV at 10 mA/cm 2 and a Tafel slope of 50.5 mV/dec.The higher electrocatalytic performance of the ternary TMCs was attributed mostly to the even Mo/W ratio and high exposed surface area of the tiny particles [136].Meanwhile, this is not a real case applicable to all the metal cations to demonstrate enhanced catalytic efficacy.For instance, niobium (Nb) and tantalum (Ta) doping in MoS 2 and WS 2 do not show any increase in the HER activity as compared with the undoped MoS 2 .However, the result points out a noteworthy discovery validating the 1T metallic phase (after cation doping) benefits in enhancing the HER activity of the electrocatalysts [137][138][139].Furthermore, several computational reports on the cation dopants, including Nb and Ta with TMCs, predicted the value of ∆GH to be nearly thermoneutral, causing benefits in H 2 production in water splitting [140].Askari et al. 
produced two mixed-cation systems, (i) MoWCoS and (ii) MoWCuS, by a hydrothermal approach, even though they were not technically double-cation catalysts. The MoWCoS demonstrated extremely robust HER characteristics when hybridized with reduced graphene oxide (rGO), with a Tafel slope of 38 mV/dec [141]. It was asserted that the presence of CoS phases within the ternary TMCs induces additional defects and new interfaces between the phases, contributing to the enrichment of the electrocatalytic activity. With numerous proposed processes (electron density modulation, 2H/1T phase transformation, morphological effects, etc.) accountable for the performance gains, the precise role of the mixed (double) cation appears to be less evident than that of mixed (double) anion systems [141]. Thus, as with mixed-anion TMC-based catalysts, in-depth analysis and fundamental studies of the cation dopants are expected to remain a research focus, which will undoubtedly be helpful for future HER research. The governing factors of the water-splitting activity of the various ternary TMC-based electrocatalysts are comparatively provided in Table 2.
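For readers comparing the overpotential and Tafel-slope values quoted throughout this section, the underlying relation is the standard Tafel equation; this is a general electrochemistry relation added here for context, not a result of the cited works:

\eta = a + b \,\log_{10} j ,

where \eta is the overpotential, j is the current density, and b is the Tafel slope. A slope of, for example, 38 mV/dec therefore means that roughly 38 mV of additional overpotential is required for each tenfold increase in current density, so smaller Tafel slopes indicate faster HER kinetics.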
Challenges and Perspective
Despite the significant and heartening improvement in electrocatalytic (OER and HER) performance, the development of transition metal chalcogenides (TMCs) is still in its infancy and presents numerous obstacles that restrict their practical application. These obstacles include the need for noble metal co-catalysts to lower the HER kinetic barrier, the reliance on sacrificial reagents as hole scavengers for long-term stability, and the difficulties of large-scale production. Furthermore, thorough research and an in-depth comprehension of the fundamental causes and mechanisms underlying the synergistic effects of the electrocatalysts remain lacking. To achieve pilot-scale production of H2 fuel in the near future, the present state of TMCs for driving efficient and steady overall water splitting still has to be advanced significantly. The following research directions are offered as recommendations:
i. In-depth examination of the intrinsic structural modifications of TMCs is needed, including controlled morphology to achieve layered structures, vacancy engineering that tends to introduce electronic trapping effects, and doping to modify the HER kinetics and the optoelectronic properties. For instance, selective doping of p-type with n-type TMCs might result in the formation of an internal p-n homojunction that grants "back-to-back" potentials between the boundaries for improved interfacial contact;
ii. To gain insight into creating highly stable TMC electrocatalysts, in-depth investigations of the OER mechanism of metal sulfides/selenides are necessary to comprehend the changes that occur in the photo-corrosion kinetics and surface properties. Methods to prevent the photo-corrosion of TMCs should be examined, such as shielding the S and/or Se atoms from unnecessary exposure to the oxidizing reactant, or eliminating photogenerated holes from the valence bands (VBs) of the TMCs so that S2−/Se2− does not self-oxidize during the OER;
iii. To gain a thorough understanding of the principles behind the formation of ternary electrocatalysts and to reveal the underlying molecular process of water splitting, it is advisable to carry out theoretical simulations and first-principles calculations on ternary TMC-based electrocatalysts;
iv. Diversify the research theme by increasing the variety of TMCs obtained from various combinations of metals to create advanced structures. Exploiting such varied TMC types would create new opportunities for developing systems with effective water-splitting performance;
v. To overcome the drawbacks of binary layered structures, consider the possibility of creating ternary or quaternary layered systems;
vi. Finally, a shift of research focus toward the developing area of pilot-scale HER applications needs to be realized. To expand HER applications, it is crucial to develop a commercially feasible water-splitting reactor with favorable macroscale configurations, together with studies of other factors such as the impact of water pressure and process conditions. Additionally, to maintain sustainable, high-yield H2 fuel production, the efficacy and stability of the TMC-based electrocatalysts must be maintained.
Conclusions
Electrochemical water splitting has attracted a lot of interest as a potential energy vector for future technology. Effective and inexpensive electrocatalysts for overall water splitting are therefore eagerly desired to overcome its slow reaction rate. In this review, an overview of the nanoarchitectonics of the various layered ternary transition metal chalcogenides is provided, with explanations of the electrochemistry related to the water-splitting mechanisms. To date, a variety of binary, ternary, and quaternary TMCs have been developed and used as electrocatalysts in water-splitting activities. Among them, ternary TMC electrocatalysts with different chemical compositions and structures are addressed in this review. The shape, structure, and density of active sites influence the electrocatalytic activity; thus, particular attention should be paid to the precise chemical composition and morphology of the developed ternary TMC electrocatalysts. In this regard, the different nanoarchitectonics, including single layers, nanofibers, nanotubes, nanowires, nanospheres, and nanospheroids, are described in detail together with their synthesis approaches. Nanoarchitectonic ternary TMCs offer numerous active catalytic sites and high conductivity; TMCs made of tiny nanoparticles or heterostructures with conductive materials have demonstrated considerable benefits, and the use of such electrodes may be a practical solution to the stability problem. Ternary TMCs with abundant active sites may show tremendous promise for scaled application; however, greater utilization of the inactive basal planes needs to occur. The two main strategies being pursued to increase the catalytic activity of TMCs are as follows: (i) edge engineering to expose more active sites, or full utilization of the catalytic potential by providing conductive pathways (extrinsic); and (ii) chemical modification to enhance H2 adsorption by lowering the ∆GH values (intrinsic) through substitution or vacancies. For further tuning of the active sites toward effective electrocatalytic activity, the hierarchical construction of TMCs on self-organized nanostructures or conductive templates may be a viable method. Ease of preparation, affordability, repeatability, and stability are the crucial aspects that need to be considered in this approach, as they have a huge impact on the real-time usage of the developed catalysts. Additionally, the intrinsic activity of TMCs must be improved, and potential strategies include the substitution of elements in the host material. Chalcogen-atom substitution to produce chemical strain for the activation of basal planes is an excellent example of how such an effect could be achieved, even if this heteroatom substitution approach has not yet been implemented at the active edge locations of TMCs. To obtain a thorough understanding of effective electrocatalytic activity, further research into substitution close to the active edge locations of TMCs is still required. The area of electrochemical water splitting employing ternary TMCs as electrocatalysts has recently seen a renaissance due to the desire for renewable energy. As a result, much work still needs to be done to widen the search for high-performance electrocatalysts and to investigate the actual applications of effective advanced electrocatalysts.
Figure 1. Schematic representation of the mechanism of H2 evolution in acidic and alkaline conditions and the water-splitting reaction [34]. Reproduced with permission from Zhu et al., Chem. Rev., American Chemical Society (ACS), 2020.
Figure 11. (a) Schematic illustration of Mo1−xWxS2 and Mo1−xWxS3 formation, and Mo/W and S/M ratios of (b) MoWSx/BPE cathodic and MoWSx/BPE anodic [136]. Reproduced with permission from Tan et al., Appl. Mater. Interfaces, American Chemical Society (ACS), 2017.
Table 1. Advantages/disadvantages of the various synthesis approaches for TMCs.
"Chemistry"
] |
Presenting new approach for optimal placement of nuclear power plant connected to the grid after the trip
This study presents a combination of optimal placement and power system development with the aim of supplying electricity to a Nuclear Power Plant after the trip of the power plant. Supplying power to the internal loads of the power plant from the off-site power system is one of the main fields of research in achieving Nuclear Power Plant safety. One of the main purposes of this article is to introduce a suitable and safe location for the construction and connection of a Nuclear Power Plant to the power system. These locations are identified, based on electrical considerations, by the power plant's on-site loads and by the lowest average number of protection relay operations after the Nuclear Power Plant trip. Along with the optimal placement, this paper also presents power system development, including generation and transmission development, in order to supply electricity with higher reliability to the Nuclear Power Plant after the trip. The Monte Carlo and Latin Hypercube Sampling probabilistic methods are proposed for locating the site of the Nuclear Power Plant, and the Genetic and Particle Swarm Optimization algorithms are proposed for locating and developing the power generation and transmission systems. The simulations are implemented on the IEEE RTS 24-bus system, and suitable locations for the construction of the Nuclear Power Plant, together with the generation and transmission development aimed at feeding the power plant from the off-site power system and providing sufficient assurance that the reactor core does not melt after the trip, are finally determined.
Nomenclature
x_i — the variable under consideration in the Monte Carlo algorithm
x_id — the spatial position of the i-th particle in the d-th dimension in the Particle Swarm Optimization algorithm
v_id — the velocity of the i-th particle in the d-th dimension in the Particle Swarm Optimization algorithm
w — inertia weight in the Particle Swarm Optimization algorithm
c_1, c_2 — learning factors (also called acceleration coefficients)
Introduction
Nuclear power plant (NPP) safety and the reliability of the power system connected to the NPP are two mutually dependent factors in their development and operation, because the NPP needs to receive electrical energy from the power system in emergency, start-up, and outage conditions. A normally operating NPP is one that injects energy into the grid. Therefore, it is important to maintain NPP safety and network stability by increasing the reliability of the power system and reviewing their performance [1]. It is important for the NPP that the voltage and frequency of the power system connected to it are controlled within an acceptable range, so that the internal energy consumption required by the power plant (5-8% of the rated power of the NPP) can be provided during operation, shutdown, and trip. Several methods have been proposed by the manufacturers, the most important of which are power supply from the main bus of the power plant, power supply from the generator terminal, and group power supply.
The NPP internal power grid consists of two general parts, off-site and on-site. The off-site section includes the main lines of the power system network that are connected to the power plant through the main substation: the main transformer, which connects the power plant to the power system network, as well as the power system network lines together with the related standby auxiliary (start-up) transformer. The on-site section includes the main synchronous generator in the power plant along with the unit auxiliary transformer, the medium-voltage and low-voltage lines used in the power plant, and all transformers and electrical equipment in the power plant such as motors, batteries, and diesel generators. Normally, when the power plant is connected to the network through the main lines, the internal power of the power plant is supplied through the main generator and the power system network. In fact, the main generator of the power plant, as part of the power system network, is responsible for providing the internal electrical power of the power plant. If the main network is disconnected, this event is called loss of off-site power (LOOP). The power plant then goes into house-load mode and the main generator is responsible for supplying electricity to the power plant. If the main generator connection breaker is also disconnected, the standby (alternative) power line provides the internal electricity reserve through lines and transformers. If the reserve power grid is lost as well, diesel generators come into operation, and in case of problems with these diesel generators, batteries and an Uninterruptible Power Supply (UPS) are responsible for feeding the critical loads. Simultaneous failure of both the off-site and on-site supplies leads to a Station Blackout (SBO). Various studies have shown that if the SBO event is not controlled in time, it can significantly damage the reactor core [1,2].
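The supply hierarchy described above (grid, main generator in house-load mode, standby line, diesel generators, batteries/UPS, with SBO when both off-site and on-site supplies fail) can be illustrated with a minimal sketch. The source names, availability flags, and ordering below simply mirror the description in this section and are not part of the original paper.

```python
import random

# Minimal sketch of the NPP auxiliary-power fallback chain described above.
# The availability flags are hypothetical inputs; in the paper, component states
# are sampled from forced outage rates in the Monte Carlo procedure.

def auxiliary_supply_source(available):
    """Return the first available source in priority order, or None (SBO-like)."""
    priority = ["main_grid", "main_generator_house_load", "standby_line",
                "diesel_generators", "batteries_ups"]
    for source in priority:
        if available.get(source, False):
            return source
    return None  # neither off-site nor on-site supply is available -> station blackout

# Example: a LOOP event (grid lost) while the main generator islands successfully.
state = {"main_grid": False, "main_generator_house_load": True,
         "standby_line": True, "diesel_generators": True, "batteries_ups": True}
print(auxiliary_supply_source(state))  # -> "main_generator_house_load"
```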
The availability of AC power is essential for safe operation and accident recovery at an NPP. Typically, the AC power of the plant is supplied by external sources through the power grid. Four factors affect the SBO event, one of which is the LOOP phenomenon. A LOOP event can have a significant negative impact on the plant's ability to achieve and maintain recovery from an accident [3]. A review of the literature shows the great importance of the LOOP event: it was analyzed in various NRC reports issued in 1988 (two reports), 1996, 2003, and 2010 [3][4][5][6][7].
NUREG/CR-1032 assesses the occurrence of nuclear power plant shutdowns due to the LOOP phenomenon based on US data for the years 1968-1985 [3]. NUREG/CR-5496 examines the LOOP phenomenon in NPPs based on data from 1980 to 1996 [4]. NUREG/CR-5750 provides a more general account of the initiating events at US power plants between 1987 and 1995, including the events of loss of the NPP's external power [6]. NUREG-1784 assesses the impact of grid operations on the NPP and the loss-of-offsite-power event from 1985 to 2001 [7]. NUREG/CR-6890 assesses the risk of power plant shutdown due to loss of AC power at NPPs up to 2004 [8]. EPRI covers the latest reports on power outages outside the NPP from 2003 to 2012 [9]. According to the statistics provided, the LOOP event has been registered 228 times in the IRSN SAPIDE database, 190 times in the GRS VERA database, 120 times in the LER database, and 52 times in the IRS database [10]. These reports contain detailed information on the importance of examining the performance of the power grid connected to NPPs.
The consequence of a loss of offsite power depends on the operating condition of the plant. If the NPP is shut down, this incident will not threaten the power plant. If the power plant is in normal operation when this event occurs, there are two situations: in the first case, the power plant continues to operate without tripping, that is, the alternative sources have been put into operation properly; in the second case, the power plant trips. To prevent the latter, taking safe and preventive measures against the dangers threatening an NPP, in order to increase safety and protect the power plant, its employees, the environment, and the power system, is essential and inevitable. One of the important issues that always threatens the nuclear power plant and the power grid is the power plant trip. A power plant unit trip causes power shortages, frequency drops, and voltage fluctuations in the power network. If, for any reason, the equipment connected to the power system is unable to return power to the power plant, this leads to successive blackouts and often to the collapse of the power system. This becomes even more serious when, after the plant trip, the plant's internal power grid fails to deliver the power necessary to cool the reactor, which greatly contributes to reactor damage and the release of radioactive material; incidents such as the Fukushima accident in Japan, Chernobyl in Ukraine, and Three Mile Island in the United States illustrate this [11]. It is therefore important that the power system connected to the nuclear power plant be a reliable power source with sufficient capacity for all reactor operating modes. This requires monitoring, protection, selection of a suitable location for the construction of the NPP, and extensive control of the power system.
For countries that are already using nuclear energy, and for countries that have not yet used this energy but decide to do so, it is essential to always pay attention to the infrastructure of the national electricity grid and to make the necessary changes to maintain the stability of the network and the security of the NPP. These changes include the introduction of a suitable location for the construction of the power plant, the addition of transmission lines to connect the power plant to the power grid, and consideration of how the power plant is operated and maintained with respect to the power grid. In order to prevent power grid incidents that challenge the safety of the NPP, special attention should always be paid to the design and operation of the power grid and its relationship with the nuclear power plant. In its recent publications, the International Atomic Energy Agency (IAEA) has given full details of the interface issues between the electricity grid and the NPP and of the importance of addressing this issue [12][13][14]. The organization has also been constantly updating its recommendations and details on this issue since the 1980s [15][16][17][18]. The consequences and events of failing to supply the power required by the NPP from the power grid, as the main source of power supply in emergency situations, have been analyzed in [19]. One of the important issues mentioned earlier that should be considered is the introduction of a suitable location for the NPP within the electricity network connected to it. In their research, Kiomarsi et al. introduced a suitable construction site for the NPP for the situation after the trip of the power plant, when the power plant is not able to return to normal operating conditions and needs to receive power from the national grid to cool the reactor [20].
In addition to the optimal location of the NPP, one way to increase the probability of powering the NPP after a trip is to make the necessary changes in the electricity network infrastructure. Modifications to the power grid infrastructure include adding or removing a production unit or a transmission line as needed. Making such changes and developing the power network is very important and strategic. Developing the power grid to increase the security of an NPP is a scenario that must be evaluated before nuclear events such as a reactor core melt occur. Developing a power system costs the planner extra time and money and changes the network topology. However, the changes in the topology of the power system and their additional costs are negligible compared with nuclear accidents caused by the failure of the off-site power system to supply power to the nuclear power plant after the power plant unit trip.
Numerous studies have addressed power grid development planning in traditional and restructured power systems using metaheuristic methods such as the Genetic Algorithm [21][22][23], Simulated Annealing [24], Tabu Search [25], the Particle Swarm Optimization algorithm [26], and the Shuffled Frog Leaping Algorithm [27]. However, no research has been done on generation and transmission development (GTD) in the presence of an NPP with the purpose of supplying electricity to the auxiliary loads after the power plant trip. Therefore, in this article, using the MC probabilistic method with 2000 repetitions (this number of repetitions is chosen to achieve convergence of the response), the optimal location for the construction of a nuclear power plant is first identified. Then, having verified the suitability of the introduced location, the genetic metaheuristic algorithm is used to develop the power system (development of transmission lines and production units), taking into account the outage rates of the production units, transmission lines, and protection relays, in order to reduce the probability of failure to supply power to the critical loads of the NPP from the off-site power system after the trip. This paper is organized as follows. In Sect. 2, the problem of supplying power to the auxiliary loads from the off-site power system is formulated based on voltage, frequency, and transient current stability measures. Section 3 describes the probabilistic methods and the proposed algorithms for solving the problem and how they are used in this article. In Sect. 4, the issue of power system development planning is addressed; the proposed flowchart and its solution method in power system development planning are also presented. In Sect. 5, in order to evaluate the efficiency of the proposed methods, the test network is introduced. In Sect. 6, the results of the simulation of the proposed methods on the IEEE 24-bus network are analyzed. Section 7 concludes the article. Section 8 presents future studies.
Stability analysis functions
Using Optimal Power Flow (OPF), the probability of supplying the house load (House Load Probability, HLP) of the NPP from the off-site power system can be evaluated based on the criteria of the stability limit, voltage limit, frequency limit, and transient current.
The mathematical equations used to solve this problem are as follows [28]. According to Eq. (1), the amount of active power produced is equal to the active power of the load in the network; the active load power in the network is the sum of the active power consumption and the network losses. After an incident is applied to the power system in the transient state, the changes in the speed of the power plant unit, as well as the frequency changes of that unit, are obtained from the difference between the active power output and the active load power in the power network. In Eq. (2), the amount of reactive power produced is equal to the reactive power of the load in the network; the reactive load power in the network is the sum of the reactive power consumption and the reactive losses of the network. Obviously, after an incident occurs in the power system in the transient state, the voltage changes of the network buses result from the difference between the reactive power produced and the reactive power of the load in the power network.
In Eqs. (3) and (4), the constraints indicate the limits of the active and reactive power produced by the power plant units. Failure to respect the reactive power limit Q_Gi will cause damage to the excitation coil. The active and reactive power limitations include the stator winding limitation, rotor winding limitation, turbine mechanical limitation, and torque angle limitation, which must be considered in power grid stability calculations.
In Eq. (5), the ratio of the change in the reactive power of the network load to the change in its active power is considered and is assumed to be constant. If the amount of the network load, which includes load and losses, changes in the power system simulation calculations, the ratio of the changes in reactive power to the changes in active power of the network load remains the same.
Equation (6) shows the operating voltage range of each bus. Under normal operating conditions, the bus voltage is allowed to vary in the range from 0.95 to 1.05 per unit (pu); under emergency conditions, this range can be extended from 0.9 to 1.1 pu.
Another limitation applied in this paper is the flow limit across the transmission lines. In terms of power system stability, the maximum power through the transmission lines should be considered in the operation of the power system. Note, however, that in the simulation provided for the NPP, all restrictions are enforced by protection relays, and the protection relays operate if the specified limits are exceeded.
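Equations (1)-(6) themselves are not reproduced in this extracted text. Based on the descriptions above, a plausible rendering of the constraint set is sketched below; the symbols P_Gi, Q_Gi, P_Di, Q_Di, P_loss, Q_loss, and V_i are assumed names introduced here, not notation taken from the original paper:

\sum_i P_{Gi} = \sum_i P_{Di} + P_{loss} \quad (1)
\sum_i Q_{Gi} = \sum_i Q_{Di} + Q_{loss} \quad (2)
P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max} \quad (3)
Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max} \quad (4)
\Delta Q_{D} / \Delta P_{D} = \text{const} \quad (5)
0.95 \le V_i \le 1.05 \ \text{pu (normal)}, \qquad 0.9 \le V_i \le 1.1 \ \text{pu (emergency)} \quad (6)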
Probabilistic assessment
The House Load Probability (HLP) is a dependent index and cannot be calculated as a single constant value. It therefore makes sense to use probabilistic methods, such as the Monte Carlo (MC) method, which always provide a wide range of data.
In order to approach the real situation in the simulations performed in this paper, uncertainties such as the correct operation of the protection relays, the outage of transmission lines, and the outage of power plants are considered when supplying the house load of the power plant from the power system at the moment of trip, and all protection relays, such as voltage, current, and frequency relays, are modeled in the simulation.
Monte Carlo simulation random sampling (MCS-RS) Method
The steps of implementing the MC algorithm with the aim of supplying electricity to the house loads of the NPP after the trip of the power plant are as follows [22]:
Step 1: Specify the power plant states by producing uniform random variables and comparing them with the forced outage rates (FOR) of the power plants; the FOR for each production unit is shown in Table 1.
Step 2: According to Eq. (8), a random number is generated for the transmission line under study; if this random number is less than the FOR of the transmission line, that line is taken out of the circuit, and otherwise it remains in the circuit. The FOR of the transmission lines is 0.02.
Step 3: According to Eq. (8), a random number is generated for the protection relay under study; if this random number is less than the FOR of the protection relay, that relay is taken out of the circuit, and otherwise it remains in the circuit. The FOR of the protection relays is 0.02.
Step 4: Transient studies of the power system after the trip of the nuclear power plant are carried out with the aim of supplying electricity to the auxiliary loads from the off-site power system. The results are then analyzed and, after evaluation, the data are stored.
Step 5: If the stopping criterion is met, the execution of the algorithm is complete; otherwise, the algorithm returns to the first step and steps 1 to 4 are repeated.
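A minimal sketch of the component-state sampling in Steps 1-3 is given below. The FOR values follow the text (0.02 for lines and relays), while the unit list, its FOR values, and the counts of lines and relays are placeholders; the sampled state would then be handed to the transient study of Step 4, which in the paper is performed in DIgSILENT.

```python
import random

FOR_LINE = 0.02    # forced outage rate of transmission lines (from the text)
FOR_RELAY = 0.02   # forced outage rate of protection relays (from the text)

def sample_state(unit_for, n_lines, n_relays):
    """Draw one Monte Carlo system state: each component is out of service
    if a uniform random number falls below its forced outage rate."""
    return {
        "units_in_service": {u: random.random() >= f for u, f in unit_for.items()},
        "lines_in_service": [random.random() >= FOR_LINE for _ in range(n_lines)],
        "relays_in_service": [random.random() >= FOR_RELAY for _ in range(n_relays)],
    }

# Example with hypothetical unit FORs (the actual values are listed in Table 1):
unit_for = {"unit_A": 0.12, "unit_B": 0.05}
state = sample_state(unit_for, n_lines=38, n_relays=24)
# 'state' would be passed to the transient simulation (Step 4) to check whether
# the NPP house load can still be supplied after the trip.
```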
The main criterion for stopping the MC algorithm is reaching a highly reliable answer. Therefore, in this paper, the coefficient of variation (CV) is first calculated according to Eqs. (9)-(11) for each studied stage and compared with a very small threshold; if the CV is smaller than this threshold, the algorithm stops. (The smaller the threshold chosen for the CV, the higher the accuracy of the calculations and the longer the computation time of the algorithm.) Another stopping criterion in the Monte Carlo algorithm is based on the number of iterations. After 2000 repetitions, based on the CV of the Monte Carlo method, the optimal response was obtained and the stopping criterion was satisfied. Therefore, in this paper, in order to simplify and reduce the computational time, the MC method uses the number of repetitions as the stopping criterion, with 2000 repetitions as the computational measure.
In Eq. (9), the studied variables x_i are different in each step.
Equation (10) shows the standard deviation, in which x_i − X̄ is the deviation of the i-th data point from the mean of the data. If the standard deviation of the data set is small, the data are scattered closely around their mean and are therefore closer to each other; if the standard deviation is large, the data are scattered widely around their mean and are therefore farther apart.
Equation (13) shows the coefficient of variation (CV). The CV is a measure of the dispersion of the data, calculated by dividing the standard deviation by the mean.
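A minimal sketch of the CV-based stopping test described here (sample standard deviation divided by the sample mean, compared with a small tolerance) follows; the tolerance value is an assumption for illustration only.

```python
import statistics

def cv_converged(samples, tol=0.05):
    """Stop the Monte Carlo loop once the coefficient of variation of the
    estimate (sample standard deviation / sample mean) drops below tol."""
    if len(samples) < 2:
        return False
    mean = statistics.fmean(samples)
    if mean == 0.0:
        return False  # CV is undefined; keep sampling
    return statistics.stdev(samples) / mean < tol

# Example: indicator samples (1 = failure to supply the house load, 0 = success)
history = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
print(cv_converged(history))  # with so few samples the CV is still large -> False
```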
Genetic algorithm (GA)
The GA is widely used to solve various electrical engineering problems. The basis of the GA is Darwin's law of evolution (survival of the fittest). The law states that, in nature, stronger beings replace weaker ones, and weaker beings are doomed to extinction; it is based on the premise that the better living beings are able to adapt to their environment, the longer they survive and the more stable their offspring will be. In its computational iterations, the genetic optimization algorithm effectively selects a set of possible answers that search different areas of the possible response space. The GA has a simple structure for simulation and, owing to this structure, it can easily escape local optima and reach the final optimal answer. This algorithm can be used in discrete and continuous, linear or nonlinear problems; however, depending on the type of optimization problem, proper coding of the chromosomes should be chosen before starting the program. Obviously, proper coding, in addition to increasing the convergence rate toward the answer, increases the accuracy of the calculations.
The important operators of the GA used in this paper are as follows. In the first step of the GA, the initial population is created randomly; creating an initial population means creating chromosomes with random genes. In the second step, the crossover and mutation operations are applied to the created chromosomes, which produces new chromosomes, while the parent chromosomes are kept unchanged. In the crossover operator, two chromosomes are randomly selected from the parents and cut at a random gene, and after the gene segments are exchanged, offspring are created. In this paper, in accordance with the laws of nature, not all chromosomes are recombined; with a probability of 90%, which is the crossover rate, recombination of chromosomes and production of offspring chromosomes is performed. The mutation operator in the GA is applied based on the mutation rate, which is usually a number between 10 and 30%. A genetic mutation is attempted on a random gene of a random parent chromosome; this mutation may improve or worsen the chromosome.
The total population, which includes the parents, the crossover offspring, and the mutated chromosomes, is then evaluated. For this purpose, all chromosomes of the total population are assessed using the purpose function. The evaluation function, which is the same as the purpose function, is used to calculate a numerical value for each chromosome. Using the evaluation function, the chromosomes are ranked with respect to the ultimate optimization goal, and the superior chromosome is easily separated from the undesirable ones. After the chromosomes are sorted, the best chromosome is determined. If the stopping criterion is met, the GA ends; otherwise, a new population is created and the algorithm moves on to the next iteration.
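A minimal sketch of the GA loop described above is given below, using the parameter values quoted later in the paper (population 50, crossover rate 90%, mutation rate 10%, 100 iterations); the fitness function here is a purely illustrative stand-in for the MATLAB-DIgSILENT evaluation.

```python
import random

POP_SIZE, N_GEN = 50, 100
CROSSOVER_RATE, MUTATION_RATE = 0.9, 0.1
N_BUSES = 24  # IEEE RTS 24-bus system

def random_chromosome():
    # gene 1: start bus of the new line, gene 2: end bus, gene 3: bus of the new plant
    return [random.randint(1, N_BUSES) for _ in range(3)]

def fitness(chrom):
    """Placeholder: in the paper this is the number of failures to supply the
    NPP house load, returned by the DIgSILENT Monte Carlo run (lower is better)."""
    return float(sum(chrom))  # illustrative only

def evolve():
    pop = [random_chromosome() for _ in range(POP_SIZE)]
    for _ in range(N_GEN):
        children = []
        for _ in range(POP_SIZE // 2):
            p1, p2 = random.sample(pop, 2)
            if random.random() < CROSSOVER_RATE:            # single-point crossover
                cut = random.randint(1, 2)
                children += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
        if random.random() < MUTATION_RATE:                  # mutate a random gene of a random parent
            victim = random.choice(pop)[:]
            victim[random.randrange(3)] = random.randint(1, N_BUSES)
            children.append(victim)
        pop = sorted(pop + children, key=fitness)[:POP_SIZE]  # keep the best chromosomes
    return pop[0]

best = evolve()
```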
Particle swarm optimization algorithm (PSO)
PSO is an optimization technique and one of the evolutionary algorithms inspired by nature. It is an optimization technique based on probabilistic rules. PSO is rooted in two main currents of cognitive science and is generally tied to the behavior of flocks of birds or schools of fish, and to swarm theory in particular. In this method, by adjusting the trajectory of a population of particles in the problem space, based on information about the best previous performance of each particle and the best previous performance of its neighbors, each particle searches a part of the problem space [31,32].
Using this scenario, PSO solves optimization problems. In PSO, every answer to a problem is the position of a bird, called a particle, in a search space. All particles have a fitness value obtained from the fitness function to be optimized, and the bird closer to the food has higher fitness. Each particle also has a velocity that directs its motion toward the current optimal particle [31,32].
The PSO starts with a group of random answers (particles) and then searches for the optimal answer in the problem space by updating the positions of the particles. Each particle is defined as multi-dimensional (depending on the nature of the problem) with two values, x_id and v_id, which represent the position and velocity corresponding to the d-th dimension of the i-th particle, respectively. At each step of the population movement, each particle keeps track of two best values. The first is the best answer, in terms of fitness, that each particle has achieved so far (its fitness value must be saved); this value is called p-best. The other best value tracked by PSO is the best value ever obtained by any particle in the population; this is the global best, called g-best (its fitness value must also be saved). If a particle only participates in its own spatial neighborhood of the population, this value is calculated only within that neighborhood and is the local best, called l-best. After finding the two values p-best and g-best (or l-best), each particle updates its new velocity and position according to the following equations, in which, in Eqs. (14) and (15), c_1 and c_2 are learning factors (also called acceleration coefficients) and rand is a random number in the range (0, 1) [31,32]. To prevent divergence of the algorithm, the final value of each particle's velocity is limited by Eq. (16).
Equation (14) consists of the sum of three terms: the first is a fraction of the current velocity of the particle, whose role is similar to the momentum term in a neural network; the second is proportional to the difference between the particle's current position and the best position it has found so far; and the third, which is the difference between the particle's position and the best answer found by the whole population, steers the new particle velocity toward the optimal answer. w, c_1, and c_2 are PSO parameters, and convergence depends on their values. c_1 is usually set equal to c_2, with a numerical value between 1.5 and 2. Convergence is highly dependent on the value of w, which should be defined dynamically (in the range 0.2-0.8) in such a way that it decreases linearly during the evolution of the population. Initially, w must be large in order to find good answers in the early stages, and in the final stages a small w leads to better convergence.
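A minimal sketch of the velocity and position updates of Eqs. (14)-(16) is shown below. The search bounds, dimension, velocity clamp, and fitness function are illustrative assumptions; the linearly decreasing inertia weight and c_1 = c_2 = 2 follow the parameter ranges quoted in the paper.

```python
import random

C1 = C2 = 2.0
W_MAX, W_MIN = 0.9, 0.4
V_MAX = 4.0                      # velocity clamp, Eq. (16); value assumed for illustration
DIM, N_PART, N_ITER = 3, 50, 100

def fitness(x):                  # placeholder for the DIgSILENT-based evaluation
    return sum(xi * xi for xi in x)

pos = [[random.uniform(1, 24) for _ in range(DIM)] for _ in range(N_PART)]
vel = [[0.0] * DIM for _ in range(N_PART)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)

for it in range(N_ITER):
    w = W_MAX - (W_MAX - W_MIN) * it / (N_ITER - 1)     # linearly decreasing inertia weight
    for i in range(N_PART):
        for d in range(DIM):
            # Eq. (14): inertia + cognitive (p-best) + social (g-best) terms
            vel[i][d] = (w * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-V_MAX, min(V_MAX, vel[i][d]))   # Eq. (16): velocity limit
            pos[i][d] += vel[i][d]                           # Eq. (15): position update
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)
```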
The purpose function
The purpose function considered in this paper is based on locating and optimizing the expansion of the transmission lines and the power plant units in the grid. In this article, the grid expansion conditions are examined separately for a single line and for a single power plant unit. Flowchart 1 shows the procedure. This paper examines three scenarios. In the first scenario, the base grid is analyzed with 2000 repetitions using the MC probabilistic method. In the second scenario, one transmission line is added to the grid using the GA method, and the effect of its presence on the house load of the NPP in the case of a power plant unit trip is observed. In the third scenario, one power plant unit is added to the grid using the GA method, and the effect of its presence on the house load of the NPP is analyzed in the case of a power plant unit trip.
As shown in Fig. 1, after generating random chromosomes in the GA and applying crossover and mutation to them, the suitability of the chromosomes must be assessed. For this purpose, the desired chromosome is sent from MATLAB to DIgSILENT to check the purpose function and the problem constraints in the DIgSILENT software.
The purpose function of the optimization considered in this paper is to minimize the number of failures to supply the auxiliary loads of the NPP:
Min f, where f = Sum(failures to supply house loads to the NPP).    (17)
In this paper, three genes are considered for each chromosome, and each gene takes an integer value between 1 and 24 (depending on the system under study). The coding for this problem is as follows: the first gene is the starting bus of the new transmission line, the second gene is the ending bus of the new transmission line, and the third gene is the bus of the new power plant. Obviously, if the third gene is inactive, this part is not considered, because in some scenarios there is no need to build a power plant at all and only the construction of a transmission line is considered.
Constraints considered in the GTD
One of the most important problems with the GTD is its excessive size, its complexity, and the difficulty of modeling it. For this reason, the following constraints are considered (a sketch of the corresponding feasibility checks is given after this list):
• Only 1 line may be added to the lines of the grid.
• Only 1 power plant may be added to the units of the grid.
• Given that the chromosome defined in the GA identifies the buses on either side of the line, the transmission line is allowed to be added to the power system only if the voltages of the buses specified by the GA are the same. Otherwise, no line is built and the fitness of the unsuccessful chromosome is calculated with penalty coefficients applied to lines that violate this constraint.
• The inequality constraint between the start and end buses of the chromosome is observed. This restriction is applied to all chromosomes sent from MATLAB to DIgSILENT; if the buses on both sides are equal, the evaluation of this chromosome is not calculated and penalty coefficients are applied.
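A minimal sketch of the chromosome encoding and of the feasibility/penalty checks described above follows; the bus-voltage map covers only a few buses of the RTS-24 system for illustration (bus 3 at 138 kV and bus 24 at 230 kV are quoted later in the paper), and the penalty value is an assumption.

```python
# Chromosome: [start bus of new line, end bus of new line, bus of new plant]
# Bus voltage levels (kV) for a few IEEE RTS-24 buses, for illustration only.
BUS_KV = {1: 138, 2: 138, 3: 138, 9: 138, 10: 138, 18: 230, 21: 230, 24: 230}
PENALTY = 1_000.0   # assumed penalty added to infeasible chromosomes

def feasibility_penalty(chrom):
    start, end, _plant_bus = chrom
    if start == end:                       # start and end buses must differ
        return PENALTY
    if BUS_KV.get(start) != BUS_KV.get(end):
        # Voltage levels differ: a plain line is not allowed (the paper inserts a
        # transformer instead via a sub-program); here we simply penalize the chromosome.
        return PENALTY
    return 0.0

print(feasibility_penalty([3, 24, 3]))   # 138 kV vs 230 kV -> penalized
print(feasibility_penalty([3, 9, 3]))    # both 138 kV -> feasible (0.0)
```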
After the constraints are examined by the DIgSILENT software and penalty coefficients are applied to the chromosomes, a chromosome that satisfies the initial constraints is passed to the grid development section. In this sub-section, the desired equipment class is first identified and called: according to the defined scenario and the code sent from MATLAB to DIgSILENT, either the transmission line class or the power plant class is called. Then, according to the line voltage level and the nominal voltage level of the power plant, the type of equipment is specified; line parameters such as R, X, B, the line voltage level, the nominal apparent power, and the line length are also considered in the calculations. After the desired object has been fully defined, it is placed in the graphical interface according to its location and connected on both sides. After the new element has been added and drawn in the grid, the MC method is executed. After completing the number of repetitions and reaching the stopping criterion, the program reports the number of cases of failure to supply the house load of the NPP unit and sends it to MATLAB as the evaluation value.
Numerical studies
In order to evaluate the efficiency of the proposed methods for assessing the ability of the power system to supply the NPP after the power plant trip, the IEEE 24-bus test system has been used; its single-line diagram is shown in Fig. 2.
Voltage, current, and frequency relays are configured on all buses of the system under test, and the settings of each relay in this research are as follows. Voltage relays: these relays operate when the voltage increases or decreases by 0.3 pu, after a delay of 0.2 s from the occurrence of the fault; the voltage is sampled 10 times per second.
Current relays: these relays operate if a current of more than 1.8 kA flows through the power system equipment for more than one second; if the overcurrent is not continuous, the relays are reset and do not operate. The sampling rate of the current transformers is 50 samples per second. Frequency relays: the operating settings of these relays cover the range 45-55 Hz, and the frequency relays operate once the frequency leaves this range. The time between the start of the fault and the operation of the frequency protection relay is 0.2 s. The frequency sampling rate of the power plant units is 10 samples per second. In the present study, in order to approach real conditions and obtain the best performance of the power system in response to network frequency changes, two power plants are equipped with automatic governor adjustment and a frequency control loop.
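The relay settings above can be collected into a small configuration sketch; the data structure and function names below are illustrative and not part of the original study.

```python
# Protection relay settings as described in the text.
RELAY_SETTINGS = {
    "voltage":   {"threshold_pu": 0.3, "delay_s": 0.2, "sample_hz": 10},
    "current":   {"threshold_ka": 1.8, "min_duration_s": 1.0, "sample_hz": 50},
    "frequency": {"min_hz": 45.0, "max_hz": 55.0, "delay_s": 0.2, "sample_hz": 10},
}

def frequency_relay_trips(freq_hz):
    """Trip if the measured frequency leaves the 45-55 Hz operating band."""
    s = RELAY_SETTINGS["frequency"]
    return not (s["min_hz"] <= freq_hz <= s["max_hz"])

print(frequency_relay_trips(49.2))  # inside the band -> False
print(frequency_relay_trips(44.0))  # below 45 Hz -> True
```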
In the study [20], the best place to build an NPP was estimated using the MC-RS probabilistic method, and bus No. 3 was found to be the best location for the construction of the NPP. With the construction of a nuclear power plant at this bus, the probability of failure to supply the house load of the NPP is reduced. As mentioned earlier, this paper considers three scenarios. In the first scenario, the base grid is analyzed with 2000 repetitions using the MC probabilistic method. In the second scenario, one transmission line is added to the grid and the effect of its presence on the house load of the NPP in the case of a power plant unit trip is observed. In the third scenario, one power plant unit is added to the grid and the effect of its presence on the house load of the NPP is analyzed in the case of a power plant unit trip.
Results and Discussion
The NPP generates 400 MW (according to Table 1 and Fig. 2) before the accident and is connected to bus 18 of the test system [30]. However, in order to examine the grid conditions carefully, the performance of the protection relays at the time of the accident for the NPP unit is analyzed with the MC method, the transient conditions and the house-load power of the power plant after the trip are evaluated, and, after the preferred bus for building the nuclear power plant has been introduced, this configuration is used as the base grid.
The base network is defined such that the NPP is built at bus No. 3, and the analysis of the number of protection relays operating at the time of the accident for the NPP unit is performed with the MC method. Figure 3 shows the mean number of protection relay operations after 2000 repetitions of the MC method on the IEEE 24-bus network. As can be seen, the mean number of protection relay operations has converged to 3.6595. In other words, this figure shows the mean number of protection relays operating at the time of the accident (power unit trip) for the NPP.
Supplying the house load of the NPP is very important for network stability. For this reason, Fig. 4 shows the number of times the supply of the NPP auxiliary loads fails in the base network. As can be seen in Fig. 4, after 2000 repetitions of the MC method, power supply failure to the NPP occurred only 7 times in the MC-RS method. Figure 5 shows the mean frequency of power supply failure to the critical auxiliary loads of the NPP: the mean number of supply failures is 0.0035, which is a very good figure. According to the results obtained from the MC method and Figs. 4 and 5, it can be concluded that selecting bus No. 3 for the construction of the NPP is very suitable for supplying electricity to the auxiliary loads of the power plant after a trip of this power plant unit, in order to cool the reactor and prevent the reactor core from melting. Choosing the right place to build an NPP is a positive step towards the security of the power plant. According to Fig. 4, the number of failures to supply the auxiliary loads of the NPP is 7; this number decreases with the construction and development of the transmission network and power plant units, and the security of the NPP after the trip, as well as its power supply from the off-site power system, also increases.
For the transmission line development, the optimal location of the new transmission line in the network is determined using the GA and PSO algorithms. In this paper, for the GA, the initial population is 50 and the number of iterations is set to 100 to ensure convergence to the answer; a mutation rate of 10% and a crossover rate of 90% are considered. For the PSO algorithm, the number of iterations is also set to 100, so that the results of the PSO and GA algorithms can be compared. The number of initial particles in the PSO method is 50, the minimum inertia coefficient is 0.4, and the maximum inertia coefficient is 0.9. The GA and PSO are first implemented in MATLAB and randomly identify the buses at the start and end of the transmission line. After the initial processing in MATLAB, the GA sends the line information to DIgSILENT to evaluate the fitness of the chromosomes, and the PSO algorithm sends the particle information to evaluate the fitness of the particles. DIgSILENT uses a subroutine to build the lines and transformers, and then a Monte Carlo probability assessment program to determine the fitness of the chromosomes and particles, and sends the result back to MATLAB. In this scenario, the NPP is located at bus No. 3 and an attempt is made to build a new line.
After the GA and PSO processes are completed and the convergence curve has fully converged to the final answer, the optimal location for the construction of the new transmission line is estimated to be between buses 3 and 24. However, as shown in Fig. 2, bus 3 has a voltage of 138 kV and bus 24 has a voltage of 230 kV; therefore, a line cannot be constructed between these two voltage levels. To solve this problem, a sub-program has been written which, if this situation is detected, uses a transformer instead of a line. Table 2 compares several of the best sample cases for the target grid using the GA and PSO algorithms.
As shown in Table 2 and Fig. 4, before developing the power system, in the base network where the NPP is located at bus No. 3, there are 7 cases of power supply failure to the auxiliary loads of the NPP after 2000 repetitions of the MC method. As can be seen in Table 2, the best place to build the transmission line in the GA algorithm is between buses 3 and 24, and for the PSO algorithm it is between buses 3 and 24 as well as between buses 3 and 9. Therefore, with the construction of a transformer between buses 3 and 24, in both evolutionary algorithms and in all possible states of the MC method, the power supply to the critical loads of the NPP is completed successfully and no unsuccessful cases are reported. This means that, with the construction of the NPP at bus No. 3 and based on the results of the GA and PSO algorithms for constructing a transmission line and a transformer between buses 3 and 24, the auxiliary loads of the power plant are always fully supplied after the power plant trip; under the new network conditions, considering all the uncertainties, the critical loads of the power plant are always properly supplied. After that, the construction of a transmission line between buses 3 and 9, with 1 supply failure in the GA method and 0 failures in the PSO method for the auxiliary loads of the power plant after the trip, is in second place. Figure 6 shows the mean number of protection relay operations after 2000 repetitions of the MC probabilistic method and after the construction of a transformer between buses 3 and 24 on the IEEE 24-bus network. As can be seen, the mean number of protection relay operations in the base network was 3.6595, and in the developed network this number is reduced to 3.3635 using the GA algorithm and to 3.5250 using the PSO algorithm. In other words, the number of protection relays operating at the time of the accident for the NPP in this scenario has decreased.
In this part of the paper, the goal is to optimize grid development from the perspective of power plant units. The development of power plant units is limited to the addition of only one power plant unit to the network and is done using the GA and PSO. After the complete convergence of the GA and PSO and reaching the stopping criterion, bus No. 3 is introduced for the construction of the new power plant. This choice follows from the optimization, because the goal in this paper is to optimize the addition of power plants to the grid in order to reduce the likelihood of power supply failure to the auxiliary loads of the NPP in the event of an NPP accident. Considering the structure of the base network, where the NPP was built at bus No. 3, the addition of a new power plant at bus No. 3 means this bus hosts two power plant units. The following is an analysis of the number of protection relays operating at the time of the accident for the NPP in the MC method. Figure 7 shows the mean number of protection relay operations after 2000 repetitions of the MC method on the IEEE 24-bus network in two cases: one where the NPP alone is installed at bus No. 3 (base network), and the other where a new power plant, supplying the NPP after the power plant trip, is also installed at bus 3 (generation development). As can be seen, the mean number of protection relay operations for the base network is 3.6595, and for the network in the developed condition with one additional power unit it converges to 3.1160 in the GA and to 3.1420 in the PSO. In other words, this figure shows the mean number of protection relays operating in the event of an accident for the NPP. Figure 7 shows that, with the development of the power plant unit, the mean number of operations of the protection relays connected to the power plant after the power plant trip has improved compared with the base network, which means increased safety of the NPP.
As mentioned, the power supply to nuclear power plants is of great importance for network stability. For this reason, Fig. 8 shows the number of power supply failures to the auxiliary loads of the NPP in the base network and in the developed network with the addition of a power plant unit. As can be seen in Fig. 8, after 2000 repetitions of the Monte Carlo method, the number of power supply failures to the auxiliary loads of the NPP after the trip is estimated to be 7 in the MC-RS method for the base network; after adding the power plant unit to the network, this number is reduced to 4 using the GA and to 2 using the PSO method. Figure 9 shows the mean number of power supply failures to the auxiliary loads of the NPP: the mean frequency of failure was 0.0035 for the base network and decreases to 0.002 for the developed network using the GA and to 0.001 using the PSO method. Therefore, the development of the network's power plant units and the optimal location of the new unit can reduce the probability of failure to supply the auxiliary loads of the NPP to an acceptable level.
Conclusion
The nuclear power plant is connected to the power grid and transfers its generated power to the grid and receives its required power from the grid. The trip of the power plant unit causes power shortage, frequency drop and fluctuation in the mains voltage and puts the power system under severe stress and pushes it to the brink of collapse. This is especially important when the power system, while maintaining grid stability, provides the power required by the nuclear power plant to cool the reactor. This requires monitoring, protection of the power grid connected to the power plant, selection of the appropriate location for the construction of the nuclear power plant and extensive control and development of the electrical power system.
In this paper, with proper planning in selecting the appropriate location for the construction of a nuclear power plant and the development of the transmission system and power system power plant units after trip power plant, in addition to preventing the collapse of the power system, the power required to power internal loads is managed. First, using the Monte Carlo probabilistic method, considering the output rate for protection relays, production units and lines, a suitable place for the construction Fig. 8 The number of times the NPP auxiliary loads is power failure in the developed network with one power plant unit Fig. 9 The mean of times the NPP auxiliary loads is power failure of a nuclear power plant is introduced, and then using the genetic metaheuristic algorithm and particle swarm algorithm which are an effective optimization methods in the Development of transmission and generation system, the optimal location for the development of transmission line and generation unit for the purpose of supplying electricity to auxiliary loads is introduced and the simulation results are implemented on the IEEE RTS 24-bus system.
Finally, the research findings show that the selected site for the construction of the nuclear power plant, together with the developed network obtained by adding a transmission line and a generating unit, prevents the collapse of the power system and keeps the network voltage and frequency under control, while increasing the likelihood that the internal loads remain supplied after the plant trip and thereby largely averting a reactor core meltdown.
Future study
• Investigating the effect of the presence of distributed generation sources on the recovery of the power required by the nuclear power plant from the off-site power system after the trip.
• Considering the optimal location and development of the power system for nuclear power plants after a plant trip simultaneously with the possibility of failure of part of the power system in future work.
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 11,153.8 | 2021-03-02T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Development of Estimation Procedure of Population Mean in Two-Phase Stratified Sampling
This article describes the problem of estimating the finite population mean in two-phase stratified random sampling. Using information on two auxiliary variables, a class of product to regression chain type estimators has been proposed and its characteristics are discussed. The unbiased version of the proposed class of estimators has been constructed and the optimality condition for the proposed class of estimators is derived. The efficacy of the proposed methodology has been justified through empirical investigations carried out over data sets from natural populations as well as from an artificially generated population. Survey statisticians are therefore encouraged to use it.
Introduction
In the present paper we make use of auxiliary information extracted from variables that are correlated with the study variable. Auxiliary information may be utilized at the planning, design and estimation stages to develop improved estimation procedures in sample surveys. Sometimes, information on an auxiliary variable may be readily available for all units of the population; for example, the tonnage (or seat capacity) of each vehicle or ship is known in transportation surveys, and the number of beds available in different hospitals may be known well in advance in health-care surveys. When such information is lacking, it is sometimes relatively cheap to take a large preliminary sample in which the auxiliary variable alone is measured; this practice is what two-phase (or double) sampling exploits. Two-phase stratified sampling is a powerful and cost-effective (economical) technique for obtaining a reliable first-phase (preliminary) estimate of the unknown parameters of the auxiliary variables. For example, Sukhatme [1] mentioned that in a survey to estimate the production of the lime crop based on orchards as sampling units, a comparatively larger sample is drawn to determine the acreage under the crop, while the yield rate is determined from a subsample of the orchards selected for determining acreage.
In order to construct an efficient estimator of the population mean of the auxiliary variable in first-phase (preliminary) sample, Chand [2] introduced a technique of chaining another auxiliary variable with the first auxiliary variable by using the ratio estimator in the first phase sample. The estimator is known as chain-type ratio estimator. This work was further extended by Kiregyera [3,4], Tracy et al. [5], Singh and Espejo [6], Gupta and Shabbir [7], Shukla et al. [8], Choudhury and Singh [9], Parichha et al. [10] and among others, where they proposed various chain-type ratio and regression estimators.
In practice, the population may often consist of heterogeneous units. For example, in socio-economic surveys, people may live in rural areas, urban localities, ordinary domestic houses, hostels, hospitals, jails, etc. In such a situation one should carefully study the population according to the characteristics of the regions and then apply the sampling scheme stratum-wise independently. This procedure is known as stratified random sampling. It may be noted that most developments in the two-phase sampling scheme are based on simple random sampling only, while a limited number of attempts have addressed the problems of two-phase sampling in the setting of stratified random sampling. It is also noticeable that most of the research work on two-phase sampling produces biased estimates. However, bias can be a serious drawback in sample surveys. A sampling method is called biased if it systematically favors some outcomes over others. It results in a biased sample of a population (or of non-human factors) in which all individuals, or instances, were not equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling. For example, telephone sampling is common in marketing surveys. A simple random sample may be chosen from the sampling frame consisting of a list of telephone numbers of people in the area being surveyed. This method does involve taking a simple random sample, but it is not a simple random sample of the target population (consumers in the area being surveyed). It will miss people who do not have a phone. It may also miss people who only have a cell phone with an area code not in the region being surveyed. It will also miss people who do not wish to be surveyed, including those who monitor calls on an answering machine and do not answer those from telephone surveyors. Thus the method systematically excludes certain types of consumers in the area. It is obvious that inferences from a biased sample are not as trustworthy as conclusions from a truly random sample.
Encouraged by the above work, we propose a class of product to regression chain type estimators in stratified sampling using two auxiliary variables under double sampling. The unbiased version of the proposed class of estimators has been obtained, which makes the estimation strategy more practicable. The dominance of the proposed estimation strategy over the conventional ones has been established through empirical investigations carried out over the data sets of natural as well as artificially generated populations.
Sampling structures and notations
Consider a finite population U = {1, 2, …, N} of N identifiable units divided into L homogeneous strata, with the hth stratum (h = 1, 2, …, L) having N_h units. Let y and (x, z) be the study variable and the two auxiliary variables, taking values y_ih and (x_ih, z_ih), respectively, for unit i = 1, 2, …, N_h of the hth stratum.
The corresponding population means and the population standard deviations of y, x and z in the hth stratum are denoted in the usual way. Let ρ_yx_h, ρ_yz_h and ρ_xz_h be the correlation coefficients between (y, x), (y, z), and (x, z), respectively, in the hth stratum. Chand [2] and Kiregyera [3,4] discussed a situation in simple random sampling when information on x is unknown but another auxiliary variable z is easily available. It is assumed that the population mean of one auxiliary variable, z, is known in advance and the population mean of the other auxiliary variable, x, is unknown. We seek to estimate the population mean of y through a two-phase stratified sampling design. Using a simple random sample without replacement (SRSWOR) scheme at each phase, we adopt the double sampling scheme as follows.
i. In the first phase, a preliminary large sample of size n′_h is drawn from the hth stratum of size N_h (h = 1, 2, …, L) and information on the auxiliary variables x and z is observed.
ii. In the second phase, a sub-sample of size n_h is drawn from the n′_h first-phase units of the hth stratum of size N_h, and information on both the study variable y and the auxiliary variables x and z is recorded.
Let the second-phase sample means ȳ_h, x̄_h, z̄_h and the first-phase sample means x̄′_h = (1/n′_h) Σ_{i=1}^{n′_h} x_hi and z̄′_h = (1/n′_h) Σ_{i=1}^{n′_h} z_hi be the corresponding sample means in the hth stratum.
Discussion on existing estimation strategies
The usual stratified mean estimator for the population mean Ȳ is ȳ_st = Σ_{h=1}^{L} W_h ȳ_h, with stratum weights W_h = N_h/N, and its mean square error (MSE) follows in the standard form. Motivated by the technique adopted by Chand [2], one may frame the chain ratio-product type estimator ȳ_(RP) in the stratified sampling structure, whose bias and MSE, to the first order of approximation, are obtained accordingly. Similarly, inspired by the technique adopted by Choudhury and Singh [9], one may frame the corresponding two-phase stratified random sampling estimator, where k_h is a constant.
The MSE of this estimator follows in a similar manner.
Formulation of proposed estimation strategy
Motivated by the earlier work discussed above, we have constructed a class of product to regression chain type estimators t_p, where k_h (h = 1, 2, …, L) is a real constant which can be suitably determined by minimizing the MSE of the class of estimators t_p, and the chain component uses the regression coefficient between the variables x and z in the hth stratum.
Bias and mean square errors of the proposed class of estimator t p
It can easily be noted that the proposed class of estimators t_p defined in Eq. (8) is a chain product and regression type estimator. Therefore, it is a biased estimator of the population mean Ȳ. We therefore obtain the biases and mean square errors under large-sample approximations using the standard error transformations, with E(e_i) = 0 for i = 1, 2, …, 6, where the e_i (i = 1, 2, …, 6) are relative error terms. Under these transformations the class of estimators t_p may be represented accordingly, and the expectations of the sample statistics of two-phase stratified sampling follow. Expanding binomially, using the results from Eq. (1) and retaining the terms up to the first order of sample size, we have derived the expressions of the bias B(·) and the mean square error M(·) of the class of estimators t_p.
Bias reduction for the proposed class of estimators
A serious drawback of such estimators is their bias. Therefore, unbiased versions of the proposed classes of estimators are more desirable. Motivated by this argument and influenced by the bias correction techniques of Tracy et al. [5] and Bandyopadhyay and Singh [11], we proceed to derive the unbiased version of our proposed class of estimators t_p.
From Eq. (12), we observe that the expression for the bias of the estimator t_p contains population parameters such as μ_003, μ_102, S_yx_h, S_yz_h, S²_x_h and S²_y_h; replacing them by their sample analogues (based on ȳ_h, x̄_h and s_yz_h) we get an estimator of B(t_p), where m_pqr = (1/m) Σ_{i=1}^{m} (x_hi − x̄_h)^p (y_hi − ȳ_h)^q (z_hi − z̄_h)^r. Following the bias reduction techniques of Tracy et al. [5] and Bandyopadhyay and Singh [11], we have derived the unbiased version of the proposed class of estimators t_p to the first order of approximation in two-phase stratified sampling.
Thus, the variance of t′_p to the first order of approximation is obtained accordingly. From Eqs. (10) and (15) it is to be noted that the class of estimators t′_p is preferable over the class of estimators t_p in the two-phase sampling set-up, since t′_p is an unbiased (up to the first order of sample size) class of estimators of Ȳ_h while the class of estimators t_p is biased.
Minimum variance of proposed class of estimators
It is obvious from Eq. (16) that the variance of the proposed class of estimators t′_p depends on the value of the constant k_h. Therefore, we wish to minimize this variance, as discussed below. The optimality condition under which the proposed class of estimators t′_p has minimum variance is obtained, and substituting the optimum value of the constant k_h in Eq. (19) yields the minimum variance of the class of estimators t′_p.
Efficiency comparison of the proposed strategy
It is important to investigate the performance of the proposed class of estimators with respect to the existing ones. We use two natural populations and one artificially generated population data set to justify the supremacy of the proposed strategy.
Empirical investigations through natural populations
The data set of two natural populations has been presented below.
• Population I (Source: Murthy [12], p. 228) y: Factory output in thousand rupees, x: Number of workers in the factory, and z: Fixed capital of factory in thousand rupees.
The data consist of 80 observations which are divided into four strata according to the auxiliary variable z as: (i) z ≤ 500, (ii) 500 < z ≤ 1000, (iii) 1000 < z ≤ 2000, and (iv) z > 2000, respectively. For the allocation of sample size to the different strata, proportional allocation is used.
• Population II (strata: Marmara region, Aegean region, and Central Anatolia region).
Empirical investigations through artificially generated population
An important aspect of simulation is that one builds a simulation model to replicate the actual system. Simulation allows comparison of analytical techniques and helps in concluding whether a newly developed technique is better than the existing ones. Motivated by Singh and Deo [14], Singh et al. [15] and Maji et al. [16], who adopted artificial population generation techniques, we have generated five sets of independent random numbers of size N (N = 100), namely x′_1k, y′_1k, x′_2k, y′_2k and z′_k (k = 1, 2, 3, …, N), from a standard normal distribution with the help of R software. By varying the correlation coefficients ρ_yx and ρ_xz, we have generated the transformed variables of the population U with the values σ²_y = 50, μ_y = 10, σ²_x = 100, μ_x = 50, σ²_z = 50 and μ_z = 20. We have split the total population of size N = 100 into 5 strata, each of size 20 (i.e., N_h = 20, h = 1, 2, …, 5), taking the units sequentially, and consider n′_h = 12 and n_h = 8 (h = 1, 2, …, 5) for the efficiency comparison of the proposed strategy. The percentage relative efficiencies of the proposed class of estimators t′_p with respect to different estimators (under their respective optimum conditions), derived over the data set of the artificially generated population, are obtained as follows.
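As a rough illustration of this population-generation step, the sketch below (in Python rather than the R environment mentioned above) draws independent standard normal variates and transforms them to the stated means and variances while inducing chosen correlations. The linear-combination construction and the specific correlation values used here are assumptions for illustration; the paper's exact transformation equations are not reproduced.

```python
import numpy as np

def generate_population(N=100, rho_yx=0.8, rho_xz=0.6, seed=42):
    """Generate (y, x, z) with specified means/variances and approximate
    correlations rho_yx (between y and x) and rho_xz (between x and z),
    using the standard linear-combination construction."""
    rng = np.random.default_rng(seed)
    e_x, e_y, e_z = rng.standard_normal((3, N))      # independent N(0, 1) draws

    x_std = e_x
    y_std = rho_yx * x_std + np.sqrt(1 - rho_yx**2) * e_y   # corr(y, x) ~ rho_yx
    z_std = rho_xz * x_std + np.sqrt(1 - rho_xz**2) * e_z   # corr(x, z) ~ rho_xz

    # Scale to the target moments quoted in the text:
    y = 10 + np.sqrt(50) * y_std    # mu_y = 10,  var_y = 50
    x = 50 + np.sqrt(100) * x_std   # mu_x = 50,  var_x = 100
    z = 20 + np.sqrt(50) * z_std    # mu_z = 20,  var_z = 50
    return y, x, z

y, x, z = generate_population()
strata = np.array_split(np.arange(100), 5)   # 5 strata of 20 units, taken sequentially
print(np.corrcoef(y, x)[0, 1], np.corrcoef(x, z)[0, 1])
```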
Conclusion
From the construction of the estimation strategy and the efficiency comparison of the proposed methodology, the following points are noted.
1. From Table 1, it is clear that the proposed class of estimators is at least 1% better than the existing ones in estimating the population mean.
2. Similarly, from Table 2 it is found that the new estimator is at least 28% better than the existing ones.
3. It may also be noted from Tables 1 and 2 that the artificially generated population is homogeneous (the mean and variance of the respective variables are almost the same for different strata) whereas the natural populations are heterogeneous (the mean and variance of the respective variables differ across strata) in nature. Our suggested estimators perform with equal efficiency for both types. The percent relative efficiency (PRE) of the proposed estimator t′_p with respect to a competing estimator is obtained as PRE = [MSE of the competing estimator / V(t′_p)] × 100 (see the sketch after this list).
4. The unbiased version of the proposed technique has been obtained, which makes the proposed class of estimators much more practicable.
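For concreteness, the PRE computation used in these comparisons can be expressed in a few lines; the numeric inputs below are hypothetical illustrations, not values from Tables 1 and 2.

```python
def pre(mse_existing, var_proposed):
    """Percent relative efficiency of the proposed estimator t'_p with respect
    to an existing estimator: PRE = MSE(existing) / V(t'_p) * 100."""
    return 100.0 * mse_existing / var_proposed

# Hypothetical illustrative values only:
print(round(pre(1.28, 1.00), 2))   # -> 128.0, i.e. the proposed estimator is 28% better
```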
Thus, it is found that the proposed estimation technique has addressed the problems of estimation through two-phase stratified sampling, which may be useful in real-life applications where the population is heterogeneous in nature and stratification is essential. Due to the benefits achieved by the new estimator, survey statisticians may be encouraged to use it. Table 2 reports the PRE of the proposed estimator t′_p with respect to different estimators over the data set of the artificially generated population. | 3,558.8 | 2019-09-27T00:00:00.000 | [
"Mathematics"
] |
A review of automotive intelligent and adaptive headlight beams intensity control approaches
The automotive headlight stands out as a critical vehicle component, particularly emphasized during nighttime driving. The high beam, designed for optimal driver visibility on long-distance roads, traditionally relies on manual control by the driver. However, this manual control poses challenges, particularly when the high beam light temporarily blinds oncoming drivers. The resultant dazzle for drivers of opposing vehicles is a significant concern. In response to these issues, there is a growing demand for adaptive and intelligent headlights that can autonomously adjust beam intensity. The intelligent headlight system takes on the responsibility of modifying the beam intensities without requiring explicit input from the drivers. This study aims to systematically review various approaches to controlling intelligent headlight beam intensity. The paper identifies four prominent approaches to intelligent headlight beam intensity control, recognized as widely used techniques. Furthermore, the study uncovers intriguing connections between some of these intensity control approaches. A survey on utilization rates indicates that sensor-based and machine learning (ML)-based intensity control approaches are the most commonly employed methods by automotive headlight designers. The paper concludes by providing insights into the future prospects of intelligent headlight technology, offering guidance for future researchers in this field.
Introduction
Automotive headlights serve a crucial role in illuminating the highway and its surroundings during nighttime driving. Given this significance, global regulations mandate institutions, such as the driver and vehicle licensing authority, to inspect and certify the operational condition of headlights for roadworthiness. These inspections involve a thorough examination of the headlight beams and their intensities, ensuring they meet specified minimum standards before certification.1,2 In the pursuit of enhancing passenger and driver safety during nighttime driving, vehicle manufacturers have witnessed a rapid evolution in headlight light sources. From the historical use of candles and lanterns on carriages, the technology has progressed to the current state of intelligent headlights. The evolution in light sources began with tungsten halogen lamps, transitioning to high-intensity discharge (HID) lamps, and now prevalent light-emitting diodes (LEDs). The latest development is steering toward the adoption of Light Amplification by Stimulated Emission of Radiation (Laser) technology.3 Automotive headlights typically feature two primary light beams: the high beam and the low beam. The high beam is utilized to illuminate distant roads, enhancing driver visibility, while the low beam is employed for illuminating the immediate vicinity of the vehicle in traffic-congested environments.4 The existing conventional headlight operates on a purely mechanical basis, requiring the driver to manually switch between high and low beams based on road conditions. A significant factor contributing to nighttime road accidents is poor driver visibility, particularly when the high beam of an oncoming vehicle affects the eyes of the driver traveling in the opposite direction. This occurrence results in temporary blindness for the oncoming driver, often leading to head-on collisions, especially on single-lane dual carriageways. Consequently, developing countries, particularly in Africa, consistently experience higher rates of road accidents and fatalities. The World Health Organization consistently reports elevated fatality rates in Africa and other developing nations, reflecting the prevalence of single-lane dual carriageways in these regions. Figure 1 illustrates the divergent intensity pattern of the headlight high beam, capable of reaching distances exceeding 200 m and covering approximately three lanes. A section of this study delves into the distribution of intensity patterns for both high and low beams to provide a comprehensive understanding of their structures. Due to the high beam's characteristics, improper control can adversely affect drivers from the opposite direction. Motorists with conventional headlights must periodically adjust their high beams based on traffic conditions, a task that becomes more demanding with the growing vehicular population on highways. Continuous adjustments to prevent dazzling other road users during nighttime driving can lead to driver fatigue, a significant contributor to frequent accidents. This repetitive adjustment can also result in driver indifference, allowing the high beam to remain on when encountering oncoming vehicles. Consequently, the imperative for intelligent headlights has arisen to relieve drivers of this control function.[7] The scenario depicted in Figure 1 has contributed significantly to
numerous road accidents worldwide, especially during nighttime driving. According to a report from the World Health Organization, a staggering 70% of road crash fatalities and injuries globally impact the economically productive age group, specifically individuals aged 15-64 years, a demographic predominantly involved in nighttime driving.[10] To illustrate, the People's Republic of China, being one of the most advanced and populous nations globally, reported an estimated population of approximately 1,451,886,932. In 2016, China experienced an estimated 212,846 road crashes, resulting in 63,093 fatalities and 226,430 injuries. The economic toll of these road crash fatalities in China amounted to a staggering 1207.6 million Yuan, emphasizing the significant economic impact of road accidents on nations.11 This highlights a diversion of resources from developmental endeavors to addressing the aftermath of road accidents. In light of these alarming statistics, there is a pressing need to harness recent technological advancements to mitigate the toll of road carnage. Leveraging cutting-edge technology can play a pivotal role in reducing road accidents and their associated economic burdens on nations.
In recent times, the surge in technology and the heightened demand for vehicles have led to an exponential growth in the vehicular population, resulting in significant traffic congestion on highways. The escalating number of vehicles globally has imposed substantial strain on road infrastructure and increased the demands placed on drivers.13-16 The integration of high technology into the design of vehicle components aims to enhance aspects such as ride comfort, safety, and vehicle stability. Designing components that simultaneously satisfy these diverse and sometimes conflicting requirements can be a considerable challenge.19-24 Implementing such technological advancements in the design of intelligent headlight systems could significantly mitigate the impact of high beams on road users during nighttime driving. Research on intelligent headlights commenced in the early 2000s to address the limitations of conventional headlights. However, solving the challenges associated with conventional headlights has proven elusive due to differing approaches among researchers.25 Despite these challenges, the field is rapidly evolving, with expectations of increased investment in intelligent headlight research by numerous automobile companies and research institutions.26 In addressing the challenges associated with poor management of conventional headlight high beams during nighttime driving, researchers in automotive intelligent headlights are exploring various innovative control approaches. For instance, Moon et al.27 introduced an Intelligent Headlight Beam Intensity Control System (IHBICS) utilizing a machine learning-based approach. Similarly, Loong et al.28 and Tamburo et al.29 presented an Intelligent Night Vision Headlight System for automobiles, employing infrared cameras and a computer vision-based approach. Additionally, Bullough et al.30 devised an IHBICS utilizing an Arduino controller to automatically adjust the car headlight system based on surrounding lighting conditions through a sensor-based approach. These diverse approaches share a common goal: designing intelligent headlights capable of autonomously controlling headlight beams from high to low without requiring the driver's explicit consent. The objective is to enable drivers of vehicles equipped with intelligent headlights to focus solely on steering control and other tasks that demand their attention. Each approach brings a unique perspective to the challenge, incorporating machine learning, computer vision, and sensor-based methodologies to enhance the safety and efficiency of nighttime driving.
Various researchers have explored a range of headlight beam intensity control approaches, including Pulse Width Modulation (PWM), Fuzzy Logic, wireless sensor networks, and infrared transmitter-receiver systems.In their study Parvin et al., 31 concluded that, considering global research efforts in this field, intelligent headlights are expected to remain a top research area in the future.Anticipated collaborations among commercial vehicle manufacturers, government organizations, and universities are likely to yield significant advancements in this field in the coming years.Additionally, several authors have pointed out that while visionbased vehicle detection has made significant progress over the past decade, achieving a deeper and more comprehensive understanding of the on-road environment will continue to be an active area of research in the future. 32Despite the high research interest in intelligent headlights, the lack of a unified approach has led to some confusion in the sector.The primary objective of this paper is to conduct a comprehensive review of common intelligent headlight beam intensity control approaches.The aim is to provide guidance for future researchers in understanding the trajectory of intelligent headlight technology, fostering coherence in the pursuit of advancements in this crucial area.
The paper follows a structured organization, with Section 2 providing a comprehensive literature survey of intelligent headlight beam intensity control approaches and a utilization rate survey comparing commonly employed methods.In Section 3, the intensity pattern curves distribution of both high and low beams of the headlight is reviewed.Section 4 delves into the discussion of findings from the review study.The concluding remarks are presented in Section 5, and the paper closes with Section 6, offering acknowledgments.Figure 2 illustrates the overall organizational flow of the review paper.This structured arrangement ensures a systematic exploration and presentation of the research on intelligent headlight technology.
Literature survey
It is evident that intelligent headlight design has become a focal point of research, with many studies focusing on utilizing headlights for object detection, vehicle-tovehicle communication, and traffic control management to mitigate road accidents.However, the primary cause of accidents during nighttime driving remains poor driver vision.Various factors contribute to this, including adverse weather conditions, dusty environments, fog, and, notably, glare from the high beams of oncoming vehicles.The frequent occurrence of poor driver visibility, especially due to glare from opposite vehicles, is a significant contributor to nighttime accidents.To address this issue, the proposed solution is the development of intelligent and adaptive headlight systems capable of autonomously managing headlight beams.This perspective aligns with the viewpoint expressed by Fleming. 33While intelligent and adaptive headlight technology is a subject of significant research interest, the focus appears to be directed in a different direction.Rather than developing systems that take control functions away from drivers, much attention is given to using headlights for vehicle detection, tracking, and communication between vehicles, as suggested by Shun-Hsiang Yu 34 This entails diverting the headlight's purpose from its core function of providing clear illumination for optimal driver visibility during nighttime driving.Moreover, the few studies that do aim to design intelligent headlights to enhance their core functionality often employ varied beam intensity control approaches.This diversity creates concerns for vehicle manufacturers, as the lack of a standardized approach raises questions about which intensity control method to adopt.The need for a more focused and standardized approach to intelligent headlight design is highlighted to ensure that advancements align with the primary goal of improving driver visibility and reducing nighttime accidents.
Several examples highlight the diversity of approaches in designing intelligent headlight systems.For instance Erzheng et al., 35 developed a system using Micro-Electro-Mechanical Systems (MEMS) and a digital signal processor to control the headlight beams' angle.In another study Li et al., 36 designed an intelligent headlight system incorporating modules for information collection, data transmission, data processing, and a motor adjusting the range of vision based on vehicle speed to enhance active safety.Similarly, an intelligent headlight system was created with two Electronic Control Units (ECUs) linked by a Controller Area Network (CAN).One ECU featured a biometric fingerprint security system for vehicle ignition, while the other had an auto mode wiper movement based on a fuzzy logic algorithm and an automatic headlight system to minimize glare. 37The use of a piezoelectric motor as an actuator for dynamic headlight leveling was explored as a potential future step in automotive lighting. 38Authors in Sushmita Pal 39 implemented an intelligent headlight system that improves road visibility and adjusts the headlight in hilly and curved terrains using an Arduino-based conventional headlight with multi-traits.This diverse literature showcases the various methods and approaches employed by designers of intelligent headlight systems.In the next section, a comprehensive review is conducted to determine the headlight beam intensity control approaches adopted by different designers.Some commonly used approaches are illustrated in Figure 3, depicting the evolution of intelligent headlight beam intensity control technology from its inception to the current state and future outlook.The following section provides an overview of each approach, and Table 1 summarizes their strengths and limitations.
Overview of sensor-based control approach
The sensor-based intensity control approach stands out as one of the oldest and widely adopted methods by developers of automotive intelligent systems, particularly in the context of intelligent and adaptive headlight beam intensity control management.This approach employs sensors, often utilizing light-dependent resistors (LDR), in conjunction with microcontrollers to control system operations.LDR sensors play a key role in this approach, detecting light intensity levels in the surrounding environment.When exposed to light, the LDR sensor develops high resistance, impeding the flow of electrons.Conversely, in a dark environment, the LDR sensor exhibits low resistance, allowing a greater flow of electrons.The output from the LDR sensor, indicating the detected light intensity, is then transmitted to a microcontroller, such as an Arduino UNO control board, for interpretation.In addition to LDR sensors, distance measurement sensors, such as ultrasonic and radar sensors, are often integrated into sensor-based headlight beam intensity control systems.These distance measurement sensors calculate the distance between two moving vehicles, providing crucial information for the system's decision-making process.Researchers, such as Jadhav et al., 40 have implemented intelligent headlight systems using the sensor-based intensity control approach.In their system, the LDR output voltage is fed to a transistor for signal amplification before being transmitted to a relay.The relay serves as an actuating device, facilitating the switching between high and low beams.Relays play a pivotal role in intelligent headlight beam intensity control systems, enabling the essential function of switching between different beam modes.In summary, the sensor-based intensity control approach, relying on LDR sensors and microcontrollers, remains a fundamental and widely utilized method in the development of intelligent headlight systems.The incorporation of distance measurement sensors and relays further enhances the capabilities of these systems in managing headlight beams based on environmental conditions and driving scenarios.
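As a rough sketch of the control logic described above, the following Python snippet mirrors the LDR-plus-microcontroller pipeline: read the ambient light level and the range to an oncoming vehicle, then decide which beam the relay should select. The thresholds, the simulated sensor traces, and the decision rule are hypothetical placeholders, not details taken from the cited designs.

```python
LIGHT_THRESHOLD = 300       # hypothetical LDR reading separating "dark" from "lit"
DISTANCE_THRESHOLD_M = 150  # hypothetical range at which an oncoming vehicle forces low beam

# Simulated sensor traces standing in for analog LDR and ultrasonic/radar reads.
ldr_trace = [520, 310, 180, 150, 160, 170]    # ambient light falling as night sets in
range_trace = [400, 380, 220, 120, 90, 300]   # oncoming vehicle approaches, then passes

def select_beam(ambient, distance_m):
    """High beam only when it is dark enough AND no oncoming vehicle is within range."""
    dark_enough = ambient < LIGHT_THRESHOLD
    oncoming_near = distance_m < DISTANCE_THRESHOLD_M
    return "HIGH" if dark_enough and not oncoming_near else "LOW"

for ambient, distance in zip(ldr_trace, range_trace):
    # On real hardware this decision would energise or de-energise the beam-select relay.
    print(f"LDR={ambient:4d}  range={distance:3d} m  ->  {select_beam(ambient, distance)} beam")
```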
In a related context, Ne´meth et al. 41 proposed a headlight intensity control approach based on dualpixel Active Pixel Sensor (APS) sensor architecture, specifically designed for vision-based speed measurement applications.This innovative approach utilized a novel double exposure method, integrating two types of imaging elements on the pixel level to generate two spatially and temporally coherent images.The primary sensor was dedicated to producing a high-quality image for vehicle identification, while the secondary sensor's output was employed to calculate speed estimates based on the intra-frame displacement of the vehicle's headlight.A scaling process was implemented to adjust the sensitivity of the secondary sensor, relying on photodiode parasitic capacitor discharge time.While existing intelligent headlight-enabled cars are often considered expensive and inaccessible to the average consumer, 42 addressed this concern by designing a lowcost intelligent headlight system for accident avoidance.Adopting the sensor-based headlight beam intensity control approach, the system employed various sensors, including an LDR sensor for measuring light intensity, a Doppler radar sensor for rain measurement, an optical fog sensor for fog detection, a Video Image Processor (VIP) sensor for vehicle identification, and an ultrasonic sensor for measuring the distance to oncoming vehicles.These sensors were integrated into an Arduino Uno (R3) microcontroller.The author successfully achieved a more cost-effective design compared to existing intelligent headlight systems in luxury cars such as BMW, Mercedes, and Audi.This example highlights the practicality of employing a sensor-based headlight beam intensity control approach not only for its technical advantages but also for its potential in making intelligent headlight technology more affordable and accessible to a broader range of vehicle manufacturers and consumers.
In a similar fashion Muhammad and Shahriar, 43 utilized the sensor-based intensity control approach for their headlight beam intensity control system, affirming the efficiency of this method.Their design was centered around an ambient light sensor (ALS) based on a phototransistor, employing the principle of pulse width modulation.They asserted that their proposed approach is a highly effective method for controlling the headlight beam intensity of a vehicle.Ensuring good visibility on the road is crucial for safe nighttime driving, and the sporadic use of high beams due to the fear of dazzling other drivers underscores the significance of automatic headlight control.Lo´pez et al. 44 addressed this challenge by using a novel image sensor suitable for driver assistance applications, overcoming limitations associated with camera-based approaches.These examples illustrate how various authors have employed the sensor-based intensity control approach in designing automotive intelligent headlights for nighttime driving environments.Despite being one of the oldest approaches, it remains a preferred choice in contemporary intelligent headlight system designs due to its simplicity in architectural construction, cost-effectiveness, and reliability.Sensors, as mechatronic components, offer widespread applications, are highly reliable, and contribute to cost reduction when integrated into systems.The enduring popularity of the sensor-based intensity control approach in modern intelligent headlight designs attests to its ease of use and continued effectiveness.
Overview of fuzzy-logic-based control approach
The fuzzy-logic-based headlight beam intensity control approach represents a newer control methodology following the sensor-based intensity control approach.This approach is conducive to development using software tools like MATLAB.It has been employed by intelligent headlight beam intensity control developers such as Kher and Bajaj. 45In their design, a fuzzy logic controller was utilized to adjust the intensity of the headlight beams.To enhance the effectiveness of the headlight, they proposed an automated fuzzy controller that optimizes illumination in a manner that minimizes glare for oncoming vehicle drivers.This approach leverages fuzzy logic, a mathematical framework that simulates human reasoning to make decisions based on imprecise or uncertain information.
The fuzzy-logic-based headlight beam intensity control approach, while sharing similarities with the sensor-based approach in its reliance on sensors for operation, differs in its mode of control.In the sensorbased intensity control approach, Arduino and transistors are central to control, whereas the fuzzy-based intensity control approach utilizes a fuzzy controller.The proponents of the fuzzy-based intensity control approach argue that a fuzzy controller is more reliable, providing more accurate output signals.To design an intelligent headlight system that dynamically controls the headlight beams without the driver's intervention Uma et al., 46 employed a fuzzy controller.Their proposed system dynamically varied the headlight beam width, angle, and intensity by considering various vehicular parameters such as steering position, inclination angle, and speed, along with ambient parameters like day/night cycle and glare from opposite headlights.They designed two types of Fuzzy Inference Systems (FIS): a Centralized FIS (CFIS) that acquired all sensing parameters for headlight illumination control, and a Decentralized FIS (DFIS) for each control parameter to reduce complexity and errors in the system.The versatility of fuzzy controllers extends beyond headlight control.In a study by Kayabasi et al. 47 fuzzy logic technology was employed to design a wiper system that adjusts wiper movement based on rain intensity, controls headlight brightness according to external darkness, and operates the air conditioner based on temperature values.The results indicated that fuzzy logic can efficiently control electric/electronic systems in vehicle applications, showcasing its viability for headlight beam intensity control.The fuzzy-logic-based approach proves to be a viable method for headlight beam intensity control, offering reliability, accuracy, and versatility in integration with other vehicle components.
Various authors have successfully employed fuzzy controllers for headlight beam intensity control, achieving positive outcomes.In one instance, an automatic wiper and headlight intensity control system utilized a fuzzy control algorithm to adjust the wiper speed based on rain intensity and change the headlight modes according to the light intensity from oncoming vehicles.The fuzzy controller in this design consisted of three components: Fuzzification, Fuzzy Logic Rule Base, and De-fuzzification.Fuzzification converted the physical values of the current process signal and error signal, the Fuzzy Logic Rule Base utilized a set of rules incorporating several variables, and De-fuzzification converted fuzzy terms created by the rule base into crisp terms or numerical values, as described by Myo Tun. 48The study concluded that the fuzzy controller excellently executed the parameters set for control.In another study, Lukacs et al. 49 presented adaptive front light system (AFS) control alternatives using fuzzy logic (types 1 and 2) to determine operating parameters, considering road conditions in the state of Sa˜o Paulo, Brazil.The fuzzy logic control technique, or modeling strategy, proved valuable when making multi-parameter decisions or decisions based on human knowledge.The authors concluded that their results demonstrated the potential of the proposed methodology and its suitability for headlight beam intensity control, contributing to safer nighttime driving.These examples reinforce the efficacy of fuzzy controllers in diverse applications for headlight beam intensity control, showcasing their adaptability and effectiveness in providing solutions for safer driving conditions during nighttime.
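To make the fuzzification, rule base, and defuzzification steps just described concrete, the sketch below implements a minimal single-input fuzzy controller from scratch in Python: it maps an oncoming-glare reading to a beam output level through triangular membership functions, three rules, and a weighted-average (centroid-style) defuzzification. The membership breakpoints, rules, and output values are illustrative assumptions, not the rule bases of the cited systems.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_beam_level(glare):
    """Map oncoming-glare intensity (0-100) to a beam output level (0-100)
    using three rules: low glare -> bright, medium -> dimmed, high -> low beam."""
    # Fuzzification: degrees of membership of the crisp glare reading.
    low    = tri(glare, -1, 0, 50)
    medium = tri(glare, 20, 50, 80)
    high   = tri(glare, 50, 100, 101)
    # Rule base: each rule's firing strength weights a representative output value.
    rules = [(low, 90.0), (medium, 50.0), (high, 15.0)]   # consequents are illustrative
    # Defuzzification: weighted average of the consequents.
    num = sum(strength * out for strength, out in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0

for g in (5, 40, 75, 95):
    print(f"glare={g:3d} -> beam level {fuzzy_beam_level(g):.1f}%")
```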
In an effort to eliminate accidents caused by temporary driver blindness, a fuzzy controller was designed based on data captured using a Wireless Sensor Network (WSN).The low latency of this system allows for quicker adjustment of headlight intensity to minimize temporary blindness.Multiple attributes were considered in the design of the fuzzy controller, and the results demonstrated that the fuzzy controller's output is nearly instantaneous, providing continuous control signals, as demonstrated by Nutt et al. 50Similarly, Sinitsina and Yaroslavtsev 51 designed a Fuzzy Logic system for in-vehicle control, adjusting parameters for various driving behaviors such as normal driving, acceleration, deceleration, lane changes, zigzag motion, and approaching a car in front.Fuzzy rules associated with these behaviors indicated the level of risk, and experimental results showed an average detection ratio of 95%, suggesting the potential for improving traffic safety.A control system mechanism based on fuzzy logic, incorporating reasonable control rules, was presented by Butt et al. 52 Their aim was to explore the role of genetic algorithms in enhancing the efficiency of a fuzzy logic-based rear-end collision avoidance scheme.Results from the control mechanism indicated that the fuzzy controller is reliable and has diverse applications, making it a probable candidate for use by more designers.The versatility of fuzzy controllers extends to controlling electronic systems for object detection, as demonstrated by Basu et al. 53 They designed a nighttime vehicle detection system for adaptive headlight beams and collision avoidance using fuzzy logic-based control for vehicle detection.The system incorporated a novel segmentation technique based on adaptive fuzzy logic, a statistical mean intensity measure, a ''confirmation-elimination'' based classification algorithm, and a mutually independent feature-based object detection algorithm based on correlation matrix generation for identified light objects in the scene.
The literature discussed highlights the significance and versatility of fuzzy controllers in various engineering applications.It becomes evident that the fuzzy controller can be applied across diverse fields within engineering to control systems effectively.Based on the presented information, it can be confidently stated that the fuzzy-based intensity control approach represents an improvement over the sensor-based intensity control approach.A clear connection has been established, indicating a strong linkage between the two approaches.The primary distinction lies in their modes of control, with the fuzzy-based approach offering enhanced control capabilities.
Overview of PWM-based control approach
Pulse-width modulation (PWM) is a control technique commonly used in motor control, where energy is delivered through a series of pulses rather than a continuously varying (analog) signal.The controller regulates energy flow to the motor shaft by adjusting the pulse width, either increasing or decreasing it.The motor's inherent inductance serves as a filter, storing energy during the cycle and releasing it at a rate corresponding to the input or reference signal.In essence, energy flows into the load not precisely at the switching frequency but at the reference frequency, as discussed by Tripathy, 54 Khachane and Shrivastav. 55Pulse-width modulation is also employed in headlight beam intensity control technologies.This control approach provides a convenient means of managing large components.It transforms a digital signal into an analog signal by adjusting the timing of how long it stays ON and OFF.In the context of pulse width modulation intensity control, the term commonly used is ''duty cycle.''This term refers to the percentage or ratio of how long the system stays ON compared to when it turns OFF.
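The duty-cycle idea can be illustrated with a few lines of Python: the average drive delivered to the lamp scales with the fraction of each period the signal stays ON. The supply voltage and the duty-cycle values chosen for the two beam modes are illustrative assumptions, not parameters from the cited designs.

```python
def pwm_average_output(duty_cycle, v_supply=12.0):
    """Mean voltage seen by the lamp for a given duty cycle (0.0-1.0).
    With PWM the load receives the full supply voltage for duty_cycle of each
    period and nothing for the rest, so the average scales linearly."""
    assert 0.0 <= duty_cycle <= 1.0
    return duty_cycle * v_supply

# Switching between beam modes then amounts to selecting a duty cycle:
LOW_BEAM_DUTY, HIGH_BEAM_DUTY = 0.35, 0.95          # illustrative values
for name, d in (("low beam", LOW_BEAM_DUTY), ("high beam", HIGH_BEAM_DUTY)):
    print(f"{name}: duty cycle {d:.0%} -> mean drive {pwm_average_output(d):.1f} V")
```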
The PWM approach has been employed by designers of intelligent headlight systems to control headlight beam intensities.In a study by Umar et al. 56 an LEDbased intelligent headlight was presented.The authors designed a boost-type DC-DC automatic switching converter with a pulse width modulation (PWM) dimming controller.They utilized the MATLAB Simulink simulation package to ensure that their system's performance met the desired parameters.Similarly, in a work by Gacio et al. 57 a new approach was presented with the capability of pulse width modulation (PWM) dimming operation added to the high-power factor-integrated buck-flyback converter (IBFC).This converter had been developed in previous works for LED lighting applications.The authors introduced the two main dimming techniques, namely, analog dimming and PWM dimming.They discussed the three main PWM dimming schemes: enable dimming, series dimming, and parallel dimming.Following this, the IBFC topology was tested for both analog and enable dimming.The authors then introduced a newly proposed technique: the high-frequency series PWM dimming technique.This technique overcame challenges faced when developing PWM dimming capabilities in low slew-rate constant-current fixed-frequency-controlled converters and offered all the advantages of PWM dimming over analog dimming while maintaining good efficiency.
Additionally, a comparison of intelligent and advanced speed control methods based on the PWM technique and PI controller to achieve maximum intensity control efficiency was presented.The simulation of the design was carried out in a MATLAB environment, and results were investigated for speed control of headlight beam intensity without any controller and with a PI controller under full load conditions.The field test showed that the PWM headlight intensity control technique is the fastest compared to other approaches, as indicated by studies by Singh et al., 58 Tripathi et al. 59 Furthermore, a driving circuitry system for high-resolution, pixelated-LED automotive headlights was introduced by Jeon et al. 60 The system comprises an array of pixel drivers and a row/column driver suitable for an active-matrix array configuration with individual dimming control capabilities on each pixelated LED.An asynchronous serial communication protocol was introduced to minimize the number of data transmission interface signals between the row/column driver and pixel drivers.The proposed pixel driver was designed to drive each pixelated LED with constant current and pulse width modulation (PWM), containing a memory cell for dimming data and a sample-andhold driver stage to minimize static power consumption of the pixel driver.
The study by Beguni et al. 61 focuses on enhancing the Visible Light Communication (VLC) system through improvements in the VLC transmitter.The concept relies on Light-Emitting Diode (LED) current overdriving and a modified Variable Pulse Position Modulation (VPPM).LED current overdriving aims to provide the VLC receiver with higher instantaneous received optical power and an improved Signal-to-Noise Ratio (SNR).Simultaneously, the use of VPPM ensures that the VLC transmitter adheres to eye regulation norms and safeguards the LED against overheating.The experimental testing conducted in laboratory conditions affirmed the viability of the concept, revealing an increase in communication range by up to 70% while maintaining the same overall optical irradiance at the VLC transmitter level.This innovative approach holds promise for achieving vehicular VLC ranges that meet the requirements of communication-based vehicle safety applications.The utilization of Visible Light Communication in controlling headlight beam intensity represents a novel and actively researched area, offering potential advancements in communication technology.
Indeed, pulse width modulation (PWM) is a widely used and cost-effective approach for controlling components in various systems, including intelligent headlight beam intensity control systems.Its convenience in controlling large components and its applicability to ON/OFF switching, such as in the case of switching between low and high beams, makes it a suitable choice in certain contexts.The use of MATLAB Simulink software for program generation further enhances its accessibility, especially in academic or theoretical applications.However, there might be limitations when it comes to the practical implementation of PWM for real headlight beam control.Real-world factors, such as the complex dynamics of driving environments, variations in road conditions, and the need for rapid and precise adjustments in response to dynamic situations, could pose challenges to the straightforward application of PWM in intelligent headlight systems.The choice of headlight beam intensity control approach often depends on a balance between theoretical effectiveness, practical feasibility, and cost considerations.Researchers and developers need to carefully evaluate these factors to ensure that the chosen approach aligns with the specific requirements and constraints of realworld driving scenarios.
Overview of ML-based control approach
Machine learning, a technique rooted in artificial intelligence and computer science, revolves around utilizing data and algorithms to emulate human learning processes. In the contemporary era, machine learning and artificial intelligence stand as fundamental technologies driving the Fourth Industrial Revolution (4IR or Industry 4.0). This approach has permeated the automotive industry, with developers of intelligent components, such as intelligent headlights, leveraging machine learning to control beam intensities. The machine-learning-based intensity control approach represents one of the latest advancements in headlight beam control, proving to be a game-changer widely adopted by numerous designers and major automotive manufacturing companies. For instance, a dynamic headlight model was introduced by Yaşar, Şahin and Akar,62 employing camera-supported machine-learning algorithms to enhance drivers' vision during nighttime driving. This design addresses various issues, including establishing a lighting field supported by image processing programmed with machine learning. It incorporates dynamic adjustments of the high beam of the headlights' LED cells in response to oncoming vehicles, a traffic-sign recognition system, a lane-keeping system, and automatic adjustments of headlight angles, all achieved through the application of machine learning technology.
The practicality of the machine-learning-based approach in intelligent headlight design is widely acknowledged, with many current studies adopting this methodology.In the work by Astuti et al. 63 an innovative intelligent headlight system was devised using a unique machine learning-based method known as voice-based recognition.This system confronted the driver through voice-based interactions, allowing the driver to utter a specific word recognized by the system.This word served as input to the voice-based recognition system, which then determined whether the signal indicated a ''high beam'' or a ''low beam,'' consequently controlling the car headlights.However, it is crucial to note that the machine-learning-based intensity control approach heavily relies on clear photographs from cameras for accurate performance.Any blurriness in the input images can result in mismatches when compared to preprogrammed images, posing a significant limitation to this approach.Additionally Leung et al., 64 highlighted certain challenges associated with the machinelearning-based headlight beam intensity control approach and object detection models, particularly in nighttime and low illumination conditions.They identified issues related to dataset collection and labeling conventions.Public datasets used for object detection are often captured in well-lit conditions, and labeling conventions typically focus on clear objects, neglecting blurry and occluded ones.Consequently, the performance of traditional vehicle detection techniques is constrained in nighttime environments lacking sufficient illumination.This limitation can impact the efficiency and effectiveness of machine learning technology in applications like intelligent headlight systems.
The machine-learning-based approach can be categorized into various types, including support vector, AdaBoost, and others, depending on the preferences and convenience of the designer.In Zhu et al. 65 the authors introduced a novel algorithm that directly extends the machine-learning technique known as the AdaBoost algorithm into the multi-class case without transforming it into multiple two-class problems.This multi-class AdaBoost algorithm functions as a forward stagewise adaptive modeling algorithm, minimizing a novel exponential loss for multi-class classification.The authors demonstrated that this exponential loss belongs to a class of Fisher-consistent loss functions tailored for multi-class classification.Notably, their algorithm is straightforward to implement and exhibits high competitiveness in terms of misclassification error rates.Similarly, in their pursuit of a machinelearning-based approach for vehicle detection and headlight beam intensity control, authors in Moghimi et al. 66 emphasized that vehicle detection is a technology aimed at locating and representing the size of vehicles in digital images.This technology is crucial for detecting vehicles in complex environments with other objects like trees and buildings, playing a significant role in various computer vision applications such as vehicle tracking and traffic scene analysis.The authors proposed using the Viola-Jones boosting technique for vehicle detection and tested their system in real surveillance video scenes under different lighting conditions.Experimental results demonstrated that their vehicle detection method outperformed previous techniques in terms of accuracy (about 94%), completeness (92%), and overall quality (87%).They concluded that their approach is robust and efficient for detecting vehicles in surveillance videos, particularly for applications like headlight beam intensity control.
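As a toy illustration of the boosting idea discussed above, the snippet below trains scikit-learn's AdaBoost classifier on synthetic two-feature samples standing in for "headlight blob" versus "background light" descriptors. The features, labels, and sample sizes are fabricated for illustration and bear no relation to the datasets or detectors used in the cited studies.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic descriptors: [blob brightness, blob circularity]; 1 = headlight, 0 = other light.
headlights = rng.normal(loc=[0.85, 0.80], scale=0.08, size=(200, 2))
other      = rng.normal(loc=[0.55, 0.40], scale=0.15, size=(200, 2))
X = np.vstack([headlights, other])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))

# In an intelligent headlight pipeline, a positive prediction on a detected blob
# would trigger dipping from high to low beam.
print("dip high beam?", bool(clf.predict([[0.9, 0.82]])[0]))
```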
In a recent study by Bell et al. 67 a machine-learningbased technology was implemented, specifically focusing on a real-time vehicle detection algorithm designed for nighttime driving scenarios.This system demonstrated the capability to identify vehicles in images by analyzing intricate light patterns, forming the foundation for headlight beam intensity control.To achieve this, the authors devised a novel machine learning framework based on a grid of foveal classifiers.This machine learning technology represents the latest advancement in intelligent headlight beam intensity control designs, and numerous vehicle manufacturers have, and continue to, rely on this technology for their intelligent headlight designs.While the technology has proven its superiority over other alternatives, its current application is largely confined to high-end vehicles.However, it is highly plausible that, with the ongoing technological advancements, the cost barrier associated with vehicles equipped with machine-learning-based intelligent headlights will diminish in the near future.
Outlook of intelligent headlight technology
Having traced the evolution of intelligent headlight beam intensity control approaches, from the sensor-based method to the current ML-based approach, it is evident that intelligent headlight technology is advancing rapidly. A comprehensive review of the four most commonly used headlight beam intensity control approaches leads the authors to confidently project that artificial intelligence (AI)-based headlight beam intensity control is a plausible future development. Presently, weak AI technologies, such as machine learning, have already found their place in intelligent headlight system designs, standing as the state-of-the-art technology in this domain. While acknowledging the strengths and limitations of each approach, the authors emphasize the potential for further enhancing the sensor-based headlight beam intensity control method. Sensors, known for their reliability and resilience in harsh environmental conditions, have the capacity to withstand adverse weather, a common challenge for many intelligent headlight technologies. The authors contend that with technological improvements, the sensor-based approach could become as efficient as ML-based technology. Looking ahead, the authors express optimism about the future of automotive intelligent headlight technology, foreseeing the gradual emergence of strong AI, also known as artificial general intelligence, in headlight design. Artificial general intelligence, akin to a human's problem-solving ability, has the potential to enhance the efficiency and reliability of headlight systems. The authors anticipate the exploration of a research direction leading to "robotic eye" headlights, that is, headlights capable of independently reading the road environment and making decisions, mimicking human eye actions. This technology could significantly contribute to addressing, if not eliminating, road accidents caused by improper conventional headlight use. In conclusion, the authors anticipate that research institutions and organizations will explore the direction of "robotic eye" headlights to make this vision a reality. The subsequent part of the study involves a utilization rate survey to determine the most widely used approach from 2018 to 2022. Figure 4 compares the utilization rates of the sensor-based, machine-learning-based, pulse-width modulation, and fuzzy-logic based intensity control methods. The analysis reveals that the sensor-based intensity control approach is currently the most prevalent. However, it is noteworthy that the machine-learning-based intensity control approach is poised to become more widely accepted, provided the challenges associated with sensor-based control are promptly addressed. Furthermore, apart from the machine-learning-based approach, which does not rely on sensors in its intensity control strategy, both the fuzzy logic and pulse width modulation methods depend heavily on the sensor-based approach. These two control methods function in direct correlation with the sensor-based intensity control approach, utilizing one or more sensors. This establishes the sensor-based approach as a fundamental influence on nearly all control methods, with the exception of the machine-learning-based approach, which relies more on cameras for environmental data collection. In summary, the data depicted in Figure 4 suggest that the sensor-based and machine-learning-based headlight beam intensity control approaches are the most acknowledged and widely employed methods among developers of intelligent headlights.
Headlight intensity distribution patterns of the high and low beams
To develop an automatic control system for headlight beam intensities, a comprehensive understanding of the pattern formation of these beams and their impact on road users during nighttime driving is crucial. Ensuring that vehicle headlights provide effective road illumination without causing glare for other road users necessitates adherence to specific requirements in the design of headlight reflective devices and associated equipment. The significance of both low and high beams in vehicle headlights cannot be overstated, as their functions differ significantly. They collectively contribute to enhancing road safety, providing comfort, and ensuring optimal road illumination for drivers and other road users in adverse nighttime and weather conditions. High beams are utilized for long-distance visibility in the absence of oncoming vehicles, while low beams, featuring an asymmetrical pattern, offer maximum forward and lateral illumination. Importantly, they minimize glare directed toward oncoming vehicles and road users. 90

To enhance the reliability of headlight modeling, researchers 90,91 integrated a market-weighted headlight database with the headlight beam pattern model and a mathematical model. The lights in the market-weighted database were randomly selected from the top 90% of USA vehicle sales in 2010, with a minimum of 25 samples chosen. Compliance with the requirements set by the Economic Commission for Europe (ECE) and the Federal Motor Vehicle Safety Standards (FMVSS) in the United States is mandated for vehicles. For increased clarity, the iso-candela and iso-illuminance diagrams illustrating the road surface from a pair of high-beam headlights and a pair of low-beam headlights are presented in Figures 5 and 6, respectively. These diagrams are based on luminous intensities at the 50th percentile, with specified light mounting heights (0.62 m for high beams, 0.66 m for low beams) and light separations (1.12 m for high beams, 1.20 m for low beams). 93-96 Similarly, the headlight radiation pattern curves of the high and low beams at the 75th percentile of headlight luminous intensities were presented by the authors in 91,93-96 using iso-candela and iso-illuminance diagrams, as shown in Figures 7 and 8, respectively.
The headlight illumination E on the road surface, as shown in Figure 9, is given as E = I(a, b) dω/dA = I(a, b) cos(u)/r², where dA is the small area of the road surface on which the light falls, dω is the solid angle subtended by dA at the source, I(a, b) is the luminous intensity (cd), a and b are the horizontal and vertical angles relative to the headlight axis, respectively, r is the distance between the light source and the small area dA, and u is the angle between the road surface normal and the incident direction.

The authors in Memedi et al. 98 presented the model-fitting process for integrating the empirical data into a simulated framework. They used the non-linear least squares method and the surface-fitting tool from the curve-fitting toolbox of MATLAB to derive the following equations. Characterizing the path loss between two vehicles is first based on the distance between them, given by equation (2); from the empirical data they derived the parameter values a = 695.3, b = 4.949, and g = 1. Secondly, they considered the angle between the vehicles, which is accounted for by equation (3); the corresponding parameters obtained from the empirical data were d = −747.3, ε = 63.13, and v = 173. The fitting parameters for equations (2) and (3) were determined empirically, and they achieved a goodness of fit characterized by R² = 0.8703. They finally computed the Received Signal Strength (RSS) for given distances and angles between transmitter and receiver as equation (4). Equations (2), (3), and (4) were used to plot the radiation pattern of the headlight as shown in Figure 10.
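As a concrete illustration of the illumination relation above, the following sketch evaluates E = I(a, b) cos(u)/r² at points on the road for a headlight mounted at a given height; the Gaussian-shaped intensity function is a placeholder assumption, not a measured headlight distribution.

```python
import numpy as np

def road_illuminance(x, y, lamp_height, intensity_fn):
    """Illuminance E (lux) at road point (x, y) from a headlight at height
    `lamp_height` above the origin, using E = I(alpha, beta) * cos(u) / r^2.

    x: forward distance (m), y: lateral offset (m); the road is the z = 0 plane.
    intensity_fn(alpha, beta) returns luminous intensity in candela for the
    horizontal angle alpha and vertical angle beta relative to the lamp axis.
    """
    r = np.sqrt(x**2 + y**2 + lamp_height**2)          # source-to-point distance
    alpha = np.arctan2(y, x)                           # horizontal angle
    beta = np.arctan2(-lamp_height, np.hypot(x, y))    # vertical (downward) angle
    cos_u = lamp_height / r                            # cosine of angle to road normal
    return intensity_fn(alpha, beta) * cos_u / r**2

# Placeholder intensity distribution (assumed, for illustration only):
# a beam peaking on-axis and falling off with angle.
def demo_intensity(alpha, beta):
    return 30000.0 * np.exp(-(alpha**2) / 0.02 - (beta + 0.01)**2 / 0.01)

for d in (10, 25, 50, 100):
    print(d, round(float(road_illuminance(d, 0.0, 0.66, demo_intensity)), 3))
```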
Comparison
To enhance comprehension and foster a deeper appreciation of the headlight model, radiation distribution pattern curves at the 50th and 75th percentiles were presented in Figures 5 to 8, respectively. These figures offer a comparative analysis of the two types of headlight beam intensity pattern curves using the iso-candela and iso-illuminance models. The iso-candela luminous intensity pattern curves plot the vertical angle of illumination against the horizontal angle of illumination, providing a clear distinction between the headlight high and low beams. Upon comparing the iso-candela pattern curves of the high beam and low beam, a notable observation is that the low beam produces asymmetric intensity pattern curves concentrated toward the left. This design is intended to provide short-distance illumination and minimize glare for other road users. The low beam, by its design, does not project over long distances before reaching the road surface. In contrast, the high beam generates symmetrical intensity pattern curves with the potential to travel longer distances than the low beam. The inherent design of the high beam is oriented toward extended visibility. This comparative analysis of the pattern curves contributes to a comprehensive understanding of how the two types of beams function differently in terms of their illumination characteristics.

Additionally, in the comparison of iso-illuminance between the low beam and high beam, it was observed that optical power was distributed on both sides of the light source along the vertical axis, while along the horizontal axis the optical power extended lengthwise. Specifically, in the case of the low beam, the electrical Received Signal Strength (RSS) close to the light source was measured at 100 dBm, whereas for the high beam it was recorded as 50 dBm. This indicates that the RSS of the low beam near the light source is double that of the high beam, as depicted in Figures 5 to 8 for the radiation pattern intensity distribution models at both the 50th and 75th percentiles. This finding suggests that the low beam offers superior front illumination compared to the high beam during nighttime driving. Furthermore, the headlight high beam demonstrates the capability to provide optical power over a longer lengthwise distance, up to 200 m, as opposed to the low beam, which can only provide a 100-m lengthwise optical power distance. Consequently, it is reasonable to conclude that the high and low beams of the headlight complement each other by compensating for their respective deficiencies. While the high beam can travel twice the lengthwise optical power distance of the low beam, the low beam illuminates the area in front of the vehicle twice as effectively as the high beam. It is noteworthy that the magnitude of the RSS decreases with increasing optical power length.
The optical power is inversely related to the focal length, representing the degree to which optical systems converge or diverge light; higher optical power corresponds to a shorter focal length. In the iso-illuminance model of the high beam, contour lines were delineated as the RSS decreased from 50 dBm to 30 dBm, and this pattern repeated as it decreased further to 20, 10, 5, 3, and 2 dBm. The final contour line was drawn at 1 dBm, corresponding to a lengthwise optical power distance of 200 m, as illustrated in Figures 5(b) and 7(b). Similarly, in the case of the low beam, contour lines were drawn as the RSS decreased from 100 dBm to 50 dBm. Additional contour lines were added at 30, 20, 10, 5, and 2 dBm, and the final 1-dBm contour line was drawn at the lengthwise optical power distance of 100 m, as depicted in Figures 6(b) and 8(b). Notably, the low beam exhibits a higher frequency of RSS contour lines compared to the high beam. This analysis provides insight into the varying optical power characteristics of both high and low beams, shedding light on their focal lengths and the resulting patterns of light convergence or divergence in the iso-illuminance model.
As shown in Figure 10, Memedi et al. 98 introduced a novel headlight model, deviating from the conventional use of contour lines to represent RSS values. Instead, the authors employed colors to illustrate different RSS values. The left plot in Figure 10 depicts RSS values calculated using the derived analytical model. For the sake of comparison, the authors included the RSS values obtained from empirical data in the middle plot. The conclusion drawn was that equation (4) closely aligns with the empirical data, especially in capturing the asymmetrical angular behavior characterized by a weaker concentration of power on the left side of the headlight. This finding is consistent with the observations shared by other authors, such as 91-96, whose models also yielded similar results.
Discussions
This paper conducts a comprehensive review of automotive intelligent headlight design, focusing on various methods employed by different developers to automatically control headlight beam intensities, thereby preventing dazzling of other road users. The study identifies and assesses four intelligent headlight beam intensity control approaches: sensor-based, machine-learning-based, pulse width modulation-based, and fuzzy-logic-based. Table 1 succinctly outlines the strengths and limitations of these approaches. Figure 4 serves as a comparative tool for these approaches based on their utilization rates. Notably, the sensor-based approach emerges as the most utilized. Furthermore, the study reveals a direct association between the fuzzy logic and pulse width modulation approaches and the sensor-based method, as they rely on various forms of sensors for their functionality. The machine-learning-based approach, leveraging weak AI technology, has gained prominence in the automotive intelligent headlight system. Using cameras to capture road information, it employs a multi-classifier for detecting the high beam of oncoming vehicles. However, its reliance on image quality raises concerns, especially during adverse weather conditions, compromising its effectiveness.

To deepen our understanding of the intensity distribution patterns of headlight high and low beams, the paper reviews headlight beam intensity distribution pattern models, comparing curves using the iso-candela and iso-illuminance models. The high beam exhibits symmetrical pattern curves with wide and long-distance intensity distribution patterns of up to 200 m. This underscores the necessity for an intelligent headlight beam intensity control system to manage the high beam automatically, given the occasional limitations in driver control. Conversely, the low beam features asymmetrical radiation pattern curves designed to prevent glare on the highway during nighttime driving. Providing high RSS at a short lengthwise optical power distance of 100 m, the low beam addresses visibility challenges and contributes to reducing nighttime accidents caused by poor driver visibility, particularly glare from opposing vehicles' high beams. The paper emphasizes the potential for technological advancements to mitigate these avoidable situations.
Conclusion
This paper provides a comprehensive review of intelligent headlight beam intensity control approaches and headlight radiation pattern distribution models, aiming to clarify the confusion surrounding these topics. It clearly outlines the four practical intelligent headlight beam intensity control approaches: sensor-based, machine-learning-based, pulse width modulation-based, and fuzzy-logic-based. The strengths and limitations of each approach are succinctly summarized in Table 1. A comparative analysis of the four approaches based on their utilization rates reveals the sensor-based approach as the most preferred, owing to its simple architecture and cost-effectiveness. The machine-learning-based approach emerges as the most promising technology for intelligent headlight beam intensity control and is the second most preferred among developers. It is important to note that these conclusions are drawn from a random sample of 44 authors in the field spanning 2018 to 2022.

An intriguing aspect explored in this paper is the headlight radiation pattern distribution model. The review enhances our understanding of the characteristics of the high and low beams. The deliberately asymmetrical intensity distribution pattern of the low beam aims to improve front-of-vehicle illumination and reduce glare for drivers of opposing vehicles. The low beam achieves a high Received Signal Strength (RSS) at a shorter lengthwise optical power distance. The high beam, on the other hand, produces low RSS near the light source but travels twice the distance of the low beam, making it suitable for longer-distance illumination. However, its symmetrical intensity distribution pattern raises concerns about potential glare for other road users if not managed properly. Notably, research interest in intelligent headlight design has increased substantially in the past half-decade, resulting in diverse intensity control approaches. Anticipating continued growth in this area, the authors suggest that the future direction for researchers should focus on robotic eye headlight technology. This envisioned strong AI technology would mimic the human eye, allowing the headlight to dynamically adjust between high and low beams based on road conditions captured by the robotic eye.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: We are immensely thankful to the funding agencies.
Figure 2. Structure of the review paper.
Figure 5. Iso-candela and iso-illuminance diagrams of the road surface from a pair of high beams: (a) iso-candela diagram (cd) and (b) iso-illuminance diagram (vertical lx). Source: Neale et al. 95
Figure 6. Iso-candela and iso-illuminance diagrams of the road surface from a pair of low beams: (a) iso-candela diagram (cd) and (b) iso-illuminance diagram (vertical lx).
Figure 7. Iso-candela and iso-illuminance diagrams of the road surface from a pair of high beams: (a) iso-candela diagram (cd) and (b) iso-illuminance diagram (vertical lx). Source: Luo et al. 97
Figure 8. Iso-candela and iso-illuminance diagrams of the road surface from a pair of low beams: (a) iso-candela diagram (cd) and (b) iso-illuminance diagram (vertical lx).
Figure 10. Comparison of the derived model, the empirical data, and the combination of both. Source: Memedi et al. 98
Table 1. Strengths and limitations of headlight beam intensity control approaches.
"Engineering",
"Computer Science"
] |
Symmetrically dispersed spectroscopic single-molecule localization microscopy
Spectroscopic single-molecule localization microscopy (sSMLM) was used to achieve simultaneous imaging and spectral analysis of single molecules for the first time. Current sSMLM fundamentally suffers from a reduced photon budget because the photons from individual stochastic emissions are divided into spatial and spectral channels. Therefore, both spatial localization and spectral analysis only use a portion of the total photons, leading to reduced precisions in both channels. To improve the spatial and spectral precisions, we present symmetrically dispersed sSMLM, or SDsSMLM, to fully utilize all photons from individual stochastic emissions in both spatial and spectral channels. SDsSMLM achieved 10-nm spatial and 0.8-nm spectral precisions at a total photon budget of 1000. Compared with the existing sSMLM using a 1:3 splitting ratio between spatial and spectral channels, SDsSMLM improved the spatial and spectral precisions by 42% and 10%, respectively, under the same photon budget. We also demonstrated multicolour imaging of fixed cells and three-dimensional single-particle tracking using SDsSMLM. SDsSMLM enables more precise spectroscopic single-molecule analysis in broader cell biology and material science applications.
Introduction
The ability of spectroscopic single-molecule localization microscopy (sSMLM) to capture the spectroscopic signatures of individual molecules along with their spatial distribution allows the observation of subcellular structures and dynamics at the nanoscale. As a result, sSMLM has shown great potential in understanding fundamental biomolecular processes in cell biology and material science 1-7. It also enables the characterization of nanoparticle properties based on the emission spectrum at the single-particle level 8-10. Similar to other localization-based super-resolution techniques, such as stochastic optical reconstruction microscopy (STORM) and point accumulation for imaging in nanoscale topography (PAINT), the localization precision of sSMLM is fundamentally limited by the number of collected photons per emitter 11. However, sSMLM suffers from further photon budget constraints since the collected photons of each molecule need to be divided into two separate channels to simultaneously capture the spatial and spectral information 5-10. Thus, the spatial localization precision of sSMLM also depends on the splitting ratio between the spatial and spectral channels and is typically limited to 15-30 nm in cell imaging 2,3,5,6. Although a dual-objective sSMLM design was previously demonstrated with improved spatial localization precision, it imposes a constraint on live-cell imaging and adds complexity to system alignment 1. The splitting of photons into two channels in sSMLM forces an inherent trade-off between the spatial and spectral localization precisions 5. Currently, a method to utilize the full photon budget to maximize both the spatial and spectral localization precisions in sSMLM is lacking.
To overcome this inherent trade-off, we developed symmetrically dispersed sSMLM, or SDsSMLM, which has two symmetrically dispersed spectral channels instead of one spatial channel and one spectral channel. SDsSMLM fully utilizes all collected photons for both spatial localization and spectral analysis. We showed improvements in the spatial and spectral localization precisions via numerical simulation and validated them by imaging fluorescent nanospheres and quantum dots (QDs). We further demonstrated multicolour imaging of subcellular structures and three-dimensional (3D) single-particle tracking (SPT) capabilities.
SDsSMLM
The concept of SDsSMLM is illustrated in Fig. 1. SDsSMLM is based on a conventional single-molecule localization microscopy (SMLM) system with a grating-based spectrometer (details are described in the "Materials and methods" section). In the emission path, the fluorescence light is confined by a slit at the intermediate image plane and symmetrically dispersed into the −1st and 1st orders at an equal splitting ratio by a transmission grating (Fig. 1a). Then, these dispersed fluorescence emissions are captured by an electron-multiplying charge-coupled device (EMCCD) camera to form two symmetrical spectral images after passing through relay optics. In addition, Fig. 1b shows the layout of the SDsSMLM spectrometer in Zemax based on the optical components and dimensions used in our studies.
While existing sSMLM simultaneously captures spatial (0th order) and spectral (1st order) images, SDsSMLM captures only two spectral images (−1st and 1st orders, Fig. 1c, d). The two spectral images of a particular single molecule emission are mirror images of each other with respect to the true location of the molecule. Therefore, we can localize single molecules by identifying the middle points (black plus symbols in Fig. 1e) between the two symmetrically dispersed spectral images. This symmetry-middle point relationship holds true for all molecules regardless of their emission spectra and minute spectral variations even among the same species of molecules. A virtual spatial image can be generated by identifying all the middle points (Fig. 1e). This virtual spatial image utilizes all the detected photons in each EMCCD frame, in contrast to the portion of photons used in existing sSMLM. It should also be noted that the virtual spatial image is not affected by the spectral heterogeneity of individual molecules, which is cancelled out through the symmetry-middle point relationship.
In SMLM, the localization position of individual molecules in the spatial image is estimated with limited certainty 12. When the localization position is repeatedly estimated, the spatial localization precision (referred to as the spatial precision) is described by the standard deviation of the distribution of the estimated localization positions. Similarly, in SDsSMLM, we estimate the localization positions (x−1, y−1) and (x+1, y+1) from the −1st order and 1st order spectral images (PSFx−1y−1 and PSFx+1y+1 in Fig. 1c, d). Then, we determine the localization position (x0, y0) in the virtual spatial image (PSFx0y0 in Fig. 1e) using (x−1, y−1) and (x+1, y+1), as shown in Fig. 1c-e. Accordingly, the spatial precision in SDsSMLM is described by the standard deviation of the distribution of the estimated (x0, y0) in the virtual spatial image (PSFx0y0).
In addition, from the two spectral images (PSFx−1y−1 and PSFx+1y+1), we generate new spectral images (PSFλy−1 and PSFλy+1) based on spectral calibration (described in the "Materials and methods" section). Then, we integrate them along the y-axis and extract spectral centroids (λSC) to represent the emission spectra of individual molecules. We calculate λSC as λSC = Σλ λ·I(λ) / Σλ I(λ), where λ is the emission wavelength and I(λ) is the spectral intensity at λ 13. Accordingly, the spectral localization precision (referred to as the spectral precision) is described by the standard deviation of the spectral centroid distribution.
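A minimal sketch of the spectral-centroid calculation described above is given below; the Gaussian test spectrum is only a placeholder.

```python
import numpy as np

def spectral_centroid(wavelengths, intensities):
    """Spectral centroid: lambda_SC = sum(lambda * I(lambda)) / sum(I(lambda))."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    return np.sum(wavelengths * intensities) / np.sum(intensities)

# Example: a Gaussian-shaped emission spectrum centred near 696 nm.
lam = np.arange(650, 751)                        # spectral window (nm)
spec = np.exp(-(lam - 696.0)**2 / (2 * 15**2))   # simulated intensity I(lambda)
print(spectral_centroid(lam, spec))              # ~696 nm
```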
Specifically, to generate the virtual image, we first localize the individual molecules in the two spectral images (PSFx−1y−1 and PSFx+1y+1) along the x-axis using Gaussian fitting based on a maximum-likelihood estimator (MLE) 11,14,15. Then, we obtain the two localization positions x−1 and x+1, which are symmetrically distributed with respect to the true location of the molecule. Therefore, we can determine the spatial location x0 in the virtual image by calculating the mean of x−1 and x+1. In addition, we localize the individual molecules in the two spectral images (PSFx−1y−1 and PSFx+1y+1) along the y-axis, generating two localization positions y−1 and y+1. These localization positions share the same location of the molecule along the y-axis. Hence, we can determine the spatial location y0 in the virtual image by calculating the mean of y−1 and y+1.
We can also perform spectral analysis of individual molecules using all the detected photons. We define the distance between x−1 in Fig. 1c and x+1 in Fig. 1d as the spectral shift distance (SSD) 10. Individual molecules with longer emission wavelengths (the red plus symbols in Fig. 1c, d) have larger SSD values than molecules with shorter emission wavelengths (the green and blue plus symbols in Fig. 1c, d). Therefore, we can distinguish individual molecules based on their distinctive SSDs. To obtain the emission spectra of individual molecules, we combine photons from the two spectral images (PSFx−1y−1 and PSFx+1y+1) with respect to their spatial locations (x0, y0) before spectral fitting, fully utilizing all the collected photons for spectral analysis.
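The following sketch illustrates how the two mirror-image localizations of one emission yield both the virtual spatial location (their midpoint) and the spectral shift distance (their separation); the pixel coordinates are hypothetical.

```python
def virtual_localization(xm1, ym1, xp1, yp1):
    """Combine the -1st and +1st order localizations of one emission.

    Returns the virtual spatial location (x0, y0) as the midpoint of the two
    mirror-image localizations, and the spectral shift distance SSD = x+1 - x-1,
    which scales with the emission wavelength.
    """
    x0 = 0.5 * (xm1 + xp1)
    y0 = 0.5 * (ym1 + yp1)
    ssd = xp1 - xm1
    return x0, y0, ssd

# Example with hypothetical pixel coordinates of one molecule pair.
print(virtual_localization(xm1=112.4, ym1=57.9, xp1=188.0, yp1=58.1))
# -> (150.2, 58.0, 75.6): virtual location plus the SSD used for spectral analysis
```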
Single and multicolour SDsSMLM imaging of nanospheres
To test the feasibility of SDsSMLM, we first imaged fluorescent nanospheres (200-nm diameter, F8807, Invitrogen). As a proof of principle, we used a grating (#46070, Edmund Optics) that split the emitted fluorescence photons into the −1st, 0th, and 1st orders at 22.5%, 28.5%, and 24% transmission efficiencies, respectively (Fig. S1). The −1st order and 1st order images are the symmetrically dispersed spectral images, and the 0th order image is the spatial image. The 0th order image was used for comparison with the virtual spatial image estimated from the −1st and 1st order spectral images. Figure 2a, b show the two symmetrically dispersed spectral images, and Fig. 2c shows the simultaneously captured actual spatial image overlaid with the virtual spatial image. Details of the experiment and the image reconstruction are described in the "Materials and methods" section. We observed that the virtual spatial locations (the green plus symbols in Fig. 2c) of nanospheres estimated from the spectral images agree well with the PSFs and further with the directly obtained spatial locations (the magenta circle symbols). The accuracy for the nanosphere in the highlighted region in Fig. 2c is illustrated in Fig. 2d-f. Note that each localization is rendered using a circle with a 1-pixel diameter for better illustration in Fig. 2d, f. In addition, we numerically corrected the location offset (17.68 ± 23.28 nm and 32 ± 9.99 nm (mean ± standard deviation) along the x- and y-axes, respectively) between the virtual and actual spatial locations after image reconstruction. In addition, we characterized the accuracy of the SDsSMLM system using a nanohole array. Details of the experiment are described in Supplementary Note 1 and Fig. S2. The accuracy over the entire field of view (FOV) is 14.43 ± 10.25 nm along the y-axis and 19.86 ± 12.08 nm along the x-axis.
We characterized the spectroscopic signatures of nanospheres using the spectral centroid method 5,13 . Figure 2g shows the scatter plot of the photon count versus spectral centroid for five nanospheres. We observed a narrow spectral centroid distribution of the five nanospheres centred at 696 nm with a spectral precision of 0.35 nm. Figure 2h shows the averaged spectrum of one of the nanospheres from 200 frames (purple cluster in Fig. 2g).
In addition to functional imaging based on spectral analysis 7, sSMLM allows multicolour imaging with theoretically unlimited multiplexing capability. The multiplexing capability is predominantly determined by the spectral separation of the selected dyes and the spectral precision under given experimental conditions 1,5,6. We validated this capability of SDsSMLM using two types of nanospheres (200-nm diameter, F8806 and F8807, Invitrogen). Experimental details are described in the "Materials and methods" section. Figure 3a, b show the first frame of the simultaneously recorded spectral images. While estimating the spatial locations of individual molecules (Fig. 3c), we successfully classified the different types of nanospheres based on their spectral centroid distribution (Fig. 3d). The red and blue colours in Fig. 3c correspond to the spectral centroids of the crimson nanospheres (centred at 690.6 nm with a spectral precision of 0.48 nm) and far-red nanospheres (centred at 696.5 nm with a spectral precision of 0.53 nm), respectively, in Fig. 3d.
Numerical simulation and experimental validation of the localization precision in SDsSMLM
In SDsSMLM, collected photons are dispersed into more pixels in the spectral image than in the spatial image 5,13 . Thus, the spatial precision of the PSF in spectral images (−1st and 1st orders) is more sensitive to noise contributions than the spatial precision of the PSF in the spatial image (0th order). Such spatial precision is affected not only by the number of collected photons and background but also by experimental parameters in the spectral channel, such as the spectral dispersion (SD) 13 and full-width at half-maximum (FWHM) of the emission spectrum, which refers to the emission bandwidth of a single molecule. Please see Supplementary Note 2 for details of the SD definition. Through numerical simulation, we investigated the influence of the SD and emission bandwidth of the emission spectrum on the spatial precision, as well as spectral precision under different experimental conditions. We further compared the spatial and spectral precisions in SDsSMLM and sSMLM both numerically and experimentally using QDs. The experimental details are described in "Materials and methods" section.
We compared the achievable spatial and spectral precisions under different SD and emission bandwidth values, where the total photon count was 1000. In sSMLM, we set the splitting ratio between the spatial (0th order) and spectral (1st order) channels to 1:3, following previously reported experimental conditions 2,4-6,8. We approximated the emission spectrum shape as a Gaussian function. Details of the numerical simulation are described in Supplementary Note 3. Figure 4a shows the contour map of the estimated spatial precision of SDsSMLM. Overall, a larger SD and a narrower emission bandwidth favour higher spatial precision. They also favour higher spectral precision (Figs. 4b and S3b). These trends are fundamentally governed by the contributions of various types of noise, such as the signal shot noise, background shot noise, and readout noise, and they agree well with analytical solutions, especially for the spectral precision 13. In contrast, sSMLM shows a uniform spatial precision regardless of the SD and emission bandwidth (Fig. S3a). This is because information in the spectral image is only used for spectral analysis in sSMLM, which is independent from and does not contribute to spatial localization. Figure 4c, d show the improvements in the spatial and spectral precisions, respectively, in SDsSMLM compared with sSMLM with respect to the SD and emission bandwidth. For example, at a 10.5-nm SD and a 35-nm emission bandwidth, which represent the experimental conditions in imaging QDs, SDsSMLM shows ~42% (from 17.93 to 10.34 nm) and ~10% (from 0.90 to 0.81 nm) higher spatial and spectral precisions, respectively, compared with sSMLM. In particular, SDsSMLM offers a relatively uniform improvement of ~10% in the spectral precision overall. This improvement is proportional to the square root of the ratio of the number of photons allocated to the spectral channel between SDsSMLM and sSMLM (Fig. 4d). We further estimated the achievable spatial and spectral precisions when the number of photons increased. As shown in Fig. 4e, f, the theoretical estimations are in good agreement with the experimental results using QDs. In addition, we investigated the influence of the splitting ratio on the spatial and spectral precisions (Fig. S4).
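A highly simplified Monte Carlo sketch of the photon-sharing argument is shown below. It ignores background, readout noise, pixelation, and spectral dispersion, and estimates each position as the mean of the photon coordinates; in this idealized setting the midpoint estimator simply recovers the full-photon limit, whereas the smaller ~42% gain reported above reflects the additional noise penalty of localizing within dispersed spectral images, which this toy model omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def localization_std(photons_per_channel, n_channels, sigma_psf=150.0, trials=5000):
    """Std of the estimated x-position (nm) when each of `n_channels` images
    receives `photons_per_channel` photons and the final position is the mean
    (midpoint) of the per-channel estimates. Each per-channel estimate is the
    mean of its photon coordinates (background-free, un-pixelated idealization).
    """
    estimates = np.empty(trials)
    for t in range(trials):
        per_channel = [rng.normal(0.0, sigma_psf, photons_per_channel).mean()
                       for _ in range(n_channels)]
        estimates[t] = np.mean(per_channel)
    return estimates.std()

total_photons = 1000
# sSMLM with a 1:3 split: only the 0th-order image (25% of photons) is used spatially.
print("sSMLM  :", round(localization_std(total_photons // 4, 1), 2), "nm")
# SDsSMLM: both spectral images (50% of photons each) contribute via the midpoint.
print("SDsSMLM:", round(localization_std(total_photons // 2, 2), 2), "nm")
```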
Multicolour SDsSMLM imaging of COS7 cells
We demonstrated the multicolour imaging capability of SDsSMLM using fixed COS7 cells. We selected Alexa Fluor 647 (AF647) and CF680, which emit at wavelengths only ~30 nm apart (Fig. 5a), to label mitochondria and peroxisomes, respectively 1,6. To classify them, we used different spectral bands based on the spectral centroid distribution 5,6: the first band from 682 to 694 nm for AF647 and the second band from 699 to 711 nm for CF680, as highlighted by the yellow and cyan colours in Fig. 5b, respectively. We visualized the colocalization of mitochondria (yellow) and peroxisomes (cyan) (Fig. 5c). We also imaged microtubules labelled with AF647 (magenta) and mitochondria labelled with CF680 (green) (Fig. 5d). By measuring the FWHM of a segment of an imaged microtubule (dashed square in Fig. 5d), we estimated the spatial resolution of SDsSMLM to be 66 nm, as shown in Fig. 5e. Additionally, we observed that the minimum resolvable distance between two tubulin filaments is within the range of 81-92 nm based on multiple Gaussian fittings of the intensity profiles (Fig. 5f, g). Using the Fourier ring correlation (FRC) method 16, we also evaluated the resolution of another reconstructed image (Fig. 5c) that visualizes mitochondria and peroxisomes. The FRC curve estimated a resolution of 111 nm (Fig. S5) at a threshold level of 1/7. In addition, we quantified the utilization ratio, defined as the ratio of the number of localizations allocated to each spectral band to the total number of localizations, in the reconstructed image (Fig. 5c). We calculated the utilization ratio in SDsSMLM using both spectral images. We also calculated the utilization ratio using only one spectral image (1st-order channel), which mimics conventional sSMLM with a 1:1 splitting ratio between the spatial and spectral channels, for comparison. We obtained a 17.4% improvement in the utilization ratio in SDsSMLM, on average for the two spectral channels, compared with that in sSMLM (Fig. S6). This result demonstrates that SDsSMLM benefits from improved spectral precision by fully utilizing all collected photons for spectral analysis, which subsequently leads to improved spectral classification for multicolour imaging.
3D single particle tracking
We added a 3D imaging capability to SDsSMLM through biplane imaging, similar to what we reported in sSMLM 5 . Since SDsSMLM already has two symmetrically dispersed spectral channels, we can efficiently implement biplane imaging by introducing an extra optical pathlength in one channel. As shown in Fig. 6a, we added a pair of mirrors into the 1st-order spectral channel in front of the EMCCD camera to generate such an optical pathlength difference. This optical pathlength difference introduced a 500-nm axial separation between the imaging planes of the two spectral channels. As a result, individual molecules are imaged with different PSF sizes according to their axial locations. By measuring the ratio between the sizes of the PSFs, we can determine the axial coordinate of each molecule through an axial calibration curve. The full description of biplane SDsSMLM image reconstruction is described in "Materials and methods" section.
We demonstrated 3D biplane SDsSMLM by tracking individual QDs in a suspension. We tracked the movement of QDs for 5 s. We recorded 160 frames with an exposure time of 5 ms at a frame rate of 30 Hz. Figure 6b shows the 3D trajectory of one QD, colour coded with respect to time (represented by the line). The QD locations in the first and last frames are highlighted by the circles colour coded according to the measured spectral centroids. We observed that the spectral centroids remained near 614 nm throughout the tracking period with a spectral precision of 1.5 nm (Fig. S7a). We approximated the diffusion coefficient from the 3D trajectory using D = MSD/6t, where MSD is the mean squared displacement and t is the frame acquisition time 17 . The calculated diffusion coefficient is 0.012 µm 2 /s. These results demonstrate the capability of 3D biplane SDsSMLM to precisely reconstruct the 3D spatial and spectral information of single molecules in SPT.
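A minimal sketch of the diffusion-coefficient estimate D = MSD/6t from a 3D trajectory is given below; the simulated trajectory is a stand-in for the measured QD track.

```python
import numpy as np

def diffusion_coefficient(xyz, frame_time):
    """Estimate D (um^2/s) from a 3D trajectory using D = MSD / (6 t),
    where MSD is the mean squared single-frame displacement (um^2) and
    t is the frame acquisition time (s)."""
    displacements = np.diff(np.asarray(xyz, dtype=float), axis=0)
    msd = np.mean(np.sum(displacements**2, axis=1))
    return msd / (6.0 * frame_time)

# Example: simulate a freely diffusing particle with D = 0.012 um^2/s at 30 Hz.
rng = np.random.default_rng(1)
dt, d_true = 1.0 / 30.0, 0.012
steps = rng.normal(0.0, np.sqrt(2 * d_true * dt), size=(160, 3))
trajectory = np.cumsum(steps, axis=0)
print(round(diffusion_coefficient(trajectory, dt), 4))  # close to 0.012
```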
Discussion
We demonstrated that SDsSMLM acquires both spatial and spectral information of single molecules from two symmetrically dispersed spectral images without capturing the spatial image. SDsSMLM maintains the highest achievable spectral precision per emitter under given experimental conditions, as it fully uses all collected photons for spectral analysis. In addition, it addresses the inherent trade-off between the spatial and spectral precisions by sharing all collected photons between the spatial and spectral channels. We observed that SDsSMLM achieved 10.34-nm spatial and 0.81-nm spectral precisions with 1000 photons, which correspond to improvements of 42% (approximately a doubling of the effective photon budget) and 10% in the spatial and spectral precisions, respectively, compared with sSMLM using a 1:3 ratio between the spatial and spectral channels.
We applied SDsSMLM to multicolour imaging and 3D SPT. It should be noted that these experimental demonstrations were based on a grating that split the beam into the −1st and 1st orders with efficiencies of 22.5% and 24%, respectively. Thus, only approximately half of the photons of the emitted fluorescence were used for image reconstruction in multicolour imaging. Consequently, the current implementation of SDsSMLM has a reduced image resolution. This can be improved by replacing this grating with a new phase grating that significantly suppresses the 0th order and maximizes the transmission efficiency at the −1st and 1st orders only, with a relatively high total transmission efficiency expected to be more than 85% 18. In comparison, the blazed grating reported in our previous sSMLM system 5,6 has an absolute transmission efficiency of ~18% for the 0th order and ~50% for the 1st order in the far-red channel, corresponding to an overall efficiency of ~68%. Considering the ~85% efficiency of the phase grating, the localization precision will scale favourably due to the increased photon utilization efficiency. In this work, to compare both the spatial and spectral precisions between SDsSMLM and conventional sSMLM, we assumed an identical total number of photons in both systems and 100% absolute transmission efficiency. Specifically, we compared two cases: (1) 25% absolute transmission efficiency for the 0th order and 75% for the 1st order in the standard sSMLM system and (2) 50% absolute transmission efficiency for both the 1st and −1st orders in SDsSMLM. This reasonably mimics a comparison study using the optimized phase grating and the normal blazed grating. In addition, the resolution can be further improved by using a larger SD and a narrower emission bandwidth, as SDsSMLM favours a large SD and a narrow emission bandwidth for high spatial precision. However, an extremely low SD may compromise one of the benefits of SDsSMLM for functional studies that involve resolving minute spectroscopic features in single-molecule spectroscopy. This suggests that SDsSMLM requires careful dye selection and system optimization to achieve the desired spatial and spectral precisions.
The FOV in SMLM is mainly determined by the objective lens, the field of illumination, and the active area of the camera. For sSMLM equipped with a grating-based spectrometer, the FOV is further restricted by the diffraction angle of the 1st order of the grating, which determines the separation between the spatial and spectral images. In this work, our FOV was restricted to ~30 × 5 µm², as we also captured the 0th order to compare the virtual and actual spatial images. This constraint can be relaxed in the future by using a customized grating that suppresses the 0th order. In this case, the FOV primarily depends on the separation between the −1st and 1st orders, which could increase the FOV by at least two-fold. Additionally, it can be further addressed in 3D biplane SDsSMLM by separately manipulating the two diffraction orders.
In 3D biplane SDsSMLM, the PSFs of individual molecules in the spectral images are blurred when they are at out-of-focus planes. This does not allow for a detailed spectral analysis. However, the calculated spectral centroid can still be used to separate two dyes with slightly different fluorescence spectra 5. In addition, small differences occur in the magnification and the SD between the two spectral images, caused by their different pathlengths. However, the spectral centroid is not significantly affected by these issues and is sufficient for extracting the spectroscopic signatures of individual molecules. We numerically corrected these differences before image reconstruction and spectral analysis. We observe a spectral precision of 1.5 nm throughout the tracking period under the given experimental conditions: the signal level is ~6700 photons, and the background level is ~800 photons in total.
Optical setup and image acquisition for SDsSMLM imaging
We performed all experiments using a home-built SDsSMLM system based on an inverted microscope body (Eclipse Ti-U, Nikon) (Fig. 1a). We used a 640-nm laser to excite nanospheres, AF647, and CF680 and a 532-nm laser to excite QDs. The laser beam was reflected by a dichroic filter (FF538-FDI01/FF649-DI01-25X36, Semrock) and focused onto the back aperture of an oil immersion objective lens (CFI Apochromat ×100, Nikon). We used a high oblique angle to illuminate the samples. The emitted fluorescence light was collected by the objective lens and focused by the tube lens onto the intermediate image plane after passing through a long-pass filter (LPF) (BLP01-532R/647R-25, Semrock). We inserted a slit at the intermediate image plane to confine the FOV and subsequently placed a transmission grating (46070, Edmund Optics) to disperse the emitted fluorescence into the −1st, 0th, and 1st orders. Then, the dispersed fluorescence emissions were captured by an EMCCD camera (iXon 897, Andor) with a back-projected pixel size of 160 nm after passing through relay optics (f = 150 mm, AC508-150-B-ML, Thorlabs).
For SDsSMLM imaging of nanospheres, we acquired 200 frames at a power density of ~0.02 kW/m² with an exposure time of 20 ms. For the experimental validation of the localization precision using QDs, we acquired 200 frames while varying the signal intensity (photon count) by adjusting the EMCCD exposure time and controlling the illumination power using a neutral density filter (NDC-50C-4M, Thorlabs). For multicolour SDsSMLM imaging of fixed COS7 cells, we acquired 20,000 frames at 10 kW/cm² with an exposure time of 20 ms. For SPT in 3D, we acquired 160 frames at ~0.02 kW/cm² with an exposure time of 5 ms.
Image reconstruction for SDsSMLM imaging
For image reconstruction, we first localized the individual molecules in the two spectral images (PSFx−1y−1 and PSFx+1y+1) with 2D Gaussian fitting using ThunderSTORM 14. Then, using customized MATLAB codes, we classified them into two groups corresponding to the −1st and 1st orders and estimated the spatial locations of pairs of localizations by calculating their mean values. Next, we formed the virtual image (PSFx0y0) using the estimated spatial locations.
For spectral calibration, we first captured a calibration image using a narrow slit and a calibration lamp. This calibration image included multiple spectral lines of the calibration lamp in the two spectral images. By integrating the two spectral images along the y-axis, we obtained emission peaks centred at 487.7, 546.5, and 611.6 nm (Fig. S8a). Then, we obtained a calibration curve by fitting the wavelengths of the emission peaks with their corresponding pixel distances using a linear polynomial function (Fig. S8b). Using the obtained calibration curve, we calibrated the emission spectra of individual molecule pairs. Finally, we obtained the final emission spectra by combining the two symmetrical emission spectra. We used the three emission peaks at 487.7, 546.5, and 611.6 nm of the calibration lamp for the first experimental demonstration using nanospheres and the two emission peaks at 620.23 and 603.24 nm of a neon lamp (6032, Newport) in all other experiments.
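The following sketch illustrates the linear spectral calibration described above, fitting the peak wavelengths against their pixel distances; the pixel positions are hypothetical values chosen only for illustration.

```python
import numpy as np

# Known emission lines of the calibration lamp (nm) and the (hypothetical)
# pixel distances at which their peaks appear in the dispersed image.
peak_wavelengths = np.array([487.7, 546.5, 611.6])
peak_pixels = np.array([102.0, 108.7, 116.1])      # assumed values for illustration

# Linear polynomial calibration: wavelength as a function of pixel distance.
slope, intercept = np.polyfit(peak_pixels, peak_wavelengths, 1)
print(f"dispersion ~ {slope:.2f} nm/pixel")

def pixel_to_wavelength(pixel):
    """Convert a pixel distance in the spectral image to wavelength (nm)."""
    return slope * pixel + intercept

print(round(pixel_to_wavelength(110.0), 1))
```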
To characterize the spectroscopic signatures of individual molecules, we used the spectral centroid 13. For all the experimental demonstrations, we estimated the spectral centroid in the same manner except for multicolour imaging using nanospheres. Unfortunately, in that experiment, we could rarely distinguish the two different types of nanospheres based on the spectral centroid values, as the emission of one of the nanospheres (crimson) was partially rejected by the LPF. Thus, we fitted the emission spectrum using a Gaussian function and used the emission peak as an approximation of the spectral centroid.
For imaging nanospheres, QDs, and fixed cells, we used spectral windows of 650-750, 565-665, and 625-775 nm, respectively. In addition, we rejected blinking events below 500 photons during the spectral analysis in multicolour imaging of fixed cells. We used an SD of 8.8 nm/pixel in nanosphere imaging and an SD of 10.5 nm in all other experiments.
Image reconstruction for 3D biplane SDsSMLM
We reconstructed the 3D image in a similar manner as previously described for 3D biplane sSMLM 5, except that we used one symmetrically dispersed spectral image (−1st order), instead of the spatial image (0th order), together with the other spectral image (1st order) for biplane imaging. We first captured a 3D calibration image using QDs. This image contained a few samples in both spectral images at different depths. The QDs were scanned from −1.5 to +1.5 µm along the z-axis with a step size of 25 nm. Next, we obtained one-dimensional (1D) PSFy profiles by integrating the spectral images along the x-axis. Then, we measured the FWHM of the two 1D PSFy profiles and estimated their ratio (Fig. S7b). We used this ratio to calibrate the axial coordinate of each molecule.
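A minimal sketch of this axial calibration is shown below: the measured ratio of the two PSFy widths is interpolated against a calibration curve to return the axial coordinate. The calibration curve used here is an assumed monotonic placeholder, not the measured one.

```python
import numpy as np

# Axial calibration: FWHM ratio (-1st / +1st order PSFy) measured at known z
# positions while scanning QDs from -1.5 to +1.5 um in 25-nm steps.
z_cal = np.arange(-1.5, 1.5 + 1e-9, 0.025)     # um
ratio_cal = 1.0 + 0.45 * z_cal                 # assumed monotonic calibration curve

def axial_position(fwhm_minus1, fwhm_plus1):
    """Interpolate the axial coordinate (um) from the ratio of the two
    1D PSFy widths, using the calibration curve above."""
    ratio = fwhm_minus1 / fwhm_plus1
    return np.interp(ratio, ratio_cal, z_cal)

print(round(float(axial_position(fwhm_minus1=1.12, fwhm_plus1=1.00)), 3))  # ~0.27 um
```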
Sample preparation for SDsSMLM imaging
We prepared nanosphere samples for single and multicolour SDsSMLM imaging according to the following steps. Cover glass was rinsed with phosphate-buffered saline (PBS), coated with poly-L-lysine (PLL, P8920, Sigma-Aldrich) for 1 h, and washed with PBS three times. Nanospheres (200-nm diameter; F8806 and F8807, Invitrogen) were diluted 10⁴ times with a cross-linking buffer containing EDC (1 mg mL−1, 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride) and NHS (1 mg mL−1, N-hydroxysuccinimide) in 50 mM MES buffer (2-(N-morpholino)ethanesulfonic acid, pH ~6, 28390, Thermo Fisher). Two hundred microlitres of the cross-linking buffer with nanospheres was added to the PLL-coated cover glass. The cover glass was rinsed with PBS and dried under filtered air. Then, a drop of antifade mounting medium (P36965, Invitrogen) was added to a cover slip. The cover glass with the samples was mounted on the cover slip by sandwiching the samples between them.
We prepared the QD sample according to the following steps. QDs (777951, Sigma-Aldrich) were diluted 10⁴ times in water. A total of 400 μL of the QD solution with a concentration of 0.5 μg mL−1 was deposited onto cover glass using a Laurell WS-650SZ-23NPPB spin-coater at 2000 rpm for 1 min. The cover glass with the sample was mounted on a cover slip by sandwiching the sample between them.
For multicolour SDsSMLM imaging, COS7 cells (ATCC) were maintained in Dulbecco's modified Eagle medium (DMEM, Gibco/Life Technologies) supplemented with 2 mM L-glutamine (Gibco/Life Technologies), 10% fetal bovine serum (Gibco/Life Technologies), and 1% penicillin and streptomycin (100 U mL −1 , Gibco/Life Technologies) at 37°C with 5% CO 2 . Cells were plated on cover glass at 30% confluency. After 48 h, the cells were rinsed with PBS and then fixed with 3% paraformaldehyde and 0.1% glutaraldehyde in PBS for 10 min at room temperature. After washing with PBS twice, the cells were quenched with 0.1% sodium borohydride in PBS for 7 min and rinsed twice with PBS. The fixed cells were permeabilized with a blocking buffer (3% bovine serum albumin (BSA) and 0.5% Triton X-100 in PBS for 20 min), followed by incubation with the primary antibodies in the blocking buffer for 1 h. For multicolour imaging of mitochondria and peroxisomes, the primary antibodies used in the study were mouse anti-TOM20 directly labelled with AF647 (2.5 μg mL −1 , sc-17764-AF647, Santa Cruz) and rabbit anti-PMP70 (1:500 dilution, PA1-650, Thermo Fisher). The samples were washed three times with washing buffer (0.2% BSA and 0.1% Triton X-100 in PBS) for 5 min and incubated with secondary antibodies labelled with CF680 (2.5 μg mL −1 donkey anti-rabbit IgG-CF680) for 40 min. For multicolour imaging of microtubules and mitochondria, the primary antibodies used in the study were sheep anti-tubulin (2.5 μg mL −1 , ATN02, Cytoskeleton) and mouse anti-TOM20 (2.5 μg mL −1 , sc-17764, Santa Cruz). After washing with washing buffer three times for 5 min, the samples were incubated with secondary antibodies labelled with AF647 and CF680 (2.5 μg mL −1 donkey anti-sheep IgG-AF647 and anti-mouse IgG-CF680) for 40 min. The dyes were conjugated to the IgG following a literature protocol (degree of label =~1) 19 . The cells were then washed with PBS three times for 5 min and stored at 4°C. An imaging buffer (pH = ∼8.0, 50 mM Tris, 10 mM NaCl, 0.5 mg mL −1 glucose oxidase (G2133, Sigma-Aldrich), 2000 U/mL catalase (C30, Sigma-Aldrich), 10% (w/v) D-glucose, and 100 mM cysteamine) was used to replace PBS before image acquisition.
For 3D SPT, a QD solution of 0.5 μg mL−1 in water was mixed with glycerol (v/v = 1:9) and vortexed for 10 s. Then, 50 μL of the final solution was immediately added onto cover glass. The free-diffusing single QDs were then observed and tracked.
"Physics",
"Chemistry"
] |
Molecular Evolution of CatSper in Mammals and Function of Sperm Hyperactivation in Gray Short-Tailed Opossum
Males have evolved species-specific sperm morphology and swimming patterns to adapt to different fertilization environments. In eutherians, only a small fraction of the sperm overcome the diverse obstacles in the female reproductive tract and successfully migrate to the fertilizing site. Sperm arriving at the fertilizing site show hyperactivated motility, a unique motility pattern displaying asymmetric beating of the sperm flagella with increased amplitude. This motility change is triggered by Ca2+ influx through the sperm-specific ion channel CatSper. However, the current understanding of CatSper function and its molecular regulation is limited to eutherians. Here, we report the molecular evolution and genomic conservation of the CatSper channel across eutherians and marsupials. Sequence analyses reveal that CatSper proteins have evolved slowly in marsupials. Using an American marsupial, the gray short-tailed opossum (Monodelphis domestica), we demonstrate the expression of CatSper in testes and its function in hyperactivation and unpairing of sperm. We demonstrate that a conserved IQ-like motif in CatSperζ is required for CatSperζ interaction with the pH-tuned Ca2+ sensor EFCAB9 for regulating CatSper activity. Recombinant opossum EFCAB9 can interact with mouse CatSperζ despite the high sequence divergence of CatSperζ among CatSper subunits in therians. Our findings suggest that the molecular characteristics and functions of CatSper are evolutionarily conserved in the gray short-tailed opossum, unraveling the significance of sperm hyperactivation and fertilization in marsupials for the first time.
Introduction
To win the competition with male rivals over females and breed successfully [1], males have evolved unique reproductive strategies. The rapid evolution of sperm design, such as morphology, swimming patterns, and/or cell numbers in ejaculates, can establish the most successful strategy to fertilize the eggs [2]. Mammalian males have evolved increased sperm numbers in their ejaculates to overcome the physical, chemical, and anatomical obstacles in the female reproductive tract: the acidic environment near the vagina, cervical mucus, fluid flow in the uterus and the oviduct, the narrow path of the uterotubal junction (UTJ), and/or the immune system limit the number of sperm cells that reach the fertilizing sites [3,4].
A small number of mammalian sperm cells that arrive at the fertilizing site show a unique motility pattern called hyperactivated motility, characterized by asymmetric tail beating with increased flagellar amplitude [5,6]. In mice, hyperactivated motility enables sperm to pass the UTJ [7], to swim efficiently in the viscous fluid of the oviductal lumen [8], and to detach from the oviductal reservoir to approach the eggs [9]. In addition, hyperactivation is required to penetrate the glycoprotein barrier of the oocytes, the zona pellucida.
Sperm Collection and Capacitation
Epididymal spermatozoa from adult gray short-tailed opossums (>4 months old) and mice (>90 days old) were collected by the swim-out method. Briefly, the collected cauda epididymis was placed in M2 medium (MilliporeSigma, Burlington, MA, USA) at 37 °C for 10 min (mouse) or 20 min with gentle rocking (opossum). For capacitation, the collected opossum and mouse sperm were washed once by centrifugation with M2 medium and incubated in human tubal fluid (HTF) medium (MilliporeSigma, Burlington, MA, USA) at 37 °C under 5% CO2 for 90 min. To induce sperm capacitation under Ca2+-chelated or CatSper channel-inhibiting conditions, 2.5 mM EGTA or 10 µM NNC 55-0396 (NNC, Alomone Labs, Jerusalem, Israel), respectively, was supplemented in the HTF medium.
Flagella Waveform Analysis
Uncapacitated or capacitated opossum sperm were transferred to the imaging chamber of a Delta-T culture dish controller (Bioptechs, Butler, PA, USA) containing 37 °C HEPES-buffered HTF medium with 0.5% methylcellulose (w/v) [14]. After 1 min, the flagellar movement of head-tethered sperm was recorded for 2 s at 200 fps using a pco.edge sCMOS camera mounted on an Axio Observer Z1 microscope (Carl Zeiss, Oberkochen, Germany). FIJI software [22] was used to measure the α-angle and beat frequency of the sperm flagella and to draw overlaid images of the flagella over two beat cycles, as described in a previous study [14].
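In the study, the α-angle and beat frequency were measured in FIJI; as an illustrative alternative, the beat frequency can be estimated from an α-angle time series by locating the dominant Fourier peak, as in the sketch below with a synthetic trace.

```python
import numpy as np

def beat_frequency(alpha_angles, fps):
    """Estimate the flagellar beat frequency (Hz) as the dominant peak of the
    Fourier spectrum of the alpha-angle time series recorded at `fps`."""
    angles = np.asarray(alpha_angles, dtype=float)
    angles = angles - angles.mean()                  # remove the static offset
    spectrum = np.abs(np.fft.rfft(angles))
    freqs = np.fft.rfftfreq(angles.size, d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]        # skip the DC bin

# Example: 2 s of recording at 200 fps with a 9-Hz beat (synthetic trace).
t = np.arange(0, 2, 1 / 200)
trace = 40 + 25 * np.sin(2 * np.pi * 9 * t)
print(beat_frequency(trace, fps=200))                # 9.0 Hz
```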
Swimming Trajectory Analysis
To analyze swimming trajectories, opossum sperm cells were transferred to 37 °C H-HTF medium with 0.5% methylcellulose (w/v) in an imaging chamber coated with 0.2% agarose to minimize head attachment to the plate. Sperm movements were recorded for 2 s at 100 fps using the Axio Observer Z1 microscope with the pco.edge sCMOS camera (Carl Zeiss, Oberkochen, Germany). Overlaid images tracing the sperm swimming paths were generated using FIJI software.
Sperm Fluorescence Staining
Uncapacitated and capacitated gray short-tailed opossum sperm cells were washed with PBS and attached to glass coverslips by centrifugation at 700× g for 5 min. The sperm cells were fixed with 4% PFA for 10 min at room temperature (RT), followed by washing with PBS three times. The fixed sperm were permeabilized with 0.1% Triton X-100 in PBS at RT for 10 min and blocked with 10% normal goat serum in PBS for an hour at RT. The opossum sperm were stained with 10 µg/mL of mouse monoclonal phosphotyrosine antibody (clone 4G10, MilliporeSigma, Burlington, MA, USA) or Alexa-568-conjugated peanut agglutinin (PNA, Invitrogen, Carlsbad, CA, USA) in blocking solution at 4 °C overnight. Immunostained coverslips were washed with PBS three times and incubated with goat anti-mouse IgG conjugated with Alexa-568 (1:1000; Invitrogen, Carlsbad, CA, USA) in blocking solution for an hour at RT. Stained samples were mounted with Vectashield (Vector Laboratories, Burlingame, CA, USA) and imaged with a Zeiss LSM710 Elyra P1 using a Plan-Apochromat 63×/1.40 oil objective lens. Hoechst (Invitrogen, Carlsbad, CA, USA) was used for counterstaining.
Scanning Electron Microscopy
Opossum sperm cells were attached to glass coverslips and fixed with 2.5% glutaraldehyde (GA) in 0.1 M sodium cacodylate buffer (pH 7.4) for one hour at 4 °C, followed by post-fixation with 2% osmium tetroxide in 0.1 M cacodylate buffer. Post-fixed samples were washed with 0.1 M cacodylate buffer three times and dehydrated through a graded ethanol series to 100%. Fixed samples were dried using a Leica 300 critical point dryer with liquid carbon dioxide. The coverslips were glued to aluminum stubs and sputter-coated with 5 nm of platinum using a Cressington 208HR (Ted Pella Inc., Redding, CA, USA) rotary sputter coater. Samples were imaged with a Hitachi SU-70 scanning electron microscope (Hitachi High-Technologies, Tokyo, Japan).
RNA Extraction and qRT-PCR
Opossum testicular RNA was extracted from males aged 2 months, 4-5 months, and over 18 months using the RNeasy mini kit (Qiagen, Hilden, Germany), and 500 ng of the total RNA was reverse-transcribed to cDNA using the iScript cDNA Synthesis Kit (BioRad, Hercules, CA, USA). The cDNA samples were subjected to qRT-PCR (CFX96, BioRad, Hercules, CA, USA) or to cloning of the open reading frame (ORF) of opossum EFCAB9. The primer pairs used for qRT-PCR are listed in Table S2. TBP was used as the reference to normalize expression levels, and the relative mRNA expression levels of each CatSper subunit were calculated by the ddCt method.
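A minimal sketch of the ddCt calculation is given below; the Ct values are hypothetical, and TBP serves as the reference gene, as in the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative mRNA level by the ddCt method: 2^-(dCt_sample - dCt_calibrator),
    where dCt = Ct(target) - Ct(reference, here TBP)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: a CatSper subunit vs TBP in an adult testis sample,
# with the 2-month-old sample as the calibrator.
print(relative_expression(ct_target=24.1, ct_ref=22.0,
                          ct_target_cal=27.5, ct_ref_cal=22.2))  # ~9.2-fold
```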
Statistical Analysis
Statistical analyses were performed with a one-way analysis of variance (ANOVA) with the Tukey post hoc test. Differences were considered significant at * p < 0.05; ** p < 0.01; *** p < 0.001.
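As an illustration of this analysis, the sketch below runs a one-way ANOVA followed by a Tukey post hoc test on hypothetical measurements from three groups, using SciPy and statsmodels.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical alpha-angle measurements (degrees) for three conditions.
uncap = np.array([38, 41, 35, 40, 37, 39])
cap_paired = np.array([52, 55, 50, 58, 53, 56])
cap_single = np.array([70, 74, 68, 72, 75, 71])

f_stat, p_value = f_oneway(uncap, cap_paired, cap_single)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

values = np.concatenate([uncap, cap_paired, cap_single])
groups = ["uncap"] * 6 + ["cap_paired"] * 6 + ["cap_single"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```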
CatSper Components Are Conserved and Evolved Slowly in Marsupials
Marsupials and eutherians diverged from their common ancestor around 160 million years ago (MYA) (Figure 1A and Figure S1A). To understand the molecular evolution of CatSper components in marsupials, we performed comparative amino acid sequence analyses of CatSper proteins from 71 therian mammals, including the Tasmanian devil (Sarcophilus harrisii), a marsupial in which all the reported CatSper subunits are annotated in the genome (Figure S1B). Pairwise distance analyses of the concatenated sequence of all ten CatSper subunits (CatSper1-2-3-4-β-γ-δ-ε-ζ-EFCAB9) revealed that Tasmanian devil CatSper protein sequences are highly divergent among therian mammals, together with CatSper proteins in rodents (Figure 1B, left, and Figure S1C, left). The sequence comparison of the Tasmanian devil CatSper subunits with those of eutherian species also supports that Tasmanian devil CatSper proteins are distinct from eutherian CatSper proteins (Figure 1C, left). Interestingly, phylogenetic analyses showed that Tasmanian devil CatSper proteins cluster together with those of Laurasiatherians despite their distinctive protein sequences (Figure S1A, right). These results indicate that the molecular evolutionary patterns of CatSper proteins do not simply follow taxonomic classification, suggesting that CatSper components might have evolved more slowly in the Tasmanian devil than in rodents. Thus, we normalized the pairwise distances of CatSper proteins by the divergence time between two species and compared the values within therian mammals (Figure 1B, right, and Figure S1C, right). Indeed, the normalized pairwise distances reveal that Tasmanian devil CatSper proteins are less divergent than those in rodents when their divergence time is considered. Interclade comparison also supports the slower evolution of the CatSper proteins in the Tasmanian devil (Figure 1C, right). All these results indicate that CatSper proteins have evolved rapidly in rodents but slowly in the Tasmanian devil; marsupial CatSper proteins might have conserved physiological functions and molecular characteristics inherited from the common ancestor of therian mammals.
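A minimal sketch of the divergence-time normalization described above is given below; the distances and divergence times are illustrative, and dividing by twice the divergence time (the total branch length separating the two species) is an assumed convention.

```python
# Normalize pairwise protein distances by divergence time so that species pairs
# with very different ages can be compared as per-lineage substitution rates.
# Values below are illustrative, not the study's measurements.
pairwise = {
    ("mouse", "rat"): (0.18, 20.9),           # (distance, divergence time in MY)
    ("human", "mouse"): (0.35, 87.0),
    ("human", "tasmanian_devil"): (0.48, 160.0),
}

for pair, (distance, divergence_my) in pairwise.items():
    # Divide by twice the divergence time because both lineages have been
    # accumulating substitutions since their split (assumed convention).
    rate = distance / (2.0 * divergence_my)
    print(pair, f"{rate:.5f} substitutions per site per MY")
```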
Gray Short-Tailed Opossum Sperm Develop Hyperactivated Motility after Incubating under Capacitating Conditions
The comparative sequence analyses raised the possibility that the physiological function and molecular workings of the CatSper channel in sperm might have been conserved in marsupials ( Figure 1). Currently, CatSper subunits are annotated fully or partly in five marsupial species ( Figure S1B): four Australian marsupials - Tasmanian devil (S. harrisii), koala (Phascolarctos cinereus), common brushtail possum (Trichosurus vulpecula), and common wombat (Vombatus ursinus) - and one American marsupial - gray short-tailed opossum (M. domestica, Figure 2A). Pairwise sequence analyses showed that CatSper proteins in the gray short-tailed opossum and the Australian marsupials are sequence homologues and evolutionarily closer to each other than to eutherian CatSper proteins ( Figure S2). These results suggest that the gray short-tailed opossum and Australian marsupials are likely to share the physiological function and molecular characteristics of CatSper proteins despite their early divergence time (82 MYA, Figure S1B).
The CatSper channel is activated under capacitating conditions, which enables eutherian sperm to beat asymmetrically and develop hyperactivated motility. However, it is not known whether marsupial sperm develop hyperactivated motility by activating the CatSper channel during capacitation. Thus, we examined flagellar movement and swimming patterns of sperm from the gray short-tailed opossum, an established laboratory marsupial, before and after incubation under capacitating conditions. Opossum sperm are in paired or unpaired forms in the epididymis [23]. The acrosome localizes at the dorsal area of the sperm head, where two sperm form a pair ( Figure 2B,C) [23]. Scanning electron microscopy clearly shows that the opossum sperm flagellum is composed of a midpiece and a principal piece separated by the annulus ( Figure 2D). In addition, longitudinal columns are also observed in the principal piece, just like the compartmentalized flagella of eutherian species.
In order to understand how capacitation induces changes in opossum sperm motility, we compared their flagella beating patterns and swimming trajectory before and after incubating under capacitating conditions ( Figure 3 and Videos S1 and S2). Flagella waveform analyses revealed that opossum sperm flagella beat asymmetrically after inducing capacitation ( Figure 3A-C and Video S1). We compared the maximum angle of the primary curvature (α-angle, [24]) ( Figure 3B) and found that inducing capacitation often led one of the two flagella in paired sperm to beat asymmetrically first, followed by increasing the amplitude of beating flagella in both paired and unpaired sperm eventually ( Figure 3A). Notably, capacitated single sperm show the highest flagellar amplitude, slowing down the beating frequency ( Figure 3C). These temporal changes in the motility patterns indicate that hyperactivated motility helps to separate the paired sperm cells into single sperm cells, which are further hyperactivated. It was previously reported that paired opossum sperm swim linearly, and single sperm swim in circles [25]. We placed opossum sperm in viscous media to mimic the luminal environment of the female reproductive tract and to better visualize the details of the fast-swimming opossum sperm. We observed that inducing capacitation does not change the linear swimming pattern in paired sperm ( Figure 3D and Video S2). By contrast, the radius of the circular swimming path in single sperm increases after inducing capacitation, similar to what is observed for capacitated mouse sperm swimming under viscous conditions [8]. These results suggest that the characteristics of sperm hyperactivation are conserved in the gray short-tailed opossum. Yet, it serves a dual role: to dissociate paired sperm and to enable unpaired sperm to swim efficiently in the female reproductive tract.
Ca 2+ Influx Is Required to Develop Hyperactivated Motility in Opossum Sperm
In eutherians, hyperactivated motility is triggered by Ca2+ influx into sperm cells [11,[26][27][28]. It has also become evident that Ca2+ signaling negatively regulates capacitation-associated protein tyrosine phosphorylation (pTyr) [7,16,29]. Incubating sperm from gray short-tailed opossum under capacitating conditions enabled them to develop hyperactivated motility as well as pTyr ( Figure 3E). To test whether the Ca2+ requirement for sperm hyperactivated motility is conserved in gray short-tailed opossum sperm, we analyzed flagellar movement and swimming patterns of opossum sperm under capacitating but Ca2+-chelating conditions ( Figure 4A,B and Videos S3 and S4). Opossum sperm capacitated under calcium-free conditions only vibrate and fail to develop hyperactivated motility ( Figure 4A, + EGTA, and Video S3). The defective flagellar movement seems to affect the free swimming of opossum sperm; the capacitated sperm fail to swim forward in the viscous medium ( Figure 4B and Video S4). We also tested how treatment with a CatSper channel inhibitor, NNC, alters motility in the capacitated opossum sperm ( Figure 4A,B and Videos S5 and S6). Although opossum sperm beat their flagella in the presence of NNC, the flagellar amplitude became relatively smaller than that of capacitated sperm without the drug treatment ( Figure 4B and Video S5). The impaired flagellar movement indicates that the NNC-treated sperm failed to develop hyperactivated motility. In addition, NNC-treated sperm swim inefficiently in a viscous medium, like sperm capacitated under Ca2+-chelated conditions ( Figure 4B and Video S6).
Figure 3. (A) Flagellar waveforms of opossum sperm. Sperm heads were tethered to the glass coverslips, and their tail beating was recorded from uncapacitated (left) and capacitated (middle and right) sperm. One flagellum of the paired sperm (middle) and an unpaired sperm flagellum (right) beat asymmetrically (arrowheads) after incubation under capacitating conditions. Flagellar waveforms from two beat cycles were overlaid, and each frame was color-coded. (B,C) Quantitative comparisons of the maximum angle of the primary curvature (B, α-angle) and beating frequency (C) of opossum sperm flagella. The α-angle (the red angles in the sperm cartoon) and beat frequency were measured from paired (circles) and single (squares) sperm cells before (black) and after (red) inducing capacitation. The α-angle of uncapacitated sperm (0 min, paired, 51.0° ± 1.7°; unpaired, 48.3° ± 2.7°) increases after inducing capacitation for 90 min (90 min, paired, 63.7° ± 1.9°; single, 82.1° ± 5.5°). The α-angle in single sperm is significantly larger than that in paired sperm after inducing capacitation. The beat frequencies of uncapacitated (0 min, paired, 16.3 ± 0.8 Hz; single, 15.8 ± 1.8 Hz) and capacitated (90 min, paired, 15.3 ± 0.9 Hz; single, 11.1 ± 1.5 Hz) opossum sperm were compared. n.s., non-significant, * p < 0.05, *** p < 0.001. Data are represented as mean ± SEM. (D) Swimming trajectories of opossum sperm. Uncapacitated (top) and capacitated (bottom) opossum sperm were subject to free swimming under 0.5% methylcellulose, and swimming trajectories were drawn by overlaying time-lapse frames. Paired sperm swim in a straight line regardless of inducing capacitation (left). By contrast, unpaired single sperm swim in circles (right, top) with increased radius after capacitation is induced (right, bottom). Filled and empty arrowheads indicate free-swimming paired and single sperm, respectively. (E) Confocal images of phosphotyrosine (pY)-immunostained opossum sperm. pY in opossum sperm was imaged before (top) and after (bottom) inducing capacitation. Hoechst was used for counterstaining. See also Videos S1 and S2.
Next, we examined the extent to which Ca2+ signaling is associated with pTyr development in opossum sperm during capacitation ( Figure 4C,D). Both paired and single opossum sperm develop capacitation-associated pTyr. Intriguingly, opossum sperm develop pTyr only marginally during capacitation with EGTA, which is contrary to the much more potentiated pTyr in mouse sperm capacitated under Ca2+-chelated conditions ( Figure S3). Capacitation-associated pTyr developed similarly with or without NNC in both opossum and mouse sperm cells. These results demonstrate that the requirement of Ca2+ influx to develop hyperactivated motility is conserved in opossum sperm and is likely mediated by the CatSper channel. Yet, pTyr development is not tightly linked to Ca2+ signaling, which might have evolved differently in gray short-tailed opossum sperm cells compared with eutherians.
Interaction of CatSperζ and EFCAB9 Is Conserved in Therian Mammals
In the presence of a CatSper channel inhibitor, the ability of gray short-tailed opossum sperm to develop hyperactivated motility is compromised (Figure 4). These results suggest that CatSper-mediated Ca2+ signaling triggers sperm hyperactivation in the opossum, as it does in eutherian mammals. To test the hypothesis that the CatSper channel is functionally expressed in the opossum, we first examined mRNA expression of CatSper subunits in opossum testis ( Figure 5). We were able to detect mRNA expression of all CatSper subunits with the exception of CatSperz, which is not currently annotated ( Figure S4). The CatSper pore subunits (CatSper1, 2, 3, and 4) and EFCAB9 are expressed at higher levels in adult (over 4-5 months old) testes than in juvenile (2 months old) testes ( Figure 5A). By contrast, CatSperg, d, and e expression was readily detected in juvenile testes, although still lower than in adult testes. This overall post-meiotic expression of the CatSper pore subunits and EFCAB9 genes, and the expression of the other auxiliary subunits ahead of them in developing testes, was previously reported [14,15,30], indicating conserved CatSper subunit expression patterns in the gray short-tailed opossum.
EFCAB9 and CatSperζ form a binary complex that is responsible in part for sensing intracellular pH and Ca2+ to regulate CatSper channel activity [15]. Mouse and human recombinant EFCAB9 and CatSperζ can interact with each other across species despite sequence variability of the eutherian CatSperζ orthologs ( Figure 5B,C) [14,15]. CatSperζ is conserved only in mammals [14,15]. Thus, the EFCAB9-CatSperζ complex is a new molecular design specific to placental mammals in regulating CatSper channel activity. Currently, marsupial CatSperz orthologs are annotated in Tasmanian devil, koala, and common wombat ( Figure S1B). Genomic loci encoding CatSperζ and its neighboring genes, however, show conserved synteny in therians ( Figure S4). These genomic characteristics suggest CatSperζ is likely to be present in the gray short-tailed opossum genome but not yet annotated, likely due to its sequence variability. Contrary to CatSperζ, its binding partner, EFCAB9, is conserved broadly in animals [15], and the amino acid sequence of opossum EFCAB9 is well aligned with those of human and mouse EFCAB9 ( Figure 5D). As EFCAB9 interacts with CatSperζ across eutherian species ( Figure 5C) despite sequence variability of CatSperζ orthologs, we hypothesized that opossum EFCAB9 could form a complex with eutherian CatSperζ if the complex formation is conserved in the opossum.
Thus, we tested whether opossum EFCAB9 can interact with eutherian CatSperζ orthologs. We expressed opossum EFCAB9 transiently together with human or mouse CatSperζ in 293T cells and performed coIP ( Figure 5E). Opossum EFCAB9 was found in the same complex with human and mouse CatSperζ, indicating that opossum EFCAB9 can interact with eutherian CatSperζ. These results suggest that CatSperζ-EFCAB9 complex formation is conserved in marsupials; the molecular mechanisms of pH-dependent Ca2+ sensing of the CatSper channel are likely shared among therian mammals.
EFCAB9 is the sperm-specific CaM-like protein containing its unique, three EF-hand Ca2+ binding domains ( Figure 5D; [15]). We searched whether CatSperζ orthologs contain CaM binding sites [21], such as the IQ motif, as proteins containing the motif are well-known regulators of CaM. The search predicted that both human and mouse CatSperζ have an IQ-like motif at their C-termini ( Figure 6A,B). To test whether the IQ-like motif is essential for the EFCAB9-CatSperζ interaction in therian mammals, we generated constructs encoding mouse CatSperζ with 21 amino acids deleted at the N- (CatSperζ-∆N21) or C- (CatSperζ-∆C21) terminus ( Figure 6C). The truncated mouse CatSperζ and EFCAB9 were heterologously expressed and subjected to coIP. Truncation of the C-terminus containing the IQ-like motif, but not the N-terminal deletion, severely impairs CatSperζ binding to EFCAB9 ( Figure 6B,D). To further clarify the requirement of the IQ-like motif in the CatSperζ-EFCAB9 interaction, we generated a construct encoding CatSperζ in which LQ is substituted with alanines (AA, CatSperζ-AA). This substitution also compromised the CatSperζ-EFCAB9 interaction ( Figure 6E). The altered interactions clearly demonstrate that the IQ-like motif is essential for CatSperζ to form a binary complex with EFCAB9. Next, we tested whether the binding of opossum EFCAB9 to CatSperζ also relies on the IQ-like motif ( Figure 6F,G). Opossum EFCAB9 cannot interact with the mouse C-terminal truncated CatSperζ ( Figure 6F) or the AA mutant ( Figure 6G). The impaired interaction between opossum EFCAB9 and mouse CatSperζ-AA suggests that opossum CatSperζ also mediates EFCAB9 signaling via its IQ-like motif. Therefore, the IQ-like motif in CatSperζ is necessary to form the EFCAB9-CatSperζ binary complex, and the motif-dependent molecular interaction is conserved in therian mammals.
Uniquely Paired Gray Short-Tailed Opossum Sperm Unpair during Capacitation
Sperm from American marsupials, including the gray short-tailed opossum, are paired during epididymal maturation (Figure 2; [31,32]), which is distinct from the unpaired epididymal sperm in Australian marsupials and eutherians. The paired sperm of the Virginia opossum (Didelphis virginiana) can pass the UTJ and maintain their physical interaction until the peri-fertilization period in the oviduct [33]. The pairing might contribute to efficient sperm migration to the fertilization site in American marsupials. Motility analyses of gray short-tailed opossum sperm support this possibility; paired sperm swim faster than unpaired sperm in viscous conditions (Figure 3; [25]), suggesting that pairing can be a sperm cooperation mechanism. Sperm conglomerates in the epididymis are also known from the egg-laying mammals, the monotremes [34]. Similar to the pairing of epididymal sperm in American marsupials, platypus (Ornithorhynchus anatinus) and echidna (Tachyglossus aculeatus) sperm assemble into bundles of approximately 100 sperm cells in the epididymis [12,35]. Thus, sperm interaction during epididymal maturation is an ancient trait inherited from ancestral mammals and retained in monotremes and American marsupials, but not in Australian marsupials and eutherians. In monotremes, an epididymal protein, SPARC, was detected in epididymal and ejaculated sperm bundles but not in dissociated single sperm cells [12], suggesting SPARC is associated with sperm bundle formation. Yet, the molecular events associated with sperm pairing in American marsupial species are unknown. Considering the conserved sperm conglomeration during epididymal transit in monotremes, SPARC or other epididymal proteins might also be involved in the sperm pairing of the gray short-tailed opossum.
Incubation under capacitating conditions in vitro unpairs the epididymal sperm of gray short-tailed opossum [25]. Previous studies observed unpaired or unpairing sperm cells from crypts of oviductal epithelium in Virginia opossum [33,36]. In the sperm undergoing unpairing, the peripheral junction is broken down, and the acrosome contents are revealed [33], enabling single sperm cells to interact with the zona pellucida efficiently. Therefore, capacitation-associated sperm unpairing in the oviduct must precede normal fertilization in American marsupials. Gray short-tailed opossum sperm develop hyperactivated motility after incubation under capacitating conditions (Figure 3). Hyperactivation alters the synchronized and symmetric flagellar beating of paired sperm cells, which would generate more force and facilitate dissociation. Intriguingly, under the capacitating conditions but lacking free Ca 2+ , we observed that two sperm were only weakly bound, and their pairing junction was almost displaced without developing hyperactivated motility ( Figure 4). These results suggest that hyperactivated motility dissociates paired sperm mechanically, while additional molecular mechanisms are also involved in sperm unpairing during capacitation. In line with this idea, a previous study showed that increasing intracellular cAMP enhanced the dissociation of echidna sperm cells from the bundle [12].
Capacitation-Associated Signaling Events Are Distinct in Gray Short-Tailed Opossum from Eutherian Mammals
Inducing capacitation develops global pTyr in sperm cells by activating the PKA signaling pathway in eutherian species [37][38][39][40][41]. In our study, we found gray short-tailed opossum sperm also develop capacitation-associated pTyr (Figures 3 and 4), similar to the report on the sperm of two Australian marsupials, the tammar wallaby (Macropus eugenii) and common brushtail possum (T. vulpecula) [42]. By contrast, echidna sperm cells do not develop pTyr when capacitated in vitro [12]. The absence of pTyr development in echidna sperm suggests that capacitation-mediated pTyr is an innovation in therian mammals. Although the physiological significance of pTyr in sperm remains unclear, previous studies have demonstrated that pTyr development is robustly enhanced in mouse sperm through genetic and pharmacological abolition of Ca2+ entry during capacitation [7,29]. In addition, a high concentration of Mg2+ further enhances pTyr development in mouse sperm capacitated in vitro in the absence of free Ca2+ [7]. This regulation indicates that Mg2+ could be another player in regulating pTyr development. In gray short-tailed opossum sperm, however, pharmacological blockage of Ca2+ entry rather suppresses capacitation-associated pTyr development (Figure 4). We used HTF medium to capacitate opossum sperm, as the overall composition of the tubular fluid of the female tract is shared among eutherian species [43]; bicarbonate, albumin, Ca2+, and nutrients are considered to be essential for sperm capacitation. Successful IVF was previously reported using a medium similar to HTF in the gray short-tailed opossum [44]. Therefore, the observed difference likely reflects a different sensitivity of gray short-tailed opossum CatSper to Ca2+ blockage in pTyr development. Yet, it would be interesting to see whether the ion composition of the luminal fluid in the female tract and/or the ion selectivity and sensitivity of CatSper is uniquely different in the gray short-tailed opossum or other marsupials from that in mouse and human. A recent mouse study observed that the small number of in vivo capacitated mouse sperm that arrive at the fertilizing site are mostly devoid of pTyr, while suppression of this inhibition was observed further down the female reproductive tract [16]. Thus, global phosphorylation might be a marker for degenerating sperm cells that should be eliminated from the female reproductive tract after fertilization occurs. Intriguingly, American marsupials produce around 10-100 times fewer sperm cells compared to Australian marsupials (except Dasyuridae species), yet 50% of the sperm cells can arrive at the upper oviduct 12 h after coitus [45]. Thus, pTyr is unlikely to be associated with sperm selection in gray short-tailed opossum sperm.
CatSperζ Conveys the IQ Domain Interaction with EFCAB9 and Species-Specific pH-Sensitivity in Therian Mammals
The CatSper channel is activated by intracellular alkalization in mice and humans [46,47]. Yet, the extent to which intracellular alkalization contributes to CatSper activation was shown to have species specificity. For example, increasing intracellular pH activates the CatSper channel in mouse and human sperm [46,48,49], but human sperm requires additional ligands for full activation, such as progesterone and/or prostaglandin E 1 . In mouse sperm, CatSper activation is further enhanced by increased intracellular Ca 2+ concentrations [15]. A CaM-like molecule, EFCAB9, forms a complex with CatSperζ and senses Ca 2+ for this modulation in a pH-dependent manner [15]. EFCAB9 is tightly bound to CatSperζ when intracellular pH is low, interacting with the channel pore and limiting Ca 2+ influx before capacitation. The capacitation-induced increase in intracellular pH weakens the interaction and potentiates Ca 2+ influx. We demonstrate that CatSperζ orthologs contain a conserved IQ-like motif, providing insights into CatSper regulation. Despite a high degree of sequence variability among CatSperζ orthologs [14,15], recombinant mouse CatSperζ can interact with opossum EFCAB9 via its IQ-like motif ( Figures 5 and 6), suggesting that a conserved IQ-like motif in CatSperζ orthologs serves as a binding site for EFCAB9 orthologs. In addition, the presence of IQ-like motifs in the CatSperζ orthologs from Tasmanian devil supports the conserved motif-mediated interaction between EFCAB9 and CatSperζ in mammals. Therefore, as seen in many other ion channels, Ca 2+ /CaM regulation via interaction with an IQ domain is a unified mechanism of CatSper regulation in therian mammals, yet the varying sequences of CatSperζ might convey different pH sensitivity in activating CatSper within therian mammals, including gray short-tailed opossum.
Regulation and Function of Sperm Hyperactivation Have Evolved Distinctively in Mammalian Lineages
In eutherians, Ca2+ enters sperm cells via CatSper and triggers hyperactivated motility [13]. The CatSper gene expression and physiological changes shared by eutherian mammals and the gray short-tailed opossum (Figures 3-5) highlight that CatSper-mediated Ca2+ signaling is an evolutionarily conserved molecular mechanism to trigger hyperactivated motility in therian species. It has been demonstrated that hyperactivation is required for mouse, hamster, and human sperm to swim efficiently in viscous media [50,51]. Sperm hyperactivation in the mouse is also essential for migration in the female reproductive tract, especially passing the utero-tubal junction (UTJ) [16,30]. Thus, hyperactivation functions at multiple levels in eutherians: limiting sperm numbers in the oviduct, helping the sperm that have passed the UTJ to navigate through the mucoidal oviduct, and finally penetrating the egg coat. The marsupial oviductal epithelium also secretes mucus, making the oviductal lumen viscous [52,53]. Yet, the marsupial UTJ presents few barriers to control sperm numbers [54]. This anatomical difference in the female reproductive tract suggests sperm hyperactivation does not play a major role in the sperm transition from the uterus to the oviduct in the gray short-tailed opossum. Accordingly, as many as around half of the inseminated sperm can pass the UTJ in American marsupials [45]. As both paired and unpaired opossum sperm are present in crypts of the oviduct [33,36], the enhanced swimming ability of unpaired and hyperactivated opossum sperm in viscous fluid (Figures 3 and 4) illuminates the functional conservation of sperm hyperactivation for efficient navigation through the convoluted and viscous oviductal environment in eutherians and marsupials.
Unlike therian species, dissociated echidna sperm show asymmetric tail beating only when subjected to the chicken perivitelline membrane but not when swimming freely, even under capacitating conditions [12]. This uniquely patterned echidna sperm hyperactivation suggests specific ligands from the eggs might be required to activate the CatSper channel in monotreme sperm; CatSper activation mechanisms have diverged between monotremes and therian mammals. In addition, the egg-contact-dependent echidna sperm hyperactivation suggests that the significant motility change is particularly designed for egg penetration, rather than sperm migration. It is of note that additional layers of shells and mucoid around the oocyte are present in monotremes [55] ( Figure 7). Therefore, sperm hyperactivation might have evolved to serve the function of penetrating egg barriers, which is conserved in all mammals, and later acquired additional roles for migration in the female reproductive tract in therian mammals.
Figure 7. A schematic cartoon depicts sperm hyperactivation in each mammalian lineage. Development of sperm hyperactivation in three mammals, mouse (top), gray short-tailed opossum (middle), and echidna (bottom), which represent each lineage, eutherian, marsupials, and monotreme, respectively, is drawn. Mouse and gray short-tailed opossum sperm develop hyperactivated motility during capacitation, but echidna sperm requires egg contact after dissociation during capacitation to be hyperactivated. Sperm hyperactivation seems to be required to penetrate the egg barrier (right, orange layer) in all three lineages.
Supplementary Materials:
The following are available online at www.mdpi.com/xxx/s1, Figure S1: Comparative protein sequence analyses of CatSper components in placental mammals, Figure S2: Marsupial CatSper subunits sequences are homologue, Figure S3: Impaired Ca 2+ entry enhances global tyrosine phosphorylation (pY) during capacitation in mouse sperm cells, Figure S4: CatSperζ orthologs are conserved in marsupials, Table S1: Species and CatSper orthologs information for sequence analyses, Table S2: Primer pairs for qRT-PCR used in this study, Video S1: Flagella move- | 8,412.6 | 2021-04-29T00:00:00.000 | [
"Biology"
] |
Classification and Grading of Arecanut Using Texture Based Block-Wise Local Binary Patterns
Arecanut is a commercial crop typical of high-rainfall regions. Arecanut has economic, cultural and medicinal importance, and is categorized into different types depending upon the region where it is grown and the market it serves. In this paper, an approach towards grading of Arecanut images is proposed. The proposed approach makes use of a global textural feature, viz., the Local Binary Pattern, for feature extraction. Initially, an image is divided into k blocks. Subsequently, the texture feature is extracted from each of the k blocks of the image. The k value is varied and has been fixed empirically. For experimentation, an Arecanut dataset with 4 different classes was created, and experiments were conducted on the whole image as well as with different numbers of blocks such as 2, 4 and 8. Grading of Arecanut is done using a Support Vector Machine classifier. Finally, the performance of the grading system is evaluated through metrics like accuracy, precision, recall and F-measure computed from the confusion matrix. The experimental results show that the most promising result is obtained for 8 blocks of the image.
INTRODUCTION
Agriculture plays a predominant role in the socio-economic development of the country. Agriculture contributes 18.1% of the gross domestic product of the country, and 10% of the country's exports come from Agriculture alone. No doubt Agriculture is the backbone of the Indian economy. In terms of total arable land in the world, India stands second largest, as over 60% of India's land area is arable. About 50% of the Indian workforce depends upon Agriculture [1][2]. Being the major contributor to the primary livelihood of mankind, it is a traditional occupation pursued by the majority of the population. A stable Agricultural sector assures a nation of food, a source of income and a source of employment. Arecanut (Areca catechu L.) is one of the important commercial crops of India. The areca tree is a feathery palm that grows to approximately 1.5 m in height and is widely cultivated in tropical India, Bangladesh, Japan, Sri Lanka, south China, the East Indies, the Philippines, and parts of Africa. The tropical palm trees bear fruit all year. The nut may be used fresh, dried, or cured by boiling, baking, or roasting. Arecanut plays a significant role in the social, religious, cultural and economic life of people in India. Its cultivation is concentrated in the North Western and South Western regions of India. The economic product is the fruit called "betel nut", used mainly for masticatory purposes. Arecanut has applications in veterinary and Ayurvedic medicines. The habit of chewing Arecanut is typical of the Indian sub-continent and its neighborhood. India accounts for about 57 percent of world Arecanut production [3]. The quality, variety and types of Arecanut vary from one place to another. Recent studies of Arecanut have shown that it has pharmacological uses such as a hypoglycemic effect, mitotic activity, etc. It was found that tannins, a by-product from the processing of immature nuts, find use in dyeing clothes, tanning leather, as a food colour, and as a mordant for producing a variety of shades with metallic salts. The nuts contain 8-12% fat, which can be extracted and used for confectionery purposes. The refined fat is harder than cocoa butter and can be used for blending. So far, humans have had a prominent role in classifying the grades and varieties of Arecanut.
IMPORTANCE AND IMPACT OF THE PRESENT WORK
Although several computer-based technologies are available for most crops, to the best of our knowledge there is as yet no advanced computer vision based technology for classifying and grading Arecanut. In particular, only a few works address Arecanut as a whole, and no work has been reported yet on Arecanut which is cut into pieces after processing. Presently, grading is carried out by people who have gained knowledge from long practice. The dependency on skilled labor has made the system more cumbersome and the entire process dependent on manual work. As we depend more on manual work, the efficiency of the entire process is reduced, since humans are more prone to error. As Areca differs from region to region, the cost of manual labor keeps increasing because different sets of people are needed for different regions. In a manual grading system, misclassification is common, as processed Arecanuts are very similar. With the manual classification and grading system, we presently achieve a success rate of at most 60 to 70%. To address this issue for Arecanut farmers, there is an increasing demand for computer vision based technology. With this proposed work, farmers can be expected to save the money they spend on manual labor for classification and grading of Arecanut, with better accuracy. This automation of the Arecanut grading system will also save time for farmers and business people, as things get done much faster compared to manual work. It will also allow the technique to be applied to similar Areca markets throughout the globe.
Different types of processed Arecanut are present in the market depending upon the area in which they are grown. Upon harvesting, the Arecanut undergoes various stages such as blanching, boiling and drying for 3 to 4 days prior to the grading process. Presently, Arecanut grading is done based on the requirement of the market, that is, it is mostly application oriented. In the market, the Arecanut is initially graded into 4 varieties based on maturity and on the application it is consumed for. Four types of Arecanut are considered for this work, namely Hasa, Bette, Gorabalu and Idi, typical of the Malnad region of Karnataka state. In the proposed method, Arecanuts are classified based on texture, namely using Local Binary Pattern histograms. We have conducted a survey and collected samples from about 20 agricultural fields and five tender markets.
Figure 1. Arecanut Collections from various Regions
In the rest of the paper, we briefly describe some related works in Section II. The proposed methodology is discussed in Section III, which includes segmentation using Otsu's thresholding, feature extraction using Local Binary Pattern histograms, and classification of Arecanuts using an SVM classifier. Experimental results and discussion are included in Section IV. Finally, the paper is concluded in Section V.
RELATED WORKS
To the best of our knowledge, classification of processed and cut Arecanut has not been done using computer vision till now. However, a few techniques have been proposed for classification of non-processed raw Arecanuts, processed uncut Arecanuts, and also for the classification of different seeds, fruits and vegetables. But no work has been reported yet towards classification of processed Areca which is cut into pieces. Ajith Danti and Suresha M have proposed several techniques to classify both raw and processed uncut Arecanuts. A few robust algorithms proposed for classification of Arecanut can be summarized as follows. Suresha M and Ajith Danti [4] proposed a technique for effective grading of Arecanut where the Arecanut RGB image is converted into the YCbCr color space. Three-sigma control limits on color features are determined for effective segmentation of Arecanuts. Color features are used for the grading of Arecanuts into two grades, i.e. boiling and non-boiling nuts, with the help of support vector machines (SVMs). Experimental k-fold cross validation demonstrated the efficiency of the proposed approach. Suresha M, Ajith Danti and S K Narasimha Murthy [5] proposed a technique to classify Arecanuts using Haar wavelets. Wavelet decomposition was used for feature extraction. The statistical feature energy is derived from the approximation coefficients at each level of decomposition, and color features are also extracted from the Arecanut images for classification. Here they used a decision tree classifier with several tree-splitting rules such as the Gini diversity index, the twoing rule and entropy. The proposed algorithm was verified on Arecanut images with the cross validation method and achieved a good success rate. Suresha M and Ajith Danti [6] have also proposed a technique to grade raw Arecanuts. For Arecanut grading they used color as the main feature. A threshold based segmentation algorithm was used initially for segmentation. In the segmented region, by suppressing the blue color component, only the red and green components are used to classify the Arecanuts. The average red and green components of an Arecanut are extracted, and based on the extracted features the Arecanut is classified into various categories. A combination of SVM and kNN classifiers is used to classify different types of Arecanuts. Among raw Arecanuts, the test results showed that the system achieved a success rate of up to 98%. Suresha M and Ajith Danti [7] have also proposed a technique for classification of Arecanut based on texture features. They used watershed segmentation to segment the Arecanut images. GLCM features and Mean Around features are extracted from the segmented regions. They used Mean Around features, Gray Level Co-occurrence Matrix (GLCM) features and combined (Mean Around-GLCM) features for classification of Arecanut. For classification they used a decision tree classifier, and classification was done into six classes (Api, Black Bette, Red Bette, Chali, Minne, Gotu). The technique gives convincing results as well. The cross validation method was used for testing, and it was found that the GLCM features gave a success rate of 97.65%, the Mean Around features gave a success rate of 98.28%, and the Mean Around-GLCM features gave a success rate of 99.05%. Suresha M, Ajith Danti and Narasimha Murthy S K [8] proposed a technique for classification of Arecanut in which HSV images were obtained from the respective RGB images. Then, with the help of a threshold based segmentation method, segmentation was done by extracting the saturation channel. The LBP was then applied to the Arecanut images. With its help, LBP, Gabor, image histogram and GLCM features were obtained. Correlation distance metric classification was done with the histogram features, and classification was then done with Gabor, GLCM and combined (GLCM-Gabor) features using a kNN classifier. The obtained results show that the combined features gave convincing results and the success rate is directly proportional to the k value. Harish Naik T and Suresha M [9] proposed a technique using color features to classify raw Arecanut with husk into various categories. In this paper they used the HSV, RGB and YCbCr color spaces of Arecanut at the feature extraction stage, and kNN and SVM classifiers were used for classification. The outcome of the research work is that, compared to other color models, the HSV color model gives a good success rate. Kuo-Yi Huang [10] proposed a technique to classify Arecanut into 3 major categories (Excellent, Good and Bad). In this work the detection line (DL) method was used for segmentation of defective Arecanuts with diseases or insects. Feature extraction was done using six geometric features, namely the area, compactness, principal axis length, axis number, secondary axis length and perimeter, three color features, that is, the mean gray level of an Arecanut image on the R, G, and B bands, and the defect area. The back-propagation neural network classifier was then used to sort the quality of the Arecanut. The presented methodology gives an accuracy of 90.9%. Siddesha S, S K Niranjan and V N Manjunath Aradya [11] proposed a comparison of color segmentation techniques for crop bunches of Arecanut. In their work they mainly focused on exploring different color segmentation techniques such as thresholding, watershed segmentation, K-means clustering, Fast Fuzzy C-Means clustering (FFCM), Fuzzy C-Means (FCM), and Maximum Similarity based Region Merging (MSRM). The evaluation was done on different Arecanut image datasets based on the segmentation results. Siddesha S, S K Niranjan and V N Manjunath Aradya [12] proposed texture based grading of Arecanut, in which different texture features are extracted from Arecanut using Local Binary Pattern (LBP), Wavelet, Gabor, Gray Level Co-occurrence Matrix (GLCM) and Gray Level Difference Matrix (GLDM) features. The Nearest Neighbor (NN) classifier was used for classification. To demonstrate the proposed model's performance, the test was conducted using a dataset of 700 images belonging to 7 different classes. With the help of Gabor wavelet features they achieved a classification rate of 91.43%. From the above quoted works it is clear that not much work has been done and reported on Arecanut, especially the nuts which are cut into pieces typical of the Malnad region of Karnataka and Kerala. This motivated us to work on this problem, which has not been addressed so far.
Proposed Model
The different steps followed in the proposed block-wise LBP approach for Arecanut classification are given in Figure 2. It involves various steps like preprocessing, segmentation, feature extraction, classification and validation.
The different stages of proposed model are explained in following subsections.
Figure 2. Architecture of the proposed model
In the proposed methodology the samples are segmented using Otsu's thresholding technique and the necessary preprocessing is done. Classification and grading of processed Arecanut is usually done based on color, shape and texture. For classification and grading of Arecanut we considered different external features like color, shape and texture. Although color is a good feature descriptor, variation in color due to external factors can mislead about the actual quality of the Arecanut. Shape is another criterion. However, this criterion poses a challenge to the exactness of the system, as Arecanuts from different growing regions vary in their external shapes. Thus it is difficult to arrive at a common thumb rule for identifying the shape of the Arecanut. Exactitude in classification of the Arecanut is achievable with texture as the criterion, because Areca types differ significantly in texture. Interestingly, even though the Hasa and the Bette, typical of the Malnad region of Kerala and Karnataka, are very similar in texture, they can be differentiated by minute texture details.
Given that colour and shape are not appropriate features for grading of Areca, we have used texture for classification. In this work we explore the usage of LBP for texture description. LBP is very robust in identifying minute differences in texture patterns. As a next step, texture features are extracted in the form of Local Binary Pattern histograms. Initially, the LBP of the image is obtained as a whole, and then the Local Binary Pattern histogram of the image is extracted in segments using a variable number of blocks by changing the k value, and unknown samples are tested using a Support Vector Machine classifier.
Preprocessing
In this stage, we perform two different pre-processing tasks, namely image resizing and gray-scale conversion. In image resizing, we convert all Arecanut images of dimension M*N to m*n to maintain uniformity in the dimensions of the images. Because the dataset acquired at the image acquisition stage contains images of varied sizes, and for better accuracy it is always recommended to use uniform datasets, we have resized all the images to size 480*640. Then, we convert the RGB images into their equivalent gray-scale images, as this conversion helps in extracting the texture features from the images.
The different steps of preprocessing can be summarized as: original RGB images, followed by resized gray-scale images.
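As an illustration of the preprocessing stage described above, the sketch below resizes an image to 480*640 and converts it to gray scale. OpenCV is an assumed library choice (the paper does not name an implementation), and the file names are placeholders.

```python
# Preprocessing sketch: resize to a uniform size and convert to gray scale.
# OpenCV is an assumed library choice; "arecanut.jpg" is a placeholder file name.
import cv2

image_bgr = cv2.imread("arecanut.jpg")              # original RGB image (BGR order in OpenCV)
resized = cv2.resize(image_bgr, (640, 480))         # (width, height), i.e. 480*640 pixels
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)    # gray-scale conversion for texture features
cv2.imwrite("arecanut_gray.png", gray)
```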
SEGMENTATION
In this work, the gray-scale image is binarized using Otsu's thresholding method. The Otsu algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background. This threshold is determined by minimizing the intra-class intensity variance, or equivalently, by maximizing the inter-class variance. Otsu's thresholding method involves iterating through all possible threshold values and calculating a measure of spread for the pixel levels on each side of the threshold, i.e. the pixels that fall either in the foreground or in the background. The aim is to find the threshold value where the sum of the foreground and background spreads is at its minimum [13]. Connected component analysis is performed on the binary image to extract the contours, among which the dominant contour is considered to obtain the mask. The region of interest is computed by fitting a bounding rectangle to the extracted contour.
The different steps in the segmentation of images follow this sequence: binarization using Otsu's threshold, extraction of the dominant contour, and cropping of the bounding rectangle.
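A minimal sketch of this segmentation pipeline is given below. OpenCV is an assumed library choice, the file names are placeholders, and the OpenCV 4.x contour API is assumed.

```python
# Segmentation sketch: Otsu binarization, dominant contour, and bounding-box ROI.
# OpenCV is an assumed library choice; "arecanut_gray.png" is a placeholder file name.
import cv2

gray = cv2.imread("arecanut_gray.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the threshold that maximizes the inter-class variance.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Contour (connected component) analysis: keep the largest contour as the dominant one.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dominant = max(contours, key=cv2.contourArea)

# Fit a bounding rectangle to the dominant contour and crop the region of interest.
x, y, w, h = cv2.boundingRect(dominant)
roi = gray[y:y + h, x:x + w]
cv2.imwrite("arecanut_roi.png", roi)
```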
FEATURE EXTRACTION
In this step, we extract the texture feature, viz. the Local Binary Pattern, from the images of the Arecanut dataset. First, the Local Binary Pattern of the image is obtained as a whole. Then the LBP of the image is obtained in segments, using a variable number of blocks by changing the k value. The LBP is obtained from each block separately, and the corresponding LBP histograms are combined at the end.
Local Binary Pattern
The basic local binary pattern, originally proposed by Ojala et al. [12], was based on the assumption that texture has locally two complementary aspects, a pattern and its strength, with the aim of texture classification. The most predominant features of LBP are its invariance to monotonic gray-scale changes, its convenient multi-scale extension and its low computational complexity. The philosophy behind LBP is simple and well-structured: unify traditional structural and statistical methods. The Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and considers the result as a binary number. Each neighbor pixel is compared with the center pixel, and the ones whose intensities exceed the center pixel are marked as 1, otherwise as 0. In this way we get a simple circular point feature consisting of only binary bits. Typically the feature ring is treated as a row vector, and then, with a binomial weight assigned to each bit, the row vector is transformed into a decimal code for further use. LBP using circular neighborhoods and linearly interpolating the pixel values allows the choice of any radius, R, and any number of pixels in the neighborhood, P, to form an operator which can model large-scale structure. The corresponding operator is shown in equation (1),

$$LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^{p}, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (1)$$

where $g_c$ is the gray value of the central pixel and $g_p$ are the values of its neighbors. A descriptor for texture analysis is the histogram, h(i), of the local binary pattern, shown in equation (2); its advantage is that it is invariant to image translation,

$$h(i) = \sum_{x,y} \mathbf{1}\{LBP_{P,R}(x,y) = i\}, \qquad i = 0, 1, \dots, 2^{P}-1. \qquad (2)$$

In order to perform classification of Arecanut, each Arecanut image in the training and test sets is converted to a spatially enhanced histogram via the process described above. Then Support Vector Machine classification is performed on it.
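The following sketch computes the LBP code image and its histogram descriptor for a gray-scale image. scikit-image is an assumed library choice, and the (P, R) values are illustrative rather than the settings used in the paper.

```python
# LBP sketch: circular LBP (P neighbors, radius R) and its normalized histogram.
# scikit-image is an assumed library choice; the (P, R) values are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Compute the LBP code image and return its normalized histogram as a feature vector."""
    codes = local_binary_pattern(gray, P, R, method="default")   # per-pixel LBP codes
    n_bins = 2 ** P                                              # 256 patterns for P = 8
    hist, _ = np.histogram(codes.ravel(), bins=n_bins, range=(0, n_bins))
    return hist.astype(np.float64) / max(hist.sum(), 1)          # translation-invariant descriptor

# Example with a random "image"; in practice pass the segmented gray-scale ROI.
demo = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(lbp_histogram(demo).shape)   # (256,)
```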
Block Wise LBP
Certain image processing operations involve processing an image in sections, called blocks or neighborhoods, rather than processing the entire image at once. The basic idea is to break the input image into blocks or neighborhoods, apply the required function on each block or neighborhood, and then reassemble the results into an output image.
The proposed approach makes use of a global textural feature, viz., the Local Binary Pattern, for feature extraction. Initially, an image is divided into k blocks. Subsequently, the texture feature is extracted from each of the k blocks of the image. The k value is varied and fixed empirically. The experimentation is done for the whole image and also with 2, 4, 8, 16 and 32 blocks. The Local Binary Pattern for each block is obtained and tabulated for classification purposes. As shown in Figure 4, an image with matrix M*N is divided into blocks with an equal number of pixels, for a variable number of blocks. The LBP of each block is obtained and tabulated for the purpose of classification.
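A minimal sketch of the block-wise LBP feature extraction is given below. The exact block layout (horizontal strips versus a grid) is an assumption of this sketch, as is the use of scikit-image; only the idea of per-block histograms concatenated into one feature vector follows the description above.

```python
# Block-wise LBP sketch: split the image into k blocks, compute an LBP histogram per block,
# and concatenate them into one feature vector. The strip layout is an assumption.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    codes = local_binary_pattern(gray, P, R, method="default")
    hist, _ = np.histogram(codes.ravel(), bins=2 ** P, range=(0, 2 ** P))
    return hist / max(hist.sum(), 1)

def blockwise_lbp(gray, k=8):
    """Divide the image into k blocks (here: k horizontal strips) and concatenate LBP histograms."""
    bh = gray.shape[0] // k                               # block height; remainder rows are dropped
    return np.concatenate([lbp_histogram(gray[i * bh:(i + 1) * bh, :]) for i in range(k)])

demo = (np.random.rand(480, 640) * 255).astype(np.uint8)
print(blockwise_lbp(demo, k=8).shape)                     # (8 * 256,) = (2048,)
```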
Support Vector Machine Classifier
SVM is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems. A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In two-dimensional space this hyperplane is a line dividing the plane into two parts, with each class lying on either side. SVM solves the classification problem by trying to find an optimal separating hyperplane between classes. It depends on the training cases which are placed on the edge of the class descriptor, called support vectors; any other cases are discarded. The SVM algorithm seeks to maximize the margin around a hyperplane that separates a positive class from a negative class [14]. Given a training dataset with n samples (x1, y1), (x2, y2), ..., (xn, yn), where xi is a feature vector in a v-dimensional feature space and the labels yi ∈ {−1, 1} belong to either of two linearly separable classes C1 and C2, the SVM modeling algorithm geometrically finds an optimal hyperplane with the maximal margin to separate the two classes, which requires solving the optimization problem shown in equations (3) and (4).
Maximize

$$\sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2}\sum_{i,j=1}^{n} \alpha_i\,\alpha_j\,y_i\,y_j\,K(x_i, x_j) \qquad (3)$$

subject to

$$\sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C, \quad i = 1,\dots,n, \qquad (4)$$

where αi is the weight assigned to the training sample xi; if αi > 0, xi is called a support vector. C is a regularization parameter used to trade off the training accuracy against the model complexity so that superior generalization capability can be achieved. K is a kernel function used to measure the similarity between two samples. There are several kernel functions available, chosen based on the requirements; the most used are the linear, Gaussian radial basis function (RBF), multi-layer perceptron (MLP) and polynomial kernels of a given degree. These kernels work independently of the problem and can be used for both discrete and continuous data, and an SVM handles both two-class and multi-class problems effectively. Here we have used an SVM with a suitable kernel type and the multi-class OVR (one-vs-rest) method, which helps us to classify Areca images into four different classes. From Tables 2 to 7, it is observed that the classification of Arecanut images yields the best results with 8 blocks. The result gradually decreases as the number of blocks increases further. This is due to the large variation in the size of the blocks in the image that we consider for extraction of Local Binary Patterns.
Also, from the above tables, it is clearly observed that dividing an image into 8 blocks results in the maximum accuracy, precision, recall and F-measure. The graphical analysis of the results is given in Figure 5.
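To illustrate the classification and evaluation pipeline, the sketch below trains a one-vs-rest SVM and derives accuracy, precision, recall and F-measure from the confusion matrix. scikit-learn is an assumed library choice, the RBF kernel is an illustrative setting, and the feature matrix and labels are random placeholders rather than the Arecanut dataset.

```python
# Classification sketch: one-vs-rest SVM on block-wise LBP features, evaluated with
# accuracy / precision / recall / F-measure from the confusion matrix.
# X and y are random placeholders, not the Arecanut dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.random((200, 8 * 256))                 # 200 samples of 8-block LBP features
y = rng.integers(0, 4, size=200)               # 4 classes: Hasa, Bette, Gorabalu, Idi

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))
precision, recall, f1, _ = precision_recall_fscore_support(y_test, y_pred, average="macro", zero_division=0)
print(f"Precision={precision:.3f}  Recall={recall:.3f}  F-measure={f1:.3f}")
```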
DISCUSSION
With the proposed method we have achieved a success rate of around 95% with the k value set to 8. We tried the image as a whole with global LBP features, but the success rate was only around 70%. With the block-wise approach we get a detailed analysis of an image with much finer inner details. Here we have considered both local and global textural features with the technique of block-wise Local Binary Patterns. We achieved a success rate of only 95% because there are minor chances of error between class 1 and class 2, as both of them look alike.
When it comes to cross validation, it is recommended to use cross validation to fix the training parameters during the training process, which subsequently helps during the testing phase. Hence, cross validation is recommended when a parametric approach is adopted for training the model [15]. But in the case of a non-parametric approach it is not necessary to use the cross validation method for training the model [16]. In this regard, we have not used the cross validation method in this work for training the model.
CONCLUSION
In this paper, a block-wise approach for classifying Arecanut into 4 pre-defined classes is proposed. For classifying Areca images, a Local Binary Pattern histogram of every sample is obtained for a variable number of image blocks. Various combinations of training and test sets are also considered for image classification. Further, an SVM classifier is used for classification. The effectiveness of the proposed classification system is validated through well-known measures like accuracy, precision, recall and F-measure. Finally, the paper concludes with the understanding that the most promising classification results are obtained for the image with the number of blocks set to 8.
Figure 6. Illustration of different types of Arecanut considered for the work
Figure 7. The experimental setup used for the work to capture the datasets
Figure 8. Graph analysis for the results obtained | 5,476.8 | 2021-05-10T00:00:00.000 | [
"Computer Science"
] |
Comparative Chloroplast Genomes of Gynura Species: Sequence Variation, Genome Rearrangement and Divergence Studies
Some Gynura species have been reported to be natural anti-diabetic plants. The chloroplast genomes of four Gynura species were sequenced to support hybridizations aimed at improving agronomic traits. Only 4 genera of tribe Senecioneae have published chloroplast genomes in GenBank up to now. The internal relationships within the genus Gynura, and the relationship of the genus Gynura with other genera in tribe Senecioneae, need further research. Results: The chloroplast genomes of 4 Gynura species were sequenced, assembled and annotated. By comparison with 12 other Senecioneae species, the chloroplast genome features were analyzed in detail. Subsequently, differences in the microsatellites and repeat types within the tribe were found. By comparison, the IR expansion and contraction is conserved in the genera Gynura, Dendrosenecio and Ligularia. The region from 25,000 to 50,000 bp is relatively poorly conserved, but the 7 ndh genes in this region are under purifying selection with small changes in amino acids. The phylogenetic tree shows two major clades, consistent with the sequence divergence in the region from 25,000 to 50,000 bp. Based on the oldest Artemisia pollen fossil, the divergence times were estimated.
Background
Gynura is a genus of flowering plants in the tribe Senecioneae of family Asteraceae endemic to Asia, which contains 44 species in total [1]. Many species of the genus Gynura have been reported to have medicinal value to diabetes mellitus, such as G. procumbens, G. divaricata and G. medica. The aqueous extract from G. procumbens possessed a significant hypoglycemic effect in streptozotocin-induced diabetic rats [2] and it improved insulin sensitivity and suppressed hepatic gluconeogenesis in C57BL/KsJ-db/db mice [3].
Polysaccharide from G. divaricata could alleviate hyperglycemia by modulating the activities of intestinal disaccharidases in streptozotocin-induced diabetic rats [4], and G. divaricata-lyophilized powder was effectively hypoglycemic by activating insulin signaling and improving antioxidant capacity in mice with type 2 diabetes [5]. Phenolic compounds isolated from G. medica inhibited yeast α-glucosidase in vitro [6].
Some plants in the genus Gynura have also been used as vegetables and tea in people's daily life in East and South Asia; thus, the genus Gynura is a natural treasure trove for treating the growing diabetes problem. Although Gynura plants are seemingly useful and harmless, some shortcomings need improvement, such as the medicinal effect on diabetes, potential toxicity and oral taste [7][8]. Large improvement relies on interspecific hybridizations to increase genetic diversity and introgression of valuable traits.
Phylogenetic relationship is useful information for the interspecific hybridizations, but the phylogenetic relationship of the species in genus Gynura is, as yet, unclear.
A whole chloroplast DNA ranges between 120 and 160 kb in size on a circular chromosome in most plants, comprising a Large Single Copy (LSC), a Small Single Copy (SSC), and two copies of an Inverted Repeat (IRa and IRb) [9][10]. In contrast to mitochondrial and nuclear genomes, chloroplast genomes are more conserved in terms of gene content, organization and structure [11]. The chloroplast genomes of angiosperms generally show slow substitution rates under adaptive evolution [12]. Considering its small size, conserved gene content and simple structure, the chloroplast genome is a valid and cost-effective tool for researching the phylogenetic relationships and evolution of plants in different taxa. Recently, the complete chloroplast genomes of the forage species of Urochloa [13], the marine crop Gracilaria firma [14], the epilithic sister genera Oresitrophe and Mukdenia [15], and the families Adoxaceae and Caprifoliaceae of Dipsacales [16] were sequenced to elucidate their diversity, phylogeny and evolution.
In the present study, we sequenced, assembled and annotated the chloroplast genomes of four Gynura species. Combined with other published chloroplast genomes of tribe Senecioneae, the structure features, repeat motifs, adaptive selection, phylogenetic relationships and divergence time were analyzed.
Results And Discussion
Chloroplast genome features of 16 Senecioneae species
In this study, we focused on the chloroplast genome features of tribe Senecioneae, which is the largest tribe of family Asteraceae. Although the tribe comprises about 500 genera and 3000 species [44], we found that only 4 genera of tribe Senecioneae had published chloroplast genomes in GenBank. Five species of genus Dendrosenecio, one species of genus Jacobaea, five species of genus Ligularia, one species of genus Pericallis and four species of genus Gynura were used to find their similarities and differences. The whole sequence length ranges from 150,551 bp (Dendrosenecio brassiciformis) to 151,267 bp (Pericallis hybrida). With the typical quadripartite structure of most land plants, the chloroplast genome has one Large Single Copy (LSC), one Small Single Copy (SSC), and two Inverted Repeats (IRa and IRb) (Fig 1). The LSC length ranges from 82,816 bp (Jacobaea vulgaris) to 83,458 bp (Dendrosenecio cheranganiensis), the SSC length ranges from 17,749 bp (D. brassiciformis) to 18,331 bp (P. hybrida), and the IR lengths both range from 24,688 bp (D. brassiciformis) to 24,845 bp (P. hybrida) (Table 1). Changes in each region are not consistent with changes in the whole chloroplast genome: J. vulgaris has the shortest chloroplast genome in length, but its SSR region is longer than in the 4 Gynura species. In addition, there are 95 coding genes in the chloroplast genome of P. hybrida and 87 coding genes in J. vulgaris. GC content varies between 37.2% and 37.5%. Only the rRNA number is conserved in the chloroplast genomes of tribe Senecioneae, which is the same as in the families Adoxaceae and Caprifoliaceae [16] but different from the genera Oresitrophe and Mukdenia [15].
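As an aside, basic per-genome statistics of this kind can be pulled from a GenBank record with a few lines of Biopython; the file name below is a placeholder and the counts depend on the record's annotation.

```python
# Sketch: summarize a chloroplast GenBank record (length, GC content, gene and
# rRNA counts) with Biopython; the input file name is hypothetical.
from Bio import SeqIO

rec = SeqIO.read("gynura_bicolor_chloroplast.gb", "genbank")
seq = str(rec.seq).upper()
gc = (seq.count("G") + seq.count("C")) / len(seq)
n_genes = sum(1 for f in rec.features if f.type == "gene")
n_rrna = sum(1 for f in rec.features if f.type == "rRNA")
print(f"{rec.id}: {len(seq):,} bp, GC {gc:.1%}, {n_genes} genes, {n_rrna} rRNAs")
```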
Microsatellites and Repeats type
The number of microsatellites with mono-, di- and tri-nucleotide repeat motifs varies within the tribe. D. brassiciformis, J. vulgaris and L. hodgsonii do not have tri-nucleotide repeat motifs, while the four Gynura species have 4 to 5 tri-nucleotide repeat motifs. The number of mono-nucleotide repeat motifs is 28 to 38, accounting for the largest proportion (Fig 2a).
The unit size of microsatellites is significantly different in the four Urochloa species [13], which have tetra-nucleotide repeat motifs and in which the tri-nucleotide motifs form the largest proportion. The total number of repeat types is consistent with that in the four Gynura species, but the number of each repeat type is different. Palindromic repeats are the most abundant and complement repeats the second most abundant in the 16 Senecioneae species (Fig 2b).
Compared with the Oresitrophe species [15], the Senecioneae species have 5 to 12 reverse repeats, whereas the Oresitrophe species have no reverse repeats. In addition, the numbers of forward and palindromic repeats are similar in the Oresitrophe species.
Contraction and Expansion of Inverted Repeats
The chloroplast genome is highly conserved in land plants, but IR expansion and contraction lead to different genome sizes in different plants [41]. Among the 16 Senecioneae species, the region from 25,000 to 50,000 bp is relatively poorly conserved, notably in L. hodgsonii. In that region, D. cheranganiensis is close to P. hybrida, but differs from the four Gynura species and L. hodgsonii (Fig 4B). That region contains 12 genes in total, 7 of which encode subunits of the NAD(P)H dehydrogenase (NDH) complex. The function of the NAD(P)H dehydrogenase (NDH) complex is well known in photosystem I (PSI) cyclic electron flow (CEF) and chlororespiration [42][43], so the substitution of the ndh genes was further studied. The ratio of non-synonymous (dN) to synonymous (dS) substitution rates was calculated with the PAML program. A ratio > 1 indicates positive selection, a ratio < 1 indicates purifying selection, and a ratio = 1 indicates neutral evolution. All dN/dS ratios of the 7 genes are below 1, indicating that they are under purifying selection with little amino acid change (Table 2). This means the functions of the 7 ndh genes are conserved during evolution, even though they are not located in a conserved region.
Phylogenetic Relationships
The sequence alignment of the 16 Senecioneae species was used to construct the ML (Fig 5) and BI (Fig S2) trees. In the ML tree, two major clades were constructed with 100% bootstrap support. One clade includes the genera Gynura and Ligularia, and the other includes the genera Dendrosenecio, Pericallis and Jacobaea. Within the genus Gynura, G. bicolor was the first to diverge, followed by G. divaricata, and finally G. formosana and G. pseudochina. The divergence into two clades is consistent with the sequence divergence in the 25,000 to 50,000 bp region. The earlier systematic phylogenies of tribe Senecioneae based on the ITS region (nuclear) and plastid fragment sequences show significant differences from this phylogenetic tree [44]. In the earlier phylogenetic tree, the genus Ligularia belongs to the Tussilagininae grade, which diverged earlier than the other genera. The sequence between the four Gynura species and the five Ligularia species is relatively conserved, and the Pi values of most sequence positions are below 0.1 (Fig S3), significantly lower than in the 16-species alignment. From the perspective of whole chloroplast genomes, the genus Ligularia is close to the genus Gynura.
Divergence Time Estimation
For the divergence time estimation of the 16 Senecioneae species, Artemisia gmelinii and Chrysanthemum boreale (tribe Anthemideae) were selected as the outgroup because of the oldest Artemisia fossil pollen record [38][39]. The divergence times of the 16 Senecioneae species were estimated with the BEAST2.0 program (Fig 6). The divergence of these genera into clades is the same as in the ML tree. The two major clades were estimated to have diverged at 37.4 mya (late Eocene). Both Gynura and Ligularia differentiated at 5.8 mya (late Miocene).
Dendrosenecio and Pericallis also differentiated at 5.8 mya. The divergence time of tribes Senecioneae and Anthemideae was 51.39 mya (early Eocene), a result consistent with a previous study on the evolution and phylogeny of family Asteroideae based on plastid fragment sequences [39]. The traditional view is that the genus Gynura diverged in the Old World after the Atlantic opening; at that time, the Senecioid species were transferred to South America and the divergence began [45]. The divergence time of the Gynura species estimated here is about 0.3 mya, much earlier than the traditional view. The divergence of the genus Gynura could not have started only hundreds or thousands of years ago [45], and the divergence times estimated by the BEAST program fall in the same period as those of other genera of land plants [13][14][15][16]. The Gynura bicolor, G. divaricata, G. formosana and G. pseudochina plants were grown in a greenhouse with normal sunlight and temperature. DNA was extracted from their fresh leaves by the CTAB method [17], and DNA degradation and contamination were monitored on 1% agarose gels. About 1.5 μg of DNA was fragmented by sonication to a size of 350 bp; the DNA fragments were then end-polished, A-tailed, and ligated with the full-length adapter for Illumina sequencing, with further PCR amplification. After purification of the PCR products (AMPure XP system), the libraries were analyzed for size distribution on an Agilent 2100 Bioanalyzer and quantified by real-time PCR.
Conclusion
The libraries constructed above were sequenced on the Illumina HiSeq X Ten platform, and 150 bp paired-end reads (PE150) were generated with an insert size of around 350 bp. Quality control (QC) removed reads with ≥10% unidentified nucleotides (N), with >50% of bases having Phred quality < 5, or with >10 nt aligned to the adapter (allowing ≤10% mismatches).
The Perl script NOVOPlasty 2.7.2 [18] was used to assemble the chloroplast genome sequence with a 50 K-mer. The chloroplast genome sequence of Dendrosenecio cheranganiensis (tribe Senecioneae) was selected as the reference genome. The chloroplast genomes of the other family Asteraceae plants used in the study were downloaded from GenBank.
Repeat structure analysis
A microsatellite region is a tract of repetitive DNA in which certain DNA motifs (ranging in length from 1 to 6 or more base pairs) are repeated, typically 5-50 times [24][25]. The Perl script MIcroSAtellite identification tool (MISA, http://pgrc.ipk-gatersleben.de/misa/misa.html) was used to find the microsatellite regions of the chloroplast genomes. Considering the features of plant chloroplasts, the motif unit size was set to 1-6, and the minimum number of repeats for each unit size was set to 1-10, 2-6, 3-5, 4-5, 5-5, 6-5 (unit size-minimum repeats). Forward, reverse, complement and palindromic repeat types were detected with the online tool REPuter [26], with the Hamming distance set to 1 and the minimum repeat size set to 30 bp.
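For illustration only (the actual search used the MISA Perl script cited above), a simple regular-expression scan with the same unit-size/minimum-repeat thresholds might look like this:

```python
# Illustrative SSR scan with the unit-size / minimum-repeat thresholds quoted
# in the text (1-10, 2-6, 3-5, 4-5, 5-5, 6-5); `seq` is an uppercase DNA string.
import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    hits = []
    for unit_len, min_rep in MIN_REPEATS.items():
        # a motif of `unit_len` bases repeated at least `min_rep` times in tandem
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_rep - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start() + 1, m.end(), m.group(1), len(m.group(0)) // unit_len))
    return hits  # (1-based start, end, motif, tandem repeat count)

for start, end, motif, count in find_ssrs("ATATATATATATGGGGGGGGGGCT"):
    print(f"{motif} x{count} at {start}-{end}")
```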
Chloroplast genome analysis
All the chloroplast genome sequences were aligned with MAFFT 7.427 [27] using the FFT-NS-2 module. Alignments of the 7 selected genome sequences were visualized with mVISTA [28]. DNA polymorphism (nucleotide diversity) was calculated with DnaSP v5 [29] based on the alignment results.
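A rough, simplified re-implementation of the sliding-window nucleotide diversity (Pi) that DnaSP reports could be sketched as below; it ignores DnaSP's gap handling and sample-size corrections and is meant only to convey the computation.

```python
# Rough sliding-window nucleotide diversity (pi): average pairwise differences
# per valid site; `seqs` are equal-length aligned DNA strings (toy data below).
import numpy as np
from itertools import combinations

def sliding_pi(seqs, window=600, step=200):
    aln = np.array([list(s.upper()) for s in seqs])
    results = []
    for start in range(0, aln.shape[1] - window + 1, step):
        win = aln[:, start:start + window]
        diffs, sites = 0, 0
        for i, j in combinations(range(len(seqs)), 2):
            valid = np.isin(win[i], list("ACGT")) & np.isin(win[j], list("ACGT"))
            sites += valid.sum()
            diffs += np.sum((win[i] != win[j]) & valid)
        results.append((start, diffs / sites if sites else 0.0))
    return results  # list of (window start, pi estimate)

print(sliding_pi(["ACGTACGTAC", "ACGTACGAAC", "ACGTTCGTAC"], window=10, step=10))
```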
The molecular evolutionary rates (ω) between orthologous genes were estimated by calculating the ratio of non-synonymous (dN) to synonymous (dS) substitution rates. The coding gene sequences of the selected region were extracted using Artemis [30]. The gene sequences of each species were aligned with Clustal X [31] using default parameters, and the alignment results (dnd format) were converted to PAML format by DAMBE [32] for the subsequent analysis. The dN/dS value was calculated with the codeml module (seqtype=1, model=0, NSsites=1,7,8) in PAML 4.9i [33]. Significant differences were assessed by likelihood-ratio test.
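Assuming the PAML codeml binary is installed, a one-ratio dN/dS estimate of this kind can also be driven from Python through Biopython's wrapper; the alignment and tree file names below are placeholders, and the result parsing follows Biopython's documented output structure.

```python
# Sketch of a one-ratio (model=0) dN/dS run for one ndh gene via Biopython's
# codeml wrapper; file names are hypothetical and PAML must be installed.
import os
from Bio.Phylo.PAML import codeml

os.makedirs("codeml_run", exist_ok=True)
cml = codeml.Codeml(
    alignment="ndhA.phy",        # codon alignment (placeholder)
    tree="senecioneae.nwk",      # species tree (placeholder)
    out_file="ndhA_codeml.out",
    working_dir="codeml_run",
)
cml.set_options(seqtype=1, model=0, NSsites=[0], fix_omega=0, omega=0.5)

results = cml.run()
omega = results["NSsites"][0]["parameters"]["omega"]
print(f"dN/dS for ndhA: {omega:.3f}",
      "(purifying selection)" if omega < 1 else "(neutral/positive)")
```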
Phylogenetic analysis
The 16 chloroplast genome sequences of tribe Senecioneae (family Asteraceae) were aligned with MAFFT, and the result was used to analyze the phylogenetic relationships.
Divergence time estimation
The divergence times of the 16 species were estimated with BEAST2 [36]. The oldest Artemisia fossil pollen has been recorded from the Eocene-Oligocene boundary [37][38]. The Asteraceae plants Artemisia gmelinii and Chrysanthemum boreale were selected as the outgroup, and the Artemisia-Chrysanthemum node was constrained using a lognormal distribution with an offset of 31 Ma and a mean and standard deviation of 0.5 [39]. The HKY nucleotide substitution model and the Yule tree prior were selected with a strict clock. Each MCMC run had a chain length of 100,000,000, with sampling every 10,000 steps. Tracer [40] was used to read the ESS and trace values of the logged statistics to assess the results. The divergence times were then obtained with the TreeAnnotator program of BEAST2.
Availability of data and materials
All the data and materials are available from the corresponding authors upon request.
Supplementary Files
This is a list of supplementary files associated with the primary manuscript.
| 3,158.4 | 2019-06-03T00:00:00.000 | ["Biology", "Environmental Science"] |
HARQ Assisted Short-Packet Communications for Cooperative Networks Over Nakagami-m Fading Channels
Considering the coverage and short-packet communication requirements, this paper proposes hybrid automatic repeat request (HARQ) assisted short-packet communications for cooperative Internet of Things (IoT) networks, where the signals are transmitted through short packets. Since short-packet communications bring a higher packet error rate than long-packet communications, HARQ is adopted to improve transmission reliability by allowing retransmission of the signals that cannot be detected correctly. To lighten the burden on the transmitter, the retransmitted signals are only sent from the relay. According to the detection capability of the user, a selection combining (SC) scheme and a maximum ratio combining (MRC) scheme are designed to detect the multiple transmitted signals. Besides, a set of closed-form expressions for the average packet error rate (APER) and effective throughput (ET) is derived over Nakagami-$m$ channels for the proposed SC and MRC schemes. To provide more insights into system performance, asymptotic results are also provided. Simulation results demonstrate that HARQ significantly improves reliability and transmission efficiency. Also, the MRC scheme outperforms the SC scheme in terms of the APER and ET metrics; however, when the transmit power at the relay is high enough or the packet length is large enough, they achieve nearly the same performance.
I. INTRODUCTION
With the development of the 5G and beyond-5G networks, the Internet of Things (IoT) is coming to provide ubiquitous connectivity for smart offices, smart industry, intelligent transportation, and so on [1]- [3]. To provide this ubiquitous connectivity, enhancing coverage becomes an important task in the development of IoT [4]. In this context, cooperative IoT utilizing relays for cooperative transmission emerges as a promising technique for enhancing coverage and realizing ubiquitous connectivity [5]. On the other hand, some critical IoT application scenarios, such as intelligent transportation systems, demand ultra-reliable and low-latency communications (URLLC), where the transmission delay is the key to realizing real-time control [6]. The transmission delay is mainly caused by long packets. Supporting short-packet communications may reduce the delay significantly. Besides, in sensor networks, the sensing data usually contain several bits and only need a short packet for transmission. Thus, short-packet communications are an inherent characteristic of IoT.
Recently, cooperative technology has been applied to IoT for enhancing coverage [7]- [13]. Chen et al. [9] considered an IoT network where an untrusted relay is utilized for improving transmission reliability. For avoiding traffic congestion and huge energy consumption, a relay-assisted device-to-device (D2D) system is employed in cellular IoT and the network resources are optimally allocated [10]. Zhang et al. [11] considered many-to-many-to-one Internet of Things (IoT) networks and designed joint thing and relay selection criteria for balancing performance and fairness. Considering the limited power in IoT, an energy harvesting IoT system was proposed in [12], where relays are utilized to help the base station transmit signals to IoT devices. Besides, Shabbir et al. [13] considered a full-duplex relay with energy harvesting capability and proposed amplify-and-forward (AF) and decode-and-forward (DF) relaying protocols for IoT networks. The above analysis shows that relays play an important role in IoT for improving network capability. Short-packet communications widely exist in multiple IoT applications, and considering finite packet-length coding is realistic for real communication systems [14]- [16]. Chen et al. [17] proposed a wireless-powered IoT network with finite packet-length coding, where the effective throughput (ET) and effective amount of information are adopted as performance metrics and maximized by optimizing the delay and packet error rate. An enhanced random access scheme with short-packet communications was designed to support cellular IoT communications, and the results showed that the proposed scheme can meet the reliability and low-latency requirements [18]. Considering the constrained resources in IoT, Wu et al. [19] adopted a short packet for communications and designed an effective pilot-less one-shot transmission scheme.
According to the state-of-art of cooperative IoT and shortpacket communications, the finite packet coding will bring a higher packet error rate and reduce reception reliability. For enhancing transmission efficiency, a cooperative IoT protocol is proposed in [20], where the average throughput is optimized by designing effective optimal and suboptimal maximization methods. In a simultaneous wireless information and power transfer (SWIPT) assisted cooperative network, the transmission reliability performance is optimized by the optimal selection of SWIPT parameters [21]. Reference [22] investigated the blocklength-limited performance of a relaying system where the introduced weight factor is optimized to improve throughput and effective capacity. The authors in [23] studied the outage probability and the throughput of amplify-and-forward relay networks with finite block-length codes. The source and the destination nodes in the networks are assumed to have constrained energy supply, and the time switching relaying and the power splitting relaying protocols are considered for energy and information transfer.
On the one hand, hybrid automatic repeat request (HARQ) is adopted in multiuser multiple-input-multiple-output (MIMO) systems and a maximal ratio combining (MRC) scheme is employed [24]. On the other hand, HARQ is also recruited for improving transmission reliability in short-packet communications [25]- [27]. An incremental redundancy HARQ scheme is adopted in [25] to reduce the outage probability and improve throughput. Considering the outage performance in an energy-constrained system, Makki et al. [26] proposed a wireless energy and information transfer scheme with retransmissions; the results show that the retransmission protocol reduces the outage probability. For reducing the delay brought by HARQ, a fast HARQ protocol is designed in [27], and the delay is reduced by 27, 42, 52 and 60% when 2, 3, 4 and 5 transmission rounds are adopted, respectively.
In this paper, we further investigate the performance of short-packet communications. Different from the schemes in references [20]- [23], we adopt an HARQ technique to improve the reliability performance of short-packet communications. Also, different from [25]- [27], cooperative techniques are employed in this paper to enhance coverage. Although the performance of cooperative networks with HARQ in infinite-packet communications has been investigated [28], [29], there are few studies on the combination of cooperative techniques and HARQ in short-packet communications, which may satisfy the low-complexity and large-coverage requirements of IoT. The main contributions are summarized as follows.
• We consider a cooperative IoT network where a relay assists in transmitting the signals conveyed in short packets. To mitigate the packet errors caused by short-packet communications, an HARQ scheme is employed, where the signal is retransmitted only from the relay, to lighten the burden on the transmitter, when the user cannot detect its message correctly.
• According to the characteristics of HARQ, the selection combining (SC) scheme and MRC scheme are designed.
In the SC scheme, the user abandons the received signal when it is decoded with error; while in the MRC scheme, the user saves every transmission and combines them by MRC for detection.
• A generalized Nakagami-m fading channel is considered in the networks, and the closed-form expressions for the average packet error rate (APER) and ET are derived for both SC and MRC schemes. Besides, the asymptotic results of the APER and ET when the transmit power at the transmitter and relay tends to infinity are analyzed.
• Simulations are conducted and the results show that HARQ is beneficial for improving reliability and transmission efficiency; in addition, the MRC scheme outperforms the SC scheme in terms of APER and ET; furthermore, the packet length can be optimized for achieving higher ET.
The rest of the paper is arranged as follows. The network model and the transmission schemes are introduced in Section II. The closed-form expressions for the APER and ET in the SC and MRC schemes are derived in Section III and Section IV, respectively. Next, in Section V, simulations are conducted to verify the analytical results and some insights into the system characteristics are provided. Finally, Section VI concludes the paper.
II. NETWORK MODEL AND TRANSMISSION SCHEMES
A. NETWORK MODEL
As shown in Fig. 1, we consider a cooperative network where a decode-and-forward relay denoted as R cooperates with the transmitter T to provide reliable short-packet communications for the user denoted as U. The transmitter conveys the short packet (e.g., packet size $\leq 10^3$ bytes [30]) to the user. We assume there is no direct channel between the transmitter and the user [31] (Fig. 1 illustrates the network model). Therefore, reliable communications have to depend upon a relay. Due to the short-packet communications, the applications of effective channel coding schemes, such as low-density parity-check codes and turbo codes, are limited. Thus, reliable communications cannot always be guaranteed even in the high transmit power region. For ensuring the reliable communication of the user, HARQ is recruited to retransmit the signal that failed to be decoded by the user. When the user decodes the intended message with error, it feeds back a NACK to request retransmission; otherwise, it feeds back an ACK. To reduce the burden of the transmitter, the retransmitted signals are only sent by the relay. When the relay receives an ACK or the number of transmission rounds reaches the maximum allowed number L, the relay stops retransmission [32], [33].
Block Nakagami-m fading is adopted to model the channels in the wireless network; this is a generalized model that can be reduced to Rayleigh fading and Rician fading by choosing different fading parameters [34]. The channel between the transmitter and the relay is denoted as $h_R$, with mean channel power $\Omega_R$ and fading parameter $m_R$. In the $j$th transmission, the channel between the relay and the user is denoted as $h_U^j$, with mean channel power $\Omega_U$ and fading parameter $m_U$. All the nodes in the network are assumed to be equipped with a single antenna and work in half-duplex mode. The AWGN power at the receivers is $\sigma^2$. The specific transmission scheme is detailed as follows.
The transmitter sends a signal $x_U$ with $B$ information bits over a short packet of length $N$ to the relay, and the code rate is $R_T = B/N$. The received signal-to-noise ratio (SNR) at the relay is $P_T |h_R|^2/\sigma^2$ (1), where $P_T$ is the transmit power of the transmitter.
In short-packet communications, for the fixed code rate $R_T$, the resulting packet error rate (PER) is approximated by the finite-blocklength expression in (2) [35], where $Q^{-1}(\cdot)$ is the inverse of the Gaussian Q-function. Because (2) provides a tight approximation, in the following analysis the ''≈'' is replaced by ''=''.
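As a numerical illustration, the normal approximation widely used for finite-blocklength PER analyses [35] can be evaluated as in the sketch below; it may differ in minor details from the exact form of (2), and the SNR, packet-length and payload values are arbitrary.

```python
# Finite-blocklength normal approximation for the packet error rate:
# eps(gamma) ~= Q((C(gamma) - B/N) * sqrt(N / V(gamma))), with
# C = log2(1+gamma) and V = (1 - (1+gamma)^-2) * (log2 e)^2.
import numpy as np
from scipy.stats import norm

def per_normal_approx(gamma, N, B):
    """PER for SNR gamma, packet length N (channel uses), B information bits."""
    C = np.log2(1.0 + gamma)
    V = (1.0 - (1.0 + gamma) ** -2) * np.log2(np.e) ** 2
    return norm.sf((C - B / N) * np.sqrt(N / V))      # Q(x) = norm.sf(x)

# Example: 256 information bits over a 512-use packet at 10 dB SNR (assumed values)
print(per_normal_approx(10 ** (10 / 10), N=512, B=256))
```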
When the relay fails to decode $x_U$, a packet error is inevitable; otherwise, the relay sends the signal to the user. When the user decodes $x_U$ with error, the relay retransmits the signal until the user receives the message correctly or the number of transmission rounds reaches L. Two schemes can be adopted to utilize the retransmitted signals, which are detailed as follows.
B. SC SCHEME
In the SC scheme, the user abandons the received signal when it is decoded with errors, and a NACK is fed back to the relay. Only the signal decoded without error is selected. In the $j$th transmission, the SNR received at the user is $P_R |h_U^j|^2/\sigma^2$ (3), where $P_R$ is the transmit power of the relay. Thus, the PER in the $j$th transmission follows by substituting this SNR into (2).
C. MRC SCHEME
In the MRC scheme, the user saves the received signals when they are decoded with errors and combines the retransmitted signals with MRC. After $l$ transmissions, the SNR received at the user is the sum $\sum_{j=1}^{l} P_R |h_U^j|^2/\sigma^2$ (5). Thus, the PER after $l$ transmissions follows by substituting this combined SNR into (2). From (3) and (5), we can observe that the SNR achieved in the MRC scheme is higher than that in the SC scheme because every transmission is utilized by MRC. Thus, the MRC scheme may achieve better performance than the SC scheme. However, to provide an efficient combination by MRC, the user requires perfect CSI, which results in extra communication resource consumption.
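Before turning to the closed-form analysis, the SC/MRC difference can be checked with a small Monte Carlo sketch: Nakagami-m power gains are drawn as Gamma variates, each decoding attempt is sampled from the normal-approximation PER above, and the relay error is included. All parameter values here are assumed for illustration and are not those of Table 1.

```python
# Monte Carlo sketch of the APER under HARQ with SC and MRC combining over
# Nakagami-m links; |h|^2 ~ Gamma(shape=m, scale=Omega/m). Parameters are assumed.
import numpy as np
from scipy.stats import norm

def per(gamma, N, B):
    C = np.log2(1 + gamma)
    V = (1 - (1 + gamma) ** -2) * np.log2(np.e) ** 2
    return norm.sf((C - B / N) * np.sqrt(N / V))

def apers(m_R=2, m_U=2, Om_R=1.0, Om_U=1.0, rho_T=10.0, rho_R=2.5,
          N=256, B=256, L=3, trials=200_000):
    rng = np.random.default_rng(0)
    g_R = rho_T * rng.gamma(m_R, Om_R / m_R, trials)         # T -> R link SNR
    g_U = rho_R * rng.gamma(m_U, Om_U / m_U, (trials, L))    # R -> U, per round
    err_R = rng.random(trials) < per(g_R, N, B)               # relay decoding error
    # SC: the packet is lost only if every individual round fails
    err_sc = np.all(rng.random((trials, L)) < per(g_U, N, B), axis=1)
    # MRC: the user combines all L rounds, so the effective SNR is the sum
    err_mrc = rng.random(trials) < per(g_U.sum(axis=1), N, B)
    return np.mean(err_R | err_sc), np.mean(err_R | err_mrc)

print("APER  SC: %.4f   MRC: %.4f" % apers())
```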
III. PERFORMANCE ANALYSIS OF THE SC SCHEME
In this section, we adopt APER and ET as performance metrics for evaluating the reliability and efficiency of the proposed system. The closed-form expressions for APER and ET are derived in the SC scheme.
A. AVERAGE PACKET ERROR RATE IN THE SC SCHEME
When the user decodes its intended message with error, it will request retransmission until the user correctly decodes $x_U$ or the number of transmission rounds reaches L. A packet error is inevitable when the relay cannot decode $x_U$ correctly or when the user still cannot obtain its intended message after L transmissions. Thus, the APER in the SC scheme is given in terms of the events $\mathcal{E}_R$ and $\mathcal{E}_S^j$, which denote a packet error occurring at the relay and at the user in the $j$th transmission, respectively.
Since the channels in different transmission rounds are independent, the APER can be further expressed in terms of expectations, where $E[x]$ denotes the expectation operation. Before deriving the closed-form expression for $A_{per}^S$, we first derive the closed-form expression for $E[\varepsilon_R]$, which is given in Lemma 1 as (9); the proof is provided in Appendix A. From (2) and (4), we can find that $\varepsilon_R$ and $\varepsilon_S^j$ have similar expressions. Thus, following the steps used in deriving $E[\varepsilon_R]$, we can obtain the closed-form expression for $E[\varepsilon_S^j]$ by replacing $m_R$ with $m_U$, $\Omega_R$ with $\Omega_U$, and $\rho_T$ with $\rho_R$ in (9); the result is given in (10). From (9) and (10), and because the channels in different transmission rounds are independent and identically distributed (i.i.d.), $E[\varepsilon_S^1] = \cdots = E[\varepsilon_S^L]$. The closed-form expression for the APER in the SC scheme is then given in (11), where $E[\varepsilon_R]$ and $E[\varepsilon_S^j]$ are given in (9) and (10), respectively.
From (11), we can observe that increasing L will reduce the APER, which shows that HARQ is beneficial to reliable transmission. Besides, when L tends to infinity, $A_{per}^S = E[\varepsilon_R]$, which demonstrates that the reliability performance is determined by the detection ability of the relay.
B. EFFECTIVE THROUGHPUT IN THE SC SCHEME
Although HARQ improves transmission reliability, transmitting the same message multiple times will degrade the efficiency. Effective throughput is adopted to evaluate the transmission efficiency of the HARQ system. Different from [36], and motivated by the total probability formula, the ET is defined as in (12) [33], where $\mathcal{S}_l$ denotes the event that the user correctly decodes its intended message in the $l$th transmission. Equation (12) expresses the sum of the average transmission rate of each successful transmission process multiplied by its corresponding probability.
Because the channels in different transmission rounds are i.i.d., (12) can be further expressed as (13), where $E[\varepsilon_R]$ and $E[\varepsilon_S^j]$ are given in (9) and (10), respectively.
From (13), we can deduce that increasing L will increase the ET because every term in the sum is positive. This demonstrates that HARQ benefits transmission efficiency. However, for a large $l$ the corresponding term in the sum is small, which shows that the gain of HARQ in improving transmission efficiency is limited.
IV. PERFORMANCE ANALYSIS OF THE MRC SCHEME
Similar to the SC scheme, APER and ET are also adopted as the metrics in the MRC scheme for evaluating the reliability and efficiency performance, respectively.
A. AVERAGE PACKET ERROR RATE IN THE MRC SCHEME
In the MRC scheme, when the user cannot decode its intended message correctly, a NACK will be fed back to request retransmission. The user will combine the original and every retransmitted signal by MRC. If, after L transmissions, the user still cannot obtain its message correctly, the packet is received with errors. The APER in the MRC scheme is given by (14),
where $\overline{\mathcal{M}}_L$ represents the event that, after L transmissions, the user still cannot obtain its message correctly in the MRC scheme.
Similar to the SC scheme, (14) can be further expressed as (15), where $E[\varepsilon_M^L]$ is the average PER after L transmissions.
Before deriving the closed-form expression for $A_{per}^M$, we first derive the closed-form expression for $E[\varepsilon_M^l]$, which is given in the following lemma.
Lemma 2: The closed-form expression for $E[\varepsilon_M^l]$ is given in (16). Similar to the SC scheme, $E[\varepsilon_M^l]$ is a decreasing function of the transmit power $P_R$: when $P_R \to 0$, $E[\varepsilon_M^l] = 1$, and when $P_R \to \infty$, $E[\varepsilon_M^l] = 0$. Replacing $l$ by $L$ in (16), the closed-form expression for $E[\varepsilon_M^L]$ is obtained. Substituting $E[\varepsilon_R]$ and $E[\varepsilon_M^L]$ into (15), the closed-form expression for the APER in the MRC scheme is obtained.
From (16) we can find that $E[\varepsilon_M^l]$ decreases with the increase of $l$. According to (15), increasing $l$ will therefore reduce the APER in the MRC scheme. Similar to the SC scheme, HARQ is also beneficial to reliable transmission in the MRC scheme. In addition, when L tends to infinity, $E[\varepsilon_M^L]$ tends to zero and $A_{per}^M = E[\varepsilon_R]$. This demonstrates that the MRC and SC schemes achieve the same reliability performance, because in this case the reliability performance is determined only by the relay.
B. EFFECTIVE THROUGHPUT IN THE MRC SCHEME
Similar to the SC scheme, we adopt the ET in the MRC scheme to evaluate the transmission efficiency, which is expressed as in (17), where $\mathcal{M}_l$ represents the event that, after $l$ transmissions, the user decodes its intended message correctly.
Owing to the independence between transmissions, (17) can be further expressed as (18), where we define $E[\varepsilon_M^0] = 1$, and $E[\varepsilon_M^{l-1}]$ can be obtained by replacing $l$ with $l-1$ in (16).
Similar to the SC scheme, (18) also shows that increasing L will improve ET, which demonstrates that HARQ helps to improve transmission efficiency.
Remark 1: When $P_T \to \infty$, the APERs in (11) and (15) reduce to their asymptotic forms. Besides, when $P_T \to \infty$, according to (13) and (18), the ETs in the SC and MRC schemes degenerate accordingly. This shows that the APER and ET are then determined only by the reception performance of the user. On the other hand, according to (11), (13), (15), and (18), when $P_R \to \infty$ the SC and MRC schemes achieve the same reliability and transmission efficiency, because in that case the APER and ET are determined only by the reception performance of the relay without HARQ. Thus, both SC and MRC schemes obtain the same performance.
V. NUMERICAL RESULTS
In this section, numerical results are presented to verify our analysis and investigate the effects of the key parameters on the system performance. Unless otherwise stated, part of the simulation parameters are given in Table 1, where bit per channel use (BPCU) is used as the unit of the transmission rate.
FIGURE 2. The APER versus $P_R$ with $P_T$ = 15 dBm, in both SC and MRC schemes.
Fig. 2 plots the APER versus $P_R$ in both SC and MRC schemes for $P_T$ = 15 dBm and different maximum transmission rounds L. Firstly, we can find that the simulation points match the analytical curves well, which demonstrates the correctness of the derivations. Secondly, the APER drops dramatically with the increase of $P_R$ and finally tends to the asymptotic results. It shows that high transmit power at the relay benefits reliable transmission and that, when the power is high enough, the APER is determined by the reception performance of the relay. When L = 1, the proposed schemes degenerate to non-HARQ schemes and both SC and MRC schemes achieve the same reliability performance. Also, the APERs in both SC and MRC schemes decrease with the increase of L, which shows that HARQ is beneficial to reliable transmission. In particular, we can find that the MRC scheme outperforms the SC scheme because every transmission is utilized efficiently in the MRC scheme. Fig. 3 depicts the APER versus $P_T$ in the SC and MRC schemes for $P_R$ = 4 dBm and L = 3. We can observe from Fig. 3 that the APER decreases with the increase of the transmit power $P_T$, which demonstrates that high transmit power at the transmitter helps reliable transmission. However, the APER cannot keep dropping with the increase of $P_T$ and finally tends to a performance floor which is determined by the reception performance of the user. Fig. 3 also shows that the reliability performance achieved with the MRC scheme is better than that achieved with the SC scheme. Nevertheless, the performance gain of the MRC scheme is achieved at the cost of higher resource consumption. When the fading parameters increase from 1 to 3, the APER decreases in both SC and MRC schemes, which shows that more stable channels benefit reliable transmission. Fig. 4 illustrates the APER versus packet length N for $P_R$ = 4 dBm and $P_T$ = 8 dBm in both SC and MRC schemes. Different from Fig. 2 and Fig. 3, Fig. 4 shows that the APER curves keep dropping with the increase of packet length N in both SC and MRC schemes. This is because an increase of N decreases the transmit rate, resulting in improved reliability. However, increasing N will also increase the transmission delay, so N should be chosen reasonably, especially when the system requires URLLC. We can also find that the MRC scheme outperforms the SC scheme for different L, but they achieve the same performance when N is large enough. This can be explained as follows: when N is large, only one transmission can satisfy the communication requirement and both SC and MRC schemes degenerate to non-HARQ schemes; thus, they achieve the same APER. Ultimately, we can observe that increasing the maximum transmission round L reduces the APER, which also shows that HARQ benefits reliable transmission. However, L should also be determined carefully, because a large L brings a high transmission delay. Fig. 5 plots the ET versus the transmit power $P_R$ for $P_T$ = 4 dBm, in both SC and MRC schemes. Firstly, we can observe from Fig. 5 that the ET increases rapidly with an increase of $P_R$. It demonstrates that high transmit power at the relay helps to improve transmission efficiency.
However, when $P_R$ is further increased, the ETs in both SC and MRC schemes tend to the asymptotic results, which are determined by the transmit power $P_T$ at the transmitter. Besides, when L = 1, the proposed schemes degenerate to the non-HARQ scheme and achieve the worst performance compared with the cases of L = 2 and L = 4, which shows that the proposed HARQ schemes benefit transmission efficiency. It also shows that the MRC scheme outperforms the SC scheme in terms of ET because the MRC scheme saves every transmission for detection. Fig. 6 depicts the ET versus $P_T$ for L = 4 and $P_R$ = 4 dBm in the SC and MRC schemes. As observed in Fig. 6, with an increase of $P_T$, the ET curves increase in both SC and MRC schemes and finally tend to their corresponding asymptotic results, which verifies the analytical results in Remark 1. Besides, we can find that the MRC scheme strictly outperforms the SC scheme for different $P_T$. However, the SC scheme consumes fewer communication resources and can be applied to multiple scenarios with constrained resources. Furthermore, in the high transmit power region, when $m_R$ increases from 1 to 3, i.e., the channel becomes more stable, the ET improves in both SC and MRC schemes. On the contrary, in the low transmit power region, the variation of the channels benefits transmission efficiency. It demonstrates that the stability of the channels can be utilized to improve transmission efficiency.
In Fig. 7, we plot the ET versus packet length N with $P_T$ = 8 dBm and $P_R$ = 4 dBm, in the SC and MRC schemes. Firstly, we can observe that the ET first increases and then decreases with the increase of N in both SC and MRC schemes. This is because an increase of N improves the transmission reliability, but further increasing N decreases the transmit rate. When N is small, reliability dominates the transmission efficiency, and increasing N improves reliability and increases the ET. When N is large enough, the transmission efficiency is dominated by the transmit rate, and increasing N decreases the transmit rate and deteriorates the ET. It also shows that the packet length N can be optimized for achieving higher ET. Besides, we can find that the ET is improved with the increase of the maximum transmission round L for different N, which shows that retransmission improves transmission efficiency. We also find that the MRC scheme outperforms the SC scheme for different N; however, the performance of the MRC scheme is achieved at the cost of high resource consumption. This demonstrates that the performance and the cost should be jointly considered when choosing a proper scheme.
VI. CONCLUSION
In this paper, we adopt HARQ for improving the short-packet communication performance of cooperative IoT networks. Both SC and MRC schemes are designed for detecting the multiple transmitted signals. We derived the closed-form expressions of the APER and ET in both SC and MRC schemes over Nakagami-m fading channels. Results show that HARQ improves the reliability and transmission efficiency when compared with the non-HARQ scheme, and increasing the maximum transmission round L will further improve system performance. Besides, the performance achieved in the MRC scheme is better than that in the SC scheme. However, the SC scheme consumes fewer communication resources and can be applied to multiple IoT applications. Furthermore, the fading parameters also have an important effect on system performance and when the channel becomes stable the system performance will be improved. Ultimately, the transmission efficiency can be improved by choosing a proper packet-length.
APPENDIX A PROOF OF LEMMA 1
Utilizing (2), $\varepsilon_R$ can be approximated as in (21) [25]. Using (1) and (21), the average block error rate is expressed as the expectation over the channel, where $f_{|h_R|^2}(\cdot)$ is the probability density function (PDF) of $|h_R|^2$ [37], [38].
YUCUI YANG received the M.S. degree from the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, in 2006. She is currently with the School of Physics and Electronic Electrical Engineering, Huaiyin Normal University. She is also with the Jiangsu Province Key Construction Laboratory of Modern Measurement Technology and Intelligent System. Her current research interests include cooperative communications, pattern recognition, and image retrieval.
YI SONG received the M.S. degree from the Nanjing University of Aeronautics and Astronautics, in 2011. He is currently pursuing the Ph.D. degree with the Institute of Communications Engineering, Army Engineering University of PLA. His research interests include millimeter-wave, nonorthogonal multiple access, physical-layer security, and cognitive radio.
FENGLIAN CAO received the M.S. degree from the School of Electronic Science and Engineering, Nanjing University, Nanjing, in 2006. She is currently with the School of Physics and Electronic Electrical Engineering, Huaiyin Normal University. Her current research interests include cooperative communications, non-orthogonal multiple access, and physical-layer security.
| 6,420.4 | 2020-01-01T00:00:00.000 | ["Computer Science", "Business"] |
High-channel-count 20 GHz passively mode locked quantum dot laser directly grown on Si with 4.1 Tbit/s transmission capacity
Low cost, small footprint, highly efficient and mass producible on-chip wavelength-division-multiplexing (WDM) light sources are key components in future silicon electronic and photonic integrated circuits (EPICs) which can fulfill the rapidly increasing bandwidth and lower energy per bit requirements. We present here, for the first time, a low noise high-channel-count 20 GHz passively mode-locked quantum dot laser grown on complementary metal-oxide-semiconductor compatible on-axis (001) silicon substrate. The laser demonstrates a wide mode-locking regime in the O-band. A record low timing jitter value of 82.7 fs (4 - 80 MHz) and a narrow RF 3-dB linewidth of 1.8 kHz are measured. The 3 dB optical bandwidth of the comb is 6.1 nm (containing 58 lines, with 80 lines within the 10 dB bandwidth). The integrated average relative intensity noise values of the whole spectrum and a single wavelength channel are - 152 dB/Hz and - 133 dB/Hz in the frequency range from 10 MHz to 10 GHz, respectively. Utilizing 64 channels, an aggregate total transmission capacity of 4.1 terabits per second is realized by employing a 32 Gbaud Nyquist four-level pulse amplitude modulation format. The demonstrated performance makes the laser a compelling on-chip WDM source for multi-terabit/s optical interconnects in future large-scale silicon EPICs.
INTRODUCTION
Driven by the huge demand for high-performance computing and large-scale data centers, photonic interconnects employing wavelength division multiplexing (WDM) are evolving fast, as they are a feasible technology to meet the high-bandwidth and low energy consumption requirements of data centers [1][2][3][4]. Silicon, as a major photonics platform benefiting from the mature complementary metal-oxide-semiconductor very-large-scale integration (CMOS-VLSI) fabrication technology, offers a great opportunity to realize a low-cost, small-footprint, highly scalable and energy-efficient solution. Large-scale silicon electronic and photonic integrated circuits (EPICs) for inter-chip data communications have been demonstrated with unprecedented performance [5,6]. Further breakthroughs in bandwidth density and total power consumption require a full integration strategy where all the photonic components should be integrated on chip, including modulators, photodetectors, (de)multiplexers, polarization rotators, and especially on-chip light sources with low power consumption and high channel count for efficient WDM [7]. While most of the aforementioned components can be readily realized on silicon with high performance, on-chip light sources, on the other hand, are rather difficult to realize due to the inherent indirect-bandgap nature of silicon [8]. Research has been devoted to demonstrating highly efficient and reliable WDM sources on Si, either by flip-chip bonding [9] or wafer bonding [10][11][12] techniques. While the device performance of these lasers is comparable to or even superior to equivalent lasers on native substrates, they are arguably not the optimum choices in terms of cost, yield, scalability and reliability [13]. Recent breakthroughs in the direct growth of InAs quantum dot (QD) materials on CMOS-compatible on-axis (001) silicon substrates provide an appealing direction from both a cost and a performance perspective, as this approach combines the advantages of the QD active material and silicon's mass production capability [13]. Record high-performance Fabry-Perot (FP) QD lasers with the lowest lasing threshold and longest lifetime of more than a million hours [14] have been demonstrated recently owing to the significant reduction in threading dislocation density (TDD) in the GaAs buffer layers [15]. It is, therefore, attractive to realize QD on-chip WDM sources for future large-scale silicon EPICs to fulfill the bandwidth and power consumption requirements.
An alternative approach uses silicon based distributed feedback (DFB) laser arrays to achieve multi-wavelength channels by tuning the grating periods [16][17][18]. While looking back at the QD material properties, it is exciting to note that the inherent inhomogeneously broadened gain spectrum, ultrafast carrier dynamics, large gain and saturable absorber (SA) saturation energy ratio and low amplified spontaneous emission (ASE) noise level make QDs a perfect active medium for mode locked lasers (MLLs), which can also be used as light sources for WDM applications [19,20]. Compared to the DFB laser array with several tens of single-channel lasers, where multiple temperature controllers, wavelength trackers and complicated packaging architecture are needed, a single MLL can generate a wide coherent spectrum with a fixed channel spacing corresponding to the cavity length, which would greatly simplify the system topology and lower the total cost as well as the energy consumption. MLLs can also be used in a wide variety of compelling applications, including high-speed photonic analog-to-digital conversion, intrachip/interchip clock distribution and recovery, millimeter wave signal generation for radio-over-fiber applications [21][22][23].
In this work, we demonstrate the first 20 GHz passively mode-locked quantum dot laser (QD-MLL) directly grown on an on-axis (001) CMOS-compatible silicon substrate. The InAs QD-MLL employs a chirped QD design with the photoluminescence (PL) full-width at half-maximum (FWHM) broadened to 53 meV while maintaining high optical intensity. The laser operates in the O-band with a wide mode-locking range. A narrow RF linewidth of 1.8 kHz with a record-low timing jitter of 82.7 fs integrated from 4 MHz to 80 MHz is measured. The laser also shows a wide optical bandwidth, with a total of 80 channels within the 10 dB spectral bandwidth. The integrated average relative intensity noise values of the whole spectrum and a single channel are −152 dB/Hz and −133 dB/Hz in the frequency range from 10 MHz to 10 GHz, respectively. Among the total lines, 64 channels were selected to realize 32 Gbaud Nyquist four-level pulse amplitude modulation (PAM-4) transmission with an aggregate total capacity of 4.1 Tbit/s. The measured bit error ratios (BERs) of back-to-back (B2B) and 5-km standard single-mode fiber (SSMF) transmission are below the forward error correction (FEC) threshold (with 61 channels below the hard-decision FEC threshold and all 64 lines below the soft-decision FEC threshold). The demonstrated performance suggests the Si-based QD-MLL is a strong WDM light source candidate that can be integrated in future large-scale silicon EPICs to boost the system capacity and efficiency.
DEVICE DESIGN AND FABRICATION
The InAs QD laser growth was completed directly on an on-axis (001) silicon substrate by solid-source molecular beam epitaxy (MBE). Fig. 1(a) shows the detailed QD epitaxial structure. A high-quality GaAs buffer layer was first grown on Si with a low TDD value of 7 × 10^6 cm^-2 by optimizing the InGaAs/GaAs strained superlattice dislocation filter layers and the thermal cyclic annealing process [14]. It was then followed by a 300 nm heavily n-type doped GaAs contact layer and a 1400 nm n-type AlGaAs cladding layer. Five layers of InAs/In0.15Ga0.85As dots-in-a-well (DWELL) active region with 37.5 nm GaAs spacers were sandwiched between upper and lower unintentionally doped 50 nm GaAs waveguides. A 10 nm 5 × 10^17 cm^-3 p-modulation doped GaAs layer was also introduced into the spacers to help improve the gain and temperature performance of the laser [24]. After the deposition of a 1400 nm p-type AlGaAs cladding and a 300 nm heavily doped p-type InGaAs contact layer, the whole growth procedure was completed. Previously, the QD layers had the same QD size distribution (the FWHM of the PL is ~30 meV) [20]. In this study, a chirped QD design was employed to broaden the FWHM of the gain spectrum [25] while maintaining the high optical gain of the QDs. By varying the total thickness of the InGaAs layers in the DWELL structure from 3 nm to 7 nm, a different QD gain peak position was obtained for each layer. Fig. 1(b) shows the normalized room-temperature PL spectra of the calibration samples with single dot layers representing each layer thickness used in the laser. A total of ~70 nm of peak wavelength tuning can be obtained using this method. Fig. 1(c) shows the PL spectrum from the full QD laser structure material utilized in this work. The PL peak centers around 1240 nm with a broadened FWHM of 69 nm, corresponding to 53 meV, suggesting that the ensemble of five chirped QD layers effectively broadens the overall gain spectrum.
The heteroepitaxial wafer was processed into the designed 3 μm wide deeply etched waveguide structure by standard semiconductor dry etching and metal/dielectric deposition techniques. In order to realize 20 GHz pulse generation, the total length of the laser was set to be 2048 μm (calculated by the previously obtained group effective index of 3.66). The SA section length was designed to be 14% of the total cavity length in this study. It was isolated from the gain section 10 μm away by a second dry etch step, where the heavily doped p-contact GaAs layer was etched away with ~ 600 nm deep into the p-type cladding layer. The measured isolation resistance is around 15 kΩ . Coplanar ground-signal (GS) pads were also designed to facilitate extracting the generated RF signal directly by high speed RF probe. At the time of test, no impedance matching circuit was employed. The processed wafer was then cleaved into the designed cavity length after thinning the backside of the Si substrate down to ∼ 180 μm. The QD-MLL facets were both left as-cleaved.
CHARACTERIZATION AND DISCUSSION
The fabricated 20 GHz QD-MLL chip was mounted on a copper heat sink and tested at a fixed stage temperature of 18 °C. The continuous wave (CW) performance was first characterized, as shown in Fig. 2(a). An increase in the threshold from 42 mA to 58 mA with increasing SA reverse bias voltage was observed for this laser due to the enhanced absorption loss within the SA section, which also caused the slope efficiency to decrease. The sudden power rise at threshold indicates the nonlinear saturation effect of the SA section, as the increased intracavity spontaneous emission power finally bleaches the SA, leading to a sudden drop in the overall cavity loss [20]. Further optimization of the chirped QD material would help to improve the threshold current and slope efficiency. The series resistance of this sample is around 3.2 Ω.
To study the passive mode locking (PML) behavior of the laser, the output spectrum was measured using an optical spectrum analyzer (Yokogawa, AQ6370C), RF performance was measured with an electrical spectrum analyzer (Rohde&Schwarz, FSU) and the autocorrelation pulsewidth was recorded with an autocorrelator (Femtochrome, FM-103MN) .
A. PASSIVE MODE LOCKING PERFORMANCE
The PML regime of the QD laser as a function of the gain-section forward bias current and the SA-section reverse bias voltage was first delimited by defining a good mode-locking state as one whose fundamental frequency tone signal-to-noise-floor ratio (SNR) is larger than 30 dB and whose corresponding pulsewidth is narrower than 12 ps. A wide mode-locking area was demonstrated under this criterion, with forward current ranging from 75 mA to 200 mA and reverse voltage ranging from 1 V to 5 V, as shown in Fig. 2(b) and 2(c) for the SNR and pulsewidth mapping diagrams, respectively. For most of the recorded mode-locking states, the fundamental RF peak SNR is larger than 50 dB, which indicates good mode-locking quality across the whole range. The abrupt transition around Igain = 120 - 125 mA is caused by lasing mode-hopping behavior, which roughly corresponds to the kink point in the corresponding light-current curves. Narrower pulses were obtained on the lower-current and higher-reverse-voltage side with high SNR values. Fig. 3(a) shows the narrowest pulse of 5 ps assuming a hyperbolic secant squared pulse profile. The pulse can be further shortened by increasing the SA section length, which provides more effective pulse-shaping dynamics within the QD material [20]. The RF performance is presented in Fig. 3(b). A sharp fundamental RF tone at 20.02 GHz with an SNR of 64 dB and its higher-order harmonics can be clearly seen across the 50 GHz span. A battery current source as well as a linearized regulated voltage supply were then utilized to minimize the line noise in the RF linewidth measurement. Fig. 3(c) shows the obtained RF linewidth with a Voigt fit. The 3 dB linewidth is 1.8 kHz, which is comparable to the state-of-the-art high-speed semiconductor mode-locked lasers [26,27]. The corresponding single-sideband (SSB) phase noise of this PML state is shown in Fig. 3(d). With a typical roll-off slope of 20 dBc/Hz per decade, the integrated timing jitter is 286 fs from 100 kHz to 100 MHz, and 82.7 fs from 4 MHz to 80 MHz of the ITU-T specified range, which is the lowest timing jitter ever reported to date for any passively mode-locked semiconductor laser diode. This performance is believed to benefit from the low ASE noise and low confinement factor properties of the QD material. Hermetic packaging can further improve the stability of the QD laser by suppressing the technical and electrical noise from the ambient environment, leading to better RF performance [26].
Fig. 2. Si-based 20 GHz QD-MLL: (a) continuous-wave light-current-voltage curves under different SA reverse bias; (b) fundamental RF peak signal-to-noise-floor-ratio mapping; and (c) pulsewidth mapping as a function of gain-section current and SA-section reverse bias under passive mode-locking operation.
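For reference, the conversion from a measured SSB phase-noise trace to the integrated RMS timing jitter quoted above can be sketched as follows; the trace used here is synthetic (an assumed -20 dB/decade roll-off), not the measured data.

```python
# Integrated RMS timing jitter from a single-sideband phase-noise trace L(f):
# sigma_t = sqrt(2 * integral of L(f) df) / (2*pi*f_rep). Trace values are assumed.
import numpy as np

def timing_jitter(f_offset_hz, L_dbc_hz, f_rep_hz, f_lo, f_hi):
    mask = (f_offset_hz >= f_lo) & (f_offset_hz <= f_hi)
    L_lin = 10 ** (L_dbc_hz[mask] / 10)                   # dBc/Hz -> linear
    phase_var = 2 * np.trapz(L_lin, f_offset_hz[mask])    # rad^2 (both sidebands)
    return np.sqrt(phase_var) / (2 * np.pi * f_rep_hz)    # seconds

# Synthetic -20 dB/decade trace, qualitatively like Fig. 3(d)
f = np.logspace(5, 8, 400)                                # 100 kHz .. 100 MHz offsets
L = -70 - 20 * np.log10(f / 1e5)                          # dBc/Hz (assumed values)
print(timing_jitter(f, L, f_rep_hz=20.02e9, f_lo=4e6, f_hi=80e6) * 1e15, "fs")
```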
At higher pump current levels, the pulse starts to broaden to around 10 ps owing to the self-phase modulation mechanism [28]. The spectral bandwidth also increases as more modes reach threshold due to the inhomogeneous broadening nature of the QD material. However, the SNR still remains above the 60 dB level on the lower voltage side, indicating strong phase correlation between these lasing modes. Fig. 4(a) shows the obtained square-like optical spectrum with the largest 3 dB bandwidth of 6.1 nm (58 lines, 80 lines within 10 dB) under the bias condition of Igain = 180 mA, VSA = -1.92 V. The average optical linewidth of each mode is around 10.6 MHz, measured by the delayed self-heterodyne method. The corresponding SNR at this state is 66 dB with an RF 3 dB linewidth of 2.7 kHz. The relative intensity noise (RIN), an important performance parameter, was also characterized across the spectrum. As shown in Fig. 4(b), a low integrated average RIN value of -152 dB/Hz is obtained in the 10 MHz - 10 GHz range for the whole optical comb. Filtered individual channels, due to their relatively weak line power, were then amplified by an external O-band semiconductor optical amplifier (SOA) before the RIN measurement. With the help of the SOA in suppressing the low-frequency mode partition noise [29], the laser demonstrates an integrated average RIN value of -133 dB/Hz for each line, which is suitable for advanced modulation format PAM-4 transmission systems to boost the bandwidth and efficiency [30].
B. DATA TRANSMISSION PERFORMANCE
Intensity modulation/direct detection (IM/DD) combined with an advanced modulation format like PAM-4 was investigated because of its low complexity, high stability, and high spectrum utilization efficiency compared to coherent detection solutions; it is acknowledged as one of the promising solutions for data center interconnects [31]. To prove the suitability of this silicon-based QD-MLL for high-speed data transmission, a system-level WDM experiment employing the PAM-4 modulation format with direct detection is performed. A Nyquist PAM scheme is also deployed to further improve the spectrum efficiency [32]. Fig. 5 shows the experimental setup for the Nyquist-pulse-shaped PAM-4 data transmission. Each wavelength channel of the comb generated by the QD-MLL is selected by an optical bandpass filter (OBPF) (Yenista XTA-50/O) as an optical carrier. An O-band SOA (Innolume QD SOA) follows to boost the carrier power up to ~12 dBm. After the SOA, another OBPF (Santec OTF-350) is used to filter out the out-of-band ASE noise. The amplified optical carrier signal is then launched into a 30-GHz lithium niobate Mach-Zehnder modulator (MZM) (IXblue MX1300-LN-40) for data modulation. The Nyquist-pulse-shaped PAM-4 symbols are generated offline by the transmitter digital signal processing (Tx-DSP) block diagram shown in Fig. 5. The generated symbols are loaded to the arbitrary waveform generator (AWG) (Keysight M8196A, with 32-GHz bandwidth and 92 GSa/s). The electrical PAM-4 signals generated by the AWG are then amplified to approximately 3.5 V peak-to-peak by a 38-GHz broadband RF amplifier (SHF 806E). A variable optical attenuator is added after the modulator to control the received optical power before the photodetector. Finally, the modulated optical signal is received using a 30-GHz photodetector (Finisar XPRV2022A) with a transimpedance amplifier (TIA). No optical amplifier is used after the modulator. The received signal is then sampled at 200 GSa/s by a 70-GHz real-time oscilloscope (Tektronix DPO77002SX) and processed by the receiver DSP (Rx-DSP) for signal demodulation and error counting.
At the Tx-DSP side, the 32 Gbaud PAM-4 signal is generated and up-sampled by a factor of 2.875 (92/32). A root-raised-cosine (RRC) filter with a roll-off factor of 0.12 is applied for Nyquist filtering. Pre-emphasis is then performed to compensate for the system's end-to-end frequency response using the measured inverse response function of the whole system. A 10% clipping is performed to reduce the signal distortion after the RF amplifier induced by the high peak-to-average power ratio (PAPR). At the Rx-DSP side, a matched RRC filter is applied to mitigate the effects of white noise, and the received signals are resampled to two samples per symbol for the receiver-side equalization. All the Nyquist filtering is performed in the DSP. A conventional (33, 9)-tap T/2-spaced decision-feedback equalizer (DFE) is applied to restore the signal. Clock recovery is carried out to compensate for any sampling phase and frequency offset that may exist between the transmitter and receiver clocks. Finally, the output PAM-4 signal is decoded for bit error ratio (BER) counting. In principle, the 20-GHz channel spacing allows at most 20 Gbaud of data per channel. Since Nyquist pulse shaping is used and the roll-off factor of the RRC filter is designed to be 0.12, the effective bandwidth of the 32 Gbaud Nyquist PAM-4 signal is reduced to 19.2 GHz and the spectrum of the Nyquist PAM-4 signal becomes rectangular. This allows a higher data transmission rate with no crosstalk from adjacent channels.
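A minimal sketch of the transmitter-side processing described above is given below: PAM-4 symbol generation, RRC pulse shaping with a 0.12 roll-off, fractional resampling toward the 92 GSa/s DAC rate, and clipping. The pre-emphasis stage and symbol mapping details are omitted, and the helper `rrc_taps` is an assumed utility rather than part of the authors' DSP.

```python
import numpy as np
from scipy.signal import resample_poly

def rrc_taps(beta, sps, span):
    """Unit-energy root-raised-cosine taps: roll-off beta, sps samples/symbol,
    filter length of `span` symbols."""
    t = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps
    h = np.empty_like(t, dtype=float)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 - beta + 4.0 * beta / np.pi
        elif np.isclose(abs(ti), 1.0 / (4.0 * beta)):
            h[i] = (beta / np.sqrt(2.0)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            num = (np.sin(np.pi * ti * (1 - beta))
                   + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta)))
            den = np.pi * ti * (1 - (4 * beta * ti) ** 2)
            h[i] = num / den
    return h / np.sqrt(np.sum(h ** 2))

rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, 2 ** 15) * 2.0 - 3.0     # PAM-4 levels {-3, -1, +1, +3}

sps = 2                                               # shape at 2 samples/symbol first
up = np.zeros(len(symbols) * sps)
up[::sps] = symbols                                   # zero-stuffing
shaped = np.convolve(up, rrc_taps(beta=0.12, sps=sps, span=64), mode="same")

waveform = resample_poly(shaped, up=23, down=16)      # 2 sps -> 2.875 sps (92 GSa/s / 32 Gbaud)
clip = 0.9 * np.max(np.abs(waveform))                 # ~10% clipping against the PAPR
waveform = np.clip(waveform, -clip, clip)
```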
The modulated optical signal is transmitted both back-to-back (B2B) and over 5-km standard single-mode fiber (SSMF). Fig. 6 summarizes the transmission results. A total of 65 carriers out of the whole comb are used for the experiment (under approximately the same bias condition as shown in Fig. 4). Fig. 6(a) shows the BER performance of each channel at the maximum received optical power (between -5.5 dBm and -6.5 dBm for all the channels). For the 65 tested channels, we obtain 64 channels with BERs below the soft-decision forward error correction (SD-FEC) threshold (20% overhead), leading to an aggregate data transmission rate of 4.1 Tbit/s. Considering the 20% overhead for SD-FEC, the net spectral efficiency amounts to 2.7 bit/s/Hz. When considering the hard-decision forward error correction (HD-FEC) threshold (7% overhead), we can still achieve 61 channels with BERs below this threshold, which gives a total bit rate of 3.9 Tbit/s with a net spectral efficiency of 3.1 bit/s/Hz. 5-km SSMF transmission is also performed for certain wavelengths, with the corresponding BER performance shown in Fig. 6(a). Although at O-band the main impairment to signal transmission comes from fiber attenuation, the BER performance is still below the HD-FEC threshold even for the carrier near the edge (at 1265.669 nm) after 5-km SSMF transmission. Fig. 6(b) shows the corresponding eye diagrams (after Rx-DSP) for B2B and 5-km SSMF transmission of the wavelengths located at 1265.669 nm and 1269.445 nm. The eye-opening of Nyquist PAM-4 is reduced compared to a conventional PAM signal due to the high signal PAPR [33]. Fig. 7 shows the BER versus received power for the carriers located at 1266.221 nm and 1270.946 nm for B2B and after 5-km SSMF transmission. The power penalties for these two measured channels are about 1 dB. A BER floor can also be observed in Fig. 7 due to the insufficient eye-opening, which can be further improved by optimizing the clipping ratio of the transmitted data.
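The aggregate-rate and spectral-efficiency figures quoted above follow from simple accounting, sketched below; the FEC-overhead bookkeeping is an approximation of the paper's calculation, so the HD-FEC figure comes out near, not exactly at, the reported 3.1 bit/s/Hz.

```python
baud, bits_per_symbol, spacing_hz = 32e9, 2, 20e9

ch_sd = 64                                  # channels below the SD-FEC (20% OH) threshold
gross_sd = ch_sd * baud * bits_per_symbol   # ~4.1 Tbit/s aggregate line rate
net_se_sd = (gross_sd / 1.20) / (ch_sd * spacing_hz)

ch_hd = 61                                  # channels below the HD-FEC (7% OH) threshold
gross_hd = ch_hd * baud * bits_per_symbol   # ~3.9 Tbit/s aggregate line rate
net_se_hd = (gross_hd / 1.07) / (ch_hd * spacing_hz)

print(gross_sd / 1e12, round(net_se_sd, 2))  # 4.096 Tbit/s, ~2.67 bit/s/Hz
print(gross_hd / 1e12, round(net_se_hd, 2))  # 3.904 Tbit/s, ~2.99 bit/s/Hz
```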
CONCLUSIONS
We have designed and presented, for the first time, a high-channel-count, low-noise 20 GHz passively mode-locked quantum dot laser directly grown on a CMOS-compatible Si substrate. The QD-MLL shows excellent phase and intensity noise performance. A narrow RF linewidth with a record-low timing jitter of 82.7 fs (4-80 MHz) as well as low RIN values are demonstrated. A wide coherent optical spectrum is also shown thanks to the adoption of a chirped QD active region design. By employing 64 wavelength channels as optical carriers, a system-level terabit transmission experiment is demonstrated. Combined with its direct-growth nature, the QD-MLL manifests itself as a compelling candidate for an on-chip WDM light source. Future work includes the integration of QD semiconductor optical amplifiers to further boost the comb laser power and provide more flexibility in system-level design. As the direct-growth-on-silicon technology matures, we expect to see a fully integrated large-scale silicon EPIC in the near future. | 4,851 | 2018-12-11T00:00:00.000 | [
"Physics",
"Engineering",
"Materials Science"
] |
Implementation and Performance Analysis of Kalman Filters with Consistency Validation
This paper provides a supplementary note for implementing Kalman filters. The material points out several significant highlights, with emphasis on performance evaluation and consistency validation between the discrete Kalman filter (DKF) and the continuous Kalman filter (CKF). Several important issues are delivered through comprehensive exposition accompanied by supporting examples, both qualitative and quantitative, for implementing the Kalman filter algorithms. The lessons learned assist readers in capturing the basic principles of the topic and enable them to better interpret the theory, understand the algorithms, and correctly implement the computer code for further study of the theory and applications of the topic. A wide spectrum of content is covered, from theoretical to implementation aspects, involving the DKF and CKF along with the theoretical error covariance check based on the Riccati and Lyapunov equations. A consistency check of performance between the discrete and continuous Kalman filters enables readers to verify the correctness of their implementation and coding of the algorithms. The tutorial-based exposition presented in this article treats the material from a practical usage perspective and can provide profound insights into the topic with an appropriate understanding of stochastic processes and system theory.
Introduction
The Kalman filter (KF) [1][2][3][4][5][6][7] describes a recursive solution to the linear filtering problem and has been one of the most common estimation techniques widely used today. It is a standard method used in control engineering for minimizing the mean-square error between the output signal of a linear plant subject to a stochastic disturbance and the estimated output signal. The Kalman filter is a set of mathematical equations that provides an efficient computational means to estimate the state of a process in a way that minimizes the mean squared error. As an optimal recursive data processing algorithm, the Kalman filter combines all available measurement data plus prior knowledge about the system and measuring devices to produce an estimate of the desired variables in such a manner that the error is minimized statistically. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest. In addition, it does not require all previous data to be stored and reprocessed every time a new measurement is taken.
The Kalman filter algorithm is one of the most common estimation techniques currently used. Due to advances in digital computing, the Kalman filter has been a useful tool for a variety of applications [8,9]. Although the Kalman filter was originally developed for the case of discrete observations that enter into the estimation of the state variables at discrete times, the observations could be continuous, as with analog measuring devices [10][11][12][13][14]. They might on some occasions be considered nearly continuous if the data rate is very high. The continuous Kalman filter (CKF), sometimes termed the Kalman-Bucy filter, provides the optimal solution for the state estimation problem of systems modeled by a linear stochastic differential equation. As the continuous-time counterpart of the discrete-time Kalman filter, the CKF has no distinction between the prediction and update steps of the discrete Kalman filter (DKF). Further, although the majority of Kalman filter applications are implemented on digital computers, a thorough study of optimal estimation should include the CKF, from which some intuition can be gained for designing the DKF. It is still valuable to investigate the CKF as a baseline system design and as an evaluation tool for the DKF, even though the implementation of the CKF is not as practical as that of the DKF. Furthermore, in some cases, the statistical behavior of the system can be determined in a closed, analytical form if formulated as a continuous process.
Some existing works in the literature are intended to serve as tutorials [15][16][17][18][19], and the purpose of this paper is to provide a practical introduction, with implementation practice, to the topic. While there are valuable references detailing the derivation and theory behind the Kalman filter, both discrete and continuous, the KF technique is sometimes not easily accessible to readers from the existing publications. Implementation of the algorithms sometimes bothers or confuses readers. Generally, engineers do not encounter it until they have begun their graduate or professional careers. It is reasonable to expect working engineers to be capable of making use of this computational tool for different applications. However, it may not be practical to expect working engineers to obtain a deep and thorough understanding of the stochastic theory behind Kalman filter techniques.
The steady-state Kalman filter is considered a type of suboptimal filter, which has a constant gain matrix during the estimation process. It is applicable in some applications, with some limitations, and can be realized through an analog circuit, which is particularly attractive in real-time applications at the cost of some performance degradation. However, under the conditions of a time-varying environment, where the process and measurement models change with time, the adaptive Kalman filter (AKF) is popular through tuning of the covariance parameters Q_k and R_k. In such a case, the steady-state Kalman filter may not be able to provide the desired flexibility. Furthermore, when compared to the other filters, the suboptimal Kalman filter (SKF) has identical tracking accuracy and is highly scalable. The SKFs are designed in a feedback-controlled system to obtain an estimate of the root-mean-square error. It requires only the filtering calculation and foregoes the costly high-dimensional computation and the challenging smoothing computation, resulting in a lower computational load [20][21][22][23].
This article takes a tutorial-based exposition of the topics, providing profound insights with an appropriate understanding of the stochastic processes and system theory involved, from a practical usage perspective. Several important issues are delivered through introductory exposition accompanied by supporting examples, both qualitative and quantitative, for better clarification of the Kalman filter estimation algorithm.
The remainder of this paper is organized as follows. A brief review of the discrete Kalman filter and continuous Kalman filter is given in Section 2. In Section 3, discretization of the continuous Kalman filter to the discrete-time formulation is revisited. In Section 4, illustrative examples and discussion are presented. Conclusions are given in Section 5.
The Kalman Filters and Suboptimal Filters
In this section, preliminary background on the discrete and continuous Kalman filters is reviewed. The optimal Kalman gain and a general arbitrary gain, respectively, are introduced. The covariance matrices that describe error propagation of the dynamical system with and without measurement, respectively, are presented.
Discrete Kalman Filter
Consider a dynamical system whose state is described by a linear, vector differential equation.The process model and measurement model are represented as the following: The discrete Kalman filter equations are summarized in Table 1.
Continuous Kalman Filter
Consider a dynamical system whose state is described by a linear, vector differential equation.The process model and measurement model are represented as the following: Process model: Measurement model: where the vectors u(t) and v(t) are both white noise sequences with zero means and mutually independent: where δ(t − τ) is the Dirac delta function, E[•] represents expectation, and superscript "T" denotes matrix transpose.The CKF equations are summarized in Table 2. (1) Solve the error covariance propagation by the matrix Riccati equation for P, which is symmetric positive-definite.
The discrete filter gain and the continuous filter gain are related by K_k ≈ K(t_k) ∆t, where ∆t = t_{k+1} − t_k represents the sampling period.
Suboptimal Filters: Estimators with a General Gain
The error covariance P_k for a discrete filter with the same structure as the Kalman filter, but with a general (namely, an arbitrary) gain matrix K_k, is given by the Joseph form P_k = (I − K_k H) P_k⁻ (I − K_k H)ᵀ + K_k R_k K_kᵀ. The error covariance described by the differential equation
Ṗ = (F − KH)P + P(F − KH)ᵀ + GQGᵀ + KRKᵀ
defines the error covariance for the filter with a general filter gain matrix K, and can be solved for the covariance of a general-gain model. Taking the partial derivative of P_∞ with respect to K and setting it to zero for a minimum leads to the same result as the matrix Riccati equation in continuous form,
Ṗ = FP + PFᵀ + GQGᵀ − PHᵀR⁻¹HP,
which becomes an algebraic Riccati equation (ARE) and can be solved for the steady-state minimum covariance matrix when the system reaches steady state, Ṗ = 0. The Riccati equation degenerates to the Lyapunov equation,
Ṗ = FP + PFᵀ + GQGᵀ,
for the case in which no measurement is available. Equation (8) can be considered for either of the following two cases: Ṗ = (F − KH)P + P(F − KH)ᵀ + GQGᵀ + KRKᵀ with K = 0, or with H = 0. If the general gain matrix K has been designed for particular values of Q and R, the steady-state error covariance will vary linearly with the actual spectral densities of either the process or measurement noise. Any deviation of the design variances, and consequently of K, from the correct values will cause an increase in the filter error variance. Further information on sensitivity analysis can be found in Gelb [3] and Jwo [10].
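As a sketch of how the steady-state quantities discussed above can be obtained numerically, the snippet below solves the filter ARE through SciPy's continuous-ARE solver (using the usual duality with the control-form ARE) for an assumed (F, G, H, Q, R); the matrices are placeholders, not values taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder continuous-time model (any detectable/stabilizable choice works)
F = np.array([[0.0, 1.0],
              [0.0, -1.0]])
G = np.array([[0.0],
              [1.0]])
H = np.array([[1.0, 0.0]])
Q = np.array([[2.0]])      # process-noise spectral density
R = np.array([[1.0]])      # measurement-noise spectral density

# Filter ARE  F P + P F' + G Q G' - P H' R^-1 H P = 0  via the dual control ARE
P_inf = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)
K_inf = P_inf @ H.T @ np.linalg.inv(R)   # steady-state Kalman gain K = P H' R^-1

print(P_inf)
print(K_inf)
```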
For the discrete Kalman filter, there are two stages in which five equations are involved to complete an estimation cycle: two at the time update for the a priori estimate and three at the measurement update for the a posteriori estimate. For the continuous Kalman filter, only three equations are involved, since there is no distinction between the a priori and a posteriori versions of the covariance and estimate; more specifically, P⁻_{k+1} → P_k and x̂⁻_{k+1} → x̂_k. For the steady-state Kalman filter as the suboptimal filter, the gain matrix is fixed as a constant. Since the constant gain matrix can be calculated offline, the algorithm then involves four equations, including two for the state estimates (a priori and a posteriori, respectively) and the other two for the calculation of the covariance matrices (a priori and a posteriori, respectively). For the case in which the theoretical covariance matrix is not required, only the two state-estimate equations, i.e., those for x̂⁻_{k+1} and x̂_k, are involved.
Discrete Kalman Filter from Discretization of Continuous Kalman Filter
Expressing Equation (1a) in discrete-time equivalent form via discretization of the continuous-time system leads to
x_{k+1} = Φ_k x_k + w_k.
In the subsequent discussion, the derivation of the key parameters from the continuous form for implementing the DKF will be revisited. Two types of process models for the DKF are involved.
(1) Realization based on Equation (1a): The state transition matrix can be represented as
Φ_k = e^{F∆t} = I + F∆t + (F∆t)²/2! + ...
using the Taylor series expansion. For the process model given by Equation (1a), the noise input is given by
w_k = ∫_{t_k}^{t_{k+1}} e^{F(t_{k+1} − τ)} G w(τ) dτ,
where t_k ≡ k∆t and t_{k+1} ≡ (k + 1)∆t, and the process noise covariance can be calculated via
Q_k = E[w_k w_kᵀ] = ∫_{t_k}^{t_{k+1}} e^{F(t_{k+1} − τ)} G Q Gᵀ e^{Fᵀ(t_{k+1} − τ)} dτ.
Using the Taylor series expansion, the first-order approximation is obtained by setting Φ_k ≈ I (which is equivalent to F = 0), giving Q_k ≈ G Q Gᵀ ∆t (Equation (14)). It should be mentioned that even if Q is diagonal, Q_k need not be, due to discretization of the system. Sampling can destroy independence among the components of the process noise.
(2) Realization based on Equation (1b): On the other hand, for the process model given by Equation (1b), the total noise input is now represented as Γ_k w_k, and, consequently, the total process noise covariance is now Γ_k Q_k Γ_kᵀ, which, by Taylor series expansion, gives the corresponding first-order approximation Γ_k Q_k Γ_kᵀ ≈ G Q Gᵀ ∆t (Equation (18)). Equation (14) can be regarded as a special case of Equation (18) with the noise gain set to the identity matrix, Γ_k = I.
An alternative approach is based on the piecewise white noise, or discrete white noise, approximation. Assuming that the forcing function w(τ) remains constant, w(t) = w_k, over the integration interval, i.e., for t ∈ [t_k, t_{k+1}] for all k = 0, 1, 2, . . ., the noise gain is given by
Γ_k = ∫_{t_k}^{t_{k+1}} e^{F(t_{k+1} − τ)} G dτ,
and Equation (19) can be written as a series expansion. For the first-order approximation, when Φ_k ≈ I, we have Γ_k ≈ G∆t. Equating Equations (18) and (21) then gives the relation between the two representations. It should be noted that the Q_k's in Equations (14) and (22) are different: the Q_k in Equation (14) represents the total amount of noise covariance, whereas in Equation (22) the total amount of noise covariance is Γ_k Q_k Γ_kᵀ, owing to the two different representations.
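A small sketch of the discretization step is given below. It returns the exact Φ_k and Q_k through the standard Van Loan matrix-exponential construction together with the first-order approximation G Q Gᵀ ∆t discussed above; the scalar example uses assumed values β = 1, q = 2 and ∆t = 0.1.

```python
import numpy as np
from scipy.linalg import expm

def discretize(F, G, Q, dt):
    """Discretize xdot = F x + G w (E[w w'] = Q delta(t)): return the state
    transition matrix Phi, the exact process-noise covariance Qk (Van Loan),
    and the first-order approximation G Q G' * dt."""
    n = F.shape[0]
    Phi = expm(F * dt)
    M = np.zeros((2 * n, 2 * n))           # Van Loan block matrix [[-F, GQG'], [0, F']]
    M[:n, :n] = -F
    M[:n, n:] = G @ Q @ G.T
    M[n:, n:] = F.T
    E = expm(M * dt)
    Qk = Phi @ E[:n, n:]                   # exact discrete process-noise covariance
    return Phi, Qk, G @ Q @ G.T * dt

# Scalar Gauss-Markov process with assumed beta = 1, q = 2, dt = 0.1
Phi, Qk, Qk1 = discretize(np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), 0.1)
print(Phi, Qk, Qk1)   # Qk = q/(2*beta)*(1 - exp(-2*beta*dt)) ~ 0.1813 vs. first order 0.2
```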
Furthermore, a continuous system model involving a deterministic control input is described by the following: It can be discretized in either of the following two forms, depending on the representation of the process model: The gain matrix of the deterministic control input is given by the following:
Illustrative Examples and Discussion
In this section, various important issues are delivered, along with some supporting examples. Four supporting examples are involved in the discussion: the scalar Gauss-Markov process, two extensions of that process, and the integrated Gauss-Markov process. Table 3 summarizes the objectives and highlights the important issues to be delivered by the supporting examples. The reader can use the illustrative examples in this paper as step-by-step exercises, beginning with a standard scalar Gauss-Markov process, extending to the cases where an additional deterministic control input is introduced and where a larger random input gain is applied, and then to the integrated Gauss-Markov process. Both scalar and vector Kalman filters are involved. For the scalar Kalman filter, it is easier for a beginner to understand the mathematical equations and implement the computer code. The vector Kalman filter is more practical in engineering applications, where matrix calculations, such as inversion and decomposition of matrices, are involved and make the realization more challenging. The numerical data accompanying the illustrative examples can be carefully checked against the analytical results to assure correct implementation of the algorithms and to provide an efficient way of troubleshooting. The examples also provide a connection to probability, stochastic processes, and system theory.
Example 1: The Scalar Gauss-Markov Process
The Gauss-Markov process is a stochastic process that satisfies the requirements for both Gaussian processes and Markov processes. The scalar Gauss-Markov process is described by the stochastic differential equation ẋ(t) = −β x(t) + w(t). It can be represented by the transfer function based on the Laplace transform, H(s) = 1/(s + β), or, equivalently, based on the Fourier transform, H(jω) = 1/(jω + β), which has the impulse response h(t) = e^{−βt} u(t). The process can be represented by the system block diagram shown in Figure 1, in which the input is white noise with a spectral amplitude q.
Firstly, the theoretical result is presented. The mean-square value of the output x(t) can be calculated as
E[x²(t)] = (q/(2β))(1 − e^{−2βt}),
and as t → ∞ the value approaches q/(2β). Furthermore, since the impulse response in this model is h(t) = e^{−βt} u(t) and the spectral amplitude of the input is S_f(jω) = q, the spectral function of the output can be calculated, based on the relation for a wide-sense stationary (WSS) random process applied to a linear time-invariant (LTI) system, as
S_x(jω) = |H(jω)|² S_f(jω) = q/(ω² + β²).
Taking the inverse Fourier transform of S_x(jω) yields the autocorrelation function
R_x(τ) = (q/(2β)) e^{−β|τ|},
which provides another means of computing the mean-square value of a stationary process given its spectral function. Evaluating the autocorrelation function at τ = 0, the mean-square value obtained based on Equation (29) is the same as that based on Equation (27), namely E[x²] = R_x(0) = q/(2β). Alternatively, the propagation of the error covariance based on the Lyapunov equation for this Gauss-Markov process leads to
Ṗ = −2βP + q,
from which the same steady-state result can also be obtained. When the linear measurement is available in continuous form, the differential equation for the error covariance of the CKF (the Riccati equation) yields
Ṗ = −2βP + q − P²/r.
Note that for this Gauss-Markov process, F = −β, G = 1, and H = 1. When the system reaches steady state, we have the ARE −2βP + q − P²/r = 0, which can be solved to obtain the steady-state covariance
P_∞ = βr(√(1 + q/(β²r)) − 1),
and, consequently, the associated steady-state Kalman gain K_∞ = P_∞/r. Alternatively, the error covariance differential equation for a filter with the structure of a CKF but with a general gain, shown as in Equation (7), is given by
Ṗ = 2(−β − K)P + q + K²r.
When the system reaches steady state, Ṗ = 0, and thus
P_∞ = (q + K²r)/(2(β + K)).
The same result can be obtained by taking the partial derivative of P_∞ with respect to K and setting it to zero to find the optimal gain. Figure 2 illustrates the performance deterioration due to an increase in r, where r = 1 and r = 0.01 are shown. The result based on the Riccati equation with r → ∞ coincides with that based on the Lyapunov equation. It can be seen from the Kalman gain equation K = PHᵀR⁻¹ that when the measurement noise r increases to a very large value, K becomes very small and P approaches 1 in this example. Figure 3 shows the variations of the covariance and Kalman gain as r increases. For two selected values of r, the corresponding covariance and Kalman gain are: (1) r = 0.01: P_∞ = 0.1318 and K_∞ = 13.1774; (2) r = 1: P_∞ = 0.7321 and K_∞ = 0.7321, as indicated by the circle symbols in the figure.
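The two (P_∞, K_∞) pairs quoted above can be reproduced from the scalar ARE; the short check below does so assuming β = 1 and q = 2, values consistent with the numbers in the text though not stated explicitly at this point.

```python
import numpy as np

def steady_state_scalar(beta, q, r):
    """Steady-state covariance and Kalman gain for xdot = -beta*x + w, z = x + v,
    from the scalar ARE  -2*beta*P + q - P**2/r = 0."""
    P = beta * r * (np.sqrt(1.0 + q / (beta ** 2 * r)) - 1.0)
    return P, P / r

print(steady_state_scalar(beta=1.0, q=2.0, r=1.0))    # -> (0.7321, 0.7321)
print(steady_state_scalar(beta=1.0, q=2.0, r=0.01))   # -> (0.1318, 13.1774)
```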
The discrete Kalman filter is run for performance comparison and a consistency check between the DKF and CKF. The continuous-time equation can be discretized as
x_{k+1} = e^{−β∆t} x_k + w_k,
where the covariance of w_k is
Q_k = (q/(2β))(1 − e^{−2β∆t}).
Figures 4-6 provide the state estimation results for the first-order Gauss-Markov process with various values of r. The state estimation in the case of a very large measurement noise r (r → ∞) is shown in Figure 4. In this case, the Kalman filter gain approaches 0, and the correction capability on the state vector is no longer available, meaning that only the time update is implemented. Figures 5 and 6 present the estimation results in the cases of larger (r = 1) and smaller (r = 0.01) measurement noise, respectively. The plot on the right provides a closer look at the time interval 50-60 s for better observation. To collect the data for calculating the error variance from the estimation results, a recursive loop for evaluating estimation errors is employed based on Figure 7. Performance degradation due to deviation of the Kalman gain K, and of three other parameters, β, q, and r, respectively, is examined in Figure 8. In each of the four plots, two sets of results are shown for observation of the effect of deviating the parameters from the appropriate points and also for a consistency check of the DKF and CKF results. The solid lines represent the theoretical values, while the circles are based on the DKF. Figure 9 provides a three-dimensional surface and contour of the covariance due to deviation of q and r from the optimal point, in this case q = 2 and r = 1, as indicated by a circle symbol on the figure.
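As a consistency check in the spirit of this example, the simulation sketch below discretizes the same scalar process exactly, runs the DKF, and compares the sample error variance with the theoretical steady-state value; β = 1 is assumed, and the continuous measurement-noise density r is converted to the discrete variance R_k = r/∆t in the usual way.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, q, r, dt, steps = 1.0, 2.0, 1.0, 0.01, 50_000

phi = np.exp(-beta * dt)                         # exact discretization
qk = q / (2.0 * beta) * (1.0 - np.exp(-2.0 * beta * dt))
rk = r / dt                                      # discrete measurement-noise variance

x, xhat, P = 0.0, 0.0, 1.0
errs = []
for _ in range(steps):
    x = phi * x + rng.normal(0.0, np.sqrt(qk))   # true Gauss-Markov state
    z = x + rng.normal(0.0, np.sqrt(rk))         # noisy measurement
    P = phi * P * phi + qk                       # time update
    K = P / (P + rk)                             # measurement update
    xhat = phi * xhat + K * (z - phi * xhat)
    P = (1.0 - K) * P
    errs.append(x - xhat)

print(P, np.var(errs[steps // 2:]))              # both approach the CKF value ~0.732
```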
Example 2: An Additional Deterministic Control Input Is Introduced
An additional deterministic control input is introduced to the system, as shown in Figure 10. Two extensions of the scalar Gauss-Markov system are presented; propagation of the mean value estimate in continuous-time systems is involved in the discussion. The additional deterministic control input introduced into the Gauss-Markov process leads to a system described by a stochastic differential equation with initial condition y(0) = 0, where u(t) is the unit step function and w(t) ~ N(0, q) is unity Gaussian white noise. Since the impulse response is h(t) = 6 e^{−βt} u(t), the transfer function is given by H(s) = 6/(s + β). The discrete model obtained from the continuous model can be represented accordingly, where the covariance Q_k remains the same as in Example 1. The mean value of the output can be evaluated based on the relation µ_x(t) = µ_f(t) H(0), and the error covariance is obtained in the same way as before. Figure 11 provides the estimation result in the case of the additional deterministic control input. The plot on the right provides a closer look at the time interval 0-10 s. The curve in black indicates the response due to the deterministic control input. The results are consistent with the theoretical result shown in Figure 2 in Example 1.
Example 3: A Larger Gain Is Applied to the System
A larger gain is applied to the system, as shown in Figure 12. The larger gain leads to a system described by a stochastic differential equation with initial condition y(0) = 0, where u(t) is the unit step function and w(t) is unity Gaussian white noise. The discrete model obtained from the continuous model can be represented accordingly, where the covariance is two times larger than in the previous two examples. The corresponding transfer function, the output mean values based on the relation µ_x(t) = µ_f(t) H(0), and the error covariance follow in the same way. Figure 13 provides the estimation result in the case of the larger gain. As a check of consistency, the result based on the DKF, P_k = 1.2353, matches very well with the result based on the CKF.
Example 4: The Integrated Gauss-Markov Process
The integrated Gauss-Markov process shown in Figure 14 appears frequently in engineering applications. By defining two state variables, x_1 = x and x_2 = ẋ, the corresponding continuous model, driven by white noise with spectral amplitude q, is the two-state system with
F = [0 1; 0 −β], G = [0 1]ᵀ.
The mean-square values for this integrated Gauss-Markov process can be derived in closed form, and as t → ∞ the error covariance matrix approaches a limiting form in which the mean-square value of x_1, namely the error covariance P_11 = E[x_1²], grows unbounded. The time history of the covariances can be obtained using numerical integration to solve the Riccati equation Ṗ = FP + PFᵀ + GQGᵀ − PHᵀR⁻¹HP. Figure 15 shows the propagation of the error covariance when no measurement is available for the integrated Gauss-Markov process, using the Lyapunov equation, which can be regarded as the Riccati equation with r → ∞. When the measurement is available, Figure 16 presents the propagation of the error covariance and Kalman gains for the integrated Gauss-Markov process using the Riccati equation of the KF. The steady-state error covariance and Kalman gain matrices for the integrated Gauss-Markov process obtained by the CKF, together with the parameters Φ_k and Q_k required to implement the DKF, can be computed accordingly. Figure 17 shows the time histories of the trajectories of the two states based on the KF, compared to the actual process, for the integrated Gauss-Markov process. The results from the DKF, given as P_k = P_∞, are very close to the corresponding result based on the CKF; the DKF results are thus shown to be consistent with those from the CKF.
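The covariance propagation and steady-state values for this example can be reproduced numerically as sketched below; β = 1, q = 2, r = 1 and a position-only measurement H = [1 0] are assumptions for illustration rather than values taken from the paper's tables.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

beta, q, r = 1.0, 2.0, 1.0                      # assumed parameters
F = np.array([[0.0, 1.0], [0.0, -beta]])        # x1 = x, x2 = xdot
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])                      # assumed: only x1 is measured

def riccati(t, p):
    P = p.reshape(2, 2)
    dP = F @ P + P @ F.T + G @ G.T * q - P @ H.T @ H @ P / r
    return dP.ravel()

sol = solve_ivp(riccati, (0.0, 20.0), np.zeros(4), rtol=1e-8)   # transient P(t)
P_transient = sol.y[:, -1].reshape(2, 2)

P_ss = solve_continuous_are(F.T, H.T, G @ G.T * q, np.array([[r]]))
K_ss = P_ss @ H.T / r                           # steady-state Kalman gain
print(P_transient)
print(P_ss, K_ss)
```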
Conclusions
This paper serves as a supplementary note on the Kalman filter for a better understanding of the topic without requiring a deep theoretical background in probability, stochastic processes, and system theory. The illustrative examples are employed to provide further insight into the analysis and design of the Kalman filter, both qualitatively and quantitatively, enabling readers to correctly interpret the theory, practice the algorithms, and design the computer code. The article explains the Kalman filter with illustrative examples so the reader can grasp the basic principles; a detailed description is accompanied by several examples offered for clear illustration and a better understanding of the topic.
The supporting examples employed in this work include the scalar Gauss-Markov process, followed by two extensions of the process, one in which an additional deterministic control input is introduced and one in which a larger gain is applied, and finally the integrated Gauss-Markov process. The main issues covered are the connection between the two types of Kalman filters, DKF and CKF, and the verification of results by theoretical and numerical approaches. A consistency check of the DKF and CKF results, including the mean value, mean-square value, Kalman gain, and theoretical covariance, is provided. Performance degradation caused by deviation from the optimal point due to parameter uncertainties was presented. Also involved are the unbounded errors caused by unavailable measurements and the bounded errors obtained when measurement updates are available. In addition, the influence on the estimation results when an additional control input is introduced, as well as when a larger gain is applied to the dynamical system, was illustrated.
This material is especially helpful for those with less experience or background in optimal estimation theory, helping them build a solid foundation for further study of the theory and applications of the topic.
Figure 2. Performance deterioration due to increase in r, where r = 1 and r = 0.01 are provided. The result based on the Riccati equation with r → ∞ coincides with that based on the Lyapunov equation.
Figure 3. Variations of (a) covariance and (b) Kalman gain as r increases.
Figure 4. State estimation for the first-order Gauss-Markov process in the case of very large measurement noise r (r → ∞).
Figure 5. State estimation for the first-order Gauss-Markov process in the case of larger measurement noise (r = 1): (a) state estimation; (b) a closer look.
Figure 6. State estimation for the first-order Gauss-Markov process in the case of smaller measurement noise (r = 0.01): (a) state estimation; (b) a closer look.
Figure 9. Three-dimensional surface and contour of the covariance due to deviation of q and r from the optimal point (in this case q = 2 and r = 1, as indicated by a circle symbol on the figure): (a) surface plot; (b) contour plot.
Figure 10. Block diagram of Example 2: the scalar Gauss-Markov process with a deterministic control input.
Figure 11. Estimation results for Example 2. The plot on the right provides a closer look at the time interval 0-10 s: (a) estimation results; (b) a closer look.
Figure 12. Block diagram of Example 3: the scalar Gauss-Markov process with a larger gain.
Figure 13. Estimation results for Example 3. The plot on the right provides a closer look at the time interval 0-10 s: (a) estimation results; (b) a closer look.
Figure 15. Propagation of the error covariance for the integrated Gauss-Markov process when no measurement is available, using the Lyapunov equation.
Figure 16. Propagation of the error covariance and Kalman gains for the integrated Gauss-Markov process using the Riccati equation of the KF: (a) error covariance; (b) Kalman gains.
Figure 17. Propagation of the two states using the KF for the integrated Gauss-Markov process: (a) first state; (b) second state.
Table 1. Implementation algorithm for the discrete Kalman filter (DKF) equations.
Table 3. Objectives and highlights of important issues to be delivered from the examples.
3. Larger random input (a larger gain is applied to the scalar Gauss-Markov process): influence on the estimation result due to a larger gain applied to the system.
4. Integrated Gauss-Markov process: unbounded errors when measurement is unavailable; bounded errors due to available measurement updates; consistency check of results for DKF and CKF, including mean-square value, Kalman gain, and theoretical covariance.
"Computer Science"
] |
Discussion Papers in Economics Revenue Comparison of Discrete Private-Value Auctions via Weak Dominance
We employ weak dominance to analyze both first-price and second-price auctions in the discrete private-value setting. We provide a condition under which the expected revenue from the second-price auction is higher than that of the first-price auction. We also provide implications for large auctions, including the "virtual" revenue equivalence.
Introduction
The comparison of the expected revenues from private-value first-price and second-price auctions (FPA and SPA henceforth) has been extensively analyzed, including the revenue equivalence result by Riley and Samuelson (1981) and Myerson (1981). It has also been shown that once some underlying assumptions are relaxed, not only does the revenue equivalence result not necessarily hold, but the comparison results become ambiguous.1 In addition, the analyses have often been limited to the two-player case, implying a lack of implications for large auctions.
In this paper, we revisit the revenue comparison of FPA and SPA. There are two departures from the literature. One is the use of the maximal elimination of weakly dominated bids - all weakly dominated bids are eliminated - for both FPA and SPA.2 It has typically been the case that while SPA is analyzed via the maximal elimination of weakly dominated bids, FPA is analyzed via Bayesian Nash equilibrium. It would be ideal to use the same solution concept to assess the differences purely stemming from the comparison of the two distinct institutions. Another departure is that we follow a seminal work by Dekel and Wolinsky (2003), which analyzes FPA via rationalizability, and adopt discrete sets of bids and values.3 One advantage of the adoption of weak dominance and the discrete setting is that we require minimal assumptions. In particular, our analysis allows asymmetry and an arbitrary number of players.
Our main result provides a condition under which SPA generates a higher expected revenue than FPA.4 The key is the comparison of the winning bids in FPA and SPA. The result on the winning bids in FPA is due to Battigalli and Siniscalchi (2003) and Dekel and Wolinsky (2003), who provide upper bounds for bids via weak dominance. The maximal elimination of weakly dominated bids implies that the winning bid in SPA is higher than that in FPA. This leads to the comparison of the highest bid in FPA and the second highest bid in SPA. Our condition concerns the case where the highest and the second highest bids are the same in SPA, in which case the price the winner pays in SPA is higher than that in FPA.
We also show that this result is asymptotically robust. Under the assumptions of (i) independently distributed values and (ii) the same highest value (whose probability is bounded below by an arbitrarily small number) for every player's support, the expected revenue from SPA is higher than that of FPA in large auctions.5 In addition, if the iterative maximal elimination of weakly dominated bids is used with the additional assumption of (iii) players' risk-aversion, the difference in the expected revenues converges to the smallest monetary unit, which we denote d, as the number of players increases.6 This implies the virtual revenue equivalence in large auctions for small d.
1 For example, Maskin and Riley (2000) and Kirkegaard (2012) analyzed the case of asymmetry. See Maskin and Riley (2000), Krishna (2010) and Milgrom (2004) for an overview of related studies.
2 We use interim weak dominance. That is, we apply weak dominance for the realization of each value. We hence use "bids" instead of "strategies." Note also that this does not imply an iterative procedure. We use the iterative maximal elimination of weakly dominated bids later.
3 Several other studies also analyze FPA via rationalizability, including Battigalli and Siniscalchi (2003), Cho (2005), and Robles and Shimoji (2012).
4 See also Kim (2013), who compares the revenues under the binary setting.
5 See also Yu (1999) for the equilibrium analysis of the symmetric case.
For asymmetric auctions, Kirkegaard (2012) identified sufficient conditions under which FPA generates a higher expected revenue compared to SPA. Note that our result has a different implication. Both the use of weak dominance for FPA and the discrete setting lead to this difference. We use an example to demonstrate that the discretized version of the condition in Kirkegaard (2012) and ours are not mutually exclusive.
Preliminaries
In this paper, we analyze the first-price auction (FPA) and the second-price auction (SPA). We utilize the private-value setting. The set of players is N = {1, . . . , n} with n ≥ 2. Player i's utility function is u_i : R → R, which is assumed to be strictly increasing. Before the auction starts, each player i ∈ N observes her value v_i ∈ V_i. Let v be a typical element of V = ∏_{j∈N} V_j. We use the subscript "-i" to represent player i's opponents. We assume that each player i ∈ N with any v_i ∈ V_i assigns a strictly positive probability to every v_{-i} ∈ V_{-i} and that the auctioneer assigns a strictly positive probability to every v ∈ V.
Each player i chooses her bid b_i ∈ B_i = {0, d, . . . , b̄_i − d, b̄_i}, where v̄ ≤ b̄_i for each i ∈ N and hence V_i ⊆ B_i. A player wins only if her bid is the highest. If there are multiple players who chose the highest bid, each one of them has an equal chance of winning. If player i ∈ N is the winner, the price she pays, s, is such that s = b_i for FPA and s = max_{j≠i} {b_j} for SPA. Player i's utility is u_i(v_i − s) if she wins the object, while it is u_i(0) otherwise.
Maximal Elimination of Weakly Dominated Bids
In this section, we solve both FPA and SPA via the maximal elimination of weakly dominated bids.
Winning Bids in FPA and SPA
In SPA, for each i ∈ N and v_i ∈ V_i, b_i = v_i is the only bid surviving the maximal elimination of weakly dominated bids (i.e., the weakly dominant bid). Given v ∈ V, the winning bid in SPA is hence max_{j∈N} {v_j}.
For FPA, Battigalli and Siniscalchi (2003, p. 41) and Dekel and Wolinsky (2003, Subsection 4.3) show that for each i ∈ N and v_i ∈ V_i, the highest bid which survives the maximal elimination of weakly dominated bids is strictly lower than v_i.7
Lemma 1 (Battigalli and Siniscalchi (2003) and Dekel and Wolinsky (2003))
For each i ∈ N and v_i ∈ V_i, the highest bid which survives the maximal elimination of weakly dominated bids in FPA is max{v_i − d, 0}.
We then have the following result.
Corollary 1
Given v ∈ V, the winning bid in FPA is max_{j∈N} max{v_j − d, 0}. Note that if all values are strictly positive, the expression is simply max_{j∈N} {v_j − d}. Corollary 1 leads to the following result.
Lemma 2
Given v ∈ V, the winning bid in SPA is weakly higher than the winning bid in FPA. If max_{j∈N} {v_j} > 0, the winning bid in SPA is strictly higher than the winning bid in FPA.
If all values are strictly positive, the latter is indeed the case.
7 Battigalli and Siniscalchi (2003) and Dekel and Wolinsky (2003) assume V_i = V_j for every i, j ∈ N (i.e., identical supports) with the lowest value equal to 0, and the former uses continuous bid and value spaces. Their insight remains valid even with heterogeneous supports.
Revenue Comparison
Lemma 2 implies that if the two highest values are the same and strictly higher than 0, the revenue of SPA is strictly higher than that of FPA. Let P represent the auctioneer's belief over players' values. The following result provides a condition under which this possibility of "ties" outweighs the other possibilities, leading to our main result.
Proposition 1
The expected revenue from SPA is strictly higher than that of FPA if the following condition holds. The expression on the left-hand side of the condition concerns the cases in which the first and second highest values are equal, implying that the revenue from SPA is higher than that of FPA. The expression on the right-hand side concerns the cases where the difference between the first and second highest values is at least 2d, implying that the revenue from FPA can be higher than that of SPA.8 Given that values are linear in d, note also that the expressions on both sides are linear in d, implying that the size of d does not matter for the result.
To visualize the implication of Proposition 1, Figure 1 plots the combinations of the two order statistics, the highest and the second highest values, denoted here v^(1) and v^(2) respectively.
A: v^(1) = v^(2), corresponding to the left-hand-side expression of Proposition 1. In these cases, (i) SPA leads to a higher revenue than FPA, and (ii) the difference in the revenues is d.
C: v^(1) − v^(2) ≥ 2d, corresponding to the expression on the right-hand side. These are the cases where (i) FPA generates a higher revenue than SPA and (ii) the difference in revenues is v^(1) − v^(2) − d. In the remaining region, where v^(1) − v^(2) = d, the two auctions yield the same revenue.
Figure 1: Visualization of Proposition 1.
The expression in Proposition 1 says that if the realizations in A are likely, SPA generates a higher expected revenue than FPA.
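The trade-off between regions A and C can be explored with a small Monte Carlo sketch. It takes i.i.d. uniform values purely as an illustration, sets d = 1, lets SPA bidders bid their values, and evaluates FPA at the Lemma 1 upper bound max{v − d, 0}, i.e., the case most favorable to FPA; it approximates the revenue comparison behind Proposition 1 rather than the displayed condition itself.

```python
import numpy as np

def revenue_gap(n, m, trials=200_000, seed=0):
    """Monte Carlo estimate of E[revenue SPA] - E[revenue FPA] with n players,
    values i.i.d. uniform on {0, 1, ..., m} (d = 1). SPA: winner pays the
    second-highest value; FPA: winner assumed to bid max{v - 1, 0}."""
    rng = np.random.default_rng(seed)
    v = rng.integers(0, m + 1, size=(trials, n))
    v.sort(axis=1)
    v1, v2 = v[:, -1], v[:, -2]          # highest and second-highest values
    spa = v2
    fpa = np.maximum(v1 - 1, 0)
    return (spa - fpa).mean()

for n in (40, 80, 120):
    print(n, revenue_gap(n, m=100))      # the gap turns positive once n is large relative to m
```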
Large Auctions
We now provide a condition under which Proposition 1 holds for sufficiently large n. Given n, let q_n(ñ) be the probability that n̄ = ñ, where ñ ∈ {0, . . . , n}. We need the following assumptions:
Assumption 1 Players' values are independently distributed.
The expression on the left-hand side in Proposition 1 contains the probabilities that the highest and second highest values are the same. The following result identifies a condition under which the expression in Proposition 1 holds as n → ∞.9
Proposition 2 Given Assumptions 1 and 2, if the corresponding condition on q_n holds, Proposition 1 holds for sufficiently large n.
The condition implies that as n becomes large, there is a sufficient number of players who have v̄ in their support, and the chance that there is only one player whose value is v̄ diminishes. Note that if v̄_i = v̄ for each i ∈ N, then n̄ = n and hence q_n(n) = 1.
Corollary 2 Given Assumptions 1 and 2, if v̄_i = v̄ for each i ∈ N, Proposition 1 holds for sufficiently large n.
As an example, consider the case of V_i = {0, d, . . . , v̄} for each i ∈ N, where each player's value is independently and uniformly distributed. Let |V_i| = m + 1 (i.e., v̄ = md), so that the probability attached to each value is 1/(m + 1). The expression in Proposition 1 can then be computed explicitly and simplified. For large m, it is sufficient for the number of players, n, to be approximately 88.2% of m to maintain (1).
9 The proof in Yu (1999, Proposition 13) for the symmetric case carries the same observation; i.e., the probability that the first and second highest values coincide converges to one as n → ∞.
Iterative Maximal Elimination of Weakly Dominated Bids
The result for FPA is not as sharp as that of SPA. This is because we have focused on (one round of) the maximal elimination of weakly dominated bids and strictly increasing utility functions. If we use the iterative maximal elimination of weakly dominated bids and weakly concave (still strictly increasing) utility functions, we obtain a condition under which the uniqueness result is achieved for FPA. This is a variant of the results from Dekel and Wolinsky (2003) and Robles and Shimoji (2012) which use rationalizability. This result leads to the virtual revenue equivalence.
Uniqueness in FPA
In this subsection, we show a condition under which each player i ∈ N with v_i ∈ V_i has a unique bid surviving the iterative maximal elimination of weakly dominated bids. Let player i ∈ N be such that v̄_i = v̄, and let ṽ = max_{j≠i} {v̄_j} (i.e., the second highest upper bound).
We need the following assumption:
Assumption 3 For each i ∈ N, u_i is weakly concave.
We then have the following result:
Proposition 3 Suppose the corresponding condition holds. Then,
• if v̄ − ṽ ≤ d, the only bid which survives iterative weak dominance is b_i = max{v_i − d, 0} for each i ∈ N and v_i ∈ V_i, and
• if v̄ − ṽ ≥ 2d, the bid surviving iterative weak dominance is likewise unique for each i ∈ N and v_i ∈ V_i.
Note that the right-hand-side expression in the condition is at least 1/2. The result is visualized in Figure 2.
We already know that for v_i ∈ {0, d}, b_i = 0 is the unique weakly dominant bid. With the assumption that u_i is weakly concave, we can also show that for any v_i ≥ 2d, b_i = 0 is eliminated. Consider v_i ≥ 3d and compare b_i = d and b_i = 2d:
1. b_i = 2d may win even if b_i = d does not win (but not vice versa). In particular, if the opponents' highest bid is 2d, the expected utility from b_i = 2d is strictly positive while it is zero for b_i = d.
2. If b_i = d wins, the opponents' highest bid is either d or 0. This is the only case where the expected utility from b_i = 2d may be lower than that of b_i = d. In this particular scenario, given the argument above, b_j = d for every j ∈ N with v_j ≥ 2d, and b_j = 0 for every j ∈ N with v_j ∈ {0, d}. The corresponding utility from winning with b_i = d is u_i(v_i − d). The left-hand-side expression in the condition corresponds to the probability that b_i = d wins. Note that b_i = 2d also wins in this case, and the corresponding utility is u_i(v_i − 2d). The condition in Proposition 3 thus states that the expected return from b_i = 2d is higher than that of b_i = d when the opponents' highest bid is either d or 0. This condition is sufficient as long as u_i is weakly concave (Jensen's inequality). The same argument is applied repeatedly to obtain the result.
Large Auctions
Consider again the case of large auctions with independent distributions from the previous section. Recall the definition of ṽ in the previous subsection: the second highest upper bound. Given n, let r_{v_i,n}(ñ) be the corresponding probability from the viewpoint of player i. Instead of Assumption 2, we require the following:
Assumption 4 There exists τ ∈ (0, 1) such that τ_i(v_i ≥ ṽ) ≥ τ for each i ∈ N.
We then have the following result.
Proposition 4 Given Assumptions 1, 3 and 4, if the corresponding condition holds for each i ∈ N with v_i ≥ 3d, the result of Proposition 3 holds for sufficiently large n.
For example, the condition holds if r_{v_i,n}(ñ) is lower for smaller ñ's. Again, if v̄_i = v̄ for each i ∈ N (i.e., r_{v_i,n}(n − 1) = 1), the result immediately holds.
Given Corollaries 2 and 3 above, we have the following result. Proposition 5: Given Assumptions 1, 2, 3 and 4, if v̄_i = v̄ for each i ∈ N, the difference of the expected revenues from FPA and SPA via iterative weak dominance converges to d as n → ∞.
This can be seen as the virtual revenue equivalence result for small d in large auctions.
Discussion
In this section, we discuss (i) the order dependence of weak dominance, (ii) the upper bound identified in Lemma 1 and (iii) the comparison of our result to Kirkegaard (2012).
Order Dependence
The emphasis on "maximal" elimination is due to the possibility of the order dependence of weak dominance (see, for example, Marx and Swinkels, 1997). That is, different orders of elimination could lead to different predictions. As an example, consider the two-player SPA with suitably chosen supports for the two players' values. First, eliminate all bids except b_1 = v_1 for player 1 with v_1 ∈ V_1 at the first step. Then, for player 2, eliminate every b_2 ≥ v_1 at the second step. No further elimination occurs. In this case, the second highest bid can be higher or lower than v_2, and the price the winner (player 1) pays cannot be uniquely identified for any v_2 ∈ V_2. Note that this applies not only to our result, but also to previous studies which use weak dominance for SPA.
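A tiny brute-force check makes the order-dependence point concrete. The supports below (V_1 = {3}, V_2 = {1, 2}) and the integer bid grid are hypothetical choices made only for this illustration; utilities are linear and ties are split evenly. After player 1 is pinned to b_1 = v_1 and only player 2's bids b_2 ≥ v_1 are removed, every remaining bid of player 2 loses against b_1 = v_1 and yields the same payoff, so no further weak-dominance elimination is possible and the price is not uniquely determined.

```python
v1 = 3                   # player 1's value; V_1 = {3} (hypothetical)
V2 = [1, 2]              # player 2's possible values (hypothetical)
bids = range(0, 5)       # common bid grid {0, 1, 2, 3, 4}

def u2(v2, b2, b1):
    """Player 2's payoff in the SPA: the winner pays the opponent's bid."""
    if b2 > b1:
        return v2 - b1
    if b2 == b1:
        return 0.5 * (v2 - b1)   # fair tie-break
    return 0.0

# Alternative elimination order: player 1 is fixed at b1 = v1,
# then only the bids b2 >= v1 are removed for player 2.
surviving_2 = {v2: [b for b in bids if b < v1] for v2 in V2}

def further_eliminations(v2, candidates, b1):
    """With a single surviving opponent bid, weak dominance reduces to a strict improvement."""
    return [b for b in candidates
            if any(u2(v2, b2, b1) > u2(v2, b, b1) for b2 in candidates if b2 != b)]

for v2, cand in surviving_2.items():
    print(f"v2 = {v2}: surviving bids {cand}, further eliminations: {further_eliminations(v2, cand, v1)}")
# No further eliminations occur, so the second-highest bid (hence the price paid by
# player 1) is not pinned down; maximal elimination would instead leave only b2 = v2.
```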
Upper Bound in FPA
where the second inequality comes from Jensen's inequality, b_i = 1 weakly dominates b_i = 2 at the first step. Given the result from the first step, we consider v_i = 3 at the second step. Comparing the expected utility from b_i = 1 with that from b_i = 2, note that if α + β is close to one, the difference is strictly positive independent of b_j(3). In this case, b_i = 1 strictly dominates b_i = 2 for v_i = 3.
On Kirkegaard (2012)
Kirkegaard (2012) identifies two conditions under which FPA leads to a higher expected revenue than SPA. There are several reasons why our result is different from that of Kirkegaard (2012). One reason is that values and bids are discrete in our setting while they are continuous in Kirkegaard (2012); a tie is not possible in Kirkegaard (2012). Another reason is that while our focus is on weak dominance, Kirkegaard (2012) uses Bayesian Nash Equilibrium for FPA. We now demonstrate that, even if (a discretized version of) a condition in Kirkegaard (2012) is satisfied in our discrete setting, it is possible that our condition still holds. Kirkegaard (2012) considers the case of (i) two players and (ii) continuous bid and value spaces. Take V_i as a closed interval for i ∈ {1, 2}. Kirkegaard (2012) assumes that the supports for values are such that 0 ≤ v_2 ≤ v_1 and v̄_2 < v̄_1 (player 1 is strong and player 2 is weak). Let F_i(v) denote the distribution of player i's value. Kirkegaard (2012) assumes condition (4); that is, F_1 dominates F_2 in terms of not only the reverse hazard rate but also the hazard rate.
The sufficient condition in Kirkegaard (2012, Expression (9)) is stated as (5). Note that in the discrete setting, the corresponding r(v_2) does not generically exist for each v_2 ∈ V_2. Thus, we instead require (5) to hold for every v_1 ∈ {v_2, . . . , v̄_1}.
We also look at the discretized version of (4). We now turn to an example in the discrete setting. Consider the following example: ,v 2 + d} and where (i) v 2 > 0 and (ii) κ is a positive integer. Note thatv 1 =v 2 + d and hence V 2 ⊂ V 1 . The players' values are independently distributed. Let We also assume that ε is sufficiently small. The first inequality in (4) holds with the equality for v 2 and with the strict inequality for every v ∈ V 2 \{v 2 }. The second inequality in (4) holds with strict inequality for every v ∈ V 2 . The inequality in (5) holds with the strict inequality for v 2 and 12 Kirkegaard (2012) has two sufficient conditions. We only focus on one of them.
with the equality for every v_1 ∈ V_1\{v_2} and v_2 ∈ V_2\{v_2}. Consider the expression in Proposition 1. For sufficiently small ε, the sum of probabilities on the left-hand side is close to one, implying that the inequality is satisfied.
A Proof of Proposition 1
The highest possible revenue in FPA is max{max i∈N {v i − d}, 0}. The revenue in SPA is the second highest value. Note that if there are multiple highest values which are strictly higher than 0, the revenue from SPA is strictly higher than that of FPA.
Let v_(1) be the highest value and v_(2) the second highest value. The expected revenue from SPA (left-hand side) is strictly higher than that of FPA (right-hand side) under the condition stated in Proposition 1.
B Proof for Proposition 2
Given n and ñ where 2 ≤ ñ ≤ n, take the probability that the second highest value is v̄, where {i_1, . . . , i_η} ⊆ N. A lower bound on this expression can then be derived. Note that the last two expressions converge to zero as ñ → ∞ (Corollary 2). A lower bound for the left-hand expression in Proposition 1 (without d) hence follows.
Let ln(y) = (n − 2)/(i − 1), and we have e^{ln(y)} − 1/e^{ln(y)} ≥ 2 ⇔ y² − 2y − 1 ≥ 0, where (i) y = 1 + √2 if the expression is zero and (ii) the expression is strictly increasing for y > 1. Note that ln(1 + √2) ≈ 0.8813.
D Proof for Proposition 3
First Step:
• For each i ∈ N and v_i ∈ V_i\{0}, every b_i ≥ v_i is weakly dominated. None of them leads to a utility strictly higher than u_i(0), while b_i < v_i secures u_i(0) or higher (e.g., when the opponents' highest bid is equal to b_i).
• For each i ∈ N with v_i = 0, every b_i ≥ d is weakly dominated. Every b_i ≥ d leads to a utility of u_i(0) or less (e.g., every opponent bids zero) and b_i = 0 guarantees u_i(0).
• For each i ∈ N with v_i ≥ 2d, b_i = d weakly dominates b_i = 0. The only way b_i = 0 wins is that every opponent bids zero as well. In this case, the expected utility with b_i = 0 is (1/n) u_i(v_i) + ((n − 1)/n) u_i(0), while it is u_i(v_i − d) with b_i = d. Since n ≥ 2, we have (1/n) u_i(v_i) + ((n − 1)/n) u_i(0) ≤ u_i(v_i − d) for any weakly concave u_i. For the other possibilities, b_i = d leads to a utility weakly higher than that of b_i = 0 (i.e., u_i(0)). In particular, if the opponents' highest bid is d, the inequality is strict.
The sets of bids surviving the maximal elimination of weakly dominated bids are (i) {max{v_i − d, 0}} for v_i ∈ {0, d, 2d} (note that they are unique) and (ii) {d, . . . , v_i − d} for v_i ≥ 3d.
"Economics"
] |
Impact of recent (g − 2)μ measurement on the light CP-even Higgs scenario in general Next-to-Minimal Supersymmetric Standard Model
The General Next-to-Minimal Supersymmetric Standard Model (GNMSSM) is an attractive theory that is free from the tadpole problem and the domain-wall problem of Z3-NMSSM, and can form an economic secluded dark matter (DM) sector to naturally predict the DM experimental results. It also provides mechanisms to easily and significantly weaken the constraints from the LHC search for supersymmetric particles. These characteristics enable the theory to explain the recently measured muon anomalous magnetic moment, (g − 2)μ, in a broad parameter space that is consistent with all experimental results and at same time keeps the electroweak symmetry breaking natural. This work focuses on a popular scenario of the GNMSSM in which the next-to-lightest CP-even Higgs boson corresponds to the scalar discovered at the Large Hadron Collider (LHC). Both analytic formulae and a sophisticated numerical study show that in order to predict the scenario without significant tunings of relevant parameters, the Higgsino mass μtot ≲ 500 GeV and tan β ≲ 30 are preferred. This character, if combined with the requirement to account for the (g − 2)μ anomaly, will entail some light sparticles and make the LHC constraints very tight. As a result, this scenario can explain the muon anomalous magnetic moment in very narrow corners of its parameter space.
Introduction
The latest measurement of the muon anomalous magnetic moment a µ ≡ (g − 2) µ /2 announced by the Fermilab National Accelerator Laboratory (FNAL) [1] is in full agreement with the Brookhaven National Laboratory (BNL) E821 result [2]. The combined experimental average is given by equation ( In addition, the Run-1 results in Fermilab also imply that a more thorough analysis in future experiments will most probably substantiate the excess of a µ in 5σ discovery level. In recent years, this situation has inspired continuous attention to a µ . In particular, it was widely conjectured that the anomaly may arise from new physics beyond the SM (see, e.g., ref. [24] and the references therein). Among a variety of theories that can account for the anomalous magnetic moment, supersymmetry (SUSY) is especially promising due JHEP03(2022)203 to its elegant structure and natural solutions to many puzzles in the SM, such as the hierarchy problem, the unification of different forces, and the dark matter (DM) mystery [25][26][27][28]. Studies of low-energy supersymmetric models have indicated that the source of the significant deviation can be totally or partially contributed to smuon-neutralino or sneutrino-chargino loop effects [24,. Although SUSY has multiple theoretical advantages, it has been strongly restricted by DM direct detection (DD) experiments, such as XENON-1T [80,81] and PandaX-4T [82,83] experiments, as well as LHC sparticle searches [84][85][86][87][88][89][90][91]. As a consequence, some of its economical realizations become unnatural for electro-weak symmetry breaking in interpreting the anomalous magnetic moment. In the Minimal Supersymmetric Standard Model (MSSM) with R-parity conservation [26,[92][93][94], the lightest neutralino is usually the lightest supersymmetric particle (LSP), and thus a viable DM candidate. In order to fully account for the DM abundance measured by the Planck experiment [95], it must be Bino-dominated when it is lighter than 1 TeV [96]. In this case, the XENON-1T experiment and the LHC experiments prefer that the magnitude of the Higgsino mass parameter, µ, be larger than about 500 GeV in explaining the discrepancy in the 2 σ level [78]. This implies a fine-tuning at the order of 1 % in predicting m Z [97]. This conclusion may be understood from the features of the DM annihilation mechanisms in the MSSM: • In the case that the DM co-annihilates with Wino-dominated particles to obtain the measured abundance, the Higgsino mass prefers to be much larger than the DM mass, which was explained in appendix A of this work. In addition, the SUSY explanation of the deviation together with the results of the LHC search for SUSY can further restrict µ values in a significant way. 1 • In the case that the DM co-annihilates with Higgsino-dominated particles to obtain the measured abundance, the DM DD experiments have required |µ| to be as large as several TeV, because DM-nucleon scattering rates are enhanced by a factor of 1/(1 − m 2 χ 0 1 /µ 2 ) 2 as m 2 χ 0 1 /µ 2 → 1 (see discussions in appendix A).
• In the case that the DM co-annihilates with Sleptons to obtain the measured abundance, the LHC's searches for electroweakinos require Higgsinos to be massive because Wino-and Higgsino-dominated electroweakinos can decay into Sleptons and thus enhance the production rate of lepton signals at the LHC [67].
• In the case that the DM co-annihilates with Squarks or Gluinos to obtain the measured abundance, the LHC's searches for colored sparticles require the DM mass to be heavier than 1 TeV [98]. 1 In carrying out this study, we also investigated the characteristics of MSSM and Z3-NMSSM in a similar way to this work. We found that the co-annihilations with Wino-and Slepton-dominated particles are the main annihilation mechanisms of the bino-dominated DM , i.e., their corresponding Bayesian evidences are the largest in comparison with the other annihilation mechanisms. We also found that the DM preferred to be heavier than about 300 GeV and |µ| 500 GeV when we implemented detailed Monte Carlo simulations for the constraints from the latest LHC searches for electroweakinos. These observations significantly improve the conclusions of [78] and [79], since more LHC analyses were considered carefully.
• In the case that the DM obtains the measured abundance by the SM-like Higgs funnel or Z funnel, the LHC's searches for eletroweakinos prefer massive Higgsinos because the DM is relatively light and the LHC's constraints on sparticle mass spectrum are rather strong [99].
• The case in which the DM obtains the measured abundance by the resonance of heavy doublet Higgs bosons is rare. One reason for this is that this case requires significant tuning of SUSY parameters to realize the correlation |mχ0 1 | m A /2, where m A denotes the mass of CP-odd Higgs bosons in MSSM. Another reason is that the LHC's searches for exotic Higgs bosons prefer the bosons to be very massive [100]. Consequently, the DM is also massive.
Next, we consider the Next-to-Minimal Supersymmetric Standard Model with a Z 3 symmetry (Z 3 -NMSSM) [101,102]. This model extends MSSM by a gauge-singlet Higgs superfieldŜ, and has the advantage that either a Bino-dominated (in most physical cases) or a Singlino-dominated neutralino can act as a viable DM candidate [99,[103][104][105][106][107][108][109][110][111][112][113][114]. The Bino-dominated DM candidate differs from the MSSM prediction mainly in that it could coannihilate with a Singlino-dominated neutralino to obtain the measured abundance [106]. This situation, however, occurrs in very narrow parameter space characterized by |2κµ/λ| |M 1 |, moderately large λ and κ, and |µ| 300 GeV [106,112]. In addition, a SUSY µ is insensitive to Yukawa coupling λ because the Singlino field has no mixing with Wino and Bino fields, and it does not couple directly to the muon lepton. As a result, the formulae to calculate a SUSY µ in NMSSM are same as those at the lowest order of the mass-insertion approximation in MSSM [67]. Considering these features, we expected that the Binodominated DM case in the Z 3 -NMSSM and MSSM would not show significant differences in explaining the discrepancy. The properties of the Singlino-dominated DM are determined by λ, the Higgsino mass µ (denoted by µ tot in this work), and tanβ for a given DM mass [115]. A relatively large λ can increase the DM-nucleon scattering cross-sections, and so far, λ 0.3 is disfavored by the XENON-1T experiments [112,115]. This conclusion implies that the traditional DM annihilation channels,χ 0 1χ 0 1 → tt, h s A s , hA s , where t, h, h s , and A s denote the top quark, SM-like Higgs boson, and singlet-dominated CPeven and CP-odd Higgs bosons, respectively, can not be fully responsible for the measured abundance [112]. As a result, the DM is more likely to obtain the abundance by means of the co-annihilation with the Higgsino-dominated particles, which corresponds to a correlated parameter space of 2|κ| λ with λ 0.1. The Bayesian evidence in this case is heavily suppressed owing to the very narrow parameter space, which entails a certain degree of finetuning to meet the DM experiments [115]. In addition, the interpretation of the magnetic moment causes the Z 3 -NMSSM to be further restricted by the updated searches for SUSY at the LHC with 139 f b −1 data. In particular, the region of tan β 30 in figure 7 of [99] has been excluded because both the DM and Higgsinos are relatively light. Such a situation, as we will show below, was frequently encountered in this work.
The dilemma of MSSM and Z 3 -NMSSM inspired us to study the general Next-to-Minimal Supersymmetric Standard Model (GNMSSM) [116]. Unlike Z 3 -NMSSM, GN-MSSM usually predicts the Singlino-dominated neutralino as a viable DM candidate due JHEP03(2022)203 to its following specific theoretical feature: the properties of the Singlino-dominated DM are described by λ, µ tot , mχ0 1 , tanβ, and κ, among which the first four parameters determine the DM couplings to nucleon, and κ mainly dominates the DM couplings to singletdominated Higgs bosons [116]. Consequently, singlet-dominated particlesχ 0 1 , h s , and A s can constitute a secluded DM sector, where the measured DM abundance can be achieved by the h s /A s -mediated resonant annihilation into SM particles or through the annihilation process ofχ 0 1χ 0 1 → h s A s by adjusting the value of κ. Given that this sector interacts with SM matters only through weak singlet-doublet Higgs field mixing, the DM-nucleon scattering rate can be naturally suppressed by λv/µ tot when λ is small [116]. Since the parameters need no significant tuning to be consistent with the constraints from the DM experiments, the corresponding Bayesian evidence is significantly larger than that for the Bino-dominated DM case [116]. Other characteristics of the theory include that, due to the very weak couplings of the Singlino-dominated DM to other sparticles, heavy sparticles initially prefer to decay into next-to-LSP (NLSP) or next-next-to-LSP (NNLSP). As a result, their decay chains are lengthened and their signals become complicated. In addition, the DM as LSP may be moderately heavy, since the annihilationχ 0 1χ 0 1 → h s A s requires m DM > (m hs + m As )/2. These features weaken significantly the limitations from the LHC's searches for SUSY. Specifically, in our recent work, we studied a SUSY µ in a simplified version of GNMSSM, which we called µ-extended NMSSM (µNMSSM) [67]. We found that, by presuming the DM and LHC experiments are satisfied, µNMSSM can explain the discrepancy in a broad parameter space where Higgsinos are lighter than about 500 GeV.
In our previous work [67], we considered only the h_1 scenario, in which the lightest CP-even Higgs boson corresponds to the SM-like Higgs boson discovered at the LHC. A typical feature of the NMSSM is that the next-to-lightest CP-even Higgs boson may also act as the SM-like Higgs boson, which has been dubbed the h_2 scenario in the literature. Thus, a full understanding of the GNMSSM necessitates the study of the discrepancy in the h_2 scenario. In particular, given that specific configurations of Higgs parameters are needed to predict m_h1 < m_h2 ≈ 125 GeV and an SM-like h_2, it is conceivable that the h_2 scenario suffers tighter experimental constraints than the h_1 scenario. This leaves in doubt the idea that the h_2 scenario can explain the discrepancy. As a result, a careful examination of the experimental constraints on the h_2 scenario is needed, which is the focus of this work.
This work is organized as follows. In section 2, we briefly introduce the basics of GNMSSM and the SUSY contribution to the moment. In section 3, we perform a sophisticated scan over the broad parameter space of µNMSSM, and show the features of the theory in explaining the discrepancy. By using specific Monte Carlo simulations, we also comprehensively study the constraints from the LHC's searches for SUSY. In section 4, we concentrate on the GNMSSM, which has much broader parameter space than µNMSSM, and perform a similar study to those in section 3. Lastly, we draw conclusions in section 5.
2 This conclusion may also be understood intuitively as follows: the lightness of h1 and the premise that h1 and h2 are singlet-dominated and SM-like, respectively, lead to the tendency that some parameters in the Higgs sector are relatively small. Consequently, light sparticles are usually predicted. This phenomenon is similar to the well-known fact that the natural result for electroweak-symmetry breaking prefers |µ| 500 GeV [97].
Theoretical preliminaries
It is well-known that the superpotential of the popular Z 3 -NMSSM is given by [101,102] where the Yukawa terms W Yukawa are the same as those in MSSM, Higgs superfields, and λ, κ are dimensionless couplings coefficient parameterizing the Z 3 -invariant trilinear terms. GNMSSM differs from Z 3 -MSSM in that its superpotential does not respect the Z 3 symmetry, and thus it contains the following most general renormalizable couplings: Historically, the terms characterized by the bilinear mass parameters µ, µ and the singlet tadpole parameter ξ were introduced to solve the tadpole problem [101,117] and the cosmological domain-wall problem of Z 3 -NMSSM [118][119][120], and the ξ-term can be eliminated by shifting theŜ field and redefining the µ parameter [121]. 3 The bilinear terms could stem from an underlying discrete R symmetry, Z R 4 or Z R 8 , after supersymmetry breaking, and be naturally at the electroweak scale [118,[121][122][123][124]. Note that these extra terms can change the properties of Higgs bosons and neutralinos (in comparison with the Z 3 -NMSSM prediction) and significantly alter the phenomenology of the theory. As emphasized in the introduction, this is one of main motivations of this work.
Higgs sector of GNMSSM
Corresponding to the potential in eq. (2.2), the soft-breaking terms of the GNMSSM are given by [101,102] where H u , H d and S denote the scalar components of the Higgs superfields. The softbreaking mass parameters m 2 Hu , m 2 H d and m 2 S can be fixed by solving the conditional equations for minimizing the scalar potential and then expressing them in terms of the vacuum expectation values (vevs) of the scalar fields: GeV. As usual, the ratio of the two Higgs doublet vevs is defined as tan β ≡ v u /v d , and an effective µ-parameter of MSSM is generated by Consequently, the Higgs sector is described by ten free parameters: tan β, µ eff , the Yukawa couplings λ and κ, the soft-breaking trilinear coefficients A λ and A κ , the bilinear mass parameters µ and µ , and their soft-breaking parameters m 2 3 and m 2 S . The GNMSSM predicts three CP-even Higgs bosons h i = {h, H, h s }, two CP-odd Higgs bosons a i = {A H , A s }, and a pair of charged Higgs bosons H ± = cos βH ± u + sin βH ± d . In We also obtain the following approximations as shown in equation (2.10): These formulae reveal the following facts: • Parameters A λ and m 3 mainly determine the heavy Higgs boson masses, and they have little impact on the other Higgs bosons' mass spectrums.
• m hs and m As depend on parameters λ, κ, µ eff , µ tot , A κ and µ . In addition, m As also depends on m S . This implies that, even when λ, κ, µ eff , µ tot , and µ are fixed, m hs and m As can still vary freely by the adjustment of A κ and m S , respectively. This situation is different from that of Z 3 -NMSSM, where µ tot ≡ µ eff , µ = 0, and m S = 0, and consequently, the masses of singlet fields are correlated [99].
• The most important feature is that the latest LHC Higgs data have imposed an upper limit of about 40 GeV on |λµ tot | in the tremendously large tan β limit, since |λµ tot | may induce a sizable V S h . Furthermore, since a small λ is preferred by DM DD experiments in the Singlino-dominated DM case, |µ eff |, |µ tot | and |µ tot /µ eff | in eq. (2.8) are unlikely to be exceedingly large. Otherwise, strong cancellations among the different terms on the right side of eq. (2.8) are needed to predict m hs < 125 GeV, which makes the theory fine-tuned.
Given that too many parameters are involved in the Higgs sector, the h 2 scenario is studied using the following strategy. First, we assume the charged Higgs bosons to be JHEP03(2022)203 very massive by setting A λ = 2 TeV and m 3 = 1 TeV, following the discussion above. Second, we investigate the characteristics of µNMSSM, where µ and m S are taken to be zero. 4 This model contains most of the key features of the GNMSSM [116], and thus has pedagogical significance. Finally, we concentrate on the GNMSSM by treating µ, µ and m S as variables, and investigate its features in explaining the discrepancy.
Neutralino sector of GNMSSM
The neutralino sector in the GNMSSM consists of the mixtures among the Bino fieldB, the Wino fieldW , the Higgsino fieldsH 0 d ,H 0 u and the Singlino fieldS. Its mass matrix in the basis (−iB, −iW ,H 0 d ,H 0 u ,S) takes the following form [101], as shown in equation (2.11): where M 1 and M 2 are gaugino soft-breaking masses, and µ tot represents the Higgsino mass. This matrix can be diagonalized by a rotation matrix N , and subsequently the mass eigenstates are expressed by equation (2.12): whereχ 0 i (i = 1, 2, 3, 4, 5) are labeled in an ascending mass order. N i3 and N i4 characterize theH 0 d andH 0 u components inχ 0 i , and N i5 denotes the Singlino component. In the case of very massive gauginos and |m 2 χ 0 1 − µ 2 tot | λ 2 v 2 , the following approximations are obtained for the Singlino-dominatedχ 0 1 [131][132][133], given by equations (2.13): 4 Note that µNMSSM as the most economical realization of GNMSSM could arise from the Z3-NMSSM when it was embedded into canonical superconformal supergravity in the Jordan frame, and had applications to the inflation in the early universe [126][127][128][129][130]. This is an interesting realization of supersymmetry in particle physics.
These approximations indicate that the mass of the Singlino-dominated DM is determined by the parameters λ, κ, µ eff , µ tot , and µ . In particular, λ and κ are two independent parameters in predicting |mχ0 1 | < |µ tot |. This situation is different from that of the Z 3 -NMSSM, where µ ≡ 0, µ tot ≡ µ eff , and consequently, |κ| must be less than λ/2 to predict the Singlino-dominated neutralino as the LSP [101]. They also indicate that, for fixed tan β, the Higgsino compositions inχ 0 1 depend only on λ, µ tot , and mχ0 1 . Therefore, it is convenient to take the three parameters and κ as theoretical inputs in studying theχ 0 1 's properties, where κ determines the interactions among the singlet-dominated particles. This characteristic contrasts with that of the Z 3 -NMSSM, which only needs the three input parameters of λ, µ tot , and any of mχ0 1 or κ to describeχ 0 1 properties [115]. These differences imply that the singlet-dominated particles may form a secluded DM sector [134], which has the following salient features: • The Singlino-dominated DM can achieve the correct abundance by the process χ 0 1χ 0 1 → h s A s , through adjusting the value of κ, or by the h s /A s -mediated resonant annihilation into SM particles.
• Since the secluded sector communicates with the SM sector only through the weak singlet-doublet Higgs mixing, the interaction between the DM and nucleus is naturally feeble when λ is small.
We added that, even when λ, κ, µ eff and µ tot are fixed, m 0 χ 1 can still vary freely through the tuning of µ . In addition to the processχ 0 1χ 0 1 → h s A s and h s /A s resonant annihilation, the DM has other annihilation channels for obtaining the measured abundance [67], e.g., coannihilation with Higgsino-dominated electroweakinos and/or sleptons, and resonant Z/h annihilations. Owing to these features, the GNMSSM has a broad parameter space consistent with the current DM experimental results. As a result, it is the Singlino-dominated LSP, instead of the Bino-dominated LSP, that is most favored to be a viable DM candidate.
Muon g-2 in the GNMSSM
The SUSY source of the muon anomalous magnetic moment a_µ^SUSY mainly includes loops with a smuon and a neutralino, as well as those with a muon-type sneutrino and a chargino [29-32]. The one-loop contributions to a_µ^SUSY in the GNMSSM are given by [30, 67] in equations (2.14).
where i = 1, · · · , 5, j = 1, 2 and l = 1, 2 denote the neutralino, chargino and smuon index, respectively.This gives us equations (2.15): 15) where N is the neutralino mass rotation matrix, X the smuon mass rotation matrix, and U c and V c the chargino mass rotation matrices defined by U c * M C V c † = m diag χ ± . F (x)s are the loop functions of the kinematic variables defined as x il ≡ m 2 /m 2 νµ , and take the following form given by equations (2.16)- (2.19): for the mass-degenerate sparticle case. In practice, it is instructive to understand the features of a SUSY µ through the mass insertion approximation [31]. Specifically, for the lowest order of the approximation, the contributions to a SUSY µ can be classified into four types: "WHL", "BHL", "BHR", and "BLR", where W , B, H, L, and R stands for Wino, Bino, Higgsino, and left-handed and right-handed Smuon fields, respectively. These are from the Feynman diagrams involving respectively, and take the following form [31,33,34] given by equations (2.20)-(2.23): where the loop functions are given by
and they satisfy f C (1, 1) = 1/2 and f N (1, 1) = 1/6. Note that the Singlino fieldS can also enter the insertions. Because both theW −S andB 0 −S transitions and theμSμ L,R couplings vanish, the Singlino field only appears in the "WHL", "BHL" and "BHR" loops by two more insertions at the lowest order, which correspond to theH 0 d −S andS −H 0 d transitions in the neutralino mass matrix in eq. (2.11). Since a small λ is preferred by the DM physics, the Singlino-induced contributions are never significant [67]. Although in this case the GNMSSM prediction of a SUSY µ is roughly the same as that of the MSSM, except that the µ parameter of the MSSM should be replaced by µ tot , the two models predict different DM physics and different sparticle signals at the LHC. Thus, they are subject to different theoretical and experimental constraints. It should also be noted that, although there is a prefactor of the Higgsino mass µ in the expression of the "WHL", "BHL", and "BHR" contributions, the involved loop functions approach zero with the increase of |µ|, and consequently these contributions depend on µ in a complex way. By focusing on several typical patterns of sparticle mass spectrum with a positive µ, we found that the "WHL" contribution decreases monotonously as µ increases, while the magnitude of the "BHL" and "BHR" contributions increases when µ is significantly smaller than the slepton mass and decreases when µ is larger than the slepton mass. In addition, the "WHL" contribution is usually much larger than the other contributions ifμ L is not significantly heavier thanμ R .
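To see why a small tan β forces the relevant sparticles to be light, one can use the rough rule of thumb for the SUSY contribution in the limit of a common sparticle mass M_SUSY that is widely quoted in the literature. The coefficient 13 × 10⁻¹⁰ and the assumed size of the deviation below are indicative values taken from that general literature, not results or formulae of this paper.

```python
def a_mu_susy(m_susy_gev, tan_beta, sign=+1):
    """Rule-of-thumb SUSY contribution for a common sparticle mass (indicative only)."""
    return 13e-10 * (100.0 / m_susy_gev) ** 2 * tan_beta * sign

target = 25.1e-10   # assumed size of the (g-2)_mu deviation, for illustration only
for tan_beta in (10, 30, 60):
    m_needed = 100.0 * (13e-10 * tan_beta / target) ** 0.5   # invert the estimate
    print(f"tan(beta) = {tan_beta:2d}: M_SUSY ~ {m_needed:4.0f} GeV "
          f"(check: a_mu ~ {a_mu_susy(m_needed, tan_beta):.2e})")
```

For tan β of order 10 the estimate requires sparticle masses of only a few hundred GeV, which is the qualitative origin of the tension with the LHC constraints discussed below.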
Research strategy
This section focuses on the h_2 scenario of the µNMSSM. In order to analyze its characteristics in explaining the discrepancy, the following parameter space was scanned with the MultiNest algorithm [135], where a flat prior distribution was chosen for all input parameters and the number of live points, n_live, was set to 6000. Other dimensional parameters that are unimportant to this study were fixed at 2 TeV; they include the SUSY parameters for the first- and third-generation sleptons, the squarks of all three generations, and the gluinos. In the numerical calculations, the model file of the GNMSSM was constructed with the package SARAH-4.14.3 [136-139].
The likelihood function contains the value of a_µ^SUSY [67], and it takes the piecewise form given by equation (3.2), with a separate value assigned if the restrictions are unsatisfied. The restrictions on each sample include: 1. DM relic abundance, 0.096 ≤ Ωh² ≤ 0.144. In implementing this constraint, the central value of the Planck-2018 data, Ωh² = 0.120 [149], was used, and a theoretical uncertainty of 10% in the abundance calculation was assumed.
2. DM direct and indirect detections. Specifically, the SI and SD DM-nucleon scattering cross-sections should be lower than the bounds from the XENON-1T experiments [150,151], and the DM annihilation rate at present time should be consistent with dwarf galaxies observations from Fermi-LAT collaboration [152]. The method suggested in [153] was adopted in studying the latter constraint.
3. Higgs data fit. The properties of the next lightest CP-even Higgs boson h 2 (also denoted by h throughout this work) should be consistent at the 95% confidence level with corresponding data obtained by ATLAS and CMS collaborations. This condition was checked with the code HiggsSignal-2.2.3 [154] by requiring the sample's p value to be larger than 0.05.
4. Direct searches for extra Higgs bosons at LEP, Tevatron and LHC. This requirement was examined by the code HiggsBounds-5.3.2 [155].
5. Some B-physics observations. Specifically, the branching ratios of B s → µ + µ − and B → X s γ should agree with their experimental measurements, which were summarized in [156] at the 2σ level.
6. LHC searches for SUSY. In order to explain the discrepancy, the electroweakinos and sleptons in the GNMSSM cannot be excessively heavy. Thus, they will be produced at the LHC and generate multi-lepton signals. The code SModelS-2.1.1 [157] was used to set limits on the signals in some simple topology cases. A sophisticated study of the constraints will be carried out in subsection 3.3, using the package CheckMATE.
7. Vacuum stability for the scalar potential consisting of the Higgs fields and the last two generations of slepton fields. This condition was checked by the Vevacious code [161,162], and its effect on the GNMSSM was recently discussed in [67].
In presenting the results, the two-dimensional profile likelihood (PL) for the function L in eq. (3.2) is used, where Θ_i (i = 1, 2, . . .) denote the input parameters, Θ_A and Θ_B are the variables of interest, and the maximization of L(Θ_A, Θ_B) is achieved by scanning over the parameters other than Θ_A and Θ_B. Related quantities include the 1σ and 2σ confidence intervals (CI) and the χ² function defined by χ² ≡ −2 ln L(Θ_A, Θ_B). These statistical measures were briefly introduced in [163], and they reflect the capability of the theory to explain the discrepancy.
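A minimal sketch of this statistical treatment is given below: a Gaussian likelihood in a_µ^SUSY with a hard penalty for samples failing the restrictions stands in for the omitted equation (3.2), and the two-dimensional profile likelihood is obtained by maximizing ln L in each (Θ_A, Θ_B) bin. The Gaussian form, the numerical values of the deviation and its uncertainty, and the placeholder scan data are all assumptions made for illustration, not the exact choices of this work.

```python
import numpy as np

def log_likelihood(a_mu_susy, passes_restrictions,
                   a_mu_obs=25.1e-10, sigma=5.9e-10):
    """Assumed stand-in for eq. (3.2): Gaussian in a_mu^SUSY, heavy penalty otherwise."""
    if not passes_restrictions:
        return -1.0e10                        # effectively zero likelihood
    return -0.5 * ((a_mu_susy - a_mu_obs) / sigma) ** 2

def profile_chi2(theta_a, theta_b, logL, bins=50):
    """2D profile likelihood: maximise lnL over all other parameters within each bin,
    then return the chi^2 = -2 lnL map relative to the best-fit point."""
    a_edges = np.linspace(theta_a.min(), theta_a.max(), bins + 1)
    b_edges = np.linspace(theta_b.min(), theta_b.max(), bins + 1)
    chi2 = np.full((bins, bins), np.inf)
    ia = np.clip(np.digitize(theta_a, a_edges) - 1, 0, bins - 1)
    ib = np.clip(np.digitize(theta_b, b_edges) - 1, 0, bins - 1)
    for i, j, ll in zip(ia, ib, logL):
        chi2[i, j] = min(chi2[i, j], -2.0 * ll)
    return chi2 - (-2.0 * logL.max())         # Delta chi^2: 2.30 (6.18) marks the 1 (2) sigma 2D region

# Placeholder scan output (random numbers standing in for MultiNest samples)
rng = np.random.default_rng(0)
n = 10_000
tan_beta = rng.uniform(5, 60, n)
mu_tot = rng.uniform(100, 600, n)
a_mu = rng.normal(20e-10, 10e-10, n)
passed = rng.random(n) > 0.3
logL = np.array([log_likelihood(a, ok) for a, ok in zip(a_mu, passed)])
chi2_map = profile_chi2(tan_beta, mu_tot, logL)
```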
Key features of the interpretation
All samples obtained in the scan were projected onto different parameter planes to show two-dimensional PLs, which could reveal the underlying physics of the h 2 scenario. Figure 1 illustrates that the scenario can interpret the discrepancy in a broad parameter space. 39.8 × 10 −10 , respectively. The upper right panel shows that the maximum reach of µ tot decreases monotonously with the increase of tan β, and it is about 500 GeV (260 GeV) for tan β = 10 (tan β = 60). The reason for such a behavior is that, in the case of a relatively small tan β, the second term in M 2 S,23 of eq. (2.4) is sizable, and can cancel the first term to suppress V S h , which is preferred by LHC Higgs data. As tan β increases, the cancellation effect becomes weak since the second term is suppressed by sin 2β, and tighter constraints are set on µ tot . 6 Moreover, analyzing the posterior probability of the scan results indicates that the scenario prefers small tan β region. Thus, most samples obtained in the scan predict tan β 30. 6 Throughout this work, A λ is fixed at 2 TeV. If a larger A λ , e.g., A λ = 10 TeV, was taken, it was found that tan β tended to become larger, while |µtot| and λ tended to be smaller [164]. This tendency is needed to suppress M 2 S, 23 and M 2 S,33 in eq. (2.4) simultaneously. In addition, the Bayesian evidence of the scenario decreases significantly as A λ increases [164], which means that setting a large A λ will cause a more subtle parameter tuning to obtain m h 1 125 GeV and correct electroweak symmetry breaking. In brief, even when A λ is treated as a variable in studying the parameter space, the natural realization of the h2 scenario to interpret the anomaly in the GNMSSM, as suggested by this work, has been tightly limited. This conclusion was verified by our alternative scans.
The lower left and right panels of figure 1 depict the ranges of M 2 ,μ L , andμ R , which are determined by a SUSY µ in eqs. (2.20)-(2.23). They show that M 2 may be as large as 1.5 TeV, andμ L andμ R may be as large as 1 TeV. The lower left panel also exhibits that the mass of charginoχ ± 1 is less than about 350 GeV. It should be noted that the ranges of M 2 andμ L depend strongly on the value of tan β. For example, assuming that the theory explains the discrepancy of ∆a µ at 1σ level, it was found that M 2 andμ L must be less than about 400 GeV and 350 GeV, respectively, for tan β = 10. The upper bounds become 1.2 TeV and 700 GeV for tan β = 20, and 1.4 TeV and 1 TeV for tan β = 27. By contrast, mχ0 1 andμ R are not sensitive to tan β, e.g., mχ0 1 andμ R may vary in the range of 50 GeV mχ0 1 250 GeV and 100 GeV μ R 1 TeV for any value of tan β. The basic reason for the phenomenon is that the WHL contribution to a SUSY µ is usually the dominant one. It depends on M 2 , µ tot , andμ L , and is in particular proportional to tan β. Therefore, when tan β is relatively small, the invovled SUSY particles must be moderately light to predict a sizable a SUSY µ . As a result, the left sides of the lower panels usually correspond to a relatively small tan β, and the right sides correspond to a large tan β. Figure 2 focuses on the DM physics of the h 2 scenario, which involves the parameters λ, κ, µ tot , and the masses of singlet-dominated particles, i.e., mχ0 1 , m hs and m As . It reveals the following features: • 2mχ0 1 > m hs + m As for most of the parameter areas (see the upper right panel), which implies that in the early universe, the Singlino-dominated DM might annihilate into the singlet-dominated Higgs bosons h s and A s . As pointed out in [116], this annihilation proceeded by the s-channel exchange of Z boson and CP-odd Higgs bosons and the t-channel exchange of neutralinos. If the t-channel contribution to the annihilation rate was much larger than the s-channel contribution, |κ| 0.15 × (mχ0 1 /300 GeV) 1/2 could predict the measured abundance, while if the interference of the two contributions was significantly constructive/deconstructive, a smaller/larger |κ| could be fully responsible for the abundance. Given that 0.05 |κ| 0.25 on the upper left panel, we infer that the process played an important role in determining the abundance. In addition, it was verified in fewer cases that 2mχ0 1 < m hs + m As and/or |κ| 0.1, so that the DM obtained the measured abundance mainly by coannihilating with the Higgsino-dominated electroweakinos or µ-type sleptons.
• The SI and SD cross-sections of DM-nucleon scattering may be as low as 10⁻⁵⁰ cm² and 10⁻⁴⁵ cm², respectively (see the lower left and right panels). The SD scattering proceeds only through the Z-mediated Feynman diagram, and the rate is proportional to (λv/µ_tot)⁴ [116]; a short numerical illustration of this scaling follows this list. Thus, it is a small λ, e.g., λ ∼ O(0.01), that is responsible for the low SD cross-section (see the upper left panel). By contrast, the SI scattering is induced by three CP-even Higgs bosons, and it is the cancellation of the h- and h_s-mediated contributions that mainly accounts for the small SI cross-section [115].
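As a quick numerical illustration of the (λv/µ_tot)⁴ scaling quoted in the last bullet (only the ratio between two λ choices matters here, so the overall normalization is left out; the numerical inputs are arbitrary):

```python
v = 174.0        # Higgs vev entering the coupling, in GeV (convention-dependent)
mu_tot = 300.0   # Higgsino mass in GeV (arbitrary illustration)

def sd_rate_scale(lam):
    return (lam * v / mu_tot) ** 4    # sigma_SD scales like this, up to a common prefactor

for lam in (0.01, 0.05, 0.1):
    print(f"lambda = {lam:4.2f}: relative SD rate = {sd_rate_scale(lam) / sd_rate_scale(0.1):.1e}")
# A factor of 10 reduction in lambda suppresses the SD rate by four orders of magnitude.
```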
The DM physics in the h 2 scenario differs from those of the h 1 scenario, which were presented in figure 2 of [67], in three aspects. The first is that the DM is relatively light, i.e., 50 GeV mχ0 scenario. Two reasons may explain this phenomenon. One is that |mχ0 1 | must be less than µ tot , and as shown in figure 1, a moderately small µ tot is experimentally preferred for the h 2 scenario. The other reason is that |mχ0 1 | must be larger than (m hs + m As )/2 for most cases to proceed to the annihilationχ 0 1χ 0 1 → h s A s . A relatively lightχ 0 1 can meet this condition in the h 2 scenario (see the previous discussions). The second one is that |κ| is less than 0.25 in the h 2 scenario, while it is less than 0.4 in the h 1 scenario. The underlying reason for this is that a smaller |κ| can be fully responsible for the measured abundance in the h 2 scenario. The last aspect is that λ in the h 2 scenario may reach about 0.2, while it is at most 0.1 in the h 1 scenario. This is because the cancellation effect in the SI scattering is usually significant in the h 2 scenario, and consequently, a larger λ is still allowed by DM DD experiments.
In figure 3, the mass distributions of the singlet-dominated Higgs states and the SUSY particles relevant to a SUSY µ are shown by a series of violin plots, which combines the advantages of the box plot and probability density distribution plot [165]. This figure shows that all SUSY particles except forχ 0 5 tend to be lighter than 500 GeV, and in particular,χ 0 2 ,χ 0 3 , andχ ± 1 are lighter than 500 GeV for nearly all samples obtained in the scan. The fundamental reason for the phenomenon, besides the explanation presented before, arises from the fact that a low tan β is preferred to predict the h 2 scenario. This tendency, once combined with the requirement of a sizable a SUSY µ , will necessitate light SUSY particles. 7 Given that these electroweakinos can be richly produced at the LHC, they have been restricted by searching for multi-lepton signals. This issue will be intensively studied in the following. 7 Without the a SUSY µ requirement,χ 0 1 may be very massive (e.g., |mχ0 1 | > 300 GeV [116]). In this case, the LHC constraints are significantly weakened.
LHC constraints
To comprehensively study the constraints from the LHC search for sparticles on the obtained parameter points, the following processes were analyzed in the Monte Carlo (MC) event simulation as given by equations (3.4): 8 In the calculation, the cross-sections of √ s = 13 TeV were obtained at the next-to-leading order (NLO) by the package Prospino2 [166]. The MC events were generated by the package MadGraph_aMC@NLO [167,168] For each point, 10 6 MC events were generated in the simulations, and the LHC analyses listed in table 1 were used to test it. In particular, the following LHC analyses were included in our study, which played a crucial role in constraining the scenario: The quantity R was used to describe the LHC's limitation on the samples in the discussion. It is defined by R ≡ max{S i /S 95 obs,i }, where S i stands for the number of simulated events in the i-th signal region (SR) of all the included analyses, and S 95 obs,i represents corresponding observed 95% confidence level upper limit. Accordingly, without considering the involved uncertainties, R > 1 represents that the considered parameter point is excluded due to the inconsistency with the LHC limit. Otherwise, it is allowed by the LHC searches [108].
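The R statistic defined above is simple to compute once the simulated event counts S_i and the observed 95% CL upper limits are available for every signal region considered; the region names and numbers below are placeholders, not results of this work.

```python
# Signal-region name -> (simulated events S_i, observed 95% CL upper limit S95_obs_i)
# All numbers are placeholders.
signal_regions = {
    "SR-A": (12.0, 20.0),
    "SR-B": (35.0, 9.5),
    "SR-C": (3.0, 15.0),
}

def r_value(regions):
    """R = max_i S_i / S95_obs_i; R > 1 means the point is excluded (uncertainties ignored)."""
    return max(s / s95 for s, s95 in regions.values())

R = r_value(signal_regions)
print(f"R = {R:.2f} ->", "excluded" if R > 1.0 else "allowed")
```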
The collider simulation results given by CheckMATE implied that the LHC searches for SUSY set strong restrictions on the h 2 scenario of the µNMSSM. In order to display the analysis results clearly, the points were classified by NLSP's dominated component, which may beμ R ,ν µ ,B,W , orH. The Histograms of R-value distribution for the different NLSP is expressed in terms of its deviation from the FNAL measurement center value and is shown by different colors: turquoise for (−3σ, −2σ), orange for (−2σ, −1σ), pink for (−1σ, 0σ), green for (0σ, 1σ), blue for (1σ, 2σ), and violet for (2σ, 3σ). types were displayed in figure 4 from left to right and top to bottom, respectively. Points colored by turquoise, orange, pink, green, blue, and violet correspond respectively to the cases that a SUSY µ are in the range of (−3σ, −2σ), (−2σ, −1σ), (−1σ, 0σ), (0σ, 1σ), (1σ, 2σ) and (2σ, 3σ). The results were also summarized in table 2, which includes the number of samples obtained by the scan (denoted N tot ), that satisfying R < 1 (denoted by N pass ), and the more detailed classification of N tot and N pass by the ranges of a SUSY µ . According to table 2 and figures 4, the following conclusions are inferred: • Among the five types of NLSP, theH-dominated NLSP is the easiest one for explaining the discrepancy in the h 2 scenario, and by contrast, theB-dominated NLSP is the least preferred one (see N tot in table 2). The LHC restrictions are extremely strong in excluding parameter points for any type of NLSP (see N pass in the table). In particular, they are strengthened significantly once the scenario is required to explain the discrepancy at 3σ level (see N a SUSY µ in the table). Specifically, it takes dozens of parameter points withH-dominated NLSP and only few points withν µ -dominated NLSP to interpret the discrepancy at the 3σ level. This situation reflects the difficulty of the h 2 scenario in explaining the discrepancy. One fundamental reason comes from the fact that the scenario prefers a relatively small tanβ and µ tot , and hence moderately light sparticles are predicted to obtain a sizable a SUSY µ .
It was verified that, among the experimental analyses, the analysis 4 usually sets the tightest constraints. In the case that the parameter points predict sizable signals with four or more leptons, analysis 5 could also impose the strongest restriction.
• In the case ofμ R -orν µ -dominated NLSP, Wino-and Higgsino-dominated electroweakinos will decay mainly into leptonic final states via slepton and/or sneutrino, which proliferates the lepton signals. As a result, R can reach 100 for lots of points, which is shown on the top left and right panels in figure 4. In addition, the LHC constraints on theμ R -dominated NLSP point are usually tighter than those on thẽ ν µ -dominated NLSP point because neutralinos will decay byχ 0 i →μ ± µ ∓ →χ 0 1 µ + µ − for the former case and byχ 0 i →μ ± µ ∓ ,ν µ ν →χ 0 1 µ + µ − ,χ 0 1 νν for the latter case. The former case can produce more µ leptons.
• In the case ofW -dominated NLSP, most points predict R < 20, but in very rare case R may reach 70. Detailed study indicated thatχ 0 2 decays mainly byχ 0 2 →χ 0 1 Z ( * ) ,χ 0 1 h 1,2 for most points, andχ 0 2 →χ 0 1 µ + µ − ,χ 0 1 νν are the dominant decay only for a small portion of the points. It also indicated thatχ ± 1 decays mainly byχ ± 1 →χ 0 1 W ( * ) for nearly all points, and smuons decay in a complex way, e.g., any of the channels µ →χ 0 i µ (i = 1, · · · , 5),χ − 1,2 ν may be the dominant decay. It is notable that R in theW -dominated NLSP case can not be exceedingly large. This conclusion comes from the fact that theW -dominated electroweakinos are forbidden to decay into sleptons directly, and thus, even in the optimum case, the lepton signal from the decayχ 0 2 →μ ± * µ ∓ →χ 0 1 µ + µ − is not much larger than the other final states. Consequently, pp →χ 0 2χ ± 1 , which is the largest sparticle production process, can not generate tri-lepton signal events efficiently. This feature results in a smaller signal rate than theB-dominated NLSP case, where the Wino-dominated electroweakinos may decay far dominantly into leptons. • By considering theH-dominated NLSP case it was found that the decay modes ofχ 0 2 ,χ ± 1 andμ 1,2 are similar to those of theW -dominated NLSP case, and the LHC constraints tend to be weaker than the other cases. This observation may be understood from four aspects [67]. First, since theH-dominatedχ 0 2,3 andχ ± 1 can not decay into sleptons, the leptonic signal rate is usually much smaller than the case whereμ R orν µ acts as NLSP. Second, the collider sensitive signal events are often diluted by the complicated decay chains of sparticles, given that heavy sparticles prefer to decay into the NLSP or other non-singlet-dominated sparticles first. They are diluted also by the decaysχ 0 2,3 →χ 0 1 h s ,χ 0 1 h, given that Br(h s /h → ± ∓ ) is much smaller than Br(Z → ± ∓ ). Third, the interpretation of ∆a µ requires that all crucial sparticles are usually in several hundred GeVs for a not too large tan β. Thus, for the parameter points surviving the LHC constraints, the mass splitting between sparticles is not large enough to produce high-p T signal objects, which can be significantly distinguished from the background in the collider. Last, in some rare cases, the leptonic signal of SUSY may mainly come from the NLSP. For this situation, the discussion of the LHC constraints can be simplified by considering the system that only contains NLSP and LSP. From this, it is evident that the constraints on thẽ H-dominated NLSP case are significantly weaker than those on theW -dominated NLSP case. In fact, we once scrutinized the property of all the samples surviving the LHC constraints. It was found that the above factors applied to these samples.
In order to emphasize the characteristics of the parameter point withH-dominated NLSP, two benchmark points, P1 and P2, are chosen to present their detailed information in table 3. Both points satisfy the DM constraints and can explain the a µ discrepancy at 2σ level. The P1 point survives the LHC constraints, while the P2 point has been excluded by the LHC search for SUSY. These two benchmark points verify part of the discussions in this work. Finally, it should be noted that the LHC search for τ -leptons plus missing momentum signal, such as the ATLAS analyses in [178] and [179], was not considered because of the massiveτ assumption in this study. Specifically, the assumption implies that the τ -leptons mainly come from the decay of the W/Z or Higgs bosons, which are the decay products of parent sparticles. For the former case, the final states containing e and/or µ are more efficient than the τ final state in restricting SUSY mass spectrum since all the lepton signal rates are roughly equal. For the latter case, the τ signal is usually less crucial in SUSY search because the branching ratio of the Higgs decay into ττ is significantly small (in comparison withτ decay). With the codes for the analyses in [178] and [179], which were implemented in our previous work [164] and CheckMATE-2.0.29, respectively, we studied JHEP03(2022)203 their prediction of R for the two benchmark points. We found that the analyses do not affect the results in table 3. As an alternative, ifτ or all sleptons are assumed to the NLSP (see, e.g., [41]), the production rate of the e/µ final states will be affected. In this case, R should be recalculated. In particular, the experimental analysis of the τ final state must be included in the study. It is expected that the LHC constraints are still strong because the h 2 scenario is featured by moderately light Higgsinos.
Explaining ∆a µ in the h 2 scenario of GNMSSM
The impact of the muon g-2 anomaly on the h_2 scenario of the GNMSSM is studied in this section. For this purpose, the parameter space including |µ′| ≤ 1 TeV, −10⁶ TeV² ≤ m_S² ≤ 10⁶ TeV², and that in eq. (3.1), was scanned in a way similar to what we did in section 3. It was found that the DM was Singlino-dominated for all the obtained samples, and it annihilated mainly through a resonant Z, h, or h_s/A_s to obtain the measured abundance. These channels contributed to the total Bayesian evidence by about 43%, 19.6%, and 37%, respectively, before the MC simulations were implemented. The basic reason for this behavior is that the masses of χ̃⁰₁, h_s, and A_s in the GNMSSM can be changed freely by tuning µ′, A_κ, and m_S², respectively. Thus, these annihilations can easily occur. Given that the GNMSSM might have different key features from the µNMSSM, various PL maps of the GNMSSM were surveyed in this study.
In figure 5, the two-dimensional profile likelihood function was projected onto tan β − µ tot and M 2 − µ tot planes. They show that the GNMSSM results are quite similar to the µNMSSM predictions. In particular, µ tot and mχ± 1 have upper bounds of about 310 GeV and 300 GeV, respectively. 9 The fundamental reason for this, as was emphasized before, is that the h 2 scenario prefers moderately small µ tot and tan β to predict m h 1 125 GeV and h 2 to be SM-like, which was verified by the posterior probability distribution function of the samples obtained from the scan. This characteristic, once combined with the requirement to explain the anomaly, will entail certain moderately light sparticles. In figure 6, the violin diagrams for the mass spectrum of the singlet-dominated Higgs bosons, the electroweakinos and µ-type sleptons are shown. The profiles for the sparticles are quite similar to those in figure 3 for the µNMSSM results, except that |mχ0 1 | can be as low as several GeV. This difference mainly comes from the DM annihilation mechanisms, and it usually makes the LHC's constraints much stronger. 9 We inferred that the 2σ CI of the µNMSSM covers a broader region on the parameter planes than that of the GNMSSM by comparing figure 5 with figure 1. This is contrary to the common sense that the former should be narrower than the latter since the parameter space of the µNMSSM is only a subset of the GNMSSM's parameter space. This phenomenon originates from the MultiNest algorithm utilized in the scan, which mainly collects the samples contributing significantly to the Bayesian evidence [135]. The parameter points of the µNMSSM correspond to µ = 0 and m 2 S = 0, and are relatively unimportant for the evidence, which was verified by the study of the two-dimensional posterior probability distribution function, P (µ , m 2 S ) [163]. Thus, only a few of them were considered in the sampling. It is expected that, with the increase of the setting n live , more samples of the GNMSSM will be collected, which will broaden the CI regions [164]. This process, however, is very computationally expensive, since a high-dimensional parameter space is surveyed.
In table 4, the numbers of the samples surveyed by MC simulations and those passing the LHC constraints were presented in a way similar to table 2. This table shows that the LHC analyses have strongly constrained the parameter space of the GNMSSM.
Conclusion
The recent measurement of a µ by the FNAL corroborates further the long-standing discrepancy of a Exp µ from a SM µ . It can not only reveal useful information of the physics beyond the SM, but also place strong restrictions on certain theories. Recently, implications of the discrepancy were comprehensively discussed with respect to the GNMSSM, which is a theory that has the following attractive features: it is free from the tadpole problem and the domain-wall problem of the Z 3 -NMSSM, and it is capable of forming an economic secluded DM sector to naturally yield the DM experimental results [116]. It was found that the h 1 scenario of the GNMSSM could easily and significantly weaken the constraints from the LHC search for SUSY. It also predicted more stable vacuums than the Z 3 -NMSSM. As a result, the scenario can explain the discrepancy in a broad parameter space that is consistent with all experimental results, and at same time keeps the electroweak symmetry breaking natural [67]. By contrast, it is difficult for the popular MSSM and Z 3 -NMSSM to do this.
These theoretical advantages inspired us to consider the h 2 scenario of the GNMSSM, which is another well-known realization of the theory. It was shown by analytic formulae that, in order to obtain m h 1 125 GeV and an SM-like h 2 without significant tunings of relevant parameters, the scenario prefers a moderately light µ tot and tan β 30. This characteristic, if combined with the requirement to account for the anomaly, will entail some light sparticles, and sequentially make the LHC constraints rather tight. In this work, this speculation was tested using numerical results. Specifically, a special case of the GNMSSM called µNMSSM was first studied by scanning its parameter space with the MultiNest algorithm and considering the constraints from the LHC Higgs data, the DM experimental results, the B-physics observations, and the vacuum stability. Then, the samples obtained from the scan were surveyed by the LHC analyses in sparticle searches. Through sophisticated MC simulations, it was found that only a dozen of the samples, among about twenty thousand, passed the constraints, which corresponded to about 0.04% of the total Bayesian evidence. Given that the scan results have statistical significance, we conclude that the h 2 scenario of the µNMSSM is tightly constrained if it is intended to explain the anomaly. A similar study was carried out for the GNMSSM, and it was found that a smaller portion of the samples (about 0.008% of the total evidence) satisfied the LHC constraints. This difference arises from DM annihilation mechanisms: for the former case, the Singlino-dominated DM achieved the measured abundance mainly through the processχ 0 1χ 0 1 → h s A s , while for the latter case, it was through a resonant Z, h, or h s /A s annihilation to obtain the abundance. Since the latter case usually predicts a relatively light DM, the LHC constraints are stronger.
This work extends the research in [99] by considering a more general theoretical framework with more advanced and sophisticated research strategies. As a result, the conclusions obtained in this work are more robust than those of the previous work, and they apply to any realization of the NMSSM.
A DM-nucleon scattering in the MSSM
In this section, we use analytic formulae to focus on Bino-dominated DM and study DM-nucleon scattering for three typical cases.
We begin with the neutralino mass matrix given in [180], where g_1 = 2 M_Z s_W/v, s_β ≡ sin β, and c_β ≡ cos β. In terms of the neutralino masses, the eigenvectors can then be formulated exactly. We parameterize the couplings of the DM to the SM-like Higgs boson h and to the Z boson in the form of [26,93].
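For the MSSM case discussed in this appendix, the neutralino masses and mixings can be obtained numerically by diagonalizing the tree-level mass matrix, as in the sketch below. The matrix is written in the (bino, wino, down-type higgsino, up-type higgsino) basis using one common sign convention; sign and phase conventions differ between references, and the input values are arbitrary, so this is only an illustrative cross-check rather than the exact expression used in [180].

```python
import numpy as np

mZ = 91.19                    # Z boson mass in GeV
sW = np.sqrt(0.231)           # sin(theta_W), illustrative value
cW = np.sqrt(1.0 - sW ** 2)

def neutralino_spectrum(M1, M2, mu, tan_beta):
    """Diagonalise the tree-level MSSM neutralino mass matrix (one common convention)."""
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    M = np.array([
        [M1,            0.0,           -mZ * cb * sW,  mZ * sb * sW],
        [0.0,           M2,             mZ * cb * cW, -mZ * sb * cW],
        [-mZ * cb * sW, mZ * cb * cW,   0.0,          -mu          ],
        [ mZ * sb * sW, -mZ * sb * cW, -mu,            0.0         ],
    ])
    eigvals, vecs = np.linalg.eigh(M)        # real symmetric; eigenvalues may be negative
    order = np.argsort(np.abs(eigvals))      # label states by increasing |mass|
    return eigvals[order], vecs[:, order]

masses, N = neutralino_spectrum(M1=200.0, M2=800.0, mu=400.0, tan_beta=10.0)
print("neutralino mass eigenvalues (GeV):", np.round(masses, 1))
print("bino fraction of the lightest state:", round(float(N[0, 0] ** 2), 3))
```

With the illustrative inputs above the lightest state is Bino-dominated, which is the configuration analyzed in the rest of this appendix.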
we obtain [112] the couplings C_{hχ_1^0χ_1^0} and C_{Zχ_1^0χ_1^0}, where α is the mixing angle of the CP-even Higgs fields in forming mass eigenstates [94]. In the decoupling limit of the Higgs sector, i.e., m_A ≫ v, the SI and SD cross-sections of the DM with nucleons are approximated by [112], with C_p ≈ 2.9 × 10⁻⁴¹ cm² for protons and C_n ≈ 2.3 × 10⁻⁴¹ cm² for neutrons. In the following, we assume tan β ≫ 1 and m_A ≫ v so that α ≈ β − π/2 [94], and investigate the dependence of σ_SI. • Case I: the DM obtains the measured abundance, and C_1 is approximated by an expression in which µ is parameterized by |µ| = (1 + δ_1)|m_{χ_1^0}|, with δ_1 denoting a small positive dimensionless number. These approximations indicate that the DM-nucleon scattering rates increase monotonically as the DM becomes heavier and/or |µ| departs from |m_{χ_1^0}|. Consistency with the XENON-1T data on the SI cross-section then requires δ_1 to be small; for m_{χ_1^0} = 300 GeV, it must be less than about 0.12. The Higgsinos in this narrow mass region contribute significantly to lepton signals at the LHC and to the DM relic abundance. They also affect a_µ. Consequently, such a situation needs tuning to satisfy all experimental constraints.
• Case II: the DM co-annihilated with the Wino-dominated electroweakinos to obtain the measured abundance, and |µ| is much larger than |m_{χ_1^0}|. This case predicts [112] expressions such that the scattering rates decrease monotonically with the increase of |µ|.
• Case III: the DM co-annihilated with the Higgsino-dominated electroweakinos to obtain the measured abundance, and |M_2| is much larger than |m_{χ_1^0}|. For this case, the second approximation is obtained by assuming |m_{χ_1^0}| | 12,829.6 | 2022-03-01T00:00:00.000 | [
"Physics"
] |
Force Control of a Haptic Flexible-Link Antenna Based on a Lumped-Mass Model
Haptic organs are common in nature and help animals to navigate environments where vision is not possible. Insects often use slender, lightweight, and flexible links as sensing antennae. These antennae have a muscle-endowed base that changes their orientation and an organ that senses the applied force and moment, enabling active sensing. Sensing antennae detect obstacles through contact during motion and even recognize objects. They can also push obstacles. In all these tasks, force control of the antenna is crucial. The objective of our research is to develop a haptic robotic system based on a sensing antenna, consisting of a very lightweight and slender flexible rod. In this context, the work presented here focuses on the force control of this device. To achieve this, (a) we develop a dynamic model of the antenna that moves under gravity and maintains point contact with an object, based on lumped-mass discretization of the rod; (b) we prove the robust stability property of the closed-loop system using the Routh stability criterion; and (c) based on this property, we design a robust force control system that performs efficiently regardless of the contact point with the object. We built a mechanical device replicating this sensing organ. It is a flexible link connected at one end to a 3D force–torque sensor, which is attached to a mechanical structure with two DC motors, providing azimuthal and elevation movements to the antenna. Our experiments in contact situations demonstrate the effectiveness of our control method.
Introduction
Haptics is the technology of touch. In recent years, there has been increasing interest in developing integrated sensory systems, particularly those involving various tactile sensors [1]. Multiple applications require integrated systems, such as machine assemblies for precise positioning, impact protection, navigation, etc. Tactile/touch sensing is essential for developing human-machine interfaces and electronic skins in areas such as automation, security, and medical care [2]. Tactile sensors were first explored in the early 1990s, for example, in the work of Russell [3]. Since then, natural tactile sensors, including whiskers and antennae, have been investigated, e.g., [4]. Several attempts have been made to build biomimetic active sensory applications, also known as vibrissal systems. Mammal and insect sensing (see Figure 1) has inspired multiple engineering applications, such as the whisker-based texture discrimination presented in [5]. The less frequent use of tactile sensing may be partly attributed to its complex and distributed nature. Issues such as sensor placement, robustness, and wiring complexity, among others, make its effective utilization challenging.
Previous works have provided evidence that artificial vibrissal systems can compute estimates of distance and shape [6][7][8] and can distinguish between textures with different spatial frequencies [9,10].These results demonstrate the potential for vibrissal sensors as effective devices for tactile object recognition.Sensors used in these systems include electret microphones [11], resistive arrays [12], strain gauges [7], piezoelectric sensors [9], and magnetic Hall-effect sensors [9,13].Each of these technologies has its advantages and disadvantages.In the last two decades, a robust and compact sensor device called the "sensing antenna" has been proposed, efficiently addressing some of the aforementioned problems.This active sensor consists of a flexible beam moved by servo-controlled motors and a load cell placed between the beam and the motors.An example of this device is shown in Figure 2. The sensing antenna replicates the touch sensors found in some animals and employs an active sensing strategy.The servomotor system moves the beam back and forth until it hits an object.At this instant, information from the motor angles combined with force and torque measurements allows us to calculate the positions of the hit points, which represent valuable information about the object surface.Using this device, a 3D map of the object's surface, which enables recognition, can be obtained [13].Recognition is carried out using techniques that combines information on partial views to gather comprehensive information about the object, e.g., [14,15].
Two strategies can be applied to obtain such 3D maps.The first strategy involves continuously moving the beam back and forth to hit the object at different points, determining their 3D coordinates and producing a map of the object surface.This strategy is used by some insects that employ their antennae for this purpose (e.g., [16,17]).The second strategy involves sliding the beam across the object while exerting a controlled force on the surface of the object to maintain contact, collecting 3D coordinates of points on the object's surface during this movement.This strategy is utilized by some mammals with whiskers as sensors (e.g., [18,19]).Both strategies can be implemented using the aforementioned sensing antennae and require precise control of the force exerted by the antenna on the object (e.g., [20]).Additionally, if the object or obstacle needs to be removed, force control is also necessary.
However, multiple constraints limit the performance of these devices, such as their flexible beam length, light weight, and flexibility.These characteristics make the dynamic behavior of these antennae exhibit an infinite number of vibration modes, resulting in dynamic models of infinite dimension [21].This complexity makes it very difficult to accurately control the position of these devices or achieve precise force control.If the control system in charge of moving the motors does not consider these dynamics, i.e., the beam elasticity, residual vibrations appear that prevent the accurate and fast achievement of the desired pushing force on the object being searched or moved.Moreover, permanent collisions with the object could occur, where the antenna continuously moves back and forth as it collides with the object.These phenomena would cause delays in the recognition process and diminish the quality of object surface estimates, and therefore reduce the efficiency of the device's functioning.
It is well known that the amplitudes of the vibration modes of flexible beams diminish as the frequencies of the modes increase.This allows us to truncate their infinite-order models to finite-order models (e.g., [22]) that usually include as many as three or four vibration modes, which yield accurate approximations of the antennae dynamics.This truncation can be applied to the dynamic model obtained using an assumed-modes modeling approach (e.g., [23]), or directly to the mechanism by assuming lumped masses (e.g., [24]).
Assumed modes (e.g., [23]) have already been used to model the contact dynamics of a beam (or flexible antenna) [25] in the horizontal plane, but never in the vertical plane, under the effect of gravity. Moreover, these models are complex and pose difficulties in designing robust control systems for the contact force. Lumped-mass models have been developed for the free rotation movements of flexible beams in the horizontal plane, in attitude movements where gravity affects the dynamics [26], or in a two-degrees-of-freedom single flexible-link antenna [27]. Robust force control at the tip of the beam has been addressed using a lumped-mass model with a single lumped mass at the tip in [28], in the case of having a horizontal degree of freedom. However, a lumped-mass model has never been used to model the dynamics of the contact situation in which a flexible beam pushes an object at an intermediate point along its link. This requires the use of several lumped masses.
The control of the force at the tip of a single flexible link that rotates on an horizontal plane through one of its ends was studied in [29], assuming a distributed mass link.The experiments showed that direct force feedback from a sensor placed at the tip could not ensure closed-loop stability.Stable tip contact control of a distributed mass link, where a switching transition occurred between the unconstrained and constrained environments, was achieved by [30] using a PD controller that provides feedback from hub measurements.This yielded a control system robust to the mechanical impedance of the contacted object, but could not achieve force control.To increase the stability of the tip force control, some works have redefined the force output to be fed back, e.g., [31,32].In [33], the tip-contactforce control of a constrained single-link flexible arm was performed, overcoming the non-minimum phase nature of the system by defining a new input and generating a virtual contact-force output through a parallel compensator.It was proven that the transfer function from the new input to the virtual contact-force output was minimum-phase and stable.Ref. [34] also addresses tip-contact-force control of a one-link flexible arm interacting with a rigid environment.To achieve contact-force control, a boundary controller was proposed based on an infinite-dimensional dynamic model.The contact-force control and vibration suppression problem for a constrained one-link flexible manipulator with an unknown control direction and a time-varying actuator fault was studied in [35].Finally, we again mention [28], where fractional-order control was implemented in a massless link with a tip payload, damping rebounds and ensuring robust stability to the mechanical impedance of the contacted object.
All these works focus on the force control of a flexible link interacting with the environment, considering that contact is made at the tip.To the best of our knowledge, the force control at an intermediate point of a flexible beam in an elevation rotation movement has never been addressed, either using an assumed-modes model or a lumped-mass model.The objectives of the present research are as follows: (1) to establish a model of the dynamics of a flexible beam contacting an object at one of its intermediate points in a rotational elevation movement based on multiple lumped masses, and (2) based on that model, to define a control system that exerts a precise pushing force on an object.Both objectives represent the contributions of this paper and have never been previously addressed.
This paper is organized as follows. After this Introduction, Section 2 presents our experimental setup of a flexible-link antenna. Section 3 develops the dynamic model based on lumped masses. Section 4 fits this model to the lowest-frequency mode obtained from an assumed-modes model. Section 5 obtains the transfer functions of our prototype, and Section 6 derives a robust control system based on these functions. Section 7 presents our experimental results, and Section 8 offers our conclusions.
Experimental Setup
The experimental prototype is a two-degrees-of-freedom (2DOF) robotic system with a single flexible link, which is used as a sensing antenna in haptics applications.A detailed 3D representation is shown in Figure 2. Its design was developed by our group in previous works [36], where it has been employed as a tactile sensor to detect objects in its surroundings.
The flexible link, also referred to as antenna, is a lightweight, slender carbon-fiber rod with a circular cross-section.It is fixed at one of its ends (the base), while the other end moves freely (the tip).The antenna is attached at the base to a six-axis ATI FTD-MINI40 force-torque (F-T) sensor, which measures the Cartesian reacting forces and torques generated by the link.The signals are acquired through gauges located inside the sensor, which are multiplexed and amplified to send the information regarding forces and torques to a data acquisition card (DAQ).Holding the sensor and the antenna there is the servomotor structure, which is driven by two Harmonic Drive PMA-5A direct-current (DC) mini-servo actuator motor sets, featuring zero-backlash 1:100 reduction gears.One servomotor rotates the system with azimuthal movements (horizontal plane), while the other rotates it with elevation movements (vertical plane).These DC motors have incremental optical encoders that measure the angular position of the motors, θ m 1 and θ m 2 , corresponding to the azimuthal and elevation joints, respectively.Additionally, a stainless-steel structure holds all this equipment and fixes the system to a flat surface with three legs to ensure perfect stability.The robot is connected to a PC through data acquisition cards.The data acquisition and control algorithms were programmed using LabVIEW NXG 5.1 with a sampling time of T s = 1 ms.All work related to data analysis and representation was carried out using MATLAB 8.2.0.29 (R2013b).
Dynamic Model
This section focuses on the modeling of a flexible beam connected to a motor.This model enables us to characterize the active sensor mentioned earlier, which moves in a vertical rotation back and forth within a plane until it makes contact with an object.Thus, the effect of gravity is considered.For this study, we assume that the interaction between the structure and the environment occurs at a single point of contact along the beam.Additionally, we assume that the force applied by the object on the beam is perpendicular to it.This assumption neglects any slipping that may occur between the two bodies.Furthermore, the contacted object is assumed to be rigid.
The dynamic model of the system is divided into two parts to describe the behavior of the motor and the flexible beam.These subsystems are interconnected through the motor angle and the torque exerted on the motor by the beam, known as the coupling torque.
Beam Dynamics
The flexible beam is characterized by its length L, linear mass density ρ, and flexural rigidity EI. It is assumed that the beam is described by a massless link with n lumped masses along its length, as presented in [24]. The beam exhibits small deflections, i.e., deflections lower than 10% of L, allowing us to use a linear deflection model [37]. Furthermore, the internal and external friction effects of the beam are neglected.
The deflection, denoted as z(x, t), is measured relative to its undeformed position, defined by the frame (X, Z).As illustrated in Figure 3, the frame (X, Z) rotates relative to a fixed frame (X 0 , Z 0 ).This rotation is given by the angle of the motor θ m (t).Furthermore, the lumped mass m j is located at distance l µ j and angle θ µ j (t) with respect to the axis X and the frame (X 0 , Z 0 ), respectively.It should be noted that the mass m n is placed at the tip of the beam, i.e., l µ n = L.
Non-rigid contact is defined by two angles with respect to (X 0 , Z 0 ): the equilibrium angle of the surface of the contacted object, θ e , and the angle at which the link has penetrated into the object, θ c (t).If the contact is rigid, then θ c (t) = θ e .The contact position is defined by the distance l c along the X-axis.
Considering gravity, its direction opposes the Z_0-axis. The effect of gravity on the beam is assumed to be the force computed on the undeformed beam. This is based on the principle of superposition, which can be applied when considering small deformations [38]. Therefore, the deflection of a massless beam is given by Equation (1) and is related to the angles of the system through a relation in which θ(x, t) is the angle between any point of the beam and the frame (X_0, Z_0). The solution of Equation (1) is given by a piecewise function defined on the intervals [l_{i−1}, l_i] with i = 1, 2, ..., N, with l_0 = 0 and l_N = L. Here, N can be either N = n or N = n + 1, depending on whether contact occurs and at which position it occurs. The distance l_i is determined by the position of either one of the lumped masses l_{µj} or the contact point l_c.
The polynomial coefficients u_{i,j}(t) are obtained from conditions (4)-(6), where (4) and (5) are the boundary conditions at the joint with the motor and at the tip of the beam, respectively, and Equation (6) represents the continuity conditions with i = 2, ..., N.
The force F_i(t) is defined in terms of the stiffness k of the contacted object. If the contact is rigid, then k tends to infinity. Additionally, as previously mentioned, gravity is applied with respect to the undeformed position of the beam. The coupling torque between the beam and the motor is given by Equation (8). Hereinafter, we work with the nondimensional model to ensure generality and applicability to any slewing flexible beam. Defining T = √(ρL⁴/EI), we obtain the nondimensional time τ = t/T and frequency ω = ω_d T (letting ω_d be the natural frequency of the flexible beam). The nondimensional spatial coordinate and deflection are χ = x/L and ζ(χ, τ) = z(x, t)/L. The forces and their positions are defined as F̂_i(τ) = F_i(t) T²/(ρL²) and λ_i = l_i/L, respectively, and the nondimensional torque is Γ̂(τ) = Γ(t) T²/(ρL³). Moreover, the masses are defined as µ_i = m_i/(ρL), gravity as ĝ = g T²/L, and the angles as θ_i(τ) = θ_i(t). Thus, the nondimensional form of Equation (1) is obtained, where, from now on, (˙) and (′) denote the derivatives with respect to the nondimensional time and spatial variables, respectively. The relation between the deflection and the angles (2) and the solution of the deflection (3) take the corresponding nondimensional forms (10) and (11), and the conditions (4)-(6) and the forces (7) become conditions (12)-(14). The coefficients υ_{i,j}(τ) obtained from conditions (12)-(14) are presented below, while their derivation is detailed in Appendix A. For the first two coefficients, it is found that υ_{1,0}(τ) = 0 and υ_{1,1}(τ) = 0, with corresponding expressions for i = 2, ..., N, whereas the other coefficients are given for i = 1, ..., N. Therefore, using the solution of the deflection in (10), Equation (20) is obtained, and the coupling torque of Equation (8) becomes Equation (21). Finally, the general solution defined in (20) can be employed to derive the dynamic equations of the beam in two cases: when the beam is freely vibrating and when it is in contact with an object.
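The nondimensional scalings just listed can be packaged into a small helper. The sketch below assumes the definitions above (characteristic time T = √(ρL⁴/EI), and the force, torque, mass, and gravity scalings); the rod data used in the example are placeholders rather than the prototype's actual parameters.

```python
import numpy as np

# Nondimensionalization of a slewing flexible beam, following the scalings
# defined above. The numeric values at the bottom are placeholders only.
def nondimensionalize(rho, L, EI, g=9.81):
    """Return the characteristic time T, converter functions, and g_hat."""
    T = np.sqrt(rho * L**4 / EI)                    # characteristic time
    to_tau    = lambda t: t / T                     # time  -> nondimensional time
    to_chi    = lambda x: x / L                     # position along the beam
    to_force  = lambda F: F * T**2 / (rho * L**2)   # point force
    to_torque = lambda G: G * T**2 / (rho * L**3)   # coupling torque
    to_mass   = lambda m: m / (rho * L)             # lumped mass
    g_hat     = g * T**2 / L                        # nondimensional gravity
    return T, to_tau, to_chi, to_force, to_torque, to_mass, g_hat

# Example with illustrative carbon-fiber rod data (not the actual antenna)
rho, L, EI = 0.03, 0.9, 0.5      # kg/m, m, N*m^2
T, to_tau, to_chi, to_force, to_torque, to_mass, g_hat = nondimensionalize(rho, L, EI)
print(f"T = {T:.3f} s, g_hat = {g_hat:.3f}")
```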
Free-Vibration Model
When the link vibrates freely, the forces are only due to the concentrated masses of the model, which can be expressed as F̂_i(τ) = µ_j [λ_{µi} θ̈_i(τ) + ĝ cos(θ_m(τ))]. Therefore, for this case, the number of intervals into which the displacement ζ(χ, τ) is divided is equal to the number of masses, N = n, and the distances λ_i and angles θ_i(τ) are λ_{µj} and θ_{µj} (with i, j = 1, ..., n).
Thus, Equation (20) and the coupling torque of (21) take particular forms for this case. These equations are expressed in a compact form, where 1_n is a vector of ones belonging to ℜ^{n×1}. Finally, by manipulating the above equations, we obtain a dynamic model for the case of free vibrations.
Contact Model
Upon establishing contact, the beam oscillates around the position of the object due to the assumption of small displacements. For this reason, incremental angles are defined. Furthermore, the shape and dimensions of the model will depend on the relative position between the contact and the masses. Contact may occur at an intermediate position between two masses or coincide with one of them.
In the case of contact between masses, the displacement is divided into N = n + 1 intervals. The distances λ_i and angles θ_i(τ) (with i = 1, ..., N) take the values of λ_{µj} and θ_{µj} (with j = 1, ..., n) or λ_c and θ_c. The equations derived from Equation (20) take different forms depending on whether λ_i corresponds to a mass, to the contact point (λ_{µ(j−1)} < λ_i = λ_c < λ_{µj}), or to the tip, and the coupling torque (21) is modified accordingly. The system of equations can be expressed in a more concise form, where 0_n is a vector of zeros in ℜ^{n×1}, 1_{n+1} is a vector of ones in ℜ^{(n+1)×1}, and M, H, Λ_1, and Λ_2 are the same as in (27)-(30). After calculating Equations (38) and (39), we obtain the corresponding dynamic model. By assuming that the contact is rigid, i.e., ∆θ_c(τ) = 0, Equation (43) yields the rigid-contact model (47), the coupling torque (46) yields (48), and the contact force is obtained from (44) as (49). On the other hand, when the contact occurs at the position of one of the masses, the displacement is divided into N = n intervals and the equations derived from Equation (20) take the corresponding forms. Here, the coupling torque is equal to (37).
Once more, the model is expressed in a concise form, where the mass at which contact occurs is designated as µ_c, and the reduced vectors, including Ĥ_c2 ∈ ℜ^{1×(n−1)}, are derived by removing from ∆θ_µ, Λ_1, Λ_2, H_c1, and H_c2 the element related to the mass coincident with the contact, µ_c. Similarly, the matrices M̂ ∈ ℜ^{(n−1)×(n−1)} and Ĥ are derived by removing from M and H the columns and rows corresponding to the mass µ_c.
The corresponding dynamic model is obtained from (53) and (54). Finally, by assuming rigid contact in Equations (55) and (58), we obtain Equations (59) and (60), and Equation (56) provides us with the contact force (61). In summary, the contact model is described by Equations (47)-(49) when the contact is between the masses, and by (59)-(61) when it coincides with the position of a mass.
Remark. For a given number of masses n, the rigid contact model has one order fewer when the contact is at one mass (Equations (59)-(61)) compared to when it is between masses (Equations (47)-(49)). Consequently, when contact occurs at one of the masses, the number of vibration frequencies is reduced by one.
Motor Dynamics
The behavior of the motor is described by Equation (62), where J_0 is the rotational inertia of the motor and Γ_m(t) is the torque produced by the actuator to move the system. The nondimensional form of Equation (62) is then obtained, where the nondimensional inertia of the actuator is computed accordingly. In the case of contact, where the incremental angles of Equation (33) are defined, the motor equation is rewritten in terms of these angles.
Adjustment of the Rigid Contact Model
In this section, the parameters of the lumped-mass model described by the Equations ( 47)-( 49) and ( 59)-(61) are adjusted.The aim is to ensure that the first vibration frequency of the model coincides with the frequency of a flexible beam in contact with a rigid object.This frequency, as discussed in [25], depends on the point at which the contact occurs, denoted as λ c .Furthermore, the order of the model should be kept as low as possible in order to minimize the computational complexity.It is important to note that since the model is dimensionless, the results obtained here are applicable to any slewing flexible beam.
To fit the model, we start with a lower-order model where only one mass is considered, and then increase the number of masses n until a satisfactory result is achieved for the system frequency.The parameters to be adjusted are the masses µ j and their respective distances λ µ j .The conditions imposed include keeping the total mass of the link and ensuring that the length of the link is maintained so that the last mass µ n is positioned at the end of the link, i.e., λ µ n = 1.
Remark.It should be noted that the parameters of the model with one mass (n = 1) are determined by the imposed conditions.Consequently, if these conditions are to be maintained, it is not possible to modify the parameters in order to obtain a better fit.
Thus, Matlab was used to adjust the above parameters with the aim of minimizing the mean square error defined in (66), where ω_1(λ_{c,i}) and ω̂_1(λ_{c,i}) represent the frequencies obtained from the model in [25] and from the lowest eigenvalue of the matrix R (or R̂), respectively. Both frequencies depend on the contact point λ_c. The contact point is defined in the interval (0, 1] with a step of 0.01, resulting in a total number of points of N_p = 100. The smallest number of masses that achieves a satisfactory fit of the first frequency is n = 3. The results for the models with a lower number of masses are presented in Appendix B. To achieve the fit for the three-mass model, it is considered that µ_1 varies within the interval (0, 0.99), and µ_2 within the interval (0, µ_1), both with a step of 0.01. For the positions of the masses, λ_{µ2} and λ_{µ1} vary within the intervals (0.01, 1) and (0, λ_{µ2}), respectively, with a step of 0.01. Moreover, µ_3 is defined by (65) and λ_{µ3} = 1. The model with the minimum mean square error, i.e., MSE = 0.008, is characterized by the parameters µ_1 = 0.49, µ_2 = 0.41, µ_3 = 0.10, λ_{µ1} = 0.26, and λ_{µ2} = 0.70. The comparison between ω_1(λ_c) and ω̂_1(λ_c) for this model is presented in Figure 4.
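The exhaustive search described above can be sketched as follows. The frequency functions `omega1_reference` (the tabulated curve of the distributed model [25]) and `omega1_lumped` (the lowest eigenvalue of the lumped contact model) are assumed to be supplied by the caller and are not defined here; the grid resolution and the constraints (total mass preserved, last mass at the tip) follow the text.

```python
import numpy as np
from itertools import product

# Grid search over the three-mass parameters, minimizing the mean-square error
# (66) of the first contact frequency over lam_c in (0, 1].
def fit_three_mass_model(omega1_reference, omega1_lumped, step=0.01):
    lam_c_grid = np.arange(step, 1.0 + 1e-9, step)            # 100 contact points
    w_ref = np.array([omega1_reference(lc) for lc in lam_c_grid])
    best_mse, best_params = np.inf, None
    for mu1, lam2 in product(np.arange(step, 0.99, step), np.arange(step, 1.0, step)):
        for mu2 in np.arange(step, mu1, step):
            for lam1 in np.arange(step, lam2, step):
                params = {"mu": (mu1, mu2, 1.0 - mu1 - mu2),   # total mass kept
                          "lam": (lam1, lam2, 1.0)}            # last mass at the tip
                w_fit = np.array([omega1_lumped(params, lc) for lc in lam_c_grid])
                mse = np.mean((w_fit - w_ref) ** 2)
                if mse < best_mse:
                    best_mse, best_params = mse, params
    return best_mse, best_params

# Note: the full 0.01-step grid is computationally heavy; a coarser step or a
# vectorized eigenvalue evaluation keeps the search tractable in practice.
```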
Flexible Beam Transfer Functions
The transfer function that relates the coupling torque to the motor angle in rigid contact is derived using the parameters from the previous section and Equations ( 47) and (48), or Equations ( 59) and (60) if the contact coincides with one of the masses.However, for the sake of clarity, the steps will be outlined for the first set of equations, but are exactly the same for the second.
As observed, the dynamic model obtained is nonlinear because gravity depends on the motor angle. Consequently, to obtain a linear model, the equations of the system are linearized around the point ∆θ_m,0 = 0 (67). Thus, taking into account that cos(θ_m(τ)) = cos(∆θ_m(τ) + θ_e), the equilibrium point is defined accordingly. Let us define the variations from the equilibrium points as δθ_m(τ) = ∆θ_m(τ) − ∆θ_m,0, δθ_µ(τ) = ∆θ_µ(τ) − ∆θ_µ,0, and δΓ_coup(τ) = Γ_coup(τ) − Γ_coup,0. A linearized model is obtained by using a first-order Taylor series expansion. Taking the Laplace transforms of these equations and substituting δθ_µ from (70) into (71), we obtain the transfer function between the coupling torque and the angle of the motor. Truncating this transfer function to the first mode of vibration, we obtain a function whose coefficients a(λ_c), b(λ_c), c(λ_c), and d(λ_c) are obtained with Matlab and are fitted by means of the functions in Appendix C. It is important to note that the coefficient a(λ_c) corresponds to the first vibration frequency. These coefficients are obtained for the nondimensional model. Consequently, in order to obtain a valid transfer function for the dimensional model, the corresponding transformations must be made, and the transfer function becomes Equation (77). Upon calculating Equation (77), a different representation of the transfer function is obtained, where K_a(λ_c) is the gain of the model, β(λ_c, θ_e) represents the zeros, and α(λ_c) denotes the poles. Evaluating the values of α(λ_c) and β(λ_c, θ_e) across the entire range λ_c ∈ [0, 1] and θ_e ∈ [−90°, 90°], it is found that α(λ_c) > β(λ_c, θ_e). This can be verified in Figure 5, which illustrates the evolution of α(λ_c) and β(λ_c, θ_e) with respect to λ_c. This property is crucial for demonstrating the robustness of our force controller in the subsequent section. Note that the value of β(λ_c, θ_e) barely changes with respect to θ_e. This is because c*(λ_c) is significantly smaller than b*(λ_c), resulting in β(λ_c, θ_e) ≈ β(λ_c). This observation can be checked in Figure A2 in Appendix C.
Control System
This section introduces a force control system for a single-link flexible robot operating under gravity.The objective is to regulate the force exerted by the flexible link on the environment, regardless of the contact point on the link.The control process is divided into three stages: (1) free motion control, where the link is servo-controlled until it makes contact with an object; (2) post-impact, where the link pushes against the object, gathering data from the force-torque sensor and using an estimator to identify where the contact point has been produced; and (3) force control, where the force exerted on the object is regulated using the information from the previous estimator.The system utilizes feedback from measurements of the motor's position and force-torque at the base of the link.
Thus, the subsystems comprising the control system can be classified into two categories: controllers and estimators.
Controllers: (a) The motor position controller, which comprises the inner loop, regulates the dynamics of the motors between the motor angle θ_m(t) and its reference θ*_m(t). This design ensures robustness against Coulomb friction, viscous friction, variations in link parameters, and external forces exerted on the link, allowing the system dynamics to be treated as a linear time-invariant system. (b) The force controller, comprising the outer loop, regulates the force applied by the antenna at the contact point to a desired value F*(t). This controller operates in conjunction with the inner loop. Thus, this outer loop uses feedback measurements of the force-torque at the base of the link and command control signals that adjust the motor references θ*_m(t).
Estimators: (a) The impact detector monitors the motor's position and the force-torque at the base of the link to detect the instant at which the antenna impacts an object. (b) The contact point estimator determines the point of the antenna at which contact has been detected.
The combination of these controllers and the estimators across the three stages of the control process is described.
In the first stage (free motion control), the inner loop is activated along with the impact detector, which continuously monitors the data.A programmed motor trajectory allows the antenna to perform a sweep.Then, the antenna moves freely until it makes contact, at which point the impact detector activates and triggers the transition to the second stage.
During the second stage (post-impact), a new reference is set for the inner loop, causing the motors to increase their position relative to the angle at which the contact has been detected, thereby ensuring that the antenna continues to exert pressure on the object.Then, the system remains steady for a predetermined period of time, collecting force-torque data from the base of the link until the contact point estimator identifies the point at which the antenna is pushing against the object.
Following contact point estimation, the third stage (force control) commences.The force control strategy incorporates two nested control loops: the inner loop and the outer loop.The inner loop regulates the motor position, while the outer loop indirectly controls the exerted force by adjusting the torque at the base of the link.Once the distance from the contact point to the joint is estimated, the outer loop sets the desired torque at the base of the link as a reference.This reference is calculated by multiplying the estimated distance by the desired force.
Detailed descriptions of all subsystems involved in the control process are provided below.
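To make the sequencing concrete, the following skeleton sketches the three-stage logic in Python. The estimator and controller routines (`impact_detected`, `estimate_contact_point`, `pi_force_step`) are passed in as callables and stand for the subsystems detailed in the next subsections; the names and the `state` container are illustrative, not taken from the paper.

```python
from enum import Enum, auto

class Stage(Enum):
    FREE_MOTION = auto()     # stage 1: programmed sweep until impact
    POST_IMPACT = auto()     # stage 2: push the object and record F-T data
    FORCE_CONTROL = auto()   # stage 3: regulate the exerted force

def control_step(state, meas, impact_detected, estimate_contact_point, pi_force_step):
    """One sampling period of the three-stage supervisor (illustrative only)."""
    if state.stage is Stage.FREE_MOTION:
        state.theta_ref = state.sweep_trajectory(meas.t)              # programmed sweep
        if impact_detected(meas):
            state.stage, state.t_impact = Stage.POST_IMPACT, meas.t
            state.theta_at_impact = meas.theta_m
    elif state.stage is Stage.POST_IMPACT:
        state.theta_ref = state.theta_at_impact + state.push_offset   # keep pushing
        state.ft_buffer.append(meas.force_torque)
        if meas.t - state.t_impact >= state.dt_record:                # e.g. 0.7 s window
            state.l_c = estimate_contact_point(state.ft_buffer)
            state.stage = Stage.FORCE_CONTROL
    else:  # Stage.FORCE_CONTROL
        torque_ref = state.F_desired * state.l_c                      # Gamma* = F* . l_c
        state.theta_ref += pi_force_step(torque_ref, meas.base_torque)
    return state.theta_ref    # reference handed to the motor position inner loop
```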
Motor Control Inner Loop
The inner loop is designed to control the position angle of the actuators so that the dynamics between the motor position and its reference become an approximately linear time-invariant system.This control is insensitive to gravity and external forces acting on the antenna and remains active throughout the entire control process.This control system has been utilized in both free-and constrained-motion scenarios (e.g., [39]), demonstrating its robustness and effectiveness.The structure incorporates PID controllers with a lowpass filter term, ensuring excellent trajectory tracking, compensating for disturbances such as unmodeled friction components, and maintaining robustness against parameter uncertainties.This provides precise and rapid motor positioning responses.Additionally, it includes a compensator for the nonlinear friction of the motor (stiction) to avoid motor dead-zones and a compensator for the estimated coupling torque caused by the force exerted by the link on the contacted object.
An algebraic design methodology allows for the arbitrary placement of the four poles and two zeros of the closed-loop system. By aligning the zeros and poles at the same position, denoted as p_m, and assuming that the compensators perfectly cancel the nonlinearities, the inner closed-loop dynamics are defined by Equation (82). This configuration allows for very rapid motor movements if the absolute values of the poles p_m are set high, provided that actuator saturation is avoided.
Impact Detector
Determining the precise moment of contact is crucial for initiating the mechanisms that estimate the contact point on the beam.In robotics, several mechanisms have been proposed that detect a collision by monitoring measured variables that exceed a threshold: [40] for rigid-link robots and [41] for flexible-link robots.Contact instants have also been estimated in artificial antennae designed to mimic insect behavior [42].These experiments employed a two-axis acceleration sensor positioned at the antenna's tip to measure link vibrations and gather information about contact.Information regarding contact was derived from analyzing vibration frequencies.In the case of our flexible antenna, we utilized a mechanism that predicts real-time coupling torque during the antenna's free movement.This mechanism involves comparing the predicted torque with sensor measurements in real time.The equation used to estimate coupling torque is derived from the dynamics of the flexible antenna link.A brief outline of this estimator is given below.
Consider the measured coupling torque vector provided by the F-T sensor at the base of the link. Denote the effect of gravity on the beam, expressed in the F-T sensor frame, as Γ⃗_g(t), with ρ·L being the mass of the antenna and θ_m2(t) the elevation motor angle. Define Γ⃗_e(t) as a real-time estimation of the coupling torque during free-movement mode, assuming no gravity, obtained from Equations (31) and (32). The residual error between the measured and the estimated coupling torques is then defined, and contact is declared at the instant t_i at which the absolute value of the time derivative of the magnitude of this residual error exceeds a threshold r_Γ^max, which is determined experimentally.
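A minimal numerical version of this residual-based detector is sketched below. It assumes the measured and model-predicted coupling torques are available as arrays of 3-component samples; the threshold value shown is a placeholder, since the paper determines r_Γ^max experimentally.

```python
import numpy as np

def detect_impact(gamma_measured, gamma_estimated, T_s=1e-3, r_max=0.05):
    """Return the estimated contact instant t_i (s), or None if no impact.

    gamma_measured, gamma_estimated: arrays of shape (N, 3) with the measured
    and free-motion-model coupling torques (gravity already compensated).
    r_max: threshold on |d|r|/dt|, chosen experimentally (placeholder here).
    """
    residual = np.linalg.norm(gamma_measured - gamma_estimated, axis=1)  # |r(t)|
    rate = np.abs(np.diff(residual)) / T_s                               # |d|r|/dt|
    above = np.nonzero(rate > r_max)[0]
    return (above[0] + 1) * T_s if above.size else None
```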
Contact Point Estimation
We propose determining the contact point of the antenna, where it makes contact with the surface of an object, using the algorithm outlined in [39].This algorithm combines two estimators.The first one relies on the relation between the lowest natural frequency of the oscillations experienced by the antenna after an impact, ω 1 , and the contact position, l c , as described in Section 4 and represented in Figure 4.This relationship is usually tabulated, allowing for a quick estimation of the contact point from the frequency value.Although this method gives a very precise estimation, it can sometimes yield two possible solutions.To resolve this ambiguity, a second estimator is employed.This one estimates the contact point using the static force and torque measurements of the sensor and the relation between these magnitudes.Since torque is the product of force and distance, it straightforwardly determines the application point.While this method may be less precise than the first, it effectively distinguishes between the two potential solutions provided by the initial estimator.
The contact point estimation process begins when the impact detector triggers the transition from the first stage (free motion of the antenna) to the second stage (post-impact).The antenna pushes against the object and remains steady for a determined period of time ∆t, during which the F-T sensor registers the oscillations of the antenna.Subsequently, Fast Fourier Transform (FFT) is performed on the data to determine the first vibration frequency ω 1 , and the contact point l c is obtained from the tabulated data, as depicted in Figure 4.In cases where two potential contact points are identified, the second estimator computes the contact point, determining which of the initial estimates is correct.
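The two-estimator scheme can be sketched as follows. The tabulated curve relating ω_1 and l_c (Figure 4) is assumed to be passed in as arrays; the candidate selection and the static torque/force disambiguation are simplified with respect to the full algorithm of [39].

```python
import numpy as np

def estimate_contact_point(torque_signal, T_s, l_c_table, omega1_table,
                           static_torque, static_force):
    """Estimate l_c from post-impact oscillations plus static F-T data (sketch)."""
    # Step 1: dominant post-impact frequency from the FFT of the recorded window
    sig = np.asarray(torque_signal) - np.mean(torque_signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=T_s)
    omega1 = 2.0 * np.pi * freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

    # Candidate contact points where the tabulated curve is close to omega1
    err = np.abs(np.asarray(omega1_table) - omega1)
    candidates = np.asarray(l_c_table)[err <= 1.05 * err.min()]

    # Step 2: the static relation torque = force * distance resolves the ambiguity
    l_c_static = abs(static_torque) / max(abs(static_force), 1e-9)
    return float(candidates[np.argmin(np.abs(candidates - l_c_static))])
```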
The frequency resolution obtained when performing the FFT on the registered data is inversely proportional to the length of the data vector: longer data vectors provide greater precision, but also increase the algorithm execution time. Therefore, a balance must be struck in defining the data registration time, ∆t.
Force Control Outer Loop
Force control is indirectly achieved by controlling the torque at the base of the antenna. If the force we aim to exert is F*(t) at contact point l_c, then a moment Γ*(t) = F*(t)·l_c should be exerted at the base of the antenna. The feedback measure from the F-T sensor, Γ(t), and the rest of the measures are modified to correct for the gravity effect of the antenna system, as is done for the impact detector in Section 6.2. Thus, the feedback signal is obtained from the measured coupling torques and forces provided by the F-T sensor at the base of the link, together with Γ⃗_g(t) and F⃗_g(t), the torque and force effects of gravity on the beam expressed in the F-T sensor frame, where the product ρ·L is the mass of the antenna and θ_m2(t) is the elevation motor angle.
Figure 6 presents the outer loop of the system, which is composed of the following: 1. The transfer function G*(s, λ_c, θ_e) from Equation (77), describing the dynamics of the antenna in contact with an object. 2. The motor control inner loop G_M(s), whose dynamics are described by Equation (82). 3. A controller C(s) of the PI type, Equation (87), which must verify the robustness condition derived below.
Figure 6. Force control outer loop scheme.
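Since the closed form of C(s) in Equation (87) is not reproduced in this text, the sketch below assumes a generic PI law acting on the torque error, C(s) = K_c (s + a_c)/s, discretized at the 1 ms sampling period; treat both the structure and the gains as illustrative rather than as the paper's exact controller.

```python
class OuterForceLoop:
    """Discrete-time PI outer loop acting on the base-torque error (sketch)."""

    def __init__(self, K_c, a_c, T_s=1e-3):
        self.K_c, self.a_c, self.T_s = K_c, a_c, T_s
        self.integral = 0.0

    def step(self, F_desired, l_c_est, gamma_measured):
        gamma_ref = F_desired * l_c_est            # torque reference: Gamma* = F* . l_c
        error = gamma_ref - gamma_measured         # gravity-compensated base torque
        self.integral += error * self.T_s
        # Assumed PI law C(s) = K_c (s + a_c)/s: proportional plus integral action
        u = self.K_c * (error + self.a_c * self.integral)
        return u                                   # correction sent to the inner loop
```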
This control system robustly stabilizes the dynamics of the robot under contact, maintaining stability for any pair of values where 0 < λ_c ≤ 1 and −90° ≤ θ_c ≤ 90°, as well as for any uncertainties in the mechanical parameters of the antenna. The next subsection will prove the above robustness conditions.
Stability Robustness Condition of C(s)
The robust stability of the PI force controller (87) is established using the Routh stability criterion, e.g., [43].
The closed-loop characteristic equation is obtained first; we hereafter omit the arguments of K_a, β, and α for the sake of simplicity. Then, the characteristic polynomial is formed. In order to assess the closed-loop stability, we must first check that all the coefficients of this polynomial are positive. This is easily verified, since α, β, K_a, K_c, a_c, and ε are positive. Next, we calculate the Routh table, giving the terms in the first column. It is easy to see that 2 − a_c·ε > 0 together with α > β are sufficient conditions to make the corresponding term positive.
• For the following term in the first column, again, it is easy to see that 2 − a_c·ε > 0 is a sufficient condition to make this term positive.
Therefore, considering that the property α > β established in the previous section is verified, we have proven that a PI controller provides force control with robust stability if condition (92) is verified.
Design Methodology of C(s)
In this particular application, an algebraic methodology is followed to adjust the parameters K_c and a_c of the controller C(s) (87). The characteristic equation of the system is given by (93). Calculating Equation (93) with (87), (82), and (78), while omitting λ_c and θ_e for clarity, yields the closed-loop polynomial. The two parameters K_c and a_c of the controller C(s) need to be adjusted. Thus, if a double pole of the system is selected and placed at p_F, the relations (95) and (96) need to be satisfied. From Equation (95), the parameter K_c is obtained as a function of a_c in (97). Finally, by calculating Equation (96) and including (97), the parameter a_c is obtained in (98).
Justification on Tuning the Outer Loop Considering the above Robust Stability Condition
Expressions (97) and (98) enable real-time tuning of the PI controller (87) once λ c and θ e have been estimated.We recall that θ e has minimal influence on the parameters of transfer functions G a (s, λ c , θ e ).However, if necessary, it can be determined by combining measurements from the motor encoders and an inclinometer mounted on the base of the haptic system.The tuning process aims to achieve the same closed-loop poles indepen-dently of the contact point λ c .Nevertheless, this ideal situation cannot be fully achieved in practice due to the following factors: 1. Variations in the estimation of the frequency ω 1 from measured data can lead to incorrect contact point estimations.As previously mentioned, the precision of the FFT depends on the length of the data vector, defined by ∆t.Since this period cannot be set too high, it inevitably introduces imprecision in contact point estimation.2. Modelling errors cause variations in the curve relating the contact point and first vibration frequency of the antenna, represented in Figure 4. 3. The transfer function G a (s, λ c , θ e ) (77) obtained from the model is a simplification of the full system, truncated to the first mode of vibration.This simplification introduces modeling errors in the parameters of this transfer function, leading to non-optimal calculations of controller parameters.
These issues can result in unstable outer loop control systems if a robustness condition is not imposed in the design of C(s).An unstable outer loop can cause undesirable and dangerous behavior, potentially exerting excessive force and risking the integrity of the robot.We have proven that condition (92) guarantees closed-loop stability in all these cases.Moreover, it ensures limited deterioration of the transient response in cases of mismatch.
Robot Parameters and Experimental Results
Experiments are conducted to test various contact points and programmed reference forces.Both degrees of freedom of the robot (azimuthal and elevation movements) are evaluated.Specifically, a set of seven different contact points ranging from λ c = 0.3 to λ c = 0.9 and three levels of reference forces |F * | = (0.05, 0.10, 0.15) N are tested between one and five times for each degree of freedom.An image of the experimental setup is shown in Figure 7, where the system is performing an azimuthal (horizontal) movement with the sensing antenna making contact with a steel cylinder at λ c = 0.9 of the antenna.
In this section, we first detail the main parameters of the system and the control process. Then, we present the experimental results obtained from the different algorithms at each stage of the control process.
Parameters of the System
Table 1 shows the parameters of the two motors of the system, where Motor 1 and Motor 2 refer to the azimuthal and elevation motors, respectively.
Table 2 details the characteristics of the antenna.Note that the link flexural rigidity EI, as defined previously, is a product of Young's Modulus E and the area moment of inertia I. Finally, Table 3 shows the most important parameters of each control system: • First, in the motor control inner loop, the closed-loop system's poles p m are placed at the same value for both the azimuthal and elevation motors to achieve homogeneous behavior of the system in both degrees of freedom.• Second, the impact detector threshold r max Γ , which has power units (N • m/s), is determined experimentally based on the maximum value of (84) obtained in the free-motion experiments, with an added security margin.• Third, in the contact point estimator, a time of ∆t = 0.7 s is chosen as it provides sufficient FFT precision while allowing the algorithm to execute quickly enough.In this case, the relation between ω 1 and l c , described in Section 4 and represented in Figure 4, is tabulated to allow for quick estimation of the contact point from the frequency value.• And fourth, in the force control outer loop, the closed-loop system's poles p F are placed to achieve the fastest outer loop response possible while satisfying the robustness condition (92).Furthermore, the different values of the parameters K c (λ c , θ e ) (97) and a c (λ c , θ e ) (98) of the force controller C(s) (87) are tabulated to facilitate quick tuning of the outer loop during experimentation.The results of the experiment depicted in the photo in Figure 7, where the antenna performs azimuthal displacement, contacts the cylinder at λ c = 0.9, and pushes with a programmed force of F * = 0.15 N, are represented in Figure 8.The data illustrate the complete control process, from the first stage of free motion control, through post-impact data acquisition in the second stage, to the force control in the third stage.Hereafter, the graphical results presented below belong to this same experiment.
First Stage Results: Impact Detector
The time required for the impact detector to detect contact is measured using a special setup involving the object that the antenna impacts.This setup consist of a thin copper wire attached very close to the surface of the steel cylinder, but not touching it.The cylinder is wired to the digital input of the DAQ system and is set to zero volts.The wire is connected to an output port of the DAQ supplying 5 volts.When the antenna hits the cylinder, it also pushes the wire towards the cylinder surface, causing an electrical connection between the wire and the cylinder.This results in a voltage change in the digital input of the DAQ system connected to the cylinder, registering the exact instant t i A at which the antenna contacts the cylinder.This setup is hereafter referred to as the analog impact detector, and a detailed image of it is shown in Figure 9. Figure 10 shows the performance of the system during the first stage, where the motors control the movement of the antenna, seeking the space until it makes contact with the cylinder.The figure includes plots of the motor reference versus encoder signal, measured torque versus torque simulated by the detector, and measured force applied to the cylinder.It also represents the moment at which the analog impact detector (t i A ) detects contact.Table 4 summarizes the mean results and the standard deviation of all experiments regarding the delay in estimating the contact instant.The time required for the impact detector to detect contact is calculated as ∆t i = t i − t i A , where t i > t i A .Alongside this table, a histogram of ∆t i for all experiments is shown in Figure 11.The histogram illustrates that the most frequent time estimation delay falls between 0 and 2 milliseconds.The experimental setup is positioned in each experiment for the impact to occur at a programmed position of the antenna.Specifically, a set of seven different contact points from λ c = 0.3 to λ c = 0.9 are measured and marked on the antenna with a white point (see Figure 7).Figure 12 shows the collected data during this second stage, where the antenna remains steady, pushing the cylinder for a determined period of time ∆t.This parameter determines the precision with which the FFT determines the frequency.The precision in the FFT frequency is calculated as the maximum frequency read f max , which is half of the frequency of system, divided by the length of the registered data L data , related to ∆t such that Taking into account the relation between ω 1 and l c described in Section 4 and represented in Figure 4, the maximum ∆ f that can be selected varies between 1.4 and 1.5 Hz.This data are obtained considering less than a 2% error in the length of the antenna when estimating the contact point, which corresponds to approximately 10 mm.Thus, the data acquisition time selected for the second stage is ∆t = 0.7 s.Table 5 summarizes the mean values of the estimated contact point and its errors for all the experiments.It can be observed that the mean absolute errors do not exceed the limit of 10 mm, which is approximately 2%.Alongside the table, Figure 13 shows the estimated contact points l c for all experiments in comparison with the real contact point reference l * c = λ c • L. 
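As a quick consistency check of the quoted numbers, the standard FFT bin-width relation gives a resolution of 1/∆t for a 0.7 s window, which falls inside the 1.4-1.5 Hz band mentioned above; the exact expression used in the paper is not reproduced here, so this is only an order-of-magnitude verification.

```python
# FFT resolution check for the 0.7 s post-impact recording window.
T_s = 1e-3                          # sampling period (s), i.e. 1 kHz
dt_record = 0.7                     # recording window Delta_t (s)
n_samples = int(dt_record / T_s)    # 700 samples
f_max = 0.5 / T_s                   # Nyquist frequency: 500 Hz
delta_f = 1.0 / dt_record           # FFT bin width: ~1.43 Hz
print(n_samples, f_max, round(delta_f, 2))   # 700 500.0 1.43
```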
As can be seen in Figure 13, the estimator provides accurate results. Figure 14 illustrates the performance of the control system during this third stage, where the antenna applies a specified force to the cylinder. During this phase, both the inner and outer loops of the control system operate concurrently. The programmed force is the reference of the outer loop, and the control signal that it generates becomes the input reference of the inner loop. Figure 14 demonstrates that the system operates effectively with zero steady-state errors.
As explained earlier, the outer loop controls the torque at the base of the antenna with the reference Γ * (t) = F * (t) • l c , where F * (t) is the desired force and l c is the estimated contact point from the previous stage.A PI controller is used, which achieves zero steadystate error in torque for every experiment conducted.However, variations in the estimation of the contact point l c lead to two issues: (1) tuning C(s) with non-optimal parameters, and (2) setting an incorrect reference torque Γ * (t).The first issue affects the transient response of the resultant system, as it does not operate as quickly as theoretically predicted.Ideally, the settling time t s of the outer loop response, obtained from simulations, is t s = 0.135 s.This ideal result can be compared with the data in Table 6, which presents the mean settling time measured in each experiment.Additionally, Figure 15 shows a histogram of the settling times t s obtained for all experiments.Finally, the second issue affects the steady-state response of the system.Since the estimation l c of the contact point may introduce errors compared with the real contact point l * c (see Figure 13), setting the reference torque Γ * (t) causes the control to push the cylinder with a force of F(t) = F * (t) • (l c /l * c ), which does not exactly correspond to the desired applied force |F * | = (0.05, 0.10, 0.15) N. The percentage error between the desired force F * and the real applied force F is calculated as: The mean percentage error of force is calculated considering the errors introduced by the estimation of the contact point in all the experiments conducted, and it is illustrated in Figure 16.These results are consistent with the errors observed experimentally (e.g., in Figure 14).
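The propagation of a contact-point estimation error into a steady-state force error follows directly from F = F*·(l_c/l*_c): the relative force error equals the relative error of the estimated contact point. The short example below uses an illustrative antenna length, since the prototype's dimensions are only given in Table 2 and are not reproduced in this text.

```python
# Illustrative force-error calculation (placeholder length L = 0.5 m).
F_star = 0.15                      # desired force (N)
L = 0.5                            # antenna length (m), placeholder value
l_c_true = 0.9 * L                 # true contact point: lambda_c = 0.9
l_c_est = l_c_true + 0.010         # ~10 mm estimation error (the ~2% bound above)
F_applied = F_star * (l_c_est / l_c_true)
error_pct = 100.0 * abs(F_applied - F_star) / F_star
print(f"applied force = {F_applied:.4f} N, error = {error_pct:.1f} %")   # ~2.2 %
```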
Discussion
This paper developed and tested a precise force control mechanism for a haptic device comprising a flexible link that rotates around one of its ends, resembling the antennae found in many insects.The link, with a distributed mass, executes azimuthal and elevation movements influenced by gravity.Contact with an object can occur at any intermediate point along the link.It is crucial in this context to regulate the force exerted by the antenna on the object to facilitate tasks such as object identification or moving an object.Previous works have addressed force control only when the contact is at the tip.Our work makes several significant contributions to the state of the art because, for the first time, (1) precise force control at intermediate points of a link is achieved; (2) a condition to design robustly stable controllers is obtained, i.e., controllers that maintain acceptable performance independently of the features of the controlled dynamics, that highly change with the contact point at the link; (3) we prove that simple PI controllers verifying this condition achieve such robustness stability; and (4) this control system yields satisfactory experimental results.Moreover, a lumped-mass model (with more than a lumped mass) of a flexible link in the context of contact with an object at an intermediate point is developed for the first time.This model is general because it is developed for a normalized beam.
Next, we specify the roles played by the dynamic models developed in this work.In the first scenario, in which the antenna moves freely, vibrating without suffering any contact, the obtained free model is used to predict (by simulating this model) the coupling torque.This prediction serves to compute the residue used by the impact detector for estimating the impact instant.In the second scenario, the antenna presses the object in a motor control fashion, without employing a flexible-link model.In the third scenario, the parameters of the PI controller are tuned using the family of approximate models obtained for the case of contact (these contact models are different in terms of the function of λ c , and the forms of their dependence on λ c vary in terms of the function of the interval between masses that is being considered or whether λ c corresponds to the position of one of the masses of the lumped dynamics model).Moreover, we mention that the obtained contact models played decisive roles in obtaining the family of robust controllers: (1) these models yielded truncated models with two imaginary poles and two imaginary zeros that were used in the closed-loop stability assessment, and (2) they allowed us to establish a property whereby the zeros are closer to the origin of the S-complex plane than the poles, a property that was crucial for obtaining the robustness condition.
We highlight that we have designed a broader control system in which the PI force controller is embedded. It also includes an impact detector and a real-time estimator of the contact point. The experiments conducted using this whole system demonstrate the effectiveness of this methodology, ensuring the stability of the system and achieving minimal force error at the contact point. Figure 14 shows a nearly null mean error in the steady state as a consequence of using a PI controller. However, since the values of the exerted forces are low, noticeable noise can be observed in the figure because these values are not far from the accuracy level of the force-torque sensor.
Next, we mention some limitations of the system. The first one is the precision of the controlled force, which depends on the strain gauge offset of the F-T sensor and, as previously mentioned, on the inaccuracy produced in the estimation of the contact position, which introduces a small error in the calculated torque reference. Another limitation is the assumption of small deflections. If this assumption were violated, the model obtained in Section 3 would be incorrect and the dynamics would become nonlinear. Finally, a third limitation is the assumption of a constant cross-section of the antenna. Other behaviors can be obtained using conical antennae. In this case, Section 3 should be redeveloped assuming a decreasing cross-section radius.
Finally, we mention that potential applications of the proposed force control exceed the haptic antennae case. It can be applied in other robotic scenarios like the following: (1) in biomimetics, where it can be used to design robotic birds that grasp objects with their beaks; (2) in industrial robots, where it can be used to design hands with flexible fingers that grasp objects with a programmed force, with contact at intermediate points of the fingers; and (3) in robot-assisted surgery, where a required force has to be exerted when the robot contacts an organ.

The model with n = 1 mass has a single vibration frequency, shown in Figure A1a, and this frequency tends to infinity as the contact approaches the tip of the beam, i.e., the position of the mass. In the case of the model with n = 2 masses, the minimization of (66) is performed by varying µ_1 and λ_µ1 in the interval (0, 1) with a step of 0.01. This gives a minimum mean square error of MSE = 0.390 for the model whose parameters are µ_1 = 0.63, µ_2 = 0.37, λ_µ1 = 0.39, and λ_µ2 = 1. However, a local minimum with MSE = 0.410 is highlighted for the model with parameters µ_1 = 0.63, µ_2 = 0.37, λ_µ1 = 0.72, and λ_µ2 = 1, whose MSE is close to the global minimum. These two models have two vibration frequencies, which are shown in Figure A1a,b. In this case, it is the second frequency that tends to infinity when the contact coincides with one of the masses.
Figure A1 shows the three frequencies of the model and, as in the previous cases, the third frequency tends to infinity when contact occurs at the position of one of the masses.
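As a sketch of the fitting procedure described above, the snippet below performs the same kind of exhaustive grid search over (µ_1, λ_µ1) with a 0.01 step; mse_of_model stands for the evaluation of criterion (66), which is not reproduced here, and it is assumed (as the reported parameter values suggest) that the masses sum to one and that λ_µ2 = 1.

```python
import numpy as np

def fit_two_mass_model(mse_of_model, step=0.01):
    """Exhaustive grid search over (mu1, lambda_mu1) in the open interval (0, 1)."""
    best_mse, best_params = np.inf, None
    grid = np.arange(step, 1.0, step)
    for mu1 in grid:
        for lam1 in grid:
            mse = mse_of_model(mu1=mu1, mu2=1.0 - mu1, lam1=lam1, lam2=1.0)
            if mse < best_mse:
                best_mse, best_params = mse, (mu1, 1.0 - mu1, lam1, 1.0)
    return best_mse, best_params   # minimum MSE and the parameters achieving it
```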
Figure 1. Haptic sensors of the antenna and whisker types in nature.
Figure 3. Scheme of the flexible beam (a) prior to contact and (b) after contact.
Figure 7. Experimental setup. In this photo, the system is performing an azimuthal (horizontal) movement with the sensing antenna hitting the steel cylinder at λ_c = 0.9 of the antenna.
Figure 8. Complete experimental results: motor angular position (inner loop), measured torque and force (outer loop) along the three stages of the control process. Case: azimuthal displacement, contact point λ_c = 0.9, programmed force F* = 0.15 N.
Figure 16. Summary of stage 3 results (II): mean percentage force error obtained for the azimuthal and elevation experiments.
Figure A1. Vibration frequencies as a function of the contact point, where (a) shows the first frequency, (b) the second, and (c) the third. The legend markers correspond, respectively, to the model of [25], to the model with n = 1 mass, to the models with n = 2 masses for λ_µ1 = 0.27 and λ_µ1 = 0.72, and to the model with n = 3 masses.
Table 1. Parameters of the motors.
Table 2. Parameters of the antenna.
Table 3. Parameters of the control system.
Table 4. Summary of stage 1 results: mean and standard deviation σ, over all experiments, of the delay in estimating the contact instant. Histogram of the delay ∆t_i in estimating the contact instant for all experiments.
7.2.2. Second Stage Results: Contact Point Estimation
Table 5. Summary of stage 2 results: mean values of the estimated contact points (in millimeters) and their errors (in millimeters and as a percentage of L) for all the experiments.
Table 6. Summary of stage 3 results (I): mean of the settling times t_s of the outer-loop responses, in seconds. Histogram of the settling times t_s of the outer-loop responses.
Table A1. Root mean square error of fitted functions. | 13,648.6 | 2024-07-01T00:00:00.000 | [
"Engineering"
] |
Thermo-mechanical Interfacial Stress Analysis in Electronic Packaging at Different Temperature Conditions: Revisiting the Authors' Work
The study of thermal mismatch induced stresses and their role in mechanical failure is one relevant topic to composite materials, photonic devices and electronic packages. Therefore, an understanding of the nature of the interfacial stresses under different temperature conditions is necessary in order to minimize or eliminate the risk of mechanical failure. An accurate estimate of thermal stresses in the interfaces plays a significant role in the design and reliability studies of microelectronic devices. In the microelectronic industry, from a practical point of view, there is a need for simple and powerful analytical models to determine interfacial stresses in layered structures. This review paper summarizes the work conducted by the authors in relation to the bi-layered assembly with different temperature conditions on the determination of interfacial thermal stresses. The authors have extended the case of uniform temperature model by earlier researchers of two layered structure to account for differential uniform temperatures, linear temperature gradient in the layers. The presence of a heat source in one layer (die) is also presented. Finally, the effect of bond material properties and geometry on interfacial stresses and bond material selection approach are also considered in a simple way.
Introduction
Thermo-mechanical stress develops at the interface of layered structures in electronic packaging during the manufacturing (curing) and operating stages. Since electronic chips are getting smaller and smaller while the power demands of the devices increase, a small deviation in the structure can cause functional and mechanical failure of the devices. Therefore, it is crucial to estimate the interfacial stresses accurately in order to design reliable devices [1].
The existing uniform temperature model for a bi-material assembly is not adequate to address the real-life situation in which the temperature levels of the two layers differ. Moreover, since heat flows through the materials, there will also be a temperature gradient within the layers. Thus, the existence of differential uniform temperatures as well as temperature gradients in the layers should be considered when determining the shearing and peeling stresses at the interface. As a result, a generalized form of the bi-material model is required that can accommodate any temperature condition in the layers. The effect of heat generation on interfacial stresses due to the presence of a heat source in a layer also needs to be investigated [1][2][3][4][5][6][7].
In this review paper, the authors present a summary of their work on the bi-layered assembly under different temperature conditions for the determination of interfacial thermal stresses. The authors have extended the uniform temperature model of earlier researchers for a two-layered structure to account for differential uniform temperatures and a linear temperature gradient in the layers. The case of a heat source present in one layer (the die) is also presented. Finally, the effect of bond material properties and geometry on interfacial stresses and a bond material selection approach are also considered in a simple way.
Bi-Layered uniform temperature model
Fig. 1 shows the full length of the model analyzed. AA represents the centre line of the model, and the model length is taken as 2L. In the 2-D model, the assembly is considered to be of unit width in the direction perpendicular to the plane of the paper, and the forces and moments are defined with respect to this unit width. The force F at any section of a layer is obtained by integrating the interfacial shear stress along the length. The compatibility condition between the top and the bottom layer is expressed in terms of the axial displacements U_i, i = 1, 2, of the layers.
In our approach, we translate the above condition into a simpler form written in terms of the axial strains of the two layers, i = 1, 2. By using condition (3), the model is developed by solving a second-order differential equation, which is much simpler than the solution of the integro-differential equation required by earlier methods.
The solution is based on the following assumptions:
1. The thickness of the layered assembly is relatively small.
2. Each layer can be regarded as a Bernoulli beam.
3. Each layer acts as a spherically bending thin plate.
4. No external forces act on the assembly.
5. The axial force due to thermal loading varies along the length, and shear is transferred over the full length of the bonded interface.
6. The adhesive layer (solder bond) is very thin compared with the top and bottom layers.
With reference to Fig. 1, the axial strain components at the interface of the two layers are written in terms of the interfacial shear compliances K_i and the shear moduli of rigidity G_i of the layers. The shear strain components in the layers appearing in equation (4) include the strain due to the shearing force and the strain due to the change of temperature. The interfacial shear stress τ(x) is given by equation (5) and the peeling stress P(x) by equation (6); in equations (5) and (6), the two compliance terms are the shear stress compliances of the upper and lower layers, respectively.
Bi-Material differential temperatures model
Introducing two parameters relating to the layer temperatures, equations (5) and (6) can be re-expressed for the case in which the temperature change assigned to the entire Layer 1 is ∆T_1 and that assigned to the entire Layer 2 is ∆T_2. The total change of curvature of the assembly due to the change of temperature is expressed by eq. (9).
Bi-Material Linear Temperature Gradient model
Now, considering this modified value of 1/R in eq. (9), eqs. (7) and (8) can be reconstructed as eqs. (10) and (11). From eq. (12) it can be observed that when the gradient in the materials is zero (∆T_1 = ∆T_3 and ∆T_2 = ∆T_4), the term A_2 becomes zero and eqs. (10) and (11) reduce to (7) and (8), which constitute the differential uniform temperature model. It is also observed that the correction factor A_2 (in eq. (12)) is contributed by six parameters: m, n, and four layer-dependent parameters.
Derivation for heat generation in the layer (Die)
In reality, heat is generated in one of the layers, say the die. In this situation, the temperature distribution across the layer is expected to be quadratic rather than linear; it is governed by the differential equation (13), where G and k represent the heat flux and the thermal conductivity of the die material, respectively [10]. The expression for the ∆T term is formed by applying the boundary conditions of Fig. 4, namely T = T_1 at y = 0 and T = T_3 at y = t_1, to the solution of eq. (13), where ∆T_1 and ∆T_3 represent the temperature changes at the interface and at the top of the die, respectively. Thus, so far the analytical model has taken account of the differential temperature conditions in the layers, which is more realistic from a practical packaging point of view.
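As an illustration only, if eq. (13) has the standard form of one-dimensional steady conduction with a uniform heat source, d²T/dy² = −G/k (an assumption, since the equation itself is not reproduced above), then the boundary conditions T(0) = T_1 and T(t_1) = T_3 give the quadratic profile

```latex
T(y) = T_1 + \left(T_3 - T_1\right)\frac{y}{t_1} + \frac{G}{2k}\, y\left(t_1 - y\right), \qquad 0 \le y \le t_1,
```

which reduces to the linear profile of the previous section when G = 0.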
Analytical Model with Bond layer
In the previous sections, a perfect bonding condition was assumed in the development of the interfacial stress model for the bi-layered electronic package. In reality, however, there exists a very thin layer of adhesive bonding material attaching the two layers together. There is therefore a need to evaluate the influence of this thin bond layer in the analytical model. Interestingly, this bond layer may contribute significantly to alleviating the interfacial stresses if appropriate bond layer parameters are chosen. In this section, the previous bi-layered perfect-bonding model of Section 1 is upgraded to take the bond layer into consideration. Subsequently, a process flow chart is proposed to select a suitable bond material, using the rule of mixtures, for the physical design and fabrication of layered assemblies.
The same analytical model that has been used in Paper 1 of this conference (title: "Bond layer properties and geometry effect on interfacial thermo-mechanical stresses in bi-material electronic packaging assembly") is utilized for the bond material selection and design approach. In this paper, only the final model is presented, which gives the interfacial shear stress τ(x).
The proposed bond material selection approach
Step 1: Key in the properties and geometry of the chip and substrate of an arbitrary bi-layered package.
Step 2: Key in the range of bond layer parameters.
Step 3: Key in the interfacial shearing and peeling stress.
Step 4: Select the bond layer property and geometry parameters (for instance, elastic modulus and thickness).
Step 5: Find the volume fraction of the material combination (alloy) using the rule of mixtures (a numerical sketch of this calculation is given after the step list).
The parametric study carried out earlier concluded that the dominant factors of the bond layer in minimizing interfacial stresses in the attached layers are its elastic modulus E_i and thickness t_i. Since the thickness of the bond layer is a physical property that can be altered, the application of the rule of mixtures in selecting the material combination for the bond layer focuses on the elastic modulus.
Therefore, 41.7% tin and 58.3% antimony are required to manufacture a tin-antimony alloy bond layer with the desired Young's modulus of 50.0 GPa, in order to minimize the interfacial stresses in the silicon-diamond electronic package.
Step 6: Fabrication of the MMC composite material based on the combination obtained from the rule of mixtures.
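The rule-of-mixtures calculation of Step 5 can be sketched numerically as follows; the linear mixing rule E = V_A·E_A + (1 − V_A)·E_B is the assumed form, and the property values repeat those quoted above.

```python
def volume_fraction_for_target(e_target, e_a, e_b):
    """Volume fraction of material A giving the target modulus under the linear rule of mixtures."""
    return (e_target - e_b) / (e_a - e_b)

# Tin-antimony bond layer for the silicon-diamond package discussed above (moduli in GPa).
E_TIN, E_ANTIMONY, E_TARGET = 43.0, 55.0, 50.0
v_tin = volume_fraction_for_target(E_TARGET, E_TIN, E_ANTIMONY)
print(f"tin: {v_tin:.1%}, antimony: {1 - v_tin:.1%}")   # approximately 41.7% tin, 58.3% antimony
```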
Conclusions
This review paper summarizes the work conducted by the authors in relation to the bi-layered assembly with different temperature conditions on the determination of interfacial thermal stresses.The authors have extended the case of uniform temperature model by earlier researchers of two layered structure to account for differential uniform temperatures, linear temperature gradient in the layers.The presence of a heat source in one layer (die) is also presented.Finally, the effect of bond material properties and geometry on interfacial stresses and bond material selection approach are also considered in a simple way.
Strain due to change of temperature = α_i ∆T (the positive sign arises because ∆T is assumed to be positive, and consequently the effect is an extension of the layers).
Figure 2: Bi-layered Assembly with Linear Temperature Gradients in the Layers [1]. Considering the top layer of Fig. 2, the temperature distribution throughout the thickness can be represented as shown in Fig. 3.
Figure 3: Linear Temperature Distribution Gradient in the Top Layer [1].
Figure 4: A Die Section with a Heat Source [1]. A further figure shows the free body diagram of the full length of the model.
Figure 6: Rule of Mixtures design for interfacial bond layer selection.
Rule of mixtures: E = E_A × V_A + E_B × V_B (V = volume fraction), where E_A is the property of material A and E_B is the property of material B. Elastic modulus of the desired bonding material: 50 GPa; E_A = elastic modulus of tin, 43.0 GPa; E_B = elastic modulus of antimony, 55.0 GPa. | 2,499 | 2018-01-01T00:00:00.000 | [
"Engineering"
] |
On the interpretation of inflated correlation path weights in concentration graphs
Statistical models associated with graphs, called graphical models, have become a popular tool for representing network structures in many modern applications. Relevant features of the model are represented by vertices, edges and other higher order structures. A fundamental structural component of the network is represented by paths, which are a sequence of distinct vertices joined by a sequence of edges. The collection of all the paths joining two vertices provides a full description of the association structure between the corresponding variables. In this context, it has been shown that certain pairwise association measures can be decomposed into a sum of weights associated with each of the paths connecting the two variables. We consider a pairwise measure called an inflated correlation coefficient and investigate the properties of the corresponding path weights. We show that every inflated correlation weight can be factorized into terms, each of which is associated either to a vertex or to an edge of the path. This factorization allows one to gain insight into the role played by a path in the network by highlighting the contribution to the weight of each of the elementary units forming the path. This is of theoretical interest because, by establishing a similarity between the weights and the association measure they decompose, it provides a justification for the use of these weights. Furthermore we show how this factorization can be exploited in the computation of centrality measures and describe their use with an application to the analysis of a dietary pattern.
Introduction
Graphical models provide a compact and efficient representation of the association structure of a multivariate distribution by means of a graph and have become a popular tool for representing network structures in many applied contexts; see Maathuis et al. (2019) for a recent review of the state of the art of graphical models. If X_V is a vector of continuous random variables then an undirected network, called a concentration graph of X_V, is constructed in such a way that every vertex is associated with a variable and a missing edge between two vertices implies that the corresponding partial correlation is equal to zero (Lauritzen 1996). In this way, the association structure of X_V is encoded by the paths connecting the variables. Paths are the main tools used in the definition of separation criteria and therefore of the Markov properties characterizing these statistical models. More concretely, an edge joining two vertices can be regarded as a single-edge path and encodes a direct association between the corresponding variables, whereas a path made up of two or more edges represents an indirect association mediated by the intermediate variables in the path. It follows that the collection of all the paths joining a pair of vertices provides a full description of the association structure between the corresponding variables.
Example 1. The upper triangle of Table 1 contains the entries of the variance and covariance matrix of a random vector X_V with V = {x, 1, 2, 3, 4, 5, y}. From the covariance matrix one can compute the partial correlations for every pair of variables given the remaining variables (lower triangle), and Fig. 1 shows a concentration graph of X_V. One can see, for instance, that the analysis of the association structure of X_x and X_y can be carried out by investigating the role played by the six different paths joining x and y in the graph; these are detailed in Table 2.
Table 1. Instance of a covariance matrix whose inverse is adapted to the graph of Fig. 1. In this matrix the variances are all equal to 9 (main diagonal) and the covariances (upper triangle) are such that the values associated with the edges of the graph are all equal to 4.5. The lower triangle (in bold) gives the corresponding partial correlations.
In models for directed acyclic graphs the well-established theory of path analysis (Wright 1921) provides a method that allows one to quantify the relevance of a directed path. On the other hand, in models for undirected graphs the problem of quantifying the strength of the association encoded by paths has been investigated only more recently. Jones and West (2005) considered the measure of association between two variables provided by the covariance and showed that this quantity can be decomposed in terms of additive weights associated with the paths joining the corresponding vertices. Roverato and Castelo (2020) provided an analysis of the properties of the covariance path weights introduced by Jones and West (2005) and showed that inflation factors play a key role in the interpretation of these quantities; see also Castelo (2017, 2018); Peeters et al. (2020).
The comparison of paths with different endpoints requires the use of normalized measures of association and, to this aim, Roverato and Castelo (2020) considered the weights obtained from the decomposition of correlation coefficients. Furthermore, they introduced a novel normalized measure of linear association, named the inflated correlation coefficient, and showed that the weights obtained from the decomposition of this quantity satisfy useful properties that, as far as the strength of paths is of concern, make them an appealing alternative to the classical correlation coefficients.
Here, we focus on the weights obtained from the decomposition of inflated correlations. A path weight quantifies the relevance of the corresponding path. A path can be seen as an ordered sequence of vertices and edges, and we show that every inflated correlation weight can be factorized into terms, each of which is associated either with a vertex or with an edge of the path. More specifically, every vertex is associated with an inflation factor quantifying the contribution of the variable to the path. Furthermore, every edge is associated with a partial correlation quantifying the contribution to the path of the corresponding pairwise association. This factorization allows one to gain insight into the role played by a path in the network by highlighting the contribution to the weight of each of the building blocks forming the path. This is of special interest in the comparison of paths. Moreover, it provides a theoretical justification for the use of these weights because it shows that inflated correlations can be decomposed into the sum of weights which can themselves be interpreted as inflated (partial) correlations, thereby conferring consistency between the weights and the association measure they decompose. We then show how this factorization can be used to construct betweenness centrality measures specifically designed to suit the graphical model framework. Finally, an application in the context of dietary pattern analysis is provided. This paper is organized as follows. Background on inflation factors, inflated correlation matrices, concentration graph models and path weights is given in Sect. 2. In Sect. 3 we establish a connection between inflation factors and the determinant of inflated correlation matrices, whereas Sect. 4 deals with inflated correlation weights and describes their decomposition. The betweenness centrality measures based on path weights and their decomposition are introduced in Sect. 5, where an application to the analysis of the eating behaviour of a group of subjects is also given. Finally, Sect. 6 contains a brief discussion.
Inflation factors and the inflated correlation matrix
Let X = X_V be a random vector indexed by a finite set V = {u, v, …, z} with covariance matrix Σ = {σ_uv}_{u,v∈V}. We denote by K = {κ_uv}_{u,v∈V} the concentration matrix of X_V and recall that K = Σ^{-1}. For two subsets A, B ⊆ V such that A ∩ B = ∅, we consider the subvectors X_A and X_B of X_V and denote by X_A | X_B the residual vector deriving from the linear least squares predictor of X_A on X_B (see Whittaker 1990, p. 134). It follows that the covariance matrices of X_A and X_A | X_B are Σ_AA and Σ_{AA·B} = Σ_AA − Σ_AB Σ_BB^{-1} Σ_BA, respectively, where we use the convention that Σ_AA^{-1} = (Σ_AA)^{-1} and similarly for Σ_{AA·B}. We denote by σ_{uv·B}, for u, v ∈ A, the entries of Σ_{AA·B} and recall that, in the Gaussian case, Σ_{AA·B} coincides with the covariance matrix of the conditional distribution of X_A given X_B. We write Ā to denote the complement of A relative to V, that is Ā = V ∖ A, and remark that the concentration matrix of X_A | X_Ā is the submatrix of K with entries indexed by A, because it follows from the rule for the inversion of a partitioned matrix that Σ_{AA·Ā}^{-1} = K_AA. In linear regression diagnostics, the effect of multicollinearity may be quantified by means of the variance inflation factor. The inflation factor of X_v on X_{V∖{v}} is defined as IF_v = 1/(1 − ρ²_{v(V∖{v})}), where ρ_{v(V∖{v})} is the multiple correlation of X_v on X_{V∖{v}}. IF_v takes values in the interval [1, +∞); it is equal to one if and only if X_v and X_{V∖{v}} are uncorrelated, and its value increases as ρ_{v(V∖{v})} increases (see Belsley et al. 2005; Chatterjee and Hadi 2012). Fox and Monette (1992) considered the case where one is concerned with sets of regressors rather than with individual regressors and introduced a generalized version of the variance inflation factor; specifically, for a pair of subsets A, B ⊆ V with A ∩ B = ∅, this is given in equation (1). We will refer to IF^B_A as the inflation factor of A on B and, in order to simplify the notation, we will write IF_A when B = Ā. Throughout this paper, the covariance matrices we consider are assumed to be positive definite and, furthermore, we use the convention that the determinant of a submatrix whose rows and columns are indexed by the empty set is equal to one. In this way, the inflation factor in (1) is always well-defined, with IF^B_A = 1 whenever either A = ∅ or B = ∅. Fox and Monette (1992) also suggested a generalization of (1) to the case where X_V is partitioned into k sets, A_1, …, A_k. In the special case where k = p, so that every set contains a single variable, this inflation factor becomes a global measure of association and it is equal to 1/|ρ|, where ρ = diag(Σ)^{-1/2} Σ diag(Σ)^{-1/2} is the correlation matrix of X_V, with entries ρ_uv, for u, v ∈ V. This result is consistent with the usual interpretation of the determinant of ρ as a common global measure of collinearity, justified by noting that |ρ| = 1 for mutually uncorrelated variables and |ρ| = 0 for perfectly collinear variables. Roverato and Castelo (2020) introduced the matrix V = diag(K)^{1/2} Σ diag(K)^{1/2} and named V the inflated correlation matrix because its entries are given by V_uv = σ_uv (κ_uu κ_vv)^{1/2}. Furthermore, they showed that the determinant of V can be computed in closed form and that this determinant provides an alternative global measure of linear association which, like 1/|ρ|, takes values in the interval [1, +∞) and is equal to one if and only if Σ is diagonal.
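The quantities above are easy to compute numerically. The sketch below (illustrative, numpy-based) uses the standard identity IF_v = σ_vv κ_vv relating the variance inflation factor to the diagonal entries of Σ and K; the diagonal of V then coincides with the vector of inflation factors.

```python
import numpy as np

def inflation_factors_and_V(sigma):
    """Per-variable inflation factors and the inflated correlation matrix
    V = diag(K)^(1/2) Sigma diag(K)^(1/2), with K = Sigma^(-1)."""
    sigma = np.asarray(sigma, dtype=float)
    K = np.linalg.inv(sigma)                 # concentration matrix
    if_v = np.diag(sigma) * np.diag(K)       # IF_v = sigma_vv * kappa_vv = 1 / (1 - multiple R^2)
    d = np.sqrt(np.diag(K))
    V = d[:, None] * sigma * d[None, :]      # entries sigma_uv * sqrt(kappa_uu * kappa_vv)
    return if_v, V
```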
The quantities defined in this section can also be computed with respect to the distribution of X_A | X_B. More specifically, we will denote by V_{AA·B} = {V_{uv·B}}_{u,v∈A} the inflated correlation matrix of X_A | X_B, and we remark that if A ∪ B = V then, similarly to the covariance matrix Σ_{AA·Ā}, the matrix V_{AA·B} can be computed directly from the relevant submatrices of Σ and K.
Concentration graph models
An undirected graph with vertex set V is a pair G = (V, E) where E is a set of edges, which are unordered pairs of vertices; formally E ⊆ V × V. The graphs we consider have no self-loops, that is, {v, v} ∉ E for any v ∈ V. A path of length k ≥ 2 between x and y in G is a sequence ⟨v_1, …, v_k⟩ of distinct vertices such that v_1 = x, v_k = y and {v_i, v_{i+1}} ∈ E for every i = 1, …, k − 1. We denote by V(π) ⊆ V and E(π) ⊆ E the set of vertices and the set of edges of the path π, respectively. We write π_xy when we want to make explicit which are the endpoints of the path and, furthermore, when clear from the context we will set P ≡ V(π), thereby improving the readability of sub- and superscripts. For a pair of vertices x, y ∈ A we denote by Π_xy the collection of all paths between x and y in G.
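The collection Π_xy can be enumerated with a simple depth-first search. The sketch below is illustrative and represents the graph as a plain dictionary of neighbour sets; for graphs of the size considered here the exhaustive enumeration is inexpensive.

```python
def all_paths(adj, x, y):
    """Enumerate all simple paths between x and y in an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a list of vertex tuples, each starting at x and ending at y.
    """
    paths, stack = [], [(x, (x,))]
    while stack:
        v, path = stack.pop()
        if v == y:
            paths.append(path)
            continue
        for w in adj[v]:
            if w not in path:                  # keep vertices distinct (simple paths)
                stack.append((w, path + (w,)))
    return paths

# Small check on a 4-cycle x-1-y-2-x (not the graph of Fig. 1): the two paths
# <x,1,y> and <x,2,y> are returned.
# all_paths({'x': {'1', '2'}, '1': {'x', 'y'}, '2': {'x', 'y'}, 'y': {'1', '2'}}, 'x', 'y')
```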
If K is the concentration matrix of X_V then for every u, v ∈ V it holds that ρ_{uv|V∖{u,v}} = −κ_uv / (κ_uu κ_vv)^{1/2}, where ρ_{uv|V∖{u,v}} is the partial correlation coefficient of X_u and X_v given X_{V∖{u,v}}; see Whittaker (1990, section 5.7). It follows that ρ_{uv|V∖{u,v}} = 0 if and only if κ_uv = 0, and we say that K is adapted to a graph G = (V, E) if for every κ_uv ≠ 0, with u ≠ v, it holds that {u, v} ∈ E; accordingly, we call G a concentration graph of X_V. The concentration graph model (Cox and Wermuth 1996) with graph G = (V, E) is the family of multivariate normal distributions whose concentration matrix is adapted to G. The latter model has also been called a covariance selection model (Dempster 1972) and a graphical Gaussian model (Whittaker 1990); we refer the reader to Lauritzen (1996) for details and discussion.
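The relation between K and the partial correlations is easily exercised numerically; a zero entry (up to numerical error) in the matrix returned by the helper below corresponds to a missing edge of the concentration graph.

```python
import numpy as np

def partial_correlations(sigma):
    """Matrix of partial correlations rho_{uv | rest} obtained from K = Sigma^(-1)."""
    K = np.linalg.inv(np.asarray(sigma, dtype=float))
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)          # rho_{uv|rest} = -kappa_uv / sqrt(kappa_uu * kappa_vv)
    np.fill_diagonal(P, 1.0)         # convention: diagonal set to one
    return P
```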
Decomposition of association measures over G
In the analysis of concentration graph models, Jones and West (2005) showed that the covariance between two variables can be computed as the sum of weights associated with the paths joining the two variables. More specifically, if the concentration matrix K of X_V is adapted to the graph G = (V, E), then for every x, y ∈ V the covariance σ_xy equals the sum, over all paths π ∈ Π_xy, of the weights ω(π, Σ) defined in (4); this is the decomposition stated in (3). The quantity ω(π, Σ) in (4) represents the contribution of the path π to the covariance σ_xy and for this reason we call it the covariance weight of π relative to X_V. More generally, we will refer to (3) as the covariance decomposition over G.
An issue concerning the covariance decomposition in (3) is the interpretation of the values taken by the weights of a path. From this perspective, Roverato and Castelo (2020) showed that every covariance weight can be factorized as ω(π, Σ) = ω(π, Σ_{PP·P̄}) × IF_P, where the two factors provide two clearly distinct pieces of information. More specifically, the first term, ω(π, Σ_{PP·P̄}), is the covariance weight computed on the distribution of X_P | X_P̄ and captures the strength of the path after adjusting for all the variables outside the path, while the inflation factor IF_P captures the connectivity of the vertices of the path with the rest of the multivariate system.
Example 2. Table 1 gives the entries of a covariance matrix Σ whose inverse K is adapted to the graph G of Fig. 1. There are |Π_xy| = 6 paths between x and y in G and these are given in Table 2, together with the corresponding weights. The covariance of X_x and X_y is equal to σ_xy = 2.411, and it can be readily checked that the six weights in Table 2 sum to this value.
Because the six weights have the same sign (they are actually all positive) it makes sense to include in Table 2 a column with the relative contribution of every path to the covariance. This shows, for example, that almost 50% of the value of σ_xy is due to path π_3. The decomposition of covariance weights into partial weights and inflation factors is given in the last two columns of Table 2. One can see that the relevance of path π_3 with respect to the other paths is mainly due to its partial weight, because its inflation factor is only slightly larger than those of the other paths.
It can be shown that the value of a covariance weight depends on the scale of the variables that are the endpoints of the path. Hence, in order to compare paths with different endpoints it is necessary to deal with normalized quantities. Roverato and Castelo (2020) noticed that the decomposition in (3) is not restricted to covariance matrices but can be straightforwardly extended to any positive definite matrix obtained as Λ Σ Λ, where Λ is a diagonal matrix with nonzero diagonal entries. More specifically, both ρ and V are specific instances of this general setting with Λ = diag(Σ)^{-1/2} and Λ = diag(K)^{1/2}, respectively. Indeed, both correlations and inflated correlations are normalized measures of association and for this reason the corresponding weights are of interest. More specifically, Roverato and Castelo (2020) provided the decomposition of inflated correlations stated in (5); we refer to Roverato and Castelo (2020) for details on the properties of inflated correlation weights. Here we remark that, like the weight in (4), inflated correlation weights can be factorized into a partial weight and an inflation factor, ω(π, V) = ω(π, V_{PP·P̄}) × IF_P, and, furthermore, all the factors in (6) admit a clear interpretation because the partial weight is the product of the partial correlations corresponding to the edges of the path and of a factor |V_PP| ≥ 1 which "inflates" the partial correlations.
Table 2. All the paths between x and y in the graph of Fig. 1 with the corresponding weights, the proportion of covariance due to each weight, the partial weight and the inflation factor.
On the relationship between inflation factors and inflated correlation matrices
In the theory of path weights a key role is played by both inflation factors and inflated correlation matrices. The inflation factor IF^B_A provides a well-established way to quantify the linear association between two random vectors X_A and X_B. On the other hand, the inflated correlation matrix V was first introduced by Roverato and Castelo (2020), who showed that the determinant of this matrix can be regarded as a multivariate generalization of the inflation factor, and therefore as a global measure of linear association of X_V. Like the inflation factor, |V| takes values in the interval [1, +∞), where the value 1 represents absence of linear association, and in this section we formally establish a connection between these two quantities. The stated relationship between inflation factors and inflated correlation matrices will be exploited in the next section for the computation and interpretation of path weights. However, it is also of theoretical interest because it provides a clear way to interpret the value of |V|, thereby allowing us to gain insight into the type of information conveyed by this quantity.
We first need to prove the following lemma.
Lemma 1 Let X V be a random vector indexed by a finite set V and let
The identity (i) can be shown by using the alternative formulation of the inflation factor given in Roverato and Castelo (2020, eqn. (4)) and then applying Schur's determinant identity, which yields (7); notice that (7) is still valid if either A′ or B is equal to the empty set because, if A′ = ∅, then, by convention, the corresponding determinant is equal to one. In order to show (iii) we first apply Schur's determinant identity to |V_{AA·B}|, obtaining (8), where κ_vv is the relevant entry of the concentration matrix K of X_V. We also notice that in the case where A = {v} and B = Ā = V ∖ {v}, (8) simplifies accordingly. Finally, (ii) is a special case of (iii) obtained by setting B = ∅. ◻ It is worth remarking that, in order to apply Lemma 1 in the case where either A′ or B is empty, one has to recall that in this paper we use the convention that the determinants of matrices indexed by the empty set are equal to one; this convention also fixes the value of the boundary factors appearing in the results below.
Theorem 2. Let X_V be a random vector indexed by a finite set V with |V| = p, and let V be its inflated correlation matrix. Then, for any nonempty subset A ⊆ V and any numbering of the elements of A = {v_1, …, v_{|A|}}, the determinant |V_AA| factorizes into a product of inflation factors as stated in (9). Furthermore, when A = V the last term of the factorization is equal to one, that is, IF_{v_p|pr(v_p)} = 1.
Proof. If |A| = 1 then (9) is equivalent to point (ii) of Lemma 1. Hence we assume |A| = q with q ≥ 2 and consider an arbitrary numbering A = {v_1, …, v_q} of the elements of A. We can first apply the factorization (ii) of Lemma 1 to the first element and then apply (iii) iteratively to v_2, …, v_{q−1} to obtain the factorization in (9). ◻ This theorem deals with an arbitrary submatrix of V and shows that its determinant can be written as a product of inflation factors. On the right-hand side of (9) the elements of A are taken one at a time, and the term relative to v_i, for i = 1, …, |A|, captures the additional contribution of X_{v_i} to |V_AA| with respect to the previously considered variables. Concretely, this is given by the inflation factor of X_{v_i} computed on the distribution of X_{V∖pr(v_i)} | X_{pr(v_i)}, that is, the inflation factor of X_{v_i} on all the remaining variables linearly adjusted for X_{pr(v_i)}. It is worth noting that the contribution of the last variable in the numbering is IF_{v_{|A|}|pr(v_{|A|})}, that is, the inflation factor of X_{v_{|A|}} on X_Ā adjusted for X_{A∖{v_{|A|}}}, and in the case where A = V this is equal to 1 and therefore uninfluential. In turn, this implies that in this case only the first p − 1 terms of (9) contribute. It is also useful to compare (9) with the following factorization of IF_A.
Theorem 3. Let X_V be a random vector indexed by a finite set V. Then, for any nonempty subset A ⊆ V and any numbering of the elements of A = {v_1, …, v_{|A|}}, IF_A factorizes as stated in (10). Proof. The result follows from the iterative application of the factorization (i) of Lemma 1. ◻ In a similar fashion to (9), each term of the factorization (10) captures the additional contribution of X_{v_i} to IF_A. In order to understand the different type of information provided by |V_AA| with respect to IF_A it is useful to compare every term on the right-hand side of (9) with the corresponding term in (10). In this way we see that both IF^Ā_{v_i|pr(v_i)} and IF_{v_i|pr(v_i)} are computed on the distribution of X_{V∖pr(v_i)} | X_{pr(v_i)}; however, the former inflation factor only involves the linear association between X_{v_i} and the variables not in A, that is X_Ā, whereas the latter involves the linear association between X_{v_i} and both the variables not in A and the remaining variables in A, that is, both X_Ā and X_{A∖(pr(v_i)∪{v_i})}. The following result gives an additional relationship between IF_A and |V_AA|.
Corollary 4. Let X_V be a random vector indexed by a finite set V and let A ⊆ V be nonempty. Then |V_AA| = |V_{AA·Ā}| × IF_A, as stated in (12).
Proof. The relevant identity holds for every i = 1, …, |A|, so that the result follows from (9) and (11). ◻ Equation (12) shows that |V_AA| can be computed as the product of two quantities, |V_{AA·Ā}| and IF_A. The former, |V_{AA·Ā}|, is a measure of the global association of the variables in X_A linearly adjusted for X_Ā, whereas the latter, IF_A, measures the strength of the linear association between X_A and X_Ā. Recall that both |V_{AA·Ā}| ≥ 1 and IF_A ≥ 1, and therefore |V_AA| = 1 if and only if Σ_{AA·Ā} is diagonal and Σ_{AĀ} = 0.
Decomposition of inflated correlation weights
In this section we consider the inflated correlation weights in (6) and exploit the results of the previous section to provide an alternative formulation of these quantities that identifies the role played by every vertex and edge of the path. Assume that the concentration matrix K of the random vector X_V is adapted to the undirected graph G = (V, E), and let π = ⟨v_1, …, v_k⟩ be a path between v_1 and v_k in G. The vertices P = V(π) of π are naturally ordered along the path; more precisely, because the paths we consider are undirected, every path identifies two different orderings of its vertices, each starting from one of the two endpoints of the path. We will refer to these orderings as the two natural numberings of the vertices of the path.
Proposition 5. Let K be the concentration matrix of X_V. If K is adapted to the graph G = (V, E), then for every path π = ⟨v_1, …, v_k⟩ between v_1 and v_k in G the weight ω(π, V) factorizes as stated in (13). Proof. The result follows from the application of Theorem 2 to the definition of ω(π, V) in (6). ◻ We illustrate the application of Proposition 5 with an example.
Example 3
The covariance matrix given in Example 1 can be inverted to obtain a concentration matrix that is adapted to the graph depicted in Fig. 1. The path π_xy = ⟨x, 1, 2, y⟩ has inflated correlation weight equal to ω(π_xy, V) = 0.09 and, if we apply Proposition 5 with respect to the natural vertex numbering starting from the endpoint x, we can associate to every vertex of the path an inflation factor and to every edge a partial correlation, where we write ρ_{uv|rest} to denote the partial correlation between X_u and X_v given all the remaining variables X_{V∖{u,v}}.
The factorization of ω(π, V) in (13) can be carried out with respect to either of the two natural numberings of the vertices of the path.
Example 3 (Continued)
An alternative decomposition of the weight ω(π_xy, V) for π_xy = ⟨x, 1, 2, y⟩ can be obtained from the natural ordering of the vertices of the path starting from the endpoint y.
The possible choice of different vertex numberings may be an advantage. For instance, as shown below, the comparison of the two paths π_xy and π_xz in (16) becomes straightforward if one considers for both weights the natural numbering starting from the endpoint x of the two paths. On the other hand, the paths we consider are undirected, and it is desirable to have a decomposition of path weights that is symmetric with respect to the two endpoints of the path. To this aim, for a path π_xy we consider the two natural orderings of its vertices and denote by pr_x(v) and pr_y(v) the predecessor of v ∈ V with respect to the numbering starting from x and from y, respectively. Hence, in (14) we introduce an inflation factor computed as the geometric mean of the corresponding inflation factors in the two natural numberings of the vertices, and we will simply write IF_⟨v⟩ when it is clear from the context which path we are referring to. We can now state the main result of this section.
Theorem 6. Let K be the concentration matrix of X_V. If K is adapted to the graph G = (V, E), then for every path π = ⟨v_1, …, v_k⟩ between v_1 and v_k in G the weight ω(π, V) admits the symmetric factorization stated in (15). Proof. The result follows from Proposition 5 by taking the geometric mean of the two factorizations associated with the two natural numberings of the vertices of π. ◻ The decomposition of ω(π, V) in (15) is uniquely associated with a path and can effectively capture the role played by the building blocks of the path, as shown in the example below.
Unlike each of the two decompositions obtained from the two natural numberings of the vertices, this decomposition shows that the variables X_x, X_2 and X_y play a similar role in the path. On the other hand, the smallest inflation factor is associated with the vertex 1 and, interestingly, this is the only vertex in the path that is not linked with any vertex outside the path.
In graphical modelling the distinction between directed and undirected edges is important. A directed edge indicates the direction of dependence of a response on an explanatory variable. In a directed path every intermediate vertex is at the same time a response for the previous variables and explanatory for the following variables. Thus, for any directed graph there exists a natural ordering of the variables that can be exploited to obtain a recursive factorization of the probability distribution. In turn, the terms of such a factorization can be used to assess the contribution of each of the elementary units forming the path. On the other hand, undirected edges represent symmetric relationships whose interpretation is less straightforward, possibly resulting from a feedback relationship (Lauritzen and Richardson 2002). Thus, when investigating the interpretation of a path weight, the two endpoints of the undirected path need to be put on an equal footing. The decomposition given in Theorem 6 satisfies this requirement because it is obtained from the geometric mean of the two alternative decompositions of the same weight with respect to the two natural orderings of the vertices. From this viewpoint, Proposition 5 could have been stated as a lemma preliminary to Theorem 6. However, we deem that Proposition 5 has its own interest because it can be readily applied to the comparison of paths. Consider, for instance, the case where we have a path π_xy = ⟨x, …, y⟩ and π_xz = ⟨π_xy, z⟩ = ⟨x, …, y, z⟩, so that π_xz is exactly one edge longer than π_xy.
Then we can compute the ratio of the two relevant weights, thereby obtaining (17). The path π_xz has one edge and one vertex more than π_xy, and the contribution of these additional components can be quantified as the product of the partial correlation associated with the additional edge and the inflation factor associated with the additional vertex. Although the role played by the partial correlation in (17) is somewhat intuitive, because in concentration graph models partial correlations are naturally associated with edges, the role played by the inflation factor is more subtle. The relevance of a path within a network also depends on how its vertices interact with the rest of the network. The inflation factor in (17) quantifies the contribution of the additional variable X_z to the interaction with the rest of the network. This quantity is computed after the variables are adjusted with respect to X_{V(π_xy)}, so that it gives the "additional" contribution of z with respect to the vertices already present in V(π_xy). In fact, if the additional vertex z is connected with vertices forming the path but with no other vertex outside V(π_xy), then IF_{z|V(π_xy)} = 1. We close this section by remarking that the factorization in (15) is also of theoretical interest. Equation (5) shows the decomposition of the inflated correlation V_xy over the paths of G, where V_xy = ρ_xy (IF_x × IF_y)^{1/2}. It is theoretically relevant that an association measure can be decomposed into path weights which have the same type of interpretation. An inflated correlation coefficient is obtained from the product of a correlation and the geometric mean of two inflation factors. The right-hand side of (15) is consistent with this type of interpretation because its elements are (partial) correlations and quantities obtained as geometric means of two inflation factors.
Application to the construction of betweenness centrality measures
In this section we apply Theorem 6 to the construction of centrality measures and describe their use in the analysis of a network representing the eating behaviour of a group of subjects. Undirected graphs can effectively be used to model the structure of complex systems and, in many applied contexts, the association network is expected to be very heterogeneous, with some vertices and edges being more important than others in some sense. This importance can be referred to as network centrality and it is typically quantified by means of centrality measures; see Rodrigues (2019). Centrality is one of the most fundamental metrics in network science, but there is no general definition of centrality and a wide range of centrality measures focusing on different features of the network are available. One of the most prominent measures of centrality, called betweenness centrality, relies on the idea that information flows along paths. The most widely used betweenness measure is due to Freeman (1977) and is based on the idea that a vertex has a high betweenness centrality if a large number of shortest paths crosses it. Accordingly, the betweenness of a vertex is computed by summing up the fractions of shortest paths between every pair of vertices that pass through it.
The choice to focus on shortest paths was motivated in the context of social network analysis. In other fields of application, however, the assumption that information flows only along shortest paths is not justified. This has led to the introduction of alternative betweenness centrality measures in which all paths contribute, possibly with different values, to the computation (Freeman et al. 1991; Newman 2005). More specifically, one can use different criteria to quantify the relevance of a path to the centrality of a vertex, and this results in different betweenness measures. From this perspective, we consider the comprehensive way of computing the betweenness of a vertex v ∈ V given in (18), where B_xy(v) is a measure of the betweenness of vertex v relative to vertices x and y, x ≠ y, based on the chosen criterion. Although centrality is most commonly computed for vertices, edge centrality is also of interest; see Girvan and Newman (2002), Bröhl and Lehnertz (2019) and references therein. Hence, similarly to (18), an analogous edge betweenness is defined in (19); see also Peeters et al. (2020) and Roverato and Castelo (2020). However, the construction of centrality measures specifically designed to suit the graphical model framework is a recent, and largely unexplored, area of research. In the following, we consider three different types of vertex/edge betweenness centrality. The first type is based exclusively on the graph structure and is therefore not specific to the graphical model field. The second and third types are specific to concentration graph models and are based on the theory of path weights and on the weight decomposition given in Sect. 4, respectively.
We refer to the first centrality measure with the name basic because it differs from that of Freeman et al. (1991) only in the fact that it is computed using all paths rather than shortest paths. More formally, it is denoted by B(⋅) and it is obtained by applying indicator-based criteria in (18) and (19), respectively. Here, I_v(π) denotes the indicator function that takes value one if v ∈ V(π) and zero otherwise; similarly, I_{{u,v}}(π) = 1 if {u, v} ∈ E(π) and zero otherwise.
We now turn to the specific case of concentration graph models. This is done by keeping in mind the meaning and role that paths play in these models, and we deem that the theory of path weights provides a natural framework to address this issue. Consider the criterion, applied in (21) and (22), in which every path contributes to the computation with its absolute inflated correlation weight. The vertex betweenness centrality based on (21) was first introduced by Roverato and Castelo (2020), whereas in (22) we use the same criterion to introduce a novel edge betweenness centrality based on path weights. Note that, if all the paths between x and y have the same sign, then B_xy(⋅) can be interpreted as the proportion of the inflated correlation coefficient between X_x and X_y due to the paths involving the relevant vertex/edge. It is also worth remarking that, in fact, B_xy(v) can be equally interpreted as a proportion of covariance or correlation. Hereafter, we will refer to B(⋅) as the weight betweenness. The criterion applied in (21) and (22) is, perhaps, the most straightforward way to apply the theory of path weights in the computation of betweenness centralities. A more subtle way may be obtained by considering the factorization in Theorem 6 and assigning to every path a value reflecting the role played by the relevant vertex/edge in the determination of the path weight. More specifically, we define a vertex criterion in which IF_⟨v,π_xy⟩, given in (14), represents the contribution of vertex v to the path, with IF_⟨v,π_xy⟩ ≥ 1 "inflating" the weight ω(π, V) by a factor equal to (IF_⟨v,π_xy⟩ − 1). Similarly, from Theorem 6, the contribution of an edge {u, v} to the weight of a path between x and y may be quantified by (IF_⟨u,π_xy⟩ |ρ_{uv|rest}| IF_⟨v,π_xy⟩ − 1), thereby giving the corresponding edge criterion. We will refer to B(⋅) as the inflation betweenness.
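A generic sketch of a path-weight-based vertex betweenness in the spirit of (21) is given below. Both the path enumeration and the weight function are taken as inputs (the weight function stands for whichever criterion is adopted, for instance absolute inflated correlation weights); excluding the endpoints from the credit is a convention adopted here, since the exact formula is not reproduced above, so this illustrates the bookkeeping rather than the paper's exact expressions.

```python
from itertools import combinations

def weight_betweenness(vertices, all_paths_between, path_weight):
    """Vertex betweenness where each path contributes |path_weight(path)|,
    normalized within each pair of endpoints (x, y)."""
    score = {v: 0.0 for v in vertices}
    for x, y in combinations(vertices, 2):
        paths = all_paths_between(x, y)
        total = sum(abs(path_weight(p)) for p in paths)
        if total == 0:
            continue
        for p in paths:
            share = abs(path_weight(p)) / total
            for v in p:
                if v not in (x, y):      # endpoints are not credited for their own paths
                    score[v] += share
    return score
```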
The rest of this section is devoted to an application where we compare the behaviour of the three types of centrality measures on a food network. Hoang et al. (2020) applied concentration graph models to learn the networks describing the eating behaviour of some distinct groups of subjects. Here, we focus on the network, given in Fig. 2, which represents the main dietary pattern for the group of men. Every vertex is associated with a food group whereas edges show how food groups are consumed in relation to each other. This graph was obtained in Hoang et al. (2020) by applying graphical lasso (Friedman et al. 2008) to a sample of 3769 subjects, and the estimates of the nonzero partial correlations can be found in Table 4. This sample is part of a larger dataset from a cross-sectional study carried out in South Korea between 2007 and 2019.
In the analysis of dietary patterns it is of interest to identify food groups that play a central role in the eating behaviour (Iqbal et al. 2016; Schwedhelm et al. 2018). For the concentration graph model of Fig. 2 we computed the centrality values of the vertices according to the three criteria given above. More specifically, because betweenness centralities scale with the number of pairs of vertices, it is common practice to normalize them as (B − B_min)/(B_max − B_min), where B_min and B_max are the minimal and maximal values of B(⋅), respectively. The normalized vertex centralities are given in Table 3. It is of interest to compare the generic basic centrality with the two specific weight and inflation centralities. To this aim we look at the correlation coefficient between every pair of measures, which turns out to be always positive, ranging from 0.79 to 0.91. Hence, from this viewpoint the three measures provide similar results. There are, however, also some differences of interest. Both the basic and the weight centrality identify light-color vegetables as the most central vertex, whereas the inflation centrality puts this vertex in second position, behind condiment and seasoning. We can, somewhat informally, say that light-color vegetables is a central vertex because it contributes to the computation of a high proportion of the correlation of other variables, whereas condiment and seasoning is a central vertex because of the number of paths it belongs to and the relevant contribution it gives to the weight of such paths. Furthermore, the basic betweenness identifies a cluster of 4 vertices with high centrality values, whereas both the weight and inflation centralities restrict the set of highly central vertices to two elements, thereby highlighting the relevance of these two vertices in the network.
We turn now to edge betweenness, whose values are given in Table 4. Unlike vertex centrality, there are important differences in this case. Indeed, the most central edge according to basic centrality is the least central edge according to the other two types of edge centrality. More generally, basic centrality is negatively correlated with each of the other two centralities. In concentration graph models, an edge is not present in the graph if its partial correlation is equal to zero, and the absolute values of partial correlations are often regarded as a measure of edge relevance. Partial correlations enter the computation of path weights, whereas they play no role in the computation of the basic centrality. More specifically, we note that the correlation between the values of the basic centrality measure and the estimated partial correlations is equal to −0.11, and therefore negative. We also note that, for the most central edge according to the basic centrality, that is the edge joining tubers and roots with other seafood, the associated partial correlation is one of those with smallest value. Hence, from this perspective, the basic edge centrality measure does not seem to properly suit the graphical model framework. As expected, both the weight and the inflation centrality have a positive correlation with the estimated partial correlations. On the other hand, the partial correlation is only one of the determinants of these centrality values and, interestingly, the two most central edges according to both the weight and the inflation centralities are the edges joining light colored vegetables with mushrooms and seaweeds, respectively, and the removal of either of these edges would make the graph disconnected. The results provided by the weight and the inflation edge centralities are similar, but not identical, and the correlation between the values of these two measures is equal to 0.7. When comparing the three edge centralities it is interesting to notice that the five most central edges according to the inflation centrality all have one endpoint equal to light colored vegetables. Indeed, the inflation centrality clearly identifies all the edges starting from light colored vegetables as highly central, so as to pinpoint the relevance of the hub associated with this vertex. On the other hand, the basic edge centrality ranks the edges of this hub in its lowest positions, thereby regarding this structural component of the network as non-central. This seems to be in contradiction with the basic vertex centrality, which identifies light colored vegetables as the most central vertex. Finally, the information provided by the weight edge centrality with respect to this hub is more ambiguous, giving high centrality values to some edges but low values to others.
We close this section by noticing that, potentially, there are exponentially many paths between two vertices of a graph and therefore, for large graphs, the computation of centrality measures that requires the identification of all paths may be computationally unfeasible. The weight and inflation centrality measures introduced in this section seem to give comparable results; however, inflation centrality has the advantage that it is computationally less demanding because its computation does not involve all the paths between two vertices, but only those involving the vertex of interest.
Discussion
In recent years there has been growing interest in how to use and interpret the properties of networks, such as the identification of relevant edges and paths, the computation of centrality measures, and the identification of communities. Of special interest is the investigation of methods especially suited for graphical models, where the structure of the graph encodes the independence structure of the variables. The theory developed in this paper goes in this direction. Paths play a central role in undirected graphical models and are the key structures to be used in the identification, for instance, of relevant patterns and of vertices which may be regarded as central. It is therefore important to meaningfully associate weights with the paths of a graph, which may then be used in the computation of summary measures, such as betweenness centrality measures, and in the comparison of relevant patterns.
In the examples considered in this paper there seems to be a relationship between weight and path length, in the sense that the shorter the path the larger the path weight. This is due to the role played by partial correlations. As shown in (17), if we start from a path and add one edge to it, then the original weight is updated by multiplying it by two factors: (i) an inflation factor that makes the weight value larger because it is greater than one, (ii) a partial correlation that makes the weight value smaller because it belongs to the interval (−1, 1) . In the examples we consider, the partial correlation component of the update has always a stronger effect and thus longer paths tend to have smaller weight. A formal analysis of this behavior is an interesting direction of future research so as to clarify to what extent, in large graphs, one could discard large paths and restrict the attention to smaller ones.
The family of undirected graph models and the family of models for directed acyclic graphs (DAGs) have some elements in common. More specifically, there exists a one-to-one relationship between the family of models for undirected decomposable graphs and the family of models for perfect DAGs. In DAGs the relevance of a path is quantified by the theory of path analysis and a second future research direction involve the comparison of the theory of path weights in models which belong to both families. | 10,883.6 | 2021-09-20T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Experimental Implementation of A Quantum Zero-Knowledge Proof for User Authentication
A new interactive quantum zero-knowledge protocol for identity authentication implementable in currently available quantum cryptographic devices is proposed and demonstrated. The protocol design involves a verifier and a prover knowing a pre-shared secret, and the acceptance or rejection of the proof is determined by the quantum bit error rate. It has been implemented in modified Quantum Key Distribution devices executing two fundamental cases. In the first case, all players are honest, while in the second case, one of the users is a malicious player. We demonstrate an increase of the quantum bit error rate of around 25% in the latter case compared to the honest case. The protocol has also been validated for distances from a back-to-back setup to more than 60 km between verifier and prover. The security and robustness of the protocol have been analysed, demonstrating its completeness, soundness and zero-knowledge properties.
Introduction
Zero Knowledge Proofs (ZKP) [1] are cryptographic mechanisms in which a user (prover) has to prove to another user (verifier) that the former knows a secret, without revealing the secret itself or any information about it. Zero-knowledge proofs provide a powerful tool for enhancing online privacy and security in various domains. Depending on the scope of application, both the verifier and the prover may be aware of the secret before carrying out the proof or, on the contrary, only the prover may know the secret. Depending on the initial setup and the specific needs of the system, a large number of use cases with ZKP applicability can be defined, among which stand out authentication systems to prove the identity of an entity or person [2]; privacy-preserving payments to verify that one party has sufficient funds to make a payment, without revealing the actual balance or transaction history [3]; and access control, where users can prove that they have appropriate access to a system or resource without revealing any additional information [4]. The concept of ZKP was first introduced in 1985 by S. Goldwasser, S. Micali, and C. Rackoff [1], showing that certain types of problems, such as graph isomorphism, could be proven without revealing any additional information beyond the truth of the statement. Since then, ZKP have advanced rapidly, with new techniques, protocols, and applications being developed and refined, becoming an important tool in cryptography. As technologies mature, ZKP are expected to play an increasingly important role in enhancing privacy and security. This type of protocol can be conducted as an interactive ZKP [1], that is, one in which both the prover and the verifier are required to be present simultaneously during the execution of the proof, as in the protocol proposed by Fiat-Shamir [2]. Non-interactive ZKP can also be implemented [5], as in the case of the Zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) [6], where the verifier can launch the proof when the prover is absent and resolve it later. The migration of these concepts to the world of quantum communications is of special interest for use cases such as the authentication of several users with access to the same quantum node within a quantum communication infrastructure (QCI). It is important to highlight that the user-oriented authentication proposed in this work should not be confused with the authentication of the classical communication channel in the QCI. The latter could solve the authentication of the classical channels used during communications, but there is still a need to guarantee the identity of the end user who is on the other side of the screen, in such a way that their data remains private. The previous issue is addressed in this work. An example could be the use of the same computer by several doctors in a hospital to upload their patient information into the health system; when accessing the health system, each of the doctors must be authenticated. Within the field of quantum cryptography, Quantum Key Distribution (QKD) provides secret symmetric keys between two remote parties thanks to the fundamental laws of quantum mechanics. The most widely studied and tested QKD protocol is BB84 [7]. Recently, other quantum-based cryptographic techniques have been explored, such as quantum digital signatures [8,9] or oblivious transfer [10], among others. This work aims to adapt the concepts of classical zero-knowledge proofs to the field of
quantum cryptography, and proposes a new quantum zero-knowledge proof (QZKP). Currently there are not many studies on QZKP in the literature; however, some proposals and approaches, mainly aimed at increasing the efficiency of QKD devices, have turned out to be of great interest for the design of the QZKP. Specifically, in 2005 the floating-bases protocol was published [11,12], proposing an increase in the number of possible bases to be used in QKD protocols in order to achieve a more efficient system, simultaneously increasing the threshold of the allowed error rate and reducing the information that can be extracted by Eve. To carry out this scheme, Bob and Alice are required to have a pre-shared secret key on which the selection of the bases will depend. Another strategy to improve the efficiency of QKD devices is the one in [13], where the authors propose a decoy-state protocol for QKD characterized by a biased basis selection, where signal states are always encoded in basis Z, while decoy signals can be randomly encoded in basis X or Z with a pre-determined probability. More recently, in 2018, modifications of the BB84 protocol were proposed through the use of pseudo-random states generated from a pre-shared secret key [14], in order to achieve higher key rates. The main drawback of that scheme is the strict requirement of a perfect single-photon source. The use of pre-shared keys and the pseudorandom selection of the quantum states are the main concepts applied in the design of the proposed QZKP. In this paper, we propose and implement a new interactive QZKP where both the prover and the verifier possess a shared secret in advance. The proof is based on purely quantum mechanisms and has been implemented and experimentally tested on quantum cryptographic devices. The paper is organized as follows: firstly, the design of the new QZKP is presented in Section 2; then, the security of the protocol is analyzed in Section 3. Finally, the experimental setup and the outcomes are described in Section 4.
Quantum zero-knowledge proof
In this paper, we propose an interactive quantum zero-knowledge proof (QZKP), where both the verifier (Alice) and the prover (Bob) must pre-share a secret s to correctly validate the proof. In particular, Bob uses the QZKP with Alice to authenticate himself, as detailed in Figure 1; this is always the case whenever a QKD channel has been established beforehand. The proof is divided into three stages: A. Pre-processing stage, when all the information and setup needed to carry out the QZKP are prepared. The procedures performed in this stage are purely classical and correspond to steps 1 and 2 of Figure 1. B. Quantum stage, in which the generation, transmission and measurement of the quantum states are carried out, allowing a raw bit string to be created at both the transmitter and receiver ends. In this case, quantum processes take place, corresponding to steps 3, 4 and 5 of Figure 1. C. Verification stage, in which the validity of the proof is determined through an estimation of the error rate. This evaluation uses classical tools and corresponds to steps 6, 7 and 8 of Figure 1.
The pre-processing stage starts with a handshake between Alice and Bob, at the end of which an identical timestamp t0 is generated at both sites. After that, given a secret s of any length, Alice and Bob apply a Key Derivation Function (KDF) [15] that derives h1 and h2 from t0 and s as inputs. This step is carried out by both Alice and Bob simultaneously. The h1 and h2 resulting from this operation on Alice's side satisfy h1 ∈ {0, 1}^m and h2 ∈ {0, 1}^n, where the lengths m and n, with n < m, are values to be defined by the players before executing the protocol. The process is performed equivalently on Bob's side, where h′1 ∈ {0, 1}^m and h′2 ∈ {0, 1}^n are computed. In a scenario where both Alice and Bob are honest players s = s′, otherwise s ≠ s′. More details about the KDF are given in Section 3.2.
Once {h1, h2} and {h′1, h′2} have been calculated, the second phase of the protocol proceeds to generate the quantum bit string. Unlike a conventional QKD protocol, where the bases and states are randomly selected, here the bases on Alice's side are determined by the bit values of h1, while a random selection of states within each basis is performed. Then, Alice sends the stream of quantum signals to Bob, who receives and measures them following the bases determined by the bit values of h′1. When Bob really knows the secret and Alice is an honest verifier, h1 = h′1. Therefore, when preparing and measuring the quantum states, both will obtain an almost identical bit string, ∆^r_a on Alice's side and ∆^r_b on Bob's, where the superindex r indicates that these are raw strings without any post-processing. The obtained bit string is only almost identical because, despite preparing and measuring the states in the same bases, losses occur during transmission, even in the absence of any malicious manipulation. In an ideal setup, characterized by a perfect extinction ratio at the receiver, with no errors in the transmission or in the devices, and in the absence of eavesdroppers intercepting the communications, the bit string detected by Bob would be identical to the one generated by Alice.
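As a rough illustration of this stage, the sketch below simulates the basis logic classically, ignoring losses, decoy states and every optical detail; the function name, the 0→Z / 1→X mapping and the string length are assumptions made only for the example, not the authors' implementation.

```python
import secrets

def quantum_stage(h1: str, h1_prime: str):
    """Toy classical simulation of the QZKP quantum stage (no losses, no noise).

    Alice's bases are fixed by the bits of h1 (0 -> Z, 1 -> X) and her bit
    values are chosen at random; Bob measures in the bases given by h1'.
    When the bases differ, the measurement outcome is modelled as random.
    """
    delta_a, delta_b = [], []
    for basis_a, basis_b in zip(h1, h1_prime):
        bit_a = secrets.randbelow(2)          # random state within Alice's basis
        if basis_a == basis_b:                # matching bases: outcome preserved
            bit_b = bit_a
        else:                                 # mismatched bases: random outcome
            bit_b = secrets.randbelow(2)
        delta_a.append(bit_a)
        delta_b.append(bit_b)
    return delta_a, delta_b

# Honest prover: h1' == h1, so the raw strings agree bit for bit.
h1 = ''.join(str(secrets.randbelow(2)) for _ in range(1000))
da, db = quantum_stage(h1, h1)
print(sum(a != b for a, b in zip(da, db)) / len(da))   # 0.0 in this idealized model
```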
Once the transmission and measurement of the quantum states is concluded, the verification stage begins with a partial sifting process. This process differs from the standard procedure employed in QKD. In the QZKP, Bob does not publish the bases in which he has measured, because otherwise he would reveal the h1 value related to the secret, which violates the zero-knowledge principle. Instead, he simply announces to Alice the time instants at which he detected a single photon; then, both Alice and Bob select just these bits from their respective strings, obtaining sifted strings, ∆^s_a for Alice and ∆^s_b for Bob, without revealing any information. The superindex s indicates that these are the sifted strings.
Then, an error estimation between ∆^s_a and ∆^s_b is performed, to evaluate the quantum bit error rate (QBER). Typically in QKD, this process is done by both Alice and Bob publishing the same fragment of the key as plain text and comparing them. Thus, the number of errors obtained in the selected fragment represents an estimate of the error in the rest of the key. After this comparison, the published fragment is discarded. In the QZKP, it is not possible to directly publish a clear fragment of ∆^s_a or ∆^s_b because it would reveal information strongly related to the pre-shared secret s. Instead, the selected fragment of ∆^s_b on Bob's side, named δ_b with length n, is encrypted with h′2 by means of a One-Time Pad (OTP) procedure, whereby a bit-by-bit XOR is made between δ_b and h′2, and the result is sent to Alice, who decrypts it with h2. Finally, Alice computes the QBER estimate between δ_a and δ_b. In the QZKP, only a rough estimation of the error rate is needed. Errors are neither corrected, nor is privacy amplification performed, since a secret symmetric key is not required at the end of the process.
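The masked error estimation can likewise be sketched classically. The snippet below is an illustrative reading of the procedure described above (fragment selection, XOR with h′2, decryption with h2, bit-wise comparison), not the authors' implementation; the 15% fraction follows the value used later in Section 4.

```python
import secrets

def estimate_qber(delta_a, delta_b, h2_prime, h2, frac=0.15):
    """Toy sketch of the QZKP error estimation (illustrative only).

    Bob one-time-pads a fragment of his sifted string with h2' (bitwise XOR),
    Alice decrypts it with h2 and compares it with her own fragment.
    """
    n = min(len(h2), int(len(delta_b) * frac))
    frag_b = delta_b[:n]
    cipher = [b ^ k for b, k in zip(frag_b, h2_prime)]   # Bob: delta_b XOR h2'
    recovered = [c ^ k for c, k in zip(cipher, h2)]      # Alice: cipher XOR h2
    errors = sum(a != r for a, r in zip(delta_a[:n], recovered))
    return errors / n

# If Alice and Bob share the same secret (h2 == h2'), decryption is exact and
# the estimate reflects only channel/device errors (zero in this toy data).
h2 = [secrets.randbelow(2) for _ in range(300)]
print(estimate_qber([0, 1] * 1000, [0, 1] * 1000, h2, h2))   # 0.0
```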
Finally, once the QBER has been estimated, the validity of the proof is verified: if the QBER exceeds a certain predefined verification threshold T_v, the proof gives a negative result; on the other hand, if QBER < T_v, the proof is positive, proving the identity of Bob. The QZKP must be performed iteratively N times to guarantee a correct statistical estimate of the QBER.
Security proof
Security assumptions
A set of security assumptions must be taken into account during the execution of the protocol.First of all, in the different security analysis of the QKD BB84 protocol [16,17], a set of assumptions on the adversary are considered that apply equally to the QZKP.Specifically, it is considered that: 1. any adversary (external or participant) has unlimited computational power, even with access to a quantum computer; 2. the quantum channel is considered untrusted; 3. an external adversary is able to eavesdrop the communication on the classic channel but not to inject messages or modify the content of the information since the channel is assumed to be authenticated.
Moreover, a security perimeter for both Alice's and Bob's nodes must be guaranteed, in order to avoid any unauthorized physical access to the hardware; likewise, appropriate cybersecurity measures are needed to ensure that no side-channel attacks can be performed on either the classical or the quantum channel.
Key-Derivation Function details
KDFs are basic and essential components of current cryptographic systems. Their goal is to take some source of initial keying material and derive from it one or more cryptographically strong secret keys. Two types of KDF are defined, according to the standard NIST SP800-56C (r2) [15]: One-Step Key Derivation, in which the cryptographic material is derived from a series of inputs and a secret value; and Two-Step Key Derivation, in which a transformation of the secret is applied prior to the derivation.
In the QZKP proposed here, a Two-Step Key Derivation function in counter mode is recommended [18]. The general structure has two main phases: 1. Extract phase: the keying material (s) and a salt value (t0) are taken as input and a fixed-length pseudorandom key K_IN is extracted. 2. Expand phase: the pseudorandom key K_IN is expanded into several additional pseudorandom keys (h1, h2).
Additional input values in the second phase are a label and a context, which are fixed values, and the required output length (m + n). It is worth noting that, even though both sources of entropy (h1, h2) are directly derived from the secret s, the actual encryption δ_a ⊕ h_2 involves two independent elements, since δ_a, even though derived from s, is actually built as a random string of bits and is therefore independent of h_2.
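As a minimal sketch of such a two-step derivation, the snippet below uses an HKDF-style extract-and-expand construction built from HMAC-SHA256; the counter-mode KDF recommended in [18] differs in its details, and the secret, label and lengths used here are purely illustrative assumptions (the m and n values echo those reported in Table 1).

```python
import hashlib, hmac, time

def extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract phase: condense the secret and salt into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand phase: stretch the pseudorandom key to the requested length."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# Illustrative derivation of h1 and h2 from a shared secret s and timestamp t0.
s = b"pre-shared secret"                      # hypothetical secret
t0 = str(int(time.time())).encode()           # timestamp acting as salt
m, n = 2048, 307                              # example lengths in bits
prk = extract(t0, s)
material = expand(prk, b"QZKP", (m + n) // 8 + 1)
h1 = material[: m // 8]                       # bases string (m bits)
h2 = material[m // 8 : m // 8 + n // 8 + 1]   # OTP key for error estimation (~n bits)
```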
QZKP security analysis
A ZKP has to guarantee completeness, soundness and zero-knowledge. In the following, the security of the proposed QZKP is demonstrated.
Completeness can be described as follows: given that both the verifier and the prover are honest and both know the secret, the prover is able to convince the verifier that he does indeed know the secret without revealing it. In the absence of malicious actors, since both parties are aware of the secret, they will obtain the same basis configuration for state preparation (Alice) and measurement (Bob). Therefore, in an ideal scenario without photon losses and electronic noise, the results ∆^r_a and ∆^r_b will be perfectly equivalent, the estimated error will be QBER = 0, and Bob can fully convince Alice of his knowledge of the secret. However, as we will see in Section 4, in a realistic implementation, transmission losses and additional sources of noise can be present, raising the measured error rate to QBER ≠ 0.
Assuming that the verifier is honest but the prover is malicious and unaware of the secret, it must be ensured that a dishonest prover is not able to convince the verifier that he knows the secret, except with a negligible probability; this establishes the soundness of the proof. These malicious attempts are reflected in the QBER measured during the execution of the protocol. Following the analysis by H.-K. Lo [19], the QBER is given by Eq. (1): QBER = (p_A,Z · e^Z_B + p_A,X · e^X_B) / (p_A,Z + p_A,X), where 0 < r ≤ 1/2 is a variable parameter which depends on the value of the bits in h1; p_A,Z = (1 − r)p_µ and p_A,X = r·p_µ are Alice's probabilities of preparing the states in each basis; e^Z_B = p^X_B/2 = (1 − p^Z_B)/2 is the error rate for the case when Alice prepares the state in the Z basis and Bob measures in the X basis; and e^X_B = p^Z_B/2 is the error rate for the case when Alice prepares the state in the X basis and Bob measures in the Z basis. To try to cheat Alice, Bob can carry out the following strategy. Given p^Z_B = 0.5 and r = 1/2, that is, 50% of the signal states encoded in the Z basis and 50% in the X basis, since Bob does not know the value of h1 he will measure the signals randomly. In this way he will guess the selected basis correctly 50% of the time, and in the remaining 50% he will obtain an uncorrelated result but will still guess the value of the resulting bit correctly half of the time. In total, he will get 75% of the measurements correct, but without knowing which elements are wrong and which are correct. This strategy raises the QBER to 25% without taking physical errors into account. Therefore, for T_v < 25% the proof would give a negative result, proving the soundness of the QZKP. This analysis agrees with what is obtained in Eq. (1) by introducing the parameters.
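The 25% figure can be checked with a few lines of arithmetic using the error-rate definitions quoted above; the weighting below follows those definitions and should be read as an illustrative check rather than a restatement of Eq. (1).

```python
# Expected QBER for a prover who measures in random bases,
# using the definitions quoted in the text (r = 1/2, p_Z_B = 0.5).
r = 0.5                     # fraction of Alice's signals prepared in the X basis
p_Z_B = 0.5                 # probability that Bob measures in the Z basis
e_Z_B = (1 - p_Z_B) / 2     # error rate when Alice prepares in Z
e_X_B = p_Z_B / 2           # error rate when Alice prepares in X
qber = (1 - r) * e_Z_B + r * e_X_B
print(qber)                 # 0.25, i.e. the 25% error floor of a dishonest prover
```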
Finally, if the prover is honest and the verifier is a malicious player, the latter learns nothing from the proof, demonstrating zero-knowledge. To this end, note that Alice could try a strategy similar to the previous scenario, preparing the quantum states using random bases, since she does not know the secret s. Alice can also try to extract information from the string fragment δ_b during the error estimation process. However, she does not know the value of h′2, and without this value it is not possible to correctly decrypt the OTP. If she tries to guess the value of each bit of c′, the probability of successfully guessing all the elements is P_guess = 1/2^n. In both cases, the obtained QBER will behave similarly to before.
Experimental system
The protocol described in Section 2 has been implemented experimentally exploiting a pair of discrete-variable (DV) quantum cryptographic devices, already tested for standard QKD transmission also in a deployed network in coexistence with classical channels [20].A schematic of the transmitter and receiver is shown in Figure 2. The DV-QKD prototypes are based on the implementation of the standard BB84 protocol with polarization encoding and decoy-state method [21].Alice and Bob exploit a fully-automatic synchronized architecture, thanks to using two different distributed-feedback (DFB) lasers with identical nominal wavelength as sources for the quantum channel and of the auxiliary channel [22].The quantum signal is composed by weak optical pulses with 20 ns time duration and 1 kHz repetition rate.The selected wavelength is 1310 nm, useful to avoid Spontaneous Raman scattering photons generated by co-propagating classical sources in the C-band.The decoy-state is implemented as signal (µ), weak decoy-state (ν) and vacuum states (0), each of them characterized by a pre-determined probability of occurrence, p µ , p ν , p 0 , respectively.The measured losses of the receiver module are about 5 dB.The proposed scheme has been implemented firstly in a back-toback (B2B) scenario, considering two cases: 1) all actors are honest and 2) the prover is a malicious user who does not know the secret and randomly measures the quantum states received from the verifier.
After the validation of the QZKP in these first short-distance experiments, the distance between Alice and Bob was increased in order to evaluate the impact on the QBER in the honest condition, ensuring that a relevant number of false positives or false negatives does not occur. For this aim, the QZKP has been tested over a point-to-point standard single-mode fiber (SSMF) link and its performance has been measured, in order to estimate the impact of losses on the QZKP solution. The different distances were emulated by inserting optical attenuation in a controlled manner. All the intermediate elements were previously characterized to determine the initial losses introduced by the setup, amounting to a total of 2.5 dB. Taking into account that the losses in a standard optical fiber are 0.21 dB/km, the setup establishes an emulated initial distance between the devices of 11.9 km. Thus, the evaluated propagation distances range from 11.9 km to 60.6 km, covering a link attenuation from 2.5 dB to around 13 dB.
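The correspondence between inserted attenuation and emulated fibre length is a simple division by the per-kilometre loss; the short snippet below merely reproduces the figures quoted in the text.

```python
FIBER_LOSS_DB_PER_KM = 0.21   # attenuation of standard single-mode fibre (as quoted)
SETUP_LOSS_DB = 2.5           # fixed losses of the intermediate elements

def emulated_distance_km(total_loss_db: float) -> float:
    """Convert a total link attenuation into an equivalent fibre length."""
    return total_loss_db / FIBER_LOSS_DB_PER_KM

print(round(emulated_distance_km(SETUP_LOSS_DB), 1))   # 11.9 km (initial setup)
print(round(emulated_distance_km(12.73), 1))           # ~60.6 km at around 13 dB
```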
Parameter settings
In all the experiments carried out for the honest case, the protocol parameters were kept constant except for the length of the sifted string ∆^s_{a,b}, named L_∆, which was varied to cover string lengths between 256 bits and 2048 bits. For the length n of δ_{a,b}, 15% of the total ∆^s_{a,b} was used for the QBER estimation. All the values of the parameter settings are collected in Table 1. The same approach was applied for the execution of the dishonest case but, in this case, the protocol was modified on Bob's side to perform random measurements, following the assumption that he does not know the secret, as explained in Section 3. For each experiment, Table 1 reports the emulated distance in km (B2B being the back-to-back configuration); the losses in dB; the length L_∆ of ∆^s_{a,b}; the number of iterations for which the QZKP has been executed; the average time the system takes to generate 1 bit; the average QBER estimate; and the standard deviation of the QBER. As can be seen in Figure 3, the time needed to generate 1 bit shows a logarithmic behaviour as the losses increase in the honest case.
Analysis of results
Comparison between honest and dishonest cases.For each case, the QZKP procedure has been run for more than 170 iterations, as shown in Table 1.The outcomes obtained for the average estimated QBER and the standard deviation of the results are shown in Table 1 and Figure 4.
When the verifier and prover are honest (blue stars), the QBER is far below the standard security threshold value of 11% [16]. In particular, the measured average QBER shows an error floor of 2.9%, owing to the non-idealities of the system, such as the finite polarization extinction ratio (ER) of the polarization beam splitter (PBS), limited to 20 dB, and the presence of dark counts in the two single-photon avalanche detectors (SPADs) employed. On the other hand, in the presence of a dishonest prover (red stars), the QBER increases up to 26.6%, exceeding the 11% security limit of the BB84 protocol. The QBER performance is stable in time over all 170 iterations; some fluctuations are visible both for honest users and for a dishonest prover, owing to the limited number of bits used for the estimation of the QBER, which is 15% of m; for a raw key length of m = 2048 bits, the estimation is performed on n = 307 bits. The standard deviations for the two considered configurations are 0.7% and 1.5%, respectively. The stronger fluctuation in the dishonest case comes from the limited number of detections and from the statistical behavior of the prover, who measures randomly with equal probability for each basis, while the bases sent by Alice are fixed by h1 (a fraction r of them in X), with p_µ the signal probability and m the length of h1. In any case, the presence of the fluctuation does not introduce any false positive or false negative condition, permitting the user authentication to be completed in all iterations.
Results over the distance.The measured QBER performance in function of the experimented additional losses is reported in Figure 5 and in Table 1.
As before, several acquisitions have been measured for each propagation length, corresponding to several executions of the QZKP. In Figure 5, the average values of the QBER together with their standard deviations are shown. The minimum attenuation at 0 dB corresponds to the already described B2B scenario. As can be seen, as the link losses increase, the QBER increases slightly, although it always remains far below the security threshold of 11%. As already explained, a minimum error rate close to 3% is present, owing to the limited polarization ER generated by the intrinsic properties of the optical devices and by unavoidable misalignments arising before the PBS. An improvement in the optical components and in the polarization alignment would reduce the QBER to values of less than 1% in the B2B scenario, which would allow the gradual increase in QBER with attenuation to be appreciated in greater detail. In addition, it is observed that the greater the losses, the greater the dispersion of the measured QBER. This is expected due to the limited number of bits used during the QBER estimation, which is affected by the reduction of the length of ∆_a as the attenuation of the system increases, since the measurement time required to obtain the established ∆_a also increases. In the B2B scenario, the generation of 1 bit takes an average time of 0.033 s, with a QBER deviation of 0.7% for the full protocol execution, while for 12.7 dB the generation of 1 bit takes an average time of 0.465 s, giving a standard deviation of 1.8%. It is worth pointing out that the dishonest case has only been executed in a B2B setup, to demonstrate the impact on the QBER when a malicious prover who does not know the secret is present during the execution of the QZKP. This corresponds to the best-case scenario for the attacker, as there are no additional transmission losses due to increased distance during the proof.
Fig. 5. Measured QBER performance together with the associated standard deviations versus additional link losses in the case of honest parties. The green dashed line refers to the standard security threshold value of 11% for the BB84 protocol [16].
Comparison between real and estimated QBER.Finally, given that the QBER used to accept or reject the authentication of a user is an estimate extracted from a fragment of length n of ∆ a and ∆ b , the variation that exists between this estimate and actual value of QBER has been evaluated for the B2B setup and for raw string outcomes from the largest distances: 22.6 km to 60.6 km.As reported before, the length of the fragment used to estimate the experimented QBER is the 15% of the total segment.To obtain the real value of the QBER, each of the elements of ∆ a and ∆ b have been compared bit by bit, obtaining the results showed in Figure 6 -blue down triangles.For its part, the estimation is carried out over the values of L ∆ gathered in Table 1, obtaining the results showed in Figure 6 -red up triangles.As we can see, the difference between the estimated value and the real one is less than 1% in the best case at 4.75 dB and an underestimation of 25% in the worst case at 9.24 dB, without any negative impact in the authentication test, a behavior that remains constant for all the lengths of ∆ A,B used.
Conclusions
To benefit from the advantages provided by QKD, the design of an end-to-end secure cryptographic system is required, where the demonstration of the identity of the two communicating users is one important step to achieve this goal.For this aim, the proposed QZKP is a tool that allows the authentication of users in networks that already have a quantum communications infrastructure without disclosing any personal information about the user or his secret.Based on purely quantum processes the tool provides a quantum-safe authentication mechanism that, not only adds another layer of security to the entire ecosystem, but is also easily implementable with the technology available for QKD and more efficient because it does not require full error correction steps.Regarding the security of the protocol, the increase in the order of 25% produced in QBER has been demonstrated, both theoretically and experimentally in a back-to-back scenario, between an honest case, where QBER = (2.9 ± 0.7)%, and an attempt by a malicious prover to guess the bases associated with the derived h 1 function from the secret, where QBER = (26.6 ± 1.5)%.In addition, the proof is also valid for long distances, being demonstrated for metropolitan areas (≈ 60 km), where the increase in the QBER is appreciated as well as a greater dispersion of the data.It is worth to point out that the QZKP has not presented a false positive or false negative, thus demonstrating the robustness of the proof.The QZKP has been tested and guarantees completeness, soundness and zero-knowledge, against different strategies from a malicious player.Finally, we have demonstrated that, for lengths of 2048 bit, 512 bit and 256 bit, an error estimation using 15% of ∆ a and ∆ b , provides us with a reliable QBER value that can be used to validate or not the QZKP.
Fig. 1 .
Fig. 1.Flowchart of the quantum zero-knowledge proof between Alice and Bob.Steps 1 and 2 correspond to the pre-processing stage where the information needed for the execution of the proof is prepared.Steps 3 to 5 correspond to the quantum stage, where the quantum states are prepared, sent and measured.In Steps 6 to 8 the verification of the proof is carried out by the estimation of the quantum bit error rate (QBER).If both are honest s = s ′ , otherwise s ̸ = s ′ .KDF means Key Derivation Function; ∆ a,b are raw measurements; δ a,b are the results of the post-processing of ∆ a,b ; ENC means the encryption of δ a,b with h ′ 2 ; and Tv is the verification threshold.
Fig. 2 .
Fig. 2. Schematics of the pair of discrete-variable quantum cryptographic devices.
Fig. 3 .
Fig. 3. Amount of time needed for the generation of 1 bit in the honest case.The time needed shows a logarithmic behaviour when increasing the losses.The black dot corresponds to the back-to-back (B2B) configuration.
Fig. 4 .
Fig. 4. Experimental results of the QBER in a back-to-back setup.Blue stars: all players are honest, Red stars: dishonest prover.The black line refers to the standard security threshold value of 11% for the BB84 protocol [16].
Fig. 6 .
Fig. 6.Comparison of the real QBER of (∆a, ∆ b ), blue down triangles, versus the estimated QBER obtained from the fragments (δa, δ b ), red up triangles, for different string lengths.
Table 1 .
Parameter settings established during the QZKP executions of the honest and dishonest cases, together with the results: the emulated distance in km (B2B being the back-to-back configuration); the losses in dB; the length L_∆ of ∆^s_{a,b}; the number of iterations for which the QZKP has been executed; the average time the system takes to generate 1 bit; the average QBER estimate; and the standard deviation of the QBER.
| 7,219.8 | 2024-01-17T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Antifreeze Protein Prolongs the Life-Time of Insulinoma Cells during Hypothermic Preservation
It is sometimes desirable to preserve mammalian cells by hypothermia rather than freezing during short term transplantation. Here we found an ability of hypothermic (+4°C) preservation of fish antifreeze protein (AFP) against rat insulinoma cells denoted as RIN-5F. The preservation ability was compared between type I–III AFPs and antifreeze glycoprotein (AFGP), which could be recently mass-prepared by a developed technique utilizing the muscle homogenates, but not the blood serum, of cold-adapted fishes. For AFGP, whose molecular weight is distributed in the range from 2.6 to 34 kDa, only the proteins less than 10 kDa were examined. The viability rate was evaluated by counting of the preserved RIN-5F cells unstained with trypan blue. Significantly, either AFPI or AFPIII dissolved into Euro-Collins (EC) solution at a concentration of 10 mg/ml could preserve approximately 60% of the cells for 5 days at +4°C. The 5-day preserved RIN-5F cells retained the ability to secrete insulin. Only 2% of the cells were, however, preserved for 5 days without AFP. Confocal photomicroscopy experiments further showed the significant binding ability of AFP to the cell surface. These results suggest that fish AFP enables 5-day quality storage of the insulinoma cells collected from a donor without freezing.
Introduction
Cells obtained by either cultivation or extraction from human or animal tissues are used in the fields of regenerative medicine and livestock farming [1]. For short-term transplantations occurring within 1-5 days, it is desirable to preserve the cells without freezing. Such cells are generally handled as assemblies of several to millions, and the percentage of viable cells in the total number (i.e., the survival rate) is generally improved by soaking them in a preservation solution comprising inorganic salts, glycerol, sugars, etc., such as lactated Ringer's [2], Euro-Collins (EC) [3], and the University of Wisconsin (UW) solutions [4]. The performance of each solution is highly dependent on the cell type, and varies widely from cell to cell, even as a function of their age [1]. Indeed, the above solutions were initially developed to preserve specific organs, but are now used for virtually every type of cell and organ. The EC- and UW-solutions could, for example, preserve human hepatocytes under hypothermic conditions (e.g. +4°C) for 24 to 72 h [5,6]. Here we examined whether the performance of a cell-preservation solution is improved by the addition of fish antifreeze protein (AFP), for which a general cell-membrane protection ability has been recognized in the last two decades [7].
AFPs, first extracted from the blood sera of polar fishes in the 1970s, were initially identified as macromolecules that specifically adsorb onto ice crystals to inhibit their growth [8]. AFPs are categorized into AFPI-IV and antifreeze glycoprotein (AFGP), according to their differences in amino acid sequence and tertiary structure [9,10]. AFPI is an amphipathic α-helical peptide (M.w. = 3.5 kDa). AFPII is an elongated globular protein with mixed secondary structures stabilized by disulfide bonds (M.w. = 14 kDa). AFPIII is made up of short β-strands and one helical turn, which construct a flat-faced globular fold (M.w. = 6.5 kDa). AFPIV consists of four α-helices of similar length which are folded into a four-helix bundle. AFGP is made up of repeating tripeptide units (Ala-Thr-Ala)n forming a polyproline type II helix, whose Thr is modified with a disaccharide moiety. A polar fish, such as the Antarctic Nototheniids, expresses eight AFGPs ranging in size from 2.6 to 34 kDa, where AFGP7 and 8 are less than 10.5 kDa. For AFPs and AFGP, the ice-binding function can be identified by their ability to shape ice crystals (e.g., into hexagonal bipyramids). The ice-binding ability is also characterized by thermal hysteresis (TH), which is the difference between the non-colligative freezing point depression and the elevated melting point [11].
The cell-preservation ability of fish AFPs was first reported by Rubinsky and colleagues in 1990 [12]. They revealed that AFGP protected the structural integrity of the oolemma of pig oocytes and inhibited ion leakage across the oolemma for 24 h at +4°C. The best preservation result was obtained with a solution of phosphate-buffered saline (PBS) containing AFGP at a concentration of 40 mg/ml. Rubinsky et al. further showed that fish AFPI-III can also protect the membrane of immature bovine oocytes for 24 h at 4°C, when the AFPs were dissolved at a concentration of 20 mg/ml [13]. The ability to protect whole rat liver against hypothermic damage was also identified for AFPIII [14]. Although such cell-protection abilities of fish AFPs were significant, their marginal performance with a specific cell line and their individual differences in cell-preservation ability were not clarified. One of the reasons for this was the scarcity of fish AFPs, which had to be purified from the blood sera of polar fishes.
Recently, a simple method of purifying massive amounts of fish AFP1-III and AFGP was developed [15]. This uses muscle homogenates of mid-latitude fishes as the source material. The present study used fish-muscle-derived AFPs supplied by a food company. The AFPs dissolved into EC-solution were examined for their ability to prolong the life-time of rat insulinoma cell-line RIN-5F under hypothermic conditions.
The preservation experiment on RIN-5F cells was divided into three steps: (a) setting of the cells, (b) 1-5-day preservation with a solution containing AFP, and (c) survival rate evaluation. Figure 1A shows a flow chart of step (a). We first cultured the cells in a CO2 incubator with medium-A in a flask by performing 2 cycles of 72-h cultivation at 37°C to reach an 80%-confluent state. The cells were then detached from the flask by the addition of 1 ml of trypsin-A and put into a centrifuge tube with 10 ml of medium-A. Following centrifugation at 1,000×g for 3 min, the collected cells were suspended in medium-A to a concentration of approximately 10^6 cells/ml. An aliquot (100 µl) of this cell suspension was put into each well of a 96-well microplate such that each well contained approximately 10^5 RIN-5F cells. Following incubation for 3 days at 37°C, medium-A was carefully removed from each well so as not to disturb the cells. Subsequently, 100 µl of the AFP-containing solution, which was pre-cooled at 4°C, was put into each well to start the 1-5-day preservation experiments. We prepared 3-5 microplates for each set of experiments.
We chose EC-solution as the base fluid to dissolve the AFP as it contains no protein, which simplified the basis for the protein-induced effect on the cells. The purified samples of AFPI from Liopsetta pinnifasciata, AFPII from Hypomesus japonicus, AFPIII from Zoarces elongates Kner, and AFGP from Eleginus gracilis were provided by Nichirei Foods Inc. (9-Shinminato, Mihama-ku, Chiba-shi, Chiba 261-8545, Japan), and were used without further purification. These fishes were captured from mid-latitude sea areas near Japan, and their minced muscles were utilized as the source material to purify a mass amount of each AFP [15]. The AFP sample was dissolved into a freshly made EC-solution consisting of 99.3 mM KCl, 15.1 mM KH2PO4, 9.0 mM NaHCO3, and 194 mM glucose (pH = 7.4), whose osmolarity was 355 mM/kg of H2O [5]. Bovine serum albumin (BSA) and trehalose were also separately dissolved into the EC solution to evaluate their cell-preservation abilities.
The survival rate of RIN-5F cells was examined using the protocols shown in Figure 1B. After 24, 72, and 120 h of preservation at 4°C, we carefully withdrew the EC-solution containing AFP from a well and stored it in a 1.5 ml tube. An aliquot (0.04 ml) of trypsin-A was then poured onto the cells remaining in the well. Following incubation at 37°C for 5 min, the cells were detached and added back into the AFP-solution withdrawn from that well into the tube, before being centrifuged at 1,500×g for 3 min. The collected cells were re-suspended with 0.04% trypan blue dissolved in PBS. The number of unstained living cells was counted using a hemocytometer, and the ratio of viable cells to the total number of cells determined before preservation was defined as the survival rate (%). Such evaluations were performed for 3 microplates taken out after 24, 72, and 120 h of preservation.
The ability of the RIN-5F cells to produce insulin was also evaluated (Figure 1C) before and after the preservation. For the latter evaluation, we took a 96-well microplate out of the 4°C incubator after 120 h of preservation and placed it in a 37°C incubator for 30 min. The AFP-solution was then replaced with fresh medium-B, and the plate was further incubated at 37°C for 3 h to induce the secretion of insulin. We then evaluated the ratio of the amount of insulin secreted after the 120 h of preservation to the amount measured before the preservation, which was examined as another survival rate. The concentration of the secreted insulin was estimated using a rat insulin ELISA kit (Shibayagi Co. Ltd.).
The antifreeze activities of AFPs were examined as described in [17] by using an in-house photomicroscope system with a Leica DMLB 100 photomicroscope (Leica Microsystems AG, Wetzlar, Germany) equipped with a Linkam LK600 temperature controller (Linkam, Surrey, UK). We monitored the AFP-induced change of the ice crystal shape (eg. hexagonal bipyramid), and also the ice growth initiation-and melting-temperatures to evaluate thermal hysteresis, a measure of antifreeze activity.
A Leica DM IRE2 confocal microscope system was also used to observe a human hepatoma cell, HepG2, and RIN-5F cells in the EC-solution containing AFPIII or BSA. The two proteins were labeled with a fluorescent reagent, Alexa-488 (Invitrogen, USA), which binds to lysine. All of the cells were incubated with 0.4 mg/ml of each protein solution for 1 h at 37°C. The photomicroscope images were then captured after rinsing with the solution without protein. All the images and movies were recorded using a color video 3CCD camera (Sony, Tokyo, Japan).
Results
Purity of the AFPI -III and AFGP samples provided from Nichirei Foods Inc. was confirmed by SDS-PAGE ( Figure 2). Each AFP is a natural mixture of the isoforms. For example, the 6.5 kDa AFPIII from Zoarces elongates Kner consists of at least 13 isoforms whose ice-binding activities are different [18]. As shown in Figure 2A, the purified AFPIII migrated on the gel to a position between bovine trypsin inhibitor (6 kDa) and insulin (3 kDa) as reported previously [19]. The AFGP sample comprises the polypeptides less than 10 kDa, whose main species were assignable to AFGP7 and 8 [20].
The structural integrity of each AFP in the cell-preservation solution was confirmed by observations of their ice-shaping ability and TH activity. The samples of AFPI, AFPII plus Ca2+ ion, AFPIII, and AFGP could each shape an ice crystal into a hexagonal bipyramid (Figure 2B), indicating that they adopt native structures in the EC-solution and exert their functions. AFPII requires a Ca2+ ion for ice-binding activity [21], so that an "unshaped" disk-like ice crystal was observed in the absence of Ca2+ (Figure 2B). Detection of approximately 1°C of TH activity for AFPI-III also indicated the functionality of their native structures. The lower TH activity of the AFGP sample (0.56°C) can be ascribed to the lack of the higher-M.w. species, AFGP1-5 (>10 kDa). The survival rate of RIN-5F cells was examined for seven EC-solutions, five of which contained 10 mg/ml of the proteins AFPI, AFPII, AFPIII, AFGP, and BSA. The 10 mg/ml concentration was chosen since it was the optimal concentration for another mammalian cell [5] and also the solubility limit of the present AFPIII sample. For AFPII, unbound Ca2+ was removed, since excess Ca2+ ion is harmful to the cells. The survival rate was also examined with plain EC solution as the control. A well-known cell-protection agent, trehalose, was also examined; it was dissolved into the EC solution without glucose to 70 mg/ml to adjust its osmolarity to that of the other solutions. Approximately 10^5 RIN-5F cells prepared in the AFPI-III, AFGP, trehalose, and BSA solutions were put into four wells of a 96-well microplate (Figure 1). At least four sets of these microplates were prepared for each time point of preservation (0, 24, 72, and 120 h), and we repeated each set of experiments three times. The number of living cells was hence averaged over 3×4 wells for one preservation period, and the number at 0 h was used as the denominator to evaluate the survival rate (%) with a standard deviation.
After 24 h of exposure to 4°C (Figure 3A), trehalose kept 98% of the RIN-5F cells alive, while the EC solution preserved only 20% of the cells. BSA gave a slightly better result (38%) than EC. The survival rate with the three AFPs was similar, at 78%. The rate with AFGP was lower (49%) compared with the other AFPs.
After 72 h of preservation, approximately 90% of the cell population died in both the EC and the trehalose solutions. The survival rates with AFPII and AFGP also decreased, to 22% and 18%, respectively, and there was almost 0% survival of the cells in the BSA solution. In contrast, much greater survival rates of 56% and 68% were obtained for AFPI and III, respectively.
After 120 h of preservation, a 58% survival rate was obtained with AFPI, implying that there is practically no change in the rate for AFPI from 72 h to 120 h. Similarly, AFPIII kept its preservation ability, although the rate went down slightly to 57%. The protective ability of AFPI and III was further verified by measuring the concentration of insulin secreted from RIN-5F cells before and after 120 h of preservation (Figure 3B). As shown, approximately 60% of the cells retained the ability to secrete insulin, but only when the EC solution contained either AFPI or III. Figure 4 shows confocal photomicroscope images of HepG2 and RIN-5F cells examining the membrane-protection ability of AFP. Images A and B show slice images of 1 micrometer width of a HepG2 cell, and images C and D are those of RIN-5F cells, respectively. Images A-D were captured after 1 h of preservation at 37°C in EC-solution containing BSA (A), AFPIII (B and C), and AFPI (D). All of the proteins were labeled with the fluorescence reagent Alexa-488, which binds to lysine. Note that BSA is a 583-residue protein and contains 20 lysines. In contrast, only one lysine exists in the 65-residue AFPIII and in the 37-residue AFPI [8]. As can be seen, the surface portion of the cell incubated with AFPIII (Fig. 4B) is brightened significantly compared with that incubated with BSA (Fig. 4A). These data suggest that AFPIII molecules bind uniformly onto the cell surface, while BSA undergoes heterogeneous binding. The superior binding ability of AFPIII is more evident from a comparison of the intact cell images (Fig. 4A' and 4B'), each of which was synthesized by stacking the slices. For the RIN-5F cells (Fig. 4C and 4D), it was difficult to capture an image of non-aggregated cells, as they tend to stick together and collapse very easily.
Discussion
The observed life-time elongation of RIN-5F cells could be attributed to the AFP content of the EC solution, since the solution itself provided very little protection to the cells. AFPI and III at concentrations of 10 mg/ml could keep approximately 60% of an insulinoma cell line alive for 120 h, and the preserved cells retained the ability to secrete insulin. Previous studies showed that AFPI-III could preserve oocytes for 24 h at 4°C; e.g., 50% of immature bovine oocytes were able to undergo in vitro maturation after preservation [13]. This is consistent with the present results; an approximately 78% survival rate was obtained after 24 h of preservation with AFPI-III. The observed protective ability of AFGP was, however, much poorer compared with the others, which became consistently evident during the preservation period (Figure 3). This is probably because the present AFGP sample contained only the smaller glycopeptides of less than 10 kDa (Figure 2). The full set of AFGP peptides was also needed for the preservation of pig oocytes [12]. Significantly, the preservation ability of AFPII became poorer than that of AFPI and III after 72 h at the low temperature. Such a difference in ability was not visible in the 24-48 h preservation period, but became evident at a time of marginal performance, 120 h in this case.
Fig. 3 caption (fragment): (Fig. 1B). B. The survival rate evaluated by measuring the amount of insulin secreted from living cells (Fig. 1C). A freshly made Euro-Collins solution (EC) was used as a basic cell-preservation fluid to dissolve AFPI-III, trehalose, and BSA. doi:10.1371/journal.pone.0073643.g003
The mechanism by which AFPs protect cells under hypothermia is not well understood. In general, the lipid bilayer is in a fluid state at physiological temperatures. As the temperature is lowered, the bilayer segregates and becomes leaky, which allows ions from the exterior to enter the cell in uncontrollable ways, leading to cell destruction [1]. Rubinsky showed the ability of AFPI and AFGP to block passive ion channels [23], which was assumed to reduce the leakiness of membranes and provide cold tolerance to the cells. Tomczak et al. showed that AFP is introduced into the lipid bilayer through a hydrophobic interaction [24]. This interaction increases the phase transition temperature of the membranes and alters the molecular packing of the acyl chains, leading to a reduction in membrane permeability. A recent time-lapse SECM experiment further demonstrated that a human hepatoma cell, HepG2, swelled and ruptured during hypothermic exposure, while this process was effectively suppressed by AFPIII [25]. This mechanism presumably led to the 80% survival rate of the cells after 72 h of preservation at 4°C in EC-solution containing 10 mg/ml of AFPIII [5].
The confocal photomicroscope images of HepG2 and RIN-5F cells ( Figure 4) clearly showed that both AFPI and III are capable of binding to the surface of RIN-5F cells. The membrane surface was covered with AFP more entirely compared with BSA. Hence, taking all the obtained information together, we suggest that AFPI and III possess the same level of binding affinity to the surface of RIN-5F cells, covering the whole surface effectively to inhibit its swelling. This mechanism results in delaying of their rupture, which was detected as the improvement in survival rate.
A difference in cell-preservation capacity among AFPI-III became evident in the present study; the ability of AFPII is poorer than that of AFPI and III (Figure 3). It should be noted that the size of the non-polar accessible surface area is approximately 2,400 Å² for both AFPI and III, while it is 4,200 Å² for AFPII [26], when evaluated from each protein's coordinates (Protein Data Bank, http://www.rcsb.org/pdb/), 1WFB, 2ZIB, and 1HG7, respectively. Although it is unclear what determines the capacity of AFPI-III, the hydrophobicity specified by the non-polar accessible surface area might be one of the factors differentiating AFPII from the others, since it will affect the proper binding of an AFP to the lipid bilayer. Whether the ice-binding surface of AFPs overlaps with the membrane-binding surface should be another interesting issue to clarify.
Insulin-dependent diabetes mellitus is a disease that affects millions of people who have difficulty controlling their blood-sugar levels owing to deficiencies in the insulinoma cells [16]. Transplantation of insulinoma cells into such diabetic patients has been attempted since the 1990s. Currently, the cells collected from a donor are all stored in a blood bag prior to being infused into a portal vein in the patient's liver. When the insulinoma cells are bound to the portal vein, they work as a sensor to monitor the blood-sugar level and secrete insulin [27]. A key step is the quality storage of the insulinoma cells collected from a donor, to which AFP is expected to contribute. It is thought that AFPs do not present chemical toxicity threats at high concentrations (40 mg/ml) [28], do not affect the cell osmotically owing to their high molecular weight, and are soluble in buffer solutions. With the help of the mass-preparation technique, AFP may enable 5-day quality storage of the insulinoma cells collected from a donor without freezing. This will lead to an improvement in the success rate of diabetes mellitus treatment.
Fig. 4 caption: Images A and B show the slice data of 1 µm width of HepG2, and A' and B' the intact cell images synthesized by stacking the slices. These 4 images were reproduced from [22] with permission. Images C and D are the slice data for the RIN-5F cells. All the images were captured after 1 h of preservation at 37°C. Accumulation of AFPIII on the cell surface is more evident compared with BSA. doi:10.1371/journal.pone.0073643.g004
| 4,800.6 | 2013-09-17T00:00:00.000 | [
"Biology",
"Medicine"
] |
Intonation processing of interrogative words in Mandarin: an event-related potential study
Intonation is the variation in pitch used in speech, which forms the premise of tonal and non-tonal languages. Interrogative words are words that introduce questions. Previous research lacks clarity regarding the specific cues used in the processing of word intonation. To address this gap, this study used the event-related potential electroencephalogram (EEG) research method to explore the intonation processing of tone two (mid-rising) interrogative words in Mandarin. For this, the word “shui,” meaning “who,” was selected as the experimental material. To avoid the influence of the environment, gender, and semantics, the Hum version, corresponding to the stimulus material, was also adopted for the experiment. This study used a passive oddball paradigm to examine the clues of intonation information processing in automatic cognitive processing through amplitude, latency, time window, and evoked location potential mismatch negativity. The standard stimulus was the declarative intonation with a high probability of occurrence (90%), and the deviant stimulus was the interrogative intonation with a low probability of occurrence (10%). In the time window of 370–450 ms, the mismatch negativity was found at the F3, F4, C3, Cz, and C4 channels. The findings show that, in the passive oddball paradigm, lexical semantics are essential for intonation processing at the pre-attentive level, which is dominated by the frontal and central areas of the brain. The results support the functional and comprehensive hypotheses that the processing of intonation is based on the function of language and that bilateral regions are involved in this processing. This study makes an important contribution by providing event-related potential evidence that lexical semantics plays a key role in the pre-attentive processing of intonation, as shown by the significant differences between semantic and non-semantic conditions.
Introduction
Intonation is common information in intonation and non-intonation languages, and it is also the mode of pitch change at the sentence level.The change in intonation is mainly formed through a change in the overall pattern of pitch, and its realization is a comprehensive effect that includes coordinated changes in pitch, length, and intensity (Yang, 2015).Mandarin Chinese tones can be described phonetically as having high-level (tone 1), mid-rising (tone 2), low-rising (tone 3), or high-falling (tone 4) pitch patterns.As there are different degrees of similarity among the four tones in terms of acoustic characteristics, these differences could influence pre-attentional processing (Xu, 1997).Interrogative words refer to words that can help raise questions, and their basic usage is to express inquiries and access unknown information (Xue, 2014).
Previous research lacks clarity regarding the specific cues used in the processing of word intonation. This study examined automatic intonation processing using a passive oddball paradigm, focusing on the amplitude, latency, time window, evoked location, and other mismatch negativity (MMN) information. This study hypothesized that lexical semantic function is important during early intonation processing. The acoustic hypothesis is supported if the semantic and non-semantic conditions are not distinguishable.
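As a rough sketch of the paradigm's logic (not the authors' stimulus-delivery or EEG analysis pipeline), the snippet below generates a pseudo-random sequence with about 10% deviants and computes an MMN-style difference wave as deviant minus standard; the no-repeat constraint on deviants is a common convention assumed here, not a detail reported in the study.

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.10, seed=0):
    """Generate a standard/deviant trial sequence with ~10% deviants,
    never allowing two deviants in a row (a common constraint)."""
    rng = random.Random(seed)
    seq, prev = [], "standard"
    for _ in range(n_trials):
        trial = "deviant" if prev != "deviant" and rng.random() < p_deviant else "standard"
        seq.append(trial)
        prev = trial
    return seq

def mmn_difference(erp_deviant, erp_standard):
    """MMN-style difference wave: deviant ERP minus standard ERP, per sample."""
    return [d - s for d, s in zip(erp_deviant, erp_standard)]

seq = oddball_sequence()
print(seq.count("deviant") / len(seq))   # close to 0.10 (slightly lower: deviants never repeat)
```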
Brain lateralization in speech processing
Lateralization of the brain during speech information processing, also known as asymmetry, has attracted considerable attention from researchers. Researchers generally believe that information processing in language, whether in speech perception or production, is a function of language, and that both aspects rely primarily on the left hemisphere of the brain (Markus and Boland, 1992). Studies suggest that intonation processing involves brain areas related to speech and premotor functions as well as universal auditory mechanisms, and that it shares similarities across languages, but with some dissociations for tonal-language speakers (Chien et al., 2020). Intonation processing among tonal-language speakers involves increased frontotemporal connectivity, suggesting the involvement of a phonological network (Chien et al., 2020). Overall, intonation processing involves the activation of specific brain regions, including the left inferior frontal gyrus and the bilateral temporal regions, as well as the establishment of functional connectivity within phonological networks.
However, in the processing of speech prosodic information there is a phenomenon of lateralization of brain functions, whereby the information carried by prosody behaves differently over specific time spans, and there are differences in the brain regions that integrate such information. Regarding the processing of speech prosodic information, previous studies have mainly proposed four viewpoints: the functional hypothesis (Liberman and Whalen, 2000), the acoustic hypothesis (Zatorre et al., 2002), the comprehensive hypothesis (Gandour et al., 2004), and the two-stage model (Luo et al., 2006). In previous studies, the acoustic (Tong et al., 2005; Ren et al., 2009) and functional hypotheses (Gandour et al., 2004; Wong et al., 2005) have been supported by many experiments. However, most of these studies focus on the level of lexical recognition and lexical-tonal information processing, and they rarely discuss the brain mechanisms behind intonation processing.
The functional hypothesis, which is also known as the taskdependent hypothesis, states that pitch processing is biased toward the left hemisphere of the brain when pitch patterns carry more speech information.When less verbal information is conveyed in pitch patterns, pitch processing is biased toward the right hemisphere of the brain (Liberman and Whalen, 2000).Gandour et al. (2002) used Thai vocabulary to compile experimental materials, and they employed functional magnetic resonance technology (fMRI) to study the processing of Thai sounds and Thai vowel lengths by selecting native Chinese and Thai speakers under both phonetic and non-phonetic conditions.The brain mechanisms of the two groups of participants in processing spectral information and processing time information related to language were investigated.The results showed that only the native Thai speakers experienced activation in the left sub-prefrontal cortex of the brain under the condition of tone judgment.Gandour et al. (2003) conducted a cross-language study using functional magnetic resonance imaging (fMRI) and found that when processing information related to the lexical tone of Chinese words, native Chinese speakers mainly relied on the left hemisphere for information processing, whereas the right hemisphere was mainly relied on when processing information related to intonation.
The acoustic hypothesis, also known as the cue-dependent hypothesis, states that the acoustic structure of auditory stimuli determines the functional lateralization of the two hemispheres (Zatorre et al., 2002): sounds that reflect spectral changes are mainly processed in the right hemisphere, whereas sounds that reflect temporal changes are mainly processed in the left hemisphere. Tong et al. (2005) used fMRI to investigate the neural mechanisms of native Chinese and English speakers when processing prosodic information in Mandarin Chinese. Both groups showed rightward shifts in the medial frontal gyrus, and activation in the left supramarginal gyrus and posterior middle temporal gyrus observed in native Chinese participants was not found in native English participants. A change in pitch pattern can lead to changes in both lexical tone and lexical intonation. In a study on the neural mechanism of tone, Kaan et al. (2007) adopted the passive oddball paradigm to investigate the influence of language background on automatic processing, selecting three groups of participants whose native languages were Chinese, English, and Thai to study the processing of Thai tones. The results showed that the Chinese and English participants performed similarly in discriminating Thai low and mid tones; under these two experimental conditions, the discrimination of Thai tones induced MMN in all three groups, and the mean MMN amplitude did not differ significantly among them. Ren et al. (2009) studied the neural mechanisms behind the automatic processing of pitch information and found that, regardless of whether the change in pitch altered the intonation of words with or without linguistic function, and regardless of whether the change occurred in a linguistic or non-linguistic environment, participants showed a right-hemisphere processing advantage. Subsequently, Ren et al. (2011) explored the brain mechanisms behind the automatic processing of Chinese intonation and found that the processing of Chinese tone-two intonation in the absence of semantics produced a right-hemisphere processing advantage, independent of the integration of time windows.
According to the comprehensive hypothesis (Gandour et al., 2004), the recognition of speech prosody is modulated by the right hemisphere, which is responsible for complex acoustic analysis of speech, but it is also carried out by the left hemisphere, which is responsible for language processing. This was reported by Schmithorst et al. (2006), who used fMRI to study prosodic information processing in 5-year-old children asked to complete a prosody-matching task. The results suggested that the children had a right-hemisphere advantage in processing prosodic information, but networks in both hemispheres were activated. Subsequently, Wartenburger et al. (2007) studied the speech content and prosody processing of 4-year-old children using near-infrared spectroscopy. The results demonstrated that the right frontal and temporal lobes showed significant activation when only the prosodic information of speech was processed, whereas the left hemisphere showed greater activation when the content of speech was processed.
According to the two-stage model (Luo et al., 2006), speech is initially processed in the pre-attention stage as a general sound signal rather than a function-specific signal, after which it is mapped onto a semantic representation through the activation of neural circuits. Luo et al. (2006) proposed that the brain relies mainly on acoustic cues during early word processing, which is consistent with the acoustic hypothesis. The hemispheric advantage of speech processing depends on the acoustic structure in the first stage, i.e., on solving the computational problems posed by the precise extraction of temporal and spectral information, but it also depends on the linguistic function in the second stage, i.e., on integrating information in the language. This theory suggests that the acoustic and functional hypotheses are not mutually exclusive but simply operate at different stages of auditory processing.
Mismatch negativity
Mismatch negativity (MMN) was first reported by Näätänen et al. (1978). The most classic paradigm for inducing MMN is the dichotic oddball paradigm. During the experiment, a standard stimulus with a higher probability and a deviant stimulus with a lower probability are presented to the participants through the left and right ears, and the participants are required to attend to the sound in one ear while ignoring the sound in the other ear. It was found that whether the deviant stimulus appeared in the attended or the unattended ear, it elicited a larger negative wave than the standard stimulus. MMN opens a window onto auditory processing that reflects the neural mechanisms of brain processing, thereby providing a new understanding of the brain processes that form the biological basis of central auditory perception, different forms of auditory memory, and the attentional processes that control the entry of auditory sensory input into conscious perception and higher forms of memory (Näätänen et al., 2007). MMN usually peaks 150-250 ms after a stimulus, and its latency shortens as the amplitude induced by the stimulus increases (Näätänen et al., 2007; Winkler, 2007).
It has remained unclear which cues are used to process word intonation. Comparing the main hypotheses of prosodic information processing, it is apparent that the proponents of the functional and acoustic hypotheses use different research methods. Functional magnetic resonance imaging (fMRI) has strong spatial localization but poor temporal resolution; therefore, the automatic processing of intonation information cannot readily be studied using fMRI. Ren et al. (2011) used the passive oddball paradigm to explore intonation information processing and found that the intonation of Mandarin tone-two (mid-rising) words did not trigger MMN under the semantic condition. Mandarin tone two (mid-rising) is a special tone with a rising pitch contour that is very similar to interrogative intonation. Comparisons of declarative and interrogative intonation in Chinese have shown that interrogative intonation has a higher phrase curve than declarative intonation (Yuan et al., 2002).
In this study, the event-related potential (ERP) method was used to explore the intonation processing of the Mandarin tone-two (mid-rising) interrogative word. The tone-two interrogative word "shui" (meaning "who") was selected as the experimental material (semantic condition). Meanwhile, to avoid the influence of environment, gender, or semantics, a Hum version of the same stimulus material was adopted (non-semantic condition). A passive oddball paradigm was used to examine the cues of intonation information processing during automatic processing through the amplitude, latency, time window, evoked location, and other MMN information. This study proposed the following hypotheses: if lexical semantic function is important during the early stage of intonation processing, the results support the functional hypothesis; if there is no difference between the semantic and non-semantic conditions, the results support the acoustic hypothesis.
Participants
Seventeen undergraduate and postgraduate students from Liaoning Normal University (11 males and 6 females) were recruited to participate in the experiment. The participants were aged 20-26 years, with a mean age of 23 years. All were right-handed native speakers of Mandarin Chinese, had normal hearing and normal or corrected-to-normal vision, had no neurological diseases or internal or external brain damage, and were not continuous users of addictive drugs. Before the formal experiment began, each participant was informed of the specific experimental procedures, read and signed an informed consent form, and received a reward afterward.
Experimental design
A single-factor, two-level (semantic and non-semantic), within-subject experimental design was adopted. In the semantic condition, the sound material was the tone-two interrogative word "shui" (meaning "who"). In the non-semantic condition, the sound material was a Hum version of the tone-two interrogative word "shui".
Materials
This study comprised two experimental conditions, in which the stimulus sequence, composed of sound stimuli, included two types of stimuli: (i) a standard stimulus with a high probability of occurrence (90%), and (ii) a deviant stimulus with a low probability of occurrence (10%). The first condition comprised the tone-two interrogative word with semantic meaning, in which the standard stimulus was the declarative intonation of "shui" and the deviant stimulus was the interrogative intonation of "shui". The second condition comprised the tone-two interrogative word with non-semantic meaning, in which the standard stimulus was the declarative intonation of "shui" in the Hum version (to avoid the influence of environment, gender, and semantics), and the deviant stimulus was the interrogative intonation of "shui" in the Hum version.
All the sound stimuli used in the experiments were recorded by professionals (Mandarin Level 1 B) using a CSL-4500 voice workstation. The frequency parameters of all sound stimuli were set at Hz. After recording each sound more than three times, the best token was selected as the official experimental material. All sounds were processed using Praat software (Styler, 2013). The duration of the stimulus in both intonations (declarative and interrogative) of the tone-two interrogative word "shui" was 510 ms. The sound stimuli used in both conditions had the same acoustic parameters, and the onset time of the sound was the same for the standard and deviant stimuli. The average and maximum amplitudes of both stimuli were also adjusted to be the same using Praat. The spectrum diagram of the stimuli is shown in Figure 1, where the ordinate is the fundamental frequency (f0) of the material and the abscissa is time (ms). Before the formal experiment started, ten students who did not participate in the formal experiment evaluated the stimuli used in the experiment.
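The acoustic preparation described above was carried out interactively in Praat; the sketch below reproduces the same kind of steps programmatically with the parselmouth Python interface to Praat, purely as an illustration. The file names, the 70 dB target intensity, and the pitch time step are assumptions, not values reported by the authors.

```python
import parselmouth
from parselmouth.praat import call

# Hypothetical file names for the declarative (standard) and interrogative (deviant) tokens
declarative = parselmouth.Sound("shui_declarative.wav")
interrogative = parselmouth.Sound("shui_interrogative.wav")

# Equalize the average intensity of the two stimuli (target value assumed here)
for snd in (declarative, interrogative):
    call(snd, "Scale intensity", 70.0)   # Praat's "Scale intensity..." command

# Extract the f0 contour of the kind plotted in Figure 1
pitch = interrogative.to_pitch(time_step=0.01)
f0_hz = pitch.selected_array["frequency"]   # 0 where a frame is unvoiced
times_s = pitch.xs()
```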
Procedure
The experiment consisted of two conditions (blocks), each containing 715 sound stimuli (trials) with an offset-to-onset inter-stimulus interval (ISI) of 700 ms. The first 15 sound stimuli in each condition were not included in the EEG recording or statistical analysis. The probability of the standard stimulus was 90% (630 trials) and the probability of the deviant stimulus was 10% (70 trials). In each stimulus sequence, the order of stimulus presentation was pseudorandomized in advance to ensure that there were at least two standard stimuli between consecutive deviant stimuli. The order of presentation of the experimental conditions was counterbalanced across participants. The sound stimuli were presented to the participants through headphones at a volume of 70 dB. The participants were asked to choose a silent movie according to their own preferences so that their attention was diverted away from the auditory stimuli; during the formal experiment, they were asked to ignore the sounds in the headset and focus on watching the selected silent movie.
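As an illustration of the constraint that at least two standard stimuli separate consecutive deviants, the following Python sketch builds one such 700-trial pseudorandom block (630 standards, 70 deviants) plus the 15 unanalyzed leading standards. It is a minimal reconstruction of the randomization logic, not the authors' actual stimulus-presentation script.

```python
import random

def oddball_block(n_standard=630, n_deviant=70, min_gap=2, seed=1):
    """Build one pseudorandom oddball block ('S' = standard, 'D' = deviant)
    with at least `min_gap` standards before every deviant."""
    rng = random.Random(seed)
    free = n_standard - min_gap * n_deviant          # standards left to scatter
    # Uniform random composition of `free` into n_deviant + 1 non-negative parts
    bars = sorted(rng.sample(range(free + n_deviant), n_deviant))
    parts, prev = [], -1
    for b in bars:
        parts.append(b - prev - 1)
        prev = b
    parts.append(free + n_deviant - 1 - prev)
    seq = []
    for extra in parts[:-1]:
        seq += ["S"] * (extra + min_gap) + ["D"]     # >= min_gap standards, then a deviant
    seq += ["S"] * parts[-1]
    return ["S"] * 15 + seq                          # 15 unanalyzed leading standards

block = oddball_block()
assert len(block) == 715 and block.count("D") == 70
```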
The experiments were conducted in a soundproof and lightproof laboratory with soft lighting. Each participant sat on a chair 65 cm away from the computer monitor. An appropriately sized electrode cap was chosen according to the participant's measured head circumference, and the impedance of each electrode was adjusted until it was reduced to below 50 kΩ. After the electrodes were adjusted, the following instructions were presented to the participants on the monitor: "In the subsequent experiment, you will hear a series of sounds through the headset while watching a silent movie. Please ignore the sounds in the headset and watch the movie carefully. During each break, I will ask you about the movie plot; please try your best to answer these questions. Please also try your best to control your body movements and eye blinks during the experiment. If you confirm your participation, press any key to start the experiment." The participants were asked to read the instructions carefully and confirm their understanding before starting. After completing the practice experiment, the participants entered the formal experiment.
In the formal experiment, the sound stimuli were presented to the participants through headphones. At the beginning of each experimental condition, 15 standard stimuli were presented successively. After each sound stimulus ended, there was a 700 ms silent interval, after which the next sound stimulus was presented, with the sounds following the pseudorandom order prepared in advance. After all 715 sounds had been presented, the condition was completed. After each experimental condition, the participants rested for as long as they wished. The specific presentation procedure of the experimental stimuli is illustrated in Figure 2.
EEG recordings
Electroencephalogram (EEG) equipment produced by EGI (Electrical Geodesics, Inc.; https://www.egi.eu/) was used to record brain signals, comprising a 128-channel electrode cap, a 300-fold amplifier, and the corresponding EEG acquisition system (Net Station 5.0) (Figure 3). The electrode positions of the cap were arranged according to the international 10-20 system, and the Cz electrode was used as the reference in the original recording. The electrodes for recording the vertical electrooculogram (VEOG) and horizontal electrooculogram (HEOG) were included among the 128 electrodes of the cap, and the electrodes were adjusted to the standard recording positions before data acquisition. The sampling rate of the EEG device was 250 Hz, and the impedance of each electrode was kept below 50 kΩ throughout the experiment.
EEG data analysis
All recorded electroencephalogram (EEG) data were analyzed using MATLAB (MATLAB 2022b, MathWorks, Inc.). The EEGLAB toolbox (EEGLAB v2023.0) (Delorme and Makeig, 2004) and customized scripts were used for preprocessing. During preprocessing, the continuous data were band-pass filtered between 0.1 Hz (high-pass) and 30 Hz (low-pass). The filtered data were segmented according to the event markers into epochs of 1100 ms, including a 100 ms baseline. Head-motion artifacts were identified and removed manually, and the EEGLAB independent component analysis (runica) function was used to correct electrooculogram (EOG) artifacts. Thereafter, automatic detection was used to remove whole epochs containing poor EEG segments with amplitudes greater than 100 µV. The data were re-referenced to the bilateral mastoid electrodes (E57 and E100). The MMN waveform was obtained by subtracting the response to the standard stimulus from the response to the deviant stimulus. The time window of the MMN was determined to be 370-450 ms according to the peaks of the total average amplitude of the ERPs generated in the experiment. Nine electrodes (F3, C3, P3, F4, C4, P4, Fz, Cz, and Pz) were selected for the statistical analysis.
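The preprocessing was carried out in MATLAB with EEGLAB; the outline below shows an equivalent pipeline in MNE-Python only to make the chain of steps concrete. The file name, event codes, and number of ICA components are placeholders, and in practice the ocular components were selected manually.

```python
import mne

# Hypothetical file name and trigger codes; this mirrors the reported steps,
# not the authors' actual EEGLAB scripts.
raw = mne.io.read_raw_egi("subject01.mff", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)                      # 0.1-30 Hz band-pass

events = mne.find_events(raw)                            # assumes a stim channel
event_id = {"standard": 1, "deviant": 2}                 # assumed trigger codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1.0,
                    baseline=(-0.1, 0.0), preload=True)  # 1100 ms epochs

# Ocular-artifact correction by ICA (components chosen manually in practice)
ica = mne.preprocessing.ICA(n_components=20, random_state=0).fit(epochs)
epochs = ica.apply(epochs)
epochs.drop_bad(reject=dict(eeg=100e-6))                 # drop epochs exceeding 100 µV
epochs.set_eeg_reference(["E57", "E100"])                # bilateral mastoids

# MMN = deviant minus standard; mean amplitude in the 370-450 ms window
mmn = mne.combine_evoked([epochs["deviant"].average(),
                          epochs["standard"].average()], weights=[1, -1])
window_mean = mmn.copy().crop(tmin=0.37, tmax=0.45).data.mean(axis=1)  # per channel, in volts
```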
Analysis of mean amplitude
The mean ERP amplitude was first analyzed using a three-factor repeated-measures ANOVA with two experimental conditions (semantic vs. non-semantic) × two stimulus types (standard vs. deviant) × nine electrodes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4). The Greenhouse-Geisser method was used to correct for lack of sphericity (Abdi, 2010). The grand-averaged waveforms elicited by the standard and deviant stimuli under the two experimental conditions (semantic vs. non-semantic) are shown in Figure 4.
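The three-way repeated-measures ANOVA can be reproduced, for example, with statsmodels in Python, assuming a long-format table of per-participant mean amplitudes in the 370-450 ms window; the file and column names below are illustrative. Note that AnovaRM reports sphericity-uncorrected p-values, so the Greenhouse-Geisser correction and partial eta-squared reported here would have to be computed separately.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per participant x condition x stimulus x electrode,
# with the mean amplitude (µV) in the 370-450 ms window.
df = pd.read_csv("mean_amplitude_370_450ms.csv")
# expected columns: subject, condition, stimulus, electrode, amplitude

anova = AnovaRM(df, depvar="amplitude", subject="subject",
                within=["condition", "stimulus", "electrode"]).fit()
print(anova.anova_table)   # F, degrees of freedom, and uncorrected p-values
# Greenhouse-Geisser epsilon and partial eta-squared are not returned by AnovaRM
# and would be computed separately (e.g., with pingouin for two-factor subsets).
```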
In the time window of 370-450 ms, the main effect of experimental condition was significant, F(1, 16) = 14.554, p = 0.002, ηp² = 0.476. The mean amplitude in the semantic condition (−0.937 µV) was significantly more negative than that in the non-semantic condition (−0.158 µV).
The interaction of experimental condition × stimulus type was significant, F(1, 16) = 7.611, p = 0.014, ηp² = 0.322. Simple-effects analysis showed that in the semantic condition, the mean amplitude of the standard stimulus (−0.352 µV) was significantly less negative than that of the deviant stimulus (−1.321 µV), p = 0.002. The interaction of electrode × stimulus type was also significant, F(8, 128) = 2.847, p = 0.043, ηp² = 0.151. Simple-effects analysis indicated that at the F3, F4, C3, Cz, and C4 channels, the mean amplitude of the standard stimulus was significantly less negative than that of the deviant stimulus, p < 0.05.
Analysis of MMN
The mean MMN amplitude, obtained by subtracting the standard stimulus from the deviant stimulus, was analyzed using a three-factor repeated-measures ANOVA: two experimental conditions (semantic vs. non-semantic) × three scalp regions (frontal, central, and parietal) × two hemispheres (left vs. right). The Greenhouse-Geisser method was used to correct for lack of sphericity (Abdi, 2010). The grand-averaged MMN waveforms for the two experimental conditions (semantic vs. non-semantic) are shown in Figure 5.
In the time window of 370-450 ms, the main effect of experimental condition was significant, F(1, 16) = 8.040, p = 0.012, ηp² = 0.334. The mean MMN amplitude in the semantic condition (−0.974 µV) was significantly more negative than that in the non-semantic condition (−0.101 µV).
The main effect of scalp region was marginally significant, F(2, 32) = 3.360, p = 0.072, ηp² = 0.174 (Figure 6). Pairwise comparisons indicated that the mean MMN amplitude in the central region (−0.711 µV) was significantly more negative than that in the parietal region (−0.330 µV), p = 0.003. Additionally, the mean MMN amplitude in the left hemisphere (−0.559 µV) was slightly more negative than that in the right hemisphere (−0.516 µV).
Discussion
The statistical results indicated a significant difference between the mean amplitudes in the semantic and non-semantic conditions, illustrating that the brain processes the tone-two (mid-rising) interrogative word "shui" differently with lexical semantics than without lexical semantics (Hum version). Because the experiment used the passive oddball paradigm and the MMN reflects automatic processing at the pre-attentive level, the brain could distinguish the lexical semantics of the tone-two interrogative word.
Additionally, the simple-effects analysis of the interaction between experimental condition (semantic vs. non-semantic) and stimulus type (standard vs. deviant) suggested that the deviant stimulus (interrogative intonation) evoked MMN only in the semantic condition. This means that when the tone-two interrogative word "shui" carries lexical semantics, the brain can distinguish between declarative and interrogative intonation. However, when only the acoustic stimulus is present (Hum version of "shui"), the brain cannot process the different intonations (declarative vs. interrogative) of tone two (mid-rising). These results are consistent with those of Ren et al. (2011). This might be because both the interrogative intonation and the Mandarin tone-two (mid-rising) word end with a mid-rising pitch contour, so pre-attentive processing cannot differentiate the interrogative intonation from the tone-two (mid-rising) ending itself; nevertheless, the brain remains sensitive to interrogative words in declarative versus interrogative intonation, possibly because such words are routinely used to ask questions.
The distribution maps of MMN mean amplitude in different conditions.
In addition, the simple-effects analysis of the electrode × stimulus type interaction showed that MMN was present only at the frontal (F3, F4) and central (C3, Cz, and C4) scalp sites, which means that at the pre-attentive level, the dominant brain regions processing the intonation information were the frontal and central areas. The findings show that lexical semantic function is important in intonation processing at the pre-attentive level and that this processing is dominated by the frontal and central areas.
In this study, the MMN time window ranged from 370 to 450 ms, which is later than the usual window of MMN generation (Näätänen et al., 2007; Winkler, 2007). This might be due to the particular experimental materials. Previous studies mostly used simple monosyllables, whereas the tone-two (mid-rising) interrogative word "shui" begins with the fricative "sh"; the pitch contour shows that the pitch relevant to intonation emerges only after the fricative, which may delay the MMN time window depending on the structure of the syllable and the position at which the meaning-bearing contour is produced. Additionally, there was no clear evidence of a dominant hemisphere when processing declarative and interrogative intonation, which supports the functional and comprehensive hypotheses that intonation processing is based on linguistic function and that both hemispheres are involved in the processing.
Conclusion
This study made the first endeavor to explore the intonation processing of tone-two (mid-rising) interrogative words in Mandarin at the pre-attentive level. Using EEG recordings of the MMN, we provide novel evidence that lexical semantics strongly modulates pre-attentive brain responses to intonation contours. The significant differences in MMN amplitude between semantic and non-semantic stimuli demonstrate that the brain rapidly distinguishes between interrogative and declarative intonation only when words carry lexical semantic information. This finding supports a functional rather than an acoustic account of early intonation processing, consistent with the view that the brain processes speech prosody in relation to linguistic function. The results indicate that frontal and central cortical regions underlie automatic intonation processing, thereby elucidating the neural generators and temporal dynamics involved. Overall, the data demonstrate that language experience shapes pre-attentive auditory processing to be maximally sensitive to linguistically relevant pitch patterns in speech. The passive oddball paradigm with Mandarin tone-two (mid-rising) interrogatives provides an elegant way to probe the early functional processing of intonation before attention or awareness. Therefore, this study contributes to the understanding of the early neural processing of linguistic intonation. However, the experiments used a single word, "shui" (meaning "who"), assessed only with the second tone. Future studies should explore different tones and other interrogative Mandarin words to further examine intonation processing.
FIGURE 1 Spectrum diagram of sound stimuli.
(A) The grand-averaged waveforms of the semantic condition at the nine channels of interest. (B) The grand-averaged waveforms of the non-semantic condition at the nine channels of interest. | 6,322.4 | 2023-12-15T00:00:00.000 | [
"Linguistics"
] |
Research on Coupling Method of Watershed Initial Water Rights Allocation in Daling River
As a typical irregular, nonlinear, and multidimensional system decision-making problem, watershed initial water rights allocation involves various resource-distribution, economic, social, and environmental objectives. Because weight determination in traditional methods is highly subjective, this study adopts a new coupling method, a dimension-reduction approach, to solve the watershed initial water rights allocation problem. The allocation data of the Daling watershed are projected along the optimal projection direction to obtain the watershed initial water rights allocation scheme in a low-dimensional space.
Introduction
Water is one of the basic resources for agricultural development and occupies a strategic position in agricultural production. As an increasingly scarce strategic resource, water has a global and long-term impact on food security and agricultural economic development. Nowadays, water scarcity is becoming much more severe due to population growth, climate change, and water pollution. In many regions, groundwater has been over-exploited to irrigate crops in dry years, causing water tables to drop. These problems have severely damaged the environment and reduced social efficiency. The contradiction between water supply and demand has become increasingly prominent, aggravated by excessive consumption and indiscriminate waste of water. The shortage and pollution of water pose a grim challenge to the sustainable use of water resources and the water environment. How to use water resources rationally and improve the efficiency and equity of water utilization has therefore become a focus for economic theorists and policy makers.
Although the excessive use and waste of water resources are the main immediate causes of water shortage, structural defects in the existing water rights system can be regarded as its deeper cause. Because water property rights are unclear, economic agents treat water as a free "public resource," which leads to its excessive use and waste. If the current water rights system in China continues without fundamental change, merely raising people's awareness of water conservation and expanding the limited water supply cannot fundamentally solve the current low efficiency of water use.
Some researchers have studied hydrological, ecological, and other models. Most of them focus on a single factor and ignore multi-factor analysis, so it remains a challenge to solve the water rights allocation problem directly, since it is a multi-factor clustering problem. The key issue is how to derive a single-factor question from the multi-factor one. Consequently, we adopt Projection Pursuit (PP), introduced by Friedman and Tukey (1974), to address this problem. Projection pursuit is an effective method based on projection index functions; when the standard deviation and local density are used to construct the projection index function, an empirical formula can be derived for its only exclusive parameter, the density window radius of the projection pursuit cluster model, and the model has also been applied to nonlinear problems. By projecting high-dimensional data into a low-dimensional space (e.g., three, two, or one dimension), the PP model can be driven directly by the low-dimensional data. Beyond the basic PP model, researchers have studied derived models for analyzing high-dimensional data, such as projection pursuit regression (PPR), projection pursuit density estimation (PPDE), projection pursuit cluster (PPC), projection pursuit learning networks (PPLN), and projection pursuit wavelet learning networks (PPWLN) (Friedman and Stuetzle 1981, Friedman et al. 1984, Hall 1989, Hwang et al. 1994, Lin et al. 2003). The PP method, with its good stability, strong resistance to interference, and high accuracy, has been widely used in many areas. The coupling method has also been used successfully in different areas to evaluate multi-factor problems, but it only provides a projected characteristic value that retains the major characteristics of the data according to the projection index (Wang et al. 2002). In this paper, we propose a mathematical model based on the projection pursuit technique to solve the above-mentioned problems and find the optimal projection direction using the coupling optimization algorithm. A new projection index is constructed to overcome the difficulty of weight determination. The method complies with the watershed initial water rights allocation mechanism and meets the control requirements on water quantity, water quality, and water-use efficiency, which helps to achieve an effective allocation of water resources.
Study Area
As the largest river in western Liaoning Province, China, the Daling watershed stretches across the provinces of Liaoning, Inner Mongolia, and Hebei. The region has a typical continental monsoon climate characterized by hot, rainy summers and cold, dry winters, which results in an uneven distribution of precipitation within the year, with rainfall concentrated in July and August; the mean annual rainfall increases gradually from north to south. The annual mean precipitation of the watershed is between 400 mm and 600 mm. The per capita water resources of the watershed amount to merely 392 m³, about 18% of the national level. Despite the shortage of and conflicts over water resources, the planning framework, such as the comprehensive scheme for social and economic development of the watershed and the comprehensive water resources scheme, is relatively complete. Hence, this article selects the initial water rights allocation of the Daling watershed as a case study and chooses fairness, efficiency, and sustainability as the three basic principles.
Coupling optimization algorithm
Chaos, on which the coupling optimization algorithm is based, is a seemingly random phenomenon with an inner regularity in deterministic systems; because of this combination of randomness and regularity, a chaotic trajectory can traverse every state within a certain range according to its own rule. Thus, by using chaotic variables, a globally optimal solution can be obtained with high efficiency. The basic idea of the coupling optimization algorithm is to map chaotic variables linearly onto the value intervals of the optimized variables and then search iteratively for the optimal solution. Generally, the mathematical model of nonlinear programming can be expressed as max f(X), subject to X ∈ S, where S is called the feasible set and its elements are feasible points.
The logistic map is one of the most classical models in chaos research, and the chaotic variables it generates can be used to search for the optimal solution. The iteration is x(k+1) = μ·x(k)·(1 − x(k)), where μ is the control parameter selected in the range (0, 4]; when μ = 4, the system is in a fully chaotic state.
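A minimal sketch of the chaotic-variable generator implied by the logistic map above; the initial value and sequence length are arbitrary choices for illustration.

```python
def logistic_sequence(x0=0.3456, mu=4.0, n=1000):
    """Chaotic variables from the logistic map x(k+1) = mu * x(k) * (1 - x(k)).
    With mu = 4 the map is fully chaotic on (0, 1); x0 should avoid 0, 0.25,
    0.5, 0.75 and 1, which collapse onto fixed points."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def to_interval(x, lo, hi):
    """Linearly map a chaotic variable in (0, 1) onto the interval [lo, hi]."""
    return lo + (hi - lo) * x
```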
Coupling Model
The coupling model projects high-dimensional data into a low-dimensional space, finds projection directions whose structure or characteristics reflect the original high-dimensional data, and thereby serves the purpose of studying the high-dimensional data. Constructing the projection index function and optimizing it are the keys to applying the projection pursuit method successfully. This problem is complex, and traditional projection pursuit requires a considerable amount of computation, which to some extent limits thorough research on, and wide use of, the method. The concrete steps of the model are as follows.
Step 1: Normalize the sample data. Let the sample index set be {x(i, j) | i = 1, 2, ..., n; j = 1, 2, ..., p}, where n and p are the sample number (sample size) and the index number, respectively. To eliminate the dimensions of the index values and unify their ranges, the following extreme (min-max) normalization is adopted before the data are used in the PPDC model:
for an index where bigger is better: x*(i, j) = (x(i, j) − min x(j)) / (max x(j) − min x(j));
for an index where smaller is better: x*(i, j) = (max x(j) − x(i, j)) / (max x(j) − min x(j)),
where max x(j) and min x(j) are the maximum and minimum values of the j-th indicator, respectively.
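The extreme normalization of Step 1 can be written compactly as follows; the benefit/cost labelling of the indexes is supplied by the user and is not part of the formula itself.

```python
import numpy as np

def normalize(x, bigger_is_better):
    """Extreme (min-max) normalization of the raw index matrix x (n x p).
    `bigger_is_better` is a length-p boolean sequence marking benefit indexes;
    cost indexes ('the smaller the better') are reversed."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    xn = (x - lo) / (hi - lo)
    cost = ~np.asarray(bigger_is_better, dtype=bool)
    xn[:, cost] = 1.0 - xn[:, cost]
    return xn
```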
Step 2: Construct the coupling model index function Q(a). Essentially, projection is used to observe data characteristics from all angles; the main purpose is to find hidden structure in high-dimensional data sets by searching through all of their low-dimensional projections (Cui 1997). For a projection direction a = (a(1), a(2), ..., a(p)), the projection value of sample i is Z(i) = Σⱼ a(j)·x*(i, j), and the projection index is defined as Q(a) = S_Z · D_Z, where S_Z is the standard deviation of the projection values Z(i) and D_Z is their local density, D_Z = Σᵢ Σₖ (R − r(i, k)) · u(R − r(i, k)). Here r(i, k) = |Z(i) − Z(k)| is the distance between projection values, R is the window radius of the local density (normally set to 0.1), and u(t) is the unit step function, which equals 1 when t ≥ 0 and 0 otherwise.
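A sketch of the projection index of Step 2 under the formulation given above. Taking the window radius as 0.1 times the standard deviation of the projection values is one common choice in the projection pursuit literature and is an assumption here, since the text only states 0.1.

```python
import numpy as np

def projection_index(a, x, r_frac=0.1):
    """Q(a) = S_Z * D_Z for the normalized data x (n x p) and direction a."""
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)                  # enforce the unit-norm constraint
    z = x @ a                                  # projection values Z(i)
    s_z = z.std(ddof=1)                        # standard deviation S_Z
    r = np.abs(z[:, None] - z[None, :])        # pairwise distances r(i, k)
    R = r_frac * s_z                           # window radius of the local density (assumed)
    d_z = np.sum((R - r) * (r <= R))           # local density D_Z with unit step function
    return s_z * d_z
```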
Step 3: Optimize the coupling index function.
When the sample set of indicators is given, Q(a) depends only on the projection direction a. The best projection direction is the one that exposes the characteristics of the high-dimensional data structure as fully as possible, and it is obtained by maximizing the projection index function: max Q(a), subject to the constraint Σⱼ a(j)² = 1. This is a complex nonlinear optimization problem in the variables {a(j), j = 1, 2, ..., p}, and it can be solved by the coupling optimization algorithm.
Steps 4-6: Initialize the parameters of the coupling (chaotic) search, map the chaotic variables onto candidate projection directions, evaluate the projection index, and repeat the search until the required precision is satisfied; the resulting vector is taken as the optimal projection direction a.
Step 7: Determine the proportion of allocated water. Substituting the optimal projection direction a obtained above into the projection formula gives the projection value Z(i) of every sample; normalizing these values yields the proportion of allocated water for each sample, which, multiplied by the total amount of water, gives the final allocation.
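Putting Steps 3-7 together, the following sketch performs a purely coarse chaotic search for the optimal projection direction and converts the resulting projection values into allocation proportions. It reuses logistic_sequence and projection_index from the sketches above, omits the fine-search refinement of the full coupling optimization algorithm, and uses an arbitrary iteration count, so it should be read as an outline rather than the authors' implementation.

```python
import numpy as np

def chaos_search(x, n_iter=5000, x0=0.2357):
    """Coarse chaotic search for the direction maximizing Q(a); a simplified
    stand-in for the coupling optimization algorithm (no fine-search stage)."""
    n, p = x.shape
    chaos = logistic_sequence(x0=x0, mu=4.0, n=n_iter * p)
    best_a, best_q = None, -np.inf
    for k in range(n_iter):
        a = np.array(chaos[k * p:(k + 1) * p])   # p chaotic variables -> candidate direction
        a = a / np.linalg.norm(a)
        q = projection_index(a, x)
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

def allocate(x, a, total_water):
    """Step 7: projection values -> allocation proportions -> allocated water."""
    z = x @ (np.asarray(a) / np.linalg.norm(a))
    return total_water * z / z.sum()
```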
Results and analysis
In addition, the result obtained through the coupling optimization algorithm in this study is an optimal solution. This is because the algorithm, owing to its ergodicity and the mapping of the chaotic sequences onto the value intervals of the optimized variables, carries out iterative optimization over the whole search space; as long as the chaotic sequences are long enough, the globally optimal solution can be obtained through a coarse search followed by a fine search. By contrast, the solution of a genetic algorithm is uncertain because many random numbers are used during the calculation, and it is easily trapped in a local optimum or may even diverge. Tables 2 and 3 show the relationship between optimization iterations and projection index values for the coupling optimization algorithm and the genetic algorithm.
As shown in these tables, for the given numbers of optimization iterations the coupling optimization algorithm obtains a better projection index value, converges faster, and yields more stable results than the genetic algorithm, which demonstrates the effectiveness of the model developed in this study.
Conclusion
With the rapid economic development of our country, the demand for water resources is increasing every day. The shortage of water resources has become an important factor restricting sustainable economic development and causes water-use conflicts between the upstream and downstream parts of a watershed or between different areas, so it is important to allocate water rights fairly. This study proposed a new coupled chaotic-optimization projection pursuit model that takes advantage of projection pursuit to map high-dimensional data into a one-dimensional space and uses the coupling optimization algorithm to find the optimal projection direction. The model determines weights objectively and fairly, overcomes the requirement of traditional methods that the objective functions and constraints be continuous and differentiable, and avoids being trapped in local optima during the optimization process. The example analysis shows that the initial water rights allocation obtained with the model is reasonable and effective, thereby enriching and developing the theory and methods of initial water rights allocation. However, the window radius of the local density in the projection pursuit model is determined by an empirical formula and lacks a theoretical basis, and the optimal solution obtained by the coupling optimization algorithm depends strongly on the length of the chaotic sequences and the number of iterations, which need to be improved in the future. | 2,711.8 | 2016-12-01T00:00:00.000 | [
"Engineering"
] |
Hatching delays in great tits and blue tits in response to an extreme cold spell: a long-term study
Variation in ambient temperature affects various life stages of organisms. It has been suggested that climate change not only implies higher global temperatures but also more unpredictable weather and more frequent extreme weather events. Temperature has a major influence on the optimal laying-incubation-hatching dates of insectivorous passerines, because it poses energetic constraints and affects the timing of food abundance. We have been studying breeding characteristics of great tits Parus major and blue tits Cyanistes caeruleus in two areas, an urban parkland and a deciduous forest, around the city of Łódź since 2002. During the egg-laying period in 2017, both tit species at both study areas faced an unusual cold spell as reflected by a sudden decrease in the mean ambient temperature to ca. 2–3 °C for about 5 days, which caused mean hatching delays of up to 6 days. Since flexibility of behavior plays a major role in adjusting to unpredictable weather conditions, examining its limits may be an important goal for future research.
Introduction
Variation in ambient temperature affects various life stages of organisms (Stearns 1992; Mainwaring and Hartley 2016; Rodríguez et al. 2016; Bleu et al. 2017; Vaugoyeau et al. 2017). It has a large influence on the optimal laying-incubation-hatching dates of insectivorous passerines, because it affects the timing of food abundance (Perrins 1991; Hinks et al. 2015). A combination of warm weather and mild rainfall in spring provides good conditions for the development of plants and rich arthropod communities, while low temperature slows these processes down. Clutch initiation date in tits is characterized by wide phenotypic plasticity and depends largely on the temperatures directly before the laying of the first egg. The moment at which a single tit female initiates a clutch may differ by more than 3 weeks between breeding seasons depending on temperatures (Glądalski et al. 2016a; Wesołowski et al. 2016) but also on habitat type (Blondel et al. 1993; Massa et al. 2011). When temperatures are appropriate, females start producing eggs (in tits, one per day); then, if there is a sudden temperature drop, they may delay laying the next eggs or delay the start of incubation. It is rather difficult to pause incubation for more than a few hours without losing the clutch (Lee and Lima 2017). Therefore, delays in hatching usually occur when females face a sudden cold spell (García-Navas and Sanz 2011; Kluen et al. 2011; Tomás 2015) and may be considered beneficial when they allow better synchronization between the food demands of nestlings and the peak of caterpillar availability (Monrós et al. 1998; Cresswell and McCleery 2003). Females may also accelerate hatching, by starting incubation before producing their last eggs, when conditions are improving. Additionally, laying gaps and hatching delays may be interpreted as a consequence of food shortage or increased costs of thermoregulation during the egg-laying period (Nilsson and Svensson 1993a, b; Cucco et al. 2017). Little is known about the flexibility of the hatching delay (Naef-Daenzer et al. 2004; Kluen et al. 2011), and since phenotypic plasticity and flexibility of behavior play a major role in adjusting to unpredictable spring weather, examining their limits may be an important goal for research (Tomás 2015).
There is a need to study variation in weather characteristics before and during the breeding period in birds in order to understand the ecological implications of climate change and more frequent extreme weather events (Charmantier et al. 2008; Goodenough et al. 2011; Pipoly et al. 2013; Donnelly and Yu 2017; Marrot et al. 2017). Extreme weather events are seen as weather conditions that cause the biological response to be in the 5% of most extreme values of the response variable (Altwegg et al. 2017; van de Pol et al. 2017). As a result, the number of studies on various phenology traits has recently increased (Gaughan et al. 2017; Sheridan and Allen 2017). It was also suggested that climate change not only implies higher temperatures and global changes in precipitation, but also more frequent extreme weather events, like cold spells in spring or warm spells during winter (Otto 2015; Buckley and Huey 2016; Bailey and van de Pol 2016; Ummenhofer and Meehl 2017). Some authors even suggest that extreme weather events may have stronger effects on wildlife populations and habitats than changes in averages (Bateman et al. 2015; Martinuzzi et al. 2016). In addition to variation in local weather conditions, the occurrence of extreme weather events also affects breeding birds (Jenouvrier 2013; Mainwaring et al. 2017). A cold snap during the breeding season may have large consequences for breeding birds (Glądalski et al. 2016a; Indykiewicz 2015; Tobolka et al. 2015). On the other hand, it was suggested that recent extreme weather events can be treated as natural experiments that may elucidate the mechanisms by which birds adjust their phenology to fluctuating environments (Both and Visser 2005; Jentsch et al. 2007; Glądalski et al. 2016a; Altwegg et al. 2017). Fletcher et al. (2013) and Whitehouse et al. (2013) conclude that there is a need to collect long-term phenology monitoring data in order to fully understand the impacts of climate change on different species. Bauer et al. (2010) note in addition that most papers analyzing these trends do not use data from central Europe, and there is a need to fill this gap.
In 2017, a large temperature drop during breeding was noticed in many parts of Europe and caused hatching delays in many tit populations in Belgium, England, France, Germany, Hungary, the Netherlands, Sweden, and elsewhere (Massemin S., personal communication; Matthysen E., personal communication; Nilsson J.-Å., personal communication; Santema P., personal communication; Seress G., personal communication; Szulkin M., personal communication; Visser M., personal communication; information gathered during the 8th International Hole-Nesting Birds Conference, Trondheim, Norway, October 30-November 2, 2017). In 2017, a great temperature drop occurred shortly after the initiation of tit breeding at our study areas. Such temperature drops may indeed be seen as natural experiments. The aim of this paper is to show the effects of an extreme temperature drop during breeding on hatching delays in the great tit Parus major and the blue tit Cyanistes caeruleus at an urban parkland and a deciduous forest in central Poland. We suggest that during colder weather, ecological interactions, including predator-prey interactions, change (smaller amounts of prey are available), which may lead to changes in breeding strategies, since the eggs of small songbirds are built from the current income of resources. Additionally, female parents may require more food for body maintenance than for eggs, which may lead to days with no eggs produced. Therefore, we predict that hatching delay should depend on ambient temperatures during breeding and that very low temperatures should increase hatching delay.
Materials and methods
This study was carried out in 2002-2017 as part of a long-term research project concerning the breeding biology of secondary hole-nesting birds occupying nestboxes near Łódź, central Poland (51°47′ N, 19°28′ E) (Glądalski et al. 2017; Wawrzyniak et al. 2015). The two study sites are located in two structurally and floristically contrasting habitats 10 km apart, an urban parkland (51°45′ N, 19°24′ E) and a deciduous forest (51°50′ N, 19°29′ E). The urban parkland area (80 ha) consists of the zoological garden (16 ha) and the botanical garden (64 ha). This area is one of the biggest recreation and entertainment areas in Łódź (Glądalski et al. 2016b). The vegetation of the parkland consists of a diverse mix of tree species, including exotic species (Marciniak et al. 2007). The forest site covers about 130 ha in the central part of a mature mixed deciduous forest (Łagiewniki forest, 1250 ha in total), bordering the NE suburbs of Łódź. Large parts of the forest derive directly from the ancient woodland typical of this region of central Europe. Oaks (Quercus robur and Q. petraea) are the predominant tree species in the forest.
Both study areas were supplied with standard wooden nestboxes (Lambrechts et al. 2010). About 200 nestboxes were set in the parkland and about 300 nestboxes were set in the forest. All the nestboxes were placed on trees (usually on oaks) at a height of about 3 m. In both study areas, distances between neighboring nestboxes were about 50 m. At the start of the breeding season, the nestbox study areas were visited every day to record nestbox occupancy, laying date, clutch size, and hatching day. In normal conditions, the female lays one egg per day in tits (Perrins 1996). In situations when we found older hatchlings, we estimated hatching day using our photographic key for age determination of nestling tits. In the case of great tits, only first clutches were analyzed-clutches that started no more than 30 days from the first clutch in a studied population during the breeding season (van Noordwijk et al. 1995). A total of 1517 (890 in the parkland and 627 in the forest) first clutches of the great tit and a total of 835 (348 in the parkland and 487 in the forest) first clutches of the blue tit were studied.
The main period of the temperature drop during the mid-laying-early-incubating time of tits in Łódź in April 2017 lasted for about 5 days and was characterized by a mean ambient temperature of ca. 2-3 °C, with no snow cover (Fig. 1). We define a cold spell as a sudden drop in ambient temperature for a relatively short period of time, and extreme weather events as weather conditions that cause the biological response to be in the 5% of most extreme values of the biological response variable (Altwegg et al. 2017; van de Pol et al. 2017). The biological response variable was hatching delay, which occurred in its most extreme form in 2017 in both study areas and for both tit species (Figs. 2, 3, and 4). The local temperatures (average annual temperature) for Łódź were obtained from the TuTiempo.net climate database for Łódź (http://www.tutiempo.net/en/Climate/LODZ/124650.htm and https://en.tutiempo.net/climate/ws-121055.html). Following García-Navas and Sanz (2011), we calculated the expected hatching date as first egg date + clutch size + 12 (incubation in these species normally lasts 13 days, and the female usually starts to incubate 1 day before completing the clutch; García-Navas and Sanz 2011). The difference between this date and the observed hatching date was taken as the hatching delay (negative values mean hatching occurred before expected, and positive values mean a delay). In all calculations, the mean laying dates were expressed as days from 1 March. Following Perrins and McCleery (1989), we calculated mid-laying-early-incubating warmth sums (mid-laying temperatures are crucial for hatching delays; García-Navas and Sanz 2011; Cresswell and McCleery 2003) as the sum of the mean daily temperatures for the 7 days starting on the 4th day after the first egg date (first egg date + 4), to characterize thermal conditions during egg laying.
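The two derived quantities defined above translate directly into code; the sketch below assumes that laying dates are already expressed as days from 1 March and that daily mean temperatures are available indexed the same way.

```python
def expected_hatch_day(first_egg_day, clutch_size):
    """Expected hatching date (days from 1 March): first egg date + clutch size + 12."""
    return first_egg_day + clutch_size + 12

def hatching_delay(first_egg_day, clutch_size, observed_hatch_day):
    """Positive values mean hatching later than expected, negative values earlier."""
    return observed_hatch_day - expected_hatch_day(first_egg_day, clutch_size)

def warmth_sum(daily_mean_temp, first_egg_day):
    """Mid-laying-early-incubating warmth sum: mean daily temperatures summed over
    the 7 days starting on the 4th day after the first egg date (first egg date + 4).
    `daily_mean_temp` is assumed to map day-of-season (days from 1 March) to deg C."""
    start = first_egg_day + 4
    return sum(daily_mean_temp[d] for d in range(start, start + 7))
```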
We computed general linear models to examine effects of year and site factors (factorial design ANOVA) on hatching delays, assuming a Gaussian error structure. Separate models for blue tits and great tits were fit. Because interactions of the main factors were significant, the full models were presented. To check if the relationship between hatching delays and thermal conditions (mid-laying-early-incubating warmth sums) was linear or non-linear, we calculated polynomial regressions with cubic and quadratic terms. We used t tests for the cubic and quadratic terms to delete non-significant terms. We also present adjusted values of r 2 to evaluate the fit of the regressions. STATISTICA 12 (StatSoft Inc 2014) was applied to perform all computing and to produce charts.
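The polynomial regressions of yearly mean hatching delay on warmth sums were fitted in STATISTICA; an equivalent fit in Python with statsmodels might look as follows, with illustrative variable names and the same term-deletion logic applied manually.

```python
import numpy as np
import statsmodels.api as sm

def fit_polynomial(warmth_sums, mean_delays, degree=3):
    """OLS fit of yearly mean hatching delay on warmth sums with polynomial terms.
    Cubic/quadratic terms are dropped manually when their t tests are
    non-significant, as described in the text."""
    X = np.column_stack([np.asarray(warmth_sums) ** d for d in range(1, degree + 1)])
    model = sm.OLS(np.asarray(mean_delays), sm.add_constant(X)).fit()
    return model   # inspect model.pvalues, model.params, model.rsquared_adj

# e.g., refit with degree=2 if the cubic term is non-significant, then degree=1.
```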
Results
Mid-laying-early-incubating warmth sums were extremely low in 2017 in comparison with the values for the preceding 15 years and, what is crucial, this large temperature drop happened at the time of egg laying (Fig. 2). The yearly mean hatching delay was negatively correlated with the warmth sums over the study years at both study areas for both tit species: great tits in the urban parkland and in the forest, and blue tits in the urban parkland and in the forest (Table 1, Figs. 3 and 4). In all cases, the relationship was non-linear and it suggests that the hatching delays for low temperatures are disproportionately larger than for average conditions.
In both tit species, the largest hatching delays occurred in 2017: mean delays for great tits of 5.27 ± 4.4 SD days in the urban parkland and 4.27 ± 2.24 SD days in the forest, and for blue tits of 3.47 ± 2.10 SD days in the urban parkland and 6.12 ± 5.91 SD days in the forest (Figs. 5 and 6). In the great tit, hatching delay was affected by a significant interaction between study area and year (Table 2). The interaction results from the fact that in most years there is no interhabitat difference in the hatching delay, whereas there is a significant difference in the extreme year 2017 (Fig. 5). Although the hatching delay is large in both 2016 and 2017, only in 2017 is it so clear (Fig. 5). The difference is likely to result from a small difference in the breeding phenology of tits between the park area and the forest area. In the blue tit, hatching delay was also affected by a significant interaction between study area and year (Table 2). The interaction in blue tits results from the difference in hatching delay between habitats being exceptionally large in 2017, but in the reverse direction compared with great tits (Figs. 5 and 6). This reverse direction may result from a between-species difference in phenology. In both great tits and blue tits, the hatching delays that occurred in 2017 were exceptionally long, with the delays in both tit species in 2016 and in blue tits in 2005 also being substantially long (Table 3).
Fig. 2 Mid-laying-early-incubating warmth sums, as the sum of the mean daily temperatures for the 7 days starting on the 4th day after the first egg date, in great tits and blue tits in the forest study area and in the urban parkland study area (2002-2017).
... not exclude spells of exceptionally unfavorable weather occasionally. In the present study, hatching delays of great tits and blue tits were highly correlated with temperatures during the mid-laying-early-incubating period. This study shows that the large temperature drop during the laying period in 2017 caused extreme hatching delays in both tit species at both our study areas. In all cases, the relationship was non-linear, suggesting that the hatching delays at low temperatures are disproportionately larger than under average conditions. It is difficult to tell whether this immense flexibility of the hatching delay is a unique feature of great tits and blue tits. Several studies have analyzed the potential of tits for adjusting the interval between laying and hatching date, and, excluding Cresswell and McCleery (2003), all of them analyze only 1-3 breeding seasons (Monrós et al. 1998; Naef-Daenzer et al. 2004; García-Navas and Sanz 2011; Kluen et al. 2011). Cresswell and McCleery (2003) suggested that birds increase their fitness by synchronizing their production of offspring with a peak of food abundance (in the case of tits, caterpillars are the optimal food for nestlings). This synchronization may be accomplished by varying the moment of clutch initiation (and this is very flexible in tits), but temperature characteristics during the egg production phase may delay or accelerate the caterpillar peak. Another way of synchronization may be downsizing of the clutch, laying gaps, or delaying/accelerating the onset of incubation (Tomás 2015).
Tomás (2015) even suggests that the hatching date should be analyzed as a more appropriate phenological trait than the laying date, because of those mechanisms that allow a female to synchronize her production of offspring with a peak of food abundance. Laying gaps may also be caused by food shortage (low temperature inhibits the activity of insects (Mellanby 1939; Bale 2002) and may reduce prey accessibility for birds) or by increased costs of thermoregulation during the egg-laying period (Nilsson and Svensson 1993a, b). But the "strategy" and the "constraint" hypotheses are not mutually exclusive, and probably both energetic limitations and behavioral decisions contributed to the observed hatching delays (Naef-Daenzer et al. 2004). Monrós et al. (1998) conclude that some delays in the hatching date could be beneficial for parents and offspring, since they seemed to allow a better adjustment to changes in environmental conditions. The difference in hatching delay between the study areas and years (and thus also the interaction) in both tit species may be caused by a difference in phenology between great tits and blue tits in combination with a difference in phenology between the study habitats (blue tits tend to initiate clutches on average 1.5 days earlier than great tits in the parkland area and 2.5 days earlier in the forest area, unpublished data). Urban environments are usually associated with earlier clutches in tits (Bańbura and Bańbura 2012; Seress and Liker 2015; Marini et al. 2017). The taxonomic composition of the tree flora in the parkland results in earlier leafing, so larvae on poplars and birches (the parkland) appear earlier than on oaks (the forest) (Wawrzyniak et al. 2015). Leafing phenology directly influences the occurrence of caterpillars (the most important component of the diet of chicks, sometimes supplemented by spiders and other insects (Blondel et al. 1993)). Those shifts in timing may affect the hatching delay because, when a temperature drop occurs, birds in one study area may be a few days later in clutch completion than in the other, or vice versa. This paper is based on the occurrence of a cold spell as a natural experiment in which we could not control environmental conditions or properly assign initiated clutches to treatments, because it is obviously not possible to manipulate the ambient temperature experimentally in the field. We see two ways in which experiments capable of identifying more precisely at least some proximate mechanisms underlying hatching delays could be performed. One way would be to use indoor aviaries to manipulate thermal conditions during the egg-laying stage of breeding. The other way, available in the field in the case of hole-nesting birds, would be experimental cooling of nestboxes at the time of egg laying. As far as we know, such experimental cooling has so far been used only to study effects of thermal conditions on incubation (e.g., Alvarez and Barba 2014).
Mean hatching delay is presented as average ± 95% confidence intervals (2002-2017).
Table 3 Summary of Tukey's post hoc analysis of the year × habitat interaction effect on hatching delay (2002-2017) in great tits (lower left) and blue tits (upper right) in the urban parkland and in the forest (− p > 0.05; * p < 0.05; ** p < 0.01; *** p < 0.001). | 4,550 | 2018-04-17T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
On the Reuse of SLS Polyamide 12 Powder
In the Selective Laser Sintering (SLS) technique, the great majority of the powder involved is not included in the final printed parts, being used only as a support material. However, the quality of this powder is negatively affected during the process, since it is subjected to high temperatures (close to its melting temperature) for a long time, i.e., the printing cycle time, especially in the neighborhood of the printed part contour. This type of powder is relatively expensive, and large amounts of used powder result after each printing cycle. The present paper focuses on the reuse of Polyamide 12 (PA 12) powder. To this end, the same PA 12 powder was used in consecutive printing cycles. After each cycle, the remaining non-used powder was milled and filtered before subsequent use. Properties of the powder and of the corresponding prints were characterized in each cycle, using differential scanning calorimetry (DSC), scanning electron microscopy (SEM), computed tomography (CT), and tensile tests. It was concluded that subjecting the same powder to multiple SLS printing cycles affects the properties of the printed parts essentially regarding their morphology (voids content), the reproducibility of their mechanical properties, and their aesthetic aspect. However, post-processing treatment of the powder made it possible to maintain the mechanical performance of the prints during the first six printing cycles without the need to add virgin powder.
Introduction
Additive Manufacturing (AM) technologies have been gaining prominence in the industrial world and are used to create functional parts, reaching geometries impossible to achieve by conventional manufacturing technologies [1,2]. More commonly known as 3D printing or Rapid Prototyping (RP), the term additive manufacturing refers to any technology based on the production of parts layer-by-layer from a computer-aided design (CAD) 3D model previously created [3][4][5]. Dating back to the 1980s, the first commercially available AM technology consisted of the use of a UV source to harden a UV-sensitive polymer (photopolymer) to build the desired structure, this being the first Stereolithography (SLA) equipment in this field. Even though it was innovative, this technology was also very expensive and not reliable [6][7][8]. One variant belonging to this particular family of technologies is the Selective Laser Sintering (SLS) process. SLS is becoming popular and is no longer limited to the high-tech industry. It is being increasingly chosen to manufacture functional parts, with great dimensional precision, using a wide variety of materials for different application areas (such as automotive, aerospace, and medical, among many others) and a relatively low cycle time [3,9].
Beyond the geometrical freedom, the use of SLS, and of AM techniques in general, brings other advantages, such as no need for molding tools and the removal of constraints normally associated with conventional subtractive processes [10]. One way to mitigate the degradation of the reused powder is to add flow agents, reducing its tendency to agglomerate and improving its flowability. Nevertheless, these agents may negatively affect the mechanical integrity of printed parts, a disadvantage that must be considered [14].
On the other hand, low-temperature SLS can also minimize the powder degradation. As the name suggests, this technique is based on the maintenance of the powder at lower temperatures, reducing the degradation of the material. Although this might sound simple, it increases the chance of curling, especially if the printed parts have thin walls. Binding the parts to a rigid base might reduce this tendency, but it will demand more post-printed processes to finish the parts [16].
Finally, the most common method used to reduce the loss of material properties is to add virgin powder to the reused one; refresh rates of around 50% are normally used in this technique. Studies differ on this subject: some authors argue that the powder that remains in the build space should not be reused at all, while others state that this material can be reused provided that regular check-up tests are carried out to assess its quality. A common test used to assess the integrity of the material is the Melt Flow Rate (MFR)/Melt Flow Index (MFI), which in the case of PA 12 must be maintained above 18 g/10 min [12,16].
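As a minimal illustration of such a check, the sketch below flags a powder batch as reusable only while its measured melt flow rate stays above the 18 g/10 min limit quoted for PA 12. The helper function and the per-cycle readings are hypothetical; only the threshold comes from the text.

```python
# Minimal sketch of an MFR-based reuse check for PA 12 powder.
# The 18 g/10 min limit is the threshold quoted in the text; the
# per-cycle readings below are made-up illustrative values.

MFR_LIMIT_G_PER_10MIN = 18.0

def powder_is_reusable(mfr_g_per_10min: float, limit: float = MFR_LIMIT_G_PER_10MIN) -> bool:
    """Return True while the melt flow rate stays above the reuse limit."""
    return mfr_g_per_10min > limit

if __name__ == "__main__":
    batch_readings = {0: 24.5, 2: 22.1, 4: 20.3, 6: 18.4, 8: 17.2}  # cycle -> MFR, g/10 min
    for cycle, mfr in batch_readings.items():
        status = "reusable" if powder_is_reusable(mfr) else "refresh or discard"
        print(f"cycle {cycle}: MFR = {mfr:.1f} g/10 min -> {status}")
```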
The degradation, or aging, of polymers can be promoted by internal or external causes [19,20]. The external causes are considered to be physical and chemical interactions of the material with its surroundings, as for example, weathering, UV radiation, humidity, and temperature, the latter being the most notable degrading agent of this type. The internal causes are thermodynamically unstable states present in the material that, if activated, usually by thermal stress, lead to a measurable change in properties. Examples of internal causes are unstable crystallization states, residual stresses, and incomplete polycondensation [19][20][21].
In the case of PA 12, the main degradation mechanism is the cross-linking of polymer chains caused by oxidation, which is said to be dominant at the initial stages of the degradation process. The main consequences of the degradation are a decrease in flowability of the powder, due to an increase in molecular weight, and poor mechanical properties, especially strain at break and maximum stress [12,22]. The increase of molecular weight can also occur by post-condensation reactions. This leads to a shift of the crystallization temperature to lower values, due to the lower chain mobility. Although this broadens the processing window, higher molecular weight results in higher viscosity, which hinders the spreading of the powder and makes the SLS printing process harder [23]. Chain scission may also occur, but at later stages of the degradation process, causing a decrease in molecular weight and counterbalancing the increase of molecular weight caused by cross-linking [23]. Studies showed that the increase of molecular weight affects the quality of manufactured parts, both in the correct setup of the build surface and in the reproducibility of the printed parts [23].
In the literature, there is a proposed methodology that aims to predict the property loss of polymers when subject to reprocessing through primary recycling, in processes such as injection molding. In this case, the virgin polymer is processed (i.e., a part is injected), reground, mixed with virgin polymer, and reprocessed. This methodology, developed by Bernardo et al. [24], was able to predict the material properties decay and relies on two simplifying assumptions: (i) the regrind process does not affect the properties of the polymer (it is only intended to enable feeding the material to the injection molding machine), and (ii) the fraction of the virgin polymer added in each cycle is constant. Very recently, Lopes et al. [25] successfully applied this methodology to SLS PA 12 printed parts, predicting the decay of the Young's modulus, tensile stress at yield, and tensile stress at break. The authors used 5 cycles of reprocessing and performed a fitting for the linear law of mixtures, through a custom-made software. This software adjusted 3 different models and predicted the decay of properties for the first 20 reprocessing cycles. The main conclusions of this work were that the mechanical properties and the density of the printed samples were compromised when only reprocessed material was used, even at the earlier stages of reprocessing and that the loss of properties can be minimized if a virgin powder fraction is incorporated in the mixture.
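A rough sketch of the kind of mixing-rule prediction described above is given below: the property of the powder bed after each cycle is modeled as a linear law of mixtures between a constant virgin fraction and reused material whose property decays by a fixed factor per cycle. The decay factor, refresh fraction, and starting modulus are illustrative assumptions, not values fitted by Bernardo et al. [24] or Lopes et al. [25].

```python
# Hedged sketch of a linear law-of-mixtures decay model for repeated
# reprocessing. All numeric inputs are illustrative assumptions.

def predict_property(p_virgin: float, virgin_fraction: float,
                     decay_per_cycle: float, n_cycles: int) -> list[float]:
    """Property of the powder bed after each cycle, assuming the reused
    fraction loses a fixed share of its property every cycle and a constant
    virgin fraction is blended back in."""
    history = [p_virgin]
    p_bed = p_virgin
    for _ in range(n_cycles):
        p_reused = p_bed * (1.0 - decay_per_cycle)                       # aged material
        p_bed = virgin_fraction * p_virgin + (1.0 - virgin_fraction) * p_reused
        history.append(p_bed)
    return history

if __name__ == "__main__":
    # e.g. Young's modulus in MPa (illustrative), 30% virgin refresh, 5% decay per cycle
    for cycle, e_mod in enumerate(predict_property(1700.0, 0.30, 0.05, 10)):
        print(f"cycle {cycle:2d}: predicted modulus ≈ {e_mod:6.1f} MPa")
```

Note that the present study deliberately departs from this scheme: since the powder is milled, filtered, and homogenized after each cycle and no virgin fraction is added, the constant-refresh assumption underlying the model above does not hold, as the authors point out in the next paragraph.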
The aging of PA 12 powder exposed to multiple processing cycles and subjected to a post-processing treatment after each printing cycle, namely, filtering, milling, and homogenization, is the aim of the current study. Therefore, the use of the methodology employed in [25] is not valid since after each printing cycle the lower quality unsintered powder is improved and/or rejected.
Another factor that may influence the behavior of the printed parts is the orientation in which they are printed. Tomanik et al. [26] studied the differences in mechanical behavior of PA 12 printed samples with different building orientations relative to the build platform. They concluded that the samples printed at 0° (i.e., horizontally, parallel to the build platform) present the most ductile behavior, while those printed at 45° showed the most brittle one. However, none of the printing orientations yielded a Young's modulus close to that indicated by the supplier. This will also be investigated in the present work.
This work was carried out in the frame of a partnership between a university and an industrial company that provides SLS printing services.
Polyamide 12 Powder
PA12-L 1600 powder (from Prodways Technologies, Paris, France), whose main properties are listed in Table 1, was used to produce all the samples.
Processing Equipment and Methodology
The SLS equipment used for the production of samples was a ProMaker P1000 (Prodways Technologies), schematically shown in Figure 1, and the main specifications are listed in Table 2.
The parameters used in all the printing cycles were fixed and chosen based on previous knowledge of the company. The heating and cooling stages had a 90 min duration, while the printing cycle was fixed at around 9 h, including warm-up and cooling stages. The SLS processing parameters are given in Table 3.
The unsintered powder milling (through sieves and ceramic balls) and filtering were performed in the ProTool BS01 equipment (Prodways Technology), employed to destroy agglomerates and to separate the bigger remaining ones and/or pre-sintered material (the last filter has a 200 µm mesh), for later disposal. The resulting powder was homogenized in an intensive mixer, having three rotational blades, for 2 min.
Lastly, printed parts were cleaned by sand jet in a Guyson Formula 1400 equipment (Prodways Technologies).
The methodology followed in this work is illustrated in Figure 2, where the sequence of the main stages is depicted. In each printing cycle, other parts (with commercial interest for the industrial partner) were added to the build space, in order to reach the fixed cycle duration of around 9 h. It is important to stress that no virgin material was added after each cycle. Therefore, to ensure that there was enough amount of powder for all the cycles performed (twelve, in total), the equipment was loaded to its maximum capacity on the first processing cycle, even though there was only a small portion of powder used for printing. Furthermore, the test samples were always printed in the same coordinates of the building space.
Characterization of the Powder and Printed Parts
Differential Scanning Calorimetry (DSC) analyses were performed in a Netzsch DSC 200 F3 to determine the melting temperature and the melting enthalpy of the powder along the subsequent processing cycles. The samples were heated from 20 °C to 220 °C at a rate of 10 °C/min and held at 220 °C for 1 min. Then, the samples were cooled down to 20 °C at a rate of 10 °C/min. Nitrogen, purged at 50 mL/min, was used to ensure an inert environment.
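The melting temperature and melting enthalpy referred to here are obtained by integrating the endothermic peak of the heating trace. The sketch below illustrates that reduction on a synthetic, Gaussian-shaped endotherm with a straight baseline; the curve shape, baseline choice, and amplitude are illustrative assumptions, not the instrument's own analysis routine.

```python
# Minimal sketch: melting temperature and enthalpy from a DSC heating trace,
# integrating the endotherm above a straight baseline. The Gaussian-shaped
# trace is synthetic; a real analysis would load the exported instrument data.
import numpy as np

heating_rate = 10.0 / 60.0                                   # K/s (10 °C/min, as in the text)
T = np.linspace(150.0, 200.0, 2001)                          # sample temperature, °C
heat_flow = 2.3 * np.exp(-0.5 * ((T - 178.0) / 3.0) ** 2)    # W/g, synthetic endotherm
baseline = np.linspace(heat_flow[0], heat_flow[-1], T.size)  # straight baseline

melting_T = T[np.argmax(heat_flow - baseline)]                        # peak temperature, °C
melting_enthalpy = np.trapz(heat_flow - baseline, T) / heating_rate   # (W/g)·K / (K/s) = J/g

print(f"T_m ≈ {melting_T:.1f} °C, ΔH_m ≈ {melting_enthalpy:.0f} J/g")
```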
Scanning Electron Microscopy (SEM) using FEG-SEM FEI Nova 200 field emission gun scanning electron microscope, from FEI Company, Hillsboro, OR, USA, was used to characterize the powder morphology. High voltage of 10 kV and secondary electrons (SE) mode were selected for the analysis.
For characterization purposes, two distinct sample geometries were produced for each printing cycle: (i) Samples for tensile tests, shown in Figure 3a, according to standard DIN 53504-S3a, were printed in two different orientations (vertical, V, and horizontal, H, relative to the build platform). These samples (7 in each orientation) were printed inside two different cages (printed in the same cycle), marked with V and H to distinguish the vertical and horizontal orientations. Marking the cages instead of the samples intended to avoid any eventual negative effect on the samples' mechanical performance. (ii) Samples for porosity analysis to be performed with computed tomography (CT), printed vertically in the center of the build platform and marked with a C symbol, as illustrated in Figure 3b.
The coordinates of the samples in the build space were kept fixed in all the printing cycles (corresponding to the first layers of the center of the build space). A Shimadzu AG-X tester was used to conduct the tensile tests until break. The deformation was monitored through a video extensometer. The tests were performed at a 10 mm/min speed, at room temperature, and a load cell of 1 kN was used.
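A minimal sketch of how the quantities reported later (Young's modulus, tensile strength, strain at break) can be extracted from a recorded stress–strain curve is given below; the synthetic curve and the 0.05–0.25% strain window used for the elastic fit are assumptions, not the actual test output.

```python
# Sketch: extracting Young's modulus and tensile strength from a stress-strain
# record. The synthetic curve stands in for the exported test data; the
# elastic-fit window (0.05-0.25 % strain) is an assumption.
import numpy as np

strain = np.linspace(0.0, 0.20, 400)                  # dimensionless
stress = 1500.0 * strain / (1.0 + 25.0 * strain)      # MPa, synthetic softening curve

elastic = (strain >= 0.0005) & (strain <= 0.0025)     # assumed elastic-fit window
youngs_modulus = np.polyfit(strain[elastic], stress[elastic], 1)[0]   # slope, MPa
tensile_strength = stress.max()                                       # MPa
strain_at_break = strain[-1]                                          # last recorded point

print(f"E ≈ {youngs_modulus:.0f} MPa, σ_max ≈ {tensile_strength:.1f} MPa, "
      f"ε_break ≈ {strain_at_break:.2%}")
```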
The printed samples illustrated in Figure 3b were analyzed in a computed tomography equipment, XT H 225 S, from Nikon Metrology, using a tungsten filament. The 360° scans were performed with beam energy of 180 kV, power beam current of 11 μA, 20 W of power, and exposure of 4 fps. Two frames were taken for each projection. All images were analyzed in the Visual Graphics Studio software. For each sample, three longitudinal cuts (see Figure 4a) performed at 1.0, 2.5, and 4.0 mm from the sample wall marked with a C symbol, were made, and four areas were considered in each section (see Figure 4b). Thus, a total of twelve values of porosity were determined for each sample. Additionally, three transversal cuts (see Figure 4c) were made to check the shape of the samples. The porosity results are presented as the average values from three independent samples. Statistical analysis was evaluated by one-way ANOVA followed by Tukey's test for multiple comparisons, considering p < 0.05 as significant.
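The statistical treatment described above (one-way ANOVA followed by Tukey's test on the porosity values) can be sketched as follows; the porosity numbers are invented placeholders, and scipy/statsmodels are assumed to be available.

```python
# Sketch of the statistical treatment: one-way ANOVA across printing cycles
# followed by Tukey's multiple-comparison test, on made-up porosity values
# (three samples per cycle, % voids). Real data would come from the CT analysis.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

porosity = {            # illustrative mean porosity per sample, % voids
    "cycle_1": [1.1, 1.3, 1.2],
    "cycle_4": [1.8, 2.0, 1.9],
    "cycle_8": [3.4, 3.8, 3.6],
}

f_stat, p_value = f_oneway(*porosity.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

values = np.concatenate(list(porosity.values()))
groups = np.repeat(list(porosity.keys()), [len(v) for v in porosity.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```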
During the successive printing cycles, a visual inspection of the parts was carried out to qualitatively evaluate their aesthetic quality. This is a relevant issue since SLS is often used to produce parts for applications where good surface finishing is required.
Powder
DSC characterization was only performed on the powder, the raw material of the printing stage, since it is the eventual degradation of this system that will influence the printed parts characteristics. As can be seen in Figure 5, the DSC curves of the powder tested along the printing cycles maintain a similar trend. This means that there are no significant changes occurring in the material. This was also concluded in a recent work [25], but the detailed results were not shown. In the current work, the evolution of the melting temperature and melting enthalpy is shown in Figure 6a,b, respectively. The melting temperature is almost maintained (there is a 1.4 °C difference between the limiting values measured), showing an overall slight tendency to decrease along the twelve successive printing cycles. However, in some intermediate cycles the value of this temperature is higher than that corresponding to the virgin powder (named as 0). This is in line with the recommendation made by most powder suppliers to maintain, or even slightly increase, the temperature of the SLS process when reused powder is employed. The melting enthalpy shows the same trend, but more markedly, presenting a variation from 106 J/g to 88 J/g (corresponding to a variation in the degree of crystallization from around 43% to 36%). These results are in line with those from [25] and suggest that chain-scission degradation mechanism may be the prevalent one in the first printing cycles. Heat promotes intra-chain bonds stresses, due to higher amplitude intramolecular vibrations, which may cause their rupture (chain-scission) originating fragments and radical end-groups [27,28]. The scatter of the results obtained is also worth noticing. This may be originated by the fact that the powder is still not homogeneous, despite the post-processing treatment it is subjected to. In fact, the powder particles experience different thermal histories during the printing stage, depending on their distance to the laser beam (printed parts contour).
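As a rough cross-check of the crystallinity figures quoted above, the degree of crystallization can be estimated as the ratio of the measured melting enthalpy to that of a hypothetically 100% crystalline polymer; the ~245 J/g reference value used below is a commonly cited literature figure for PA 12, not a value taken from this paper, and it reproduces the ~43% and ~36% levels mentioned in the text.

```python
# Back-of-the-envelope check of the crystallinity figures quoted above.
# The reference enthalpy of fully crystalline PA 12 (~245 J/g) is a commonly
# cited literature value, not taken from this paper.
DELTA_H_100 = 245.0   # J/g, assumed enthalpy of 100% crystalline PA 12

def crystallinity(delta_h_measured: float) -> float:
    return delta_h_measured / DELTA_H_100

for label, dh in [("virgin powder", 106.0), ("after cycle 12", 88.0)]:
    print(f"{label}: ΔH_m = {dh:.0f} J/g -> X_c ≈ {crystallinity(dh):.0%}")
# prints roughly 43% and 36%, matching the values reported in the text
```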
Figure 7 shows SE/SEM images of the virgin powder and of the powder after printing cycle 12. It can be observed that the reused powder has a rougher surface and seems to have a higher porosity, which might also induce some porosity in the produced parts.
Dadbakhsh et al. [18] also observed an increment in porosity in aged PA 12 powder. These authors attributed this behavior to the evaporation of remaining alcohol and absorbed moisture and/or due to the successive expansion/shrinkage cycles during SLS process. However, this difference in morphology of the powder was not observed by Lopes et al. [25], but these authors only studied five printing cycles.
Printed Parts
The porosity evolution of printed parts along the printing cycles, together with the transversal cuts made in each sample, is shown in Figure 8. Through the analysis of the 12 zones obtained from the 3 different cross-section cuts made in each sample, it is noticeable that the average porosity level of the samples increases with the number of reprocessing cycles, with statistically significant differences between almost all the reprocessing cycles. This is particularly evident in reprocessing cycle 8. This result might be related to the increased porosity observed in the powders from the first to the last cycle (see Figure 7). On the other hand, a decrease in porosity level from printing cycles 8 to 12 seems to occur. However, and as shown in Figure 8e, this is just apparent, since there is a clear deformation (depression) in the samples of printing cycle 12. This deformation might result from the collapse of the sample walls due to extreme local porosity. If this is the case, porosity is, therefore, only artificially diminished.
As the number of printing cycles increased, visual changes were also observed in the printed parts. Here, this is illustrated through the commercial parts that were produced in parallel with the studied test samples, where this effect was more evident. In Figure 9, a part printed at the beginning of the study (with 100% virgin powder) and another printed with 100% reused powder with 11 reprocessing cycles are illustrated. As can be seen, the part printed with virgin powder presents a white and smooth surface, while the part printed after 12 printing cycles shows a yellowish color and a much rougher surface. This phenomenon started to be evident after printing cycle 6.
Thus, considering the porosity level, color, and surface finishing of the printed parts, it was decided to limit the evaluation of the mechanical properties until printing cycle 6.
As referred before, the tensile samples were printed in two different orientations, horizontal (H) and vertical (V). Concerning the horizontal orientation, samples from all the cycles were analyzed. However, for the vertical orientation only some of the cycles were considered, since these samples had much lower mechanical performance (namely, yield and tensile strength) than those produced in the horizontal orientation, as can be seen in Figure 10. The relevant mechanical properties of all the samples are shown in Figure 11 and Table 4. As described in the literature, vertically printed samples presented a brittle behavior, barely reaching the plastic domain. This behavior is probably due to weak interlayer bonding, a drawback of the additive (layer-by-layer) manufacturing techniques [25,29]. In the V samples, these weak planes are perpendicular to the force applied in the tensile tests, promoting their early rupture.
The scatter of the measured properties also tends to increase along the printing cycles. In fact, and as already referred, the quality of the powder is expected to deteriorate more in the vicinity of the printed parts contour, where the laser promotes the increase in temperature needed for sintering. Therefore, despite the powder post-processing treatment, some small agglomerates (with dimensions lower than 200 µm, in this case), resulting from environmental humidity absorption or from partial sintering, exist, inducing some inhomogeneity. Additionally, the thermal degradation of the powder particles can also differ due to differences in their thermal history and in their molecular weight.
Thus, the powder post-processing treatment after each printing cycle seems to be an effective way to minimize the negative impact of reusing the powder, valid until the sixth cycle. This constitutes an alternative to the addition of 20% or 30% of virgin powder after each printing cycle, as recommended by the powder supplier or by Lopes et al. [25], respectively. It is worth mentioning that those authors, who did not treat the powder, obtained a 20% decay in the mechanical properties after only 2 cycles when using 100% reused powder.
It is worth mentioning that the Young's moduli obtained in the present work were always lower than that claimed by the powder supplier, as in [26].
Conclusions
SLS technology is a very versatile production process and makes it possible to employ reused powder. However, a loss in the reproducibility of properties was observed when 100% reused PA 12 powder was used. This inconsistency is not only a consequence of the different thermal histories of the (re)used powder particles, but is also inherent to the process, as can be seen from the scatter of the properties of the samples printed in the first cycle.
The degradation of the powder with successive use is apparent. The DSC analysis showed a clear decrease in the melting enthalpy and a slight decrease in the melting temperature, meaning that the degree of crystallization is gradually decreasing throughout the printing cycles. This might indicate the occurrence of chain scission, since shorter chains have higher mobility, which makes the crystallization process more difficult.
Moreover, the degradation of the PA 12 powder was observed by SEM: after 12 printing cycles, the surface of the powder is rougher and seems to have an increased porosity when compared with the virgin powder.
CT analysis results are in line with the powder morphology obtained by SEM, i.e., an increase in printed parts porosity is observed along the printing cycles. This phenomenon is extremely negative for functional parts since it may be responsible for their premature failure and/or for their lack of dimensional accuracy. These negative features are mainly evident after the 6th reprocessing cycle. The visualization of printed parts with virgin and reused powder shows that smooth and clear surfaces are nearly impossible to obtain with powder subjected to more than 6 printing cycles.
The mechanical characterization showed that the mechanical properties loss is not significant until the 6th printing cycle, but that the corresponding standard deviation increases. Moreover, this characterization showed that the parts printed vertically have worse mechanical performance than those printed horizontally. Therefore, vertical printing orientation should be avoided whenever mechanical functionality is required.
Lastly, probably the most significant conclusion of this work is that the post-processing treatment of the (re)used powder, namely, through milling, filtering, and homogenizing, minimizes its degradation by eliminating the most problematic powder (agglomerates) and homogenizing the remaining powder. This powder post-processing treatment constitutes, therefore, an alternative to the approach used by Lopes et al. [25] that consisted in the addition of virgin powder after each printing cycle.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,869.4 | 2022-08-01T00:00:00.000 | [
"Materials Science"
] |
Avian Influenza Virus (H5N1) Replication in Feathers of Domestic Waterfowl
We examined feathers of domestic ducks and geese inoculated with 2 different avian influenza virus (H5N1) genotypes. Together with virus isolation from the skin, the detection of viral antigens and ultrastructural observation of the virions in the feather epidermis raise the possibility of feathers as sources of infection.
Since 1997, an epidemic of avian influenza (AI) virus subtype H5N1 has spread in Asia, causing fatal infections in poultry, wild birds, mammals, and humans (1). Wild waterfowl, including ducks and geese, are natural hosts of AI viruses of all 16 hemagglutinin subtypes in nature (2,3). Generally, AI virus is transmitted by the fecal-oral route without causing clinical signs (2–4). Although current AI virus (H5N1) strains have mild to severe pathogenicity in waterfowl (5–7), these birds can still be carriers of the virus (7). Even asymptomatic domestic ducks can shed the virus from the cloaca and oral cavity (7,8) and contribute to viral maintenance and spread (9,10). Therefore, focusing on the epidemiologic role of domestic waterfowl in AI (H5N1) outbreaks is important.
We previously reported that the Japanese AI virus (H5N1) isolated in 2004 causes necrosis of the feather epidermis with viral antigens in domestic ducks, a finding that demonstrates the possibility of viral release from feathers (11). In addition, these affected feathers can cause infection in orally inoculated domestic ducks (12). Except for our previous studies, to our knowledge this feather lesion has not been reported in AI (H5N1)-infected waterfowl. However, if the feather lesion is common to other waterfowl species and AI (H5N1) strains, affected feathers might be involved in the spread of the virus. We describe the pathologic, virologic, and ultrastructural findings of the feathers in domestic waterfowl infected with AI (H5N1).
The Study
Two species of domestic waterfowl, ducks (n = 4) and geese (n = 4), were used. Domestic ducks (Anas platyrhynchos var. domestica), called Aigamo in Japanese, are a crossbreed of wild mallard and domestic ducks; they are free-ranging ducks in water-soaked rice paddy fields and are used for weed control and meat production. Domestic geese (Anser cygnoides var. domestica) are reared for food production on farms. We selected geese because wild geese (A. indicus) accounted for a large proportion of the deaths in AI (H5N1) outbreaks at Qinghai Lake in the People's Republic of China in 2005 (5). These 2 species of birds were obtained from the farm at 1 day of age and raised with commercial food in an isolated facility. Birds were moved into negative-pressure isolators of Biosafety Level 3-approved laboratories (National Institute of Animal Health, Tsukuba, Japan) for acclimation 1 week before inoculation.
Two different AI virus (H5N1) genotypes were used. A/chicken/Yamaguchi/7/2004 (Ck/Yama/7/04) is classified as genotype V (13). A/chicken/Miyazaki/K11/2007 (Ck/Miya/K11/07) belongs to genotype Z and H5 clade 2 subclade 2 (M. Mase, unpub. data), which is now circulating from China to Japan, Europe, and Africa (5,14). The stored virus was propagated for 36-48 hours in the allantoic cavity of 10-day-old embryonated chicken eggs at 37°C. The infectious allantoic fluid was harvested and stored at -80°C until use. All experimental procedures were approved by the Ethics Committee of the National Institute of Animal Health in Japan.
For each species, two 4-week-old birds were inoculated intranasally with 0.1 mL of the inoculum containing 10^8 50% egg infectious doses (EID50) per mL of each AI virus (H5N1) genotype. Each inoculated group was kept in a separate isolator. Inoculated birds were euthanized with an overdose injection of sodium pentobarbital (i.v.) on days 3 and 5 postinoculation.
For histopathology, the skin, including numerous feathers, was removed from the head, neck, back, shoulder, abdomen, thigh, and tail. Samples were fixed in 10% neutral-buffered formalin, embedded in paraffin, sectioned at 4 μm, and stained with hematoxylin and eosin. Immunohistochemistry was performed to detect the viral antigen with a Histofine Simple Stain PO (M) kit (Nichirei Inc., Tokyo, Japan). A mouse monoclonal antibody specific for the influenza A matrix protein (diluted 1:500; clone GA2B, AbD Serotec, Kidlington, UK) was used as the primary antibody (11). For the virus isolation, clean dry skin was collected from the neck and stored at -80°C (11). The viral titer of the samples was determined with 10-day-old embryonated chicken eggs and expressed as EID50/g as previously described (13). A viral titer <10^2 EID50/g was considered negative for virus isolation. For the electron microscopic examination, fresh contour feathers were fixed in 3% glutaraldehyde in 0.1 M phosphate buffer, postfixed in 1% osmium tetroxide, and embedded in epoxy resin. Ultrathin sections were stained with uranyl acetate and lead citrate. Inoculated birds did not exhibit apparent clinical signs, except for unilateral corneal opacity in a goose inoculated with Ck/Yama/7/04 on day 5 postinoculation. Results of histopathologic and virologic examinations are summarized in the Table. Histologically, viral antigens were occasionally detected in the feather epidermal cells with or without epidermal necrosis (Figure 1, panels A and B). Some affected feathers were accompanied by heterophilic and lymphocytic infiltration in the inner feather pulp. Other tissues in the skin were negative for influenza virus by immunohistochemical analysis, with the exception of a very rare positive reaction in stromal cells in the feather pulp. Virus isolation from the skin was positive in 1 duck and 1 goose inoculated with Ck/Yama/7/04; the viral titers were 10^3.5 and 10^4.5 EID50/g, respectively. All ducks and geese inoculated with Ck/Miya/K11/07 tested positive for the isolation; the viral titers were 10^2.5–10^4.5 EID50/g. Ultrastructurally, round, enveloped virions 80 to 100 nm in diameter were observed between feather epidermal cells in both domestic ducks and geese (Figure 2, panels A and B). Spherical virions budding from the cell surface were occasionally observed (Figure 2, panel C).
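The EID50/g titers quoted above come from endpoint titration in embryonated eggs; the paper cites reference (13) for the procedure. As a hedged illustration only, the sketch below computes such an endpoint with the classical Reed–Muench interpolation, which is a common way to do it but is not confirmed as the method actually used here; the egg counts are invented and ten-fold dilution steps are assumed.

```python
# Hedged sketch of a 50% egg infectious dose (EID50) endpoint titration using
# the classical Reed-Muench interpolation. Illustrative data only.

def reed_muench_neg_log10_endpoint(results: dict[int, tuple[int, int]]) -> float:
    """results maps log10(dilution) -> (infected eggs, inoculated eggs)."""
    dils = sorted(results, reverse=True)                         # most concentrated first
    infected = [results[d][0] for d in dils]
    uninfected = [results[d][1] - results[d][0] for d in dils]
    cum_inf = [sum(infected[i:]) for i in range(len(dils))]          # this row + more dilute rows
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(len(dils))]  # this row + more concentrated rows
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(dils) - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            prop_dist = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return -(dils[i] - prop_dist)                        # -log10 of the endpoint dilution
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

if __name__ == "__main__":
    titration = {-1: (5, 5), -2: (5, 5), -3: (4, 5), -4: (2, 5), -5: (0, 5)}  # invented counts
    print(f"titer ≈ 10^{reed_muench_neg_log10_endpoint(titration):.1f} EID50 per inoculated volume")
```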
Conclusions
We found that 2 different AI virus (H5N1) genotypes that were isolated in 2004 and 2007 can replicate in the feather epidermal cells of domestic ducks and geese. To our knowledge, this is the first report of in vivo ultrastructural observation of AI (H5N1) replication in waterfowl.
The important finding is that the histologic feather finding and virus isolation from the skin were found in inoculated birds that did not exhibit apparent clinical signs. Although 1 goose inoculated with Ck/Yama/7/04 was negative for all examinations, this might have resulted from individual differences in susceptibility or the limited area of the skin used for the examination. Nevertheless, our data indicate that recent AI (H5N1) strains are likely to replicate in feather epidermal cells of domestic ducks and geese. All birds inoculated with Ck/Miya/K11/07, which belongs to the current lineage spreading to Europe and Africa, tested positive for virus isolation, compared with the results with Ck/Yama/7/04. Feathers can easily drop off, blow away, or be reduced to dust, suggesting that affected feathers of waterfowl infected with influenza (H5N1) virus can be potential sources of infection, along with their feces and respiratory secretions (7,8). At this time, it is unclear to what extent affected feathers contribute to the epidemiology of AI (H5N1) field outbreaks. However, more attention needs to be paid to persons who handle domestic waterfowl possibly infected with AI virus (H5N1).
"Biology",
"Environmental Science",
"Medicine"
] |
Nanoporous Au Behavior in Methyl Orange Solutions
Nanoporous (NP) gold, the most extensively studied and efficient NP metal, possesses exceptional properties that make it highly attractive for advanced technological applications. Notably, its remarkable catalytic properties in various significant reactions hold enormous potential. However, the exploration of its catalytic activity in the degradation of water pollutants remains limited. Nevertheless, previous research has reported the catalytic activity of NP Au in the degradation of methyl orange (MO), a toxic azo dye commonly found in water. This study aims to investigate the behavior of nanoporous gold in MO solutions using UV-Vis absorption spectroscopy and high-performance liquid chromatography. The NP Au was prepared by chemical removal of silver atoms of an AuAg precursor alloy prepared by ball milling. Immersion tests were conducted on both pellets and powders of NP Au, followed by examination of the residual solutions. Additionally, X-ray photoelectron spectroscopy and electrochemical impedance measurements were employed to analyze NP Au after the tests. The findings reveal that the predominant and faster process involves the partially reversible adsorption of MO onto NP Au, while the catalytic degradation of the dye plays a secondary and slower role in this system.
Introduction
Nanoporous metals (NPMs) belong to a class of materials with unique properties related to their peculiar nanostructured morphology. They present a spontaneous disordered three-dimensional open-cell structure (bicontinuous network) composed of ligaments and massive nodes. The resulting pores, as well as the ligaments, possess characteristic dimensions ranging from 2 to 100 nm, which collectively contribute to a structure with a significant specific surface area [1][2][3][4][5][6].
Different methods are used to fabricate NP metals [7][8][9][10][11]. However, primarily, they are obtained through the process of chemical or electrochemical de-alloying, fundamentally relying on the preferential and selective dissolution in an acidic solution of one or more elements from a parent alloy. During this process, individual atoms or small clusters thereof transition into solution, while a concurrent reconstruction of solid-liquid interfacial zones occurs, governed by surface mass transport phenomena.
Within the residual alloy, a connected porosity progressively develops, giving rise to a complex network of relatively thin ligaments interconnected by nodes where a greater mass concentration is observed. The volumetric shrinkage resulting from dissolution induces internal stresses that can be mitigated through thermal treatments. These treatments, depending on various factors, such as the characteristic dimensions of the ligaments, the temperature applied during the thermal treatment, and the exposure time to treatment, can lead to a coarsening of the structural parts (nodes and ligaments) [12][13][14].
However, despite its apparent simplicity, de-alloying presents several limitations. For instance, while it works optimally for the production of NP Au, the same cannot be said for the fabrication of non-noble NP metals, significantly limiting its fields of application. Moreover, conducting the process in an aqueous environment precludes the use of metals that are easily oxidizable. Finally, the wide array of variables at play during the process makes it difficult to fully control the experimental phase and, consequently, the outcome, especially at the atomic scale.
Consequently, the pursuit of novel synthetic pathways for NP metals stands as a critical advancement. The meticulous examination of the morphology characterizing pores, ligaments, and nodes is a pivotal step towards validating and assessing the adaptability of manufacturing techniques. Achieving control over structural characteristics through sophisticated material manipulation on the nanoscale is a key goal for unlocking the full potential of NP metal foams. Furthermore, a deeper understanding of the complex interplay between structure and properties paves the way for the intelligent design of NP metals [15,16].
Despite numerous challenges, the unique structural morphology of nanoporous metals marks a considerable innovation, endowing these materials with distinct physical and chemical properties. This advancement not only fuels intriguing inquiries into the fundamental relationships between structure and function but also showcases their substantial potential across diverse scientific and technological domains. The array of applications extends from structural materials and catalysis to sensing and energy-oriented electrochemistry, illustrating the versatility and impact of nanoporous metals in advancing current and future innovations.
Among NPMs, nanoporous gold (NP Au) is by far the most investigated material. As for the majority of NPMs, NP Au is also usually obtained by chemical or electrochemical dealloying, which leads to the etching of the less noble metal (silver in most cases). Indeed, in the scientific literature, it is possible to find papers which report the use of NP Au as a material for (a) a new generation of high-sensitivity optical and electrochemical sensors to detect biological molecules or pollutants; (b) medical treatment and diagnostics, in controlled drug delivery and hyperthermia treatment for cancer; (c) surface-enhanced Raman scattering, improving the capability to find chemical substances and biomarkers; (d) the photothermal and photocatalysis field, using the capacity of NP Au to convert light into heat for activating chemical reactions; (e) catalysis in different reactions, including air purification and the production of specific fine chemicals [10,17,18].
In particular, regarding the potential applications mentioned above, several papers have reported the use of NP Au as a catalyst [19][20][21]. Indeed, the interest in gold-based systems is related to the nontoxicity of this metal and to its capability to favor selective reactions in mild conditions. NP Au has been investigated as a catalyst in CO oxidation at low temperatures and pressures [22][23][24][25], as well as in the gas-phase oxidative coupling of methanol at relatively low temperature [20,26]. In the last decades, great attention has been paid to nanosized Au catalysts, supported on CeO2 [27], TiO2 [28], carbon [29] or polymers [30], for reactions of selective oxidation of alcohols [27,28] or hydrocarbons [29]. However, the monolithic form of NP Au makes it possible to overcome the need for a support, which is instead necessary in the case of nanoparticles to enable an easier recovery of the catalyst from the reaction environment [31].
A few years ago, Hakamada et al. proposed the use of an NP Au monolith as a catalyst for methyl orange (MO) degradation in solution, at room temperature and in the dark [32]. MO is a molecule which belongs to the class of azo dyes (R-N=N-R'); this family of compounds is widely employed in the textile industry because of its intense color and its stability [33]. Furthermore, MO has also been used as a probe in photocatalytic studies devoted to the degradation of this class of organic dyes, which are pollutants and, therefore, damage the natural environment, especially water. Because of the very large use of synthetic dyes in industry, their wastewater pollution is considered one of the most important problems for the environment [34]. For this reason, many efforts are being made to find methods to remove these pollutants. In addition to the investigations concerning the adsorption technique [35][36][37][38][39], many studies have focused on dye degradation. In particular, in the case of heterogeneous semiconductor photocatalysts, holes or electrons react with water or oxygen to form several oxygen-containing species, such as hydroperoxyl, superoxide and hydroxyl radicals, which, thanks to their high reactivity as oxidizing agents, cause the degradation of the dyes [40][41][42][43][44].
In the present work, it is shown that in the presence of monolithic NP Au, only a partial degradation of MO occurs, whereas most of the dye is adsorbed onto the NP Au surface.
Our study was carried out by means of ultraviolet-visible (UV-Vis), energy dispersive X-ray (EDS), and X-ray photoelectron (XPS) spectroscopies, high-performance liquid chromatography (HPLC), scanning electron microscopy (SEM) and impedance measurements.
Results and Discussion
Au in NP form was fabricated by the chemical etching of a cold-pressed pellet of an AuAg alloy using nitric acid at 70% as the corrosive agent for 24 h. The starting alloy was obtained by ball milling gold and silver powders in the atomic ratio of 3:7.
The morphology and the composition of the obtained NP Au were investigated by SEM and EDS measurements, respectively. Figure 1 shows the SEM image of the NP Au surface and an EDS spectrum of the examined area. The NP Au highlights a bicontinuous structure of interconnected ligaments and pores with a mean ligament diameter around 15 ± 5 nm and a mean pore diameter around 14 ± 5 nm. Moreover, EDS measurements reveal a residual Ag atomic content around 12%.
In order to investigate the behavior of NP Au with MO, a monolith of this material was immersed into an aqueous solution of the dye with a concentration of 2.0 × 10^−5 M. The trend over time of this system was monitored by UV-Vis spectroscopy, measuring the solution absorption spectrum at different time intervals. Figure 2 illustrates the discoloration rate of the MO solution after the immersion of NP Au, calculated by considering the ratio between the intensity of the MO absorption peak around 463 nm after a certain time of immersion and that of the pristine MO solution. The peak intensity decreases with time until 8 h, after which it slightly increases before reaching a constant value around 29% of the initial peak intensity.
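The discoloration metric used here is simply the ratio between the MO absorbance around 463 nm after a given immersion time and that of the pristine solution. A minimal sketch of that calculation follows; the Gaussian-shaped synthetic spectra, the absorbance values, and the 455–470 nm band window are illustrative assumptions, not measured data.

```python
# Minimal sketch of the discoloration metric: I(t)/I0 at the ~463 nm MO band.
# The spectra below are synthetic placeholders for the recorded UV-Vis data.
import numpy as np

wavelengths = np.arange(350, 601)                      # nm

def mo_like_spectrum(peak_abs: float) -> np.ndarray:
    """Synthetic MO-like absorption band centred at 463 nm."""
    return peak_abs * np.exp(-0.5 * ((wavelengths - 463.0) / 40.0) ** 2)

spectra = {0: mo_like_spectrum(0.85), 4: mo_like_spectrum(0.55),
           8: mo_like_spectrum(0.24), 30: mo_like_spectrum(0.25)}   # hours -> spectrum

band = (wavelengths >= 455) & (wavelengths <= 470)
i0 = spectra[0][band].max()
for hours, spectrum in spectra.items():
    print(f"t = {hours:2d} h: I/I0 ≈ {spectrum[band].max() / i0:.0%}")
```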
The discoloration tests were repeated multiple times, yielding a mean final I/I0 ratio of approximately 25% (±5%). The notable standard deviation is likely attributable to the varying surface areas of the NP Au pellets utilized in the tests.
HPLC measurements were conducted on the MO solution after 30 h of NP Au pellet immersion (see Figure 3). The chromatogram of a reference solution of pure MO shows a peak at ca. 5.9 min of retention time, whereas that of the residual solution of the above-described discoloration test presents the characteristic peak of MO along with a peak at 3.84 min and a smaller one around 3.1 min. However, the latter can also be found in the initial MO solution and in MilliQ H2O, with comparable intensities.
HPLC measurements were conducted on the MO solution after 30 h of NP Au pellet immersion (see Figure 3).The chromatogram of a reference solution of pure MO shows a peak at ca. 5.9 min of retention time, whereas that of the residual solution of the abovedescribed discoloration test presents the characteristic peak of MO along with a peak at 3.84 min and a smaller one around 3.1 min.However, the latter can also be found in the initial MO solution and in MilliQ H2O, with comparable intensities.These findings suggest that NP Au can cause MO However, the latter can also be found in the initial MO solution and in MilliQ H 2 O, with comparable intensities.These findings suggest that NP Au can cause MO degradation, as reported by Hakamada et al. [32], found in the initial MO solution and in MilliQ H 2 O, although only partially, in agreement with the UV-Vis data.Moreover, the calculated MO concentration by HPLC is systematically lower than that calculated from UV-Vis measure-ments.This feature can be ascribed to the superposition of the absorption peak of MO and that of the degradation product (hereinafter referred as DP), which could lead to a concentration overestimation with spectroscopic measurements.Since the peak around 463 nm is characteristic of a N=N bond, it can be supposed that this bond was not broken in the degradation product, in agreement with the results found for the MO photodegradation in the presence of TiO 2 [44].Another interesting piece of evidence is that only this peak appears in the HPLC chromatograms of different residual MO solutions, suggesting that the degradation process does not continue after the formation of this compound.
To evaluate the effect of a reduction in the surface area of gold, an NP Au sample was prepared by treating only one face of a AuAg alloy pellet with HNO3 for 5 s. Since the short time of dealloying induces the formation of only a thin NP layer, a significantly lower surface area can be expected in comparison with the NP Au obtained with 24 h of leaching on both faces. In Figure 4, the discoloration rate when a pellet of NP Au with a thin NP layer was immersed into a solution of MO is reported. It can be observed that the intensity of the peak at 463 nm slightly decreases at the beginning and then reaches a plateau at around 94% of I/I0, significantly higher compared to the plateau value measured in the immersion of the pellets porous throughout their entire thickness. This fact points to a dominant non-catalytic process; indeed, in the case of a prevalent catalytic degradation, a slower discoloration rate should be expected (due to a lower active surface area accessible to the reactant molecules), but not such a dramatic change in the final I/I0 ratio.
Figure S4 shows instead the discoloration rate of the MO solution when an NP Au monolith was repeatedly immersed in fresh solutions at the same concentration. Here, it can be seen that the plateau level (the final I/I0 ratio) increases after each immersion, suggesting the establishment of an equilibrium of absorption. Therefore, to have further adsorption, a higher MO equilibrium concentration is needed when more NP Au adsorption sites are occupied.
To investigate in depth the nature of this process, the desorption of MO from NP Au pellets was attempted. When the pellet was immersed in water, no observable MO desorption was noticed, while when the pellet was immersed either in NaOH or in HCl, the solution turned colored. The obtained solutions were analyzed by UV-Vis (see Figure 5) and HPLC measurements, which highlight the presence of MO along with other undefined compounds. In the UV-Vis spectra in Figure 5, the characteristic peak of MO is red-shifted when desorbed in HCl and blue-shifted when desorbed in NaOH. This is a typical behavior of MO, by which it is used as a pH indicator; by varying the pH of the solution, the ratio between protonated (red) and deprotonated (yellow) forms of the molecule changes, producing a change in the position and shape of the resulting absorption peak [45]. After the desorption in both solutions, it was observed that the NP Au pellet did not cause MO discoloration anymore, also after having neutralized the surface with HCl and NaOH for the samples treated with NaOH and HCl, respectively. This finding suggests that an irreversible change occurred to the NP Au surface following its contact with the MO solution.
To clarify the adsorption-desorption mechanism, an adsorption isotherm was constructed by exposing the same NP Au pellet to gradually higher concentrations of the MO solution, from 4 × 10−6 M to 1.4 × 10−2 M. The adsorption isotherm is shown in Figures 6 and S5 and can be well fitted by the Langmuir adsorption model [46]. The total amount of MO that was lost from the solutions is around 1.6 mg with a pellet of 230 mg. Moreover, after the complete saturation of the sample, desorption was repeatedly induced by exposing the sample to MO solutions with decreasing concentrations and then to pure water. It was estimated that 30% of the MO adsorbed was released. The HPLC measurements shown in Figure 7 clearly indicate that, along with MO, the degradation product found in the residual solutions of the adsorption tests is also present after desorption. In this case, the amount of MO adsorbed on NP Au was so high that it was possible to remove around 50% of the disappeared amount by immersing the pellet multiple times in pure water and then another 5% was extracted during immersions in the NaOH solution. This fact confirms the hypothesis that this compound is also partially adsorbed by NP Au.
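For readers unfamiliar with the Langmuir fit mentioned above, the sketch below shows one way such an isotherm could be fitted with a standard least-squares routine. The data points and starting guesses are placeholders, not the measured values reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k):
    """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
    return q_max * k * c_eq / (1.0 + k * c_eq)

# Hypothetical data: equilibrium MO concentration (M) and adsorbed amount
# per gram of NP Au (arbitrary units); values are illustrative only.
c_eq = np.array([4e-6, 2e-5, 1e-4, 1e-3, 5e-3, 1.4e-2])
q_ads = np.array([0.06, 0.27, 1.2, 4.7, 6.4, 6.8])

(q_max, k), _ = curve_fit(langmuir, c_eq, q_ads, p0=[7.0, 1e3])
print(f"q_max = {q_max:.2f}, K = {k:.3g} M^-1")
```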
It is still difficult to determine the percentage of MO adsorbed and that of MO degraded because the amount of degradation product seems to be much lower than the amount of MO that disappeared and was not finally desorbed. It is supposed that MO and the degradation product are not completely desorbed from the NP Au surface. With the aim of verifying if the interaction of NP Au with MO causes irreversible changes to the surface and, therefore, whether regeneration of the pellet is possible or not, measurements of electrochemical impedance spectroscopy (EIS, see Scheme S1) and X-ray photoelectron spectroscopy (XPS) were performed. NP Au was analyzed in its original pristine state, after the absorption of the dye, and after its desorption. Figure 8 reports the Nyquist plot of the analyzed samples, each one rescaled for its own solution resistance (Rs) value. As one can see, the electrochemical response varies significantly in all three situations, but NP Au after the adsorption differs definitely more than the other two. More specifically, after MO adsorption, the NP Au pellet loses part of its capacitive feature for a more resistive response. Then, after the desorption, the material does not completely regain its original behavior. These observations, although qualitative estimations, can be a hint of non-reversible processes that occur at the surface, such as changes in morphology or some species (MO or some degradation products), which, being hardly desorbed, decrease and modify the active surface of the material.
Some alternatives for the electrochemical characterization of the surface area of NP gold were explored by Rouya et al. [47]. In particular, EIS was used to calculate the double-layer capacitance and, therefore, the estimation of the surface area. Concerning this method, EIS analysis was conducted at the OCP using a 0.1 M solution of HClO4, and the dependence of the imaginary part of the impedance on frequency is reported in Figure S6. In a graph of this kind, a log-log slope of −1 in the medium-to-low frequencies region corresponds to typical purely capacitive behavior. Since our cases differ from ideality (with log-log slopes in the range 0.5-0.75), the system should be better described by a constant phase element. Despite the use of this approximation, the slope in the three samples changes in agreement with our previous considerations. Moreover, the slope changes from −0.75 to −0.55 when the surface is modified with the adsorption of MO, whilst it reaches −0.60 after desorption. As already observed, the incomplete desorption of the dye and/or the related surface modifications can also be qualitatively highlighted by the partial recovery of the pristine capacitive behavior.
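The log-log slope comparison described above can be reproduced with a simple linear fit of log10|Z''| against log10(f) over the medium-to-low frequency window. The sketch below is a minimal illustration with a synthetic constant-phase-element response; the frequency window and data are assumptions, not the authors' measurements.

```python
import numpy as np

def loglog_slope(freq_hz, z_imag, f_min=0.1, f_max=10.0):
    """Fit log10|Z''| vs log10(f) over the medium-to-low frequency window.

    A slope of -1 indicates ideal capacitive behavior; values between
    -1 and 0 are consistent with a constant phase element.
    """
    f = np.asarray(freq_hz)
    z = np.abs(np.asarray(z_imag))
    mask = (f >= f_min) & (f <= f_max)
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(z[mask]), 1)
    return slope

# Synthetic example: a constant phase element with exponent n = 0.75,
# |Z''| ~ f^(-n), so the recovered slope should be close to -0.75.
f = np.logspace(-1, 4, 60)
z_im = 1e3 * f ** -0.75
print(round(loglog_slope(f, z_im), 2))
```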
XPS measurements were also performed and are reported in Figure 9. A comparison between the pristine sample (in the figure, NP Au pristine) and that obtained after the immersion into the MO solution 2 × 10−5 M (NP Au MO 20 µM) does not highlight any important difference. This finding might indicate either not effective adsorption of MO on the Au surface or an extremely low MO content, even lower than the detection limit of this technique. However, the sample NP Au 10 mM (blue line), which was immersed into a more concentrated solution, clearly shows the presence on the Au surface of molecules containing N and S, as indicated by the N 1s peak at (399.5 ± 0.2) eV and by the S 2p3/2 and S 2p1/2 peaks at (167.3 ± 0.2) eV and (168.5 ± 0.2) eV. The observed XPS signals are consistent with the azo- and sulfonate-moieties of MO, in agreement with literature reports [48]. Moreover, the sample DES NP Au MO 10 mM (pale blue line), obtained by immersion in the same solution and which underwent desorption until the washing solution was colorless, presents the residual presence of N 1s (at approx. 399.7 eV) and S 2p (at approx. 167.4-168.6 eV) XPS peaks, which can be ascribed to MO and/or its degradation products adsorbed onto the surface of the NP. It is interesting to note that, despite the presence of signals assigned to MO, the position of the peaks corresponding to the Au 4f electrons (namely, the Au 4f7/2 peak at 83.9 eV and the Au 4f5/2 peak at 87.6 eV) is not affected. The lack of shift of these peaks was put forward by Hakamada et al. as proof of the absence of dye molecules in their NP Au sample after immersion into the MO solution [32].
To further explore the behavior of NP Au, the powder was finally immersed in MO solution, instead of the pellet. In this way, the surface area of NP Au was hugely increased and the percolation time into the pores of the material reduced. As can be seen in Figure 10a, this resulted in a faster discoloration of the MO solution even with a MO/NP Au mass ratio 80-fold higher than that of the test reported in Figure 2. These results show that MO can be nearly completely removed from the solution. However, by prolonging the immersion time, it was observed that the stoppage of the discoloration process was followed by a modest increase in the 463 nm peak intensity. This phenomenon can be clarified by looking at the HPLC measurements in Figure 10b. The MO concentration appears to continuously decrease in the solution while the intensity of the second HPLC peak increases with time. By summing the two peak areas, the same trend was observed through UV-Vis measurements. Therefore, a part of the adsorbed MO is slowly degraded into a second product that is partially desorbed into the solution. This test gives a clear indication of the contribution of adsorption and catalytic phenomena in this system; adsorption is the more relevant and faster phenomenon, which results in the disappearance of 90% of MO in the first 3 h, with a global recollection (i.e., the amount of residual MO after the adsorption and desorption tests) of 87% of the initial MO amount, as can be seen in Table 1. Meanwhile, catalysis plays a secondary role in terms of magnitude and speed, leading to the degradation of less than 37% of MO in 24 h of immersion, an amount that comprises both the degraded and the still adsorbed MO molecules.
NP Au Fabrication
The AuAg precursor alloy was prepared by mechanical alloying of Au and Ag powders in 3:7 atomic proportion.A quantity of 2 g of the mixture was placed inside a hardened steel vial with two hardened steel spheres of 8 g each.The powders were milled in a SPEX 8000M (SPEX SamplePrep, LLC, Metuchen, NJ, USA) ball miller for 16 h.NP Au was prepared by cold-pressing the as-prepared AuAg powder into pellets with a diameter of 13 mm and a mass of 350 mg (see Figure S1 in the Supplementary Materials), followed by chemical corrosion in HNO 3 70% for 24 h.After the dealloying, the pellets were washed 5 times with MilliQ water and then dried overnight under vacuum (Figure S2).
Pellet Tests
NP Au pellets were immersed in 3.3 mL of a 2 × 10 −5 M MO solution (0.063 mg MO/g NP Au) in comparable conditions with experiments reported by Hakamada et al. [32].After immersion, the pellets were washed with ultrapure water, dried, and immersed again in 0.10 M HCl or NaOH solutions for desorption tests.The process was repeated until a negligible amount of MO could be desorbed from the pellets.The experiments were performed both in the dark and under light; however, no differences were observed.
Powder Tests
A quantity of 47 mg of NP Au powder was immersed in 6 mL of 1.20 × 10 −4 M MO solution (5 mg MO/g NP Au).After immersion, the powder was washed with ultrapure water, dried and immersed again in 0.10 M HCl or NaOH solutions for desorption tests.The process was repeated until a negligible amount of MO could be desorbed from the powder.
UV-Vis Measurements
Electronic absorption spectra were recorded with a UV-Vis spectrophotometer (Agilent Technologies) (Cary Series Spectrophotometer) in a quartz cell of 10 mm path length.
High-Performance Liquid Chromatography
Solutions were analyzed by a 1260 Infinity II (Agilent Technologies, Santa Clara, CA, USA) HPLC system, equipped with a Kinetex (5 µm C18 100 Å, 250 mm × 4.6 mm) column (Phenomenex, Torrance, CA, USA) and a single wavelength UV-Vis absorption detector.Analyses were performed at 40 • C under isocratic conditions, with a mobile phase (flow rate 0.8 mL min −1 ) composed of a mixture (24/76 v/v) of acetonitrile and a 10 mM ammonium acetate solution.Absorption at 463 nm was recorded.
Scanning Electron Microscopy/Energy Dispersive Spectroscopy
The precursor and dealloyed materials were investigated by scanning electron microscopy (SEM) using a Zeiss (Zeiss, Oberkochen, Germany) Merlin microscope, equipped with a Schottky electron source, operated at an acceleration voltage of 5 kV and at short working distance (<2 mm).Secondary electrons (SE) were collected to provide fine details on the surface morphology and using an in-lens detector.Energy dispersive X-ray spectroscopy (EDS) measurements were carried out in the SEM exploiting an Oxford silicon drift detector (SDD) (Oxford Instruments, Abingdon, UK) with a detection area of 60 mm 2 and with the microscope working at the same acceleration voltage of 5 kV.Au and Ag standardless quantitative analysis was performed using the Oxford AZTEC 2.1 software.
N 2 Adsorption-Desorption Isotherms
Textural analysis of an NP Au pellet was carried out with an ASAP 2020 system (Micromeritics, Norcross, GA, USA), by determining the nitrogen adsorption-desorption isotherms at −196 • C (Figure S3).Before analysis, the sample was heated overnight under vacuum up to 200 • C (heating rate, 1 • C/min).
Electrochemical Measurements
The NP Au was characterized by means of electrochemical impedance spectroscopy (EIS) using a 0.1 M HClO 4 solution in a conventional three-electrode cell in which a saturated calomel electrode (SCE) was the reference and a platinated titanium net was used as the counter electrode.All the EIS measurements were carried out using an AUTOLAB PGSTAT302N (Metrohm, Herisau, Switzerland) potentiostat/galvanostat equipped with the FRA analyzer and controlled with the NOVA software 2.1.6at the open circuit potential (OCP); the frequency was varied from 63 kHz down to 0.1 Hz with an amplitude of 0.01 Hz.
X-ray Photoelectron Spectroscopy (XPS)
XPS analyses were carried out with a Kratos Axis Ultra DLD (Kratos Analytical, Manchester, UK) spectrometer using a monochromatic Al Kα source operated at 20 mA and 15 kV.Wide-scan analyses were carried out with an analysis area of 300 × 700 µm and a pass energy of 160 eV.High resolution analyses were carried out with the same analysis area and a pass energy of 10 eV over the binding energy regions typical for N 1s, S 2p and Au 4f signals.Spectra were analyzed using CasaXPS software (version 2.3.24).
Conclusions
In this paper, an investigation was reported on the catalytic activity of NP Au in the degradation reactions of MO, a molecule that belongs to the class of azo dyes.Hakamada et al. reported that NP Au was able to catalyze the complete degradation of MO [32].Our experiments instead show that the degradation of the dye occurs only partially and that an important part of the MO is adsorbed onto the large surface of the NP material.Indeed, UV-Vis measurements on the solutions obtained from the tests of desorption show that MO, along with the MO degradation product, is released in a large amount by NP Au.Moreover, these findings are also in agreement with the HPLC measurements, which proved that MO with another molecule are present in the same solutions.Furthermore, EIS measurements suggested that the surface of NP Au was modified by the immersion into the MO solution and that, even after a prolonged desorption, it did not recover the pristine conditions.XPS analysis showed the presence of molecules containing N and S atoms on the surface of NP Au both after the immersion and after the desorption procedures, in agreement with the EIS measurements.These results are strengthened by observations made on NP Au powder in MO solution, in which it was possible to distinguish a faster and more important adsorption process-which led to the disappearance of 90% of MO in the first 3 h-from a slower and minor catalytic process, which, together with the irreversible adsorption process on the metal surface, was responsible for less than 40% of the MO disappearance after 24 h of immersion.
Figure 1. SEM image and EDS spectrum of NP Au after 24 h of dealloying.
Figure 2. MO discoloration rate. The inset shows the UV-Vis spectrum of MO before NP Au immersion, 70 min and 8 h after the immersion.
Figure 4. Trend of relative concentration vs. immersion time for NP Au pellet dealloyed for 5 s in order to obtain a thin NP layer.
Figure 5. Normalized UV-Vis spectra of solutions desorbed from different NP Au pellets in HCl and NaOH 0.1 M.
Figure 7. HPLC chromatograms of residual solutions of adsorption and desorption in different concentrations of MO: after adsorption in 10−3 M solution (orange), desorption in H2O (blue) and desorption in NaOH solution (green).
Figure 8. Nyquist plot of NP Au pristine, NP Au after adsorption and NP Au after desorption (a) and magnification on the high-frequency region (b).
Figure 9. XPS measurements on pristine NP Au (black lines) and three NP Au immersed in MO solutions: 20 µM solution (red lines), 10 mM solution (blue lines) and after complete MO desorption (pale blue lines). The left, middle and right panels report the XPS spectra collected over the energy regions for N 1s, S 2p and Au 4f signals, respectively.
Figure 10. (a) UV-Vis relative intensity of absorption peak at 463 nm at different times after NP Au powder immersion; (b) HPLC absorption intensities of MO (RT = 6.0 min), DP (RT = 3.9 min) and sum of the two intensities (MO + DP).
Figure S2: Schematic representation of the NP fabrication process; Figure S3: N2 gas adsorption-desorption isotherm; Figure S4: Relative concentration of MO over time for the same sample repeatedly immersed in fresh MO solution; Figure S5: Comparison between several adsorption models; Scheme S1: Schematic illustration of adsorption and desorption process and related EIS measurements; Figure S6: Log-log dependence of the imaginary part of the impedance vs. frequency and magnitude and fit of the linear part in the medium-to-low frequencies.
Table 1 .
Residual MO fractions in the solution after immersion of NP Au powders for different time intervals and recovered MO fractions of initial MO, recovered both from adsorption and desorption solutions. | 9,739.8 | 2024-04-24T00:00:00.000 | [
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Pricing Defaulted Italian Mortgages
Our paper forecasts the expected recovery rates of defaulted Italian mortgage loans backed by either residential or commercial real estate. We apply an exponential Ornstein–Uhlenbeck process to model the price dynamics at the provincial and regional level, and two haircut models to estimate the liquidation value. Compared to our findings, rating agencies such as Moody’s, which use geometric Brownian motion to model the price dynamics, paint a rosier picture with higher recovery rates. As a consequence, non-performing mortgage loans held by Italian banks might be overvalued.
Introduction
Forecasting expected recovery rates of defaulted mortgage loans relies on modelling the stochastic price process (typically price per square meter of a specific property type) as well as possibly employing a realistic liquidation model. The recovery rate is the fraction of the value of an asset recovered from liquidating the collateral. For real estate mortgages, the collateral is typically the value of the property. Our aim is to forecast expected recovery rates of defaulted Italian mortgage loans backed by either residential or commercial real estate.
Moody's Investors Service (2004, 2019) valuation model of defaulted Italian mortgages is based on the assumption that prices per square meter follow a geometric Brownian motion, a popular model to describe stock prices. However, Fabozzi et al. (2012) and Perelló et al. (2008) found that an exponential Ornstein-Uhlenbeck (EOU) process is more suitable for modelling the stochastic dynamics of real estate prices. Property price dynamics are important as they capture the loan-to-value ratio, which in turn determines loss given default and thus the recovery rate.1 In this paper, we use a unique data set of real estate prices in all Italian provinces collected by the Agenzia delle Entrate-Osservatorio del Mercato Immobiliare. We calibrate an EOU process and estimate the recovery rate of defaulted mortgages. Recovery rates depend, in addition to the property price dynamics, on the length of legal procedures to liquidate the property after default and on the actual liquidation value. In Italy the length of legal procedures varies considerably between provinces, which explains in part the sluggish recovery of the Italian economy since the 2008 financial crisis.2 Expected recovery rates vary significantly across provinces due to different price dynamics, and these rates become even more variable when including the specific court timings in the analysis. This is especially true for the lower tail of the distribution (i.e., recovery rates significantly worsen in magnitude for less efficient provinces). A portfolio of bad loans produces a very different cash flow depending on the efficiency of the legal courts that manage the foreclosure.
1 See also Gao et al. (2009) and Chaiyapo and Phewchean (2017) on mean reversion and the use of EOU in modelling house prices. Blanco and Gimeno (2012) show the importance of loan-to-value in defaults.
2 For macro-prudential regulation the correct estimation and pricing of risks related to mortgages plays a pivotal role, see, for example, Ye and Bellotti (2019); Ferretti et al. (2019); and Mayer et al. (2009).
Actual liquidation of property in Italy happens at auctions.3 Banks reduce the sale price with every subsequent auction, until the property is sold. We consider a simple model of liquidation value, common in the real estate literature, where a haircut of 20% is assumed; in other words, the proceeds from selling the property are on average equal to 80% of the property's market price (this discount also includes other legal costs). We also use Moody's liquidation model, where the discount in each round of auctions and the number of auctions are taken into account. This approach allows us to determine how sensitive recovery rates are with respect to the assumption on the underlying price dynamics and the role played by the liquidation model.
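To make the two liquidation models concrete, the sketch below contrasts a constant 20% haircut with a per-auction haircut schedule. The per-auction haircut level and the property value are illustrative assumptions, not the figures used by Moody's or in our calibration.

```python
def liquidation_value_constant(market_price, haircut=0.20):
    """Constant-haircut model: proceeds are (1 - haircut) of the market price."""
    return market_price * (1.0 - haircut)

def liquidation_value_auctions(market_price, n_auctions, haircut_per_auction=0.15):
    """Per-auction model: the price is cut by the haircut at each auction round."""
    return market_price * (1.0 - haircut_per_auction) ** n_auctions

price = 150_000.0  # hypothetical appraised market value in EUR
print(liquidation_value_constant(price))                 # 120000.0
print(liquidation_value_auctions(price, n_auctions=3))   # 92118.75
```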
Moody's, which publishes results by region rather than at the more detailed provincial level, reports a smaller range of expected recovery rates than ours. Their recovery rates are consistently biased upward, with larger discrepancies among more volatile regional markets. Compared to our model, Moody's underestimates expected loss given default on those markets, especially for less developed regions. Of course, considerable model risk remains as none of the competing stochastic models is correct. A more suitable model should produce results closer to the truth in the particular task at hand. An ambitious project, left for future research, is to compare forecasted and actual recovery rates. However, such a project likely requires the involvement of the Bank of Italy or a major commercial Italian bank since access to reliable data will be an issue.
Our paper also observes high volatility in recovery rates across Italian provinces, a factor that seems to be overlooked by large rating agencies. Indeed, Moody's assumes that the provinces in each of Italy's 20 regions have similar characteristics, but huge discrepancies in the stochastic dynamics of collateral prices are observed at the municipal level, resulting in overoptimistic forecasts and expected recovery rates on Italian impaired mortgages that are too high. Our results also suggest that the Bank of Italy does not correctly address the riskiness characterizing those exposures. As a consequence, equity provisions that Italian banks need to keep according to Basel III requirements might be miscalculated. Market practitioners have to adjust coarse regional recovery rates to account for these substantial intra-regional differences. Without correctly quantifying the riskiness of the defaulted mortgages held on their balance sheets, banks might not make enough provisions for non-performing loans and violate Basel III recommendations. Given that risk-based pricing is now more widespread in Italy (Magri and Pico 2011), adequate assessment of riskiness is vital to a sound banking sector.
These findings are important because the stock of non-performing loans (NPLs) in Italy tripled since the 2008 global financial crisis, reaching 18% of total loans in 2015 (Bank of Italy reports by Ciocchetta et al. 2017;Accornero et al. 2017;Fischetto et al. 2018). The NPL problem at Italy's banks is largely the result of the prolonged recession that hit the Italian economy in recent years and of lengthy credit recovery procedures (European Banking Authority 2016). In addition to rising concerns about the soundness of the banking sector, this phenomenon might trigger a vicious circle where the contraction in credit supply driven by the level of NPLs leads to lower growth, a slower recovery, and a further deterioration in the balance sheets. In March 2018 the European Commission adopted a comprehensive package of measures, including a proposal for a regulation amending the Capital Requirements Regulation (CRR) to introduce common minimum loss coverage levels for newly originated loans that become non-performing (European Systemic Risk Board 2017a, 2017b).
Figures are produced using Mapchart.net. This paper is based on the first author's MSc dissertation which was supervised by the second author at the University of Manchester in summer 2019.
Data
The main data used in this study consist of the Property Market Observatory (OMI, Osservatorio del Mercato Immobiliare-Agenzia delle Entrate) data set. The data contain the time series of house prices from H1:2000 to H1:2018, collected on a half-yearly basis. Each time series provides the minimum and the maximum house price per square meter (psqm). Each price range is associated with a specific municipality (9714 municipalities), in the respective province (107 provinces), region (20 regions), and territorial area (north-west, north-east, centre, south and isles) of Italy.
Each range of prices is specific to qualitative characteristics: property location, property maintenance status, and property type. Some data cleaning is performed to make the half-yearly time series comparable to each other, since data collection and notation changed during the sample period. We use the average of the minimum and the maximum psqm as an approximation of the fair price.
The data set provided by OMI also includes a more detailed breakdown into collateral geographic location (ISTAT 4 code, OMI code) and range of monthly lease rent prices psqm. Since this information is not relevant for the purpose of this research, such data columns are not considered.
Court-by-court data on the length of Italian legal procedures are obtained from La durata dei fallimenti e delle esecuzioni immobiliari e gli impatti sui npl, a Cerved Group S.p.A. ("La Scala" Attorneys Association) study, allowing for a very detailed calibration analysis and the estimation of expected recovery rates forecast from the foreclosure of the collateral backing the defaulted loans.
Model
The model consists of two main elements: (1) a stochastic continuous-time model capturing the dynamics of prices per square meter of real estate (psqm); and (2) a valuation model for recovery rates of real estate loans at the time of default.
The stochastic dynamics of prices are described by an exponential Ornstein-Uhlenbeck (EOU) process as proposed in Fabozzi et al. (2010) and Perelló et al. (2008).5 Both papers provide strong support for this model. The EOU of the psqm P_t is given by a stochastic differential equation in which S_t is the log of psqm P_t, µ is the (long-run) trend, θ ≥ 0 measures the speed of mean reversion, σ is the volatility, and W_t is the Wiener process. The market value of the collateral of a loan that has market value P_t at time t evolves as P_T = P_t e^{S_T} for all T ≥ t. Applying Ito's lemma and converting to returns, one obtains a discrete-time approximation that can be used in Monte Carlo simulations, with ε_t ∼ N(0, 1). The length of the time steps ∆t is six months because our data are provided on a semi-annual basis. Equation (5) can be written as a regression formula with α = (θµ + σ²/2)∆t, β = −θ∆t, and e_t = σ ε_t √∆t. This gives us the ordinary least squares (OLS) parameter estimates, where σ²_e is the variance of the error term e_t (Iacus 2008).
Recovery rates are calculated from a loan's loss profile, where T_D is the time of default, T_L the time of liquidation of the collateral, r the discount rate (market interest rate), EAD is exposure at default (the outstanding unredeemed part of the loan at the time of default), and k is a discount applied to the collateral, which has market value P_{T_L} at the time of liquidation. The discount k captures losses (relative to the market price) due to the property being sold at auction and other legal and administrative costs. The expected loss given default (the loss per Euro exposed at default) is given by the expectation E[LP_{T_D}], the integral over the loss profile (Equation (6)) under the distribution of P_{T_L}, which is defined by the dynamics (Equation (3)) with initial value P_{T_D}. The time span between the initial and the terminal time is T_L − T_D, the time between the default and the liquidation. Unlike in option pricing models, the expectation is taken under the physical measure. The (expected) recovery rate is finally given by one minus the expected loss given default. Frontczak and Rostek (2015) derive explicit equations for these quantities (their Section 3.2), in which S_{T_D} is the log price and Φ is the cumulative normal distribution. One can use these formulas to determine recovery rates (after calibration of the model to find the parameters µ, σ, and θ). Alternatively, one can also carry out Monte Carlo simulations.6 In our case both produce essentially identical results with 100,000 runs.
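The sketch below illustrates one way such a workflow could look in practice, assuming the log price follows a standard mean-reverting Ornstein-Uhlenbeck process dS = θ(µ − S)dt + σ dW (a simplification consistent with the parameters named above, not necessarily the authors' exact specification). It calibrates the parameters by OLS on semi-annual data and Monte Carlo-simulates the collateral value at liquidation to estimate an expected recovery rate under a constant haircut. All numeric inputs in the usage example are placeholders.

```python
import numpy as np

def calibrate_ou(log_prices, dt=0.5):
    """OLS calibration of dS = theta*(mu - S)*dt + sigma*dW from a log-price series."""
    s = np.asarray(log_prices)
    x, y = s[:-1], np.diff(s)
    b, a = np.polyfit(x, y, 1)                      # y = a + b*x
    theta = -b / dt
    mu = a / (theta * dt)
    sigma = np.std(y - (a + b * x), ddof=2) / np.sqrt(dt)
    return mu, theta, sigma

def expected_recovery(p0, ead, years, mu, theta, sigma, r=0.03,
                      haircut=0.20, n_paths=100_000, dt=0.5, seed=0):
    """Monte Carlo expected recovery rate when the collateral is sold after `years`."""
    rng = np.random.default_rng(seed)
    s = np.full(n_paths, np.log(p0))
    for _ in range(int(years / dt)):
        s += theta * (mu - s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    proceeds = (1.0 - haircut) * np.exp(s) * np.exp(-r * years)  # discounted liquidation value
    loss = np.maximum(ead - proceeds, 0.0)                       # shortfall relative to EAD
    return 1.0 - loss.mean() / ead

# Hypothetical use: a short semi-annual price series, an 80% loan-to-value
# at default, and a 7-year foreclosure.
prices = [1500, 1520, 1480, 1510, 1490, 1530, 1525, 1540]
mu, theta, sigma = calibrate_ou(np.log(prices))
print(expected_recovery(p0=prices[-1], ead=0.8 * prices[-1], years=7,
                        mu=mu, theta=theta, sigma=sigma))
```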
We estimated loss given defaults (LGDs) directly in terms of the loan-to-value ratio. This quantity is defined as the ratio of the face value of debt and the collateral value pledged against the debt, EAD/P t . This ratio is used by banks to assess the riskiness of an engagement. The more collateral is provided, the less potential losses in case of default (i.e., prices are risk-sensitive).
The time between the default of an obligor and the liquidation of the collateral, T L − T D , depends mainly on the length of legal proceedings. These durations vary considerably across Italy. Sicilia and Molise have the least efficient courts in the entire country with foreclosures taking on average 18.5 years (Messina), 15.7 years (Enna), 14.7 years (Campobasso), and 14.6 years (Caltanissetta). In contrast, Crotone, Bolzano, Gorizia, and Como have the most efficient courts with 3.8, 4.1, 4.1, and 4.2 years, respectively (La durata dei fallimenti e delle esecuzioni immobiliari e gli impatti sui npl, Cerved Group S.p.A).
The wide variability observed in the courts' timing therefore affects the value of NPLs. A portfolio of bad loans produces a very different cash flow depending on the efficiency of the legal courts that manage the foreclosure. Longer legal proceedings become costlier and erode the current value of the collateral. All this results in even lower expected recovery rates, thus higher expected losses and tighter capital requirements for the already distressed Italian banks. The duration of real estate foreclosures has a pronounced impact on the value of impaired loans in the Italian banks' balance sheets because it affects the timing and size of recovery quotas on those credits.
The 'fire sale' discount k can be modelled in many different ways. In the Italian jurisdiction, the enforced properties are sold in auctions by the court. It is customary to assume a constant haircut of k = 20%. The discount can also be based on the number of auctions it takes to sell the collateral and the haircut (i.e., the discount applied between subsequent auctions).7
Moody's Investors Service (2019) model describes real estate prices using a geometric Brownian motion process (GBM), defined by the stochastic differential equation dP_t = µ P_t dt + σ P_t dW_t. The growth rate µ and the volatility σ of the property values in the pool depend on the asset type (residential or commercial) and geographical location.
The cash flow generated by the liquidation of a pool of defaulted mortgages in secured transactions (amounts and timing of collections), at the point of sale, is the solution to Equation (9):8 Proceeds from liquidation = f_adjusted · P_{T_D} · e^{(µ − σ²/2)(t − T_D) + σ√(t − T_D) ε}, where ε ∼ N(0, 1). The adjustment factor f_adjusted in Moody's model incorporates the timing of collections to factor into the analysis the possibility of longer-than-average recovery times. The adjustment also distinguishes between more liquid and less liquid properties (primarily determined by property location and condition). Moody's assumes that more than one auction may be required to sell less liquid assets. Thus, for each additional auction in the foreclosure process, property values are adjusted as set in the previous formula; Moody's applies fixed haircut levels in Italy to reduce the initial property value by 10% to 25%, depending on the region under consideration, for each additional auction (Moody's Investors Service 2019, p. 15). In this case, f_adjusted is the estimated liquidation value relative to the market value.
6 Numerical simulations can be based on, for example, the method detailed in Kuchuk-Iatsenko and Mishura (2015).
7 There is strong support for such a two-step approach (first price dynamics then applying a discount) over models aiming to directly capture the liquidation value (e.g., Leow and Mues (2012)).
8 LGDs in the geometric Brownian motion model can be calculated explicitly as in the Merton (1974) model.
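For comparison with the EOU sketch above, the snippet below simulates the GBM-based liquidation proceeds with an adjustment factor built from per-auction haircuts. The haircut level, growth rate, volatility, and property value are illustrative assumptions, not Moody's published parameters.

```python
import numpy as np

def gbm_liquidation_proceeds(p0, mu, sigma, t_years, n_auctions=1,
                             haircut_per_auction=0.15, n_paths=100_000, seed=0):
    """Simulated proceeds from selling the collateral after t_years.

    Property values follow a GBM; each auction beyond the first applies an
    additional haircut, mimicking the adjustment-factor idea.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_paths)
    p_t = p0 * np.exp((mu - 0.5 * sigma**2) * t_years + sigma * np.sqrt(t_years) * eps)
    f_adjusted = (1.0 - haircut_per_auction) ** max(n_auctions - 1, 0)
    return f_adjusted * p_t

# Illustrative parameters only:
proceeds = gbm_liquidation_proceeds(p0=150_000, mu=0.01, sigma=0.08,
                                    t_years=6, n_auctions=3)
print(round(proceeds.mean(), 0))
```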
Comparison of Price Process Models
An estimation of the three different stochastic processes is performed to assess which one provides the best fit to our data. This is done across geographical areas, and also distinguishes between residential and commercial properties. Table 1 supports our analysis: it shows that more statistically significant mean-reversion parameter estimates are obtained when using the EOU process to model the collateral price.
Notes: * p-value < 0.05, ** p-value < 0.01, *** p-value < 0.001.
The EOU process produces the best result. The mean reversion coefficient is consistently more significant than for the geometric OU process. P-values for the geometric Brownian motion are smaller across all geographic areas and for both property types. In conclusion, this analysis justifies the choice of the EOU as the stochastic process to model collateral values.
Calibration of the chosen EOU process model provides us with parameter values that will form the basis of our calculation of expected recovery rates. One can either use explicit formulas (given above) or run Monte Carlo simulations. When simulating the calibrated process, one can also visualize forecast housing prices data and the LGD distribution. In both approaches the outcomes are expected recovery rates for both residential and commercial real estate.
The calibration is performed using both maximum likelihood estimation (MLE) and least squares regression (LSE) methods. Both techniques are good at estimating drift and volatility: both estimates are unbiased, and the estimate of the standard deviation is accurate. The least-squares minimization is tested to verify the efficiency of the maximum likelihood estimation. Both provide the exact same parameter estimates for µ and θ, but differ in estimating σ. Thus, the root mean square error (RMSE) parameter is also provided and compared between the two methodologies.
The analysis follows the same structure as above: examining geographic areas before moving on to regions and finally provinces. In most cases, the RMSE parameter 9 is smaller when using the MLE than OLS: the only two exceptions are isles for residential properties (0.0173 vs. 0.0166) and north-west for commercial ones (0.0141 vs. 0.0133). However, this level of description is insufficient to justify the choice of the calibration model. For regions the MLE is also preferable to OLS because the MLE's RMSE is consistently smaller across all twenty Italian regions for commercial properties. For residential properties, the only exceptions are five regions out of twenty: Basilicata (0.0141 vs. 0.0136), Calabria (0.0200 vs. 0.0191), Sardegna (0.0265 vs. 0.0261), Toscana (0.0199 vs. 0.0194), and Veneto (0.0173 vs. 0.0170). Even in these few cases, the difference is very small. For the reasons discussed above, the MLE method is considered to be more efficient than the LSE method. The deepest level of analysis is performed only with the MLE technique.
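As a rough illustration of how the RMSE comparison between calibration methods could be carried out, the sketch below computes the RMSE of one-step-ahead log-price changes implied by two candidate parameter sets under the mean-reverting dynamics assumed earlier. The parameter values and price series are placeholders, not the estimates reported in the paper.

```python
import numpy as np

def one_step_rmse(log_prices, mu, theta, dt=0.5):
    """RMSE of one-step-ahead predictions dS = theta*(mu - S)*dt."""
    s = np.asarray(log_prices)
    predicted = theta * (mu - s[:-1]) * dt
    observed = np.diff(s)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Hypothetical comparison of two parameter sets on the same series:
log_prices = np.log([1500, 1520, 1480, 1510, 1490, 1530, 1525, 1540])
print(one_step_rmse(log_prices, mu=np.log(1510), theta=0.4))
print(one_step_rmse(log_prices, mu=np.log(1505), theta=0.6))
```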
Expected Recovery Rates
Using the estimated parameter values, simulations of the exponential OU process are performed to forecast recovery rates for each region and each province of the Italian peninsula. The analysis is first carried out across regions for residential and commercial real estate separately. Average national recovery rates are 57.44% for residential property and 56.60% for commercial real estate. For residential properties recoveries range from 68.52% (Friuli Venezia Giulia and Trentino Alto Adige) to 38.60% (Molise). The ranking is the same for commercial property with Trentino Alto Adige at the top (68.31%), followed by Trentino Alto Adige (67.48%) and Valle d'Aosta (67.12%), while Molise is at the bottom of the distribution with 37.91%.
These intervals of 29.92 and 30.40 percentage points reveal significant inter-regional volatility. This becomes even more pronounced when splitting each region into its provinces. For the 108 Italian provinces we find the following: Gorizia and Bolzano come first, both with 75.05%, followed by Como and Sondrio (both 74.53%); the lowest rates are observed for Messina (27.39%), Enna (33.31%) and Campobasso (35.37%). The range widens to 39.16 percentage points between the most and least efficient provinces. Figure 2 shows expected recovery rates for loans secured by residential properties. Commercial properties exhibit even larger differences between provinces than residential ones (available from the authors upon request). Indeed, recovery rates decrease more for southern and island regions than for northern ones.
It can be inferred from Table 2 that northern regions have smaller recovery rate intervals than the rest: the province with the lowest recovery rate belongs to Sicilia (Messina with 27.39%), while the one with the highest recovery rate across the country is in Trentino Alto Adige (Bolzano with 75.05%), with a very pronounced difference of 47.66 percentage points. The very small range of recovery rates for Molise and Umbria (5.74 and 6.45 pp, respectively) is representative of a lower number of provinces in the region per se, together with consistently very low recovery percentages.
The situation does not change much when moving to commercial real estate (Table 3).
Comparison with Moody's Valuations
We compare our results to Moody's rating practice. Moody's is chosen because it is one of the biggest rating agencies worldwide, owning around 40% of the market, and it provides detailed information on its credit rating methodology which enables us to derive expected recovery rates for regions following its procedure.
To perform the benchmark analysis with Moody's practice on evaluating stochastic collateral amounts on secured loans in the Italian territory, we use "Moody's Approach to Rating Securitizations Backed by Non-Performing and Re-Performing Loans" and "Moody's Approach to Rating Italian RMBS". These publications provide specific information about Moody's assumptions about the Italian market. It is important to stress that Moody's does not break down its assumptions at a province level but considers only inter-regional differences. Since the Italian real estate market is characterized by significant intra-regional differences (different provinces belonging to the same region show different price volatility and court timings), the lack of information at province level constrains us to focus on the regional level.
To estimate the cash flows generated by a pool of secured NPL transactions, the model used in Moody's rating process generates the collected amounts and the timing of collections. Moody's calculates the stochastic collected amounts from the collateral by using a geometric Brownian motion to model future property values. Average price growth rates and their volatility are calculated from forecasted prices and then inserted into our pricing formula to get expected recovery rates under Moody's assumptions. The expected recoveries are then subtracted from ours to measure discrepancies between the two methodologies. Figure 3 depicts the difference between the two calculations for residential properties. Both methods give similar expected recoveries for northern regions. For Friuli Venezia Giulia, Lombardia, Emilia Romagna, and Trentino Alto Adige, the percentage difference for residential (commercial) properties is 0.51% (1.35%), 2.90% (2.74%), 3.12% (3.47%), and 3.26% (0.97%), respectively. However, differences are large for residential properties in Molise (27.44%), Lazio (17.18%), Sicilia (15.13%), and Sardegna (14.50%). For commercial properties, the situation is quite similar with differences of 32.80%, 14.02%, 16.11%, and 14.11%, respectively.
Our estimates of expected recoveries, which are derived from an EOU process, range between 34.53% and 66.70%, with an average recovery of 54.48%. Moody's estimates, which are obtained using the geometric Brownian motion, range from 53.44% to 74.90%, with an average of 64.03%. Interestingly, the lowest regional recovery rate obtained using Moody's practice is only slightly smaller than the average recovery rate obtained with our pricing approach.
The average recovery rates estimated in our model are broadly in line with findings in Ciocchetta et al. (2017) and Fischetto et al. (2018). This suggests that Moody's predictions on recovery rates are upward biased, and that Moody's underestimates the expected loss given default, especially with regards to less developed regions in Italy which exhibit high volatility.
By neglecting the actual risks affecting the market at regional and provincial levels, market practitioners might fail to address higher-than-expected loss given defaults. This can result in banks' inability to correctly quantify the riskiness of the assets held on their balance sheets. By estimating lower expected loss rates, banks fail to keep enough provisions for NPLs and would therefore not be sufficiently capitalized, violating Basel III recommendations.
Conclusions
Managing credit risk means knowing in advance how changes in the input parameters affect the estimation results. This enables adequate regulatory capital calculation, adequate economic capital calculation, adequate downturn-LGD (loss given default) estimation, and differentiated risk allocation due to risk-sensitive pricing. For existing defaulted loans, the knowledge about the effects of liquidation efficiency (courts timings) and cost factors on the LGD may have consequences for liquidation policy.
In this paper we focus on loans of the retail sector that are collateralized by residential and commercial real estate property in Italy. We find significant intra-regional differences in the Italian real estate market. When working with regional rather than provincial data, such differences are overlooked. At the regional level our results are much less optimistic than those derived using Moody's methodology. This implies that Italian banks might hold too little capital to cover expected losses from these engagements.
Although this study captures the most relevant characteristics of the two main property types (residential and commercial properties classified as being in normal condition), it would be of interest to develop this study further by using a larger panel data set including property location and property maintenance conditions. Further research should also add validation using actual liquidation data and compare these with forecasts derived in this paper, although the Bank of Italy report by Ciocchetta et al. (2017) highlights the "scarcity of reliable public data on banks' track record in bad loan recovery". | 5,828.2 | 2020-01-12T00:00:00.000 | [
"Economics"
] |
Partial Regulatory T Cell Depletion Prior to Schistosomiasis Vaccination Does Not Enhance the Protection
CD4+CD25+ regulatory T cells (Tregs) do not only influence self-antigen specific immune responses, but also dampen the protective effect induced by a number of vaccines. The impact of CD4+CD25+ Tregs on vaccines against schistosomiasis, a neglected tropical disease that is a major public health concern, however, has not been examined. In this study, a DNA vaccine encoding a 26 kDa glutathione S-transferase of Schistosoma japonicum (pVAX1-Sj26GST) was constructed and its potential effects were evaluated by depleting CD25+ cells prior to pVAX1-Sj26GST immunization. This work shows that removal of CD25+ cells prior to immunization with the pVAX1-Sj26GST schistosomiasis DNA vaccine significantly increases the proliferation of splenocytes and IgG levels. However, CD25+ cell-depleted mice immunized with pVAX1-Sj26GST show no improved protection against S. japonicum. Furthermore, depletion of CD25+ cells causes an increase in both pro-inflammatory cytokines (e.g. IFN-γ, GM-CSF and IL-4) and an anti-inflammatory cytokine (e.g. IL-10), with CD4+CD25- T cells being one of the major sources of both IFN-γ and IL-10. These findings indicate that partial CD25+ cell depletion fails to enhance the effectiveness of the schistosome vaccine, possibly due to IL-10 production by CD4+CD25- T cells, or other cell types, after CD25+ cell depletion during vaccination.
Introduction
Schistosomiasis is one of the most important neglected tropical diseases (NTDs) and remains a major public health problem in endemic countries [1,2]. Although schistosomiasis can be treated with praziquantel [3], the high re-infection rate limits the overall success of drug therapies [4,5]. Therefore, the development of a safe, effective vaccine would significantly improve the long-term management of schistosomiasis and improve the efficacy of chemotherapeutic interventions [6,7]. Despite decades of research toward developing vaccines against Schistosoma japonicum (S. japonicum), however, a protective vaccine against this pathogen is still not available.
A potential issue limiting the immune system's response to vaccination is the presence of regulatory T cells (Tregs), which suppress T cell activation [8,9]. Tregs play a central role in immune homeostasis and in preventing autoimmune disease. Natural Tregs, which express Foxp3, and antigen-specific Tregs, which secrete IL-10 and/or TGF-β and are termed Tr1 or Th3 cells, play a protective role in immunity to infection by controlling infection-induced immunopathology [10]. However, induction of Tregs to suppress the host's protective immune responses is also a potent immune subversion strategy utilized by many pathogens, including S. japonicum, to prolong their survival [11,12]. Both thymus-derived natural Tregs and pathogen-induced peripheral Tregs could contribute to the immune suppression observed during infection [13]. Depletion of these natural and induced Tregs, consequently, can enhance the development of protective T cell responses during chronic infection [14,15].
Studies have demonstrated that vaccination may also lead to the expansion of CD4+CD25+ Tregs, which ultimately blunts responses to cancer vaccines. Indeed, the depletion of this cell type results in an enhanced tumor vaccination response [16,17]. The potent immunosuppressive effects of CD4+CD25+ Tregs may in part explain the failure of many immunotherapeutic approaches to cancer [18,19]. For example, treatment with cyclophosphamide to reduce suppressor cells has been shown to enhance antitumor immunity during vaccination in melanoma patients. However, it is now recognized that more specific strategies are required to eliminate Tregs in order to improve the efficacy of anti-tumor immunotherapeutics [20]. A common method of depleting CD25+ Tregs is to inject an antibody against CD25, which is constitutively expressed on this cell type. This approach has been demonstrated to significantly improve the clearance of injected tumor cells [16,21]. A similar strategy is required to enhance the efficacy of poorly immunogenic prophylactic infectious disease vaccines and for therapeutic vaccination in chronic infections [22,23]. Although previous studies point to the importance of CD4+CD25+ Tregs in the host response to cancer and other diseases, the influence of these cells on the response to a schistosomiasis vaccine has yet to be examined.
A 26 kDa isoenzyme of S. japonicum glutathione S-transferase (Sj26GST), which catalyses detoxification of lipophilic molecules by thioconjugation, is one of the six antigens recommended by WHO for vaccine development [24]. It has been shown that the reduction of worm burdens and liver egg numbers in mice can reach 30.1% and 44.8%, respectively, after immunization with a plasmid containing Sj26GST DNA (pVAX1-Sj26GST) [25]. In order to investigate the influence of CD4+CD25+ Tregs on the response to a schistosome vaccine, this study evaluated whether the depletion of CD4+CD25+ Tregs using anti-CD25 antibody treatment leads to an enhancement of pVAX1-Sj26GST DNA vaccine potency in mice. The results demonstrated that CD25+ cell depletion did not enhance protection conferred by pVAX1-Sj26GST vaccination, but did cause a significant increase in splenocyte proliferation and IgG levels. Depletion of CD25+ cells induced splenic CD4+CD25− T cell secretion of both IFN-γ and IL-10, which may, in part, explain the lack of enhancement of the protection conferred by the vaccine.
Results
Anti-CD25 Monoclonal Antibody Treatment Depletes Treg Cells in C57BL/6 Mice Prior to pVAX1-Sj26GST Vaccination
Given the potential role of CD4+CD25+ Treg cells in suppressing the immune response induced by vaccination, a key question is whether the depletion of this cell type affects the protective efficacy of the pVAX1-Sj26GST schistosomiasis vaccine. To deplete CD4+CD25+ Treg cells, C57BL/6 mice were administered a 500 μg/mouse dose of the anti-CD25 PC61 antibody via intraperitoneal injection. This treatment protocol has previously been shown to deplete and inhibit CD4+CD25+ Tregs [23]. As CD25 is expressed on effector T cells generated upon immunization, as well as on CD4+CD25+ Tregs, it is not possible to examine the effect of the anti-CD25 antibody on Tregs by surface phenotype alone. The transcription factor Foxp3 is associated with CD4+CD25+ Treg identity and function; therefore, coexpression of CD25 and Foxp3 was used to identify CD4+CD25+ Tregs [26]. The effectiveness of the treatment regimen was confirmed by FACS analysis of peripheral blood from either control rat IgG1 or anti-CD25 mAb treated mice (Figure 1A). Three days after treatment, compared to control mice, anti-CD25 treatment resulted in an average reduction of 66% in CD4+CD25+Foxp3+ Treg cell numbers (Figure 1B).
Pre-emptive Depletion of CD25+ Cells does not Significantly Improve the Protective Efficacy of pVAX1-Sj26GST Vaccination
To assess the effect of CD4+CD25+ Treg depletion on the protective efficacy of the pVAX1-Sj26GST vaccine, C57BL/6 mice were subjected to anti-CD25 antibody treatment, or treated with rat IgG1 antibody or no antibody as controls, and immunized intramuscularly three days later (day 0) with 50 μg pVAX1 or pVAX1-Sj26GST DNA. The treatment regimen is illustrated in Figure 2A. The percentage of protection induced by vaccination was measured by the reduction in adult worm and egg burden. Among mice with no antibody pre-treatment, those inoculated with pVAX1-Sj26GST show a reduction in worms of 33.23% and a reduction of eggs in the liver of 28.42% (P < 0.05), compared with the pVAX1-inoculated control group (Figures 2B and 2C). Similarly, mice pre-treated with the control IgG1 antibody and inoculated with pVAX1-Sj26GST show a 26.15% reduction in worms and a 34.21% reduction of eggs in the liver (P < 0.05) compared to control inoculation. However, pre-treatment with anti-CD25 antibody followed by vaccination with pVAX1-Sj26GST results in only a slightly higher reduction in worm burden (41.82%) and liver egg burden (36.24%) compared to control-inoculated mice (Figures 2B and 2C). This indicates that anti-CD25 antibody treatment does not significantly improve the protective efficacy of pVAX1-Sj26GST vaccination.
Kinetics and Characterization of Treg Cell Induction during pVAX1-Sj26GST Vaccination
This limited change in disease protection conferred by pVAX1-Sj26GST vaccination after CD25+ cell depletion may be explained by Tregs that remain or recover after antibody treatment. It has been previously reported that the depletion of CD4+CD25+ Tregs with anti-CD25 treatment is not completely effective [27]. Tracking the kinetics of CD4+CD25+ Treg cell numbers after immunization and CD25+ cell depletion demonstrates that depletion is effective 3 days after injection (day 0), reaches a maximal level of a 70-80% reduction on day 8, and remains significantly lower than the pre-immunization level on day 35. Further, after immunization, the percentage of CD4+CD25+Foxp3+ Tregs in CD25+ cell-depleted mice vaccinated with pVAX1-Sj26GST was significantly lower than that in the other groups (Figure 3). However, both pVAX1- and pVAX1-Sj26GST-immunized mice had significantly increased percentages of CD4+CD25+ Tregs after vaccination, compared to before vaccination, suggesting that vaccination induced production of peripheral Treg cells. Overall, these data suggest that CD25+ cell recovery after depletion likely does not explain the limited disease protection elicited by the vaccine in CD25+ cell-depleted mice.
Depletion of CD25+ Cells Enhances Splenocyte Proliferation and the Production of IgG Antibody after Vaccination with pVAX1-Sj26GST
CD4+CD25+ Tregs have specifically been shown to suppress the immune response to schistosome infection [12,28,29]. We investigated whether CD25+ cell depletion in vivo would allow a more robust induction of immune responses after pVAX1-Sj26GST vaccination. To determine the influences on the immune response following antigen-specific stimulation, splenocyte proliferation and antibody production were assessed. Splenocytes were isolated from CD25+ cell-depleted and non-depleted pVAX1-Sj26GST-vaccinated mice, pooled, and stimulated with soluble worm antigen (SWA). Only Treg-depleted mice produce splenocytes that vigorously proliferate in the absence of in vitro stimulation with SWA (Figure 4A), suggesting that pVAX1-Sj26GST vaccination induces T cell activation in vivo after CD4+CD25+ Treg cell depletion. Furthermore, in vitro SWA stimulation causes a significant increase in splenocyte proliferation in both CD25+ cell-depleted mice and controls after pVAX1-Sj26GST vaccination (Figure 4A). This suggests that pVAX1-Sj26GST immunization induces antigen-specific T-cell proliferation, regardless of CD25+ cell depletion.
To examine whether the depletion of CD25+ cells influences antibody production, the levels of SWA-specific antibodies in the serum of CD25+ cell-depleted mice after pVAX1-Sj26GST immunization were examined. Among non-CD25+ cell-depleted mice, pVAX1-Sj26GST vaccination causes a significant increase in antigen-specific IgG levels (P < 0.05) compared with control inoculation (Figure 4B). However, after cell depletion, pVAX1-Sj26GST vaccination causes an even more robust increase in the IgG response (P < 0.05) than in non-depleted vaccinated mice. No IgG1 or IgG2a response was observed in immunized mice, regardless of CD25+ cell depletion (Figure 4B). Taken together, these results indicate that CD25+ cell depletion specifically influences both the proliferation of splenocytes and IgG production.
CD25+ Cell Depletion Prior to Vaccination Upregulates Both Pro- and Anti-inflammatory Cytokines in pVAX1-Sj26GST-vaccinated Mice
To further investigate the influence of CD25+ cell depletion on the immune response, the levels of cytokines in splenocytes isolated from CD25+ cell-depleted, pVAX1-Sj26GST-vaccinated mice after SWA stimulation were examined. pVAX1-Sj26GST vaccination significantly increases the production of IFN-γ and GM-CSF in all cases (P < 0.05; Figures 5A and 5B), while IL-4 and IL-10 levels are not significantly changed in vaccinated control Ab-treated mice (Figures 5C and 5D). When CD25+ cells are depleted prior to immunization, IFN-γ levels in splenocyte supernatants are not increased to a higher level than that observed in non-CD25+ cell-depleted mice (P > 0.05; Figure 5A). However, GM-CSF, IL-4, and IL-10 are significantly increased after immunization of CD25+ cell-depleted mice, compared to non-depleted controls (P < 0.05; Figures 5C-5D). Overall, these results demonstrate that CD25+ cell depletion prior to pVAX1-Sj26GST vaccination causes the upregulation of both pro- and anti-inflammatory cytokines.
CD4+CD25− T Cells from pVAX1-Sj26GST-vaccinated Mice after CD25+ Cell Depletion are a Major Source of Increased IFN-γ and IL-10
IFN-γ produced by CD4+ Th1 cells is known to be a key cytokine in promoting schistosome vaccine-induced protection, while IL-10 is a key inhibitor of the process [30]. Furthermore, as the CD4+ T-cell mediated immune response plays a central role in the control of schistosomes after natural infection or vaccination [6,30], we determined whether CD4+ T cells are responsible for the production of these two cytokines. Splenocytes were isolated from pVAX1-Sj26GST-vaccinated mice with and without CD25+ cell depletion, labeled for the surface markers CD3, CD4, and CD25, as well as for intracellular IFN-γ and IL-10, and analyzed by flow cytometry. Compared to those isolated from non-depleted mice, CD4+ T cells from CD25+ cell-depleted mice produce higher levels of both IFN-γ and IL-10 (Figures 6A-6B). Furthermore, gating on CD25 expression in splenocyte isolates after CD25+ cell depletion reveals that CD4+CD25− T cells produce significantly higher levels of both IFN-γ and IL-10 than CD4+CD25+ cells (P < 0.05 and P < 0.01; Figures 6C-6D). These results indicate that CD4+CD25− T cells are one of the major sources of both IFN-γ and IL-10 after CD25+ cell depletion.
Discussion
CD4+CD25+ Tregs affect the potency of vaccines with respect to both vaccine-induced self-antigen and foreign-antigen immune responses in multiple systems [31]. A number of groups have found that depletion of CD25+ cell populations using an anti-IL-2 receptor alpha chain antibody (anti-CD25 antibody) [32] potentiates vaccine-induced immunity to both tumors [16,21] and pathogens [23,33]. However, the impact of CD4+CD25+ Tregs on vaccines against schistosomiasis, a disease that poses a significant public health concern in many tropical countries, was unknown, and is the subject of this investigation.
In the current study, to investigate the impact of CD4+CD25+ Treg cells on vaccines against schistosomiasis, we chose a plasmid encoding Sj26GST as a DNA vaccine. Sj26GST is recognized as a promising vaccine candidate against S. japonicum [6]. However, like other vaccine candidates, the current schistosome vaccine induced limited protection, highlighting the possible negative influence of Treg cells (e.g. CD4+CD25+ Tregs) on the response to the vaccine. Indeed, the present study demonstrates that pVAX1-Sj26GST immunization induces a significant increase of CD4+CD25+Foxp3+ Tregs, which may be involved in the limited protection the vaccine confers. This finding is consistent with a recent publication showing that the Sj26GST vaccine can enhance the expression of CD4+CD25+ Tregs in infected animals, resulting in poor disease protection [34]. Although not many schistosome vaccines that elicit Treg cell development upon vaccination have been characterized [34], our group and others have demonstrated that several schistosome antigens can induce CD4+CD25+ Tregs [12,35]. Therefore, it is not surprising that many schistosome antigens do not confer protection as recombinant proteins [6]. Whether these antigens, or other candidate antigens, could induce CD4+CD25+ Tregs upon immunization requires further analysis.
Regarding protective immunity, CD25+ cell depletion prior to vaccination with the S. japonicum pVAX1-Sj26GST DNA vaccine results in a significant increase in vaccine-induced splenocyte proliferation and IgG levels. However, CD25+ cell depletion does not significantly enhance the disease protection conferred by the vaccine. These observations imply that there might be other factors that affect the vaccine efficacy after CD25+ cell depletion. Furthermore, these results appear inconsistent with previous studies, as a number of groups have shown that removal of CD4+CD25+ Tregs with anti-CD25 treatment enhances both the immune response and the therapeutic potency of vaccines [16,17,23,36,37]. However, consistent with the current work, Tuve and colleagues report that CD4+CD25+ Treg depletion is inefficient in controlling tumor growth in a mouse model of cervical cancer [38]. Even though CD25+ cell-depleted mice challenged with rotavirus had improved antigen-specific CD4+ and CD8+ T cell responses, the clinical outcome was not improved [33]. Whether these differences are due to different host systems, different disease models, or different vaccine formulations remains to be investigated in future studies.
Quantification of cytokines in splenocyte culture supernatants indicates that pVAX1-Sj26GST vaccination induces significant levels of IFN-γ and low levels of IL-4 and IL-10 in vaccinated control Ab-treated mice. However, immunization induces significantly higher levels of IL-4 and IL-10, as well as GM-CSF, in CD25+ cell-depleted mice. A high elicited ratio of IFN-γ/IL-10 is predictive of the success of certain vaccines [39,40]. It is known that activation of CD4+ Th1 cells stimulates the production of a high level of IFN-γ, which promotes protective immune responses, and of IL-10, which plays a negative role in the development of immunity against S. japonicum [6]. Previous studies report that CD25+ cell depletion results in a significant increase in IFN-γ production in splenocyte culture supernatants, decreases the production of IL-10 in viral infection, and enhances the specific immune response induced by viral infection [33]. In contrast, this study found that CD25+ cell depletion causes an increase in both pro-inflammatory cytokines (e.g. IFN-γ, GM-CSF and IL-4) and an anti-inflammatory cytokine (IL-10).
Although IFN-γ and IL-10 can be produced by many cell types, including B cells, macrophages, and CD4+ or CD8+ T cells [41,42,43], the CD4+ T cell-mediated immune response plays a central role in the control of schistosomes after natural infection or vaccination [6,30]. Therefore, in this study, we assayed the production of IFN-γ and IL-10 by CD4+ T cells. Consistent with other studies on parasitic infection [44,45], we found that CD4+ T cells produce higher levels of IFN-γ and IL-10 after CD25+ cell depletion following pVAX1-Sj26GST immunization. Although CD4+CD25+ Tregs have been shown to secrete IL-10 [46,47], intracellular cytokine staining analysis in this study shows that CD4+CD25− T cells, not CD4+CD25+ T cells, produce IL-10 and IFN-γ after CD25+ cell depletion. The upregulation of IFN-γ and IL-10 that we observe in CD25+ cell-depleted mice may therefore be explained by impaired inhibition of IL-10 and IFN-γ production by CD4+CD25− T cells, or other cells, after CD25+ cell depletion.
Notably, it has been demonstrated that host-protective IL-10 is produced, through autocrine signaling, by conventional IFN-γ-producing Th1 cells during infection with Toxoplasma gondii [48]. Certain studies have suggested that IFN-γ secretion enhances IL-10 production, particularly in disease conditions in which the host has already been primed to antigen, such as in chronic infection or cancer [49]. Antigen-experienced T cells appear to require IFN-γ to further enhance IL-10 secretion for the inhibition of antigen-specific T cell responses [49]. Whether the induction and the suppressive function of IL-10-producing CD4+CD25− T cells in pVAX1-Sj26GST vaccination after CD25+ cell depletion are dependent on IL-10 in vivo requires further investigation. Our finding that CD25+ cell depletion elicits the upregulation of both IL-10 and IFN-γ in CD4+CD25− T cells may imply that a feedback mechanism occurs after CD25+ cell depletion and vaccination, which may be involved in self-regulation of inflammation by anti-inflammatory cytokines (e.g., IL-10) after vaccination. This suggests that immune homeostasis shapes the delicate balance between pro- and anti-inflammatory cytokines and between regulatory and effector T-cell function, in a manner corresponding to the immunological threat and minimizing damage to the host.
In conclusion, this work demonstrates that depletion of CD25+ cells increased the immune response, but did not confer enhanced protection after immunization with S. japonicum pVAX1-Sj26GST. Based on our interpretation of the data, the failure of CD25+ Treg cell depletion to enhance vaccine-mediated protection may be due to: (i) insufficient splenocyte proliferation and IgG levels to promote immune killing of the schistosomes. Against a complex organism such as S. japonicum, vaccines need to stimulate the appropriate immune response that leads to protection. Studies have shown that protection elicited by vaccination is not dependent on one immune mechanism, but is multifactorial, involving both cellular and humoral elements that can be affected by the host's genetic background and the vaccine regimen [50,51]. Although depletion of CD25+ cells elicits increased IgG production and T cell responses, it may not induce a sufficiently wide spectrum of immune responses. (ii) Complex negative regulatory strategies, such as IL-10 production by CD4+CD25− T cells. This observation is consistent with the results of a study showing a role for IL-10 in the suppression of host immunity upon vaccination; the blockade of IL-10 allowed an ineffective therapeutic DNA vaccine to stimulate even stronger immunity and enhance clearance of persistent viral replication [52]. (iii) Similar to cancer and infection, systemic immunization is able to induce antigen-specific T cells in the peripheral system, but cannot overcome the immunosuppressive microenvironment within local immune response sites. Studies have reported that the numbers of CD4+CD25+ Tregs in infected or intratumoral sites are significantly increased compared to those among peripheral blood mononuclear cells (PBMC) [53,54]. Indeed, natural and inducible CD4+Foxp3+ Tregs are recruited to the liver after schistosome infection, providing an essential regulatory arm that stabilizes the immune response and limits immunopathology [29]. Apart from Tregs, other cells and molecules, such as regulatory B cells, inhibitory soluble factors (e.g., TGF-β), and inhibitory cell surface receptors (e.g., FasL, PD-L1 and B7-H1), are likely to be involved in the suppression of vaccine-mediated protection, but their roles are not yet clear.
In addition, the selection of a suitable adjuvant and delivery system to aid in the stimulation of the appropriate immune response is a critical step on the path to the development of successful anti-schistosome vaccines. Recently, a study showed that lipopolysaccharide (LPS) as an adjuvant can support the development of diverse CD4+ T cell subsets, depending on the tissue microenvironment. For example, mice sensitized intranasally with a low dose of LPS display heightened Th2 responses against an allergen, whereas intravenous immunization generates Treg cells that limit the CD8+ T cell response and intraperitoneal injection leads to Th17 and Th1 expansion in the small intestinal lamina propria [55]. A similar study involving Leishmania donovani (L. donovani) vaccination suggested that mice immunized intraperitoneally (i.p.) and intravenously (i.v.) with L. donovani promastigote membrane antigens (LAg), either free or encapsulated in liposomes, were protected against challenge infection with L. donovani, whereas mice immunized through the subcutaneous (s.c.) or intramuscular routes were not protected. The induction of high pre-challenge TGF-β limits the efficacy of s.c. vaccination, rendering it non-protective [56]. It is not yet clear whether a traditional delivery system or an adjuvant used with schistosome vaccines could induce CD4+CD25+ Tregs upon immunization. Elucidation of the protective mechanisms of schistosome vaccines depends on an increased depth of understanding of basic immunological knowledge. We are now working to clarify the reasons for the failure of the vaccine, with or without CD25+ cell depletion, to elicit protective responses. We are investigating the possibilities discussed above, toward developing strategies for schistosomiasis vaccine formulation and delivery.
Although we did not demonstrate improved protective efficacy using the CD25+ cell depletion strategy, we did gain insight regarding the effective design of S. japonicum vaccines. CD25+ cell depletion combined with the inhibition of IL-10 may represent a promising new approach for effective schistosomiasis vaccine design. Furthermore, to develop a successful schistosomiasis vaccine, we should consider immune regulation from a broader perspective, with an appreciation of interactive networks, within and beyond the immune system, that play roles in the response to vaccination.
Ethics Statement
Animal experiments were performed in strict accordance with the Regulations for the Administration of Affairs Concerning Experimental Animals (1988.11.1), and all efforts were made to minimize suffering. All animal procedures were approved by the Institutional Animal Care and Use Committee (IACUC) of Nanjing Medical University for the use of laboratory animals (Permit Number: NJMU 09-0128).
Animal Studies
Six-week-old C57BL/6 female mice were provided by the Center of Experimental Animals (Nanjing University, Nanjing, China) and bred in university facilities. All animal experiments were performed in accordance with the Chinese laws for animal protection and in adherence to experimental guidelines and procedures approved by the Institutional Animal Care and Use Committee (IACUC), the ethical review committee of Nanjing Medical University, for the use of laboratory animals. Oncomelania hupensis harboring S. japonicum cercariae (Chinese mainland snail strain) were purchased from the Jiangsu Institute of Parasitic Diseases (Wuxi, China).
DNA Vaccine Preparation
Constructs encoding Sj26GST were prepared and confirmed as described previously [25], using the 3 kb recombinant expression plasmid pVAX1 (a gift from Professor Jiaojiao Lin, Shanghai Veterinary Research Institute, Chinese Academy of Agricultural Sciences, Shanghai, China) containing the cytomegalovirus (CMV) promoter and bovine growth hormone (BGH) polyadenylation signal. Constructs were confirmed by sequencing. Expression of Sj26GST was verified by transfecting the pVAX1-Sj26GST plasmid into 293 cells. The empty vector pVAX1 was used as control. Plasmids were replicated in DH5α Escherichia coli and purified with the Qiagen Endo-Free plasmid kit (Qiagen, Valencia, CA) according to the manufacturer's protocol. The Limulus Amebocyte Lysate QCL-1000 kit (Cambrex, Charles City, IA, USA) was used to confirm that the endotoxin concentration was below 0.1 EU (Endotoxin Units) per dose.
Depletion of CD4+CD25+ T Cells
For the in vivo depletion of CD4+CD25+ T cells, mice were intraperitoneally injected with 500 μg of the anti-CD25 monoclonal antibody clone PC61 (BD Biosciences Pharmingen, San Diego, Calif.) or a rat IgG1 isotype control (Sigma-Aldrich). Depletion efficiency was verified by staining with anti-CD25 antibody clone 7D4 (BD Biosciences Pharmingen, San Diego, Calif.) followed by measurement with flow cytometry (see Results).
Immunization and Challenge Infection
For characterization of immune responses, three independent experiments were carried out. In each experiment, C57BL/6 mice (6 mice per group) were intramuscularly injected in the quadriceps muscle with 50 μg of pVAX1 or pVAX1-Sj26GST. The immunization was repeated three times at 14-day intervals. One week after the final vaccination, mice were sacrificed for the characterization of their cellular and humoral immune responses.
For the vaccination challenge trial, two independent experiments were carried out. In each experiment, C57BL/6 mice were divided into four groups of 8 mice per group. Three days after anti-CD25 or control antibody injection, each mouse was intramuscularly injected with 50 μg of pVAX1 or pVAX1-Sj26GST. Immunization was repeated three times at 14-day intervals. Two weeks after the final vaccination, all mice from each group were challenged percutaneously with 40 ± 1 S. japonicum cercariae. After six weeks, mice were sacrificed and perfused to determine adult worm and liver egg burdens. Reductions in worm/liver egg burdens are expressed as a percentage of the burden recorded in the control groups.
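As a quick illustration of how the reported protection percentages are obtained, the sketch below evaluates the reduction relative to the control-group burden; the group means used here are hypothetical placeholders, not data from this study.

```python
# Illustrative computation of the percentage reduction in worm or liver egg
# burden relative to the control (pVAX1) group. The burdens below are
# hypothetical placeholders, not data reported in this study.

def percent_reduction(control_mean: float, vaccinated_mean: float) -> float:
    """Reduction expressed as a percentage of the control-group burden."""
    return 100.0 * (control_mean - vaccinated_mean) / control_mean

control_worms = 30.0      # hypothetical mean adult worm burden, pVAX1 group
vaccinated_worms = 20.0   # hypothetical mean adult worm burden, pVAX1-Sj26GST group
print(f"{percent_reduction(control_worms, vaccinated_worms):.2f}% reduction")  # 33.33%
```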
Antibody Detection in the Sera of Immunized Mice
For antibody detection, serum samples were collected seven days after the last immunization. Standard ELISAs were performed using soluble worm antigen (SWA) as the antigen source, which was prepared as previously described [25,57,58]. Antibody detection in the sera of immunized mice was performed as previously described [59]. In brief, ELISA plates (Titertek Immuno Assay-Plate, ICN Biomedicals Inc., Costa Mesa, CA, USA) were coated with SWA (15 μg/ml) in 50 mM carbonate buffer (pH 9.6) and stored overnight at 4°C. Each plate was washed three times with PBS (pH 7.6) containing 0.05% Tween-20 (PBST) and blocked with 0.3% (w/v) bovine serum albumin (BSA) in PBS for 1 h at 37°C. The plates were further washed three times with PBST and then incubated with sera diluted in 0.3% BSA (1:100) for the detection of IgG, IgG1, and IgG2a antibodies at 37°C for 1 h. The plates were washed four times with PBST, followed by incubation with HRP-conjugated rat anti-mouse IgG, IgG1, and IgG2a (1:1000) for 1 h at 37°C. The plates were washed five times with PBST and were then developed with tetramethylbenzidine (TMB) substrate (BD Biosciences Pharmingen) for 30 min. Optical density (OD) was read at 450 nm using a Bio-Rad (Hercules, CA, USA) ELISA reader. To evaluate cytokine production, single-cell suspensions of splenocytes were cultured in the presence of 15 μg/ml SWA or control medium at 2 × 10^5 cells/well in round-bottom 96-well plates. After 3 days, culture supernatants were collected and assayed for IFN-γ, GM-CSF, IL-4, and IL-10 using the FlowCytomix Mouse Cytokine Kit (Bender MedSystems, Vienna, Austria) according to the manufacturer's instructions.
Flow Cytometry
For analysis of CD4+CD25+Foxp3+ T cells, the Mouse Regulatory T Cell Staining Kit (eBioscience, San Diego, CA) was used. Whole blood from immunized mice was obtained retro-orbitally. RBC lysis was performed on whole blood as needed with 1× ammonium chloride lysing solution (BD PharMingen) [60]. Cells were surface-stained with PerCP anti-CD3 mAbs (eBioscience, San Diego, CA), FITC anti-CD4 mAbs, and APC anti-CD25 mAbs, followed by fixation and permeabilization with Cytofix/Cytoperm. Intracellular staining with phycoerythrin (PE) mouse anti-Foxp3 or PE IgG2a rat immunoglobulin control antibody was performed according to the manufacturer's protocol.
Statistical Analysis
Statistical analyses were performed using SPSS version 10.1 (Statistical Package for Social Sciences, Chicago, IL). Statistical significance was determined by Student's t-test, with P < 0.05 considered statistically significant. | 6,636 | 2012-07-03T00:00:00.000 | [
"Biology",
"Medicine"
] |
Variable-Order Fracture Mechanics and its Application to Dynamic Fracture
This study presents the formulation, the numerical solution, and the validation of a theoretical framework based on the concept of variable-order mechanics and capable of modeling dynamic fracture in brittle and quasi-brittle solids. More specifically, the reformulation of the elastodynamic problem via variable and fractional order operators enables a unique and extremely powerful approach to model nucleation and propagation of cracks in solids under dynamic loading. The resulting dynamic fracture formulation is fully evolutionary, hence enabling the analysis of complex crack patterns without requiring any a priori assumptions on the damage location and growth path, and without the use of any algorithm to track the evolving crack surface. The evolutionary nature of the variable-order formalism also removes the need for additional partial differential equations to predict the damage field, hence suggesting a conspicuous reduction in the computational cost. Remarkably, the variable-order formulation is naturally capable of capturing extremely detailed features characteristic of dynamic crack propagation such as crack surface roughening and single and multiple branching. The accuracy and robustness of the proposed variable-order formulation are validated by comparing the results of direct numerical simulations with experimental data from typical benchmark problems available in the literature.
Introduction
Fracture is one of the most commonly encountered modes of failure in structural systems across a broad spectrum of applications spanning the civil, mechanical, and aerospace engineering fields. The prevention of fracture-induced failure is a major concern of structural design and has historically motivated the development of theoretical and experimental methodologies to predict the nucleation and propagation of structural damage. While the general topic of fracture mechanics is very complex in itself due to the coexistence of multiple physical processes occurring over multiple spatial scales, the specific topic of dynamic fracture is possibly even more challenging due to the occurrence of crack surface roughening, instabilities, and branching. Detailed discussions of the implications and modeling approaches for dynamic fracture can be found in many sources such as [1]. During the last few decades, the analysis of dynamic fracture has benefited greatly and made significant progress thanks to the rapid development of numerical methods. From a high-level perspective, the approaches available for the analysis of damage can be divided into two categories, namely, discrete and continuum. This classification refers specifically to the modeling of the damage interface so that, while in both cases the solid is treated as a continuum, in the former class of approaches the displacement is modeled as a discontinuous field across the fracture surface. In the latter category, instead, the displacement is treated as a continuous field everywhere (even across the crack surface), but the local value of the elastic energy is reduced by accounting for the softening of the material properties associated with the fracture-induced degradation. In the following, we briefly review some of the most established dynamic fracture models in order to clearly define the context in which our variable-order approach is defined.
Discrete approaches to the modeling of dynamic fracture include extended finite element methods (XFEM) [2][3][4][5], discontinuous cell methods (DCM) [6], cohesive interface element techniques [7][8][9], discontinuous Galerkin methods [10,11], and lattice-based models [12,13]. From a general perspective, these approaches are based either on linear elastic fracture mechanics (LEFM) [14] or on the cohesive zone model (CZM) [15]. Owing to its computational and multiscale analysis capabilities, the XFEM has quickly risen in popularity and is currently one of the most widely used approaches. In XFEM, cracks are represented as discrete discontinuities that are embedded in the damaged elements by enriching the displacement field according to the method of partition of unity [16]. This approach implies that the front of the discontinuity (i.e. the crack) must be tracked explicitly. While several tracking algorithms have been proposed over time [4,5,17], the front tracking process is quite computationally intensive, particularly in three-dimensional problems involving complex crack topology. Another important limitation consists in the need for a branching criterion, which is often ad hoc and limited to two crack branches. Exceptions to this latter comment are the formulations based on either the DCM [6] or interface elements [7,18], which however must be inserted in the model a priori, hence posing the problem of knowing the location and propagation path of the damage. The front tracking limitation was addressed by the use of lattice models where the continuum is replaced by a system of rigid particles that interact via a network of linear and nonlinear springs. More recently, Silling [19,20] proposed an approach denominated peridynamics that models the solid medium as a nonlocal lattice of particles described via an integral formulation. During the last two decades this approach has received much attention and has been used in many diverse applications. In the context of dynamic fracture, peridynamics has been shown to address several of the above shortcomings and can accurately capture crack intersections and branching in complex structures and materials. An important caveat of lattice models derives from the fact that the spring stiffnesses are often defined heuristically and various elastic phenomena (e.g. the Poisson effect) cannot be reproduced exactly.
In the second category, the continuum approaches, we find the crack band method [21], nonlocal integral damage models [22], nonlocal stress-based damage models [23], and the more recent class of phase-field models [24][25][26][27][28][29][30]. Phase-field models are undoubtedly the methods that have seen the highest popularity given their overall accuracy and ease of implementation. In phase-field models, sharp cracks are regularized by a diffused damage field while a variational approach is adopted to obtain evolutionary equations for both the displacement and the damage fields [24]. The formulation also includes a small and positive length scale parameter so that, in the limit of the parameter approaching zero, the phase-field representation of the crack converges to the original problem of a sharp crack [31]. The use of a phase-field regularization removes the need for an explicit tracking of the crack surface discontinuity. It follows that the numerical implementation of the phase-field model is relatively straightforward when compared to the previously mentioned discrete approaches. An important disadvantage of these models lies in their high computational cost, which follows from the need to solve a coupled system of partial differential equations for both the damage (phase) and the displacement fields [30]. This limitation becomes even more significant when the phase-field approach is applied to fracture analysis in three-dimensional media. Additionally, phase-field models are subject to an artificial widening of the damaged area at the point of occurrence of instability [30,32], which is in contrast to the microbranching and crack surface roughening seen in experiments [33]. A detailed review of phase-field models can be found in [34].
In very recent years, variable-order fractional calculus (VO-FC) has emerged as a powerful mathematical tool to model a variety of discontinuous and nonlinear phenomena. Variable-order (VO) fractional operators are a natural extension of constant-order fractional operators that allow the differentiation and integration of functions to any real or complex valued order [35]. In VO operators, the order can be a function of time, space, internal variables (e.g. system energy or stress), or even external variables (e.g. temperature or external mechanical loads) [36]. As the VO-FC formalism allows updating the system's order depending on either its instantaneous or historical response, the corresponding model can evolve seamlessly to describe widely dissimilar dynamics without the need to modify the structure of the underlying governing equation. Thus, a very significant feature of VO-based physical models consists in their evolutionary nature; a property that can play a critical role in the simulation of nonlinear dynamical systems [36][37][38]. In recent years, many applications of VO-FC to practical real-world problems have been explored including, but not limited to, the modeling of anomalous diffusion in complex structures with spatially and temporally varying properties [39], the response of nonlinear oscillators with spatially-varying constitutive laws for damping [37], nonlinear dynamics with contacts and hysteresis [38], and even complex control systems [40]. The interested reader can find a comprehensive review of the applications of VO-FC in [41].
In this study, we present a theoretical and computational framework based on VO fractional operators and capable of effectively capturing the many features of dynamic fracture in brittle and quasi-brittle solids. We will show how the many unique capabilities of this framework build directly on the several remarkable properties afforded by these mathematical operators. Dynamic fracture is a quintessential evolutionary nonlinear dynamic problem that involves the propagation of nonlinearities and discontinuities through a system, and VO fractional operators are uniquely equipped to model this complex class of dynamical problems. The VO framework presented in this paper builds upon the mathematical structure presented in [42], which focused on the modeling of the propagation of dislocations through lattices of particles using physics-informed order variations. More specifically, the VO model introduced in [42] leveraged an order variation law based on the relative displacements of particles within the lattice in order to capture the formation and annihilation of pair-wise bonds. The general strategy followed that outlined in [38,43] for physics-driven VO laws for discrete systems. The approach resulted in the formulation of evolutionary VO fractional differential equations capable of capturing the transition towards a nonlinear dynamic regime (associated with the motion of dislocations) without having to explicitly track the location of the dislocation. In this study, we extend this general approach to continuous systems by formulating a VO elastodynamic framework uniquely suited for the analysis of dynamic fracture and capable of detecting the formation and propagation of damage by means of a strain-driven order variation law. The introduction of VO operators in the continuum elastodynamic formulation allows the governing equations to evolve (from linear to nonlinear) and adapt (by capturing discontinuities) based on both the local response and the underlying damage mechanism, while eliminating the need for explicitly tracking the damage front. We will show that the resulting formulation is capable of capturing key features associated with the dynamic fracture mechanism such as roughening of the crack surface, crack instability, and crack branching without the need for any a priori assumptions or ad hoc criteria. Further, contrary to phase-field models, no additional partial differential equations are needed to predict the evolution of the damage field. Indeed, in the VO framework, the damage field evolves naturally, guided by the variation of the order of the fractional operators, which solely depends on the instantaneous response of the system. In the second part of this study, the VO dynamic fracture model is validated by applying it to the direct numerical simulation of three benchmark experiments available in the literature: 1) the Kalthoff-Winkler experiment, which involves the impact shear loading of a doubly notched specimen [44]; 2) the dynamic crack branching experiment [33]; and 3) the John-Shah experiment, which involves the impact loading of a pre-notched concrete slab [45].
Material and Methods
We briefly discuss the general strategy leading to the formulation of evolutionary governing equations based on VO Riemann-Liouville (VO-RL) derivatives of constants [38,43]. Then, we apply these operators to formulate an evolutionary elastodynamic framework suitable for the modeling of dynamic fracture. Some background and discussion of the fundamental properties of the VO-RL operators used in this study are provided in Supplementary Information (SI).
Evolutionary governing equations via VO-RL derivatives of constants: A particularly interesting property of fractional-order Riemann-Liouville operators stems from their behavior when applied, at fixed order, to a constant. It is found that this fractional-order derivative is not equal to zero, unless the order converges to an integer. While this is an unexpected and maybe even unsettling property of such operators, at least in view of classical integer-order calculus, we will show that this property has extremely valuable implications for modeling physical systems exhibiting highly nonlinear and discontinuous behavior. Mathematically, the RL derivative of a constant A_0 ∈ R to a constant fractional order α_0 ∈ R+ defined on the interval (a, t) ∈ R is given as [35] ₐD_t^(α_0) A_0 = A_0 (t − a)^(−α_0) / Γ(1 − α_0), where Γ(·) is the Gamma function. Note that, although apparently non-intuitive, this is merely an intrinsic property of the RL operator. The use of this property was originally outlined and extended to variable order in [38,43], where it was applied to the modeling of highly nonlinear mechanisms in dynamical systems. More specifically, the properties offered by the variable-order Riemann-Liouville (VO-RL) derivative of a constant create a unique opportunity to formulate governing equations in an evolutionary form. In the following, we briefly review these characteristics in order to lay the necessary foundation for the development of the VO elastodynamic formulation. Consider a function α(t) constructed, as in Eq. (2) [38], from a continuous real-valued function κ(t), where κ(t) is designed to capture the desired physical mechanism of interest and is the quantity producing the order variation. Specific details on the selection of this function in the context of fracture mechanics will be provided when addressing the VO dynamic fracture formulation. We emphasize that, while the characteristic function κ(t) introduced above is defined as a function of time t, the functional dependence can be extended to include any other dependent or independent variables depending on the specific physical problem. Further, κ_0 ∈ R+ is a scaling factor that allows calibrating the order variation to the scale of the characteristic response of the physical system. A detailed discussion of the procedure to determine the value of κ_0 is outlined in the SI along with an illustrative example. For a given κ_0, the limiting behavior of α(t) is given in Eq. (3). Now, we can indicate the VO-RL derivative of the constant A_0 to the order α(t) on the interval (a, t) as ₐD_t^(α(t)) A_0 or, in the interest of a more compact notation, as D^(α(t)) A_0. It appears that, when the VO-RL operator is applied to a constant under the conditions in Eq. (2), a discontinuous (switch-like) behavior can be captured simply following a change in sign of the function κ(t). It is exactly this switching behavior that can be exploited to simulate the occurrence of certain nonlinear and discontinuous dynamical properties of mechanical systems. More specifically, consider defining VO operators as part of a governing equation such that their variation can capture changes in the properties of the system such as, for example, a change in stiffness (e.g. bilinear stiffness) or the occurrence of geometric discontinuities (e.g. dislocations in a lattice or a crack in a continuum). In all these cases, the response of the system changes from initially linear to, potentially, highly nonlinear.
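To make the switch-like behavior concrete, the sketch below evaluates the RL derivative of a constant for an order that moves between 0 and 1 depending on the sign of κ. The smooth sigmoidal order law used here is only an illustrative stand-in for the paper's Eq. (2), not the exact law.

```python
# Numerical illustration of the switch-like behavior of the Riemann-Liouville
# (RL) derivative of a constant under a varying order. The sigmoidal order law
# below is an illustrative stand-in for Eq. (2), not the paper's exact law.
import numpy as np
from scipy.special import gamma

def rl_derivative_of_constant(A0, t, a, alpha):
    """RL derivative of the constant A0 on (a, t): A0 * (t - a)**(-alpha) / Gamma(1 - alpha).
    It equals A0 for alpha -> 0 and vanishes as alpha -> 1 (pole of the Gamma function)."""
    return A0 * (t - a) ** (-alpha) / gamma(1.0 - alpha)

def order(kappa, k0=1.0, sharpness=50.0):
    """Hypothetical smooth switch: alpha -> 1 for kappa > 0 and alpha -> 0 for kappa < 0."""
    return 1.0 / (1.0 + np.exp(-sharpness * kappa / k0))

A0, a = 1.0, 0.0
t = np.linspace(0.5, 2.0, 4)
for kappa in (-0.2, 0.2):                          # sign change of the characteristic function
    alpha = order(kappa)
    vals = rl_derivative_of_constant(A0, t, a, alpha)
    print(f"kappa = {kappa:+.1f}, alpha = {alpha:.5f}, D^alpha A0 = {np.round(vals, 5)}")
# kappa < 0: the operator returns ~A0 (the associated term stays active);
# kappa > 0: the operator returns ~0  (the term is effectively switched off).
```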
The onset of either type of nonlinearity or discontinuity results in an implicit reformulation of the underlying system dynamics. This change in the underlying dynamics can be captured in the order α(t) via the function κ(t). It immediately follows that a change in the order α(t) results in an implicit reformulation of the equations of motion following a change in the underlying physical mechanisms dominating the response of the system. This characteristic was exploited to formulate evolutionary equations modeling contact dynamics and hysteretic behavior [38], and the motion of edge dislocations in lattice structures [42]. In the present study, we extend this unique behavior of the VO-RL operator to simulate the initiation and propagation of cracks in solids. Such behavior is achieved by proper integration of the VO-RL operators in the elastodynamic formulation.
VO elastodynamic formulation: The strong form of the governing equation for a solid having a volume Ω (see Fig. (S3) in SI) is given in the well-known form ∇ · σ + f = ρ ∂²u/∂t², where σ denotes the stress field, u denotes the displacement field, f denotes the externally applied force, and ρ denotes the density of the solid. Bold face is used to indicate either vectors or tensors. The above equations of motion are subject to the boundary (BC) and initial (IC) conditions u = ū on ∂Ω_u, σ · n = t̄ on ∂Ω_t, u(x, 0) = u_0, and ∂u/∂t(x, 0) = v_0, where ∂Ω_u and ∂Ω_t denote the portions of the boundary where essential and natural boundary conditions are applied, respectively; ū and t̄ denote the externally applied displacement and traction at ∂Ω_u and ∂Ω_t, respectively; and u_0 and v_0 denote the displacement and velocity fields at t = 0. The stress developed in the medium upon damage is defined as σ = ψ(d) C : ε (Eq. (7)), where C is the classical fourth-order elasticity tensor and ε is the symmetric displacement-gradient strain tensor. ψ(d) is a degradation function of the damage variable d ∈ [0, 1] such that ∂ψ/∂d ≤ 0; this latter condition originates from the thermodynamic consideration that the degradation function must lead to a decrease in the elastic energy with an increase in damage. In this study, the damage variable d is defined such that d = 0 indicates the undamaged state, while d = 1 indicates a fully damaged state. We note that the same stress-damage-strain constitutive relation has also been adopted in several classical dynamic fracture formulations [6,29,30].
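As a concrete reading of the constitutive relation σ = ψ(d) C : ε, the sketch below assembles the plane-strain isotropic stiffness in Voigt form and degrades it by a generic ψ(d). The linear placeholder ψ(d) = 1 − d is an assumption of this sketch; the actual softening laws are introduced later in Eq. (12).

```python
# Sketch of the degraded constitutive relation sigma = psi(d) * C : eps for an
# isotropic solid in plane strain (Voigt notation). The linear degradation
# psi(d) = 1 - d is a placeholder; the actual softening laws appear in Eq. (12).
import numpy as np

def plane_strain_stiffness(E, nu):
    """Plane-strain isotropic stiffness matrix for [eps_xx, eps_yy, gamma_xy]."""
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return c * np.array([[1.0 - nu, nu,       0.0],
                         [nu,       1.0 - nu, 0.0],
                         [0.0,      0.0,      (1.0 - 2.0 * nu) / 2.0]])

def degraded_stress(eps, d, E, nu, psi=lambda d: 1.0 - d):
    """sigma = psi(d) * C @ eps, with psi(0) = 1, psi(1) = 0, dpsi/dd <= 0."""
    return psi(d) * plane_strain_stiffness(E, nu) @ eps

E, nu = 190e9, 0.3                     # maraging steel used later in the Kalthoff-Winkler example
eps = np.array([1e-3, 0.0, 0.0])       # simple uniaxial strain state
for d in (0.0, 0.5, 1.0):
    print(f"d = {d}: sigma = {np.round(degraded_stress(eps, d, E, nu) / 1e6, 1)} MPa")
```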
In the VO dynamic fracture formulation, we adopt a strain-based criterion to detect the onset of damage. More specifically, damage at a given point occurs when the maximum principal strain at that point exceeds a critical strain derived from the elastic strength of the material. The VO-RL formalism presented previously allows us to define a characteristic function κ(x, t) that detects the onset of damage following this strain-driven physical law. More specifically, we define a VO α(x, t) as in Eq. (8), where κ_0 is the previously introduced scaling factor, ε_u ∈ R+ is the material parameter defining the ultimate tensile strain limit governing the onset of damage, and ε(x, t) is the maximum principal strain that has occurred at the point x up to the instant t. More specifically, ε(x, t) = max over τ ∈ [0, t] of ε̃(x, τ) (Eq. (9)), where ε̃(x, τ) denotes the maximum principal strain component (i.e., the maximum of the eigen strain values ε̃_x(x, τ), ε̃_y(x, τ), ε̃_z(x, τ)) at the point x and at a given time instant τ. Recall that a change in the sign of the argument κ(x, t) within the exponential of the VO results in a reformulation of the underlying governing equations. Exploiting the previously described property of VO-RL operators and defining a physics-driven variation of the order according to Eq. (8), the damage variable can be written as in Eq. (10), where d_0 = 1 indicates the maximum possible damage. Before discussing the specific role of the two terms in Eq. (10), we explain the different parameters introduced in the equation. ε_R is defined as in Eq. (11), where l_t = 2EG_f/σ_u² is the characteristic material length for an isotropic solid having Young's modulus E, fracture energy G_f, and elastic strength σ_u [46]. l_f determines a characteristic physical dimension of the area within which the crack is localized and, in numerical implementations, it is directly related to the size of the elements used for the spatial discretization of the domain. In other terms, l_f dictates the width of the crack path at a given point, that is the distance perpendicular to the crack path at the same point within which the damage varies between its extreme values. Further, the parameter ε_R governs the damage evolution rate that determines the level of damage via term II (see also Fig. (S3) of SI). In order to guarantee the insensitivity of the results to the specific choice of the numerical mesh adopted, it is necessary that l_f < l_t [6,46]. The latter condition also follows from the fact that the size of the elements used to simulate the crack must be smaller than the characteristic material length for an accurate resolution of the crack path.
It follows from Eqs. (8) and (10) that d(x, t) = 0 for ε(x, t) ≤ ε_u and d(x, t) → 1 when ε(x, t) ≫ ε_u. Thus, when the maximum principal strain ε(x, t) exceeds the critical strain limit ε_u, damage is initiated at that particular point. The specific value of the damage variable is determined by the combined effects of the two terms in Eq. (10). While term I in Eq. (10) sets the value of the maximum damage, term II allows for an exponential interpolation of the damage between its extreme values (0 and 1) depending on the amount by which the maximum principal strain ε exceeds the critical strain ε_u, via the parameter ε_R. Note that the evolution of both terms is guided by the VO-RL derivatives. More specifically, the VO-RL operator allows detecting the onset of damage in the solid driven by the VO α(x, t). This leads to an automatic reformulation of the underlying governing equations via Eqs. (5) and (10) in order to account for the occurrence and evolution of damage. Remarkably, the resulting evolutionary VO model does not require any front tracking algorithm nor additional criteria to capture the characteristic features of the dynamic crack mechanism such as roughening and branching. This latter comment will become more evident when contrasting the above formulation with the numerical results presented below.
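The net effect of the two terms in Eq. (10) can be mimicked with a simple algebraic stand-in: a switch that activates once the strain history exceeds ε_u (the role played in the actual formulation by the VO-RL operator and the order of Eq. (8)) and an exponential interpolation toward d_0 governed by ε_R. The explicit expression below is an assumption for illustration only and should not be read as the paper's Eq. (10).

```python
# Simplified algebraic stand-in for the strain-driven damage law of Eq. (10).
# Term I (maximum damage) is mimicked by a hard switch at eps_u, whereas the
# actual formulation obtains it through the VO-RL operator; term II is an
# exponential interpolation between 0 and d0 at the rate set by eps_R.
# This explicit form and the numbers used are illustrative only.
import numpy as np

def damage(eps_history_max, eps_u, eps_R, d0=1.0):
    """Damage at a point from the maximum principal strain recorded up to time t."""
    if eps_history_max <= eps_u:
        return 0.0                                                  # strain below the critical limit
    return d0 * (1.0 - np.exp(-(eps_history_max - eps_u) / eps_R))  # -> d0 for eps >> eps_u

eps_u, eps_R = 4.4e-3, 1.0e-3          # illustrative values, not material data from the paper
for eps in (2e-3, 4.4e-3, 6e-3, 1e-2, 5e-2):
    print(f"eps = {eps:.1e} -> d = {damage(eps, eps_u, eps_R):.3f}")
```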
The VO dynamic fracture formalism deserves some additional remarks. First, note that the constitutive relation defined in Eq. (7) results in identical tensile and compressive fracture behaviour, which is not generally true when modeling the failure of brittle and quasi-brittle solids. Several researchers have captured the asymmetric tensile/compressive damage by performing a spectral decomposition of the strain energy density and by degrading only the positive strain energy [26,32]. In this study, we incorporate this asymmetric behavior via the maximum principal strain based damage criterion, which follows from the well-known Rankine criterion [30]. In other terms, as described previously, the crack is allowed to nucleate only when the maximum principal strain exceeds the critical tensile strain ε_u. This specific feature, ensured via the VO defined in Eq. (8), allows modeling the asymmetric damage behaviour. Further, the definition of the parameter ε in Eq. (9) based on its past history, along with the condition ḋ(x, t) ≥ 0, ensures irreversibility of the damage. More specifically, these conditions ensure that the crack surface, denoted by Γ_c, is monotonically growing, that is Γ_c(t_1) ⊆ Γ_c(t_2) ∀ t_1 < t_2. Additionally, the use of the strain-history based parameter ε leads to simpler numerical implementations, as it allows for an operator-split algorithm within a given time step, wherein the displacement field and the damage field are updated in a staggered manner. The same concept, albeit using a strain-energy based history variable, is often employed in phase-field models of dynamic fracture [26,28]. Following this staggered numerical implementation, the computation of the damage field is a purely algebraic operation and does not require the minimization of an additional potential function which, in the case of phase-field models, corresponds to the crack surface density function. The most immediate consequence is that the VO approach reduces the computational cost of dynamic fracture simulations when compared to phase-field models.
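The staggered (operator-split) update described above can be summarized by the toy loop below: within each explicit time step the displacement field is advanced with the damage frozen, and the damage is then updated algebraically from the new strain history. Every routine is a crude stand-in for a real finite-element implementation and is included only to show the structure of the algorithm.

```python
# Toy illustration of the staggered update used within one explicit time step:
# (1) advance the displacement field with the damage field held fixed,
# (2) update the damage field algebraically from the new strain history.
# Every routine below is a crude stand-in for a real FE implementation.
import numpy as np

def internal_force(u, d):                 # stand-in for the psi(d)-degraded internal force
    return (1.0 - d) * u

def principal_strain(u):                  # stand-in for the recovered maximum principal strain
    return np.abs(u)

def damage_update(eps_hist, eps_u=4.4e-3, eps_R=1.0e-3):
    # purely algebraic: no minimization problem is solved for the damage field
    return np.where(eps_hist > eps_u, 1.0 - np.exp(-(eps_hist - eps_u) / eps_R), 0.0)

n_nodes, dt, mass = 4, 1e-8, 1.0
u = np.zeros(n_nodes)
v = np.full(n_nodes, 5.0e3)               # arbitrary initial nodal velocities
d = np.zeros(n_nodes)
eps_hist = np.zeros(n_nodes)
for step in range(200):
    a = -internal_force(u, d) / mass      # explicit solve: damage frozen during this sub-step
    v += dt * a
    u += dt * v
    eps_hist = np.maximum(eps_hist, principal_strain(u))   # irreversible strain history
    d = damage_update(eps_hist)           # staggered, algebraic damage update
print(np.round(d, 3))
```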
Finally, note that the damage value d = 0 obtained for ε = ε_u indicates a zero crack-tip opening displacement. As ε increases, the damage increases, leading to an increase in the crack-tip opening displacement. Since the crack-tip opening displacement is directly linked to the strain value, it follows that the ratio of the instantaneous crack-tip opening displacement to the maximum crack-tip opening displacement is equal to the ratio of the instantaneous strain to the maximum strain (obtained for d = 1). Further, given the strain-based definition of the damage variable in Eq. (10), it follows that the ratio of the crack-tip displacement δ_c to the critical crack-tip displacement δ_0 can be given as the ratio of the damage parameters d/d_0. This formulation allows expressing different traction-separation laws (also called softening laws) directly in terms of the damage variable d, without the need for additional characteristic functions, similar to [6]. In this study, we use two different traction-separation laws expressed in terms of the damage variable in Eqs. (12a) and (12b): Equation (12a) is a linear law for brittle materials [29], while Eq. (12b) is the Cornelissen law [47] for concrete (a quasi-brittle material). The latter law was obtained experimentally in [47] and the coefficients were found to be η_1 = 3 and η_2 = 6.93. Note that, in both cases, ψ(d) is bounded so that ψ(d) ∈ [0, 1] for d ∈ [0, 1]. We highlight that the VO elastodynamic formulation described above does not account for contact conditions, such as those that occur when the free surfaces of a crack come into contact under compressive loads. Note that this is not a limitation of the methodology but merely a decision of the authors to focus this work on aspects concerning crack initiation and propagation. Indeed, the contact problem is typically not addressed in classical treatments of dynamic fracture. However, the VO formulation can easily account for contact dynamics by simply adding dedicated terms in the VO derivative. The case of contact via VO operators was previously treated by the authors in [43], albeit only for discrete systems.
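For reference, the two softening laws of Eq. (12) can be written directly as degradation functions of the damage variable. The linear form ψ(d) = 1 − d and the specific Cornelissen expression below (with η_1 = 3 and η_2 = 6.93, as stated above) are assumed from the cited references rather than copied from the paper's equations.

```python
# Degradation functions psi(d) corresponding to the two softening laws of
# Eq. (12). Both expressions are assumed from the cited references; each
# satisfies psi(0) = 1 and psi(1) = 0, so psi(d) stays within [0, 1].
import numpy as np

def psi_linear(d):
    """Linear softening law for brittle materials (assumed form of Eq. 12a)."""
    return 1.0 - d

def psi_cornelissen(d, eta1=3.0, eta2=6.93):
    """Cornelissen softening law for quasi-brittle materials such as concrete (assumed form of Eq. 12b)."""
    return (1.0 + (eta1 * d) ** 3) * np.exp(-eta2 * d) - d * (1.0 + eta1 ** 3) * np.exp(-eta2)

d = np.linspace(0.0, 1.0, 5)
print(np.round(psi_linear(d), 3))        # [1.    0.75  0.5   0.25  0.   ]
print(np.round(psi_cornelissen(d), 3))   # decays monotonically from 1.0 to 0.0
```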
Results
To demonstrate the accuracy and the robustness of the VO fracture mechanics framework, we apply it to numerical simulations of three classical benchmark experiments available in the literature. The first two examples are the Kalthoff-Winkler experiment [44] and the classical crack-branching experiment [33]; both pertain to the dynamic fracture of brittle solids. The last example consists of the mixed-mode fracture of a concrete slab under an impact load, analyzed experimentally in [45]. In all three examples, the numerical results were obtained using a plane-strain elastic model. The computational domain was discretized using uniform quadrilateral elements, and the dynamic solution was computed using an explicit Newmark solver. Further, a lumped mass matrix was used in the dynamic solver to suppress high-frequency elastic oscillations (noise) and to ensure conservation of energy. The time step (∆t) used in the dynamic solver was determined using the Courant-Friedrichs-Lewy (CFL) condition. For a more conservative scheme, we used ∆t = 0.9∆t_0 = 0.9h/c, where h denotes the size of an element within the mesh and c denotes the speed of compressional waves in the medium under consideration. Further details on the numerical implementation are provided in the SI. Videos of the growing crack front in the three benchmark cases are provided as supplementary multimedia information.
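As a concrete illustration of the time-step selection, the short sketch below evaluates the CFL bound ∆t = 0.9h/c. The plane-strain P-wave speed expression is a standard choice and an assumption on our part, since the text does not spell out which formula for c was used.

```python
import numpy as np

def cfl_time_step(E, nu, rho, h, safety=0.9):
    """Stable explicit time step from the CFL condition, dt = safety * h / c,
    where c is the speed of compressional (P) waves. The P-wave speed below is
    a standard expression; the paper only states that c is the compressional
    wave speed."""
    c = np.sqrt(E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu) * rho))
    return safety * h / c, c

# Example: maraging steel used in the Kalthoff-Winkler simulation, h = 0.25 mm
dt, c = cfl_time_step(E=190e9, nu=0.3, rho=8000.0, h=0.25e-3)
print(f"c = {c:.0f} m/s, dt = {dt:.2e} s")
```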
Kalthoff-Winkler experiment: The classical Kalthoff-Winkler experiment [44] consists of an unrestrained doubly notched specimen subject to an impact load, as illustrated in Fig. (1a). Following the original experimental setup [44], the specimen was made of maraging steel with the following material properties: E = 190 GPa, σ_u = 844 MPa, G_f = 22.2 N/mm, ν = 0.3, and ρ = 8000 kg/m³ [6]. The characteristic material length corresponding to these material properties is l_t = 0.012 m. It was observed experimentally that, for lower strain rates (v_0 = 16.5 m/s), brittle failure occurs and cracks nucleate from the edges of both notches at an angle of about 70° with respect to the horizontal axis (which coincides with the line of symmetry). In the following, we numerically analyze this benchmark problem using the VO dynamic fracture model.
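For reference, the characteristic length values quoted throughout the Results section are consistent with the closed form l_t = 2EG_f/σ_u². This expression is inferred by us from the quoted numbers rather than stated in this excerpt, so the sketch below should be read only as a plausibility check.

```python
def characteristic_length(E, sigma_u, G_f):
    """Characteristic material length. The closed form l_t = 2*E*G_f/sigma_u**2
    is an assumption on our part, but it reproduces every value quoted in the text."""
    return 2.0 * E * G_f / sigma_u ** 2

print(characteristic_length(E=190e9, sigma_u=844e6, G_f=22.2e3))  # ~0.012 m (maraging steel)
print(characteristic_length(E=32e9,  sigma_u=3.1e6, G_f=3.0))     # ~0.02 m  (glass-like material)
print(characteristic_length(E=34e9,  sigma_u=1.0e6, G_f=31.1))    # ~2.1 m   (concrete)
```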
Given the symmetry of both the specimen and the test conditions in the original experiment, we modeled only the upper half of the specimen in order to reduce the computational cost. The vertical component of the displacement field was set to zero (u_y = 0) along the line of symmetry indicated in Fig. (1a) to impose the symmetric boundary conditions. To model the impact load, a velocity boundary condition was applied at the nodes corresponding to the impact zone, and the impulse was kept constant throughout the dynamic simulation. Further, the linear softening law (see Eq. (12a)) was used to model the degradation of the elastic energy upon damage development. The damage pattern generated using the VO model is presented in Figs. (1b,1c) for two different mesh configurations. The element size is h = 0.5 mm in Fig. (1b) and h = 0.25 mm in Fig. (1c). As evident from Figs. (1b,1c), although a sharper crack is obtained with the finer mesh, the overall crack propagation features are insensitive to the specific mesh configuration. For both mesh configurations, the crack develops in a direction that forms approximately 70° with the horizontal axis. The average angle from the initial crack tip to the point where the crack intersects the top boundary is 72°, which matches well the experimental result in [44]. Further, the crack intersects the top boundary of the specimen at a time instant of 75 µs for both mesh configurations; this time corresponds to an average crack propagation velocity of c = 1064 m/s. Note that c < 0.6c_R, where c_R (= 2745 m/s) denotes the Rayleigh wave speed in the medium, as observed commonly in experiments [33]. For completeness, we highlight that, unlike the results presented in [5,6] and obtained using different FEM and DCM models, we did not observe any spurious cracks developing from the bottom right corner and travelling towards the tip of the notch.
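The crack-speed check against the Rayleigh wave speed can be reproduced approximately as follows. Viktorov's approximation is used here purely for convenience (an assumption on our part); the value c_R = 2745 m/s quoted in the text presumably comes from the exact Rayleigh equation, and the approximation lands within a few percent of it.

```python
import numpy as np

def rayleigh_speed(E, nu, rho):
    """Approximate Rayleigh wave speed using Viktorov's formula
    c_R ~= c_s * (0.862 + 1.14*nu) / (1 + nu), with c_s the shear wave speed."""
    c_s = np.sqrt(E / (2.0 * (1.0 + nu) * rho))
    return c_s * (0.862 + 1.14 * nu) / (1.0 + nu)

c_R = rayleigh_speed(E=190e9, nu=0.3, rho=8000.0)
c_avg = 1064.0  # average crack speed reported for the Kalthoff-Winkler simulation
print(f"c_R ~ {c_R:.0f} m/s, crack speed / c_R = {c_avg / c_R:.2f}")  # well below 0.6
```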
Dynamic crack branching: In this benchmark problem, we model a pre-cracked specimen loaded dynamically in tension, as illustrated in Fig. (2a). This problem has been widely adopted in the literature to study dynamic crack branching, both experimentally [33] and numerically [4-6,30,32]. The specific material parameters used in this simulation were E = 32 GPa, σ_u = 3.1 MPa, G_f = 3 J/m², ν = 0.2, and ρ = 2450 kg/m³ [6], which correspond to a glass-type material. The associated characteristic length is l_t = 0.02 m. Further, given the brittle nature of the material in this experiment, the linear softening law was used to account for the degradation of the elastic strain energy upon damage. A uniform traction of magnitude σ_0 = 1 MPa is applied instantaneously to the top and bottom surfaces of the rectangular specimen at the initial time step and is held constant throughout the simulation. All other surfaces of the specimen are left free. This loading condition is such that crack branching occurs in the specimen. The simulations obtained via the VO formulation are presented in Figs. (2b-2d). The results obtained via the VO model lead to the following remarks. First, as discussed in [33], upon the onset of instability several microcracks develop from the principal propagating crack branch and interact with one another simultaneously. This process ultimately leads to a roughening of the crack surface. As evident from the inset in Fig. (2b), the VO formulation is able to capture in great detail the roughening of the crack surface due to the emerging microbranches. Similarly to the experiments conducted in [33], these microbranches vary in size, and the larger ones develop into full-fledged branches. The remaining microbranches are arrested as a result of dynamic interaction with the growing ones. This set of results also highlights some of the advantages of the VO model over phase-field models, which invariably show an artificial widening of the damaged area at the point where the instability occurs [30,32]. Also remarkable is that, unlike classical dynamic fracture models that capture two branches [5,6,30,32], the VO model predicts four branches nucleating from the point of instability. This result closely matches the experimental observations in [33], where it was demonstrated that the number of branches can vary between two and four. Further, we emphasize that, unlike classical discrete approaches to dynamic fracture [2,3,5,8], we did not impose any additional criteria within the VO model to facilitate the crack branching behavior; the branching and roughening occur naturally as a result of the local response field. Similarly to phase-field (variational) approaches, the VO dynamic fracture model leads to full crack identification without the support of additional branching conditions. Finally, as shown in [33], the number of branches exceeds four when also counting unsuccessful (i.e. not fully developed) branches. This feature is also captured by the VO model, wherein a number of unsuccessful branches nucleate from the principal branches.
John-Shah experiment: Another benchmark problem used to test the performance of the VO fracture mechanics framework involves the three-point bending of concrete beams subject to impact loading [45]. The geometry and boundary conditions for the specimen involved in this test are illustrated in Fig. (3a). In this classical benchmark problem, a pre-built notch (offset from the mid-span axis of symmetry) is used to study mixed-mode fracture in concrete beams. It was observed by John and Shah that the parameter γ = l_1/l_2 (see Fig. (3a)), which controls the placement of the notch, plays a critical role in determining which failure mode and damage pattern occur in the specimen after the impact. Indeed, there exists a critical value γ_c such that, for γ < γ_c, the crack nucleates from the notch tip while, for γ > γ_c, the crack nucleates from the mid-span. In addition, there exists an intermediate value of γ, close to and less than γ_c, for which both cracks develop. The experimentally determined value was γ_c = 0.77 [45].
We simulated this benchmark problem using the VO framework. The specific material parameters used in the simulation were E = 34 GPa, σ_u = 1 MPa, G_f = 31.1 J/m², ν = 0.2, and ρ = 2400 kg/m³. The degradation of the strain energy upon damage was modeled using the Cornelissen softening law for concrete (see Eq. (12b)). The impact velocity is given by a linear ramp [17] with v_0 = 0.06 m/s and t_0 = 196 µs. Using the above material properties, geometry, and loading conditions, we simulated the dynamic three-point bending test for three different values of γ ∈ {0.72, 0.76, 0.79}, corresponding to three different notch locations. In all three cases, the computational domain was uniformly discretized using elements of size h = 0.635 mm. Note that the characteristic material length corresponding to the material properties of the concrete specimen is l_t = 2.1 m. We merely observe that, for quasi-brittle materials like concrete, the characteristic length scale is generally too large when compared to the dimensions of laboratory specimens; hence, the condition l_f < l_t is virtually meaningless for quasi-brittle materials. The crack patterns obtained for the three cases are presented in Figs. (3b-3d).
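The ramp equation itself is not reproduced in this excerpt; the minimal sketch below shows one common choice (linear rise to v_0 at t_0, constant afterwards) and should be read as an assumption rather than the authors' exact expression.

```python
def impact_velocity(t, v0=0.06, t0=196e-6):
    """Linear ramp for the impact velocity in the John-Shah simulation.
    Assumed form: v(t) = v0 * t / t0 for t < t0, then v(t) = v0."""
    return v0 * t / t0 if t < t0 else v0

for t in (0.0, 98e-6, 196e-6, 400e-6):
    print(f"t = {t*1e6:6.1f} us -> v = {impact_velocity(t):.4f} m/s")
```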
Overall, the results of the three numerical experiments compare very well with the experimental results. In particular, as in the experiments, the crack nucleates from the tip of the notch for γ = 0.72 < γ_c and from the mid-span of the beam for γ = 0.79 > γ_c, and propagates towards the top surface of the beam. Further, similarly to the experiment conducted in [45], a transition state is observed for γ = 0.76, wherein cracks propagate from both the notch tip and the mid-span towards the top surface. It follows that the estimate of the critical notch location γ_c obtained via the VO dynamic fracture model lies in (0.76, 0.79), in good agreement with the experimental value γ_c = 0.77.
Conclusions
This study presented a novel elastodynamic formulation based on variable-order fractional operators and capable of providing accurate estimates of dynamic fracture in brittle and quasi-brittle solids. From a mathematical perspective, the peculiar properties of the VO-RL operator enable capturing the behavior of highly nonlinear systems with evolving discontinuities, such as those involved in the nucleation and propagation of cracks in solids. We showed that an apparently unsettling property of the RL operator, namely the non-vanishing derivative of a constant, can have very useful implications for modeling dynamic fracture. Furthermore, the ability of VO operators to update their order as a function of either dependent or independent variables results in governing equations that can evolve in real time to capture growing cracks without requiring modifications to the fundamental governing equations. Even more remarkable, and certainly in stark contrast with more traditional approaches to dynamic fracture, is the fact that VO governing equations do not require a priori assumptions or additional conditions to detect characteristic aspects of dynamic fracture such as crack nucleation, crack surface roughening, crack instability, and branching. In other terms, the nonlinear and discontinuous dynamic behavior associated with fracture naturally emerges from the instantaneous response of the system. Further, given the many recent advances in the formulation of fractional-order mechanics as a comprehensive approach to nonlocal elasticity, it can be envisioned that the present VO elastodynamic framework could be easily integrated in a fully fractional formulation, hence leading to a powerful tool for the dynamic fracture analysis of nonlocal media. | 8,629.6 | 2020-08-16T00:00:00.000 | [
"Physics"
] |
ESSENTIAL SUPREMUM NORM DIFFERENTIABILITY
The points of Gateaux and Fréchet differentiability in L∞(μ,X) are obtained, where (Ω,∑,μ) is a finite measure space and X is a real Banach space. An application of these results is given to the space B(L1(μ,ℝ),X) of all bounded linear operators from L1(μ,ℝ) into X.
INTRODUCTION.
Let m be the restriction of Lebesgue measure to [0,1] and L∞(m,ℝ) the Banach space of all measurable, essentially bounded, real-valued functions on [0,1] equipped with the norm ‖f‖ = ess sup {|f(t)|: t ∈ [0,1]} (as usual, identifying functions that agree a.e. on [0,1]). In [4], Mazur proved that given any f ∈ L∞(m,ℝ), f ≠ 0, there exists a g ∈ L∞(m,ℝ) such that lim_{λ→0} (‖f + λg‖ − ‖f‖)/λ does not exist. In other words, the closed unit ball in L∞(m,ℝ) has no smooth points.
In this note, we show that an analogous result holds for L∞(μ,X), the space of μ-measurable, essentially bounded functions whose values lie in a Banach space X, provided that the underlying measure space (Ω,∑,μ) is non-atomic. We then obtain a complete description of the smooth points of L∞(μ,X) in the general case. We show, in fact, that f is a smooth point of L∞(μ,X) if and only if f achieves its norm on a unique atom for μ and its (μ-a.e. constant) value on this atom is a smooth point of X. These results are then applied to the space of representable operators from L1(μ,ℝ) into a Banach space X when X has the Radon-Nikodým property with respect to μ.
PRELIMINARIES
Throughout this note, X denotes a real Banach space with dual X*. A point x ∈ X \ {0} is a smooth point of X if there exists a unique φ ∈ X* with ‖φ‖ = 1 such that φ(x) = ‖x‖. The norm function on X is Gateaux differentiable at a non-zero x ∈ X if there exists φ ∈ X* such that lim_{λ→0} (‖x + λh‖ − ‖x‖)/λ − φ(h) = 0 (*) for all h ∈ X. The functional φ is the Gateaux derivative of the norm at x ∈ X. Mazur [4] has shown that the following are equivalent: (i) x is a smooth point of X; (ii) lim_{λ→0} (‖x + λh‖ − ‖x‖)/λ exists for all h ∈ X; (iii) the norm function on X is Gateaux differentiable at x. The norm function on X is Fréchet differentiable at a non-zero x ∈ X if there exists φ ∈ X* such that the limit in (*) holds uniformly for h in the unit ball of X. Of course, Fréchet differentiability at a point implies Gateaux differentiability at the point.
Let (Ω,∑,μ) denote a finite measure space. A mapping f: Ω → X is called measurable (or strongly measurable) if (i) f⁻¹(V) ∈ ∑ for each open set V ⊆ X and (ii) f is essentially separably valued; that is, there exists a set N ∈ ∑ with μ(N) = 0 and a countable set H ⊆ X such that f(Ω \ N) is contained in the closure of H. The Lebesgue-Bochner function space L∞(μ,X) is the real vector space of all μ-measurable, essentially bounded, X-valued functions defined on Ω. L∞(μ,X) is a real Banach space when equipped with the norm ‖f‖ = ess sup {‖f(ω)‖: ω ∈ Ω} (as usual, identifying functions which agree μ-a.e.). A set A ∈ ∑ is an atom for the measure μ if and only if μ(A) > 0 and for any B ∈ ∑ with B ⊆ A either μ(B) = 0 or μ(B) = μ(A). The measure space (Ω,∑,μ) is called non-atomic if there are no atoms for μ in ∑ and purely atomic if Ω can be expressed as a union of atoms for μ. We will write Ω = Ω_c ∪ Ω_d with Ω_c, Ω_d ∈ ∑ for the (essentially unique) decomposition of Ω into its non-atomic and purely atomic parts. Since μ is a finite measure, there exists an at most countable pairwise disjoint collection {A_i: i ∈ I} of atoms for μ such that Ω_d = ∪_{i∈I} A_i.
We note that if A is an atom for μ and f ∈ L_p(μ,X) for 1 ≤ p ≤ ∞, then f is μ-a.e. constant on A. We will need the following lemmas in the discussion of the smooth points of L∞(μ,X).
LEMMA 2.1: If X_1, X_2, ..., X_n are Banach spaces, then (x_1, x_2, ..., x_n) is a smooth point of (X_1 ⊕ X_2 ⊕ ... ⊕ X_n)_∞ if and only if there exists a j_0 ≤ n such that (i) ‖x_{j_0}‖ > ‖x_j‖ for j ≠ j_0 and (ii) x_{j_0} is a smooth point of X_{j_0}.
LEMMA 2.2: If (Ω_d, ∑_d, μ_d) and (Ω_c, ∑_c, μ_c) are the purely atomic and non-atomic measure spaces, respectively, in the decomposition of Ω, then L∞(μ,X) is isometrically isomorphic to (L∞(μ_d,X) ⊕ L∞(μ_c,X))_∞.
The proof of the second lemma is routine, while the proof of the first lemma uses the fact that the dual of (X_1 ⊕ X_2 ⊕ ... ⊕ X_n)_∞ is isometrically isomorphic to (X_1* ⊕ X_2* ⊕ ... ⊕ X_n*)_1; see [3]. The next two lemmas are straightforward generalizations of results in Köthe [2]; we sketch the proof of the first one.
LEMMA 2.3: Let X be a Banach space and let ℓ_∞(X) denote the space of bounded sequences in X with the supremum norm. If x = {x_n}_{n≥1} ∈ ℓ_∞(X), x ≠ 0, then x is a smooth point of ℓ_∞(X) if and only if there exists a positive integer n_0 such that (i) ‖x_{n_0}‖ > sup {‖x_n‖: n ≠ n_0} and (ii) x_{n_0} is a smooth point of X.
PROOF.
Let x = {x_n}_{n≥1} ∈ ℓ_∞(X) be a smooth point; we may assume that ‖x‖ = sup_{n≥1} ‖x_n‖ = 1. If there exists a subsequence {x_{n_k}}_{k≥1} such that lim_{k→∞} ‖x_{n_k}‖ = 1, we can demonstrate the existence of distinct elements of ℓ_∞(X)* which support the unit ball at x by the following modification of the argument given in Köthe [2] for ℓ_∞(ℝ). We consider the disjoint subsequences {x_{n_{2j}}}_{j≥1} and {x_{n_{2j−1}}}_{j≥1}. For each j ≥ 1, let φ_j and ψ_j be elements of X* such that ‖φ_j‖ = ‖ψ_j‖ = 1 with φ_j(x_{n_{2j}}) = ‖x_{n_{2j}}‖ and ψ_j(x_{n_{2j−1}}) = ‖x_{n_{2j−1}}‖. Define Φ_j and Ψ_j on ℓ_∞(X) by Φ_j(y) = φ_j(y_{n_{2j}}) and Ψ_j(y) = ψ_j(y_{n_{2j−1}}) for all y = {y_n}_{n≥1} ∈ ℓ_∞(X) and j ≥ 1; then Φ_j, Ψ_j ∈ ℓ_∞(X)* and ‖Φ_j‖ = ‖Ψ_j‖ = 1 for all j.
Let Φ and Ψ be w*-accumulation points of the sequences {Φ_j}_{j≥1} and {Ψ_j}_{j≥1}, respectively; then, by construction, Φ and Ψ are distinct norm-one functionals with Φ(x) = Ψ(x) = ‖x‖. This contradicts the fact that x is a smooth point of ℓ_∞(X). Thus, we have shown that if x = {x_n}_{n≥1} is a smooth point of ℓ_∞(X), then lim sup_n ‖x_n‖ < ‖x‖, and therefore there must exist a positive integer n_0 such that ‖x_{n_0}‖ = ‖x‖. If there exists another integer m_0 ≠ n_0 with ‖x_{m_0}‖ = ‖x‖, let φ, ψ ∈ X* with ‖φ‖ = ‖ψ‖ = 1 and φ(x_{n_0}) = ψ(x_{m_0}) = ‖x‖. Now define Φ, Ψ ∈ ℓ_∞(X)* by Φ(y) = φ(y_{n_0}) and Ψ(y) = ψ(y_{m_0}) for y = {y_n}_{n≥1} ∈ ℓ_∞(X); then Φ and Ψ are distinct support functionals to the ball in ℓ_∞(X) at x. Again, a contradiction. We have established that if x is a smooth point of ℓ_∞(X), then (i) must hold. A similar argument shows that (ii) must hold as well.
Conversely, if x = {x_n}_{n≥1} ∈ ℓ_∞(X) and (i) and (ii) hold, then for any y = {y_n}_{n≥1} ∈ ℓ_∞(X), y ≠ 0, we have ‖x + λy‖ = ‖x_{n_0} + λy_{n_0}‖ for all λ ∈ ℝ satisfying |λ| ≤ (‖x‖ − sup{‖x_n‖: n ≠ n_0})/(2‖y‖). Therefore, lim_{λ→0} (‖x + λy‖ − ‖x‖)/λ = lim_{λ→0} (‖x_{n_0} + λy_{n_0}‖ − ‖x_{n_0}‖)/λ, and the latter limit exists by (ii); thus, x is a smooth point of ℓ_∞(X). This completes the proof of the lemma.
An argument similar to the above gives the following:
LEMMA 2.4: Let (Ω,∑,μ) be a finite measure space which is purely non-atomic, and let X be a real Banach space; then L∞(μ,X) has no smooth points.
MAIN RESULT
In this section, we characterize the smooth points of the space L∞(μ,X).
THEOREM 3.1: Let (Ω,∑,μ) be a finite measure space, X a Banach space, and f ∈ L∞(μ,X) with f ≠ 0; then f is a smooth point of L∞(μ,X) if and only if there exists an atom A_0 for μ such that (i) ‖f‖ > ess sup {‖f(ω)‖: ω ∈ Ω \ A_0} and (ii) x_0 is a smooth point of X, where x_0 is the essential value of f on A_0.
PROOF.
Suppose f ∈ L∞(μ,X), f ≠ 0, is a smooth point of L∞(μ,X); then Lemma 2.4 implies that ∑ contains at least one atom for μ. Let Ω = Ω_c ∪ Ω_d be the decomposition of Ω into its non-atomic and purely atomic parts. Since, by Lemma 2.2, L∞(μ,X) is isometrically isomorphic to (L∞(μ_c,X) ⊕ L∞(μ_d,X))_∞, Lemma 2.1 and the fact that f is a smooth point of L∞(μ,X) imply that either (1) ‖f|Ω_c‖ > ess sup {‖f(ω)‖: ω ∈ Ω_d} and f|Ω_c is a smooth point of L∞(μ_c,X), or (2) ‖f|Ω_d‖ > ess sup {‖f(ω)‖: ω ∈ Ω_c} and f|Ω_d is a smooth point of L∞(μ_d,X). Now, case (1) is ruled out by Lemma 2.4, since (Ω_c,∑_c,μ_c) is a finite non-atomic measure space. Therefore, case (2) holds.
Let Ω_d = ∪_{i∈I} A_i, where {A_i: i ∈ I} is a pairwise disjoint collection of atoms for μ; since μ is finite, I is either finite or countably infinite.
If I is finite, then L∞(μ_d,X) is isometrically isomorphic to (X_1 ⊕ X_2 ⊕ ... ⊕ X_n)_∞ with X_i = X for i = 1, 2, ..., n, while if I is countably infinite, then L∞(μ_d,X) is isometrically isomorphic to ℓ_∞(X). In either case, it is easily seen (from Lemma 2.1 or Lemma 2.3) that there exists an atom A_0 for μ such that (i) ‖f‖ > ess sup {‖f(ω)‖: ω ∈ Ω \ A_0} and (ii) x_0 is a smooth point of X, where x_0 is the essential value of f on A_0.
X for l, 2 n while if I is countably infinite, then L (d,X) is isometrically isomorphic to (X) In either case, it is easily seen (from Lemma 2.1 or Lena 2.3) that there exists an atom A 0 for with f > up f(> n A0 =d (ii) x 0 is a smooth point of X where x 0 is the essential value of f on A 0 Conversely, suppose that f 8 L(,X) and there exists an atom A 0 for Z such that (i) and (ii) hold.Let m0 AO with f(mo x 0 then from (i) we have lf[ f(0 Now, f and g are constant a.e. on A 0 so there exists an 01 A 0 such that f(w) + g(0) f(o + %g(la.e. on A 0 and hence I I f + gll llf( O) + g(l)li when 0 li < 2.. Therefore, llf( o) + x()lllira hf + xgll-ilf!llira and the latter limit eodss since f(mo is a smooth point of X hence f is a mooth point of Loo(,X) This completes the proof of the Theorem.
COROLLARY 3.2: Let (Ω,∑,μ) be a finite measure space, X a Banach space, and f ∈ L∞(μ,X) with f ≠ 0; then the norm function on L∞(μ,X) is Fréchet differentiable at f if and only if there exists an atom A_0 for μ such that (i) ‖f‖ > ess sup {‖f(ω)‖: ω ∈ Ω \ A_0} and (ii) the norm function on X is Fréchet differentiable at x_0, where x_0 is the essential value of f on A_0. This follows immediately from the proof of Theorem 3.1.
4. REPRESENTABLE OPERATORS ON L1(μ,ℝ)
If X is a Banach space and (Ω,∑,μ) is a finite measure space, then X is said to have the Radon-Nikodým property with respect to μ if and only if for every countably additive X-valued measure m: ∑ → X which is of bounded variation and absolutely continuous with respect to μ there exists a g ∈ L1(μ,X) such that m(E) = ∫_E g(ω) dμ(ω) for E ∈ ∑. A bounded linear operator T: L1(μ,ℝ) → X is said to be representable if and only if there exists a g ∈ L∞(μ,X) such that T(f) = ∫_Ω f(ω)g(ω) dμ(ω) for all f ∈ L1(μ,ℝ). Let B(L1(μ,ℝ),X) denote the Banach space of all bounded linear operators from L1(μ,ℝ) into X. For each g ∈ L∞(μ,X), define ρ(g) ∈ B(L1(μ,ℝ),X) by
ρ(g)(f) = ∫_Ω f(ω)g(ω) dμ(ω), f ∈ L1(μ,ℝ). It follows from the results in Diestel and Uhl [1, p. 63] that if X has the Radon-Nikodým property with respect to μ, then ρ is a linear isometry of L∞(μ,X) onto B(L1(μ,ℝ),X). Using this fact and Theorem 3.1, we get the following characterization of the points of Gateaux and Fréchet differentiability of the norm function on B(L1(μ,ℝ),X).
THEOREM 4.1: Let X be a real Banach space and (Ω,∑,μ) a finite measure space such that X has the Radon-Nikodým property with respect to μ, and let T ∈ B(L1(μ,ℝ),X) with T ≠ 0. Then the norm function on B(L1(μ,ℝ),X) is Gateaux (Fréchet) differentiable at T if and only if there exists an atom A_0 for μ, with 0 < μ(A_0) < μ(Ω), such that (i) ‖T‖ = (1/μ(A_0))‖T(χ_{A_0})‖ and this value strictly exceeds the norm of the restriction of T to L1(Ω \ A_0, μ), and (ii) T(χ_{A_0}) is a point of Gateaux (Fréchet) differentiability of the norm of X. | 2,948.4 | 1985-01-01T00:00:00.000 | [
"Mathematics"
] |
Continuous wavelet estimation for multivariate fractional Brownian motion
In this paper, we propose a method that uses continuous wavelets to study multivariate fractional Brownian motion through the variance of the transformed random process, in order to obtain an efficient estimate of the Hurst exponent via eigenvalue regression of the covariance matrix. Simulation experiments show that the proposed estimator has low bias, but its variance, and correspondingly the MASE, increases as the signal changes from short to long memory. The estimation is carried out by computing the eigenvalues of the variance-covariance matrix of the Meyer detail coefficients.
Introduction
Fractional Brownian motion (FBM) provides an appropriate modeling framework for non-stationary, self-similar stochastic processes with stationary increments. It has been widely used to model random phenomena in different research fields. On the other hand, wavelet transforms provide a regularized differentiation of the processes, have a filter structure in perfect adequacy with 1/f-type spectral behavior, and may eliminate long-range dependence properties if the analyzing wavelet is properly chosen. Thus, studying multivariate fractional signals through the lens of the wavelet is natural, and we expect that it will also be useful in revealing the interaction structure between the components of the FBM.
Fractional Brownian motion was first defined within Hilbert space by Kolmogorov in 1940, who called it the Wiener helix; it was studied more broadly by Yaglom in 1958, but the designation belongs to Mandelbrot and Van Ness (1968), who gave the stochastic integral representation of the process. Its standard form corresponds to a Hurst parameter of 0.5, of which fractional Brownian motion is a generalization; unlike the standard case, the increments are not necessarily independent. The process is continuous in time, with zero mean and variance-covariance equal to: The problem we seek to solve in this paper is to develop a method that can be used to model and study m-th order fractional Brownian motion using Meyer's continuous wavelets. The proposed procedure relies on finding the eigenvalues of the variance-covariance matrix of the detail coefficients of the fractional Brownian motion. The performance of this estimator was validated through a simulation study using the mean average square error (MASE).
The paper is structured as follows: in Section 2, the multivariate fractional Brownian motion is presented; in Section 3, we give an overview of the continuous wavelet transform; in Section 4, the proposed method is described; in Section 5, the simulation study is conducted; and finally, Section 6 presents the conclusions.
Multivariate Fractional Brownian motion (m-FBM)
Multivariate formulations of fractional Brownian motion are numerous. Ayache et al. (2002) proposed the Brownian sheet, which requires the estimation of a complete matrix of parameters and was studied from the wavelet perspective by Wu and Ding (2015); a similar method was followed by Abry et al. (2000).
In 2015 and 2018, Abry and Didier studied the operator fractional Brownian motion, in which the scale parameter is estimated in a semi-diagonal (Jordan form) matrix formulation. The field remains open for researchers to delve deeper in this direction in order to arrive at better estimates that would assist in further development and modernization.
In this paper we consider the multivariate fractional Brownian motion (m-FBM) proposed by Perrin in (2001), defined by subtracting the expansion of the kernel (t − u)^{H−1/2} up to degree m, so that the equation is as follows: The subtraction ensures that the scaling parameter H remains within the range [m − 1, m] and that B_m^H(0) = 0. It should also be noted that this function has m − 1 derivatives that satisfy the equation: Also, the covariance of this function is Where These equations illustrate the non-stationarity and self-similarity of the multivariate fractional Brownian motion. The fractional Gaussian noise can be defined as follows And the covariance function for it Perrin showed that this covariance is stationary and converges asymptotically at the order |τ|^{2H−2m} as τ → ∞, and that the stochastic process is antipersistent, meaning it has short memory, when m − 1 < H < m − 1/2, while for m − 1/2 < H < m it has long memory.
These multivariate configurations of fractional Brownian motion lead us to modify the slope value in the estimation method proposed below by adding m − 1 to the estimated Hurst exponent; for example, in the bivariate case, if the estimated exponent is 0.8 the result will be 1.8.
From the above definitions, we can construct a model assuming a set of observations y(t), where t = 1, ..., T, in which the random process B_m^H(t) is contaminated with additive noise, so the model is as follows: Here y(t) represents the dependent variable under study, B_m^H(t) is a multivariate fractional Brownian motion contaminated by Gaussian additive noise ε_t with zero mean and variance σ_ε², and the aim is to estimate the Hurst parameter in the presence of this noise.
Continuous Wavelets Transform
The general formula for the continuous wavelet transform (CWT) (Mallat, 2009) is: It consists of the inner product of the analyzed function and the scaled, translated wavelet, sometimes called the kernel of the transform, on which the transformation depends entirely; in the Fourier transform, for example, the transform function is e^{−iωt}. The translation k plays the role of time, and the scale s is the inverse of the frequency, meaning that high scales correspond to low frequencies while low scales correspond to high frequencies. Since the scale is variable, the factor |s|^{−1/2} is required to keep the wavelet normalized, so that the energy of the wavelet equals one, ‖ψ‖ = 1, whatever the value of the scale; note that the scale is always greater than zero, because negative scaling is undefined. The wavelet function appropriate for the data under study determines the so-called type of wavelet. The wavelet function ψ(t), or mother wavelet, is so named because it acts as a prototype from which all the windows used in processing the signal are generated; it represents the dilation or expansion of the signal at high frequencies and is also called the primary wavelet. The scale function ϕ(t), or father wavelet, acts as a smooth slope for the low frequencies and must be orthogonal to ensure that the series energy remains constant and is not affected by the translation of the data. Decomposing these two functions (father and mother wavelets) yields the so-called basis of wavelets, in the sense that all other windows generated in the wavelet transform are built from these two. The scale, as a mathematical operation, dilates or compresses the signal. It is worth mentioning that the translation term k acts as a shift, placing a position on the time axis, while the scale s places a position on the frequency axis; by evaluating the transformation over a grid of values of k and s and aggregating the results, a scale-translation representation appears, which is the equivalent of a time-frequency representation. This guides the choice of a particular wavelet function that best represents the data under study, since this choice determines the shape and characteristics of the wavelet transform (Debnath and Ahmad Shah, 2015).
The inverse of the continuous wavelet transform is: Where the admissibility constant is defined as: in terms of the Fourier transform of the mother wavelet ψ(t). This equation is also called the admissibility condition, which implies that the Fourier transform vanishes at zero frequency, i.e., This means that wavelets must have a bandpass-like spectrum and that the average value of the wavelet in the time domain must be zero. The most used type of CWT, especially in adaptive filters, fractal random fields, and multi-fault classification, is the Meyer wavelet, because it is orthogonal, infinitely differentiable with infinite support, and suited to multiresolution analysis.
Where, for instance, other choices can be made. Many implementations of this auxiliary function exist; one standard choice adopts ν(x) = 35x⁴ − 84x⁵ + 70x⁶ − 20x⁷. In order to evaluate the corresponding waveforms of these equations in the time domain, denoted by Φ(t) and Ψ(t), we use the inverse Fourier transform, resulting in:
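A small numerical sketch of the auxiliary polynomial quoted above is given below; it only checks the end-point values and the symmetry ν(x) + ν(1 − x) = 1, and does not attempt to reconstruct the full Meyer wavelet expressions that are omitted from this excerpt.

```python
import numpy as np

def meyer_nu(x):
    """Auxiliary function nu(x) = 35x^4 - 84x^5 + 70x^6 - 20x^7 used in the
    Meyer wavelet construction (as quoted in the text). It rises smoothly
    from nu(0)=0 to nu(1)=1 and satisfies nu(x) + nu(1-x) = 1."""
    x = np.clip(x, 0.0, 1.0)
    return 35*x**4 - 84*x**5 + 70*x**6 - 20*x**7

x = np.linspace(0, 1, 5)
print(meyer_nu(x))                    # [0.   ...  0.5  ...  1.]
print(meyer_nu(0.3) + meyer_nu(0.7))  # ~1.0
```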
Eigenvalue estimator for fractional Brownian motion using continuous wavelet
Characteristics of second-order random processes such as periodic correlation, stationary increments, harmonizability, and self-similarity can be studied through the wavelet transform, and this transform has attracted the attention of many researchers in both applied and theoretical fields because of its ability to reveal the relationship between a signal and its transform, such as the membership of the signal in functional spaces and local smoothness characteristics that can be observed through the decay of the wavelet transform. There remains, however, the problem that random signals cannot be transformed directly, because the sample paths of the random process do not have finite energy. In this section, Meyer wavelets will be used for the purpose of obtaining an efficient estimate of the Hurst parameter, starting with the definition of the general form of the wavelet transform in equation (9).
Here, X(t) represents a second-order complex random process (the fractional Brownian motion in our case) that is to be transformed with scale a and translation b. Given that it is not possible to estimate the parameters of this type of process directly, we will do so through the second moment, as the first moment is zero; by finding the variance of the result of this transformation, we obtain the variance of the detail coefficients of the wavelet transform, as shown in the following theorem. For each scale |a| the wavelet coefficients form a sequence of random coefficients; Meyer's wavelet family still constitutes an orthonormal system, and there is no reason for the wavelet coefficients to be uncorrelated. Consider the detail coefficients for the fractional Brownian motion B(t), and assume ψ(t) satisfies the admissibility condition. The second moment of the fractional Brownian motion, as shown by Mandelbrot in equation (1), represents the variance-covariance of the FBM; from this equation, and since the translation is not affected by the increments, a straightforward calculation yields From the FBM variance equation, taking into account the independence between detail coefficients, we get Where And is the transform of the wavelet itself; then the variance of this transformation is The wavelet transform field is (H + 1/2) self-similar and has stationary increments at all scales.
We can now calculate the value of the Hurst parameter for fractional Brownian motion through the variance of the detail coefficients of the wavelet transform, and this can be considered a general form for such processes. There remains, however, a problem with the continuous wavelet transform: the scales increase exponentially, which makes it difficult to define the scales given the large size of the filters required by a specific standard, so another way of obtaining an efficient estimate is needed. Given that the variance matrix of these coefficients still contains sufficient information to conduct the estimation, it will be relied upon, but through the eigenvalues of the wavelet-transform variance of the random process: instead of relying on the behavior of the wavelet spectrum as a function of the wavelet scales, we rely on the eigenstructure of the wavelet spectrum. For this purpose it is necessary to find a mathematical formula for calculating the parameter from the eigenvalues; the variance-covariance matrix of the wavelet transform coefficients is positive definite and symmetric, and the eigenvalues have the distinguishing property of containing all the information without being affected by changes in the matrix (such as correlations). From the last formula in the theorem above,
by taking the logarithm of both sides we find the expression log₁₀ σ²(a) = (2H + 1) log₁₀(a) + constant. Considering the power-law behavior of the wavelet-coefficient variance, we can take the eigenvalues of the covariance matrix as a replacement for the variance matrix to avoid the influence of the exponential scale in the continuous wavelet transform. This formula can be used as a general form for such processes using the wavelet transform.
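The following sketch illustrates one plausible reading of the eigenvalue-regression step: at each analysed scale the covariance matrix of the detail coefficients is formed, its leading eigenvalue replaces the scale variance, and the slope of the log-log regression gives 2H + 1. The function assumes the detail coefficients have already been computed (e.g., with a Meyer CWT) and may differ in detail from the authors' implementation.

```python
import numpy as np

def hurst_eigen_regression(W, scales, m=1):
    """Eigenvalue-regression estimate of the Hurst exponent (a sketch of the
    EECW idea as we read it; the original algorithm may differ).

    W      : detail coefficients, shape (n_scales, p, n): p components,
             n coefficients per scale, one slice per analysed scale.
    scales : the analysed scales a_j (same length as W's first axis).
    m      : order of the multivariate fBm; m - 1 is added to the estimate,
             as described in the text.
    """
    lead_eig = []
    for Wj in W:
        C = np.cov(Wj)                              # p x p covariance of the coefficients
        lead_eig.append(np.linalg.eigvalsh(C)[-1])  # leading eigenvalue at this scale
    # Power law: log10(eigenvalue) ~ (2H + 1) * log10(scale) + constant
    slope, _ = np.polyfit(np.log10(scales), np.log10(lead_eig), 1)
    return (slope - 1.0) / 2.0 + (m - 1)
```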
Simulation study
In this section, we conduct a simulation study of multivariate fractional Brownian motion. The wavelet synthesis method proposed by Sellan and Abry (1996) is used to generate the random process, and we then apply the eigenvalue estimator for fractional Brownian motion using continuous wavelets (EECW). The length of the generated series is n = 100, 200, the number of variables is 4, 8, 12, and the calculation is replicated 500 times to increase the accuracy of the estimation. The wavelet synthesis scale used for the generation is 2^10 or 2^12. The estimation of the Hurst parameter is carried out for all levels H = 0.1 to 0.9. The mean square error and the bias of the estimator are calculated as follows In addition, a new random series is generated each time before the estimation in order to get the best view of the performance of the method at the different levels of the random process. Figure (1): the left panel shows Ĥ for a 12-variate self-similar signal at sample size n = 200 and FBM wavelet-synthesis scale 2^10 with Hurst exponent 0.9, compared to the EECW estimate Ĥ = 0.8986; the right panel shows Ĥ for a 4-variate self-similar signal at sample size n = 100 and FBM wavelet-synthesis scale 2^12 with Hurst exponent 0.1, compared to the EECW estimate Ĥ = 0.0942.
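Since the bias and MASE formulas referred to above are not reproduced in this excerpt, the sketch below uses the usual Monte Carlo definitions over the replications; the exact expressions used by the authors may differ.

```python
import numpy as np

def bias_and_mase(h_hat, h_true):
    """Monte-Carlo bias and mean (average) squared error of the Hurst estimates
    over the replications (standard definitions, assumed here)."""
    h_hat = np.asarray(h_hat, dtype=float)
    bias = np.mean(h_hat - h_true)
    mase = np.mean((h_hat - h_true) ** 2)
    return bias, mase

# Example with hypothetical replicated estimates around H = 0.8
rng = np.random.default_rng(0)
est = 0.8 + 0.02 * rng.standard_normal(500)
print(bias_and_mase(est, 0.8))
```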
Conclusions
Estimation using the continuous wavelet transform always suffers from the lack of compact support shared by the multiresolution functions over the sample spaces used; this problem has a large effect on the variance, which increases as the long-range dependence parameter increases. Likewise, the MASE shows an average increase.
The bias, on the other hand, was very low, which reflects the value of the proposed algorithm, which relies on the eigenvalues of the covariance matrix of the detail coefficients to measure the slope for the wavelet basis, making the method efficient in the face of long memory.
The smoothing level of the generated signal also proved its effect: as it increases, the bias goes down, although this increase in scale might lead to misestimation at higher scales, because the simulation method would then produce a wrong signal. | 3,544.8 | 2022-09-10T00:00:00.000 | [
"Mathematics"
] |
Being human is a gut feeling
Some metagenomic studies have suggested that less than 10% of the cells that comprise our bodies are Homo sapiens cells. The remaining 90% are bacterial cells. The description of this so-called human microbiome is of great interest and importance for several reasons. For one, it helps us redefine what a biological individual is. We suggest that a human individual is now best described as a super-individual in which a large number of different species (including Homo sapiens) coexist. New concepts of biological individuality must extend beyond the traditional limitations of our own skin to include our resident microbes. Besides its important contributions to science, microbiome research raises philosophical questions that strike close to home. What is left of Homo sapiens? If most of our cells are not Homo sapiens cells, what does it mean to be an individual human being? In this paper, we argue that the biological individual is determined by the amount of functional integration among its constitutive parts, a definition that applies perfectly to Homo sapiens and its microbiome.
Background
In the Origin of Species, Darwin posits that the process of natural selection directly acts on species by affecting the reproductive success and survival of the individuals constituting them. But what exactly are these individuals? Intuitively, we tend to think that the entities upon which selection acts are what common sense would describe as an organism, for example, a dog, a cow, and a human being. However, recent findings in various domains of biology such as physiology [1][2][3], sociobiology [4], microbiology [5], metagenomics [6], immunology [7], as well evolutionary transitions [8] have brought into question our intuitions concerning the correspondence between our notions of individual and organism by showing the tensions that arise between these two concepts. Such concerns have been in the heart of heated disputes in philosophy of biology [9][10][11][12][13], but they have not yet fully penetrated the scientific community.
By emphasizing the relation that humans have with their microbiomes, we aim to question the definition of human individuality in biological research by showing that the entity that we traditionally conceive as the organism called 'human being' is not the individual we intuitively think it is. There are many ways to approach the concept of biological individuality. Here, we will focus on one particular approach in order to raise the question of individuality in humans. By focusing on the biological aspect of individuality and its relation to fitness, we distance ourselves from philosophical questions concerning selfhood and personal identity [14]. Our claim is that, with respect to most biological research projects, human beings are so well integrated with their microbiomes that the individuality of human beings is better conceived as a symbiotic entity. Insofar as biological research is concerned, to be human is to be multispecies.
Main text
Most notions of individuality in biological research have directly or indirectly called upon evolutionary considerations, but why is that? Our understanding of individuality (be it an individual chair or an individual giraffe) has historically been linked to the question of organization: an individual has often been conceived as an organized whole, distinguishing it from a mere collection of disjointed parts. For artifacts such as individual chairs, the origin of organization was easier to establish given the clear human intentionality found in the design of the artifact (that is, a chair is a functional whole that has a specific purpose because we designed and built them to have that integrated function); for biological individuals however, the question was much more complicated. Before Darwin, intelligent design arguments (such as the ones found in Paley) explaining the organization found in biological individuals via divine creation were the norm. Since Darwin, the origin of organization of biological individuals is to be explained thanks to designer-free adaptive processes. Individuals were functional wholes whose partsintegration was the result of evolution by natural selection.
One of the advantages of focusing on evolutionary considerations is that it also allows to account for collectives of individual organisms acting as emergent super-individuals, such as bee colonies being recognized as what is often referred to as 'super-organisms.' As suggested by E. O. Wilson [15], the question of individuality emerging from functional integration can be read in the following way: when the behaviors of members of a society -or a group -become so well organized, coordinated, and integrated that the degree of functional organization approaches -or rivalsthat of the integration between the parts of an individual organism, is it still truly a group of singular beings? In fact, when such a group of entities acquires such a high degree of organization, it may become fitness-bearing in the right way and thus be defined and recognized as a proper unit of selection above and beyond the individual organisms forming the group [16]. If biological individuality is to be conceived as being an evolutionary individual, the unit of selection debate will intersect with our understanding of individuality when the unit of selection achieves higher fitness thanks to higher functional integration. Here, we side with the position asserting that the functions accomplished by integrated entities are the result of collaboration between diverse entities [17]. This collaboration is sometimes between members of a same species (for example, bees) or, more controversially, between members of different species. If individuality is about organization, and if that organization in the case of biological individuals emanates from evolution by natural selection, one needs a revised account of fitness to account for the emergence of multispecies individuals.
One of the problems of accounting for the functional integration of distinct individual organisms into an emergent super-individual is that it is not obvious how to aggregate the evolutionary success (or adaptedness) of individuals with autonomous evolutionary histories. One cannot readily compare or add up the reproductive success (fitness in the traditional sense) of organism X from species A to that of organism Y from species B. Interspecific fitness comparisons are usually frowned upon for that reason. Figuring out how to identify the degree of fitness alignment between organisms of different species requires the identification of a common evolutionary currency that is distinct from reproductive success. Alternative measures of fitness such as energy control [18] or differential persistence [19][20][21] have been suggested to allow for interspecies fitness comparisons and for fitness attributions to multispecies community level individuals. We favor the latter to account for multispecies assemblages such as the individuals that emerge from symbiotic interactions, because persistence is a necessary aspect of functional integration, whereas such integration could be achieved without fluctuations of energy control. Furthermore, the concept of persistence can also account for individuals emerging from multiple species interacting via abiotic parts of the environment [22]. In this respect, our account of biological individuality differs from others proposed in biology [23,24]. However, fully explaining this theory and its implications lies beyond the scope of this paper.
Discussion
How do these issues illuminate our understanding of our own biological individuality? It is common knowledge that bacterial presence is ubiquitous in every single surface of the environment exterior to any single organism and inside of it. Human beings are in constant interaction with the bacterial world, whether at the interface of their epidermis or through their digestive tract. If individuality is simply characterized by the amount of integration, the interaction between humans and microbes allows us to raise one important question: is the organism currently recognized as a human being the real individual? Indeed, the different sites of the human body are inhabited by millions of bacterial cells -the so-called human microbiome -interacting with each other as well as with human cells [25,26]. Such bacterial communities differ in diversity and proportion according to specific body habitats [27], which in turn ensures that each localized microbiome accomplishes different functions that affect a person's health and well-being. Namely, the gut microbiome, constituted mostly by Bacteroidetes and Firmicutes, is involved in the fermentation that enables bacteria to live in an anaerobic environment [28]. This process is used to produce short-chain fatty acid through the conversion of sugars, which are used by human cells as a source of energy. Metabolic activities of the gut microbiome also increase the amount of indispensable amino acids (that is, lysine) and contribute to the degradation of xenobiotics such as benzoate, a common food supplement involved in the biosynthesis of B9 and B12 vitamins [29]. Therefore, the bacterial communities inhabiting the human gut are an essential component of human digestion [30], and the corresponding intestinal microbiome is as important as a functional heart or kidney for survival [23]. In many respects, our microbiome-based digestion is more essential to the survival of an individual than the maintenance of other organs. In this light, being a human biological individual is to be a community of Homo sapiens and microbial symbionts whose degree of functional integration (and degree of individuality) is a function of the potential of that community to persist and evolve as a whole.
Because all living organisms (human beings included) rely so heavily on their microbiome to perform some of the metabolic functions that keep them alive, we claim that this symbiotic association is bound by a common evolutionary fate. The idea of common fate utilized in some accounts of biological individuality reflects the notion that the functional whole that we define as the organism and its microbiome can stand or fall as a whole when undergoing a selective pressure [16,22]. One notable example of such an evolutionary process is illustrated by the detrimental effects of Clostridium difficile on the functionality of the gut microbiome [31] and survival in humans [32]. At a higher level, one can also pinpoint the effect of the microbiome on reproductive success (and fitness) among closely related species and the role that gut bacteria play in speciation [33,34]. That is, it is the sum of an organism's genome and microbiome -the hologenome -and the processes they make possible that are linked by a common evolutionary fate (extinction, speciation) and selected together as a whole [35,36].
Conclusions
If individuality is a matter of being functionally integrated to the extent that the causality between the interacting parts may persist or cease when one of these parts is faced with an evolutionary pressure, we ought to consider, for biological research purposes, that the single Homo sapiens is not in fact the real biological individual. While a simpler mono-species view of individuality may be sufficient for most of our everyday social interactions, the real biological individual is a super-individual defined as the sum of the organism + its microbiome; it is this integrated symbiotic association that is able to persist and survive. As the poet Walt Whitman aptly pointed out in Song of Myself, 'I am large, I contain multitudes.' | 2,518.6 | 2015-03-13T00:00:00.000 | [
"Biology"
] |
Similarity networks for the classification of rice genotypes as to adaptability and stability
The objective of this work was to evaluate the similarity network graphic methodology for the classification of flood-irrigated rice (Oryza sativa) genotypes regarding their adaptability and stability. Two statistical measures were used to represent the proximity of the behavior (based on Pearson's correlation) or values (based on Gower's distance) between pairs of genotypes or between genotype and environment. Productivity data of 18 genotypes were evaluated in three locations in the state of Minas Gerais, Brazil, in the harvests of 2012/2013, 2013/2014, 2014/2015, and 2015/2016, in a randomized complete block design. The genotypes were previously assessed for adaptability and stability by the Eberhart & Russell and centroid methods. The graphical representations provided by the similarity networks made it possible to better identify the pattern of the genotype x environment interaction, overcoming the interpretation difficulties due to the disagreements between the results obtained by the Eberhart & Russell and centroid methods. The similarity networks improve genotype x environment interaction studies.
Introduction
Rice (Oryza sativa L.) represents the most important commodity worldwide, standing out as the second most cultivated and one of the most consumed grains, providing over 70% and 65% of the meals of the Asian and world populations, respectively (Santos et al., 2006). In Mercosul, Brazil occupies the first position in harvested area and rice production (Acompanhamento…, 2017).
In general, the greatest challenge for breeding programs for grains and agricultural species has been to select genotypes that are stable and have high productivity in several environments (Reginato Neto et al., 2013). In this context, the evaluation of the genotype x environment (GxE) interaction is of special importance. For this, several methodologies have been proposed, following the analysis of variance principles, environmental stratification (Lin & Binns, 1988), or adaptability and stability analyses based on simple, multiple, and nonparametric linear regressions (Eberhart & Russell, 1966;Rocha et al., 2005).
Although widely used, those methodologies have some limitations when individually applied. For example, they usually do not report specific GxE interactions, besides being difficult to interpret (Malosetti et al., 2013). Another problem faced by GxE interaction studies is the classification mismatch between different methodologies, which makes it even more difficult for the breeder to interpret data and make decisions. Aiming to overcome difficulties of interpretation, several authors have proposed the simultaneous use of traditional and graphic methodologies, such as additive main effects and multiplicative interaction (AMMI) (Zobel et al., 1988), GGE biplot (Yan et al., 2000), and restricted maximum likelihood/best linear unbiased prediction (REML/ BLUP) (Resende, 2002), useful for zoning purposes and specific indications in studies with a wide range of environments. Faria et al. (2017) used the Eberhart & Russell, centroid, AMMI, and mixed model methods to evaluate the adaptability and stability of commercial corn (Zea mays L.) hybrids. The authors observed that the studied methods diverged in the indication of hybrids with specific adaptability to favorable and unfavorable environments, concluding that the use of more than one evaluation method allows a more reliable recommendation.
In practice, the breeder is interested in knowing if a genotype is able to thrive in more than one environment.
The consistency of this response pattern can be determined by measuring correlations or distances regarding the performance of pairs of genotypes in environments; the performance of a genotype in pairs of environments; and the relationship between genotypes and environments (Cruz et al., 2014). With these data, it is possible to obtain a matrix of correlations or distances. Although similarity measures have already been successfully used in clustering methods (Cruz et al., 2011), few studies currently adopt correlation or distance measurements to aid in the classification of genotypes as to adaptability and stability (Silva et al., 2019).
In the present study, a new methodology is proposed, based on the similarity analysis of the behavior of genotypes and environments, which represents a very useful and easily interpreted alternative for GxE interaction studies. The technique was built in analogy to the correlation network plots used by several authors (Kumar & Deo, 2012;Saba et al., 2014;Monforte et al., 2015;Silva et al., 2016) to represent and explore -by nodes and lines -the similarity pattern among genotypes and/or environments (Epskamp et al., 2012). This graphical analysis allows the organization, by zoning, of the evaluated environments, adding to them information about the adaptation of a genotype to specific regions.
The objective of this work was to evaluate the similarity network graphic methodology for the classification of flood-irrigated rice genotypes regarding their adaptability and stability.
Materials and Methods
Eighteen flood-irrigated rice genotypes were evaluated for grain yield (kg ha -1 ) in multi-location value for cultivation and use (VCU) trials. Of these genotypes, 13 are elite lines and 5 are commercial cultivars -Rio Grande, BRS Ourominas, BRSMG Seleta, BRSMG Predileta, and BRSMG Rubelita ( Table 1).
The experimental trials in all locations were performed within each harvest year, totaling 12 environments (Table 1). First, individual analyses of variance were performed and, then, the joint analysis of all sources of variation was carried out based on a simple factorial arrangement, considering the genotypes as fixed and the environments as random (Ramalho et al., 2000). The statistical model used was: Y_ijk = µ + B/E_jk + G_i + E_j + GE_ij + ε_ijk, where Y_ijk is the observation in the k-th block, evaluated in the i-th genotype and j-th environment; μ is the general mean of the experiments; B/E_jk is the effect of block k within environment j; G_i is the effect of the i-th genotype, considered as fixed; E_j is the effect of the j-th environment, considered as random; GE_ij is the random effect of the interaction between genotype i and environment j; and ε_ijk is the random error associated with observation Y_ijk.
To better assess the interactions between genotypes and environments, the genotypes were previously classified as to adaptability and stability by the Eberhart & Russell (1966) and centroid (Rocha et al., 2005) methods. These classifications were applied in the graphical analyses using the similarity network and distance projection methods proposed in the present study.
According to Eberhart & Russell (1966), genotype adaptability can be classified as: class I, general adaptability; class II, adaptability to favorable environments; class III, adaptability to unfavorable environments; class IV, restrictions to recommendation (not adapted); and class V, not recommended (not adapted). In the centroid method, four classes are used: class I, wide adaptability; class II, adaptability to favorable environments; class III, adaptability to unfavorable environments; and class IV, minimally adapted (Rocha et al., 2005). Classes IV and V of the Eberhart & Russell method and class IV of the centroid method refer to disposable or poorly adapted material; therefore, these classes are considered equivalent. All analyses were performed using the Genes software (Cruz, 2016).
The graphical representation of the similarity networks was built in analogy to the correlation network plots proposed by Epskamp et al. (2012). Each line has a weight indicating the strength of the correlation or of the similarity, depending on the similarity matrix used, either based on Pearson's correlation or on Gower's similarity. As the degree of relationship between two variables gets stronger, the lines that connect them get thicker in the network frame. The intensity of the correlations and/or similarities also depends on the length of the lines. According to Epskamp et al. (2012), shorter lines indicate stronger relationships, making it possible to group different variables. The two-dimensional network representation of a p-dimensional similarity matrix, for example, allows the researcher to detect important structures and complex statistical patterns that are difficult to extract from a table (Silva et al., 2019).
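A minimal sketch of how such a network can be drawn in Python with networkx is given below: edge widths scale with the similarity values, and a force-directed layout places strongly related nodes closer together, in line with the description above. The input file, the 0.3 threshold for drawing a link, and the node labels are hypothetical choices.

```python
# Sketch of a similarity network plot in the spirit of Epskamp et al. (2012).
# R is a hypothetical symmetric similarity matrix with labels in `names`.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

R = np.loadtxt("similarity_matrix.txt")             # hypothetical input
names = [f"G{i+1}" for i in range(R.shape[0])]      # hypothetical node labels

G = nx.Graph()
n = len(names)
for a in range(n):
    for b in range(a + 1, n):
        if R[a, b] > 0.3:                           # draw only the stronger links
            G.add_edge(names[a], names[b], weight=float(R[a, b]))

pos = nx.spring_layout(G, weight="weight", seed=42)  # stronger links -> shorter lines
widths = [4 * G[u][v]["weight"] for u, v in G.edges()]
nx.draw(G, pos, with_labels=True, width=widths, node_color="lightgray")
plt.show()
```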
Two different similarity matrices were generated, with elements either representing the proximity of the behavior (using the correlation principle) or of the values (using the distance principle) between pairs of genotypes or between genotype and environment. Therefore, the similarity matrix was constructed based on two principles: Pearson's correlation (SN/r_p) and the complement of Gower's distance (SN/G). Similarity was structured in a matrix of (g + e) × (g + e) dimension, with two diagonal blocks (R_gxg and R_exe) arranged on the main diagonal and the R_gxe = R_exg^T information on the secondary diagonal.
For the similarity matrix based on Pearson's correlation, the main diagonal was composed of Pearson's correlations (r_pearson). Therefore, the information about the performance of the g genotypes in the e environments made up the data matrix (M), from which the R_gxg submatrix was generated, whose elements represented the correlation between the M matrix columns. Similarly, the M matrix transpose (M^T) allowed generating the R_exe submatrix, whose elements represented the correlation between the M^T columns. The R_gxe submatrix was obtained from a transformation in the M matrix, which consisted in converting the maximum and minimum values of each column into 1 and 0, respectively; the other values were interpolated within these limits. Arranging the submatrices in block form, R = [R_gxg, R_gxe; R_exg, R_exe], generated the R matrix for the similarity network analysis.
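The construction just described can be sketched in a few lines of Python. The input file name is hypothetical, and M is taken as an e × g table of means (environments in rows, genotypes in columns) so that correlations between its columns compare genotypes; the resulting matrix can then be fed to a network plot such as the one sketched earlier.

```python
# Sketch of the (g + e) x (g + e) similarity matrix based on Pearson's correlation.
import numpy as np

# Hypothetical file storing an 18 x 12 genotype-by-environment table; transposed to (e x g).
M = np.loadtxt("genotype_by_environment_means.txt").T
e, g = M.shape

R_gg = np.corrcoef(M, rowvar=False)   # correlations between genotypes (columns of M)
R_ee = np.corrcoef(M, rowvar=True)    # correlations between environments (rows of M)

# 0-1 rescaling of each genotype column across environments (genotype x environment block).
S = (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))   # shape (e, g)
R_ge = S.T                                                  # R_gxe block; R_exg = R_ge.T

R = np.block([[R_gg, R_ge],
              [R_ge.T, R_ee]])        # full (g + e) x (g + e) similarity matrix
```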
The similarity matrix based on Gower's distance was constructed equivalently to the one based on Pearson's correlation. From the M matrix, it was possible to obtain the R_gxg and R_exe submatrices, which represented the diagonal blocks of the R matrix. The R_gxe submatrix was established identically to the one for the similarity matrix based on Pearson's correlation. The diagonal blocks were composed of the similarity matrices generated based on Gower's algorithm, S_ij = Σ_k (w_ijk × s_ijk) / Σ_k w_ijk, with the sums taken over k = 1, ..., p, where p is the number of variables (p = e to get R_gxg and p = g to get R_exe); w_ijk is the weight given to the ijk comparison, assigning 1 for valid comparisons and 0 for missing values; and s_ijk is the similarity between i and j, which represent pairs of genotypes or pairs of environments, for variable k (0 ≤ s_ijk ≤ 1). The classification of genotypes and environments into groups according to the Eberhart & Russell and centroid methods was associated with the similarity matrices. Therefore, a total of four scenarios were evaluated: similarity network using r_pearson (SN/r_p) and similarity network using Gower's distance (SN/G) for the GxE classification groups given by the Eberhart & Russell method; and similarity network using r_pearson (SN/r_p) and similarity network using Gower's distance (SN/G) for the GxE classification groups obtained by the centroid method.
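A minimal sketch of Gower's similarity under the simplest assumptions (all variables quantitative and no missing values, so every weight w_ijk equals 1) is given below; the input file name is hypothetical.

```python
# Sketch of Gower's similarity between the rows of X, assuming quantitative
# variables and no missing values (so every weight w_ijk = 1).
import numpy as np

def gower_similarity(X):
    """X: (n x p) array; returns an (n x n) similarity matrix with values in [0, 1]."""
    n, p = X.shape
    rng = X.max(axis=0) - X.min(axis=0)               # range of each variable k
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s_ijk = 1.0 - np.abs(X[i] - X[j]) / rng   # per-variable similarity, 0..1
            S[i, j] = s_ijk.mean()                    # equal weights w_ijk = 1
    return S

M = np.loadtxt("genotype_by_environment_means.txt").T   # hypothetical (e x g) matrix
R_gg = gower_similarity(M.T)   # genotypes compared across p = e environments
R_ee = gower_similarity(M)     # environments compared across p = g genotypes
```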
Results and Discussion
The joint analysis of the experiments showed that environment and GxE interaction effects were significant (Table 2). Genotype means, however, did not differ significantly, which is explained by the significance of the GxE interaction and by the advanced breeding stage in which the lines were evaluated, which makes it difficult to detect differences among the general means of the lines.
The significant interaction between genotypes and environments ( Table 2) is indicative of the varying behavior of the rice genotypes throughout the evaluated environments. This justifies the need to perform adaptability and stability studies to better assess genotype performance in different environmental conditions, also allowing to identify stable genotypes (Cruz et al., 2014).
When the Eberhart & Russell and centroid methods were used to classify the genotypes for adaptability and stability, differences were observed in some line rankings (Table 3). Assuming that classes IV and V of the Eberhart & Russell method are equivalent to class IV of the centroid method, a 50% agreement was found between the classifications of the 18 assessed genotypes. Both methods, for example, classified the BRA 02691 (G3), MGI 0717-18 (G12), MGI 0607-1 (G5), and the control genotype 'BRS Ourominas' (G8) as of general adaptability, but differed regarding the classification of other lines of interest. The BRA 031001 (G1) genotype was classified as of general adaptability (class I) by the Eberhart & Russell method, since it presented M_i > M, β_1i = 1, and R_i² > 70%. However, this same genotype was classified as poorly adapted (class IV) by the centroid method, because its average productivity was not high compared with that of MGI 0607-1 (G5). The classification of the BRA 02708 (G13) and BRA 041230 (G10) genotypes also differed: both were classified as of general adaptability (class I) by the Eberhart & Russell method, whereas BRA 02708 (G13) and BRA 041230 (G10) were classified as of specific adaptability to unfavorable and favorable environments, respectively, by the centroid method.
These discrepancies make the decision-making process difficult for breeders. Several other authors also reported differences in the classifications and recommendations given by different methodologies of adaptability and stability analyses (Pelúzio et al., 2008;Barroso et al., 2017;Silva et al., 2019). Therefore, each method presents singularities when ranking lines, and the choice of the best biometric technique should be made carefully (Faria et al., 2017).
The graphical representation of the similarity networks based on Pearson's correlation (SN/r_pearson), using the groups given by the Eberhart & Russell and centroid methods, showed that each methodology supported the other in important aspects (Figure 1). The exceptions were the divergent rankings for BRA 02708 (G13) and BRA 031018 (G17), when adopting class I of Eberhart & Russell and class III of the centroid method. These two lines had a strong correlation with environment E3, classified as favorable, in both graphs; however, BRA 02708 (G13) was classified as of specific adaptability to unfavorable environments by the centroid method, with similar probabilities of belonging to classes I and III (Table 3, which reports the estimates of the adaptability and stability parameters, the spatial probabilities [P(I) to P(IV)], and the phenotypic adaptability classification of the flood-irrigated rice, Oryza sativa, genotypes). Therefore, the proposed similarity network gives the researcher greater security in classifying genotypes of general adaptability and with a high correlation with favorable environments. It also highlights the greater correlation between genotypes that have an average higher than the general one. A similar response pattern was found by Silva et al. (2019) when adopting strategies based on the projection of dissimilarity measures.
Another discrepant result that had to be evaluated with caution was the classification of the 'BRSMG Rubelita' (G4) commercial line. It was classified as not recommended (class V) by Eberhart & Russell since its average was lower than that of the general experiment, but showed specific adaptability to favorable environments (class II) by the centroid method ( Table 3). The obtained graphs emphasized a strong correlation between 'BRSMG Rubelita' (G4) and environment E3 (Figure 1). It was noted that the probability of this line not being recommended (class IV) or of showing specific adaptability to favorable environments (class II) was similar by the centroid method, which caused confusion in its classification. In this case, the graphical representation was useful because it aided in the visualization of the relationships between genotypes and environments. Silva et al. (2019) also evaluated the 'BRSMG Rubelita' (G4) commercial line, and found a strong correlation between this genotype and environments classified as favorable. According to the authors, the graphical representation by projections of distances helped to visualize the relationship between genotypes and environments.
Relationships between genotypes, environments, and GxE were expressed by Gower's similarity measure (Figure 2). By adopting this similarity matrix over the previous one based on Pearson's correlation, it is possible to make inferences about genotypes and environments based on the obtained values and not on their general behavior. This is important because, even when the correlation between two environments, considering distinct genotypes, is high, the comparative values between them may be as discrepant as if one environment were favorable and the other unfavorable (Cruz et al., 2014). Therefore, a distance measurement, rather than a correlation measurement, would be able to capture this and provide a new angle for the breeder's interpretation (Cruz et al., 2014;Epskamp et al., 2012).
Similarity networks based on the complement of Gower's distance (SN/G) also used the groups given by the Eberhart & Russell and centroid methods (Figure 2). The obtained graph evidenced the similarity between environments belonging to a same favorable or unfavorable class. A similar pattern of behavior was verified for the BRA 02708 (G13) and BRA 031018 (G17) lines regarding favorable environments, particularly E1 and E3. The BRA 031001 (G1) line also posed a dilemma for the breeder, since it presented a higher average than the general one, but was classified as not adapted (class IV) by the centroid method and as of general adaptability (class I) by the Eberhart & Russell method (Table 3). Applying the proposed similarity networks (Figures 1 and 2), this line showed a high correlation with genotypes classified as of general adaptability and as adapted to favorable environments, even for the networks constructed based on the centroid classification (Figure 1 B). The same similarity pattern was observed when the decision criterion was based on Gower's similarity, i.e., BRA 031001 (G1) also presented a great similarity with genotypes classified as of general adaptability and as adapted to favorable environments, as well as with the favorable environment E1 (Figure 2 B).
Considering that the correlation network analysis has been useful in plant breeding studies (Silva et al., 2016), similarity networks showed an increase in the effectiveness of genotype selection, assisting in the decision-making process, especially for genotypes that are difficult to classify (Silva et al., 2019).
Conclusions
1. Similarity matrices between genotypes, between environments, and between genotypes and environments are effective for genotype x environment interaction studies in flood-irrigated rice (Oryza sativa).
2. The graphical evaluations provided by the proposed similarity network methodology are useful in the breeder's decision-making process, when evaluating lines classified by the Eberhart & Russell and centroid methods. | 4,149.6 | 2020-01-01T00:00:00.000 | [
"Biology"
] |
Evolutionary Perspectives on Unbelief: An Introduction from the Editor
Abstract The scientific study of atheism and unbelief is at a pivotal turning point: past research is being evaluated, and new directions for research are being paved. Organizations are being formed with an exclusive focus on unbelief research, and large grants are funding the topic at a scale that has not been seen before. This article introduces the state of the literature and the study of evolutionary perspectives on unbelief, which incorporate cognitive, adaptive, and biological contributors. It also contextualizes the subsequent articles, which all offer distinct perspectives on the evolutionary factors that contribute to unbelief.
Introduction
The scientific study of atheism and unbelief is on the cusp of major change. Traditionally, the study of unbelief has been problematic in the same way that studying any group based on religious categorization is problematic: the researchers are going to have a predisposition towards belief or non-belief, so bias is inevitable. Just as the psychology of religion has addressed these concerns in the study of religion [12], researchers are now criticizing early works on unbelief that claim that humans have an innate predisposition towards religious belief, and therefore that to be non-religious is to have violated human nature. In criticizing early approaches to non-belief, researchers are also coming up with new ways to explain the phenomenon of non-belief through a multitude of approaches, a number of which are covered in this special issue.
Unbelief is another term for non-believers or atheists that maintains the traditional dichotomy between religious believers and those who do not identify as religious. This introduction will primarily use the term 'unbeliever' to refer to a person who is non-religious, but the subsequent articles in this special issue leave the preferred terms up to each author's discretion (e.g., atheist, non-believer, unbeliever). It should be noted that, although the authors of the subsequent articles will use whichever term they prefer, this is in no way meant to over-generalize to all non-believers, as we know there are substantial differences between people who are not involved in organized religion [23], [28].
Early research on atheists approached them from a default position of religion being innate and natural, therefore treating an absence of deist beliefs as unnatural. In the last decade, social scientists have begun to criticize existing frameworks for studying atheism, as many of them investigate atheism as an afterthought or by-product of religion, rather than studying atheism in its own right. Recently, the study of unbelief has become a focal point for many researchers, with organizations such as the Nonreligion and Secularity Research Network (NSRN) promoting research on the topic, and some major funding initiatives are now underway, including the Unbelief Project funded by the John Templeton Foundation. This point in history marks the divergence between previous ways of thinking about unbelief (as a consequence of disobeying natural mechanisms) and the future of studying unbelief, which increasingly makes an argument for unbelief as similarly and/or equally evolutionarily natural through new cognitive, adaptive, and biological explanations, among others. This volume is intended to further promote the critical discussion that is now underway, which assesses existing empirical frameworks and proposes alternative ways to study unbelief while accounting for the confounding baggage that studying anything in relationship to religion inevitably introduces. The primary focus of this special issue is on evolutionary perspectives towards unbelief, so it can exclude some social and affective explanations for a person being non-religious, but it is important to acknowledge that these too play a role in explaining unbelief. Evolutionary perspectives about non-religion are especially sparse, which is another problem that this issue brings to light. Due to the inter-disciplinary nature of this issue and the early stage of the topic, the definition of evolutionary perspectives here is broad, and allows the incorporation of many factors into explaining how unbelief has persisted and spread across generations of people and societies. We are hopefully at the beginning of a new zeitgeist, where unbelief is not just studied from the default starting position of religion, but instead progresses as a novel scientific discipline.
Explanations for Unbelief
There have been a number of explanations for the existence of the non-religious, but research on each of those explanations is still in its formative stages and thus is often methodologically flawed or contradictory. I've outlined some of the most influential approaches to unbelief, including cognitive, functionally adaptive, and biological explanations, all of which play a role in how unbelief has evolutionarily persisted across generations. These are only some examples of explanations that can be approached from an evolutionary perspective, and they do not exhaust other possibilities that are not covered in this introduction, but they will help provide context for the articles that follow in this special issue.
The Cognitive Explanation
Some have argued that religious belief is the result of a more intuitive thinking style (e.g., [26]). One of the most prominent theories within the cognitive science of religion assumes that religious belief is natural, innate, and intuitive, and so, in order to be an unbeliever, one must first effortfully violate the cognitive predispositions towards religious belief [3], [18]. There have been growing criticisms of this theory, since studies applying it are consistently methodologically flawed, and the data are often contradictory [29], [17], [14], [31]. Some have even tested the theory directly, showing that there is no relationship between intuitive thinking and religious belief [8]. Even with all of the evidence and criticism to the contrary, the intuitive thinking explanation for non-belief persists as one of the most prominent theories to explain the existence of unbelievers.
The Functional (Well-Being/Adaptive) Explanation
A stronger argument for an evolutionary role in religious belief and unbelief is the functional argument. Traits that are more functionally efficient are generally passed on to future generations because they assist in survival, so some have argued that religious belief is more natural because it is more adaptive than unbelief [4], [10]. On this argument, religion is not cognitively innate as much as it is functionally convenient and efficient. It is well documented that religious belief is adaptive [4], [10], [11], [15], [21], [24], but the perspective that is changing concerns the adaptiveness of unbelief. Religious belief helps fulfil a number of psychological needs, including a need for social relatedness, reducing fears about mortality, and providing security, well-being, and meaning in life. In areas of the world where those functions are fulfilled through other secular means (e.g., wealthier countries more easily provide a high standard of living for inhabitants), countries tend to be more secular [20], [19]. To give a specific example, involvement in societal groups lowers mortality rates, regardless of whether that societal group is religious or secular [27]. In other words, when the functions of religious belief are made redundant through other mechanisms, religion tends to be less culturally dominant. In addition, we are beginning to understand the role of secular beliefs, such as believing that science is a moral guide to life, in being functionally adaptive in ways that are similar to religious belief [7], [1]. People who are well-off without religion are less likely to be religious because they don't have the functional need for religion, whereas people who struggle and experience much hardship are more likely to use religion as a means to find greater well-being. The functional argument has largely held that religion is more functional and thus evolutionarily more efficient than unbelief; however, as we increasingly understand how psychological functions and needs are fulfilled for the non-religious through secular beliefs and societal mechanisms, this calls into question earlier claims of religion being more evolutionarily beneficial, adaptive, and efficient.
The Biological Explanation
There has been increasing evidence that parts of the brain are associated with religious belief and experiences or a lack thereof, but this research has yet to conclusively explain the evolution of belief and unbelief [16]. Some insight comes from brain lesion studies, as increases and decreases in religiosity can be observed depending on which area of the brain has the lesion: posterior lesions can lead to higher religiosity, whereas anterior lesions lead to lower religiosity [30], which hints at the biology of the brain playing a role in whether or not someone is religious. To give another example, the prefrontal cortex is associated with processing doubt, so people with damage to this area frequently exhibit higher levels of religiosity, and it is also not a coincidence that many religious conversions happen around adolescence, when the doubt-processing part of the brain dramatically grows [2]. The prefrontal cortex also interprets religious imagery differently depending on whether one is a believer or an unbeliever [32]. Besides particular brain regions, there is also evidence that the availability of neurochemicals such as dopamine plays a role in whether or not someone is religious [22], [25], [9], [5], [6]. Evidence for biological explanations is still new, and many of the findings are contradictory, but the evidence seems to point to biology playing a role in being a non-believer; we just aren't sure exactly what that role is yet, since replications and further research are still needed.
This Issue
Contained in this special issue of Studia Humana is a selection of papers from authors across various disciplines discussing different evolutionary perspectives towards unbelief. In reading over these articles, I was quick to identify that some of the claims made in this volume will generate strong responses, and that is largely the goal of releasing this special issue: to generate critical discussion about a topic that needs more of it to progress. Even though large strides have been made, the scientific study of unbelief is still in its formative stages, and as such, it is important to learn not only from the difficulties of studying religion, but also to study unbelief in its own right, while acknowledging the interconnected nature that non-belief has with religion and the history of studying religion. Many have criticized the early works on unbelief as having come through the lens of religion, and many of those criticisms are legitimate, so this gathering of manuscripts across multiple disciplines aims to map how far those problems extend, to elucidate and criticize them, and to offer some suggestions moving forward. The scientific study of unbelief is now coming to a crossroads, where it is increasingly being studied as something other than a by-product of religion, moving away from cognitive claims that unbelief results from the rejection of an innate, religious predisposition [29], [13], [31], [3], [18].
A variety of inter-disciplinary perspectives are included in this special issue. Lluis Oviedo provides a culturally adaptive sociological explanation for atheism, while warning of limitations. Jay Feierman gives a functional, biological perspective on how non-religion can be a by-product of in-group breeding clusters, explaining that as breeding clusters no longer need to compete, religion, the by-product that stems from these clusters, becomes obsolete. Religion becomes superfluous in his explanation, causing it to be a deteriorating phenomenon because of modernity. The paper by Mikloušić and Lane makes an argument for the role of personality in determining the relationship that people have with an overseeing God, explaining through their own empirical work that the religious see God as having personality traits more similar to the self, whereas unbelievers have a perception of God that is less relatable to their own personality traits. Although similar investigations have been done looking at personality fusion with a divine being, Mikloušić and Lane's findings are novel in that they also incorporate sociosexual variables, which have previously been shown to play an important role in understanding religious attitudes and behavior. This special issue concludes with a paper by Alogna, Bering, Balkcom, and Halberstadt, which criticizes modern frameworks and questions the notion of unbelief entirely, since even self-professed atheists show signs of implicit supernatural belief, but the studies making these claims often overreach from what their data can support. This final paper serves as an appropriate word of caution when empirically investigating unbelief and its evolutionary correlates. This special issue should serve as a point of entry for seeing the breadth of directions in which unbelief is being approached evolutionarily, which should promote discussion and further criticism (including of the articles held within this volume), hopefully resulting in a further expansion of research and, eventually, in theories that are stronger when placed under rigorous scrutiny. | 2,900.8 | 2019-10-01T00:00:00.000 | [
"Philosophy",
"Psychology"
] |
Sensitivity of Summertime Convection to Aerosol Loading and Properties in the United Arab Emirates
The Weather Research and Forecasting (WRF) model is used to investigate convection-aerosol interactions in the United Arab Emirates (UAE) for a summertime convective event. Both idealized and climatological aerosol distributions are considered. The convection on 14 August 2013 was triggered by the low-level convergence of the cyclonic circulation associated with the Arabian Heat Low (AHL) and the daytime sea-breeze circulation. Numerical experiments reveal a high sensitivity to aerosol properties. In particular, replacing 20% of the rural aerosols by carbonaceous particles has a comparable impact on the surface radiative fluxes to increasing the aerosol loading by a factor of 10. In both cases, the UAE-averaged net shortwave flux is reduced by ~90 W m⁻² while the net longwave flux increases by ~51 W m⁻². However, when the aerosol composition is changed, WRF generates 20% more precipitation than when the aerosol loading is increased, due to a broader and weaker AHL. The surface downward and upward shortwave and upward longwave radiation fluxes are found to scale linearly with the aerosol loading. An increase in the amount of aerosols also leads to drier conditions and a delay in the onset of convection due to changes in the AHL.
Introduction
It has long been known that aerosols, defined as solid or liquid particles suspended in the atmosphere from both natural and anthropogenic sources, play an important role in the climate system [1][2][3]. Aerosols significantly interact both with the radiation (direct and semi-direct effects; [4][5][6]) and cloud microphysics (indirect effects; [7]). For simplicity, the former will be denoted as aerosol-radiation interactions (ARI) and the latter as aerosol-cloud interactions (ACI) throughout the text. Aerosols scatter and absorb solar (shortwave) and thermal (longwave) radiation, leading to a warming of the aerosol layer and a cooling of the surface below. As far as the ACI effects are concerned, an increase in aerosol loading leads to a larger number of smaller cloud droplets (first indirect or Twomey effect), which leads to more scattering and hence a higher cloud albedo and optical depth [8]. As a result, aerosols act to suppress precipitation, increasing the cloud lifetime and cloud height (second indirect or Albrecht effect; [9]). While pollution and smoke from industrial activities are the most common anthropogenic aerosols, dust is the most abundant natural aerosol on Earth. The Sahara Desert is the main source region of mineral dust, accounting for roughly half of global dust emissions [10], with contributions from other hyperarid regions such as the Arabian Desert in the Middle East [11], the Gobi Desert in East Asia, and the Sonoran Desert in the United States [12]. Figure 1 shows the weather stations, including the 5 airport stations (31-35), for which measurements were available on 14 August 2013, with the United Arab Emirates' orography taken from a 30 m digital elevation model [33].
Since the UAE is part of the Arabian Desert, aerosols are ubiquitous in the country. As discussed in [40], the prevailing aerosol subtype is dust, with higher AODs in summer and spring, typically in the range 0.3-0.6. During dust storms, the AOD can exceed the climatological values by an order of magnitude; for example, during the July 2018 event, it exceeded 3, with more than 20 Tg of dust being lifted into the atmosphere [11]. On diurnal scales, the AOD values are slightly higher in the early morning when the nighttime low-level jet mixes down to the surface, with the stronger near-surface winds lifting higher amounts of dust [41]. The aerosol variability in the UAE is also discussed in [42], which analyses measurements collected by a LIDAR from February 2018 to February 2019. The authors concluded that the size of the aerosols is more important than their chemistry (i.e., composition, which affects the hygroscopicity) for aerosol particle activation, in line with the findings of [43].
In this work, the interaction between aerosols and convection in the UAE is investigated for a summertime convective event that occurred on a relatively dusty day. The two main objectives of this study are as follows: (i) investigate the added value of incorporating aerosols and accounting for their direct and indirect effects on the model-predicted convective activity, and (ii) explore the sensitivity of the WRF response to different aerosol loadings and properties and assess how it compares against observations. The findings of this work will be very relevant to other arid/hyperarid regions, in particular those adjacent to major aerosol sources, such as deserts.
This paper is structured as follows. In Section 2, a description of the model, datasets and numerical simulations conducted is given. The meteorological conditions on 14 August 2013, the event targeted in this work, are analysed in Section 3. In Section 4, the results of the model simulations are discussed, with the main findings outlined in Section 5.
Numerical Model
The numerical model used in this study is the WRF model version 4.2.1 [18]. WRF is a fully compressible, non-hydrostatic, community model, which makes use of the Arakawa-C grid staggering for horizontal discretization and employs the Lorenz grid for vertical discretization. In all simulations, WRF is initialized on 13 August 2013 and run for 48 h, with the first 24 h discarded as model spin-up. As discussed in Section 3, the 14 August 2013 convective event is selected as it features both deep convection and a dusty atmosphere over the UAE. The initial and boundary conditions are taken from ERA-5 data [44], the latest reanalysis dataset of the European Center for Medium Range Weather Forecasts, which provides meteorological fields on a 0.25° × 0.25° grid and on an hourly basis, from 1979 to present. WRF model experiments are run in a three-nest configuration, with spatial resolutions of 22.5 km, 7.5 km and 2.5 km. The spatial extent of the model grids is presented in Figure 2a. The outermost grid is at a resolution of 22.5 km, and covers the vast majority of the Arabian Peninsula and surrounding region, while the innermost nest, at 2.5 km resolution, is centered over the UAE and extends into the adjacent Arabian Gulf and Sea of Oman (Figure 2b). The boundary conditions from ERA-5 are relaxed on a five grid-point buffer zone (not displayed in Figure 2a,b). The grid resolutions used here are the same as those employed in [19] for the 5 September 2017 convective event in the UAE. In that study the authors concluded that adding another nest, at a spatial resolution of 0.833 km, does not provide added value to the model forecasts. The physics schemes employed in the WRF simulations are summarized in Table 1. The model set-up reflects the findings of [34], who tested different WRF configurations for the 14 July 2015 convective event in the UAE. The authors in [34] noted that a 0.025° grid (~2.7 km) may still be too coarse to represent shallow clouds, and hence they employed a shallow cumulus scheme in their runs. The same applies to the 2.5 km grid considered here, and for that purpose the mass-flux scheme embedded in the MYNN PBL scheme, which parametrizes the non-convective component of the subgrid clouds [45], was activated. The Noah-MP is configured following [19,46], while the sea surface skin temperature scheme of [47], which allows for the simulation of its diurnal cycle and feedback on the atmosphere, is switched on. In the vertical, 45 levels are considered, more closely spaced in the PBL, with the first level at about 27 m above the ground, and with the model top at 50 hPa. Rayleigh damping is applied in the top 5 km to the wind components and potential temperature and on a timescale of 5 s to damp vertically propagating waves [18]. In all simulations, the more realistic representation of the soil texture and land use land cover over the UAE described in [48] is employed.
WRF Experiments
A total of nine WRF simulations were performed, as listed in Table 2. The main difference between them is in the set-up of the Thompson-Eidhammer cloud microphysics scheme. This scheme, also known as Thompson aerosol-aware, is a modified version of the original Thompson scheme [57,58], incorporating the activation of aerosols as cloud condensation nuclei and ice nuclei in a simplified manner [49]. Two new variables, representing the concentration of hygroscopic or "water-friendly" aerosols (N_wfa; designed to account for a combination of sulfates, sea salts, and organic matter) and non-hygroscopic or "ice-friendly" aerosols (N_ifa; mineral dust), are added to the model. Aerosol direct and semi-direct effects, namely the scattering and absorption of radiation [59], as well as indirect effects, i.e., aerosol-cloud interactions [60], can be accounted for in a relatively computationally cheap way when compared, e.g., to the simplest set-up of WRF-Chem [21], as noted by [61]. It is important to stress that in the default version of the scheme only the ACI effects are activated; the ARI effects are switched on through an option in the model's namelist which allows the radiation scheme to "see" the N_wfa and N_ifa populations, as discussed below. Table 2. List of the WRF simulations discussed in this study. The experiments differ in the aerosol profile considered (idealized profile, IDEAL, or climatological profile, CLIM, the latter scaled by a factor of 5 in experiments number 6-8, and 10 in experiment number 9), the aerosol-radiation interactions (ARI) option (rural, urban and maritime models), and whether grid nudging is applied in the two outermost model grids (NUDGE).
Table 2 lists, for each numerical experiment, the aerosol profile, the ARI setting, and the nudging option. There are two ways to initialize the aerosol concentration arrays: (i) employing an idealized profile based on prescribed concentrations and the terrain height (IDEAL); (ii) extracting the aerosol profiles from a 7-year (2001-2007) simulation with the Goddard Chemistry Aerosol Radiation and Transport (GOCART; [62]) model, described in [63] (CLIM).
In (i), where idealized aerosol profiles are used, the aerosol concentration is prescribed as a function of the height of each model level and of the terrain, decreasing away from the surface. In the scheme's expressions, h(z) is the height of the model level z in meters, with h(1) being the height of the first model level. The constants N_1 and N_0 are set to 50 × 10^6 m⁻³ and 300 × 10^6 m⁻³ for water-friendly aerosols, and 0.5 × 10^6 m⁻³ and 1.5 × 10^6 m⁻³ for ice-friendly aerosols, respectively. This definition is based on the premise that aerosols are mostly concentrated in the lowest part of the atmosphere, with a faster decrease with height over the higher terrain, and a profile tailored for the continental United States. Spatially, N_wfa and N_ifa are uniform at the start of the run, but evolve during the course of the model integration.
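The exact initialization expressions are not reproduced in the text; the sketch below only illustrates the stated premise, assuming an exponential decay of the number concentration with height above the first model level and a hypothetical shortening of the decay scale over high terrain. The functional form, the 1 km scale, the terrain rule and the level spacing are illustrative assumptions, not the WRF source code.

```python
# Illustrative sketch (not an excerpt from WRF) of an idealized aerosol profile:
# a background value N1 plus a surface-enhanced part N0 that decays with height,
# with a faster decay assumed over higher terrain, as described in the text.
import numpy as np

def idealized_profile(h, n1, n0, terrain, scale=1000.0):
    """h: model-level heights (m); terrain: surface elevation (m). Assumed form."""
    if terrain > 1000.0:          # hypothetical: faster decay over higher terrain
        scale *= 0.5
    return n1 + n0 * np.exp(-(h - h[0]) / scale)

heights = np.linspace(30.0, 15000.0, 45)   # 45 levels, first at ~30 m (illustrative spacing)
n_wfa = idealized_profile(heights, 50e6, 300e6, terrain=0.0)    # "water-friendly", m^-3
n_ifa = idealized_profile(heights, 0.5e6, 1.5e6, terrain=0.0)   # "ice-friendly", m^-3
```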
In (ii), the climatological aerosol distribution used to initialize the aerosol fields is that described in [49]. It is a 0.5° × 1.25° dataset available on a monthly timescale and on 30 vertical levels, comprising both water-friendly (sulphates, sea salts and organic carbon) and ice-friendly (dust, with particle sizes larger than 0.5 µm) aerosols. This dataset is generated from a global model simulation, with the predicted aerosol optical depth and Angstrom exponent comparing well with those estimated from satellite data, in particular in this region [63].
As discussed in [49], and for the "ice-friendly" aerosols in more detail in [25], the temporal evolution of N_wfa and N_ifa is given, schematically, by Equations (5) and (6):
∂N_wfa/∂t = −(scavenging) − (homogeneous freezing) − (CCN activation) + (droplet evaporation) + (surface emission), (5)
∂N_ifa/∂t = −(scavenging) − (IN nucleation) + (ice sublimation) + (surface emission), (6)
where CCN stands for Cloud Condensation Nuclei and IN for Ice Nuclei. The source and sink terms for both the "water-friendly" and "ice-friendly" aerosols can be summarized as follows:
1. The nucleation of cloud droplets from N_wfa is achieved through a lookup table with the activation fraction as a function of parameters such as the WRF-predicted temperature, updraft speed, number of available aerosols, and predefined values of the hygroscopicity parameter and the aerosol's mean radius;
2. Once nucleated, the aerosols are removed from N_wfa, the third term on the right-hand-side (RHS) of Equation (5), but can be restored via hydrometeor evaporation, the fourth term in Equation (5). Aerosols can also be removed from the population by precipitation scavenging, the first term in Equations (5) and (6);
3. For "water-friendly" aerosols, and when a climatology-based distribution is employed, a constant surface emission forcing is added in the lowest model layer based on the starting near-surface aerosol concentration. A similar contribution is not considered for the "ice-friendly" aerosols in the present version of the scheme, i.e., the last term on the RHS of Equation (6) is set to zero;
4. The nucleation of dust particles into ice crystals occurs in the presence of supersaturation with respect to ice. Depending on the relative humidity (RH) with respect to water, condensation, immersion freezing (i.e., ice nucleation by particles immersed in supercooled water) and deposition nucleation (i.e., formation of ice from supersaturated water vapor on an insoluble particle without prior formation of liquid) can occur. These processes are accounted for by the second term on the RHS of Equation (6);
5. The freezing of homogeneously nucleated deliquesced hygroscopic aerosols is also accounted for, with the decrease in N_wfa represented by the second term on the RHS of Equation (5), while the freezing of existing water droplets is parameterized to be more effective in the presence of higher amounts of dust aerosols. Cloud ice sublimation returns the aerosols to N_ifa, the third term on the RHS of Equation (6).
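As a schematic illustration of this bookkeeping (not an excerpt from the WRF code), the snippet below advances the two aerosol number concentrations over one time step using placeholder tendencies for each of the processes listed above.

```python
# Schematic bookkeeping of the aerosol number budgets over one model time step dt.
# Each tendency (in m^-3 s^-1) is a placeholder for the corresponding process
# described in the text; none of these names are actual WRF variables.
def step_aerosol_numbers(n_wfa, n_ifa, dt,
                         scav_wfa, frz_homog, ccn_activation, drop_evap, surf_emis_wfa,
                         scav_ifa, in_nucleation, ice_subl):
    # Equation (5), schematically: sinks are scavenging, homogeneous freezing of
    # deliquesced aerosols and CCN activation; sources are droplet evaporation and,
    # for the climatology-based set-up, a constant surface emission in the lowest layer.
    n_wfa += dt * (-scav_wfa - frz_homog - ccn_activation + drop_evap + surf_emis_wfa)

    # Equation (6), schematically: sinks are scavenging and ice nucleation (condensation,
    # immersion and deposition freezing); the source is cloud-ice sublimation.
    # The surface-emission term is zero for "ice-friendly" aerosols in this scheme version.
    n_ifa += dt * (-scav_ifa - in_nucleation + ice_subl)

    # Number concentrations cannot become negative.
    return max(n_wfa, 0.0), max(n_ifa, 0.0)
```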
In order to switch the ARI effects on, assumptions have to be made regarding the aerosol properties, in particular the single-scattering albedo, asymmetry factor and Angstrom exponent. Three aerosol models are available in WRF: rural, urban and maritime [64,65]. The rural aerosol model (ARI_R) is designed for cases where the contribution from urban and industrial sources is small. It assumes a mixture of 70% water soluble (ammonium, calcium sulphate, organic compounds) and 30% dust-like aerosols. The urban model (ARI_U) is a mixture of 80% rural aerosols and 20% carbonaceous or soot-like aerosols, which are assumed to have the same size distribution as both components of the rural model. As a result of the soot-like particles, the aerosols will be more absorbing [66]. The maritime aerosol model (ARI_M) also consists of two components: sea salt and a continental component assumed to be identical to the rural aerosol but with the very large particles removed, as they will eventually fall out as the air mass moves across water. Hence, the maritime aerosol model will be less absorbing than the default (rural) model. It is important to note that the assumptions made in the different aerosol models may not be in full agreement with the fraction of hygroscopic/non-hygroscopic aerosols at a given grid-point, which varies during the course of the model integration. Nevertheless, the three aerosol models are considered in this study to explore the sensitivity of the WRF predictions to the composition of the aerosol particles.
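To illustrate how such mixing fractions translate into bulk optical properties, the sketch below combines assumed single-scattering albedos for the components with a simple fraction-weighted average; the component values and the linear mixing rule are illustrative assumptions, not the tabulated properties of [64,65].

```python
# Illustrative mixing of aerosol components following the stated fractions:
# rural = 70% water-soluble + 30% dust-like; urban = 80% rural + 20% soot-like.
# The component single-scattering albedos (SSA) below are placeholder values.
SSA = {"water_soluble": 0.96, "dust_like": 0.90, "soot_like": 0.25}  # assumed values

def mix(fractions):
    """Fraction-weighted average of component SSAs (a rough approximation)."""
    return sum(f * SSA[name] for name, f in fractions.items())

ssa_rural = mix({"water_soluble": 0.70, "dust_like": 0.30})
ssa_urban = 0.80 * ssa_rural + 0.20 * SSA["soot_like"]   # more absorbing than rural
print(f"rural SSA ~ {ssa_rural:.2f}, urban SSA ~ {ssa_urban:.2f}")
```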
As an attempt to correct some of the model biases, different configurations of grid (or analysis) nudging [67,68] towards ERA-5 data are considered. They are discussed in Appendix A. In these runs, the horizontal wind components, water vapor mixing ratio and potential temperature perturbation are nudged on a timescale of 1 h above roughly 800 hPa, excluding the PBL. This nudging configuration is preferred so as to allow the model to develop its own structures while at the same time constraining the atmospheric circulation in the free atmosphere [69].
Observational and Reanalysis Datasets
In order to evaluate the best aerosol configuration for an increased model performance, two in situ and satellite-derived datasets are used. Station data collected by the National Center of Meteorology (NCM) are available at 30 automatic weather stations (AWS) and 5 airport stations, given in Figure 1. Air temperature, RH, sea-level pressure, and horizontal wind direction and speed are available every 15 min at the former and 1 h at the latter on 14 August 2013, with the downward shortwave radiation flux at the surface also measured at the location of the AWS. Daily accumulated precipitation is available for all 35 stations. In addition to the surface/near-surface measurements, the 00 and 12 UTC radiosonde profiles at Abu Dhabi's International Airport (24.4331° N, 54.6511° E) from the National Oceanic and Atmospheric Administration Integrated Radiosonde Archive (IGRA; [70,71]) are considered.
The satellite-derived datasets comprise (i) Red Green Blue (RGB) satellite images obtained from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) instrument onboard the Meteosat Second Generation spacecraft [72], and (ii) Infrared Brightness Temperature (IRBT) maps from a combination of European, Japanese and United States geostationary satellites provided by the National Center for Environmental Prediction/Climate Prediction Center [73]. RGB images are available every 15 min on a 0.05° (~5.6 km) grid for the domain 60° S-60° N and 60° W-60° E on the European Organisation for the Exploitation of Meteorological Satellites (https://eoportal.eumetsat.int/, accessed on 22 June 2021) website. These images are processed to display relevant features such as dust, sand and clouds in contrasting colours following [72]. The IRBT maps are at 4 km spatial resolution and 30 min temporal resolution, available from 60° S-60° N on the National Aeronautic and Space Administration's EarthData website (https://disc.gsfc.nasa.gov/datasets/GPM_MERGIR_1/summary, accessed on 6 May 2021).
Besides the listed observational datasets, the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2; [74]) data is also considered in this work. MERRA-2 explicitly accounts for aerosols and their interactions with the climate system, and is used to assess the spatial distribution of aerosols over the UAE on 14 August 2013 given the gaps and missing data in observation-derived products due to the extensive cloud cover. MERRA-2 provides aerosol-related variables such as the AOD on a 0.625° × 0.5° global grid and on an hourly basis. It has been shown to perform well in the Arabian Peninsula when compared to satellite-derived and ground-based measurements [75,76], and is therefore suitable to be used here.
In the verification diagnostics used in this work, D is the discrepancy between the model forecast F and the observations O, while the overbar and σ_X denote the mean and the standard deviation of a quantity X, respectively.
The bias is defined as the mean discrepancy between the WRF predictions and the observations, while the normalized bias (µ) is the ratio of the bias to the standard deviation of the discrepancy, σ_D. The latter is used to assess whether the model biases can be regarded as significant: as explained in [77], if |µ| < 0.5, the contribution of the bias to the Root-Mean-Square-Error is less than roughly 10%, and hence the biases can be deemed not significant. The correlation (ρ) and the normalized error variance (η) are a measure of the phase and amplitude agreement between the observed and modelled signals, respectively, with the two sources of error accounted for in the α diagnostic. For a random forecast based on the climatological mean, ρ = 0 and hence α = 1. A model prediction is therefore considered practically useful if α < 1. The ρ, η and α diagnostics are non-dimensional quantities, symmetric with respect to the observations and forecasts, and applicable to both scalar and vector variables, making them suitable to be used in this work. Further details regarding the listed diagnostics can be found in [77].
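A short sketch of the diagnostics that are fully specified here (bias, normalized bias µ and correlation ρ) is given below; the normalized error variance η and the combined α diagnostic follow the definitions in [77], which are not reproduced in the text, so they are omitted from the sketch.

```python
# Sketch of the verification diagnostics that are fully defined in the text.
import numpy as np

def diagnostics(forecast, obs):
    """forecast, obs: 1-D numpy arrays of matched model and observed values."""
    d = forecast - obs                        # discrepancy D
    bias = d.mean()                           # mean discrepancy
    mu = bias / d.std(ddof=0)                 # normalized bias (|mu| < 0.5 -> not significant)
    rho = np.corrcoef(forecast, obs)[0, 1]    # phase agreement
    # eta and alpha follow the definitions in [77], not reproduced here.
    return bias, mu, rho

# Hypothetical usage with hourly station-mean 2 m temperature series:
# bias, mu, rho = diagnostics(wrf_t2m, obs_t2m)
```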
Description of the Event (14 August 2013)
On 14 August 2013, deep convection and a dusty environment were ubiquitous in the UAE, as seen in Figure 3. The RGB and IRBT maps in the afternoon and evening hours, given in the first two rows, show a rapid flare-up of convection in the local early afternoon hours, which affected mostly the western and central parts of the country. The IRBT values dropped to around 190 K, indicating rather cold cloud tops, a sign of very deep convection [78], with the thick high-level clouds shaded in brown in the RGB images. Such low values of IRBT are more typical of tropical convective activity, such as that seen in tropical disturbances [79], than of the average summertime convection in the UAE [31]. A second but less intense round of convection occurred in the evening to nighttime hours, with isolated convective cells developing over eastern UAE and western Oman in the early to mid-afternoon hours, when convection typically flares up here [31].
Besides the unstable environment, on this day the atmosphere was also rather dusty. The third row of Figure 3 gives the AOD from MERRA-2 reanalysis data. Values in excess of two were seen over the western half of the UAE at 11 UTC (15 LT), decreasing during the afternoon and early evening hours. While these are not unusually high values for this region [40], AODs higher than two are commonly seen during dust storms [80]. Some of the reduction in the AOD may be attributed to transport by the low-level circulation, but the fact that the dusty region overlaps at least partially with the convection region suggests that convection-aerosol interactions have likely taken place.
The 14 August 2013 event was chosen by manually inspecting hourly IRBT and MERRA-2 AOD images for the summertime (June to September) periods for which NCM data were available, and selecting the one where the deepest convection, as given by the lowest IRBT, and the dustiest environment, as given by the highest AOD, co-occurred in the UAE. Figures 4 and 5 show the sea-level pressure, 2 m water vapor mixing ratio, and low-level winds on 14 August 2013 from ERA-5 every 2 h from 08 UTC (12 LT) to 18 UTC (22 LT). The AHL is initially over the UAE and surrounding region, but at 12 UTC it shifts westward, lying over western parts of the country and extending into Saudi Arabia and Qatar, where the minimum sea-level pressure lies. The counterclockwise circulation around the AHL converges with the daytime sea-breeze from the Arabian Gulf and Sea of Oman. This convergence is more evident around 12-14 UTC (16-18 LT), Figure 4c,d, over central and western parts of the country, around the time when convection flared up rapidly (Figure 3a,b,d,e). The low-level convergence weakened after 16 UTC (20 LT), Figure 4e, when both the AHL and the sea-breeze faded away. The convective clouds that developed over eastern UAE were likely triggered by the convergence of the AHL circulation with the sea-breeze from the Sea of Oman and topographically driven flows (cf., Figure 4c,d and Figure 3d). Figure 5 shows that the near-surface air was rather moist over the country on this day, with water vapor mixing ratios typically in the range 15-20 g kg⁻¹. Together with the low-level wind convergence, the large-scale environment was suitable for the occurrence of deep convection. A comparison of the satellite images, Figure 4a,f, with the ITD drawn as a solid white line in the panels of Figure 5, reveals that, at least on this day, the clouds tended to develop around this convergence line. It is interesting to note that the ITD on this day reached southern parts of Iran to the north of the UAE, a behaviour that is expected in the warmer months. As explained in [35], the inland moistening by the sea-breezes from the Arabian Gulf, Sea of Oman and Arabian Sea allows the 15 °C isoline of dewpoint temperature, the metric used to diagnose the position of the ITD, to propagate northwards into the Arabian Gulf, as seen in Figure 5. A comparison of the AOD plots given in Figure 3g-i with the 10 m horizontal wind vectors plotted in Figure 4 indicates that the accumulation of aerosols over western UAE is related to the presence of a closed atmospheric circulation associated with the AHL in the region. The decreasing values of AOD in the evening to nighttime hours are likely due to the advection of cleaner air from the south (cf., Figure 4e,f), as well as due to the washout and clearing of the air after the occurrence of precipitation. As far as the dust emission is concerned, two factors are at play: (i) dust lifted by strong near-surface winds triggered by cold pools and downbursts in association with the deep convection that developed on this day, a well-known mechanism for dust lifting in arid regions [81][82][83]; (ii) strong southerly winds in the early morning, from the combined effect of the AHL and sea-land breeze circulations, with the low-level wind convergence and the highly turbulent winds at the leading edge of the ITD [84,85] aiding the dust-lifting activities (Figure 5a,b).
Figure 6 shows the concentration of water- and ice-friendly aerosols in the lowest model layer for the simulations with the idealized (WRF-IDEAL) and the climatological (WRF-CLIM) aerosol distribution, the former multiplied by a factor of 10 so that the two have the same order of magnitude. The fact that the idealized profile is cleaner than the climatological profile is not surprising: as stated in Section 2.2, the idealized distribution was designed for the continental United States, where the atmosphere is cleaner compared to that in the UAE and surrounding region. In fact, over India and during the summer monsoon, the observed aerosol loading within the boundary layer, as measured at the surface and by aircraft, was found to be roughly 10 times larger than that employed in the idealized profiles in WRF [86]. The spatially uniform aerosol loading at the start of the run in Figure 6a, in line with the way it is coded in the model, contrasts with a heterogeneous pattern in the simulation forced with the seven-year climatological aerosol loading. The higher amounts of water-friendly aerosols (sulphates, sea salt, organic matter) over the Arabian Gulf and of ice-friendly aerosols (mineral dust) over inland areas in Saudi Arabia and Oman are consistent with the fact that the former is typically advected from industrial and urban sites as well as from water bodies by the background northwesterly winds, while the latter has its main source in the Rub' Al Khali desert [40]. Despite differences in the initialization and order of magnitude, the spatial pattern of aerosol loading is similar in the two configurations, with a marked northwest-southeast gradient over the UAE. This can be explained by the near-surface circulation, given in Figure 7a for WRF-CLIM (similar results are obtained for WRF-IDEAL, not shown). A comparison with Figure 7b, same fields but from ERA-5, reveals that the AHL in WRF at 12 UTC, as given by the sea-level pressure, is broader and displaced to the southeast with respect to that in ERA-5. The associated cyclonic circulation acts to slow down the progression of the sea-breeze over central and eastern parts of the country, where the model is drier than the reanalysis dataset, and speed it up over western UAE, where WRF is moister as the daytime sea-breeze is reinforced by the AHL circulation. This explains why, as shown in Figure 6, the higher aerosol concentrations over the Gulf extend well inland in the western half of the country, but are mostly confined to coastal areas elsewhere. Figure 8 gives the vertical profiles, averaged over the UAE, at 00 and 12 UTC for both the WRF-IDEAL and WRF-CLIM simulations. The decrease in aerosol concentration with height is more pronounced in the runs with the climatological profile, and in particular for the ice-friendly aerosols. This is consistent with the fact that dust is primarily present at low elevations as its source is surface emissions in semi-arid/arid regions [40], whereas other aerosol types have varied sources and are more ubiquitous in the troposphere. The diurnal variability is small except at low elevations, below 700 hPa, where the well-mixed daytime boundary layer leads to approximately constant values with height, whereas at night the concentrations are higher just above the surface, as the aerosols are trapped below the low-level nighttime surface-based inversion and in the residual mixed layer above it. This variability is in line with the findings of [40,87].
The aerosol concentration profile shown in Figure 8 resembles the observed profiles measured during dedicated field campaigns [88].
Aerosol Loading
An assessment of the WRF-predicted vertical aerosol profiles against those observed, which may feature multiple dust layers [87][88][89], as well as their composition and optical properties, is not possible due to the lack of observational data. An evaluation of the model-predicted AOD, which is a column integral and gives information on the attenuation of the incoming solar radiation as it goes through the atmosphere, against that estimated from ground-based and satellite assets, also cannot be conducted due to the extensive cloud cover on this day (Figure 3a-c) and the resulting gaps and missing data in the observed estimates (not shown). However, the WRF-predicted AOD can be compared with that of MERRA-2 reanalysis data, as shown in Figure 9. The WRF-5×CLIM-ARI_R-NUDGE simulation, for which the climatological aerosol distribution is multiplied by a factor of 5, gives the best agreement with the MERRA-2 AOD out of all model configurations considered. However, even in this simulation the atmosphere in WRF is slightly dustier, in particular in the afternoon hours, likely due to a lack of precipitation that precludes a washout of the aerosols and a cleaning of the air, as discussed in the next section. In any case, and even though MERRA-2 is taken as a reference for this comparison, it is important to note that, despite the data assimilation, this dataset still has biases when compared to observed measurements, mostly due to missing emissions and/or deficiencies in the parameterization schemes, as noted in [90]. The lack of ground-based measurements on this day, however, precludes an evaluation of the quality of the MERRA-2 forecasts over the UAE.
In any case, and even though MERRA-2 is taken as a reference for this comparison, it is important to note that, despite the data assimilation, this dataset still has biases when
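To make such a comparison concrete, the sketch below estimates the multiplicative factor that best matches a regionally averaged WRF AOD time series to the corresponding MERRA-2 series. The function name, the least-squares criterion and the placeholder numbers are assumptions made for this illustration, not values or code from the study.

```python
# Minimal sketch (not the authors' code): estimate the scaling factor that best
# matches a WRF AOD time series to MERRA-2 over a region of interest.
# Assumes hourly, spatially averaged AOD values have already been extracted from
# both datasets; the numbers below are placeholders, not results from the paper.
import numpy as np

def best_scale_factor(aod_model: np.ndarray, aod_reference: np.ndarray) -> float:
    """Least-squares multiplicative factor k minimizing ||k * model - reference||."""
    return float(np.dot(aod_model, aod_reference) / np.dot(aod_model, aod_model))

if __name__ == "__main__":
    hours = np.arange(24)
    aod_wrf_clim = 0.35 + 0.05 * np.sin(hours / 24 * 2 * np.pi)   # placeholder WRF-CLIM AOD
    aod_merra2 = 5.2 * aod_wrf_clim + 0.1 * np.random.default_rng(0).normal(size=24)
    k = best_scale_factor(aod_wrf_clim, aod_merra2)
    print(f"Best multiplicative factor: {k:.1f}")  # a value near 5 would support the 5x scaling
```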
ARI on Idealized and Climatological Aerosol Distributions
In order to investigate the impact of switching on the ARI on the simulations with the idealized and climatological aerosol distributions, Figure 10 shows the WRF bias, with respect to the hourly station data, for air temperature, water vapor mixing ratio, horizontal wind speed and surface downward shortwave radiation flux, averaged over all 35 NCM stations on 14 August 2013. The scores averaged over all hours of the day are given in Table 3.
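As a rough illustration of how the station-averaged diagnostics behind Figure 10 can be assembled, the following pandas-based sketch computes the hourly model-minus-observation bias averaged over all stations. The DataFrame layout and the column names (station, time, t2m) are assumptions made for the example, not the processing code used by the authors.

```python
# Minimal sketch (illustrative only): hourly model bias averaged over all stations,
# as used for Figure-10-style diagnostics. Column names are assumed.
import pandas as pd

def hourly_mean_bias(obs: pd.DataFrame, model: pd.DataFrame, variable: str) -> pd.Series:
    """Model-minus-observation bias for `variable`, averaged over stations,
    as a function of the hour of day."""
    merged = model.merge(obs, on=["station", "time"], suffixes=("_mod", "_obs"))
    merged["bias"] = merged[f"{variable}_mod"] - merged[f"{variable}_obs"]
    return merged.groupby(merged["time"].dt.hour)["bias"].mean()

# Example usage with two DataFrames holding columns: station, time (datetime), t2m
# bias_t2m = hourly_mean_bias(obs_df, wrf_df, "t2m")
# daily_mean_bias = bias_t2m.mean()
```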
As expected, when the ARI is switched on there is a decrease in the shortwave radiation flux reaching the surface (Figure 10d), which is more pronounced for the run with the climatological distribution owing to the higher aerosol loading. Compared to the simulations where it is switched off, the maximum reduction in the radiation flux is 10 W m −2 for the run with the idealized aerosol distribution and ~40 W m −2 for the run with the climatological aerosol distribution, with daily averaged values of 3 W m −2 and 20 W m −2, respectively. Despite the small decrease in the downward shortwave radiation flux, however, WRF continues to largely overestimate the observed values, which can be attributed to a lack of clouds in the model, a bias that has been noted by several authors [48,91,92]. Given the lack of clouds, the ARI effects will prevail over the ACI effects, and hence the model predictions for simulations WRF-IDEAL, WRF-IDEAL-ARI_R, WRF-CLIM and WRF-CLIM-ARI_R will be comparable, as the radiative impacts of switching on the ARI are small. This can be seen in fields such as the air and surface temperatures, for which the decreases are within 0.5 K and 1 K, respectively, when the ARI effects are activated. These changes are comparable to those reported by other authors for a similar variation in the surface radiation fluxes [22,93].
In all simulations, WRF is much colder than observations, with biases of up to 7 K and a daily average around 2.5 K. This has been reported in the literature [46,48], with the discrepancy more pronounced in the warmer months and not being restricted to the Arabian Desert [94]. It may arise from deficiencies in the physical parameterization schemes, in particular in the LSM and radiation schemes, and/or an incorrect representation of the atmospheric composition. Several attempts have been made to correct for this bias, such as employing different model configurations [34,95] and input data [19], tuning hard-coded parameters [46,96], and using more realistic lower boundary conditions [48]. The sensitivity experiments described in Figure 10 suggest that having a more realistic representation of the aerosol loading does not alleviate the cold bias either, with differences within ±0.15 K for the daily averaged air temperature (Table 3). It is then possible that the referred cold bias is due to a non-linear interaction of different model errors. Besides the cold temperatures (Figure 10a), the near-surface wind speed is also too strong when compared to that observed (Figure 10c). The two biases can be related, as too strong turbulent mixing will lead to cooler and drier near-surface conditions [97], the latter consistent with the negative mixing ratio biases of up to −4.5 g kg −1 (Figure 10b) and a daily average around −2.2 g kg −1 (Table 3). The stronger near-surface winds in the model are likely a result of an incorrect representation of its subgrid-scale fluctuations and deficiencies in the surface drag parameterization, as optimizing relevant parameters such as the roughness length does not seem to alleviate the problem [96]. Changing the aerosol loading by an order of magnitude only leads to differences of up to ±0.2 m s −1 in the daily mean wind speed (Table 3), or less than 6% of the daily averaged values. In summary, the major impact of switching on the ARI is a decrease in the downward shortwave radiation flux, which reaches up to 40 W m −2 when the more opaque climatological distribution is employed. It is interesting to note that, for all fields given in Figure 10, the magnitude of the WRF biases exceeds that of the response to changes in the aerosol loading and activation of the ARI.
The verification diagnostics when all hours of the day and 35 weather stations are considered are given in Table 3. In line with Figure 10, the scores are roughly comparable for the four simulations. Except for sea-level pressure, the α scores are always less than 1, indicating that the model predictions can be regarded as skilful. For all variables shown, phase errors dominate over magnitude errors, as η is typically larger than 0.95, while ρ is, at times, negative. A similar conclusion was reached by [92] in the analysis of cold season and warm season convective events in the UAE. The lack of clouds and the drier environment in the model will impact the diurnal cycle of variables such as air temperature and mixing ratio, which exhibit higher α values when compared to the shortwave radiation flux, for which the diurnal variability is rather well captured by WRF, with both ρ and η in excess of 0.9. The poorer scores for sea-level pressure are consistent with the incorrect simulation of the AHL (cf., Figure 7a,b), both in terms of its magnitude and temporal variability. On the other hand, the lower values of ρ (and hence higher values of α) for the wind vector are a reflection of its higher temporal and spatial variability, which are rather difficult to model in the UAE, as noted by [92,96]. Except for the water vapor mixing ratio, the absolute value of the normalized bias is generally higher than 0.5 for the four WRF simulations, meaning that the WRF tendency to under-predict the air temperature and overestimate the strength of the near-surface wind can be regarded as significant.

Figure 11 shows the bias in the temperature and RH profiles at the location of Abu Dhabi's airport with respect to radiosonde data at 00 and 12 UTC on this day. In order to extract this quantity, first the observed and model-predicted data are interpolated in log-pressure coordinates to a pre-defined set of pressure levels from 1000 to 100 hPa at which the observational data are typically available, before the difference between each set of WRF and observed profiles is taken. The WRF temperature biases are typically within ±2 K, having the largest amplitudes between 950 and 800 hPa at 00 UTC. The magnitude of the biases decreases from a peak of about 3 K for WRF-IDEAL-ARI_R to 1.5 K for WRF-CLIM-ARI_R, with the warming consistent with the increased dust loading (Figures 8 and 9). A smaller warming tendency of up to 0.5 K is also seen when the ARI effects are switched on, in particular when the climatological aerosol loading is used (WRF-CLIM vs. WRF-CLIM-ARI_R). The temperature biases at 12 UTC have a reduced magnitude, likely because of the well-mixed vertical profile in the lower layers, which leads to a roughly uniform aerosol loading below 700 hPa (Figure 8). The RH vertical profile in WRF is much drier than in observations, in particular at 12 UTC, in line with the less moist near-surface environment. The tendency of the model to generate drier conditions at the site in the summer season was reported by [48] over the UAE and [98] over Qatar. Besides deficiencies in the physics schemes, the drier environment can be explained by a lack of clouds in WRF, which is consistent with the reduced amounts of precipitation generated by the model (Table 3) and the cooler temperature profile (cf., Figure 11a), and has been found to be the case in summertime convective events in the region [19].
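The sketch below illustrates one way to implement the log-pressure interpolation used for the Figure 11 bias profiles: both the observed and the model-predicted profiles are interpolated to a fixed set of pressure levels before their difference is taken. The target levels and the function names are assumptions for the example, not details taken from the study.

```python
# Minimal sketch (assumptions, not the authors' code): interpolate sounding and model
# profiles to a common set of pressure levels in log-pressure coordinates and take
# their difference, as done for the Figure-11-style bias profiles.
import numpy as np

TARGET_LEVELS_HPA = np.array([1000, 925, 850, 700, 500, 400, 300, 250, 200, 150, 100], dtype=float)

def to_common_levels(pressure_hpa: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Interpolate a profile to TARGET_LEVELS_HPA using log-pressure as the vertical coordinate."""
    order = np.argsort(pressure_hpa)  # np.interp requires an increasing abscissa
    return np.interp(np.log(TARGET_LEVELS_HPA), np.log(pressure_hpa[order]), values[order])

def bias_profile(p_model, t_model, p_obs, t_obs):
    """Model-minus-observation difference on the common pressure levels."""
    return to_common_levels(p_model, t_model) - to_common_levels(p_obs, t_obs)
```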
As an attempt to correct for the aforementioned model biases, different configurations of grid nudging were tested, as discussed in Appendix A. In the subsequent model simulations, grid nudging is employed in the two outermost nests, as by and large it helps to improve the model performance (cf., WRF-CLIM-ARI_R with WRF-CLIM-ARI_R-NUDGE scores in Table 3).

On this day, the sum of the observed precipitation at all stations was 56.20 mm, most of which fell over southern parts of the country (Figure 12a). However, the model biases for runs WRF-IDEAL, WRF-IDEAL-ARI_R, WRF-CLIM and WRF-CLIM-ARI_R ranged between −42 and −51 mm, as shown in Table 3, indicating that less than a quarter of the observed precipitation is captured by WRF. As seen in Figures 12a-e and 13a-e, most of the rain and clouds in the model develop to the south of the UAE, due to a southward shift in the region of low-level wind convergence, as a result of a broader and stronger AHL. This shift can be seen by comparing Figure 7a,b: at 12 and 18 UTC, in ERA-5 the low-level convergence is mostly over central UAE, while in WRF it is further south and takes place later in the day, as the southerlies are weaker due to a more extensive thermal low. It is interesting to note that using the climatological aerosol loading leads to slightly drier conditions at the location of the NCM stations, by 10-11 mm (Table 3), even though over the whole domain it rains more (Figure 12a-e) due to enhanced convection over northeastern Saudi Arabia (Figure 13a-e). The reduction in precipitation over the UAE in WRF-CLIM and WRF-CLIM-ARI_R compared to WRF-IDEAL and WRF-IDEAL-ARI_R may be attributed to the drier conditions (Table 3), as well as to the stabilizing effect aerosols have on the environment, with a heating of the aerosol layer and a cooling of the surface below [99]. However, aerosol precipitation effects are known to be highly sensitive to aerosol properties [100]. The drier environment in WRF-CLIM and WRF-CLIM-ARI_R is mostly over western UAE, where there is additional precipitation in WRF-IDEAL and WRF-IDEAL-ARI_R (Figure 12b-e), and is due to a late arrival of the sea-breeze that arises from a southeasterly shift in the position of the AHL (not shown). The changes in the position and strength of the AHL with the aerosol loading are discussed in more detail in Section 4.2.2. Over the whole domain, however, WRF-CLIM-ARI_R is wetter than WRF-IDEAL, WRF-IDEAL-ARI_R and WRF-CLIM. In fact, while at the location of the weather stations the impact of switching on the ARI on the model-predicted precipitation is rather small, generally less than 1 mm (Table 3), when the climatological distribution is used it leads to a roughly 47% increase in the domain-wise rainfall (Figure 12d,e). This arises from deeper convection, as shown by the colder cloud tops in Figure 13d,e as opposed to Figure 13b,c, with the stronger updrafts (Figure 14) leading to a higher fraction of aerosols being activated [49]. Figure 14 shows the maximum vertical velocity in the column, and the pressure level at which it is predicted, for runs WRF-CLIM and WRF-CLIM-ARI_R. In the latter the vertical velocity has a larger magnitude (56 m s −1 vs.
31 m s −1 ), peaking in both at about 160 hPa, a sign of overshooting convection [101]. These findings are in line with the results of [17], who found that switching on the ARI effects delays the onset of convection due to the dust-stabilizing effects, but leads to more active cells later in the day, with an overall increase in rainfall.
The results in Figures 10, 12 and 13 and Table 3 indicate the model has biases in the simulation of the meteorological conditions on this day. As noted before, changes in the model physics and even the use of interior nudging in the outer grids failed to correct the major biases such as the surface cold bias, excessive downward shortwave radiation and stronger wind speeds. Despite this, however, the current WRF set up can be used to explore the sensitivity of the model forecasts to the aerosol loading and aerosol optical properties, which is the purpose of this study. This is carried out in Sections 4.2.2 and 4.2.3, respectively.
Sensitivity to Linear Scaling of Aerosol Loading
In this subsection, the impact of the aerosol loading on the WRF predictions of convection over the UAE is analysed. Figure 15 shows the changes in the surface radiation fluxes for runs WRF-5×CLIM-ARI_R-NUDGE and WRF-10×CLIM-ARI_R-NUDGE with respect to WRF-CLIM-ARI_R-NUDGE. The reduction in the downward shortwave radiation flux increases with the aerosol loading, with magnitudes comparable to those estimated by [22] for a study over West Africa. On the other hand, the impact on the longwave radiation flux is much smaller, with hourly changes in the net flux of up to +62 W m −2 and +129 W m −2 for runs WRF-5×CLIM-ARI_R-NUDGE and WRF-10×CLIM-ARI_R-NUDGE with respect to WRF-CLIM-ARI_R-NUDGE, and daily averaged values of +25 W m −2 and +51 W m −2, respectively. These changes are smaller by a factor of two than those estimated by [22]. This may be explained by the aerosol properties used in the model, to which the longwave radiative forcing is known to be highly sensitive [102,103]. As seen in Figure 15b, the downward longwave radiation flux exhibits a change of less than ±10 W m −2, as this field is mostly a function of the atmospheric emissivity and cloud cover, both of which vary less than the surface temperature [104]. The upward longwave radiation flux, on the other hand, is lower for higher aerosol loadings as the surface temperature drops, but the maximum reduction is still a factor of two to three smaller than the decrease in the downward shortwave radiation flux. This is because the temperature does not vary much in absolute values, as it is estimated from the surface energy budget, with the different terms adjusting to a varying downward shortwave radiation flux [55]. As for the shortwave radiation flux, the changes in the surface longwave radiation fluxes scale roughly linearly with the aerosol loading, in line with the findings of [105] for a field campaign in the Cape Verde islands in September 2006.

The impact of the aerosol loading on the near-surface variables is summarized in Table 3. The main difference between runs WRF-CLIM-ARI_R-NUDGE, WRF-5×CLIM-ARI_R-NUDGE and WRF-10×CLIM-ARI_R-NUDGE is in the downward shortwave radiation flux, with a bias of about +74 W m −2, +9 W m −2 and −46 W m −2, respectively. The other variables given in Table 3 show much smaller relative changes between these runs. In fact, the 2 m temperature only decreases by about 0.5 K when the aerosol loading is increased by a factor of 10, a variation similar to that reported by [22] when the mineral dust emissions are doubled. The surface temperature, on the other hand, is roughly 6 K colder in WRF-10×CLIM-ARI_R-NUDGE compared with WRF-CLIM-ARI_R-NUDGE (not shown). In the surface layer scheme, the 2 m temperature is obtained from the surface temperature, the difference between the temperature at the first model level and the surface temperature, and the similarity function for heat [106]. The smaller change in air temperature may be attributed to the decrease in the sensible heat flux, by about 32 W m −2, which leads to small changes in the temperature at the first model level and therefore in the 2 m temperature. As the NCM stations are spread out over the UAE (Figure 1), and as in some regions there is an increase in air temperature at certain times during the day due to drier conditions (Figure 16a), on average the variation will be small.
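A minimal way to check the quoted near-linear scaling is to fit a straight line to the flux changes of the 1×, 5× and 10× runs. The sketch below does this with placeholder numbers; the actual values would come from the model output, so the magnitudes shown are assumptions rather than results.

```python
# Minimal sketch (illustrative only): check how the change in a daily averaged surface
# flux scales with the aerosol-loading multiplier across the 1x, 5x and 10x runs.
# The flux values below are placeholders, not numbers from the study.
import numpy as np

loading_factor = np.array([1.0, 5.0, 10.0])
delta_sw_down = np.array([0.0, -65.0, -120.0])   # change w.r.t. the 1x run (W m-2), placeholder

slope, intercept = np.polyfit(loading_factor, delta_sw_down, deg=1)
residual = delta_sw_down - (slope * loading_factor + intercept)
print(f"slope = {slope:.1f} W m-2 per unit loading factor, "
      f"max residual = {np.abs(residual).max():.1f} W m-2")
```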
The increase in the aerosol loading leads to warmer temperatures in the aerosol layer, with this being particularly evident at 12 UTC (Figure 11a), in particular below 700 hPa where the concentration of aerosols is higher (Figure 8b); the WRF temperature biases increase from <0.5 K in WRF-CLIM-ARI_R-NUDGE to up to 3 K in WRF-10×CLIM-ARI_R-NUDGE, and are accompanied by a drying of the layer by up to 15% (Figure 11b). As the aerosol loading is increased, the model-predicted precipitation decreases. This is true at the location of the NCM stations (Table 3), and is easily seen in the accumulated precipitation maps (Figure 12f,g,j) with a domain-wise reduction of roughly 1% and 16% in WRF-5×CLIM-ARI_R-NUDGE and WRF-10×CLIM-ARI_R-NUDGE with respect to WRF-CLIM-ARI_R-NUDGE, respectively. This can be explained by the aerosols' impact on the atmospheric circulation. An inspection of Figure 16 reveals that in WRF-10×CLIM-ARI_R-NUDGE the AHL is displaced to the east, with the associated circulation leading to a deeper inland penetration of the moist Arabian Gulf air over western UAE and adjacent Saudi Arabia, while the southeasterly winds ahead of it slow down the sea-breeze progression and lead to drier conditions over parts of central and eastern UAE. Despite an aerosol loading that is 10 times higher, the drier environment here, with differences in the water vapor mixing ratio of more than 10 g kg −1 , allows for warmer air temperatures, spreading into parts of the Gulf at 18 UTC. However, elsewhere it is colder in WRF-10×CLIM-ARI_R-NUDGE when compared to WRF-CLIM-ARI_R-NUDGE, in particular at 18 UTC. The reduced spatial extent and amount of precipitation in WRF-10×CLIM-ARI_R-NUDGE arises from an eastward shift in the region of low-level wind convergence, into an area where the atmosphere is drier. Figure 16 highlights the importance of the aerosols' effects on the model-predicted circulation (and consequently on the precipitation) which are more prominent for higher aerosol loadings, a finding also reached by [107] for simulations over northern India in the 2008 summer monsoon. Besides the suppressed rainfall, there is also a delay in the development of convective clouds as the aerosol loading is increased, as seen by comparing Figure 13f,g,j.
Sensitivity to Aerosol Properties
In Section 4.2.2, the impact of the aerosol loading on the surface fluxes and atmospheric circulation is investigated. Here, the focus will be on the aerosol properties, with the aerosol loading in all simulations corresponding to that of the climatological distribution scaled by a factor of 5, which has been found to give the best agreement with the MERRA-2-predicted AOD averaged over the UAE (Figure 9). The results are summarized in Figures 17 and 18 and in Table 3.
As stated in Section 2.2, and due to the presence of carbonaceous particles, the urban aerosol model (WRF-5×CLIM-ARI_U-NUDGE) is more absorbing than the rural (default) model (WRF-5×CLIM-ARI_R-NUDGE), while the maritime aerosol model (WRF-5×CLIM-ARI_M-NUDGE) is less absorbing, as the larger particles are removed and some of the rural aerosols are replaced with sea salt. The results in Figure 17 show that a change in the aerosol composition has a larger impact on the surface radiation fluxes than an increase in the aerosol loading (cf., Figure 15). In particular, when the urban aerosol model is used, the downward shortwave radiation flux is cut by up to 360 W m −2, with a daily average reduction of around 114 W m −2, a larger radiative effect than when the aerosol loading is multiplied by a factor of 10. The important role played by the aerosol composition has also been highlighted by [66] for WRF simulations over Borneo. In WRF-5×CLIM-ARI_U-NUDGE, the reduction in the upward longwave radiation flux with respect to WRF-5×CLIM-ARI_R-NUDGE exceeds 100 W m −2, and is a result of the much colder surface, with the daily averaged surface temperature dropping by about 7 K (not shown) and the air temperature by 0.8 K (Table 3). The radiation absorbed by the aerosols during the day is emitted at night, and in the urban aerosol model the aerosols are so absorbing that the surface downward longwave radiation flux in WRF-5×CLIM-ARI_U-NUDGE is up to 12 W m −2 higher than in WRF-5×CLIM-ARI_R-NUDGE at night (Figure 17b).

The impact of changing aerosol properties on the temperature and RH vertical profiles is given in Figure 11. The most noteworthy difference between simulations WRF-5×CLIM-ARI_R-NUDGE and WRF-5×CLIM-ARI_U-NUDGE is the heating around 700-750 hPa and the cooling below 800 hPa in simulation WRF-5×CLIM-ARI_U-NUDGE at 12 UTC, with magnitudes up to +1.5 K and −3.5 K, respectively. As the urban aerosols are more absorbing, and most are below 700 hPa at this time of the day (Figure 8b), there is a strong heating at the top of the layer and a cooling at lower levels, as the vast majority of the incoming solar radiation is absorbed. This is in contrast to when the aerosol loading is increased, where the most pronounced warming occurs in the lowest part of the layer.

The impact of making the aerosols more absorbing on the atmospheric circulation is presented in Figure 18. When carbonaceous aerosols are added, the AHL is weaker (note the anticyclonic circulation in the 10 m winds at 06 UTC and to a lesser extent at 12 UTC) and broader, as evidenced by the negative sea-level pressure anomalies over the Arabian Gulf and Oman, in WRF-5×CLIM-ARI_U-NUDGE when compared with WRF-5×CLIM-ARI_R-NUDGE. This is consistent with the referred pronounced reduction in the downward shortwave radiation flux and resulting colder surface and air temperatures (Table 3). As the land temperatures become more comparable to the sea surface skin temperatures over the Gulf, the sea-level pressure minimum extends into adjacent areas, which allows the AHL to expand. As a result of the modifications to the AHL, the excessive moistening over western UAE is reduced, and increased over eastern and southeastern parts of the country. The interaction between the associated cyclonic circulation and the sea-breeze from the Sea of Oman and Arabian Gulf leads to a region of low-level wind convergence here where, due to a moister environment, the model predicts precipitation (Figure 12h).
WRF-5×CLIM-ARI_U-NUDGE is the wettest simulation over the UAE, with roughly 35% of the observed precipitation at the location of the NCM stations captured by the model (Table 3). However, a comparison of Figure 13g,h reveals that the rainfall falls from shallower clouds, with deep convection virtually absent in this simulation. The weakening of the AHL also brings it closer to that given by ERA-5, Figure 7b.
When the maritime aerosol model is used, on the other hand, there is a small increase in the downward shortwave radiation flux by up to 75 W m −2, or by ~22 W m −2 on a daily averaged scale, with the surface temperature at the location of the NCM stations higher by about 1 K (not shown). The AHL is slightly weaker and smaller in size in this run (Figure 18b), with the changes in sea-level pressure mostly within 1 hPa, whereas in WRF-5×CLIM-ARI_U-NUDGE in some regions they exceed 2 hPa. As a result, the precipitation and the clouds shift southwards with respect to those in WRF-5×CLIM-ARI_R-NUDGE (Figure 12g,i and Figure 13g,i), with less rainfall accumulated at the location of the NCM stations (Table 3).
Discussion and Conclusions
In this manuscript, the Weather Research and Forecasting (WRF) model is used to investigate the role of aerosol loading and properties in a dusty summertime convective event in the United Arab Emirates (UAE), which occurred on 14 August 2013. This convective event was triggered by the low-level convergence of the cyclonic circulation associated with the Arabian Heat Low (AHL), located over western UAE, and the sea-breeze from the Arabian Gulf and Sea of Oman. This was also a rather dusty day in the UAE, with Aerosol Optical Depths (AODs) in excess of two. An analysis of reanalysis data revealed that two factors played a role in the dust-lifting activities on this day: (i) cold pools and downbursts, which occurred in association with the convective activity in the local afternoon and evening hours, and (ii) strong near-surface winds along the leading edge of the Intertropical Discontinuity (ITD) earlier in the day.
The main findings of this work are as follows:
1. Two aerosol distributions are considered in this study: an idealized distribution, set up for the continental United States, and a climatological profile, based on a 7-year output of a general circulation model. The best agreement with the MERRA-2 AOD is found when the climatological values are multiplied by a factor of 5, in line with the dustier atmosphere during this event.
2. For the simulations with the idealized and climatological aerosol distributions, when the aerosol-radiation interaction (ARI) effects are switched on, the daily averaged surface downward shortwave radiation flux is reduced by 3 W m −2 and 20 W m −2, respectively, leading to changes in the surface temperature within 1 K and in the air temperature within 0.5 K. Activating the ARI effects when the climatological aerosol loading is used leads to a roughly 47% increase in the domain-wide precipitation, as the convective cells are more active, and the stronger updrafts increase the fraction of activated aerosols.
3. WRF has a cold bias over the UAE, which is not alleviated when interior nudging in the outermost or two outermost grids is employed. While the skill scores of the innermost nest improve, in particular when interior nudging is applied to the two outermost grids, the cold bias in the 2.5 km grid persists. This is because a change in the atmospheric circulation, in particular in the position of the AHL, leads to increased precipitation over the UAE and locally colder temperatures, which offset the higher temperatures that arise from more accurate boundary conditions.
4. The downward and upward shortwave and the upward longwave radiation fluxes are found to decrease linearly as the aerosol loading is increased. As the aerosol loading goes up, the AHL shifts eastwards, with the low-level wind convergence taking place in a drier region, resulting in lower precipitation amounts falling in a more spatially confined area. In addition, the onset of convection is also delayed.
5. When 20% of the aerosols are replaced with more absorbing (carbonaceous) particles, the roughly 87 W m −2 decrease in the surface net shortwave radiation flux is comparable to the drop when the aerosol loading is augmented by a factor of 10. This stresses that the aerosol composition plays a role as important as its amount in the surface radiative fluxes, at least for the range of values considered here.
Even though, in a comparison with observed measurements, no simulation clearly outperformed another, the sensitivity experiments highlighted aspects of the experimental setup that have to be carefully considered for aerosol-related simulations in hyperarid regions adjacent to major aerosol sources such as the UAE:
1. When accounting for the observed aerosol loading, using a climatology-based distribution is preferable to an idealized distribution, as it can improve the representation of deep convection.
2. Even in the short term, such as 2-day simulations, the fields in the interior of the WRF nests can be substantially different from those in the input dataset. Employing nudging in the outer nests is preferable to only applying it in the outermost nest or not doing it altogether, as it helps to at least partially correct some of the WRF biases.
3. It is vital to accurately represent the properties of the observed aerosols in the model, more so than the amount, provided the order of magnitude is in line with that observed.
The representation of ARI and aerosol-cloud interaction (ACI) effects in the model still needs to be further refined, in particular with respect to the aerosol optical properties and size distribution. This can be achieved through additional studies that combine both in situ measurements (such as aerosol concentration profiles from aircraft measurements; [89]) and numerical modelling. An extension of this work would be to investigate whether similar findings are reached for summertime convective events that occur on the eastern side of the UAE, for which the AHL plays a reduced role in the triggering of the convective clouds [19]. It is also of interest to further explore the interaction between the ARI and ACI effects and the background meteorological state. This can be achieved through the piggybacking methodology [108], where two sets of thermodynamic variables, one coupled with the model dynamics and another applied diagnostically (i.e., driven by the flow but not feeding back into it), are considered.

Acknowledgments: The authors thank the NCM for providing the weather station and air quality observations over the UAE, under an agreement with clauses for non-disclosure of data. Access to these data is restricted and readers should request them by contacting <EMAIL_ADDRESS>. We would also like to thank the UAE NCM for kindly providing radiosonde data at Abu Dhabi's International Airport through the National Oceanic and Atmospheric Administration Integrated Global Radiosonde Archive's website (https://www.ncei.noaa.gov/products/weather-balloon/integratedglobal-radiosonde-archive, accessed on 8 August 2021). The Spinning Enhanced Visible and Infrared Imager data were extracted from the European Organisation for the Exploitation of Meteorological Satellites' website (https://eoportal.eumetsat.int/, accessed on 8 August 2021); the Infrared Brightness Temperature data were downloaded from the National Aeronautics and Space Administration's website (https://disc.gsfc.nasa.gov/datasets/GPM_MERGIR_1/summary, accessed on 8 August 2021), as was the case for the MERRA-2 data (https://disc.gsfc.nasa.gov/datasets?project=MERRA-2, accessed on 8 August 2021); the ERA-5 data are available on the European Centre for Medium-Range Weather Forecasts' Copernicus website (https://cds.climate.copernicus.eu/, accessed on 8 August 2021). The authors also wish to acknowledge the major contribution of Khalifa University's high-performance computing and research facilities to the results of this research. We are also grateful to the four anonymous reviewers of this work for their detailed and insightful comments and suggestions, which helped to significantly improve the quality of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Sensitivity to Nudging Formulation
WRF has a considerable cold bias over hyperarid regions, which is not restricted to the UAE. However, when ERA-5 data, used to force the model, are compared with station data, such a cold bias is much reduced; it is mostly within 1 K and with a maximum value of 2.7 K, less than half of the peak WRF bias ( Figure 10). As attempts to address this issue by modifying the WRF configuration have not been successful [48,95,96], interior nudging towards ERA-5 was applied to the outermost and two outermost grids in an attempt to correct the aforementioned model biases. As noted in Section 2.2, the fields nudged include the water vapor mixing ratio, temperature, and horizontal wind components above 800 hPa and on a timescale of 1 h, excluding the PBL. Figure A1 shows near-surface atmospheric fields for the run with the climatological aerosol loading and without interior nudging, WRF-CLIM-ARI_R, and the difference between the two simulations with interior nudging and this control run.
When interior nudging is employed in the 22.5 km and 7.5 km grids, WRF-CLIM-ARI_R-NUDGE, the model predictions in the 2.5 km grid are generally more skilful when compared with the run where no interior nudging is applied (Table 3) or when it is restricted to the 22.5 km grid (not shown), as the output of the 7.5 km grid is used to generate boundary conditions for the innermost nest. In particular, a comparison of Figures A1a,c and 7b reveals that the near-surface fields in the 2.5 km grid are corrected towards those in ERA-5, despite the fact that the interior nudging is only applied above 800 hPa and in the outer grids. As an example, the atmosphere over central and western UAE is moister at 06 UTC and over the UAE it is generally warmer as well; the minimum sea-level pressure is shifted eastwards at this time, closer to that in ERA-5; at 12 and 18 UTC, the sea-level pressures are higher in WRF-CLIM-ARI_R-NUDGE compared with WRF-CLIM-ARI_R. These tendencies are also present when nudging is restricted to the outermost grid but are of a smaller magnitude, as the ERA-5 signal is likely weakened by the lack of interior nudging in the intermediate grid. These results are consistent with the findings of [69], who concluded that employing analysis nudging in the interior of 30 km and 10 km grids of a three-nest simulation leads to more accurate predictions in the 2 km innermost grid compared to when interior nudging is restricted to the 30 km grid. Table 3 shows that in WRF-CLIM-ARI_R-NUDGE, the aforementioned cold bias is slightly reduced, albeit by only 0.01 K on a daily averaged scale. This is because WRF also generates more precipitation, which leads to locally colder temperatures (cf., Figure A1c). In both nudging simulations, the AHL is displaced to the east with respect to WRF-CLIM-ARI_R, in particular when nudging is employed in the two outermost grids, with the low-level convergence of the associated cyclonic circulation with the sea-breeze from the Arabian Gulf leading to increased rainfall over central and eastern UAE (Figure 12e,f). On the backside of the AHL, the enhanced moisture advection from the Arabian Gulf augments the precipitation over southwestern UAE and adjacent Saudi Arabia, as evidenced by the deeper convection in the region (Figure 13e,f). Over northeastern UAE, on the other hand, the southeasterly winds from the AHL bring in drier air from the desert and weaken the moistening effect of the sea breeze from the Sea of Oman and Arabian Gulf, leading to a reduction in the 2 m water vapor mixing ratio by more than 10 g kg −1 at some sites in WRF-CLIM-ARI_R-NUDGE. As a result, the averaged bias of this field at the location of the NCM stations increases slightly from −2.32 g kg −1 in simulation WRF-CLIM-ARI_R to −2.81 g kg −1 in WRF-CLIM-ARI_R-NUDGE. The air temperature, sea-level pressure, downward shortwave radiation and precipitation scores, on the other hand, are higher for WRF-CLIM-ARI_R-NUDGE compared with WRF-CLIM-ARI_R (Table 3). A marginal improvement is also seen in the vertical profiles of temperature and RH with respect to the Abu Dhabi sounding data ( Figure 11). With respect to WRF-CLIM-ARI_R (cyan curve), in WRF-CLIM-ARI_R-NUDGE (dark green curve) there is a slight reduction in the biases; for example, note the decrease in the air temperature biases around 500 hPa and 850-950 hPa at 00 UTC and between 150 and 350 hPa at 12 UTC by up to 1 K, and in the RH biases between 550 and 700 hPa at 12 UTC by up to 10%.
In summary, while the application of interior nudging in the outermost or two outermost grids generally improves the model performance, in line with the findings of other studies, in some regions (e.g., northeastern UAE) it may have detrimental effects, due to its impact on the atmospheric circulation. Nevertheless, simulation WRF-CLIM-ARI_R-NUDGE is preferred to WRF-CLIM-ARI_R, as per the scores given in Table 3, with this nudging configuration recommended for summertime convection simulations in this region.

| 15,387.4 | 2021-12-16T00:00:00.000 | ["Environmental Science", "Physics"] |
Multiple instance learning of Calmodulin binding sites
Motivation: Calmodulin (CaM) is a ubiquitously conserved protein that acts as a calcium sensor, and interacts with a large number of proteins. Detection of CaM binding proteins and their interaction sites experimentally requires a significant effort, so accurate methods for their prediction are important. Results: We present a novel algorithm (MI-1 SVM) for binding site prediction and evaluate its performance on a set of CaM-binding proteins extracted from the Calmodulin Target Database. Our approach directly models the problem of binding site prediction as a large-margin classification problem, and is able to take into account uncertainty in binding site location. We show that the proposed algorithm performs better than the standard SVM formulation, and illustrate its ability to recover known CaM binding motifs. A highly accurate cascaded classification approach using the proposed binding site prediction method to predict CaM binding proteins in Arabidopsis thaliana is also presented. Availability: Matlab code for training MI-1 SVM and the cascaded classification approach is available on request. Contact: <EMAIL_ADDRESS> or <EMAIL_ADDRESS>
INTRODUCTION
Calmodulin (CaM) is an intracellular calcium sensor protein that interacts with a large number of proteins to regulate their biological functions and exhibits sequence conservation across all eukaryotes (Bouche et al., 2005). Ca 2+ plays a very important role in many cellular functions ranging from fertilization and cellular division to neuronal spiking (Reddy et al., 2011). Due to the importance of calcium signaling in cells, identifying proteins that bind CaM and determining the location of the CaM binding site in them can help in gaining a better understanding of cellular function in general, and the role of calcium in different cellular processes in particular. This article presents a highly accurate computational approach that can identify the location of a CaM binding site in a protein solely on the basis of its amino acid sequence, helping avoid the significant effort of performing such experiments in the lab (Reddy et al., 2011). Our approach uses sequence information alone, which ensures its wider applicability in comparison to methods that rely on structural modeling (Zhou and Qin, 2007).
CaM binding sites are known to be contiguous in sequence, often occurring through an amphiphilic alpha helix (O'Neil and DeGrado, 1990). This makes CaM binding site prediction amenable to a sliding-window classification approach, as applied in recent work (Radivojac et al., 2006; Hamilton et al., 2011). The method by Radivojac et al. uses a hierarchical neural network classifier trained on the basis of amino acid properties averaged over a fixed-size window. Hamilton et al. showed that a simple sliding window Support Vector Machine (SVM) trained on average amino acid composition achieves similar performance.
In this article, we present a novel formulation of the binding site prediction problem that is based on the framework of multiple instance learning (MIL; Dietterich et al., 1997). In MIL, positive examples come in bags. For a positive bag, it is assumed that at least one of the examples is indeed positive, whereas negative bags contain only negative examples. We use this for binding site prediction by forming a positive bag out of fixed-size sequence windows that overlap the annotated binding site. This allows us to model the uncertainty in the actual binding site location: experimental methods may not precisely locate a binding site, and may include a region that is larger than the true binding site due to limitations of budget and experimental procedures. Furthermore, modeling binding sites this way facilitates the use of sequence representations that are position dependent, yielding a more detailed model of the binding site. This allows learning of motifs that are characteristic of the binding site.
MIL has been applied in a variety of other problem domains such as object tracking (Babenko et al., 2011), protein identification (Tao et al., 2004), and prediction of protein-ligand binding affinities (Teramoto and Kashima, 2010).
Our results show that the proposed MI-1 SVM has higher accuracy than the classical multiple instance SVM (Andrews et al., 2003), and is also faster to train. MI-1 also performs better than a standard SVM, thereby improving on existing work of (Radivojac et al., 2006;Hamilton et al., 2011). We also compare the merits of several ways of representing binding sites, and demonstrate the ability of our method to learn motifs that are associated with CaM binding. Finally, we show how the resulting binding site predictor can be used as the basis for a classifier that predicts CaM binding proteins, with improved accuracy over earlier work.
Datasets and pre-processing
The dataset for CaM binding site prediction and its pre-processing follows Radivojac et al. (2006). A set of 210 proteins was obtained from the Calmodulin Target database (Yap et al., 2000). Each of these proteins binds CaM, and one or more binding sites within each protein are annotated. A non-redundant subset of 153 proteins containing 185 binding sites was then chosen such that no two proteins have more than 40% sequence identity and no two binding sites are more than 50% identical.
Sequence windows of length 21, the average length of CaM binding sites, were extracted from the protein sequences to create positive and negative examples. Negative examples were created by sliding a length 21 window in 10 amino acid increments such that no part of the window overlaps an annotated binding site. Positive examples, on the other hand, were created by sliding a length 21 window over an annotated binding site in increments of 1 amino acid. Thus, the number of positive examples from an annotated binding site equals the number of amino acids in the binding site.
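A simplified sketch of the window extraction described above is given below, assuming one plausible reading of the sliding-window convention (positive windows centred on each residue of an annotated site, negative windows every ten residues with no overlap). The exact placement rule and the function names are assumptions for illustration, not the authors' pre-processing code.

```python
# Minimal sketch (not the authors' code) of the window extraction described above.
WINDOW = 21

def positive_windows(sequence: str, site_start: int, site_end: int):
    """One length-21 window per residue of the annotated site [site_start, site_end)."""
    windows = []
    for center in range(site_start, site_end):
        start = center - WINDOW // 2
        if start >= 0 and start + WINDOW <= len(sequence):
            windows.append(sequence[start:start + WINDOW])
    return windows

def negative_windows(sequence: str, sites):
    """Length-21 windows taken every 10 residues that do not overlap any annotated site.
    `sites` is a list of (start, end) pairs with exclusive end positions."""
    windows = []
    for start in range(0, len(sequence) - WINDOW + 1, 10):
        if all(start + WINDOW <= s or start >= e for (s, e) in sites):
            windows.append(sequence[start:start + WINDOW])
    return windows
```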
For CaM binding prediction, we used a dataset of 236 proteins experimentally determined to bind CaM using a protein array screen that tested around a thousand proteins in Arabidopsis thaliana (Popescu, 2007). The remaining 27,140 proteins in the A.thaliana proteome were used as negative examples (non-binders).
Vanilla SVM
As a baseline method we have used a standard binary SVM (Cortes and Vapnik, 1995). Our labeled dataset consists of $N$ examples $(x_i, y_i)$, where $x_i$ is the sequence of a window and $y_i \in \{+1, -1\}$ is its associated label indicating whether the central residue of $x_i$ lies in a binding site or not.
The large-margin learning problem can be formulated as:

$$\min_{w,\rho,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N}\xi_i$$

subject to:

$$y_i\left(w^{T}\phi(x_i)+\rho\right) \ge 1-\xi_i,\qquad \xi_i \ge 0,\qquad i=1,\dots,N.$$

Here $\phi(x_i)$ is the feature representation of the window $x_i$, and the cost parameter $C$ controls the trade-off between constraint violation and margin maximization. The discriminant function $f(x_i) = w^{T}\phi(x_i)+\rho$ can then be used to predict whether a given window is part of a binding site or not. The location of a binding site is predicted by the window that offers the highest value of the discriminant function for that protein (Hamilton et al., 2011). PyML (Ben-Hur, PyML-machine learning in Python, 2011) was used for the implementation.
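For illustration, the baseline can be reproduced along the following lines with scikit-learn standing in for PyML; the feature matrices are assumed to hold one fixed-length vector per window (e.g., the 1-spectrum counts described later), and the helper names are invented for this sketch.

```python
# Minimal sketch of the baseline sliding-window classifier (illustrative only;
# the paper used PyML, scikit-learn is used here instead).
import numpy as np
from sklearn.svm import LinearSVC

def train_vanilla_svm(X_windows: np.ndarray, y: np.ndarray, C: float = 1.0) -> LinearSVC:
    """X_windows: one feature vector per window; y: +1 / -1 window labels."""
    return LinearSVC(C=C).fit(X_windows, y)

def predict_binding_site(model: LinearSVC, X_protein_windows: np.ndarray) -> int:
    """Index of the window with the highest discriminant value in a protein."""
    return int(np.argmax(model.decision_function(X_protein_windows)))
```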
Multiple instance learning SVM
In MIL [both multiple instance learning SVM (mi-SVM) and MI-1 SVM], the positive examples from each binding site are grouped into a single bag. We denote the set of positive examples for a given binding site $b$ as $P_b$ and the set of negative examples from the protein to which the binding site $b$ belongs as $N_b$. The mi-SVM approach is formulated as follows (Andrews et al., 2003):

$$\min_{\{y_i\},\,w,\rho,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_i \xi_i$$

subject to:

$$y_i\left(w^{T}\phi(x_i)+\rho\right) \ge 1-\xi_i,\quad \xi_i \ge 0,\quad y_i \in \{+1,-1\},\quad \sum_{i\in P_b}\frac{y_i+1}{2} \ge 1 \ \ \forall b,$$

with $y_i = -1$ fixed for all examples in the negative sets $N_b$. In this formulation $y_i \in \{+1,-1\}$ acts as a label for the window $x_i$, and the objective is to find the optimal labeling of the examples that comprise the positive bags such that at least one example in each positive bag is labeled as positive. In the case of the binding site prediction problem, this means that a trained mi-SVM will choose at least one positive window from the set of positive windows in a binding site. The mi-SVM formulation is a combinatorial optimization problem. We use the heuristic algorithm proposed by Andrews et al. (2003): whenever the bag constraint $\sum_{i\in P_b}(y_i+1)/2 \ge 1$ is violated, the algorithm picks the example in the bag having the largest discriminant function value and sets its label to +1. The algorithm then alternates between label imputation and SVM training until the labels stop changing. This simple algorithm has shown good performance in comparison to more complicated ones (Andrews et al., 2003). The score from the trained discriminant function for one window in a binding site should then be higher than the scores generated for non-binding site windows within that protein.
Novel MI SVM formulation (MI-1 SVM)
Accurate prediction of the location of a binding site in a protein requires a less stringent condition than the one used in mi-SVM: at least one window in the true binding site needs to score higher than the negative windows from the same protein (Fig. 1). This allows us to significantly reduce the complexity of the learning problem in comparison to mi-SVM. The mi-SVM and vanilla SVM formulations try to classify windows as binding or non-binding without modeling the concept that these windows in fact lie within a protein. Our proposed MI-1 SVM formulation, on the other hand, operates at the protein level. The large-margin formulation of this learning problem can be expressed as follows:

$$\min_{w,\xi}\ \frac{1}{2}\|w\|^2 + \frac{C}{M}\sum_{b=1}^{M}\xi_b$$

subject to, for every binding site $b$ and every non-binding window $x_j \in N_b$:

$$\max_{x_i \in P_b} w^{T}\phi(x_i) - w^{T}\phi(x_j) \ge 1-\xi_b,\qquad \xi_b \ge 0,$$

where M is the total number of binding sites in the training data. For a given binding site, this formulation tries to maximize the difference between the discriminant function values of the maximum scoring window within the binding site and the non-binding windows in the rest of the protein containing that binding site. Since MI-1 SVM simply compares the discriminant function scores of the binding and non-binding site windows in its constraints, it does not require a bias term. Moreover, the number of slack variables ($\xi_b$) in MI-1 SVM is equal to the number of binding sites and not the number of training examples, as in the vanilla SVM and the mi-SVM. As a consequence, the number of variables involved in the optimization in MI-1 SVM is much smaller than that in mi-SVM and this leads to faster training. Using the same $\xi_b$ for a single binding site effectively takes the maximum of the scores over all non-binding site windows of the protein to which $b$ belongs. Another important feature of MI-1 SVM is that, like the ranking SVM discussed in (Joachims, 2006), MI-1 SVM also explicitly maximizes the area under the Receiver Operating Characteristic (ROC) curve. Similar to mi-SVM, which performs optimization over the labels of examples in positive bags, MI-1 SVM is also a combinatorial optimization problem because of the maximum operation in its constraints. We have used the heuristic algorithm given in Table 1 to obtain a solution to this problem. The algorithm can be stopped when the representative examples of all binding sites stop changing, or on the basis of a user-defined maximum number of iterations. In all our experiments, the algorithm converged in 10 iterations or less. A trained MI-1 SVM can be used to produce discriminant function scores for any given residue in a protein.
Table 1. Heuristic algorithm used for training MI-1
Initialization:
With each binding site $b$, we associate a representative example $x_b$ with feature representation $\phi(x_b)$, which is initialized to be the mean of the examples in $P_b$:

$$\phi(x_b) = \frac{1}{|P_b|}\sum_{x_i \in P_b}\phi(x_i).$$

Until convergence, repeat:

Solve the following quadratic programming (QP) problem:

$$\min_{w,\xi}\ \frac{1}{2}\|w\|^2 + \frac{C}{M}\sum_{b=1}^{M}\xi_b$$

such that, $\forall b$ and $\forall x_j \in N_b$:

$$w^{T}\phi(x_b) - w^{T}\phi(x_j) \ge 1-\xi_b,\qquad \xi_b \ge 0.$$

Update (for all binding sites):

$$x_b \leftarrow \arg\max_{x_i \in P_b}\ w^{T}\phi(x_i).$$

The QP problem in the MI-1 algorithm can be solved in the primal or in the dual. The primal formulation of the problem (3) is more efficient than the dual when the dimensionality of the feature vector is smaller than the number of training examples. The dual formulation of the QP problem (based upon the Lagrangian of the primal) is given by:

$$\max_{\alpha}\ \sum_{b=1}^{M}\sum_{x_j \in N_b}\alpha_{bj} - \frac{1}{2}\left\|\sum_{b=1}^{M}\sum_{x_j \in N_b}\alpha_{bj}\left(\phi(x_b)-\phi(x_j)\right)\right\|^2$$

such that, $\forall b$:

$$\sum_{x_j \in N_b}\alpha_{bj} \le \frac{C}{M},\qquad \alpha_{bj}\ge 0.$$

Here $\alpha_{bj}$ is the Lagrange variable corresponding to the primal constraint for the non-binding window $x_j$ of binding site $b$. The dual formulation reveals some interesting aspects of the MI-1 SVM. It shows that Lagrange variables ($\alpha$) only exist for negative examples, and that the sum of all $\alpha$ for negative examples from a single protein is constrained to be less than or equal to $C/M$. This differs from a conventional SVM formulation, which requires that each of the $\alpha$, on its own, should be $\le C/M$ and that the sum of products of $\alpha$ from all training examples with their corresponding labels should be zero. Thus, the MI-1 SVM formulation is less constrained than a conventional SVM formulation and this can potentially lead to a better solution.
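The sketch below mimics the alternating procedure of Table 1 with a linear model. To stay self-contained, the inner QP is approximated by subgradient descent on the same shared-slack objective, so this is an illustrative approximation of MI-1 rather than the authors' Matlab implementation; the learning rate and iteration counts are arbitrary assumptions.

```python
# Simplified sketch of the MI-1 training loop in Table 1 (illustrative approximation).
import numpy as np

def train_mi1(pos_bags, neg_bags, C=1.0, outer_iters=10, inner_iters=200, lr=0.01):
    """pos_bags[b]: 2D array of feature vectors for windows overlapping binding site b.
    neg_bags[b]: 2D array of feature vectors for the non-binding windows of the same protein."""
    M = len(pos_bags)
    dim = pos_bags[0].shape[1]
    w = np.zeros(dim)
    reps = [bag.mean(axis=0) for bag in pos_bags]            # initialize x_b to the bag mean
    for _ in range(outer_iters):
        for _ in range(inner_iters):                          # approximate the QP by subgradient steps
            grad = w.copy()                                   # gradient of the 0.5*||w||^2 term
            for b in range(M):
                worst_neg = neg_bags[b][np.argmax(neg_bags[b] @ w)]
                if reps[b] @ w - worst_neg @ w < 1.0:         # shared-slack hinge is active
                    grad -= (C / M) * (reps[b] - worst_neg)
            w -= lr * grad
        reps = [bag[np.argmax(bag @ w)] for bag in pos_bags]  # update representative windows
    return w

# A residue-centred window x of a test protein is then scored as w @ phi(x).
```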
CaM binding prediction
In this article, we compare the following two strategies for CaM binding prediction.
Discriminant function scoring
The maximum discriminant function score across all windows in a protein can be used as the CaM binding propensity of that protein. This approach was used in (Hamilton et al., 2011) to predict CaM binding of proteins in the A.thaliana proteome. In their method, the scores were generated using a standard SVM classifier trained for binding site prediction. In this article, we use the scores from MI-1 SVM instead.
Cascaded classification
We implemented a two-stage cascaded classification approach for CaM binding prediction. In the first stage, the window in a given protein with the highest MI-1 SVM discriminant function score is chosen as the most likely binding site window for that protein. This is done for all proteins in the training set. In the second stage, a standard SVM is trained to discriminate between the most likely binding site windows in positive examples (known CaM-binding proteins) and negative examples (non-CaM-binding proteins). Once the second stage SVM has been trained, the binding propensity of a test protein can be estimated by first finding its most likely binding site window using MI-1 SVM, and then evaluating the discriminant function value of the second stage SVM for the chosen window. A Gaussian kernel was used in the second stage SVM as it performed significantly better than a linear kernel. However, the use of non-linear kernels in MI-1 SVM did not seem to improve performance.
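As an illustration of the two-stage cascade, the following sketch selects the highest-scoring window per protein with a linear MI-1 weight vector and trains a Gaussian-kernel SVM on those windows. scikit-learn stands in for the authors' code, and the default C and γ values are simply taken from the parameter grid described in the Model selection section rather than being tuned values.

```python
# Minimal sketch of the two-stage cascade (illustrative only).
import numpy as np
from sklearn.svm import SVC

def most_likely_window(w_mi1: np.ndarray, protein_windows: np.ndarray) -> np.ndarray:
    """Feature vector of the highest-scoring window according to the MI-1 model."""
    return protein_windows[np.argmax(protein_windows @ w_mi1)]

def train_cascade(w_mi1, binder_proteins, nonbinder_proteins, C=10.0, gamma=0.02):
    """Stage 2: Gaussian-kernel SVM on the most likely binding site window of each protein."""
    X = np.vstack([most_likely_window(w_mi1, p) for p in binder_proteins + nonbinder_proteins])
    y = np.array([1] * len(binder_proteins) + [-1] * len(nonbinder_proteins))
    return SVC(C=C, gamma=gamma, kernel="rbf").fit(X, y)

def binding_propensity(stage2: SVC, w_mi1, protein_windows) -> float:
    """CaM binding propensity of a test protein."""
    return float(stage2.decision_function(most_likely_window(w_mi1, protein_windows)[None, :])[0])
```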
Feature representations
The performance of the learning methods described above for binding site and CaM binding prediction was analyzed using a number of feature representations which are presented next.
p-Spectrum
The p-spectrum φ(x) of a string x over an alphabet Σ is a vector whose components φ_v(x) count the number of occurrences of each length-p substring v in the string x. The p-spectrum kernel between two strings is given by the corresponding Euclidean dot product (Leslie et al., 2002).
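A direct implementation of the p-spectrum counts can look as follows; the amino acid alphabet ordering and the dense vector layout are choices made for this sketch, not conventions taken from the paper.

```python
# Minimal sketch (not the authors' code) of the p-spectrum representation: counts of
# every length-p substring over the 20-letter amino acid alphabet.
from itertools import product
from collections import Counter
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def p_spectrum(window: str, p: int = 1) -> np.ndarray:
    vocab = ["".join(t) for t in product(AMINO_ACIDS, repeat=p)]
    counts = Counter(window[i:i + p] for i in range(len(window) - p + 1))
    return np.array([counts[v] for v in vocab], dtype=float)

# The p-spectrum kernel between two windows is then the dot product of their vectors:
# k = p_spectrum(x1) @ p_spectrum(x2)
```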
Position-dependent p-spectrum
The position-dependent p-spectrum φ(x) of a string x is a vector of indicator variables φ_(v,k)(x), each showing whether the length-p substring v occurs at position k in the string x. The resulting position-dependent p-spectrum kernel is given by:

$$K_{PD}(x_1,x_2) = \sum_{v,k}\phi_{(v,k)}(x_1)\,\phi_{(v,k)}(x_2).$$

The position-dependent kernel takes the relative position of an amino acid in a window into account, whereas the p-spectrum kernel does not. We perform normalization of any kernel representation using the cosine normalization:

$$K'(x_1,x_2) = \frac{K(x_1,x_2)}{\sqrt{K(x_1,x_1)\,K(x_2,x_2)}}.$$
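The position-dependent variant and the cosine normalization can be sketched as below; the indexing scheme is an implementation choice made for this example and is not taken from the paper.

```python
# Minimal sketch of the position-dependent p-spectrum and cosine normalization.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def position_dependent_spectrum(window: str, p: int = 1) -> np.ndarray:
    """Indicator features phi_(v,k): length-p substring v occurs at position k.
    Assumes standard residues only; non-standard letters would raise a ValueError."""
    n_pos = len(window) - p + 1
    features = np.zeros((n_pos, len(AMINO_ACIDS) ** p))
    for k in range(n_pos):
        idx = 0
        for ch in window[k:k + p]:
            idx = idx * len(AMINO_ACIDS) + AMINO_ACIDS.index(ch)
        features[k, idx] = 1.0
    return features.ravel()

def cosine_normalized_kernel(phi1: np.ndarray, phi2: np.ndarray) -> float:
    return float(phi1 @ phi2 / np.sqrt((phi1 @ phi1) * (phi2 @ phi2)))
```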
Evaluation methodology
We use Leave-One-Protein-Out (LOPO) cross-validation in order to analyze the performance for binding site prediction. In LOPO, all examples (positive or negative) from a single protein are held out while the classifier is trained on the remaining proteins. The classifier is then evaluated over the examples from the held out protein. We evaluate the following performance metrics and use their average across all proteins to make comparisons between methods and kernels:

(a) Area under the ROC curve (AUC).

(b) AUC 0.1 : the area under the ROC curve up to a false positive rate of 0.1.

(c) False-Hit ratio (FH-measure): The percentage of non-binding site windows (out of the total number of non-binding site windows) that have a score higher than the maximum scoring window in the known binding site. This measure tells us how many non-binding site windows are expected with a score higher than the true binding site window.
(d) True-Hit probability (TH-measure): For a given protein, a true hit is defined to occur when the residue at the center of the highest scoring window for that protein lies within a binding site. The average number of true hits across all proteins (called the TH-measure) represents the probability of the maximum scoring window predicted by a classifier to lie within a true binding site.
The AUC is a measure of how good a particular method is at ranking binding site windows above non-binding site windows. AUC 0.1 gives us a sense of how good the top scoring windows produced by a classifier are. The FH-measure represents the chances of a non-binding site window being ranked higher than a true binding site window. The TH-measure tells us about the chances of the highest scoring window predicted by a classifier belonging to a true binding site. Both the TH and the FH measures provide meaningful information about the accuracy of the method to a biologist who intends to use the proposed prediction scheme to verify potential binding site locations experimentally.
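For concreteness, the per-protein TH and FH quantities defined above can be computed as in the sketch below, assuming an array of window scores and a boolean mask marking windows centred in an annotated binding site; the function names are invented for the example.

```python
# Minimal sketch (illustrative only) of the per-protein TH and FH measures.
import numpy as np

def true_hit(scores: np.ndarray, in_site: np.ndarray) -> bool:
    """True if the highest-scoring window of the protein is centred in a binding site."""
    return bool(in_site[np.argmax(scores)])

def false_hit_ratio(scores: np.ndarray, in_site: np.ndarray) -> float:
    """Fraction of non-binding-site windows scoring above the best binding-site window."""
    best_site_score = scores[in_site].max()
    non_site_scores = scores[~in_site]
    return float((non_site_scores > best_site_score).mean())

# The TH-measure and FH-measure are the averages of these quantities over all proteins.
```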
We use AUC as the performance metric for CaM binding prediction. AUC can be directly computed from the estimated CaM binding propensities when using the Discriminant function scoring approach. With the Cascaded classification approach, AUC is obtained from 5-fold stratified cross-validation with nested grid search for model selection. In cross-validation, it was ascertained that two proteins with more than 40% sequence similarity are in the same fold (evaluated using BLASTCLUST from the NCBI BLAST package (Altschul et al., 1990)). Moreover, the data for CaM binding prediction in A.thaliana did not include any proteins which were part of the MI-1 training set.
Model selection
In order to perform model selection (the choice of the cost parameter C) for the vanilla and MI-1 SVM formulations for binding site prediction, we used nested 5-fold cross-validation within each iteration of LOPO cross-validation. The TH-measure obtained from the 5-fold cross-validation is then used to choose the value of C for that iteration of LOPO. The values of C that were used in the nested cross-validation are {0.01, 0.1, 1.0, 10, 100}.
As mi-SVM takes a long time to train, nested cross-validation could not be performed. Instead we evaluated the LOPO cross-validation performance (TH-measure) of mi-SVM with different values of C in {0.01, 0.1, 1.0, 10, 100} and the best results with the optimal value of C = 10 are reported. This method for selection of C for mi-SVM can potentially lead to over-optimistic performance estimates. This is not an issue, since our claim is that the proposed approach performs better. In the case of CaM binding prediction in A.thaliana using cascaded classification, we performed a nested (5-fold) grid search within each cross-validation fold for selecting the parameter values of the second-stage SVM. Values of C in the SVM and γ of the Gaussian kernel $K(x_1,x_2) = \exp\left(-\gamma\,\|\phi(x_1)-\phi(x_2)\|^2\right)$ were chosen from {0.1, 1, 10, 100} and {0.005, 0.02, 0.5, 2.0}, respectively.

RESULTS AND DISCUSSION

Table 2. LOPO cross-validation results for binding site prediction with the three SVM formulations. The features are 1-spectrum (1-Spec), position-dependent 1-spectrum (PD-1) and the combination (Comb) of the 1-Spec and PD-1 representations. The Max Std. rows show the maximum standard deviation of a particular performance metric using the above feature representations. Results with the position-dependent Gappy triplet kernel (Gappy) with MI-1 SVM are also reported (for a single run due to its longer computational time). Bold numbers indicate the best value (across all methods) for a particular metric using a particular feature representation.

Table 2 presents the LOPO cross-validation results for the three SVM formulations for the 1-spectrum, position-dependent 1-spectrum and the combination of the two feature representations for predicting CaM binding sites. We observe that both MIL formulations (mi-SVM and MI-1 SVM) perform better than the vanilla SVM. This shows the value of expressing binding site prediction as an MIL problem. This is particularly evident with the use of position-dependent feature representations, as they are more sensitive to changes in the relative position of an amino acid in a window within the binding site than position-independent feature representations. It can also be noted that the accuracy of MI-1 SVM is noticeably better than that of mi-SVM. We believe that this improvement stems from the fact that the proposed scheme implements a more realistic model of the binding site prediction problem. The improvement resulting from switching to a position-dependent feature representation is also larger for MI-1 SVM than that observed in the case of mi-SVM. The higher AUC 0.1 scores indicate the improved sensitivity and specificity of MI-1 SVM, which is also reflected in the ~8% improvement in the TH-measures and the decrease in the FH-measure.

The vanilla SVM approach is the same as the method in (Hamilton et al., 2011), which they showed performs comparably to the neural network approach of (Radivojac et al., 2006). We therefore conclude that the proposed scheme performs better than previously reported approaches.

Fig. 2. MI-1 discriminant values along the length of a held-out protein with the position-independent (top) and the position-dependent (bottom) 1-spectrum features.
We also compare the performance of these approaches with a naive local alignment-based method for finding CaM binding sites. In this method, local alignment between a held out protein and the binding sites of the remaining proteins is performed and if the best scoring alignment overlaps (by at least 10 residues) with the known binding site in the held out protein, it is considered to be a true hit. This approach gives a TH% of 39.5%. This shows that the machine learning approaches presented in this article use more than sequence similarity to make better predictions.
We have also performed an analysis of the stability of the results for the MI-1 and the vanilla SVMs by averaging performance statistics over 12 runs of 5-fold cross-validation. This analysis was not performed for mi-SVM or for the gappy triplet kernel with MI-1 SVM owing to their long running times. The 5-fold cross-validation results for both methods are very similar to the LOPO cross-validation results. The maximum standard deviation in a particular performance metric across different feature representations obtained from the 5-fold cross-validation for the vanilla and MI-1 SVMs is given in Table 2. This statistic gives an idea of the variability of the results with respect to changes in the data. Figure 2 shows the output of the MI-1 SVM for a single protein for the position-dependent and position-independent versions of the 1-spectrum feature representation. It is quite clear that the output for the position-independent features is much smoother than that from the position-dependent 1-spectrum features. This is because the position-independent 1-spectrum feature vector changes only slightly as the window is translated by one position, whereas the position-dependent feature vector can change dramatically. Owing to this increased resolving power, the position-dependent features lead to a classifier that correctly predicts both binding sites in the example shown in Figure 2, which is not achieved using the position-independent features.
We have also analyzed the weight vectors from different feature representations in order to extract amino acid patterns informative of CaM binding sites. The plots of weights from the 1-spectrum features and the position-dependent 1-spectrum features are shown in Figure 3a and b, respectively. The weights for the 1-spectrum features closely follow the amino acid propensities in CaM binding sites (Hamilton et al., 2011), with R (Arginine), K (Lysine) and W (Tryptophan) showing large positive weights, whereas D (Aspartic acid), E (Glutamic acid) and P (Proline) have large negative weights. The plot of the position-dependent 1-spectrum features indicates that the importance of different amino acids varies with their position in the window. For example, Arginine shows large positive weights in the middle of the window and negative weights at the ends; Glutamic acid shows the opposite behavior. This indicates that the classifier is indeed learning a position-dependent model.
The results of 5-fold cross-validation using the position-dependent gappy triplet kernel (K_PDGT) shown in Table 2 indicate that this kernel provides comparable performance to other feature representations using MI-1 SVM. Since the number of dimensions in the feature representation of the gappy triplet kernel is much larger than the number of training examples, MI-1 SVM learning was performed using the dual formulation for this kernel, which is more computationally intensive. That is why we have used 5-fold cross-validation instead of LOPO cross-validation.
Next, we ranked the features of the gappy triplet kernel in terms of their weights in MI-1 SVM learning in order to find motifs that are associated with CaM binding. Figure 3c shows the top 100 motifs and their positions. We observe that motifs tend to associate with particular positions, showing that MI-1 SVM uses the flexibility in choosing a representative window to 'align' instances of CaM binding sites (for instance, notice the presence of 'R' at positions 10 and 11 across different features). Moreover, it is able to find parts of known CaM binding motifs provided in the CaM Target Database (Yap et al., 2000). The CaM Target Database classifies CaM binding targets into 5 groups, each characterized by certain motifs: 3 predominantly calcium-dependent motifs (1-10, 1-14 and 1-16, named according to the position of large hydrophobic residues), the IQ motif which is typically not dependent on calcium concentration, and others. As is evident from Figure 3c, IQ, QxxxR, RxxxxR, RGxxxR, RxxL, KxxxxR receive large positive weights. These motifs are components of the IQ subclass of motifs. Other features belonging to different subclasses of motifs that receive large positive weights include: AxxI, IxxxF, LxxV, (from the 1-14 subclass), RR, KK, RxF (from the 1-10 subclass) etc. This clearly illustrates the capabilities of the proposed scheme to learn CaM binding motifs. We also note that most of the top ranking features correspond to a motif with 3 or 4 do not care positions. This is in agreement with the known fact that CaM binding usually occurs via an alpha helix, and this corresponds to the periodicity of the alpha helix.
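The ranking step described above reduces to sorting features by their learned weight and inspecting the top of the list. The sketch below illustrates this with hypothetical feature names and weights; the real model has a much larger, position-annotated gappy-triplet feature space.

```python
# Sketch of ranking features of a linear model by weight to surface candidate
# CaM-binding motifs. `feature_names` and `weights` are placeholders standing in
# for the trained weight vector over position-annotated gappy-triplet features.
import numpy as np

feature_names = np.array(["IQ@10", "QxxxR@9", "AxxI@4", "DxE@2", "RGxxxR@8"])
weights = np.array([1.8, 1.5, 0.9, -1.2, 1.1])   # placeholder SVM weights

order = np.argsort(weights)[::-1]                 # descending by weight
top = [(feature_names[i], float(weights[i])) for i in order[:3]]
print(top)   # the largest positive weights point to putative binding motifs
```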
On the task of CaM binding prediction (Table 3), the performance of discriminant function scoring is only marginally better than that of the 1-spectrum feature representation used in (Hamilton et al., 2011). However, with the cascaded classification approach with a Gaussian kernel, the results are significantly better. Even though the AUC for the position-independent 1-spectrum features is higher than that of the position-dependent features, the AUC 0.1 was higher for position-dependent features (29.1) in comparison to the simple 1-spectrum features (26.6).
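The AUC_0.1 metric discussed above is the area under the ROC curve restricted to low false-positive rates. A minimal way to compute a partial AUC is shown below with placeholder labels and scores; note that scikit-learn's max_fpr option returns a standardized partial AUC (McClish correction), so its normalization may differ from the AUC_0.1 values reported in the text.

```python
# Partial AUC illustration: area under the ROC curve for false-positive rates
# below 0.1, computed on synthetic placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)                     # placeholder labels
scores = y_true * 0.8 + rng.normal(scale=0.6, size=500)   # placeholder scores

print("full AUC   :", round(roc_auc_score(y_true, scores), 3))
print("partial AUC:", round(roc_auc_score(y_true, scores, max_fpr=0.1), 3))
```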
(Table 3 note: the features are 1-spectrum (1-Spec), position-dependent 1-spectrum (PD-1) and the combination (Comb) of the 1-Spec and PD-1 feature representations; using cascaded classification with a linear kernel in the second-stage SVM instead of the Gaussian kernel, the best AUC was 0.72 with 1-spectrum features. AUC: area under the ROC curve.)
In order to obtain a better understanding of what our classifier picks up, we considered the proteins that are not known to bind CaM and ranked that list according to the score provided by our classifier. We then tested for enrichment of GO terms in segments of that list: the first 1000 proteins, proteins 1001-2000, etc., using the GOrilla tool (Eden et al., 2009). For the first 1000 we found enriched terms that are in agreement with known functions of CaM binders (Reddy et al., 2011): in the GO molecular function namespace, transcription factor activity and CaM-dependent kinase activities were the most highly enriched, with adjusted p-values below 10⁻¹⁰. All other enriched terms were related to these, except for 'inward rectifier potassium channel activity', which had an adjusted p-value of 0.02. In the GO biological process namespace, all the terms except for 'response to carbohydrate stimulus' (adjusted p-value 0.02) were related to phosphorylation and various regulatory processes. In analyzing enrichment for size-1000 chunks, we found that the significance of these functions and processes went down as we went down the ranked list, and for proteins ranked 5000-6000, no terms showed enrichment.
CONCLUSIONS AND FUTURE WORK
We have presented a novel MIL algorithm for CaM binding site prediction called MI-1 SVM, and shown its performance advantages in comparison to the standard MIL SVM and regular SVM, which was used in previous work. Our new MIL formulation captures the minimal constraints that a good binding site classifier needs to have, and we believe this is the reason for its better accuracy. Not only that, it also runs more than twice as fast as standard MIL SVM (running time on a dataset of 16,060 windows was 510.5s for MI-1, 1059.1s for mi-SVM, and 348.3s for vanilla SVM). Expressing binding site prediction as an MIL problem is a natural way to incorporate uncertainty about binding site location, and our results show that this allows the classifier to 'align' binding sites and learn position-dependent motifs that characterize the binding site. The proposed scheme also shows its efficacy in prediction of CaM binding proteins.
In general, binding sites in proteins and nucleic acids are not contiguous in sequence as they are in CaM binding proteins. MI-1 SVM can be extended to solve the generic problem of binding site prediction by using sequence-based features that capture the non-contiguous nature of binding sites. Currently, MI-1 SVM generates the CaM binding propensity along a protein's length and cannot explicitly identify multiple binding sites. Identifying the number of binding sites in a protein remains for future work. | 6,771.8 | 2012-09-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
A Metamaterial-Inspired Microwave Sensor for Dielectric Characterization of Organic Liquids and Solid Dielectric Substrates
A microwave metamaterial-inspired sensor based on a 13 × 13 array of Asymmetric Electric Split-Ring Resonators (AESRR) is proposed for the dielectric characterization of organic liquids and solid dielectric substrates with low permittivity. The sensor, excited by a pair of patch antennas and working at around 11.575 GHz, is fabricated using printed circuit board (PCB) technology. A T-shaped channel was integrated into the sensor by grooving the FR-4 substrate, which improves integration and enables liquid detection. Seven liquids and four dielectric substrates were measured with this sensor. The measured results show that the transmission resonance frequency shifts from 11.575 GHz to 11.150 GHz as the liquid-sample permittivity changes from 1 to 7, and from 11.575 GHz to 8.260 GHz as the solid-substrate permittivity changes from 1 to 9. The measurements demonstrate improved sensitivity and a larger frequency shift Δf for the materials under test (MUTs) compared with conventional reported sensors. The relative permittivities of the liquid and solid samples can be fitted by establishing approximate models in CST. Two transcendental equations derived from the measured results are proposed to predict the relative permittivity of liquid and solid samples. The accuracy and reliability of the measured and predicted results are verified numerically by comparison with literature values. The proposed sensor has many advantages, such as low cost, high sensitivity, high robustness, and a wide detection range, which give it great potential for implementation in a lab-on-a-chip sensor system in the future.
Introduction
Metamaterials are artificially engineered electromagnetic materials composed of sub-wavelength resonant elements, which can manipulate electromagnetic wave beams and exhibit exotic electromagnetic properties through their structural geometry and arrangement 1,2 . Microwave sensors have many advantages, such as low fabrication and measurement cost, CMOS compatibility, design flexibility, and real-time response, which allow microwave sensors based on metamaterials to be widely used in various fields, such as chemical sensing, biosensing, substrate detection, and microfluidic systems 3-12 . Recently, many new and improved microwave sensors based on meta-atom structures have been proposed. A microfluidic sensor implemented with a single split-ring resonator (SRR) was proposed for the dielectric characterization of liquid samples 3 . A new microwave device composed of a microstrip-coupled complementary split-ring resonator (CSRR) was proposed in reference 4 as a microfluidic sensor; it can identify water-ethanol mixtures of different concentrations and determine their complex permittivity. A microwave sensor using a Complementary Circular Spiral Resonator (CCSR) was designed to identify different liquid samples and determine their dielectric constants by dropping the liquids on the sensitive area 5 . Many other microfluidic sensors based on different meta-atom structures [6][7][8] have been reported that distinguish different liquids and determine their permittivity, such as water, hexane, chloroform, and water-ethanol or water-methanol mixtures.
The response of a material to an electric signal depends on its permittivity. Thus, many sensors have been proposed for material characterization. A microwave sensor, excited by a microstrip line and based on the complementary circular spiral resonator (CCSR), was reported for nondestructive evaluation of dielectric substrates 9 . A ring-resonator sensor structure was used to identify not only the permittivity but also the thickness of different materials attached to the sensor 10 , and a parabolic equation was proposed to predict the permittivity of a material from the measured resonance frequency. M. S. Boybay et al. proposed a microwave method for dielectric characterization of planar materials using complementary split-ring resonators (CSRRs) working in the 0.8 GHz-1.3 GHz band 11 . A complementary split-ring resonator (CSRR) sensor, operating from 1.8 GHz to 2.8 GHz, was proposed and fabricated for measuring the dielectric constants and loss tangents of materials 12 .
Among the aforementioned microwave characterization devices, all are designed for either liquids or solids only. Microfluidic sensors [4][5][6][7][8] can only distinguish liquids with high permittivity, such as water-ethanol mixtures of different concentrations, whose dielectric constants vary greatly. Sensors for material characterization of solids [9][10][11][12] distinguish different substrate materials with only a tiny frequency shift Δf, so there is still considerable room for improvement in sensitivity. Meanwhile, most of the reported microwave sensors composed of meta-atom structures are easily influenced by their surroundings, leading to low stability.
Sensor design and fabrication
Metamaterials design and sensor design. Fig. 1(a) shows the schematic of the asymmetric eSRR (AESRR) structure, the primary component of the proposed metamaterial-inspired sensor. The AESRR is chosen as the fundamental building block of the metamaterial because of its simplicity and its sensitivity to changes in the permittivity of its environment. The AESRR metal is copper (pure) with an electrical conductivity of 5.96×10⁷ S/m, and the substrate is FR-4 (lossy) with a dielectric constant of 4.4. The dimensions of the AESRR metamaterial structure are shown in Fig. 1(a): gap width (g) = 0.5 mm, line width (w) = 0.75 mm, length of the substrate (p) = 10 mm, thickness of the substrate (h) = 1 mm, length of the copper (c) = 6 mm, thickness of the metal part (t) = 0.03 mm, and other parameters: a = 0.9 mm, b = 3.3 mm, width of the middle metal arm (d) = 0.9 mm. Fig. 1(b) shows the equivalent circuit model of the AESRR structure. In the equivalent circuit, L1, L2, L3, L4, and L5 represent the equivalent inductances of the metal arms in the corresponding positions, and C1 and C2 are the equivalent capacitances of the gaps of the AESRR. Among the circuit elements, the values of the inductances L1-L5, which relate to the sensor itself, are determined by the structural parameters and the constituent materials of the sensor. The equivalent circuit model 4 shows that the equivalent capacitance of the gaps is determined by the capacitive effects of the sensor itself and by the effect of the MUTs. According to the equivalent circuit model, the equivalent capacitances C1 and C2 can be expressed as
C1 = C0' + εCc'   (1)
C2 = C0'' + εCc''   (2)
where C0' and C0'' model the capacitive effects on both sides of the gaps, which are determined by the dielectric substrate, the channels, and the surrounding space of the sensor itself, and the terms εCc' and εCc'' describe the dielectric contribution from the loaded MUTs, with Cc being the capacitance of an empty channel and ε the permittivity of the MUT. The effective capacitance C_eff, the total equivalent capacitance of the sensor including C1 and C2, is influenced by the dielectric materials around the gaps and can be approximately expressed as 13
C_eff ≈ C0 + εCc
As mentioned above, C0 models the total capacitive effect determined by the sensor itself and the term εCc describes the total dielectric contribution from the loaded MUTs.
The resonant frequency f0 of the sensor can be defined as
f0 = 1 / (2π √(L · C_eff))   (3)
where L represents the total equivalent inductance of the AESRR structure. From (1)-(3), the resonant frequency can be written as a function of the loaded MUT permittivity, as (4) shows:
f0 = F(ε_MUT)   (4)
This indicates that the resonant frequency of the sensor is influenced by the permittivity of the loaded MUT 14 . Therefore, the dielectric constant of an unknown MUT can be determined simply by measuring the different transmission resonance frequencies of the sensor due to its interaction with different MUTs. The Fano resonance, described by Ugo Fano in 1961, arises from the interference between a continuum of states (the scattered states) and quasi-bound (resonant) states 15 . V. Sekar et al. concluded that introducing a Fano resonance into the metamaterial structure is an efficient way to generate a new resonance peak and improve the sensitivity of the sensor 14 . The basic eSRR metamaterial structure is shown in Fig. 2(a). To achieve higher sensitivity, the asymmetric eSRR (AESRR) structure is proposed based on the Fano resonance, as shown in Fig. 2(b). The Fano resonance is generally caused by asymmetric metamaterial structures 14 . As Fig. 2(c) shows, a new Fano peak appears at around 11.30 GHz when the symmetry of the eSRR is broken. Fig. 3 shows the simulated surface currents in the eSRR and AESRR at different frequencies. As Fig. 3(a) shows, the currents in the two equal metal wire arms of the eSRR oscillate in phase and interfere constructively 7 , which generates a resonance peak at 5.81 GHz. Compared to the eSRR, the two current loops in the AESRR differ once the symmetry is broken, leading to a strong coupling between them. Generally speaking, the longer the current path, the lower the frequency of the resonance peak; the shorter the current path, the higher the resonant frequency. In Fig. 3(c), the right current loop is slightly stronger than the left current loop. In Fig. 3(d), the left current loop is obviously stronger than the right current loop. The resonant peak of the AESRR at 5.67 GHz comes from the large current path on the right, and the resonant peak of the AESRR at 11.28 GHz comes from the smaller current path. By comparing Fig. 3(b) and Fig. 3(d), the current loop in the AESRR is stronger and the current difference between the two loops is larger, which creates a strong coupling and generates a new resonance peak at 11.28 GHz. The electric field distribution of the AESRR at the transmission resonance peak at 11.28 GHz is shown in Fig. 4 (Fig. 4: electric field distribution at the resonance frequency of 11.28 GHz). The electric field distribution in Fig. 4 shows that a strong electric field is established across the gaps, especially the left one. To ensure the performance of the sensor, the channel should cover these sensitive areas, whereas the narrow gap width (g) increases the difficulty of microfluidic channel processing and integration. Considering the integration difficulty and processing cost, we finally decided to fabricate and integrate the T-shaped microfluidic channel by grooving the FR-4 substrate, as Fig. 5 shows. Another consideration was a lab-on-chip system implementation, which is convenient with the microfluidic channel in the substrate. As Fig. 5 shows, a metamaterial-inspired sensor based on a 13×13 AESRR array structure has been designed to ensure the feasibility and accuracy of the measured results.
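The trend implied by equations (1)-(4) can be illustrated with a small numerical sketch. The inductance and capacitance values below are arbitrary placeholders chosen only so that the unloaded resonance lands near 11.6 GHz; they are not the fitted parameters of the AESRR sensor.

```python
# Illustration of f0 = 1/(2*pi*sqrt(L*C_eff)) with C_eff ~ C0 + eps_r*Cc:
# the resonance decreases as the MUT permittivity increases.
import math

L_eq = 1.0e-9      # total equivalent inductance (H), placeholder value
C0   = 0.18e-12    # capacitance of the bare sensor (F), placeholder value
Cc   = 0.009e-12   # capacitance of the empty channel (F), placeholder value

def f0(eps_r):
    c_eff = C0 + eps_r * Cc
    return 1.0 / (2.0 * math.pi * math.sqrt(L_eq * c_eff))

for eps in (1, 3, 5, 7, 9):
    print(f"eps_r = {eps}: f0 = {f0(eps) / 1e9:.3f} GHz")
```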
Fig. 5(a) shows the 13×13 AESRR array, which is large enough to cover the radiation range of the antenna and ensure the reliability of the measurement. Fig. 5(b) is the schematic of the whole microfluidic channel. As Fig. 5(b) shows, we also designed two square grooves on both edges of the microfluidic channel so that the liquid samples placed in the grooves fill the channel with the help of gravity and the fluidity of the liquid samples.
In order to verify the performance of the proposed sensor and compare the sensitivity of the different resonance peaks, sensors based on different metamaterial structures were analyzed in CST [25][26]. By changing the dielectric constant of the MUTs in the channel, the different resonance peaks show different frequency shifts |Δf|. Fig. 6 clearly illustrates that the sensitivity of the AESRR peak is much better than that of the other two resonance peaks. The simulated frequency shift |Δf| shows that the resonance peak of the AESRR at around 6 GHz and the resonance peak of the eSRR at around 6 GHz are insensitive to small changes in the dielectric environment unless the changes are large enough. At the same time, the resonant peak of the AESRR at around 11 GHz has a large |Δf| even for slight changes of the dielectric environment. Based on the simulated results, the resonance peak at around 11 GHz was selected for measuring different MUTs with slight dielectric changes.
Sensor fabrication and measurement setup.
We fabricated the sensor based on the AESRR using PCB fabrication technology. Considering the characteristics of the patch antenna, and in order to ensure the accuracy of the measurement results, a 13×13 AESRR array was fabricated on an FR-4 substrate with a relative permittivity of 4.4 and a size of 13 cm×25 cm, shown in Fig. 7. This sensor is a passive microwave device and has the advantages of being robust, reusable, real-time and highly sensitive. In the simulation software (CST), the distance between the transmitting antenna and the sensor must be greater than 10.2 mm, which is determined by the substrate thickness (1 mm) and the characteristics of the periodic structure. Fig. 8 shows the effect of this distance on the measured results; the distance between the transmitting antenna and the sensor has very little effect on them. Considering that the attenuation of the antenna is severe when the distance is large, we decided to keep the distance between 1.2 mm and 1.6 mm. The schematic diagram of the developed microwave sensor for dielectric characterization and its deployment are shown in Fig. 9. All the experiments were carried out at a room temperature of 25℃. In our measurements, the signal is generated by a vector network analyzer (AV3672C, 10 MHz-43.5 GHz), and a pair of patch antennas is used to transmit and receive the signals, as shown in Fig. 9. About 15 measurements were carried out to make sure the measured results are reliable, and they indicate that the sensitive peak is stable at 11.575 GHz. Detailed data of the simulated and measured results are given in Table 1. The difference in amplitude between the simulated and measured results is mainly due to the characteristics of the patch antenna, fabrication tolerance, and conductor, dielectric and radiation losses. Considering that the proposed device distinguishes different MUTs based on the shift of the resonant frequency, the measured results indicate that the device conforms to the design and can be used as a sensor.
Measured results of different liquids with low dielectric constant
The resonant peak at around 11.60 GHz is sensitive to small changes of the dielectric environment, so we measured different liquids with low dielectric constant to verify the performance of the sensor. Different organic liquids that have a homogeneous dielectric distribution and high fluidity, such as peanut oil (LuHua), corn oil (Longevity Flower), sunflower seed oil (Longevity Flower), soybean oil (Golden dragon fish), IPA (DongWu), ethyl acetate (DongWu), and ethanol (Aladdin), were chosen as MUTs. In order to minimize the impact of contamination and humidity from the previously tested sample liquids, we first washed the channel with detergent and a brush, then rinsed it repeatedly with alcohol solution and dried the remaining alcohol with a small hair dryer. Finally, after the sensor was laid flat for about 30 s to ensure that the alcohol had evaporated adequately, the next liquid sample was dropped into the channels. When measuring volatile liquid samples, we recorded the data quickly. Fig. 11 shows the overall experimental platform for measuring the different liquid samples. Each sample was measured about 15 times to ensure the reliability of the results. The measured results 21 for the different liquid samples are presented in Fig. 12, and the specific measured data are also tabulated. Considering that the attenuation of the patch antenna has a great impact on the measured amplitude, we can only analyze the real part of the sample liquids' permittivity. According to the actual situation of the liquids in the measurement, a relatively accurate model, which includes the microfluidic channel part, was built and is shown in Fig. 13. The blue part of the model is the channel filled with the different liquids. The geometrical parameters of the T-shaped channel shown in Fig. 13(a) are as follows: the first width is 0.5 mm, the next three widths are 1 mm, the fifth is 4 mm, and the depth of the T-shaped channel is 0.5 mm. By changing the dielectric constant of the liquid in the model so that the simulated resonance frequency fits the measured result as closely as possible, we obtain a fitted permittivity of the liquid that is very close to its real dielectric constant.
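The fitting loop just described can be sketched as a one-dimensional search over the sample permittivity. In the sketch below the full-wave CST run is replaced by a made-up monotone placeholder function, so the numbers are purely illustrative; in practice each evaluation would be a simulation with that permittivity assigned to the channel.

```python
# Sketch of matching the simulated resonance to the measured one by sweeping the
# liquid permittivity. `simulated_f0` is a placeholder standing in for CST.
def simulated_f0(eps_r):
    return 11.575 - 0.05 * (eps_r - 1.0)   # placeholder only, not the real model

def fit_permittivity(f_measured_ghz, lo=1.0, hi=9.0, tol=1e-4):
    """Bisection on eps_r, assuming the resonance decreases as eps_r grows."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulated_f0(mid) > f_measured_ghz:
            lo = mid          # simulated resonance still too high -> raise eps_r
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(fit_permittivity(11.40), 2))   # hypothetical measured value -> ~4.5
```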
Figure 13. Simulation model of the proposed microwave sensor in CST with the sample liquids filling the channels.
Using the fitting model mentioned above, the relative permittivity of the liquids can be obtained, and the comparison of measured and simulated results is shown in Fig. 14. The difference in amplitude is mainly caused by fabrication tolerance and by conductor, dielectric and radiation losses. The simulated and fitted dielectric constants of the different organic liquid samples are tabulated in Table 3. As Fig. 15 shows, the measured permittivities of peanut oil, corn oil, sunflower oil, soybean oil, IPA, ethyl acetate, and ethanol match well with those reported in the literature 5,7,[16][17][18][19][20] , which indicates the reliability and accuracy of the measured results and of the simulation model. The model between the dielectric constant and resonant frequency for liquids with low permittivity. G. Galindo-Romera et al. proposed a parabolic equation 16 between the resonance frequency f and the dielectric constant, which can be used to estimate the relative permittivity of other unknown liquids. The parabolic equation with three constant parameters is as follows:
f_res = A1 + A2 ε′ + A3 ε′²   (5)
Here, ε′ is the relative permittivity of the liquid sample and A1, A2 and A3 are constant values. The reference MUT is air, whose dielectric constant is 1, and f_res,air, the resonant frequency of the sensor with an empty channel, is a constant value. Based on reference 15 , equation (5) can be expanded with respect to (ε′ − 1), as (6) shows:
f_res = 11.575 + A2 (ε′ − 1) + A3 (ε′ − 1)²   (6)
Based on the measured results for the different liquids, the constant parameters of (6) can be determined, and the final parabolic equation (6) becomes
f_res = 11.575 − 0.10863 (ε′ − 1) + 0.00646 (ε′ − 1)²   (7)
The curve of the fitted parabolic equation (7) is shown in Fig. 16. Based on the measured resonance frequency, the resulting transcendental equation can be used to estimate the relative permittivity of unknown liquids with permittivity in the range from 1 to 9. To verify its reliability, the dielectric constants estimated from the measured resonance frequencies of the different liquid samples are compared in Fig. 16 and Table 4, and the model error confirms the reliability of the transcendental equation (8).
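One way to use the fitted parabola (7) in practice is to invert it for the permittivity given a measured resonance frequency. The sketch below does this with the quadratic formula using the coefficients quoted in (7); this is only one possible realisation of the role played by equation (8), whose exact form is not reproduced in the text, and the example frequencies are hypothetical.

```python
# Invert f = 11.575 - 0.10863*(eps-1) + 0.00646*(eps-1)^2 (f in GHz) to estimate
# the relative permittivity of a liquid from its measured resonance frequency.
import math

A, B, F0 = 0.00646, -0.10863, 11.575

def eps_from_frequency(f_meas_ghz):
    # Solve A*x^2 + B*x + (F0 - f) = 0 for x = eps - 1, keeping the root in [0, 8].
    disc = B * B - 4.0 * A * (F0 - f_meas_ghz)
    if disc < 0:
        raise ValueError("frequency outside the calibrated range")
    x = (-B - math.sqrt(disc)) / (2.0 * A)   # smaller root lies in the valid range
    return 1.0 + x

print(round(eps_from_frequency(11.575), 3))  # empty channel (air) -> ~1.0
print(round(eps_from_frequency(11.30), 2))   # hypothetical measured liquid
```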
Measurement for solid dielectric substrates
Simulation and measurement of common solid dielectric materials. Considering the actual measurement situation for solids, a simulation model including air layers was built in CST. In the actual measurement, slight bending of the MUTs meant that the MUT and the sensor did not fit together tightly, so we added air layer 2 to the model shown in Fig. 18 to ensure the accuracy of the simulation results 12 . The thicknesses of air layer 1, air layer 2 and the MUT are 0.03 mm, 0.02 mm and 1 mm, respectively. Common solid dielectric materials (Teflon, quartz, FR-4, ceramics) were simulated and the simulated results are shown in Fig. 19; they show that the sensor can distinguish different solid materials with high sensitivity and a large frequency shift Δf. To verify the accuracy of the simulation model and the measured results, the comparison between simulated and measured results is shown in Fig. 22 and Table 5. The differences between simulated and measured resonance frequencies are very small and can be attributed to fabrication tolerance and measurement errors, while the differences in amplitude are mainly caused by fabrication tolerance and by conductor, dielectric and radiation losses. The irregularity of the measured curve is mainly caused by the heterogeneity of the MUTs and by dielectric and radiation losses. Fig. 23 shows that the measured relative permittivities of the MUTs match well with the literature values reported in references [9][10][11][12] , which indicates the accuracy of the measured results and the reliability of the sensor proposed in this paper.
Here f_res,MUT and f_res,air are the resonance frequencies of the sensor with and without the MUT, respectively, and ε_eff,air and ε_eff,MUT are the effective permittivities of air and of the MUT, respectively. Figure 10 shows the relationship between the relative permittivity of the MUT and the resonant frequency of the sensor due to its interaction with the MUT: the resonance frequency decreases as the relative permittivity of the MUT increases. In reference 22 , a parabolic equation between the relative permittivity of the MUT and the resonant frequency of the sensor is established:
f_res = B1 + B2 ε′ + B3 ε′²   (10)
Here, ε′ is the relative permittivity of the MUT and B1, B2 and B3 are constant values. The reference MUT is air, whose dielectric constant is 1. Considering that the resonant frequency of the sensor without a MUT, f_res,air, is a constant value, and based on reference 15 , equation (10) can be expanded with respect to (ε′ − 1), as equation (11) shows:
f_res = B1 + B2 (ε′ − 1) + B3 (ε′ − 1)²   (11)
Based on the measured results for the materials air, Teflon, quartz and ceramics, the constant parameters B1, B2 and B3 of (11) can be determined. Equation (11) then becomes:
f_res = 11.575 − 0.74629 (ε′ − 1) + 0.04152 (ε′ − 1)²   (12)
FR-4 is a standard dielectric substrate whose dielectric constant is well known. We used (12), fitted from the measured results of the other MUTs, to estimate the relative permittivity of FR-4 in order to test the reliability of this model. The ε′ value obtained from the measured resonant frequency is 4.23, which is close to the nominal relative permittivity of 4.3. It is clear that equation (12) is fairly reliable for predicting the dielectric constants of known MUTs based on the measured f_res.
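The FR-4 consistency check can be reproduced numerically by inverting equation (12). In the sketch below the "measured" frequency is back-computed from the quoted fitted value ε′ = 4.23, purely for illustration; the actual measured frequency is not given in the text.

```python
# Forward and inverse evaluation of equation (12) for solids (f in GHz):
# f = 11.575 - 0.74629*(eps-1) + 0.04152*(eps-1)^2
import math

A, B, F0 = 0.04152, -0.74629, 11.575

def forward(eps_r):
    x = eps_r - 1.0
    return F0 + B * x + A * x * x

def invert(f_meas_ghz):
    disc = B * B - 4.0 * A * (F0 - f_meas_ghz)
    x = (-B - math.sqrt(disc)) / (2.0 * A)   # root within the 1-9 permittivity range
    return 1.0 + x

f_fr4 = forward(4.23)                             # ~9.60 GHz for the quoted value
print(round(f_fr4, 3), round(invert(f_fr4), 2))   # recovers eps ~ 4.23
```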
To calculate the relative permittivity of a MUT from the measured resonance frequency, equation (12) can be rearranged into equation (13). In order to check the reliability and validity of the simulation model and of (13), the relative permittivities of different MUTs were calculated from the f_res measured with the proposed sensor and are tabulated in Table 6. Fig. 24 shows that the calculated relative permittivities agree well with the literature values. The reliability of the calculated ε′ shows that the sensor can identify different MUTs and predict their dielectric constants within a certain range of accuracy.
Performance comparison
The microwave sensor proposed in this paper can be used not only for identifying organic liquids but also for distinguishing solid substrates. To place the present work in context, the performance of the proposed sensor is compared with microwave sensors for liquids and for solids reported in the literature. Moreover, to make a fair comparison of sensitivity between the proposed sensor and other microwave sensors, we use the mean sensitivity S defined in reference 23 (equation (14)). Comparison with prevailing sensors for liquids. Table 7 presents the performance characteristics of several conventional microwave sensors with various configurations, resonance frequencies, excitation sources, etc. Most conventional meta-atom sensors excited by a microstrip line are used for liquids whose permittivity ranges from 9 to 80. The proposed sensor, excited by a pair of antennas, is designed for liquids with low permittivity, which complements the detection range of traditional sensors. Moreover, based on the measured results and the mean sensitivity S defined in reference 23 , Table 7 shows that the proposed metamaterial-inspired sensor can distinguish different liquids whose permittivity ranges from 1 to 9 with high mean sensitivity. Comparison with prevailing sensors for solids. Table 8 presents the performance characteristics of several conventional microwave sensors with various configurations, excitation sources, permittivity ranges studied and frequency shifts Δf, etc. Many conventional meta-atom sensors excited by a microstrip line have been reported for distinguishing different dielectric materials and predicting their permittivity, but they leave considerable room for improvement in terms of frequency shift Δf and sensitivity. Table 8 shows that the proposed metamaterial-inspired sensor, excited by antennas, can distinguish different solid dielectric materials with a larger frequency shift Δf and a higher mean sensitivity.
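The exact form of the mean sensitivity S in equation (14) is not reproduced here, since it comes from reference 23. Assuming the commonly used definition S = (|Δf| / f_unloaded) / Δε × 100 % (relative frequency shift per unit permittivity change), the measured ranges quoted in the abstract give the rough figures below; treat these as an assumption-based illustration, not the paper's reported values.

```python
# Mean sensitivity under an ASSUMED definition (relative shift per unit permittivity),
# evaluated with the measured endpoint frequencies quoted earlier in the paper.
def mean_sensitivity(f_unloaded, f_loaded, eps_lo, eps_hi):
    return abs(f_unloaded - f_loaded) / f_unloaded / (eps_hi - eps_lo) * 100.0

print("liquids: %.2f %%" % mean_sensitivity(11.575, 11.150, 1, 7))  # eps 1 -> 7
print("solids:  %.2f %%" % mean_sensitivity(11.575, 8.260, 1, 9))   # eps 1 -> 9
```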
Conclusion
A high-sensitivity microwave metamaterial-inspired sensor, based on a 13×13 array of Asymmetric Electric Split-Ring Resonators (AESRR), is presented for the permittivity characterization of organic liquids and solid dielectric substrates with low permittivity. Excited by a pair of patch antennas, the sensor exhibits a strong electric field in the gaps of the AESRR, which makes it sensitive to changes in the dielectric environment. T-shaped channels were integrated into the sensor by grooving the substrate to improve integration and enable liquid detection. During the measurement session, seven organic liquids and four solid dielectric substrates were chosen as MUTs, and the measured results match well with the simulated results, which verifies the reliability of the sensor. Based on the fabricated sensor and the actual measurement environment, simulation models for measuring liquids and solids were built in CST. Moreover, two transcendental equations, derived from the measured results, are proposed to predict the relative permittivity of liquid samples and solid materials, respectively; the estimated values of relative permittivity are in good agreement with literature values, showing the accuracy of the transcendental equations. Compared with prevailing conventional meta-atom microwave sensors excited by a microstrip line, the proposed sensor can distinguish not only liquids but also solid dielectric materials with a larger frequency shift Δf and higher sensitivity. This sensor has many advantages, such as low cost, real-time operation, high sensitivity, and high robustness. Most importantly, it applies to the permittivity characterization of organic liquids as well as solid dielectric substrates, a wider range of applications, which makes the sensor an attractive choice for implementation in a lab-on-a-chip sensor system in the microwave band. Future work will focus on increasing the sensitivity of the sensor, reducing its size, and reducing the volume/area of the MUTs.
"Engineering",
"Materials Science",
"Physics"
] |
Numerical Analysis of Earth Dam Subjected to an Earthquake Excitation
ABSTRACT
INTRODUCTION
Earth dams are essential structures that store water, control floods, and produce hydroelectric power. They are used worldwide, and many large dams were built during the twentieth century. Nonetheless, earth dams are exposed to seismic hazards, which can cause significant damage or even failure. Accordingly, seismic evaluation is crucial, particularly in regions prone to earthquakes, because dam failure can lead to loss of life, economic damage, and severe environmental harm [1]. Different kinds of dams can be built from different materials, and the choice depends on site conditions, material availability, and the dam's purpose. Modern dams are essentially of two types: concrete dams and embankment dams. Embankment dams are further divided into rock-fill dams and earth-fill dams [2].
Recently, there has been a growing need for infrastructure such as highways and dams. The construction of large dams has become more frequent, serving needs such as water supply, flood control, irrigation, and hydroelectric power generation. However, the seismic behavior of earth dams is a critical factor to consider during design and construction.
This paper presents a dynamic analysis of the Makhool dam using Geo-studio software to study the seismic behavior of earth dams. The study examines the effect of earthquake excitation on the dam's behavior, focusing on the dam's height, soil properties, and input motion. The analysis was performed using the QUAKE/W program, with input from the SEEP/W program. The findings offer useful insights into designing and building earth dams in earthquake-prone regions and highlight the importance of seismic evaluation for dam safety.
LITERATURE REVIEW
Seismic evaluation for dam construction is vital to ensure the safety and reliability of earth dams. Many studies have examined the seismic behavior of earth dams, focusing on how earthquake excitation affects the dam. Field investigation is a key part of seismic evaluation, including seismological studies that document past earthquake occurrences in the region. These studies help to estimate the likelihood of future earthquakes, and the seismic history of the dam construction area should be available. Another part of the investigation involves geotechnical studies, which examine the soil or rock formations at the site, assess their behavior during an earthquake, and determine how they affect the structure's resistance; the geological conditions also need to be examined [3]. Earthquakes can increase pore-water pressure within dams and impose extra stress on them. The main impact of earthquakes is slope instability, which can lead to dam failures [4]. The shaking from an earthquake can collapse an entire dam, and earthquake-induced failures are structural failures. There are two main types of earthquake-related failure: liquefaction of the dam's foundation and sliding or cracking of the dam's embankment [5]. Approximately 30,000 reservoirs are in a low safety condition, according to collected data [6,7]. These statistics show a high risk of dam failure due to earthquakes.
Hosseini and Nasrollahi [8] determined the excess pore pressure after an earthquake for the Karkheh dam, which is considered the largest dam in Iran; the case study was modeled in FLAC2D. It was concluded that the pore-water pressure decreases at the filter zones located on the sides of the core. The maximum pore-water pressure occurred at the middle level of the core, and this value increased by about 26% after the earthquake.
Niu et al. [9] studied the seismic response of the Shuangjiangkou earth-rock fill dam using the non-linear Pastor-Zienkiewicz model, which treats the dam materials as a two-phase porous medium and considers the dam-reservoir-foundation as one interaction system. The study calculated a maximum horizontal displacement of 0.6 m and a maximum vertical displacement of 0.3 m at the crest of the dam. Pore-water pressure and vertical stresses increase at the base of the dam.
A study by Khalil [10] examined the pore-water pressure in the Mosul dam under three accelerations: 0.2 g, 0.25 g, and 0.3 g. The sections varied in the degree of friction and the density of the core materials. With increasing acceleration value, the pore-water pressure in the section increases.
According to Bouaicha et al. [11], the variation of pore-water pressure and horizontal displacement of earth dams was studied using FLAC2D, based on the finite difference method. The displacement with water was also compared with the displacement without water; the displacement with water is greater, at 0.342 m.
Ebrahimian [12] investigated the seismic behavior of an earth dam using numerical modeling. The effects of dam height, soil behavior, and input-motion characteristics on the seismic response of the dam were studied. It was concluded that the horizontal displacement and shear strain increase with increasing dam height, and the maximum displacement of the dam at the end of the earthquake is 94 cm. The type of dam soil also has a major effect on the seismic behavior: soil with lower strength reduces the acceleration compared with stronger soil.
According to Fattah et al. [13], the Khassa Chai zoned earth dam was analyzed dynamically under earthquake conditions. The El-Centro earthquake record, with a duration of 10 seconds, was used as the basis for the study, with input vertical accelerations of 0.05 g, 0.1 g, and 0.2 g. In addition to pore-water pressure, displacement, and stress, many dam parameters were measured. Horizontal displacement increased with depth, together with increasing pore-water pressure at the dam's base.
The Al-wand earth dam was numerically analyzed by Al-Hadidi and Abbas [14]. In their simulation, the 2017 Iraq earthquake record was applied with accelerations of 0.05 g, 0.02 g, and 0.03 g. For nodes 3 and 1, the maximum pore-water pressure was 80 kPa and 110 kPa, respectively. As the earthquake progressed, horizontal and vertical displacements increased, whereas stresses gradually decreased, indicating soil weakening.
Bhosale and Deshmukh [15] performed a numerical analysis of the Ambad dam (a zoned earth dam) located in India using PLAXIS 2D to study its seismic response. The seismic behavior of the Ambad dam was expressed in terms of pore-water pressure, stress, and displacement. Two cases were used in the numerical analysis: an earthquake with a full reservoir and a second case with an empty reservoir. For an earthquake of magnitude 5.4 the results were as listed below:
- Total displacement: 0.2 m and 0.45 m for the full and empty reservoir, respectively.
- Horizontal displacement: 0.187 m and 0.1 m for the full and empty reservoir, respectively.
- Stresses: 683.49 kPa for the full reservoir and 515.47 kPa for the empty reservoir.
- Pore-water pressure: 480 kPa for the full reservoir and zero for the empty reservoir.
Tosun et al. [16] carried out a numerical analysis of the Bebekli dam, located in the western part of Turkey, to show the seismic behavior of the dam. It was observed that the maximum horizontal displacement is 58.5 cm at the crest of the dam, while the deformation is about 7-15 cm; these results reveal a sliding problem identified through the dynamic analysis.
Mazaheri et al. [17] studied the dynamic analysis of the Doyraj earth dam, located in Iran. The seismic behavior was expressed in terms of vertical and horizontal acceleration, and two constitutive models were adopted in the analysis: the Mohr model and the Finn model. The deformation results show that the maximum settlement of the dam is 61 cm, the horizontal deformation at the core is less than at the upstream side, and the maximum horizontal deformation occurs at the downstream side of the dam.
Soroush and Rayati [18] performed a numerical analysis of the Karkheh dam, which has a central clay core and a cut-off wall in its foundation, with the finite element code PLAXIS. The peak acceleration considered for the Karkheh dam is 0.4 g, and only the first twenty seconds of the record were used. The study examined the effect of the earthquake on the cut-off wall: the maximum horizontal deformation of the cut-off wall occurs at its higher elevations and is about 54 cm at the top, which implies settlement in the foundation that could increase the displacement [19,20].
METHODOLOGY
The dynamic analysis of the Makhool dam was conducted using the QUAKE/W program, a finite element method-based software package for analyzing the seismic response of earth structures. The QUAKE/W program has been validated in a number of geotechnical engineering studies. SEEP/W, a finite element program used to analyze the seepage behavior of earth structures, was used to generate the input data for QUAKE/W; it simulated the distribution of pore-water pressures within the dam prior to the seismic excitation. The QUAKE/W and SEEP/W programs were selected for their ability to simulate earthquake loading conditions on earth structures. In geotechnical engineering, finite element analysis has been used extensively to analyze the behavior of structures under dynamic loading. Using Geo-studio software, a comprehensive seismic analysis of the earth dam was conducted by integrating the QUAKE/W and SEEP/W programs. The software accommodates the dam geometry and material properties and provides an easy-to-use interface for input data and results. The choice of QUAKE/W, SEEP/W, and Geo-studio was based on their ability to reproduce the complex behavior of earth structures under seismic loading conditions and to provide a comprehensive assessment of the seismic behavior of earth dams.
The two-dimensional model used for the dynamic analysis of the Makhool dam was created using Geo-studio software. The model was based on the actual geometry and material properties of the dam, and it included the foundation, embankment, and reservoir. The model was discretized into finite elements, and the analysis was conducted using the QUAKE/W program. The material properties of the dam were obtained from laboratory tests and field investigations. The parameters used in the model included the density, Young's modulus, Poisson's ratio, and shear strength of the soil layers. The parameters were validated against real data, including the results of laboratory tests and field investigations, and the model was validated against the results of previous studies. The results of the analysis were compared with those of previous studies that investigated the seismic behavior of earth dams, and the comparison showed good agreement.
Geo-studio software was used to create the two-dimensional model presented in this paper. The analysis is carried out by the finite element method with the Makhool earthen dam as a case study to assess dam stability. In addition to the clay core, downstream and upstream filter layers containing fine and coarse gravels are installed along the dam axis between the Makhool Mountain fold in the west and the Khanukah fold in the east. Detailed information about dams and reservoirs in Iraq is provided in an unpublished report by the State Commission for Dams and Reservoirs (1921), as shown in Figure 1. Table 1 shows the properties of the Makhool dam.
Seepage analysis
The SEEP/W program is used for analyzing the seepage through the dam. The mesh includes about 2365 nodes and 3371 elements, and the nodes located at the upstream face are treated as head boundaries, with a total head equal to the reservoir water level.
Figure 2 shows the seepage line of minimum water level.
Dynamic analysis
The dynamic analysis is performed with the QUAKE/W program using the results imported from the SEEP/W program. The analysis considers two water-level cases (minimum water level of 140 m and maximum water level of 150 m), each with three input acceleration values of 0.04 g, 0.06 g and 0.08 g. The analysis results are shown as figures that include the total stress in the x-direction and y-direction, the pore-water pressure, and the x-displacement for four nodes. Table 2 shows the material properties of the dam.
A comparison will be made between nodes 1, 2, 3 and 4 for the different parameters computed in the program. Figure 3 shows the nodes of the dynamic analysis.
RESULTS AND DISCUSSIONS
The numerical analysis of the earth dam subjected to an earthquake revealed several important findings with significant implications for the design and construction of dams in earthquake-prone regions. First, the analysis showed that the horizontal component of motion significantly affects the behavior of earth dams during earthquakes: the dam experienced larger displacements and strains in the horizontal direction than in the vertical direction. This finding highlights the importance of considering the horizontal component of motion in the design of earth dams to ensure their safety and reliability during earthquakes. The analysis also found that the seismic response of the dam is highly dependent on its height and soil characteristics. As the height of the dam increases, the displacement and strain values also increase. Similarly, the soil type affects the seismic response of the dam, with softer soils resulting in larger displacements and strains. These findings suggest that engineers should carefully consider the height and soil characteristics of the dam during the design phase to ensure its safety and reliability under seismic loading conditions. The study's results can be used to optimize the design of earth dams by considering the effect of dam height, soil characteristics, and input motion on the seismic response of the dam; by incorporating these factors into the design process, engineers can develop more effective strategies to mitigate earthquake risks. Further research is needed to validate these findings and develop more accurate models for predicting the seismic response of earth dams. Figure 4 shows the input acceleration of 0.04 g, Figure 5 the input acceleration of 0.06 g, and Figure 6 the input acceleration of 0.08 g. Subsequent figures show the horizontal and vertical total stresses at nodes 1 and 4 for the three earthquake values; the x-total stress and y-total stress for the two nodes started to decrease, which affected the strength of the soil and led to soil weakening. Table 3 shows the maximum x-displacement of nodes 1 and 3 for the three earthquake values: the x-displacement increases with the earthquake value and with the depth of the nodes. For earthquake values of 0.06 g and 0.08 g, Figures 13-14 show the pore-water pressure at node 1 and node 3, respectively; the pore-water pressure is greater at the base than at the crest and increases with the earthquake value.
CONCLUSIONS
The aim of this study was to examine the seismic behavior of earth dams and assess how earthquake excitation affects the dam's performance. The QUAKE/W program, a finite element method-based software package, was used to model the seismic response of the Makhool dam, with input data generated by the SEEP/W program, another finite element program used to analyze seepage behavior in earth structures. The key findings are that dam displacement rises with increasing acceleration, pore-water pressure is higher at the base of the model than at the dam crest, and the input acceleration decreases with increasing x-acceleration. The study also highlights the importance of field investigation and geotechnical studies in seismic evaluation for dam construction. In conclusion, this work offers useful insights into the seismic behavior of earth dams and underscores the need for seismic evaluation of dam construction in earthquake-prone areas. The results of this study can be applied to improve the design and construction of earth dams, ensuring their safety and reliability under seismic loading conditions.
Maximum water level elevation (MWL): 150 m; minimum water level elevation (NWL): 140 m. The Makhool dam is being constructed on the Tigris River in Iraq. Approximately 3 km separate this mountain range from the eastern side of the Makhool anticline, approximately 16 km from the Al-Fatha bridge, and 30 km from Baijy town in the Salah Al-Din Governorate. It is approximately 15 kilometers downstream from the confluence of the Lesser Zab and Tigris rivers, upstream of Baghdad, the Iraqi capital. Located at 3670 meters above mean sea level (m.a.m.s.l.), the dam reaches a maximum water level of 150 meters. The dam reaches its highest point at 56 meters, with a crest height of 160 meters and a crest width of 12 meters.
Figure 3 .
Figure 3. Nodes of dynamic analysis
Table 2 .
Material properties of dam
Table 3 .
The max x-displacement of nodes 1 and 3 for three values of earthquake | 3,921.4 | 2024-06-22T00:00:00.000 | [
"Engineering"
] |
Investigation of Graphene Oxide in Diesel Soot
Graphene has emerged as a potential material in various scientific disciplines, ranging from material science and engineering to, more recently, biomedicine. This paper describes the investigation of the presence of graphene and graphene oxide (GO) in the carbon soot of internal combustion diesel engines. The UV-Visible, Fourier transform infrared (FTIR), X-ray diffraction (XRD), photoluminescence (PL) and Raman spectroscopic analyses of the sample provide conclusive evidence of the formation of graphene and GO. The Field Emission Scanning Electron Microscopy (FESEM) and Energy Dispersive Spectrum (EDX) analyses of the sample show carbon nanoparticles (CNPs) of size less than 50 nm. The High-Resolution Transmission Electron Microscopy (HR-TEM) analysis confirms the formation of graphene sheets with carbon nanospheres attached to them. The study reveals the possibility of exploiting diesel soot for potential applications in science and technology.
Keywords: Graphene; Graphene oxide; Carbon nanoparticles; Combustion; Carbon nanotube
Introduction
Graphene, an allotrope of carbon, is a single-layer planar sheet of sp²-bonded carbon atoms packed in a two-dimensional honeycomb lattice. It is the thinnest and strongest free-standing two-dimensional (2D) basic structural element of other allotropes, including graphite, charcoal, carbon nanotubes and fullerenes. Its mechanical strength, large conductivity, large surface area, and light weight make it a suitable material for high-capacity energy-storage batteries. Intrinsic graphene behaves like a semi-metal or zero-gap semiconductor. It can be considered an indefinitely large aromatic molecule, the ultimate case of the family of flat polycyclic aromatic hydrocarbons. Its unique honeycomb carbon geometry makes it a potential material in various scientific disciplines, ranging from material science and engineering to, more recently, biomedicine [1][2][3][4][5][6][7][8].
A bulk solid made by oxidation of graphite with increased interlayer spacing is termed graphite oxide. Chemically modified graphene formed as a byproduct of this oxidation process is called graphene oxide (GO). Reducing graphene oxide by various physical, thermal and chemical methods produces reduced graphene oxide (r-GO), which is commonly employed for producing large quantities of good-quality graphene for various industrial applications. Based on the number of well-defined countable stacked layers, there are bilayer, tri-layer, multi-layer and few-layer graphene. Single-atom-thick sheets of carbon atoms having a thickness or lateral dimension less than 100 nm are classified as graphene nanoplates, nanosheets, or nanoflakes [9]. The most commonly used bottom-up and top-down methods for the synthesis of graphene and its derivatives are chemical vapor deposition, micromechanical exfoliation, solvent stripping, ball milling, oxidative functionalization, etc.
List of Abbreviations: GO: Graphene oxide; FTIR: Fourier transform infrared; XRD: X-ray diffraction; PL: Photoluminescent; FESEM: Field Emission Scanning Electron Microscopy; EDX: Energy Dispersive Spectrum; CNPs: Carbon nanoparticles; HR-TEM: High-Resolution Transmission Electron Microscopy; 2D: Two-dimensional; r-GO: Reduced graphene oxide; ICE: Internal combustion engine; G band: Graphitic band; D band: Disorder band; RBM: Radial Breathing Mode; IFM: Intermediate Frequency Mode; oTO-LA: Out-of-plane transverse optic–longitudinal acoustic modes.
Nowadays the world is worried about the pollution caused by old vehicles and internal combustion engines. The present work is an attempt to investigate the formation of GO by the incomplete combustion of the hydrocarbon fuel diesel. In internal combustion engines (ICE), combustion of the diesel fuel takes place at high temperature, producing carbon nanoparticles. The high-temperature combustion may lead to the formation of graphene sheets containing carbon nanotubes and graphene oxide [10]. The lower the efficiency of the ICE, the greater the possibility of formation of particulate matter in addition to carbon dioxide and water. Soot particles are solid, carbon-rich (~98%) material formed by vapor-phase condensation as a product of combustion [11]. Thus the study is helpful in turning hazardous diesel soot into a useful material for supercapacitive energy-storage applications.
Materials and Methods
The soot particles formed by the incomplete combustion of diesel in an ICE were collected and purified by the liquid-phase oxidation method. Purification of the carbonaceous soot particles is essential since impurities may affect the properties of the nanotubes, graphene layers, etc. present in the sample. In the liquid-phase oxidation method the sample is mixed with sulphuric acid and nitric acid in the ratio 1:3, ultrasonicated using a Scientech SE-366 for 20 minutes, filtered with Whatman filter paper 41 and washed with distilled water four times. It is then quenched with ice-cooled water and base-neutralized with sodium hydroxide [12]. The sample is again washed with distilled water four times and filtered with Whatman filter paper 42. The purified sample is subjected to morphological characterization by Nova Nano FESEM and JEOL JEM-2100 TEM. The composition study is carried out by EDX. XRD measurements are done on a Bruker D8 Advanced diffractometer with CuKα radiation (λ = 1.5406 Å). The functional groups are identified by FTIR, recorded using a Shimadzu IR Prestige-21, and the Raman spectrum is recorded using a LabRam micro-Raman spectrometer with an argon-ion laser (514.5 nm wavelength, 5 mW power) as the excitation source. The UV-Visible spectrum is recorded using a Jasco V 550 UV-Visible spectrophotometer and the Photoluminescent spectrum of the sample is recorded using a Horiba Fluoromax.
Results and Discussion
The FESEM image of the carbon particles formed from the internal combustion diesel engine is shown in Figure 1(a). It indicates spherical carbon nanoparticles with size less than 50 nm. Elemental analysis by EDX of the sample shows the presence of carbon, oxygen, sodium, sulphur, calcium and potassium. The EDX spectrum of the carbonaceous diesel soot is shown in Figure 1(b). The HR-TEM images of the sample are shown in Figure 2. The TEM images confirm the formation of graphene sheets with carbon nanospheres attached to them, and show multi-layered graphene sheets.
The UV-Visible absorption spectrum gives the electronic transitions from the ground state to the excited state. The spectrum shows a peak at 247 nm that arises from the π–π* transition of C–C and C=C bonds in the sp² hybrid region. The peak in the 278–290 nm region is due to the n–π* transition of the C=O bond of the sp³ hybrid region [13,14]. This type of absorption spectrum is reported to be exhibited by graphene comprising a single layer of carbon atoms [15]. The UV-Visible absorption spectrum of the sample is shown in Figure 3(a). The identification of the surface functional groups is done with FTIR spectroscopy. The FTIR spectrum of the sample is shown in Figure 3(b). Graphene oxide is considered the precursor for graphene synthesis. It contains various functional groups including C–O, C=O, –OH and –C–O–C that have a significant role in the properties exhibited by graphene oxide sheets [16,17]. The peaks around 1343 and 1410 cm⁻¹ are attributed to the deformation of CH3 and CH2 groups [1]. The region 1500–1650 cm⁻¹ is the aromatic region, in which the peak at 1577 cm⁻¹ may be due to double bonds with one substitution [18,19]. The region 2500–3650 cm⁻¹ corresponds to –OH stretching vibrations and to the aromatic and unsaturated bonds present in the sample [15,20]. The FTIR spectrum of the sample under investigation shows the characteristic peaks of graphene as evidenced by the literature. This gives an indication of the possible presence of graphene in the sample.
Photoluminescent (PL) spectroscopy is a contactless, nondestructive method for obtaining information about the electronic structure [21]. The PL spectra of the sample are recorded for three different excitation wavelengths, 350 nm, 430 nm and 510 nm. The corresponding emission spectra obtained in the range 400–800 nm are shown in Figure 4(a), (b) and (c). It can be seen that as the excitation wavelength is increased from 350 nm to 510 nm, the emission maximum shifts to longer wavelengths. From the literature, the broad emission band in the region 400–800 nm is characteristic of the PL spectra of GO [22]. The PL spectra of the sample show a similar nature, thereby revealing the presence of GO. This type of redshift is indicative of combined aromatic groups or cyclic molecules with several pi (π) bonds [22]. The fluorescence exhibited by the graphene oxide may be due to optical transitions occurring in the π–π* gap of sp² sites [23,24]. The role of sp² clusters within the sp³ matrix is very prominent in deciding the emission wavelengths. Thus the PL spectra of the sample provide information about the presence of GO, in agreement with the FTIR spectra. The color emission from the sample on photoexcitation is measured and expressed in terms of the resultant chromaticity coordinates (x, y). Taking the International Commission on Illumination (CIE) XYZ color space as the standard reference, the chromaticity diagram of the sample is plotted for the three excitations at 350 nm, 430 nm and 510 nm and is shown in Figure 5. The x and y chromaticity coordinates for the three excitation wavelengths are calculated in the CIE XYZ color space. For excitation at 350 nm, 430 nm and 510 nm the emissions are obtained at the CIE coordinates (0.321, 0.313) in the bluish-pink region, (0.272, 0.280) in the purplish-blue region and (0.424, 0.567) in the yellowish-green region. The XRD pattern of the sample is shown in Figure 6(a). On deconvolution of the spectrum, a small peak can be seen in the range 10–15°, which is a characteristic of GO. The two-theta values at 25.17° and 42.96° are identified as graphite peaks corresponding to the (002) and (101) planes. The presence of these peaks suggests the formation of multi-layered GO sheets due to the strong interactions existing between the layers. This further confirms the presence of GO in the sample.
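As a side check (not part of the original analysis), the interlayer spacing implied by these two-theta peaks can be estimated with Bragg's law using the CuKα wavelength quoted in the experimental section; the peak positions are taken from the text, while the function and script below are only an illustrative sketch.

```python
import math

# CuKα wavelength used for the XRD measurement (Å), as quoted in the methods
WAVELENGTH = 1.5406

def bragg_d_spacing(two_theta_deg, lam=WAVELENGTH, n=1):
    """Interplanar spacing d = n*lambda / (2*sin(theta)) from a 2-theta peak position."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * lam / (2.0 * math.sin(theta))

for peak, plane in [(25.17, "(002)"), (42.96, "(101)")]:
    print(f"2θ = {peak}°  {plane}  d ≈ {bragg_d_spacing(peak):.3f} Å")
# The (002) spacing of ~3.5 Å is close to the graphite interlayer distance,
# consistent with stacked (multi-layer) graphene/GO sheets.
```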
Raman spectroscopy is an efficient, non-destructive tool for the characterization of all types of carbon allotropes. This technique is one of the most reliable methods to determine the quality of graphene by understanding the defect density within the C-framework [1]. It gives a variety of information about sp²-hybridized nanocarbons, which include graphene and graphene-related materials, carbon nanotubes, etc. [25]. The Raman spectrum of the sample is shown in Figure 6(b). The Raman peaks observed and their assignments are given in Table 1. The D, G and 2D (or G') peaks at 1379, 1580 and 2400–3250 cm⁻¹, respectively, are the characteristic peaks exhibited by graphene. The G band corresponds to the first-order scattering of the E2g stretching vibration mode of sp² carbon and is an indication of phonon emission around 1580 cm⁻¹. The D peak shows the disorder in the crystal structure, whereas the 2D peak indicates the graphene structure. If the defect density is about 1–3%, the 2D peak is only barely detectable (no sharp peak). The low intensity of the 2D peak shows that the structure of graphene is destroyed and a considerable amount of defects are introduced within the carbon framework as a result of the formation of graphene [28]. Figure 6(b) also exhibits the same nature, with a low-intensity 2D peak indicating the formation of graphene.
The defect density can be evaluated from the I_D/I_G or I_2D/I_G ratio, and in the present case it is found to be ~3, which is in agreement with the literature [22]. The planar microcrystallite size (L_a) for the sample is calculated from the intensities of the D and G bands of the Raman spectrum using the empirical relation L_a = (10² × (I_G/I_D))^0.5 nm [29]. The crystallite size is found to be ~5 nm for the Raman excitation wavelength of 514.5 nm.
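A minimal sketch of this arithmetic, assuming the empirical relation quoted above and the reported I_D/I_G ratio of about 3 (the function name and rounding are ours, not the authors'):

```python
def crystallite_size_la(i_d_over_i_g):
    """Planar microcrystallite size L_a (nm) from the empirical relation
    L_a = (10^2 * (I_G / I_D))**0.5 quoted in the text [29]."""
    i_g_over_i_d = 1.0 / i_d_over_i_g
    return (1e2 * i_g_over_i_d) ** 0.5

print(f"L_a ≈ {crystallite_size_la(3.0):.1f} nm")  # ≈ 5.8 nm, i.e. ~5 nm as reported
```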
Graphene exhibits two hexagonal edge structures, zigzag and armchair. It is well reported in the literature that a strong D peak compared to a weak G peak arises from the armchair structure [30,31]. Figure 6(b) is in agreement with these reports. These peaks, along with the UV, FTIR, PL and XRD data, provide conclusive evidence for the existence of graphene in the diesel soot from the ICE. The TEM images provide direct proof of the formation of graphene sheets.
| Band | Raman shift (cm⁻¹) | Assignment | Remarks |
|---|---|---|---|
| D band | 1379 | Raman active mode of graphite [26] | Attributable to the presence of graphene |
| G¯ band | 1580 | Raman active mode of graphite [26] | |
| G+ band | 1656 | Overtone of oTO mode [26] | |
| M¯ band | 2130 | Overtone of D mode [24,27] | Attributable to the presence of graphene |
| G´ band | 2512 | Overtone of D mode [24,27] | Attributable to the presence of graphene |
| G' band | 2830 | | |

Table 1: Raman spectral assignments
The study reveals the possible exploitation of diesel soot for potential applications in science and technology and suggests a solution to the worldwide concern about pollution from old vehicles and internal combustion engines. The analysis of the purified samples from the ICE reveals the formation of graphene oxide and opens a new window for the effective use of the soot particles in fuel cells, nanocapacitors, etc. The HR-TEM and FESEM analyses show multi-layered graphene sheets with carbon nanospheres of size less than 50 nm, and the EDX reveals the richness of carbon in the sample. The structural (FTIR, XRD and Raman) and optical (UV and PL) characterizations of the particulate matter collected from the internal combustion diesel engines exhibit the existence of graphene. Thus the hazardous diesel soot can be converted into a useful material for electronic applications.
Figure 1: (a) FESEM image (b) EDX spectra of the sample
Figure 2: HR-TEM images of the sample
Figure 6: (a) XRD pattern (b) Raman spectra of the sample
| 3,248 | 2017-03-01T00:00:00.000 | ["Materials Science"] |
The Neuroprotective Effects of SIRT1 on NMDA-Induced Excitotoxicity
Silent information regulator 1 (SIRT1), an NAD+-dependent deacetylase, is involved in the regulation of gene transcription, energy metabolism, and cellular aging and has become an important therapeutic target across a range of diseases. Recent research has demonstrated that SIRT1 possesses neuroprotective effects; however, it is unknown whether it protects neurons from NMDA-mediated neurotoxicity. In the present study, by activation of SIRT1 using resveratrol (RSV) in cultured cortical neurons or by overexpression of SIRT1 in SH-SY5Y cells, we aimed to evaluate the roles of SIRT1 in NMDA-induced excitotoxicity. Our results showed that RSV or overexpression of SIRT1 elicited inhibitory effects on NMDA-induced excitotoxicity, attenuating the decrease in cell viability, the increase in lactate dehydrogenase (LDH) release, and the decrease in the number of living cells measured by CCK-8 assay, LDH test, and Calcein-AM and PI double staining. RSV or overexpression of SIRT1 significantly improved SIRT1 deacetylase activity in the excitotoxicity model. Further study suggests that overexpression of SIRT1 partly suppressed the NMDA-induced increase in p53 acetylation. These results indicate that SIRT1 activation by either RSV or overexpression of SIRT1 can exert neuroprotective effects partly by inhibiting p53 acetylation in NMDA-induced neurotoxicity.
Introduction
Silent information regulator 1 (SIRT1), an NAD⁺-dependent deacetylase, is known to deacetylate histone and nonhistone proteins such as transcription factors. It participates in a variety of physiopathological processes such as health maintenance in development, gametogenesis, homeostasis, longevity, and several neurodegenerative diseases as well as age-related disorders [1][2][3][4][5]. Recently, the neuroprotective effects of SIRT1 have attracted great interest. It has been found that SIRT1 could be upregulated to antagonize neuronal injury in different animal models, such as cerebral ischemia, Alzheimer's disease (AD), and Huntington's disease (HD) [6]. It has also been demonstrated that SIRT1 deacetylates p53, PGC-1α, and NF-κB to prevent many pathogenic processes. However, it remains unknown whether SIRT1 protects neurons from NMDA-mediated neurotoxicity in different excitotoxic insult models.
Glutamate is a primary excitatory amino acid neurotransmitter and activation of glutamate receptors including NMDA receptor plays crucial roles in the central nervous system. However, overactivation of NMDA receptor may cause intracellular calcium overload, leading to an enzymatic cascade of events resulting ultimately in cell death known as excitotoxicity [7]. A wide range of acute and chronic brain injury diseases, such as stroke/ischemia and epilepsy, and certain neurodegenerative disorders have been linked to NMDA receptor-mediated excitotoxicity [8]. Therefore, NMDA-induced excitotoxicity is a useful tool to evaluate neurotoxicity in isolated cells and is a good model of nerve injury that mimics closely the situation in vivo [9].
The present study was designed to investigate the neuroprotection of SIRT1 in NMDA-induced excitotoxicity by activation of SIRT1 using resveratrol (RSV) in cultured cortical neurons or by overexpression of SIRT1 in the SH-SY5Y cell line. The neuroprotective role of SIRT1 activity in vitro may be useful for the development of new treatments for central nervous system disorders.
2.1. Materials. Poly-D-lysine (MW 150,000–300,000), trypsin, arabinoside cytosine, Calcein-AM, propidium iodide (PI), RSV, Sirtinol, NMDA, MK-801, and the SIRT1 assay kit were all purchased from Sigma-Aldrich (St. Louis, MO, USA). The Cell Counting Kit-8 (CCK-8) was from Dojindo, and the LDH kit was from Njjcbio. The polyclonal antibody to SIRT1 was from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Two polyclonal antibodies to p53 and Ace-p53 were obtained from Cell Signaling Technology (Beverly, MA, USA).
2.2. Cell Culture. Primary cortical cells were isolated from 1–3-day-old Wistar rats and were cultured as previously described [10]. In brief, cortical neurons from rats anesthetized with ketamine (intraperitoneal injection, 100 mg/kg, 3 min) were dissected and digested in 0.025% trypsin, followed by centrifugation at 800 g for 5 min. Cells were resuspended in neurobasal/B27 medium and cultured at 37°C in 5% CO2. Arabinoside cytosine (10 μM) was added after 24 h in vitro to inhibit non-neuronal cell growth. Experiments were performed after 10–12 days in vitro.
The human neuroblastoma SH-SY5Y cell line, obtained from the Chinese Academy of Sciences Institute of Cell Resource Center, Shanghai, China, was maintained in DMEM/F12 medium with 10% FBS in a 5% CO2 incubator. Cells were washed with PBS buffer before adding 0.25% Trypsin-EDTA, followed by incubation for 5 min at room temperature. Then, the cells were detached, resuspended in medium, counted, and seeded into plates at a density of 1 × 10^5.
2.3. NMDA Treatment.
After overnight incubation allowing the cells to reach 80% confluency, cells were treated with NMDA-containing Mg2+-free Locke's buffer for 2 h. RSV was added to cultures 12 h prior to NMDA induction. Sirtinol was added 2 h before NMDA treatment. MK-801 and NMDA were simultaneously added to Mg2+-free Locke's buffer in the NMDA + MK-801 group. Control cells were incubated with drug-free Mg2+-free Locke's buffer and grown at 37°C in an atmosphere containing 5% CO2.
2.4. Transfection of SIRT1. The expression vectors expressing human wild-type SIRT1 (WT-SIRT1) and the dominant-negative form of human SIRT1 (DN-SIRT1) were constructed by Genecopoeia. The plasmids were extracted with a Plasmid Midi Kit (Omega, GA, USA). The SH-SY5Y cells were seeded into plates at a density of 1 × 10^5, and after 24 h, the plasmids were transfected into the cells with Lipofectamine 2000 Transfection Reagent.
2.5. Cell Viability Assay. Cells were seeded in 96-well plates, and cell viability was assayed 24 h after NMDA exposure. 10 μL of CCK-8 solution was added to each well, followed by incubation at 37°C for 2 h. Absorbance at 490 nm was measured using a microplate reader (Packard, Meridien, MS).
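The paper does not spell out how absorbance readings were converted to percent viability; one common convention is to express each well relative to the mean of untreated control wells, as in this hedged sketch (the variable names, blank correction and example optical densities are assumptions, not the authors' protocol):

```python
import numpy as np

def percent_viability(sample_od, control_od, blank_od=0.0):
    """Percent cell viability from CCK-8 absorbance readings,
    expressed relative to the mean of untreated control wells."""
    sample = np.asarray(sample_od, dtype=float) - blank_od
    control = np.mean(np.asarray(control_od, dtype=float)) - blank_od
    return 100.0 * sample / control

# Example with made-up optical densities for NMDA-treated wells vs. controls
print(percent_viability([0.42, 0.45, 0.40], [0.90, 0.88, 0.92]))
```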
2.6. Lactate Dehydrogenase (LDH) Assay. LDH is released from cells into a culture medium upon cell lysis. The cells were plated in 24-well plates. At 24 h after NMDA exposure, the supernatant was collected to measure LDH release according to the manufacturer's instructions.
2.7. Calcein-AM and PI Staining.
Calcein-AM solution (20 μM) was added to coverslips and the cells were incubated at 37°C for 30 min. PI solution was then added and the cells were incubated at 37°C for 5 min. The cells were examined using a confocal microscope (Olympus, FV-1000) at an excitation wavelength of 490 nm and an emission wavelength of 515 nm.
2.8. SIRT1 Deacetylase Activity Assay. To measure SIRT1 activity, protein was extracted from cells. The enzyme activity of SIRT1 was measured using a SIRT1 assay kit (CS1040; Sigma-Aldrich) based on the Fluor de Lys-SIRT1 substrate peptide. The fluorescence intensity was measured with a microplate reader (Packard, Meridien, MS); the excitation wavelength was 365 nm and the emission wavelength was 460 nm.
2.10. Western Blot Analysis. The SH-SY5Y cells were collected at 24 h after exposure to NMDA. Then, cells were lysed in a lysis buffer (10 mM Tris-HCl (pH 7.4), 1 mM EDTA, and 1% Triton X-100). Cleared cell lysates were obtained after centrifugation at 10000 ×g for 30 min at 4°C. After measurement of protein concentration using a BCA Protein Assay kit, cell lysates (30~50 μg/lane) were subjected to SDS-PAGE, and separated proteins were electrotransferred to nitrocellulose membranes. The membranes were washed in Tris-buffered saline (TBS) containing 0.1% Tween 20 and 3% bovine serum albumin (BSA). The membranes were incubated overnight at 4°C in TBS containing 3% BSA and one of the following primary antibodies: SIRT1 (1 : 100), p53 (1 : 1000), and Ace-p53 (1 : 1000). Subsequently, the labeled proteins were incubated with an HRP-conjugated anti-rabbit IgG (1 : 10,000) for 2 h. Blots were developed with the ECL chemiluminescence system and were captured on autoradiographic films (Kodak Image Station 440). Films were scanned and a densitometric analysis of the bands was performed with AlphaEase image analysis software.
2.11. Statistical Analysis. The data were expressed as means ± S.E.M. of at least three independent experiments. One-way analysis of variance (ANOVA) with Bonferroni post hoc test was used for statistical comparisons. P < 0.05 was considered to be significant.
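The same statistical workflow (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be reproduced generically, for example in Python; the group values below are invented placeholders, and this is only an illustration of the procedure, not the authors' actual analysis script:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical viability measurements (three independent experiments per group)
groups = {
    "control": np.array([100.0, 98.5, 101.2]),
    "NMDA": np.array([48.0, 50.3, 46.9]),
    "NMDA+RSV": np.array([63.5, 66.1, 61.8]),
}

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with Bonferroni correction as the post hoc step
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_bonf, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_bonf, reject):
    print(f"{a} vs {b}: corrected p = {p:.4f}, significant = {sig}")
```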
Effects of RSV on NMDA-Induced Decrease in Cell Viability. Our previous study showed that the optimal excitotoxicity was induced 24 h after NMDA (100 μM) exposure for 2 h in primary cortical neurons. Figure 1(a) shows that NMDA decreased cell viability by 51.97% as compared to the control in primary neurons (P < 0.05). Pretreatment with five dosages (10 μM, 25 μM, 50 μM, 75 μM, and 100 μM) of RSV, a potent SIRT1 activator, showed that cell viability was increased by 20.43% (P < 0.05), 31.92% (P < 0.05), 17.78% (P < 0.05), 11.85% (P < 0.05), and 0.37% (P > 0.05), respectively, when compared to that of the NMDA-treated group (Figure 1(a)). In the following experiments, RSV was administered at a concentration of 25 μM based on the significant protective effect observed in the 25 μM RSV group. MK-801 (10 μM) did not alter cell viability and yielded a value nearly equivalent to that of the control group (P > 0.05). These data support the notion that the toxic effects were induced by NMDA. Similarly, DMSO as a NMDA vehicle had no significant effects on cell viability (P > 0.05). As shown in Figure 1(b), pretreatment with RSV (25 μM) significantly increased the viability of primary neurons compared to that of the NMDA group (P < 0.05). However, a combination of Sirtinol (10 μM) and RSV (25 μM) did not affect the NMDA-induced decrease in cell viability (P > 0.05), suggesting that Sirtinol, a specific inhibitor of SIRT1, blocked the protective effect of RSV on NMDA-induced excitotoxicity. Pretreatment with Sirtinol (10 μM) alone did not affect cell survival in primary neurons (data not shown).
Effects of RSV on NMDA-Induced Decrease in SIRT1 Deacetylase Activity. As shown in Figure 4, NMDA greatly reduced SIRT1 activity (P < 0.05), which was inhibited by MK-801. Pretreatment with RSV significantly ameliorated the SIRT1 activity reduced by NMDA (P < 0.05), while Sirtinol abolished the effect of RSV (P < 0.05). There was no difference in SIRT1 deacetylase activity between the DMSO group and the control group (P > 0.05). DN-SIRT1 overexpression restored the NMDA-induced decrease in the levels of SIRT1 mRNA and protein compared to those of the control (P < 0.05); however, WT-SIRT1/DN-SIRT1-overexpressing cells without NMDA treatment exhibited about a 2–3-fold increase in SIRT1 mRNA and protein when compared with those of the control group (P < 0.05). The levels of SIRT1 mRNA and protein showed a great difference between the WT-SIRT1 + NMDA group and the WT-SIRT1 group and also between the DN-SIRT1 + NMDA and DN-SIRT1 groups (P < 0.05), indicating that the transfection efficiency may be downregulated by NMDA. There was no difference between the NMDA group and the NMDA + vector group (P > 0.05).
Effects of SIRT1 Overexpression on the Deacetylase Activity in NMDA-Induced Excitotoxicity. As shown in Figure 7, NMDA inhibited SIRT1 deacetylase activity (P < 0.05). WT-SIRT1 overexpression after exposure to NMDA reversed the deacetylase activity decreased by NMDA (P < 0.05), whereas DN-SIRT1 overexpression with NMDA administration had no effect (P > 0.05). Compared with that of the control group, WT-SIRT1 overexpression itself significantly increased SIRT1 activity (P < 0.05), and DN-SIRT1 overexpression itself reduced the activity (P < 0.05). The SIRT1 activity showed a great difference between the WT-SIRT1 + NMDA group and the WT-SIRT1 group (P < 0.05). There was no difference between the DN-SIRT1 + NMDA and DN-SIRT1 groups, nor between the NMDA group and the NMDA + vector group (P > 0.05).
Effects of SIRT1 Overexpression on p53 Acetylation in NMDA-Induced Excitotoxicity. Figure 8 shows that NMDA induced acetylation of p53, and the level of acetylated p53 (Ace-p53) was significantly higher (by 36.60%) than that of the control group (P < 0.05). WT-SIRT1 overexpression partially inhibited NMDA-stimulated p53 acetylation (P < 0.05), and DN-SIRT1 overexpression had no effect on the Ace-p53 increase induced by NMDA (P > 0.05). The total levels of p53 were virtually unchanged under all of these experimental conditions.
Effects of SIRT1 Overexpression on NMDA-Induced LDH Release. As shown in Figure 10, WT-SIRT1 overexpression reduced NMDA-induced LDH release by 24.26% (P < 0.05), whereas DN-SIRT1 overexpression did not protect against NMDA-mediated LDH release (P > 0.05). The effects of WT-SIRT1 or DN-SIRT1 overexpression alone on LDH release were completely consistent with those of WT-SIRT1 or DN-SIRT1 overexpression alone on cell viability.
Effects of SIRT1 Overexpression on the Number of Living Cells Reduced by NMDA. Calcein-AM and PI staining results (Figure 11) showed that NMDA resulted in a significant decrease in the number of living cells, which was inhibited by WT-SIRT1 overexpression (P < 0.05), while DN-SIRT1 overexpression had no effect on the number of surviving cells when compared with the NMDA group (P > 0.05). The effects of WT-SIRT1 or DN-SIRT1 overexpression alone on cell survival showed results similar to those described above.
Discussion
The present study provided the following three important findings. First, activation of SIRT1 or overexpression of SIRT1 protected against NMDA-mediated excitotoxicity; second, the neuroprotective effects of SIRT1 on NMDA-induced excitotoxicity were attributed to its deacetylase activity; and third, inhibition of p53 acetylation might be one of the mechanisms underlying SIRT1-mediated neuroprotection.
In this study, we found that either preincubation of cortical neurons with RSV or overexpression of WT-SIRT1 in the SH-SY5Y cell line prevented NMDA-induced excitotoxicity, including a decrease in cell viability, an increase in LDH release, and an increase in cell death, suggesting that SIRT1 confers neuroprotection in NMDA-induced excitotoxicity. As has been reported, activation of SIRT1 using RSV protects against disorders of the nervous system, for example, brain ischemia-reperfusion injury [11], Alzheimer's disease, Parkinson's disease [12], and traumatic CNS injury [13]. We also found that Sirtinol, a pharmacological inhibitor of SIRT1, abolished the protection of RSV against NMDA-mediated nerve injury, indicating that the neuroprotective role of RSV is possibly achieved by activation of SIRT1. It has been shown that RSV ameliorates motor neuron degeneration and improves survival mainly through increasing the expression of SIRT1 in the SOD1 G93A mouse model of amyotrophic lateral sclerosis [14]. Inhibition of SIRT1 increased axonal injury, and activation of SIRT1 prevented neuronal insults in in vivo and in vitro models of Wallerian degeneration [15,16]. Further evidence demonstrates that SIRT1 overexpression can also play a protective role in a variety of in vivo and in vitro models of nerve injury. Overexpression of SIRT1 improves motor function, reduces brain atrophy, and attenuates mutant-HTT-mediated metabolic abnormalities in a mouse model of Huntington's disease [17]. Overexpression of SIRT1 protein in neurons protects against experimental autoimmune encephalomyelitis through activation of multiple SIRT1 targets [18]. In addition to the findings in support of the protective effects of SIRT1 on neurodegeneration, there are also contradictory studies reporting the opposite effect. In this respect, it was shown that SIRT1 inhibition reduces IGF-1/IRS-2/Ras/ERK1/2 signalling and protects neurons [19]. Further observation shows that RSV significantly ameliorated NMDA-reduced SIRT1 deacetylase activity in primary neurons, and this amelioration was prevented when SIRT1 activity was inhibited by Sirtinol. Therefore, it raises the possibility that the deacetylase activity is required for SIRT1's neuroprotection in the excitotoxicity model. In addition, we observed that overexpression of WT-SIRT1 reversed the NMDA-induced reduction of SIRT1 mRNA, SIRT1 protein level, and SIRT1 deacetylase activity and inhibited NMDA-induced insults in SH-SY5Y cells. However, overexpression of DN-SIRT1 increased the levels of SIRT1 mRNA and protein reduced by NMDA but had no effect on the NMDA-induced decrease in deacetylase activity and also did not inhibit subsequent excitotoxic cell death. These results clearly indicate that SIRT1 deacetylase activity is crucial to the neuroprotective effects of SIRT1 in NMDA-induced insults. Previous work by a number of other laboratories has also established that RSV potentiates SIRT1 activity and provides neuroprotection in recurrent stroke models [20], stress resistance, and prosurvival effects [21]. The deacetylase-deficient SIRT1 (H363Y) completely eliminated the protective effects of SIRT1 in HD models [17]. Modulation of sirtuin activity has been shown to impact the course of several aggregate-forming neurodegenerative disorders including Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, and spinal and bulbar muscular atrophy [22].
The above evidences and our results support that SIRT1 deacetylase activity is critical to its neuroprotection. But there are different opinions about SIRT1 on neuronal survival that SIRT1-mediated neuroprotection is independent of its deacetylase activity, and this mechanism might involve interactions between SIRT1 and other apoptosis-regulatory proteins [23].
Additionally, we found that overexpression of WT-SIRT1 significantly inhibited NMDA-induced p53 acetylation and subsequent neurotoxicity. However, DN-SIRT1 overexpression had no such effect. These findings suggest that SIRT1 might provide potent neuroprotection against NMDA insult through regulating p53 acetylation. As a deacetylase, SIRT1 is known to deacetylate and modulate the activity of key transcription factors, such as p53, NF-κB, PGC-1α, LKB1, TSC2, HSF1, and other substrates, which participate in the adjustment of the process of a variety of injuries. The available evidence indicates that SIRT1 reduces the activity of p53 by removing these acetyl groups, which inhibits apoptosis and promotes cell survival [24,25]. In this experiment, we observed that NMDA induced p53 acetylation, which may be one of the mechanisms of inducing neuronal death via apoptosis. Acetylation is thought to be a key event for p53 activation, and Ace-p53 induces apoptosis and is involved in neuronal death [26,27]. Together, these experiments demonstrate that deacetylation of p53 is at least in part required for SIRT1-mediated neuroprotection in the excitotoxicity model. SIRT1 is an endogenous neuroprotective factor and mediates protection through different pathways. The mechanisms of the neurotoxic effects of NMDA are very complex, including calcium overload, oxidative stress, mitochondrial dysfunction, cell necrosis, and apoptosis [28]. Nonetheless, our results suggest that NMDA may inhibit the activity of SIRT1 and weaken the protective effect of SIRT1. Subsequent experimental observation confirmed this speculation, because SIRT1 activation by RSV or overexpression of SIRT1 ameliorates NMDA-induced neurotoxicity and exerts neuroprotection.
In summary, a growing body of evidence has confirmed the neuroprotective effects of SIRT1. The findings of the present study suggest that SIRT1 might be a therapeutic target for certain neurological diseases related to NMDA-mediated excitotoxicity.
| 4,223.8 | 2017-09-01T00:00:00.000 | ["Biology", "Chemistry"] |
An Epidemiologic Analysis of Associations between County-Level Per Capita Income, Unemployment Rate, and COVID-19 Vaccination Rates in the United States
The purpose of this longitudinal study was to examine associations between per capita income, unemployment rates, and COVID-19 vaccination rates at the county-level across the United States (U.S.), as well as to identify the interaction effects between county-level per capita income, unemployment rates, and racial/ethnic composition on COVID-19 vaccination rates. All counties in the U.S. that reported COVID-19 vaccination rates from January 2021 to July 2021 were included in this longitudinal study (n = 2857). Pooled ordinary least squares (OLS) with fixed-effects were employed to longitudinally examine economic impacts on racial/ethnic disparities on county-level COVID-19 vaccination rates. County-level per capita income and county-level unemployment rates were both positively associated with county-level COVID-19 vaccination rates across the U.S. However, the associations were divergent in the context of race/ethnicity. Public health efforts to bolster COVID-19 vaccination rates are encouraged to consider economic factors that are associated with decreases in COVID-19 vaccination rates.
Introduction
In February 2020, the novel coronavirus (COVID- 19) was declared a public health emergency in the United States (U.S.), and in March 2020 it was declared a pandemic by the World Health Organization (WHO) [1]. Soon after, states in the U.S. began implementing various community mitigation strategies (e.g., mandatory stay at home orders and business closures) to curb the spread of COVID-19. In total, 42 U.S. states and territories issued mandatory stay-at-home orders, covering 73% of U.S. counties [2]. Community mitigation strategies were effective in their aim of reducing close contact and movement outside of households [2], and consequently reduced the number of COVID-19 cases [3]; however, these public health strategies were associated with an array of negative economic impacts, including higher unemployment rates, decreased participation in the labor force, and reductions in income. For example, the most recent estimates indicate that the unemployment rate peaked in April 2020 (14.8%) during the pandemic, and the current unemployment rate remains higher than the pre-pandemic unemployment rate (5.4% vs. 3.5%) [4]. Since the start of the pandemic, over 100 million unemployment claims have been filed, with one in four workers accessing unemployment aid at some point during the pandemic [5]. Furthermore, approximately one in five U.S. adults reported a drop in income during the pandemic, resulting in difficulty covering various expenses (e.g., rent or mortgage payments, medical care, and food costs) [6].
Certain demographic groups in the U.S. have been disproportionately affected by the economic impacts of COVID-19. Socio-economic status is significantly associated with health status, and socio-economic factors represent important risk factors for disparities in health status [7]. In the U.S., individuals who are Black, Indigenous, and People of Color (BIPOC) are more likely to experience unemployment or a reduction in income during the pandemic [6,8]. This trend is likely due to the racial/ethnic composition of workers in the sectors hardest hit during the COVID-19 pandemic. For example, the leisure (e.g., travel industry) and hospitality (e.g., restaurant workers) sectors, industries in which BIPOC individuals are more likely to work, saw the largest increases in unemployment [9]. Further, compared to White individuals, BIPOC individuals, who are already more likely to work in lower paying jobs [6], were more likely to report reductions in income and to have difficulty paying their bills [9]. As well, BIPOC individuals are reporting slower job recovery than White individuals [10]. The economic impacts of COVID-19 further exacerbated wealth and income gaps between White and BIPOC Americans [11] and compounded issues of access to a paramount public health prevention strategy: COVID-19 vaccination.
In order to curb the spread of COVID-19, multiple COVID-19 vaccinations were rapidly developed and eventually emerged as the primary public health approach to combat the COVID-19 pandemic [12]. Three COVID-19 vaccinations were granted emergency use authorization: the Pfizer-BioNTech and Moderna COVID-19 vaccines (December 2020) and the Johnson & Johnson (J&J) COVID-19 vaccine (February 2021) [13]. In August 2021, the Pfizer-BioNTech vaccine became the first to receive full FDA approval [14]. All three COVID-19 vaccines are effective, with twice vaccinated individuals being five times less likely to acquire COVID-19 infection and ten times less likely to experience hospitalization and death compared to unvaccinated individuals [15]. Despite their effectiveness as a primary prevention strategy, rates of vaccination lag behind desired targets set by the federal government [16]. The administration of vaccines began in December 2020, and by 24 January 2022, approximately 63.4% of Americans (~210.5 million) have been twice vaccinated (i.e., one shot of J&J vaccine, two doses of Pfizer or Moderna vaccine) [17].
Vaccination rates among BIPOC persons are lagging compared to their non-Hispanic White counterparts, with non-Hispanic Black and Hispanic Americans being less likely than non-Hispanic Whites to be twice vaccinated against COVID-19 [18]. Disparities in vaccination rates may be due to issues of access (e.g., lack of accessible clinic, inability to take time off of work), as well as vaccine hesitancy potentially rooted in mistrust in the medical field due to historical and contemporary experiences of healthcare discrimination [19,20]. This is of particular concern given that BIPOC persons have a high frequency of several COVID-19 risk factors (e.g., diabetes, heart disease, and obesity) [21]. As well, BIPOC individuals are more likely to work in "essential" jobs (e.g., factories, health care), and thus are less likely to be able to telework, ultimately increasing their exposure to COVID-19 [22]. As a result, compared to White Americans, BIPOC Americans have higher rates of COVID-19 infection and death [23], highlighting the importance of COVID-19 vaccination for this population and underscoring the need to address factors contributing to inequities in vaccine distribution.
As such, the Centers for Disease Control and Prevention (CDC) has identified COVID-19 vaccine equity for BIPOC individuals as a top priority, highlighting income and wealth gaps and employment as barriers to vaccination [24]. Burgeoning evidence suggests that at the individual and county level, household income and employment impact vaccination rates [25][26][27]. Furthermore, the extant literature suggests that social vulnerability, which takes into account the racial/ethnic composition of an area, is associated with lower vaccination rates [28].
However, social vulnerability is an aggregate score of all three factors, which fails to allow for an examination of how unemployment rates and income may impact racial/ethnic disparities in COVID-19 vaccination rates. As such, using longitudinal data from the U.S. Census Bureau and the CDC, this study conducted a longitudinal analysis across the U.S. at the county level (1) to examine the relationship between county-level per capita income and county-level COVID-19 vaccination rates, (2) to examine the relationship between county-level unemployment rates and county-level COVID-19 vaccination rates, and (3) to identify interaction effects between county-level per capita income, county-level unemployment rates, and county-level racial/ethnic composition on county-level COVID-19 vaccination rates.
Study Design
An analysis of publicly available, secondary data was conducted in the U.S. at the county-level. County-level socio-economic demographics and county-level vaccination rates were extracted from the U.S. Census Bureau [29] and the CDC's COVID-19 vaccine tracker [29], respectively. All U.S. counties that reported COVID-19 vaccination rates from January 2021 to July 2021 were included in the sample (n = 2857). This time span included seven time points, namely the first day of the month spanning January 2021 to July 2021. In total, the present study analyzed 19,999 county-time waves.
Dependent Variable
The dependent variable was the county-level adult vaccination rate, defined as the percentage of twice vaccinated adults (age 18 or older) per county on the first day of each month (January 2021 to July 2021), as reported by the CDC's COVID-19 vaccine tracker [30].
Independent Variables
County-level unemployment rates were measured by the number of unemployed adults in each county divided by the number of adults in the labor force in each county, as indicated by U.S. Census Bureau data. Using U.S. Census Bureau data [29], county-level per capita income was calculated by dividing the county's total income by its population.
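A hedged sketch of these two derivations on a toy county table (the column names are illustrative and do not correspond to actual Census Bureau field names):

```python
import pandas as pd

# Hypothetical county-level extract; values and column names are invented.
counties = pd.DataFrame({
    "county_fips": ["01001", "01003"],
    "unemployed_adults": [1200, 4300],
    "labor_force": [26000, 95000],
    "total_income": [1.1e9, 4.8e9],
    "population": [56000, 230000],
})

# Unemployment rate = unemployed adults / adults in the labor force (here in %)
counties["unemployment_rate"] = 100 * counties["unemployed_adults"] / counties["labor_force"]
# Per capita income = total county income / county population
counties["per_capita_income"] = counties["total_income"] / counties["population"]
print(counties[["county_fips", "unemployment_rate", "per_capita_income"]])
```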
Moderating Variable
County-level racial/ethnic composition was measured by the percentage of BIPOC adults in each county as reported by the U.S. Census Bureau [28]. This percentage was then dichotomized into the top and bottom 5% of the distribution by county-level racial/ethnic composition. In this study, BIPOC refers to all people of color including but not limited to Black, Hispanic, and Asian individuals.
Covariates
Covariates included access to the COVID-19 vaccine (i.e., number of days the COVID-19 vaccine was available in each county), the number of nurse practitioners in each county (a proxy for healthcare availability at the county level), gender (male or female), education (percentage of adults with a bachelor's degree or higher), and the percentage of individuals who were older adults (≥65 years old).
Data Analysis
Measures of central tendency and frequency distributions were used to characterize the study sample. Pooled ordinary least squares (OLS) with fixed effects were employed to longitudinally examine economic impacts on racial/ethnic disparities in county-level COVID-19 vaccination rates. Interaction effects between the percentage of BIPOC adults and economic factors (i.e., unemployment, per capita income) on county-level vaccination rates were analyzed using OLS models with fixed effects. Table 1 contains descriptive statistics across 19,999 county-time-waves (2857 counties from January 2021 to July 2021). Across time-waves, the average county-level COVID-19 vaccination rate was 14.82% (SD = 15.22), the mean racial/ethnic composition of counties with BIPOC was 15.45% (SD = 0.16), and the average number of days that the COVID-19 vaccine was available to the general population was 25.29 (SD = 33.35). The average unemployment rate was 6.71% across time-waves (SD = 2.24), while the average per capita income was $25,000.92 (SD = $5921.20). Table 2 presents the results of the pooled OLS with fixed effects. Aim 1 was to assess the relationship between county-level per capita income and county-level COVID-19 vaccination rates. Per capita income was positively associated with COVID-19 vaccination rates. For every $10,000 increase in per capita income, county-level COVID-19 vaccination rates increased by 0.01%. Aim 2 was to assess the relationship between county-level unemployment and county-level COVID-19 vaccination rates. The unemployment rate was positively associated with COVID-19 vaccination rates. For every 1% increase in unemployment rate, county-level COVID-19 vaccination rates increased by 0.41%.
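One way such a pooled OLS with fixed effects and interaction terms could be specified (a hedged sketch, not the authors' code: the toy panel, variable names, county-dummy approach to fixed effects, and county-clustered standard errors are all assumptions made for illustration) is:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format panel standing in for the county-month data set; all values are invented.
rng = np.random.default_rng(0)
n_counties, n_months = 50, 7
panel = pd.DataFrame({
    "county": np.repeat(np.arange(n_counties), n_months),
    "month": np.tile(np.arange(1, n_months + 1), n_counties),
})
panel["per_capita_income"] = rng.normal(25, 6, len(panel))      # in $1,000s
panel["unemployment_rate"] = rng.normal(6.7, 2.2, len(panel))   # percent
panel["high_bipoc"] = (panel["county"] % 2).astype(int)         # 0/1 moderator flag
panel["vacc_rate"] = (
    2 + 0.1 * panel["per_capita_income"] + 0.4 * panel["unemployment_rate"]
    + 2.0 * panel["month"] + rng.normal(0, 3, len(panel))
)

# Pooled OLS: county fixed effects via C(county) dummies, month dummies for common
# time shocks, and income x BIPOC / unemployment x BIPOC interactions for moderation.
model = smf.ols(
    "vacc_rate ~ per_capita_income + unemployment_rate"
    " + per_capita_income:high_bipoc + unemployment_rate:high_bipoc"
    " + C(month) + C(county)",
    data=panel,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["county"]})
print(result.params[["per_capita_income", "unemployment_rate",
                     "per_capita_income:high_bipoc", "unemployment_rate:high_bipoc"]])
```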
Interaction Effects
Aim 3 was to analyze interaction effects among county-level per capita income and unemployment rates with racial/ethnic composition (% of BIPOC adults) on county-level COVID-19 vaccination rates. Significant interaction effects were found between per capita income and the percentage of racial/ethnic minorities. A graph of the interaction effect is presented in Figure 1. In counties with greater racial/ethnic minority populations, increases in per capita income were associated with lower vaccination rates; however, in counties with lower racial/ethnic minority populations, increases in per capita income were associated with higher vaccination rates. Significant interaction effects were also found between the unemployment rate and the percentage of racial/ethnic minorities. A graph of the interaction effect is presented in Figure 2. In counties with greater racial/ethnic minority populations, increases in unemployment rates were related to higher COVID-19 vaccination rates; however, in counties with lower racial/ethnic minority populations, increases in unemployment rates were related to lower COVID-19 vaccination rates.
Discussion
This study longitudinally examined county-level relationships between county-level economic factors (i.e., per capita income and unemployment rate) and racial/ethnic composition and county-level COVID-19 vaccination rates in the U.S. Several notable findings emerged from the longitudinal analysis. First, county-level per capita income was positively associated with county-level COVID-19 vaccination rates across U.S counties, and similar findings have been found elsewhere at the county level [28]. Interestingly, we found that this trend (i.e., increases in per capita income being associated with increases in COVID-19 vaccination rates) was divergent in the context of interactive effects with race/ethnicity. We found that increases in per capita income were associated with decreases in COVID-19 vaccination rates in counties with higher proportions of BIPOC adults. It is plausible that race-based political ideology, unequal health care resource distribution, lack of culture-sensitive public health policies, medical distrust, and contemporary healthcare discrimination may contribute to this negative association between per capita income and COVID-19 vaccination rates in counties with higher proportions of BIPOC adults [19,20,[31][32][33][34]. More studies are needed to explore reasons why COVID-19 vaccination rates in counties with higher proportions of BIPOC adults decrease with increasing per capita income. Despite state and national efforts to address racial inequalities in COVID-19 vaccination, without developing policy interventions that consider economic factors, lagging vaccination rates among BIPOCs will worsen.
Second, county-level unemployment rates were positively associated with county-level COVID-19 vaccination rates. This finding is consistent with a prior study during the first 100 days of COVID-19 vaccination in the U.S., which found that higher state-level unemployment rates were associated with higher state-level vaccination rates [27]. However, we found that county-level proportions of BIPOC adults moderated the effects of county-level unemployment rates on county-level COVID-19 vaccination rates. Increases in unemployment rates were associated with increases in COVID-19 vaccination rates in counties with a higher proportion of BIPOC, but increases in unemployment rates were associated with decreases in COVID-19 vaccination rates in counties with lower proportions of BIPOC (i.e., predominantly non-Hispanic White). In general, unemployment is known to negatively impact vaccination rates for other viral infections (e.g., influenza), but findings from this study suggest that unemployment does not impact COVID-19 vaccination rates in a similar fashion between BIPOC and non-BIPOC individuals at the population level [35,36]. Interestingly, since BIPOC individuals have faced higher risks of unemployment during the COVID-19 pandemic [9], they may be more motivated to vaccinate against COVID-19 in order to return to the workforce [37]. Equally important, unlike other types of vaccinations, the need for COVID-19 vaccination is not driven by the presence of a pressing pandemic, and the COVID-19 vaccination is widely available at no cost for those who are unemployed or without health insurance [38]. It is plausible that no-cost access to the COVID-19 vaccine for those unemployed and likely without health insurance may in some way provide a means for those with increased risk of unemployment, in particular BIPOC individuals, to secure employment, especially given that many employers are starting to require COVID-19 vaccination. However, future studies are needed to explore and determine what situational or underlying mechanisms of the COVID-19 pandemic lead to increases in COVID-19 vaccination at the population level among BIPOC individuals who are unemployed.
This study had notable limitations and strengths. Causality cannot be inferred given the study design and statistical approach. Vaccine incentive programs may bolster vaccination, and the current analysis did not include vaccine incentive programs in the analysis. Also, the study did not explore other social factors, such as index of deprivation and geographical (including but not limited to urban and rural) differences. Future studies may consider comparing COVID-19 vaccination differences based on geographics and socio-economic classifications. However, all counties in the U.S. were included in the study, which considerably increased the study's generalizability. Unlike many studies, the study aimed to examine ways in which economic factors may contribute to disparities and impact outcomes in the context of race/ethnicity versus an examination of disparities and outcomes only based on race/ethnicity.
Conclusions
Our findings indicate that county-level per capita income is negatively associated with county-level COVID-19 vaccination rates in counties with higher proportions of BIPOC individuals, while the county-level unemployment rate is negatively associated with county-level vaccination rates in counties with higher proportions of non-Hispanic White individuals. Taken together, it is critical to develop policy interventions to increase vaccination rates in racial/ethnic minority communities in order to stimulate economic recovery. Public health efforts to bolster COVID-19 vaccination rates are encouraged to consider and respond to economic factors that are associated with decreases in COVID-19 vaccination rates. Future research exploring factors underlying these disparate findings at the county level across the U.S. in the context of race/ethnicity is needed.
Informed Consent Statement: This study did not involve humans.
| 3,665.2 | 2022-02-01T00:00:00.000 | ["Economics"] |
Validation of the SpREUK — Religious Practices Questionnaire as a Measure of Christian Religious Practices in a General Population and in Religious Persons
Measures of spirituality should be multidimensional and inclusive and as such be applicable to persons with different worldviews and spiritual-religious beliefs and attitudes. Nevertheless, for distinct research purposes it may be relevant to more accurately differentiate specific religious practices, rituals and behaviors. It was thus the aim of this study to validate a variant version of the SpREUK-P questionnaire (which measures frequency of engagement in a large spectrum of organized and private religious, spiritual, existential and philosophical practices). This variant version was enriched with items addressing specific rituals and practices of Catholic religiosity, by further differentiating items of praying and meditation. The instrument was then tested in a sample of Catholics (inclusively nuns and monks), Protestants, and in non-religious persons. This 23-item SpREUK-RP (Religious Practices) questionnaire has four factors (i.e., Prosocial-Humanistic practices; General religious practices; Catholic religious practices; Existentialistic practices/Gratitude and Awe) and good internal consistency (Cronbach’s alpha ranging from 0.84 to 0.94). An advantage of this instrument is that it is not generally contaminated with items related to persons’ well-being, and it is not intermixed with specific religious attitudes and convictions.
Introduction
Our societies are becoming more and more diverse (i.e., culturally, ethnically, philosophically, politically), and thus a person's spiritual attitude may become more diverse, ranging from disinterest or strict a-religiosity to explicit dedicated religiosity or individualized patchwork spirituality (whatever the specific faith tradition is). Spirituality is a changing concept which is related to religiosity, but may also overlap with secular concepts such as humanism, existentialism, and probably also with specific esoteric views (Zwingmann et al. 2011). Therefore, measures of spirituality should be multidimensional not only in terms of the variety of topics, but also in terms of the related behaviors (Büssing 2012), but not so exclusive that they are valid only for specific religious groups. To finally compare data from different societies and spiritual-religious orientation groups, inclusive instruments are preferred that account for this diversity.
Apart from this diversity, one also has to consider different 'layers' of spirituality that could be exemplified by Faith/Experience as the influencing core dimension, by Attitudes formed and shaped from this core dimension, and by subsequent Behaviors related to these attitudes and convictions (Table 1). It might be appropriate to use different valid measures related to these layers simultaneously instead of using instruments that condense all of these topics into one rather unsatisfying and less differentiated scale. Conceptually one has to clearly differentiate the 'core' dimensions (the faith/experience component) and the related 'outcomes' (i.e., attitudes, behaviors and rituals) (Table 1). Therefore, one may use different validated instruments to address the topics of these layers. A clear focus on common dimensions of spirituality which may be shared by specific religious groups and secular persons might be useful, but also on those dimensions which differ between religious and non-religious groups. One of those instruments, which measures the frequency of spiritual-religious practices (overview in Zwingmann et al. 2011), is the SpREUK-P questionnaire (SpREUK is the German language acronym for "Spiritual and Religiosity as a Resource to cope with Illness"; P = practices). It was originally designed as a generic instrument to measure the engagement frequencies of a large spectrum of organized and private religious, spiritual, existential and philosophical practices (Büssing et al. 2005). In its shortened 17-item version (SpREUK-P SF17) it differentiates five factors (Büssing et al. 2012), e.g., Religious practices, Prosocial-humanistic practices, Existentialistic practices, Gratitude/Awe, and Spiritual (mind body) practices. Because of this diversity of spiritual-religious practices and engagements, the instrument is suited for both secular and also religious persons. The sub-scale "Religious practices" has a clear focus on mono-theistic religions, while the sub-scale "Spiritual (mind body) practices" refers more to Eastern religious practices. This latter (non-Christian) sub-scale does not make any demands to represent Eastern forms of spirituality/religiosity thoroughly, but to be a contrast to Christian religious practices.
Nevertheless, for specific research purposes it may be relevant to more accurately differentiate Christian practices, rituals and behaviors. In Catholic pastoral workers from Germany, for example, private praying and also praying the Liturgy of Hours were to some extent related to life satisfaction and lower depression, while participating in or celebrating the Holy Eucharist or partaking in Sacramental Confession were rather not related (Büssing et al. 2016). Further, in Italian Catholics working as volunteers for handicapped persons, praying the Rosary was moderately related to their perception of the Sacred in their lives, but not private prayers or attending the Sunday service (Büssing and Baiocco, unpublished data). Thus, further differentiating items may be of relevance to elucidate the underlying motives, intentions and perceptions.
Aim of the Study
The aim of this study was to validate a variant version of the SpREUK-P questionnaire that was enriched with items addressing specific Catholic rituals and practices, and with more differentiated praying and meditation items. This variant version was tested in a sample of Catholics (inclusively nuns and monks), Protestants, and non-religious persons as a reference group.
Enrolled Persons
To test the new instrument, a heterogeneous sample of participants was recruited, among them religious persons from Franciscan but also from other religious congregations. Participation calls were sent to the German Congregation Superiors ("Ordens-Oberen-Konferenz"), to local Caritas societies, to university students (i.e., Alpen-Adria Universität Salzburg and Witten/Herdecke University), to a course on Christian Spirituality (University Zürich), to various social and management associations as well as to the private networks of the study team ('snowball sampling'). The sample should be regarded as a convenience sample.
All participants were informed about the purpose of the study on the first page of the questionnaire (which did not ask for names, initials or location), and confidentiality and anonymity were guaranteed. By filling in the German-language questionnaire and sending it back to the study team, participants agreed that their data would be anonymously evaluated. As most of the local Religious communities were small, we provided the opportunity to fill in the questionnaire either online (used by 25% of religious participants) or as a printout (used by 75% of the religious participants).
Engagement in Religious Practices (SpREUK-P)
The generic SpREUK-P (P = practices module) questionnaire was designed to measure the engagement frequencies of a large spectrum of organized and private religious, spiritual, existential and philosophical practices, particularly in secular societies (Büssing et al. 2005). These practices and forms of engagement refer to the level of behaviors as described in Table 1. The shortened 17-item SpREUK-P differentiates 5 sub-constructs (Büssing et al. 2012), i.e., Religious practices, Prosocial-humanistic practices, Existentialistic practices, Gratitude/Awe, and Spiritual (mind-body) practices.
To make more accurate statements about the religious practices of Catholics and to derive a 'religious practices' module of the SpREUK-P (SpREUK-RP), we added 6 new items and more clearly differentiated the praying and meditation items (p1 and p4). Catholic items were PC1 (partaking Sacramental Confession), PC2 (receive the Holy Communion), PC3 (worship of the Sacrament), PC4 (ask the 'Mother of God' for help and support), PC5 (praying the Rosary) and PC6 (strong relation to special saints). Praying was differentiated as p1a (private praying, for myself, for others), p1b (praying the Liturgy of Hours) and p1c (intercessory prayer), while meditation was differentiated as p4a (meditation, Christian style) and p4b (meditation, Eastern styles). We also added items from the primary version of the SpREUK 1.1 (Büssing et al. 2005) which were not used in its 17-item short version (i.e., p26 feeling connected with others, p27 volunteer work for others, p6 reading religious/spiritual books, p9 turn to nature, p17 being aware of how I treat the world around, and p21 belief in (my) guardian angel).
The items are scored on a 4-point scale (0-never; 1-seldom; 2-often; 3-regularly). The scores were referred to a 100% level (transformed scale scores), which reflects the degree of engagement in the distinct forms of a spiritual/religious practice ("engagement scores"). Scores > 50% would indicate higher engagement, while scores < 50% indicate rare engagement.
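As a rough illustration of the scoring just described, the short Python sketch below converts raw 0-3 item responses into the 0-100% engagement score; the exact transformation (here assumed to be the mean item response referred to the maximum of 3) and the item values are illustrative assumptions, not taken from the original scale manual.

```python
# Minimal sketch of the SpREUK-P engagement scoring described above.
# Assumption: transformed scale score = mean of the raw 0-3 item responses,
# referred to the maximum raw score of 3 and expressed as a percentage.

def engagement_score(item_responses):
    """item_responses: raw item scores (0-never, 1-seldom, 2-often, 3-regularly)."""
    if not item_responses:
        raise ValueError("no item responses given")
    mean_raw = sum(item_responses) / len(item_responses)
    return 100.0 * mean_raw / 3.0

# Hypothetical four-item sub-scale answered 'often', 'regularly', 'seldom', 'often':
score = engagement_score([2, 3, 1, 2])
print(f"engagement score: {score:.1f}%")
print("higher engagement" if score > 50 else "rare engagement")
```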
Transcendence Perception (DSES-6)
To refer to the experiential dimension as described in Table 1, we used the Daily Spiritual Experience Scale (DSES). This instrument was developed as a measure of a person's perception of the transcendent in daily life, and thus the items measure experience rather than particular beliefs or behaviors (Underwood 2002; 2011). Here we used the 6-item version (DSES-6; Cronbach's alpha = 0.91), which uses specific items such as feeling God's presence, God's love, a desire to be closer to God (union), finding strength/comfort in God, and being touched by the beauty of creation (Underwood 2002). The response categories from 1 to 6 are: many times a day, every day, most days, some days, once in a while, and never/almost never. Item scores were finally summed up.
Franciscan-Inspired Spirituality Questionnaire (FraSpir)
To measure whether or not a person's spirituality/religiosity is based on an attitude of searching for the Spirit of the Lord as a fundamental source, and living from the Gospel as a matter of religious dedication, we used a 13-item subscale from the Franciscan-inspired Spirituality Questionnaire (FraSpir) (Büssing et al. 2017). This "Live from the Faith/Search for God" scale (Cronbach's alpha = 0.97) refers to the attitudes layer as described in Table 1. The scale uses items such as "My faith is my orientation in life", "My faith/spirituality gives meaning to my life", "I try to live in accordance with my religious beliefs", "I feel a longing for nearness to God", "I keep times of silence before God", etc. For Christians, living from the Gospel and searching for the Sacred is the core principle which would have an influence on their attitudes and behaviors (Table 1).
The 13 items were scored on a 5-point scale from disagreement to agreement (0-does not apply at all; 1-does not truly apply; 2-half and half (neither yes nor no); 3-applies quite a bit; 4-applies very much).
Life Satisfaction (SWLS)
To measure life satisfaction, as a construct that is conceptually not directly related to spiritual practices and engagement, we relied on the German version of Diener's Satisfaction with Life Scale (SWLS) (Diener et al. 1985). This 5-item scale (alpha = 0.92) uses general phrasings such as "In most ways my life is close to my ideal", "The conditions of my life are excellent", "I am satisfied with my life", "So far I have gotten the important things I want in my life", and "If I could live my life over, I would change almost nothing". Although this instrument does not differentiate the fields of satisfaction, it is nevertheless a good measure of a person's global satisfaction in life as it also addresses the self-assessed balance between the ideal and the given life situation. A benefit of the SWLS is the fact that it is not contaminated with positive affect variables, vitality, health function, etc. It can thus be used to analyze which other dimensions of spiritual engagement and experience would contribute to pastoral workers' overall life satisfaction. The extent of respondents' agreement or disagreement is indicated on a 7-point Likert scale ranging from strongly agree to strongly disagree.
Well-Being Index (WHO-5)
To assess participants' well-being, which is conceptually also not directly related to spiritual practices and engagement, we used the WHO-Five Well-being Index (WHO-5). This short scale avoids symptom-related or negative phrasings and measures well-being instead of the absence of distress (Bech et al. 2013). Representative items are "I have felt cheerful and in good spirits" or "My daily life has been filled with things that interest me". Respondents assess how often they had the respective feelings within the last two weeks, ranging from 0 (at no time) to 5 (all of the time).
Statistical Analyses
Descriptive statistics, internal consistency (Cronbach's coefficient α) and factor analyses (principal component analysis using Varimax rotation with Kaiser's normalization) as well as analyses of variance (ANOVA), first order correlations and stepwise regression analyses were computed with SPSS 23.0.
To confirm the structure found by exploratory factor analysis, we performed a structural equation model (SEM) using the lavaan package of the software R. This methodology involves many techniques such as multiple regression models, analysis of variance, confirmatory factor analysis, correlation analysis, etc. With SEM one can determine the meaningful relationships between variables since the parameter estimates deliver the best scenario for the covariance matrix: the better the model goodness of fit, the better the matrix is reproduced. The goodness-of-fit statistics used to evaluate the model are the root mean square error of approximation (RMSEA), which should be ≤0.05; the standardized root mean square residual (SRMR), which should be ≤0.06; the comparative fit index (CFI), which should be ≥0.95; and the Tucker-Lewis index (TLI), which should be ≥0.95.
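To make the reported cut-offs explicit, a small helper such as the Python sketch below can be used to check a set of fit indices against the thresholds stated above; it is purely illustrative and not part of the original SPSS/R analysis.

```python
# Check SEM fit indices against the cut-offs stated above:
# RMSEA <= 0.05, SRMR <= 0.06, CFI >= 0.95, TLI >= 0.95.

def acceptable_fit(rmsea, srmr, cfi, tli):
    checks = {
        "RMSEA <= 0.05": rmsea <= 0.05,
        "SRMR <= 0.06": srmr <= 0.06,
        "CFI >= 0.95": cfi >= 0.95,
        "TLI >= 0.95": tli >= 0.95,
    }
    return all(checks.values()), checks

# Four-factor EFA structure reported in the Results section (does not fit):
print(acceptable_fit(rmsea=0.105, srmr=0.082, cfi=0.860, tli=0.842)[0])  # False
# Re-specified model with additional paths (acceptable fit):
print(acceptable_fit(rmsea=0.05, srmr=0.06, cfi=0.96, tli=0.96)[0])      # True
```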
Given the exploratory character of this study, the significance level of ANOVA and correlation analyses was set at p < 0.01. With respect to classifying the strength of the observed correlations, we regarded r > 0.5 as a strong correlation, an r between 0.3 and 0.5 as a moderate correlation, an r between 0.2 and 0.3 as a weak correlation, and r < 0.2 as negligible or no correlation.
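The classification rule for correlation strength can likewise be written out explicitly; the sketch below simply mirrors the thresholds given above and treats the boundary values in a plausible but assumed way.

```python
# Classify correlation strength using the thresholds stated above.
def correlation_strength(r):
    r = abs(r)
    if r > 0.5:
        return "strong"
    if r >= 0.3:
        return "moderate"
    if r >= 0.2:
        return "weak"
    return "negligible or none"

for r in (0.90, 0.43, 0.25, 0.12):
    print(r, "->", correlation_strength(r))
```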
Participants
Among the 420 enrolled persons, men were predominant (62.5%); most had a high school education (70.0%) and were Catholics (65.1%). Participants from a religious congregation constituted 20.6% of the sample, 22.1% were university students, and the other participants were from the fields of pedagogy, medicine, psychology, theology, and other professions (Table 2). Among the religious, 72% were from Franciscan congregations, and 28% were from other religious congregations. All further sociodemographic data are depicted in Table 2.
Participants' life satisfaction was in the upper range, well-being scores were in the upper mid-range, and transcendence perception was in the mid-range (Table 2). Factor analysis revealed a Kaiser-Meyer-Olkin value of 0.93, which is a measure of the degree of common variance, indicating the data's suitability for principal component factor analysis. Due to low item-to-scale correlations, six items were eliminated from the item pool prior to exploratory factor analysis (mainly from the previous scale "Spiritual (Mind-Body) practices"). During the process of factor analyses, one item was eliminated because of too low a factor loading (p27 volunteer work for others), and three items because of strong side loadings (p4a meditation (Christian style), p6 reading religious/spiritual books, PC3 worship of the Sacrament). Exploratory factor analysis of the resulting 23 items pointed to four main factors which accounted for 72% of the variance (Table 3):
• The 8-item factor Prosocial-Humanistic practices (40% explained variance; Cronbach's alpha = 0.91) comprises five items from the primary "Prosocial-humanistic practices" scale, and items from other scales which all share the topic of conscious dealing with the world around and with others. The item p31, addressing the perception and the value of beauty in the world, loads on this factor, too.
• The 6-item factor General religious practices (22% explained variance; Cronbach's alpha = 0.94) uses four items from the primary "Religious practices" scale and two new items.
The Difficulty Index (mean value 1.59/3) of these items is 0.53; all but one item (PC5) was in the acceptable range from 0.2 to 0.8 (Table 3).
Structural Equation Model
After exploratory factor analysis (EFA) to identify the correlative structure between the variables and to obtain specific factors, we intended to validate the suggested structure by structural equation modelling (SEM). This method is a comprehensive methodology which involves techniques such as multiple regression models, analyses of variance, confirmatory factor analysis, correlation analysis, etc. Investigation of the model structure suggested by the four-factor EFA solution showed that the model could not be validated through structural equation modelling (SEM: CFI = 0.860, TLI = 0.842, RMSEA = 0.105, SRMR = 0.082).
With SEM we could determine the meaningful relationships between variables since the parameter estimates deliver the best scenario for the covariance matrix. This means that the better the model goodness of fit, the better the matrix is reproduced. The following factorial structures could be identified (Figures 1-4):
The new paths found through SEM provide a better representation of the relationships between the variables (CFI = 0.96, TLI = 0.96, RMSEA = 0.05, SRMR = 0.06). Two items (p24 - thoughts are with those in need; p30 - feeling of wondering awe) are shared by other factors, and both load with variable strength on all four factors. Such cross-loadings are common in more complex statistical models where fewer restrictions are made in order to allow the variables and their correlations to move freely between the latent constructs (Asparouhov and Muthén 2009). This new model with the new paths between factors and variables, as well as the correlations, now has (very) good reliability: Prosocial-humanistic α = 0.91, Catholic practices α = 0.84, General religious practices α = 0.93 and Gratitude/Awe α = 0.85.
These four factors are moderately to strongly interconnected, particularly Prosocial-humanistic practices and Gratitude/Awe (r = 0.90) and Catholic practices and General religious practices (r = 0.73) (Figure 5); there is also a strong interconnection between the variables p2 (celebrating the Eucharist) and pc2 (receive the Holy Communion) (r = 0.75) (Figure 6). Regression analyses indicate that General religious practices account for 43% of the variance found in Catholic practices (as the dependent variable).
Correlations with Life Satisfaction, Well-Being and Transcendence Perception
General religious practices (GRP) were strongly interrelated with Catholic religious practices (CRP), and Existentialistic practices/Gratitude and Awe (ExGA) with Prosocial-humanistic practices (PHP) (Table 4). However, CRP was only marginally related to PHP and weakly to ExGA.
The new scales correlated very strongly with the respective scales of the primary instrument (SpREUK-P SF17) (Table 4). The primary scale "Existentialistic practices" (SpREUK-P SF17) correlated strongly with PHP and ExGA, but only weakly with GRP, and not with CRP. Spiritual Mind-Body practices (SpREUK-P SF17) correlated only weakly with ExGA, marginally with PHP and CRP, but not with GRP.
With respect to convergent validity, the new scales correlated moderately to strongly with Transcendence perception (DSES-6) and with "Live from the Faith/Search for God" (FraSpir) (Table 4). The subscales PHP and ExGA were moderately related to both measures of spiritual-religious perceptions and attitudes. With respect to discriminant validity, neither CRP nor GRP correlated significantly with life satisfaction or well-being. However, PHP was moderately related to life satisfaction and weakly to well-being, and ExGA marginally to life satisfaction and well-being.
Younger persons scored significantly lower for GRP, CRP and ExGA, which were highest in older persons. For PHP, there were no significant age-related differences. A lower educational level was associated with higher CRP and GRP scores, while there were no significant differences for ExGA or PHP. There were no relevant gender-related differences. Catholics had the highest CRP and GRP scores compared to all other enrolled persons. Nuns and monks scored significantly higher on CRP and GRP compared to other respondents, but significantly lower on PHP; with respect to ExGA there were no significant differences. While it is in line with expectations that persons without any religious denomination score low on GRP and CRP, they also had low scores on PHP and ExGA (Table 5).
Discussion
Our intention was to develop a variant version of the already established SpREUK-P questionnaire. This new version focused more clearly on Christian religious practices, and included items specific to Catholic rituals and practices. Adding the respective items resulted in an elimination of the primary items referring to the "Spiritual (Mind-Body) practices" scale of the original instrument. Two of the new items (p1c intercessory prayer, PC2 receive the Holy Communion) load on the primary scale "Religious practices", which is now General religious practices, while the other new ('Catholic') items build a discrete new factor labeled Catholic religious practices.
The primary scale "Prosocial-humanistic practices" was enriched by two items of the primary SpREUK-P (p17 be aware of how I treat the world around; p26 feel connected with others), by one item from the primary "Existentialistic practices" scale (p16 convey positive values and convictions to others), and by one from the SpREUK-P SF17 "Gratitude/Awe" scale (p31 have learned to experience and value beauty). The two items of the SpREUK-P SF17 "Gratitude/Awe" scale (p30 wondering awe; p29 great gratitude) and two items from the primary scale "Existentialistic practices" (p11 try to get insight; p10 reflect upon the meaning of life) together form the new scale Existentialistic practices/Gratitude and Awe. Both of these short-version scales have lost one item to the Prosocial-humanistic practices scale, and thus it is not a surprise that these scales are strongly interrelated.
While Prosocial-humanistic practices score highest in the sample (which means that socially desired activities are of high relevance for all participants), General religious practices were moderately related to these engagements and behaviors, while Catholic religious practices were only marginally related. It might be that the practices and rituals associated with Catholic religiosity focus more on transcendent sources (i.e., specific saints, mother Mary, praying the Rosary and the Liturgy of Hours) rather than on sources related to concrete persons. This is interesting because from a theological point of view Christ can be experienced in others in need (Duncan 1998). In line with this observation, nuns and monks in particular scored lower on Prosocial-humanistic practices, while Catholics as a more general group did not. This observation has to be interpreted with caution, because nuns and monks score in the upper range for these religious rituals and practices (GRP: 68.1 ± 33.8; CRP: 62.0 ± 24.0); moreover, their other engagement scores are in the upper range (ExGA: 62.0 ± 29.7; PHP: 64.1 ± 31.0). Nevertheless, persons not participating in religious congregations score much higher on Prosocial-humanistic practices (PHP: 72.5 ± 17.5) and highly on Existentialistic practices/Gratitude and Awe (ExGA: 66.6 ± 24.2). Whether they have more chances to meet and care for others, or whether their religion is more focused on their encounter with God in their prayer life, remains a matter for further analyses. In fact, non-congregational persons score in the lower range of General religious practices and very low on Catholic religious practices, and a-religious persons scored lowest on all sub-scales. These effects cannot be explained by gender-related effects, because gender showed no relation to the engagement frequency of these practices. Apart from these observations, we found significant differences in engagement in religious rituals and practices related to the educational level, an effect that has been observed in other studies (Büssing et al. 2005).
With respect to convergent validity, the new scales correlated moderately to strongly with spiritual-religious attitudes and perceptions (i.e., Transcendence perception, and "Live from the Faith/Search for God"). These measures refer to the Faith/Experience level of the representation of different aspects of the spirituality model (Table 1), which will influence the levels of attitudes on the one hand and behaviors (rituals and practices) on the other hand.
With respect to discriminant validity, neither "Catholic religious practices" nor "General religious practices" were significantly related to a person's life satisfaction or well-being. These findings indicate that the religious scales of the SpREUK-RP are not per se contaminated with perceptions of general well-being. However, PHP was moderately related to life satisfaction and weakly to well-being. Detailed analyses revealed that life satisfaction correlated strongest with the experience of beauty (p31: r = 0.29) and with trying to actively help others (p22: r = 0.24). These perceptions and behaviors may result in feelings of ease and thus satisfaction in life.
Limitations
A limitation of this study is the imbalance of Christian denominations, with a dominance of Catholics. Further, women and persons with a lower educational level are underrepresented. For the validation process this is not of major relevance, but for future studies more balanced samples are needed. Sensitivity-to-change analyses are less relevant for spiritual-religious engagement practices; nevertheless, future studies should address the development of these engagements during different phases of life.
Conclusions
We can confirm the 23-item variant version (SpREUK-RP), which more specifically addresses Christian religious practices as compared to the SpREUK-P, as a valid and reliable multidimensional instrument to be used in future studies. A benefit of the instrument is that it is not generally contaminated with items related to persons' well-being, and is not intermixed with specific religious attitudes and convictions. Compared to the primary SpREUK-P, which was designed to address not only religious but also secular forms of spiritual practices, the SpREUK-RP is intended to be used in education programs that refer to value-based attitudes and behaviors derived from specific Christian contexts.
Figure 2. Factor Catholic practices from SEM.
Figure 3. Factor General religious practices from SEM.
Table 1. Schematic levels of representation of the different 'layers'/aspects of spirituality (modified according to Büssing 2017).
Table 3. Reliability and factorial structure.
Table 5. Mean values in the sample. | 7,033.6 | 2017-12-09T00:00:00.000 | [
"Psychology",
"Philosophy"
] |
Electrochemical hydrodechlorination of perchloroethylene in groundwater on a Ni-doped graphene composite cathode driven by a microbial fuel cell
Enhancing the activity of the cathode and reducing the voltage for the electrochemical hydrodechlorination of chlorohydrocarbons have always been challenges in the area of electrochemical remediation. In this study, a novel cathode material of Ni-doped graphene, generated by Ni nanoparticles dispersed evenly on graphene, was prepared to electrochemically dechlorinate PCE in groundwater. The reduction potential of Ni-doped graphene for PCE electrochemical hydrodechlorination was −0.24 V (vs. Ag/AgCl), as determined by cyclic voltammetry. A single MFC with a voltage of 0.389–0.460 V and a current of 0.221–0.257 mA could drive the electrochemical hydrodechlorination of PCE effectively with Ni-doped graphene as the cathode catalyst, and the removal rate of PCE was significantly higher than that with single Ni or graphene as the cathode catalyst. Moreover, neutral conditions were more suitable for Ni-doped graphene to electrochemically hydrodechlorinate PCE in groundwater, and no byproduct accumulated.
Introduction
Perchloroethylene (PCE) is used widely as an organic solvent and degreasing agent. 1,2 PCE is also a typical refractory contaminant in groundwater due to improper handling and disposal practices. 3,4 Hydrodechlorination is an efficient way to eliminate PCE contamination because PCE can be dechlorinated into less chlorinated ethylenes such as cDCE (cis-dichloroethylene), VC (vinyl chloride) or ETH (ethylene). [5][6][7] Hydrodechlorination mainly involves microbial hydrodechlorination, 8-10 chemical hydrodechlorination 11,12 and electrochemical hydrodechlorination. 13,14 Microbial hydrodechlorination usually occurs with dechlorinating bacteria under strictly anaerobic conditions. 8 Although microbial hydrodechlorination exhibits excellent decontamination of PCE, three unsolved problems hinder its application for PCE-contaminated groundwater remediation. First, the difficulty of controlling anaerobic dechlorinating bacteria limits the reliability of microbial hydrodechlorination because the ability to completely dechlorinate PCE seems to be restricted to microorganisms belonging to the genus Dehalococcoides. [15][16][17] Second, fierce competition for a carbon source and hydrogen between dechlorinating bacteria and other bacteria (such as sulfate-reducing bacteria and methanogenic bacteria) will decrease the dechlorination effects. 16 Third, a chemical electron donor (e.g. acetate) is necessary for dechlorinating bacteria, and the accumulation of microbial fermentation products and microorganisms can cause groundwater clogging. 15,18 Chemical hydrodechlorination generally uses a chemical reductant (e.g. Zero Valent Iron, ZVI) as the electron donor for PCE dechlorination. 19 To enhance the hydrodechlorination efficiency, bimetals (e.g. Fe/Ni) are widely used for hydrodechlorination of chlorinated aliphatic hydrocarbons. 20 In our previous study, we found that bimetallic nano-Fe/Ni was more effective than single nano-Fe for PCE hydrodechlorination in groundwater. However, both nano-Fe and nano-Ni tend to aggregate due to their high interface energy and inherent magnetism, which can significantly decrease the hydrodechlorination efficiency. 20 In addition, nano-Fe and nano-Ni will inevitably contaminate groundwater with heavy metals according to the reactions: Fe − 2e− → Fe2+, Ni − 2e− → Ni2+.
Electrochemical hydrodechlorination directly utilizes external power as the electron donor; chlorinated aliphatic hydrocarbons (e.g. PCE) can be dechlorinated on the catalytic cathode by obtaining electrons and protons. 21,22 Compared to microbial hydrodechlorination and chemical hydrodechlorination, electrochemical hydrodechlorination does not require cultivating dechlorinating bacteria or injecting chemical electron donors into groundwater. Hence, electrochemical hydrodechlorination is recognized as an efficient and promising technology for chlorohydrocarbon-contaminated groundwater remediation. However, electrochemical hydrodechlorination is usually driven by external power with voltages varying from 5-20 V, which may cause undesired reactions (e.g. water electrolysis) and consume a large amount of electric energy. 18,23 In addition, the cathode material has a great influence on the results of electrochemical hydrodechlorination. 24 A catalytic cathode can combine electrons and protons (or water molecules) to generate activated hydrogen atoms which subsequently act on the chlorohydrocarbon (e.g. PCE) so as to accomplish the hydrodechlorination process. 25 Currently, carbon materials (e.g. particle graphite) and metal materials (e.g. Pt, Pd, Ni, Cu, Zn, Ag, Pb, stainless steel) are widely used as cathode materials. [26][27][28][29] In particular, noble metals such as Pt and Pd have excellent catalytic properties for electrochemical hydrodechlorination due to their low electric potential for producing hydrogen and high capacity for adsorbing hydrogen. [30][31][32] However, high cost limits the application of the noble metals Pt and Pd. 33 Fortunately, it has been verified that the metal Ni also has catalytic properties for electrochemical hydrodechlorination, although the catalytic activity is lower than that of Pt and Pd. [34][35][36] Therefore, some efforts, such as decreasing the dechlorinating voltage, saving electric energy and enhancing the catalytic activity of the Ni cathode, should be made to improve the electrochemical hydrodechlorination of chlorohydrocarbons. In recent years, graphene has been widely used as a catalyst carrier due to its excellent characteristics such as low resistivity, high thermal conductivity and mechanical strength. [37][38][39] When metal nanoparticles are loaded on the surface of graphene, the catalytic activity can be enhanced significantly, and the agglomeration of magnetic nanoparticles (e.g. nano-Ni) can be decreased effectively. 40,41 However, electrochemical hydrodechlorination of PCE in groundwater on a Ni-doped graphene cathode has not been reported so far.
Microbial fuel cells (MFC) have been widely studied in the area of anaerobic biodegradation of organic pollutants in recent years; they can not only eliminate organic pollution with anaerobic electrogenic microorganisms, but also produce electric energy through electron transport in the external circuit. Unfortunately, the open circuit voltage of a single MFC is so small (the maximum is about 0.7 V) that the electric energy is hard to collect or utilize directly. 42 Fortunately, it has been reported that PCE/TCE (trichloroethylene) can be electrochemically dechlorinated at cathode potentials varying from −450 to −550 mV (vs. SHE). 16,17 Hennebel also found that TCE can be effectively dechlorinated under a voltage of 0.8 V in microbial electrolysis cells with biogenic palladium nanoparticles. 23 These findings demonstrated that it is feasible to utilize an MFC to electrochemically dechlorinate PCE as long as the hydrodechlorinating catalyst on the cathode is appropriate. However, electrochemical hydrodechlorination of PCE in groundwater with an MFC has not been reported yet.
This study aimed to prepare Ni-doped graphene as the cathode material and to use an MFC (based on anaerobic sanitary sewage treatment) as the electric power source to electrochemically dechlorinate PCE in groundwater. This study would develop an efficient cathode material for electrochemical hydrodechlorination and demonstrate a novel remediation technology for PCE-contaminated groundwater.
Preparation of Ni-doped graphene
Ni (nickel formate dihydrate, A.R.) and graphene (D50: 7-12 μm, monolayer content >80%) were dispersed sufficiently in absolute ethanol in a stoichiometric ratio of 1:10. Then, after the ethanol had evaporated completely at room temperature, the solid mixture of Ni and graphene was heated in a tube furnace at 400 °C for 2 hours under nitrogen flow.
Characterization of Ni-doped graphene
The microstructure of Ni-doped graphene was examined by transmission electron microscopy (JEM-2200FS, JEOL, Japan). The crystal lattice structure of Ni-doped graphene was characterized by XRD (D/max-2550, RigaKu, Japan) and HRTEM (JEM-2200FS, JEOL, Japan).
Electrochemical measurements
The electrochemical performance of the Ni-doped graphene electrode was determined using a CHI660E electrochemical workstation (Shanghai Chenhua Instrument Co. Ltd., Shanghai, China). A three-electrode system consisting of the Ni-doped graphene electrode, an Ag/AgCl electrode, and a Pt wire as working, reference, and counter electrodes, respectively, was used. The electrochemical analysis was performed with cyclic voltammetry in a 1 mM PCE solution containing 0.1 M KCl. All analytical measurements were performed at room temperature.
PCE-contaminated groundwater
PCE-contaminated groundwater was prepared in the laboratory with actual uncontaminated groundwater taken from a groundwater well in Changchun City. The hydrochemical composition of the groundwater was: salinity 661.94 mg L−1, K+ 0.76 mg L−1, Na+ 65.53 mg L−1, Ca2+ 162.27 mg L−1, HCO3− 231.51 mg L−1, Cl− 193.08 mg L−1, SO42− 122.25 mg L−1, NO3− 1.30 mg L−1, NO2− 0.01 mg L−1, F− 0.46 mg L−1, and pH 7.05. Before adding PCE, the raw groundwater was kept in an anaerobic glove box (COY, USA) until dissolved oxygen (DO) was no longer detected by a DO meter (310D-01A, ORION).
Set-up of MFC
A two-chambered MFC was constructed with an anaerobic microbial anode and an air-aerated cathode. Anaerobic activated sludge was taken from a sewage treatment plant in Changchun City. The anolyte was the raw groundwater with dissolved beef extract, simulating sanitary sewage. The anode substrate was a piece of graphite felt (100 mm × 50 mm × 2 mm). A salt bridge composed of agar and saturated KCl was used to connect the anolyte and catholyte. The anode and cathode were connected by copper wires to a battery (Nanfu, 1.5 V), which was used to induce anaerobic electrogenic microorganisms to preferentially colonize the graphite felt quickly. The battery was dismantled as soon as the open circuit voltage and electric current of the MFC remained above 0.45 V and 0.25 mA, respectively.
Remediation of PCE-contaminated groundwater
The cathode of the MFC was changed to a Ni-doped graphene cathode instead of the air-aerated cathode. The catholyte was PCE-contaminated groundwater, which was sealed with a rubber stopper to guard against oxygen intrusion and PCE volatilization. The Ni-doped graphene cathode was prepared with Ni-doped graphene powder and a graphite plate (50 mm × 50 mm × 3 mm). The Ni-doped graphene powder was loaded onto one side of the graphite plate (50 mm × 50 mm): a solution containing 11 mg of Ni-doped graphene (10 mg graphene and 1 mg Ni) dispersed in ethanol was coated evenly on the plate, which was then heated in a tube furnace at 400 °C for 2 hours under nitrogen flow. During the experimental process, the concentrations of PCE and the corresponding degradation products, such as TCE, cDCE, VC and ETH, were determined by gas chromatography with a flame ionization detector (GC 2010, Shimadzu). The open circuit voltage and electric current of the MFC were also monitored periodically with an accurate multimeter (VC890D, Victor). The experimental schematic is shown in Fig. 1.
Calculation methods
The dechlorination efficiency of PCE was calculated as η_d = C_Cl− / (4 × C_PCE) × 100%, where η_d is the dechlorination efficiency (%), C_Cl− is the concentration of chloridion (mmol L−1), C_PCE is the initial concentration of PCE (mmol L−1), and the number 4 represents one PCE corresponding to four chloridion. The coulombic efficiency of the microbial anode was calculated as η_c = Q_current / Q_COD × 100%, with Q_current = ∫ I dt, where η_c is the coulombic efficiency (%), Q_current is the electric quantity of the external circuit (C), Q_COD is the theoretical electric quantity produced by microbial electrogenesis of the anolyte COD (C), I is the current of the external circuit (A), dt is the time frame used to calculate the coulombic efficiency (s), dC_COD is the concentration difference of anolyte COD (mg L−1), V_a is the volume of the anolyte (L), q_e is the electronic charge, 1.6 × 10−19 C, and N_A is the Avogadro constant, 6.02 × 10^23 mol−1; Q_COD is obtained from dC_COD, V_a, q_e and N_A.
The current efficiency of the Ni-doped graphene cathode was calculated as η_e = Q_Cl− / Q_current × 100%, where η_e is the current efficiency (%), Q_Cl− is the total electric quantity used for electrochemical hydrodechlorination of PCE (C), obtained from C_Cl− (the concentration of chloridion, mmol L−1), V_c (the volume of the catholyte, L), q_e (the electronic charge, 1.6 × 10−19 C) and N_A (the Avogadro constant, 6.02 × 10^23 mol−1), and Q_current is the same as in the coulombic efficiency formula.
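A back-of-the-envelope implementation of these efficiency definitions is sketched below in Python. Because the text does not spell out every conversion factor, the sketch assumes 4 electrons per O2-equivalent of removed COD (M = 32 g mol−1) when converting COD to charge, and 2 electrons per released chloride for the hydrodechlorination charge; both are flagged as assumptions in the comments.

```python
# Sketch of the efficiency calculations described above (illustrative only).
# Assumptions not stated explicitly in the text: removed COD is converted to
# charge with 4 electrons per O2 (M = 32 g/mol), and 2 electrons are consumed
# per released chloride (H replacing Cl).

Q_E = 1.6e-19   # elementary charge, C
N_A = 6.02e23   # Avogadro constant, 1/mol
F = Q_E * N_A   # ~96,300 C per mol of electrons

def dechlorination_efficiency(c_cl_mmol_per_l, c_pce_mmol_per_l):
    """eta_d = C_Cl- / (4 * C_PCE) * 100 (one PCE carries four chlorines)."""
    return 100.0 * c_cl_mmol_per_l / (4.0 * c_pce_mmol_per_l)

def coulombic_efficiency(currents_a, dt_s, d_cod_mg_per_l, v_anolyte_l):
    """eta_c = Q_current / Q_COD * 100."""
    q_current = sum(i * dt_s for i in currents_a)           # integral of I dt
    mol_o2 = d_cod_mg_per_l * v_anolyte_l / 1000.0 / 32.0   # removed COD -> mol O2
    q_cod = 4.0 * mol_o2 * F                                # 4 e- per O2 (assumption)
    return 100.0 * q_current / q_cod

def current_efficiency(c_cl_mmol_per_l, v_catholyte_l, q_current_c,
                       electrons_per_cl=2):                 # assumption
    """eta_e = Q_Cl- / Q_current * 100."""
    mol_cl = c_cl_mmol_per_l / 1000.0 * v_catholyte_l
    q_cl = electrons_per_cl * mol_cl * F
    return 100.0 * q_cl / q_current_c
```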
Characterization of Ni-doped graphene
Transmission electron micrographs of Ni-doped graphene were recorded using a copper grid dipped in a solution of Ni-doped graphene particles dispersed in ethanol by ultrasonication, and are presented in Fig. 2. The TEM image revealed the presence of a large number of nickel particles with uniform size, well dispersed on the graphene. A high-resolution transmission electron microscopy (HRTEM) image is given in Fig. 3, where most particles had sizes of about 5-10 nm, and the lattice fringe spacing was 0.204 nm, corresponding to the (111) crystal planes of cubic nickel (JCPDS# 04-0850). Fig. 4 shows the selected area electron diffraction (SAED) pattern of the Ni-doped graphene. The appearance of strong diffraction spots rather than diffraction rings confirmed the formation of single-crystalline cubic nickel. The ratio of the squares of the ring radii was 3:4:8:11, which indicated that the structure was of the cubic nickel type, and the rings corresponded to the (111), (200), (220), and (311) crystal planes of the cubic nickel structure. 43,44 The phase and crystallinity of Ni-doped graphene were characterized using a Rigaku X-ray diffractometer with Cu Kα radiation over a range of 2θ angles from 20° to 90° (Fig. 5). A sharp and strong typical peak corresponding to graphene appeared at a 2θ angle of 26.6°. 45 Simultaneously, the peaks located at 2θ angles of 44.7°, 54.6° and 78.0° indicated the (111), (200) and (220) crystal planes of the cubic nickel lattice, respectively. 43,44 These results confirmed that Ni nanoparticles had been dispersed on the graphene evenly. However, peaks corresponding to the (111) and (220) crystal planes of the nickel oxide lattice appeared at 2θ angles of 38.4° and 65.0°, which indicated that some nickel oxide was produced due to the exposure of Ni-doped graphene to air. 46,47
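As a quick consistency check on these assignments, Bragg's law can be used to convert the reported 2θ position of the Ni(111) reflection into a d-spacing; the sketch below assumes Cu Kα radiation with λ ≈ 1.5406 Å, a standard value not quoted in the text.

```python
# Convert an XRD peak position (2-theta, degrees) to a d-spacing with Bragg's law,
# d = lambda / (2 sin(theta)), assuming Cu K-alpha radiation (lambda ~ 1.5406 Angstrom).
import math

WAVELENGTH_ANGSTROM = 1.5406  # Cu K-alpha (assumed)

def d_spacing_nm(two_theta_deg):
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_ANGSTROM / (2.0 * math.sin(theta)) / 10.0  # Angstrom -> nm

# Ni(111) reflection reported at 2-theta = 44.7 degrees:
print(f"d(111) = {d_spacing_nm(44.7):.3f} nm")
# ~0.203 nm, consistent with the 0.204 nm lattice fringe spacing seen in HRTEM.
```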
Cyclic voltammetry behavior of Ni-doped graphene
The cyclic voltammogram of Ni-doped graphene after multiple electrochemical scans is shown in Fig. 6. A sharp and strong reduction peak located at −0.24 V (vs. Ag/AgCl) was observed in the cyclic voltammogram of Ni-doped graphene. The cyclic voltammograms of single Ni and graphene were also recorded as experimental controls. It was found that the reduction peak of graphene was located at −0.33 V (vs. Ag/AgCl), which is significantly lower than the reduction potential of Ni-doped graphene. Interestingly, single Ni showed no reduction peak over the potential range from −0.50 to 0 V (vs. Ag/AgCl). These results demonstrated that Ni-doped graphene can be used as a catalytic composite cathode material for the electrochemical hydrodechlorination of PCE at a low voltage (0.24 V), which is significantly lower than those reported up to now. [16][17][18]23
Remediation efficiency of PCE-contaminated groundwater
Electrochemical hydrodechlorination of PCE in groundwater driven by the MFC was carried out on the graphite cathode coated with Ni-doped graphene. As experimental controls, single Ni and graphene were also used as cathode materials to electrochemically dechlorinate PCE in groundwater (Fig. 7). The results showed that PCE can be removed effectively with Ni-doped graphene, although PCE can also be electrochemically removed with single Ni or graphene. At a remediation time of 96 h, the removal rates of PCE were 23.6%, 17.1% and 46.3% with the cathode materials Ni, graphene and Ni-doped graphene, respectively (Fig. 7). These results demonstrated that the hydrodechlorination activity of Ni-doped graphene was indeed higher than that of single Ni or graphene, resulting from the synergistic effect between the superior conductivity of graphene and the high surface catalytic activity of the nano-Ni particles.
The concentration of PCE in actual groundwater varies along the contamination plume. Therefore, the effect of PCE concentration on the electrochemical hydrodechlorination efficiency was investigated in this study, and the results are shown in Fig. 8. It was obvious that the higher the initial PCE concentration, the higher the PCE removal rate. At a remediation time of 96 h, the removal rates of PCE were 24.5%, 29.4%, 38.8% and 46.3% for the different initial PCE concentrations of 1, 5, 10 and 15 mg L−1, respectively. These results suggested that the electrochemical hydrodechlorination efficiency of PCE had a positive correlation with the PCE concentration in groundwater. For low concentrations of PCE, more time would be needed to eliminate the PCE contamination in groundwater completely. In addition, the total amount of nickel in the Ni-doped graphene cathode was only 1 mg, which may lead to a low removal rate of PCE. Therefore, the amount of Ni-doped graphene coated on the cathode will be optimized in a future study.
Protons are an important reactant for the electrochemical hydrodechlorination of PCE. According to the hydrodechlorination mechanism of PCE, protons combined with electrons replace the chlorines of PCE. 25 Generally speaking, the pH of groundwater often varies from 5 to 9. Therefore, the effect of pH on the electrochemical hydrodechlorination efficiency of PCE in groundwater was investigated in this study, and the results are shown in Fig. 9. It can be seen clearly that pH had no significant effect on PCE hydrodechlorination (p < 0.05). At a remediation time of 96 h, the removal rates of PCE were 41.4%, 43.5%, 46.3%, 42.1% and 40.3% at the different initial pH values of 5, 6, 7, 8 and 9, respectively. These results demonstrated that neutral conditions were more suitable for Ni-doped graphene to electrochemically hydrodechlorinate PCE. The cathode material of Ni-doped graphene prepared in this study can therefore be used for the electrochemical hydrodechlorination remediation of actual PCE-contaminated groundwater.
Electrochemical hydrodechlorination mechanism
To electrochemically dechlorinate PCE in groundwater with the aboveground MFC as the electric driver, a salt bridge had to be used to connect the microbial anode chamber and the PCE-contaminated groundwater cathode chamber. However, the resistance of the entire electrochemical remediation system was so high (Fig. S2, ESI†) that the loop current was fairly low (Fig. 12), which led to a low dechlorination efficiency of PCE. Therefore, more time was needed for the PCE electrochemical hydrodechlorination, and the degradation products such as TCE, cDCE, VC, ETH and chloridion were monitored synchronously. Fig. 10 shows the concentration variations of PCE and the corresponding dechlorination products. PCE was removed completely in 10 days, accompanied by the appearance and disappearance of TCE. cDCE and VC were also detected in the electrochemical reduction system. The maximum concentration of cDCE was detected at 6 days, and cDCE was completely eliminated at 14 days. VC appeared at 4 days and completely disappeared at 20 days. Finally, ETH was the only product of the PCE electrochemical hydrodechlorination. Generally, the process of PCE electrochemical hydrodechlorination can be described as follows: 48 C2Cl4 + m e− + n H+ → C2HnCl(4−m+n) + (m − n)Cl−. Therefore, TCE, cDCE, VC and ETH were the common dechlorination products of PCE.
As is well known, the less chlorinated ethylenes (e.g. TCE, cDCE and VC) are more difficult to dechlorinate than PCE due to asymmetric π-π conjugation effects. Hence, the dechlorination step from cDCE and VC to ETH is usually the rate-limiting step in dechlorinating PCE into ETH. 18 Therefore, Ni-doped graphene can most probably also be used to advantage for the electrochemical hydrodechlorination of less chlorinated ethylenes. In addition, the detection of chloridion (Fig. 11) and the absence of nickel corrosion (Fig. S1, ESI†) further confirmed the electrochemical hydrodechlorination of PCE, and the dechlorination efficiency of PCE calculated from the chloridion concentration was 91.40% at 22 days.
Electrical characteristics of MFC
The open circuit voltage and current were monitored regularly to investigate the electrical characteristics of the MFC. The results in Fig. 12 show that both the open circuit voltage and the current decreased gradually, from 0.460 to 0.389 V and from 0.257 to 0.221 mA, during the whole 96 h operation process. The variation of the anolyte COD was also investigated in this study, and the results showed that the anolyte COD also decreased gradually, from 386 to 166 mg L−1 (Fig. 13). Thus it can be inferred that the decreases in open circuit voltage and current were caused by the reduction of the anolyte COD. Fortunately, the open circuit voltage of the MFC was always more than 0.24 V, which meant that electrochemical hydrodechlorination of PCE could occur from beginning to end with the Ni-doped graphene cathode.
The coulombic efficiency of an MFC is used to assess the efficiency with which the microbial fuel in the anolyte is converted from chemical energy into electric energy. It can be seen from Fig. 14 that the coulombic efficiency increased gradually from 2.70% to 39.74%, and the increasing trend after 54 h was significantly faster than that before 54 h. The average coulombic efficiency was 11.98%. During the remediation time of 0-54 h, the open circuit voltage, current and anolyte COD also decreased rapidly, to 0.398 V, 0.225 mA and 196 mg L−1, respectively. These results suggested that rapid degradation of anolyte COD at high concentration (>196 mg L−1) did not mean that more electric energy would be gained from the MFC. On the contrary, a low concentration (<196 mg L−1) of anolyte COD enhanced the coulombic efficiency of the MFC, although the total electric energy was relatively low. The most probable reason for the increasing coulombic efficiency was that, before 54 h, electrons produced by the microorganisms were partially consumed by electron acceptors (such as O2, NO3−, etc.) in the anolyte rather than flowing to the cathode via the external circuit. After all, it is easy for O2 and NO3− to capture electrons and be reduced. 17 In addition, the microorganisms would consume more COD when O2 and NO3− rather than the electrode were the microbial electron acceptors, which directly led to the low coulombic efficiency before 54 h. Therefore, the anolyte of the MFC should be optimized before application for electrochemical hydrodechlorination of PCE, so as to save microbial fuel and raise the electric energy yield. The current efficiency is used to assess the efficiency with which the current produced by the MFC is utilized at the cathode. It was obvious that the current efficiency of the cathode was steady, and the average current efficiency was only 9.28% (Fig. 14). The most probable reason for the low current efficiency was the electrochemical reduction of other electron acceptors such as NO3− and SO42− in the groundwater. 48 In addition, the production of H2 on the Ni-doped graphene could also decrease the current efficiency due to the low hydrogen evolution overpotential of nickel. 48 This is why it is hard for electrochemical hydrodechlorination to completely eliminate PCE in groundwater. Therefore, the effects of multiple electron acceptors on the electrochemical hydrodechlorination of PCE will be investigated in a future study.
Conclusions
A novel cathode material of Ni-doped graphene for the electrochemical hydrodechlorination of PCE was successfully prepared and investigated in this study. Ni nanoparticles with sizes of 5-10 nm were dispersed evenly on the graphene. The reduction potential of Ni-doped graphene for PCE electrochemical hydrodechlorination was −0.24 V (vs. Ag/AgCl), which is significantly lower than those reported up to now. Electrochemical hydrodechlorination of PCE with Ni-doped graphene could be driven by a low-voltage MFC, and the hydrodechlorination efficiency of PCE with Ni-doped graphene as the cathode material was obviously higher than that with single Ni or graphene. Most importantly, Ni-doped graphene showed the best PCE removal efficiency under neutral conditions, and no byproduct was accumulated.
Conflicts of interest
There are no conflicts to declare. | 5,237.4 | 2018-10-22T00:00:00.000 | [
"Materials Science"
] |
Exploration of Environmentally Friendly Adsorbents for Treatment of Azo Dyes from Textile Wastewater and Their Dosage Optimization
Textile mills are among the major industries contributing to water pollution. In the present study, synthetic wastewater was treated using red brick dust and alum. The treatment was evaluated by measuring pH, EC, TDS and the color removal percentage. The experiments were carried out with different concentrations of red brick dust and alum to determine the suitable dosage of these adsorbents for dye color removal. The pH of the synthetic dye solution was adjusted to 4, 7 and 9 and the solution was then passed through the red brick dust and alum. A sequential treatment, adsorption followed by coagulation, was adopted to treat the wastewater. Dosage variation showed very significant results, and 75% red brick dust in combination with 25% alum was found favorable for color removal of the dye. The material was capable of removing up to 92% of the color at pH 7 at normal temperature. Other parameters such as EC and pH showed irregular trends as the amount of alum was increased, but TDS tended to decrease with increasing amounts of alum. The experimental results showed that the material has good potential to remove color from effluent and good potential as an alternative low-cost adsorbent. There are many physical and chemical treatment methods available for the removal of color, but all of these methods have associated problems such as secondary effluent, hazardous and harmful end products, high energy consumption and poor economics. These problems can be overcome by the use of a physical treatment method (adsorption and coagulation), which is not hazardous for the environment.
Textile processing comprises wet and dry processes. The wet processes include scouring, singeing and dyeing, which use a large amount of water. Among these processes, dyeing requires an immense amount of water and involves changing the color of the spun textile (Wang et al., 2002).
Dyes are small molecules comprising two key components: the functional group, which bonds the dye to the fiber, and the chromophore, responsible for the color (Waring and Hallas, 1990). Wastewater from dyeing units is frequently rich in color, containing residues of reactive dyes and chemicals such as aerosols and complex components with high chroma, which possess high BOD and COD values and are hard to degrade. The harmful impacts of dyestuffs and other organic mixtures, as well as acidic and basic contaminants, from industrialization on the general public are broadly acknowledged. The structure of these dyes is complex and stable, resulting in greater difficulty in degrading dyeing wastewater (Shaolan et al., 2010). Moreover, this wide range of dyes and the various dyeing auxiliaries make textile wastewater hard to treat with a single treatment technique in all situations (Cooper, 1978). Dyes can be classified into different types depending on their chemical compositions and properties. Therefore, the usage of dyes varies from industry to industry depending on the fabrics they manufacture. The worldwide production of dyes is 700,000 to 1,000,000 tons per year, corresponding to over 1 × 10^5 commercial products, and azo dyes have a 70% share of the dye market (Pereira et al., 2012).
The World Bank estimates that 17 to 20% of water contamination originates from the dyeing and finishing processes of textile manufacturing industries. Seventy-two toxic and health-hazardous chemicals have been identified in water from dyeing processes, thirty of which cannot be removed (HSRC, 2005).
The wastewater has a great quantity of suspended solids, strong coloration, a highly fluctuating pH, high temperatures, and substantial quantities of heavy metals (Ni, Cr, Cu) and chlorinated organic compounds (Araujo and Yokoyama, 2006). Consequently, these dyes are unavoidably released in mill effluents. Azo dyes have a serious impact on the environment because their degradation products and precursors (such as aromatic amines) are extremely carcinogenic (Szymczyk et al., 2007).
Textile wastewater with such noxious characteristics should be treated before disposal into natural water systems (Kabir et al., 2002). High concentrations of COD, BOD, total dissolved solids, and total suspended solids, together with extreme pH, lower the oxygen concentration of receiving streams, which can create anaerobic conditions and kill the aerobic organisms in the water (Savin and Romen, 2008).
Effluent treatment methods can be classified into chemical, physical, and biological methods. No single technique among these three is sufficient to remove color from textile effluent, because dyes respond differently to different methods: some dyes are not easily biodegraded, certain acidic dyes are poorly absorbed by activated sludge, and some, particularly hydrolyzed reactive dyes, escape treatment altogether. Combining several treatment methods can eliminate more than 85% of the unwanted matter (Donnet and Papirer, 1912). Two mechanisms, adsorption and ion exchange, are involved in decolorization, and both are affected by factors including the adsorbent's surface area, dye-adsorbent interaction, particle size, pH, contact time, and temperature (Ruthven et al., 1984). Surfactants and dyes with high molecular weights are readily removed by coagulation followed by flotation, filtration, and sedimentation (Lee, 2000). This research was planned to explore environmentally friendly adsorbents for removing the color caused by azo dyes in textile wastewater, specifically to investigate the performance of red brick dust in combination with a coagulant (alum) as an adsorbent for treating azo dyes, and to evaluate the changes in parameters such as pH, EC, TSS, TDS, and color intensity after treatment.
Research Methodology
A number of samples were prepared by dissolving 0.25 g of solute (azo dye) in 3 liters of distilled water. These samples were passed through red brick dust mixed with different proportions of coagulant (alum); the red brick dust and alum were placed in a pipe with a hole at the bottom for extraction of the treated dye solution. The same procedure was repeated at pH 7 and pH 9.
Experimental Methodology
First, a synthetic wastewater solution was prepared by adding 0.25 g of azo dye to 3 liters of distilled water. The solution was then shaken for 10 minutes on a shaker (50 °C, 5 rev) to make a homogeneous mixture.
The pH of the solution was then adjusted to 4, 7, and 9 by adding 0.1 N HCl or 0.01 N NaOH; the original pH of the azo dye solution was 5.54 before any adjustment. The prepared solutions at the different pH levels were treated by the adsorption method. The experiment was repeated with different amounts of the adsorbents (red brick dust + alum) in proportions of (1000 g + 0 g), (750 g + 250 g), (500 g + 500 g), (250 g + 750 g), and (0 g + 1000 g), respectively. A fixed amount of adsorbent was placed in the designed pipe, and the synthetic azo dye solution was passed through it with a retention time of 3 hours. A sample was collected from the pipe outlet after each hour of treatment, so for each pH level three samples were taken after the solution had passed through each adsorbent dosage. Evaluation was done by measuring pH, EC, TDS, and the color removal percentage of the solutions before and after treatment.
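As a small illustration of how the dosage combinations and removal figures above relate, the sketch below tabulates the adsorbent proportions and computes color removal with the conventional formula removal (%) = (initial − treated) / initial × 100 applied to dye concentration or absorbance. The paper does not state its exact formula, and the numbers used here are illustrative, not measured values from the study.

```python
# Hypothetical helper for the dosage combinations described above, assuming the
# standard color-removal formula; concentrations below are made-up examples.

# Red brick dust + alum proportions used in the experiment (grams)
dosages = [(1000, 0), (750, 250), (500, 500), (250, 750), (0, 1000)]

def color_removal_percent(initial_conc: float, treated_conc: float) -> float:
    """Percentage of dye color removed relative to the untreated solution."""
    return (initial_conc - treated_conc) / initial_conc * 100.0

initial = 83.3   # mg/L, e.g. 0.25 g dye dissolved in 3 L of distilled water
treated = 6.7    # mg/L after adsorption followed by coagulation (example only)
for brick, alum in dosages:
    print(f"{brick} g brick dust + {alum} g alum -> "
          f"{color_removal_percent(initial, treated):.1f}% removal (example)")
```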
Results and Discussion
Results were analyzed and discussed using a randomized complete block design (RCBD) with a two-factor factorial arrangement at a 5% significance level.
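A minimal sketch of how such a two-factor factorial analysis (dosage × pH) at the 5% level could be reproduced with statsmodels follows; the column names, the use of the hourly samples as replicates, and the placeholder values are assumptions, not taken from the paper.

```python
# Two-factor factorial ANOVA (dosage x pH) at the 5% level, on hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
dosages = ["100:0", "75:25", "50:50", "25:75", "0:100"]   # brick dust : alum
rows = []
for d in dosages:
    for p in (4, 7, 9):
        for hour in (1, 2, 3):                 # hourly samples act as replicates
            rows.append({"dosage": d, "ph": p, "hour": hour,
                         "removal": rng.normal(70, 5)})   # placeholder values
df = pd.DataFrame(rows)

model = ols("removal ~ C(dosage) + C(ph) + C(dosage):C(ph)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # p-values < 0.05 indicate significant effects
```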
Discussion
Experimental results showed that when brick dust was applied, EC increased from 170 µS/cm to 5020 µS/cm at pH 4; with the passage of time, the EC value fell to 4240 µS/cm. With the use of alum, however, EC showed an increasing pattern as the alum dosage rose from 0% to 100%, with a highest value of 19,900 µS/cm. The results for pH 7 and pH 9 are similar to those for pH 4, as shown in Fig. 2: EC values for pH 7 and pH 9 increased from 870 and 120 to 4550 and 4060 µS/cm, respectively.
For pH, the value before treatment was higher, and after treatment it tended to decrease. Treatment with 100% red brick dust increased the pH on average from 4 to 6 for all three initial pH levels (4, 7, and 9), with no significant change over the hours of treatment. With alum, however, pH showed a decreasing pattern: as the alum dosage increased from 0% to 100%, the pH of the solution dropped sharply, down to 2.1. The results for pH 4, pH 7, and pH 9 were almost identical. The experimental results show that pH is inversely affected by alum.
The analysis of variance for total dissolved solids showed highly significant differences between treatments. The best result was obtained with treatment T5 (0% red brick dust + 100% alum) at 3 hours. The graphical representation shows that the control treatment has a lower TDS value than the other treatments. After treatment, TDS increased with a high dosage of red brick dust combined with alum, and as the red brick dust dosage decreased, TDS also decreased.
Experimental results for the color removal percentage showed that with the brick dust treatment, removal increased from 20% to 98%, rising markedly with the passage of time. With alum, however, the color removal percentage showed a decreasing pattern as the dosage increased from 0% to 100%, reaching its lowest value of 20% with 100% alum.
Conclusion and Recommendation
The results of the present study show that red brick dust and alum have suitable adsorption capacity for removing azo dye from its aqueous solution. Red brick dust is a good adsorbent, and the adsorption depends strongly on contact time and on the pH of the aqueous azo dye solution; the optimal pH for favorable adsorption of azo dyes is 7. Sequential treatment of textile wastewater, adsorption followed by coagulation, is effective for removing dye color. The results for EC, pH, TDS, and color removal percentage were better for the dosage of 75% red brick dust and 25% alum than for the other dosages. Red brick dust and alum do not perform well separately; their combination is efficient at removing both color and solid contents.
It is recommended that the sequential treatment of adsorption followed by coagulation also be tested in reverse, coagulation followed by adsorption, to improve the EC and TDS results. Studies should continue on increasing the adsorption capacity of red brick dust by treating it with other acids, and on whether red brick dust can also be used to remove other dyes. In conclusion, the expanding use of red brick dust in adsorption science represents a viable and powerful tool for improving pollution control and environmental preservation.
Fig. 1: Different proportion combinations of red brick dust and alum at pH 4.
Table 1: Analysis of variance for EC (* significant; ** highly significant; ns non-significant)
Table 2: Analysis of variance for pH
Table 3: Analysis of variance for TDS
Table 4: Analysis of variance for color removal percentage | 2,982 | 2017-01-09T00:00:00.000 | [ "Engineering" ] |
Diacerein Improves Left Ventricular Remodeling and Cardiac Function by Reducing the Inflammatory Response after Myocardial Infarction
Background The inflammatory response has been implicated in the pathogenesis of left ventricular (LV) remodeling after myocardial infarction (MI). An anthraquinone compound with anti-inflammatory properties, diacerein inhibits the synthesis and activity of pro-inflammatory cytokines, such as tumor necrosis factor and interleukins 1 and 6. The purpose of this study was to investigate the effects of diacerein on ventricular remodeling in vivo. Methods and Results Ligation of the left anterior descending artery was used to induce MI in an experimental rat model. Rats were divided into two groups: a control group that received saline solution (n = 16) and a group that received diacerein (80 mg/kg) daily (n = 10). After 4 weeks, the LV volume, cellular signaling, caspase 3 activity, and nuclear factor kappa B (NF-κB) transcription were compared between the two groups. After 4 weeks, end-diastolic and end-systolic LV volumes were reduced in the treatment group compared to the control group (p < .01 and p < .01, respectively). Compared to control rats, diacerein-treated rats exhibited less fibrosis in the LV (14.65% ± 7.27% vs. 22.57% ± 8.94%; p < .01), lower levels of caspase-3 activity, and lower levels of NF-κB p65 transcription. Conclusions Treatment with diacerein once a day for 4 weeks after MI improved ventricular remodeling by promoting lower end-systolic and end-diastolic LV volumes. Diacerein also reduced fibrosis in the LV. These effects might be associated with partial blockage of the NF-κB pathway.
Introduction
Myocardial infarction (MI) is a devastating event, especially when reperfusion does not occur [1,2]. Left ventricular (LV) remodeling after MI involves enlargement of the LV and thinning of the ventricular wall to maintain cardiac function [3]. The inflammatory response plays an important role in LV remodeling [4]. For example, reperfusion injury can trigger a cascade of signaling events that lead to inflammatory tissue damage. These signaling factors can include pro-inflammatory cytokines, such as tumor-necrosis factor α (TNF-α), anti-inflammatory molecules, adhesion molecules, and interleukins [5,6].
TNF-α is a pro-inflammatory cytokine that participates in the innate immune system and is expressed by cardiac tissues. At low levels, TNF-α exhibits a cardioprotective effect, whereas at high levels, TNF-α has been shown to mediate detrimental effects [6,7]. TNF-α has been implicated in the pathogenesis of ventricular remodeling after MI [5,8]. In clinical trials, blocking or decreasing the bioavailability of TNF-α produced disappointing results in patients with congestive heart failure [8,9]. However, these patients had already undergone remodeling and dilation of the LV, and the results may have been affected by the dichotomous effects of TNF-α.
To our knowledge, no clinical trial to date has evaluated the effects of anti-inflammatory intervention on LV remodeling immediately after MI. However, there is some evidence that inhibition of nuclear factor kappa B (NF-κB) improves LV remodeling and contributes to a decrease in cardiac dysfunction after MI. Moreover, NF-κB is regulated, in part, by TNF-α [8,10,11].
Diacerein is an anthraquinone compound with anti-inflammatory properties that inhibits the synthesis and activity of pro-inflammatory cytokines, such as TNF-α and interleukins 1 and 6 (IL-1 and IL-6, respectively) [12][13][14]. The active metabolite of diacerein is rhein (1,8-dihydroxy-3-carboxyanthraquinone), which is found in plants of the genus Cassia and exhibits anti-inflammatory effects by inhibiting cytokine synthesis [14]. In the present study, LV remodeling was evaluated in the presence of diacerein 4 weeks after MI was induced in a rat model.
Animals and Ethics Statement
Wistar rats had free access to water and a standard rat diet (State University of Campinas Central Breeding Center, Campinas, Brazil), and were housed in a room maintained at 21°C with a 12-hour light/12-hourdark cycle. All experimental protocols were established in accordance with the standards of the Brazilian Council in Animal Experimentation, and the "Guide for Care and Use of Laboratory Animals" published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996). The protocol was approved by the Institutional Committee on the Ethics of Animal Experiments of the State University of Campinas under the permit number 2428-1.
Induction of MI and Study Protocol
Male Wistar rats (120-150g) were subjected to an MI event according to the method of Gao et al. [15]. Briefly, rats were anesthetized with inhalation of 2% isoflurane with no endotracheal tube placement. A thoracic incision was made over the left region of the chest, and a purse string suture was made in the skin for incision closure at the end of the procedure. The thorax was accessed through the fourth intercostal space, and the heart was gently popped out through the incision.
MI was induced by performing a left coronary artery (LCA) ligation approximately 3 mm from its origin using a 6-0 polypropylene suture. After ligation, the heart was immediately placed back into the chest, manual evacuation of air was performed, and the suture was closed by snaring the previously placed purse string suture in the skin. If necessary, a needle was inserted into the eighth intercostal space to remove any residual pneumothorax. Rats were subsequently provided 40% oxygen and were monitored during recovery. All animals received acetaminophen 200 mg/kg per os for 3 days.
After induction of MI and total recovery from anesthesia, animals were treated daily for 4 weeks with either 80 mg/kg diacerein diluted with 2 ml of saline solution (Diacerein group; n = 10) or a gavage of 2 ml of saline solution every day (Control group; n = 16). The diacerein dose applied was selected based on previous studies [14,16,17] and unpublished data from our laboratory. Two sham groups were handled in the same way as described above, except that no MI was induced by LCA ligation. One group was treated daily for 4 weeks with a gavage of 2 ml of saline solution (Sham group; n = 8) and the second group was treated daily for 4 weeks with 80 mg/kg diacerein diluted with 2 ml of saline solution (Sham group with Diacerein; n = 10). Body weights were recorded each week, and the diacerein dose administered was corrected for changes in weight. After 4 weeks, animals were submitted to a hemodynamic study and euthanasia for tissue harvesting. Euthanasia was induced with pentobarbital 100 mg/kg and confirmed by the LV catheter inserted during the hemodynamic assessment.
Hemodynamic Assessment
Four weeks after the MI procedure, rats were anesthetized with xylazine (5 mg/kg) and ketamine (75 mg/kg) by intraperitoneal injection and were allowed to breathe spontaneously. An invasive hemodynamic assay was performed using a pressure-volume catheter (SPR-838, Millar Instruments, Houston, TX, USA) that was inserted into the LV cavity through the right carotid artery. The pressure and volume of the LV were continuously monitored for correct positioning of the catheter. The catheter was coupled to a PowerLab 8/30 A/D converter (AD Instruments; Mountain View, CA, USA) and a personal computer. Parallel conductance correction volumes were determined after injection of 30% hypertonic saline solution (20 μL). Upon completing the hemodynamic measurements, LV volume correction was determined using heparinized blood obtained from each animal. The blood was calibrated by cuvette, according to the method of Parcher and colleagues [18].
Histopathological Analysis
After the invasive hemodynamic assessment was completed, the rats were euthanized, and their heart and lung tissues were harvested and weighed. The LV from each rat was dissected, and a mid-ventricle slice (~3 mm) from each was preserved in 4% paraformaldehyde and embedded in paraffin. Tissue sections (4μm) were stained with Masson's trichrome and Picrosirius Red to assess fibrosis and collagen deposition, respectively. Histological images were acquired using an optical light microscope with a 2.5× lens (Imager A2 Axio Carl Zeiss, Germany). On average, 12 images were needed to cover the entire LV slice. Images were reconstructed to create a single panoramic slice using PTGui software (version 9.1.3, Rotterdam, The Netherlands).
Fibrosis and collagen deposition were quantified using Image Pro Plus software (version 6.0,Warrendale, PA, USA). Data were expressed as a percentage of total tissue per LV panoramic slice. Fibrosis and collagen deposition were also analyzed for the opposite LV wall (remote area).
Cross-sectional Area of Cardiomyocytes
To assess cardiac hypertrophy, the cross-sectional area (CSA) of the cardiomyocytes was calculated. Briefly, sections of the LV (4 μm) were stained with hematoxylin and eosin (H&E) according to Stefanon and collaborators [19]. Typically, 10 to 15 fields of the remote area were analyzed using a 40× objective lens and transmitted light. A total of 70 cells were measured for each animal using Image Pro Plus software (version 6.0, Warrendale, PA, USA) for CSA assessment.
Transcription Activity Assay of NF-κB Subunits p50 and p65
Activation of NF-κB p50 and p65 in nuclear extracts prepared from LV tissues was assayed using a commercially available kit and a chemiluminescent detection method according to the manufacturer's instructions (Thermo Scientific).
Gene Expression
Infarct samples and remote tissues from the LV were subjected to RNA extraction using TRIzol reagent (Ambion, USA). RNA was quantified using 260/280 nm absorbance ratio data. Using a High Capacity cDNA Reverse Transcription kit (Applied Biosystems, Carlsbad, CA, USA), total RNA (1 μg) was used for reverse transcription reactions. To detect levels of gene expression, real-time PCR was performed using commercially available TaqMan primers for IL-1, IL-6, TNF, NF-κB p50, NF-κB p65, and actin (Applied Biosystems, Carlsbad, CA, USA). Detection of actin was used as an internal control.
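The text names actin as the internal control but not the quantification formula; a common way to express such real-time PCR data is the 2^(−ΔΔCt) method, sketched below as an assumption rather than the authors' stated procedure. The Ct values are made up for illustration.

```python
# Hedged illustration of relative gene expression with the 2^(-ddCt) method,
# normalizing target Ct values to actin; this method is an assumption here.
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    d_ct_sample = ct_target - ct_actin              # normalize to actin in the sample
    d_ct_reference = ct_target_ref - ct_actin_ref   # normalize in the reference group
    dd_ct = d_ct_sample - d_ct_reference
    return 2 ** (-dd_ct)

# Example: TNF in a treated LV sample vs. a control sample (made-up Ct values)
print(relative_expression(ct_target=26.0, ct_actin=18.0,
                          ct_target_ref=24.5, ct_actin_ref=18.2))  # < 1 means lower expression
```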
Statistical Analysis
All data are reported as the mean ± standard deviation (SD). The Shapiro-Wilk test for normality was performed. Statistical significance was analyzed by using an unpaired t-test or Mann-Whitney test when appropriate. Statistical analyses were performed using GraphPad Prism software (for Mac, version 6, San Diego, CA, USA).
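The decision rule described above (check normality, then choose an unpaired t-test or a Mann-Whitney test) can be sketched in a few lines with SciPy; the fibrosis values below are illustrative, not the study's raw data.

```python
# Normality check followed by the appropriate two-group comparison.
import numpy as np
from scipy import stats

def compare_groups(control, treated, alpha=0.05):
    normal = (stats.shapiro(control).pvalue > alpha and
              stats.shapiro(treated).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(control, treated).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(control, treated).pvalue

control = np.array([22.6, 25.1, 19.8, 30.2, 18.7, 24.4])   # illustrative fibrosis %
treated = np.array([14.7, 12.9, 16.3, 15.8, 13.1, 14.0])
print(compare_groups(control, treated))
```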
Results
Animal Weight, LV Weight, LV/Body Weight Index, and LV Fibrosis
Four weeks after MI, the mean body weight of the Control group was greater than that of the Diacerein group, although both groups exhibited a similar mean weight at the beginning of the study. The Sham group treated with diacerein also exhibited a lower mean body weight compared with the Sham only group. The LV/body weight index values were also higher for the Control and Diacerein groups compared with the Sham and Sham plus diacerein groups 4 weeks post-MI. Using Masson's trichrome staining, less fibrosis was detected for the Diacerein group compared to the Control group (Fig. 1A), and lower collagen content was observed with Picrosirius Red staining (Table 1). Morphometric analysis further revealed that the Diacerein group had a lower cardiomyocyte CSA compared to the Control group in the remote LV area (Fig. 1B).
Diacerein Promotes Better Hemodynamics 4 Weeks after MI
When lower end-diastolic and end-systolic volumes were monitored, improved LV remodeling was observed for the Diacerein group compared to the Control group. The ejection fraction was also higher for the Diacerein group compared to the Control group. The Control group showed higher dP/dt max and heart rate values compared to the Diacerein group. However, when dP/dt max was normalized to the end-diastolic volume (EDV), the latter of which is a more reliable contractility index [23,24], higher contractility index values were observed for the Diacerein group compared to the Control group. In contrast, the Sham group treated with diacerein exhibited lower contractility index values compared with the Sham group, and higher values compared with the Control group. These data are summarized in Table 2. Lower levels of TNF receptor 1 (TNFR1) expression (Fig. 2A) and IκBα activation (Fig. 2B) were detected for the Diacerein group compared to the Control group 4 weeks after MI. The latter result corresponded to lower levels of NF-κB p65 transcription for the Diacerein group (Fig. 3B). The levels of caspase 3/7 activity (Fig. 2D) and procollagen 1 and 2 deposition (Fig. 2E) were also lower for the Diacerein group compared to the Control group. In contrast, expression levels of NF-κB p50 were similar between the two groups (Fig. 3A).
Diacerein Promotes Lower TNF Gene Expression in Infarcted and Remote Areas of the LV
After administration of diacerein for 4 weeks, levels of TNF gene expression were lower in both remote and infarcted areas of the LV (Fig. 4A). In contrast, levels of IL-1 gene expression did not significantly differ between the two groups, although lower levels of IL-1 expression were detected for the Diacerein group (Fig. 4B). In the remote ventricle regions, lower levels of NF-κB p65 gene expression were detected in the Diacerein group compared to the Control group. NF-κB p50 and IL-6 gene expression did not significantly differ between the Diacerein and Control groups (Fig. 4C).
Discussion
This study demonstrates the beneficial effects of an anti-inflammatory drug, diacerein, on LV remodeling in a rat experimental model of MI. Lower end-systolic and end-diastolic LV volumes, higher LV ejection fractions, less hypertrophy of cardiomyocytes in remote areas, and lower heart rates were detected in rats 4 weeks after an MI event and treatment with diacerein. Lower dP/dt max values were observed in the treatment group compared to the control and sham groups, and these data suggest that diacerein mediates a detrimental effect on myocardial contractility. However, there were no differences observed after the dP/dt max values were normalized to the EDVs. Lower dP/dt max values were also observed following normalization to EDV values for the Sham group treated with diacerein compared with the Sham group. The latter data may indicate that greater modulation of the parasympathetic system occurred [25]. However, further studies are needed to examine this possibility.
The linearity between dP/dt max and EDV reported in the literature suggests this index might be more reliable for assessing the contractility state in vivo [23,26]. Of note, most of the studies that have used dP/dt max as a contractility index had similar heart rates between the groups, which makes the Treppe effect less likely to occur [27,28]. Ishikawa and colleagues demonstrated that the ejection fraction is more sensitive than dP/dt max at detecting systolic dysfunction in an elegant model of myocardial infarction in swine. They suggest that the ejection fraction is better suited for inter-animal comparisons, whereas dP/dt max is a more useful tool to assess contractility within the same animal [24]. Therefore, the present findings demonstrate that diacerein positively influences LV remodeling after MI, but it was unable to provide any beneficial effect on the myocardial contractility state compared to the control group.
These beneficial effects of LV remodeling after MI may have been due to inhibition of TNFR1, which can lead to lower levels of IκBα activation and reduced transcription of NF-κB p65. No significant changes in TNF receptor 2 (TNFR2) were observed during the same experimental period. LV remodeling was also associated with lower levels of caspase 3/7 activity, less fibrosis, less collagen, and reduced procollagen deposition. To our knowledge, the effects of diacerein on LV remodeling after MI have not previously been investigated. The present findings suggest that modulation of the inflammatory response is a key factor in the healing process, in accordance with several reports in the literature [3,4,11,29], supporting the potential clinical utility of a drug with anti-inflammatory effects after MI.
Remodeling of the LV after MI is a complex process that includes several pathways that mediate biochemical, molecular, and morphological alterations in remote and infarcted myocardial regions [19]. The role of innate immunity in this process is not completely understood.
However, it appears that TNF mediates bimodal effects, ranging from tissue injury to tissue repair, during the acute and chronic stages after MI [3,30]. In the present study, selective inhibition of TNFR1 was identified as a potential mechanism by which inhibition of IκBα could regulate the NF-κB p65/p50 heterodimer [31]. This hypothesis is supported by the results of other studies involving TNFR1 [30,32,33]. For example, Hamid and colleagues subjected TNFR1−/− and TNFR2−/− mice to MI. They observed disparate and opposing effects on LV remodeling, hypertrophy, and NF-κB activation, with TNFR1 exacerbating the effects of MI and TNFR2 improving the response to MI. In the present study, lower levels of TNFR1 expression were observed with diacerein administration, accompanied by lower levels of fibrosis, caspase 3/7 activity, and NF-κB p65. These results are consistent with those of the TNFR1−/− model [30].
Onai and colleagues previously demonstrated that non-selective blockage of NF-κB improves LV remodeling after MI in their studies of IMD-0354 and IKK-β phosphorylation [8].
In the present study, selective blockage of TNFR1 resulted in an attenuation of NF-κB p65 transcription, lower levels of caspase 3/7 activity, and improved remodeling of the LV 4 weeks after treatment with diacerein. TNFR1 has previously been shown to activate TNFR1-associated death domain (TRADD), which is involved in mediating apoptosis via recruitment of caspase 8 and cleavage and activation of caspase 3, and also contributes to the activation of NF-κB [34]. Onai et al. did not evaluate the effects of IMD-0354 on the NF-κB subunits. However, Hamid and colleagues reported that persistent inhibition of NF-κB p65, coupled with negligible NF-κB p50 activation/inhibition, resulted in improved survival and cardiac remodeling. Decreased pro-inflammatory, pro-fibrotic, and anti-apoptotic effects were observed in transgenic mice that over expressed a mutation for phosphorylation-resistant IκB-α and were subjected to coronary ligation [31]. The latter results are consistent with those of the present study, although lower levels of TNF expression were also accompanied by an absence of changes in IL-1 and IL-6 gene expression in the present study.
Several reports have shown that NF-κB plays an important role in cardiac remodeling after MI in animal models and humans [19,35,36]. Furthermore, it has been demonstrated that the NF-κB subunits p50 and p65 translocate to the nucleus up to 24 hours after infarction, yet only the p65 subunit is consistently stimulated in animals that experience poor LV remodeling and heart failure [35]. The relationship between innate immunity and LV remodeling after MI remains unclear, although it appears that chronic activation of NF-κB is detrimental to remodeling of the LV [3].Three genetic programs have been identified as being controlled by the NF-κB pathway, with the timing and type of NF-κB activation being key events [37]. These programs include hypertrophy, acute cardiomyocyte protection from ischemia/reperfusion, and chronic cardiomyocyte injury due to a prolonged inflammatory response [30,31,37]. In all three programs, chronic activation of NF-κB appears to represent a maladaptive process that perpetuates the inflammatory process and compromises repair of LV function due to the presence of increased fibrosis, reduced myocardial contractility, increased hypertrophy in remote areas, and deterioration of diastolic function [37].
Recently, Gao and colleagues showed a decrease in IκBα phosphorylation in RAW 264.7 cells pretreated with diacerein and stimulated with lipopolysaccharide (LPS) [38]. The decrease in IκBα phosphorylation was also accompanied by a marked decrease in nuclear levels of NF-κB p65 compared to the cells that were not pretreated with diacerein [38]. These observations in an in vitro model are very similar to those described here in an animal model.
In the present study, diacerein partially blocked the inflammatory process after MI by inhibiting the NF-κB p65 subunit. This effect was associated with improved remodeling of the LV over a period of 4 weeks post-MI. However, different doses of diacerein were not evaluated to investigate a potential dose-dependent effect. The dose administered was used in a previous study [17].
There were limitations associated with the present study. First, it is possible that the observed decrease in contractility in the diacerein group was due to greater modulation of the parasympathetic system. The current data do not address this, and further studies are needed. Second, a clear beneficial effect of diacerein on left ventricular remodeling was observed in the present study in the Western blot assays performed. Lower levels of p-IκBα and nuclear NF-κB p65 were detected in the diacerein group, and similar findings have been reported by Gao and colleagues [38]. Third, all of the Western blots were performed using total tissue extracts instead of cytosolic fraction extracts, and this may explain the similar observations made regarding IκBα expression levels.
Conclusions and Clinical Implications
In the presence of diacerein, a certain degree of inflammatory blockage, although not complete, was observed in the rat MI model investigated. This blockade may have contributed to the beneficial effects that were associated with LV remodeling. Given that there are several factors involved in cardiac remodeling, it is encouraging to find that inhibition of NF-κB is capable of influencing many aspects of this process. Thus, administration of an NF-κB inhibitor may represent a reasonable therapeutic approach for the treatment of MI. | 4,727.8 | 2015-03-27T00:00:00.000 | [ "Biology", "Medicine" ] |
Study of Interactive Systems Based on Brain and Computer Interfaces
Scientists have always looked for ways to create an effective relationship between humans and machines, so that this relationship is as close as possible to human interaction, since even the most sophisticated machines have little effect without human intervention. This association results from brain-generated neural responses due to motor or cognitive activity. Communication methods include muscle and non-muscle activities that create brain activity, or brainwaves, which lead a hardware device to perform a specific task. BCI was originally designed as a communication tool for patients with neuromuscular disorders, but thanks to recent advances in BCI devices such as passive electrodes, wireless headsets, adaptive software, and cost reduction, it has also come into use beyond clinical populations. The BCI is a bridge between the signals generated by thoughts in our brain and machines. BCI has been a successful development in the field of brain imaging and can be used in a variety of areas, including assisting motor activity, vision, hearing, and recovery from damage the body sustains. BCI devices record brain responses using invasive, semi-invasive, and non-invasive methods, including electroencephalography (EEG), magnetoencephalography (MEG), and magnetic resonance imaging (MRI). The brain responses are then translated, using pattern recognition methods, into commands that control an application. In this article, a review of various feature extraction techniques and classification algorithms for brain data is presented, along with a comparative analysis of existing BCI techniques.
Introduction
The human brain is the largest and most sophisticated organ of the human nervous system; it consists of billions of neurons and can be considered a multiprocessor system that receives information from the different organs of our body and controls our processing and activities. The brain is a great asset, but the lack of a proper interface between man and machine can lead to breakdown: injuries to the brain can cause inappropriate functioning and sometimes leave the skeletal muscle system unable to work well. If the musculoskeletal system does not respond, BCI can serve as a brain-computer interface and, in turn, provide a platform for a wide range of applications [34]. New paradigms such as neuroscience, artificial intelligence, cognitive science, and the brain-computer interface (BCI) are used to deepen the understanding of the brain. The further development of BCI to provide assistance for people with musculoskeletal disabilities can open new doors for them [6].
Figure 1 shows the performance of the BCI system relative to the classical operation of the brain. BCI builds on advances in brain mapping, interpreting the language of neurons and putting that interpretation to use. BCI is a blessing for people who have suffered brain damage and are limited by physical disabilities; combined with BCI, the brain can lead an injured person back to normal life. BCI programs record brainwaves and send signals to a device that can carry out the intended task. The responsibility of BCI is to translate electrophysiological activity into device instructions that allow people with physical disabilities to continue a normal life. It is a straightforward brain-to-machine platform. Beyond medical applications, BCI can perform tasks outside traditional programs, such as entertainment, gaming, experimentation, and learning, translating thoughts into actions, which is now recognized as a vital area of use [37].
Fig. 1: Execution of a command by the brain versus by the BCI [23].
The conventional BCI system includes a signal acquisition system, a signal processing technique, and an output device. Signal acquisition can be done in three ways: invasive, non-invasive, and semi-invasive. Invasive techniques acquire signals through micro-electrodes that penetrate the brain. In semi-invasive techniques, electrodes are placed under the skull but not in the gray matter. Non-invasive techniques place the electrodes on the scalp without surgery. Some of the non-invasive techniques used to capture brain signals are electroencephalography (EEG) [5], magnetoencephalography (MEG) [8], and magnetic resonance imaging. Non-invasive techniques are widely used for research because they do not risk damage to brain tissue.
Brain signals are amplified and processed by well-known signal processing devices before use. Signal processing involves filtering, feature extraction, and classification of brain potentials or brain signals. The important task for scientists and researchers is to remove contamination and extract useful information. Feature extraction involves removing noisy and artifactual data so that clean data can be used to develop BCI applications. Different feature extraction algorithms (also known as transformations) are used to convert the original data into a feature vector, such as Independent Component Analysis (ICA) [35], Common Spatial Patterns (CSPs) [27], linear filtering, the Fast Fourier Transform (FFT), and the Discrete Wavelet Transform (DWT). The resulting feature vectors are then assigned to the desired classes by classification algorithms such as Linear Discriminant Analysis (LDA), Support Vector Machines (SVMs) [30], Neural Networks (NNs), Fuzzy Inference Systems (FISs) [36], and many more. Finally, the processed signals are used to drive prosthetics, wheelchairs, electrical equipment, or computers.
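The processing chain just described (filtering, feature extraction, classification) can be sketched as a small pipeline. The band limits, Welch-based log-power features, and LDA classifier below are illustrative choices, not steps prescribed by the text.

```python
# Minimal sketch of a filter -> feature -> classifier chain for epoched EEG.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def preprocess(eeg, fs=250, band=(8.0, 30.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)            # band-pass filter each channel

def extract_features(eeg, fs=250):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))  # PSD per channel
    return np.log(psd.mean(axis=-1))                 # one log-power feature per channel

def train(trials, labels):
    # trials: (n_trials, n_channels, n_samples); labels: (n_trials,)
    X = np.array([extract_features(preprocess(t)) for t in trials])
    return LinearDiscriminantAnalysis().fit(X, labels)
```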
The survey is organized as follows: Section 2 explains the various modes of signal acquisition, including the techniques and devices available for recording brain signals. Section 3 reviews the various methods of feature extraction. Section 4 covers the classification of the different types of signals. Section 5 contains our conclusions and some directions for future research.
Signal acquisition methods
The first step in using a brain signal to retrieve information is to obtain an appropriate signal. There are three types of signal acquisition techniques, outlined below.
Invasive
This method reads signals from within the gray matter of the brain. It requires inserting an electrode into the brain and therefore surgery; it can potentially yield the best quality of information, but it is rarely used because of the human body's resistance to any foreign object.
Semi-invasive
Here the device reads signals from outside the gray matter of the brain, so the received signal is expected to be clearly weaker than with invasive methods, and there remains some risk of harming or damaging the brain.
Semi-invasive methods are as follows.
2.2.1. Penetration of micro-electrodes into the brain
Micro-electrodes penetrate into the gray matter (the region in which the neurons are located) to produce brain signals of higher quality and power than non-invasive techniques. The challenge of this method is to obtain high-quality signals, balancing the neural activity captured against the number of electrodes required, and to record a stable signal over a long period. Micro-electrodes were first used to record brain signals in a monkey.
Electrocorticography (ECoG)
ECoG is an invasive technique that requires surgically opening the skull to implant the device, but after implantation it can be used outside the operating room. ECoG produces far more detailed information than EEG because the electrodes are placed directly on the brain surface; this level of detail has enabled individuals to play computer games (Pong, Space Invaders) through mental effort alone. However, ECoG only provides a superficial view of the brain, whose activity is governed by neural networks spanning different brain structures, some of them lying deeper in the brain, such as the thalamus and the hippocampus; depth electrodes have been developed to reach these deep structures [1]. Kang [2] confirmed the ECoG performance for two-dimensional tasks and how it compares with invasive methods. In addition, epidural ECoG (EECoG), a variant of ECoG, offers further progress over standard ECoG. These methods provide rehabilitation for individuals with minimal invasiveness and significant accuracy. The electrodes are located near the surface of the cortex (the outer layer of the nerve tissue). ECoG includes two electrode placement systems: the first is a set of electrodes arranged evenly on interchangeable plastic or silicone strips, with higher electrode densities used to improve spatial resolution; the second places separate electrodes on the spherical surface. ECoG recordings contain fewer artifacts than EEG, and this technique is less susceptible to contamination.
Non-invasive
This is the most popular approach: the electrodes are placed on the scalp to extract the signal, offering acceptable signal quality at low cost and with easy implementation. Non-invasive BCI methods include electroencephalography (EEG), MEG, magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET). The figure below compares some types of BCI in terms of invasiveness and performance range. To acquire a signal from the brain, electrodes are attached to the scalp with the help of an adhesive paste. This is an easy way to track activity in separate brain regions. During various brain activities there are physiological neural phenomena, some of which can be detected in the EEG signal: some depend on an external stimulus, some arise in the brain without external stimuli during normal tasks, and others can be produced after a learning process [4]. EEG has many medical applications and many applications in rehabilitation. In this section, EEG is treated as the non-invasive method for signal acquisition. The advantages of this technique are its small size, low cost, and portability, and it does not carry the risks of an invasive approach [5]. Some of the devices on the market include NeuroSky, Neuroscan, EMOTIV EPOC, and Brain Products. Various studies of EEG signals have been reviewed by Ramadan et al. [6]. At present, researchers are trying to build EEG systems that are reliable and offer high-quality signals.
Brain signals recorded by EEG devices are grouped by frequency into what is called rhythmic brain activity or EEG rhythms [6]. Delta waves (1-4 Hz) are commonly found in infants and during sleep. Theta waves (4-7 Hz) are commonly found in rodents; in humans they also appear during meditation, unconsciousness, or drowsiness. Alpha waves (8-13 Hz) are observed in humans in a quiet state with closed eyes. Mu waves lie within the alpha frequency range and relate to activity in the motor cortex region. Beta waves (14-30 Hz) occur when a person is alert, attentive, or thinking. Gamma waves (> 30 Hz) are generated during voluntary movements or when certain stimuli are presented. Figures 3 to 6 show different EEG rhythms recorded from healthy subjects in an appropriate environment during normal activities. When designing a BCI system, some important characteristics must be taken into account. The first is noise and artifacts, because EEG signals have a weak signal-to-noise ratio. Second, in BCI systems the feature vectors are often high-dimensional: several features are generally extracted from several channels and several time segments before being concatenated into a single vector. The third characteristic is time information, since brain activity patterns are generally related to specific time variations of the EEG. Finally, BCI features are non-stationary, because EEG signals may vary over time, especially across sessions [7].
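The rhythm bands listed above can be expressed as a small lookup table; a helper then maps a dominant frequency to its band name. The band edges follow the text, except the gamma upper limit, which is an assumption.

```python
# EEG rhythm bands from the text, with a helper that names the band of a frequency.
EEG_BANDS_HZ = {
    "delta": (1, 4),
    "theta": (4, 7),
    "alpha": (8, 13),
    "beta":  (14, 30),
    "gamma": (30, 100),   # upper limit assumed; the text only says > 30 Hz
}

def band_of(freq_hz: float) -> str:
    for name, (lo, hi) in EEG_BANDS_HZ.items():
        if lo <= freq_hz <= hi:
            return name
    return "outside the usual EEG range"

print(band_of(10))   # 'alpha' - quiet state, eyes closed
print(band_of(22))   # 'beta'  - alert, attentive, or thinking
```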
Magnetoencephalography (MEG)
MEG was first used by David Cohen to measure the brain's electromagnetic activity in a healthy person and in a seizure patient. Like EEG, MEG captures the neurophysiological activity produced by neurons in response to a stimulus, but in the form of magnetic fields generated by neural currents. MEG has a clear spatial resolution and is not affected by muscle-movement artifacts [8]. MEG systems are based on superconducting quantum interference devices (SQUIDs), introduced in the 1960s; the SQUIDs sit in large liquid-helium units that keep the system at −269 °C, giving low impedance at low temperature. The SQUID device detects and amplifies the magnetic field generated by neural activity. Using MEG, researchers have recorded differences in responses to speech between bilingual and monolingual 11-month-old infants [9]. In a recent study, a solution to source reconstruction using MEG and EEG data was developed by creating a hierarchical Bayesian algorithm; a maximum-likelihood procedure with convergence rules was used, with auditory, visual, and face-processing data for simulation, and the authors considered only location information when applying the algorithm. Ford et al. [11] performed statistical analysis of spatio-temporal MEG data: they presented auditory stimuli to their subjects and examined the difference in continuous MEG data between frequent and novel stimuli, observing that cortical activity is greater for novel stimuli than for frequent ones.
Magnetic Resonance Imaging (MRI)
MRI provides information about the brain through complete images produced using magnetic fields and radio waves. Alongside detecting abnormalities, MRI helps to diagnose their causes and thus helps identify potential recovery strategies. MRI can image both sides of the brain, which in general makes it possible to determine which side and which part of the brain are damaged. MRI is capable of detecting abnormalities in the brain at an early stage and typically provides brain images with high contrast, so it performs better than computed tomography (CT).
Functional Magnetic Resonance Imaging (fMRI)
fMRI adds the ability to capture specific brain regions while the subject performs particular tasks; changes in blood flow during a task indicate which specific areas of the brain are active. fMRI provides strong evidence because hemoglobin in our brain is magnetic, so the magnetic field can detect activation even with short-duration stimulation. Relative differences in the hemoglobin response in different regions make it possible to detect whether those regions are functioning properly.
In one study [12], fMRI was performed on preterm infants; the method was used to predict early damage to the brain and its effect on neural development. In another work, a new framework was used to improve fMRI detection accuracy [13].
To increase the detection rate, the signal was extracted by dividing the large brain volume into stimulus-specific portions: the brain volume was segmented at specific time points relative to the stimulus, and statistical analysis was applied at each time slice. The signals were then processed with a non-standard, non-scheduled timing method. For the analysis of fMRI data, some of the available software packages are GE's BrainWave [14], AFNI (Analysis of Functional NeuroImages), BrainSuite, and BrainVoyager.
2.3.5 Functional Near-Infrared Spectroscopy (fNIRS)
fNIRS uses light from the near-infrared part of the electromagnetic (EM) spectrum to study the oxygenation and deoxygenation of hemoglobin in the brain, which occur in response to stimuli or during activity. The measurement is done by three methods: continuous wave (CW), time-resolved (TR), and frequency domain (FD); fNIRS-CW is the most common system. fNIRS acquisition is done by placing an optode on the scalp; the optode comprises a source and a detector. Electromagnetic waves from the source travel from the scalp into the brain and are then picked up by the detector, and the change in intensity when stimuli are introduced is recorded and analyzed with existing methods. Acquiring a signal with fNIRS is costly, but relatively cheaper than fMRI. Opri et al. developed an fNIRS-based BCI system in which feature extraction and classification were performed on data obtained during motor tasks. The fNIRS signals were obtained from the cerebral cortex; after filtering the raw data and removing noise, features such as the mean, skewness, peak, and kurtosis were extracted, and a genetically optimized SVM was then used for classification [15].
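A sketch of the feature set named above (mean, skewness, kurtosis, peak) feeding an SVM follows. The genetic-algorithm tuning mentioned in the text is omitted and the signal shapes are assumed, so this is an illustration only.

```python
# Per-channel statistical features from fNIRS trials, then an RBF SVM.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def fnirs_features(trial):
    """trial: (n_channels, n_samples) hemodynamic responses -> feature vector."""
    return np.concatenate([
        trial.mean(axis=1),
        skew(trial, axis=1),
        kurtosis(trial, axis=1),
        trial.max(axis=1),          # peak value per channel
    ])

def train_svm(trials, labels):
    X = np.array([fnirs_features(t) for t in trials])
    return SVC(kernel="rbf").fit(X, labels)
```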
Single-Photon Emission Computed Tomography (SPECT)
SPECT is a nuclear medicine method that uses gamma rays to study the brain and gives a different view of how the brain functions. During a SPECT study of brain activity, a radioactive substance is injected into the patient's body and the patient is scanned with the SPECT device, which detects where the radioactive tracer is taken up by the brain. The tracer allows physicians to see how blood flows to tissues and organs and which areas of the brain are inactive or active. SPECT produces an average of brain activity over a few minutes, and by reading these images doctors can detect any reduction in brain activity.
Positron emission tomography (PET)
PET is another non-invasive method that measures brain function by injecting a positron-emitting radioactive tracer. Ramadan et al. [6] used short-lived radioactive drugs to monitor and diagnose disorders in the metabolic activity of the human body. A PET scan can analyze the amount of sugar consumed by the brain and thereby identify brain regions whose sugar metabolism is abnormally high or abnormally low. A positron camera is placed around the patient to provide cross-sectional tomographic imaging. PET generates a stronger signal than the other modalities, so signal correction is easy, and it also achieves a higher sensitivity that makes it suitable for clinical use.
The image formation process in PET, with proper counting statistics, can produce accurate results with relatively simple algorithms. One of the main drawbacks of PET, confirmed by numerous studies, is the cost of installation and maintenance. PET is commonly used to diagnose brain disorders. Fig. 7 shows the different frequency bands, their ranges, and their locations, and also specifies the meaning of above-normal, below-normal, and normal levels for each band.
Fig. 7. Frequency band, frequency range, location, and importance [24].
Linear Filtering
Linear filtering is usually used to eliminate noise in the form of signal components that do not belong to the frequency range of brain signals.
Linear filters are essentially classified into low-pass and high-pass filters. Artifacts are noise generated by endogenous sources (muscle, eye, and cardiac activity) or external sources (machine error). There are three techniques for handling artifacts when acquiring an EEG signal, as listed below.
- Artifact avoidance: preventing artifacts from occurring in the first place (e.g., asking the subject to avoid blinking).
- Artifact rejection: discarding contaminated trials.
- Artifact removal: removing the artifact from the recorded signal using preprocessing methods.
Results show that using wet electrodes instead of dry electrodes reduces noise in the cabling. The importance of large electrical and muscular artifacts, together with simultaneous uncontrolled stimuli, calls for online preprocessing and a reduced number of electrodes. Adaptive filtering and wavelet denoising are considered appropriate for EEG-based BCI. Improving signal acquisition techniques can help reduce the number of trials and the amount of subject training, and at the same time improve feature extraction and classification.
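The "artifact rejection" strategy listed above can be illustrated in a few lines: trials whose peak-to-peak amplitude exceeds a threshold (e.g., from eye blinks or muscle bursts) are discarded before feature extraction. The 100 µV threshold is a common rule of thumb, not a value taken from the text.

```python
# Simple amplitude-threshold trial rejection for epoched EEG.
import numpy as np

def reject_artifacts(trials, threshold_uv=100.0):
    """trials: (n_trials, n_channels, n_samples) EEG in microvolts."""
    peak_to_peak = trials.max(axis=-1) - trials.min(axis=-1)   # per trial, per channel
    clean_mask = (peak_to_peak < threshold_uv).all(axis=-1)    # keep fully clean trials
    return trials[clean_mask], clean_mask

trials = np.random.randn(20, 8, 500) * 20.0      # fake EEG, ~20 uV noise
trials[3, 0, 100:150] += 300.0                   # inject a blink-like artifact
clean, mask = reject_artifacts(trials)
print(f"kept {clean.shape[0]} of {trials.shape[0]} trials")
```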
Principal Component Analysis (PCA)
PCA [25,31] is another method; it reduces the dimensionality of the data while retaining as much of its variance as possible, using a transformation matrix and discarding components with low variance. For a dataset of N-dimensional elements $x_1, x_2, \dots, x_n$ (where $n$ is the total number of elements), the covariance matrix can be written as in Eq. (1): $C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^{T}$, and its eigendecomposition as in Eq. (2): $C = Y \Lambda Y^{T}$, where $Y$ is the matrix containing the eigenvectors $y_1, y_2, \dots$ and $\Lambda$ is the diagonal matrix of eigenvalues $\lambda_1, \lambda_2, \dots$. The transformation matrix is built from the leading eigenvectors, onto which the data are projected.
Conventional PCA transforms data records of different lengths; to measure distances between them, functions such as the Euclidean distance (ED) and dynamic time warping (DTW) are used. Li [26] applied dimensionality reduction to EEG data and proposed a novel, improved, and effective PCA that uses the covariance matrix to classify multivariate time series (MTS) based on time-dependent variables.
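A compact sketch of the eigendecomposition-based PCA described above follows: the data are centred, the covariance matrix is decomposed, and the data are projected onto the eigenvectors with the largest eigenvalues.

```python
# PCA dimensionality reduction via eigendecomposition of the covariance matrix.
import numpy as np

def pca(X, n_components):
    """X: (n_samples, n_features). Returns projected data and the components."""
    Xc = X - X.mean(axis=0)                      # centre the data
    cov = np.cov(Xc, rowvar=False)               # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)       # C = Y diag(lambda) Y^T
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    Y = eigvecs[:, order[:n_components]]
    return Xc @ Y, Y

X = np.random.randn(200, 16)                     # e.g. 200 feature vectors, 16 features
Z, components = pca(X, n_components=3)
print(Z.shape)                                   # (200, 3)
```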
Discrete Wavelet Transform (DWT)
Shen Sah used a spectral estimation technique in 1992 to express any general function as an infinite series of wavelets. The wavelet transform allows a signal to be analyzed in various frequency bands with different resolutions. The decomposition is accomplished using two sets of functions, called scaling and wavelet functions, which are associated with low-pass and high-pass filters, respectively. In the related MTS work, the coordinate space is then reduced using the eigenvectors of the common covariance matrix of each cluster. Dimensionality can be reduced by various methods such as PCA, SVD, ICA, and so on; PCA is the most commonly used, and the fewer principal components retained, the smaller the representation while still preserving most of the MTS information. CPCA (common principal component analysis) addresses issues such as the high time complexity of variable-length MTS and data mining accuracy, and a CPCA-based classification is independent of the distance function; hence CPCA has been used as an effective method for classifying MTS.
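The scaling/wavelet (low-pass/high-pass) filter-bank idea above can be tried directly with PyWavelets; the 'db4' wavelet and four decomposition levels are common choices, not values taken from the text.

```python
# Multi-level discrete wavelet decomposition of a single-channel EEG segment.
import numpy as np
import pywt

fs = 256                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy 10 Hz signal

coeffs = pywt.wavedec(eeg, wavelet="db4", level=4)
# coeffs = [cA4, cD4, cD3, cD2, cD1]: one approximation + detail bands, coarse to fine
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, c.shape, f"energy={np.sum(c**2):.1f}")
```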
Common Spatial Pattern (CSP)
CSP [27] designs a spatial filter that maximizes the variance ratio of the filtered signals between classes so that they can be discriminated; the frequency band and time window are treated as known parameters. It transforms multi-channel EEG data into a lower-dimensional space in which the variance is maximal for one class of a two-class signal matrix and minimal for the other. The steps below convert the EEG matrix [16]. For spatial filtering more generally, the most common alternatives to CSP are independent component analysis and the Laplacian filter; a variety of inverse models also allow the underlying sources to be estimated in three dimensions, and the extracted features can then be translated using different linear and nonlinear algorithms. First, the normalized spatial covariance of the EEG is computed (Eq. 3), then the composite spatial covariance and its eigendecomposition (Eqs. 4 and 5); the projection matrix $V$ is given by Eq. (6), the original EEG $X$ is decomposed into uncorrelated components by Eq. (7), and the reconstruction in Eq. (8) shows that the first and last columns of $P^{-1}$ carry the largest variance of one task and the smallest of the other.
(1) Calculate the normalized spatial covariance of the EEG for each class $k$: $C_k = \dfrac{X_k X_k^{T}}{\mathrm{trace}(X_k X_k^{T})}$ (3), where $k$ indexes the classes and $\mathrm{trace}(x)$ is the sum of the diagonal values of $x$.
(2) Compute the composite spatial covariance and its eigendecomposition: $C_c = C_1 + C_2$ (4), $C_c = V_c \lambda_c V_c^{T}$ (5), where $V_c$ is the eigenvector matrix and $\lambda_c$ the diagonal matrix of eigenvalues.
(3) The projection matrix is $V = B^{T} U$ (6), where $U = \sqrt{\lambda_c^{-1}}\, V_c^{T}$ is the whitening transformation matrix and $B$ contains the shared eigenvectors of the whitened class covariances. Using the projection matrix, the original EEG signal is decomposed into uncorrelated components $W = V X$ (7), where $W$ contains the source components of the EEG, including common and task-specific components.
(4) The original EEG $X$ is finally recovered as $X = P^{-1} W$ (8), where the columns of $P^{-1}$ are the spatial patterns, i.e., the EEG source distribution vectors.
(5) The first and last columns of $P^{-1}$ correspond to the largest variance of one task and the smallest variance of the other.
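A minimal two-class CSP sketch follows. Instead of the explicit whitening and rotation steps, the same filters can be obtained from the generalized eigenvalue problem $C_1 w = \lambda (C_1 + C_2) w$, an equivalent formulation; the trial shapes and number of filter pairs are assumptions for illustration.

```python
# Two-class CSP filters via a generalized eigendecomposition, plus log-variance features.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples). Returns (2*n_pairs, n_channels) filters."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]   # normalized spatial covariance
        return np.mean(covs, axis=0)

    C1, C2 = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, eigvecs = eigh(C1, C1 + C2)                       # generalized eigendecomposition
    order = np.argsort(eigvals)                                # small -> large
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]           # extremes carry the discriminative variance
    return eigvecs[:, picks].T

def csp_features(trial, W):
    Z = W @ trial                                              # spatially filtered signals
    var = Z.var(axis=1)
    return np.log(var / var.sum())                             # normalized log-variance features
```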
Ramoser and colleagues proposed the above formulation to design a spatial filter for the classification of EEG data; the variances of just a few of the filtered signals are sufficient for classification. The problem with this method is that if the signal is contaminated with artifacts, the filter design changes greatly, because the artifacts alter the covariance matrices used to estimate the spatial filters.
There is therefore a need for artifact-free EEG data. One limitation of using CSP is that it does not by itself provide filtered (clean) EEG signals. EEG data are not stationary, so we cannot guarantee that the same information will be extracted from the same subject at any given time; this is due to artifacts and other environmental conditions. Contaminated data affect the covariance estimates and, in turn, cause additional problems.
There are various extensions of CSP that can be applied readily to EEG data with good performance, such as (a) regularized CSP, (b) weighted CSP, and (c) stationary CSP.
Independent Component Analysis (ICA)
ICA is a blind source separation method that divides the data into independent components according to their statistical independence. ICA achieves good accuracy in removing artifacts, but it is hard to obtain components that contain only artifacts without also containing useful brain signal; therefore, ICA has been improved by combining it with other methods. An ICA-based algorithm using the temporal and spatial properties of independent components (ICs) was developed for the P300 speller by Neng Xu et al. [28]. ICA follows a simple linear mixing model: the observed signal matrix $X$ is modeled as $X = AS$ (Eq. 9), where $A$ is the mixing matrix and $S$ the matrix of independent sources, and the sources are recovered as $S = WX$ (Eq. 10), where $W$ estimates the inverse of $A$.
A number of studies and experiments have used ICA to remove artifacts, for example in fMRI time-series data and for removing muscle artifacts, eye blinks, and other contamination in auditory experiments combining transcranial magnetic stimulation and electroencephalography (TMS-EEG). A limitation of ICA is that there is no automatic way to choose the ICs, with the risk that the wrong components are selected. To address this limitation, ocular artifacts (OA) must be identified and removed from the ICA decomposition. This method follows two steps: first, a low-pass filter is applied to the EEG, and then the independent components are analyzed one by one. The artifact pattern generated by eye movement is identified by outlier analysis, and the corresponding components are zeroed. Once the artifactual independent components are removed, a meaningful EEG signal is recovered.
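As a hedged illustration of the two-step ocular-artifact removal described above, the sketch below uses scikit-learn's FastICA together with a hypothetical EOG reference channel to flag and zero artifactual components; the correlation threshold and the variable names are assumptions made for demonstration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular_components(eeg, eog_reference, threshold=0.7):
    """Zero out independent components correlated with an EOG reference channel.

    eeg: array of shape (n_channels, n_samples); eog_reference: array (n_samples,).
    The correlation threshold is an illustrative heuristic, not a standard value.
    """
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)             # shape (n_samples, n_components)
    # flag components whose time course tracks the EOG reference
    bad = [i for i in range(sources.shape[1])
           if abs(np.corrcoef(sources[:, i], eog_reference)[0, 1]) > threshold]
    sources[:, bad] = 0.0                           # zero the artifactual components
    cleaned = ica.inverse_transform(sources).T      # back-project to channel space
    return cleaned, bad
```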
Fast Fourier Transform (FFT)
In FFT-based methods, signal characteristics are analyzed using power spectral density estimation. The four frequency bands alpha, beta, gamma, and theta carry the major spectral content of the EEG. The characteristics of the EEG signal to be analyzed are obtained by estimating the power spectral density (PSD) of selected segments of the EEG samples.
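A minimal sketch of PSD-based band-power features using SciPy's Welch estimator is shown below; the band edges are conventional approximate values and are not taken from the cited work.

```python
import numpy as np
from scipy.signal import welch

# Conventional band edges (Hz); approximate values, not taken from the cited work.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs):
    """Return PSD-based band powers for a single-channel EEG signal sampled at fs Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(2 * fs)))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band
    return powers
```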
Signal Classification
Lotte et al. [18] describe classification as selecting the most suitable class for a given feature vector. A discriminative classifier such as the support vector machine (SVM) is a supervised learning model that learns how to assign a class to a feature vector.
• A static classifier, for instance a multilayer perceptron, cannot take dynamic temporal information into account during classification. A dynamic classifier, for example a hidden Markov model, can accommodate temporal variations.
• A stable classifier, such as linear discriminant analysis (LDA), is not affected by minor variations in the training data set. An unstable classifier, conversely, is complex, and even small changes in the training data can significantly alter its performance; an example is the multilayer perceptron.
• An unregularized classifier is prone to overfitting because its complexity is not controlled during training. A regularized classifier, on the other hand, is more robust and therefore generalizes better.
Figure 7 provides a detailed view of the types of classifiers used in BCI signal classification, together with examples of each type. Each family produces a characteristic kind of decision boundary; the main families are described below.
• Linear classifiers: These classifiers use linear functions to separate classes. They are simple and popular; examples are the SVM and LDA.
• Neural networks (NNs): They use artificial neurons to form nonlinear decision boundaries and are widely used in BCI systems; the multilayer perceptron is the most popular variant.
• Bayesian nonlinear classifiers: These classifiers are generative and create nonlinear decision boundaries, but they are not popular for real-time BCI because of their computational cost.
• Nearest-neighbor classifiers: Nearest-neighbor classification [29] is a simple family of classifiers that can produce nonlinear decision boundaries. A common example is based on the Mahalanobis distance (MD) [18], given by Eq (11): $d_M(x) = \sqrt{(x - \mu)^{T} \Sigma^{-1} (x - \mu)}$, where $\mu$ and $\Sigma$ are the mean and covariance of the class cluster. MD is a better option for classifying a sample point because it measures the distance from the whole cluster while taking its spread into account, whereas the Euclidean distance (ED) measures only the distance from the mean regardless of the spread of the data; MD is therefore more accurate than ED (a small sketch of this rule is given after this list).
• Combined classifiers: A recent and popular way to improve classification is to combine several classifiers. Different combination strategies exist, such as boosting, voting (the simplest and most popular), and stacking, in which the outputs of the individual classifiers serve as inputs to a meta-classifier. The most popular scheme combines the outputs of different classifiers according to the demands of the application. A brief introduction to some common signal classification methods is given below.
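To illustrate the Mahalanobis-distance rule of Eq (11) referenced in the list above, the following is a small NumPy sketch of a nearest-cluster classifier; the class interface is an assumption made for clarity, not a reproduction of any cited implementation.

```python
import numpy as np

class MahalanobisClassifier:
    """Assign each sample to the class whose cluster is nearest in Mahalanobis distance."""

    def fit(self, X, y):
        self.stats_ = {}
        for label in np.unique(y):
            xc = X[y == label]
            mu = xc.mean(axis=0)
            cov = np.cov(xc, rowvar=False)
            self.stats_[label] = (mu, np.linalg.pinv(cov))  # pseudo-inverse for numerical stability
        return self

    def predict(self, X):
        preds = []
        for x in X:
            dists = {label: float(np.sqrt((x - mu) @ inv_cov @ (x - mu)))  # Eq. (11)
                     for label, (mu, inv_cov) in self.stats_.items()}
            preds.append(min(dists, key=dists.get))
        return np.array(preds)
```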
Support Vector Machine (SVM)
SVM is an effective classifier owing to its training speed, insensitivity to overtraining, robustness, and ability to generalize. The SVM effectively maps the input vector X to a scalar value f(X), as given in Eq (12) [30]:
$f(X) = \sum_{i=1}^{N} \alpha_i\, y_i\, K(X_i, X) + b$,
where N is the number of support vectors, b represents the bias, $\alpha_i$ are adjustable weights, $y_i$ is a scalar in the range {-1, 1}, $X_i$ are the support vectors, and $K(X_i, X)$ is the kernel.
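A minimal scikit-learn sketch of an SVM applied to EEG-derived feature vectors follows; the feature matrix X and labels y are placeholders, and the RBF kernel and hyperparameters are illustrative defaults rather than values from the cited studies.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_features) feature matrix, e.g. log band powers or CSP variances.
# y: (n_trials,) labels in {-1, +1}; both are placeholders for real data.
def evaluate_svm(X, y):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    return scores.mean(), scores.std()
```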
However, a high-dimensional feature space can cause higher generalization errors in SVM. An improvement based on granular computing and statistical machine learning, the granular SVM (GSVM), has been proposed by Guo and Wang [19]. GSVM can process fuzzy, noisy, uncertain, incomplete, and large data.
The highlighted features of GSVM are:
- GSVM generalizes better than a traditional SVM because it divides the whole feature space into sub-spaces (granules).
- Granular SVM can transform a linearly non-separable problem into a linearly separable one.
- GSVM allows parallel execution because the training data of the various granules are independent of one another.
- It finds an approximate but low-cost, simple solution rather than a precise, costly one.
Neural Networks
NNs are inspired by biological nervous systems and share features such as parallel computation, nonlinearity, adaptability, responsiveness, and fault tolerance. The processing units of an NN are called neurons and are connected by weights, which can be positive or negative. Inputs are weighted and passed through the processing units, which include a summation stage that ultimately connects to the output.
Convolutional Neural Network (CNN)
A CNN is another type of NN that is similar in architecture to the MLP. It arranges neurons along three dimensions: width, height, and depth. A CNN has been used for classifying EEG data based on the P300 component of the ERP [32]. A CNN can be built as a five-layer architecture: (1) input layer, (2) convolutional layer, (3) rectified linear unit layer, (4) pooling layer, and (5) fully connected layer. Let L denote a layer of the CNN architecture, x the input vector, w the weight vector, M the number of maps in layer L, m a particular map, J the number of neurons in layer L, Ne the number of electrodes, Ns the number of signal values, and Np the number of partitions of the signal values. Eqs. (13) to (16) apply to the different layers of the CNN and are used to update the weights.
For layer 1, the input is x_ij, taken from the input layer L0, with 0 ≤ i < Ne and 0 ≤ j < Nt, where Nt is the number of time points considered for analysis (Eqs. 14 and 15).
The CNN classifier used to classify EEG data based on the P300 ERP component includes five layers and several maps. The output layer consists of a map with two neurons, representing the two classes (class 1, P300 present; class 2, no P300). In the first layer of the CNN, the initial filters (weights) are applied across the width and height of the input, after which processing is carried out in the time domain. Here the kernel is used as a vector rather than a matrix (Eqs. 17 and 18), and a linear sigmoidal function is applied between hidden layer 1 and hidden layer 2.
In hidden layer 2, the convolution of the input signal can be represented as in [33], where σ denotes the activation; a classical sigmoidal function is used between the last two hidden layers (Eq. 18). In the output layer, the class score is calculated according to Eq (19).
CNNs achieve better classification than other classifiers because of their hidden layers, but the number of layers needed to obtain the best classification cannot be determined in advance. In another work, Cecotti et al., using Eqs. (13) to (16), classified a set of images into human faces (target) and others (non-target) [34]. In their work, a CNN is combined with a spatial filter; filtering and classification are performed on the ERP signals produced during the experiment. The learning method is based on maximizing the area under the curve (AUC). They compared the CNN with SVM and BLDA (each with and without the spatial filter), and the findings suggest that the CNN outperforms the other classifiers.
An AUC-based CNN requires no prior knowledge of the type of spatial filter, but it does require prior information about the network architecture; the choice of the number of neurons and of the spatial filter therefore depends on previous experiments, which in turn affects the overall performance of the network. To achieve optimal classification performance, the CNN must be used with a judicious choice of neurons and hidden layers.
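The sketch below shows, in PyTorch, the general shape of such a P300 CNN with a spatial (across-electrode) convolution followed by a temporal convolution and two output neurons; the layer sizes, kernel widths, and names are illustrative assumptions and do not reproduce the architecture of the cited works.

```python
import torch
import torch.nn as nn

class P300Net(nn.Module):
    """Small CNN for binary P300 detection on (n_electrodes, n_samples) epochs."""

    def __init__(self, n_electrodes=64, n_samples=128):
        super().__init__()
        self.spatial = nn.Conv2d(1, 10, kernel_size=(n_electrodes, 1))            # spatial filter per map
        self.temporal = nn.Conv2d(10, 20, kernel_size=(1, 13), stride=(1, 13))    # temporal convolution
        self.act = nn.Sigmoid()
        out_len = (n_samples - 13) // 13 + 1
        self.fc = nn.Linear(20 * out_len, 2)      # two output neurons: P300 / no P300

    def forward(self, x):                          # x: (batch, 1, n_electrodes, n_samples)
        x = self.act(self.spatial(x))
        x = self.act(self.temporal(x))
        return self.fc(x.flatten(start_dim=1))

# usage: logits = P300Net()(torch.randn(8, 1, 64, 128))
```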
Probabilistic Neural Network (PNN)
The PNN was introduced by Specht in 1990 and is based on the Bayes rule; the goal of the PNN is non-parametric estimation of probability densities in order to obtain optimal accuracy. The benefits of the PNN are that it is easy to implement (training is much faster than backpropagation), it has a parallel structure that converges to an optimal classifier, it does not get trapped in local minima, and it can operate in real time. The appropriate choice of the smoothing parameter (σ) helps shape the decision regions.
The benefits of probabilistic neural networks thus include rapid training, an inherently parallel structure, a guarantee of the best possible classification given adequate training data, and the ability to incorporate new training data without retraining the whole network [21].
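A compact NumPy sketch of the PNN decision rule (Parzen-window class densities with a Gaussian kernel followed by a Bayes-style argmax) is given below; the smoothing parameter sigma and the function signature are illustrative assumptions.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Probabilistic neural network: Parzen-window density per class, Bayes decision.

    sigma is the smoothing parameter that shapes the decision regions.
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            xc = X_train[y_train == c]
            # Gaussian kernel between the test point and every training pattern of class c
            d2 = np.sum((xc - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```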
Fuzzy Inference System
In 1968, Zadeh suggested that in the real world not all classes or collections have crisp membership, such as yes or no, right or wrong, or exact real numbers; he therefore introduced the concept of fuzzy sets. A fuzzy set is a set without a sharp boundary, and the transition from a crisp boundary to a flexible one is described by a membership function (MF). Exploiting the flexible boundaries of fuzzy sets, many authors have used fuzzy inference systems in BCI applications. In one work, a fuzzy inference system was used to select the number of EEG channels for imagined speech (in Spanish) [22], with the discrete WT used as the feature extraction approach.
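As a simple illustration of a membership function with a flexible boundary, the following sketch defines a triangular MF; the band edges in the usage comment are assumed values chosen only for demonstration.

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g. degree to which an 11 Hz rhythm is "alpha-band":
# triangular_mf(11.0, a=7.0, b=10.5, c=14.0) ≈ 0.86
```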
Neuro-Fuzzy Systems
Neuro-fuzzy systems combine the advantages of NNs and fuzzy inference systems (FIS). Their architecture is similar to that of an NN, and the inputs, the weights, or both are fuzzy. The fuzzy neural network (FNN) identifies the fuzzy rules and tunes the membership functions by adjusting the connection weights; many applications use FNN-1, FNN-2, and FNN-3. A neuro-fuzzy system is trained by a learning algorithm derived from neural network theory. The learning method acts on local information and produces only local changes in the underlying fuzzy system.
A neuro-fuzzy system can be viewed as a three-layer feedforward neural network: the first layer represents the input variables, the middle (hidden) layer represents the fuzzy rules, and the third layer represents the output variables. Fuzzy sets are encoded as connection weights. Representing a fuzzy system in this way does not require a learning algorithm, but it is convenient because it reflects the flow of data through the model during processing and learning.
Sometimes a 5-layer architecture is used, where fuzzy sets appear in units of the second and fourth layers.
A neuro-fuzzy system can always be interpreted as a system of fuzzy rules (before, during, and after learning). The system can be created from training data from scratch, or it can be initialized with fuzzy rules that encode prior knowledge. Not all neuro-fuzzy models specify learning methods for creating the fuzzy rules.
The learning method of a neuro-fuzzy system takes the semantic properties of the underlying fuzzy system into account, which results in constraints on the possible changes to the system parameters. Not all neuro-fuzzy methods have this property.
A neuro-fuzzy system approximates an unknown function that is partially defined by the training data. The fuzzy rules encoded within the system are vague prototypes of the training data rather than exact examples. A neuro-fuzzy system should not be regarded as a (fuzzy) expert system, and it has no connection with fuzzy logic in the narrow sense.
Conclusion
The brain-computer interface creates a channel that enables users to control a computer through their thoughts.
BCI is an interdisciplinary research area spanning various aspects such as the understanding, acquisition, and processing of brain signals; BCI research draws on psychology and neuroscience, engineering, computer science, and applied mathematics. In this paper, we present a comprehensive review of each BCI stage. The first stage of a BCI is the acquisition of brain signals. There are three types of signal acquisition systems: non-invasive, semi-invasive, and invasive. Invasive acquisition involves placing microelectrodes and electrode chips under the scalp through surgery. Non-invasive techniques record brain potentials with metallic electrodes placed on the scalp (as in EEG) or record brain activity and blood flow through dedicated devices (such as MEG and fMRI). This paper presents a summary of these methods and provides recent information on their use.
Common BCI systems use discriminative models for classification.
However, researchers are increasingly interested in deep learning methods such as deep belief networks and CNNs, as well as in combinations of different classification algorithms. A BCI is useful only when all of these stages work well together. The main goal of BCI research is to provide a better communication approach, although the methods used to achieve this goal can differ.
4.2.2 Perceptrons and Multilayer Perceptron
Artificial neural networks offer a wide range of nonlinear classifiers, most notably the MLP. Each neuron in the ANN mimics a neuron of a biological neural network, and a suitable architecture can lead to effective classification; at the same time, the MLP is a complex classifier, and minor changes can lead to dramatic changes in the results. The experimental results did not differ between conditions with feature extraction (f), without feature extraction (nf), with preprocessing (p), and without preprocessing (np).
Table 1. Comparison between Various Modes of Acquisition.
3. Feature extraction
The popularity of non-invasive techniques for collecting signals avoids tedious work and extensive data filtering. This stage involves extraction from raw signals that include noise and artifacts produced by the eyelids, muscle movement, hair, sweat, and other factors. These signal acquisition methods record different types of brain potentials, such as those generated by motor activity, cognitive activity, eye movement, or external stimuli. Researchers prefer non-invasive techniques to invasive methods because they are not prone to causing damage; their only limitation is that their signal resolution is lower than that of invasive methods. Future work can develop brain signal acquisition devices with lower-density electrodes and clearer signals. The second stage involves processing the brain signals. In this paper, various feature extraction and classification algorithms are discussed. To extract useful signals and remove artifacts produced by eye and muscle movement, feature extraction techniques including linear filters, CSP, PCA, ICA, FFT, and DWT are used. ICA is best suited for the removal of artifacts and has been widely used by various researchers; CSP and its variants are used to filter the spatial signals of the brain; PCA helps transform the feature space; and DWT helps extract time and frequency information from the raw signals. Classification algorithms such as LDA, SVM, NNs, and fuzzy inference systems are applied to the features obtained with these extraction techniques. | 8,950.2 | 2019-09-13T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Sarbecovirus ORF6 proteins hamper induction of interferon signaling
The presence of an ORF6 gene distinguishes sarbecoviruses such as severe acute respiratory syndrome coronavirus (SARS-CoV) and SARS-CoV-2 from other betacoronaviruses. Here we show that ORF6 inhibits induction of innate immune signaling, including upregulation of type I interferon (IFN) upon viral infection as well as type I and III IFN signaling. Intriguingly, ORF6 proteins from SARS-CoV-2 lineages are more efficient antagonists of innate immunity than their orthologs from SARS-CoV lineages. Mutational analyses identified residues E46 and Q56 as important determinants of the antagonistic activity of SARS-CoV-2 ORF6. Moreover, we show that the anti-innate immune activity of ORF6 depends on its C-terminal region and that ORF6 inhibits nuclear translocation of IRF3. Finally, we identify naturally occurring frameshift/nonsense mutations that result in an inactivating truncation of ORF6 in approximately 0.2% of SARS-CoV-2 isolates. Our findings suggest that ORF6 contributes to the poor IFN activation observed in individuals with coronavirus disease 2019 (COVID-19).
INTRODUCTION
An unusual outbreak of infectious pneumonia in Wuhan, Hubei Province, China, was first reported in December 2019. Within a few weeks, a novel coronavirus (CoV) was identified as the causative agent of this infectious disease, CoV disease 2019 (COVID-19) (Zhou et al., 2020b). Because this virus is phylogenetically related to severe acute respiratory syndrome (SARS)-CoV, it was called SARS-CoV-2. As of January 2021, SARS-CoV-2 is an ongoing pandemic; approximately 100 million cases of SARS-CoV-2 infection have been reported worldwide, and more than two million people have died of COVID-19 (WHO, 2020). Therefore, a better understanding of SARS-CoV-2 infection and pathogenesis is urgently needed.
SARS-CoV and SARS-CoV-2 are closely related; both belong to the family Coronaviridae, genus Betacoronavirus, and subgenus Sarbecovirus (Zhou et al., 2020b). SARS-CoV-related viruses have been detected in palm civets (Paguma larvata), Chinese rufous horseshoe bats (Rhinolophus sinicus) (Li et al., 2005), and a variety of additional bat species (mainly of the genus Rhinolophus) (Ge et al., 2013; He et al., 2014; Hu et al., 2017; Lau et al., 2010; Lin et al., 2017; Tang et al., 2006; Wang et al., 2017; Wu et al., 2016b; Yuan et al., 2010). Similarly, SARS-CoV-2-related viruses have been identified in intermediate horseshoe bats (Rhinolophus affinis) (Zhou et al., 2020b) and a Malayan horseshoe bat (Rhinolophus malayanus) (Zhou et al., 2020a) as well as Malayan pangolins (Manis javanica) (Xiao et al., 2020). This suggests that zoonotic viral transmission from horseshoe bats to humans led to the emergence of human pathogenic sarbecoviruses, including SARS-CoV and SARS-CoV-2. In addition to sarbecoviruses, other human pathogenic betacoronaviruses have been reported. Middle East respiratory syndrome (MERS)-CoV belongs to the subgenus Merbecovirus (Zaki et al., 2012), whereas the human CoVs OC43 (McIntosh et al., 1967) and HKU1 belong to the subgenus Embecovirus. Non-human betacoronaviruses have also been reported; HKU4 and HKU5 are included in the subgenus Merbecovirus (Woo et al., 2007), whereas murine hepatitis virus (MHV), belonging to Embecovirus, is a pathogenic CoV in mice (Lai et al., 1981; Spaan et al., 1988). Some CoVs classified in the subgenus Hibecovirus have been detected in bats (Quan et al., 2010; Wu et al., 2016a, 2016b) but have not yet been found in humans. Human betacoronaviruses are not only phylogenetically related but also cause similar respiratory symptoms, such as cough and pneumonia (reviewed in Chan-Yeung and Xu, 2003; Weiss, 2020). Nevertheless, they differ significantly in their pathogenicity and severity of infection. For instance, SARS-CoV (WHO, 2004) and MERS-CoV (WHO, 2019) are highly pathogenic, whereas OC43 and HKU1 cause relatively mild diseases (reviewed in Chan-Yeung and Xu, 2003; Perlman and Masters, 2020; Weiss, 2020). These features suggest that a direct comparison of their genome structures and the functions of their viral proteins will help us understand determinants of disease.
In the present study, we show that an ORF6 gene is commonly encoded in all sarbecoviruses, including SARS-CoV and SARS-CoV-2, whereas no orthologs are found in other betacoronaviruses, such as MERS-CoV, OC43, and MHV. We demonstrate that all Sarbecovirus ORF6 proteins inhibit induction of IFN-I upon viral infection as well as antiviral signaling triggered by IFN-I/III. Intriguingly, the anti-IFN activities of the ORF6 proteins of SARS-CoV-2 lineages are more potent than those of SARS-CoV lineages. We further provide evidence suggesting that the emergence of SARS-CoV-2 variants expressing truncated ORF6 proteins may contribute to attenuation of viral pathogenicity.
RESULTS
ORF6 is conserved in the subgenus Sarbecovirus but absent from other betacoronaviruses
We first assessed the phylogenetic relationship of betacoronaviruses, including SARS-CoV, SARS-CoV-2, MERS-CoV, OC43, and HKU1. The respective viral strains were classified based on their subgenera in the phylogenetic tree of the full-length viral genome (Figure 1A; Table S1) as well as five viral core genes encoding ORF1ab, spike (S), envelope (E), membrane protein (M), and N (Figure 1B). The inconsistent phylogenetic topologies of different viral genes of sarbecoviruses (Figure 1B) are in agreement with recent studies suggesting gene recombination events among sarbecoviruses (Boni et al., 2020; Li et al., 2020b). However, some viral core genes such as E (encoding 75 amino acids in the case of SARS-CoV-2) are relatively short, making it difficult to reliably infer their phylogenetic relationships. Indeed, two viruses belonging to the Sarbecovirus outgroup, BtKY72 and BM48, were separated in the phylogenetic tree of the E gene (Figure 1B). In contrast, the other six phylogenetic trees showed almost identical relationships among the five subgenera of betacoronaviruses (Figures 1A and 1B). These results suggest that viral recombination did not occur among the analyzed betacoronaviruses, although recombination events can occur among sarbecoviruses.
We then compared the genome organizations of the different subgenera. As shown in Figure 1C, the arrangement of the core genes (ORF1ab-S-E-M-N) is conserved. Insertions of additional open reading frames (ORFs) between ORF1ab and S were detected in Hibecovirus (Hp/Zhejiang2013) and Embecovirus species, whereas additional ORFs were detected between S and E in all betacoronaviruses (Figure 1C). Interestingly, ORF insertions between M and N were observed only in members of the Sarbecovirus and Hibecovirus subgenera (Figure 1C). When we compared the sequences of these ORFs, the genes in Sarbecovirus were unalignable with those in Hibecovirus, suggesting that these ORFs emerged independently after the divergence of these subgenera. Of note, we found that ORF6 is highly conserved in sarbecoviruses, including SARS-CoV and SARS-CoV-2, but absent from other betacoronaviruses (Figure 1C).
ORF6 proteins of SARS-CoV-2 lineages inhibit activation of IFN signaling more potently than those of SARS-CoV lineages
Because previous reports have suggested that SARS-CoV ORF6 has the ability to inhibit IFN-I activation as well as upregulation of IFN-stimulated genes (ISGs) (Kopecky-Bromberg et al., 2007), we directly compared the phenotypic properties of representative Sarbecovirus ORF6 proteins. The phylogenetic topology of the Sarbecovirus ORF6 gene (Figure 2A) was similar to that of the full-length viral genome (Figure 1A), suggesting that recombination events involving the ORF6 gene have not occurred among sarbecoviruses. For our phenotypic analyses, we generated expression plasmids for ORF6 from SARS-CoV-2 (Wuhan-Hu-1) as well as SARS-CoV-2-related viruses from bats (RmYN02, RaTG13, and ZXC21) and a pangolin (P4L). We also included ORF6 from SARS-CoV (Tor2), SARS-CoV-related viruses from bats (Rs4231, Rm1, and HKU3-2), and the two bat sarbecoviruses that are phylogenetically located at the outgroup of SARS-CoV-2 and SARS-CoV (BtKY72 and BM48). Western blotting revealed that the expression levels of ORF6 proteins of the SARS-CoV-2 lineage are lower than those of the SARS-CoV lineage and the two outgroup viruses (Figure 2B).
We then monitored human IFNB1 promoter activity in the presence of ORF6 using a luciferase reporter assay. Influenza A virus (IAV) non-structural protein 1 (NS1) served as a positive control because it potently suppresses induction of IFNB1 (García-Sastre et al., 1998;Krug et al., 2003). As shown in Figure 2C, top, all ORF6 proteins as well as IAV NS1 dose-dependently suppressed activation of the IFNB1 promoter upon Sendai virus (SeV) infection. Notably, ORF6 proteins of the SARS-CoV-2 lineage were more potent inhibitors than those of the SARS-CoV lineage ( Figure 2C, top), despite their lower expression levels ( Figure 2B). Next we analyzed the ORF6 proteins for their ability to inhibit signaling triggered by IFN-I (IFN-a) and IFN-III (IFN-l3).
In agreement with a previous study (Kochs et al., 2007), IAV NS1 failed to prevent activation of the IFN-stimulated response element (ISRE) promoter upon stimulation with IFN-I or IFN-III ( Figure 2C, center and bottom). In contrast, Sarbecovirus ORF6 proteins inhibited activation of the ISRE promoter upon IFN-I and IFN-III stimulation ( Figure 2C, center and bottom). Notably, ORF6 proteins of the SARS-CoV-2 lineage were again more active than those of the SARS-CoV lineage ( Figure 2C, center and bottom). Thus, our data demonstrate that SARS-CoV-2 ORF6 is a potent IFN antagonist that targets signaling triggered by SeV infection (leading to IFNB1 expression) and IFN-a and IFN-l3 stimulation (leading to ISG expression). In addition to the ORF6 proteins of SARS-CoV and SARS-CoV-2, the ORF6 proteins of the two outgroups, BtKY72 and BM48, significantly inhibited these antiviral signaling cascades, with BtKY72 ORF6 being less efficient than BM48 ORF6 ( Figure 2C). Two recent reports suggested that the compounds ivermectin (Caly et al., 2020) and selinexor (Gordon et al., 2020) may be candidates for treatment of COVID-19 as they target the activity of ORF6. However, both compounds failed to inhibit the anti-IFN activity of ORF6 ( Figure S1A).
To verify the immunosuppressive activity of ORF6 in different experimental systems, we monitored the expression levels of endogenous antiviral genes after SeV infection (Figure 2D) or IFN-a treatment (Figure 2E). [Fragment of the Figure 2 legend: cells transfected with p125Luc (C, top) or pISRE-luc (C, center and bottom) were infected 24 h later with SeV (MOI 10) (C, top) or treated with IFN-a (C, center) or IFN-l3 (C, bottom); 24 h after infection or treatment, cells were harvested for western blotting (B) and a luciferase assay (C); the ORF6 sequence of ZXC21 is identical to that of ZC45.] The ORF6 proteins of SARS-CoV-2 and SARS-CoV significantly suppressed upregulation of these genes (Figures 2D and 2E). The suppressive effect mediated by SARS-CoV-2 ORF6 was significantly higher compared with its SARS-CoV counterpart (Figures 2D and 2E). To further validate the anti-IFN activity of ORF6 in a more physiological setting, we generated a derivative of the human lung cell line A549 expressing ORF6 upon doxycycline (Dox) treatment (Figure 2F) and monitored the expression levels of endogenous IFNB1, IFNL1, and IFI44L after SeV infection. Although the ORF6 expression level induced by Dox treatment in A549 cells was much lower than in transiently transfected HEK293 cells (Figure S1B), this approach revealed that ORF6 significantly suppresses induction of antiviral genes in relevant target cells of SARS-CoV-2 (Figure 2G). Moreover, we show that the anti-IFN activity of ORF6 is, on average, higher than that of ORF3b (Figures S1C and S1D), a recently identified IFN antagonist of SARS-CoV-2 (Konno et al., 2020). For both proteins, the anti-IFN activities tended to be higher in the SARS-CoV-2 lineage compared with the SARS-CoV lineage (Figure 2H). These findings identify SARS-CoV-2 ORF6 as a robust IFN antagonist.
E46 and Q56 determine the anti-innate immune activity of SARS-CoV-2 ORF6
Although all tested Sarbecovirus ORF6 proteins efficiently hampered induction of IFNB1 triggered by SeV infection and upregulation of ISGs induced by IFN-I/III, the inhibitory activities of SARS-CoV-2 ORF6 were significantly stronger than those of SARS-CoV ORF6 ( Figure 2). To determine the residue(s) that are responsible for this difference, we aligned and compared the ORF6 amino acid sequences ( Figure S2A). As shown in Figure 3A, we found 10 amino acids whose chemical properties are different between ORF6 proteins of the SARS-CoV lineage (n = 241) and those of the SARS-CoV-2 lineage (n = 57,648), and these 10 residues are highly conserved in each lineage. Furthermore, the ORF6 proteins of the SARS-CoV lineage harbor two additional amino acids (i.e., Y and P) at the C terminus compared with those of the SARS-CoV-2 lineage. Mutational analysis (Figure 3B) revealed that substitution E46K attenuates the anti-IFN activity of SARS-CoV-2 ORF6, whereas Q56E has the opposite effect ( Figures 3C and S2B). To verify the effect of residues 46 and 56 on ORF6-mediated anti-IFN activity, we introduced the respective reverse mutations into SARS-CoV ORF6 ( Figure 3D). As shown in Figures 3E and S2C, the inhibitory activity of the SARS-CoV ORF6 K46E mutant was significantly higher than that of the parental SARS-CoV ORF6 protein, whereas the E56Q mutant of SARS-CoV ORF6 antagonizes IFN induction less efficiently than the parental SARS-CoV ORF6. These findings show that the differences in anti-IFN activity between SARS-CoV-2 ORF6 and its SARS-CoV counterpart are determined by these two residues.
Inhibition of innate immune signaling depends on the C-terminal region of Sarbecovirus ORF6
The observation that residues 46 and 56 determine the ability of ORF6 to inhibit responsiveness to viral infection as well as IFN stimulation suggests that ORF6 targets a step that is common to these signaling pathways. A comprehensive proteome analysis by Gordon et al. (2020) has suggested that ORF6 interacts with two cellular proteins, ribonucleic acid export 1 (RAE1) and nucleoporin 98 (NUP98), via its C-terminal region. To verify the importance of the C-terminal region of ORF6 for its biological activity, we generated a series of ORF6 mutants in which we deleted the C-terminal region or changed a stretch of acidic residues to alanines (Figure 3F). With the exception of the DC1 mutant of SARS-CoV-2 (Wuhan-Hu-1), all mutants were expressed at levels similar to wild-type (WT) ORF6 (Figure 3G). Luciferase reporter assays revealed that deletion of the C-terminal region (DC2) or substitution of the acidic residues to alanines (Ala) completely abrogated the anti-IFN effects of SARS-CoV-2 ORF6 (Figures 3H and S2D). Similarly, the anti-IFN activities of the ORF6 proteins of SARS-CoV (Tor2) and the two outgroups (BtKY72 and BM48) were partially attenuated by mutations of the C-terminal region, although they still retained some of their anti-IFN activity (Figures 3H and S2D). These findings suggest that the C-terminal region is crucial for the efficient anti-IFN activity of ORF6.
To address whether ORF6 interacts with RAE1 and NUP98 via its C-terminal region, we performed co-immunoprecipitation (coIP) experiments. As shown in Figure 3I, Sarbecovirus ORF6 proteins, including those of SARS-CoV-2 (Wuhan-Hu-1), SARS-CoV (Tor2), and the two outgroups (BtKY72 and BM48), bound to RAE1 and NUP98. In contrast, C-terminally truncated mutants thereof failed to bind these cellular proteins (Figure 3I). These observations suggest that RAE1 and NUP98 are associated with the anti-IFN activity exerted by Sarbecovirus ORF6 proteins. In fact, a recent study demonstrated that SARS-CoV-2 ORF6 hampers nuclear translocation of IRF3 and STAT1 via RAE1 and NUP98 and suggested that overexpression of RAE1 and NUP98 rescues the IFN response in the presence of ORF6 (Miorin et al., 2020). Although our microscopy analyses (Figure 3J) confirmed that SARS-CoV-2 ORF6 inhibits nuclear translocation of IRF3 (Figure 3K), overexpression of RAE1 and NUP98 did not rescue nuclear translocation of IRF3 (Figures 3J and 3K) or the IFN response (Figure 3L) in the presence of ORF6. Instead, western blotting revealed that overexpression of RAE1 and NUP98 increased the expression level of ORF6 in a dose-dependent manner (Figure 3M). Our findings suggest that ORF6 binds to RAE1 and NUP98 via its C-terminal region; however, overexpression of RAE1 and NUP98 cannot overcome the anti-IFN activity of ORF6.
SARS-CoV-2 variants that lost a functional ORF6 gene have emerged during the current pandemic
Finally, we assessed the diversity and evolution of SARS-CoV-2 ORF6 during the current pandemic. We downloaded 67,136 viral genome sequences from the global initiative on sharing all influenza data (GISAID) database (https://www.gisaid.org; as of July 16, 2020) and removed 395 sequences containing undetermined and/or mixed nucleotides in the ORF6 region. By analyzing the ORF6 region, we found that approximately 0.2% (124 of 66,741) of the sequences of pandemic viruses lost their C-terminal region because of frameshift and/or nonsense mutations (Figure 4A; Table S2). A SARS-CoV-2 variant encoding truncated ORF6 was first isolated in China on February 8, 2020 (GISAID: EPI_ISL_451350) (Figure 4A). We assessed the frequency of SARS-CoV-2 variants encoding truncated ORF6 for each country but found no specific deviations in the emergence of ORF6-truncated SARS-CoV-2 at the country level (Table S3).
Based on the classification into pangolin lineages (https:// github.com/cov-lineages/pangolin) and GISAID clades (Table S4), we then assessed how often ORF6-truncated SARS-CoV-2 variants have emerged during the current pandemic. We identified 54 separate clusters, strongly suggesting that at least 54 mutations shortening the coding sequence of ORF6 emerged independently during the current pandemic (Table S4). To investigate whether ORF6-truncated SARS-CoV-2 variants have also spread via human-to-human transmission during the current pandemic, we analyzed cluster 41, which comprises 13 ORF6-truncated SARS-CoV-2 sequences in more detail. Twelve of the 13 ORF6-truncated SARS-CoV-2 genomes in cluster 41 were isolated in Wales, United Kingdom, and classified into pangolin lineage B.1.5 and GISAID clade G (Table S4). We then obtained 137 SARS-CoV-2 genome sequences that meet the abovementioned criteria (isolated in Wales, United Kingdom; pangolin lineage B.1.5 and GISAID clade G), including the 12 sequences belonging to cluster 41, and conducted a phylogenetic analysis. As shown in Figure 4B, 11 of the 12 ORF6-truncated SARS-CoV-2 mutants in cluster 41 formed a single clade. This observation suggests that these ORF6-truncated SARS-CoV-2 mutants have sporadically spread via human-to-human transmission. Together with our findings that deletion of the C-terminal region of SARS-CoV-2 ORF6 abolishes its ability to suppress IFN responses (Figure 3H), our analyses suggest that SARS-CoV-2 variants that lost a functional ORF6 gene have emerged during the current COVID-19 pandemic and have the capacity to spread in the human population.
DISCUSSION
Here we provide evidence suggesting that SARS-CoV-2 ORF6 inhibits induction of human innate immune signaling, including induction of IFNB1 and IFNL1 as well as upregulation of ISGs triggered by IFN-I and IFN-III. During the review process of this paper, several recent publications have described mechanisms of IFN-I antagonism by different SARS-CoV-2 proteins, including ORF6 (Lei et al., 2020;Li et al., 2020a;Miorin et al., 2020;Xia et al., 2020;Yuen et al., 2020). Nevertheless, several findings of our work clearly set our study apart from previous work. We found that (1) the ORF6 gene is specific to sarbecoviruses (Figure 1C); (2) not only the ORF6 proteins of human sarbecoviruses (i.e., SARS-CoV-2 and SARS-CoV) but also those of non-human sarbecoviruses from bats and a pangolin exert anti-IFN activity ( Figure 2C); (3) the anti-IFN activity of ORF6 proteins of the SARS-CoV-2 lineage is higher than that of the SARS-CoV lineage ( Figure 2C); (4) two residues, E46 and Q56, determine the anti-IFN activity of ORF6 ( Figures 3A-3E, S2B, and S2C); (5) the anti-IFN activity of SARS-CoV-2 ORF6 completely depends on its C-terminal region whereas that of SARS-CoV ORF6 does not ( Figures 3F-3H and S2D); (6) although previous papers suggested that ivermectin (Caly et al., 2020) and/or selinexor (Gordon et al., 2020) target ORF6, these compounds do not affect the anti-IFN activity of ORF6 (Figure S1A); and (7) SARS-CoV-2 mutants with truncations of the ORF6 gene emerged during the current pandemic and most likely spread in the human population ( Figure 4).
The observation that ORF6 proteins from SARS-CoV-2 and related viruses in bats and pangolins are, on average, more active in suppressing IFN responses than their SARS-CoV counterparts is reminiscent of the recently identified IFN antagonist ORF3b. This protein is also more potent in viruses of the SARS-CoV-2 lineage than in SARS-CoV and related animal viruses (Konno et al., 2020). These findings suggest that multiple IFN antagonists, including ORF6 and ORF3b, can cooperatively contribute to the inefficient and delayed IFN-I/III responses in SARS-CoV-2-infected cells as well as individuals with COVID-19 (Blanco-Melo et al., 2020;Hadjadj et al., 2020).
Consistent with previous studies characterizing ORF6 of SARS-CoV-2 (Miorin et al., 2020) or SARS-CoV (Kopecky-Bromberg et al., 2007), we show that SARS-CoV-2 ORF6 inhibits nuclear import of IRF3. Importantly, our mutational analyses revealed that inhibition of innate immune signaling by Sarbecovirus ORF6 proteins depends on their C-terminal region. Because Sarbecovirus ORF6 proteins bind RAE1 and NUP98 via their C-terminal region (Figure 3I), it has been suggested that ORF6 inhibits innate immune signaling by targeting these two host factors (Miorin et al., 2020). In contrast to this recent study (Miorin et al., 2020), however, overexpression of RAE1 and NUP98 did not rescue the IFN response in the presence of ORF6 in our hands. Intriguingly, we found that the expression levels of ORF6 are increased upon expression of RAE1 and NUP98 (Figure 3M). Although RAE1 exports RNA from the nucleus, NUP98 is a component of the nuclear pore complex (Pritchard et al., 1999; Ren et al., 2010). Thus, overexpression of RAE1 and NUP98 may exert two opposing effects on ORF6-mediated IFN inhibition. On one hand, their overexpression may enhance IFN responses by compensating for RAE1/NUP98 proteins targeted by ORF6. On the other hand, RAE1/NUP98 may suppress IFN responses by increasing export of ORF6 mRNA and, hence, total ORF6 protein levels. Our experiments suggest that these two effects may potentially annul each other.
[Legend of Figure 4B: maximum likelihood phylogenetic tree of the 137 SARS-CoV-2 genomes containing cluster 41, generated from genomes isolated in Wales, United Kingdom, and classified into pangolin lineage B.1.5 and GISAID clade G; cluster 41 (pink) comprises the 12 SARS-CoV-2 genomes with C-terminally truncated ORF6; the ORF6 sequence in cluster 41 is shown in Figure S3; GISAID ID and sampling date (in parentheses) are noted at each node; bootstrap values: **, >85%; *, >60%. See also Figure S3 and Tables S2, S3, and S4.]
In contrast to SARS-CoV-2 ORF6, the C-terminally truncated mutants of the ORF6 proteins of SARS-CoV lineages and two outgroup viruses (BtKY72 and BM48) only partially lost their ability to suppress induction of IFN activation. These observations suggest that Sarbecovirus ORF6 proteins other than those of SARS-CoV-2 can exert anti-IFN activity independent of their C-terminal region. This inhibitory activity most likely involves a mechanism that is independent of RAE1/NUP98 because recruitment of these proteins depends on the C terminus of ORF6. Notably, Xia et al. (2020) recently demonstrated that SARS-CoV-2 ORF6 antagonizes IRF3 nuclear import via targeting KPNA2, a subunit of importin, inhibiting type I IFN induction. Thus, it might be plausible to assume that SARS-CoV-2 ORF6 has evolved several independent mechanisms to counteract IFN-mediated immune responses, only some of which involve the C terminus of ORF6.
Incidentally, in the Dox-inducible ORF6 expression system in A549 cells, differences in the ability of SARS-CoV-2 ORF6 and SARS-CoV ORF6 to suppress upregulation of IFNB1 seemed to disappear ( Figure 2G). Differences between HEK293 cells and A549 cells may be explained by at least two possibilities. First, the expression levels of ORF6 upon Dox stimulation in A549 cells are lower than those achieved by transient transfection of HEK293 cells ( Figure S1B). Second, induction of IFNB1 by SeV infection in A549 cells (1,500-fold) is dramatically higher than in HEK293 cells (50-to 100-fold) ( Figures 2D and 2G). Thus, the relative antagonistic activity of ORF6 may be lower in A549 cells compared with HEK293 cells.
By analyzing more than 67,000 SARS-CoV-2 sequences, we found that variants lacking the C-terminal region of ORF6 because of frameshift and/or nonsense mutations emerged more than 50 times during the current COVID-19 pandemic (Figure 4; Table S4). In contrast, truncated ORF6 genes have so far not been detected in SARS-CoV-2-related viruses isolated from animals. By analyzing the ORF6 sequences from a variety of sarbecoviruses belonging to the SARS-CoV lineage, however, we also found three SARS-CoV-related viruses isolated from two bats (GenBank: MK211374 and KJ473816) and a palm civet (GenBank: FJ959407) harboring truncated ORF6 sequences (44, 50, and 44 amino acids, respectively) because of frameshift mutations (Figure S3). Furthermore, we detected a human SARS-CoV, strain TWJ (GenBank: AP006558), that encodes a shortened ORF6 protein because of a frameshift mutation (Figure S3). Considering the phylogenetic relationships and their mutation patterns, these ORF6 mutations emerged independently, because the respective viruses do not form a single clade (Nakagawa and Miyazawa, 2020). These results suggest that truncations of ORF6 occurred multiple times in the subgenus Sarbecovirus, although such mutations have not spread dominantly in the viral population.
Because the C-terminal region of SARS-CoV-2 ORF6 is essential to elicit its anti-IFN activity, SARS-CoV-2 variants expressing C-terminally truncated ORF6 most likely lost an IFN antagonist. Although the frequency of SARS-CoV-2 isolates with C-terminally truncated ORF6 is low (0.2%), our phylogenetic analyses provide strong evidence of human-to-human transmission of these viruses ( Figure 4B). Because ORF6 is a potent IFN antagonist, the emergence of SARS-CoV-2 ORF6 frameshift mutants may contribute to attenuation of viral pathogenicity. However, the relative contribution of ORF6 to disease severity is hard to assess at this point because most of the viral sequences currently deposited in GISAID are derived from symptomatic individuals (mostly severe cases). Thus, monitoring the ORF6 gene during the current pandemic, not only in symptomatic individuals but also in asymptomatic carriers, and possible associations with viral pathogenicity seem to be highly warranted.
A limitation of this study is that the biological activity of Sarbecovirus ORF6 was investigated using an overexpression system. Additionally, all previous studies characterizing the anti-innate immune activity of ORF6 (Lei et al., 2020; Li et al., 2020a; Miorin et al., 2020; Xia et al., 2020; Yuen et al., 2020) have exclusively used overexpression systems, mainly in HEK293 cells. To fully define the relative contribution of ORF6 to immune evasion of SARS-CoV-2 and its effects on viral replication and pathogenicity, the use of infectious, gene-modified recombinant viruses, preferentially in primary target cells, will be required. A variety of techniques to artificially reconstruct infectious SARS-CoV-2 by reverse genetics have been established (Rihn et al., 2021; Thi Nhu Thao et al., 2020; Torii et al., 2020; Xie et al., 2020, 2021; Ye et al., 2020). Future investigations using recombinant SARS-CoV-2 in which the ORF6 gene is artificially modified will unveil the contribution of ORF6 to immune evasion by SARS-CoV-2.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
ACKNOWLEDGMENTS
We would like to thank all laboratory members in the Division of Systems Virology, Institute of Medical Science, The University of Tokyo, Japan, and all authors who have kindly deposited and shared genome data on GISAID. We also thank Naoko Misawa (Institute for Life […]). This study was supported in part by grants 16H06429 and 16K21723 (to S.N. and K.S.), 17H05823 and 19H04843 (to S.N.), and 17H05813 and 19H04826 (to K.S.).
[…] generated by PCR using PrimeSTAR GXL DNA polymerase (Takara), the synthesized ORF6 genes as templates, and the primers listed in Table S5. The obtained DNA fragments were digested with EcoRI and BglII and inserted into the EcoRI-BamHI site of pLVX-TetOne-Puro. To construct a Flag-tagged RAE1 expression plasmid, pcDNA3.1 (Thermo Fisher Scientific) was used as a backbone. The Flag-tagged RAE1 sequence was generated by PCR using PrimeSTAR GXL DNA polymerase (Takara), human cDNA synthesized from HEK293-derived mRNA as the template, and the primers listed in Table S5. The obtained DNA fragments were digested with EcoRV and NotI and inserted into the EcoRV-NotI site of pcDNA3.1. The HA-tagged NUP98 expression plasmid was described in a previous study (Ebina et al., 2004). Nucleotide sequences were determined by a DNA sequencing service (Fasmac), and the sequence data were analyzed with Sequencher version 5.1 software (Gene Codes Corporation).
Transfection, Dox Treatment, IFN Treatment, and SeV Infection
HEK293 cells were transfected using PEI Max (Polysciences) according to the manufacturer's protocol. For immunofluorescence staining, HEK293T cells were transfected using calcium phosphate as previously described (Langer et al., 2019). For western blotting, cells (in 12-well plates) were cotransfected with the pCAGGS-based HA-tagged expression plasmids (100, 300, or 500 ng for Figures 2B and S1C; 100 ng for Figures 3B and 3D; 100 or 300 ng for Figure 3G) together with an empty vector (normalized to 1 µg per well). For real-time RT-PCR, cells (in 12-well plates) were transfected with the pCAGGS-based HA-tagged expression plasmids or empty vector (1,000 ng for Figures 2D and 2E). For the luciferase reporter assay, cells (in 96-well plates) were cotransfected with 50 ng of either p125Luc (expressing firefly luciferase driven by the human IFNB1 promoter; kindly provided by Dr. Takashi Fujita) (Fujita et al., 1993) or pISRE-luc […] (Figure S1A). The amounts of transfected plasmids were normalized to 100 ng per well. For the compensation assay (Figures 3L and 3M), cells (in 12-well plates) were cotransfected with the pCAGGS-based SARS-CoV-2 ORF6 expression plasmid (100 ng) together with 100, 200, or 400 ng of the Flag-tagged RAE1 expression plasmid and 100, 200, or 400 ng of the HA-tagged NUP98 expression plasmid (kindly provided by Dr. Yoshio Koyanagi). The amounts of transfected plasmids were normalized to 1,000 ng per well. To induce ORF6-HA expression in A549 cells (described above), the cells were treated with 1 µg/mL Dox (Takara). At 24 h post transfection or Dox treatment, SeV (strain Cantell, clone cCdi; GenBank accession number AB855654) (Yoshida et al., 2018) was inoculated into the transfected cells at a multiplicity of infection (MOI) of 10, or cells were treated with IFN-a (100 units/mL) (PBL Assay Science) or IFN-l3 (100 ng/mL) (R&D Systems). In Figure S1A, ivermectin (Merck) or selinexor (Selleck Chemicals) (dissolved in DMSO) was added at 24 h post transfection. For co-IP, pCAGGS-based HA-tagged ORF6 expression plasmids (20 µg, Figure 3I) were transfected into HEK293 cells (in 10-cm dishes) as described above.
Reporter Assay
The luciferase reporter assay was performed 24 h post infection as previously described (Kobayashi et al., 2014; Konno et al., 2018, 2020; Ueda et al., 2017). Briefly, 50 µL of cell lysate was applied to a 96-well plate (Nunc), the firefly luciferase activity was measured using a PicaGene BrillianStar-LT luciferase assay system (Toyo-b-net), and the input for the luciferase assay was normalized using a CellTiter-Glo 2.0 assay kit (Promega) following the manufacturers' instructions. For this assay, a GloMax Explorer Multimode Microplate Reader 3500 (Promega) was used.
Real-time RT-PCR
Real-time RT-PCR was performed as previously described (Konno et al., 2020; Yamada et al., 2018). Briefly, cellular RNA was extracted using a QIAamp RNA blood mini kit (QIAGEN) and then treated with the RNase-Free DNase Set (QIAGEN). cDNA was synthesized using SuperScript III reverse transcriptase (Thermo Fisher Scientific) and random primers (Thermo Fisher Scientific). Real-time RT-PCR was performed using Power SYBR Green PCR Master Mix (Thermo Fisher Scientific) and the primers listed in the key resources table.
Immunofluorescence Staining
HEK293T cells were seeded on poly-L-lysine-coated coverslips in 24-well plates and cotransfected with a SARS-CoV-2 ORF6 expression vector in combination with either RAE1 and NUP98 expression vectors or an empty vector. 16-24 h post transfection, cells were infected with SeV or left untreated. Subsequently, cells were fixed for 20 min at room temperature with 4% paraformaldehyde and permeabilized using PBS containing 0.5% Triton X-100 and 5% FCS for 20 min at room temperature. IRF3 was detected using an unconjugated primary antibody (Cell Signaling, dilution 1:200) and a fluorophore-conjugated secondary antibody (Thermo Fisher Scientific, dilution 1:1,000). Nuclei were visualized by DAPI staining. Cells were mounted in Mowiol mounting medium (Cold Spring Harbor Protocols) and analyzed using confocal microscopy (LSM 710, Zeiss) and the corresponding software (Zeiss Zen Software).
SARS-CoV-2 Sequence Analysis
To survey variants of ORF6 in pandemic SARS-CoV-2 sequences, we used the viral sequences deposited in GISAID (https://www.gisaid.org) (accessed July 16, 2020). A multiple sequence alignment of SARS-CoV-2 genomes was performed to obtain the ORF6 regions. We first excluded viral genomes that contain undetermined and/or mixed nucleotides in the ORF6 region. We then identified the variants containing shortened coding sequences of ORF6 as a result of frameshift and/or nonsense mutations. For these sequences, we extracted the information of each GISAID entry, i.e., country, pangolin lineage, GISAID clade, and sampling date (Table S2). We then clustered the ORF6 mutants when their ORF6 sequences, pangolin lineages, and GISAID clades were identical. Based on these criteria, 54 clusters were identified (Table S3). To infer the evolutionary dynamics of the ORF6-truncated SARS-CoV-2 mutants, we analyzed 137 SARS-CoV-2 genomes (isolation country, Wales, UK; pangolin lineage, B.1.5; GISAID clade, G), including the 12 ORF6-truncated SARS-CoV-2 mutants in cluster 41 (Table S4). Using these sequences, we generated a multiple sequence alignment with the FFT-NS-2 program in MAFFT software version 7.467. We then constructed a maximum likelihood phylogenetic tree using RAxML-NG version 1.0.0 with the GTR model, which was chosen based on AIC values using ModelTest-NG version 0.1.5. We applied a 1,000-replicate bootstrap test.
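As a hedged illustration of how shortened ORF6 coding sequences could be flagged, the following Biopython sketch translates a pre-extracted ORF6 region and checks for a premature stop codon; the file name, the assumption that ORF6 regions have already been extracted from the genome alignment, and the 61-amino-acid reference length are illustrative and not taken from the paper's actual pipeline.

```python
from Bio import SeqIO  # Biopython

WT_ORF6_LEN = 61  # SARS-CoV-2 ORF6 is 61 amino acids in the Wuhan-Hu-1 reference

def orf6_is_truncated(orf6_nt_seq):
    """Return True if the ORF6 nucleotide sequence yields a premature stop codon.

    orf6_nt_seq: the ORF6 coding region extracted from a genome (gaps removed);
    coordinate extraction and quality filtering are assumed to be done upstream.
    """
    protein = orf6_nt_seq.translate(to_stop=True)   # translation stops at the first stop codon
    return len(protein) < WT_ORF6_LEN

# illustrative usage over a FASTA of pre-extracted ORF6 regions (hypothetical file name)
# truncated_ids = [rec.id for rec in SeqIO.parse("orf6_regions.fasta", "fasta")
#                  if orf6_is_truncated(rec.seq)]
```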
QUANTIFICATION AND STATISTICAL ANALYSIS
Data analyses were performed using Prism 7 (GraphPad Software). The data are presented as averages ± SEM. Statistically significant differences were determined by Student's t test. Statistical details can be found directly in the figures or in the corresponding figure legends. | 7,541.8 | 2021-03-12T00:00:00.000 | [
"Biology"
] |
Applications of biaryl cyclization in the synthesis of cyclic enkephalin analogs with a highly restricted flexibility
A series of 10 cyclic, biaryl analogs of enkephalin, with Tyr or Phe residues at positions 1 and 4, were synthesized according to the Miyaura borylation and Suzuki coupling methodology. The biaryl bridges formed by the side chains of the two aromatic amino acid residues have the meta–meta, meta–para, para–meta, and para–para configurations. The conformational properties of the peptides were studied by CD and NMR. CD studies allowed only a comparison of the conformations of the individual peptides, while NMR investigations followed by XPLOR calculations provided detailed information on their conformations. The reliability of the XPLOR calculations was confirmed by quantum chemical calculations performed for one of the analogs. No intramolecular hydrogen bonds were found in any of the peptides. They are folded and adopt the type IV β-turn conformation. Due to a large steric strain, the aromatic carbon atoms forming the biaryl bond are distinctly pyramidalized. Seven of the peptides were tested in vitro for their affinity for the µ-opioid receptor. Supplementary Information The online version contains supplementary material available at 10.1007/s00726-023-03371-5.
Introduction
The cyclization of peptides is an important issue in peptide chemistry. It has manifold goals. It is very helpful in conformational studies, especially in the case of short peptides, since it limits their flexibility and enables determination of the dominant conformation. Cyclization is of still greater importance in the case of biologically active peptides. Cyclic peptides are more resistant to enzymatic degradation, and thus their stability and life span are increased (Bechtler and Lamers 2021; Frost et al. 2016). They also show enhanced membrane permeability (Hayes et al. 2021). These features significantly improve the pharmacological properties of such compounds and their therapeutic potential.
Peptides can be cyclized in various ways, among which side chain-to-side chain cyclization is quite often used. A special case of such cyclization is the direct connection of two aromatic rings in the side chains of aromatic amino acid residues, resulting in a biaryl bridge. Biaryl bridges in peptides can be obtained by various methods. One of them is the catalytic oxidative cross-coupling reaction (Ben-Lulu et al. 2020). A series of biaryl-bridged (via two tyrosine residues) peptides were synthesized by this method with the use of an iron catalyst and urea hydrogen peroxide as an oxidant, and later cyclized by lactamization. An important factor in this approach is the presence of an activating tert-butyl group at the ortho positions of the aromatic rings of both tyrosine residues. Another method of synthesis of cyclic biaryl peptides is the palladium-catalyzed cyclization of peptides bearing an iodinated aromatic ring at the C-terminus and an N-terminal benzamide group (Bai et al. 2019).
A very useful method for the synthesis of biaryl bridges in peptides is Miyaura borylation-Suzuki coupling (Miyaura and Suzuki 1995). Initially, it was a reaction between an organoboronic acid and a halide, catalyzed by palladium. In the case of biaryls, aryl iodides or bromides were used, since aryl chlorides are quite inert to oxidative addition. Apart from organoboronic acids, many other boron-based compounds have been introduced into the Miyaura-Suzuki reaction (Willemse et al. 2017); they are more stable and react more productively. The advantage of this reaction is that it proceeds under relatively mild conditions and with various organoboronic compounds as substrates. Synthesis of cyclic biaryl-bridged peptides by the Miyaura-Suzuki method thus requires the presence of two suitably derivatized aromatic amino acid residues in their sequence: an arylboronic derivative and an aryl halide. Due to its versatility and mild reaction conditions, this method has found wide application in the synthesis of cyclic biaryl peptides. It was used for the first time for the synthesis of the biphenomycin model (Carbonnelle and Zhu 2000) and soon afterwards for the synthesis of the functionalized macrocyclic core of the proteasome inhibitors TMC-95A and B (Kaiser et al. 2002; Lin and Danishefsky 2001, 2002). Since then, many cyclic biaryl peptides have been synthesized according to the Miyaura-Suzuki approach. They contain mainly Phe-Phe, Phe-Tyr, and Tyr-Tyr linkages (Afonso et al. 2011, 2012; Carbonnelle and Zhu 2000; García-Pindado et al. 2017; Han et al. 2020; Kaiser et al. 2002; Lin and Danishefsky 2001, 2002; Mendive-Tapia et al. 2015; Meyer et al. 2012; Ng-Choi et al. 2019b). Syntheses of cyclic biaryl peptides with Phg-Phe, Tyr-Trp, His-Tyr, Trp-Phe, and Trp-Trp bridges have also been reported (García-Pindado et al. 2018; Gruß et al. 2022; Gruß and Sewald 2020; Han et al. 2020; Kaiser et al. 2002; Kemker et al. 2019; Lin and Danishefsky 2001, 2002; Mendive-Tapia et al. 2015; Ng-Choi et al. 2019a, 2020).
In this paper, we describe the synthesis of and conformational studies on cyclic biaryl analogs of enkephalins (Hughes et al. 1975). Enkephalins are natural linear pentapeptides isolated from brain extracts. Their sequences are Tyr-Gly-Gly-Phe-Leu and Tyr-Gly-Gly-Phe-Met. They are endogenous peptides which belong to the class of opioid peptides and exhibit morphine-like properties (Beluzzi et al. 1976). During structure-activity studies on the enkephalins, many cyclic analogs of these peptides have been synthesized (Remesic et al. 2016). Cyclizations have been performed in various ways. Cyclic enkephalin analogs were most often obtained by bridging the peptide chain at positions 2 and 5, e.g., with the use of penicillamines (Mosberg et al. 1983; Hruby et al. 1997), methylamine (Shreder et al. 1998), a carbonyl group (Pawlak et al. 2001), and lanthionine (Rew et al. 2002). By contrast, to the best of our knowledge, the aromatic side chains of the enkephalin residues were used only once for cyclization (Siemion et al. 1981). However, it may be a promising route in the search for potent enkephalin analogs, since it was found then that azo-enkephalin, in which the Tyr 1 and Phe 4 aromatic rings were linked with each other by an azo bridge, is very active in in vivo tests.
In the analogs presented here, the aromatic rings of the parent or substituted residues at positions 1 and 4 are situated in a very close proximity, i.e., they are connected directly with each other at different positions of the rings.The configurations of biaryl bridges obtained are meta-meta, meta-para, para-meta, and para-para.The peptides were synthesized by the Miyaura-Suzuki coupling methodology (Miyaura and Suzuki 1995).Their general structure and a list of synthesized peptides are presented below in Fig. 1 and Table 1, respectively.
The Suzuki-Miyaura approach has already been used for the synthesis of biaryl cyclic peptides with three to eight residues and with the para-para and meta-para configurations of the biaryl bridge (Afonso et al. 2011). In another study, biaryl cyclic tri-, tetra-, and pentapeptides have been synthesized with three different configurations: meta-meta, meta-ortho, and ortho-meta (Meyer et al. 2012). Planned CD and NMR studies on those peptides in buffered aqueous solution were not possible due to their insufficient solubility in such a medium.
Fig. 1 Structures of the synthesized biaryl cyclic enkephalin analogs
The main goal of the present study was determination of spectroscopic properties of the obtained peptides and determination of their conformations.There are few papers in which X-ray structures and solution conformations of biaryl cyclic peptides are described.In 2001, the crystal structure of the 20 S proteasome:TMC-95A non-covalent complex was determined (Groll et al. 2001).TMC-95A is a cyclic biaryl peptide with the Tyr-Trp linkage which inhibits enzymatic activities of 20 S proteasome (Kohno et al. 2000).The peptide has been investigated by NMR (Kohno et al. 2000) and it has been found that its conformation in the unbound state is similar to its conformation in the complex with 20 S proteasome (Groll et al. 2001).The X-ray structure has been also determined for a cyclic biaryl peptide obtained from linear precursor PA-Phg-Gly-Leu-Phe-COOMe (PA-picolinamide), with the ortho-para Phg-Phe biaryl bridge (Han et al. 2020).NMR studies have been conducted also on a series of peptides with the Phe-Trp biaryl bridge, in which the two aromatic amino acid residues were separated by Asn-Gly-Arg, Arg-Gly-Asp, and Ser-Ala sequences and by a Val residue (Mendive-Tapia et al. 2015).
The peptides described in this paper were investigated by CD in MeOH, TFE, and water at pH 7, and by NMR in H 2 O/D 2 O. The NMR studies allowed a detailed characterization of their conformational properties. To the best of our knowledge, this is the first report presenting such results for cyclic biaryl peptides of the same sequences which differ in the configuration of their biaryl bridges. Since the peptides studied are analogs of enkephalins, seven of them were also tested in vitro for affinity at the μ-opioid receptor (MOR). For comparison, their linear counterparts were tested as well. They served as reference peptides for the cyclic ones in biological studies.
General procedure A for Fmoc SPPS Chemistry
Peptides were synthesized manually by stepwise solid-phase synthesis on a Rink Amide MBHA resin (0.68 mmol/g) and an Fmoc-Leu-Wang resin (0.65 mmol/g) according to a standard Fmoc solid-phase synthesis procedure. For the coupling of standard Fmoc-amino acids (3 eq), HATU (3 eq) in the presence of HOBt (3 eq) and DIEA (6 eq) in DMF was used. For couplings with synthesized or expensive amino acids, 1.5-2 eq of amino acid and longer reaction times (8-16 h) were used. After each coupling and deprotection step, the resin was washed 7 times with DMF. Coupling yields were monitored by the quantitative ninhydrin assay (Kaiser et al. 1970). Fmoc deprotections were achieved with 25% piperidine/DMF (2 × 10 min).
General procedure C for Suzuki solid-phase cyclization of peptides
Resins with linear precursors of cyclic peptides containing both boron- and iodine-modified amino acids, N-protected with the Boc or trityl group, were dried in a desiccator and transferred to glass vials with PTFE septa and a magnetic stirring bar. Then Pd 2 (dppf)Cl 2 •CH 2 Cl 2 (0.2 eq), CsF (4 eq), and degassed dioxane/water (9:1) saturated with nitrogen were added. The vials were flushed with nitrogen, sonicated under vacuum and saturated with nitrogen three more times, and placed in an oil bath at 80 °C. After 8-24 h, the resin was transferred to a polypropylene syringe reactor and washed with dioxane/water (6 × 1 min), water (6 × 1 min), MeOH (6 × 1 min), and DCM (6 × 1 min). In the case of peptides with an N-terminal phenylalanine derivative, the peptidyl resins were additionally treated with TFA/H 2 O/DCM (0.2:1:98.8, 2 × 1 min, 1 × 20 min) and then washed with DCM (6 × 1 min). The resulting biaryl peptidyl resin was vacuum-dried overnight, cleaved with TFA/H 2 O/TIS (95:2.5:2.5), treated as described in the general procedure for SPPS chemistry, and purified as described in the procedure Purification of peptides.
General procedure D for cleavage from the resin
Peptides were cleaved from the resin using trifluoroacetic acid/water/TIS (95:2.5:2.5, v:v:v) for 3 h at room temperature. In the case of H-(cyclo-m,p)-[Tyr-Gly-Gly-Phe]-Met-NH 2 , a mixture of TFA:H 2 O:EDT:TIS (94:2.5:2.5:1, v:v:v:v) was used. Then the products were precipitated with cold diethyl ether and washed twice with cold diethyl ether (centrifuged each time). After evaporation of the ether, the samples were dissolved in 10% acetonitrile in water, lyophilized, and submitted to HPLC purification.
Purification of peptides
The compounds were purified on a semipreparative Varian ProStar chromatograph equipped with a Tosoh Bioscience ODS-120T column (21.5 × 300 mm, particle size: 10 µm) and a 210/254 nm dual-wavelength UV detector.For HPLC purification, solvents containing 0.1% TFA in water and 0.1% TFA in 80% acetonitrile/water and a flow rate 7 ml/min were used.Collected fractions (products confirmed by MS analysis) were lyophilized.The purity of products was confirmed by HPLC analysis with a Thermo Separation HPLC system with a UV detection (210 nm) equipped with a Vydac Protein RP C18 column (4.6 × 250 mm, particle size: 5 µm).
Gradient elution of 0-80% B in 40 min was used (eluent A: 0.1% aqueous TFA in H 2 O, eluent B: 0.1% TFA in 80% acetonitrile/water), flow rate of 1 ml/min.Details concerning reagents used and syntheses of cyclic and linear peptides are given in Supplementary Information.
Mass spectrometry measurements
Mass spectrometric measurements (MS and MS/MS) were performed on an ESI-FT-ICR Apex-Qe 7T instrument (Bruker). The potential between the spray needle and the orifice was set to 4.5 kV. For the MS and MS/MS spectra, an acetonitrile/water/formic acid (50:50:0.1) mixture or methanol was used. For fragmentation, the collision-induced dissociation (CID) technique was used with argon as the collision gas. The MS was calibrated with a Tunemix mixture (Bruker Daltonics).
MS and MS/MS experiments were also performed on a Shimadzu LC-IT-TOF instrument. CID fragmentation (with argon) was used, and the potential between the spray needle and the orifice was set to 4.5 kV. The LC system was operated with the following mobile phases: A = 0.1% HCOOH in H 2 O and B = 0.1% HCOOH in MeCN. MS spectra were recorded without the LC column installed.
LC-MS
LC-MS experiments were performed on a Shimadzu LC-UV-IT-TOF instrument in the positive ion mode, with electrospray ionization.The LC system was operated with the following mobile phases: A = 0.1% HCOOH in H 2 O and B = 0.1% HCOOH in MeCN in a gradient separation from 0 to 60% B/A in 15 min at a 0.1 mL/min flow rate and a 2-5 μL injection.The separations were performed on a Phenomenex C18 column (3 × 50 mm, particle size: 3.6 μm).
CD studies
CD spectra were measured on a Jasco J-600 spectropolarimeter at room temperature. Pathlengths of 1 and 10 mm were used for the peptide and aromatic region, respectively. Each spectrum represents the average of at least 7 scans. Concentrations of the solutions were in the range of 0.05-0.07 and 0.5-2.5 mg/ml for the peptide and aromatic region, respectively. Spectra were measured in MeOH, TFE, and water, pH 7 (0.01 M sodium phosphate buffer). The data are presented as total molar ellipticity [θ].
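As a small illustration of the last point, the conversion from an observed ellipticity to a molar ellipticity can be sketched as follows. The relation used is the standard one; the concentration, molar mass, and pathlength in the example are hypothetical values chosen for the illustration, not data from this study.

```python
def molar_ellipticity(theta_mdeg, conc_molar, pathlength_cm):
    """Standard relation: [theta] = theta(mdeg) / (10 * c(mol/l) * l(cm)), in deg*cm^2/dmol."""
    return theta_mdeg / (10.0 * conc_molar * pathlength_cm)

# Hypothetical example: 0.06 mg/ml of a peptide with an assumed molar mass of 570 g/mol,
# measured in a 10 mm (1 cm) cell.
conc = 0.06 / 570.0          # g/l divided by g/mol gives mol/l
print(molar_ellipticity(-3.5, conc, 1.0))
```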
NMR studies
All NMR spectra were recorded on 950 and 700 MHz Bruker Avance NEO spectrometers equipped with a cryogenic TCI probe and on a 600 MHz Bruker Avance spectrometer, at 25 °C in H 2 O/D 2 O (90:10, v/v). Concentrations of the solutions were 20 mM. All parameters of the NMR measurements are presented in Table S4. Two-dimensional NMR spectra were processed with TOPSPIN (Bruker) and analyzed with SPARKY (Goddard and Kneller 2000). Complete assignments of the 1 H, 15 N, and 13 C resonances for all the peptides were made by application of a standard procedure (Wüthrich 1986) based on inspection of the 2D experiments: 1 H-1 H TOCSY (Braunschweiler and Ernst 1983) (with mixing times of 80 and 90 ms), 1 H-1 H ROESY (Bax and Davis 1985) (with mixing times of 100 and 300 ms), 1 H-13 C HSQC (Bodenhausen and Ruben 1980) focused on the aromatic and aliphatic regions separately, and 1 H-15 N HSQC (Bodenhausen and Ruben 1980). Through-space distances between protons were determined by analysis of the interproton cross peaks in the 2D 1 H-1 H ROESY spectra. The lowest-energy structures of the peptides studied were calculated with the XPlor package (Schwieters et al. 2003), a widely used software package for biomolecular structure determination from NMR and other data sources.
Radioligand receptor binding assays
The binding affinity of cyclic and linear enkephalin analogs for μ-opioid receptor (MOR) was determined in competitive radioligand binding assays using membrane preparations from rat brain homogenates.The homogenates were obtained as described previously (Matalińska et al. 2020).The membrane preparations were incubated at 25 °C for 60 min in the presence of 1.0 nM [ 3 H]DAMGO (obtained from PerkinElmer, USA) and the set of concentrations of the assayed compound proper for a particular compound and type of determination (screening or a full-displacement curve).
For screening purposes, we performed the experiments with the following concentrations of the compounds tested: 30 μM, 10 μM, and 3 μM.For full-displacement curves, 10 concentrations ranging either from 0.03 nM to 1 μM or from 300 nM to 30 μM were used.The range was chosen for the single compounds in a manner such that the expected IC 50 would fall in the middle of the range.In the case of several experiments, only five concentrations were used (they are marked in the results table ).
Non-specific binding was measured in the presence of 10 μM naloxone. The assays were conducted in an assay buffer made of 50 mM Tris-HCl (pH 7.4), bacitracin (100 μg/ml), bestatin (30 μM), captopril (10 μM), and phenylmethylsulfonyl fluoride (PMSF, 30 μg/ml) in a total volume of 0.5 ml. After the incubation, rapid filtration with an M-24 Cell Harvester (Brandel, USA) through GF/B Whatman glass fiber strips was conducted. The filters were soaked with 0.5% PEI just before harvesting so as to minimize the extent of non-specific binding. Filter disks were cut from the sheet and placed separately in 24-well plates. Optiphase Supermix scintillation solution (PerkinElmer, USA) was added to each well. Radioactivity was measured in a MicroBeta LS Trilux scintillation counter (PerkinElmer, USA). Displacement curves were drawn and the mean IC 50 values with SEMs were determined (GraphPad Prism v. 5.0, San Diego, CA).
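The reduction of a displacement curve to an IC 50 value can be sketched in Python as follows. This is only an illustration of the fitting step, not the GraphPad Prism procedure used in this work; the one-site competition model is the usual choice for such data, and the concentrations and binding values shown are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_competition(log_conc, top, bottom, log_ic50):
    """One-site competition model: % specific binding vs. log10 of the competitor concentration."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_conc - log_ic50))

# Hypothetical displacement data (log10 of molar concentration, % specific [3H]DAMGO binding).
log_conc = np.log10(np.array([3e-7, 1e-6, 3e-6, 1e-5, 3e-5]))
binding = np.array([92.0, 78.0, 55.0, 31.0, 14.0])

popt, pcov = curve_fit(one_site_competition, log_conc, binding, p0=[100.0, 0.0, -5.5])
top, bottom, log_ic50 = popt
ic50 = 10.0 ** log_ic50                                     # molar
ic50_err = np.sqrt(np.diag(pcov))[2] * np.log(10) * ic50    # rough error propagated from log IC50
print(f"IC50 = {ic50 * 1e9:.0f} nM (+/- {ic50_err * 1e9:.0f} nM)")
```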
The results (of full concentration range measurements) are means ± standard error of the mean of two or three independent experiments done with two repetitions, if not stated otherwise.The results of screening experiments are means ± standard deviation of two independent experiments done with three repetitions, if not stated otherwise.
Quantum chemical calculations
The structure of c-(Tyr-m-Phe-p)-M-NH 2 as calculated by XPLOR was subjected to quantum mechanical structure optimization. The calculations were done using Gaussian 09 (Frisch et al. 2013). The initial geometry was minimized at the B3LYP/6-31G(d,p) level in the gas phase. The obtained structure, which exhibited proton transfer between the NH 3 + group of Tyr 1 and the adjacent C = O group, was modified by moving the proton back to the amino group and subjected to optimization at the B3LYP/6-31G(d,p) level with the PCM solvent model (Mennucci 2018). The obtained geometry was then subjected to a harmonic frequency calculation at the same level to check that it is a minimum (no imaginary frequencies). Atomic coordinates of the starting-point geometry and the QM-optimized structure are given in Supplementary Materials (Listings S1 and S2).
Molecular docking
Molecular docking was performed in AutoDock Vina (Trott and Olson 2010). The structure of the linear peptide [Met 5 ]enk-NH 2 (in extended conformation) was prepared in Biovia Discovery Studio Visualizer (Dassault Systèmes 2018). The structure of the biaryl c-(Tyr-m-Phe-p)-M-NH 2 was the one resulting from the quantum chemical optimization started from the NMR-derived geometry, as described in the previous section. The MOR structure used for docking was the 8EFQ structure (Zhuang et al. 2022). The protein preparation was done in Biovia Discovery Studio Visualizer by removing the experimental ligand and the G-protein, as well as by adding hydrogens. The protonation states were set as expected at pH ~ 7. The docking box was set to encompass the binding site of MOR but significantly extended (box sizes: 41.0 Å × 32.7 Å × 33.7 Å). The receptor structure was treated as rigid. In the case of [Met 5 ]enk-NH 2 , full ligand flexibility (except for amide bonds) was allowed. In the case of the biaryl c-(Tyr-m-Phe-p)-M-NH 2 , the exocyclic bonds (except for amide bonds) were treated as flexible. The docking exhaustiveness was set to 20. The top docking results were inspected visually. The docking scores are given in Tables S4 and S5 in Supplementary Information. In the case of [Met 5 ]enk-NH 2 , a pharmacophoric filter was applied according to which poses without the Asp3.32•••Tyr 1 NH 3 + interaction were discarded from analysis. Such an interaction is expected for typical high-affinity peptide opioids based on experimental data from mutagenesis studies (Li et al. 1999; Chavkin et al. 2001). Molecular graphics were prepared in Biovia Discovery Studio Visualizer and in open-source PyMOL (version 2.0, Schrödinger, LLC).
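The pharmacophoric filter mentioned above amounts to a simple geometric test on each pose. A minimal sketch is given below; the 4.0 Å cutoff and the coordinates are assumptions made for the illustration, not values taken from the actual docking runs.

```python
import numpy as np

def passes_salt_bridge_filter(amine_n, asp_oxygens, cutoff=4.0):
    """Return True if the Tyr1 ammonium nitrogen lies within `cutoff` angstroms
    of either carboxylate oxygen of Asp3.32 (assumed distance criterion)."""
    dists = np.linalg.norm(asp_oxygens - amine_n, axis=1)
    return bool(np.min(dists) <= cutoff)

# Hypothetical coordinates (angstroms) extracted from a docked pose and the receptor model.
tyr1_n = np.array([12.4, -3.1, 20.7])
asp332_od = np.array([[10.9, -2.5, 18.9],
                      [11.6, -4.4, 18.3]])

keep_pose = passes_salt_bridge_filter(tyr1_n, asp332_od)
print("pose kept" if keep_pose else "pose discarded")
```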
Chemistry
The synthesis of cyclic enkephalin analogs bearing a biaryl bond was based on solid-phase Miyaura-Suzuki macrocyclization of linear peptides containing both boronate and iodinated analogs of the aromatic amino acids phenylalanine and tyrosine. From a synthetic point of view, the obtained compounds can be divided into two groups, with either an analog of phenylalanine or of tyrosine at the fourth position of the peptide chain. In the case of the first group (Scheme 1), the synthesis was based on obtaining the C-terminal tetrapeptide fragment of the target compound, with iodinated Phe, protected at the N-terminus with a trityl group. The obtained tetrapeptide fragment was subjected to the Miyaura reaction on a solid support as described in (Afonso et al. 2010, 2012). The Miyaura borylation reaction was carried out in DMSO at 80 °C for 4-8 h using bis(pinacolato)diboron (B 2 pin 2 , 4 eq), PdCl 2 (dppf) 2 •CH 2 Cl 2 (0.18 eq), dppf (0.09 eq), and KOAc (6 eq) as a base. After carrying out the borylation reaction and removing the trityl group, the elongation of the peptide chain was continued until the terminal iodine derivative of phenylalanine or tyrosine, protected at the N-terminus with a Boc or Fmoc group, was attached. In the case of peptides with the N-terminal phenylalanine derivative, the Fmoc group was cleaved and trityl protection was introduced.
As regards the peptides with the Tyr residue at the fourth position (Scheme 2), the Miyaura reaction on the resin with the N-trityl-protected t-butyl ether of iodinated tyrosine gave a low yield. Therefore, the boronic derivative was introduced at the end of the amino acid sequence. Peptides containing both the boronate and iodinated analogs of aromatic amino acids were cyclized at 80 °C for 18-24 h by solid-supported Suzuki coupling using Pd(dppf)Cl 2 •CH 2 Cl 2 , dppf, base, and degassed dioxane/water (9:1) as a solvent.
NMR studies
Important conformational information on the peptides studied can be obtained from the [ 1 H-1 H] ROESY spectra. Apart from their contribution to the assignment of the 1 H, 15 N, and 13 C resonances, they were used in the process of determining the spatial structures of the peptides because they allowed us to detect nontrivial, inter-residue NOE contacts. Numerous such contacts were found for all the peptides investigated. They are presented in Table S1 in Supplementary Information. It was assumed that if there is a contact between two protons in the spectrum, they are at a distance not greater than 5.5 Å.
The NOE contacts were used as input data for calculations of the lowest-energy conformations. Using the XPLOR program (Schwieters et al. 2003), five hundred structures were generated for each of the peptides studied, from which sets of the fifty lowest-energy ones were then selected. The structures in each set were superimposed in PyMOL and the conformation with the lowest energy was singled out. RMSDs (root-mean-square deviations) and torsion angles were calculated with MOLMOL (Koradi et al. 1996). The results of these calculations are shown in Table 2 and Fig. 2. It can therefore be concluded that for these analogs there is a preferred conformation in which the molecule reaches its energy minimum. Residues 1-4 are more rigid due to cyclization, while the fifth amino acid residue has more conformational freedom than the rest of the molecule. However, in c-(Tyr-m-Phe-p)-M-NH 2 , for example, a partial rigidification of the atoms of the C-terminal methionine residue can be observed (Fig. 2c).
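For readers interested in the superposition step, the backbone RMSD between two members of such an ensemble can be computed as sketched below. This is a generic Kabsch-based calculation, not the MOLMOL implementation used in this work, and it assumes the backbone coordinates of residues 1-4 have already been extracted as NumPy arrays.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of coordinate sets P and Q (n_atoms x 3) after optimal superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])       # correct for a possible reflection
    R = V @ D @ Wt                   # applied to the rows of P to superimpose them on Q
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# `ensemble` would hold the backbone coordinates (N, CA, C) of residues 1-4 for each of
# the 50 lowest-energy structures; RMSDs to the lowest-energy one would then be:
# rmsds = [kabsch_rmsd(conf, ensemble[0]) for conf in ensemble[1:]]
```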
In the case of c-(Phe-m-Phe-p)-L-NH 2 , a closer inspection of the 50 lowest-energy structures allowed us to distinguish four conformational families of that peptide.It was done by superimposition of the structures based on their cyclic fragments and looking for differences between them.The group of structures with the lowest energies is the most populated and the number of structures in other groups decreases with increasing energy.The isolated conformational families of c-(Phe-m-Phe-p)-L-NH 2 are shown in Fig. 3.For c-(Phe-p-Phe-m)-L-NH 2 , it was not possible to create groups containing more than two structures which proves that it has a greater conformational freedom than other analogs.
Closing of the enkephalin peptide chain by formation of the biaryl moiety forces it into a folded conformation of the β-turn type. In the case of enkephalins, two β-turns are possible, with either residues 2 and 3 or residues 3 and 4 sitting at the i + 1 and i + 2 positions of a β-turn, respectively. It was found from the calculations that generally no intramolecular hydrogen bonds are present in the peptides studied. The Met 5 NH•••Gly 2 C = O hydrogen bond was found in 20 of the 50 lowest-energy structures of c-(Tyr-m-Phe-p)-M-NH 2 , but these are mainly higher-energy conformations and only a few of lower energy, so it is difficult to say that a hydrogen bond is distinctly present in that analog. The calculated torsion angles show that in all the peptides studied only a β-turn of type IV is present. As regards the multiplicities of the NMR signals of the peptides studied, most of them correspond very well with theory. However, there are some unusual patterns (indicated by question marks in the assignments of the NMR signals in the spectra of cyclic peptides in Supplementary Information). They are very often a result of signal overlap, which made it very difficult to establish unequivocally the kind of multiplet. In the case of the NH protons of glycine residues, broad, low singlets (described as "broad s") sometimes appeared instead of triplets. Close inspection of the XPLOR-calculated three-dimensional structures of the studied peptides (see the 3-D_structures.zip file in Supplementary Information) shows a significant distortion in the geometry around the biaryl linkage. The aromatic carbons joining the rings are pyramidalized to some extent (an example in Fig. 4b). This is observed in all the peptides except for c-(Phe-p-Phe-p)-L-NH 2 and c-(Phe-p-Phe-m)-L-NH 2 . Similar examples of pyramidalized aromatic carbons in cyclic molecules with a biaryl fragment are known in the literature (Cochrane et al. 2012; Zhao Zhu et al. 2018). On the other hand, we suspected that the XPLOR force field may not be entirely suitable for modeling the biaryl fragment. We were then curious to see whether this pyramidalization persists after quantum mechanical (QM) geometry optimization. To explore this possibility, we subjected the XPLOR-obtained lowest-energy structure of c-(Tyr-m-Phe-p)-M-NH 2 to QM optimization at the B3LYP/6-31G(d,p) level in water (PCM implicit solvent model (Mennucci 2012)). As a result, the distortion at the biaryl junction was significantly but not entirely relieved (Fig. 4c). At the same time, the rest of the molecule retains its overall shape and conformation (superposition of the structures in Fig.
4a; comparison of the dihedral angles in Table S2). This suggests that the biaryl fragment may indeed have a somewhat distorted geometry, although the XPLOR modeling could overestimate it due to the lack of proper parametrization. This issue warrants further investigation beyond the scope of this report. One of the features of substituted biaryl compounds is their ability to appear as atropisomers. In the case of the peptides discussed in this paper, such isomerism can occur only in compounds with a biaryl bridge of the para-para configuration. The only peptide with such a configuration of the biaryl bridge is c-(Phe-p-Phe-p)-Leu-NH 2 . The aromatic rings of the phenylalanine residues in that peptide can undergo rotation around the bond joining them. If this rotation were hindered, then the chemical shifts of both the δ and the ε protons in the aromatic rings of the Phe residues should be different. Instead, the δ and ε protons in both aromatic rings of c-(Phe-p-Phe-p)-Leu-NH 2 give single, averaged signals, which shows a free rotation of the rings. This is consistent with the lack of pyramidalization of the aromatic carbon atoms in this peptide. Otherwise, a distorted bond geometry of the γ and/or ζ aromatic carbons would severely limit free rotation of the aromatic rings.
CD studies
Analysis of CD spectra of enkephalins is difficult due to the presence of two aromatic amino acid residues.Such residues give large CD bands in the far-UV region originating from their 1 L a transitions.They are usually positive and do not depend strongly on the peptide conformation (Woody 1978).These bands overlap the signals of the peptide chromophores which give information on the secondary structure of peptides and proteins hindering the conformational analysis.The situation is even more complicated in the case of biaryl peptides like the ones discussed in this paper.A direct connection of two aromatic rings with each other can result in exciton coupling of their electronic transitions.The exciton coupling is a very useful tool of configurational and conformational studies (Harada and Nakanishi 1972;Pescitelli 2022), but when aromatic exciton bands appear in the region of peptide chromophores absorption, they additionally obscure the far-UV region and make CD analysis of peptides more difficult.
In light of the above, the CD spectra of the biaryl enkephalin analogs (Fig. 5) do not allow reliable conformational conclusions to be drawn. However, in this case they can show conformational differences between individual analogs and provide information on their rigidity on the basis of their solvent dependence. But when comparing conformations of the peptides studied, proper caution should be observed, because the differences between their CD spectra may result not so much from their conformational differences as from varying aromatic contributions. There is no such problem in the case of c-(Tyr-m-Phe-p)-L-NH 2 , c-(Tyr-m-Phe-p)-M-NH 2 , and c-(Tyr-m-Phe-p)-L-OH. These analogs have the same biaryl ring structure and differ only in their last amino acid residue and the C-terminal group. The CD spectra of those peptides (Fig. 5a, c, d) show that their conformations are similar and are not influenced significantly by these structural differences. Instead, a change of the biaryl bridge configuration from meta-para to meta-meta leads to a change of the CD spectrum resulting from a decrease of the biaryl ring size and hence a different conformation. This can be seen by comparison of the spectra of c-(Tyr-m-Phe-p)-L-NH 2 (Fig. 5a) and c-(Tyr-m-Phe-m)-L-NH 2 (Fig. 5b). The effect of different biaryl bridge configurations and ring sizes is also distinctly reflected in the spectra of c-(Phe-p-Phe-p)-L-NH 2 (Fig. 5g), c-(Phe-m-Phe-p)-L-NH 2 (Fig. 5h), c-(Phe-p-Phe-m)-L-NH 2 (Fig. 5i), and c-(Phe-m-Phe-m)-L-NH 2 (Fig. 5j).
μ-Opioid receptor affinity
Seven of the synthesized biaryl enkephalin analogs and their linear counterparts were tested for MOR affinity in radioligand displacement assays.The results are shown in Table 3 and Table S3.
Interestingly, the biaryl analog devoid of Tyr in the first position, c-(Phe-p-Tyr-m)-L-NH 2 , exhibits slightly better binding (IC 50 = 6671.0 nM) than the linear [Phe 1 ,Tyr 4 ,Leu 5 ]enk-NH 2 (10 000 < IC 50 < 30 000). This is (remotely) consistent with the fact that, while Tyr in the first position is usually perceived as critical for high-affinity MOR binding, there have been a few previous reports of cyclic analogs in which exocyclic Phe in the first position did not negatively affect affinity, yielding nanomolar analogs (Weltrowska et al. 2008; Burden et al. 1999).
Molecular docking
In order to understand the low MOR affinity of the biaryl analogs, we modeled the linear [Met 5 ]enk-NH 2 and its cyclic counterpart c-(Tyr-m-Phe-p)-M-NH 2 in the binding site of the receptor. Both peptides were docked to the 8EFQ structure of MOR (Zhuang et al. 2022). This structure seems particularly suitable for modeling of enkephalins in MOR, as the experimental ligand present therein is the enkephalin-related, MOR-selective agonist DAMGO ([d-Ala 2 ,N-MePhe 4 ,Gly 5 -ol]-enkephalin).
The linear [Met 5 ]enk-NH 2 is predicted to insert its N-terminus deep in the binding pocket (Fig. 6). The protonated amine of Tyr 1 forms a salt bridge to Asp3.32 and, additionally, is located relatively close to the aromatic ring of Tyr3.33. The aromatic ring of Tyr 1 is wedged between the Tyr7.43 and Met3.36 side chains and π-stacks with the aromatic ring of the former. Other receptor residues that contact Tyr 1 by van der Waals interactions include Ala2.53, Trp6.48, Ser7.46, and Gly7.42. The middle fragment of the peptide (-Gly 2 -Gly 3 -) interacts with the side chains of Ile6.51, Trp7.35, and Ile7.39 (van der Waals interactions). The aromatic ring of Phe 4 is inserted between the Asp3.32 and Ile3.29 side chains and in proximity of the Val3.28 side chain. The carbonyl oxygen of Phe 4 is H-bonded to the side chain amide hydrogen of Asn2.63. The Met 5 side chain approaches Tyr2.64. The C-terminal carbonyl oxygen of the peptide H-bonds to the side chain amide hydrogen of Gln2.60, and this fragment is flanked by the side chains of His7.36 and Tyr1.39. This predicted binding orientation of the linear [Met 5 ]enk-NH 2 is in its main features similar to that found for DAMGO in the experimental structure (their comparison is given in Fig. S1 in Supplementary Information). The differences include the orientation of the Tyr 1 aromatic ring (protruding deeper in the case of [Met 5 ]enk-NH 2 ) and a somewhat displaced position of the Phe 4 aromatic ring.
The predicted binding mode of c-(Tyr-m-Phe-p)-M-NH 2 is shown in Fig. 7. Most strikingly, while the N-terminal portion of the biaryl peptide is directed to the bottom of the binding site, the protonated amine does not form an interaction with Asp3.32 but is placed in the vicinity of Met3.36, Trp6.48, Ile6.51, His6.52, and Gly7.42. The Tyr 1 aromatic ring interacts with the side chain of Val6.55 (van der Waals interactions). The other ring of the biaryl fragment lies by Lys6.58 and Trp7.35. The -Gly 2 -Gly 3 - backbone locates close to the side chains of Gln2.60, Ile3.29, Asp3.32, and Tyr7.43. The Met 5 side chain is inserted between the Tyr1.39, Tyr2.64, Trp7.35, and His7.36 side chains. The only two H bonds in the predicted binding pose of the biaryl analog are located in the C-terminal part of the peptide. They are the interaction between the peptide's amide hydrogen and the backbone carbonyl oxygen of Gln2.60 and between the peptide's amide oxygen and the side chain carbonyl oxygen of Asn2.63.
This binding pose would intuitively be expected to be of rather limited affinity, in particular due to the lack of the amine-Asp3.32 interaction, which is usually assumed to be a prerequisite for high-affinity binding to MOR (Li et al. 1999), although exceptions are known (De Marco and Gentilucci 2017). Furthermore, it is to be noted that the binding mode predicted for the biaryl c-(Tyr-m-Phe-p)-M-NH 2 is clearly different from that predicted for the linear analog (comparison given in Fig. S2) or that obtained experimentally for DAMGO. The docking results correspond to the low MOR affinity of c-(Tyr-m-Phe-p)-M-NH 2 and suggest that the studied family of biaryl enkephalin analogs does not mimic the bioactive conformation of enkephalins. This is consistent with the hypothesis that the aromatic rings in the biologically active conformation of enkephalin should be located at a distance from each other similar to that between the tyramine moiety and atoms C-5 and C-6 of the C-ring of morphine, as proposed by Gorin and Marshall (Gorin and Marshall 1977).
Conclusion
We obtained 10 cyclic biaryl analogs of enkephalins using the Miyaura-Suzuki approach. The peptides were not easy to synthesize due to the large steric strain present in their cyclic parts. The novel compounds were studied by CD and NMR. The CD studies did not bring any essential information on their conformation due to structural elements which make analysis of the CD spectra difficult. Much more useful in this case were the NMR investigations, which allowed detailed determination of the conformations of the peptides studied. The NMR-derived structures were obtained from the XPlor calculations. Because we were not sure whether the XPlor package is well adapted for calculations on this kind of compound, we confronted the XPlor results with quantum chemical calculations on one of the peptide analogs. A satisfactory agreement was found between the results obtained by the two methods. The NMR studies showed that the peptides may adopt conformations which can be described as a type IV β-turn. No intramolecular hydrogen bonds were found in any of the peptides. An interesting feature of their conformations is a distinct pyramidalization of the aromatic carbon atoms forming the biaryl bridge. Seven of the peptides were checked in vitro for their affinity for the µ-opioid receptor. Unfortunately, it was found that none of them exhibits a significant MOR affinity, with the best analog having an IC 50 of 1372.5 nM, a value several hundred-fold worse than those found for linear enkephalins. This most probably results from the direct bond between the two aromatic rings, as it has been postulated earlier that these rings should be located at a distance from each other similar to that between the corresponding structural elements in morphine.
This work confirms that the Miyaura-Suzuki method is a very effective and versatile way of preparation of cyclic biaryl peptides.This kind of peptide cyclization can be very attractive and promising in the case of other biologically active peptides which contain aromatic amino acid residues, allowing diverse modifications of the parent molecules.
Scheme 2 .
Scheme 2. Synthesis of biaryl-bridged cyclic enkephalin analogs with the tyrosine derivative at position 4, exemplified by H-(cyclo-m,m)-[Tyr-Gly-Gly-Tyr]-Leu-NH 2 . Reaction conditions: (1) Fmoc-amino acid, HATU in the presence of HOBt and DIEA in DMF, rt;
Moreover, in most of the peptides (c-(Tyr-m-Tyr-m)-L-NH 2 , c-(Tyr-m-Phe-m)-L-NH 2 , c-(Tyr-m-Phe-p)-L-NH 2 , c-(Phe-p-Tyr-m)-L-NH 2 , c-(Phe-m-Phe-m)-L-NH 2 , and c-(Phe-p-Phe-p)-L-NH 2 ), additional couplings of aromatic protons were detected, which do not result directly from the compound structure. In c-(Phe-m-Phe-m)-L-NH 2 , in turn, coupling between theoretically different but in fact very similar protons gave a triplet in place of a predicted doublet of doublets. Unusual multiplets were also observed for the α protons of the Gly residues in c-(Phe-p-Phe-m)-L-NH 2 : there are doublets where theoretically doublets of doublets should be present. This can be related to the broadening of the NH proton signals of both Gly residues in this peptide. The broadening may result from the presence of many conformations in the peptide's conformational equilibrium. It is in agreement with the very large number of different calculated structures of c-(Phe-p-Phe-m)-L-NH 2 , which indicates its large conformational freedom. The Gly-Gly dipeptide fragment may play a substantial role here.
Table 1
List of cyclic biaryl peptides synthesized
Table 2
Torsion angle values of the peptides studied, calculated on the basis of NMR parameters with the XPlor program, and RMSDs (root-mean-square deviations). a RMSD of atomic positions of the ensemble, calculated using backbone atoms of residues 1-4
"Chemistry"
] |
Computations of volumes in five candidates elections
We describe several analytical (i.e., precise) results obtained in five candidates social choice elections under the assumption of the Impartial Anonymous Culture. These include the Condorcet and Borda paradoxes, as well as the Condorcet efficiency of plurality, negative plurality and Borda voting, including their runoff versions. The computations are done by Normaliz. It finds precise probabilities as volumes of polytopes in dimension 119, using its recent implementation of the Lawrence algorithm.
INTRODUCTION
In [32, p. 382] Lepelley, Louichi and Smaoui state: "Consequently, it is not possible to analyze four candidate elections, where the total number of variables (possible preference rankings) is 24. We hope that further developments of these algorithms will enable the overcoming of this difficulty." This hope has been fulfilled by previous versions of Normaliz [13]. In connection with the symmetrization suggested by Schürmann [37], it was possible to compute volumes and Ehrhart series for many voting events in four candidates elections; see [12]. As far as Ehrhart series are concerned, we cannot yet offer progress. But the volume computation was already substantially improved by the descent algorithm described in [10]. Examples of Normaliz being used for voting theory computations by independent authors can be found in [5], [6] and [21]. The purpose of this paper is to present precise probability computations in five candidates elections under the assumption of the Impartial Anonymous Culture (IAC). They are made possible by Normaliz' implementation of the Lawrence algorithm [31].
The connection between rational polytopes and social choice was established independently in [32] and [38]. Solutions for the four candidates quest were proposed, for example, in [37], [12] and [10]. The similar, but much more challenging computational problem of performing precise computations in five candidates elections is wide open. Various authors have used the well-known Monte Carlo methods in order to perform computations with five or more candidates, but fundamentally these methods can only deliver approximate results, without even clear bounds for the errors. We note that methods that were successful in obtaining precise results in the four candidates case are ineffective in the five candidates case due to the huge leap in computational complexity implied by the increase in the dimension of the associated polytopes (from 23 to 119). Therefore a different algorithmic approach is needed in order to obtain the desired precise results.
To the best of our knowledge, we present here the first precise results obtained for computations with five candidates.By precise we mean either absolutely precise rational numbers, or results obtained using the fixed precision mode of Normaliz where the desired precision is set and fully controlled by the user.
The polytopes in five candidates elections have dimension 119, and are defined as subpolytopes of the simplex spanned by the unit vectors of R 120 .The number of the inequalities cutting out the subpolytope is the critical size parameter, but fortunately we could manage computations with ≤ 8 inequalities (in addition to the 120 sign inequalities) on the hardware at our disposal, although the algorithm allows an arbitrary number of inequalities.This covers the Condorcet paradox [18] (computable on a laptop in a few minutes), the Borda winner and loser paradoxes [3], and the Condorcet efficiency of plurality, negative plurality and Borda voting, including their runoff extensions.We also compute the probabilities of all 12 configurations of the five candidates that are defined by the Condorcet majority relation.
As Table 6 shows, the computations for 5 candidates are very demanding on the hardware in terms of memory and computation time. Therefore we consider it a major value of the new algorithm that it improves the situation in four candidates elections considerably, where it is now possible to allow preference rankings with all types of partial indifference. Moreover, one can run series of parameterized computations for four candidates like those that one finds in [24] for three candidates. In order to illustrate this possibility, we compute the probability of the Condorcet paradox in the presence of voters with indifference and the Condorcet efficiency of approval voting (see Subsection "Indifference"). Note that potential applications are not limited to voting theory, as can be seen in [30, Table 3]. There the new algorithm performs better (as the dimension grows) for the first family of examples.
Normaliz computes lattice normalized volume and uses only rational arithmetic without rounding errors or numerical instability.But there is a slight restriction: while it is always theoretically possible to compute the probabilities as absolutely precise rational numbers, the fractions involved can reach sizes which are unmanageable on the available hardware.For these cases Normaliz offers a fixed precision mode whose results are precise up to an error with a controlled bound that can be set by the user.
In contrast to algorithms that are based on explicit or implicit triangulations of the polytope P (or the cone C(P) defined by P) under consideration, the Lawrence algorithm uses a "generic triangulation" of the dual cone C(P) * .We make a brief discussion of the available Lawrence algorithm implementations and their limitations in Section "Implementations of the Lawrence algorithm and their limitations".In order to reach the order of magnitude that is necessary for five candidates elections, one needs a fine tuned implementation.It is outlined in [7].Moreover, the largest of our computations need a high performance cluster to finish in acceptable time.Section "Computational report" gives an impression on the computation times and memory requirements by listing them for selected examples.
The computations that we report in this note were done with version 3.9.0 of Normaliz. Meanwhile it has been succeeded by version 3.10.1, without changes in the Lawrence algorithm. Both versions are available at https://www.normaliz.uni-osnabrueck.de/. For details on the implementation and the performance of the previous versions of Normaliz we point the reader to [14], [9], [11], [15].
A CHALLENGING COMPUTATIONAL PROBLEM ARISING FROM SOCIAL CHOICE
Voting schemes and rational polytopes.The connection between voting schemes and rational polytopes is based on counting integral points in the latter.In this subsection we sketch the connection.As a general reference for discrete convex geometry we recommend [8].The interested reader may also consult [28] and [29].
The basic assumption in the mathematics of social choice is the existence of individual preference rankings ≻: every voter ranks the candidates in linear order. Examples for three candidates named by capital letters are A ≻ B ≻ C and B ≻ C ≻ A. For n candidates there exist N = n! preference rankings, usually numbered in lexicographic order. (By an extension it is possible to allow indifferences; see for example [24].) The result or profile of the election is the N-tuple (x_1, . . ., x_N), x_i = #{voters of preference ranking i}.
Thus an election result for three candidates may be written in tabular form, listing the number of voters for each of the six preference rankings. In the following we want to compute probabilities of certain events related to election schemes. This requires a probability distribution on the set of election results. The Impartial Anonymous Culture (IAC) assumes that all election results for a fixed number of voters, in the following denoted by k, have equal probability. In other words, it is the equidistribution on the set of voting profiles for a fixed number k of voters.
The Marquis de Condorcet (1743-1794) was a leading intellectual in France before and during the revolution. He already observed that there is no ideal election scheme, a fact now most distinctly manifested by Arrow's impossibility theorem. We say that candidate A beats candidate B in majority, A >_M B, if #{voters with A ≻ B} > #{voters with B ≻ A}.
A (necessarily unique) Condorcet winner (CW) beats all other candidates in majority.
There is general agreement that the CW is the person with the largest common approval. However, Condorcet realized that a CW need not exist: the relation >_M is not transitive; a minimal example is the profile (1, 0, 0, 1, 1, 0). This phenomenon is called the Condorcet paradox. From a quantitative viewpoint, the most ambitious goal is to find the exact number of election profiles exhibiting the Condorcet paradox (or the opposite), given the number of voters k. For large k, this number is gigantic. It is much more informative to understand the behavior for k → ∞: what is the probability that an election result exhibits the Condorcet paradox? Since we assume the IAC, this probability is

lim_{k→∞} #{election results without CW for k voters} / #{all election results for k voters}.
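A brute-force check of the majority relation makes the minimal example concrete. The sketch below (an illustration only, unrelated to the Normaliz computations) numbers the rankings lexicographically as in the text and reports the Condorcet winner of a profile, or None if the profile exhibits the paradox.

```python
from itertools import permutations

def condorcet_winner(profile, candidates="ABC"):
    """Return the Condorcet winner of a profile (votes per lexicographic ranking), or None."""
    rankings = list(permutations(candidates))          # lexicographic order
    def beats(a, b):
        ab = sum(n for n, r in zip(profile, rankings) if r.index(a) < r.index(b))
        ba = sum(n for n, r in zip(profile, rankings) if r.index(b) < r.index(a))
        return ab > ba
    for a in candidates:
        if all(beats(a, b) for b in candidates if b != a):
            return a
    return None

print(condorcet_winner((1, 0, 0, 1, 1, 0)))   # None: the minimal Condorcet paradox profile
```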
It is a crucial consequence of (IAC) that the event "A is the CW" can be characterized by a system of homogeneous linear inequalities. For three candidates, with the rankings numbered lexicographically, they are

x_1 + x_2 + x_5 > x_3 + x_4 + x_6 (A beats B in majority),
x_1 + x_2 + x_3 > x_4 + x_5 + x_6 (A beats C in majority).

If we are only interested in probabilities for k → ∞, standard arguments of measure theory allow ties and replacement of > by ≥.
We now consider an event E defined for an n candidates election by a system of homogeneous linear inequalities on the set of election profiles. As above, set N = n!. The election profiles (x_1, . . ., x_N) are the lattice points (points with integral coordinates) in the positive orthant R^N_+ satisfying the equation x_1 + · · · + x_N = k. The real points in the positive orthant satisfying this equation form a polytope ∆_k, and the linear inequalities whose validity defines E cut out a subpolytope P_k. We illustrate this assertion by the (necessarily unrealistic) Figure 1.
FIGURE 1. Subpolytope defined by linear inequalities
For large numbers of voters we want to find the probability prob(E) of the event E. Under (IAC) it is given by

prob(E) = lim_{k→∞} #{lattice points in P_k} / #{lattice points in ∆_k}.

We project ∆_k orthogonally onto ∆_1, and thus P_k onto P_1. The density, roughly speaking, of the projections of the lattice points converges to 1, and therefore

prob(E) = vol(P_1) / vol(∆_1).
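The limit above can be made tangible for three candidates by counting lattice points directly for small k. The following sketch, again purely illustrative and far from the polytope methods used in the paper, enumerates all profiles of k voters and reports the share without a Condorcet winner; it reuses condorcet_winner from the previous sketch.

```python
from itertools import combinations

def profiles(k, parts=6):
    """All profiles of k voters over `parts` rankings (weak compositions of k)."""
    for cuts in combinations(range(k + parts - 1), parts - 1):
        yield tuple(b - a - 1 for a, b in zip((-1,) + cuts, cuts + (k + parts - 1,)))

def paradox_share(k):
    total = no_cw = 0
    for p in profiles(k):
        total += 1
        if condorcet_winner(p) is None:   # from the previous sketch
            no_cw += 1
    return no_cw / total

# The share approaches the IAC limit probability as k grows (kept small here for runtime):
for k in (3, 7, 11):
    print(k, round(paradox_share(k), 4))
```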
For volume computations in connection with the counting of lattice points one uses the lattice normalized volume vol, giving volume 1 to ∆_1. With this choice prob(E) = vol(P_1). It is not difficult, but would take many pages, to write down the linear inequalities for the voting schemes and events discussed in the following. For four candidates the complete systems are contained in [12]. For the inequalities one must often fix the roles that certain candidates play, like the Condorcet winner A above. Then probabilities must be computed carefully, and this may require the inclusion-exclusion principle.
Both from the theoretical as well as from the computational viewpoint it is better to consider the cone C defined by the homogeneous linear inequalities as the prime object, and the polytopes as intersections of C with the hyperplane defined by the equation x_1 + · · · + x_N = 1. It is not difficult to see that a voting event that can be realized by a voting profile has positive probability:

Proposition 1. Let E be a subset of all voting profiles defined by strict homogeneous rational inequalities. If E is nonempty, then it has probability > 0 under (IAC).
Proof. Clearing denominators, one can assume that the coefficients of the inequalities are integers. Let m be the maximum of all their absolute values and let x ∈ E be a voting profile. Then x′ = (m + 1)x ∈ E as well, by homogeneity. It is easily checked that also x′ + e_i ∈ E, where e_i, i = 1, . . ., N, is the i-th unit vector. The parallel translation by −x′ maps the polytope P spanned by the x′ + e_i bijectively onto ∆_1. Thus P has lattice normalized volume 1, and therefore its orthogonal projection to ∆_1 has positive volume.
The Condorcet paradox in five candidates elections.The Condorcet paradox, introduced in Subsection "Voting schemes and rational polytopes", does not occur in the case of two candidates (if draws are excluded).For three candidates the exact probability of an outcome with a Condorcet winner (under IAC) was first computed by Gehrlein and Fishburn [26] while for four candidates it was first determined by Gehrlein in [25].
For five candidates, we have computed the exact value of this probability in the full precision mode of Normaliz (with the method presented in Section "Implementations of the Lawrence algorithm and their limitations"). In order to illustrate the fixed precision mode of Normaliz, we compared the exact result with the result obtained for a fixed precision of 100 decimal digits. The reader should observe that in the decimal notation only the last 4 digits are different. The error bound is 6572904 · 10^{-100} < 10^{-93}, where 6,572,904 is the size of the "generic triangulation" (see Section "Implementations of the Lawrence algorithm and their limitations" and Table 5). This means that using the fixed precision mode of Normaliz is sufficient for many applications, while it saves computation time and is significantly less demanding on the hardware.
For practical reasons, in the following we use shorter decimal representations of the rational numbers.(The full rational representations of these numbers are available on demand from the authors.)A decimal representation is called rounded to n decimals when the first n − 1 printed decimals are exact and only the last decimal may be rounded up.
Rule versus rule runoff, Condorcet efficiencies.The most common voting scheme in elections is the plurality rule PR: for each candidate X one counts the voters that have X on first place in their preference ranking, and the winner is the candidate with most first places.However, in many elections one uses a second ballot, called runoff, if the winner has not got the votes of more than half of the voters.In the runoff only two candidates are left, namely the two top candidates of the first round.A typical example is the French presidential election.
If the ideal winner of an election is the Condorcet winner CW, then one must ask for the probability that the plurality winner is the CW under the condition that a CW exists.This conditional probability is called the Condorcet efficiency, studied intensively by Gehrlein and Lepelley [28] as a quality measure for voting schemes.
Another important question is whether the runoff is a real improvement: (i) what is the probability that the winner of the first ballot also wins the second, and (ii) by how much does the Condorcet efficiency increase by the runoff.
An often discussed variant of plurality is negative plurality NPR: the winner is the least disliked candidate X, defined by the least number of voters who have placed X last in their preference ranking. As for plurality, one can have a runoff, and again it makes sense to compute the Condorcet efficiencies and the probability that the first round winner also wins the runoff.
Both plurality and negative plurality are special cases of weighted voting schemes in which the places in the preference ranking have a fixed weight, and every candidate is counted with the sum of the weights of his or her places in the preference rankings of the voters. In plurality the first place has weight 1 and the other places have weight 0, whereas negative plurality gives weight −1 to the last place. In addition to these two rules we discuss the Borda rule BR that for n candidates gives weight n − p to place p.
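The weighted schemes just described are straightforward to evaluate on a given profile. The sketch below defines the weight vectors for PR, NPR and BR as in the text and computes the winner of a positional scoring rule; the candidate names and the lexicographic numbering of rankings are assumptions matching the conventions used above, and ties are broken arbitrarily.

```python
from itertools import permutations

def positional_winner(profile, weights, candidates="ABCDE"):
    """Winner under a positional scoring rule; weights[p] is the weight of place p (0-indexed)."""
    rankings = list(permutations(candidates))          # lexicographic order, as in the text
    scores = {c: 0 for c in candidates}
    for votes, ranking in zip(profile, rankings):
        for place, cand in enumerate(ranking):
            scores[cand] += votes * weights[place]
    return max(scores, key=scores.get), scores          # ties broken arbitrarily here

n = 5
plurality = [1] + [0] * (n - 1)           # PR: weight 1 for first place
neg_plurality = [0] * (n - 1) + [-1]      # NPR: weight -1 for last place
borda = [n - 1 - p for p in range(n)]     # BR: first place gets n - 1, last place 0
```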
In the case of four candidates the plurality voting versus plurality runoff problem was first computed by De Loera, Dutra, Köppe, Moreinis, Pinto and Wu in [20] using LattE Integrale [2] for the volume computation.The Condorcet efficiency of plurality voting was first computed by Schürmann in [37], whereas the Condorcet efficiency of the runoff plurality voting was given in [12].According to [29], it was obtained independently in [12] and [34].In [10, Section 6] we additionally discuss the influence of a third ballot on the Condorcet efficiencies of plurality and negative plurality.
Our results for five candidates are listed in Table 1. The first line contains the probability that the first round winner also wins the runoff. These three computations were done using the full precision mode of Normaliz. The next two lines contain the Condorcet efficiencies, computed in the fixed precision mode of Normaliz. For practical reasons we have only included the results rounded to 15 decimals.
TABLE 1. Probabilities computed by Normaliz
In Table 2 we reproduce the results for the Condorcet efficiency of all three rules contained in Table 7.6 of [29], which were obtained using Monte Carlo methods in [33].The numbers are relatively close, which confirms the correctness of all algorithms involved.However, at least 14 decimals printed in Table 1 are exact, while for the numbers printed in Table 2 we have 2, 4 and 3 exact decimals.
TABLE 2. Results obtained by Monte Carlo, according to [29] and [33]

Rule R      PR      NPR     BR
CondEffR    0.6139  0.5090  0.8541

Strong Borda paradoxes. The Borda paradoxes are named after the Chevalier de Borda who studied them in [3]. The strict Borda paradox is the event that for a voting profile plurality and majority rank the candidates in opposite order. A less sharp paradox is the strong Borda paradox: the plurality winner is the Condorcet loser, and the reverse strong Borda paradox occurs if the Condorcet winner finishes last in plurality. These paradoxes can be discussed for all voting schemes for which every profile defines a linear order of the candidates. There is, however, no point in computing them for negative plurality. As shown in [12, Section 2.5], plurality and negative plurality are dual to each other: the strong Borda paradox and the reverse strong Borda paradox exchange their roles.
For three candidates elections a detailed study of the family of Borda paradoxes [3] is contained in [27], while the case of four candidates is discussed in [12, Section 2.5].
For the time being, the computation of the strict Borda paradox in the case of five candidates seems not to be reachable.The strong paradoxes have been computed in the fixed precision mode of Normaliz.The results are rounded to 15 decimals.
For large numbers of voters, the probability of the strong Borda paradox and the probability of the reverse strong Borda paradox were obtained in this way.
Indifference. We want to point out that the Normaliz implementation of Lawrence's algorithm does not only yield precise results in five candidates elections, but also extends the range of computations for four candidates considerably by allowing preference rankings with partial indifference, which increase the dimension of the related polytopes considerably. We demonstrate this by two examples.
In the examples we allow all possible types of indifference except the equal ranking of all candidates: no indifference, equal ranking of two candidates in three possible positions (top, middle, bottom), two groups of two equally ranked candidates, and equal ranking of three candidates (top and bottom).In total one obtains 74 rankings.Compared to the 24 rankings without indifference this is a substantial increase in dimension.We assume that all rankings have the same probability.The authors of [24] allow weights for the types of indifference, for example that the number of voters with a linear order of the candidates is twice the number of voters with indifference.Such weights can easily be realized as a system of homogeneous linear equations in the Normaliz input file.
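The count of 74 rankings can be verified by enumerating the admissible ordered partitions of the four candidates. The sketch below is only a consistency check; the encoding of a ranking as a tuple of tied blocks is an assumption made for the illustration.

```python
from itertools import permutations

def admissible_rankings(candidates="ABCD"):
    """Ordered weak rankings with the block patterns allowed in the text:
    no indifference, one tied pair (top, middle, bottom), two tied pairs,
    or one tied triple (top or bottom); full indifference is excluded."""
    allowed_patterns = [(1, 1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2),
                        (2, 2), (3, 1), (1, 3)]
    rankings = set()
    for order in permutations(candidates):
        for pattern in allowed_patterns:
            blocks, i = [], 0
            for size in pattern:
                blocks.append(frozenset(order[i:i + size]))
                i += size
            rankings.add(tuple(blocks))
    return rankings

print(len(admissible_rankings()))   # 74 rankings, as stated in the text
```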
The first computation is the probability of a Condorcet winner under the Extended Impartial Anonymous Culture (EIAC), as discussed in [24] for 3 candidates (and varying weights for the different types of indifference).This requires only 3 inequalities to fix the Condorcet winner, and the computation is very fast.We obtained the value of 0.884041566089553 for the probability of the existence of a Condorcet winner under EIAC (rounded to 15 decimals).
The second example is the Condorcet efficiency of approval voting.Under this rule one additionally assumes that every voter casts a vote for each candidate on first place in his or her preference ranking.This requires 6 inequalities, namely 3 to mark the CW and 3 to make the same candidate the winner of the approval voting.Consequently the computation time is going up considerably.See the data for CondEffAppr 4cand in Table 6.Normaliz obtains 0.695293409282039 as the probability that there exists a CW who finishes first in the approval voting.This yields the Condorcet efficiency of 0.786494024661739 for approval voting (under the assumptions above).The computations were done using the full precision mode of Normaliz.
From three to five candidates.In Table 3 we give an overview of the probabilities of voting events for three, four and five candidates as far as we have computed them for five candidates.We use the shorthands PR, NPR and BR for the plurality rule, negative plurality rule and Borda rule as introduced above.The remaining abbreviations are self explanatory.For better overview we have rounded all probabilities to 4 decimals.One observes that all probabilities are decreasing from three to five candidates.This reflects the increase in the number of configurations defined by the voting profiles.The Condorcet efficiencies and the probabilities of the Borda paradoxes are conditioned on the probabilities of the existence of a Condorcet winner, which itself is decreasing.But this does not compensate the decrease of the absolute probabilities.
In view of our observations above it is justified to formulate Conjecture 2. All series of probabilities associated to voting events in Table 3 are monotonically decreasing with the number of candidates n.
CONDORCET CLASSES
A voting outcome without ties imposes an asymmetric binary relation on the n candidates that we call a Condorcet configuration. A Condorcet configuration is also called a dominance relation, according to [4]. Evidently there are 2^(n choose 2) such configurations. The permutation group S_n acts on the set of configurations by permuting the candidates. We call the orbits of this action Condorcet classes. For n = 4 the classes and their probabilities are discussed in [12].
From the graph theoretical viewpoint the Condorcet configurations are nothing but simple directed complete graphs with n labeled vertices, i.e., graphs with n labeled vertices without loops, in which each two vertices are connected by a single directed edge.These graphs are also know as tournament graphs.
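As a quick sanity check of the class counts given below, the Condorcet classes can be enumerated by brute force for small n. The following Python sketch is our own illustration (independent of Normaliz): it builds all $2^{\binom{n}{2}}$ tournaments on n labeled candidates and groups them into orbits under $S_n$.

```python
from itertools import combinations, permutations, product

def condorcet_classes(n):
    """Brute-force enumeration: build all 2^(n choose 2) tournaments on n labeled
    candidates and group them into orbits (Condorcet classes) under S_n."""
    edges = list(combinations(range(n), 2))   # unordered pairs {i, j} with i < j

    def canonical(tour):
        # Canonical representative: lexicographically smallest relabeling over S_n.
        best = None
        for perm in permutations(range(n)):
            relabeled = {}
            for (i, j), winner in tour.items():
                a, b = sorted((perm[i], perm[j]))
                relabeled[(a, b)] = perm[winner]
            key = tuple(relabeled[e] == e[0] for e in edges)
            if best is None or key < best:
                best = key
        return best

    orbit_sizes = {}
    for bits in product((0, 1), repeat=len(edges)):
        tour = {e: e[bit] for e, bit in zip(edges, bits)}   # winner of each pairwise duel
        rep = canonical(tour)
        orbit_sizes[rep] = orbit_sizes.get(rep, 0) + 1
    return len(orbit_sizes), sorted(orbit_sizes.values(), reverse=True)

print(condorcet_classes(4))   # 4 Condorcet classes for n = 4
print(condorcet_classes(5))   # 12 classes for n = 5; the cardinalities sum to 2^10 = 1024
```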
In this section we present the precise probabilities of the Condorcet classes under IAC. First we present the classes themselves, which is needed in order to understand a reduction that is critical for the computations to succeed.
For n = 5 these Condorcet configurations fall into 12 classes under the action of the group $S_5$. There are 6 classes that have a Condorcet winner (CW) or a Condorcet loser (CL): LinOrd, CW4cyc, CW2nd3cyc, 3cyc4thCL, CW3cycCL, 4cycCL, where "cyc" stands for "cycle". For example, CW2nd3cyc denotes the class that has a Condorcet winner, a candidate in second position majorizing the remaining three, and the latter ordered in a 3-cycle.
There are 6 further classes, as has been known for a long time. Presumably Davis [19] is the oldest source. (For more sources and cardinalities of the set of classes see [36].) The classes can be structured by the signatures (p, q) of a candidate, in which p counts the candidates majorized by the chosen candidate and q = n − 1 − p is the number of candidates majorizing the chosen one. In graph-theoretical language, p is the in-degree and q is the out-degree of the chosen node. Without a CW or CL, the signatures (4, 0) and (0, 4) are excluded. The number of signatures (2, 2) must then be odd, and using this observation one easily finds the 6 classes without a CW or CL. They are named in Figure 2. In the figure candidates of signature (3, 1) are colored red, those of signature (2, 2) blue, and green indicates the signature (1, 3).
The cardinalities of all classes and their probabilities (rounded to 6 decimals) are listed in Table 4. We have computed these probabilities not only for aesthetic reasons: that they sum to 1 is an excellent test for the correctness of the algorithm.
For effective computations the following reduction is critical. At first it seems that one must use 10 inequalities representing the relation $>_M$ between the five candidates, in addition to the 120 sign inequalities, in order to compute the probability of a single class (or configuration). But computations with 130 inequalities are currently not reachable on the hardware at our disposal. Some observations help to reduce the number of inequalities, significantly easing the computational load. For example, LinOrd can be (and is) computed with 128 inequalities if one exploits that it is enough to choose the first two candidates in arbitrary order and the candidate for third place. Once the probability of LinOrd is known, the remaining 5 classes with a CW or CL can be obtained from the Condorcet paradox (124 inequalities), CWand2nd (126 inequalities), CWandCL (127 inequalities) and the symmetry between CW and CL (see [12]).
For the other 6 classes it is best to "relax" the direction of some edges and to count which configurations occur when one chooses directions for the relaxed edges. For a proper choice of relaxed edges one gets away with 127 inequalities for $\Gamma_{1,1}$ and only 126 or 125 inequalities for the remaining cases.
It is no surprise that all Condorcet classes have positive probability. In fact, by a theorem of McGarvey [35] (see also [4, Theorem 3.1]) every Condorcet configuration can be realized by a voting profile, so Proposition 1 implies positive probability.
The problem of finding the minimal number of voters necessary to realize a given Condorcet configuration, or even a voting event, is largely open; see [22] for an asymptotic lower bound. Some values for four-candidate elections have been computed by Normaliz; see [12, Remark 8].
IMPLEMENTATIONS OF THE LAWRENCE ALGORITHM AND THEIR LIMITATIONS
The Lawrence algorithm is based on the fact that a "signed decomposition" into simplices of the polytope in the primal space may be obtained from a "generic triangulation" ∆ of its dual cone. For each δ ∈ ∆ we get a simplex R_δ in the primal space, and the volume of the polytope in the primal space is the sum of the volumes of the simplices R_δ induced by the "generic triangulation", taken with appropriate signs e(δ) = ±1. Thus the following formula can be used for computing the volume of P:
$$\operatorname{vol} P = \sum_{\delta \in \Delta} e(\delta)\, \operatorname{vol} R_\delta.$$
For mathematical details we refer the reader to Filliman [23]. Details of its implementation in Normaliz are described in [7].
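To make the signed-decomposition formula concrete, here is a minimal Python sketch (our own illustration, independent of Normaliz) that evaluates vol P = Σ_δ e(δ) vol R_δ exactly with rational arithmetic, given the simplices as vertex lists. The toy example at the end decomposes the unit square into two simplices, both with sign +1.

```python
from fractions import Fraction

def det(mat):
    """Exact determinant of a square matrix with Fraction entries (Gaussian elimination)."""
    m = [[Fraction(x) for x in row] for row in mat]
    n = len(m)
    sign = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= m[i][i]
    return result

def simplex_volume(vertices):
    """Euclidean volume of a d-simplex with rational vertices: |det(v1-v0, ..., vd-v0)| / d!"""
    v0 = vertices[0]
    d = len(vertices) - 1
    mat = [[Fraction(v[k]) - Fraction(v0[k]) for k in range(d)] for v in vertices[1:]]
    factorial = 1
    for i in range(2, d + 1):
        factorial *= i
    return abs(det(mat)) / factorial

def lawrence_volume(signed_simplices):
    """vol P = sum over delta of e(delta) * vol(R_delta); each entry is a pair
    (sign e(delta) = +1 or -1, list of d+1 vertices of R_delta)."""
    return sum((sign * simplex_volume(verts) for sign, verts in signed_simplices),
               Fraction(0))

# Toy check: the unit square decomposed into two simplices (both with sign +1).
square = [(+1, [(0, 0), (1, 0), (1, 1)]),
          (+1, [(0, 0), (1, 1), (0, 1)])]
print(lawrence_volume(square))  # 1
```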
In order to compute a "generic triangulation", Normaliz, following Lawrence's suggestion, finds a "generic element" ω, which in turn induces the "generic triangulation" ∆ = ∆_ω. Since ω almost inevitably has unpleasantly large coordinates, the induced simplices R_δ have even worse rational vertices, and their volumes usually are rational numbers with very large numerators and denominators. This extreme arithmetical complexity sometimes makes computations with full precision very difficult on the hardware at our disposal. In the fixed precision mode the volumes vol R_δ are computed precisely as rational numbers. But the addition of these numbers may result in fractions that fill gigabytes of memory. Therefore, in order to make computations feasible, the precise rational numbers are truncated to a predetermined number of exact decimal digits, typically 100. Then the error is bounded above by T · 10^{-100}, where T is the size of the "generic triangulation" (i.e., the total number of simplices).
Remark 3. Before Normaliz, the program vinci [16] provided an implementation of the Lawrence algorithm using floating point arithmetic. As noted by the authors in [17], their floating point implementation is numerically unstable. We point out at least one possible reason for this problem, which is indicated by the above discussion.
In any implementation of the Lawrence algorithm the alternating sum above must be evaluated. When floating point arithmetic is used to subtract nearby quantities, it is possible that the most significant digits are equal and cancel each other. This is a severe limitation of floating point arithmetic that may lead to a phenomenon known as "catastrophic cancellation". It is a fact that, because of the relative error involved, the evaluation of a single subtraction in floating point arithmetic can produce completely meaningless digits.
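Both issues can be illustrated in a few lines of Python (a toy illustration with numbers of our own choosing, not the Normaliz code): part (a) shows how subtracting nearby quantities in double precision loses the true difference entirely, and part (b) mimics the fixed-precision truncation with its error bound of (number of summands) · 10^{-digits}.

```python
from fractions import Fraction

# (a) Catastrophic cancellation: subtracting nearly equal floating point numbers
#     removes the leading significant digits and leaves only rounding noise.
a = Fraction(1, 3) + Fraction(1, 10**20)   # exactly 1/3 + 1e-20
b = Fraction(1, 3)
print(float(a) - float(b))                 # prints 0.0: the true difference is lost
print(a - b)                               # exact: 1/100000000000000000000

# (b) Fixed precision mode (sketch): every exact rational summand is truncated to a
#     fixed number of decimal digits before the addition, so the total error of the
#     signed sum is bounded by (number of summands) * 10**(-digits).
def truncate(x: Fraction, digits: int = 100) -> Fraction:
    scale = 10 ** digits
    return Fraction(int(x * scale), scale)   # int() truncates toward zero

terms = [Fraction(1, 3), Fraction(2, 7), Fraction(-1, 11)]
approx = sum(truncate(t) for t in terms)
exact = sum(terms)
assert abs(exact - approx) <= Fraction(len(terms), 10 ** 100)
print(float(exact), float(approx))
```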
This problem is already visible when computing voting problems with 4 candidates and only becomes worse for 5 candidates. Consider the problem of comparing 4 voting rules for 4 candidates as presented in detail in [10, Sect. 6.1]. With its HOT algorithm vinci computes the precise associated Euclidean volume 1.260510232743 · 10^{-25}. At the same time, a computation with the Lawrence algorithm as implemented in vinci provides the erroneous value 9.287423132835 · 10^{-8} for the same volume. So it is clear that the results provided by the vinci implementation of the Lawrence algorithm may lack any kind of precision; therefore it does not make sense to include in this paper a benchmark of the (different) implementation of the Lawrence algorithm in vinci.
Remark 4. The program polymake [1] has also implemented a simplified version of Lawrence's algorithm. This implementation is restricted to the "smooth" case. Note that smooth implies "simple", which in turn implies that the dual polytope is "simplicial", so its boundary has a trivial triangulation. The polytopes that appear in voting theory are not smooth; in fact they are not even simple. Thus the polymake implementation of the Lawrence algorithm cannot be compared with the Normaliz implementation for the polytopes presented here.
COMPUTATIONAL REPORT
Selected examples. In order to give the reader an impression of the computational effort, we illustrate it by the data of several selected examples. Except for (1) and (2), they are all computations for elections with 5 candidates:
(1) strictBorda 4cand is the computation of the probability of the strict Borda paradox for elections with 4 candidates as discussed in [12].
(2) CondEffAppr 4cand is the Condorcet efficiency of approval voting for 4 candidates.
(3) Condorcet stands for the existence of a Condorcet winner in elections with 5 candidates.
(4) PlurVsRunoff computes the probability that the plurality winner also wins the runoff.
(5) CWand2nd computes the probability that there exists a Condorcet winner and a second candidate dominating the remaining three.
(6) CondEffPlurRunoff is used to compute the probability that the Condorcet winner exists and finishes at least second in plurality.
(7) CondEffPlur computes the probability that the Condorcet winner exists and wins plurality.
We add some remarks on the data in Table 6:
(1) In all cases one has to make choices for the candidates that have certain roles in the computation in order to define the polytope for the computation.
(2) The volumes of the first 5 polytopes were computed with full precision, whereas for CondEffPlur and CondEffPlurRunoff fixed precision was used.
(3) The following rule of thumb can be used to estimate the computation time for a smaller number of threads: if one reduces the number of parallel threads from 32 to 8, then one should expect the computation time to go up by a factor of 3; a further reduction to 1 thread increases it by another factor of 7 (a small helper implementing this rule of thumb is sketched after these remarks).
(4) From the selected examples, only strictBorda is computable with the algorithms previously implemented in Normaliz. For this example, the data in Table 6 may be compared with the data in [10, Table 2], which was recorded on the same system.
(5) The data in Table 6 shows why computations with more than 128 inequalities are currently not reachable on the hardware at our disposal. Each additional inequality leads to a significant jump in the required RAM, and there is a 1 TB limit on our system. Stage (4) of the last two polytopes was computed on a high performance cluster (HPC) because the computation time would become extremely long on the R640, despite the high degree of internal parallelization. The time for CondEffPlurRunoff would still be acceptable, but CondEffPlur would take several weeks. Instead of doing step (4) directly, the result of steps (1)-(3) is written to a series of compressed files on the hard disk. Each of these files contains a certain number of simplices, and this number can be chosen by the user, for example 10^6 simplices. For CondEffPlur, writing the input files of the distributed computation takes 12277 seconds, and for CondEffPlurRunoff it takes 528 seconds.
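The rule of thumb in remark (3) can be written as a tiny lookup helper; the factors below are taken directly from the text, and values for other thread counts are not specified there (this is only an illustration).

```python
# Coarse scaling factors taken directly from remark (3); the rule of thumb does not
# specify values for other thread counts, so none are interpolated here.
SLOWDOWN_VS_32_THREADS = {32: 1, 8: 3, 1: 3 * 7}

def estimated_runtime(seconds_on_32_threads, threads):
    """Estimate the wall-clock time on 8 or 1 threads from a 32-thread measurement."""
    return seconds_on_32_threads * SLOWDOWN_VS_32_THREADS[threads]

print(estimated_runtime(3600, 8))   # a one-hour job on 32 threads -> about 3 hours
print(estimated_runtime(3600, 1))   # ... and about 21 hours on a single thread
```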
The compressed files are then collected and transferred to the HPC. The Osnabrück HPC has 51 nodes, each equipped with 1 TB of RAM and two AMD Epyc 7742 processors, so that 128 threads can be run on each node. In our setup each node ran 16 instances of chunk simultaneously, and every instance used 8 threads of OpenMP parallelization. Consequently 816 input files could be processed simultaneously. For a CondEffPlur input file of 10^6 simplices one needs about 165 MB of RAM and 3 hours of computation time. Therefore the volume of CondEffPlur could be computed in ≈ 9 hours.
Even on a less powerful system it can be advisable to choose this type of approach, since only a small amount of data is lost if a system crash should happen, and the amount of memory used remains low. Also "small" computations can profit from fixed precision. For example, step (4) of Condorcet takes 13.9 seconds with fixed precision, but 52.5 seconds with full precision.
TABLE 3 .
Probabilities of voting events for 3, 4 and 5 candidates
TABLE 4 .
Condorcet classes, their cardinalities and probabilities | 7,581 | 2021-09-01T00:00:00.000 | [
"Mathematics",
"Political Science",
"Computer Science"
] |
Adaptive Handover Decision Algorithm Based on Multi-Influence Factors through Carrier Aggregation Implementation in LTE-Advanced System
Although the Long Term Evolution Advanced (LTE-Advanced) system has benefited from Carrier Aggregation (CA) technology, the advent of CA has increased the probability of handover scenarios arising from user mobility. This leads to degradation of user throughput and to a higher outage probability. Therefore, a handover decision algorithm must be designed properly in order to contribute effectively to reducing this phenomenon. In this paper, an adaptive handover decision algorithm based on multi-influence factors (MIF-AHODA) is proposed for CA implementation in the LTE-Advanced system. MIF-AHODA adaptively makes handover decisions using different decision algorithms, which are selected according to the handover scenario type and resource availability. Simulation results show that MIF-AHODA enhances system performance over the other considered algorithms from the literature by average gains of 8.3 dB, 46%, and 51% in terms of SINR, cell-edge spectral efficiency, and outage probability reduction, respectively.
Introduction
In mobile wireless systems, several handover decision algorithms (HODAs) have been proposed based on different parameters, such as (i) Received Signal Strength (RSS), (ii) RSS with a threshold, (iii) RSS with hysteresis, (iv) RSS with hysteresis and threshold (parameters (i) to (iv) are discussed in detail by Pollini) [1], (v) RSS with hysteresis and distance [2], (vi) Signal-to-Interference-plus-Noise Ratio (SINR) [3], and (vii) Interference-to-Interference-plus-Noise Ratio (IINR) [4]. All of these HODAs aim to take an intact handover decision in order to enhance system performance during the user's mobility. However, in [1, 3, 4] the handover decisions are based on a single parameter, while other influencing factors are not considered. This leads to non-intact handover decisions, which in turn degrade the user's throughput and increase its outage probability; thus, the communication efficiency between the user and the serving network is negatively affected. In [2], the handover decision is based on multiple factors, but other influencing factors that effectively impact system performance, such as interference, noise, and resource availability, are still not considered. Furthermore, the advent of CA technology has added a new handover scenario, which can be performed between the serving component carriers (CCs) under the same sector and the same evolved Node B (eNB) in order to change the primary component carrier (PCC). This increases the handover probability, which in turn increases throughput degradation and user outage probability. This type of handover scenario can be reduced as long as the serving PCC provides acceptable RSS to the served user equipment (UE). Therefore, a more efficient HODA is needed, which should contribute to reducing user throughput degradation and high outage probability.
In this paper, MIF-AHODA is proposed in order to provide a seamless handover process with CA implementation in the LTE-Advanced system. MIF-AHODA automatically adapts the handover decision to the handover scenario type and the available resources, as detailed in the following sections.
Related Work
HODA is an essential step of the handover procedure in cellular wireless networks. It should be designed carefully in order to take an intact and proper handover decision toward the suitable target cell. This provides a seamless connection between the UE and the serving eNB while the UE roams within the cells. In any case, the handover decision is taken by the serving eNB based on the measurement report (MR) received from the served UE. The MR contains a list of the signal levels of specific neighbor cells, and it can contain other information depending on the implemented HODA. Several HODAs have been proposed [1-4] based on different parameters, such as HODA based on RSS [1], RSS and distance [2], SINR [3], and IINR [4], with consideration of the hysteresis level. All these HODAs aim to enhance system performance during the user's mobility within the cells.
In [1], a handover decision algorithm based on the Received Signal Strength (HODA-RSS) is proposed. The algorithm triggers a handover once the target RSS level (RSS_T) becomes sufficiently stronger than the serving RSS level (RSS_S) by a handover margin level (HOM_RSS) in dB. The algorithm can be simplified as
RSS_T > RSS_S + HOM_RSS.
In [2], a handover decision algorithm based on distance and relative Received Signal Strength (HODA-D-RSS) is proposed for a log-normal fading environment. The handover decision becomes true and the handover procedure is initiated once the two following conditions are met: (i) the measured distance between the user and the target eNB becomes smaller than that between the user and the serving eNB by a certain threshold distance, and (ii) the average target RSS becomes stronger than that received from the serving eNB by a given hysteresis level. This HODA can be simplified as
Dis_T < Dis_S − D_th  and  RSS_T > RSS_S + HOM_RSS,
where Dis_T and Dis_S represent the distances from the user to the target and serving eNBs, respectively, and D_th is the distance margin level.
In [3], a handover decision algorithm utilizing the SINR (HODA-SINR) as the control parameter for the handover decision is designed. The algorithm allows the served user to trigger the handover once the target SINR quality (SINR_T) becomes sufficiently better than the serving SINR quality (SINR_S) by a certain hysteresis margin level (HYS_SINR). For simplicity, this algorithm can be represented as
SINR_T > SINR_S + HYS_SINR,
where SINR_T and SINR_S represent the SINR of the target and serving cells, respectively, and HYS_SINR is the hysteresis SINR margin level in dB.
In [4], an optimal handover decision algorithm based on the Interference-to-other-Interferences-plus-Noise Ratio (IINR) parameter (HODA-IINR) is proposed. It is designed from the perspective of throughput enhancement by considering two handover schemes, Fast Cell Selection (FCS) and Soft Handover (SHO). When FCS is considered, the proposed HODA triggers a handover once SINR_S − IINR_T < −1, where SINR_S represents the SINR from the serving eNB and IINR_T represents the IINR from the target eNB. When SHO is considered, the proposed HODA triggers a handover once SINR_S − IINR_T < 0. However, this HODA decides to perform a handover only when a throughput gain exists.
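For reference, the four baseline decision rules summarised above can be restated compactly as boolean functions. This is a minimal Python sketch using our own variable names (all quantities in dB, distances in metres), not code from the cited papers.

```python
def hoda_rss(rss_t, rss_s, hom_rss):
    """HODA-RSS [1]: hand over once the target RSS exceeds the serving RSS
    by the handover margin."""
    return rss_t > rss_s + hom_rss

def hoda_d_rss(rss_t, rss_s, hys_rss, dis_t, dis_s, d_th):
    """HODA-D-RSS [2]: the target eNB must be closer by the distance margin
    and its average RSS stronger by the hysteresis level."""
    return dis_t < dis_s - d_th and rss_t > rss_s + hys_rss

def hoda_sinr(sinr_t, sinr_s, hys_sinr):
    """HODA-SINR [3]: hand over once the target SINR exceeds the serving SINR
    by the hysteresis margin."""
    return sinr_t > sinr_s + hys_sinr

def hoda_iinr(sinr_s, iinr_t, soft_handover=False):
    """HODA-IINR [4]: hand over when SINR_S - IINR_T drops below -1 (FCS) or 0 (SHO)."""
    return sinr_s - iinr_t < (0 if soft_handover else -1)
```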
These four HODAs take the handover decision based on a single parameter (i.e., RSS, distance, SINR, or IINR). Therefore, they cannot always give a proper handover decision, because several influencing factors are not considered, such as channel conditions, Rayleigh fading, interference, noise, and traffic load. Also, the handover scenario type should be considered, because of the additional scenario introduced by the CA technique, which is explained in the following section. Therefore, a new handover decision algorithm is needed when CA is considered in the LTE-Advanced system.
Handover with CA Technique
The advent of the CA technique in the LTE-Advanced system increases the number of aggregated CCs that can be deployed at one eNB and assigned to one UE simultaneously. These CCs are classified into two different types: the first is known as the primary component carrier (PCC), while the second is called the secondary component carrier (SCC) [5, 6].
The PCC is the carrier that is always active while the UE is in active mode. It should provide full cell coverage among the active adjacent CCs or provide the best signal quality over all the active CCs [6, 7]. The PCC is normally used to exchange control signaling messages and traffic data between a UE and the eNB. It is also used for the random access procedure and for the allocation of the SCC. In addition, a Radio Link Failure (RLF) is recorded when the radio link connection over the PCC fails, and then the Radio Resource Control (RRC) reestablishment procedure is triggered over the PCC as well. The Non-access Stratum (NAS) recovery procedure is triggered if the RRC reestablishment procedure over the PCC fails within the T310 period (T310 is the maximum time allowed for recovering the connection through the RRC reestablishment procedure) [5, 8].
In LTE-Advanced release 10 and release 11 (rel. 10 and rel. 11) the UE can be configured with only one CC among the plurality of assigned CCs as a PCC. At the beginning, when the UE sets up the connection to the serving network, the PCC is automatically selected by the serving eNB. If only one CC is assigned to the UE, it is configured as the PCC. Otherwise, when several CCs are paired with one UE, one CC among the active carriers must be configured as the PCC, while the rest of the active CCs are configured as SCCs [9]. In addition, the configured PCC may be selected from the fully configured CCs rather than being fixed to a particular CC [5]. The selected PCC can differ between UEs served by the same eNB; in other words, one CC (i.e., CC1) can be configured as a PCC for UE1 and as an SCC for UE2, as illustrated in Figure 1 [8].
The SCC is an additional component carrier that can be configured and activated by the eNB when the UE requests a wider bandwidth, in order to provide a higher data rate to the served UE. In other words, the SCC is an additional component carrier used for providing additional resources to the served UE, while it cannot be used to exchange control signaling messages between the UE and the eNB. The SCC can be activated or deactivated according to special conditions, which can be specified according to the UE's request or according to the instructions of the eNB [5].
Implementing the CA technique in the LTE-Advanced system adds an additional handover scenario, which can occur between component carriers in the same sector, from the PCC (CC1) to the SCC (CC2) or from the PCC (CC2) to the SCC (CC1). In other words, the PCC may be switched from CC1 to CC2 or from CC2 to CC1 in order to change the PCC. In this respect the LTE-Advanced system differs from LTE (rel. 8 and rel. 9), where a handover occurs only between eNBs in different cells or between different sectors under the same eNB. However, changing the PCC is subject to several considerations, such as looking for the best signal quality or balancing loads between adjacent cells. Switching a CC from PCC to SCC and vice versa is achieved by performing a handover procedure from the PCC (i.e., CC1) to the SCC (i.e., CC2); the handover procedure is performed by the UE from the serving PCC to the target PCC (which is the SCC) under the same eNB [8].
Consequently, the number of handover scenarios increases when the CA technique is implemented. Thus, there are five handover scenarios that can occur in the LTE-Advanced system when CA technology is implemented, which are described in Figure 2: (i) interfrequency intrasector intra-eNB handover, (ii) intrafrequency intersector intra-eNB handover, (iii) interfrequency intersector intra-eNB handover, (iv) intrafrequency inter-eNB handover, and (v) interfrequency inter-eNB handover [6]. All these handover scenarios are considered in this paper.
Intrafrequency means that the target and serving carrier frequencies are the same, while interfrequency means that they differ. Intrasector means that the target and serving sectors are the same, and intersector means that they differ. Intra-eNB means that the target and serving eNBs are the same, and inter-eNB means that they differ.
Increasing the number of handover scenarios increases the handover probability, which is undesirable for users since it increases throughput degradation and outage probability. Therefore, an optimal handover decision is required to reduce the handover probability in order to decrease throughput degradation and outage probability.
Proposed Algorithm
In this paper, MIF-AHODA, based on the SINR with handover hysteresis, threshold, and resource availability, is proposed. MIF-AHODA adaptively makes handover decisions based on different decision algorithms, which are selected according to the handover scenario type and the resource availability, as illustrated in Figure 3. If the handover scenario targets changing the PCC, the handover decision is taken based on the SINR with handover hysteresis (HYS) and threshold (γ) levels, as illustrated in Figure 4(a). Thus, the handover decision algorithm can be expressed as
SINR_PCC < γ + HYS,    (4)
where SINR_PCC and SINR_T denote the SINR over the serving PCC and over the target CC, respectively; once (4) is satisfied, the handover to the target CC is performed. On the other hand, if the handover scenario targets changing the serving sector or serving eNB, the handover decision is adaptively taken based on two different decision algorithms, which are selected according to the resource availability. In the first decision algorithm, if the serving cell has more resources available than the target cell, the handover decision is taken based on the average SINR over both aggregated CCs (PCC and SCC) with the handover hysteresis level, as illustrated in Figure 4(b); in addition, the SINR over the target PCC (SINR_PCC,T) should be greater than the threshold level (SINR_PCC,T > γ). In the second decision algorithm, if the target cell has more resources available than the serving cell by a resource Loads Margin level (LM), the handover decision is taken based on the SINR quality over the target PCC with hysteresis and threshold levels only, as explained in Figure 4(c). Consequently, the handover decision algorithm can be represented as
AS_T > AS_S + HYS and SINR_PCC,T > γ, when the serving cell has more available resources than the target cell;
SINR_PCC,T > γ + HYS, when R_T > R_S + LM,    (5)
where AS_S and AS_T represent the average SINR over all the aggregated CCs of the serving and target eNBs, respectively, and R_S and R_T represent the resource load availability of the serving and target eNBs, respectively. LM is assumed to be 10% of the average resource load availability of the serving and target eNBs.
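The adaptive logic just described can be summarised in a short Python sketch. This is our own illustration with our own variable names; in particular, the handling of the case in which neither cell has a clear resource advantage is inferred from the description above rather than taken from the paper.

```python
def mif_ahoda_decision(scenario, sinr_pcc, sinr_pcc_t, avg_sinr_s, avg_sinr_t,
                       loads_s, loads_t, hys_db, thr_db):
    """Adaptive handover decision (sketch). All SINR quantities are in dB.
    scenario: 'pcc_change' for an intra-sector PCC switch,
              'sector_or_enb' for an inter-sector / inter-eNB handover."""
    if scenario == 'pcc_change':
        # Expression (4): switch the PCC once the serving PCC falls below the
        # threshold-plus-hysteresis level (the target CC then becomes the new PCC).
        return sinr_pcc < thr_db + hys_db
    # Resource Loads Margin: 10% of the average availability of the two cells.
    lm = 0.10 * (loads_s + loads_t) / 2
    if loads_t > loads_s + lm:
        # Target cell clearly has more resources: early handover based on the
        # target PCC quality alone (second case of expression (5)).
        return sinr_pcc_t > thr_db + hys_db
    # Otherwise compare the average SINR over both aggregated CCs and additionally
    # require an acceptable target PCC (first case of expression (5)).
    return avg_sinr_t > avg_sinr_s + hys_db and sinr_pcc_t > thr_db
```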
System Model
The LTE-Advanced system is modeled based on the 3GPP specifications introduced in [10]. The network consists of a 61-cell hexagonal macrocell layout with a 500 m inter-site distance. One eNB is located at the centre of each cell, with three sectors per cell and two contiguous CCs configured in each sector. A carrier bandwidth of 20 MHz is considered for each CC. The operating frequencies of CC1 and CC2 are assumed to be 2 and 2.0203 GHz, respectively. The antenna of each CC points toward a different flat side of the hexagonal cell. The transmit power of every eNB is assumed to be the same for each CC. Random numbers of UEs are generated and removed at random uniform positions in the serving and target cells in every Transmission Time Interval (TTI). The UEs' movement directions are selected randomly with a fixed speed throughout the simulation, which covers five mobile speed scenarios (30, 60, 90, 120, and 140 km/h). The mobility of all users is confined to the first 37 cells, which are located closest to the centre cell. Six eNBs are considered as the stations that cause interference for each user during the whole simulation time. The Frequency Reuse Factor (FRF) is assumed to be one. Moreover, the Adaptive Modulation and Coding (AMC) scheme is considered based on the sets of Modulation Schemes (MS) and Coding Rates (CR) introduced in [10, 11]. The handover procedure for the LTE-Advanced system introduced in [12] is followed, assuming a 6 dB handover margin level and a 600 millisecond time-to-trigger (TTT). In addition, Radio Link Failure (RLF) detection, the Radio Resource Control (RRC) reestablishment procedure, and the Non-access Stratum (NAS) recovery procedure are considered in the simulation in order to achieve high accuracy in the performance evaluation. The essential parameters used in this paper are based on the LTE-Advanced system profile defined by the 3GPP specifications in [10-13], as listed in Table 1.
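For convenience, the key simulation parameters stated above can be collected in a single configuration structure; the following Python snippet simply restates the values from the preceding paragraph (the field names are ours).

```python
# Key simulation parameters as stated in the system model above (field names are ours).
SIM_PARAMS = {
    "num_cells": 61,                       # hexagonal macro cells
    "inter_site_distance_m": 500,
    "sectors_per_cell": 3,
    "ccs_per_sector": 2,                   # two contiguous component carriers
    "cc_bandwidth_mhz": 20,
    "cc_frequencies_ghz": (2.0, 2.0203),
    "frequency_reuse_factor": 1,
    "ue_speeds_kmh": (30, 60, 90, 120, 140),
    "mobility_area_cells": 37,             # UEs roam within the 37 innermost cells
    "interfering_enbs_per_ue": 6,
    "handover_margin_db": 6,
    "time_to_trigger_ms": 600,
}
```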
Results and Discussions
In this study, a simulation was used to validate the proposed HODA. The evaluation methodology of the 3GPP LTE-Advanced system [10-13] is followed in the simulation, as mentioned in Section 3. The system performance achieved by MIF-AHODA and the other considered HODAs is presented in terms of user SINR, spectral efficiency, and user outage probability, as shown in Figures 5, 6, and 7, respectively. Figure 5 shows the user SINR in dB for the different handover decision algorithms. The presented SINR represents the average user SINR over the serving PCC, which is evaluated as the ratio of the reference signal received power (RSRP) to the interference plus noise over each subcarrier assigned to the served user [14]. The results show that MIF-AHODA enhances the user SINR by 13.5, 13.4, 3.45, and 3 dB over the HODAs from the literature taken as baselines, i.e., RSS, RSS-D, SINR, and IINR, respectively. Figure 6 shows the cell-edge user spectral efficiency for the different HODAs. The cell-edge user spectral efficiency is defined as the lower 5% of the evaluated throughput [bps/Hz] that can be received by the user [13, 14]. The presented results show that MIF-AHODA achieves around 79.7, 80.7, 12.7, and 10.7% average enhancement gains in cell-edge user spectral efficiency over the HODAs based on RSS, RSS-D, SINR, and IINR, respectively.
Figure 7 shows the user outage probabilities resulting from the simulation for the different HODAs. A user outage (SINR_PCC < γ) is recorded when the user's SINR over the serving PCC (SINR_PCC) falls below the threshold level γ [15], since the quality of service becomes unacceptable when SINR_PCC falls below the threshold. Figure 7 shows that MIF-AHODA reduces the user outage probability by around 80, 70, 30, and 25% on average compared with the HODAs based on RSS, RSS-D, SINR, and IINR, respectively.
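The two evaluation metrics just defined are straightforward to compute from simulation traces; the following Python sketch (our own helper functions, not the authors' simulator) computes the outage probability and the cell-edge (5th-percentile) spectral efficiency from arrays of per-user samples.

```python
import numpy as np

def outage_probability(sinr_pcc_db, threshold_db):
    """Fraction of samples whose serving-PCC SINR lies below the threshold level."""
    return float(np.mean(np.asarray(sinr_pcc_db) < threshold_db))

def cell_edge_spectral_efficiency(user_throughput_bps, bandwidth_hz):
    """Cell-edge user spectral efficiency: 5th percentile of user throughput
    normalised by the bandwidth, in bps/Hz."""
    return float(np.percentile(np.asarray(user_throughput_bps), 5)) / bandwidth_hz

# Example with made-up numbers:
# outage_probability([3.0, -1.5, 7.2, 0.4], threshold_db=0.0) -> 0.25
```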
The enhancements achieved by MIF-AHODA are due to the consideration of multiple influence factors and the optimal proposed algorithm that adaptively selects the suitable handover decision algorithm based on the handover scenario type and resource availability.
In the case of a handover scenario targeting a PCC switch, the handover decision is taken based on the SINR with hysteresis and threshold levels, as in expression (4). The algorithm returns a true handover decision when the SINR over the serving PCC falls below the threshold plus the hysteresis level, as illustrated in Figure 4(a). This prevents unnecessary handover procedures between the PCC and the SCC as long as the SINR over the PCC exceeds the threshold by the hysteresis level. Furthermore, the algorithm takes the handover decision before the signal over the serving PCC falls below the threshold level itself. This decreases the user's throughput degradation and contributes to avoiding disconnection, which in turn reduces the user's outage probability.
In the case of a handover scenario targeting switching the user's connection to a new sector or a new eNB, the handover decision is adaptively taken based on two different algorithms, which are selected based on the resource availability, as illustrated in Figures 4(b) and 4(c) and expression (5). If the resource availability of the serving cell (R_S) exceeds that of the target cell (R_T) by the resource margin level (LM), the handover decision is taken based on the average SINR over both PCC and SCC (AS_T > AS_S + HYS). This leads to performing the handover procedure to the best target eNB, which can provide better signal quality over both CCs, and in turn more resources to the served user during the active mode time. That enhances user throughput and reduces the outage probability. On the other hand, if the resource availability of the target cell (R_T) exceeds that of the serving cell (R_S) by the resource margin level (LM), the handover decision is taken based on the SINR over the target PCC only (SINR_PCC,T > γ + HYS). This leads to performing an early handover to the target cell that has more resources, and thus to assigning more resources to the served user with acceptable signal quality, which in turn enhances user throughput and reduces the outage probability.
Conclusion
It may be concluded that the proposed MIF-AHODA is a useful algorithm for the implementation of CA technology in the LTE-Advanced system. It contributes to enhanced system performance in terms of user SINR and spectral efficiency, and it reduces the user's outage probability. It notably outperforms the legacy HODA-RSS, HODA-RSS-D, HODA-SINR, and HODA-IINR. Consequently, the proposed algorithm is a suitable handover decision scheme for CA-based LTE-Advanced deployments.
Figure 1 :
Figure 1: Configuration of CCs for different UEs served by the same eNB.
Figure 3 :
Figure 3: Flowchart of our proposed handover decision algorithm.
TTT: time-to-trigger; γ: threshold level of SINR; T1: the beginning of TTT; Tn: the end of TTT; X: greater or less than M; M: HO decision based on resource availability
"Computer Science"
] |